| Column | Type | Values / length range |
| --- | --- | --- |
| anchor | string | 86 – 24.4k characters |
| positive | string | 174 – 15.6k characters |
| negative | string | 76 – 13.7k characters |
| anchor_status | string (categorical) | 3 values |
# Slacker

Created by Albert Lai, Hady Ibrahim, and Varun Kothandaraman

GitHub: *[Slacker Github](https://github.com/albertlai431/slacker-chore)*

## Inspiration
In shared housing, organizing chores so that everyone does their fair share of the work is a major hassle. In most cases, without direct instruction, people simply forget about the slice of work they need to complete.

## What it does
Slacker is a web app that lets users join a group with the other members of their household; from one overall list of items, tasks get automatically assigned to each member in the group. Each member has a few task views, the main pages being the user's own personal list, the total group list, each group member's activity, and settings. The user's personal list of chores refreshes each week with both one-time and repeating chores, and forgotten/overdue chores appear at the top of the screen on every group member's personal page for quicker completion.

## How we built it
Slacker was built using a combination of React and Chakra UI, with GitHub for source control. Additionally, we created mockups of both the desktop pages and the mobile app we were planning to build. To see the mockups, kindly check out the images we have attached to this Devpost.

## Challenges we ran into
Originally, our plan was to create an iOS/Android app with React Native and build out our fleshed-out Figma mockups. The full idea simply had too many features and details for us to do both:

* Create the mobile application
* Create the full application, with all the features we brainstormed

The first challenge that we ran into was the mockup and design of the application. UI/UX design caused us a lot of grief, as we found it difficult to create designs that both looked good and were easy to understand in terms of functionality. The second challenge we faced was the Google authentication feature for logging into the website; its implementation created a lot of issues and bugs that delayed our total work time considerably. Given the time constraint, we ended up creating a React web application with some basic functionality as a prototype of our original idea.

## Accomplishments that we're proud of
We are happy with the web application we have created so far as a prototype in the given time. We have implemented:

* The landing page
* Google authentication
* The home screen
* Tasks that are automatically assigned to users on a recurring basis
* Creating, inviting to, and joining a group
* A label for the slacker (the member with the fewest tasks)
* Donut graphs indicating task completion every week
* The ability to see every task for each day
* The ability to sign out of the webpage
* and even more!

## What we learned
Since this was the first hackathon for the majority of us, we put more emphasis and time on brainstorming an idea instead of just sitting down and starting to code our project. We definitely learned that coming into the hackathon with some preconceived notion of what we each wanted to code would have saved us more than half a day. We were also surprised to learn how useful Figma is as a UI/UX design tool for web development. The ability to copy-paste CSS code for each element of the webpage was instrumental in our ability to create a working prototype faster.

## What's next for Slacker
For Slacker, the next steps are to:

* Finish the web application with all of its features
* Create and polish the full web application, with all the visual features we brainstormed
* Finish the mobile application with all of the same features as the web application
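Slacker itself is a React/Chakra UI app, so the snippet below is only an illustrative sketch, written in Python, of the recurring-assignment idea described under "What it does": every new chore goes to whichever group member currently has the fewest open tasks, and the "slacker" label falls out of the same counts. All names here are hypothetical.

```python
from collections import defaultdict

def assign_chores(chores, members, open_tasks=None):
    """Greedily hand each chore to the member with the fewest open tasks.

    chores: chore names for the coming week (one-time and repeating).
    members: member names in the group.
    open_tasks: optional dict of existing task counts per member.
    """
    counts = defaultdict(int, open_tasks or {})
    assignment = defaultdict(list)
    for chore in chores:
        target = min(members, key=lambda m: counts[m])  # lightest load gets the next chore
        assignment[target].append(chore)
        counts[target] += 1
    slacker = min(members, key=lambda m: counts[m])     # fewest tasks overall
    return dict(assignment), slacker

if __name__ == "__main__":
    week = ["dishes", "vacuum", "trash", "bathroom", "groceries"]
    print(assign_chores(week, ["Albert", "Hady", "Varun"]))
```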
# Mental-Health-Tracker

## Mental & Emotional Health Diary

This project was made because we all know how pressing an issue mental health and depression can be, not only for ourselves, but for thousands of other students. Our goal was to make something where someone could have the chance to accurately assess and track their own mental health using the tools that Google has made available. We wanted the person to be able to openly express their feelings in the diary for their own personal benefit.

Along the way, we learned about using Google's Natural Language processor, developing with Android Studio, as well as deploying an app using Google's App Engine with a `node.js` framework. Those last two parts turned out to be the greatest challenges. Android Studio was a challenge, as one of our developers had not used Java for a long time, nor had he ever developed using `.xml`. He was pushed to learn a lot about the program in a limited amount of time. The greatest challenge, however, was deploying the app using Google App Engine. This tool is extremely useful, and was made to seem easy to use, but we struggled to implement it using `node.js`. Issues arose with errors involving `favicon.ico` and `index.js`. It took us hours to resolve them and we were very discouraged, but we pushed through. After all, we had everything else - we knew we could push through this.

The end product is an app in which the user signs in using their Google account. It opens to the home page, where the user is prompted to answer four questions relating to their mental health for the day, and then rate themselves on a scale of 1-10 in terms of their happiness for the day. After this is finished, the user is given their mental health score, along with an encouraging message tagged with a cute picture. After this, the user has the option to view a graph of their mental health and happiness statistics to see how they progressed over the past week, or a calendar option to see their happiness scores and specific answers for any day of the year.

Overall, we are very happy with how this turned out. We even have ideas for how we could do more, as we know there is always room to improve!
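The diary scoring described above hinges on Google's Natural Language sentiment analysis. A minimal sketch of that call, assuming the `google-cloud-language` Python client and application-default credentials; the mapping onto the app's 1-10 happiness scale is our own illustrative choice, not necessarily the team's.

```python
from google.cloud import language_v1

def diary_sentiment(entry: str) -> float:
    """Return the document-level sentiment score (-1.0 .. 1.0) for a diary entry."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=entry, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

def to_happiness_scale(score: float) -> int:
    """Map the -1..1 sentiment score onto a 1-10 happiness scale (illustrative)."""
    return round((score + 1) / 2 * 9) + 1

if __name__ == "__main__":
    s = diary_sentiment("Today was stressful, but my friends really cheered me up.")
    print(s, to_happiness_scale(s))
```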
## What it does Take a picture, get a 3D print of it! ## Challenges we ran into The 3D printers going poof on the prints. ## How we built it * AI model transforms the picture into depth data. Then post-processing was done to make it into a printable 3D model. And of course, real 3D printing. * MASV to transfer the 3D model files seamlessly. * RBC reward system to incentivize users to engage more. * Cohere to edit image prompts to be culturally appropriate for Flux to generate images. * Groq to automatically edit the 3D models via LLMs. * VoiceFlow to create an AI agent that guides the user through the product.
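The post-processing step mentioned above - turning the model's depth output into something a printer accepts - can be as simple as treating the depth map as a heightfield and writing out a mesh. This is only a sketch under that assumption (plain NumPy plus a hand-rolled OBJ writer), not the team's actual depth/MASV/print pipeline.

```python
import numpy as np

def depth_to_obj(depth: np.ndarray, path: str, height: float = 10.0) -> None:
    """Write a depth map (H x W array) as a heightfield OBJ mesh, one unit per pixel."""
    h, w = depth.shape
    rng = float(depth.max() - depth.min()) or 1.0
    z = (depth - depth.min()) / rng * height           # normalize heights
    with open(path, "w") as f:
        for y in range(h):
            for x in range(w):
                f.write(f"v {x} {y} {z[y, x]:.3f}\n")
        for y in range(h - 1):
            for x in range(w - 1):
                i = y * w + x + 1                       # OBJ indices are 1-based
                f.write(f"f {i} {i + 1} {i + w}\n")
                f.write(f"f {i + 1} {i + w + 1} {i + w}\n")

if __name__ == "__main__":
    fake_depth = np.random.rand(64, 64)                 # stand-in for the AI model's output
    depth_to_obj(fake_depth, "print_me.obj")
```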
partial
## Inspiration *InTouch* was inspired by our joint frustration at the current system of networking. Despite constant contact with new people, we and many others, find the majority of our connections to be unutilized and superficial. Since research has shown that the strength of acquaintances is what leads to career growth, current methods of networking may be ineffective. We hope to spur a paradigm shift, fostering mutually beneficial and genuine relationships out of our most distant ties. ## What it does Based on research from Harvard Business School, *InTouch* is a mobile platform that analyzes, personalizes, and nurtures real relationships from superficial connections. *InTouch* will focus on three approaches: a) contact prioritization, b) regular interaction, and c) substance of contact. Through personalized data analysis/optimization, the platform will remind the user to reach out on a pre-determined schedule. We envision *InTouch* as an extension to many social networking sites. For instance, *InTouch* could assist in cultivating genuine relationships from new, but distant Linkedin connections. ## How we built it The system is centered around a flask web server deployed using Google App Engine. It makes use of a FireStore database for storing data, and queries LinkedIn's API to gather information on users and how their network changes. The information is displayed to the user through a flutter application written in dart which is compatible with web, android, and iOS. We handle reminding users to keep in contact with their network using Twilio, which we think is beneficial over push notifications, as it is much easier to come back to a text message if you're busy at the time you receive the notification. ## Challenges we ran into We ran into several challenges, including understanding and accessing Linkedin API, and installing Google Cloud. We found the documentation for the LinkedIn API to be unclear in parts, so we spent a lot of time working together to try and understand how to use it. ## Accomplishments that we're proud of We think that our idea is quite original and that it has a lot of potential, envisioning it even being useful for our own network. We spent over 6 hours deciding on it, so we're really proud that after all that time and discussion that we ended up with something, which we think could help people. ## What we learned We spent a lot more time than we normally would coming up with the idea, and this proved fruitful, so we learned that stopping to think about what you're doing can really help in the long run. ## What's next for *InTouch* There are many research articles that suggest ways to cultivate and maintain a large network. For instance, frequency of contact and how personal a certain message is, can greatly strengthen connections. We hope to integrate many of these aspects into *InTouch*.
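The Twilio reminder flow mentioned in "How we built it" comes down to one API call from the Flask backend. A minimal sketch (the `twilio` helper library calls are standard; the environment-variable names, phone numbers, and message text are placeholders):

```python
import os
from twilio.rest import Client

def send_reconnect_reminder(to_number: str, contact_name: str) -> str:
    """Text a user a nudge to reach out to a connection; returns the message SID."""
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    message = client.messages.create(
        body=f"InTouch reminder: it's been a while - say hi to {contact_name} today!",
        from_=os.environ["TWILIO_FROM_NUMBER"],  # your Twilio number
        to=to_number,
    )
    return message.sid

if __name__ == "__main__":
    print(send_reconnect_reminder("+15551234567", "Jordan"))
```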
## Inspiration
The loneliness epidemic is a real thing, and you don't get meaningful engagement with others just by liking and commenting on Instagram posts; you get meaningful engagement by having real conversations, whether it's a text exchange, a phone call, or a Zoom meeting. This project was inspired by the idea of reviving weak links in our network as described in *The Defining Decade*: "Weak ties are the people we have met, or are connected to somehow, but do not currently know well. Maybe they are the coworkers we rarely talk with or the neighbor we only say hello to. We all have acquaintances we keep meaning to go out with but never do, and friends we lost touch with years ago. Weak ties are also our former employers or professors and any other associations who have not been promoted to close friends."

## What it does
This web app helps bridge the divide between wanting to connect with others and actually connecting with others. In our MVP, the web app brings up a card with information on someone you are connected to. Users can swipe right to show interest in reconnecting or swipe left if they are not interested. In this way, the process of finding people to reconnect with is gamified. If both people show interest in reconnecting, you are notified and can now connect! And if one person isn't interested, the other person will never know ... no harm done!

## How we built it
The web app was built using React and deployed with Google Cloud's Firebase.

## Challenges we ran into
We originally planned to use Twitter's API to aggregate data and recommend matches for our demo, but getting the developer account took longer than expected. After getting a developer account, we realized that we didn't use Twitter all that much, so we had no data to display. Another challenge we ran into was that we didn't have a lot of experience building web apps, so we had to learn on the fly.

## Accomplishments that we're proud of
We came into this hackathon with little experience in web development, so it's amazing to see how far we have been able to progress in just 36 hours!

## What we learned
REACT! Also, we learned how to publish a website and how to access APIs!

## What's next for Rekindle
Since our product is an extension or application within an existing social media platform, our next steps would be to partner with Facebook, Twitter, LinkedIn, or other social media sites. Afterward, we would develop an algorithm to aggregate a user's connections on a given social media site and optimize the card-swiping feature to recommend the people you are most likely to connect with.
## Inspiration
We all deal with nostalgia. Sometimes we miss our loved ones or places we visited and look back at our pictures. But what if we could revolutionize the way memories are shown? What if we said you could relive your memories and mean it literally?

## What it does
retro.act takes in a user prompt such as "I want uplifting 80s music" and then uses sentiment analysis and Cohere's chat feature to find potential songs, out of which the user picks one. Then the user chooses from famous dance videos (such as by Michael Jackson). Finally, we either let the user choose an image from their past or let our model match images based on the mood of the music, and implant the dance moves and music into the image(s).

## How we built it
We used Cohere Classify for sentiment analysis and to filter out songs whose mood doesn't match the user's current state. Then we use Cohere's chat with RAG over the database of filtered songs to identify songs based on the user prompt. We match images to music by first generating captions for the images using the Azure Computer Vision API, doing a semantic search with KNN and Cohere embeddings, and then using Cohere Rerank to smooth out the final choices. Finally, we make the image come to life by generating a skeleton of the dance moves using OpenCV and Mediapipe and then using a pretrained model to transfer the skeleton to the image.

## Challenges we ran into
This was the most technical project any of us have ever done and we had to overcome huge learning curves. A lot of us were not familiar with some of Cohere's features such as Rerank, RAG, and embeddings. In addition, generating the skeleton turned out to be very difficult. Apart from simply generating a skeleton using the standard Mediapipe landmarks, we realized we had to customize which landmarks we connect to make it a suitable input for the pretrained model. Lastly, understanding and being able to use the model was a huge challenge. We had to deal with issues such as dependency errors, the lack of a GPU, broken import statements, and deprecated packages.

## Accomplishments that we're proud of
We are incredibly proud of getting a very ambitious project done. While it was already difficult to get a skeleton of the dance moves, manipulating the coordinates to fit our pretrained model's specifications was very challenging. Lastly, it took a great deal of experimentation and determination to find a working model that could successfully take in a skeleton and output an "alive" image.

## What we learned
We learned about using Mediapipe and manipulating a graph of coordinates depending on the output we need. We also learned how to use pretrained weights and run models from open-source code. Lastly, we learned about various new Cohere features such as RAG and Rerank.

## What's next for retro.act
Expand our database of songs and dance videos to allow for more user options, and build a more accurate indexing algorithm to iterate over and classify the data from the db. We also hope to make the skeleton's motions smoother for more realistic images. Lastly, this is very ambitious, but we hope to make our own model to transfer skeletons to images instead of using a pretrained one.
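The image-to-song matching described above (Cohere embeddings for a KNN semantic search, followed by Cohere Rerank) can be sketched roughly as below. This is an illustration rather than the team's code; the model name passed to rerank is one of Cohere's public defaults, and exact client signatures vary by SDK version.

```python
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def top_matches(image_caption: str, song_descriptions: list[str], k: int = 5) -> list[str]:
    """Cosine-similarity KNN over Cohere embeddings, then rerank the top-k candidates."""
    # Embed the query caption and all candidate song descriptions in one call.
    embs = co.embed(texts=[image_caption] + song_descriptions).embeddings
    query, docs = np.array(embs[0]), np.array(embs[1:])
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query) + 1e-9)
    candidates = [song_descriptions[i] for i in np.argsort(sims)[::-1][:k]]
    # Let the rerank endpoint smooth out the final ordering.
    reranked = co.rerank(query=image_caption, documents=candidates,
                         top_n=len(candidates), model="rerank-english-v2.0")
    return [candidates[r.index] for r in reranked.results]
```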
losing
## Inspiration
Today we live in a world that is all online, with the pandemic forcing us to stay at home. Due to this, our team and the people around us were forced to rely on video conference apps for school and work. Although these apps function well, there was always something missing, and we were faced with new problems we weren't used to facing. Personally, I kept forgetting to mute my mic when going to the door to yell at my dog, accidentally disturbing the entire video conference. For others, it was a lack of accessibility tools that made the experience more difficult. And some were simply scared of something embarrassing happening during class while it was being recorded, to be posted and seen on repeat! We knew something had to be done to fix these issues.

## What it does
Our app essentially takes over your webcam to give the user more control of what it does and when it does it. The goal of the project is to add all the missing features that we wished were available during all our past video conferences.

Features:

Webcam:

1. Detect when the user is away - automatically blurs the webcam feed when the user walks away from the computer to ensure their privacy.
2. Detect when the user is sleeping - we all fear falling asleep on a video call and being recorded by others; our app will detect if the user is sleeping and automatically blur the webcam feed.
3. Only show the registered user - our app allows the user to train a simple AI face-recognition model so that the webcam feed only shows when they are present. This is ideal to prevent one's children from accidentally walking in front of the camera and putting on a show for all to see :)
4. Display a custom unavailable image - rather than blur the frame, we give the option to choose a custom image to pass to the webcam feed when we want to block the camera.

Audio:

1. Mute the microphone when video is off - this option lets users additionally have the app mute their microphone whenever it blocks the camera.

Accessibility:

1. ASL subtitles - using another AI model, our app will translate your ASL into text, giving mute people another channel of communication.
2. Audio transcriber - this option automatically transcribes everything you say to your webcam feed for anyone to read.

Concentration tracker:

1. Tracks the user's concentration level throughout their session, making them aware of the time they waste and giving them the chance to change their bad habits.

## How we built it
The core of our app was built with Python, using OpenCV to manipulate the image feed. The AIs used to detect the different visual situations are a mix of Haar cascades from OpenCV and deep learning models that we built on Google Colab using TensorFlow and Keras. The UI of our app was created using Electron with React.js and TypeScript, using a variety of different libraries to help support our app. The two parts of the application communicate using WebSockets from socket.io as well as synchronized Python threads.

## Challenges we ran into
Damn, where to start haha... Firstly, Python is not a language any of us are too familiar with, so from the start we knew we had a challenge ahead. Our first main problem was figuring out how to hijack the webcam video feed and pass the feed on to be used by any video conference app, rather than make our app for a specific one. The next challenge we faced was mainly figuring out a method of communication between our front end and our Python code. With none of us having too much experience in either Electron or Python, we might have spent a bit too much time on Stack Overflow, but in the end we figured out how to leverage socket.io to allow for continuous communication between the two apps. Another major challenge was making the core features of our application communicate with each other. Since the major parts (speech-to-text, camera feed, camera processing, socket.io, etc.) were mainly running on blocking threads, we had to figure out how to properly do multi-threading in an environment we weren't familiar with. This caused a lot of issues during development, but we ended up with a pretty good understanding near the end and got everything working together.

## Accomplishments that we're proud of
Our team is really proud of the product we have made and we have already begun proudly showing it to all of our friends! Considering we all have an intense passion for AI, we are super proud of our project from a technical standpoint, finally getting the chance to work with it. Overall, we are extremely proud of our product and genuinely plan to optimize it further in order to use it within our courses and work conferences, as it is really a tool we need in our everyday lives.

## What we learned
From a technical point of view, our team has learnt an incredible amount over the past few days. Each of us tackled problems using technologies we have never used before that we can now proudly say we understand how to use. For me, Jonathan, it was mainly learning how to work with OpenCV, following a 4-hour-long tutorial on the inner workings of the library and how to apply it to our project. For Quan, it was mainly creating a structure that would allow our Electron app and Python program to communicate without killing the performance. Finally, Zhi worked for the first time with the Google API to get our speech-to-text working; he also learned a lot of Python and about multi-threading in Python to set everything up together. Together, we all had to learn the basics of AI in order to implement the various models used within our application and to finally attempt (not a perfect model by any means) to create one ourselves.

## What's next for Boom. The Meeting Enhancer
This hackathon is only the start for Boom, as our team is exploding with ideas!!! We have a few ideas on where to bring the project next. Firstly, we would want to finish polishing the existing features in the app. Then we would love to make a marketplace that allows people to choose from any kind of trained AI to determine when to block the webcam feed. This would allow for limitless creativity from us and anyone who would want to contribute!!!!
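The "blur when nobody recognizable is in frame" behaviour described above maps naturally onto OpenCV's Haar cascades. A minimal sketch of that loop (real OpenCV calls; the virtual-camera output, sleep detection, and the trained face-recognition model from the write-up are omitted):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def process_stream():
    """Blur the webcam frame whenever no face is detected."""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            # Nobody in frame: protect the user's privacy.
            frame = cv2.GaussianBlur(frame, (51, 51), 0)
        cv2.imshow("Boom preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    process_stream()
```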
## Inspiration It's easy to zone off in online meetings/lectures, and it's difficult to rewind without losing focus at the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we can just quickly skim through a list of keywords to immediately see what happened? ## What it does Rewind is an intelligent, collaborative and interactive web canvas with built in voice chat that maintains a list of live-updated keywords that summarize the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text. ## How we built it Communications: WebRTC, WebSockets, HTTPS We used WebRTC, a peer-to-peer protocol to connect the users though a voice channel, and we used websockets to update the web pages dynamically, so the users would get instant feedback for others actions. Additionally, a web server is used to maintain stateful information. For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition. Audio transcription and summary: Google Cloud Speech (live transcription) and natural language APIs (for summarization) ## Challenges we ran into There are many challenges that we ran into when we tried to bring this project to reality. For the backend development, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend. We spent more than 18 hours on it to come to a working prototype. In addition, the frontend development was also full of challenges. The design and implementation of the canvas involved many trial and errors and the history rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team and we have learned a lot from this experience. ## Accomplishments that we're proud of Despite all the challenges we ran into, we were able to have a working product with many different features. Although the final product is by no means perfect, we had fun working on it utilizing every bit of intelligence we had. We were proud to have learned many new tools and get through all the bugs! ## What we learned For the backend, the main thing we learned was how to use WebRTC, which includes client negotiations and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the websockets. As for the frontend, we learned to use various javascript elements to help develop interactive client webapp. We also learned event delegation in javascript to help with an essential component of the history page of the frontend. ## What's next for Rewind We imagined a mini dashboard that also shows other live-updated information, such as the sentiment, summary of the entire meeting, as well as the ability to examine information on a particular user.
## Inspiration Every student knows the struggle that is course registration. You're tossed into an unfamiliar system with little advice and all these vague rules and restrictions to follow. All the while, courses are filling up rapidly. Far too often students—often underclassmen— are stuck without the courses they need. We were inspired by these pain points to create Schedge, an automatic schedule generator. ## What it does Schedge helps freshmen build their schedule by automatically selecting three out of a four course load. The three courses consist of a Writing the Essay course, the mandatory writing seminar for NYU students, a Core course like Quantitative Reasoning, and a course in the major of the student's choosing. Furthermore, we provide sophomores with potential courses to take after their freshman year, whether that's a follow up to Writing the Essay, or a more advanced major course. ## How we built it We wrote the schedule generation algorithm in Rust, as we needed it to be blazing fast and well designed. The front end is React with TypeScript and Material UI. The algorithm, while technically NP complete for all courses, uses some shortcuts and heuristics to allow for fast schedule generation. ## Challenges we ran into We had some trouble with the data organization, especially with structuring courses with their potential meeting times. ## Accomplishments that we're proud of Using a more advanced systems language such as Rust in a hackathon. Also our project has immediate real world applications at NYU. We plan on extending it and providing it as a service. ## What we learned Courses have a lot of different permutations and complications. ## What's next for Schedge More potential majors and courses! Features for upperclassmen!
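Schedge's generator is written in Rust, but the kind of heuristic it describes - pick one clash-free section per required bucket (writing seminar, core, major course) - can be illustrated in a few lines of Python. The data shapes and the greedy order here are assumptions for illustration only.

```python
def overlaps(a, b):
    """Meeting times are (day, start_minute, end_minute) tuples."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def build_schedule(buckets):
    """Greedily choose one section per bucket so that no two chosen sections clash."""
    chosen = []
    for bucket in buckets:                      # each bucket: list of (name, meeting_times)
        for name, times in bucket:
            if all(not overlaps(m, cm) for m in times for _, ctimes in chosen for cm in ctimes):
                chosen.append((name, times))
                break
        else:
            return None                         # no clash-free section found in this bucket
    return chosen

if __name__ == "__main__":
    writing = [("Writing the Essay A", [("Mon", 600, 675)]),
               ("Writing the Essay B", [("Tue", 600, 675)])]
    core = [("Quantitative Reasoning", [("Tue", 600, 675), ("Thu", 600, 675)])]
    major = [("Intro to CS", [("Wed", 700, 775)])]
    print(build_schedule([writing, core, major]))
```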
winning
## Inspiration
We were inspired by Plaid's challenge to "help people make more sense of their financial lives." We wanted to create a way for people to easily view where they are spending their money so that they can better understand how to conserve it. Plaid's API allows us to see financial transactions, and the Google Maps API serves as a great medium to display the flow of money.

## What it does
GeoCash starts by prompting the user to log in through the Plaid API. Once the user is authorized, we are able to send requests for transactions, given the user's `public_token`. We then display the locations of these transactions on the Google Maps API.

## How I built it
We built this using JavaScript, including the Meteor, React, and Express frameworks. We also utilized the Plaid API for the transaction data and the Google Maps API to display the data.

## Challenges I ran into
Data extraction/responses from the Plaid API, InfoWindow displays in Google Maps.

## Accomplishments that I'm proud of
Successfully implemented the Meteor web app and integrated two different APIs into our product.

## What I learned
Meteor (Node.js and React.js), the Plaid API, the Google Maps API, the Express framework.

## What's next for GeoCash
We plan on integrating real user information into our web app; we are currently only using the sandbox user, which has a very limited scope of transactions. We would like to implement differently sized displays on the Maps API to represent the amount of money spent at each location. We would like to display different colors based on the time of day, which was not included in the sandbox user. We would also like to implement multiple different user displays at the same time, so that we can better describe the market based on the different categories of transactions.
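Fetching the transaction data described above comes down to exchanging the Plaid `public_token` for an access token and then calling the transactions endpoint. A rough sketch against Plaid's sandbox REST API using `requests` (field names follow Plaid's public docs; the credentials are placeholders, and the official `plaid-python` client would normally wrap these calls):

```python
import requests

PLAID_HOST = "https://sandbox.plaid.com"
CREDS = {"client_id": "YOUR_CLIENT_ID", "secret": "YOUR_SECRET"}  # placeholders

def get_transactions(public_token: str, start: str, end: str) -> list[dict]:
    """Exchange a public_token and pull transactions (with location data when Plaid has it)."""
    exchange = requests.post(
        f"{PLAID_HOST}/item/public_token/exchange",
        json={**CREDS, "public_token": public_token},
    ).json()
    resp = requests.post(
        f"{PLAID_HOST}/transactions/get",
        json={**CREDS, "access_token": exchange["access_token"],
              "start_date": start, "end_date": end},
    ).json()
    # Keep only what the map needs: name, amount, and location.
    return [
        {"name": t["name"], "amount": t["amount"], "location": t.get("location", {})}
        for t in resp.get("transactions", [])
    ]
```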
## Inspiration
In the modern world, with a plethora of distractions and opportunities to spend money frivolously, the concept of saving has been neglected. Being college students ourselves, we understand the importance of every penny spent or saved. The idea of using spreadsheets to maintain balances monthly is cumbersome and can be messy. Hence, our team has developed an app called Cache to make saving fun and rewarding.

## What it does
Cache provides users with multiple saving strategies based on their predefined goals. We reward them for reaching those goals with offers and discounts that match the category of spending for which they plan to save money. The app keeps track of their overall expenditures and automatically classifies spending into different categories. It also maps the amount saved in each category towards their goals. If the user ends up spending more in any of the predefined spending categories (essential & non-essential), then we suggest methods to reduce spending. Moreover, the app provides relevant and rewarding offers to users based on the timely meeting of their set goals. The offers may include access to additional airline mileage points, cash-back offers, and other discount coupons, to name a few.

## How we built it
We used React to build the frontend and Firebase services for the backend. All background tasks were carried out using Cloud Functions, and Cloud Firestore was used to store the app data.

## Challenges we ran into
Initially, we had planned to use Plaid to connect real bank accounts and fetch transaction history from them. However, Plaid requires the app to go through a verification process that takes several days. Therefore, we decided to populate our database with dummy data for the time being.

## Accomplishments that we're proud of
We are proud of the fact that our app promotes healthy saving habits amongst people of all ages. Our app is not restricted to any particular demographic or income level. It rewards users in return for meeting their goals, which forms a mutually nurturing relationship and creates long-term financial well-being. We hope that users can improve their budgeting skills and financial habits so that apps like Cache will not be required in the future.

## What's next for Cache
We plan to use Cache to provide investing tips based on the user's financial budget and investing knowledge. This includes investing in the stock market, mutual funds, cryptocurrency, and exchange-traded funds (ETFs). We can notify our users to pay their dues in time so that they don't incur any additional costs (penalties). In addition, we plan to provide a credit journey report, which would allow users to understand how their credit score has dropped or improved in the last few months. Improving the user's knowledge of their credit history would enable them to sensibly choose the kind and amount of debt they intend to take on.
## Inspiration Resumes are boring, and we wanted something that would help us find a good job and develop our careers ## What it does So far, it is a bundle of webpages in html ## How we built it With teamwork ## What we learned So much that you could call it beautiful ## What's next for Resume Customizer We plan to learn machine vision and AI to scan for keywords in the future
partial
## Inspiration JetBlue challenge of YHack ## What it does Website with sentiment analysis of JetBlue ## How I built it Python, Data scraping, used textblob for sentiment analysis ## Challenges I ran into choosing between textblob and nltk ## Accomplishments that I'm proud of Having a finished product ## What I learned How to do sentiment analysis ## What's next for FeelingBlue
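The TextBlob choice mentioned above keeps the sentiment step very small. A minimal sketch of scoring scraped JetBlue text (TextBlob's `sentiment.polarity` is a real attribute; the sample reviews are made up):

```python
from textblob import TextBlob

def score_reviews(reviews: list[str]) -> list[tuple[str, float]]:
    """Polarity ranges from -1.0 (negative) to 1.0 (positive)."""
    return [(text, TextBlob(text).sentiment.polarity) for text in reviews]

if __name__ == "__main__":
    samples = [
        "JetBlue's crew was friendly and the flight left on time.",
        "Two-hour delay and no updates - very frustrating experience.",
    ]
    for text, polarity in score_reviews(samples):
        print(f"{polarity:+.2f}  {text}")
```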
## Inspiration
We got our inspiration from looking at the tools provided to us in the hackathon. We saw that we could use the Google APIs effectively to analyze the sentiment of customer reviews on social media platforms. With the wide range of possibilities it gave us, we got the idea of using programs to see the data visually.

## What it does
JetBlueByMe is a program which takes over 16,000 reviews from TripAdvisor and hundreds of tweets from Twitter and presents them in a graphable way. The first representation is an effective yet simple word cloud, which shows more frequently used adjectives in a larger size. The other is a bar graph showing which words appear most consistently.

## How we built it
The first step was to scrape data off multiple websites. To do this, a web scraping robot by UiPath was used. This saved a lot of time and allowed us to focus on other aspects of the program. For Twitter, Python had to be used in conjunction with the Beautiful Soup library to extract the tweets and hashtags. This was only possible after receiving permission 10 hours after applying to Twitter for API access. The Google Sentiment API and Syntax API were used to create the final product. The Syntax API helped extract the adjectives from the reviews so we could show a word cloud. To display the word cloud, the programming was done in R, as it is an effective language for data manipulation.

## Challenges we ran into
We were initially unable to use UiPath to scrape data from Twitter, as the page didn't have a next button, so the robot did not continue on its own. This was fixed using Beautiful Soup in Python. Also, when trying to extract the adjectives, the processing was very slow, causing us to fall back by about 2 hours. None of us knew the ins and outs of the web, hence it was a challenging problem for us.

## Accomplishments that we're proud of
We are happy about finding an effective way to scrape words using both UiPath and Beautiful Soup. Also, we weren't aware that Google provided an API for sentiment analysis; access to that was a big plus. We learned how to utilize our tools and incorporated them into our project. We also used Firebase to help store data on the cloud so we know it's secure.

## What we learned
Word scraping was a big thing that we all learned, as it was new to all of us. We had to research extensively before applying any idea. Most of the group did not know how to use the language R, but we understood the basics by the end. We also learned how to set up Firebase and Google Cloud services, which will definitely be a big asset in our future programming endeavours.

## What's next for JetBlueByMe
Our web scraping application can be optimized, and we plan on getting a live feed set up to show review sentiment in real time. With time and resources, we would be able to implement that.
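The adjective-extraction step described above (the Syntax API feeding the word cloud) can be sketched as follows, assuming the `google-cloud-language` Python client; the R word-cloud rendering from the write-up is replaced here by a plain frequency count.

```python
from collections import Counter
from google.cloud import language_v1

def adjective_counts(reviews: list[str]) -> Counter:
    """Count adjectives across reviews using Natural Language syntax analysis."""
    client = language_v1.LanguageServiceClient()
    counts: Counter = Counter()
    for text in reviews:
        doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
        response = client.analyze_syntax(request={"document": doc})
        for token in response.tokens:
            if token.part_of_speech.tag == language_v1.PartOfSpeech.Tag.ADJ:
                counts[token.text.content.lower()] += 1
    return counts

if __name__ == "__main__":
    print(adjective_counts(["The crew was friendly but the seats felt cramped."]).most_common(5))
```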
## Inspiration
We wanted to find a way to make transit data more accessible to the public as well as provide fun insights into their transit activity. As we've seen with Spotify Wrapped, people love seeing data about themselves. In addition, we wanted to develop a tool to help city organizers make data-driven decisions on how they operate their networks.

## What it does
Transit Tracker is simultaneously a tool for operators to analyze their network and an app for users to learn about their own activity and how it lessens their impact on the environment. For network operators, Transit Tracker allows them to manage data for a system of riders and individual trips. We developed a visual map that shows the activity of specific sections between train stations. For individuals, we created an app that shows data from their own transit activities. This includes gallons of gas saved, time spent riding, and their most visited stops.

## How we built it
We primarily used Palantir Foundry as the platform for our back-end data management. We used objects within Foundry to facilitate dataset transformation using SQL and Python, and we utilized Foundry Workshop to create the user interface that displays the information.

## Challenges we ran into
Working with the GeoJSON file format proved to be particularly challenging, because it is semi-structured data and not easily compatible with the datasets we were working with. Another large challenge we ran into was learning how to use Foundry. This was our first time using the software, so we had to learn the basics before we could even begin tackling our problem.

## Accomplishments that we're proud of
With TreeHacks being the first hackathon for all of us, we're proud of making it to the finish line and building something that is both functional and practical. Additionally, we're proud of the skills we've gained from learning to deal with large data, as well as our ability to learn and use Foundry in the short time frame we had.

## What we learned
We learned just how much we take everyday data analysis for granted. The amount of information being processed every day is unreal. We only tackled a small slice of data analysis, and even then we had a multitude of difficult issues to deal with. The understanding we've gained from dealing with data is valuable, and the skill of using a completely foreign application to build something in such a short amount of time has been truly insightful.

## What's next for Transit Tracker
The next step for Transit Tracker would be to translate our data (which is being generated through objects) onto a visual map where the routes constantly change in response to the data being collected. Being able to visually represent that change on a graph would be a valuable step to achieve, as it would mean we are working our way towards a functional application.
losing
## Inspiration
According to an article, about 86 per cent of Canada's plastic waste ends up in landfill, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin.

## What it does
The smart bin is able, using object detection, to sort plastic, glass, metal, and paper. All around Canada we see trash bins split into different types of trash. It sometimes becomes frustrating, and this inspired us to build a solution that doesn't require us to think about the kind of trash being thrown out. The Waste Wizard takes any kind of trash you want to throw away, uses machine learning to detect which bin it should be disposed in, and drops it in the proper disposal bin.

## How we built it
Using recyclable cardboard, used DC motors, and 3D-printed parts.

## Challenges we ran into
We had to train our model from the ground up, even gathering all the data ourselves.

## Accomplishments that we're proud of
We managed to get the whole infrastructure built and all the motors and sensors working.

## What we learned
How to create and train a model, 3D print gears, and use sensors.

## What's next for Waste Wizard
A smart bin able to sort the 7 types of plastic.
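A sorting decision like the one described above typically reduces to a few lines of TensorFlow inference per camera frame. A hedged sketch (the model file name, input size, and class ordering are assumptions; the `tf.keras` calls themselves are standard):

```python
import numpy as np
import tensorflow as tf

CLASSES = ["plastic", "glass", "metal", "paper"]           # assumed label order
model = tf.keras.models.load_model("waste_classifier.h5")  # hypothetical trained model

def classify_frame(frame_bgr: np.ndarray) -> str:
    """Resize a camera frame, run the CNN, and return the predicted bin."""
    img = tf.image.resize(frame_bgr[..., ::-1], (224, 224)) / 255.0  # BGR -> RGB, normalize
    probs = model.predict(tf.expand_dims(img, axis=0), verbose=0)[0]
    return CLASSES[int(np.argmax(probs))]
```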
## Inspiration
The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw recyclable items in the trash. Additionally, the sheer number of restrictions on what can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental, since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recyclable objects with machine-learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability, offering an additional source of motivation to recycle.

## What it does
RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid toward one compartment or the other depending on whether the AI model determines the object to be recyclable or not. Once the object slides into the compartment, the lid re-aligns itself and prepares for the next piece of waste. Ultimately, RecyclAIble autonomously helps people recycle as much as they can and waste less without doing anything differently.

## How we built it
The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64, and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and inputs it into a TensorFlow convolutional neural network to identify whether the object is recyclable or not. This data is then stored in an SQLite database and returned to the hardware. Based on the AI model's analysis, the servo motor connected to the Raspberry Pi flips the lid one way or the other, allowing the waste item to slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind CSS, and React. This interface provides the user with insight into their current recycling statistics and how they compare to nationwide recycling averages.

## Challenges we ran into
The prototype model had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid to spin with a single servo motor and to prop up the Logitech camera for a top-down view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources, and research. Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble.

## Accomplishments that we're proud of
We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help keep our environment clean.

## What we learned
First and foremost, we learned just how big of a problem under-recycling is in America and throughout the world, and how important recycling is. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to recycling. The hackathon has motivated us to learn a lot more about our respective technologies - whether it was new errors or desired functions, new concepts and ideas had to be introduced to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals.

## What's next for RecyclAIble
RecyclAIble has a lot of potential as far as development goes. RecyclAIble's AI can be improved with further generations of images of more varied items of trash, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality - like dates, tracking features, trends, and the weights of trash - that expand on the existing information and capabilities offered. And we're already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware and sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come.
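The hardware-to-server hop described above (camera frame, base64 encoding, Flask endpoint, CNN) was the part that caused the most trouble, so here is a hedged sketch of what that endpoint can look like. The route name, model file, and label order are assumptions; the Flask, Pillow, and `tf.keras` usage is standard.

```python
import base64, io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("recyclaible_cnn.h5")   # hypothetical model file
LABELS = ["recyclable", "trash"]                            # assumed output order

@app.route("/classify", methods=["POST"])
def classify():
    """Decode the base64 image sent by the Raspberry Pi and return a verdict."""
    img_b64 = request.get_json()["image"]
    img = Image.open(io.BytesIO(base64.b64decode(img_b64))).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img) / 255.0, axis=0)
    pred = LABELS[int(np.argmax(model.predict(batch, verbose=0)[0]))]
    return jsonify({"label": pred})   # the Pi tilts the lid based on this label

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```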
## Inspiration
We are designers from Sheridan College who wanted to create a game with social impact. When we were brainstorming issues we could tackle, we wanted to make an educational game that didn't take away any of the fun factor. A theme that caught our attention was **environment**. Did you know that Toronto is the worst offender in recycling contamination, with a whopping 26% rate? Or that 15% of the average homeowner's garbage is recyclable? In 2017, an estimated 55,000 tonnes of non-recyclable material was going into blue bins! This is costing Toronto **millions of dollars** because people don't know any better.

Some of the most common waste mistakes are:

• Throwing toys/appliances in the recycling bin, thinking that someone will reuse them
• Throwing batteries into the garbage
• Throwing coffee cups into the blue bin when they belong in the garbage
• ...and the list goes on!

The reason this is happening is a **lack of knowledge** - after all, there's no government official telling us where to throw our garbage. Current products in the market, such as Waste Wizard or other waste-management games/tools, rely on people being proactive, but we put fun as the priority so that learning comes naturally. We made it our mission to start a movement from the bottom up - and it starts with children becoming more educated on where waste really belongs. Our game teaches children about some of the most common recycling and garbage mistakes you can make, as well as alternatives such as donating. Join Pandee and Pandoo on their journey to sort through the 6ix's waste and put it in its place!

## What it does
Using the **Nintendo Joy-Cons**, players control Pandee and Pandoo to sort through the trash that has landed in their backyard. The waste can be sorted into 5 different receptacles, and players will need to use their wits to figure out which container it belongs in. The goal is to use teamwork to get the most points possible and have the cleanest lawn while minimizing their impact on the environment.

## How we built it
We used Unity and C# to bring it all together. Additionally, we used the JoyCon library to integrate the Nintendo Switch controllers into the game.

## Challenges we ran into
We had trouble texturing the raccoon at the beginning because it didn't map properly onto the mesh. It was also difficult to integrate the Nintendo motion controls into the game design. It was the first time our team used this hardware, and it proved to be difficult.

## Accomplishments that we're proud of
We managed to finish it without major bugs! It works and it has a lot of different emergent designs. The game itself feels like it has a lot of potential. This game is a result of our 2 years of schooling - we used everything we learned to create it. Shoutout to our professors and classmates.

## What we learned
Tibi: I learned that coffee cups go into the trash - I used to think they go in the recycling. We also learned how to use the Nintendo Joy-Cons.

## What's next for Recoon
We want to add secondary mechanics to the game to make it more interesting and flesh out the user experience.
winning
## Inspiration Our inspiration comes from our own experiences as programmers, where we realized that it was sometimes difficult (and a productivity drain), to move our right hands between our keyboard and our mouse. We wondered whether there was any possibility of redesigning a new version of the mouse to work without the need to move our hand from the keyboard. ## What it does nullHands utilizes a variety of measurements to provide users with an accurate mouse movement system. First, we utilize a gyroscope to grab the user's head movements, mapping that to the mouse movements on the screen. Then, we monitored the user's eyes, mapping the left and right eyes to the left and right respective mouse clicks. ## How we built it We built this using iOS Swift 3, to monitor the gyroscope. We then used Python with sockets to run the backend, as well as to run the OpenCV and Mouse movement libraries ## Challenges we ran into We did not have the hardware we needed, so we had to improvise with an iPhone, as well as a headband. ## Accomplishments that we're proud of We are proud of managing to create a working prototype in under 24 hours. ## What we learned We learned about sockets and direct communication between iOS and computer. ## What's next for nullHands We are really passionate about nullHands, and think that this is a project that can definitely help a lot of people. Therefore, we plan on continuing to work on nullHands, improving and adding functionality so that we can one day release this as a product so that everyone can experience nullHands.
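The head-to-cursor mapping described above amounts to streaming gyroscope deltas from the phone over a socket and turning them into relative mouse moves. The write-up doesn't say which mouse library was used, so this sketch assumes `pyautogui`; the newline-delimited JSON message format and the scaling factor are also assumptions.

```python
import json
import socket

import pyautogui  # cross-platform mouse control

SCALE = 40  # assumed degrees-to-pixels factor

def serve(host: str = "0.0.0.0", port: int = 9999) -> None:
    """Receive {"dx": ..., "dy": ...} packets from the phone and move the cursor."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        buffer = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                delta = json.loads(line)
                pyautogui.moveRel(delta["dx"] * SCALE, delta["dy"] * SCALE)

if __name__ == "__main__":
    serve()
```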
# The Guiding Hand

## Are things becoming difficult to understand? Why not use The Guiding Hand?

## The Problem
Ever since the onset of COVID, the world has been depending heavily on various forms of online communication to keep the flow of work going. For the first time ever, we experienced teaching via Zoom, Microsoft Teams, and various other platforms. Understanding concepts without visual representation has never been easy, and not all teachers or institutions can afford Edupen, iPads, or other annotation software.

## What it does
Our product aims at building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces). It is an alternative user interface for providing real-time data to a computer instead of typing with keys, thereby reducing the amount of effort required to communicate over these platforms. When a user opens The Guiding Hand's website, they see a button. On clicking the button, they are led to our application. The application uses hand gestures to perform various functions:

* 1 finger to draw on the screen.
* 2 fingers to change pen color or select the eraser.
* 3 fingers to take a screenshot.
* 4 fingers to clear the screen.

## How we built it
We used React to build the site, and Flask and Python libraries such as Mediapipe, OpenCV, NumPy, and pyscreenshot to build the interface.

## Challenges we ran into
Our biggest challenge was integrating our Python application with our React website. To overcome this issue we created a web server using Flask.

## Accomplishments that we're proud of
We are very proud of the fact that we came up with a fully functional site and application in such a short duration of time.

## What we learned
We learned how to create a site in React and link it to a Flask server that opens our application, which allows a user to draw on the screen using just hand gestures.

## What's next for The Guiding Hand
* Adding more hand gestures for more features.
* Allowing multiple people to draw on the same screen simultaneously.
* Annotating over a shared screen.
* Converting hand-drawn text on screen to typed text and saving it in a document.
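The gesture dispatch above (1 finger draws, 2 change the pen, 3 take a screenshot, 4 clear) starts with counting extended fingers from Mediapipe hand landmarks. A hedged sketch of that counting step (Mediapipe's `Hands` API is real; the tip-above-knuckle heuristic and ignoring the thumb are simplifications, not necessarily the team's exact rules):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip landmarks

def count_fingers(frame_bgr) -> int:
    """Rough count of raised fingers (thumb ignored) from one webcam frame."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return 0
    lm = result.multi_hand_landmarks[0].landmark
    # A finger counts as "up" if its tip sits above the joint two indices below it.
    return sum(1 for tip in FINGER_TIPS if lm[tip].y < lm[tip - 2].y)

ACTIONS = {1: "draw", 2: "change pen / eraser", 3: "screenshot", 4: "clear screen"}

if __name__ == "__main__":
    ok, frame = cv2.VideoCapture(0).read()
    if ok:
        print(ACTIONS.get(count_fingers(frame), "no action"))
```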
## Inspiration
Due to heavy workloads and family problems, people often forget to take care of their health and diet. Common health problems people face nowadays are blood pressure (BP), heart problems, and diabetes. Many people also face mental health problems due to studies, jobs, or other pressures. This project can help people find out about their health problems. It also helps people recycle items easily, as items are divided into 12 different classes, and it helps people who do not have any knowledge of plants by predicting whether a plant has a disease or not.

## What it does
On the Garbage page, when we upload an image, it classifies which kind of garbage it is, which helps with easy recycling. On the Mental Health page, when we answer some questions, it predicts whether we are facing some kind of mental health issue. The Health page is divided into three parts: the first predicts whether you have heart disease, the second predicts whether you have diabetes, and the third predicts whether you have blood pressure problems. The COVID-19 page classifies whether you have COVID or not, and the Plant_Disease page predicts whether a plant has a disease or not.

## How we built it
I built it using Streamlit and OpenCV.

## Challenges we ran into
Deploying the website to Heroku was very difficult, as it is not something I generally do. Most of this was new to us except for deep learning and ML, so it was very difficult overall due to the time constraint. The overall logic, and figuring out how we should calculate everything, was difficult to determine within the time limit. Overall, time was the biggest constraint.

## Accomplishments that we're proud of

## What we learned
TensorFlow, Streamlit, Python, HTML5, CSS3, OpenCV, machine learning, deep learning, and using different Python packages.

## What's next for Arogya
losing
## Inspiration:
The developer of the team was inspired by his laziness.

## What it does:
The script opens a web browser and lets you log in to your Tinder account. Then, it automates the swiping-right process for a set amount of time.

## How we built it:
We developed a script with Python and Selenium.

## Challenges we ran into:
Getting the script to identify and respond to other activities that happen on Tinder, such as pop-ups.

## Accomplishments that we're proud of
That it works (most of the time)!

## What we learned
How to use the Selenium framework with Python, and the implicit and explicit waits necessary for all the components to load.

## What's next for Lazy seduction
* Being able to respond to matches using generative AI
* Receiving notifications of matches on your cellphone
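The explicit waits mentioned under "What we learned" are the crux of automating a dynamic page like Tinder's. A hedged sketch of that pattern (Selenium's `WebDriverWait`/`expected_conditions` calls are real; the XPath selector and the manual-login step are placeholders, since the live page's markup changes):

```python
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

LIKE_BUTTON = "//button[.//span[text()='Like']]"   # placeholder XPath

def swipe_right_for(minutes: float) -> None:
    driver = webdriver.Chrome()
    driver.get("https://tinder.com")
    input("Log in manually, then press Enter to start swiping...")
    deadline = time.time() + minutes * 60
    wait = WebDriverWait(driver, 10)
    while time.time() < deadline:
        try:
            # Wait until the Like button is actually clickable (handles slow loads and pop-ups).
            wait.until(EC.element_to_be_clickable((By.XPATH, LIKE_BUTTON))).click()
        except Exception:
            driver.refresh()   # crude recovery when a pop-up blocks the flow
    driver.quit()

if __name__ == "__main__":
    swipe_right_for(5)
```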
## Inspiration
Using the Tinder app has three main drawbacks. Firstly, why focus on someone's physical appearance when instead you can be less shallow and focus on their bio, to really get to know them as a person? Secondly, the app is more of a solo activity - why not include your friends in your lovemaking decisions? Thirdly, what if the user is vision impaired? Making Tinder accessible to this group of users was central to our goal of making Tinder the best it can be for all. We set out to fix these issues for Valentine's Day 2016.

## What it does
Alexa Tinder lets you Tinder using voice commands, asking Tinder for details about the user, such as their bio, job title, and more. From there, you can choose to either swipe right or swipe left with similar voice commands, and go through your whole Tinder card stack!

## How we built it
We used the Alexa API and AWS Lambda functions to set up the Alexa environment, and then used Python to interact with the API. For the image descriptions, we used Clarifai.

## Challenges we ran into
Learning the Alexa API and AWS Lambda was the main challenge, as they had some funky quirks when used with Python! Check it out at [GitHub](https://github.com/Pinkerton/alexa-tinder)
## Inspiration
There are many scary things in the world, ranging from poisonous spiders to horrifying ghosts, but none of these things scare people more than the act of public speaking. Over 75% of humans suffer from a fear of public speaking - but what if there was a way to tackle this problem? That's why we created Strive.

## What it does
Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback by analyzing the voice delivery of a person's presentation. Once you have recorded your speech, Strive calculates various performance variables such as voice clarity, filler-word usage, voice speed, and voice volume. Once the performance variables have been calculated, Strive renders them in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. On the settings page, users have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive also sends the feedback results to the user via text message, allowing them to share/forward an analysis easily.

## How we built it
Using the collaboration tool Figma, we designed wireframes of our mobile app. We used tools such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit to calculate the performance variables and used stdlib's cloud-function features for text messaging.

## Challenges we ran into
Given our technical backgrounds, one challenge we ran into was developing a simple yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user.

## Accomplishments that we're proud of
We created a fully functional mobile app while leveraging an unfamiliar technology stack, providing a simple application that people can use to start receiving actionable feedback on improving their public speaking skills. Anyone can use our app to improve their public speaking skills and conquer their fear of public speaking.

## What we learned
Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs.

## What's next for Strive - Your Personal AI Speech Trainer
* Model the voices of famous public speakers for a more realistic experience in giving personal feedback (using the Lyrebird API).
* The ability to calculate more performance variables for an even better analysis and more detailed feedback.
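Once the speech toolkit returns a transcript, performance variables like filler-word usage and speaking speed reduce to simple counting. An illustrative sketch in plain Python (the default filler list and pace thresholds are assumptions, not Strive's actual values):

```python
import re

DEFAULT_FILLERS = {"um", "uh", "like", "you know", "basically"}  # user-extendable

def speech_metrics(transcript: str, duration_seconds: float, fillers=DEFAULT_FILLERS):
    """Compute words-per-minute and filler-word usage from a transcript."""
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    text = " ".join(words)
    filler_count = sum(len(re.findall(rf"\b{re.escape(f)}\b", text)) for f in fillers)
    wpm = len(words) / (duration_seconds / 60)
    return {
        "words_per_minute": round(wpm, 1),
        "filler_count": filler_count,
        "filler_ratio": round(filler_count / max(len(words), 1), 3),
        "pace_feedback": "slow down" if wpm > 160 else "good pace" if wpm > 110 else "speed up",
    }

if __name__ == "__main__":
    print(speech_metrics("Um so basically our app is like really fast", 6.0))
```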
## Inspiration Have you ever lost a valuable item that’s really important to you, only for it to never be seen again? * Over **60% of people** have lost something in their lifetime. * In the **US alone**, over **400 million items** are lost and found every year. * The average person loses up to **nine items** every day. The most commonly lost items include **wallets, keys, and phones**. While some are lucky enough to find their lost items at home or in their car, those who lose things in public often never see them again. The good news is that most places have a **“lost and found”** system, but the problem? It's **manual**, requiring you to reach out to someone to find out if your item has been turned in. ## What it does **LossEndFound** solves this problem by **automating the lost and found process**. It connects users who report lost items with those who find them. * Whether you're looking for something or reporting something found, the system uses **AI-powered vector similarity search** to match items based on descriptions provided by users. ## How we built it We built **LossEndFound** to make reconnecting lost items with their owners **seamless**: * **FastAPI** powers our backend for its speed and reliability. * **Cohere embeddings** capture the key features of each item. * **ChromaDB** stores and performs vector similarity searches, matching lost and found items based on cosine similarity. * On the frontend, we used **React.js** to create a user-friendly experience that makes the process quick and easy. ## Challenges we ran into As first-time hackers, we faced a few challenges: * **Backend development** was tough, especially when handling **numpy array dimensions**, which slowed us down during key calculations. * **Frontend-backend integration** was a challenge since it was our first time bridging these systems, making the process more complex than expected. ## Accomplishments that we're proud of We’re proud of how we pushed ourselves to learn and integrate new technologies: * **ChromaDB**, **Cohere**, and **CORS** were all new tools that we successfully implemented. * Overcoming these challenges showed us what’s possible when we **step outside our comfort zone** and **collaborate effectively**. ## What we learned We learned several key lessons during this project: * The importance of **clear requirements** to guide development. * How to navigate new technologies under pressure. * How to **grow, adapt, and collaborate** as a team to tackle complex problems. ## What's next for LossEndFound Moving forward, we plan to: * Add **better filters** for more precise searches (by date, location, and category). * Introduce **user profiles** to track lost/found items. * Streamline the process for reporting or updating item statuses. These improvements will make the app even more **efficient** and **user-friendly**, keeping the focus on **simplicity and effectiveness**.
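To make the matching step above concrete, here is a minimal sketch of cosine-similarity matching between a lost-item description and stored found-item descriptions. It assumes the embeddings have already been produced by an embedding model such as Cohere's; in the real app ChromaDB performs this search, so the plain-NumPy version below is only illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity = dot product of the two vectors over the product of their norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matches(query_vec, item_vecs, item_ids, top_k=3):
    """Rank stored found-item embeddings against a lost-item query embedding."""
    scored = [(item_id, cosine_similarity(query_vec, vec)) for item_id, vec in zip(item_ids, item_vecs)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy example with made-up 4-dimensional "embeddings".
query = np.array([0.1, 0.8, 0.3, 0.0])
items = [np.array([0.1, 0.7, 0.4, 0.1]), np.array([0.9, 0.1, 0.0, 0.2])]
print(best_matches(query, items, ["black wallet", "blue water bottle"]))
```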
## Inspiration
We hear all the time that people want a dog but don't want the commitment, and yet there are still issues with finding a pet sitter! We flipped the 'tinder'-esque mobile app experience around to reflect just how many people are desperate and willing to spend time with a furry friend!
## What it does
Our web app allows users to create an account and see everyone who is currently looking to babysit a cute puppy or is trying to find a pet sitter so that they can go away for vacation! The app also allows users to engage in chat messages so they can find a perfect weekend getaway for their dogs.
## How we built it
Our web app is primarily a React app on the front end, and we used a combination of individual programming and extreme programming when we hit walls. Ruby on Rails and SQLite run the back end, so with a team of four we had two people manning the keyboards for the front end and the other two working diligently on the back end.
## Challenges we ran into
GITHUB!!!! Merging, pushing, pulling, resolving, crying, fetching, syncing, sobbing, approving, etc etc. We put our repo through a stranglehold of indecipherable commits more than a few times and it was our greatest rival.
## Accomplishments that we're proud of
IT WORKS! We're so proud to build an app that looks amazing and also communicates on a sophisticated level. The user experience is cute and delightful but the complexities are still baked in, like session tokens and password hashing (plus salt!)
## What we learned
The only way to get fast is to go well. The collaboration phase with GitHub ate up a large part of our time every couple of hours and there was nobody to blame but ourselves.
## What's next for Can I Borrow Your Dog
We think this is a pretty cool little app that could use a LARGE refactoring. Whether we keep in touch as a group and maintain this project to spruce up our resumes is definitely being considered. We'd like to show our friends and family how much we accomplished in just 36 hours (straight lol)!
## Why We Created **Here** As college students, one question that we catch ourselves asking over and over again is – “Where are you studying today?” One of the most popular ways for students to coordinate is through texting. But messaging people individually can be time consuming and awkward for both the inviter and the invitee—reaching out can be scary, but turning down an invitation can be simply impolite. Similarly, group chats are designed to be a channel of communication, and as a result, a message about studying at a cafe two hours from now could easily be drowned out by other discussions or met with an awkward silence. Just as Instagram simplified casual photo sharing from tedious group-chatting through stories, we aim to simplify casual event coordination. Imagine being able to efficiently notify anyone from your closest friends to lecture buddies about what you’re doing—on your own schedule. Fundamentally, **Here** is an app that enables you to quickly notify either custom groups or general lists of friends of where you will be, what you will be doing, and how long you will be there for. These events can be anything from an open-invite work session at Bass Library to a casual dining hall lunch with your philosophy professor. It’s the perfect dynamic social calendar to fit your lifestyle. Groups are customizable, allowing you to organize your many distinct social groups. These may be your housemates, Friday board-game night group, fellow computer science majors, or even a mixture of them all. Rather than having exclusive group chat plans, **Here** allows for more flexibility to combine your various social spheres, casually and conveniently forming and strengthening connections. ## What it does **Here** facilitates low-stakes event invites between users who can send their location to specific groups of friends or a general list of everyone they know. Similar to how Instagram lowered the pressure involved in photo sharing, **Here** makes location and event sharing casual and convenient. ## How we built it UI/UX Design: Developed high fidelity mockups on Figma to follow a minimal and efficient design system. Thought through user flows and spoke with other students to better understand needed functionality. Frontend: Our app is built on React Native and Expo. Backend: We created a database schema and set up in Google Firebase. Our backend is built on Express.js. All team members contributed code! ## Challenges Our team consists of half first years and half sophomores. Additionally, the majority of us have never developed a mobile app or used these frameworks. As a result, the learning curve was steep, but eventually everyone became comfortable with their specialties and contributed significant work that led to the development of a functional app from scratch. Our idea also addresses a simple problem which can conversely be one of the most difficult to solve. We needed to spend a significant amount of time understanding why this problem has not been fully addressed with our current technology and how to uniquely position **Here** to have real change. ## Accomplishments that we're proud of We are extremely proud of how developed our app is currently, with a fully working database and custom frontend that we saw transformed from just Figma mockups to an interactive app. It was also eye opening to be able to speak with other students about our app and understand what direction this app can go into. 
## What we learned Creating a mobile app from scratch—from designing it to getting it pitch ready in 36 hours—forced all of us to accelerate our coding skills and learn to coordinate together on different parts of the app (whether that is dealing with merge conflicts or creating a system to most efficiently use each other’s strengths). ## What's next for **Here** One of **Here’s** greatest strengths is the universality of its usage. After helping connect students with students, **Here** can then be turned towards universities to form a direct channel with their students. **Here** can provide educational institutions with the tools to foster intimate relations that spring from small, casual events. In a poll of more than sixty university students across the country, most students rarely checked their campus events pages, instead planning their calendars in accordance with what their friends are up to. With **Here**, universities will be able to more directly plug into those smaller social calendars to generate greater visibility over their own events and curate notifications more effectively for the students they want to target. Looking at the wider timeline, **Here** is perfectly placed at the revival of small-scale interactions after two years of meticulously planned agendas, allowing friends who have not seen each other in a while casually, conveniently reconnect. The whole team plans to continue to build and develop this app. We have become dedicated to the idea over these last 36 hours and are determined to see just how far we can take **Here**!
## Inspiration
Inspired by [SIU Carbondale's Green Roof](https://greenroof.siu.edu/siu-green-roof/), we wanted to create an automated garden watering system that would help address issues ranging from food deserts to lack of agricultural space to storm water runoff.
## What it does
This hardware solution takes in moisture data from soil and determines whether or not the plant needs to be watered. If the soil's moisture is too low, the valve will dispense water and the web server will display that water has been dispensed.
## How we built it
First, we tested the sensor and determined the boundaries between dry, damp, and wet based on the sensor's output values. Then, we mapped those boundaries to percentage soil moisture. Specifically, the sensor measures the conductivity of the material around it, so water, being the most conductive, produced the highest values and air, being the least conductive, produced the lowest. Soil falls in the middle, and the moisture ranges were defined by the pure-air and pure-water boundaries. From there, we built the hardware setup with the sensor connected to an Arduino UNO microcontroller, which is connected to a Raspberry Pi 4 controlling a solenoid valve that releases water when the soil moisture reading is less than 40% wet.
## Challenges we ran into
At first, we aimed too high. We wanted to incorporate weather data into our water dispensing system, but the information flow and JSON parsing were not cooperating with the Arduino IDE. We consulted with a mentor, Andre Abtahi, who helped us get a better perspective of our project scope. It was difficult to focus on what it meant to truly create a minimum viable product when we had so many ideas.
## Accomplishments that we're proud of
Even though our team is spread across the country (California, Washington, and Illinois), we were still able to create a functioning hardware hack. In addition, as beginners we are very excited about this hackathon's outcome.
## What we learned
We learned about wrangling APIs, how to work in a virtual hackathon, and project management. Upon reflection, creating a balance between feasibility, ability, and optimism is important for guiding the focus and motivations of a team. Being mindful about energy levels is especially important for long sprints like hackathons.
## What's next for Water Smarter
Lots of things! What's next for Water Smarter is weather-controlled water dispensing. Given humidity, precipitation, and other weather data, our water system will dispense more or less water. This adaptive water feature will save water and let nature pick up the slack. We would use the OpenWeatherMap API to gather the forecasted volume of rain, predict the potential soil moisture, and have the watering system dispense an adjusted amount of water to maintain correct soil moisture content. In a future iteration of Water Smarter, we want to stretch the use of live geographic data even further by suggesting appropriate produce for each growing zone in the US, which will personalize the water conservation process. Not all plants are appropriate for all locations, so we would want to provide the user with options for optimal planting. We can use software like ArcScene to look at sunlight exposure according to regional 3D topographical images and suggest planting locations and times. We want our product to be user friendly, so we want to improve our aesthetics and show more information about soil moisture beyond just notifying water dispensing.
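A minimal sketch of the watering decision described above: map a raw conductivity reading onto a percent-moisture scale using the dry/wet calibration boundaries, and open the valve below the 40% threshold. The calibration constants, GPIO pin number, and `read_raw_moisture()` helper are hypothetical stand-ins; in the actual build the reading comes from the Arduino over a serial link.

```python
import time
import RPi.GPIO as GPIO  # assumes the solenoid valve relay is driven from a Pi GPIO pin

VALVE_PIN = 17                 # hypothetical pin
RAW_DRY, RAW_WET = 300, 800    # hypothetical calibration boundaries (pure air vs. pure water)

def to_percent_moisture(raw: int) -> float:
    """Linearly map a raw conductivity reading onto 0-100% using the calibration boundaries."""
    raw = min(max(raw, RAW_DRY), RAW_WET)
    return 100.0 * (raw - RAW_DRY) / (RAW_WET - RAW_DRY)

def read_raw_moisture() -> int:
    # Placeholder: the real system receives this value from the Arduino over serial.
    return 520

GPIO.setmode(GPIO.BCM)
GPIO.setup(VALVE_PIN, GPIO.OUT)
try:
    while True:
        moisture = to_percent_moisture(read_raw_moisture())
        GPIO.output(VALVE_PIN, GPIO.HIGH if moisture < 40 else GPIO.LOW)  # water when below 40%
        time.sleep(60)
finally:
    GPIO.cleanup()
```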
## Inspiration
As the world progresses into the digital age, there is a huge simultaneous focus on creating various sources of clean energy that are sustainable and affordable. Unfortunately, there is minimal focus on ways to sustain the increasingly rapid production of energy. Energy is wasted every day as utility companies oversupply power to certain groups of consumers.
## What It Does
Thus, we bring you Efficity, a device that helps utility companies analyze and predict the load demand of a housing area. By leveraging the expanding, ubiquitous arrival of Internet of Things devices, we can access energy data in real time. Utility companies could then estimate the ideal power to supply to a housing area, while keeping in mind to satisfy the load demand. With this, less energy will be wasted, improving energy efficiency. On top of that, everyday consumers can also have easy access to their own personal usage for tracking.
## How We Built It
Our prototype is built primarily around a DragonBoard 410c, where a potentiometer is used to represent the varying load demand of consumers. By using the analog capabilities of a built-in Arduino (ATmega328P), we can calculate the power consumed by the load in real time. A Python script is then run on the DragonBoard to receive the data from the Arduino through serial communication. The DragonBoard further complements our design with its built-in WiFi capabilities. With this in mind, we can send HTTP requests to a web server hosted by energy companies. In our case, we explored sending this data to a free IoT platform web server, which allows a user anywhere to track energy usage as well as perform analytics, such as with MATLAB. In addition, the DragonBoard comes with a fully usable GUI and a compatible HDMI monitor for users who are less familiar with command-line controls.
## Challenges We Ran Into
There were many challenges throughout the hackathon. First, we had trouble grasping the operation of the DragonBoard. The first 12 hours were spent only on learning how to use the device itself - it also did not help that our first DragonBoard was defective and did not come with a pre-flashed operating system! Next time, we plan to ask more questions early on rather than fixating on problems we believed were trivial. Next, we had a hard time coding the Wi-Fi functionality of the DragonBoard. This was largely due to the lack of expertise in the area from most members. For future reference, we find it advisable to have a greater diversity of team members to facilitate faster development.
## Accomplishments That We're Proud Of
Overall, we are proud of what we have achieved, as this was our first time participating in a hackathon. We ranged from first-year all the way to fourth-year students! From learning how to operate the DragonBoard 410c to having hands-on experience in implementing IoT capabilities, we thoroughly believe that HackWestern has broadened all our perspectives on technology.
## What's Next for Efficity
If this pitch is successful in this hackathon, we are planning to further iterate on the DragonBoard prototype and develop its full potential. There are numerous algorithms we would love to implement and explore to process the collected data, since the DragonBoard is quite a powerful device with its own operating system. We may also want to include extra hardware add-ons such as silent alarms for over-usage or solar panels to allow a fully self-sustained device.
To take this one step further: if we were able to build a fully functional product, we could opt to pitch this idea to investors!
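The write-up above describes a Python script on the DragonBoard that reads power samples from the Arduino over serial and forwards them to an IoT web server over HTTP. The sketch below shows that relay loop in outline; the serial port, baud rate, endpoint URL, and field names are assumptions, not the team's actual configuration.

```python
import serial    # pyserial
import requests

SERIAL_PORT = "/dev/ttyUSB0"                            # hypothetical port for the Arduino link
ENDPOINT = "https://example-iot-platform.com/update"    # hypothetical IoT platform endpoint

def relay_power_readings():
    """Read power values (one per line, in watts) from the Arduino and POST them upstream."""
    with serial.Serial(SERIAL_PORT, 9600, timeout=5) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                watts = float(line)
            except ValueError:
                continue  # skip malformed lines
            requests.post(ENDPOINT, data={"field1": watts}, timeout=10)

if __name__ == "__main__":
    relay_power_readings()
```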
If we take a moment to stop and think of those who can't speak or hear, we will realize and be thankful for what we have. To make the lives of these differently abled people better, we needed to come up with a solution, and here we present Proximity.
Proximity uses the Myo armband for sign recognition and an active voice for speech recognition. An ML model trained on the armband's data reads the signs made by human hands and interprets them, thereby helping the speech impaired to share their ideas and communicate with people and digital assistants alike. The service is also for those who are hearing impaired, so that they can know when somebody is calling their name or giving them a task.
We're proud of successfully recognizing a few gestures and setting up a web app that understands and learns the name of a person. Apart from that, we have built a to-do list that enables hearing impaired people to actively note down tasks assigned to them.
We learned an entirely new language, Lua, to set up and use the Myo armband SDK. Apart from that, we used a vast array of languages, scripts, APIs, and products for different parts of the product, including Python, C++, Lua, JS, Node.js, HTML, CSS, the Azure Machine Learning Studio, and Google Firebase.
We look forward to exploring the unlimited opportunities with Proximity, from training it to recognize the entire American Sign Language using the powerful computing capabilities of the Azure Machine Learning Studio, to advancing our speech recognition app so it can understand more complex conversations. Proximity should integrate seamlessly into the lives of the differently abled.
## Inspiration
The inspiration behind MoodJournal comes from a desire to reflect on and cherish the good times, especially in an era where digital overload often makes us overlook the beauty of everyday moments. We wanted to create a digital sanctuary where users can not only store their daily moments and memories but also discover what truly makes them happy. By leveraging cutting-edge technology, we sought to bring a modern twist to the nostalgic act of keeping a diary, transforming it into a dynamic tool for self-reflection and emotional well-being.
## What it does
MoodJournal is a digital diary app that allows users to capture their daily life through text entries and photographs. Utilizing semantic analysis and image-to-text conversion technologies, the app evaluates the emotional content of each entry and assigns one of five happiness ratings, ranging from Very Happy to Very Sad. This innovative approach enables MoodJournal to identify and highlight the user's happiest moments. At the end of the year, it creates a personalized collage of these joyous times, showcasing a summary of the texts and photos from those special days and serving as a powerful visual reminder of the year's highlights.
## How we built it
MoodJournal's development combined React and JavaScript for a dynamic frontend, utilizing open-source libraries for enhanced functionality. The backend was structured around Python and Flask, providing a solid foundation for simple REST APIs. Cohere's semantic classification API was integrated for text analysis, enabling accurate emotion assessment. ChatGPT helped generate training data, ensuring our algorithms could effectively analyze and interpret users' entries.
## Challenges we ran into
The theme of nostalgia itself presented a conceptual challenge, making it difficult initially to settle on a compelling idea. Our limited experience in frontend development and UX/UI design further complicated the project, requiring substantial effort and learning. Thanks to invaluable guidance from mentors like Leon, Shiv, and Arash, we navigated these obstacles. Additionally, while the Cohere API served our text analysis needs well, we recognized the necessity for a larger dataset to enhance accuracy, underscoring the critical role of comprehensive data in achieving precise analytical outcomes.
## Accomplishments that we're proud of
We take great pride in achieving meaningful results from the Cohere API, which enabled us to conduct a thorough analysis of emotions from text entries. A significant breakthrough was our innovative approach to photo emotion analysis; by generating descriptive text from images using ChatGPT and then analyzing these descriptions with Cohere, we established a novel method for capturing emotional insights from visual content. Additionally, completing the core functionalities of MoodJournal to demonstrate an end-to-end flow of our primary objective was a milestone accomplishment. This project marked our first foray into utilizing a range of technologies, including React, Firebase, the Cohere API, and Flask. Successfully integrating these tools and delivering a functioning app, despite being new to them, is something we are especially proud of.
## What we learned
This hackathon was a tremendous learning opportunity. We dove into tools and technologies new to us, such as React, where we explored new libraries and features like useEffect, and Firebase, where we achieved data storage and retrieval.
Our first-hand experience with Cohere's APIs, facilitated by direct engagement with their team, was invaluable, enhancing our app's text and photo analysis capabilities. Additionally, attending workshops, particularly on Cohere technologies like RAG, broadened our understanding of AI's possibilities. This event not only expanded our technical skills but also opened new horizons for future projects.
## What's next for MoodJournal
We're planning exciting updates to make diary-keeping easier and more engaging:
* AI-Generated Entries: Users can have diary entries created by AI, simplifying daily reflections.
* Photo Analysis for Entry Generation: Transform photos into diary texts with AI, offering an effortless way to document days.
* Integration with Snapchat Memories: This feature will allow users to turn snaps into diary entries, merging social moments with personal reflections.
* Monthly Collages and Emotional Insights: We'll introduce monthly summaries and visual insights into past emotions, alongside our yearly wrap-ups.
* User Accounts: Implementing login/signup functionality for a personalized and secure experience.
These enhancements aim to streamline the journaling process and deepen user engagement with MoodJournal.
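To illustrate how classifier output can be turned into the five-level happiness rating described above, here is a small sketch. It assumes the hosted classifier (such as Cohere's classify endpoint) has already returned a label and confidence for an entry; the label names and weighting are assumptions for the example, not MoodJournal's exact scheme.

```python
# Map a classifier's (label, confidence) output onto a five-level happiness scale.
LEVELS = ["Very Sad", "Sad", "Neutral", "Happy", "Very Happy"]

# Hypothetical signed weights for the labels the classifier might return.
LABEL_WEIGHTS = {"very negative": -2, "negative": -1, "neutral": 0, "positive": 1, "very positive": 2}

def happiness_level(label: str, confidence: float) -> str:
    """Blend the label's weight with the classifier's confidence and snap to one of five levels."""
    weight = LABEL_WEIGHTS.get(label.lower(), 0) * max(0.0, min(confidence, 1.0))
    index = round(weight) + 2          # shift from the [-2, 2] range to list indices [0, 4]
    return LEVELS[max(0, min(index, 4))]

print(happiness_level("very positive", 0.93))  # -> "Very Happy"
print(happiness_level("negative", 0.55))       # -> "Sad"
```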
# Emotify
## Inspiration
We all cared deeply about mental health and wanted to help those in need. 280 million people in this world have depression. However, we found out that people play a big role in treating depression - some teammates have experienced this first-hand! So, we created Emotify, which brings back memories of nostalgia and happy moments with friends.
## What it does
The application utilizes an image classification program to classify photos locally stored on one's device. The application then "brings back memories and feelings of nostalgia" by displaying photos which either match a person's mood (if positive) or invert a person's mood (if negative). Input mood is determined by Cohere's NLP API; negatively associated moods (such as "sad") are associated with happy photos to cheer people up. The program can also be used to find images, being able to distinguish between requests for individual and group photos, as well as the mood portrayed within the photo.
## How we built it
We used the DeepFace API to effectively predict facial emotions, sorting photos into different emotion categories: happy, sad, angry, afraid, surprised, and disgusted. Each of these emotions serves as a token used to pick pictures intelligently, thanks to Cohere. Their brilliant NLP helped us build a model that guesses which token we should feed our sorted picture generator to bring happiness and take users on a trip down memory lane, reminding them of the amazing moments they have been through with their loved ones, or times when they were proud of themselves. Users can take a step back and look back on the journey they have been through via a React front end that displays images highlighting their fun times. We only show two photos at a time from our generator because we want people to really enjoy these photos and remember what happened in them (especially the happy ones). Thanks to a streamlined pipeline, we turned these pictures into objects organized in file folders that feed into the front end, served from a static images folder through the Flask API. We ask users for their input, then run it through our NLP backed by Cohere to generate a meaningful token that produces quality photos. We ran the model in advance, since it is very time consuming for the DeepFace API to go through all the photos. Of course, we kept privacy in mind: thanks to Auth0, we implemented a user account system so that everyone's data is securely protected and they have their own privacy within the system.
## Challenges we ran into
One major challenge was front-end development. We were split on the frameworks to use (Flask? Django? React?), how the application was to be designed, the user experience workflow, and the changes we had to make to implement third-party integrations (such as Auth0) and make the application look visually appealing.
## Accomplishments that we're proud of
We are very satisfied with the work that we were able to do at UofTHacks, and extremely proud of the project we created. Many of the features of this project are things that we did not have knowledge of prior to the event. So, to have been able to successfully complete everything we set out to do and more, while meeting the criteria for four of the challenges, has been very encouraging to say the least.
## What we learned
The most experienced among us has been to 2 hackathons, while it was the first for the rest of us. For that reason, this learning experience has been overwhelming.
Having the opportunity to work with new technologies while creating a project we are proud of within 36 hours has forced us to fill in many of the gaps in our skillset, especially with AI/ML and full-stack programming.
## What's next for Emotify
We plan to further develop this application in our free time, so that we 'polish it' to our standards and ensure it meets our intended purpose. The developers would definitely enjoy using such an app in our daily lives to keep us going with more positive energy. Of course, winning UofTHacks is an asset.
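As a sketch of the pre-processing step described above, the snippet below runs DeepFace's emotion analysis over a folder of photos and groups file paths by dominant emotion. It is only an outline of the idea; the folder path is a placeholder, and the exact return shape of `DeepFace.analyze` varies between library versions.

```python
import os
from collections import defaultdict
from deepface import DeepFace

def sort_photos_by_emotion(folder: str) -> dict:
    """Group image paths in `folder` by the dominant facial emotion DeepFace detects."""
    buckets = defaultdict(list)
    for name in os.listdir(folder):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        path = os.path.join(folder, name)
        result = DeepFace.analyze(img_path=path, actions=["emotion"], enforce_detection=False)
        # Recent DeepFace versions return a list of per-face dicts; older ones return a dict.
        face = result[0] if isinstance(result, list) else result
        buckets[face["dominant_emotion"]].append(path)
    return dict(buckets)

# e.g. sort_photos_by_emotion("./photos")["happy"] -> list of paths to happy photos
```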
## Inspiration
As someone who has always wanted to speak in ASL (American Sign Language), I have always struggled with practicing my gestures, as I, unfortunately, don't know any ASL speakers to try and have a conversation with. Learning ASL is an amazing way to foster an inclusive community for those who are hearing impaired or deaf. DuoASL is the solution for practicing ASL for those who want to verify their correctness!
## What it does
DuoASL is a learning app where users can sign in to their respective accounts and learn/practice their ASL gestures through a series of levels. Each level has a *"Learn"* section, with a short video on how to do the gesture (e.g. 'hello', 'goodbye'), and a *"Practice"* section, where the user can use their camera to record themselves performing the gesture. This recording is sent to the backend server, where it is validated with our Action Recognition neural network to determine if you did the gesture correctly!
## How we built it
DuoASL is built up of two separate components:
**Frontend** - The frontend was built using Next.js (React framework), Tailwind and TypeScript. It handles the entire UI, as well as video collection during the *"Practice"* section, which it uploads to the backend.
**Backend** - The backend was built using Flask, Python, Jupyter Notebook and TensorFlow. It runs as a Flask server that communicates with the frontend and stores the uploaded video. Once a video has been uploaded, the server runs the Jupyter Notebook containing the Action Recognition neural network, which uses OpenCV and TensorFlow to apply the model to the video and determine the most prevalent ASL gesture. It saves this output to an array, which the Flask server reads and returns to the frontend.
## Challenges we ran into
As this was our first time using a neural network and computer vision, it took a lot of trial and error to determine which actions should be detected using OpenCV, and how the landmarks from MediaPipe Holistic (which was used to track the hands and face) should be converted into formatted data for the TensorFlow model. We, unfortunately, ran into a very specific and undocumented bug with using Python to run Jupyter Notebooks that import TensorFlow, specifically on M1 Macs. I spent a short amount of time (6 hours :) ) trying to fix it before giving up and switching the system to a different computer.
## Accomplishments that we're proud of
We are proud of how quickly we were able to get most components of the project working, especially the frontend Next.js web app and the backend Flask server. The neural network and computer vision setup was finished pretty quickly too (excluding the bugs), especially considering that for many of us this was our first time even using machine learning on a project!
## What we learned
We learned how to integrate a Next.js web app with a backend Flask server to upload video files through HTTP requests. We also learned how to use OpenCV and MediaPipe Holistic to track a person's face, hands, and pose through a camera feed. Finally, we learned how to collect videos and convert them into data to train and apply an Action Detection network built using TensorFlow.
## What's next for DuoASL
We would like to:
* Integrate video feedback that provides detailed steps on how to improve (using an LLM?)
* Add more words to our model!
* Create a practice section that lets you form sentences!
* Integrate full mobile support with a PWA!
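For readers unfamiliar with the landmark-to-features step mentioned in the challenges section, here is a minimal sketch of how a clip can be converted into a per-frame feature array with MediaPipe Holistic before it is fed to a TensorFlow classifier. It is a generic illustration of the common pattern, not DuoASL's exact notebook code.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # Flatten pose + hand landmarks into one feature vector; use zeros when a part is not detected.
    pose = np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in results.pose_landmarks.landmark]).flatten() \
        if results.pose_landmarks else np.zeros(33 * 4)
    lh = np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten() \
        if results.left_hand_landmarks else np.zeros(21 * 3)
    rh = np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten() \
        if results.right_hand_landmarks else np.zeros(21 * 3)
    return np.concatenate([pose, lh, rh])

def video_to_sequence(path):
    """Convert an uploaded clip into a (frames, features) array for the gesture classifier."""
    cap = cv2.VideoCapture(path)
    frames = []
    with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(extract_keypoints(results))
    cap.release()
    return np.array(frames)
```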
## Inspiration
Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit video game "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb!
## What it does
The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules the explosive includes are a "cut the wire" game where the wires must be cut in the correct order, a "press the button" module where different actions must be taken depending on the given text and LED colour, an 8 by 8 "invisible maze" where players must cooperate in order to navigate to the end, and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge".
## How we built it
**The Explosive**
The explosive defuser simulation is a modular game crafted using four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, keypads, mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D printed plates.
**The Code**
Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation. Using the Grove LCD RGB Backlight library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was also used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol. The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module, we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. Using the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8 by 8 matrix. This module further optimizes the maze hardware, minimizing the required wiring, and improves signal communication.
## Challenges we ran into
Participating in the biggest hardware hackathon in Canada, using the various hardware components provided, such as the keypads or OLED displays, posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt by utilizing components that better suited our needs, as well as being flexible with the hardware provided. Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Therefore, optimizing software and hardware for efficient resource usage was necessary, which was a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system; to counteract this, we had to come up with the unique solutions mentioned below.
## Accomplishments that we're proud of
During the makeathon we often faced the issue of buttons creating noise, and oftentimes the noise they created would disrupt the entire system. To counteract this issue, we had to discover creative solutions that did not use buttons at all. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module.
## What we learned
* Familiarity with the functionalities of new Arduino components like the "Micro-OLED Display," "8 by 8 LED matrix," and "Keypad" is gained through the development of individual modules.
* Efficient time management is essential for successfully completing the design. Establishing a precise timeline for the workflow aids in maintaining organization and ensuring successful development.
* Enhancing overall group performance is achieved by assigning individual tasks.
## What's next for Keep Hacking and Nobody Codes
* Ensure the elimination of any unwanted noise in the wiring between the main board and game modules.
* Expand the range of modules by developing additional games such as a "Morse Code Game," "Memory Game," and others to offer more variety for players.
* Release the game to a wider audience, allowing more people to enjoy and play it.
## Inspiration
In online documentaries, we saw visually impaired individuals whose vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue.
## What it does
When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to.
## How we built it
We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a Python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API.
## Challenges we ran into
Coming up with an idea that we were all interested in, that incorporated a good amount of hardware, and that met the themes of the makeathon was extremely difficult. We attempted to use speech diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a push button instead for simplicity, in favour of both the user and us, the developers.
## Accomplishments that we're proud of
This is our very first makeathon, and we are proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project.
## What we learned
We learned how to operate and program a DragonBoard, as well as connect various APIs together.
## What's next for Aperture
We want to implement hotkey detection instead of the push button to eliminate the need for tactile input altogether.
## Inspiration
The best learning is interactive. We have an interest in education, Virtual Reality, and fun. This was the intersection.
## What it does
This is the beginning of an interactive, 3-dimensional Virtual Reality game. Our vision for this game is rooted in education and fun, designed to draw on users' knowledge of words and vocabulary-building. Although we would still have to create logic to guide user interactions, we have created the (virtual) fundamental building blocks for such a game.
## How we built it
We built it using Unity, the HTC Vive, and C#. The main frame of the program and the building of the Virtual Reality system were programmed in C#.
## Challenges we ran into
Unity presented a large learning curve, and compounded by our lack of previous experience with Virtual Reality, it made for a large learning experience ourselves!
## Accomplishments that we're proud of
Building blocks, creating scripts for interactions, and working towards our vision for this game. Overcoming our lack of knowledge and building our skill set. Solving unanticipated problems associated with combining different software.
## What we learned
Camaraderie, the kindness of mentors, and a whole lot of newfound technical expertise in C#, Unity, and Virtual Reality.
## What's next for Virtual Reality Language Builder
Time permitting, continued building with the hope of forming a complete educational game.
## Inspiration
Our inspiration came from the challenge proposed by Varient, a data aggregation platform that connects people with similar mutations together to help doctors and users.
## What it does
Our application works by allowing the user to upload an image file. The image is then sent to Google Cloud's Document AI to extract the body of text, which is processed and matched against the datastore of gene names.
## How we built it
While we had originally planned to feed this body of text to a Vertex AI ML model for entity extraction, the trained model was not accurate due to a small dataset. Additionally, we attempted to build a BigQuery ML model but struggled to format the data in the target column as required. Due to time constraints, we explored a different path and downloaded a list of gene symbols from the HUGO Gene Nomenclature Committee's website (<https://www.genenames.org/>). Using Node.js and Multer, the image is processed and the text contents are efficiently matched against the datastore of gene names. The app returns a JSON of the matching strings in order of highest frequency. This web app is then hosted on Google Cloud through App Engine at (<https://uofthacksix-chamomile.ue.r.appspot.com/>). The UI, while very simple, is easy to use. The intent of this project was to create something that could easily be integrated into Varient's architecture. Converting this project into an API to pass the JSON to the client would be very simple.
## How it meets the theme "restoration"
The overall goal of this application, which has been partially implemented, was to match mutated gene names from user-uploaded documents and connect users with resources and with others who share the same gene mutation. This would allow them to share strategies or items that have helped them deal with living with the gene mutation, helping these individuals restore some normalcy in their lives.
## Challenges we ran into
Some of the challenges we faced:
* having a small dataset to train the Vertex AI model on
* time constraints on learning the new technologies, and the best way to use them effectively
* formatting the data in the target column when attempting to build a BigQuery ML model
## Accomplishments that we're proud of
The accomplishment that we are all proud of is the exposure we gained to all the new technologies we discovered and used this weekend. We had no idea how many AI tools Google offers. The exposure to new technologies, and taking the risk to step out of our comfort zone and attempt to learn and use them this weekend in such a short amount of time, is something we are all proud of.
## What we learned
This entire project was new to all of us. We have never used Google Cloud in this manner before, only for Firestore. We were unfamiliar with using Express, and working with machine learning was something only one of us had a small amount of experience with. We learned a lot about Google Cloud and how to access the API through Python and Node.js.
## What's next for Chamomile
The hack is not as complete as we would like, since ideally there would be a machine learning aspect to confirm the guesses made by the substring matching, and more data to improve the Vertex AI model. Improving on this would be a great step for this project. Adding a more put-together UI to match the theme of this application would also help.
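Since the write-up above describes matching OCR text against a downloaded list of HGNC gene symbols and returning matches ordered by frequency, here is a small sketch of that matching idea. The team's implementation is in Node.js with Multer; this Python version is only illustrative, and the file name is a placeholder.

```python
import re
from collections import Counter

def load_gene_symbols(path="hgnc_symbols.txt"):
    """Load one gene symbol per line (e.g. the HGNC symbol list) into an uppercase set."""
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

def match_genes(ocr_text: str, symbols: set) -> list:
    """Return [(symbol, count), ...] for every token in the OCR text that is a known gene symbol,
    ordered by descending frequency, mirroring the JSON the app returns."""
    tokens = re.findall(r"[A-Za-z0-9-]+", ocr_text.upper())
    counts = Counter(t for t in tokens if t in symbols)
    return counts.most_common()

# Example with a tiny symbol set instead of the full HGNC download.
symbols = {"BRCA1", "TP53", "CFTR"}
print(match_genes("Variant detected in BRCA1; BRCA1 c.68_69delAG. TP53 wild type.", symbols))
```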
## Inspiration
The inspiration for this project came from the UofTHacks Restoration theme and Varient's project challenge. The initial idea was to detect a given gene mutation in a given genetic testing report. This is an extremely valuable asset for the medical community, given the current global situation with the COVID-19 pandemic. As we can already see, misinformation and distrust in the medical community continue to grow, so we must try to leverage technology to solve this ever-expanding problem. One way Geneticheck can restore public trust in the medical community is by providing a way to bridge the gap between confusing medical reports and the average person's medical understanding.
## What it does
Geneticheck is smart software that allows patients or parents of patients with rare diseases to gather more information about their specific conditions and genetic mutations. The reports are scanned to find the gene mutation, and the software shows where the gene mutation is located on the original report. Geneticheck also provides the patient with more information regarding their gene mutation - specifically, the associated diseases and phenotypes (or related symptoms) they may now have. The software, given a gene mutation, searches through the Human Phenotype Ontology database and auto-generates a PDF report that lists all the necessary information a patient will need following a genetic test. The descriptions for each phenotype are given in layman's terms, which allows the patient to understand the symptoms associated with the gene mutation, resulting in the patients and their loved ones being more observant of their status.
## How we built it
Geneticheck was built using Python and Google Cloud's Vision API. Other libraries, such as PyTesseract, were also explored but yielded lower gene detection results.
## Challenges we ran into
One major challenge was initially designing the project in Python. Development in Python was initially chosen for its rapid R&D capabilities and the potential need to do image processing in OpenCV. As the project developed and the Google Cloud Vision API was deemed acceptable for use, moving to a web-based Python framework was deemed too time-consuming. In the interest of time, the Python-based command-line tool had to be selected as the current basis of interaction.
## Accomplishments that we're proud of
One proud accomplishment of this project is the success rate of the overall algorithm, which was able to successfully detect all 47 gene mutations in their related images. The other great accomplishment was the quick development of PDF generation software to expand the capabilities and scope of the project, providing the end user/patient with more information about their condition and ultimately restoring their faith in the medical field through better understanding and knowledge.
## What we learned
Topics learned include OCR in Python, optimizing images for OCR with PyTesseract, PDF generation in Python, setting up Flask servers, and a lot about genetic data!
## What's next for Geneticheck
The next steps include porting the working algorithms over to a web-based framework, such as React. Running the algorithms in JavaScript would allow web-based interaction, which is the best interactive format for the everyday person. Other steps are to gather more genetic test results and to provide treatment options in the reports as well.
## Inspiration More than 2.7 million pets will be killed over the next year because shelters cannot accommodate them. Many of them will be abandoned by owners who are unprepared for the real responsibilities of raising a pet, and the vast majority of them will never find a permanent home. The only sustainable solution to animal homelessness is to maximize adoptions of shelter animals by families who are equipped to care for them, so we created Homeward as a one-stop foster tool to streamline this process. ## What it does Homeward allows shelters and pet owners to offer animals for adoption online. A simple and intuitive UI allows adopters to describe the pet they want and uses Google Cloud ML's Natural Language API to match their queries with available pets. Our feed offers quick browsing of available animals, multiple ranking options, and notifications of which pets are from shelters and which will be euthanized soon. ## How we built it We used the Node.js framework with Express for routing and MongoDB as our database. Our front-end was built with custom CSS/Jade mixed with features from several CSS frameworks. Entries in our database were sourced from the RescueGroups API, and salient keywords for query matching were extracted using Google's Natural Language API. Our application is hosted with Google App Engine. ## Challenges we ran into Incorporating Google's Natural Language API was challenging at first and we had to design a responsive front-end that would update the feed as the user updated their query. Some pets' descriptions had extraneous HTML and links that added noise to our extracted tags. We also found it tedious to clean and migrate the data to MongoDB. ## Accomplishments that we're proud of We successfully leveraged Google Cloud ML to detect salient attributes in users' queries and rank animals in our feed accordingly. We also managed to utilize real animal data from the RescueGroups API. Our front-end also turned out to be cleaner and more user-friendly than we anticipated. ## What we learned We learned first-hand about the challenges of applying natural language processing to potentially noisy user queries in real life applications. We also learned more about good javascript coding practices and robust back-end communication between our application and our database. But most importantly, we learned about the alarming state of animal homelessness and its origins. ## What's next for Homeward We can enhance posted pet management by creating a simple account system for shelters. We would also like to create a scheduling mechanism that lets users "book" animals for fostering, thereby maximizing the probability of adoption. In order to scale Homeward, we need to clean and integrate more shelters' databases and adjust entries to match our schema.
## Inspiration
Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members is a former tennis enthusiast who has always strived to refine their skills. They soon realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills.
## What it does and how we built it
TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural-language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data using React.js, offering a comprehensive overview and further insights into the player's performance.
## Challenges we ran into
Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during the development process. The depth of the ball became a challenge as we were limited to one camera angle, but we were able to tackle it by using machine learning techniques.
## Accomplishments that we're proud of
We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team.
## What we learned
Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also enhanced our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users.
## What's next for TRACY
Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to meet the diverse needs of the tennis community.
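To give a flavour of the 2D ball-tracking problem discussed above, here is a very simple OpenCV sketch that finds a brightly coloured ball in each frame by colour thresholding and records its pixel trajectory. TRACY's actual pipeline uses pre-trained neural networks; the HSV colour range and video path below are assumptions for illustration only.

```python
import cv2
import numpy as np

def track_ball(video_path="rally.mp4"):
    """Return a list of (frame_index, x, y) pixel positions for a yellow-ish tennis ball."""
    lower, upper = np.array([25, 80, 80]), np.array([40, 255, 255])  # assumed HSV range for the ball
    cap = cv2.VideoCapture(video_path)
    trajectory, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
            if radius > 3:  # ignore tiny noise blobs
                trajectory.append((frame_idx, int(x), int(y)))
        frame_idx += 1
    cap.release()
    return trajectory
```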
## Inspiration
Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call. This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies.
## What it does
DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers. Emergency calls are aggregated onto a single platform and filtered based on severity. Critical details such as location, time of emergency, and the caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene. Our **human-in-the-loop system** ensures that control by human operators is always at the forefront. Dispatchers have the final say on all recommended actions, ensuring that no AI system stands alone.
## How we built it
We developed a comprehensive systems architecture design to visualize the communication flow across the different pieces of software.
![Architecture](https://i.imgur.com/FnXl7c2.png)
We developed DispatchAI using a comprehensive tech stack:
### Frontend:
* Next.js with React for a responsive and dynamic user interface
* TailwindCSS and Shadcn for efficient, customizable styling
* Framer Motion for smooth animations
* Leaflet for interactive maps
### Backend:
* Python for server-side logic
* Twilio for handling calls
* Hume and Hume's EVI for emotion detection and understanding
* Retell for implementing a voice agent
* Google Maps geocoding API and Street View for location services
* Custom-finetuned Mistral model using our proprietary 911 call dataset
* Intel Dev Cloud for model fine-tuning and improved inference
## Challenges we ran into
* Curating a diverse 911 call dataset
* Integrating multiple APIs and services seamlessly
* Fine-tuning the Mistral model to understand and respond appropriately to emergency situations
* Balancing empathy and efficiency in AI responses
## Accomplishments that we're proud of
* Successfully fine-tuned the Mistral model for emergency response scenarios
* Developed a custom 911 call dataset for training
* Integrated emotion detection to provide more empathetic responses
## Intel Dev Cloud Hackathon Submission
### Use of Intel Hardware
We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration:
* Leveraged IDC Jupyter Notebooks throughout the development process
* Conducted a live demonstration for the judges directly on the Intel Developer Cloud platform
### Intel AI Tools/Libraries
We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project:
* Utilized Intel® Extension for PyTorch (IPEX) for model optimization
* Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds
* This represents an over 90% decrease in processing time, showcasing the power of Intel's AI tools
### Innovation
Our project breaks new ground in emergency response technology:
* Developed the first empathetic, AI-powered dispatcher agent
* Designed to support first responders during resource-constrained situations
* Introduces a novel approach to handling emergency calls with AI assistance
### Technical Complexity
* Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud
* Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI
* Developed real-time call processing capabilities
* Built an interactive operator dashboard for data summarization and oversight
### Design and User Experience
Our design focuses on operational efficiency and user-friendliness:
* Crafted a clean, intuitive UI tailored for experienced operators
* Prioritized comprehensive data visibility for quick decision-making
* Enabled immediate response capabilities for critical situations
* Interactive Operator Map
### Impact
DispatchAI addresses a critical need in emergency services:
* Targets the 82% of understaffed call centers
* Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times)
* Potential to save lives by ensuring every emergency call is answered promptly
### Bonus Points
* Open-sourced our fine-tuned LLM on HuggingFace with a complete model card (<https://huggingface.co/spikecodes/ai-911-operator>)
  + And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts>
* Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>)
* Promoted the project on Twitter (X) using #HackwithIntel (<https://x.com/spikecodes/status/1804826856354725941>)
## What we learned
* How to integrate multiple technologies to create a cohesive, functional system
* The potential of AI to augment and improve critical public services
## What's next for Dispatch AI
* Expand the training dataset with more diverse emergency scenarios
* Collaborate with local emergency services for real-world testing and feedback
* Explore future integration
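The IPEX optimization mentioned above generally comes down to a one-line wrap of the model before inference. Below is a minimal, hedged sketch of that pattern with Intel Extension for PyTorch; the model name is a placeholder (the team used their own fine-tune), and the exact speedup will differ from the numbers reported above.

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder; not the team's fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# ipex.optimize applies operator fusion and dtype-aware optimizations for Intel CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = "Caller reports a kitchen fire at 5th and Main. Summarize the key details for the dispatcher."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```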
## Inspiration Our inspiration for creating this AI project stemmed from the desire to leverage current technology to enhance the lives of our community members, particularly the elderly. Recognizing the difficulties faced by aging individuals, including our own grandparents, to embrace technological advancements, and their concerns about health conditions, we sought a way to integrate technology and healthcare services seamlessly into their lives. Thus, SageWell was conceived—an AI companion designed to provide personalized support to seniors by navigating medical information alongside their healthcare providers. At SageWell, we believe in accessibility of AI for all age groups. ## What it does SageWell enables users to seek medical information through natural language interactions, allowing elderly individuals to pose questions verbally and receive spoken responses, mimicking human conversation. ## How we built it SageWell leverages the capabilities of two Monster APIs: OpenAI-Whisper Large-v2 for speech-to-text conversion and Meta's llama-2-7b-chat-hf for refining our reinforcement learning model. Our model was trained on MedQuAD, a comprehensive dataset containing over 47,000 medical question-answer pairs, Drugs, Side Effects and Medical Condition dataset, and Drugs Related to Medical Conditions dataset. The frontend of the web app was built using React, a JavaScript library, and the service we integrated into our app to store all the audio files was Firebase. ## Challenges we ran into In the process of developing SageWell, we encountered several challenges. Since there were many integrations in our application, from speech-to-text transcription and MonsterAPIs for fine-tuned LLMs, to storage service providers, we faced difficulties trying to link all the individual pieces together. We learned several new tools and technologies and also spent time fine-tuning models to provide medical information. Additionally, we encountered setbacks when we were finishing up our project as we were facing CORS errors when making API calls from the browser - we were able to work around this by adding a proxy, which served as a bridge between our client and server. ## Accomplishments that we're proud of We empowered our primary user demographic, the elderly, to engage with SageWell through voice interactions instead of having to type text. We addressed the challenges that many seniors encounter when typing on digital devices, thereby increasing the accessibility of the process of seeking medical information and ensuring inclusivity in AI for all age groups, especially the elderly. Our reinforcement learning model has demonstrated effectiveness, evidenced by the decrease in training loss after each iteration and to 0.756 at the end. This indicates that the model fits well on the training data. We merged three datasets: the Medical Question Answering data set, the Drugs, Side Effects and Medical Condition data set, and the Drugs Related to Medical Conditions data set. By training our model for SageWell on these datasets obtained from NIH websites and drugs.com, we allowed SageWell to provide medical information to users based on reliable, comprehensive databases. ## What we learned Through this project, we gained valuable insights into the startup ecosystem and developer space. We expanded our skill set by refining our model, working with new APIs and storage providers, and creating a solution that addresses the specific challenges faced by our target audience. 
## What's next for SageWell The journey of SageWell is just beginning. Moving forward, we aim to expand its capabilities to assist in additional areas crucial for the well-being of the elderly, such as medication reminders and guidance on accessing support for domestic chores. Furthermore, we envision integrating features that facilitate connections between seniors and younger generations, including their grandchildren and other youth in their communities. By fostering these intergenerational connections, SageWell will expand its targeting market size by not only keeping the elderly engaged with their loved ones but also ensuring they remain connected to the evolving world around them. Through SageWell, we look forward to continuing to push for accessibility of AI for all age groups.
winning
# Inspiration As a team we decided to develop a service that we thought would not only be extremely useful to us, but to everyone around the world that struggles with storing physical receipts. We were inspired to build an eco-friendly as well as innovative application that targets the pain points behind filing receipts, losing receipts, missing return policy deadlines, not being able to find the proper receipt with a particular item as well as tracking potentially bad spending habits. # What it does To solve these problems, we are proud to introduce Receipto, a universal receipt tracker whose mission is to empower users with their personal finances, to track spending habits more easily as well as to replace physical receipts to reduce global paper usage. With Receipto you can upload or take a picture of a receipt, and it will automatically recognize all of the information found on the receipt. Once validated, it saves the picture and summarizes the data in a useful manner. In addition to storing receipts in an organized manner, you can get valuable information on your spending habits, and you can also search through receipt expenses by category, item, and time frame. The most interesting feature is that once a receipt is loaded and validated, it will display a picture of all the items purchased thanks to the use of item codes and an image recognition API. Receipto will also notify you when a receipt may be approaching its potential return policy deadline, which is based on a user input during receipt uploads. # How we built it We have chosen to build Receipto as a responsive web application, allowing us to develop a better user experience. We first drew up storyboards by hand to visually predict and explore the user experience, then we developed the app using React, ViteJS, ChakraUI and Recharts. For the backend, we decided to use NodeJS deployed on Google Cloud Compute Engine. In order to read and retrieve information from the receipt, we used the Google Cloud Vision API along with our own parsing algorithm. Overall, we mostly focused on developing the main ideas, which consist of scanning and storing receipts as well as viewing the images of the items on the receipts. # Challenges we ran into Our main challenge was implementing the image recognition API, as it involved a lot of trial and error. Almost all receipts are different depending on the store and province. For example, in Quebec, there are two different taxes displayed on the receipt, and that affected how our app was able to recognize the data. To fix that, we made sure that if two types of taxes are displayed, our app would recognize that it comes from Quebec, and it would scan it as such. Additionally, almost all stores have different receipts, so we have adapted the app to recognize most major stores, but we also allow a user to manually add the data in case a receipt is very different. Either way, a user will know when it's necessary to change or to add data with visual alerts when uploading receipts. Another challenge was displaying the images of the items on the receipts. Not all receipts had item codes, and stores that did have these codes ended up having different APIs. We overcame this challenge by finding an API called stocktrack.ca that combines the most popular store APIs in one place. # Accomplishments that we're proud of We are all very proud to have turned this idea into a working prototype as we agreed to pursue this idea knowing the difficulty behind it. 
We have many great ideas to implement in the future and have agreed to continue this project beyond McHacks in hopes of one day completing it. We are grateful to have had the opportunity to work together with such talented, patient, and organized team members. # What we learned With all the different skills each team member brought to the table, we were able to pick up new skills from each other. Some of us got introduced to new coding languages, others learned new UI design skills as well as simple organization and planning skills. Overall, McHacks has definitely shown us the value of teamwork; we all kept each other motivated and helped each other overcome each obstacle as a team. # What's next for Receipto? Now that we have a working prototype ready, we plan to further test our application with a selected sample of users to improve the user experience. Our plan is to polish up the main functionality of the application, and to expand the idea by adding exciting new features that we just didn't have time to add. Although we may love the idea, we need to make sure to conduct more market research to see if it could be a viable service that changes the way people perceive receipts, and whether people would consider adopting Receipto.
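To make the Cloud Vision step in Receipto's "How we built it" more concrete, here is a hedged Python sketch of OCR plus deliberately naive line parsing; the real parsing algorithm (store and province handling, tax detection) is the team's own and is not shown here.

```python
# Sketch: OCR a receipt photo with Google Cloud Vision, then grab lines that end in a price.
import re
from google.cloud import vision

def read_receipt(path: str):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    full_text = response.full_text_annotation.text

    # Very naive parsing: keep lines that look like "ITEM NAME  12.34".
    items = []
    for line in full_text.splitlines():
        match = re.search(r"(.+?)\s+(\d+\.\d{2})$", line.strip())
        if match:
            items.append((match.group(1), float(match.group(2))))
    return items

if __name__ == "__main__":
    for name, price in read_receipt("receipt.jpg"):  # placeholder file name
        print(f"{name}: ${price:.2f}")
```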
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration Games are the only force in the known universe that can get people to take actions against their self-interest, in a predictable way, without using force. I have been attracted to Game Development for a while and I was planning to make something of my own, and I love first-person shooter games, so I thought: let's build something that I can enjoy myself. I took inspiration from CS:GO and other FPS games to get a brief idea of the mechanics and environment. ## What it does Shoot'em Up is a first-person action shooter in which everything depends on your skill. Carefully consider your tactics for each battle, explore different locations and modes, develop your shooting skills and demonstrate your superiority! Step into the thrilling solo play campaign as you shoot your way through one dire situation after another to save the world as you launch an attack against a lunatic’s apocalyptic plans. ## How I built it I used Unity Game Engine to build this game from scratch. I put all my creativity into designing and creating the gameplay environment. Everything is built from the ground up. Unity enabled me to create 3D models and figurines. I used Unity's C# scripting API to form gameplay physics and actions. ## Challenges I ran into I was not familiar with C# at a deeper level, so I had to keep looking things up on StackOverflow, and the environment design was the hardest part of the job. ## Accomplishments that I am proud of Building a game of my own is the biggest and most prestigious accomplishment for me. ## What I learned I learned that developing a game is not a piece of cake and it takes immense commitment and hard work, and I also got myself familiar with the functionalities of Unity Game Engine. ## What's next for SHOOT 'EM UP * Currently, the game only has 2 levels, but I would love to add more levels to make the game even more enjoyable. * The game can be played on PC for now but I would love to port the game for cross-platform use. * There are some nicks and cuts here and there and I would love to make the gameplay smoother. ## Note * I have uploaded various code snippets to my GitHub, but they won't make much sense without the asset files, so I have uploaded the complete project file along with all the assets to my Google Drive; the link is attached to the submission. * There was a problem with my mic 😥 so I was not able to do a voiceover; pardon me for that.
winning
## What it does Khaledifier replaces all quotes and images around the internet with pictures and quotes of DJ Khaled! ## How we built it A Chrome web app written in JS interacts with live web pages to make changes. The app sends a quote to a server, which tokenizes words into types using NLP. This server then makes a call to an Azure Machine Learning API that has been trained on DJ Khaled quotes to return the closest matching one. ## Challenges we ran into Keeping the server running with older Python packages, and for free, proved to be a bit of a challenge.
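The actual matching happens in a trained Azure Machine Learning model, which isn't shown here; purely as an illustrative stand-in for "return the closest matching quote", a TF-IDF cosine-similarity lookup over a tiny placeholder quote list might look like this in Python:

```python
# Illustrative stand-in: pick the DJ Khaled quote most similar to the input text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KHALED_QUOTES = [  # tiny placeholder corpus
    "Another one.",
    "They don't want you to win.",
    "Major key to success.",
    "Bless up.",
]

def closest_khaled_quote(text: str) -> str:
    vectorizer = TfidfVectorizer().fit(KHALED_QUOTES + [text])
    quote_vecs = vectorizer.transform(KHALED_QUOTES)
    query_vec = vectorizer.transform([text])
    scores = cosine_similarity(query_vec, quote_vecs)[0]
    return KHALED_QUOTES[scores.argmax()]

print(closest_khaled_quote("Hard work is the key to doing well"))
```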
## Inspiration Inspired by a team member's desire to study through his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist on any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app. ## What it does Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (only needs to be a few seconds long) and a PDF of a textbook and uses existing Deepfake technology to synthesize the dictation from the textbook using the users' favorite voice. The Deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time-domain via the Wave-RNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents! ## How we built it We combined a number of different APIs and technologies to build this app. For leveraging scalable machine learning and intelligence compute, we heavily relied on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing Deepfake code written for Python and Tensorflow (see Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating. ## Challenges we ran into Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front-end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing Deepfake/Voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning. ## Accomplishments that we're proud of We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into usable app within hours. ## What we learned We learned today that sometimes the seemingly simplest things (dealing with python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful. 
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products. ## What's next for EduVoicer EduVoicer still has a long way to go before it can gain users. Our first step is to implement functionality, possibly with some image segmentation techniques, to decide what parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan to both increase efficiency (time-wise) and scale the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
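For context on how the pieces described in EduVoicer's writeup fit together, here is a minimal, hedged Flask sketch of the upload-to-audio flow. The two stage functions are hypothetical placeholders for the Google Cloud OCR call and the encoder/synthesizer/vocoder pipeline from the voice-cloning fork; only the upload and file-handling plumbing is concrete.

```python
# High-level sketch of the dictation flow: PDF + voice sample in, WAV file out.
from flask import Flask, request, send_file

app = Flask(__name__)

def pdf_to_text(pdf_path: str) -> str:
    """Placeholder for the Google Cloud PDF-to-text (OCR) step."""
    raise NotImplementedError

def clone_voice_to_wav(text: str, voice_sample_path: str, out_path: str) -> None:
    """Placeholder for encoder embedding -> seq2seq mel spectrogram -> Wave-RNN vocoder."""
    raise NotImplementedError

@app.route("/synthesize", methods=["POST"])
def synthesize():
    # Expect two uploads: the textbook PDF and a few seconds of template voice audio.
    request.files["textbook"].save("upload.pdf")
    request.files["voice_sample"].save("voice.wav")

    text = pdf_to_text("upload.pdf")
    clone_voice_to_wav(text, "voice.wav", "dictation.wav")
    return send_file("dictation.wav", mimetype="audio/wav", as_attachment=True)

if __name__ == "__main__":
    app.run()
```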
## Inspiration Our team was united in our love for animals, and our anger about the thousands of shelter killings that happen every day due to overcrowding. In order to raise awareness and educate others about the importance of adopting rather than shopping for their next pet, we framed this online web application from a dog's perspective of the process of trying to get adopted. ## What it does In *Overpupulation,* users select a dog to control and try to convince visitors to adopt them. To illustrate the realistic injustices some breeds face in shelters, different dogs in the game have different chances of getting adopted. After each rejection by a potential adopter, we expose some of the faulty reasoning behind their choices to try to debunk false misconceptions. At the conclusion of the game, we present ways for individuals to get involved and support their local shelters. ## How we built it This web application is built in JavaScript/jQuery, HTML, and CSS. ## Accomplishments that we're proud of For most of us, this was our first experience working in a team coding environment. We all walked away with a better understanding of git, the front-end languages we utilized, and design. We have purchased the domain name overpupulation.com, but are still trying to work through redirecting issues. :)
winning
## Inspiration As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad. ## What It Does After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse the items on the receipt and add them to the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make. ## How We Built It On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data. On the front-end, our app is built using React Native, uses axios to query the recipe API, and stores data in Firebase. ## Challenges We Ran Into Some of the challenges we ran into included deploying our Flask server to Google App Engine and styling in React. We found that it was not possible to write into Google App Engine storage; instead, we had to write into Firestore and have that interact with Google App Engine. On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
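To illustrate the recommendation idea at the end of "What It Does" (recipes suggested from what's already in your fridge), here is a small, self-contained Python sketch that ranks recipes by ingredient overlap; the recipe list is placeholder data standing in for the recipe API the app actually queries.

```python
# Sketch: score recipes by what fraction of their ingredients are already in the fridge.
from typing import Dict, List

RECIPES: List[Dict] = [  # placeholder data, not from the real recipe API
    {"name": "Fried rice", "ingredients": {"rice", "egg", "soy sauce", "carrot"}},
    {"name": "Omelette", "ingredients": {"egg", "cheese", "milk"}},
    {"name": "Pasta", "ingredients": {"pasta", "tomato", "garlic"}},
]

def recommend(fridge_items: set, recipes: List[Dict], top_n: int = 3) -> List[str]:
    def coverage(recipe: Dict) -> float:
        # Fraction of the recipe's ingredients that the fridge already contains.
        return len(recipe["ingredients"] & fridge_items) / len(recipe["ingredients"])
    ranked = sorted(recipes, key=coverage, reverse=True)
    return [r["name"] for r in ranked[:top_n] if coverage(r) > 0]

fridge = {"egg", "rice", "carrot", "cheese"}
print(recommend(fridge, RECIPES))  # -> ['Fried rice', 'Omelette']
```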
## Inspiration Being frugal students, we all wanted to create an app that would tell us what kind of food we could find around us based on a budget that we set. And so that’s exactly what we made! ## What it does You give us a price that you want to spend and the radius that you are willing to walk or drive to a restaurant; then voila! We give you suggestions based on what you can get for that price at different restaurants by providing all the menu items with price and calculated tax and tips! We keep the user history (the food items they chose) and by doing so we open the door to crowdsourcing massive amounts of user data as well as the opportunity for machine learning so that we can give better suggestions for the foods that the user likes the most! But we are not gonna stop here! Our goal is to implement the following in the future for this app: * We can connect the app to delivery systems to get the food for you! * Inform you about the food deals, coupons, and discounts near you ## How we built it ### Back-end We have both an iOS and Android app that authenticates users via Facebook OAuth and stores user eating history in the Firebase database. We also made a REST server that conducts API calls (using Docker, Python and nginx) to amalgamate data from our targeted APIs and refine them for front-end use. ### iOS Authentication using Facebook's OAuth with Firebase. Create UI using native iOS UI elements. Send API calls to Soheil’s backend server using JSON via HTTP. Using the Google Maps SDK to display geolocation information. Using Firebase to store user data in the cloud with the capability of updating multiple devices in real time. ### Android The Android application is implemented with a great deal of material design while utilizing Firebase for OAuth and database purposes. The application utilizes HTTP POST/GET requests to retrieve data from our in-house backend server and uses the Google Maps API and SDK to display nearby restaurant information. The Android application also prompts the user for a rating of the visited stores based on how full they are; our goal was to compile a system that would incentivize food places to produce the highest “food per dollar” rating possible. ## Challenges we ran into ### Back-end * Finding APIs to get menu items is really hard, at least for Canada. * An unknown API kept continuously pinging our server and used up a lot of our bandwidth ### iOS * First time using OAuth and Firebase * Creating Tutorial page ### Android * Implementing modern material design with deprecated/legacy Maps APIs and other various legacy code was a challenge * Designing Firebase schema and generating structure for our API calls was very important ## Accomplishments that we're proud of **A solid app for both Android and iOS that WORKS!** ### Back-end * Dedicated server (VPS) on DigitalOcean! ### iOS * Cool looking iOS animations and real time data update * Nicely working location features * Getting latest data from server ## What we learned ### Back-end * How to use Docker * How to set up a VPS * How to use nginx ### iOS * How to use Firebase * How OAuth works ### Android * How to utilize modern Android layouts such as the Coordinator, Appbar, and Collapsible Toolbar Layout * Learned how to optimize applications when communicating with several different servers at once ## What's next for How Much * If we get a chance, we all want to keep working on it and hopefully publish the app. * We are thinking of making it open source so everyone can contribute to the app.
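As a concrete illustration of the core budget check described in "What it does" (menu items that fit a price after tax and tip), here is a small Python sketch; the 13% tax and 15% tip are assumed defaults for illustration, not values taken from the project.

```python
# Sketch: keep only the menu items whose all-in cost (tax + tip) fits the budget.
def affordable_items(menu, budget, tax_rate=0.13, tip_rate=0.15):
    """menu: list of (item_name, base_price). Tip is computed on the pre-tax price."""
    results = []
    for name, price in menu:
        total = round(price * (1 + tax_rate + tip_rate), 2)
        if total <= budget:
            results.append({"item": name, "base": price, "all_in": total})
    return sorted(results, key=lambda r: r["all_in"])

menu = [("Burger", 9.99), ("Poutine", 7.49), ("Steak", 24.00)]
for r in affordable_items(menu, budget=12.00):
    print(f"{r['item']}: ${r['all_in']:.2f} all-in")  # only the poutine fits a $12 budget
```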
## Inspiration As a group of avid travelers and adventurers, we all share the same problem: googling "places to visit" for hours, deliberating over reviews, etc. We really wished someone did all that for us. This inspired us to create WizeWay.AI, which creates a personalized itinerary using AI algorithms. ## What it does WizeWay.AI takes in your inputs, such as the info of the travelers, the destination, individual preferences such as pet accommodations and dietary restrictions, and creates an itinerary based on your departure/return date. ## How we built it We utilized available generative AI APIs such as GPT 4.0.11, interfacing with the Taipy framework, to create a web application in Python. ## Challenges we ran into Using Taipy was fairly challenging as the only help available on the internet is the Taipy documentation. We further ran into challenges with formatting using Taipy. ## Accomplishments that we're proud of We created a web application that uses AI to make people's lives easier, which is what coding and artificial intelligence are all about. ## What we learned We learned how to use the Taipy framework and the OpenAI API. We also learned proper practice for code documentation and GitHub collaboration. We furthered our full stack development skills as we incorporated a bit of CSS into the front end. ## What's next for WizeWay.AI We are hoping to continue the project by adding Google Maps integration, location tracking, shortest-path algorithms, public transit integration, account validation, offline support and much more.
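A minimal sketch of what the itinerary-generation call could look like with the OpenAI Python SDK (v1+); the model name, prompt wording, and function are illustrative assumptions rather than the team's actual code.

```python
# Illustrative only: build a day-by-day itinerary from user inputs with one LLM call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_itinerary(destination, depart, ret, preferences):
    prompt = (
        f"Create a day-by-day travel itinerary for {destination} "
        f"from {depart} to {ret}. Traveler preferences: {preferences}. "
        "Include food suggestions that respect any dietary restrictions."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(build_itinerary("Lisbon", "2024-05-01", "2024-05-05", "vegetarian, pet-friendly"))
```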
partial
## Inspiration Osu! players often use drawing tablets instead of a normal mouse and keyboard setup because a tablet gives more precision than a mouse can provide. These tablets also provide a better way to input to devices. Overuse of conventional keyboards and mice can lead to carpal tunnel syndrome, and they can be difficult to use for those that have specific disabilities. Tablet pens can provide an alternate form of HID, and have better ergonomics, reducing the risk of carpal tunnel. Digital artists usually draw on these digital input tablets, as mice do not provide the control over the input needed by artists. However, tablets can often come at a high cost of entry, and are not easy to bring around. ## What it does Limestone is an alternate form of tablet input, allowing you to input using a normal pen and using computer vision for the rest. That way, you can use any flat surface as your tablet. ## How we built it Limestone is built on top of the neural network library MediaPipe from Google. MediaPipe Hands provides a pretrained network that returns the 3D position of 21 joints in any hands detected in a photo. This provides a lot of useful data, which we could probably use to find the direction each finger points in and derive the endpoint of the pen in the photo. To save myself some work, I created a second neural network that takes in the joint data from MediaPipe and derives the 2D endpoint of the pen. This second network is extremely simple, since all the complex image processing has already been done. I used 2 1D convolutional layers and 4 hidden dense layers for this second network. I was only able to create about 40 entries in a dataset after some experimentation with the formatting, but I found a way to generate fairly accurate datasets with some work. ## Dataset Creation I created a small Python script that marks small dots on your screen for accurate spacing. I could then place my pen on the dot, take a photo, and enter the coordinate of the point as the label. ## Challenges we ran into It took a while to tune the hyperparameters of the network. Fortunately, due to the small size it was not too hard to get it into a configuration that could train and improve. However, it doesn't perform as well as I would like it to, but due to time constraints I couldn't experiment further. The mean absolute error loss of the final model trained for 1000 epochs was around 0.0015. Unfortunately, the model was very overtrained. The dataset was nowhere near large enough. Adding noise probably could have helped to reduce overtraining, but I doubt by much. There just wasn't anywhere near enough data, but the framework is there. ## What's Next If this project is to be continued, the model architecture would have to be tuned much more, and the dataset expanded to at least a few hundred entries. Adding noise would also definitely help with the variance of the dataset. There is still a lot of work to be done on Limestone, but the current code at least provides some structure and a proof of concept.
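Based only on the description above (21 MediaPipe hand joints in, 2D pen endpoint out, 2 Conv1D layers plus 4 hidden dense layers, MAE loss), a plausible Keras sketch of the second network might look like this; the filter and unit counts are assumptions, since the writeup doesn't specify them.

```python
# Sketch of the small regression network: MediaPipe joint data -> 2D pen endpoint.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(21, 3)),               # 21 joints, (x, y, z) each
    tf.keras.layers.Conv1D(32, 3, activation="relu"),   # 2 x 1D convolutional layers
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),      # 4 hidden dense layers
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),                            # (x, y) of the pen tip
])
model.compile(optimizer="adam", loss="mae")              # mean absolute error, as in the writeup
model.summary()
```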
Fujifusion is our group's submission for Hack MIT 2018. It is a data-driven application for predicting corporate credit ratings. ## Problem Scholars and regulators generally agree that credit rating agency failures were at the center of the 2007-08 global financial crisis. ## Solution * Train a machine learning model to automate the prediction of corporate credit ratings. * Compare vendor ratings with predicted ratings to identify discrepancies. * Present this information in a cross-platform application for RBC’s traders and clients. ## Data Data obtained from RBC Capital Markets consists of 20 features recorded for 27 companies at multiple points in time for a total of 524 samples. Available at <https://github.com/em3057/RBC_CM> ## Analysis We took two approaches to analyzing the data: a supervised approach to predict corporate credit ratings and an unsupervised approach to try to cluster companies into scoring groups. ## Product We present a cross-platform application built using Ionic that works with Android, iOS, and PCs. Our platform allows users to view their investments, our predicted credit rating for each company, a vendor rating for each company, and visual cues to outline discrepancies. They can buy and sell stock through our app, while also exploring other companies they would potentially be interested in investing in.
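To make the two approaches in the Analysis section concrete, here is a hedged scikit-learn sketch on data shaped like the RBC set (524 samples, 20 features): a supervised random-forest classifier for ratings and k-means clustering into scoring groups. The synthetic data exists only so the snippet runs end-to-end; it is not the RBC data, and the model choices are assumptions.

```python
# Sketch: supervised rating prediction + unsupervised scoring-group clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(524, 20))        # 524 samples x 20 features (synthetic placeholder)
y = rng.integers(0, 5, size=524)      # placeholder rating buckets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```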
## Inspiration Self-driving cars seem to be the focus of the cutting-edge industry. Although there have been many self-driving cars (such as Tesla), none of them have been ported to the cloud to allow for modularity and availability to everyone. Perhaps self-driving can be served just as how IaaS, PaaS, and SaaS are served. This was also very much inspired by our long living hero, @elonmusk, who was, unfortunately, unable to attend this year's McHacks. ## What it does SDaaS is a cloud provider for serving steering instructions to self-driving cars from camera images. ## How We Built It The integral component of our project is an N-series GPU-enabled VM hosted on Microsoft Azure. This allowed us to efficiently train a convolutional neural network (identical to Nvidia's End to End Learning architecture) to control our project. To show the extensibility of our API, we used an open source car simulator called The Open Racing Car Simulator (TORCS) that interfaced with the backend that we had created before. The backend is a Python socket server that processes calls and replies to image frames with steering angles. ## Challenges we ran into Being inexperienced with C++, we spent many of our hours looking through countless pages of documentation and Stack Overflow forums to fix simple bugs. Setting up sockets along with a connection from the C++ code proved to be very difficult. ## Accomplishments that we're proud of We managed to set up almost all of the features that we had proposed in the beginning. ## What's next for SDaaS - Self Driving as a Service Since we only had a virtual simulator for testing purposes, perhaps next time we may use a real car.
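Since the writeup says the CNN is identical to Nvidia's End to End Learning network, here is a Keras sketch of that published architecture (five convolutional layers followed by 100/50/10/1 dense layers, on a 66x200x3 input as in the paper); it is a reconstruction from the paper, not the team's training code.

```python
# Sketch of the Nvidia "End to End Learning for Self-Driving Cars" (PilotNet) architecture.
import tensorflow as tf

def build_pilotnet():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(66, 200, 3)),
        tf.keras.layers.Lambda(lambda x: x / 127.5 - 1.0),    # normalize pixels to [-1, 1]
        tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(50, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),                              # predicted steering angle
    ])

model = build_pilotnet()
model.compile(optimizer="adam", loss="mse")
model.summary()
```

A socket server like the one described would then run each incoming frame through `model.predict` and reply with the resulting steering angle.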
winning
## Inspiration We’ve all had the experience of needing assistance with a task but not having friends available to help. As a last resort, one turns to large, class-wide GroupMe’s to see if anyone can help. But most students have those muted because they’re filled with a lot of spam. As a result, the most desperate calls for help often go unanswered. We realized that we needed to streamline the process for getting help. So, we decided to build an app to do just that. For every Yalie who needs help, there are a hundred who are willing to offer it—but they just usually aren’t connected. So, we decided to build YHelpUs, with a mission to help every Yalie get better help. ## What it does YHelpUs provides a space for students that need something to create postings rather than those that have something to sell creating them. This reverses the roles of a traditional marketplace and allows for more personalized assistance. University students can sign up with their school email accounts and then be able to view other students’ posts for help as well as create their own posts. Users can access a chat for each posting discussing details about the author’s needs. In the future, more features relating to task assignment will be implemented. ## How we built it Hoping to improve our skills as developers, we decided to carry out the app’s development with the MERNN stack; although we had some familiarity with standard MERN, developing for mobile with React Native was a unique challenge for us all. Throughout the entire development phase, we had to balance what we wanted to provide the user and how these relationships could present themselves in our code. In the end, we managed to deliver on all the basic functionalities required to answer our initial problem. ## Challenges we ran into The most notable challenge we faced was the migration towards React Native. Although plenty of documentation exists for the framework, many of the errors we faced were specific enough to force development to stop for a prolonged period of time. From handling multi-layered navigation to user authentication across all our views, we encountered problems we couldn’t have expected when we began the project, but every solution we created simply made us more prepared for the next. ## Accomplishments that we're proud of Enhancing our product with automated content moderation using the Google Cloud Natural Language API. Also, our side quest of developing a simple matching algorithm for LightBox. ## What we learned Learned new frameworks (MERNN) and how to use the Google Cloud API. ## What's next for YHelpUs Better filtering options and a more streamlined UI. We also want to complete the accepted posts feature, and enhance security for users of YHelpUs.
## Inspiration As university students, we have been noticing issues with very large class sizes. With lectures often being taught to over 400 students, it becomes very difficult and anxiety-provoking to speak up when you don't understand the content. As well, with classes of this size, professors do not have time to answer every student who raises their hand. This raises the problem of professors not being able to tell if students are following the lecture, and not answering questions efficiently. Our hack addresses these issues by providing a real-time communication environment between the class and the professor. KeepUp has the potential to increase classroom efficiency and improve student experiences worldwide. ## What it does KeepUp allows the professor to gauge the understanding of the material in real-time while providing students a platform to pose questions. It allows students to upvote questions asked by their peers that they would like to hear answered, making it easy for a professor to know which questions to prioritize. ## How We built it KeepUp was built using JavaScript and Firebase, which provided hosting for our web app and the backend database. ## Challenges We ran into As it was, for all of us, our first time working with a firebase database, we encountered some difficulties when it came to pulling data out of the firebase. It took a lot of work to finally get this part of the hack working which unfortunately took time away from implementing some other features (See what’s next section). But it was very rewarding to have a working backend in Firebase and we are glad we worked to overcome the challenge. ## Accomplishments that We are proud of We are proud of creating a useful app that helps solve a problem that affects all of us. We recognized that there is a gap in between students and teachers when it comes to communication and question answering and we were able to implement a solution. We are proud of our product and its future potential and scalability. ## What We learned We all learned a lot throughout the implementation of KeepUp. First and foremost, we got the chance to learn how to use Firebase for hosting a website and interacting with the backend database. This will prove useful to all of us in future projects. We also further developed our skills in web design. ## What's next for KeepUp * There are several features we would like to add to KeepUp to make it more efficient in classrooms: * Add a timeout feature so that questions disappear after 10 minutes of inactivity (10 minutes of not being upvoted) * Adding a widget feature so that the basic information from the website can be seen in the corner of your screen at all time * Adding Login for users for more specific individual functions. For example, a teacher can remove answered questions, or the original poster can mark their question as answered. * Censoring of questions as they are posted, so nothing inappropriate gets through.
## Inspiration Being Asian-Canadian, we had parents who had to immigrate to Canada. As newcomers, adjusting to a new way of life was scary and difficult. Our parents had very few physical possessions and assets, meaning they had to buy everything from winter clothes to pots and pans. Ensuring we didn't miss any sales to maximize savings made a HUGE difference. That's why we created Shop Buddy - an easy-to-use and convenient tool for people to keep an eye out for opportunities to save money without needing to monitor constantly. This means that people can focus on their other tasks AND know when to get their shopping done. ## What it does Shop Buddy allows users to input links to products they are interested in and what strike price they want to wait for. When the price hits their desired price point, Shop Buddy will send a text to the user's cell phone, notifying them of the price point. Furthermore, to save even MORE time, users can directly purchase the product by simply replying to the text message alert. Since security and transparency are a huge deal these days - especially with retail and e-commerce - we implemented a blockchain where all approved transactions are recorded for full transparency and security. ## How we built it The user submission form is built on a website using HTML/CSS/JavaScript. All form submissions are sent through requests to the Python backend served via a Flask REST API. When a new alert is submitted by the user, the user is messaged via SMS using the Twilio API. If the user replies to a notification on their phone to instantly purchase the product, the transaction is performed with the Python Chrome Web Driver and then the transaction is recorded on the Shop Buddy Blockchain. ## Challenges we ran into The major challenge we faced was connecting the backend to the frontend. We worked with the mentors to help us submit POST and GET requests. Another challenge was testing. Websites have automatic bot detection, so when we tested our code to check prices and purchase items, we were warned by several sites that bots were detected. To overcome this challenge, we coded mock online retailer webpages that would allow us to test our code. ## Accomplishments that we're proud of We're proud of completing this project in a group of 2! We both expanded our skillsets to complete Shop Buddy. We are proud of our idea as we believe it can help people be more productive and help newcomers to Canada. ## What we learned We wanted to learn something new, and we both did. Sheridan learned how to code in JavaScript and make POST and GET requests, and Ben learned how to use Blockchain and code a bot to buy items. Overall, we are very happy to see this project come together. ## What's next for Shop Buddy Currently, our product can only be used to purchase items from select retailers. We plan to expand our retail customer list as we get used to working with different websites. Shop Buddy's goal is to help those in need and those who want to be more productive. We would focus on companies catering to a wider audience range to meet these goals.
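A rough sketch of the alert loop described in "How we built it": poll the product page, compare against the strike price, and send the Twilio text. The URL, CSS selector, credentials, and phone numbers below are all placeholders, and the real app tests against its own mock retailer pages rather than live sites.

```python
# Sketch: periodically check a product price and text the user when it hits the strike price.
import time
import requests
from bs4 import BeautifulSoup
from twilio.rest import Client

twilio = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder credentials

def current_price(url: str) -> float:
    html = requests.get(url, timeout=10).text
    tag = BeautifulSoup(html, "html.parser").select_one(".price")  # placeholder selector
    return float(tag.text.strip().lstrip("$"))

def watch(url: str, strike: float, phone: str, poll_seconds: int = 3600):
    while True:
        price = current_price(url)
        if price <= strike:
            twilio.messages.create(
                to=phone,
                from_="+15550001111",  # placeholder Twilio number
                body=f"Shop Buddy: price dropped to ${price:.2f}. Reply to buy.",
            )
            return
        time.sleep(poll_seconds)
```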
partial
## 💡 INSPIRATION 💡 Many students have **poor spending habits** and losing track of one's finances may cause **unnecessary stress**. As university students ourselves, we're often plagued with financial struggles. As young adults down on our luck, we often look to open up a credit card or take out student loans to help support ourselves. However, we're deterred from loans because they normally involve phoning automatic call centers which are robotic and impersonal. We also don't know why or what to do when we've been rejected from loans. Many of us weren't taught how to plan our finances properly and we frequently find it difficult to keep track of our spending habits. To address this problem troubling our generation, we decided to create AvaAssist! The goal of the app is to **provide a welcoming place where you can seek financial advice and plan for your future.** ## ⚙️ WHAT IT DOES ⚙️ **AvaAssist is a financial advisor built to support young adults and students.** Ava can provide loan evaluation, financial planning, and monthly spending breakdowns. If you ever need banking advice, Ava's got your back! ## 🔎RESEARCH🔍 ### 🧠UX Research🧠 To discover the pain points of existing banking practices, we interviewed 2 and surveyed 7 participants on their current customer experience and behaviors. The results guided us in defining a major problem area and the insights collected contributed to discovering our final solution. ### 💸Loan Research💸 To properly predict whether a loan would be approved or not, we researched what goes into the loan approval process. The resulting research guided us towards ensuring that each loan was profitable and didn't take on too much risk for the bank. ## 🛠️ HOW WE BUILT IT🛠️ ### ✏️UI/UX Design✏️ ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911782991204876348/Loan_Amount.gif) Figma was used to create a design prototype. The prototype was designed in accordance with Voice UI (VUI) design principles & Material design as a base. This expedited us to the next stage of development because the programmers had visual guidance in developing the app. With the use of Dasha.AI, we were able to create an intuitive user experience in supporting customers through natural dialog via the chatbot, and a friendly interface with the use of an AR avatar. Check out our figma [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=206%3A3694&scaling=min-zoom&page-id=206%3A3644&starting-point-node-id=206%3A3694&show-proto-sidebar=1) Check out our presentation [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=61%3A250&scaling=min-zoom&page-id=2%3A2) ### 📈Predictive Modeling📈 The final iteration of each model has a **test prediction accuracy of +85%!** ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911592566829486120/unknown.png) We only got to this point because of our due diligence, preprocessing, and feature engineering. After coming up with our project, we began thinking about and researching HOW banks evaluate loans. Loan evaluation at banks is extremely complex and we tried to capture some aspects of it in our model. We came up with one major aspect to focus on during preprocessing and while searching for our datasets, profitability. There would be no point for banks to take on a loan if it weren't profitable. We found a couple of databases with credit card and loan data on Kaggle. The datasets were smaller than desired. 
We had to be very careful during preprocessing when deciding what data to remove and how to fill NULL values to preserve as much data as possible. Feature engineering was certainly the most painstaking part of building the prediction model. One of the most important features we added was the Risk Free Rate (CORRA). The Risk Free Rate is the rate of return of an investment with no risk of loss. It helped with the engineering process of another feature, min\_loan, which is the minimum amount of money that the bank can make with no risk of loss. Min\_loan would ultimately help our model understand which loans are profitable and which aren't. As a result, the model learned to decline unprofitable loans. ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911981729168887948/unknown.png) We also did market research on the average interest rate of specific types of loans to make assumptions about certain features to supplement our lack of data. For example, we used the average credit card loan interest rate of 22%. The culmination of newly engineered features and the already existing data resulted in our complex, high accuracy models. We have a model for Conventional Loans, Credit Card Loans, and Student Loans. The model we used was RandomForests from sklearn because of its wide variety of hyperparameters and robustness. It was fine-tuned using gridsearchCV to find its best hyperparameters. We designed a pipeline for each model using Pipeline, OneHotEncoder, StandardScaler, FunctionTransformer, GradientBoostingClassifier, and RandomForestClassifier from sklearn. Finally, the models were saved as pickle files for front-end deployment. ### 🚀Frontend Deployment🚀 Working on the frontend was a very big challenge. Since we didn't have a dedicated or experienced frontend developer, there was a lot of work and learning to be done. Additionally, a lot of ideas had to be cut from our final product as well. First, we had to design the frontend with React Native, using our UI/UX Designer's layout. For this we decided to use Figma, and we were able to dynamically update our design to keep up with any changes that were made. Next, we decided to tackle hooking up the machine learning models to React with Flask. Having Typescript communicate with Python was difficult. Thanks to these libraries and a lot of work, we were able to route requests from the frontend to the backend, and vice versa. This way, we could send the values that our user inputs on the frontend to be processed by the ML models, and have them give an accurate result. Finally, we took on the challenge of learning how to use Dasha.AI and integrating it with the frontend. Learning how to use DashaScript (Dasha.AI's custom programming language) took time, but eventually, we started getting the hang of it, and everything was looking good! ## 😣 CHALLENGES WE RAN INTO 😣 * Our teammate, Abdullah, who is no longer on our team, had family issues come up and was no longer able to attend HackWestern unfortunately. This forced us to get creative when deciding a plan of action to execute our ambitious project. We needed to **redistribute roles, change schedules, look for a new teammate, but most importantly, learn EVEN MORE NEW SKILLS and adapt our project to our changing team.** As a team, we had to go through our ideation phase again to decide what would and wouldn't be viable for our project. We ultimately decided to not use Dialogflow for our project. 
However, this was a blessing in disguise because it allowed us to hone in on other aspects of our project such as finding good data to enhance user experience and designing a user interface for our target market. * The programmers had to learn DashaScript on the fly which was a challenge as we normally code with OOP’s. But, with help from mentors and workshops, we were able to understand the language and implement it into our project * Combining the frontend and backend processes proved to be very troublesome because the chatbot needed to get user data and relay it to the model. We eventually used react-native to store the inputs across instances/files. * The entire team has very little experience and understanding of the finance world, it was both difficult and fun to research different financial models that banks use to evaluate loans. * We had initial problems designing a UI centered around a chatbot/machine learning model because we couldn't figure out a user flow that incorporated all of our desired UX aspects. * Finding good data to train the prediction models off of was very tedious, even though there are some Kaggle datasets there were few to none that were large enough for our purposes. The majority of the datasets were missing information and good datasets were hidden behind paywalls. It was for this reason that couldn't make a predictive model for mortgages. To overcome this, I had to combine datasets/feature engineer to get a useable dataset. ## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉 * Our time management was impeccable, we are all very proud of ourselves since we were able to build an entire app with a chat bot and prediction system within 36 hours * Organization within the team was perfect, we were all able to contribute and help each other when needed; ex. the UX/UI design in figma paved the way for our front end developer * Super proud of how we were able to overcome missing a teammate and build an amazing project! * We are happy to empower people during their financial journey and provide them with a welcoming source to gain new financial skills and knowledge * Learning and implementing DashaAi was a BLAST and we're proud that we could learn this new and very useful technology. We couldn't have done it without mentor help, 📣shout out to Arthur and Sreekaran📣 for providing us with such great support. * This was a SUPER amazing project! We're all proud to have done it in such a short period of time, everyone is new to the hackathon scene and are still eager to learn new technologies ## 📚 WHAT WE LEARNED 📚 * DashaAi is a brand new technology we learned from the DashaAi workshop. We wanted to try and implement it in our project. We needed a handful of mentor sessions to figure out how to respond to inputs properly, but we're happy we learned it! * React-native is a framework our team utilized to its fullest, but it had its learning curve. We learned how to make asynchronous calls to integrate our backend with our frontend. * Understanding how to take the work of the UX/UI designer and apply it dynamically was important because of the numerous design changes we had throughout the weekend. * How to use REST APIs to predict an output with flask using the models we designed was an amazing skill that we learned * We were super happy that we took the time to learn Expo-cli because of how efficient it is, we could check how our mobile app would look on our phones immediately. 
* Using AR models in Animaze for the first time took some time to understand, but it ultimately proved to be a great tool! ## ⏭️WHAT'S NEXT FOR AvaAssist⏭️ AvaAssist has a lot to do before it can be deployed as a genuine app. It will only be successful if the customer is satisfied and benefits from using it; otherwise, it will be a failure. Our next steps are to implement more features for the user experience. For starters, we want to implement Dialogflow back into our idea. Dialogflow would be able to understand the intent behind conversations and the messages it exchanges with the user. The long-term prospect of this would be that we could implement more functions for Ava. In the future Ava could be making investments for the user, moving money between personal bank accounts, setting up automatic/making payments, and much more. Finally, we also hope to create more tabs within the AvaAssist app where the user can see their bank account history and its breakdown, user spending over time, and a financial planner where users can set intervals to put aside/invest their money. ## 🎁 ABOUT THE TEAM🎁 Yifan is a 3rd year interactive design student at Sheridan College, currently interning at SAP. With experience in designing for social startups and B2B software, she is interested in expanding her repertoire in designing for emerging technologies and healthcare. You can connect with her at her [LinkedIn](https://www.linkedin.com/in/yifan-design/) or view her [Portfolio](https://yifan.design/) Alan is a 2nd year computer science student at the University of Calgary. He has a wide variety of technical skills in frontend and backend development! Moreover, he has a strong passion for both data science and app development. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/alanayy/) Matthew is a 2nd year student at Simon Fraser University studying computer science. He has formal training in data science. He's interested in learning new frontend skills and technologies and honing his current ones. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/) **📣📣 SHOUT OUT TO ABDULLAH FOR HELPING US THROUGH IDEATION📣📣** You can still connect with Abdullah at his [LinkedIn](https://www.linkedin.com/in/abdullah-sahapdeen/) He's super passionate about reactJS and wants to learn more about machine learning and AI! ### 🥳🎉 THANK YOU UW FOR HOSTING HACKWESTERN🥳🎉
## Inspiration Our inspiration came from the annoying number of times we have had to take out a calculator after a meal with friends and figure out how much to pay each other, make sure we have a common payment method (Venmo, Zelle), and remember if we paid each other back or not a week later. To solve this problem, we came up with Split, which can easily divide our expenses, keep track of how much we owe each friend, and handle payments, all in one place and without needing a common payment platform. ## What it does This application lets you record an amount that someone owes you or that you owe them and keeps it organized. When recording an amount, you can also split an entire bill across multiple people, which is reflected in what each person owes. Additionally, you are able to clear your debts and make payments through the built-in Checkbook service, which lets you pay someone given just their name, phone number, and an amount. ## How we built it We built this project using HTML, CSS, Python, and SQL, implemented with Flask. Alongside these languages, we utilized the Checkbook API to streamline the payment process. ## Challenges we ran into One challenge we ran into was not knowing how to implement new parts of web development. We had difficulty integrating the API we used, Checkbook, into the Python backend of our website. We had no experience with APIs, so implementing this was a challenge that took some time to resolve. Another challenge that we ran into was coming up with ideas that were more complex than we could design. During the brainstorming phase we had many ideas of what would be impactful projects but were left with the issue of not knowing how to put them into code, so brainstorming, planning, and getting an attainable solution down was another challenge. ## Accomplishments that we're proud of We were able to create a fully functioning, ready-to-use product with no prior experience with software engineering and very limited exposure to web dev. ## What we learned The first thing we learned from this project was that communication is the most important thing in the starting phase of a project. While brainstorming, we had different ideas that we would agree on, start, and then consider other ideas, which led to a loss of time. After completing this project we found that communicating what we could do and committing to that idea would have been the most productive decision toward making a great project. To complement that, we also learned to play to our strengths in the building of this project. In addition, we learned how to best structure databases in SQL to achieve our intended goals and how to implement APIs. ## What's next for Split The next step for Split would be to move into the mobile application scene. This would allow users to use this convenient application as an app instead of in a browser. Right now the app is fully supported for a mobile phone screen, and thus users on iPhone could also use the "save to Home Screen" feature to utilize this effectively as an app while we create a dedicated one. Another feature that can be added to this application is bill scanning using a mobile camera to quickly split and organize payments. In addition, the app could be reframed as a social platform with a messenger and friend system.
## Inspiration We were looking at the Apple Magic TrackPads last week since they seemed pretty cool. But then we saw the price tag, $130! That's crazy! So we set out to create a college student budget friendly "magic" trackpad. ## What it does Papyr is a trackpad for your computer that is just a single sheet of paper, no wires, strings, or pressure detecting devices attached. Papyr allows you to browse the computer just like any other trackpad and supports clicking and scrolling. ## How we built it We use a webcam and a whole lot of computer vision to make the magic happen. The webcam first calibrates itself by detecting the four corners of the paper and maps every point on the sheet to a location on the actual screen. Our program then tracks the finger on the sheet by analyzing the video stream in real time, frame by frame, blurring, thresholding, performing canny edge detection, then detecting the contours in the final result. The furthest point in the hand’s contour corresponds to the user's fingertip and is translated into both movement and actions on the computer screen. Clicking is detailed in the next section, while scrolling is activated by double clicking. ## Challenges we ran into Light sensitivity proved to be very challenging since depending on the environment, the webcam would sometimes have trouble tracking our fingers. However, finding a way to detect clicking was by far the most difficult part of the project. The problem is the webcam has no sense of depth perception: it sees each frame as a 2D image and as a result there is no way to detect if your hand is on or off the paper. We turned to the Internet hoping for some previous work that would guide us in the right direction, but everything we found required either glass panels, infrared sensors, or other non college student budget friendly hardware. We were on our own. We made many attempts, including: having the user press down very hard on the paper so that their skin would turn white and detecting this change of color, and tracking the shadow the user's finger makes on the paper and detecting when the shadow disappears, which occurs when the user places their finger on the paper. None of these methods proved fruitful, so we sat down and for the better part of 5 hours thought about how to solve this issue. Finally, what worked for us was to track the “hand pixel” changes across several frames to detect a valid sequence that can qualify as a “click”. Given the 2D image perception with our webcam, it was no easy task and there was a lot of experimentation that went into this. ## Accomplishments that we're proud of We are extremely proud of getting clicking to work. It was no easy feat. We also developed our own algorithms for fingertip tracking and click detection and wrote code from scratch. We set out to create a cheap trackpad and we were able to. In the end we transformed a piece of paper, something that is portable and available nearly anywhere, into a makeshift high-tech device with only the help of a standard webcam. Also one of the team members was able to win a ranked game of Hearthstone using a piece of paper so that was cool (not the match shown in the video). ## What we learned From normalizing the environment's lighting and getting rid of surrounding noise to coming up with the algorithm to provide depth perception to a 2D camera, this project taught us a great deal about computer vision. 
We also learned about efficiency and scalability, since numerous calculations need to be made each second to analyze each frame and everything going on in it.

## What's next for Papyr - A Paper TrackPad
We would like to improve the accuracy and stability of Papyr, which would allow it to function as a very cheap replacement for Wacom digital tablets. Papyr already supports various "pointers" such as fingers or pens.
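The writeup does not say which language or library the team used, so the following is only a rough sketch, assuming Python and OpenCV, of the pipeline described in "How we built it": blur, threshold, Canny edges, contour detection, an extreme contour point as the fingertip, and a perspective transform from the paper's four corners to screen coordinates. The corner coordinates, threshold values, and screen size are made-up assumptions, and real click detection across frames is omitted.

```python
# Illustrative Papyr-style pipeline (assumed parameters, not the team's code).
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080

# Calibration: map the paper's four detected corners to screen coordinates.
paper_corners = np.float32([[102, 87], [538, 92], [545, 402], [95, 396]])   # assumed
screen_corners = np.float32([[0, 0], [SCREEN_W, 0], [SCREEN_W, SCREEN_H], [0, SCREEN_H]])
homography = cv2.getPerspectiveTransform(paper_corners, screen_corners)

def fingertip_to_screen(frame):
    """Return the fingertip position in screen coordinates, or None if no hand is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # Separate the darker hand from the white paper (threshold is scene-dependent).
    _, mask = cv2.threshold(blurred, 120, 255, cv2.THRESH_BINARY_INV)
    edges = cv2.Canny(mask, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    # With the hand entering from the bottom of the frame, the topmost contour
    # point approximates the fingertip (the "furthest" point from the hand base).
    tip = min(hand.reshape(-1, 2), key=lambda p: p[1])
    pt = cv2.perspectiveTransform(np.float32([[tip]]), homography)[0][0]
    return int(pt[0]), int(pt[1])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos = fingertip_to_screen(frame)
    if pos is not None:
        print("cursor at", pos)   # a real driver would move the OS cursor here
    cv2.imshow("papyr-debug", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In practice the calibration corners would come from contour detection on the blank sheet rather than hard-coded values, and the click detector described in the challenges section would watch how the hand-pixel mask changes across consecutive frames before firing a click event.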
# DeezSquats: Break PRs, not spines! 💪

Tired of feeling stuck? 🏋️‍♀️ Ready to take control of your health and fitness without the risk of injury? **DeezSquats** is your personalized fitness coach, designed to make exercise safe, enjoyable, and accessible for everyone.

❌ No more guesswork, no more fear. ✅ Our real-time feedback system ensures you're doing things right, every step of the way. Whether you're a seasoned athlete or just starting out, **DeezSquats** empowers you to move confidently and feel great.

## How it works:
1. Personalized training: get tailored exercise plans based on your goals and fitness level.
2. Real-time feedback: our AI analyzes your form and provides instant guidance to prevent injuries and maximize results.
3. Accessible fitness: enjoy professional-quality training right from your phone, anytime, anywhere.
4. Data-driven training: get graphs for whatever statistics you want to track.

Join a community of like-minded individuals. Together, we'll create a healthier, more vibrant world, one squat at a time. Are you ready to transform your fitness journey? Let's get started today!

## Key Features:
1. AI-powered feedback ✅
2. Personalized training plans 💪
3. User-friendly interface 📱
4. Community support 🤗

## What we've achieved:
1. Created a unique solution to promote safe and effective exercise 🎉
2. Mitigated the risks of improper form and injury ❌
3. Implemented state-of-the-art technology for a seamless user experience 🤖

## What's next for **DeezSquats**:
1. Expanding our exercise library 🏋️‍♂️
2. Introducing video feedback for enhanced guidance 🎥
3. Enhancing our virtual gym buddy experience 👯‍♀️

Ready to take the next step? Join **DeezSquats** and experience the future of fitness.

## Authors
* Alankrit Verma
* Borys Łangowicz
* Adibvafa Fallahpour
**Come check out our fun demo near the Google Cloud booth in the West Atrium!! Could you use a physiotherapy exercise?**

## The problem
A common situation in physiotherapy is that joint movement becomes limited through muscle atrophy, surgery, accidents, stroke, or other causes. Reportedly, up to 70% of patients give up physiotherapy too early, often because they cannot see their progress. Automated tracking of range of motion (ROM) via a mobile app could help patients reach their physiotherapy goals.

Insurance studies show that 70% of people quit physiotherapy sessions once the pain disappears and they regain their mobility. The reasons are many; a few of them are the cost of treatment, the feeling of having recovered, no more time to dedicate to recovery, and a loss of motivation. The worst part is that half of them see the injury reappear within 2-3 years.

Current pose-tracking technology is NOT real-time and automatic; it requires physiotherapists on hand and **expensive** tracking devices. Although these work well, there is HUGE room for improvement in developing a cheap and scalable solution. Additionally, many seniors cannot make sense of current solutions and are unable to adapt to current in-home technology, let alone the kind of tech that requires hours of professional setup and guidance, as well as expensive equipment.

[![IMAGE ALT TEXT HERE](https://res.cloudinary.com/devpost/image/fetch/s--GBtdEkw5--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://img.youtube.com/vi/PrbmBMehYx0/0.jpg)](http://www.youtube.com/watch?feature=player_embedded&v=PrbmBMehYx0)

## Our Solution!
* Our solution **only requires a device with a working internet connection!!** We aim to revolutionize the physiotherapy industry by allowing clinics and businesses to scale and operate more efficiently. We understand that in many areas the therapist-to-patient ratio needed for good care may be too high to be profitable, reducing the quality and range of service for everyone, so an app that does this remotely is revolutionary.
* Using a machine learning model that runs directly in the browser, we collect real-time 3D position data of the patient's body while they exercise. The data is first analyzed within the app and then provided to the physiotherapist, who can analyze it further and adjust the exercises. The app also asks the patient for subjective feedback on a pain scale.
* This makes physiotherapy exercise feedback from their therapist more accessible to remote individuals **WORLDWIDE**.

## Inspiration
* The growing need for accessible physiotherapy among seniors, stroke patients, and individuals in third-world countries without access to therapists but with a stable internet connection
* The room for AI and ML innovation within the physiotherapy market for scaling and growth

## How I built it
* Firebase hosting
* Google Cloud services
* React front-end
* TensorFlow PoseNet ML model for computer vision
* Several algorithms to analyze 3D pose data
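The app itself runs its pose model in the browser with TensorFlow.js and React, and that code is not reproduced here. Purely as an illustration of the kind of "algorithm to analyze 3D pose data" listed above, the standalone sketch below shows how a joint angle can be computed from three keypoints returned by a pose model; the keypoint names and coordinates are made up for the example.

```python
# Illustrative only: joint-angle geometry on pose keypoints (hypothetical values).
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c, in 2D or 3D."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point drift just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Assumed hip, knee, and ankle keypoints from one frame of a squat:
hip, knee, ankle = [0.42, 0.55, 0.10], [0.45, 0.75, 0.12], [0.44, 0.95, 0.09]
print(f"knee flexion angle: {joint_angle(hip, knee, ankle):.1f} degrees")
```

Tracking an angle like this frame by frame is what produces the range-of-motion curve that the therapist can review on their side.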
## Challenges I ran into
* Testing in React Native
* Getting accurate angle data
* Setting up an accurate timer
* Setting up the ML model to work with the camera using React

## Accomplishments that I'm proud of
* Getting real-time 3D position data
* Supporting multiple exercises
* Collection of objective quantitative as well as qualitative subjective data from the patient for the therapist
* Increasing the usability for senior patients by moving data analysis onto the therapist's side
* **Finishing this within 48 hours!!!!** We did NOT think we could do it, but we came up with a working MVP!!!

## What I learned
* How to implement TensorFlow models in React
* Creating reusable components and styling in React
* Creating algorithms to analyze 3D space

## What's next for Physio-Space
* Implementing the sharing of the collected 3D position data with the therapist
* Adding a dashboard onto the therapist's side
## Inspiration
We visit many places, yet we know very little about the historic events or historic places around us. Today In History notifies you of historic places near you so that you do not miss them.

## What it does
Today In History notifies you about important events that took place on today's date, a number of years ago in history. It also shows the historical places around you, along with the distance and directions. Today In History is also available as an Amazon Alexa skill, so you can always ask things like: "Alexa, ask Today In History what's historic around me?", "What happened today?", or "What happened today in India?"

## How we built it
We have two data sources. The first is Wikipedia: we pull all the events recorded for today's date and filter them based on the user's location. The second is Philadelphia's historic-places data, which we use to fetch the historic places nearest to the user's location, with MapQuest libraries providing directions in real time.

## Challenges we ran into
Alexa does not know a person's location beyond the address the device is registered with, so we built a novel backend that acts as a bridge between the web app and Alexa to keep them synchronized with the user's actual location.
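The writeup does not include code for the Wikipedia side of the pipeline, so here is a minimal sketch of what fetching "events on this date" could look like. It uses the public Wikimedia "On this day" feed endpoint; the URL shape and response fields below reflect that API as generally documented, but treat them as assumptions and verify against the current docs. The location-based filtering and the Philadelphia/MapQuest lookup are left out.

```python
# Minimal sketch of pulling today's historical events from Wikipedia
# (endpoint path and response fields assumed from the public Wikimedia feed API).
from datetime import date
import requests

def events_today(limit=5):
    today = date.today()
    url = (
        "https://en.wikipedia.org/api/rest_v1/feed/onthisday/events/"
        f"{today.month:02d}/{today.day:02d}"
    )
    resp = requests.get(url, headers={"User-Agent": "TodayInHistory-demo"})
    resp.raise_for_status()
    for event in resp.json().get("events", [])[:limit]:
        print(f"{event.get('year')}: {event.get('text')}")

if __name__ == "__main__":
    events_today()
```

In the full app, the same list would be filtered by the user's location before being read out by Alexa or shown in the web app.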