Dataset columns:

| Column | Type | Notes |
|---|---|---|
| anchor | string | lengths 159 – 16.8k characters |
| positive | string | lengths 184 – 16.2k characters |
| negative | string | lengths 167 – 16.2k characters |
| anchor_status | string | 3 classes: losing / partial / winning |
## Inspiration Our team wanted to improve the daily lives of people in our society and in developing countries. We realized that a lot of fatigue is caused by dehydration, and that it is easily remedied by drinking more water. However, we often forget to drink as our lives get busy, but what we don't forget to do is check our phones every minute! We wanted to pair a healthier habit with our phones, to help remind us to drink enough water every day. We also recognized the importance of drinking clean, pure water, and that some people in this world are not privileged enough to have it. Our product promotes the user's physical well-being, shows them how to drink differently, and raises awareness of the impure water that many individuals have to drink. ## What it does The bottle senses the resistance of the water and uses this data to determine whether or not the water is safe to drink. The bottle also senses the change in its mass to determine your daily intake. Using this data, it sends a text message to your phone to remind you to drink water and to tell you whether the water you are about to drink is safe. ## How we built it The resistance sensor is essentially a voltage divider. The voltage produced by the Photon is split between a known resistance and the water of unknown resistance. The voltage across the water, the total voltage, and the resistance of the known resistor are all known. From there, the program conducts multiple trials and chooses the most accurate data to calculate the water's resistance. The pressure sensor senses the pressure placed on it and changes its resistance accordingly; its voltage is then recorded and processed within our code. The changes in pressure and resistance sent from the sensors first pass through the Google Cloud Platform publisher/subscriber API. They then proceed to a Python script, which sends the data back to Google Cloud, this time to the datastore, which, optimally, would use machine learning to analyze the data and figure out what information to return. This processed information is then sent to a Twilio script so it can be delivered as a text message to the designated individual's phone number. ## Challenges we ran into Our biggest challenge was learning the new material in a short amount of time. A lot of the concepts were quite foreign to us, and learning them took a lot of time and effort. Furthermore, there were several issues and inconsistencies with our circuits and sensors. They were quite time-consuming to fix and required us to trace back our circuits and modify the program. However, these challenges were more than enjoyable to overcome and an amazing learning opportunity for our entire team. ## Accomplishments that we're proud of Our team is firstly proud of finishing the entire project while using unfamiliar software and hardware. It was the first time we used Google Cloud Platform and the Particle Photon, and a lot of the programming was new to us. The project required a lot of intricate design and programming. There were many small and complex parts, and given the time constraint and minor malfunctions, it was very difficult to accomplish everything. ## What we learned Our team built on our previous knowledge of programming and sensors. We learned how to integrate with Google Cloud Platform, how to operate Twilio, and how to set up and use a Particle Photon. Our team also learned about the engineering process of designing, prototyping, and pitching a novel idea. This gives us a better idea of what to expect if any of us decide to do a startup. ## What's next for SmartBottle In the future, we want to develop an app that sends notifications to your phone instead of texts, and use machine learning to monitor your water intake and recommend how to incorporate it into your life. More importantly, we want to integrate the electrical components within the bottle instead of the external prototype we have now. We imagine the force sensor still being at the bottom, and a sleeker design for the resistance sensor.
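The voltage-divider measurement described above reduces to a short calculation. Below is a minimal, hypothetical Python sketch of it (the actual project runs on a Particle Photon); the 3.3 V supply, the 10 kΩ known resistor, and the use of a median across trials are assumptions made for illustration.

```python
def water_resistance(v_water: float, v_supply: float = 3.3, r_known: float = 10_000.0) -> float:
    """Estimate the unknown resistance of the water in a series voltage divider.

    V_water / V_supply = R_water / (R_known + R_water)  =>  R_water = R_known * V_water / (V_supply - V_water)
    """
    if not 0 < v_water < v_supply:
        raise ValueError("measured voltage must lie between 0 V and the supply voltage")
    return r_known * v_water / (v_supply - v_water)

def median_of_trials(readings: list[float]) -> float:
    """Collapse several noisy trials into one robust estimate (the median)."""
    ordered = sorted(readings)
    mid = len(ordered) // 2
    return ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# Example: three trials measured across the water terminals.
trials = [water_resistance(v) for v in (1.10, 1.15, 1.08)]
print(f"estimated water resistance: {median_of_trials(trials):.0f} ohms")
```

Taking the median is one plausible way to "choose the most accurate data" across trials, as the write-up describes.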
FOR THE DEMO VIDEO ->>> THIS LINK <https://youtu.be/lCZJ5zlxt2Q> ## Inspiration We hear about the benefits of drinking enough water and the consequences of not doing so. But forcing yourself to remember to drink takes VERY strong determination during a hectic day of work. ## What it does We remind you to drink water 5 seconds after the last time you drank. If you keep drinking water, you'll never have to worry about under-drinking! ## How we built it We used a tilt sensor to detect drinking and an Arduino microcontroller. ## Challenges we ran into -> The gyroscope & accelerometer sensor was working at first, but then started giving 0 as the reading... We realized the I2C connection needed a solid soldered connection, but we didn't have the equipment, so we pivoted. -> The LCD panel required more pins, so we upgraded to an Uno. -> LCD panel backlight: we were using schematics that powered the backlight from digital pins, but the power was a problem. ## Accomplishments that we're proud of We didn't have any electrical experience before, but we learned about circuiting and Arduino programming, and used sensors to create a working product. ## What we learned The importance of soldering... The importance of getting materials and tools before the hackathon... The importance of experience! ## What's next for Stay Hydrated! We will learn more about sending data from the chips to a computer.
## Inspiration One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wishes to return to normal spending habits, we thought of a helper extension to keep them on the right track. ## What it does Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests local small-business alternatives if there are any, so you can help support your community! ## How we built it React front-end, MongoDB, Express REST server ## Challenges we ran into Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info that we needed for our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics. ## Completion In its current state, IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics. ## What we learned Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that. Along with creating our extension's UI using React.js, this was a new experience for everyone. A few members of the team were also able to spend the weekend learning how to create an Express.js API with a MongoDB database, all from scratch! ## What's next for IDNI - I Don't Need It! We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model to properly analyze each metric individually with one final pass over these various decision metrics to output our final verdict. Then, finally, publish to the Chrome Web Store!
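The write-up doesn't spell out the decision algorithm, so here is a deliberately simple Python sketch of how per-product metrics, a spending limit, and a blacklist could be combined into a buy/skip recommendation. All thresholds and field names are invented for illustration; the real extension is written in JavaScript against an Express backend.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    category: str

def recommend(product: Product, spent_this_month: float,
              monthly_limit: float, blacklist: set[str]) -> str:
    """Return a recommendation string for a single product page."""
    if product.category in blacklist:
        return "Skip it: this category is on your blacklist."
    if spent_this_month + product.price > monthly_limit:
        over = spent_this_month + product.price - monthly_limit
        return f"Skip it: buying this puts you ${over:.2f} over your monthly limit."
    return "Within budget, but do you really need it?"

print(recommend(Product("wireless earbuds", 89.99, "electronics"),
                spent_this_month=410.00, monthly_limit=450.00,
                blacklist={"collectibles"}))
```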
losing
Problem: Have you ever been at a party and questioned who chose the music? Or debated who would be on aux? Shuffle is an app designed to sync multiple users' most-played songs to create a combined playlist that everyone loves. The app requires you to have a Spotify account, so when you download the Shuffle app, your profile is loaded into the app. Shuffle then allows you to choose the people you want to create a playlist with, from 2 to 5 or more friends; the app then uses an algorithm to create a tailored playlist that contains music everyone will love. It does this using underlying user data from Spotify that contains each user's favorite music and listening trends in the short, medium, and long term. The app also gives users an easy way to add songs to or remove songs from the playlist after it has been created, so the perfect songs are always playing. Shuffle ensures that you and your friends will always have music that everyone can listen to together.
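As a rough illustration of the blending step, here is a small Python sketch that scores tracks by how many friends have them in their Spotify top lists and how highly each friend ranks them. The rank-based scoring is an assumption made for illustration, not Shuffle's actual algorithm.

```python
from collections import defaultdict

def combined_playlist(top_tracks_by_user: dict[str, list[str]], size: int = 20) -> list[str]:
    """Blend several users' ranked top-track lists into one shared playlist.

    A track earns more points the higher it sits in a user's list, and tracks
    shared by more users naturally accumulate a larger total score.
    """
    scores: dict[str, float] = defaultdict(float)
    for tracks in top_tracks_by_user.values():
        for rank, track in enumerate(tracks):
            scores[track] += 1.0 / (rank + 1)   # rank 0 -> 1.0, rank 1 -> 0.5, ...
    return [t for t, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)][:size]

playlist = combined_playlist({
    "alice": ["Song A", "Song B", "Song C"],
    "bob":   ["Song B", "Song D", "Song A"],
})
print(playlist)  # tracks both users like ("Song A", "Song B") float to the top
```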
## Inspiration We often encountered challenges navigating automated call systems, which left us spending excessive time on hold and feeling frustrated. These experiences made us realize how much valuable time was being wasted when we could have been focusing on more productive tasks, and we were left angry and wondering if there was a better way to optimize telecommunication systems. This frustration inspired us to develop a solution that streamlines the process, minimizing wait times and improving the overall customer experience. ## What it does The system includes a form where clients can enter their name, phone number, and a brief description of their issue, such as requesting a refund or returning an item. Once the form is submitted, the VAPI system automatically places a call to the provided number. A virtual assistant then guides the client through a series of questions to better understand their problem, with the VAPI system answering the questions based on the description given. The VAPI setup even handles the wait time on the client's behalf, ensuring they're connected directly to the appropriate support agent without unnecessary delays. ## How we built it We implemented the solution using React.js for the front-end interface and VAPI for handling the automated calls. The form submission triggers the VAPI system, which initiates and manages the call flow. For version control and collaboration, we hosted the project in a GitHub repository, utilizing GitHub Actions for continuous integration and automated testing to ensure a smooth deployment process. We used Llama on Groq as the LLM, as we saw a significant difference in response time when using Groq versus OpenAI. This setup allowed us to efficiently manage code updates and track changes while leveraging VAPI's capabilities to handle real-time interactions with clients. ## Challenges we ran into We encountered challenges managing different branches, as the primary branch frequently stalled during the process. ## Accomplishments that we're proud of We were able to integrate the front end with the VAPI connector triggered by the submit button, which took time, but we were persistent in solving the problem. ## What we learned We explored various functionalities within the React ecosystem, gaining a deeper understanding of the tools and techniques available to enhance our applications. For instance, we learned about the @media query, which allows us to create responsive designs by applying different styles based on screen size and device characteristics. Additionally, we became proficient in utilizing VAPI to manage automated calls, including how to implement its features for efficient interaction with clients. This knowledge has equipped us to build more dynamic and user-friendly applications. ## What's next for Letmetalktohuman AI We aim to implement a feature that recognizes and performs specific dial tones, as these are a common part of phone interactions. This feature will enhance the user experience by allowing the system to respond to different inputs appropriately.
NwHacks 2019: Product Idea: An app that crowd-sources group recommendations for songs in a live location. Be it a restaurant, a club, etc., the app will detect your tastes in music based on your Spotify preferences and send them to the receiving end, which controls the music being played in the room. Target Audience: Young adults (early-to-mid 20s) who go out on a constant basis and wish to have their favorite songs playing all night long. What is the problem? Bars and nightclubs have been having trouble choosing what music to play. Many venues rely heavily on a standardized corporate playlist that has worked consistently, but that doesn't necessarily meet customer satisfaction. Frequent visitors say that they are unhappy with the music being played and (in the case of a bar) that the DJ never plays any of their song recommendations. What if… there was a way of controlling the music based on the visiting crowd? Would it increase customer satisfaction? How happy does a person feel when they hear their favorite song at a club? How much better would they say their night was when all of their friends are having fun listening to their favorite music? How would business be impacted by playing the music that people like the most? What wows? Being able to crowd-source a music playlist by personal preferences and location so that bars & nightclubs can play music that is catered to the audience that is actually present. What works? Spotify playlists are functional and fairly accurate, but they normally don't offer enough customization. Features: * Playlist generator based on the people who are around you. * By entering your preferences and tastes (possibly even linking up Spotify), venues will generate better music Usability: * Download app * Log in (Facebook) * Asks for preferences * Loads preferences * Homepage screen * Automatically suggests songs to venue How can the venue measure performance metrics? * With new data the system can get better at predicting what music people want to listen to. CODE HERE <https://github.com/cara-wong/SonaShare>
partial
## Inspiration The inspiration for Unity Style stemmed from the desire to bridge the gap between the fashion industry and individuals with disabilities. Witnessing the challenges faced by people like Alex, who live with disabilities, and realizing the lack of inclusivity in fashion, sparked the idea to create a platform that empowers them to embrace their unique style without limitations. ## What it does Unity Style is a revolutionary platform that brings inclusivity to the forefront of fashion. It provides adaptive clothing, wheelchair-friendly fashion, sensory-friendly attire, a user-friendly online platform, detailed product information, and a vibrant community for individuals with disabilities to engage with fashion. By offering accessible options, detailed product information, and a supportive space, Unity Style enables users to express themselves authentically through their clothing choices. ## How we built it Leveraging Node.js, we crafted the server-side application that forms the core of our platform's functionality. PostgreSQL was the backbone of our data management, enabling us to store and retrieve structured data seamlessly. React, the front-end JavaScript library, empowered us to develop dynamic user interfaces using reusable UI components. Combining React with GraphQL, we constructed a responsive and tailored front-end experience that efficiently fetched data based on component-specific requirements. The integration of these technologies was a strategic choice that allowed us to create a versatile, scalable, and feature-rich platform. ## Challenges we ran into Integrating the different technologies seamlessly required careful coordination and understanding of their nuances. Adapting to the event-driven and asynchronous nature of Node.js demanded a shift in our approach to server-side development. Navigating the intricacies of database interactions in PostgreSQL and optimizing data retrieval was a learning curve. Additionally, while React facilitated dynamic UI development, fine-tuning server-side rendering for performance optimization was a complex task. Addressing these challenges was a collaborative effort that allowed us to deepen our expertise in these technologies. ## Accomplishments that we're proud of Building a modular platform with Node.js affirmed its scalability. Utilizing PostgreSQL for data integrity marked a significant milestone. Creating a feature-rich admin panel with React highlighted our UI prowess. Merging React and GraphQL for responsive experiences underscored our expertise. ## What we learned Exploring Node.js's event-driven nature revealed the significance of collaborative problem-solving. PostgreSQL's data management intricacies deepened our shared understanding of structured databases. We honed our skills in React by building UI components together. Mastering GraphQL for precise data retrieval was a collective effort that enhanced our technical abilities. ## What's next for Unity Style Our sights are set on expanding adaptive designs to embrace an even wider range of disabilities, fostering inclusivity. Innovations like integrating augmented reality (AR) for virtual try-ons and AI-guided personalized recommendations are on our horizon. Nurturing partnerships with fashion brands, accessibility champions, and influencers will amplify our impact.
## Inspiration How many times have you opened your fridge door and examined its contents for something to eat/cook/stare at and ended up finding a forgotten container of food in the back of the fridge (a month past its expiry date) instead? Are you brave enough to eat it, or does it immediately go into the trash? The most likely answer would be to dispose of it right away for health and safety reasons, but you'd be surprised - food wastage is a problem that [many countries such as Canada](https://seeds.ca/schoolfoodgardens/food-waste-in-canada-3/) contend with every year, even as world hunger continues to increase! Big corporations and industries contribute to most of the wastage that occurs worldwide, but we as individual consumers can do our part to reduce food wastage as well by minding our fridges and pantries and making sure we eat everything that we procure for ourselves. Enter chec:xpire - the little app that helps reduce food wastage, one ingredient at a time! ## What it does chec:xpire takes stock of the contents of your fridge and informs you which food items are close to their best-before date. chec:xpire also provides a suggested recipe which makes use of the expiring ingredients, allowing you to use the ingredients in your next meal without letting them go to waste due to spoilage! ## How we built it We built the backend using Express.js, which laid the groundwork for interfacing with Solace, an event broker. The backend tracks food items (in a hypothetical refrigerator) as well as their expiry dates, and picks out those that are two days away from their best-before date so that the user knows to consume them soon. The app also makes use of the co:here AI to retrieve and return recipes that make use of the expiring ingredients, thus providing a convenient way to use up the expiring food items without having to figure out what to do with them in the next couple days. The frontend is a simple Node.js app that subscribes to "events" (in this case, food approaching their expiry date) through Solace, which sends the message to the frontend app once the two-day mark before the expiry date is reached. A notification is sent to the user detailing which ingredients (and their quantities) are expiring soon, along with a recipe that uses the ingredients up. ## Challenges we ran into The scope of our project was a little too big for our current skillset; we ran into a few problems finding ways to implement the features that we wanted to include in the project, so we had to find ways to accomplish what we wanted to do using other methods. ## Accomplishments that we're proud of All but one member of the team are first-time hackathon participants - we're very proud of the fact that we managed to create a working program that did what we wanted it to, despite the hurdles we came across while trying to figure out what frameworks we wanted to use for the project! ## What we learned * planning out a project that's meant to be completed within 36 hours is difficult, especially if you've never done it before! * there were some compromises that needed to be made due to a combination of framework-related hiccups and the limitations of our current skillsets, but there's victory to be had in seeing a project through to the end even if we weren't able to accomplish every single little thing we wanted to * Red Bull gives you wings past midnight, apparently ## What's next for chec:xpire A fully implemented frontend would be great - we ran out of time!
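The core "two days before best-before" check is simple enough to sketch. This hypothetical Python version stands in for the Express.js backend described above; the item structure and example dates are assumptions.

```python
from datetime import date, timedelta

def expiring_soon(fridge: list[dict], today: date, window_days: int = 2) -> list[dict]:
    """Return items whose best-before date falls within `window_days` of today."""
    cutoff = today + timedelta(days=window_days)
    return [item for item in fridge
            if today <= item["best_before"] <= cutoff]

fridge = [
    {"name": "spinach", "quantity": "1 bag", "best_before": date(2024, 3, 14)},
    {"name": "yogurt",  "quantity": "500 g", "best_before": date(2024, 3, 20)},
]
for item in expiring_soon(fridge, today=date(2024, 3, 13)):
    print(f"{item['name']} ({item['quantity']}) expires on {item['best_before']}: use it soon!")
```

In the real app, the matching items would be published as a Solace event and paired with a co:here-generated recipe before reaching the user.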
## Inspiration Peer review is critical to modern science, engineering, and healthcare endeavors. However, the system for implementing this process has lagged behind, resulting in expensive costs for publishing and accessing material, long turnaround times reminiscent of snail mail, and shockingly opaque editorial practices. Astronomy, Physics, Mathematics, and Engineering use a "pre-print server" ([arXiv](https://arxiv.org)), which was the early internet's improvement upon snail-mailing articles to researchers around the world. This pre-print server is maintained by a single university and is constantly requesting donations to keep the servers up and maintained. While researchers widely acknowledge the importance of the pre-print server, there is no peer review incorporated, and none planned due to technical reasons. Thus, researchers are stuck spending >$1000 per paper to be published in journals, while individual article access can cost as much as $32 per paper ([source](https://www.nature.com/subscriptions/purchasing.html)). For reference, a single PhD thesis can contain >150 references, which would essentially cost $4800 if purchased individually. The recent advance of blockchain and smart contract technology ([Ethereum](https://www.ethereum.org/)), coupled with decentralized file sharing networks ([InterPlanetaryFileSystem](https://ipfs.io)), naturally led us to believe that archaic journals and editors could be bypassed. We created our manuscript distribution and reviewing platform based on the arXiv, but in a completely decentralized manner. Users utilize, maintain, and grow the network of scholarship by running a simple program and web interface. ## What it does arXain is a Dapp that deals with all the aspects of a peer-reviewed journal service. An author (wallet address) will come with a bomb-ass paper they wrote. In order to "upload" their paper to the blockchain, they will first need to add their file/directory to the IPFS distributed file system. This produces a unique reference number (a DOI is currently used in journals) and a hash corresponding to the current paper file/directory. The author can then use their address on the Ethereum network to create a new contract to submit the paper using this reference number and paperID. In this way, there will be one paper per contract. The only other action the author can take on that paper is submitting another draft. Others can review and comment on papers, but an address can not comment on or review its own paper. The reviews are rated on a "work needed" / "acceptable" basis, and the reviewer can also upload an IPFS hash of their comments file/directory. Protection is also built in so that others can not submit revisions of the original author's paper. The blockchain will have a record of the initial paper submitted, revisions made by the author, and comments/reviews made by peers. The beauty of all of this is that one can see the full transaction histories and reconstruct the full evolution of the document. One can see the initial draft, all suggestions from reviewers, how many reviewers there were, and how many of them think the final draft is reasonable. ## How we built it There are 2 main back-end components, the IPFS file hosting service and the Ethereum blockchain smart contracts. They are bridged together with [MetaMask](https://metamask.io/), a tool for connecting the distributed blockchain world, and by extension the distributed papers, to a web browser. We designed the smart contracts in Solidity.
The IPFS interface was built using a combination of Bash, HTML, and a lot of regex! We then connected the IPFS distributed net with the Ethereum blockchain using MetaMask and JavaScript. ## Challenges we ran into On the Ethereum side, setting up the Truffle Ethereum framework and test networks was challenging. Learning the limits of Solidity and constantly reminding ourselves that we had to remain decentralized was hard! The IPFS side required a lot of clever regex-ing. Ensuring public access to researchers' manuscripts and review histories required proper identification and distribution on the network. The hardest part was using MetaMask and JavaScript to call our contracts and connect the blockchain to the browser. We struggled for hours trying to get JavaScript to deploy a contract on the blockchain. We were all new to functional programming. ## Accomplishments that we're proud of Closing all the curly bois and parentheses in JavaScript. Learning a whole lot about the blockchain and IPFS. We went into this weekend wanting to learn how the blockchain worked, and came out having learned about Solidity, IPFS, JavaScript, and a whole lot more. You can see our "genesis-paper" on an IPFS gateway (a bridge between HTTP and IPFS) [here](https://gateway.ipfs.io/ipfs/QmdN2Hqp5z1kmG1gVd78DR7vZmHsXAiSbugCpXRKxen6kD/0x627306090abaB3A6e1400e9345bC60c78a8BEf57_1.pdf) ## What we learned We went into this knowing only that there was a way to write smart contracts, that IPFS existed, and minimal JavaScript. We learned how to set up the Ethereum Truffle framework, Ganache, and test networks, along with the development side of Ethereum Dapps, like the Solidity language and JavaScript tests with the Mocha framework. We learned how to navigate the filespace of IPFS, hash and organize directories, and how file distribution works on a P2P swarm. ## What's next for arXain With some more extensive testing, arXain is ready for the Ropsten test network *at the least*. If we had a little more ETH to spare, we would consider launching our Dapp on the main network. arXain PDFs are already on the IPFS swarm and can be accessed by any IPFS node.
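The contract rules described above (one contract per paper, only the submitting address can add revisions, and an address cannot review its own paper) are easiest to see as plain logic. The following is a Python sketch of those rules only, not the Solidity contract itself; the field names, verdict strings, and abbreviated addresses/hashes are illustrative assumptions.

```python
class PaperContract:
    """Toy model of the per-paper rules arXain enforces on-chain."""

    def __init__(self, author: str, paper_id: str, ipfs_hash: str):
        self.author = author
        self.paper_id = paper_id
        self.revisions = [ipfs_hash]          # full draft history, oldest first
        self.reviews: list[dict] = []

    def submit_revision(self, sender: str, ipfs_hash: str) -> None:
        # Only the original author may extend the draft history.
        if sender != self.author:
            raise PermissionError("only the original author may submit revisions")
        self.revisions.append(ipfs_hash)

    def submit_review(self, sender: str, verdict: str, ipfs_hash: str) -> None:
        # An address cannot review its own paper.
        if sender == self.author:
            raise PermissionError("authors cannot review their own paper")
        if verdict not in ("work needed", "acceptable"):
            raise ValueError("verdict must be 'work needed' or 'acceptable'")
        self.reviews.append({"reviewer": sender, "verdict": verdict, "comments": ipfs_hash})

paper = PaperContract(author="0x6273...ef57", paper_id="arXain-0001", ipfs_hash="QmdN2H...n6kD")
paper.submit_review(sender="0x9a1b...c002", verdict="work needed", ipfs_hash="Qmabc...1234")
```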
losing
## Inspiration We were inspired by our mothers, who are both educators for children. Many parents want to know what their children do at school, since at younger ages, when parents ask their kids "What did you do at school?", the question is rarely met with anything beyond shrugs and incoherent ideas. This responsibility to communicate then falls to teachers. We looked at the other products on the market and thought of a way we could use AI and machine learning to automate the process, helping teachers share students' foundational education experiences with their guardians. ## What it does Our app and camera system monitors your kids throughout the school day and notifies you when there are noteworthy events, with a collection of photos of your student that's been personally curated by our learning system. ## How we built it We built this technology with Android Studio for the mobile app and Python for the data processing / machine learning back end. The back end was made with Google Cloud Vision, scikit-learn, Gensim, and Facebook's bAbI dataset, and communicates with the mobile application via Firebase's Realtime Database. ## Challenges we ran into We had to make many parts work fluidly together in a short amount of time. We also ran into some technical challenges that took a couple of creative innovations to get through. Lastly, my computer restarted unexpectedly at least 5 times, probably because I was trying to do so much on it. ## Accomplishments that we're proud of We are proud that we were able to make a system that will help with parental communication in elementary school classrooms and, hopefully, in the future offset some of the major work done by elementary and pre-K teachers (who are much in need). ## What we learned A lot. We can't wait to tell you, but here are some hints (NLP, app dev, and family). ## What's next for xylophone We hope to finish up some of the fixes, beta test it as a project in our local elementary schools, and learn about the user experience when the app is used in the real world.
## Inspiration Have you ever wanted to cross the street but found a horde of speeding cars in front of you? We often find ourselves in this situation when we aren't sure if a gap between the cars is large enough for us to safely cross the street. Crossy Road, a piece of novel headband technology, aims to conquer this problem. A lot of times we want to walk across the street, but we need to wait too long. Also, 3000 pound metal cars are speeding down the street in front of us. Clearly there is something to be improved upon here. We need a faster and most importantly safer way to cross the street. We built a way to play crossy road in real life with a hack. You just run in the middle but it's okay because you have two lives and this is just a tutorial. ## What it does Crossy road is a novel sensory device built off of two cameras, one on each side of the head. By utilizing an object recognition server we calculate the velocity of the cars to predict the position of cars in the future, allowing the user to know whether it is safe to cross the street or not. ## How we built it The server is made with python. The app is made with swift. The device is made with extremely high quality deluxe (TM) state of the art corrugated cardboard platinum PLUS edition. ## Challenges we ran into Getting the object detection model to run with high accuracy was hard. ## Accomplishments that we're proud of It works!!!!!! I was able to cross the street. ## What we learned ## What's next for crossy road
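The "is this gap big enough" decision comes down to extrapolating each detected car forward (its speed can be estimated from its position change across successive camera frames) and checking whether it reaches the crossing before the pedestrian clears it. Here is a hedged Python sketch of that decision only; the road width, walking speed, and safety margin are assumed values, not the project's actual parameters.

```python
def time_to_reach(position_m: float, velocity_mps: float) -> float:
    """Seconds until a car reaches the crossing line; infinite if it is moving away."""
    if velocity_mps <= 0:
        return float("inf")
    return max(position_m, 0.0) / velocity_mps

def safe_to_cross(cars: list[tuple[float, float]], road_width_m: float = 7.0,
                  walking_speed_mps: float = 1.4, margin_s: float = 2.0) -> bool:
    """cars: (distance from crossing in metres, speed toward it in m/s) per detection."""
    time_needed = road_width_m / walking_speed_mps + margin_s
    return all(time_to_reach(d, v) > time_needed for d, v in cars)

# Two detections: one car 40 m away at 10 m/s, another 90 m away at 12 m/s.
print(safe_to_cross([(40.0, 10.0), (90.0, 12.0)]))   # False: the nearer car arrives in 4 s
```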
## Inspiration We wanted to solve a unique problem we felt was impacting many people but was not receiving enough attention. With emerging and developing technology, we implemented neural network models to recognize objects and images and convert them into an auditory output. ## What it does XTS takes an **X** and turns it **T**o **S**peech. ## How we built it We used PyTorch, Torchvision, and OpenCV with Python. This allowed us to utilize pre-trained convolutional neural network models and region-based convolutional neural network models without investing too much time into training an accurate model, as we had limited time to build this program. ## Challenges we ran into While attempting to run the Python code, the video rendering and text-to-speech were out of sync, and the frame-by-frame object recognition was limited in speed by our system's graphics processing and its capacity for running the machine-learning model. We also faced an issue while trying to use our computer's GPU for faster video rendering, which led to long periods of frustration due to backwards incompatibilities between module versions. ## Accomplishments that we're proud of We are proud that we were able to implement neural networks and object detection using Python. We were also happy to be able to test our program with various images and video recordings and get accurate output. Lastly, we were able to create a sleek user interface that integrates our program. ## What we learned We learned how neural networks function and how to augment a machine-learning model, including dataset creation. We also learned object detection using Python.
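For readers curious what "pre-trained region-based detection plus text-to-speech" looks like in code, here is a minimal sketch along the lines described (PyTorch/Torchvision/OpenCV). The pyttsx3 speech engine, the score threshold, and the single-image flow are assumptions standing in for whatever the team actually used.

```python
import cv2
import torch
import torchvision
import pyttsx3

# Pre-trained region-based CNN (Faster R-CNN); no training required.
weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
model.eval()
COCO_LABELS = weights.meta["categories"]

engine = pyttsx3.init()

def describe_frame(bgr_frame, score_threshold: float = 0.8) -> list[str]:
    """Run detection on one OpenCV frame and return the confidently detected object names."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    return [COCO_LABELS[int(label)] for label, score in zip(output["labels"], output["scores"])
            if float(score) >= score_threshold]

frame = cv2.imread("street.jpg")   # replace with any local test image
names = describe_frame(frame)
if names:
    engine.say("I can see " + ", ".join(names))
    engine.runAndWait()
```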
losing
## Inspiration With the increase in Covid-19 cases, the healthcare sector has experienced a shortage of PPE supplies. Many hospitals have turned to the public for donations. However, people who are willing to donate may not know what items are needed, which hospitals need them urgently, or even how to donate. ## What it does Corona Helping Hands is a real-time website that sources data directly from hospitals and ranks their needs based on bed capacity and the urgency of necessary items. An interested donor can visit the website and see which hospitals in their area are accepting donations, which specific items they need, and how to donate. ## How we built it We built the donation web application using: 1) HTML/CSS/Bootstrap (front-end web development) 2) Flask (back-end web framework) 3) Python (back-end language) ## Challenges we ran into We ran into issues integrating our map with the HTML page. Taking data and displaying it on the web application was not easy at first, but we were able to pull it off in the end. ## Accomplishments that we're proud of None of us had much experience in front-end web development, so that was challenging for all of us. However, we were able to complete a web application by the end of this hackathon, which we are all proud of. We are also proud of creating a platform that helps users support hospitals in need and gives them an easy way to figure out how to donate. ## What we learned This was the first time most of us had worked with web development, so we learned a lot on that aspect of the project. We also learned how to integrate an API with our project to show real-time data. ## What's next for Corona Helping Hands We hope to further improve our web application by integrating data from across the nation. We would also like to further improve the UI/UX of the app to enhance the user experience.
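A stripped-down version of the ranking endpoint might look like the Flask sketch below. The scoring formula and the data fields are assumptions made for illustration, since the write-up only says needs are ranked by bed capacity and urgency.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the real-time hospital feed the app pulls from.
HOSPITALS = [
    {"name": "General Hospital", "beds_in_use": 450, "bed_capacity": 500,
     "needs": [{"item": "N95 masks", "urgency": 3}, {"item": "gowns", "urgency": 2}]},
    {"name": "Community Clinic", "beds_in_use": 60, "bed_capacity": 120,
     "needs": [{"item": "face shields", "urgency": 1}]},
]

def priority(hospital: dict) -> float:
    """Higher score = more pressing: fuller hospitals with more urgent item requests."""
    occupancy = hospital["beds_in_use"] / hospital["bed_capacity"]
    urgency = sum(need["urgency"] for need in hospital["needs"])
    return occupancy * urgency

@app.route("/hospitals")
def ranked_hospitals():
    # Most urgent hospitals first, ready for the front-end map and list view.
    return jsonify(sorted(HOSPITALS, key=priority, reverse=True))

if __name__ == "__main__":
    app.run(debug=True)
```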
## SafeWatch *Elderly Patient Fall-Detection | Automated First Responder Information Relay* Considering the increasing number of **elderly patient falls**, SafeWatch automates the responsibilities of senior resident caregivers thus relieving them of substantial time commitments. SafeWatch is a **motion detection software** which recognizes collapsed persons and losses of balance. It is coupled with an instantaneous alert system which can notify building security, off-location loved ones or first responders. Easy integration into pre-existing surveillance camera systems allows for **low implementation costs**. It is a technology which allows us to continuously keep a watchful eye on our loved ones in their old age. **Future applications** of this software include expansion into public areas for rapid detection of car crashes, physical violence, and illicit activity.
## Inspiration In 9th grade, we were each given a cup of a clear liquid. We were told to walk around the class and exchange our cup of liquid with three other people in the class. One person in our class had a chemical that would cause the mixed liquid to turn red once two liquids were combined; the red liquid indicated that that person was infected. Each exchange consisted of pouring all the liquid from one cup into another, mixing, and pouring half of it back. At the end of the exercise, we were surprised to find that 90% of the class had been infected in just 3 exchanges by one single person. This exercise outlined how easy it is for an epidemic to turn into an uncontrollable pandemic. In this situation, prevention is the only effective method for stopping an epidemic. So, our team decided to create a product focused on aiding epidemiologists in preventing epidemic outbreaks. ## How it works The user enters a disease into our search filter and clicks on the disease he/she is looking for. The user then gets a map of where that disease was mentioned the most in the past month, along with places where it was recently mentioned a lot on Twitter. This provides data on the spread of the disease. ## How we built it The website uses the Twitter API to access Twitter's post database. We used Flask to connect the front end and the back end. ## Challenges we ran into One of the biggest challenges we ran into was definitely our skill and experience level with coding and web design, which were both, well... sub-par. We only knew a basic amount of HTML and CSS. When we first started designing our website, it looked like one of those pages that appear when the actual page of a website can't load fast enough. It took us a fair amount of googling and mentorship to get our website to look professional. But that wasn't even the hard part. None of us were familiar with back-end design, nor did we know the software required to connect the front end and back end. We only recently discovered what an API was, and by recently I mean 2 days ago, as in the day before the hackathon started. We didn't know about the Python Flask framework required to connect our front end and back end, the JavaScript required for managing search results, or the RESTful Python with JSON required to bring specific parts of the Twitter API database to users. In fact, by the time I send this devpost out, we're probably still working on the back end because we still haven't figured out how to make it work. (But we promise it will work by the deadline.) Another challenge was our group dynamic. We struggled at the beginning to agree on an idea. But, in the end, we fleshed out our ideas and came to an unconditional agreement. ## Accomplishments that we're proud of When my group told me that they were serious about making something that was obviously way beyond our skill level, I told them to snap back to reality, because we didn't know how to make the vast majority of the things we wanted to make. In fact, we didn't even know what was required to make what we wanted to make. I'm actually really glad they didn't listen to me, because we ended up doing things that we would never have imagined we could do. Looking back, it's actually pretty incredible that we were able to make a professional-looking and functioning site, coming in with basic HTML and CSS abilities. I'm really proud of the courage my team had to dive into unknown waters and be willing to learn, even at the risk of having nothing tangible to show for it. ## What we learned From googling and soliciting help from mentors and our peers, we got to sharpen the knowledge we already had about web design while getting exposure to so many other languages with different syntaxes and functions. Before HW3 I had no idea what Bootstrap, CSS... ## What's next for Reverse Outbreak We will improve the algorithm of our website, develop an app, and incorporate more epidemics from around the world.
partial
## Inspiration We were inspired by Katie's 3-month hospital stay as a child when she had a difficult-to-diagnose condition. During that time, she remembers being bored and scared -- there was nothing fun to do and no one to talk to. We also looked into the larger problem and realized that 10-15% of kids in hospitals develop PTSD from their experience (not their injury) and 20-25% in ICUs develop PTSD. ## What it does The AR iOS app we created presents educational, gamification aspects to make the hospital experience more bearable for elementary-aged children. These features include: * An **augmented reality game system** with **educational medical questions** that pop up based on image recognition of given hospital objects. For example, if the child points the phone at an MRI machine, a basic quiz question about MRIs will pop-up. * If the child chooses the correct answer in these quizzes, they see a sparkly animation indicating that they earned **gems**. These gems go towards their total gem count. * Each time they earn enough gems, kids **level-up**. On their profile, they can see a progress bar of how many total levels they've conquered. * Upon leveling up, children are presented with an **emotional check-in**. We do sentiment analysis on their response and **parents receive a text message** of their child's input and an analysis of the strongest emotion portrayed in the text. * Kids can also view a **leaderboard of gem rankings** within their hospital. This social aspect helps connect kids in the hospital in a fun way as they compete to see who can earn the most gems. ## How we built it We used **Xcode** to make the UI-heavy screens of the app. We used **Unity** with **Augmented Reality** for the gamification and learning aspect. The **iOS app (with Unity embedded)** calls a **Firebase Realtime Database** to get the user’s progress and score as well as push new data. We also use **IBM Watson** to analyze the child input for sentiment and the **Twilio API** to send updates to the parents. The backend, which communicates with the **Swift** and **C# code** is written in **Python** using the **Flask** microframework. We deployed this Flask app using **Heroku**. ## Accomplishments that we're proud of We are proud of getting all the components to work together in our app, given our use of multiple APIs and development platforms. In particular, we are proud of getting our flask backend to work with each component. ## What's next for HealthHunt AR In the future, we would like to add more game features like more questions, detecting real-life objects in addition to images, and adding safety measures like GPS tracking. We would also like to add an ALERT button for if the child needs assistance. Other cool extensions include a chatbot for learning medical facts, a QR code scavenger hunt, and better social interactions. To enhance the quality of our quizzes, we would interview doctors and teachers to create the best educational content.
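The parent-notification step is the easiest piece to show in isolation. The Twilio call below uses the library's standard messages.create interface; the sentiment helper is left as a placeholder because the project uses IBM Watson for that part, and the environment-variable names are assumptions.

```python
import os
from twilio.rest import Client

def strongest_emotion(child_text: str) -> str:
    """Placeholder for the IBM Watson sentiment/emotion analysis used in the app."""
    raise NotImplementedError("call the Watson NLU service here")

def notify_parent(child_text: str, parent_number: str) -> None:
    """Text the parent the child's check-in plus the detected dominant emotion."""
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    emotion = strongest_emotion(child_text)
    client.messages.create(
        to=parent_number,
        from_=os.environ["TWILIO_FROM_NUMBER"],
        body=f'Check-in from your child: "{child_text}" (dominant emotion: {emotion})',
    )
```

In the full app this runs in the Flask backend on Heroku each time a child completes an emotional check-in after leveling up.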
## Inspiration We got together a team passionate about social impact, and all the ideas we had kept going back to loneliness and isolation. We have all been in high-pressure environments where mental health was not prioritized, and we wanted to find a supportive and unobtrusive solution. After sharing some personal stories and observing our skillsets, the idea for Remy was born. **How can we create an AR buddy to be there for you?** ## What it does **Remy** is an app that contains an AR buddy who serves as a mental health companion. Through information accessed from "Apple Health" and "Google Calendar," Remy is able to help you stay on top of your schedule. He gives you suggestions on when to eat and when to sleep, and personally recommends articles on mental health hygiene. All this data is aggregated into a report that can then be sent to medical professionals. Personally, our favorite feature is his suggestions on when to go on walks and your ability to meet other Remy owners. ## How we built it We built an iOS application in Swift with ARKit and SceneKit, with Apple Health data integration. Our 3D models were created with Mixamo. ## Challenges we ran into We did not want Remy to promote codependency in its users, so we specifically set time aside to think about how we could create a feature focused on socialization. We've never worked with AR before, so this was an entirely new set of skills to learn. Our biggest challenge was learning how to position AR models in a given scene. ## Accomplishments that we're proud of We have a functioning app with an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for. ## What we learned Aside from this being the first time many of the team had worked with AR, the main learning point was all the data that we gathered on the suicide epidemic among adolescents. Suicide rates have increased by 56% in the last 10 years, and this will only continue to get worse. We need change. ## What's next for Remy While our team has set out for Remy to be used in a college setting, we envision many other relevant use cases where Remy will be able to better support one's mental health and wellness. Remy can be used as a tool by therapists to get better insights on sleep patterns and outdoor activity done by their clients, and this data can be used to further improve the client's recovery process. Clients who use Remy can send their activity logs to their therapists before sessions with a simple click of a button. To top it off, we envisage the Remy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene tips and even lifestyle advice, Remy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
# Inspiration In the past few years, we have witnessed a large spike in crime in the US, making the streets less safe and putting innocent people at risk. HomeSafe is an attempt to solve this problem by providing a simple solution to help people navigate the streets without taking unnecessary risks. # What HomeSafe does HomeSafe is a cross-platform web application that aggregates crime data from multiple different sources - including real-time radio data analyzed using voice recognition and NLP - to assess the safety of different areas and help users navigate around areas deemed particularly unsafe, while still allowing them to comfortably reach their destination. # How we built it * Frontend: HTML/CSS/JavaScript * Backend: Flask * Database: Cockroach DB Serverless * Voice Recognition: Google Cloud Services * NLP: Cohere We aggregate data from multiple different sources, including crime data scraped from the Berkeley PD and real-time reports from police scanners. For this, we use Google Cloud Services to transcribe the incoming audio and then extract locations of interest using a custom Cohere model. All of the extracted data is stored using Cockroach DB. After extracting the location and severity of crimes based on our data, we assign safety ratings to areas based on crime frequency, severity and population density. We then use these ratings to assign weights to every path taking into account distance, safety and user priority. Finally, we find the shortest path using the GraphHopper routing engine with custom weights. # Challenges This was our first foray into voice recognition and NLP. Thus, the initial learning curve was quite steep, as we had to learn many new concepts and technologies.
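To make the "custom weights" idea concrete, here is a small Python sketch of how an edge weight could blend distance with a safety rating according to a user-chosen priority, and how a standard shortest-path search would then pick the route. The blending formula and the example graph are assumptions; the real project delegates routing to the GraphHopper engine rather than computing it by hand.

```python
import heapq

def edge_weight(distance_m: float, safety_rating: float, safety_priority: float) -> float:
    """Blend distance and danger. safety_rating: 0 (safe) .. 1 (dangerous);
    safety_priority: 0 = shortest path only, 1 = strongly avoid unsafe areas."""
    return distance_m * (1.0 + safety_priority * safety_rating * 4.0)

def safest_short_path(graph: dict, start: str, goal: str, safety_priority: float) -> list[str]:
    """Dijkstra over edges of the form graph[u] = [(v, distance_m, safety_rating), ...]."""
    queue = [(0.0, start, [start])]
    seen: set[str] = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist, danger in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge_weight(dist, danger, safety_priority), nxt, path + [nxt]))
    return []

campus = {
    "home":    [("alley", 200, 0.9), ("main st", 350, 0.1)],
    "alley":   [("library", 300, 0.7)],
    "main st": [("library", 300, 0.1)],
}
print(safest_short_path(campus, "home", "library", safety_priority=1.0))  # prefers the longer, safer route
```

With safety_priority set to 0 the same search collapses to an ordinary shortest path, which mirrors how user priority is meant to trade distance against safety.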
winning
## Inspiration Food waste is an unfortunate and prevalent issue in society. According to the UN Environment Programme’s Food Waste Index Report, over 1 billion tons of food went to waste globally in the most recent year. In particular, households often forget about or leave food long past its shelf life, contributing significantly to this problem. We wanted to tackle this issue by helping people keep track of the food they have at home and make better use of their purchases, benefiting both their wallets and the environment. ## What it does Wastefree is an app that helps users manage their pantry and fridge by scanning receipts, tracking items, and providing notifications when food is nearing expiration. The app offers recipe suggestions based on what’s available and highlights how much food and money has been wasted, giving users insights to make more informed decisions about their consumption habits. ## How we built it We built the app using the Python programming language and the Kivy framework for the front-end interface. Receipt scanning and data extraction are powered by Mindee's Optical Character Recognition (OCR) API. For data management, we used SQLite as the database to store pantry items and user information. Passwords are securely hashed using bcrypt to ensure user safety. ## Challenges we ran into We faced challenges when integrating the OCR functionality with the app, particularly in parsing and processing various receipt formats. Additionally, designing an intuitive and user-friendly UI for managing pantry items while also allowing for receipt data edits was a key challenge we worked through. ## Accomplishments that we're proud of We’re proud of the app’s ability to accurately extract and manage receipt data and translate it into actionable insights for users. Successfully integrating multiple technologies like Kivy, SQLite, and Mindee into one cohesive solution was a big win for us. ## What we learned Through this project, we learned a lot about app development, particularly in balancing functionality with usability. We also gained deeper insights into working with OCR APIs, managing databases, and securing user information through password hashing. ## What's next for WasteFree Moving forward, we want to refine our receipt scanning capabilities to handle a wider variety of receipts and packaging formats. Additionally, we plan to integrate more advanced data analytics and provide users with personalized sustainability tips based on their waste habits. Expanding the app to support multiple users and collaborative features like shared pantry management is also in our future plans.
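The account-handling portion mentioned above (SQLite plus bcrypt) follows a standard pattern; a minimal sketch is shown below, with the table layout and column names assumed for illustration.

```python
import sqlite3
import bcrypt

conn = sqlite3.connect("wastefree.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (username TEXT PRIMARY KEY, password_hash BLOB)")

def register(username: str, password: str) -> None:
    # Never store the raw password: store a salted bcrypt hash instead.
    hashed = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
    conn.execute("INSERT INTO users VALUES (?, ?)", (username, hashed))
    conn.commit()

def login(username: str, password: str) -> bool:
    row = conn.execute("SELECT password_hash FROM users WHERE username = ?", (username,)).fetchone()
    return row is not None and bcrypt.checkpw(password.encode("utf-8"), row[0])

register("sam", "correct horse battery staple")
print(login("sam", "correct horse battery staple"))   # True
print(login("sam", "wrong password"))                 # False
```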
## Inspiration Each year, approximately 1.3 billion tonnes of produced food goes to waste, a startling statistic that we found to be truly unacceptable, especially for the 21st century. The impacts of such waste are widespread, ranging from the millions of starving individuals around the world who could in theory have been fed with this food, to the progression of global warming driven by the greenhouse gases released from decaying food waste. Ultimately, the problem at hand was one that we wanted to fix with an application, which led us precisely to the idea of Cibus, an application that helps the common householder manage the food in their fridge with ease and minimize waste throughout the year. ## What it does Essentially, our app works in two ways. First, the app uses image processing to take pictures of receipts and extract the information from them, which we then process further to identify the food purchased and the amount of time until each particular food item expires. This information is collectively stored in a dictionary that is specific to each user of the app. The second thing our app does is sort through the list of food items that a user has in their home and prioritize the foods that are closest to expiry. With this prioritized list, the app then suggests recipes that maximize the use of food that is about to expire, so that as little of it as possible goes to waste once the user makes the recipes using the expiring ingredients in their home. ## How we built it We essentially split the project into front-end and back-end work. On the front end, we used iOS development to create the design for the app and sent requests to the back end for the information that needed to be displayed in the app itself. On the back end, we used Flask, with Cloud9 as a development environment, to compose the code necessary to make the app run. We incorporated image-processing APIs as well as a recipe API to help our app accomplish the goals we set out for it. Furthermore, we were able to code our app such that individual accounts can be created within it, and most of its functionality was implemented here. We used Google Cloud Vision for OCR and Microsoft Azure for cognitive processing in order to implement a spell check in our app. ## Challenges we ran into A lot of the challenges initially derived from identifying the scope of the program and how far we wanted to take the app. Ultimately, we were able to decide on an end goal and began programming. Along the way, many roadblocks occurred, including how to integrate the back end seamlessly into the front end and, more importantly, how to integrate the image-processing API into the app. Our first attempt at an image-processing API did not go well, as the API only allowed one website to be searched at a time, when more were required to find instances of all of the food items we needed to plug into the app. We then turned to Google Cloud Vision, which worked well with the app and allowed us to identify the writing on receipts. ## Accomplishments that we're proud of We are proud to report that the app works and that a user can accurately upload information to the app and generate recipes that correspond to the items that are about to expire the soonest. Ultimately, we worked together well throughout the weekend and are proud of the final product.
## What we learned We learnt that integrating image processing can be harder than initially expected, but manageable. Additionally, we learned how to program an app from front to back in a manner that blends harmoniously, such that the app is solid both in its interface and in how it retrieves information. ## What's next for Cibus There remain a lot of functionalities that can be further optimized within the app, such as the number of foods with corresponding expiry dates in the database. Furthermore, in the future we would like the user to be able to take a picture of a food item and have its information automatically uploaded to the app.
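The "suggest recipes that use up what's about to expire" step can be sketched as a simple scoring pass. This hypothetical Python version scores each recipe by how urgently its ingredients are expiring, which is one reasonable reading of the prioritization described above rather than Cibus's actual implementation (which queries a recipe API).

```python
from datetime import date

def urgency(expiry: date, today: date) -> float:
    """Weight ingredients more heavily the closer they are to expiring."""
    days_left = max((expiry - today).days, 0)
    return 1.0 / (days_left + 1)

def rank_recipes(pantry: dict[str, date], recipes: dict[str, set[str]], today: date) -> list[str]:
    """Order recipe names so those using the most urgent ingredients come first."""
    def score(ingredients: set[str]) -> float:
        return sum(urgency(pantry[i], today) for i in ingredients if i in pantry)
    return sorted(recipes, key=lambda name: score(recipes[name]), reverse=True)

pantry = {"spinach": date(2024, 3, 14), "chicken": date(2024, 3, 15), "rice": date(2024, 6, 1)}
recipes = {"spinach omelette": {"spinach", "eggs"},
           "chicken fried rice": {"chicken", "rice", "eggs"}}
print(rank_recipes(pantry, recipes, today=date(2024, 3, 13)))
```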
## Inspiration It's more important than ever for folks to inform themselves on how to reduce waste production, and the information isn't always as easy to find as it could be. With GreenShare, a variety of information on how to reduce, reuse and recycle any product is just a click away. ## What it does Allows you to scan any product by barcode or image, and then gives you insight into how you might proceed to reduce your waste ## How we built it Using a Flutter barcode-scanner and image-recognition package, and Firebase for our database ## Challenges we ran into The image recognition proved to be more finicky than anticipated ## Accomplishments that we're proud of Having produced a nice UI that has all the basic functionality it needs to be the start of something new ## What we learned To never underestimate the roadblocks you'll come across at any time! Also lots of Flutter ## What's next for GreenShare Implementing some more communal aspects which would allow users to collaborate more in the green effort ## Ivey business model challenge <https://docs.google.com/presentation/d/18f5gj-cJ79kL53FLt4npz_2CM2wjaN04F9OoPKwpPKs/edit#slide=id.g76c67f0fc7_6_66>
losing
# 💡 Inspiration Meeting new people is an excellent way to broaden your horizons and discover different cuisines. Dining with others is a wonderful opportunity to build connections and form new friendships. In fact, eating alone is one of the primary causes of unhappiness, second only to mental illness and financial problems. Therefore, it is essential to make an effort to find someone to share meals with. By trying new cuisines with new people and exploring new neighbourhoods, you can make new connections while enjoying delicious food. # ❓ What it does PlateMate is a unique networking platform that connects individuals in close proximity and provides the setup of an impromptu meeting over some great food! It enables individuals to explore new cuisines and new individuals by using Cohere to process human-written text and discern an individual’s preferences, interests, and other attributes. This data is then aggregated to optimize a matching algorithm that pairs users. Along with a matchmaking feature, PlateMate utilizes Google APIs to highlight nearby restaurant options that fit into users’ budgets. The app’s recommendations consider a user’s budget to help regulate spending habits and make managing finances easier. PlateMate takes into account many factors to ensure that users have an enjoyable and reliable experience on the platform. # 🚀 Exploration PlateMate provides opportunities for exploration by expanding social circles with interesting individuals with different life experiences and backgrounds. You are matched to other nearby users with similar cuisine preferences but differing interests. Restaurant suggestions are also provided based on your characteristics and your match’s characteristics. This provides invaluable opportunities to explore new cultures and identities. As the world emerges from years of lockdown and the COVID-19 pandemic, it is more important than ever to find ways to reconnect with others and explore different perspectives. # 🧰 How we built it **React, Tailwind CSS, Figma**: The client side of our web app was built using React and styled with Tailwind CSS based on a high-fidelity mockup created on Figma. **Express.js**: The backend server was made using Express.js and managed routes that allowed our frontend to call third-party APIs and obtain results from Cohere’s generative models. **Cohere**: User-specific keywords were extracted from brief user bios using Cohere’s generative LLMs. Additionally, after two users were matched, Cohere was used to generate a brief justification of why the two users would be a good match and provide opportunities for exploration. **Google Maps Platform APIs**: The Google Maps API was used to display a live and dynamic map on the homepage and provide autocomplete search suggestions. The Google Places API obtained lists of nearby restaurants, as well as specific information about restaurants that users were matched to. **Firebase**: User data for both authentication and matching purposes, such as preferred cuisines and interests, were stored in a Cloud Firestore database. 
# 🤔 Challenges we ran into * Obtaining desired output and formatting from Cohere with longer and more complicated prompts * Lack of current and updated libraries for the Google Maps API * Creating functioning Express.js routes that connected to our React client * Maintaining a cohesive and productive team environment when sleep deprived # 🏆 Accomplishments that we're proud of * This was the first hackathon for two of our team members * Creating a fully-functioning full-stack web app with several new technologies we had never touched before, including Cohere and Google Maps Platform APIs * Extracting keywords and generating JSON objects with a high degree of accuracy using Cohere # 🧠 What we learned * Prompt engineering, keyword extraction, and text generation in Cohere * Server and route management in Express.js * Design and UI development with Tailwind CSS * Dynamic map display and search autocomplete with Google Maps Platform APIs * UI/UX design in Figma * REST API calls # 👉 What's next for PlateMate * Provide restaurant suggestions that are better tailored to users’ budgets by using Plaid’s financial APIs to accurately determine their average spending * Connect users directly through an in-app chat function * Friends and network system * Improved matching algorithm
## Inspiration Have you ever stared into your refrigerator or pantry and had no idea what to eat for dinner? Pantri provides a visually compelling picture of which foods you should eat based on your current nutritional needs and offers recipe suggestions through Amazon Alexa voice integration. ## What it does Pantri uses FitBit data to determine which nutritional goals you have or haven't met for the day, then transforms your kitchen with Intel Edison-connected RGB LCD screens and LIFX lighting to lead you in the direction of healthy options as well as offer up recipe suggestions to balance out your diet and clean out your fridge. ## How we built it The finished hack is a prototype food storage unit (pantry) made of cardboard, duct tape, and plexiglass. It is connected to the backend through button and touch sensors as well as LIFX lights and RGB LCD screens. Pressing the button allows you to view the nutritional distribution without opening the door, and opening the door activates the touch sensor. The lights and screens indicate which foods (sorted onto shelves based on nutritional groups) are the best choices. Users can also verbally request a dinner suggestion which will be offered based on which nutritional categories are most needed. At the center of the project is a Ruby on Rails server hosted live on Heroku. It stores user nutrition data, provides a mobile web interface, processes input from the button and touch sensors, and controls the LIFX lights as well as the color and text of the RGB LCD screens. Additionally, we set up three Intel Edison microprocessors running Cylon.js (built on top of Node.js) with API interfaces so information about their attached button, touch sensor, and RGB LCD screens can be connected to the Rails server. Finally, an Amazon Alexa (connected through an Amazon Echo) connects users with the best recipes based on their nutritional needs through a voice interface.
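As a rough illustration of the shelf-guidance idea described above (the nutrient groups, goal numbers, and colour choices here are invented for the example; the real system runs on the Rails server with Fitbit data driving the LIFX lights and LCD screens):

```python
# Illustrative sketch: rank nutrient groups by how far the user is from their
# daily goal, then map the neediest shelves to a "go" colour.
DAILY_GOALS = {"protein": 50, "fiber": 30, "fruit_veg": 5, "dairy": 3}   # example targets

def shelf_colours(intake):
    gaps = {group: max(goal - intake.get(group, 0), 0) / goal
            for group, goal in DAILY_GOALS.items()}
    # Shelves for the two most-needed groups glow green, the rest stay neutral.
    needed = sorted(gaps, key=gaps.get, reverse=True)[:2]
    return {group: ("green" if group in needed and gaps[group] > 0 else "white")
            for group in DAILY_GOALS}

print(shelf_colours({"protein": 20, "fiber": 28, "fruit_veg": 1, "dairy": 3}))
# {'protein': 'green', 'fiber': 'white', 'fruit_veg': 'green', 'dairy': 'white'}
```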
## Inspiration As college students, we face the problem every day of having to text lots of people to ask them to go to the dining hall together, all while feeling that we might be annoying the other person. In the times of COVID-19, local restaurants are suffering economically as more and more people opt to cook their own meals. However, Lezeat is the solution! With Lezeat you can quickly see who is able to go for lunch, and arrange a location and time in just one click. Not only will you gain valuable friendships, you will also help the local restaurant community grow. Furthermore, we plan on integrating a “meat someone” feature, where you can grab a meal with people outside of your friend group but within your community (school/place of employment). ## What we learned During our research we found out that the restaurant industry has grown dramatically over the last few decades, going from total sales of $379 billion in 2000 to $798.7 billion in 2017! Additionally, Americans have continued to budget more and more of their money towards eating out, which is a positive for Lezeat. Fast-food restaurants such as Chipotle and McDonald’s posted 10% and 5.7% growth in sales in Q2 of 2019. We also further developed our teamwork and programming skills. ## How did we build it? After brainstorming various ideas, we got to work and divided ourselves into groups. Not only did we finish the whole prototype in Figma and write a Software Requirements Specification on Overleaf, we also coded a big chunk of the application in React Native and Node.js. Furthermore, we built the logo and explored the market and potential competitors. ## Challenges we faced Filtering out the ideas
partial
## Inspiration Many times, humans are not fully informed about the invisible irritants in their environment. For every single location on Earth, there exist many different environmental sources that have the potential to cause harm, from minor inconveniences to life-altering reactions and medical emergencies. The three most common environmentally affected medical conditions are allergies, asthma, and melanoma. Currently, society does not have an accurate way to pinpoint the exact parameters of all the different types of irritants. ## What it does GeoHealth is a revolutionary technology designed to disrupt the irritant detection space. It allows the user to select any point on the Earth and view a score based on an array of considered parameters. This score is used to provide an index, which quantifies the amount of danger present within that respective category. The app uses the Google Maps SDK for Android and its APIs. This enables location selection on the Google map, as well as showing and finding the current location. Once a location is selected on the map, the user confirms it, and the score is then calculated with a range of OpenWeather APIs. The APIs return values with important information for people with health conditions. It can thus help affected persons decide on the safety of travelling to the selected location, and it presents that data. ## How we built it We used Android Studio, writing code in Java to create the Android app, JavaScript on the backend to call the OpenWeather APIs, and a Google Firebase Realtime Database to transmit data to and from the app and server. We used the Google Maps SDK for Android and its APIs. ## Challenges we ran into The time crunch of one weekend at Yale was quite the challenge for completing our ambitious goals. Also some interesting event listener issues, and some 403 errors. ## Accomplishments that we're proud of We really bonded well as a team, and we were able to make a project we are all proud of. An idea that was both uniquely ours, and impactful for the health of many. ## What we learned We learned that a good team will persevere, and that online maps are quite impressive. ## What's next for GeoHealth Sleeping :)
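A minimal sketch of how a per-category irritant index could be blended from weather readings; the parameter names, worst-case values, and weights are assumptions, not the actual OpenWeather fields or the app's real formula:

```python
# Hypothetical scoring sketch: normalise a few environmental readings to 0-1
# and blend them into a single 0-100 "irritant index" for a category.
def normalise(value, worst):
    """Clamp a reading into 0..1, where 1 means 'as bad as our chosen worst case'."""
    return max(0.0, min(value / worst, 1.0))

def irritant_index(uv_index, pm2_5, humidity_pct):
    # Weights are illustrative only; a real app would tune them per health condition.
    score = (0.4 * normalise(uv_index, 11)       # UV scale tops out around 11+
             + 0.4 * normalise(pm2_5, 150)       # fine particulate matter, ug/m3
             + 0.2 * normalise(humidity_pct, 100))
    return round(score * 100)

print(irritant_index(uv_index=7, pm2_5=35, humidity_pct=80))  # ~51, a mid-range score
```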
## Inspiration The loneliness epidemic is a real thing. You don't get meaningful engagement with others just by liking and commenting on Instagram posts; you get it by having real conversations, whether it's a text exchange, phone call, or Zoom meeting. This project was inspired by the idea of reviving weak links in our network as described in *The Defining Decade*: "Weak ties are the people we have met, or are connected to somehow, but do not currently know well. Maybe they are the coworkers we rarely talk with or the neighbor we only say hello to. We all have acquaintances we keep meaning to go out with but never do, and friends we lost touch with years ago. Weak ties are also our former employers or professors and any other associations who have not been promoted to close friends." ## What it does This web app helps bridge the divide between wanting to connect with others and actually connecting with others. In our MVP, the web app brings up a card with information on someone you are connected to. Users can swipe right to show interest in reconnecting or swipe left if they are not interested. In this way the process of finding people to reconnect with is gamified. If both people show interest in reconnecting, you are notified and can now connect! And if one person isn't interested, the other person will never know ... no harm done! ## How we built it The web app was built using React and deployed with Google Cloud's Firebase ## Challenges we ran into We originally planned to use Twitter's API to aggregate data and recommend matches for our demo, but getting the developer account took longer than expected. After getting a developer account, we realized that we didn't use Twitter all that much, so we had no data to display. Another challenge we ran into was that we didn't have a lot of experience building web apps, so we had to learn on the fly. ## Accomplishments that we're proud of We came into this hackathon with little experience in web development, so it's amazing to see how far we have been able to progress in just 36 hours! ## What we learned REACT! Also, we learned about how to publish a website, and how to access APIs! ## What's next for Rekindle Since our product is an extension or application within an existing social media platform, our next steps would be to partner with Facebook, Twitter, LinkedIn, or other social media sites. Afterward, we would develop an algorithm to aggregate a user's connections on a given social media site and optimize the card-swiping feature to recommend the people you are most likely to connect with.
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application that is aimed towards connecting members of our society together in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when someone within a 300 meter radius is having a medical emergency. This can help those in need receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication is done through the use of Firebase Auth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users can take a picture of their ID and their information is extracted automatically. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before.
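The 300 m proximity check described above boils down to a great-circle distance comparison. A sketch in Python (the data layout and sample coordinates are assumptions; the real app does this against the Firebase Realtime Database):

```python
# Notify registered respondents whose last known location is within 300 m
# of the emergency request.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def nearby_respondents(emergency, respondents, radius_m=300):
    return [r for r in respondents
            if haversine_m(emergency["lat"], emergency["lon"], r["lat"], r["lon"]) <= radius_m]

respondents = [{"name": "Dana", "lat": 43.2609, "lon": -79.9192},
               {"name": "Eli",  "lat": 43.2700, "lon": -79.9500}]
print(nearby_respondents({"lat": 43.2610, "lon": -79.9190}, respondents))  # only Dana
```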
losing
## What it does Blink is a communication tool for those who cannot speak or move, while being significantly more affordable and accurate than current technologies on the market. [The ALS Association](http://www.alsa.org/als-care/augmentative-communication/communication-guide.html) recommends a $10,000 communication device to solve this problem—but Blink costs less than $20 to build. You communicate using Blink through a modified version of **Morse code**. Blink out letters and characters to spell out words, and in real time from any device, your caretakers can see what you need. No complicated EEG pads or camera setup—just a small, unobtrusive sensor can be placed to read blinks! The Blink service integrates with [GIPHY](https://giphy.com) for GIF search, [Earth Networks API](https://www.earthnetworks.com) for weather data, and [News API](https://newsapi.org) for news. ## Inspiration Our inspiration for this project came from [a paper](http://www.wearabletechnologyinsights.com/articles/11443/powering-devices-through-blinking) published on an accurate method of detecting blinks, but it uses complicated, expensive, and less-accurate hardware like cameras—so we made our own **accurate, low-cost blink detector**. ## How we built it The backend consists of the sensor and a Python server. We used a capacitive touch sensor on a custom 3D-printed mounting arm to detect blinks. This hardware interfaces with an Arduino, which sends the data to a Python/Flask backend, where the blink durations are converted to Morse code and then matched to English characters. The frontend is written in React with [Next.js](https://github.com/zeit/next.js) and [`styled-components`](https://styled-components.com). In real time, it fetches data from the backend and renders the in-progress character and characters recorded. You can pull up this web app from multiple devices—like an iPad in the patient’s lap, and the caretaker’s phone. The page also displays weather, news, and GIFs for easy access. **Live demo: [blink.now.sh](https://blink.now.sh)** ## Challenges we ran into One of the biggest technical challenges building Blink was decoding blink durations into short and long blinks, then Morse code sequences, then standard characters. Without any libraries, we created our own real-time decoding process of Morse code from scratch. Another challenge was physically mounting the sensor in a way that would be secure but easy to place. We settled on using a hat with our own 3D-printed mounting arm to hold the sensor. We iterated on several designs for the arm and methods for connecting the wires to the sensor (such as aluminum foil). ## Accomplishments that we're proud of The main point of PennApps is to **build a better future**, and we are proud of the fact that we solved a real-world problem applicable to a lot of people who aren't able to communicate. ## What we learned Through rapid prototyping, we learned to tackle difficult problems with new ways of thinking. We learned how to efficiently work in a group with limited resources and several moving parts (hardware, a backend server, a frontend website), and were able to get a working prototype ready quickly. ## What's next for Blink In the future, we want to simplify the physical installation, streamline the hardware, and allow multiple users and login on the website. Instead of using an Arduino and breadboard, we want to create glasses that would provide a less obtrusive mounting method. In essence, we want to perfect the design so it can easily be used anywhere. 
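For readers curious about the decoding step, here is a simplified re-creation of the idea in Python; the duration thresholds and the input format are assumptions, not the project's actual values:

```python
# Classify each blink by duration, group blinks into Morse letters, then look them up.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
         "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
         "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
         "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
         "-.--": "Y", "--..": "Z"}

def decode(blinks, long_blink_s=0.4, letter_gap_s=1.5):
    """blinks: list of (duration_seconds, gap_before_next_blink_seconds) tuples."""
    text, symbols = "", ""
    for duration, gap in blinks:
        symbols += "-" if duration >= long_blink_s else "."
        if gap >= letter_gap_s:                 # a long pause ends the current letter
            text += MORSE.get(symbols, "?")
            symbols = ""
    return text + (MORSE.get(symbols, "?") if symbols else "")

# Four quick blinks, a pause, then one quick blink: ".... ." -> "HE"
print(decode([(0.1, 0.2), (0.1, 0.2), (0.1, 0.2), (0.1, 2.0), (0.1, 2.0)]))
```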
Thank you!
## Inspiration Retinal degeneration affects 1 in 3000 people, slowly robbing them of vision over the course of their mid-life. The need to adjust to life without vision, often after decades of relying on it for daily life, presents a unique challenge to individuals facing genetic disease or ocular injury, one which our teammate saw firsthand in his family, and inspired our group to work on a modular, affordable solution. Current technologies which provide similar proximity awareness often cost many thousands of dollars, and require a niche replacement in the user's environment; (shoes with active proximity sensing similar to our system often cost $3-4k for a single pair of shoes). Instead, our group has worked to create a versatile module which can be attached to any shoe, walker, or wheelchair, to provide situational awareness to the thousands of people adjusting to their loss of vision. ## What it does (Higher quality demo on google drive link!: <https://drive.google.com/file/d/1o2mxJXDgxnnhsT8eL4pCnbk_yFVVWiNM/view?usp=share_link> ) The module is constantly pinging its surroundings through a combination of IR and ultrasonic sensors. These are readily visible on the prototype, with the ultrasound device looking forward, and the IR sensor looking to the outward flank. These readings are referenced, alongside measurements from an Inertial Measurement Unit (IMU), to tell when the user is nearing an obstacle. The combination of sensors allows detection of a wide gamut of materials, including those of room walls, furniture, and people. The device is powered by a 7.4v LiPo cell, which displays a charging port on the front of the module. The device has a three hour battery life, but with more compact PCB-based electronics, it could easily be doubled. While the primary use case is envisioned to be clipped onto the top surface of a shoe, the device, roughly the size of a wallet, can be attached to a wide range of mobility devices. The internal logic uses IMU data to determine when the shoe is on the bottom of a step 'cycle', and touching the ground. The Arduino Nano MCU polls the IMU's gyroscope to check that the shoe's angular speed is close to zero, and that the module is not accelerating significantly. After the MCU has established that the shoe is on the ground, it will then compare ultrasonic and IR proximity sensor readings to see if an obstacle is within a configurable range (in our case, 75cm front, 10cm side). If the shoe detects an obstacle, it will activate a pager motor which vibrates the wearer's shoe (or other device). The pager motor will continue vibrating until the wearer takes a step which encounters no obstacles, thus acting as a toggle flip-flop. An RGB LED is added for our debugging of the prototype: RED - Shoe is moving - In the middle of a step GREEN - Shoe is at bottom of step and sees an obstacle BLUE - Shoe is at bottom of step and sees no obstacles While our group's concept is to package these electronics into a sleek, clip-on plastic case, for now the electronics have simply been folded into a wearable form factor for demonstration. ## How we built it Our group used an Arduino Nano, batteries, voltage regulators, and proximity sensors from the venue, and supplied our own IMU, kapton tape, and zip ties. (yay zip ties!) I2C code for basic communication and calibration was taken from a user's guide of the IMU sensor. Code used for logic, sensor polling, and all other functions of the shoe was custom. All electronics were custom. 
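A simplified software model of the decision loop just described (the gyro threshold and the stand-in sensor values are assumptions; the real logic runs on the Arduino Nano in C++):

```python
# One poll of the step/obstacle logic: only check proximity when the shoe has landed,
# then keep the pager motor on until a later step sees a clear path.
GYRO_STILL_DPS = 5.0          # shoe considered "at rest" below this angular speed
FRONT_LIMIT_CM = 75
SIDE_LIMIT_CM = 10

def shoe_at_bottom_of_step(gyro_dps, accel_g):
    return abs(gyro_dps) < GYRO_STILL_DPS and abs(accel_g - 1.0) < 0.1

def obstacle_detected(ultrasonic_cm, ir_cm):
    return ultrasonic_cm <= FRONT_LIMIT_CM or ir_cm <= SIDE_LIMIT_CM

def update(gyro_dps, accel_g, ultrasonic_cm, ir_cm, motor_on):
    """Return the new pager-motor state after one sensor poll."""
    if not shoe_at_bottom_of_step(gyro_dps, accel_g):
        return motor_on                     # mid-step: keep whatever state we had
    return obstacle_detected(ultrasonic_cm, ir_cm)

# A landed step 60 cm from a wall turns the motor on; a clear step turns it off.
state = update(gyro_dps=1.2, accel_g=1.02, ultrasonic_cm=60, ir_cm=40, motor_on=False)
state = update(gyro_dps=0.8, accel_g=0.99, ultrasonic_cm=200, ir_cm=40, motor_on=state)
print(state)  # False
```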
Testing was done on the circuits by first assembling the Arduino Microcontroller Unit (MCU) and sensors on a breadboard, powered by laptop. We used this setup to test our code and fine tune our sensors, so that the module would behave how we wanted. We tested and wrote the code for the ultrasonic sensor, the IR sensor, and the gyro separately, before integrating as a system. Next, we assembled a second breadboard with LiPo cells and a 5v regulator. The two 3.7v cells are wired in series to produce a single 7.4v 2S battery, which is then regulated back down to 5v by an LM7805 regulator chip. One by one, we switched all the MCU/sensor components off of laptop power, and onto our power supply unit. Unfortunately, this took a few tries, and resulted in a lot of debugging. After a circuit was finalized, we moved all of the breadboard circuitry to harnessing only, then folded the harnessing and PCB components into a wearable shape for the user. ## Challenges we ran into The largest challenge we ran into was designing the power supply circuitry, as the combined load of the sensor DAQ package exceeds amp limits on the MCU. This took a few tries (and smoked components) to get right. The rest of the build went fairly smoothly, with the other main pain points being the calibration and stabilization of the IMU readings (this simply necessitated more trials) and the complex folding of the harnessing, which took many hours to arrange into its final shape. ## Accomplishments that we're proud of We're proud to find a good solution to balance the sensibility of the sensors. We're also proud of integrating all the parts together, supplying them with appropriate power, and assembling the final product as small as possible all in one day. ## What we learned Power was the largest challenge, both in terms of the electrical engineering, and the product design- ensuring that enough power can be supplied for long enough, while not compromising on the wearability of the product, as it is designed to be a versatile solution for many different shoes. Currently the design has a 3 hour battery life, and is easily rechargeable through a pair of front ports. The challenges with the power system really taught us firsthand how picking the right power source for a product can determine its usability. We were also forced to consider hard questions about our product, such as if there was really a need for such a solution, and what kind of form factor would be needed for a real impact to be made. Likely the biggest thing we learned from our hackathon project was the importance of the end user, and of the impact that engineering decisions have on the daily life of people who use your solution. For example, one of our primary goals was making our solution modular and affordable. Solutions in this space already exist, but their high price and uni-functional design mean that they are unable to have the impact they could. Our modular design hopes to allow for greater flexibility, acting as a more general tool for situational awareness. ## What's next for Smart Shoe Module Our original idea was to use a combination of miniaturized LiDAR and ultrasound, so our next steps would likely involve the integration of these higher quality sensors, as well as a switch to custom PCBs, allowing for a much more compact sensing package, which could better fit into the sleek, usable clip on design our group envisions. 
Additional features might include the use of different vibration modes to signal directional obstacles and paths, and indeed expanding our group's concept of modular assistive devices to other solution types. We would also look forward to making a more professional demo video. A current example clip of the prototype module taking measurements: (<https://youtube.com/shorts/ECUF5daD5pU?feature=share>)
## Inspiration We wanted to come to Deltahacks not just to create some product but to create a product with lasting impact. We chose to tackle an area that we felt was underserved. We decided to create a command portal for people with ALS. A command portal? You're probably wondering what we mean. We'll show you as we go along! ## What it does We created a command portal that allows people with ALS to communicate with the world. If you know anyone who suffers from ALS then you would also know how they are completely immobilized. We decided to make blinking the medium of communication between the user and the device! With this we were able to have users open and close doors, send help messages to a loved one and even open a blink-to-text translator that allows the user to communicate with the world in ways we had never fathomed. ## How we built it The entire setup consists of 4 main features. The first feature is the blink detection, which is powered by 3D vector and spatial mapping. With this technology we are able to map the human eye using the capabilities of any device with a depth sensor. Next up we have the text message feature. Here, a user with ALS who is in trouble can blink a certain number of times to trigger an automated message that lets a loved one know they are in trouble. This is powered by the Twilio API. Next up we have the door open and close feature. This feature allows users who wish to open doors to blink a certain pattern to trigger a door opening or closing. The final feature is a blink-to-text translator which uses Morse code to identify letters and display them on the screen. We feel that with Morse code the possibilities are endless for people with ALS to move towards a more connected life! ## What's next for BlinkBuddy We plan to pursue one of two possibilities. The first is to work on the computer vision and spatial mapping and increase their accuracy; the second is to make our blink detection more accurate using EMGs. EMGs, more commonly known as electromyography sensors, are muscle activity detectors and could be paired with the blink-detection computer vision to identify blinks with greater accuracy
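As a sketch of how the Twilio-powered help message could be triggered (the blink-count threshold, environment variable names, and phone number are placeholders; the real trigger comes from the vision pipeline):

```python
# Send an SMS once a deliberate blink pattern is detected.
import os
from twilio.rest import Client

HELP_BLINK_COUNT = 5   # assumed pattern: five deliberate blinks in a row

def maybe_send_help(consecutive_blinks, caregiver_number):
    if consecutive_blinks < HELP_BLINK_COUNT:
        return None
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    return client.messages.create(
        body="BlinkBuddy alert: your loved one has signalled that they need help.",
        from_=os.environ["TWILIO_FROM_NUMBER"],
        to=caregiver_number,
    )

# maybe_send_help(5, "+15551234567")
```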
partial
## Inspiration The COVID-19 pandemic has changed the way we go about everyday errands and trips. Along with needing to plan around wait times, distance, and reviews for a location we may want to visit, we now also need to consider how many other people will be there and whether it's even a safe establishment to visit. *Planwise helps us plan our trips better.* ## What it does Planwise searches for the places around you that you want to visit and calculates a PlanScore that weighs the Google **reviews**, current **attendance** vs usual attendance, **visits**, and **wait times**, so that locations that are rated highly, have fewer people currently visiting them than their usual weekly attendance, and have low waiting times score the best. A location's PlanScore **changes by the hour** to give users the most up-to-date information about whether they should visit an establishment. Furthermore, Planwise also **flags** common types of places that are prone to promoting the spread of COVID-19, but still allows users to search for them in case they need to visit them for **essential work**. ## How we built it We built Planwise as a web app with Python, Flask, and HTML/CSS. We used the Google Places and Populartimes APIs to get and rank places. ## Challenges we ran into The hardest challenges weren't technical - they had more to do with our *algorithm* and considering the factors of the pandemic. Should we penalize an essential grocery store for being busy? Should we even display results for gyms in counties which have enforced shutdowns on them? Calculating the PlanScore was tough because a lot of places didn't have some of the information needed. We also spent some time considering which factors to weigh more heavily in the score. ## Accomplishments that we are proud of We're proud of being able to make an application that has actual use in our daily lives. Planwise makes our lives not just easier but **safer**. ## What we learned We learned a lot about location data and what features are relevant when ranking search results. ## What's next for Planwise We plan to further develop the web application and start a mobile version soon! We would like to further **localize** advisory flags on search results depending on the county. For example, if a county has strict lockdown, then Planwise should flag more types of places than the average county.
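A possible shape for the PlanScore calculation; the weights, field names, and flagged categories below are illustrative assumptions, not the app's exact formula:

```python
# Good ratings and short waits raise the score, a crowd relative to the usual
# attendance lowers it, and flagged place types get a visible warning.
FLAGGED_TYPES = {"gym", "bar", "movie_theater"}   # example COVID-risk categories

def plan_score(rating, current_popularity, usual_popularity, wait_minutes):
    rating_part = rating / 5.0                                   # 0..1
    crowd_part = 1.0 - min(current_popularity / max(usual_popularity, 1), 1.0)
    wait_part = 1.0 - min(wait_minutes / 60.0, 1.0)
    return round(100 * (0.4 * rating_part + 0.4 * crowd_part + 0.2 * wait_part))

def describe(place):
    score = plan_score(place["rating"], place["now"], place["usual"], place["wait"])
    flag = " (essential visits only)" if place["type"] in FLAGGED_TYPES else ""
    return f'{place["name"]}: {score}/100{flag}'

print(describe({"name": "Corner Grocery", "type": "grocery", "rating": 4.4,
                "now": 20, "usual": 60, "wait": 10}))   # Corner Grocery: 79/100
```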
## Inspiration Are you out in public but scared about people standing too close? Do you want to catch up on the social interactions at your cozy place but do not want to endanger your guests? Or do you just want to be notified as soon as you have come into close contact with an infected individual? With this app, we hope to provide the tools for users to navigate social distancing more easily amidst this worldwide pandemic. ## What it does The Covid Resource App aims to bring a one-size-fits-all solution to the multifaceted issues that COVID-19 has spread in our everyday lives. Our app has 4 features, namely: - A social distancing feature which allows you to track where the infamous "6ft" distance lies - A visual planner feature which allows you to verify how many people you can safely fit in an enclosed area - A contact tracing feature that allows the app to keep a log of your close contacts for the past 14 days - A self-reporting feature which enables you to notify your close contacts by email in case of a positive test result ## How we built it We made use primarily of Android Studio, Java, Firebase technologies and XML. Each collaborator focused on a task and bounced ideas off of each other when needed. The social distancing feature is based on a simple trigonometry concept and uses the height from the ground and the tilt angle of the device to calculate exactly how far away 6 ft is. The visual planner adopts a tactile and object-oriented approach, whereby a room can be created with desired dimensions and the touch input drops 6ft radii into the room. The contact tracing feature functions using a Bluetooth connection and consists of phones broadcasting unique IDs, in this case email addresses, to each other. Each user has their own sign-in and stores their keys on a Firebase database. Finally, the self-reporting feature retrieves the close contacts from the past 14 days and launches a mass email to them consisting of quarantine and testing recommendations. ## Challenges we ran into Only two of us had experience in Java, and only one of us had used Android Studio previously. It was a steep learning curve but it was worth every frantic Google search. ## What we learned * Android programming and front-end app development * Java programming * Firebase technologies ## Challenges we faced * No unlimited food
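The social distancing feature's trigonometry reduces to a right triangle between the phone and the ground. A minimal sketch, assuming the tilt is measured from vertical and the phone height is known:

```python
# Where does the camera's centre line land, and at what tilt does it land 6 ft away?
from math import tan, atan, radians, degrees

SIX_FEET_M = 1.8288

def ground_distance_m(phone_height_m, tilt_from_vertical_deg):
    """How far along the ground the camera's centre line points."""
    return phone_height_m * tan(radians(tilt_from_vertical_deg))

def tilt_for_six_feet(phone_height_m):
    """Tilt angle at which the centre line lands exactly 6 ft away."""
    return degrees(atan(SIX_FEET_M / phone_height_m))

print(round(ground_distance_m(1.5, 55), 2))   # ~2.14 m, just past 6 ft
print(round(tilt_for_six_feet(1.5), 1))       # ~50.6 degrees
```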
## Inspiration There are many occasions where we see a place in a magazine, or just any image source online, and we don't know where the place is. There is no description anywhere, and a possible vacation destination may just disappear into thin air. We certainly did not want to miss out. ## What it does Take a picture of a place. Any place. And upload it onto our web app. We will not only tell you where that place is located, but immediately generate a possible trip plan from your current location. That way, you will be able to know how far away you are from your desired destination, as well as how feasible this trip is in the near future. ## How we built it We first figured out how to use Google Cloud Vision to retrieve the data we wanted. We then processed pictures uploaded to our Flask application, retrieved the location, and wrote the location to a text file. We then used Beautiful Soup to read the location from the text file, and integrated the Google Maps API, along with numerous tools within the API, to display possible vacation plans, and the route to the location. ## Challenges we ran into This was our first time building a dynamic web app and using so many APIs, so it was pretty challenging. Our final obstacle of reading from a text file using JavaScript turned out to be our toughest challenge, because we realized it was not possible due to security concerns, so we had to do it through Beautiful Soup. ## Accomplishments that we're proud of We're proud of being able to integrate many different APIs into our application, and being able to make significant progress on the front end, despite having only two beginner members. We encountered many difficulties throughout the building process, and had some doubts, but we were still able to pull through and create a product with an aesthetically pleasing GUI that users can easily interact with. ## What we learned We got better at reading documentation for different APIs, learned how to integrate multiple APIs together in a single application, and realized we could create something useful with just a bit of knowledge. ## What's next for TravelAnyWhere TravelAnyWhere can definitely be taken on to a whole other level. Users could be provided with different potential routes, along with recommended trip plans that visit other locations along the way. We could also allow users to add multiple pictures corresponding to the same location to get a more precise reading on the destination through machine learning techniques.
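For reference, landmark detection with the Google Cloud Vision Python client looks roughly like this (a sketch assuming the 2.x client library, credentials in the usual environment variable, and a local `photo.jpg`; the original project wrote the result to a text file for the Flask app to pick up):

```python
# Identify the landmark in a photo and return its name and coordinates.
from google.cloud import vision

def locate_place(photo_path):
    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.landmark_detection(image=image)
    if not response.landmark_annotations:
        return None
    landmark = response.landmark_annotations[0]          # best guess first
    latlng = landmark.locations[0].lat_lng
    return landmark.description, latlng.latitude, latlng.longitude

# print(locate_place("photo.jpg"))  # e.g. ('Eiffel Tower', 48.85..., 2.29...)
```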
winning
## Inspiration * Everyone and their dog have a big idea for the next hit app but most people lack the skills or resources to build them. * Having used some commercial app-building and prototyping tools in the past, we consider them inefficient as they don't reflect what the app is actually going to look like until it is run on a real device. ## What it does Appception allows you to build mobile apps on your iPhone through a simple drag and drop interface. While building your app, it is always running and it has its own behaviors and states. With Appception, anyone can build apps that use the device's sensors, persist and retrieve data locally or remotely, and interact with third party services. If you are pursuing more complex development, with just a tap of a button, we'll generate the source code of your app and send it you. ## How we built it We built Appception with React Native, a new open-source framework by Facebook for building mobile cross platform native apps using JavaScript. Using Redux, a predictable state container JavaScript library, we can handle the state of the user created app. We designed a software architecture that allowed us to effectively isolate the states of the app builder and the user generated app, within the same iOS app. (hence App-ception) Appception communicates with a server application in order to deliver a generated app to the user. ## Challenges I ran into We ran into a lot of challenges with creating barriers, keeping the building app and the dynamic app separate, while at the same time expanding the possible functionality that a user can build. ## Accomplishments that I'm proud of We're proud to have built a proof of concept app that, if deployed at scale will lower the barrier of entry for people to build apps that create value for their users. Everyone, including your grandma can build the dumb ideas that never got built because uber for cats actually isn’t a good idea. ## What I learned Today we all learned React Native. Although some of us were familiar before hand, creating an app with JavaScript was a whole new experience for some others. ## What's next for Appception Expanding the range of apps that you can build with Appception by providing more draggable components and integrations. Integrate deployment facilities within the Appception iPhone app to allow users to ship the app to beta users and push updates directly to their devices instantly.
![UpLync](https://s18.postimg.org/5syr0jrg9/ss_2016_10_15_at_06_36_48.png) ## Inspiration Two weeks ago you attended an event and met some wonderful people who helped you get through it; you exchanged contact information and hoped to keep in touch. Neither of you ever reached out, and you eventually lost contact. A potentially valuable friendship is lost because neither party took the initiative to talk to the other. This is where *UpLync* comes to the rescue: a mobile app that makes it easy to reconnect with lost contacts. ## What it does It helps you connect with people you have not been in touch with for a while, and the mobile application also reminds you when you have not contacted a certain individual in some time. In addition, it has a word prediction function that allows users to send a simple greeting message using the gestures of a finger. ## Building process We mainly used React Native to build the app; we chose this JavaScript framework because it has cross-platform functionality. Facebook has a detailed tutorial at [link](https://facebook.github.io/react-native), and [link](http://nativebase.io/) makes cross-platform coding easier. We started with * Designing a user interface that can be easily coded for both iOS and Android * Functionality of the Lazy Typer * Touching up the color scheme * Coming up with a name for the application * Designing a logo ## Challenges we ran into * None of the team members knew each other before the event * Coding in a new environment * Coming up with a simple UI that is easy on the eyes * Keeping people connected through a mobile app * Reducing the time taken to craft and send a message ## Accomplishments that we're proud of * Managed to create a product with React Native for the first time * We were able to pick out a smooth font and colour scheme to polish up the UI * Enabling push notifications to remind the user to reply * The time taken to craft a message was reduced by 35% with the help of our lazy typing function ## What we learned We learned the ins and outs of the React Native framework; it saved us the work of using Android Studio to create the application. ## What's next for UpLync The next step for UpLync is to create an AI that learns how the user communicates with their peers and suggests a suitable sentence structure. This application offers room to support other languages and hopefully move into wearable technology.
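One possible take on the Lazy Typer's word prediction; the history list and the ranking rule are invented for the example, not the app's actual behaviour:

```python
# Rank greeting words by how often the user sends them, then complete from a prefix.
from collections import Counter

sent_history = ["hey", "hello", "how", "are", "you", "hey", "long", "time", "no", "see", "hey"]
frequency = Counter(sent_history)

def predict(prefix, limit=3):
    candidates = [w for w in frequency if w.startswith(prefix.lower())]
    return sorted(candidates, key=lambda w: (-frequency[w], w))[:limit]

print(predict("he"))   # ['hey', 'hello'] -- "hey" first because it is sent most often
```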
## Inspiration Determined to create a project that was able to make impactful change, we sat and discussed together as a group our own lived experiences, thoughts, and opinions. We quickly realized the way that the lack of thorough sexual education in our adolescence greatly impacted each of us as we made the transition to university. Furthermore, we began to really see how this kind of information wasn't readily available to female-identifying individuals (and others who would benefit from this information) in an accessible and digestible manner. We chose to name our idea 'Illuminate' as we are bringing light to a very important topic that has been in the dark for so long. ## What it does This application is a safe space for women (and others who would benefit from this information) to learn more about themselves and their health regarding their sexuality and relationships. It covers everything from menstruation to contraceptives to consent. The app also includes a space for women to ask questions, find which products are best for them and their lifestyles, and a way to find their local sexual health clinics. Not only does this application shed light on a taboo subject but empowers individuals to make smart decisions regarding their bodies. ## How we built it Illuminate was built using Flutter as our mobile framework in order to be able to support iOS and Android. We learned the fundamentals of the dart language to fully take advantage of Flutter's fast development and created a functioning prototype of our application. ## Challenges we ran into As individuals who have never used either Flutter or Android Studio, the learning curve was quite steep. We were unable to even create anything for a long time as we struggled quite a bit with the basics. However, with lots of time, research, and learning, we quickly built up our skills and were able to carry out the rest of our project. ## Accomplishments that we're proud of In all honesty, we are so proud of ourselves for being able to learn as much as we did about Flutter in the time that we had. We really came together as a team and created something we are all genuinely really proud of. This will definitely be the first of many stepping stones in what Illuminate will do! ## What we learned Despite this being our first time, by the end of all of this we learned how to successfully use Android Studio, Flutter, and how to create a mobile application! ## What's next for Illuminate In the future, we hope to add an interactive map component that will be able to show users where their local sexual health clinics are using a GPS system.
partial
## TLDR Duolingo is one of our favorite apps of all time for learning. For DeerHacks, we wanted to bring the amazing learning experience from Duolingo even more interactive by bringing it to life in VR, making it more accessible by offering it for free for all, and making it more personalized by offering courses beyond languages so everyone can find a topic they enjoy. Welcome to the future of learning with Boolingo, let's make learning a thrill again! ## Inspiration 🌟 We were inspired by the monotonous grind of traditional learning methods that often leave students disengaged and uninterested. We wanted to transform learning into an exhilarating adventure, making it as thrilling as gaming. Imagine diving into the depths of mathematics, exploring the vast universe of science, or embarking on quests through historical times—all while having the time of your life. That's the spark that ignited BooLingo! 🚀 ## What it does 🎮 BooLingo redefines the learning experience by merging education with the immersive world of virtual reality (VR). It’s not just a game; it’s a journey through knowledge. Players can explore different subjects like Math, Science, Programming, and even Deer Facts, all while facing challenges, solving puzzles, and unlocking levels in a VR landscape. BooLingo makes learning not just interactive, but utterly captivating! 🌈 ## How we built it 🛠️ We leveraged the power of Unity and C# to craft an enchanting VR world, filled with rich, interactive elements that engage learners like never before. By integrating the XR Plug-in Management for Oculus support, we ensured that BooLingo delivers a seamless and accessible experience on the Meta Quest 2, making educational adventures available to everyone, everywhere. The journey from concept to reality has been nothing short of a magical hackathon ride! ✨ ## Challenges we ran into 🚧 Embarking on this adventure wasn’t without its trials. From debugging intricate VR mechanics to ensuring educational content was both accurate and engaging, every step presented a new learning curve. Balancing educational value with entertainment, especially in a VR environment, pushed us to our creative limits. Yet, each challenge only fueled our passion further, driving us to innovate and iterate relentlessly. 💪 ## Accomplishments that we're proud of 🏆 Seeing BooLingo come to life has been our greatest achievement. We're incredibly proud of creating an educational platform that’s not only effective but also enormously fun. Watching players genuinely excited to learn, laughing, and learning simultaneously, has been profoundly rewarding. We've turned the daunting into the delightful, and that’s a victory we’ll cherish forever. 🌟 ## What we learned 📚 This journey taught us the incredible power of merging education with technology. We learned that when you make learning fun, the potential for engagement and retention skyrockets. The challenges of VR development also taught us a great deal about patience, perseverance, and the importance of a user-centric design approach. BooLingo has been a profound learning experience in itself, teaching us that the sky's the limit when passion meets innovation. 🛸 ## What's next for BooLingo 🚀 The adventure is just beginning! We envision BooLingo expanding its universe to include more subjects, languages, and historical epochs, creating a limitless educational playground. We’re also exploring social features, allowing learners to team up or compete in knowledge quests. 
Our dream is to see BooLingo in classrooms and homes worldwide, making learning an adventure that everyone looks forward to. Join us on this exhilarating journey to make education thrillingly unforgettable! Let's change the world, one quest at a time. 🌍💫
## Inspiration Research has shown us that new hires, women and under-represented minorities in the workplace can feel intimidated or uncomfortable in team meetings. Since the start of remote work, new hires lack in-real-life connections, are unable to take the pulse of the group, and are afraid to speak their minds. The majority of the time, this is also due to more experienced individuals interrupting them or talking over them without giving them a chance to speak up. This feeling of being left out often makes people not contribute to their highest potential. Links to the reference studies and articles are at the bottom. As new-hire interns every summer, we personally experienced the communication and participation problem in team meetings and stand ups. We were new and felt intimidated to share our thoughts for fear of them being dismissed or ignored. Even though we were new hires and had little background, we still had some sound ideas and opinions to share that were instead bottled up inside us. We found out that the situation is the same for women in general and especially under-represented minorities. We built this tool so that we and those around us can feel comfortable and included in team meetings. Companies and organizations must do their part in ensuring that their workplace is an inclusive community for all and that everyone has the opportunity to participate equally and to their highest potential. With the pandemic and widespread adoption of virtual meetings, this is an important problem globally that we must all address, and we believe Vocal helps solve it. ## What it does Vocal empowers new hires, women, and under-represented minorities to be more involved and engaged in virtual meetings for a more inclusive team. Google Chrome is extremely prevalent and our solution is a proof-of-concept Chrome Extension and Web Dashboard that works with Google Meet meetings. Later we would support other platforms such as Zoom, Webex, Skype, and others. When the user joins a Google Meet meeting, our extension automatically detects it and collects statistics regarding the participation of each team member. A percentage is shown next to their name to indicate their contribution, and a ranking is shown that indicates how often you spoke compared to others. When the meeting ends, all of this data is sent to the web app dashboard using Google Cloud and a Firebase database. On the web app, users can see their participation in the current meeting and progress from past meetings with different metrics. Plants are how we gamify participation. Your personal plant grows the more you contribute in meetings. Meetings are organized through sprints and contribution throughout the sprint is reflected in the growth of the plant. **Dashboard**: You can see your personal participation statistics. It shows your plant, a monthly interaction level graph, and percent interaction with other team members (how often and which teammates you piggyback on when responding). Lastly, it also has Overall Statistics such as percent increase in interactions compared to last week, meeting participation streak, average engagement time, and total time spoken. You can see your growth in participation reflected in the plant growth. **Vocal provides lots of priceless data for the management, HR, and for the team overall to improve productivity and inclusivity.** **Team**: Many times our teammates are stressed or go through other difficult feelings but simply bottle them up.
In the Team page, we provide a Team Sentiment Graph and Team Sentiments. The graph shows how everyone on the team has been feeling during the current sprint. Team members check in anonymously at the end of every week on how they’re feeling (Stressed, Anxious, Neutral, Calm, Joyful) and the whole team can see it. If someone’s feeling low, other teammates can reach out anonymously in the chat and offer them support, and both can choose to reveal their identities if they want. **Feeling that your team cares about you and your mental health can foster an inclusive community.** **Sprints Garden**: This includes all of the previous sprints that you completed. It also shows the whole team’s garden so you can compare how much you have been contributing relative to your teammates. **Profile**: This is your personal profile where you will see your personal details, the plants you have grown over all the sprints you have worked on - your forest, and your anonymous conversations with your team members. Your garden is here to motivate you and help you grow more plants and ultimately contribute more to meetings. **Ethics/Privacy: We found very interesting ways to collect speaking data without being intrusive. When the user is talking, only the mic pulses are recorded and analyzed as a person speaking. No voice data or transcription is collected, so everyone can feel safe while using the extension.** **Sustainability/Social Good**: Companies that use Vocal can plant the trees grown during sprints in real life by partnering with organizations that plant real trees under their corporate social responsibility (CSR) initiatives. ## How we built it The system is made up of three independent modules. Chrome Extension: This module works with Google Meet; it calculates the statistics of the people who joined the meeting, stores the amount of time each individual contributes, and pushes those values to the database. Firebase: It stores the stats available for each user and the meetings they attended: percentage contribution, their role, etc. Web Dashboard: Contains the features listed above. It fetches data from Firebase and then renders it to display 3 sections on the portal. a. Personal Garden - where an individual can see their overall performance, their stats and maintain a personal plant streak. b. Group Garden - where you can see the overall performance of the team, team sentiment, and the anonymous chat function. After each sprint cycle, individual plants are added to the nursery. c. Profile with personal meeting logs, ideas and thoughts taken in real-time calls. ## Challenges we ran into We had challenges while connecting the database with the Chrome extension. The Google Meet statistics were also difficult to collect since we needed to find clever ways to gather the speaking statistics without infringing on privacy. Also, 36 hours was a very short time span for us to implement so many features; we faced a lot of time pressure, but we learned to work well under it! ## Accomplishments that we're proud of This was an important problem that we all deeply cared about since we saw people around us face it on a daily basis. We come from different backgrounds, but for this project we worked as one team, used our expertise, and learned what we weren’t familiar with. We are so proud to have created a tool to make under-represented minorities, women and new hires feel more included and involved.
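As a rough model of what the Chrome extension module described above produces, here is how per-meeting participation percentages and the ranking could be derived once speaking time has been collected (the names and durations below are invented):

```python
# Turn per-member speaking seconds into participation percentages and a ranking.
def participation(speaking_seconds):
    total = sum(speaking_seconds.values())
    if total == 0:
        return {name: 0.0 for name in speaking_seconds}
    return {name: round(100 * secs / total, 1) for name, secs in speaking_seconds.items()}

def ranking(speaking_seconds):
    return sorted(speaking_seconds, key=speaking_seconds.get, reverse=True)

meeting = {"Priya": 540, "Marco": 300, "Jun": 60}
print(participation(meeting))   # {'Priya': 60.0, 'Marco': 33.3, 'Jun': 6.7}
print(ranking(meeting))         # ['Priya', 'Marco', 'Jun']
```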
We see this product as a tool we’d love to use when we start our professional journeys. Something that brings out the benefits of remote work while being tech that is humane and delightful to use. ## What's next for Vocal Vocal is a B2B product that companies and organizations can purchase. The Chrome extension that shows meeting participation would be free for everyone. The dashboard and the analytics will be priced depending on the company. The insights and data that can be extracted from one data point (user participation) will help the company (HR & management) make their workplace more inclusive and productive. The data can also be analyzed to promote inclusion initiatives and other events to support new hires, women, and under-represented minorities. We already have many use cases that were hard to build within the duration of the hackathon. Our next steps would be to create a mobile app, add more video-calling platform integrations including Zoom, Microsoft Teams, and Devpost video call, and implement chat features. We also see this helping in other industries like ed-tech, where teachers and students could benefit from active participation. ## References 1. <https://www.nytimes.com/2020/04/14/us/zoom-meetings-gender.html> 2. <https://www.nature.com/articles/nature.2014.16270> 3. ​​<https://www.fastcompany.com/3030861/why-women-fail-to-speak-up-at-high-level-meetings-and-what-everyone-can-do-about> 4. <https://hbr.org/2014/06/women-find-your-voice> 5. <https://www.cnbc.com/2020/09/03/45percent-of-women-business-leaders-say-its-difficult-for-women-to-speak-up-in-virtual-meetings.html>
## Inspiration In the next 2-3 years, the Metaverse is going to be the next big thing. Please don’t quote me on that. The applications of the metaverse and VR are ubiquitous, from healthcare to defense training and much more. We want to explore its applications in education and how it can enhance the experience for users who are learning from online universities. ## What it does There are many different online universities such as Udemy, Coursera, EdX and many more. However, none of them operate on the metaverse, and we feel that this is untapped potential that could better benefit students learning online. Our goal is to create a website catered to online education that would work in the Metaverse. ## Challenges we ran into We needed to conduct user interviews and get feedback on the UI that we developed, so we created a Google Form and sent it to a few friends of ours who are currently students to get feedback on our project. While we wanted to create a chatbot and incorporate it into our website, creating a chatbot from scratch is a time-consuming and tedious process. Although that would offer a better experience for the user and be more accurate, we decided to include the chat function as a WhatsApp widget that redirects the user to a WhatsApp chat with a representative of the online university. ## Accomplishments that we're proud of We used the Live Share extension in Visual Studio Code, which enabled all the team members to use, edit and debug the code in a single file. We deployed the website on Firebase. This was the first time any of us ever used Firebase. We ran into some snags, but we finally figured it out and implemented the website, giving us a sense of accomplishment. ## What we learned 1) Incorporation into the metaverse and creating a virtual space where students can learn and explore new leading topics in their industry. 2) Developing affordable hardware for students so that they can use the online platform effortlessly. 3) Advertising this innovation and how it would benefit students in the long run. 4) Conducting a market study to see how it would be received by the public. 5) Getting educators and course instructors to create courses on this website that are compatible with the Metaverse.
winning
## What it does Take a picture, get a 3D print of it! ## Challenges we ran into The 3D printers going poof on the prints. ## How we built it * AI model transforms the picture into depth data. Then post-processing was done to make it into a printable 3D model. And of course, real 3D printing. * MASV to transfer the 3D model files seamlessly. * RBC reward system to incentivize users to engage more. * Cohere to edit image prompts to be culturally appropriate for Flux to generate images. * Groq to automatically edit the 3D models via LLMs. * VoiceFlow to create an AI agent that guides the user through the product.
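A conceptual sketch of the depth-to-model post-processing step mentioned in the first bullet (not the project's actual pipeline; the relief, base, and pixel dimensions are invented): map a normalised depth image onto a grid of vertices that a later meshing/triangulation step could turn into a printable solid.

```python
# Convert a depth map into a grid of (x, y, z) vertices for a relief-style print.
import numpy as np

def depth_to_vertices(depth, max_relief_mm=10.0, base_mm=2.0, pixel_mm=0.5):
    """Map each pixel of a normalised depth image to an (x, y, z) vertex in mm."""
    depth = (depth - depth.min()) / max(depth.max() - depth.min(), 1e-9)
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    zs = base_mm + depth * max_relief_mm        # closer objects stand out more
    return np.stack([xs * pixel_mm, ys * pixel_mm, zs], axis=-1).reshape(-1, 3)

fake_depth = np.random.rand(64, 64)             # stand-in for the AI model's output
vertices = depth_to_vertices(fake_depth)
print(vertices.shape)                           # (4096, 3)
```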
## Inspiration We wanted to pioneer the use of computationally intensive image processing and machine learning algorithms for use in low resource robotic or embedded devices by leveraging cloud computing. ## What it does CloudChaser (or "Chase" for short) allows the user to input custom objects for Chase to track. To do this Chase will first rotate counter-clockwise until the object comes into the field of view of the front-facing camera, then it will continue in the direction of the object, continually updating its orientation. ## How we built it "Chase" was built with four continuous rotation servo motors mounted onto our custom modeled 3D-printed chassis. Chase's front-facing camera was built using a Raspberry Pi 3 camera mounted onto a custom 3D-printed camera mount. The four motors and the camera are controlled by the Raspberry Pi 3B, which streams video to and receives driving instructions from our cloud GPU server through TCP sockets. We interpret this cloud data using YOLO (our object recognition library), which is connected through another TCP socket to our cloud-based parser script, which interprets the data and tells the robot which direction to move. ## Challenges we ran into The first challenge we ran into was designing the layout and model for the robot chassis. Because the print for the chassis was going to take 12 hours, we had to make sure we had the perfect dimensions on the very first try, so we took calipers to the motors, dug through the data sheets, and made test mounts to ensure we nailed the print. The next challenge was setting up the TCP socket connections and developing our software such that it could accept data from multiple different sources in real time. We ended up solving the connection timing issue by using a service called cam2web to stream the webcam to a URL instead of through TCP, allowing us to not have to queue up the data on our server. The biggest challenge however by far was dealing with the camera latency. We wanted the camera to be as close to live as possible, so we moved all possible processing to the cloud and none onto the Pi, but since the Raspbian operating system would frequently context switch away from our video stream, we still got frequent lag spikes. We ended up solving this problem by decreasing the priority of our driving script relative to the video stream on the Pi. ## Accomplishments that we're proud of We're proud of the fact that we were able to model and design a robot that is relatively sturdy in such a short time. We're also really proud of the fact that we were able to interface the Amazon Alexa skill with the cloud server, as nobody on our team had done an Alexa skill before. However, by far, the accomplishment that we are the most proud of is the fact that our video stream latency from the Raspberry Pi to the cloud is low enough that we can reliably navigate the robot with that data. ## What we learned Through working on the project, our team learned how to write a skill for Amazon Alexa, how to design and model a robot to fit specific hardware, and how to program and optimize a socket application for multiple incoming connections in real time with minimal latency. ## What's next for CloudChaser In the future we would ideally like for Chase to be able to compress a higher quality video stream and have separate PWM drivers for the servo motors to enable higher precision turning. 
We also want to try to make Chase aware of his position in a 3D environment and track his distance away from objects, allowing him to "tail" objects instead of just chasing them. ## CloudChaser in the news! <https://medium.com/penn-engineering/object-seeking-robot-wins-pennapps-xvii-469adb756fad> <https://penntechreview.com/read/cloudchaser>
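As referenced above, here is a minimal sketch of the kind of cloud-side parser the write-up describes: it takes the horizontal position of a detection from the object recognizer and turns it into a driving command sent back to the Pi over a TCP socket. The message format, port, threshold, and the detection hook are illustrative assumptions, not CloudChaser's actual protocol.

```python
import socket

FRAME_WIDTH = 640   # assumed width of the streamed frames
CENTER_TOL = 0.15   # fraction of the frame treated as "centered"

def get_latest_detection():
    # Stub standing in for reading the recognizer's output over its own socket
    return ("person", 400)  # (label, x-center of the bounding box in pixels)

def command_for(detection):
    _, x_center = detection
    offset = (x_center / FRAME_WIDTH) - 0.5
    if offset < -CENTER_TOL:
        return b"LEFT\n"
    if offset > CENTER_TOL:
        return b"RIGHT\n"
    return b"FORWARD\n"

def serve(host="0.0.0.0", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()        # the robot connects once and stays connected
        with conn:
            while True:
                det = get_latest_detection()
                conn.sendall(command_for(det) if det else b"SEARCH\n")

if __name__ == "__main__":
    serve()
```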
## Inspiration Students often do not have a financial background and want to begin learning about finance, but the sheer amount of resources that exist online makes it difficult to know which articles are good for people to read. Thus we thought the best way to tackle this problem was to use a machine learning technique known as sentiment analysis to determine the tone of articles, allowing us to recommend more neutral options to users and provide a visual view of the different articles available so that users can make more informed decisions on the articles they read. ## What it does This product is a web-based application that performs sentiment analysis on a large scope of articles to aid users in finding biased or unbiased articles. We also offer three data visualizations for each topic: an interactive graph that shows the distribution of sentiment scores across articles, a heatmap of the sentiment scores, and a word cloud showing common keywords among the articles. ## How we built it Around 80 unique articles from 10 different domains were scraped from the web using Scrapy. This data was then processed with the help of Indico's machine learning API. The API provided us with the tools to perform sentiment analysis on all of our articles, which was the main feature of our product. We then further used the summarize feature of the Indico API to create shorter descriptions of the articles for our users. The Indico API also powers the other two data visualizations that we provide to our users. The first of these is the heatmap, which is created in Tableau and uses the Sentiment HQ scores to better visualize and compare articles and the differences between their sentiment scores. The last visualization is powered by wordcloud, which is built on top of Pillow and matplotlib. It takes keywords generated by the Indico API and displays the most frequent keywords across all articles (a small sketch of this step appears after this write-up). The web application is powered by Django and a SQLite database in the backend, Bootstrap for the frontend, and is all hosted on Google Cloud Platform App Engine. ## Challenges we ran into The project itself was a challenge since it was our first time building a web application with Django and hosting on a cloud platform. Another challenge arose in data scraping: when finding the titles of the articles, different domains placed their titles in different locations and tags, making it difficult to write one scraper that could generalize to many websites. Not only this, but the data returned by the scraper was not in a format we could easily manipulate, so unpacking dictionaries and similar small tasks had to be done along the way. On the data visualization side, there was no graphic library that would fit our vision for the interactive graph, so we had to build that on our own! ## Accomplishments that we're proud of Being able to accomplish the goals that we set out for the project and actually generating useful information in our web application based on the data that we ran through the Indico API. ## What we learned We learned how to build websites using Django, generate word clouds using matplotlib and pandas, host websites on Google Cloud Platform, and utilize the Indico API, and we researched various types of data visualization techniques. ## What's next for DataFeels Lots of improvements could still be made to this project and here are just some of the different things that could be done.
The scraper created for the data required us to manually run the script for every new link; an automated scraper that builds the correct data structures and pipelines them directly to our website would be much more ideal. Next, we would expand our website to cover not just financial categories but any topic that has articles written about it.
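Following up on the word-cloud step mentioned above, this is roughly how keyword frequencies can be rendered with the `wordcloud` package, which sits on top of Pillow and matplotlib. The keyword counts here are made up for illustration; the real app aggregates keywords returned by the Indico API.

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Stand-in for keyword frequencies aggregated across the scraped articles
keyword_counts = {"stocks": 42, "inflation": 31, "ETF": 25, "bonds": 17, "dividends": 12}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(keyword_counts)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("keywords.png", bbox_inches="tight")
```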
winning
## Vision vs Reality We originally had a much more robust idea for this hackathon: an open vision doorbell to figure out who is at the door, without needing to go to the door. The plan was to use an Amazon Echo Dot to connect our vision solution, a Logitech c270 HD webcam, with our storage, a Firebase database. This final integration step between the Echo Dot and OpenCV services ended up being our downfall as the never-ending wave of vague errors kept getting thrown and we failed to learn how to swim. Instead of focusing on our downfalls, we want to show the progress that was made in the past 36 hours that we believe shows the potential behind what our idea sought to accomplish. Using OpenCV3 and Python3, we created multiple vision solutions such as motion detection and image detection. Ultimately, we decided that a facial recognition program would be ideal for our design. Our facial recognition program includes a vision model that has both Jet's and my faces learned, as well as an unknown catch-all type that aims to cover any unknown or masked faces. While not the most technically impressive, this does show the solid base work and the right step that we took to get to our initial idea. ## The Development Process These past 36 hours presented us with a lot of trials and tribulations and it would be a shame if we did not mention them considering the result. In the beginning, we considered using the RaisingAI platform for our vision rather than OpenCV. However, when we attended their workshop, we saw that it relied on a Raspberry Pi, which we originally wanted to avoid using due to our lack of server experience. Also, the performance seemed to vary and it did not seem like it was aimed at facial recognition. We planned and were excited to use an NVIDIA Jetson due to how great the performance is, and we saw that the NVIDIA booth was using a Jetson to run a resource intensive vision program smoothly. Unfortunately, we could not get the Jetson set up due to a lack of a monitor. After not being able to successfully run the Jetson, we reluctantly switched to a Raspberry Pi, but we were pleasantly surprised at how well it performed and how easy it was to set up without a monitor. This stage is also when we started learning how to develop for the Amazon Echo Dot. Since this was our first time ever using an Alexa-based device, it took a while to develop even a simple Hello, World! application. However, we definitely learned a lot about how smart devices work and got to work with many AWS utilities as a part of this development process. As a team, we knew that integrating the vision and Alexa would not be an easy task even at the start of the hackathon. Neither of us predicted just how difficult it would actually be. As a result, this vision-Alexa integration took up a majority of our overall development time. We also took on the task of integrating Firebase for storage at this step, but since this is the one technology in this project that we have had past experience with, we thought it would be no problem. ## What We Built At the end of the day (...more like morning), we were able to create a simple Python program and dataset that allows us to show off our base vision module. This comprises 3 different programs: facial detection from a custom dataset of images, an ML model to associate facial features with a specific person, and applying that model to a live webcam feed. Additionally, we were also able to create our own Alexa skill that allowed us to dictate how we interact with the Echo.
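A minimal sketch of the kind of face-recognition pipeline described above, using OpenCV's Haar cascade for detection and the LBPH recognizer from `opencv-contrib-python` for identification. The trained model file, label ids, and distance threshold are assumptions for illustration.

```python
import cv2

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("faces.yml")          # model trained offline on the custom image dataset
names = {0: "Jet", 1: "Me"}           # label ids assigned during training

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # Lower distance means a better match; 80 is a rough heuristic cutoff
        who = names.get(label, "Unknown") if distance < 80 else "Unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, who, (x, y - 8), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("doorbell", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```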
## Accomplishments that I'm proud of * Learning how to use/create Amazon Skills * Getting our feet wet with an introduction to Raspberry Pi * Creating our own ML model * Utilizing OpenCV & Python to create a custom vision program ## Future Goals * Figure out how to integrate Alexa and Python programs * Seek mentor help in a more relaxed environment * Use a NVIDIA Jetson * Create a 3D printed housing for our product / overall final product refinement
## Inspiration Have you ever wanted to become environmentally friendly? Trick question! Of course yes! If you are like us, however, you probably don't enjoy the process of classifying your trash into separate categories, especially when you're not even sure if your classification is correct. Are food containers recyclable? Such questions would require one to put the item down, do a Google search, understand its materials, and so on. This is where Trashier would save you a ton of time. ## What it does Using a webcam and a microphone, Trashier uses a deep-learning neural network model to classify new images into different waste categories (plastic bottles, plastic bags, metal cans, etc.) and into two groups: Recyclable or Compostable. This is done instantaneously, updating with real-time webcam input. On top of that, a voice assistant works alongside it, providing users with first-hand knowledge about recycling/waste management. Simply ask a question, such as "Are plastic bottles recyclable?", and you are responded with not only facts about its recyclability but also further advice about washing your bottle before putting it in the bin. ## How I built it We trained a deep learning model that generalizes itself to some common categories of trash, using trash products we found at the TreeHacks venue. To increase its accuracy, we limited it to only 1 item per frame and incorporated additional data from user Sashaank Sekar on Kaggle. The dataset can be found here: <https://www.kaggle.com/techsash/waste-classification-data>. After training the model and getting the appropriate weights, we incorporated them into our backend. Here, we developed a real-time, constant loop where the input from our webcam is continuously run through the model to predict the appropriate categories. Next, we used the Houndify API to assist our voice assistance application. In this process, we came up with as many examples as we could of questions that the machine might be asked, such as "What products can be recycled?", or "Can a banana be recycled?" (of course not). We also added some other common domains, such as the weather or stock market data. This will satisfy users who are interested in **stock investment while putting away their trash**. ## Challenges I ran into Implementing the API and adding the deep-learning model was difficult for us at first. After consulting some mentors, we were able to understand it better, and we found some useful YouTube tutorials that became our friends through the process. Training the data was also challenging. First, we took online data, which was good and had high accuracy, but was somewhat unstable (it kept switching between plastic bag/plastic bottle for a while). Later on, we wanted to design an enclosed environment to increase the accuracy, and this is where we had to manually select and add new data. The data cleaning, normalizing, and preparation stages needed some more work. ## Accomplishments that I'm proud of We're proud that we were able to make something like this at this hackathon, while also having a lot of fun! We're also proud we had over 10 hours of sleep during the process (over 2 nights). This is really impressive xD. ## What I learned This is the first hackathon in which we were able to create a product, so we learned a lot. We learned how to incorporate and debug APIs effectively, how computer vision can be integrated into day-to-day applications, and more. Most importantly, we learned how to keep up the hard work, not give up, and have great fun at hackathons.
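A rough sketch of the real-time classification loop described in "How I built it", assuming a Keras model trained offline and saved to disk; the input size, model file name, and class order are placeholders rather than the project's actual values.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("trash_classifier.h5")   # weights trained offline
classes = ["recyclable", "compostable"]     # placeholder label order

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    label = classes[int(np.argmax(probs))]
    cv2.putText(frame, f"{label} ({probs.max():.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Trashier", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```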
## What's next for Trashier Our UI can definitely use some more work, such as formatting the voice assistance application into a chatbox. In terms of functionality, one goal is to improve the accuracy and performance of the current model. Trash is currently classified only as compostable/recyclable (due to limitations in data), while splitting it into 3 categories (recyclable, compostable, and landfill) would be more environmentally friendly. The chatbox could also use more work, mainly to improve its ability to answer a diverse set of questions from users. We would also introduce additional functionality, including user feedback. This means users could help to classify certain images, such as that of a plastic bag, into the proper categories by simply rating the algorithm's guess as "good", "bad", or "close". Using this feedback, we can incorporate more data into our dataset and improve its accuracy. On a long-term scale, the goal would be to make the program cheap enough that it can be incorporated into the city environment.
# Inspiration and Product There's a certain feeling we all have when we're lost. It's a combination of apprehension and curiosity – and it usually drives us to explore and learn more about what we see. It happens to be the case that there's a [huge disconnect](http://www.purdue.edu/discoverypark/vaccine/assets/pdfs/publications/pdf/Storylines%20-%20Visual%20Exploration%20and.pdf) between that which we see around us and that which we know: the building in front of us might look like an historic and famous structure, but we might not be able to understand its significance until we read about it in a book, at which time we lose the ability to visually experience that which we're in front of. Insight gives you actionable information about your surroundings in a visual format that allows you to immerse yourself in your surroundings: whether that's exploring them, or finding your way through them. The app puts the true directions of obstacles around you where you can see them, and shows you descriptions of them as you turn your phone around. Need directions to one of them? Get them without leaving the app. Insight also supports deeper exploration of what's around you: everything from restaurant ratings to the history of the buildings you're near. ## Features * View places around you heads-up on your phone - as you rotate, your field of vision changes in real time. * Facebook Integration: trying to find a meeting or party? Call your Facebook events into Insight to get your bearings. * Directions, wherever, whenever: surveying the area and found where you want to be? Touch and get instructions instantly. * Filter events based on your location. Want a tour of Yale? Touch to filter only Yale buildings, and learn about the history and culture. Want to get a bite to eat? Change to a restaurants view. Want both? You get the idea. * Slow day? Change your radius to a short distance to filter out locations. Feeling adventurous? Change your field of vision the other way. * Want to get the word out on where you are? Automatically check in with Facebook at any of the locations you see around you, without leaving the app. # Engineering ## High-Level Tech Stack * NodeJS powers a RESTful API hosted on Microsoft Azure. * The API server takes advantage of a wealth of Azure's computational resources: + A Windows Server 2012 R2 Instance, and an Ubuntu 14.04 Trusty instance, each of which handle different batches of geospatial calculations + Azure internal load balancers + Azure CDN for asset pipelining + Azure automation accounts for version control * The Bing Maps API suite, which offers powerful geospatial analysis tools: + RESTful services such as the Bing Spatial Data Service + Bing Maps' Spatial Query API + Bing Maps' AJAX control, externally through direction and waypoint services * iOS Objective-C clients interact with the server RESTfully and display the results as they are parsed ## Application Flow iOS handles the entirety of the user interaction layer and authentication layer for user input. Users open the app, and, if logging in with Facebook or Office 365, proceed through the standard OAuth flow, all on-phone. Users can also opt to skip the authentication process with either provider (in which case they forfeit the option to integrate Facebook events or Office 365 calendar events into their views). After sign in (assuming the user grants permission for use of these resources), and upon startup of the camera, requests are sent with the user's current location to a central server on an Ubuntu box on Azure.
The server parses that location data, and initiates a multithreaded Node process via Windows 2012 R2 instances. These processes do the following, and more: * Geospatial radial search schemes with data from Bing * Location detail API calls from Bing Spatial Query APIs * Review data about relevant places from a slew of APIs After the data is all present on the server, it's combined and analyzed, also on R2 instances, via the following: * Haversine calculations for distance measurements, in accordance with radial searches * Heading data (to make client-side parsing feasible) * Condensation and dynamic merging: asynchronously cross-checking the collected data to determine which events are closest Ubuntu brokers and manages the data, sends it back to the client, and prepares for and handles future requests. ## Other Notes * The most intense calculations involved the application of the [Haversine formulae](https://en.wikipedia.org/wiki/Haversine_formula), i.e. for two points on a sphere, the central angle between them can be described as: ![Haversine 1](https://upload.wikimedia.org/math/1/5/a/15ab0df72b9175347e2d1efb6d1053e8.png) and the distance as: ![Haversine 2](https://upload.wikimedia.org/math/0/5/5/055b634f6fe6c8d370c9fa48613dd7f9.png) (the result of which is non-standard/non-Euclidean due to the Earth's curvature). The results of these formulae translate into the placement of locations on the viewing device. These calculations are handled by the Windows R2 instance, essentially running as a computation engine. All communications are RESTful between all internal server instances. ## Challenges We Ran Into * *iOS and rotation*: there are a number of limitations in iOS that prevent interaction with the camera in landscape mode (which, given the need for users to see a wide field of view, was essential). For one thing, the requisite data registers aren't even accessible via daemons when the phone is in landscape mode. This was the root of the vast majority of our problems in our iOS work, since we were unable to use any inherited or pre-made views (we couldn't rotate them) - we had to build all of our views from scratch. * *Azure deployment specifics with Windows R2*: running a pure calculation engine (written primarily in C# with ASP.NET network interfacing components) was tricky at times to set up and get logging data for. * *Simultaneous and asynchronous analysis*: Simultaneously parsing asynchronously-arriving data with uniform Node threads presented challenges. Our solution was ultimately a recursive one that involved checking the status of other resources upon reaching the base case, then using that knowledge to better sort data as the bottoming-out step bubbled up. * *Deprecations in Facebook's Graph APIs*: we needed to use the Facebook Graph APIs to query specific Facebook events for their locations: a feature only available in a slightly older version of the API. We thus had to use that version, concurrently with the newer version (which also had unique location-related features we relied on), creating some degree of confusion and requiring care. ## A few of Our Favorite Code Snippets A few gems from our codebase:

```
var deprecatedFQLQuery = '...
```

*The story*: in order to extract geolocation data from events vis-a-vis the Facebook Graph API, we were forced to use a deprecated API version for that specific query, which proved challenging in how we versioned our interactions with the Facebook API.
```
addYaleBuildings(placeDetails, function(bulldogArray) {
  addGoogleRadarSearch(bulldogArray, function(luxEtVeritas) {
    ...
```

*The story*: dealing with quite a lot of Yale API data meant we needed to be creative with our naming...

```
// R is the earth's radius in meters
var a = R * 2 * Math.atan2(
  Math.sqrt(
    Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) +
    Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) *
    Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)),
  Math.sqrt(1 - (
    Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) +
    Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) *
    Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2))));
```

*The story*: while it was changed and condensed shortly after we noticed its proliferation, our implementation of the Haversine formula became cumbersome very quickly. Degree/radian mismatches between APIs didn't make things any easier.
losing
## Inspiration Jessica here - I came up with the idea for BusPal out of the expectation that the skill already existed. With my Amazon Echo Dot, I was already doing everything from checking the weather to turning my lights off and on with Amazon skills and routines. The fact that she could not check when my bus to school was going to arrive was surprising at first - until I realized that Amazon and Google have one of the biggest rivalries between two tech giants. However, I realized that the combination of Alexa's genuine personality and the powerful location ability of Google Maps would fill a need that I'm sure many people have. That was when the idea for BusPal was born: to be a convenient Alexa skill that will improve my morning routine - and everyone else's. ## What it does This skill enables Amazon Alexa users to ask Alexa when their bus to a specified location is going to arrive and to text the directions to a phone number - all hands-free. ## How we built it Through the Amazon Alexa builder, the Google API, and AWS. ## Challenges we ran into We originally wanted to use stdlib; however, with a lack of documentation for the new Alexa technology, the team made an executive decision to migrate to AWS roughly halfway into the hackathon. ## Accomplishments that we're proud of Completing Phase 1 of the project - giving Alexa the ability to take in a destination, and deliver a bus time, route, and stop to leave for. ## What we learned We learned how to use AWS, work with Node.js, and how to use Google APIs. ## What's next for Bus Pal Improve the skill's texting ability, and enable calendar integration.
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Python data types were tricky to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
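A hedged sketch of what the transcribe-then-extract flow on the Flask backend might look like. The write-up does not name the AI provider, so this assumes the OpenAI Python SDK; the route name, model choices, and JSON handling are illustrative, and a production version would validate the model's output rather than parsing it blindly.

```python
import json
from flask import Flask, request, jsonify
from openai import OpenAI   # assumption: the write-up does not say which AI provider was used

app = Flask(__name__)
client = OpenAI()

@app.route("/order", methods=["POST"])
def order():
    # 1) Transcribe the uploaded audio clip
    audio = request.files["audio"]
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=(audio.filename, audio.read())
    ).text

    # 2) Ask the model to extract structured order items, ignoring background chatter
    prompt = (
        "Extract the food items, sizes, and modifications from this drive-through "
        f"request as a JSON list of objects. Ignore background conversation. Request: {transcript!r}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    items = json.loads(reply.choices[0].message.content)  # a real app would validate this
    return jsonify({"transcript": transcript, "items": items})
```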
## Inspiration On the bus ride to another hackathon, one of our teammates was trying to get some sleep, but was having trouble because of how complex and loud the sound of people in the bus was. This led to the idea that in a sufficiently noisy environment, hearing could be just as descriptive and rich as seeing. Therefore, to better enable people with visual impairments to navigate and understand their environment, we created a piece of software that is able to describe and create an auditory map of one's environment. ## What it does In a sentence, it uses machine vision to give individuals a kind of echolocation. More specifically, one simply needs to hold their cell phone up, and the software will work to guide them using a 3D auditory map. The video feed is streamed over to a server where our modified version of the YOLO9000 classification convolutional neural network identifies and localizes the objects of interest within the image. It will then return the position and name of each object back to one's phone. It also uses the IBM Watson API to further augment its readings by validating what objects are actually in the scene, and whether or not they have been misclassified. From here, we make it seem as though each object essentially says its own name, so that the individual can essentially create a spatial map of their environment just through audio cues. The sounds get quieter the further away the objects are, and the ratio of sound between the left and right channels is also varied as the object moves around the user. The phone also records its orientation, and remembers where past objects were for a few seconds afterwards, even if it is no longer seeing them. However, we also thought about where in everyday life you would want extra detail, and one aspect that stood out to us was faces. Generally, people use specific details of an individual's face to recognize them, so using Microsoft's face recognition API, we added a feature that allows our system to identify and follow friends and family by name. All one has to do is set up their face as a recognizable face, and they are now their own identifiable feature in one's personal system. ## What's next for SoundSight This system could easily be further augmented with voice recognition and processing software that would allow for feedback and a much more natural experience. It could also be paired with a simple infrared imaging camera to be used to navigate during the night time, making it universally usable. A final idea for future improvement could be to further enhance the machine vision of the system, thereby maximizing its overall effectiveness.
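The spatial-audio mapping described above boils down to two numbers per detected object: a gain that falls off with distance and a left/right balance derived from the horizontal angle. A small sketch of that math follows; the attenuation constant and clamping range are arbitrary choices for illustration, not the project's tuned values.

```python
import math

def audio_cue(distance_m, bearing_deg):
    """
    distance_m: estimated distance to the object
    bearing_deg: horizontal angle relative to where the phone points
                 (negative = left, positive = right)
    Returns (left_gain, right_gain), each in [0, 1].
    """
    volume = 1.0 / (1.0 + 0.5 * distance_m)                       # farther objects sound quieter
    pan = math.sin(math.radians(max(-90, min(90, bearing_deg))))  # -1 (hard left) .. +1 (hard right)
    left = volume * math.sqrt((1 - pan) / 2)                      # constant-power panning
    right = volume * math.sqrt((1 + pan) / 2)
    return left, right

print(audio_cue(2.0, -30))   # an object 2 m away, 30 degrees to the left
```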
winning
## Try ChartGPT out for yourself: <https://chart-gpt.vercel.app/> ## Inspiration ChartGPT was inspired by the desire to make chart creation and modification easier for everyone, regardless of their level of expertise in data analytics or design. We have experienced firsthand the tediousness of creating and modifying charts in our prior jobs, and we wanted to find a way to make the process more accessible and enjoyable. At the same time, we're avid users of large language models (LLMs), and are fascinated by their capabilities. We were looking for a project which would be visually appealing to present while showcasing the multi-modal abilities of LLMs. ## What it does ChartGPT creates beautiful charts from natural language prompts. Users are able to provide their own data sources, either through drag and drop (CSV upload) or through manual data input. Then the user can chat with our API to create and beautify a chart from the data. For example, the user can prompt 'Remove gridlines' or 'Change all colors to gradients of green'. User prompts are interpreted for their intent and the presumed intentions are applied to the chart. Users can prompt multiple intentions at once. ## How we built it Full-stack framework: *T3* (TypeScript, Tailwind, tRPC). Graphs are generated with Airbnb's *visx* charting library. A user inputs prompts into a chat interface. User prompts are fed into the OpenAI API along with user data and chart history stored on *Supabase*. Deployed on *Vercel*. The user inputs, together with an engineered prompt of ours, are interpreted by the Da-Vinci model from OpenAI. Da-Vinci replies with prompts that trigger custom functions that can manipulate the aesthetics in visx. ## Challenges we ran into The most challenging part of the project was figuring out how to use and prompt the OpenAI API to manipulate the relevant parts of the charts. Creating a whole chart from scratch at every step was not only extremely error-prone, but also very slow (>20s response time). ## Accomplishments that we're proud of Shipping a live application (<https://chart-gpt.vercel.app>). Building a product that makes data analytics more accessible to people less apt in chart creation. ## What we learned First full-stack application with the T3 stack. Had lots of fun (and pain) wrangling with visx. Learned to engineer prompts that lead OpenAI to create ideally formatted outputs that can be used in our data visualization. ## What's next for ChartGPT Build a Google Sheets and Microsoft Office plugin. Build user authentication. For a given user, provide a chart and version history that is downloadable and shareable. Extend charting capabilities to more chart types. Enable users to provide and import from multiple data sources.
## Inspiration We developed ApplyCanada based on the experiences of international students grappling with the financial burden of tuition and living expenses. Looking at friends around us, those who have already pursued studies abroad face challenges in covering international student fees. This prompts aspirations for Permanent Residency (PR) or citizenship, and some seek ways to finance their education. Additionally, prospective international students are often uncertain about the specific living costs in Canada, hindering their confidence in committing to study abroad. To address these concerns, we created the ApplyCanada website. ## What it does ApplyCanada provides a wealth of information to navigate the intricacies of immigration and student life in Canada. Our extensive resources cover Permanent Residency (PR) pathways, give insights into the realistic living costs across different provinces, and explore the various financial aid options available to international undergraduates. Our platform fosters community engagement through a dedicated Q&A page, where users can share valuable information, exchange thoughts, and build connections. ## How we built it First, we brainstormed ideas and designed the layout and menu. Then, we conducted research, created a SQL file, and used HTML, CSS, and JavaScript to complete the frontend. Afterward, we proceeded with the backend development. ## Challenges we ran into 1. Learning a new language 2. Connecting SQL, Node.js, and HTML simultaneously using AJAX 3. Adjusting to slight variations in screen ratios for each device 4. Differing preferences in development ## Accomplishments that we're proud of Proud that we have shaped the form of the website within the given timeframe. ## What we learned 1. Creating a website takes significant time (even simple ones take a long time) 2. Communication is the most important part of a team project. ## What's next for Apply Canada 1. Need to add more information. 2. Design needs some more refinement.
## Inspiration We all understand how powerful ChatGPT is right now, and we thought it would be really cool to make it possible to directly call ChatGPT for help. This not only saves time, it is also more convenient. People do not need to be in front of a computer to access ChatGPT; simply call a number and that is it. This also has potential for accessibility: people with visual impairments might struggle to access ChatGPT through their computer. Now, this will not be an issue. ## What it does An application that allows users to make a phone call to ChatGPT for easier access. Our goal with this project is to make ChatGPT more convenient and accessible. People can access it with just Wi-Fi and a phone number. ## How we built it We use the Twilio API to set up the call service. The call is connected to our backend code, which uses Flask and the Twilio API. The code receives speech from the user and translates it into text so that ChatGPT can understand it. The code then feeds the text to ChatGPT through the OpenAI API. Finally, the result from ChatGPT is fed back to the user through the call, and the user may choose between continuing the call or hanging up. Meanwhile, all the call history is recorded and the user may access it through our website using a password generated by our code. ## Challenges we ran into There were a lot of challenges in the front end, believe it or not. It was hard to design a good way to represent all the data we collected from the calls and to connect it from the backend to the frontend. Also, setting up Twilio was kind of a challenge since no one on our team was familiar with anything about call services. ## Accomplishments that we're proud of We finished the majority of our code at a fairly fast speed. We are really proud of this, and it led us to explore more options. In the end, we did implement a lot more features into our project, like a login system, call history collection, etc. ## What we learned We learned a lot of things. We never knew that services like Twilio existed, and we are genuinely impressed with what it can accomplish. Since we had some free time, we also learned something about lip-syncing audio with video using ML algorithms. Unfortunately, we did not implement this as it was way too much to do and we did not have enough time. We went to a lot of workshops. They had some really interesting stuff. ## What's next for our group We will ready up for the next hackathon, and make sure we can do better.
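A minimal sketch of the Twilio voice webhook flow described above, using Flask and Twilio's speech `Gather`. The route names are illustrative, and the call to the language model is stubbed out behind a hypothetical `ask_chatgpt` helper rather than shown as the project's actual code.

```python
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

def ask_chatgpt(question):
    # Hypothetical helper wrapping the OpenAI API call; stubbed for this sketch
    return f"You asked: {question}"

@app.route("/voice", methods=["POST"])
def voice():
    resp = VoiceResponse()
    gather = Gather(input="speech", action="/answer", method="POST")
    gather.say("Hi, what would you like to ask?")
    resp.append(gather)
    return str(resp)

@app.route("/answer", methods=["POST"])
def answer():
    question = request.form.get("SpeechResult", "")   # Twilio's transcription of the caller's speech
    resp = VoiceResponse()
    resp.say(ask_chatgpt(question))
    resp.redirect("/voice")                            # loop back so the caller can ask again or hang up
    return str(resp)
```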
losing
## Inspiration Recognizing the disastrous effects of the auto industry on the environment, our team wanted to find a way to help the average consumer mitigate the effects of automobiles on global climate change. We felt that there was an untapped potential to create a tool that helps people visualize cars' eco-friendliness, and also helps them pick a vehicle that is right for them. ## What it does CarChart is an eco-focused consumer tool which is designed to allow a consumer to make an informed decision when it comes to purchasing a car. However, this tool is also designed to measure the environmental impact that a consumer would incur as a result of purchasing a vehicle. With this tool, a customer can make an auto purchase that works for both them and the environment. This tool allows you to search by any combination of ranges including Year, Price, Seats, Engine Power, CO2 Emissions, Body type of the car, and fuel type of the car. In addition to this, it provides a nice visualization so that the consumer can compare the pros and cons of two different variables on a graph. ## How we built it We started out by web scraping to gather and sanitize all of the datapoints needed for our visualization. This scraping was done in Python and we stored our data in a Google Cloud-hosted MySQL database. Our web app is built on the Django web framework, with JavaScript and P5.js (along with CSS) powering the graphics. The Django site is also hosted in Google Cloud. ## Challenges we ran into Collectively, the team ran into many problems throughout the weekend. Finding and scraping data proved to be much more difficult than expected since we could not find an appropriate API for our needs, and it took an extremely long time to correctly sanitize and save all of the data in our database, which also led to problems along the way. Another large issue that we ran into was getting our App Engine to talk with our own database. Unfortunately, since our database requires a white-listed IP, and we were using Google's App Engine (which does not allow static IPs), we spent a lot of time with the Google Cloud engineers debugging our code. The last challenge that we ran into was getting our front-end to play nicely with our backend code. ## Accomplishments that we're proud of We're proud of the fact that we were able to host a comprehensive database on the Google Cloud platform, in spite of the fact that no one in our group had Google Cloud experience. We are also proud of the fact that we were able to accomplish 90+% of the goal we set out to do without the use of any APIs. ## What we learned Our collaboration on this project necessitated a comprehensive review of git and the shared pain of having to integrate many moving parts into the same project. We learned how to utilize Google's App Engine and Google's MySQL server. ## What's next for CarChart We would like to expand the front-end to have even more functionality. Some of the features that we would like to include would be: * Letting users pick lists of cars that they are interested in and compare them * Displaying each datapoint with an image of the car * Adding even more dimensions that the user is allowed to search by ## Check the Project out here!! <https://pennapps-xx-252216.appspot.com/>
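The range search described above maps naturally onto Django ORM filters. A simplified sketch, assuming a hypothetical `Car` model with the listed fields; the actual schema and app layout may differ.

```python
from cars.models import Car   # hypothetical app and model


def search_cars(year=(2000, 2020), price=(0, 50000), max_co2=200, fuel=None):
    """Filter cars by any combination of the ranges exposed in the UI."""
    qs = Car.objects.filter(
        year__gte=year[0], year__lte=year[1],
        price__gte=price[0], price__lte=price[1],
        co2_emissions__lte=max_co2,
    )
    if fuel:
        qs = qs.filter(fuel_type__iexact=fuel)
    return qs.order_by("co2_emissions")


# e.g. eco-friendlier petrol cars between 2015 and 2020 under $30k
results = search_cars(year=(2015, 2020), price=(0, 30000), max_co2=120, fuel="petrol")
```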
## Inspiration As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare. Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers. ## What it does greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria. ## How we built it Designs in Figma, Bubble for backend, React for frontend. ## Challenges we ran into Three beginner hackers! It was the first time at a hackathon for three of us, and for two of those three it was our first time formally coding on a product. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.). ## Accomplishments that we're proud of Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners! ## What we learned In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first-time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project. ## What's next for greenbeans Lots to add on in the future: Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches. Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
## Inspiration Even though technology speeds up many areas of our lives, the process of saving contact information is no faster than it was with pencil and paper. The daily exchange of emails then phone numbers then other information is tedious at best, and a waste of time at worst. With current methods of directly inputting information into contacts, considerable amounts of time are lost, leaving fewer opportunities for talking or networking. This direct input also leads to either rapidly entered, disorganized contact lists, or a mess of business cards that seems impossible to sort through. With this in mind, we created Cardz.me. Through it, we aim to expedite the information sharing process through instant contact information exchange. ## What it does Cardz.me has two components: a mobile app which uses a QR code to contain identifying information, and a website which appears when the QR code is scanned and allows the QR code scanner to swap information. The mobile app allows the user to create a profile. This includes the user's name, bio, business contact information (including email, address, and website profiles), meeting location, and important details about themselves. The user can also indicate whether they want to use two-way contact sharing (default) or not. This information is attached to a QR code, so information can be shared even if the person scanning it doesn't have the app. When the QR code is scanned, the app user's information is immediately added to the contacts of the person scanning it. The person who scanned is then directed to a website where they see the app user's information and are able to send their own information back. Once the exchange occurs, the information of the person scanning is added to the contacts of the app user. In this way, the "business card" of an individual can instantly be shared and swapped. ## How we built it For the iPhone app, React Native and JavaScript are used to create it, including the account's QR code and user information. Firebase is then used to store this information and send it to the person who scanned the QR code. Flask serves the webpage that the person who scanned the QR code is taken to, with JavaScript and CSS used for its design. Flask then stores the scanner's information and sends it back to Firebase, which sends it to the mobile app user's phone. ## Challenges we ran into * Coming up with a great Figma wireframe design was time-consuming * We had trouble finding an idea, and didn't end up deciding on one until Friday night/Saturday morning ## Accomplishments we're proud of * The website works and looks similar to our Figma design * Live and working version of the mobile app in React Native / VCF encoding works * We got the website live! ## What we learned * How to use Flask together with JavaScript/CSS * How to use GitHub and Devpost * How to put together a product quickly ## What's next for Cardz.me? We were not able to put the app on the App Store or have the website function fully due to time constraints. Moving forward, we'd like to implement these features to have a completely functional product.
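A small sketch of the core encoding step mentioned above: packing contact details into a vCard (VCF) string and rendering it as a QR code with the `qrcode` package. The field values and file name are placeholders, and the real app builds this on the React Native side rather than in Python.

```python
import qrcode

def make_vcard(name, phone, email, website):
    return (
        "BEGIN:VCARD\n"
        "VERSION:3.0\n"
        f"FN:{name}\n"
        f"TEL;TYPE=CELL:{phone}\n"
        f"EMAIL:{email}\n"
        f"URL:{website}\n"
        "END:VCARD"
    )

vcf = make_vcard("Ada Lovelace", "+1-555-0100", "ada@example.com", "https://example.com")
img = qrcode.make(vcf)   # many phone cameras add a scanned vCard straight to contacts
img.save("card_qr.png")
```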
winning
## Inspiration A couple weeks ago, a friend was hospitalized for taking Advil–she accidentally took 27 pills, which is nearly 5 times the maximum daily amount. Apparently, when asked why, she responded that that's just what she had always done and how her parents had told her to take Advil. The maximum Advil you are supposed to take is 6 per day, before it becomes a hazard to your stomach. #### PillAR is your personal augmented reality pill/medicine tracker. It can be difficult to remember when to take your medications, especially when there are countless different restrictions for each different medicine. For people that depend on their medication to live normally, remembering and knowing when it is okay to take their medication is a difficult challenge. Many drugs have very specific restrictions (e.g. no more than one pill every 8 hours, 3 max per day, take with food or water), which can be hard to keep track of. PillAR helps you keep track of when you take your medicine and how much you take to keep you safe by not over- or under-dosing. We also saw a need for a medicine tracker due to the aging population and the number of people who have many different medications that they need to take. According to health studies in the U.S., 23.1% of people take three or more medications in a 30-day period and 11.9% take 5 or more. That is over 75 million U.S. citizens that could use PillAR to keep track of their numerous medicines. ## How we built it We created an iOS app in Swift using ARKit. We collect data on the pill bottles from the iPhone camera and pass it to the Google Vision API. From there we receive the name of the drug, which our app then forwards to a Python web scraping backend that we built. This web scraper collects usage and administration information for the medications we examine, since this information is not available in any accessible API or queryable database. We then use this information in the app to keep track of pill usage and power the core functionality of the app. ## Accomplishments that we're proud of This is our first time creating an app using Apple's ARKit. We also did a lot of research to find a suitable website to scrape medication dosage information from and then had to process that information to make it easier to understand. ## What's next for PillAR In the future, we hope to be able to get more accurate medication information for each specific bottle (such as pill size). We would like to improve the bottle recognition capabilities, by maybe writing our own classifiers or training a data set. We would also like to add features like notifications to remind you of good times to take pills to keep you even healthier.
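A rough sketch of the scraping step described above, using `requests` and BeautifulSoup. The URL pattern and CSS selector are hypothetical stand-ins, since the write-up does not name the site that was actually scraped.

```python
import requests
from bs4 import BeautifulSoup

def fetch_dosage_info(drug_name):
    # Hypothetical source URL; the real backend targets a specific drug-reference site
    url = f"https://example-drug-reference.com/dosage/{drug_name.lower()}"
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Hypothetical selector for the "usual adult dose" section of the page
    section = soup.select_one("#usual-adult-dose")
    return section.get_text(" ", strip=True) if section else None

print(fetch_dosage_info("ibuprofen"))
```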
## Inspiration While we were brainstorming for ideas, we realized that two of our teammates are international students from India, also from the University of Waterloo. Their inspiration to create this project was based on what they had seen in real life. Noticing how impactful access to proper healthcare can be, and how much this depends on your socioeconomic status, we decided on creating a healthcare kiosk that can be used by people in developing nations. By designing an interface that focuses heavily on images, it can be understood by those who are illiterate, as is the case in many developing nations, and can bypass language barriers. This application is the perfect combination of all of our interests, and allows us to use tech for social good by improving accessibility in the healthcare industry. ## What it does Our service, Medi-Stand, is targeted towards residents of regions who will have the opportunity to monitor their health through regular self-administered check-ups. By creating healthy citizens, Medi-Stand has the potential to curb the spread of infectious diseases before they become a bane to society, and to build a more productive society. Health care reforms are becoming more and more necessary for third world nations to progress economically and move towards developed economies that place greater emphasis on human capital. We have also included supply-side policies and government injections through the integration of systems that streamline this process by creating a database and eliminating all paperwork, thus making the entire process smoother for both patients and doctors. This service will be available to patients through kiosks present near local communities to save their time and keep their health in check. The first time they use this system in a government-run healthcare facility, users can create a profile and upload health data that is currently on paper or scattered in emails all over the interwebs. By inputting this information manually into the database the first time, we can access it later using the system we’ve developed. Over time, the data can be inputted automatically using sensors on the kiosk and by the doctor during consultations; but this depends on 100% compliance. ## How I built it In terms of the UX/UI, this was designed using Sketch. Beginning with the creation of mock-ups on various sheets of paper, 2 members of the team brainstormed customer requirements for a healthcare system of this magnitude and what features we would be able to implement in a short period of time. After hours of deliberation and finding ways to present this, we decided to create a simple interface with 6 different screens that a user would be faced with. After choosing basic icons, a font that could be understood by those with dyslexia, and accessible colours (i.e. those that can be understood even by the colour blind), we had successfully created a user interface that could be easily understood by a large population. In terms of developing the backend, we wanted to create the doctor’s side of the app so that they could access patient information. It was written in Xcode and connects to a Firebase database that holds the patient’s information, simply displaying it visually on an iPhone emulator. The database entries were fetched in JSON notation, using requests.
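Since the write-up mentions fetching database entries as JSON using `requests`, here is roughly what that looks like against the Firebase Realtime Database REST API. The project URL, data path, and record shape are assumptions for illustration.

```python
import requests

FIREBASE_URL = "https://medistand-demo.firebaseio.com"   # hypothetical project URL

def get_patient(patient_id, auth_token=None):
    params = {"auth": auth_token} if auth_token else {}
    resp = requests.get(f"{FIREBASE_URL}/patients/{patient_id}.json",
                        params=params, timeout=10)
    resp.raise_for_status()
    # e.g. {"name": "...", "temperature": 37.2, "last_checkup": "..."}
    return resp.json()

record = get_patient("patient_001")
```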
In terms of the Arduino hardware, we used a Grove temperature sensor V1.2 along with a Grove base shield to read the values from the sensor and display them on the screen. The device has a detectable range of -40 to 150 °C and an accuracy of ±1.5 °C. ## Challenges I ran into When designing the product, one of the challenges we chose to tackle was an accessibility challenge. We had trouble understanding how we could make a healthcare product more accessible. Oftentimes, healthcare products exist from the doctor's side, and the patients simply take home their prescription and hold the doctors to an unnecessarily high level of expectations. We wanted to allow both sides of this interaction to understand what was happening, which is where the app came in. After speaking to our teammates, they made it clear that many of the people from lower income households in a developing nation such as India are not able to access hospitals due to the high costs, and cannot use other sources to obtain this information due to accessibility issues. I spent lots of time researching how to make this a user-friendly app and what principles other designers had incorporated into their apps to make them accessible. By doing so, we lost lots of time focusing more on accessibility than overall design. Though we adhered to the challenge requirements, this may have come at the loss of a more positive user experience. ## Accomplishments that I'm proud of For half the team, this was their first hackathon. Having never experienced the thrill of designing a product from start to finish, being able to turn an idea into more than just a set of wireframes was an amazing accomplishment that the entire team is proud of. We are extremely happy with the UX/UI that we were able to create given that this is only our second time using Sketch; especially the fact that we learned how to link screens and use transitions to create a live demo. In terms of the backend, this was our first time developing an iOS app, and the fact that we were able to create a fully functioning app that could demo on our phones was a pretty great feat! ## What I learned We learned the basics of front-end and back-end development as well as how to make designs more accessible. ## What's next for MediStand Integrate the various features of this prototype. How can we make this a global hack? MediStand is a private company that can begin to sell its software to governments (as these are the bodies that focus on providing healthcare). Finding more ways to make this product more accessible.
## Inspiration Recently we have noticed an influx of elaborate spam calls, emails, and texts. Although these phishing attempts are a mere inconvenience for a technologically literate native English speaker in Canada, for susceptible individuals falling for these attempts may result in heavy personal or financial loss. We aim to reduce this using our hack. We created PhishBlock to address the disparity in financial opportunities faced by minorities and vulnerable groups like the elderly, the visually impaired, those with limited technological literacy, and ESL individuals. These groups are disproportionately targeted by financial scams. The PhishBlock app is specifically designed to help these individuals identify and avoid scams. By providing them with the tools to protect themselves, the app aims to level the playing field and reduce their risk of losing savings, ultimately giving them the same financial opportunities as others. ## What it does PhishBlock is a web application that leverages LLMs to parse and analyze email messages and recorded calls. ## How we built it We leveraged the following technologies to create a pipeline that classifies potentially malicious emails from safe ones. Gmail API: Integrated reading a user’s email. Cloud Tech: Enabled voice recognition, data processing and training models. Google Cloud Enterprise (Vertex AI): Leveraged for secure cloud infrastructure. GPT: Employed for natural language understanding and generation. NumPy, Pandas: Data collection and cleaning. Scikit-learn: Applied for efficient model training. ## Challenges we ran into None of our team members had worked with Google’s authentication process and the Gmail API, so much of Saturday was devoted to hashing out technical difficulties with these things. On the AI side, data collection is an important step in training and fine-tuning, and assuring the quality of the data was essential. ## Accomplishments that we're proud of We are proud of coming together as a group and creating a demo of our project in such a short time frame. ## What we learned The hackathon was just one day, but we realized we could get much more done than we initially intended. Our goal seemed tall when we planned it on Friday, but by Saturday night all the functionality we imagined had fallen into place. On the technical side, we didn’t use any frontend frameworks and built interactivity the classic way, and it was incredibly challenging. However, we discovered a lot about what we’re capable of under extreme time pressures! ## What's next for PhishBlock We used the closed-source OpenAI API to fine-tune a GPT-3.5 model. This has obvious privacy concerns, but as a proof of concept it demonstrates the ability of LLMs to detect phishing attempts. With more computing power, open-source models could be used.
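Alongside the fine-tuned GPT model, the scikit-learn part of the pipeline can be sketched as a simple text classifier. The tiny inline dataset below is only a placeholder for the cleaned email/call corpus mentioned above, and the model choice (TF-IDF plus logistic regression) is one reasonable option rather than the project's exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; the real model is trained on the cleaned corpus
emails = [
    "Your account is locked, verify your password here immediately",
    "URGENT: wire transfer needed, reply with your banking details",
    "Lunch tomorrow at noon?",
    "Here are the meeting notes from Tuesday",
]
labels = [1, 1, 0, 0]   # 1 = phishing, 0 = safe

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

# Probability that a new message is phishing
print(clf.predict_proba(["Please confirm your password to avoid account suspension"])[:, 1])
```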
winning
## Inspiration On a night in January 2018, at least 7 students reported symptoms of being drugged after attending a fraternity party at Stanford [link](https://abc7news.com/2957171/). Although we are only halfway into this academic year, Stanford has already issued seven campus-wide reports about possible aggravated assault/drugging. This is not just a problem within Stanford: drug-facilitated sexual assault (DFSA) is a serious problem among teens and college students nationwide. Our project is deeply motivated by this saddening situation facing people around us at Stanford, and by the uneasiness caused by the possibility of experiencing such crimes. This project delivers SafeCup, a sensor-embedded smart cup that warns owners if their drink has been tampered with. ## What it does SafeCup is embedded with a simple yet highly sensitive electrical conductivity (EC) sensor which detects the concentration of total dissolved solids (TDS). Using an auto-ranging resistance measurement system, designed to measure the conductivity of various liquids, the cup takes several measurements within a certain timeframe and warns the owner by pushing a notification to their phone if it senses a drastic change in the concentration of TDS. This change signifies a change in the content of the drink, which can be caused by the addition of chemicals such as drugs. ## How we built it We used high-surface-area electrodes and a set of resistors to build the EC sensor, and an Arduino microcontroller to collect the data. The microcontroller then sends the data to a computer, which analyzes the measurements and performs the computation, and which then notifies the owner through "pushed", an API that sends push notifications to Android or iOS devices. ## Challenges we ran into The main challenge was getting a stable and accurate EC reading from the home-made sensor. EC depends on the surface area of and the distance between the electrodes, thus we had to design electrodes where the distance between them does not vary due to movement. Liquids can have a large range of conductivity, from 0.005 mS/cm to 5000 mS/cm. In order to measure the conductivity at the lower range, we increased the surface area of our electrodes significantly, to around 80 cm^2, while typical commercial TDS sensors are less than 0.5 cm^2. In order to measure such a large range of values, we had to design a dynamic auto-ranging system with a range of reference resistors. Another challenge was that we were unable to make our cup look more beautiful, or normal/party-like... This is mainly because of the size of the Arduino UNO board, which is hard to disguise under the size of a normal party Solo cup. This is why, after several failed cup designs, we decided to make the cup simple and transparent, and focus on demonstrating the technology instead of the aesthetics. ## Accomplishments that we're proud of We're most proud of the simplicity of the device. The device is made from commonly found items. This also means the device can be very cheap to manufacture. A typical commercial TDS measuring pen can be found for as low as $5, and this device is even simpler than a typical TDS sensor. We are also proud of the auto-ranging resistance measurement. Our cup is able to automatically calibrate to the new drink being poured in, to adjust to its level of resistance (note that different drinks have different chemical compositions and therefore have different resistance). This allows us to make our cup accommodate a wide range of different drinks (a sketch of this measure-and-alert loop appears at the end of this write-up).
We are also proud of finding a simple solution to notify users - developing an app would have taken away too much time that we could otherwise put into furthering the cup's hardware design, given a small team of just two first-time hackers. ## What we learned We learned a lot about Arduino development and circuits, and refreshed our knowledge of Ohm's law. ## What's next for SafeCup The prototype we've delivered for this project is definitely not a finished product that is ready to be used. We have not performed any tests on whether liquids from the cup are actually consumable, since the liquids have been in contact with non-food-grade metal and may undergo electrochemical transformation due to the potential applied to the liquid. Our next step would be to ensure consumer safety. A TDS sensor alone also might not be sensitive enough for liquids that already contain a high amount of TDS. Adding other simple complementary sensors can greatly increase the sensitivity of the device. Other simple sensors may include a dielectric constant sensor, a turbidity sensor, a simple UV-Vis light absorption sensor, or even simple electrochemical measurements. Other sensors such as a water level sensor could even be used to keep track of the amount of drink you have had throughout the night. We would also use a smaller-footprint microprocessor, which could greatly compact the device. In addition, we would like to incorporate wireless features that would eliminate the need to wire the cup to a computer. ## Ethical Implications For "Most Ethically Engaged Hack" We believe that our project could mean a lot to young people facing the risk of DFSA. These people, statistically, mostly consist of college students and teenagers who surround us all the time and are especially vulnerable to this type of crime. We have come a long way to show that the idea of using a simple TDS sensor to detect illegal drugging works. With future improvements in its appearance and safety, we believe it could be a viable product that improves the safety of many people around us in colleges and at parties.
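To make the voltage-divider and auto-ranging ideas above concrete, here is a small sketch of the math in Python. The supply voltage, reference resistor values, and example reading are illustrative, not the exact values used on the Arduino.

```python
# Sketch of the voltage-divider math behind the EC sensor and the auto-ranging
# idea: pick the reference resistor closest to the estimated liquid resistance
# so the divider stays sensitive. All constants here are illustrative.
V_SUPPLY = 3.3  # volts from the microprocessor pin (assumed)

REFERENCE_RESISTORS = [100, 1_000, 10_000, 100_000, 1_000_000]  # ohms (assumed)

def liquid_resistance(v_liquid: float, r_reference: float) -> float:
    """Unknown resistance from a divider: R_liquid = R_ref * V_liquid / (V_supply - V_liquid)."""
    return r_reference * v_liquid / (V_SUPPLY - v_liquid)

def pick_reference(estimated_r: float) -> float:
    """Auto-ranging: choose the reference closest to the estimated resistance."""
    return min(REFERENCE_RESISTORS, key=lambda r: abs(r - estimated_r))

# Example: a first reading of 2.9 V across the liquid with a 1 kOhm reference
r_est = liquid_resistance(2.9, 1_000)
print(round(r_est, 1), "ohms; next reference:", pick_reference(r_est))
```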
## Problem Statement As the elderly population constantly grows, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. Elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care settings, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This can have serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs. For the cloud-based machine learning algorithms, we used computer vision techniques, OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones as stand-ins for dedicated cameras to provide the live streams for real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges has been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions.
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** Working with different specialists on the project helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms, to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In that scenario, the alert would pop up privately on the user's phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
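As a very simplified stand-in for the detection step described above, here is an OpenCV sketch that flags a possible fall with a bounding-box aspect-ratio heuristic (a person lying down tends to be wider than tall). The real app relies on a model trained on pose data, and the alert hook here is only a placeholder.

```python
# Simplified stand-in for the fall-detection idea: background subtraction plus
# a bounding-box aspect-ratio heuristic. The trained pose model and the real
# notification pipeline are not shown; thresholds are illustrative.
import cv2

def notify_emergency_contact():
    print("ALERT: possible fall detected")  # placeholder for the real notification

cap = cv2.VideoCapture(0)                   # a phone stream stood in for a camera
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 5000:       # ignore small blobs of noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w > 1.5 * h:                     # wider than tall -> possible fall
            notify_emergency_contact()
    cv2.imshow("monitor", frame)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```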
## Inspiration As high school students, we relate to the issue of procrastination, especially when mobile devices are within close reach. Rather than deleting social media or downloading app blockers to limit distractions, our team was inspired to solve this issue at the root by rewiring students' relationship with tech. ## What it does Our product is designed to let YOU take charge of your relationship with your mobile devices. With *Rewire*, you keep your phone in front of a sensor that detects its presence. If you remove your phone from the product, a loud noise and a red light prompt you to put the device back and stay focused without distractions. ## How we built it We used an Arduino Uno and SparkFun Inventor's Kit to power our project. The ultrasonic sensor detects when your phone is placed in front of it or removed. *Rewire* alerts the user through a Piezo buzzer and colorful LEDs that are activated when specific distance conditions are met. ## Challenges we ran into As a team of beginner hackers, we wanted to step outside our comfort zone and create a project using hardware. After receiving parts from the hardware hub, we ran into the challenge of finding a problem to solve in this open-ended hackathon. We started to tackle this ideation challenge by speaking with mentors, who provided us with great advice. Ultimately, we overcame this challenge by choosing an issue that was relevant to us as high school students. ## Accomplishments that we're proud of We are proud of the perseverance that led us to successfully complete this project. Although we dealt with roadblocks such as miswired electrical components and bugs in our code, we remained persistent and worked together to achieve our goal. Also, we're so proud and excited to be a part of HTN 2024! ## What we learned We developed skills in C++ and refined our breadboard-wiring skills, overall combining software and hardware to create a final product. ## What's next for Rewire We plan to utilize buttons so users can input their required study hours, and the LCD display from the SparkFun Inventor's Kit can be used to show remaining hours as well as encouraging messages for users. In addition, we can implement productivity methods such as the Pomodoro Technique to take strategic breaks and boost productivity.
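The device itself runs this logic as an Arduino sketch in C++; the short Python sketch below only mirrors the decision flow from the distance conditions described above, with the threshold value chosen purely for illustration.

```python
# Python mirror of the Rewire decision flow (the real logic runs on the Arduino).
# The threshold and the simulated readings below are illustrative.
PHONE_PRESENT_CM = 10          # phone counts as "docked" within this range

ALERT = {"buzzer": True, "led": "red"}
OK = {"buzzer": False, "led": "green"}

def react(distance_cm: float) -> dict:
    """Map an ultrasonic distance reading to the buzzer/LED state."""
    return OK if distance_cm <= PHONE_PRESENT_CM else ALERT

for reading in [4.2, 5.0, 80.3]:   # simulated sensor readings in centimetres
    print(reading, react(reading))
```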
winning
## What it does KokoRawr at its core is a Slack App that facilitates new types of interactions via chaotic cooperative gaming through text. Every user is placed on a team based on their Slack username and tries to increase their team's score by playing games such as Tic Tac Toe, Connect 4, Battleship, and Rock Paper Scissors. Teams must work together to play. However, a "Twitch Plays Pokemon" sort of environment can easily be created where multiple people are trying to execute commands at the same time and step on each other's toes. Additionally, people can visualize the games via a web app. ## How we built it We jumped off the deep end into the land of microservices. We made liberal use of StdLib with node.js to deploy a service for every feature in the app, amounting to 10 different services. The StdLib services all talk to each other and to Slack. We also have a visualization of the game boards that is hosted as a Flask server on Heroku, which talks to the microservices to get information. ## Challenges we ran into * not getting our Slack App banned by HackPrinceton * having tokens show up correctly on the canvas * dealing with all of the madness of callbacks * global variables causing bad things to happen ## Accomplishments that we're proud of * actually playing games chaotically with each other on Slack * having actions automatically show up on the web app * The fact that we have **10 microservices** ## What we learned * the StdLib way of doing microservices * Slack integration * HTML5 canvas * how to have more fun with each other ## Possible Use Cases * Friendly competitive way for teams at companies to get to know each other better and learn to work together * New form of concurrent game playing for friend groups with "unlimited scalability" ## What's next for KokoRawr We want to add more games to play and expand the variety of visualizations that are shown to include more games. Some service restructuring would need to be done to reduce the Slack latency. Also, game state would need to be more persistent for the services.
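Below is a hedged sketch, in Python/Flask, of how one of these game services could accept a move from a Slack slash command (for example "/ttt B2"). Our actual services ran on StdLib with node.js, so the route name, team assignment, and in-memory board here are illustrative only.

```python
# Sketch of one game microservice accepting a Slack slash-command move.
# The route, team rule, and in-memory board are illustrative; the real
# deployment used StdLib services rather than a single Flask app.
from flask import Flask, request, jsonify

app = Flask(__name__)
board = {}   # cell -> team, e.g. {"B2": "red"}

def team_for(username: str) -> str:
    """Deterministically place a Slack user on a team from their username."""
    return "red" if hash(username) % 2 == 0 else "blue"

@app.route("/tictactoe/move", methods=["POST"])
def move():
    user = request.form.get("user_name", "anon")      # standard slash-command field
    cell = request.form.get("text", "").strip().upper()
    if cell in board:
        return jsonify(text=f"{cell} is already taken!")
    board[cell] = team_for(user)
    return jsonify(text=f"{user} played {cell} for team {board[cell]}")

if __name__ == "__main__":
    app.run(port=8000)
```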
## Summary OrganSafe is a revolutionary web application that tackles the growing health & security problem of the black marketing of donated organs. The verification of organ recipients leverages the Ethereum Blockchain to provide critical security and prevent improper allocation of such a pivotal resource. ## Inspiration The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is a significant demand for solving this problem, which impacts thousands of people every year who are struggling to find a donor for a much-needed transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place! ## What it does OrganSafe facilitates organ donations with authentication via the Ethereum Blockchain. Users can start by registering on OrganSafe with their health information and desired donation, and then the application's algorithms will automatically match users based on qualifying priority for available donations. Hospitals can easily track organ donations and record when recipients receive their donation. ## How we built it This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity + Web3.js for the Ethereum Blockchain. ## Challenges we ran into Some of the biggest challenges we ran into were connecting the different components of our project. We had three major components (frontend, backend, and blockchain) that were developed separately and needed to be integrated together. This turned out to be the biggest hurdle we needed to figure out. Dealing with the API endpoints and Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of backend development and setting up API endpoints. Without persistent data storage in the backend, we implemented basic storage using localStorage in the browser to support the user experience. This allowed us to implement a majority of our features as a temporary fix for our demonstration. Some other challenges we faced included figuring out certain syntactical elements of the new technologies we dealt with (such as using Hooks and state in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology! ## Accomplishments that we're proud of One notable accomplishment is that every member of our group interfaced with new technology that we had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3.0 technology such as the Ethereum Blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have the time to complete due to the scope of TreeHacks, we were still proud of being able to put together a minimum viable product in the end!
## What we learned * Fullstack Web Development (with React.js frontend development and Python Flask backend development) * Web3.0 & Security (with Solidity & the Ethereum Blockchain) ## What's next for OrganSafe After TreeHacks, OrganSafe will first look to tackle some of the potential areas that we did not get to finish during the Hackathon. Our first step would be to finish developing the full-stack web application we intended, by fleshing out our backend and moving forward from there. Persistent user data in a database would also allow users and donors to continue to use the site even after an individual session. Furthermore, scaling both the site and the blockchain for the application would allow greater usage by a larger audience, allowing more recipients to be matched with donors.
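To illustrate the "matching by qualifying priority" idea mentioned above, here is a small Python sketch. The fields and the priority rule are assumptions for the example; the real flow would also record the match on the Ethereum contract, which is omitted here.

```python
# Sketch of priority-based recipient matching. Fields and the scoring rule are
# illustrative; writing the match to the blockchain is not shown.
from dataclasses import dataclass

@dataclass
class Recipient:
    name: str
    organ_needed: str
    blood_type: str
    urgency: int        # higher = more urgent (assumed scale)
    days_waiting: int

def match(donated_organ: str, blood_type: str, recipients: list):
    """Pick the compatible recipient with the highest priority."""
    compatible = [r for r in recipients
                  if r.organ_needed == donated_organ and r.blood_type == blood_type]
    if not compatible:
        return None
    return max(compatible, key=lambda r: (r.urgency, r.days_waiting))

waitlist = [Recipient("A", "kidney", "O+", urgency=3, days_waiting=120),
            Recipient("B", "kidney", "O+", urgency=5, days_waiting=40)]
print(match("kidney", "O+", waitlist))   # -> Recipient B (higher urgency)
```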
## Introduction Imagine having a trusted companion who listens to your thoughts, helps you reflect on your day, and supports your journey to improved mental health. Introducing Reflexion Buddy, your AI-based journaling partner bringing increased enjoyment and accessibility to the journaling process! Our project is designed to empower individuals to embrace the benefits of reflection through consistent journaling. We believe that taking time to reflect is a powerful tool for enhancing mental well-being and personal growth, an approach backed by science and used by countless successful people who credit journaling for their success, productivity, and overall well-being. ## Inspiration The inspiration behind Reflexion Buddy comes from the increasing importance of mental health and well-being in today's fast-paced world. We live in an age where stress, anxiety, and burnout are prevalent, and many people struggle to find time for self-reflection. We wanted to create a solution that encourages individuals to prioritize their mental health by making journaling a seamless, accessible, and enjoyable experience. ## What it does Reflexion Buddy offers a seamless journaling experience by transforming spoken words into written entries through Voice-to-Text Journaling. It supports multiple languages and goes beyond mere transcription: it also engages in meaningful two-way conversations. It then takes journaling a step further by summarizing these entries, inferring key themes, and generating images that encapsulate the main ideas. Beyond documentation, our AI system also performs emotion/sentiment analysis to help users track their emotional journey over time. These insights are organized into a personal digital diary, providing users with a convenient way to reflect on their thoughts, experiences, and personal growth. Reflexion Buddy has the following features to improve both the accessibility and personal impact of journaling: * Language picker: To allow users of any language background to use the app * Speech-to-text: For those who have difficulty typing/writing, or just enjoy voice-based journaling more * Response interaction: To provide insightful questions and feedback on the journal * Image generation: To make the journaling experience more enjoyable and visual, thereby increasing the probability that someone will journal * Text-to-speech: Reads responses out loud to users With these innovations, we are confident that users' mental health and well-being will be positively affected, and the journaling process will become more accessible. ## How we built it We harnessed a diverse array of technologies to create a comprehensive journaling platform. Powered by Streamlit and coded entirely in Python, our project integrates five essential AI components: IBM Speech to Text for effortless voice-to-text transcription, GPT-4 Chat for engaging conversational capabilities, GPT-4 Summarize for insightful content condensation, DALL-E/Stable Diffusion for generating meaningful visual representations, and Google Text-to-Speech for interactive communication. We leveraged IBM Watson Natural Language Understanding for sentiment analysis, allowing users to track their emotions over time. Additionally, MATLAB was employed to craft intuitive mood visualizations. This amalgamation of AI, programming proficiency, and data visualization expertise has culminated in Reflexion Buddy, a versatile and user-centric journaling solution. ## Challenges we ran into
* **Initial Idea:** + Challenge: AI medical assistant limitations. * **Pivot to Mental Health Support:** + Challenge: Finding a reliable LLM model for mental health. * **Technical Challenges:** + Setting up IBM Cloud and understanding the IBM Speech to Text API + Incorporating the IBM Natural Language Understanding API. + Using the OpenAI API effectively. + Providing context and intent to GPT-4. + Generating meaningful images from text. + Integrating different components. + Creating a PDF from the conversation data. ## Accomplishments that we're proud of We are proud of completing the development of all the features on time. We are also proud of how we pivoted to a better idea and collaborated seamlessly as teammates by dividing the work efficiently. We learned a lot about various APIs and different technologies. ## What we learned We've learned the importance of adaptability and pivoting in response to challenges. We gained hands-on experience in integrating multiple AI technologies, cloud services, and sentiment analysis, strengthening our skills in AI development and interdisciplinary collaboration. ## What's next for Reflexion Buddy We envision transforming it into a comprehensive mobile application. Users will have the opportunity to create individual accounts, providing them with a personal diary that spans 365 days. This extended functionality will allow users to maintain a year-long record of their thoughts, emotions, and personal growth while benefiting from AI-powered features for enhanced mental well-being. Additionally, we plan to further enhance its capabilities by incorporating more advanced AI models, expanding language support, and refining the user experience. We aim to integrate additional features for personalized mental health insights, such as mood trend analysis and actionable recommendations. We also plan to explore partnerships with mental health professionals to ensure Reflexion Buddy becomes a valuable tool for individuals seeking emotional well-being.
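For a sense of how the Streamlit flow ties the pieces together, here is a minimal sketch. The three helper functions are placeholders standing in for the IBM Speech to Text, GPT-4, and Watson NLU calls we wired up; only the Streamlit calls themselves are meant literally.

```python
# Minimal sketch of the Streamlit journaling loop. transcribe(), reflect(), and
# sentiment() are placeholders for the real speech-to-text, GPT-4, and Watson
# NLU integrations.
import streamlit as st

def transcribe(audio_bytes: bytes) -> str:   # placeholder for IBM Speech to Text
    return "Today was stressful but I finished my project."

def reflect(entry: str) -> str:              # placeholder for the GPT-4 chat call
    return "What part of finishing the project felt most rewarding?"

def sentiment(entry: str) -> dict:           # placeholder for Watson NLU
    return {"joy": 0.6, "sadness": 0.2}

st.title("Reflexion Buddy")
entry = st.text_area("Write (or dictate) today's entry")
if st.button("Reflect") and entry:
    st.write("Buddy asks:", reflect(entry))
    st.json(sentiment(entry))
```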
winning
**OUR PROJECT IS NOT COMPLETE** ## Inspiration Due to the pandemic, lots of people order in food instead of going to restaurants to be safe. There are many popular food delivery applications available, and a lot of people scroll through multiple apps to search for the cheapest price for the same items. It is always nice to save money, and our app can definitely help people with this. Our proof-of-concept application utilizes dummy data from our own database. The reason for this is that there is a lack of publicly available APIs to gather the required food delivery company information. ## What it does The user enters a delivery address, and this gets a list of restaurants. Then, the user selects a restaurant, selects the menu items and the quantity of each item, and then they will be able to see a price breakdown and the price total between the available food delivery services. ## How We built it We decided to create a Flutter application to challenge ourselves. None of us had worked with Flutter and the Dart language before, and this was a fun and difficult process. Three of us developed the frontend. The backend was created using Express.js, with the database on Google Cloud SQL and the server hosted on Heroku. One of us developed the backend (which was amazing!) ## Challenges We ran into As we were all unfamiliar with Dart and Flutter, development took us more time than it would have with a familiar tool. The time pressure of the hackathon was also a challenge. Although we didn't finish on time, this was nonetheless a wonderful experience developing something cool with a new technology. ## Accomplishments that We're proud of We are proud to have learned a bit about Dart and Flutter, and to have been able to develop for most of the hackathon. We accomplished a lot, but if we had more time, we could have finished the project. ## What We learned Dart and Flutter. Working with API calls in Dart. ## What's next for our App There are a few features on the roadmap; if we were to continue working on this app, we would: * add promotions. This is a key feature because the prices between the services vary greatly if promotions are taken into account * add login functionality * web scrape (or find publicly available APIs for) popular food delivery services and obtain real data to utilize * add images to restaurants and menu items
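The app itself is Flutter/Dart with an Express.js backend; the short Python sketch below only illustrates the comparison logic that runs on the dummy data. The fee structures are made up for the example.

```python
# Illustration of the price-comparison logic (the real app is Flutter/Dart with
# an Express.js backend). Fee structures below are fabricated for the example.
SERVICES = {
    "ServiceA": {"delivery_fee": 2.99, "service_rate": 0.12},
    "ServiceB": {"delivery_fee": 0.00, "service_rate": 0.18},
}

def total(cart: dict, fees: dict) -> float:
    """cart maps item name -> (unit price, quantity)."""
    subtotal = sum(price * qty for price, qty in cart.values())
    return round(subtotal * (1 + fees["service_rate"]) + fees["delivery_fee"], 2)

cart = {"Pad Thai": (12.50, 1), "Spring Rolls": (5.00, 2)}
breakdown = {name: total(cart, fees) for name, fees in SERVICES.items()}
cheapest = min(breakdown, key=breakdown.get)
print(breakdown, "-> cheapest:", cheapest)
```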
## Inspiration Everybody eats, and in college, if you are taking difficult classes, it is often challenging to hold a job. Therefore, as college students we have no income during the year. Our inspiration came as we moved off campus this year to live in apartments with full kitchens but lacked the funds to make complete meals at a reasonable price. So along came the thought that we couldn't be the only ones with this issue: "what if we made an app where all of us could connect via a social media platform and share our meals with a price range attached, so that we don't have to come up with good cost-effective meals on our own". ## What it does Our app connects college students, or anyone looking for a great way to find good cost-effective meals without coming up with the meals on their own, by allowing everyone to share their meals and create an abundant database of food. ## How we built it We used Android Studio to create the application and tested the app using the built-in emulator to see how the app was coming along when viewed on the phone. Specifically, we used an MVVM design to interweave the functionality with the graphical display of the app. ## Challenges we ran into The backend that we were familiar with ended up not working well for us, so we had to transition over to another backend provider called back4app. We were also challenged by the user's personal view and being able to save the user's data at all times. ## Accomplishments that we're proud of We are proud of all the work that we put into the application in a very short amount of time, while learning how to work with a new backend at the same time so that everything worked as intended. We are proud of the process and organization we had throughout the project, beginning with a wireframe and building our way up part by part until the finished project. ## What we learned We learned how to work with drop-down menus to hold multiple values of possible data for the user to choose from. And one of our group members learned how to work with app development at full scale. ## What's next for Forkollege In version 2.0 we plan on implementing a better settings page that allows the user to change their password. We also plan on fixing the "for you" page: for each recipe displayed, we were not able to come up with a way to showcase the number of $ signs and instead opted for using stars again. As an outside user this is a little confusing, so updating this aspect is of the utmost importance.
## Inspiration The first step of our development process was conducting user interviews with university students within our social circles. When asked about recently developed pain points, 40% of respondents stated that grocery shopping has become increasingly stressful and difficult with the ongoing COVID-19 pandemic. The respondents also stated that some motivations included a loss of disposable time (due to an increase in workload from online learning), tight spending budgets, and fear of exposure to COVID-19. While developing our product strategy, we realized that a significant pain point in grocery shopping is the process of price-checking between different stores. This process requires the user to visit each store (in person and/or online), check the inventory, and manually compare prices. Consolidated platforms to help with grocery list generation and payment do not exist in the market today - as such, we decided to explore this idea. **What does G.e.o.r.g.e stand for?: Grocery Examiner Organizer Registrator Generator (for) Everyone** ## What it does The high-level workflow can be broken down into three major components: 1: Python (Flask) and Firebase backend 2: React frontend 3: Stripe API integration Our backend Flask server is responsible for web scraping and generating semantic, usable JSON for each product item, which is passed through to our React frontend. Our React frontend acts as the hub for tangible user-product interactions. Users are given the option to search for grocery products, add them to a grocery list, generate the cheapest possible list, compare prices between stores, and make a direct payment for their groceries through the Stripe API. ## How we built it We started our product development process by brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment service application, we drew up designs and prototyped using Figma, then implemented the frontend designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data. ## Challenges we ran into Once we had finished coming up with our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs to access price data for grocery stores, so we decided to do our own web scraping. This led to complications with slower server responses, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which would allow us to flesh out our end-to-end workflow. ## Accomplishments that we're proud of Some of the websites we had to scrape had lots of information to comb through, and we are proud of how we picked up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that included even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to utilize real money with our app. ## What we learned We picked up skills such as web scraping to automate the process of parsing through large data sets.
Web scraping dynamically generated websites can also lead to slow server response times that are generally undesirable. It also became apparent to us that we should have set up virtual environments for our Flask applications so that team members do not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3am will make you want to pull out your hair, but at least we now know that it can be done :’) ## What's next for G.e.o.r.g.e. Our next steps with G.e.o.r.g.e. would be to improve the overall user experience of the application by standardizing our UI components and UX workflows with e-commerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as to create more seamless payment solutions.
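Here is a condensed sketch of the scrape-and-compare step described above. The URL pattern and CSS selectors are placeholders, since every vendor's page is structured differently, but the requests/BeautifulSoup calls are standard.

```python
# Condensed sketch of scraping a price per vendor and keeping the cheapest.
# The vendor URLs and the ".product-price" selector are placeholders.
import requests
from bs4 import BeautifulSoup

VENDORS = {
    "VendorA": "https://example.com/vendor-a/search?q={query}",   # hypothetical
    "VendorB": "https://example.com/vendor-b/search?q={query}",   # hypothetical
}

def cheapest_price(query: str):
    best = None
    for vendor, url in VENDORS.items():
        html = requests.get(url.format(query=query), timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        tag = soup.select_one(".product-price")      # placeholder selector
        if tag is None:
            continue
        price = float(tag.get_text(strip=True).lstrip("$"))
        if best is None or price < best[1]:
            best = (vendor, price)
    return best

print(cheapest_price("oat milk"))
```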
losing
## Inspiration We wanted to help hostage negotiators train for real-life hostage situations by utilizing emotion- and voice-detecting AI to create realistic scenarios that the user can navigate by talking the situation down. ## What it does Users are given thirty seconds to talk down a suspect who has taken a location hostage. The suspect has secret objectives that the player needs to discover by talking to them. How the suspect reacts depends on the user's words as well as their emotions: Hume interprets both and decides the suspect's response. ## How we built it We created a Python backend to power the AI features of our Unity game, specifically using Hume AI to power text-to-speech, emotion detection, and audio transcription. We also used context injection to modify the Hume responses according to our scoring algorithm, which decides how well the user is doing. ## Challenges we ran into Integrating everything together proved to be a challenge, as some components didn't work well with others, so we had to go back and modify code to work with other sections of the game. ## Accomplishments that we're proud of Being able to get Hume's emotion detection to work in parallel with the context injection was a great moment for us. ## What we learned Going into the project, our team had very little experience with any of the tech used in our stack, so we all had to learn Unity, Hume, C#, and more, which was a fun experience for our team. ## What's next for The Situation Room We plan on adding more levels, more difficulty settings, and more detailed animations to the game next, to make it even more useful for real-life training.
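To show how the scoring and context injection could fit together, here is a small Python sketch. The emotion names, weights, and prompt wording are illustrative, and the Hume call itself is not shown; assume `emotions` is a score dictionary already parsed from a Hume response.

```python
# Sketch of scoring + context injection. Emotion names/weights and the prompt
# wording are illustrative; parsing the Hume response is assumed to have
# happened upstream.
CALMING = {"calmness": 1.0, "empathic pain": 0.5}
ESCALATING = {"anger": -1.0, "contempt": -0.8, "fear": -0.4}

def negotiation_score(emotions: dict) -> float:
    """Positive = the player is de-escalating, negative = escalating."""
    weights = {**CALMING, **ESCALATING}
    return sum(weights.get(name, 0.0) * value for name, value in emotions.items())

def inject_context(base_prompt: str, score: float) -> str:
    """Steer the suspect's next reply based on how well the player is doing."""
    mood = "start to trust the negotiator" if score > 0 else "grow more agitated"
    return f"{base_prompt}\nThe suspect should {mood} in the next reply."

emotions = {"calmness": 0.7, "anger": 0.2}   # example parsed scores
print(inject_context("You are a suspect holding a bank.", negotiation_score(emotions)))
```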
## Inspiration We would like to foster great communication between two people, whether that be friends, partners, or family members. With this lovely two-person game, players can improve their relationships, since they build trust in each other by completing the maze missions together. ## What it does It is a virtual reality game which requires one player to see through the VR glasses and the other player to control the movement of the character in the game. The player with the VR glasses needs to give clear instructions to the other player with the controller in order to walk to the end of the maze. ## How we built it We built it primarily with Google's Firebase, a mobile application development platform, which powers the mobile controller. For the VR part of the game, we used Unity to create the maze environments. ## Challenges we ran into Sometimes, it was hard to compile the Unity code onto the mobile phone. However, with our patience and problem-solving skills, we eventually made the game work! ## Accomplishments that we're proud of We are very proud that our team could learn how to use Unity and create a game from scratch within 36 hours, since no one on the team had experience with Unity or Firebase beforehand. ## What we learned We learned how to use Unity and Firebase. Also, we further improved our interpersonal skills, i.e. teamwork. People tend to be more emotional when working at night and pulling all-nighters. Nonetheless, we were able to control our emotions and create a game together. ## What's next for Our Game Since there was a time limit, Our Game has not reached its full potential yet. If more time is allowed in the future, our team will add more details and features to further enhance the users' experience with the game.
## Inspiration Many of us have a hard time preparing for interviews, presentations, and other social situations. We wanted to sit down and have a real talk... with ourselves. ## What it does The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback and a complete rundown of your results after you're done. ## How We built it We used Flask for the backend, with OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API to perform all of the background analyses. On the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations. ## Challenges we ran into We had some difficulties on the backend integrating video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real time. ## Accomplishments that we're proud of We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches. ## What we learned We learned that planning ahead is very effective, because we had a very smooth experience for the majority of the hackathon since we knew exactly what we had to do from the start. ## What's next for RealTalk We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server. ![Image](https://i.imgur.com/aehDk3L.gif) Tracks movement of hands and face to provide real-time analysis of expressions and body language. ![Image](https://i.imgur.com/tZAM0sI.gif)
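For the multithreading layout mentioned in the challenges, here is a minimal Python sketch of running video analysis and audio transcription on separate threads that write into shared results. The two worker bodies are placeholders for the OpenCV/TensorFlow and Google Speech-to-Text code.

```python
# Sketch of the video + audio multithreading layout. Both worker bodies are
# placeholders for the real analysis code.
import threading
import time

results = {"gestures": [], "transcript": []}
lock = threading.Lock()

def analyze_video():
    for i in range(3):                  # placeholder for frame-by-frame analysis
        time.sleep(0.1)
        with lock:
            results["gestures"].append(f"frame {i}: hands visible")

def transcribe_audio():
    for i in range(3):                  # placeholder for streaming transcription
        time.sleep(0.15)
        with lock:
            results["transcript"].append(f"chunk {i}: ...")

threads = [threading.Thread(target=analyze_video), threading.Thread(target=transcribe_audio)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```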
losing
## Inspiration Augmented reality games (e.g. Pokemon Go) inspired us to make a game that lets users interact with their environment while having fun. With our app, we intend to simplify and revolutionize scavenger hunts (and Easter egg hunts!) ## What it does ScavaHunt is a scavenger hunt game on your phone. You choose your category (school, home, etc.) and the app will ask you to find a specific item related to your category. You have a fixed amount of time to find the items, and each correct answer gives you points. You have infinitely many tries and you can skip items, but be fast because the clock is ticking! ## How we built it This is an Android app built using Java and integrated with the Watson Visual Recognition API to validate each image. ## Challenges we ran into * A screen overflow problem: we instantiated too many screens but didn't clear them once we were done with the activities * Passing data across activities * Initial setup issues with Android Studio and merge conflicts ## Accomplishments that we are proud of Being able to parse data from the Watson Visual Recognition API and making our game work. The timer in the app needed to persist even while the player is taking a picture with the camera app. The timer also needed to be paused while the app is making API calls to Watson, since a long processing time would make it unfair to players. ## What we learned * Android Studio and Java (front end and back end) * ML image recognition with the Watson Visual Recognition API (and its limitations) * Solving git merge conflicts ## What's next for ScavaHunt Multiplayer, more categories and/or themed categories (e.g. Easter egg hunts with our app!)
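The app itself is Android/Java; the small Python sketch below only shows the validation idea, which is to accept a photo if the target item appears among the classifier's labels above a confidence threshold. The response shape shown is an assumption for illustration, not the exact Watson Visual Recognition schema.

```python
# Illustration of the image-validation rule (the real app does this in Java).
# The label/score structure below is assumed, not the exact Watson schema.
def is_match(target: str, classifier_labels: list, threshold: float = 0.6) -> bool:
    return any(
        target.lower() in label["class"].lower() and label["score"] >= threshold
        for label in classifier_labels
    )

labels = [{"class": "coffee mug", "score": 0.82}, {"class": "table", "score": 0.55}]
print(is_match("mug", labels))   # True -> award points and move to the next item
```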
# Mode - nwHacks 2023 ![Intro](https://res.cloudinary.com/devpost/image/fetch/s--atMxOlmh--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/v1/./START.png) ## Devpost ## Instructions 1. cd into mode/frontend 2. yarn 3. yarn start 4. You are good to go! ## Inspiration To build an economically and environmentally competent solution to combat the growing fast fashion industry. ## What it does Mode serves as a marketplace for users around the world to upload their second-hand clothes, minimizing the accumulation of clothing waste and increasing the longevity of garment pieces. To preserve the exclusivity of owning the clothes, we rely on Verbwire to mint the clothes uploaded by a user and store them in their own digital wallets. ## How we built it We went from low-fi to mid-fi to high-fi designs, building the backend and frontend asynchronously with a final merge, and a lot of challenging issues and learning moments along the way. ## Challenges we ran into 1. Accessing the Verbwire API 2. Deployment for the backend ## Accomplishments that we're proud of We managed to get it up and working through the unforeseen obstacles. ## What we learned Even when you've done something countless times, when it goes wrong it can still be very costly. Never be complacent. ## What's next Polishing the web app, deploying it more securely, and adding & polishing features.
## Inspiration In a world where a tweet out of context can cost you your career, it is increasingly important to be in the right, but this rigidity alienates a productive and proud group of people in the world--the impulsive. Politically Correct is a solution for those who would risk a slap for a laugh and who would make light of a dark situation. The question of whether we have stepped too far over the line often comes into our minds, sparking the endless internal debate of "Should I?" or "Should I not?" Politically Correct, leveraging both artificial and natural intelligence, gives its users the opportunity to get safe and effective feedback to end these constant internal dialogues. ## What it does Through a carefully integrated messaging backend, this application utilizes Magnet's API to send the text that the user wants to verify to a randomly selected group of users. These users express their opinion of the anonymous user's statement, rating it as acceptable or unacceptable. This application enhances the user's experience with a seamless graphical interface: a "Feed" gives the user messages from others to judge, and "My Questions" allows users to receive feedback. The machine learning component, implemented and ready to be rolled out in "My Questions", will use an Azure-based logistic regression to automatically classify text as politically correct or incorrect. ## How I built it Blood, sweat, tears, and Red Bull were the fuel that ignited Politically Correct. Many thanks to the kind folks from Magnet and Azure (Microsoft) for helping us early in the morning or late at night. For the build we utilized the Magnet SDK to enable easy in-app sending and receiving of messages between a user and a random sample of users. With the messages, we added and triggered a message 'send-event' based on the click of a judgement button or an ask button. When a message was received, we sorted the message (either a message to be judged or a message that is a judgement). To ensure that all judgement messages corresponded to the proper question messages, we used special hash ids and stored these ids in serialized data. We updated the Feed and the MyQuestions tab on every message received. For Azure we used logistic regression and a looooooooonnnnnnnnggggggg list of offensive and not offensive phrases. Then, after training on the set to create a model, we set up a web API that is called by Politically Correct to get an initial sentiment analysis of the message. ## Challenges We ran into Aside from multiple attempts at putting foot in mouth, the biggest challenges came from both platforms: **Azure**: *Perfectionism* While developing a workflow for the app, the question of "How do I accurately predict the abuse in a statement?" often arose. As this challenge probably provokes similar doubts in Ph.D.s, we would like to point to perfectionism as the biggest challenge with Azure. **Magnet:** *Impatience* Ever the victims, we like to blame companies for putting a lot of words in their tutorials because it makes it hard for us to skim through (we can't be bothered with learning, we want to DO!!). The tutorials and documentation provided the support and gave all of us the ability to learn to sit down and iterate through a puzzle until we understood the problem. It was difficult to figure out the format in which we would communicate the judgement of the random sample of users.
## Accomplishments that I'm proud of We are very proud of the fact that we have a fully integrated Magnet messaging API, and of the clean implementation of the database backend. ## What I learned Aside from the two virtues of "good enough" and "patience", we learned how to work together, how to not work together, and how to have fun (in a way that sleep deprivation can allow). In the context of technical expertise (which is what everyone is going to be plugging right here), we gained a greater depth of knowledge of the Magnet SDK and of how to create a workflow and API on Azure. ## What's next for Politically Correct The future is always amazing, and the future for Politically Correct is better (believe it or not). The implementation of Politically Correct enjoys the partial integration of two amazing technologies, Azure and Magnet, but the full assimilation (we are talking Borg level) would result in the fulfillment of two goals: 1) Dynamically learn the offensiveness of specific language by accounting for people's responses to the message. 2) Allow integrations with various multimedia sites (e.g. Facebook and Twitter) to include an automatic submission/decline feature when there is a consensus on the statement.
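As a hedged sketch of how the app's backend could call the published Azure ML scoring web service mentioned above, here is a small Python example. The endpoint URL, key, and payload/response shapes are placeholders, since the exact schema depends on how the experiment's web service is published.

```python
# Hedged sketch of calling the Azure ML scoring endpoint. The URL, key, and
# payload/response paths below are assumptions for illustration only.
import requests

SCORING_URL = "https://example.azureml.net/score"   # placeholder endpoint
API_KEY = "REPLACE_ME"                               # placeholder key

def offensive_probability(statement: str) -> float:
    payload = {"Inputs": {"input1": [{"text": statement}]}}   # assumed request shape
    resp = requests.post(
        SCORING_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()   # response shape depends on the published experiment
    return float(result["Results"]["output1"][0]["Scored Probabilities"])  # assumed path

print(offensive_probability("That joke might not land well..."))
```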
losing
## Inspiration Our families tend to trade cars a lot, and all too frequently we end up on Kijiji spending hours searching for the best car deals. We wanted to build a tool that would help save us time while searching for cars. ## What it does Bang 4 Buck is a browser extension that allows you to search for the best deals on our favourite second-hand marketplace -- Kijiji. To search for deals, you navigate to the Kijiji website and search for the car you are looking for. Optionally, you can include any filters you want, such as price range, vehicle model, kilometer range, etc. When you have selected all the filters, you open up the Bang 4 Buck extension and hit "Find Deals". This returns all the best deals for your given filter, which you can navigate through by hitting "Next" or "Previous" to be redirected to the ad for that vehicle. ## How we built it We used JS, HTML, and CSS for the Chrome extension popup window in order to interact with different parts of the browser and make HTTP requests. The Chrome extension sends the URL for the first web page of vehicle results you are filtering for over HTTP to a backend server. We used a Flask web server to handle incoming HTTP requests from the browser extension. The Flask server used aiohttp to make many asynchronous HTTP requests to many pages for the given vehicle filter. Once all the data had been scraped from each page, we collected the necessary data from each vehicle ad, such as vehicle year, total kilometers driven, and the cost. We could then use the year and kilometers driven to calculate a "life left score", and divide this by the cost of the vehicle to calculate a final score for the vehicle. These would be sorted from highest to lowest, and the corresponding URLs would be sent back to the Chrome extension and saved in cache so the user could navigate through the URLs and see the best deals. ## Challenges we ran into We had never developed a Chrome extension, and some of the nuances of Chrome extensions were not quite as similar to vanilla HTML, CSS, and JS as we initially thought. Additionally, making many requests from the web server to scrape information from Kijiji was initially very slow, which we improved by making the HTTP requests concurrently. ## Accomplishments that we're proud of This is a tool that is genuinely useful for us and our families, so we are glad that we were able to build something during the hackathon that will be useful and not thrown away the next day. ## What we learned How to build Chrome extensions, how to use Flask, and how to make a lot of concurrent HTTP requests. ## What's next for Bang 4 Buck We want to implement sentiment analysis on the description of each vehicle. This would use a neural network to look for keywords to give a better indication of the lifetime left for the car, which could give us more accurate results.
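Here is a sketch of the asynchronous fetch plus scoring flow described above. The page URLs and the ad-parsing step are left out, and the "expected lifetime" constants in the score are illustrative rather than the exact values the server uses.

```python
# Sketch of the aiohttp fetch pattern plus the "life left / price" score.
# Page URLs, ad parsing, and the lifetime constants are placeholders.
import asyncio
import aiohttp

EXPECTED_LIFETIME_KM = 300_000     # assumed
EXPECTED_LIFETIME_YEARS = 20       # assumed

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.text()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))

def score(year: int, km: int, price: float, current_year: int = 2024) -> float:
    """'Life left' relative to price: higher means a better deal."""
    life_left = min(1 - km / EXPECTED_LIFETIME_KM,
                    1 - (current_year - year) / EXPECTED_LIFETIME_YEARS)
    return max(life_left, 0) / price

# pages = asyncio.run(fetch_all(list_of_result_page_urls))  # parse ads from pages here
print(score(2018, 80_000, 15_000), score(2012, 200_000, 6_000))
```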
## Inspiration Our project inspiration derives from our everyday life as make-ends-meet college students. Groceries, gas prices, rents, and education costs have been sky-rocketing, and we are trying our best to survive. ## What it does *Price Ain't Right* seeks to find only the best, most reasonable, in-your-budget deals from the big online retailers. We also have features like price-tracking notifiers and eco-friendly purchases planned for future updates. ## How we built it As a team of two, each of us took on essential roles responsible for the final product. We built our front-end website with React.js, along with JS, CSS, HTML, coffee, tears, and a lot of red debugging screens. The backend wasn't fun either, as the majority of our time was spent figuring out why Amazon was denying our API requests, and we came to the disappointing conclusion that they had put measures in place to prevent people from keeping track of their data. ## Challenges we ran into There were a lot of framework problems with React.js on the front-end, as many of the libraries were recently updated, e.g. react-router-dom, most notably the change from Switch to Routes. Also, a lot of the time the website goes blank, and then it doesn't. It's rather interesting tracking down these bugs. Additionally, centering div after div and formatting them was really time-consuming. For our backend, the API scrapers have hard rate limits for most of the online retail companies, which made our data-collecting process a huge pain since we mainly used the Flask and BeautifulSoup libraries. ## Accomplishments that we're proud of The website works, and we can pull data from the retail websites and correctly display it in the order we want. ## What we learned Frameworks are difficult to manage, Selenium is better than BeautifulSoup, we should've learned Angular, and centering multiple divs with backgrounds is really tedious and eye-damaging. ## What's next for Price Ain't Right We will be releasing more features in the next updates, some of which include education deals and eco-friendly deals on various websites.
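Since dynamically rendered retail pages were the pain point, here is a small Selenium sketch of the approach we concluded works better than plain BeautifulSoup. The URL and CSS selectors are placeholders for whatever retailer page is being scraped.

```python
# Sketch of scraping a dynamically rendered results page with Selenium and
# sorting by price. The URL and selectors are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example-retailer.com/search?q=laptop")   # placeholder URL

deals = []
for card in driver.find_elements(By.CSS_SELECTOR, ".product-card"):   # placeholder selector
    name = card.find_element(By.CSS_SELECTOR, ".title").text
    price = float(card.find_element(By.CSS_SELECTOR, ".price").text.lstrip("$"))
    deals.append((price, name))
driver.quit()

for price, name in sorted(deals)[:5]:   # five cheapest results
    print(f"${price:.2f}  {name}")
```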
## Inspiration One of the most exciting parts about hackathons is the showcasing of the final product, well-earned after hours upon hours of sleep-deprived hacking. Part of the presentation work lies in the Devpost entry. I wanted to build an application that can rate the quality of a given entry to help people write better Devpost posts, which can help them better represent their amazing work. ## What it does The Chrome extension can be used on a valid Devpost entry web page. Once the user clicks "RATE", the extension will automatically scrape the relevant text and send it to a Heroku Flask server for analysis. The final score given to a project entry is an aggregate of many factors, such as descriptiveness, the use of technical vocabulary, and the score given by an ML model trained against thousands of project entries. The user can use the score as a reference to improve their entry. ## How I built it I used UiPath as an automation tool to collect, clean, and label data across thousands of projects in major hackathons over the past few years. After getting the necessary data, I trained an ML model to predict the probability of a given Devpost entry being amongst the winning projects. I also used the data to calculate other useful metrics, such as the distribution of project entry lengths, the average amount of terminology used, etc. These models are then uploaded to a Heroku cloud server, where I can get aggregated ratings for texts using a web API. Lastly, I built a JavaScript Chrome extension that detects Devpost web pages, scrapes data from the page, and presents the ratings to the user in a small pop-up. ## Challenges I ran into Firstly, I am not familiar with website development. It took me a hell of a long time to figure out how to build a Chrome extension that collects data and uses external web APIs. The data collection part was also tricky. Even with great graphical automation tools at hand, it was still very difficult to do large-scale web scraping for someone relatively inexperienced with website dev like me. ## Accomplishments that I'm proud of I am very glad that I managed to finish the project on time. It was quite an overwhelming amount of work for a single person. I am also glad that I got to work with data from absolute scratch. ## What I learned Data collection, hosting an ML model in the cloud, and building Chrome extensions with various features. ## What's next for Rate The Hack! I want to refine the features and the rating scheme.
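To make the aggregation concrete, here is a sketch of how the server-side rating could combine simple text metrics with the model's probability. The weights, vocabulary list, and the dummy model are placeholders for what the Heroku server actually loads.

```python
# Sketch of the aggregate rating: text metrics + model probability.
# Weights, vocabulary, and the DummyModel are placeholders.
TECH_VOCAB = {"api", "backend", "frontend", "model", "pipeline", "database"}

def rate(entry_text: str, model) -> float:
    words = entry_text.lower().split()
    descriptiveness = min(len(words) / 400, 1.0)                 # longer entries score higher, capped
    vocab_score = min(sum(w in TECH_VOCAB for w in words) / 10, 1.0)
    win_probability = model.predict_proba([entry_text])[0][1]    # trained classifier's output
    return round(100 * (0.3 * descriptiveness + 0.2 * vocab_score + 0.5 * win_probability), 1)

class DummyModel:                     # stands in for the real trained model
    def predict_proba(self, texts):
        return [[0.4, 0.6] for _ in texts]

print(rate("We built a Flask backend and an ML pipeline behind a web API.", DummyModel()))
```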
losing
## Inspiration We wanted to create a new way to interact with the thousands of amazing shops that use Shopify. ![demo](https://res.cloudinary.com/devpost/image/fetch/s--AOJzynCD--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0G1Pdea.jpg) ## What it does Our technology can be implemented inside existing physical stores to help customers get more information about products they are looking at. What is even more interesting is that our concept can be applied to ad spaces where customers can literally window shop! Just walk in front of an enhanced Shopify ad and voila, you have the product on the seller's store, ready to be ordered right there from wherever you are. ## How we built it WalkThru is an Android app built with the AltBeacon library. Our localisation algorithm allows the application to pull the Shopify page of a specific product when the consumer is in front of it. ![Shopify](https://res.cloudinary.com/devpost/image/fetch/s--Yj3u-mUq--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/biArh6r.jpg) ![Estimote](https://res.cloudinary.com/devpost/image/fetch/s--B-mjoWyJ--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0M85Syt.jpg) ![Altbeacon](https://avatars2.githubusercontent.com/u/8183428?v=3&s=200) ## Challenges we ran into Using the Estimote beacons in a crowded environment has its caveats because of interference problems. ## Accomplishments that we're proud of The localisation of the user is really quick, so we can show a product page as soon as you get in front of it. ![WOW](https://res.cloudinary.com/devpost/image/fetch/s--HVZODc7O--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.giphy.com/xT77XWum9yH7zNkFW0.gif) ## What we learned We learned how to use beacons in Android for localisation. ## What's next for WalkThru WalkThru can be installed in current brick-and-mortar shops as well as on ad panels all over town. Our next step would be to create a whole app for Shopify customers which lets them see what shops/ads are near them. We would also want to extend our localisation algorithm to 3D space so we can track exactly where a person is in a store. Some analytics could also be integrated into a Shopify app directly in the store admin page, where a shop owner would be able to see how much time people spend in what parts of their stores. Our technology could help store owners increase their sales and optimise their stores.
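The app itself does the localisation in Java with AltBeacon; the Python sketch below only illustrates the idea of estimating distance from beacon signal strength with a standard log-distance path-loss model and mapping the closest beacon to a product page. The calibration constants and beacon-to-product map are illustrative.

```python
# Illustration of the beacon-localisation idea (the real app does this in Java).
# Calibration constants and the beacon-to-product map are placeholders.
BEACON_TO_PRODUCT = {
    "beacon-001": "https://example.myshopify.com/products/blue-jacket",   # hypothetical
    "beacon-002": "https://example.myshopify.com/products/red-sneakers",  # hypothetical
}

def estimate_distance(rssi: float, tx_power: float = -59, n: float = 2.0) -> float:
    """Log-distance path-loss model: d = 10 ** ((txPower - RSSI) / (10 * n))."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def product_in_front(readings: dict):
    """readings maps beacon id -> latest RSSI; return the closest beacon's product page."""
    if not readings:
        return None
    closest = min(readings, key=lambda b: estimate_distance(readings[b]))
    return BEACON_TO_PRODUCT.get(closest)

print(product_in_front({"beacon-001": -52, "beacon-002": -75}))
```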
# Inspiration and Product There's a certain feeling we all have when we're lost. It's a combination of apprehension and curiosity – and it usually drives us to explore and learn more about what we see. It happens to be the case that there's a [huge disconnect](http://www.purdue.edu/discoverypark/vaccine/assets/pdfs/publications/pdf/Storylines%20-%20Visual%20Exploration%20and.pdf) between that which we see around us and that which we know: the building in front of us might look like a historic and famous structure, but we might not be able to understand its significance until we read about it in a book, at which time we lose the ability to visually experience that which we're in front of. Insight gives you actionable information about your surroundings in a visual format that allows you to stay immersed in them: whether that's exploring them, or finding your way through them. The app puts the true directions of obstacles around you where you can see them, and shows you descriptions of them as you turn your phone around. Need directions to one of them? Get them without leaving the app. Insight also supports deeper exploration of what's around you: everything from restaurant ratings to the history of the buildings you're near. ## Features * View places around you heads-up on your phone - as you rotate, your field of vision changes in real time. * Facebook Integration: trying to find a meeting or party? Call your Facebook events into Insight to get your bearings. * Directions, wherever, whenever: surveying the area and found where you want to be? Touch and get instructions instantly. * Filter events based on your location. Want a tour of Yale? Touch to filter only Yale buildings, and learn about the history and culture. Want to get a bite to eat? Change to a restaurants view. Want both? You get the idea. * Slow day? Change your radius to a short distance to filter out locations. Feeling adventurous? Change your field of vision the other way. * Want to get the word out on where you are? Automatically check in with Facebook at any of the locations you see around you, without leaving the app. # Engineering ## High-Level Tech Stack * NodeJS powers a RESTful API hosted on Microsoft Azure. * The API server takes advantage of a wealth of Azure's computational resources: + A Windows Server 2012 R2 instance and an Ubuntu 14.04 Trusty instance, each of which handles different batches of geospatial calculations + Azure internal load balancers + Azure CDN for asset pipelining + Azure automation accounts for version control * The Bing Maps API suite, which offers powerful geospatial analysis tools: + RESTful services such as the Bing Spatial Data Service + Bing Maps' Spatial Query API + Bing Maps' AJAX control, externally through direction and waypoint services * iOS Objective-C clients interact with the server RESTfully and display the parsed results ## Application Flow iOS handles the entirety of the user interaction layer and the authentication layer for user input. Users open the app, and, if logging in with Facebook or Office 365, proceed through the standard OAuth flow, all on-phone. Users can also opt to skip the authentication process with either provider (in which case they forfeit the option to integrate Facebook events or Office365 calendar events into their views). After sign-in (assuming the user grants permission for use of these resources), and upon startup of the camera, requests are sent with the user's current location to a central server on an Ubuntu box on Azure.
The server parses that location data, and initiates a multithreaded Node process via Windows 2012 R2 instances. These processes do the following, and more: * Geospatial radial search schemes with data from Bing * Location detail API calls from Bing Spatial Query APIs * Review data about relevant places from a slew of APIs After the data is all present on the server, it's combined and analyzed, also on R2 instances, via the following: * Haversine calculations for distance measurements, in accordance with radial searches * Heading data (to make client-side parsing feasible) * Condensation and dynamic merging - asynchronous cross-checking across the collected data to determine which events are closest Ubuntu brokers and manages the data, sends it back to the client, and prepares for and handles future requests. ## Other Notes * The most intense calculations involved the application of the [Haversine formulae](https://en.wikipedia.org/wiki/Haversine_formula), i.e. for two points on a sphere, the central angle between them can be described as: ![Haversine 1](https://upload.wikimedia.org/math/1/5/a/15ab0df72b9175347e2d1efb6d1053e8.png) and the distance as: ![Haversine 2](https://upload.wikimedia.org/math/0/5/5/055b634f6fe6c8d370c9fa48613dd7f9.png) (the result of which is non-standard/non-Euclidean due to the Earth's curvature). The results of these formulae translate into the placement of locations on the viewing device. These calculations are handled by the Windows R2 instance, essentially running as a computation engine. All communications are RESTful between all internal server instances. ## Challenges We Ran Into * *iOS and rotation*: there are a number of limitations in iOS that prevent interaction with the camera in landscape mode (a real problem, given the need for users to see a wide field of view). For one thing, the requisite data registers aren't even accessible via daemons when the phone is in landscape mode. This was the root of the vast majority of our problems in our iOS development, since we were unable to use any inherited or pre-made views (we couldn't rotate them) - we had to build all of our views from scratch. * *Azure deployment specifics with Windows R2*: running a pure calculation engine (written primarily in C# with ASP.NET network interfacing components) was tricky at times to set up and get logging data for. * *Simultaneous and asynchronous analysis*: Simultaneously parsing asynchronously-arriving data with uniform Node threads presented challenges. Our solution was ultimately a recursive one that involved checking the status of other resources upon reaching the base case, then using that knowledge to better sort data as the bottoming-out step bubbled up. * *Deprecations in Facebook's Graph APIs*: we needed to use the Facebook Graph APIs to query specific Facebook events for their locations: a feature only available in a slightly older version of the API. We thus had to use that version, concurrently with the newer version (which also had unique location-related features we relied on), creating some degree of confusion and requiring care. ## A few of Our Favorite Code Snippets A few gems from our codebase:
```
var deprecatedFQLQuery = '...
```
*The story*: in order to extract geolocation data from events vis-a-vis the Facebook Graph API, we were forced to use a deprecated API version for that specific query, which proved challenging in how we versioned our interactions with the Facebook API.
```
addYaleBuildings(placeDetails, function(bulldogArray) {
  addGoogleRadarSearch(bulldogArray, function(luxEtVeritas) {
  ...
```
*The story*: dealing with quite a lot of Yale API data meant we needed to be creative with our naming...
```
// R is the earth's radius in meters; latitudes and longitudes are in degrees
var a = R * 2 * Math.atan2(
  Math.sqrt(
    Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2)
    + Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2)
    * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)
  ),
  Math.sqrt(1 - (
    Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2)
    + Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2)
    * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)
  ))
);
```
*The story*: while it was shortly after changed and condensed once we noticed its proliferation, our implementation of the Haversine formula became cumbersome very quickly. Degree/radian mismatches between APIs didn't make things any easier.
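For comparison, once the degree-to-radian conversion is pulled out into a helper, the same computation condenses to a few lines. This is a minimal Python sketch of the formula, shown only as an illustration (the production engine ran in JavaScript and C#):
```python
import math

EARTH_RADIUS_M = 6371000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return EARTH_RADIUS_M * 2 * math.atan2(math.sqrt(h), math.sqrt(1 - h))
```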
## What it does We built a way to browse Amazon with your voice and your phone camera. You can scan the things you like around you, talk to a trained ML model about the reviews, and visualize the item in Augmented Reality. ## How we built it * Scrape Amazon Review Pages: Heavy querying of Amazon webpages lets us access millions of opinions of products across the site. * Bidirectional Attention Flow Model: We trained a model using a year-old ML method called bidirectional attention flow, which is heavily used in academia. * Speech to Text Translation: Using Houndify and macOS, we enable a conversation between you and the Amazon customer of your choice. * Google Cloud Image Search: Naturally, we implemented Google's Vision API to label the objects in the user's camera. You just snap the button in the corner, and whatever is on your screen gets searched across the internet for the best options! * AR Virtualization: To bring the webpage to life, we provide a platform for users to visualize not only the product in question in 3D space, but also to display Amazon star reviews right alongside it. ## Greatest Challenges * Adding textures in AR got a bit frustrating. * Amazon is not very welcoming to people scraping the pages...so it was a tedious process. Not to mention, the Amazon signature for the request was a doozy. * We used not one, but two computers to cover all of our functionality. As a result, we'd lose touch between them when we'd come in and out of WiFi. * Time. Never enough of it. ## What's next for Emporium * Ability to purchase products through Emporium * Provide multiple Amazon query results in Augmented Reality * Attach the Augmented Reality to the real-world location of the item being investigated
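As a rough illustration of the label-detection step described above, here is a hedged sketch using Google's official Python client for the Vision API; the file handling and confidence threshold are assumptions, not our exact pipeline:
```python
from google.cloud import vision  # pip install google-cloud-vision

def label_snapshot(path, min_score=0.7):
    """Return (label, confidence) pairs for a camera snapshot via the Vision API."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(l.description, l.score) for l in response.label_annotations if l.score >= min_score]

# e.g. label_snapshot("snapshot.jpg") -> [("Coffee maker", 0.93), ("Small appliance", 0.88), ...]
```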
winning
## Inspiration With 1 in 5 people developing melanoma in their lifetimes, skin cancer is one of the most common cancers. Yet, there is no easy way to track and monitor its progression, especially in response to treatment. ## What it does Using computer vision, our app easily records and graphs the surface area of melanoma on a user's skin. Simply use a nickel as a reference point and our app can figure the rest out. ## Challenges we ran into At one point we rewrote the computer vision function from MATLAB into Python, but ended up not using it.
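The measurement trick at the core of the app, using a coin of known diameter to convert pixel area into real-world area, can be sketched with OpenCV. This is a hedged, simplified illustration: the circle-detection and thresholding heuristics below are assumptions, not the app's actual segmentation code.
```python
import cv2

NICKEL_DIAMETER_MM = 21.21  # US nickel; swap in the diameter of whatever coin is used

def lesion_area_mm2(image_path):
    """Rough lesion area estimate, scaled by a nickel visible in the same photo."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (7, 7), 0)

    # Assume the coin shows up as a detectable circle (simplified; HoughCircles returns None if not).
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=40, minRadius=20, maxRadius=200)
    coin_radius_px = float(circles[0][0][2])
    mm_per_px = (NICKEL_DIAMETER_MM / 2.0) / coin_radius_px

    # Assume the lesion is darker than the surrounding skin: Otsu threshold, then largest contour.
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lesion_px = max(cv2.contourArea(c) for c in contours)
    return lesion_px * (mm_per_px ** 2)
```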
# Project Inspiration Our project was inspired by the innovative work showcased at an Intel workshop, which harnessed the power of image recognition for wildfire prediction and even the identification of ancient fossils. This ingenuity sparked our desire to create a groundbreaking skin model. Our goal was to develop an AI solution that could analyze user-submitted skin photos, providing not just a diagnosis but also essential information on disease risks and potential treatments. # Overcoming Challenges Our journey was marked by challenges, with the primary hurdle being the fine-tuning of the AI model. We encountered difficulties stemming from dependencies, requiring relentless problem-solving. Additionally, we faced intermittent connectivity issues with Intel's cloud service and Jupyter Notebook, which occasionally disrupted our training process. Despite these obstacles, we remained resolute in our mission to deliver a valuable tool for the early detection of skin diseases.
## Not All Backs are Packed: An Origin Story (Inspiration) A backpack is an extremely simple, yet ubiquitous item. We want to take the backpack into the future without sacrificing its simplicity and functionality. ## The Got Your Back, Pack: **U N P A C K E D** (What's it made of) GPS Location services, 9000 mAh power battery, Solar charging, USB connectivity, Keypad security lock, Customizable RGB LED, Android/iOS Application integration ## From Backed Up to Back Pack (How we built it) ## The Empire Strikes **Back**(packs) (Challenges we ran into) We ran into challenges with getting wood to laser cut and bend properly. We found a unique pattern that allowed us to keep our 1/8" wood durable when needed and flexible when not. Also, connecting the hardware and the app through the API was tricky. ## Something to Write **Back** Home To (Accomplishments that we're proud of) ## Packing for Next Time (Lessons Learned) ## To **Pack**-finity, and Beyond! (What's next for "Got Your Back, Pack!") The next step would be revising the design to be more ergonomic for the user: the backpack is currently a clunky, easy-to-make shape with few curves to hug the user when put on. This, along with streamlining the circuitry and code, would be something to consider.
losing
# Arctic Miner - The Crypto Club Penguin A basic blockchain game that allows the user to collect penguins, breed them, and trade them. All penguins are stored on the Ethereum blockchain. ## What it Does Our program Arctic Miner is a collection game that takes advantage of the **ERC721** token standard. This project allows the user to have a decentralized inventory of penguins that have specific traits. The user can breed these penguins to create offspring with different traits, taking advantage of our genetic algorithm. Each penguin is its own **non-fungible** ERC721 token with its own personal traits that are different from the other penguins. ## How We Built it Arctic Miner runs on an Angular front end, with the smart contract made with Solidity. ## What We Learned Over the course of this hackathon, we learned a lot about various token standards as well as applications of the blockchain to create decentralized applications. In terms of sheer programming, we learned how to use Angular.js and Solidity. No group member had prior experience with either beforehand. ## Useful Definitions **Non-Fungible:** non-fungible tokens are tokens that have their own respective value and are not equivalent to other members of the same class. To explain metaphorically, a fungible token would be like a one dollar note. If I have a dollar note and you have a dollar note, and we trade them, neither one of us is at a loss since they are the exact same. A non-fungible token would be like a trading card. If I have a Gretzky rookie card and you have a Bartkowski card, technically both are hockey cards, but the Gretzky card has a higher value than the Bartkowski one; thus separating the two. **ERC721:** this is a non-fungible token standard that differs from the more common ERC20 token, which is fungible. ## Installation and setup The node-modules/dependencies can be installed by running the following command in the root directory terminal:
```
npm install
```
The following dependencies are needed in order to use the smart contracts: * solidity 0.4.24 To compile the contracts, first enter the following commands in the contracts folder:
```
npm install truffle
npm install babel-register
npm install babel-polyfill
npm install solc@0.4.25
truffle compile
```
## How to run on a local machine This web app was created using Angular.js. You will need to install Ganache if you would like to locally host the blockchain; otherwise the truffle file will need to be updated to use Infura. To build, simply enter the following command in a terminal mapped to the root folder:
```
ng serve
```
Then, visit <http://localhost:4200> to see the visualization
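The breeding logic itself lives in the Solidity contract, but the gene-mixing idea behind it can be illustrated off-chain. A hedged Python sketch of one possible crossover scheme (the real contract's trait encoding and mutation rules may differ):
```python
import random

GENES = ["beak", "belly", "eyes", "flippers", "hat"]  # illustrative trait slots

def breed(parent_a, parent_b, mutation_rate=0.05):
    """Mix two parents' trait dicts gene by gene, with a small chance of mutation."""
    child = {}
    for gene in GENES:
        child[gene] = random.choice([parent_a[gene], parent_b[gene]])
        if random.random() < mutation_rate:
            child[gene] = random.randint(0, 255)  # a brand-new trait value
    return child
```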
## Inspiration Productivity is hard to harness especially at hackathons with many distractions, but a trick we software developing students found to stay productive while studying was using the “Pomodoro Technique”. The laptop is our workstation and could be a source of distraction, so what better place to implement the Pomodoro Timer as a constant reminder? Since our primary audience is going to be aspiring young tech students, we chose to further incentivize them to focus until their “breaks” by rewarding them with a random custom-generated and minted NFT to their name every time they succeed. This unique inspiration provided an interesting way to solve a practical problem while learning about current blockchain technology and implementing it with modern web development tools. ## What it does An innovative modern “Pomodoro Timer” running on your browser enables users to sign in and link their MetaMask Crypto Account addresses. Such that they are incentivized to be successful with the running “Pomodoro Timer” because upon reaching “break times” undisrupted our website rewards the user with a random custom-generated and minted NFT to their name, every time they succeed. This “Ethereum Based NFT” can then be both viewed on “Open Sea” or on a dashboard of the website as they both store the user’s NFT collection. ## How we built it TimeToken's back-end is built with Django and Sqlite and for our frontend, we created a beautiful and modern platform using React and Tailwind, to give our users a dynamic webpage. A benefit of using React, is that it works smoothly with our Django back-end, making it easy for both our front-end and back-end teams to work together ## Challenges we ran into We had set up the website originally as a MERN stack (MongoDB/Express.js/REACT/Node.js) however while trying to import dependencies for the Verbwire API, to mint our images into NFTs to the user’s wallets we ran into problems. After solving dependency issues a “git merge” produced many conflicts, and on the way to resolving conflicts, we discovered some difficult compatibility issues with the API SDK and JS option for our server. At this point we had to pivot our plan, so we decided to implement the Verbwire Python-provided API solution, and it worked out very well. We intended here to just pass the python script and its functions straight to our front-end but learned that direct front-end to Python back-end communication is very challenging. It involved Ajax/XML file formatting and solutions heavily lacking in documentation, so we were forced to keep searching for a solution. We realized that we needed an effective way to make back-end Python communicate with front-end JS with SQLite and discovered that the Django framework was the perfect suite. So we were forced to learn Serialization and the Django framework quickly in order to meet our needs. ## Accomplishments that we're proud of We have accomplished many things during the development of TimeToken that we are very proud of. One of our proudest moments was when we pulled an all-nighter to code and get everything just right. This experience helped us gain a deeper understanding of technologies such as Axios, Django, and React, which helped us to build a more efficient and user-friendly platform. We were able to implement the third-party VerbWire API, which was a great accomplishment, and we were able to understand it and use it effectively. 
We also had the opportunity to talk with VerbWire professionals to resolve bugs that we encountered, which allowed us to improve the overall user experience. Another proud accomplishment was being able to mint NFTs and understanding how crypto and blockchains work, this was a great opportunity to learn more about the technology. Finally, we were able to integrate crypto APIs, which allowed us to provide our users with a complete and seamless experience. ## What we learned When we first started working on the back-end, we decided to give MongoDB, Express, and NodeJS a try. At first, it all seemed to be going smoothly, but we soon hit a roadblock with some dependencies and configurations between a third-party API and NodeJS. We talked to our mentor and decided it would be best to switch gears and give the Django framework a try. We learned that it's always good to have some knowledge of different frameworks and languages, so you can pick the best one for the job. Even though we had a little setback with the back-end, and we were new to Django, we learned that it's important to keep pushing forward. ## What's next for TimeToken TimeToken has come a long way and we are excited about the future of our application. To ensure that our application continues to be successful, we are focusing on several key areas. Firstly, we recognize that storing NFT images locally is not scalable, so we are working to improve scalability. Secondly, we are making security a top priority and working to improve the security of wallets and crypto-related information to protect our users' data. To enhance user experience, we are also planning to implement a media hosting website, possibly using AWS, to host NFT images. To help users track the value of their NFTs, we are working on implementing an API earnings report with different time spans. Lastly, we are working on adding more unique images to our NFT collection to keep our users engaged and excited.
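For a sense of how the pieces fit together, here is a hedged sketch of the Django side of the flow when a Pomodoro completes: an endpoint records the session and hands the user's wallet address to a minting helper. The URL, model, and `mint_random_nft` helper are all illustrative assumptions; the actual minting call goes through the Verbwire API, whose endpoints we leave out here.
```python
# views.py (illustrative only; the model and the minting helper are assumptions)
from django.http import JsonResponse
from django.views.decorators.http import require_POST

from .models import PomodoroSession   # assumed model storing completed timers
from .minting import mint_random_nft  # assumed wrapper around the Verbwire API

@require_POST
def complete_pomodoro(request):
    wallet = request.POST["wallet_address"]
    minutes = int(request.POST.get("minutes", 25))
    PomodoroSession.objects.create(wallet_address=wallet, minutes=minutes)
    tx = mint_random_nft(recipient=wallet)  # e.g. returns a transaction hash
    return JsonResponse({"status": "minted", "tx": tx})
```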
## Inspiration In traditional finance, banks often swap cash flows from their assets for a fixed period of time. They do this because they want to hold onto their assets long-term, but believe their counter-party's assets will outperform their own in the short-term. We decided to port this over to DeFi, specifically Uniswap. ## What it does Our platform allows for the lending and renting of Uniswap v3 liquidity positions. Liquidity providers can lend out their positions for a short amount of time to renters, who are able to collect fees from the position for the duration of the rental. Lenders are able to both hold their positions long term AND receive short term cash flow in the form of a lump sum ETH which is paid upfront by the renter. Our platform handles the listing, selling and transferring of these NFTs, and uses a smart contract to encode the lease agreements. ## How we built it We used solidity and hardhat to develop and deploy the smart contract to the Rinkeby testnet. The frontend was done using web3.js and Angular. ## Challenges we ran into It was very difficult to lower our gas fees. We had to condense our smart contract and optimize our backend code for memory efficiency. Debugging was difficult as well, because EVM Error messages are less than clear. In order to test our code, we had to figure out how to deploy our contracts successfully, as well as how to interface with existing contracts on the network. This proved to be very challenging. ## Accomplishments that we're proud of We are proud that in the end after 16 hours of coding, we created a working application with a functional end-to-end full-stack renting experience. We allow users to connect their MetaMask wallet, list their assets for rent, remove unrented listings, rent assets from others, and collect fees from rented assets. To achieve this, we had to power through many bugs and unclear docs. ## What we learned We learned that Solidity is very hard. No wonder blockchain developers are in high demand. ## What's next for UniLend We hope to use funding from the Uniswap grants to accelerate product development and add more features in the future. These features would allow liquidity providers to swap yields from liquidity positions directly in addition to our current model of liquidity for lump-sums of ETH as well as a bidding system where listings can become auctions and lenders rent their liquidity to the highest bidder. We want to add different variable-yield assets to the renting platform. We also want to further optimize our code and increase security so that we can eventually go live on Ethereum Mainnet. We also want to map NFTs to real-world assets and enable the swapping and lending of those assets on our platform.
partial
## Inspiration We were inspired by the retro movie Tron and a love for all things racing. ## What it does DEREZ combines the thrill of first person view car racing with the power of mixed reality. How it works is we are able to render an augmented reality race course for an RC var, visible through an Oculus Rift. The user can use their phone to set the start and end points, obstacles, and other features of the racing experience, and then see those things as they race in first person view using the RC car. ## How we built it We used ARCore plus a phone to render the augmented reality objects, and then the Oculus to see them in real time as you race. ## Challenges we ran into We originally were planning on using a drone instead of a car for the project, but due to adverse weather conditions had to pivot. ## What's next for DEREZ We plan to expand from RC car racing to a multitude of applications- this technology can be applied to military drones, submarines, or really any vehicle or device with a first person view camera.
## What it does ColoVR is a virtual experience allowing users to modify their environment with colors and models. It's a relaxing, low poly experience that is the first of its kind to bring an accessible 3D experience to the Google Cardboard platform. It's a great way to become a creative in 3D without dropping a ton of money on VR hardware. ## How we built it We used Google Daydream and extended off of the demo scene to our liking. We used Unity and C# to create the scene and do all the user interactions. We did painting and dragging objects around the scene using a common graphics technique called Ray Tracing. We colored the scene by changing the vertex color. We also created a palette that allowed you to change tools, colors, and insert meshes. We hand-modeled the low poly meshes in Maya. ## Challenges we ran into The main challenge was finding a proper mechanism for coloring the geometry in the scene. We first tried to paint the mesh face by face, but we had no way of accessing the entire face from the way the Unity mesh was set up. We then looked into an asset that would help us paint directly on top of the texture, but it ended up being too slow. Textures tend to render too slowly, so we ended up doing it with a vertex shader that would interpolate between vertex colors, allowing real-time painting of meshes. We implemented it so that we changed all the vertices that belong to a fragment of a face. The vertex shader was the fastest way that we could render real-time painting and emulate the painting of a triangulated face. Our second biggest challenge was using ray tracing to find the object we were interacting with. We had to navigate the Unity API and get acquainted with their Physics raytracer and mesh colliders to properly implement all interaction with the cursor. Our third challenge was making use of the Controller that Google Daydream supplied us with - it was great and very sandboxed, but we were somewhat limited in terms of functionality. We had to find a way to be able to change all colors, insert different meshes, and interact with the objects with only two available buttons. ## Accomplishments that we're proud of We're proud of the fact that we were able to get a clean, working project that was exactly what we pictured. People seemed to really enjoy the concept and interaction. ## What we learned How to optimize for a certain platform - in terms of UI, geometry, textures and interaction. ## What's next for ColoVR Hats, and an interactive collaborative space for multiple users. We want to be able to host the scene and make it accessible to multiple users at once, and store the state of the scene (maybe even turn it into an infinite world) where users can explore what past users have done and see the changes other users make in real time.
## Inspiration The inspiration for our revolutionary software sprang from the desire to elevate the way we share our lives and experiences on social media. In a world increasingly connected through pictures and videos, we envisioned taking storytelling a notch higher — by introducing **Immersive Stories**. We sought to transform 3D rendering into an engaging medium where you can literally walk people through your experiences, not just show them. **Imagine not just viewing a friend's vacation photo but stepping into their reality, exploring the picturesque environments as if you were there.** It's about bringing depth, dimension, and a dynamic perspective to social sharing, ushering in a new era of interactive and immersive storytelling where distances diminish, and experiences become more vibrant, vivid, and real. It's not just sharing; it's inviting friends into your world, offering a walkthrough of your moments, **redefining connection and interaction in the digital age**. ## What it does Our software brings your videos to life, converting them into navigable 3D spaces. **Start by recording a video of any environment or object**, then use the software to **transform that footage into a digital 3D realm**, ready to explore in-depth. This tool unlocks a new layer of interaction, letting users virtually step into the spaces and moments captured, offering a richer, more immersive way to share and experience content. **It's more than viewing**; it's about experiencing, walking around, and feeling closer to the real thing, all from the user's perspective. ## How we built it Building this innovative software initiated with the meticulous crafting of our **user interface in Unity**, a robust platform known for its expansive functionalities and user-friendly features. The backbone of the process is the seamless integration of users' **Immersive Stories** into the platform; they simply upload their videos, which are subsequently **dissected into individual frames**. Next, we instituted a **Python script that effectively scrutinizes each frame, identifying and eliminating all blurry images** to ensure only the crispest, clearest frames are utilized. This filtering is a crucial step, paving the way for the high-definition 3D renderings that come next. The selected frames then move to the next pivotal stage, being **processed by NeRF (Neural Radiance Fields)** which, with the assistance of cutting-edge AI technology, **converts them into astonishingly realistic 3D renderings**. It is a transformative phase where two-dimensional images metamorphose into three-dimensional spaces endowed with depth, texture, and nuance. Once the NeRF has worked its magic, the 3D rendering is prepared for user interaction; it is **first exported as a .ply file before being converted to a .obj file**, enabling it to be finely presented back in the Unity interface. The final product is a user’s **Immersive Story**, ready to be **navigated with intuitive ease using the WASD or arrow keys**, offering users not just a viewing experience, but a vibrant, lifelike journey through their captured moments. Every detail has been conceived with the user’s immersive experience at the forefront, culminating in a tool that transcends traditional storytelling. ## Challenges we ran into In the developmental stages, we encountered **significant challenges while trying to run NVIDIA NGP**, especially during the construction of our own NeRF. 
The main hindrance came in the form of missing dependencies, a hurdle that not only slowed our progress but required meticulous troubleshooting to identify and rectify the absent elements. **This necessitated a deeper dive into the intricate web of dependencies, enhancing our understanding and facilitating a more robust build**. Despite these setbacks, our team kept pushing, applying tenacity and expertise to navigate through the complexities and keep the project advancing forward. ## Accomplishments that we're proud of We are incredibly proud of the strides we have made in making NeRF technology **accessible and user-friendly through our native platform**. By automating the traditionally laborious process of using NeRF, we have managed to remove the barriers to entry, **allowing a wider audience to create their own Immersive Stories without the hassle**. This innovation not only brings a fresh, dynamic way to share experiences but is a milestone in fostering inclusivity in the digital storytelling landscape. It's a step toward a world where detailed, rich, and immersive narratives are not a privilege but a norm, accessible and easily crafted by all. ## What we learned One of the essential skills acquired was **mastering Unity**, a multifaceted platform where we **not only built a visually pleasing and intuitive interface but also integrated custom API calls,** enhancing the functionality and user experience remarkably. Another cornerstone in our developmental journey was **constructing our own NeRF system**, meticulously **tying it into our API calls to automate intricate processes** that historically demanded a significant time investment. This deep integration paved the way for a more streamlined and efficient workflow, which stands as a testament to our team's ingenuity and technical prowess. To further refine the output and boost efficiency, **we took it upon ourselves to develop an additional machine learning model, dedicated to identifying and removing blurry images from the video input**. This forward-thinking addition facilitated **faster and cleaner convergence of NeRF**, thereby not only speeding up the processing time but also remarkably improving the quality of the 3D renderings. It was a journey of constant learning, bringing together a tapestry of technologies to craft a tool that stands at the frontier of immersive digital storytelling. ## What's next for Immersify As Immersify continues to evolve, **the next milestone is venturing into the mobile sphere to redefine how stories are shared and experienced** on popular platforms like Instagram and Snapchat. While currently housed exclusively within Unity, the plan is to transition into a mobile-friendly format, potentially laying the groundwork for its own revolutionary social media platform. Whether integrating with existing giants or pioneering its own space, **Immersify is poised to bring 3D immersive stories to the fingertips of users globally**, offering a richer, more interactive storytelling canvas that mirrors the dynamic nature of real-life experiences.
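For the frame-filtering step described above, a common lightweight baseline is the variance-of-the-Laplacian sharpness test in OpenCV; our final pipeline used a learned model, so treat this as an illustrative stand-in with an arbitrary threshold:
```python
import cv2

def is_sharp(frame, threshold=100.0):
    """Heuristic blur check: low Laplacian variance usually means a blurry frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold

def keep_sharp_frames(video_path, threshold=100.0):
    """Return only the sharp frames of a video, ready to hand to the NeRF step."""
    cap = cv2.VideoCapture(video_path)
    kept = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if is_sharp(frame, threshold):
            kept.append(frame)
    cap.release()
    return kept
```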
partial
## Inspiration We express emotions in our everyday lives when we communicate with our loved ones, our neighbors, our friends, our local Loblaw store customer service, our doctors or therapists. These emotions can be examined by cues such as gesture, text and facial expressions. The goal of Emotional.AI is to provide a tool for businesses (customer service, etc.), or doctors/therapists, to identify emotions and enhance their services. ## What it does Uses natural language processing (from audio transcription via Assembly AI) and computer vision to determine the emotions of people. ## How we built it #### **Natural Language Processing** * First we took emotion-classified data from public sources (Kaggle and research studies). * We preprocessed, cleaned, transformed, created features, and performed light EDA on the dataset. * Used a TF-IDF tokenizer to deal with numbers, punctuation marks, non-letter symbols, etc. * Scaled the data using Robust Scaler and made 7 models (MNB, Linear Regression, KNN, SVM, Decision Tree, Random Forest, XGB). #### **Computer Vision** Used Mediapipe to generate points on the face, then used those points to build the training data set. We used Jupyter Notebook to run OpenCV and Mediapipe. Upon running our data in Mediapipe, we were able to get a skeleton map of the face with 468 points. These points can be mapped in 3 dimensions, as each contains X, Y, and Z coordinates. We processed these features (468 points x 3) by saving them into a spreadsheet. Then we divided the spreadsheet into training and testing data. Using the training set, we were able to create 6 machine learning models and choose the best one. #### **Assembly AI** We converted video/audio from recordings (whether it’s a therapy session or customer service audio from 1000s of Loblaws customers 😉) to text using the Assembly AI API. #### **Amazon Web Services** We used the S3 services to host the video files uploaded by the user. These video files were then sent to the Assembly AI API. #### **DCP** For computing (ML). ## Challenges we ran into * Collaborating virtually is challenging * Deep learning training takes a lot of computing power and time * Connecting our front-end with back-end (and ML) * Time management * Working with React + a Flask server * Configuring Amazon buckets and users to make the app work with the S3 services ## Accomplishments that we're proud of Apart from completing this hack, we persevered through each challenge as a team and succeeded in what we set out to do. ## What we learned * Working as a team * Configuration management * Working with Flask ## What's next for Emotional.AI * We hope to have a more refined application with a cleaner UI. * We want to train our models further with more data and have more classifications. * We want to make a platform for therapists to connect with their clients and use our tech. * Make our solution work in real-time.
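To make the NLP pipeline concrete, here is a minimal scikit-learn sketch of the TF-IDF plus linear-classifier combination; it leaves out our heavier preprocessing and scaling steps, and the hyperparameters are illustrative:
```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_text_emotion_model(texts, labels):
    """texts: list of utterances; labels: emotion tags such as 'joy', 'anger', 'sadness'."""
    X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2, random_state=42)
    model = Pipeline([
        ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```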
## Inspiration Given the increase in mental health awareness, we wanted to focus on therapy treatment tools in order to enhance the effectiveness of therapy. Therapists rely on hand-written notes and personal memory to progress emotionally with their clients, and there is no assistive digital tool for therapists to keep track of clients’ sentiment throughout a session. Therefore, we want to equip therapists with the ability to better analyze raw data, and track patient progress over time. ## Our Team * Vanessa Seto, Systems Design Engineering at the University of Waterloo * Daniel Wang, CS at the University of Toronto * Quinnan Gill, Computer Engineering at the University of Pittsburgh * Sanchit Batra, CS at the University of Buffalo ## What it does Inkblot is a digital tool to give therapists a second opinion, by performing sentimental analysis on a patient throughout a therapy session. It keeps track of client progress as they attend more therapy sessions, and gives therapists useful data points that aren't usually captured in typical hand-written notes. Some key features include the ability to scrub across the entire therapy session, allowing the therapist to read the transcript, and look at specific key words associated with certain emotions. Another key feature is the progress tab, that displays past therapy sessions with easy to interpret sentiment data visualizations, to allow therapists to see the overall ups and downs in a patient's visits. ## How we built it We built the front end using Angular and hosted the web page locally. Given a complex data set, we wanted to present our application in a simple and user-friendly manner. We created a styling and branding template for the application and designed the UI from scratch. For the back-end we hosted a REST API built using Flask on GCP in order to easily access API's offered by GCP. Most notably, we took advantage of Google Vision API to perform sentiment analysis and used their speech to text API to transcribe a patient's therapy session. ## Challenges we ran into * Integrated a chart library in Angular that met our project’s complex data needs * Working with raw data * Audio processing and conversions for session video clips ## Accomplishments that we're proud of * Using GCP in its full effectiveness for our use case, including technologies like Google Cloud Storage, Google Compute VM, Google Cloud Firewall / LoadBalancer, as well as both Vision API and Speech-To-Text * Implementing the entire front-end from scratch in Angular, with the integration of real-time data * Great UI Design :) ## What's next for Inkblot * Database integration: Keeping user data, keeping historical data, user profiles (login) * Twilio Integration * HIPAA Compliancy * Investigate blockchain technology with the help of BlockStack * Testing the product with professional therapists
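As a rough sketch of what sentence-level sentiment on a session transcript looks like on GCP, here is the call using Google Cloud's Natural Language Python client; it is shown for illustration only and is not our exact Flask service code:
```python
from google.cloud import language_v1  # pip install google-cloud-language

def sentence_sentiments(transcript):
    """Return (sentence, score) pairs; scores run from -1.0 (negative) to 1.0 (positive)."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT)
    response = client.analyze_sentiment(request={"document": document})
    return [(s.text.content, s.sentiment.score) for s in response.sentences]
```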
## Inspiration We were inspired to build Loki to illustrate the plausibility of social media platforms tracking user emotions to manipulate the content (and advertisements) that they view. ## What it does Loki presents a news feed to the user much like other popular social networking apps. However, in the background, it uses iOS’ ARKit to gather the user’s facial data. This data is piped through a neural network model we trained to map facial data to emotions. We use the currently-detected emotion to modify the type of content that gets loaded into the news feed. ## How we built it Our project consists of three parts: 1. Gather training data to infer emotions from facial expression * We built a native iOS application view that displays the 51 facial attributes returned by ARKit. * On the screen, a snapshot of the current face can be taken and manually annotated with one of four emotions [happiness, sadness, anger, and surprise]. That data is then posted to our backend server and stored in a Postgres database. 2. Train a neural network with the stored data to map the 51-dimensional facial data to one of four emotion classes. Therefore, we: * Format the data from the database in a preprocessing step to fit into the purely numeric neural network * Train the machine learning algorithm to discriminate different emotions * Save the final network state and transform it into a mobile-enabled format using CoreMLTools 3. Use the machine learning approach to discreetly detect the emotion of iPhone users in a Facebook-like application. * The iOS application utilizes the neural network to infer user emotions in real time and show post that fit the emotional state of the user * With this proof of concept we showed how easy applications can use the camera feature to spy on users. ## Challenges we ran into One of the challenges we ran into was the problem of converting the raw facial data into emotions. Since there are 51 distinct data points returned by the API, it would have been difficult to manually encode notions of different emotions. However, using our machine learning pipeline, we were able to solve this. ## Accomplishments that we're proud of We’re proud of managing to build an entire machine learning pipeline that harnesses CoreML — a feature that is new in iOS 11 — to perform on-device prediction. ## What we learned We learned that it is remarkably easy to detect a user’s emotion with a surprising level of accuracy using very few data points, which suggests that large platforms could be doing this right now. ## What's next for Loki Loki is currently not saving any new data that it encounters. One possibility is for the application to record the expression of the user mapped to the social media post. Another possibility is to expand on our current list of emotions (happy, sad, anger, and surprise) as well as train on more data to provide more accurate recognition. Furthermore, we can utilize the model’s data points to create additional functionalities.
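The mapping from the 51 ARKit facial values to an emotion label is a small multiclass classification problem. Here is a hedged sketch with a simple stand-in classifier (we trained a neural network and converted it with CoreMLTools; the logistic-regression model and label encoding below are purely illustrative):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happiness", "sadness", "anger", "surprise"]

def train_emotion_classifier(features, labels):
    """features: (n_samples, 51) ARKit blend-shape values; labels: indices into EMOTIONS."""
    X = np.asarray(features, dtype=np.float32)
    y = np.asarray(labels)
    clf = LogisticRegression(max_iter=2000)
    clf.fit(X, y)
    # The fitted model could then be exported for on-device prediction, e.g. with
    # coremltools' scikit-learn converter, before being bundled into the iOS app.
    return clf
```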
winning
## Inspiration Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book. Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars! Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible. ## What it does Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud. ## How we built it **Software (Alex K)** - On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS. The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words. Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation. **Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses. ## Challenges we ran into The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging! Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time. Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length. 
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together. ## Acknowledgements The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
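As a footnote on the OCR post-processing described in the software section: the unigram filtering idea is simply "prefer the candidate spelling that is more common in English." A minimal Python sketch, with the corpus source and candidate generation left as placeholders:
```python
import re
from collections import Counter

def build_unigram_model(corpus_text):
    """Word frequencies from any large plain-text English corpus (source is a placeholder)."""
    words = re.findall(r"[a-z']+", corpus_text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return lambda w: counts[w.lower()] / total if total else 0.0

def pick_most_likely(candidates, unigram_prob):
    """Choose the candidate spelling with the highest unigram probability."""
    return max(candidates, key=unigram_prob)

# e.g. pick_most_likely(["Intreduction", "Introduction"], p) -> "Introduction"
```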
## Inspiration Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized braille menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life. Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life. ## What it does Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text. ## How we built it The front-end is a native iOS app written in Swift and Objective-C with Xcode. We use Apple's native vision and speech APIs to give the user intuitive control over the app. --- The back-end service is written in Go and is served with ngrok. --- We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud. --- We make use of the Google Vision API in three ways: * To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc. * To run Optical Character Recognition on text in the real world which is then read aloud to the user. * For label detection, to identify objects and surroundings in the real world which the user can then query about. ## Challenges we ran into There were a plethora of challenges we experienced over the course of the hackathon. 1. Each member of the team wrote their portion of the back-end service in a language they were comfortable in. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go. 2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded. 3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Regenerating API keys proved to be of no avail, and ultimately we overcame this by rewriting the service in Go. ## Accomplishments that we're proud of Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put-together app. Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app. ## What we learned Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack. Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis. Zak learned about building a native iOS app that communicates with a data-rich APIs. We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service. Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges. ## What's next for Sight If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app. Ultimately, we plan to host the back-end on Google App Engine.
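For the face-sentiment piece, the Vision API returns likelihood buckets per face rather than raw scores. Here is a hedged sketch of turning those buckets into words that can be read aloud (shown with the Python client for brevity; our back-end service is written in Go):
```python
from google.cloud import vision

LIKELY = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}

def describe_faces(image_bytes):
    """Yield a rough emotion word for each face the Vision API detects."""
    client = vision.ImageAnnotatorClient()
    faces = client.face_detection(image=vision.Image(content=image_bytes)).face_annotations
    for face in faces:
        if face.joy_likelihood in LIKELY:
            yield "happy"
        elif face.sorrow_likelihood in LIKELY:
            yield "sad"
        elif face.surprise_likelihood in LIKELY:
            yield "surprised"
        else:
            yield "neutral"
```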
## Inspiration: Whenever people are in the car, especially for long-distance road trips, people get bored. Many people get motion sick, and constantly staring at the text in a book will cause motion sickness. As such, our app enables anyone who prefers the book to be read to them instead of reading the book themselves to do just that. There is a catch: the voices used in the narration are generated based on the sentiment and content analysis of the phrases. As a result, characters will act out interesting conversations and are a lot more engaging than previously existing automated text-to-audio models. Also, images will be projected for each scene such that it also suits younger audiences who like visual effects. ## What it does Given some text (in the form of screenshots or real-time photos taken with your phone), our web app interprets the text, creates various voices (with different pace, volume, pitch) based on the sentiment and content analysis of each character's phrases, and plays out the scene. At the same time, a picture is generated for each scene (arbitrary scene) and is displayed on both the phone screen as well as the Arduino LCD. This allows the users to get another dimension of information, and this is especially useful when kids are using this web app to read books. ## How we built it We started off with a block diagram linking all components of the app, and we independently tested all the various components. We used a lot of Google Cloud APIs (Vision, Text-to-Speech, Natural Language) in the process of developing our app; more specifically, we included OCR, sentiment analysis, and content analysis, just to name a few. As we got each component working, we incrementally built feature by feature. The first feature is audio from an image, then varying the pitch/speed based on the sentiment calculation. After these, we worked on content analysis, and using the results of the content analysis, we made our own Google Custom Search engine to perform image searching. Then we fed the results of the image search back to the phone as well as the Arduino. The Arduino receives the BMP version of the image search results and displays each image using a bitmap that is generated for each image. Lastly, we made an app that integrates the picture taking, audio output, and visual/image output. ## Challenges we ran into We initially wanted to use the Raspberry Pi camera as the main camera to take pictures; however, the Raspberry Pis that we received at the hackathon couldn't boot up, so we had to resort to using the Arduino Photon to receive the BMP file, which caused a lot of additional overhead. ## Accomplishments that we're proud of The quality of the outputs is surprisingly good. The conversations are engaging and very entertaining to the listener. We also flip the gender of the characters randomly to make it more interesting. ## What we learned 1. Clear up questions regarding APIs early on during a hackathon to prevent wasting time on something easy to solve. 2. Start off the brainstorming for ideas more systematically. For example, have a deadline for the project idea to be decided on so that we do not waste time brainstorming, but rather spend it on the actual coding/designing of the project. 3. Talk to other teams about how to use certain tools! Do not limit yourself to only asking for advice from the mentors. Other teams and other hackers are usually down to help out and very insightful! ## What's next for Visual Audio 1. Add AR effects to the images.
So instead of the images being displayed on the phone screen/Arduino LCD, we project these images using AR technologies so it's even more engaging, especially for children. 2. Add more robust context analysis for gender inference.
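To make the voice-variation idea concrete, here is a hedged sketch of mapping a sentiment score onto Google Cloud Text-to-Speech parameters; the scaling constants are arbitrary illustrations, not the values we shipped:
```python
from google.cloud import texttospeech  # pip install google-cloud-texttospeech

def speak(text, sentiment_score):
    """sentiment_score in [-1, 1]: negative -> slower and lower, positive -> faster and higher."""
    client = texttospeech.TextToSpeechClient()
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
        speaking_rate=1.0 + 0.3 * sentiment_score,  # 0.7x to 1.3x
        pitch=4.0 * sentiment_score,                # semitones, roughly -4 to +4
    )
    voice = texttospeech.VoiceSelectionParams(language_code="en-US")
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=voice,
        audio_config=audio_config,
    )
    return response.audio_content  # MP3 bytes to play back
```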
winning
## Inspiration We are a group of students with different backgrounds and limited programming experience. As exchange and transfer students, we found it challenging to build strong connections with new people. To solve this, we wanted to create a platform where users can easily match up with people who share similar interests. ## What it does We use simple questions to identify users' common interests and match them through an API platform. Our back end provides social media account suggestions for users and hiking trail recommendations. And the whole process takes only 5 mins! ## How we built it This is just a prototype that we built with HTML/CSS/JavaScript/PHP/MySQL. We are still in the process of embedding the API into our platform. ## Challenges we ran into It's challenging to build a website from scratch without a prior background. Also, we couldn't figure out how to retrieve data from our SQL database for the purpose of matching users with similar traits. ## Accomplishments that we're proud of We built a website that is linked to a SQL database. ## What we learned The process of developing an interactive website. ## What's next for Hike and See! We will keep working on the API to provide a better user experience.
## Clique ### Inspiration + Description Our inspiration came from us really missing those real life encounters that we make with people on a daily basis. In college, we quite literally have the potential to meet someone new everyday: at the dining hall, the gym, discussion section, anywhere. However, with remote learning put into place with most universities, these encounters are now nonexistent. The interactions through people on Zoom just weren't cutting it for meeting new people and starting organic relationships with other people. We created Clique: an app that would encourage university students to organically meet each other. To remove barriers of social anxiety and awkwardness, Clique makes its users anonymous. We only let users upload an image that is not themselves and a 10 words bio to describe themselves. This way it's similar to real life encounters they would've made on campus where they wouldn't even know the other person's name. This also removes inherent biases of judging people by their appearance. After signing up, users can swipe through other users' profiles and decide if their image and bio is interesting enough for them to strike up a conversation. ### Technical Details We used React for the frontend and Google Firebase for the backend. #### Front End Clique contains 4 core pages: Login/Signup, Profile, Match, Conversations. We used React Bootstrap for many of the components to create a minimalistic design. All of the pages interact with the user and also interact with the backend. For finishing touches we added a navigation bar for easy access and loading animations to improve the user experience. #### Back End We used Firebase auth to handle logins and signups. Upon signups, we would add a new entry in Firestore hashing the entry with the unique user id. The entry in Firestore would contain a default bio. When the user would upload an image to their profile, we would hash that image with their uid as well for fast lookup. This way we can easily pull data relevant to current user because we can search for data tagged with their unique user id. Our chat functionality creates a new an entry for every conversation between two users and continually updates that entry in a sub-entry as the conversation goes on. Our matching functionality randomly generates users that we have never matched with before by checking hashes in the users match history. We really embraced our will to break down the barriers that prevent bringing people together to build a user-focused product to help people make connections in a socially distant way. :-)
## AI, AI, AI... The number of projects using LLMs has skyrocketed with the wave of artificial intelligence. But what if you *were* the AI, tasked with fulfilling countless orders and managing requests in real time? Welcome to chatgpME, a fast-paced, chaotic game where you step into the role of an AI who has to juggle multiple requests, analyzing input, and delivering perfect responses under pressure! ## Inspired by games like Overcooked... chatgpME challenges you to process human queries as quickly and accurately as possible. Each round brings a flood of requests—ranging from simple math questions to complex emotional support queries—and it's your job to fulfill them quickly with high-quality responses! ## How to Play Take Orders: Players receive a constant stream of requests, represented by different "orders" from human users. The orders vary in complexity—from basic facts and math solutions to creative writing and emotional advice. Process Responses: Quickly scan each order, analyze the request, and deliver a response before the timer runs out. Get analyzed - our built-in AI checks how similar your answer is to what a real AI would say :) ## Key Features Fast-Paced Gameplay: Just like Overcooked, players need to juggle multiple tasks at once. Keep those responses flowing and maintain accuracy, or you’ll quickly find yourself overwhelmed. Orders with a Twist: The more aware the AI becomes, the more unpredictable it gets. Some responses might start including strange, existential musings—or it might start asking you questions in the middle of a task! ## How We Built It Concept & Design: We started by imagining a game where the player experiences life as ChatGPT, but with all the real-time pressure of a time management game like Overcooked. Designs were created in Procreate and our handy notebooks. Tech Stack: Using Unity, we integrated a system where mock requests are sent to the player, each with specific requirements and difficulty levels. A template was generated using defang, and we also used it to sanitize user inputs. Answers are then evaluated using the fantastic Cohere API! Playtesting: Through multiple playtests, we refined the speed and unpredictability of the game to keep players engaged and on their toes.
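The grading step compares the player's answer with a reference answer. Here is a hedged sketch of one way to do that with Cohere embeddings and cosine similarity; the model name is an assumption, and the shipped game calls the API from Unity rather than Python:
```python
import numpy as np
import cohere  # pip install cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def similarity(player_answer, reference_answer):
    """Cosine similarity between the player's answer and the reference 'real AI' answer."""
    embeddings = co.embed(
        texts=[player_answer, reference_answer],
        model="embed-english-v3.0",      # assumed model name
        input_type="classification",
    ).embeddings
    a, b = np.array(embeddings[0]), np.array(embeddings[1])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```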
losing
> `2023-10-10 Update`: We've moved all of our project information to our GitHub repo so that it's up to date. Our project is completely open source, so please feel free to contribute if you want! <https://github.com/soobinrho/BeeMovr>
## Inspiration As victims, bystanders and perpetrators of cyberbullying, we felt it was necessary to focus our efforts this weekend on combating an issue that impacts 1 in 5 Canadian teens. As technology continues to advance, children are being exposed to vulgarities online at a much younger age than before. ## What it does **Prof**(ani)**ty** searches through any webpage a child may access, censors black-listed words and replaces them with an appropriate emoji. This easy-to-install Chrome extension is suitable for institutional settings as well as home devices. ## How we built it We built a Google Chrome extension using JavaScript (jQuery), HTML, and CSS. We also used regular expressions to detect and replace profanities on webpages. The UI was developed with Sketch. ## Challenges we ran into Every member of our team was a first-time hacker, with little web development experience. We learned how to use JavaScript and Sketch on the fly. We're incredibly grateful for the mentors who supported us and guided us while we developed these new skills (shout out to Kush from Hootsuite)! ## Accomplishments that we're proud of Learning how to make beautiful webpages. Parsing specific keywords from HTML elements. Learning how to use JavaScript, HTML, CSS and Sketch for the first time. ## What we learned The manifest.json file is not to be messed with. ## What's next for PROFTY Expand the size of our black-list. Increase robustness so it parses pop-up messages as well, such as live-stream comments.
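The extension itself is written in JavaScript, but the core censoring logic is just a word-boundary regular-expression swap; the Python sketch below illustrates the idea with a made-up black-list.

```python
# Illustrative censoring logic: replace black-listed words with emoji.
# The black-list and emoji mapping here are placeholder examples.
import re

BLACKLIST = {"darn": "🦆", "heck": "🙊"}  # not the real list

def censor(text: str) -> str:
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, BLACKLIST)) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: BLACKLIST[m.group(1).lower()], text)

print(censor("Well, heck, that darn squirrel is back."))
# -> "Well, 🙊, that 🦆 squirrel is back."
```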
## Team Sam Blouir Pat Baptiste ## Inspiration Seeing the challenges blind people face while navigating, especially in new areas, and being inspired by someone who is blind and wished to remain anonymous ## What it does Uses computer vision to find objects in the vicinity of the wearer and sends information (X,Y coordinates, depth, and size) to the wearer in real time so they can enjoy walking ## How I built it Python, C, C++, OpenCV, and an Arduino ## Challenges I ran into Performance, and creating disparity maps from stereo cameras with significantly different video output qualities ## Accomplishments that I'm proud of It works! ## What I learned We learned lots about using Python, OpenCV, an Arduino, and integrating all of these to create a hardware hack ## What's next for Good Vibrations Better depth-sensing and miniaturization!
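For context, a minimal version of the disparity-map step with OpenCV might look like the sketch below; file names and tuning parameters are placeholders, and the team's real pipeline also had to cope with two cameras of very different quality.

```python
# Compute a disparity (depth) map from a rectified stereo pair with OpenCV.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical captures
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16, blockSize odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Larger disparity means the object is closer to the wearer.
disparity_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disparity_vis)
```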
winning
## Inspiration Following many natural disasters, we heard news about first responders being overwhelmed and unable to work at full efficiency. The idea of triage, in our opinion, could help this situation: knowing the urgency of different emergencies, we are able to allocate resources with maximum efficiency. We believe this app is very rich in social good and could help first responders in times of crisis. ## What it does In short, Zeroth Responder is an AI 911 agent. Zeroth Responder, as its name suggests, is used before the first responders take a call. It listens to the descriptions of the caller to judge their circumstances, asks follow-up questions, and effectively notes down and classifies the situation based on its urgency. In addition, it allows first responders to have first-hand information about the callers in summarized form and to prioritize their resource allocation. ## How we built it We stored a 911 training manual in a vector database powered by Milvus, then queried it in conjunction with LLMs to create responses to the caller and assess the situation. ## Challenges we ran into We had much trouble setting up the connection with the database, as well as making various parts of our stack integrate with each other. ## Accomplishments that we're proud of We are proud that the app is functionally complete and could contribute to society. ## What's next for Zeroth Responder We are integrating with Supabase to create a dashboard for first responders to assess the severity of each call and to deal with them appropriately.
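A hedged sketch of the retrieval step is shown below: embed the caller's words, search the manual chunks stored in Milvus, and hand the hits to an LLM as context. The collection name, field names, and embedding helper are assumptions, not the project's actual code.

```python
# Retrieve relevant 911-manual passages from Milvus for a caller's description.
from pymilvus import connections, Collection

connections.connect(alias="default", host="localhost", port="19530")
manual = Collection("emergency_manual")  # hypothetical collection of manual chunks

def retrieve_guidance(caller_text: str, embed, top_k: int = 3) -> list:
    """`embed` is any text-to-vector function matching the collection's dimension."""
    query_vec = embed(caller_text)
    hits = manual.search(
        data=[query_vec],
        anns_field="embedding",
        param={"metric_type": "L2", "params": {"nprobe": 10}},
        limit=top_k,
        output_fields=["text"],
    )
    return [hit.entity.get("text") for hit in hits[0]]

# The retrieved chunks would then be prepended to the LLM prompt along with the
# caller's description to generate follow-up questions and an urgency rating.
```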
## Inspiration Keeping up with the curriculum and working on side ventures is difficult, which brings up the need for an application that can automate the note-taking process. ## What it does WeNote is a platform-independent application that uses the AssemblyAI API to transcribe speech into text. WeNote allows people to take notes just by speaking, thereby reducing the time taken to type or write them. Users can also upload audio from their lectures and make notes out of it, without having to put in the effort. ## How we built it Front-end: Flutter Back-end: Python-Flask, AssemblyAI (API) ## Challenges we ran into Synchronizing the audio and directly sending it to the server, and sending a file with a POST request. The challenge with Flutter web was the lack of file storage, which we had to innovatively manage with the browser's session storage for storing the audio data. Another challenge was managing the different audio encodings for our cross-platform application. ## Accomplishments that we're proud of We're proud of integrating the sound with Flutter web and using that data to get the notes from it, while building almost the whole application. ## What we learned We learned about AssemblyAI and Flutter (using sound, HTTP requests, and Flutter web). ## What's next for WeNote We'll be connecting the in-app option to record audio to the server.
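A minimal sketch of the backend flow might look like the following: a Flask endpoint that accepts an uploaded audio file and forwards it to AssemblyAI for transcription. The route and form-field names are assumptions; the AssemblyAI v2 upload/transcript endpoints are the public ones.

```python
# Flask endpoint: receive an audio file via POST, transcribe it with AssemblyAI.
import time

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
AAI_KEY = "YOUR_ASSEMBLYAI_KEY"  # placeholder
HEADERS = {"authorization": AAI_KEY}

@app.route("/transcribe", methods=["POST"])
def transcribe():
    audio = request.files["audio"]  # field name is an assumption

    # 1. Upload the raw audio bytes to AssemblyAI.
    upload = requests.post("https://api.assemblyai.com/v2/upload",
                           headers=HEADERS, data=audio.read())
    audio_url = upload.json()["upload_url"]

    # 2. Start a transcription job and poll until it finishes.
    job = requests.post("https://api.assemblyai.com/v2/transcript",
                        headers=HEADERS, json={"audio_url": audio_url}).json()
    while True:
        status = requests.get(f"https://api.assemblyai.com/v2/transcript/{job['id']}",
                              headers=HEADERS).json()
        if status["status"] in ("completed", "error"):
            break
        time.sleep(2)

    return jsonify({"notes": status.get("text", "")})
```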
## Problem Statement As the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs. For the cloud-based machine learning algorithms, we used computer vision, OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones as an analogue for cameras to do the live streams for the real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges has been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions.
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms, to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In this scenario, the alert would pop up privately on the user's phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
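As a toy illustration of the monitoring idea in the write-up above, the sketch below flags a possible fall when a moving person's bounding box becomes much wider than it is tall; this is a simple heuristic for demonstration, not SafeSpot's trained pose model.

```python
# Toy fall heuristic: background subtraction + bounding-box aspect ratio.
import cv2

cap = cv2.VideoCapture(0)  # a phone/webcam stream stands in for the camera
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 5000:   # ignore small motion/noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w > 1.4 * h:                 # lying-down aspect ratio -> possible fall
            print("Possible fall detected; notify emergency contacts")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```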
losing
## Inspiration We are Penn Blinders, drawing inspiration from the iconic Netflix series, Peaky Blinders. Our team consists of four developers: Charles Jin (Senior Developer), David Fu (Junior Developer), Tony Tian (Sophomore Developer), and Stan Chen (Freshman Developer). Today, we present to you our thrilling single-player shooting game, where you step into the shoes of the Blinders, defending our beloved bar from rival gangs. ## What it does Our game immerses you in the heart of 1920s Birmingham, where you assume the role of a member of the notorious Penn Blinders. Your mission? Defend our turf from rival gangs looking to sabotage our bar and take control of our territory. Let's delve into the gameplay mechanics. You'll navigate through a meticulously recreated Garrison Pub, encountering infinite waves of rival gang members. Your aim is to eliminate them and fortify the defenses of our pub. Each wave presents new challenges, from stealthy infiltrations to intense shootouts, providing a dynamic, immersive, and endless experience. As you progress, you'll enhance your capabilities and expand your knowledge of the Blinders' Bar. Take full advantage of the abundant furniture (ranging from tables to chairs to crates to barrels) to build environmental vantage points and choke points. These movable map elements can transform an unfamiliar battleground into the certain demise of the hordes of invading rival gangs. One mission: defend the Blinders' Bar. May the odds be ever in your favor. ## How we built it We built our entire project using Unity, featuring our own custom-made retro pixel art sprites. ## Challenges we ran into Since a majority of our team consisted of first-time hackers, we ran into a lot of trouble using Unity and other tools. ## Accomplishments that we're proud of Through YouTube tutorials and support from our more experienced programmer, we were able to set up and use Unity properly. We were also able to integrate our visual and sound design into an immersive experience. Considering how little experience we had to begin with, we were very proud to deliver a finished game! ## What we learned We learned lots about collaborating to create a video game, splitting up the team into diverse roles, and having a visionary leader to call the shots. It was only through everyone's contribution that we could realize our idea. ## What's next for draft Upgrades for the main character, alternative maps, more enemy types -- there is lots more to come!
## Inspiration We were inspired by dungeon-crawler video games in addition to fantasy role-play themes. We wanted to recreate the flexibility of rich analog storytelling within computer-generated games while making the game feel as natural as possible. ## Overview Our game is a top-down wave-based shooter centered around a narrator mechanic. Powered by the co:here natural language processing platform, the narrator entity generates sentences to describe in-game actions such as damage, player kill, and player death events. Using information provided by the user about their character, along with mob information, we engineered prompts to generate narrations involving these elements. ## Architecture We built the game in Godot, an open-source game engine. To give us more flexibility when working with data gathered from the co:here platform, we opted to build a Python Flask back end to locally interact with the game engine. The back end manages API calls and cleans up responses. ## Challenges We were all unfamiliar with the Godot platform before this hackathon, which meant a steep learning curve. A lot of time was also spent on prompt engineering to query the large language models correctly, such that the output we received was relevant and coherent. ## Next Steps We would like to give the player more room to directly interact with mobs through text prompts and leverage classification tools to analyze player input dialogue. We were also looking into integrating a TTS (text-to-speech) system to help ease the player's cognitive load.
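A hedged sketch of that back end could look like the following: a small Flask route that turns an in-game event into narration through Cohere's generation endpoint. The route, prompt wording, and parameters are illustrative assumptions.

```python
# Flask service that converts game events into narrator sentences via Cohere.
import cohere
from flask import Flask, request, jsonify

app = Flask(__name__)
co = cohere.Client("YOUR_API_KEY")  # placeholder

@app.route("/narrate", methods=["POST"])
def narrate():
    event = request.get_json()  # e.g. {"actor": "the knight", "action": "slays", "target": "a ghoul"}
    prompt = (
        "You are the narrator of a dark fantasy wave shooter. "
        f"In one vivid sentence, describe: {event['actor']} {event['action']} {event['target']}."
    )
    resp = co.generate(prompt=prompt, max_tokens=50, temperature=0.8)
    return jsonify({"narration": resp.generations[0].text.strip()})
```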
## Inspiration We are all software/game devs excited by new and unexplored game experiences. We originally came to PennApps thinking of building an Amazon shopping experience in VR, but eventually pivoted to Project Em, a concept we all found more engaging. Our switch was motivated by the same force that is driving us to create and improve Project Em: the desire to venture into unexplored territory and combine technologies not often used together. ## What it does Project Em is a puzzle exploration game driven by Amazon's Alexa API - players control their character with the canonical keyboard and mouse controls, but cannot accomplish anything relevant in the game without talking to a mysterious, unknown benefactor who calls out at the beginning of the game. ## How we built it We used a combination of C++, Python, and lots of shell scripting to create our project. The client-side game code runs on Unreal Engine 4, and is a combination of C++ classes and Blueprint (Epic's visual programming language) scripts. Those scripts and classes communicate with an intermediary server running Python/Flask, which in turn communicates with the Alexa API. There were many challenges in communicating RESTfully out of a game engine (see below for more here), so the two-legged approach lent itself well to focusing on game logic as much as possible. Sacha and Akshay worked mostly on the Python, TCP socket, and REST communication platform, while Max and Trung worked mainly on the game, assets, and scripts. The biggest challenge we faced was networking. Unreal Engine doesn't natively support running a webserver inside a game, so we had to think outside of the box when it came to networked communication. The first major hurdle was to find a way to communicate from Alexa to Unreal - we needed to be able to communicate the natural language parsing abilities of the Amazon API back to the game. So, we created a complex system of runnable threads and sockets inside of UE4 to pipe in data (see the challenges section for more info on the difficulties here). Next, we created a corresponding client socket creation mechanism on the intermediary Python server to connect into the game engine. Finally, we created a basic registration system where game clients can register their publicly exposed IPs and ports with Python. The second step was to communicate between Alexa and Python. We utilized [Flask-Ask](https://flask-ask.readthedocs.io/en/latest/) to abstract away most of the communication difficulties. Next, we used [VaRest](https://github.com/ufna/VaRest), a plugin for handling JSON inside of Unreal, to communicate from the game directly to Alexa. The third and final step was to create a compelling and visually telling narrative for the player to follow. Though we can't describe too much of that in text, we'd love you to give the game a try :) ## Challenges we ran into The challenges we ran into divided roughly into three sections: * **Threading**: This was an obvious problem from the start. Game engines rely on a single main "UI" thread to be unblocked and free to process for the entirety of the game's life-cycle. Running a socket that blocks for input is a concept in direct conflict with that idiom. So, we dove into the FSocket documentation in UE4 (which, according to Trung, hasn't been touched since Unreal Tournament 2...) - needless to say, it was difficult. The end solution was a combination of both FSocket and FRunnable that could block at certain steps in the socket process without interrupting the game's main thread.
Lots of stuff like this happened:

```
while (StopTaskCounter.GetValue() == 0) {
    socket->HasPendingConnection(foo);
    while (!foo && StopTaskCounter.GetValue() == 0) {
        Sleep(1);
        socket->HasPendingConnection(foo);
    }

    // at this point there is a client waiting
    clientSocket = socket->Accept(TEXT("Connected to client.:"));
    if (clientSocket == NULL)
        continue;

    while (StopTaskCounter.GetValue() == 0) {
        Sleep(1);
        if (!clientSocket->HasPendingData(pendingDataSize))
            continue;

        buf.Init(0, pendingDataSize);
        clientSocket->Recv(buf.GetData(), buf.Num(), bytesRead);
        if (bytesRead < 1) {
            UE_LOG(LogTemp, Error, TEXT("Socket did not receive enough data: %d"), bytesRead);
            return 1;
        }

        int32 command = (buf[0] - '0');

        // call custom event with number here
        alexaEvent->Broadcast(command);

        clientSocket->Close();
        break; // go back to wait state
    }
}
```

Notice a few things here: we are constantly checking for a stop call from the main thread so we can terminate safely, we are sleeping to not block on Accept and Recv, and we are calling a custom event broadcast so that the actual game logic can run on the main thread when it needs to. The second point of contention in threading was the Python server. Flask doesn't natively support any kind of global-to-request variables. So, the canonical approach of opening a socket once and sending info through it over time would not work, regardless of how hard we tried. The solution, as you can see from the above C++ snippet, was to repeatedly open and close a socket to the game on each Alexa call. This ended up causing a TON of problems in debugging (see below for difficulties there) and lost us a bit of time. * **Network Protocols**: Of all things to deal with in terms of networks, we spent the largest amount of time solving the problems over which we had the least control. Two bad things happened: Heroku rate-limited us pretty early on with the most heavily used URLs (i.e. the Alexa responders). This prompted two possible solutions: migrate to DigitalOcean, or constantly remake Heroku dynos. We did both :). DigitalOcean proved to be more difficult than normal because the Alexa API only works with HTTPS addresses, and we didn't want to go through the hassle of using LetsEncrypt with Flask/Gunicorn/Nginx. Yikes. Switching Heroku dynos it was. The other problem we had was with timeouts. Depending on how we scheduled socket commands relative to REST requests, we would occasionally time out on Alexa's end. This was easier to solve than the rate limiting. * **Level Design**: Our levels were carefully crafted to cater to the dual-player relationship. Each room and lighting balance was tailored so that the player wouldn't feel totally lost, but at the same time, would need to rely heavily on Em for guidance and path planning. ## Accomplishments that we're proud of The single largest thing we've come together in solving has been the integration of standard web protocols into a game engine. Apart from matchmaking and data transmission between players (which are both handled internally by the engine), most HTTP-based communication is undocumented or simply not implemented in engines. We are very proud of the solution we've come up with to accomplish true bidirectional communication, and can't wait to see it implemented in other projects. We see a lot of potential in other AAA games to use voice control as not only an additional input method for players, but a way to catalyze gameplay with a personal connection. On a more technical note, we are all so happy that...
THE DAMN SOCKETS ACTUALLY WORK YO ## Future Plans We hope to release the toolchain we've created for Project Em as a public GitHub repo and Unreal plugin for other game devs to use. We can't wait to see what other creative minds will come up with! ### Thanks Much <3 from all of us, Sacha (CIS '17), Akshay (CGGT '17), Trung (CGGT '17), and Max (ROBO '16). Find us on GitHub and say hello anytime.
losing
## Inspiration An individual living in Canada wastes approximately 183 kilograms of solid food per year. This equates to $35 billion worth of food. A study that asked why so much food is wasted showed that about 57% of people thought their food goes bad too quickly, while another 44% said the food was past its expiration date. ## What it does LetsEat is an assistant, comprising a server, an app, and a Google Home Mini, that reminds users of food that is going to expire soon and encourages them to cook it into a meal before it goes bad. ## How we built it We used a variety of leading technologies, including Firebase for the database and cloud functions, and the Google Assistant API with Dialogflow. On the mobile side, we have a system for effortlessly uploading receipts using Microsoft Cognitive Services optical character recognition (OCR). The Android app is written using RxKotlin, RxAndroid, and Retrofit on an MVP architecture. ## Challenges we ran into One of the biggest challenges that we ran into was fleshing out our idea. Every time we thought we solved an issue in our concept, another one appeared. We iterated over our system design, app design, Google Action conversation design, and integration design, over and over again, for around the first 6 hours of the event. During development, we faced the learning curve of Firebase Cloud Functions, setting up Google Actions using Dialogflow, and setting up socket connections. ## What we learned We learned a lot more about how voice user interaction design works.
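For illustration, the receipt-upload step could look roughly like the sketch below, which posts an image to the Microsoft Cognitive Services OCR endpoint and collects the recognized lines; the region and key are placeholders, and the real app does this from the Android client rather than from Python.

```python
# Send a receipt photo to the Cognitive Services OCR API and extract text lines.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v3.2/ocr"  # region is an assumption
KEY = "YOUR_COGNITIVE_SERVICES_KEY"

def read_receipt(image_path: str) -> list:
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()
    lines = []
    for region in resp.json().get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(word["text"] for word in line.get("words", [])))
    return lines

print(read_receipt("receipt.jpg"))  # e.g. ["MILK 2% 4L 4.97", "BANANAS 1.32", ...]
```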
## Inspiration We love cooking and watching food videos. From the Great British Baking Show to Instagram reels, we are foodies in every way. However, with the 119 billion pounds of food that is wasted annually in the United States, we wanted to create a simple way to reduce waste and try out new recipes. ## What it does lettuce enables users to create a food inventory using a mobile receipt scanner. It then alerts users when a product approaches its expiration date and prompts them to notify their network if they possess excess food they won't consume in time. In such cases, lettuce notifies fellow users in their network that they can collect the surplus item. Moreover, lettuce offers recipe exploration and automatically checks your pantry and any other food shared by your network's users before suggesting new purchases. ## How we built it lettuce uses React and Bootstrap for its frontend and uses Firebase for the database, which stores information on all the different foods users have stored in their pantry. We also use a pre-trained image-to-text neural network that enables users to inventory their food by scanning their grocery receipts. We also developed an algorithm to parse receipt text to extract just the food from the receipt. ## Challenges we ran into One big challenge was finding a way to map the receipt food text to the actual food item. Receipts often use annoying abbreviations for food, and we had to find databases that allow us to map the receipt item to the food item. ## Accomplishments that we're proud of lettuce has a lot of work ahead of it, but we are proud of our idea and teamwork to create an initial prototype of an app that may contribute to something meaningful to us and the world at large. ## What we learned We learned that there are many things to account for when it comes to sustainability, as we must balance accessibility and convenience with efficiency and efficacy. Not having food waste would be great, but it's not easy to finish everything in your pantry, and we hope that our app can help find a balance between the two. ## What's next for lettuce We hope to improve our recipe suggestion algorithm as well as the estimates for when food expires. For example, a green banana will have a different expiration date compared to a ripe banana, and our scanner has a universal deadline for all bananas.
## Inspiration As college students who have all recently moved into apartments for the first time, we found that we were wasting more food than we could have expected. Having to find rotten yet untouched lettuce in the depths of the fridge is not only incredibly wasteful for the environment but also harmful to our nutrition. We wanted to create this app to help other students keep track of the items in their fridge, without having to wrack their brains for what to cook every day. Our goal was to both streamline mealtime preparations and provide a sustainable solution to everyday food waste. ## What it does Our app is meant to be simple and intuitive. Users are able to upload a photo of their receipt directly from our app, which we then process to extract the food items. Then, we take these ingredients, calculate expiration dates, and produce recipes for the user using the ingredients that they already have, prioritizing ingredients that are expiring sooner. ## How we built it Our tech stack consisted of React Native, Express, MongoDB, the OpenAI API, and OCR. We used React Native for our frontend and Express for our backend support. MongoDB was used to store the data we parsed from user receipts, so that our app would not be memoryless. To actually process and recognize the text on the receipt, we used OCR. To generate recipes, we utilized the OpenAI API and engineered prompts that would yield the best results. ## Challenges we ran into For this project, we wanted to really challenge ourselves by using a tech stack we had never used before, such as React Native, Express, the OpenAI API, and OCR. Since essentially our entire tech stack was unfamiliar, we faced many challenges in understanding syntax, routing, and communication between the frontend and backend. Additionally, we faced issues with technology like Multer in the middleware when it came to sending image information from the front end to the back end, as we had never used Multer before either. However, we are incredibly proud of ourselves for being able to persevere and find solutions to our problems, to both learn new skills as well as produce our MVP. ## Accomplishments that we're proud of We are incredibly proud of being able to produce our final product. Though it may not be the best, we hope that it symbolizes our learning, development, and perseverance. From getting our MongoDB database set up to getting our frontend to properly communicate with our backend, we will be taking away many accomplishments with us. ## What we learned As previously mentioned, we learned an entirely new tech stack. We got to experience React Native, Express, the OpenAI API, and OCR for the first time. It's hard to verbalize what we have learned without talking about our entire project process, since we truly learned something new every time we implemented something. ## What's next for Beat the Receipt Originally, we wanted to implement our in-app camera, but due to an unfamiliar tech stack, we didn't get a chance to implement it for this iteration; we are already working on it. Additionally, for the future, we hope to allow users to choose recipes that better cater to their tastes while still using soon-to-expire ingredients. Eventually, we would also like to implement a budgeting option, where users can visualize how much of their budget has been spent on their groceries, with our app handling the calculations.
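A hedged sketch of the recipe-generation step is below: prompt an OpenAI model with the soon-to-expire ingredients parsed from receipts. It uses the classic pre-1.0 `openai` Python client style, and the model choice and prompt wording are assumptions rather than the app's actual code.

```python
# Generate recipe suggestions that prioritize soon-to-expire ingredients.
import openai

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

def suggest_recipes(expiring, pantry):
    prompt = (
        "Suggest two simple recipes. Prioritize these ingredients that expire soon: "
        f"{', '.join(expiring)}. Other available ingredients: {', '.join(pantry)}. "
        "List the recipe name, ingredients used, and short steps."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=400,
    )
    return resp.choices[0].message["content"]

print(suggest_recipes(["spinach", "ripe bananas"], ["eggs", "oats", "rice"]))
```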
winning
## Inspiration We were inspired to create this Chrome extension because we often use the website Codeforces.com to practice competitive programming. However, since Codeforces has such a variety of problems to choose from (over 6000 problems!), we often struggle with finding the next problem to work on. It is difficult to find a problem that is at the right skill level yet also helps us work on the skills we need to practice most. Therefore, we decided to create a Chrome extension that will perform this challenging task for us through the power of computation. ## What it does We built a Chrome extension to suggest relevant problems on Codeforces. Clicking on the extension icon will bring up a popup menu that allows users to generate recommended problems based upon their skill levels and their chosen categories. ## How we built it First, we scanned through the Codeforces website and established a difficulty rating for each problem using a ranking formula with binary search, and uploaded the results to a server. Then, we took into account the category each problem falls into (ex: "dp", "data structures"). For each specific user, we use the data on his/her past submissions to establish a specific user rating for each category. We then combine all of our data to suggest a recommended problem that the user can solve next. ## Challenges we ran into The most difficult task we encountered was integrating two separate pieces of software designed by different developers to create a functional product (frontend and backend). Fine-tuning the mathematical model behind our problem recommendation algorithm also presented a significant challenge. ## Accomplishments that we're proud of We were able to successfully use a server to look up data in real time and brave the horror that is JavaScript. ## What we learned We learned how to use JavaScript to build a Chrome extension and connect it to a server. We learned how to perform server-side functions with Flask and SQLite. ## What's next for Codeforces Companion (Coco) After more fine-tuning of the mathematical model and incorporating periodic updates to our database, we will publish this extension to the Chrome Web Store. Currently, the user's category ratings are not displayed; in future versions, the user will be able to access this information to pinpoint their competitive programming weaknesses more accurately.
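A simplified sketch of the recommendation idea: estimate the user's rating in a category from past submissions via the public Codeforces API, then binary-search a difficulty-sorted problem list for the closest match. The rating heuristic and data shapes are assumptions; only the API endpoint is real.

```python
# Recommend a Codeforces problem near the user's estimated skill in a category.
import bisect

import requests

def solved_ratings(handle: str, tag: str) -> list:
    """Ratings of problems with `tag` that the user has solved (via the public API)."""
    subs = requests.get("https://codeforces.com/api/user.status",
                        params={"handle": handle}).json()["result"]
    return [s["problem"]["rating"] for s in subs
            if s.get("verdict") == "OK" and tag in s["problem"].get("tags", [])
            and "rating" in s["problem"]]

def recommend(handle: str, tag: str, problems: list) -> dict:
    """`problems` is assumed to be a list of dicts pre-sorted by their 'rating' field."""
    ratings = solved_ratings(handle, tag)
    # Naive skill estimate: a bit above the average difficulty already solved.
    target = int(sum(ratings) / len(ratings)) + 100 if ratings else 800
    idx = bisect.bisect_left([p["rating"] for p in problems], target)
    return problems[min(idx, len(problems) - 1)]
```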
## Inspiration As third-year students looking for internships, our group was looking for a way to make our job hunts a little easier and more organized. Any computer science student knows that a large part of many interview processes at tech companies is Leetcode data structure and algorithm problems. Aside from some Leetcode problems being extremely hard, it felt like even after understanding one, we would forget how to do it a week or two later. That's when we learned about spaced repetition, a theory that says if you review a concept after a certain increment of time, you are more likely to remember it. We decided we needed something to remind us to review the right Leetcode problems that we had already done, at the right times. *That's when we came up with Leetr.* ## What it does Leetr has a few components: **a Chrome extension that links to your browser, a complete website dashboard, and SMS push notifications.** When a Leetcode problem is opened in a user's browser, the **Chrome extension** helps the user record the current date, the time complexity, and any notes they may have regarding the problem. It also records things like the title, the number of times attempted/successful, the difficulty level of the problem, and more. This information is stored in a **database** and accessed by the **dashboard website.** It has a table that shows users all the problems they have attempted alongside the information mentioned above, as well as graphs and charts that show their progress. They can review their attempts, notes, stats, and more. It also shows them the date that they attempted each problem, as well as the date they should review it again, in order to reinforce their learning. This is where the **SMS feature** comes in. Users can opt in to have an SMS sent to their phone with the problem that they should try and review that day according to spaced repetition. *This helps users stay on top of the Leetcode game, no matter how busy they may be.* ## How we built it The Chrome extension was built using HTML, CSS, and JS. For our database we used MongoDB Atlas, with Node and Express JS endpoints connecting it to our frontend. The frontend of the website dashboard was made using React.js, specifically with Chakra UI for our components. The graphs were made using Plotly.js, an open-source graphing library for JavaScript. ## Challenges we ran into For the backend, we ran into problems with CORS because we did not have the right permissions, as Google was blocking us a lot. We had to come up with a way for the API to determine if a problem already existed, and whether it should add that problem or edit an existing problem (PUT request vs. POST request). Lastly, our frontend had to be overhauled because there were aspects of our code (e.g., Plotly graphs) that would not talk to our backend no matter what we tried. ## Accomplishments that we're proud of This is our very first full-stack MERN application that we built ourselves, from the ground up! This was also our first experience developing a Chrome extension, which was a really cool learning experience. Lastly, we feel that our team chemistry was really good, and it allowed us to work efficiently and trust each other a lot. ## What we learned We learned many things, including but not limited to the MERN stack, how to use MongoDB Atlas, how to use Plotly.js, and how to use Infobip, and generally we practiced our processes in connecting our database to show up in React tables and Plotly graphs.
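The review-date logic described above boils down to expanding intervals after each successful attempt. Leetr's backend is JavaScript, so the Python sketch below only illustrates the scheduling idea, with assumed interval lengths.

```python
# Spaced-repetition scheduling sketch: the interval grows with each success.
from datetime import date, timedelta

# Assumed intervals (in days) after the 1st, 2nd, 3rd... successful attempt.
INTERVALS = [1, 3, 7, 14, 30]

def next_review(last_attempt: date, times_successful: int) -> date:
    idx = min(times_successful, len(INTERVALS)) - 1
    days = INTERVALS[max(idx, 0)]
    return last_attempt + timedelta(days=days)

print(next_review(date(2023, 1, 10), times_successful=2))  # -> 2023-01-13
```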
## What's next for Leetr Next, we hope to implement a mobile app for mobile push notifications, and the ability to review code on your phone on the go. Additionally, we are considering what a social feature might look like on Leetr, maybe where friends can see each other's progress, or can opt in to receive the same alerts to practice the same questions.
## Inspiration There are thousands of people worldwide who suffer from conditions that make it difficult for them to both understand speech and also speak for themselves. According to the Journal of Deaf Studies and Deaf Education, the loss of a personal form of expression (through speech) has the capability to impact affected individuals' internal stress and lead to detachment. One of our main goals in this project was to solve this problem by developing a tool that would be a step forward in the effort to make it seamless for everyone to communicate. By exclusively utilizing mouth movements to predict speech, we can introduce a unique modality for communication. While developing this tool, we also realized how helpful it would be to ourselves in daily usage as well. In areas of commotion, and while hands are busy, the ability to simply use natural lip-reading in front of a camera to transcribe text would make it much easier to communicate. ## What it does **The Speakinto.space website-based hack has two functions: first and foremost, it is able to 'lip-read' a stream from the user (discarding audio) and transcribe it to text; and secondly, it is capable of mimicking one's speech patterns to generate accurate vocal recordings of user-inputted text with very little latency.** ## How we built it We have a Flask server running on an AWS server (thanks for the free credit, AWS!), which is connected to a machine learning model running on the same server, with a frontend made with HTML and MaterializeCSS. The model was trained to transcribe people mouthing words, using the millions of words in the LRW and LSR datasets (from the BBC and TED). This algorithm's integration is the centerpiece of our hack. We then used the HTML MediaRecorder to take 8-second clips of video to initially implement the video-to-mouthing-words function on the website, using a direct application of the machine learning model. We later added an encoder model, to translate audio into an embedding containing vocal information, and then a decoder, to convert the embeddings to speech. To convert the text in the first function to speech output, we use the Google Text-to-Speech API, and this would be the main point of future development of the technology, in having noiseless calls. ## Challenges we ran into The machine learning model was quite difficult to create, and required a large amount of testing (and caffeine) to finally result in a model that was fairly accurate for visual analysis (72%). The process of preprocessing the data and formatting such a large amount of data to train the algorithm was the area that took the most time, but it was extremely rewarding when we finally saw our model begin to train. ## Accomplishments that we're proud of Our final product is much more than any of us expected, especially given that it seemed like an impossibility when we first started. We are very proud of the optimizations that were necessary to run the webpage fast enough to be viable in an actual use scenario. ## What we learned Working with such a wide array of computing concepts, from web development to statistical analysis to the development and optimization of ML models, was an amazing learning experience over the last two days. We all learned so much from each other, as each one of us brought special expertise to our team. ## What's next for speaking.space As a standalone site, it has its use cases, but the use cases are limited due to the requirement to navigate to the page.
The next steps are to integrate it with other services, such as Facebook Messenger or Google Keyboard, to make it available whenever it is needed, just as conveniently as its inspiration.
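For reference, the Google Text-to-Speech step mentioned above can be driven with the official client roughly as sketched below; the voice selection and output format are illustrative choices, not necessarily what the site uses.

```python
# Convert lip-read transcript text to audio with Google Cloud Text-to-Speech.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()  # expects GOOGLE_APPLICATION_CREDENTIALS to be set

def speak(text: str, out_path: str = "speech.mp3") -> None:
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)

speak("Hello from the lip-reading transcript.")
```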
losing
## Inspiration Forest fires have caused massive devastation during the last 5 years. Recently, Australia has been suffering from fires burning since October 2019. We wanted to create a hack that involves both the tool to collect the data and the analysis of that data. ## What it does The sensors connected to the Arduino would send out statistical data using IoT protocols such as MQTT. This data would be stored in our Firebase Cloud Firestore database, which would be updated in real time. We would then run our machine learning model to analyze the data and predict the likelihood of the area catching fire, so that it can be prevented. All this data would be tracked on our website for the citizens and the government of the country. ## How we built it Our hardware side uses an Arduino with a Grove shield, a Grove Temperature Sensor v1.2 to track the temperature, and a SparkFun moisture sensor to track moisture content at a moderate depth. The collected data is read by server-side Python code, which adds it to our Firebase Cloud Firestore database. The collected data is then run through our machine learning model, which is trained using scikit-learn. The model outputs the likelihood of a fire taking place in an area based on the temperature, wind, relative humidity, etc. All this data is displayed on our web application through charts and heatmaps. The web application is built using Python and the Flask web framework. ## Challenges we ran into Training the model was our biggest challenge, as we didn't have a big dataset to work with. We had to tweak our calculations to get better prediction accuracy based on the dataset. ## Accomplishments that we're proud of ## What we learned ## What's next for Ignis Vigilante
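A hedged sketch of the prediction step: train a scikit-learn classifier on weather and soil readings and output a fire-likelihood probability for a new sensor reading. The feature set and tiny inline dataset are made up for illustration, not the project's real training data.

```python
# Fire-likelihood prediction from sensor readings with scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: temperature (C), relative humidity (%), wind (km/h), soil moisture (%)
X = np.array([
    [35, 20, 30, 5],
    [22, 60, 10, 40],
    [40, 15, 45, 3],
    [18, 80, 5, 55],
])
y = np.array([1, 0, 1, 0])  # 1 = fire occurred nearby, 0 = no fire

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

reading = np.array([[38, 18, 35, 4]])  # latest values from the Arduino sensors
print("Fire likelihood:", model.predict_proba(reading)[0][1])
```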
## Inspiration I really enjoy the cyberpunk aesthetic, and realized that Arduino's challenge would be a great chance to explore some new technologies as well as have some fun dressing to play the part. ## What it does **Jetson Nano** The project uses a Jetson Nano to handle all of its ML and video streaming capabilities. The project has 2 main portions: the C920 webcam and a thermal camera in the form of the Adafruit MLX90640. The feeds from the thermal camera and the C920 are both fed into a small LCD where search-and-rescue (S&R) workers can take a look at key points in their surroundings. The LEDs strapped to the jacket are more for visual flair! **Quiz Website** We made a quiz focusing on environmental protection so that people are aware of the environmental crisis. ## Challenges we ran into **Hardware** This hackathon was a first for me in working with sensors, more specifically the thermal camera, which is an I2C device. I2C is essentially a communication protocol used by low-speed peripheral devices. Finding the right libraries to communicate with the sensor was quite difficult, as was visualizing the data received. This was also my first hackathon doing anything with machine learning, so it was interesting to use pre-trained models and see how the NVIDIA Jetson platform handles things. This time around I wasn't able to train my own models, but I definitely plan on trying to in the future. The reason I was unable to add a 2D heatmap was that Jupyter Notebook had installation issues. **Website** Since we had no prior experience in React, we needed to learn while we coded. The syntax and process were so new to me that it took me a long time to complete simple tasks. Fortunately, I got so much help from the generous TreeHacks mentors and could finish the project. ## Accomplishments that we're proud of **Hardware** Being able to interface with the I2C sensor at all was rather nice, especially since I had to pore over a lot of documentation in order to translate Raspberry Pi instructions to the Nano. Setting up the Nano to work with NVIDIA's "Hello AI World" was also quite enjoyable. **Website** We made a quiz focusing on environmental protection so that people are aware of the environmental crisis. We used React and Chakra UI, which we had never used before, so we learned while we coded. The whole process of making something from scratch was hard, but it was fun and full of learning at the same time, and seeing it come to life from zero was rewarding. ## What we learned **Hardware** I got to experience the world of microcontrollers a little more, as well as get an introduction to the world of using ML models. I also learned just how far you can push a small PC that is specialized to do interesting things. I really couldn't have asked for a better intro experience to the exciting world of electronics and AI! **Website** We used React and Chakra UI, which we had never used before, so we learned while we coded. The whole process of making something from scratch was hard, but it was fun and full of learning at the same time.
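For anyone curious, reading the MLX90640 over I2C with Adafruit's CircuitPython library looks roughly like the sketch below; the pin setup assumes the Blinka/board abstraction, and the alert threshold is an arbitrary example.

```python
# Read 24x32 thermal frames from the Adafruit MLX90640 over I2C.
import time

import board
import busio
import adafruit_mlx90640

i2c = busio.I2C(board.SCL, board.SDA, frequency=800000)
mlx = adafruit_mlx90640.MLX90640(i2c)
mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_2_HZ

frame = [0.0] * 768  # the sensor returns a 24x32 grid of temperatures (Celsius)
while True:
    try:
        mlx.getFrame(frame)
    except ValueError:
        continue  # occasional bad frames are normal; just retry
    hottest = max(frame)
    if hottest > 37.0:  # arbitrary threshold for "possible person / heat source"
        print(f"Hot spot: {hottest:.1f} C")
    time.sleep(0.5)
```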
## Inspiration As students with busy lives, we find it difficult to remember to water our plants, especially when we're constantly thinking about more important matters. So, as a solution, we thought it would be best to have an app that centralizes plant data, monitors it, and notifies users about the health of their plants. ## What it does The system is set up with 2 main components: hardware and software. On the hardware side, we have multiple sensors placed around the plant that provide input on various parameters (e.g. moisture, temperature, etc.). Once extracted, the data is relayed to an online database (in our case Google Firebase), where it is then consumed by our front-end system: an Android app. The app currently allows user authentication and the ability to add and delete plants. ## How we built it **The Hardware**: The hardware setup for this hack was iterated on multiple times throughout the hacking phase due to setbacks with the hardware we were given. Originally we planned on using the Dragonboard 410c as a central hub for all the sensory input before transmitting it via Wi-Fi. However, the Dragonboard from the hardware lab had a corrupted version of Windows IoT, which meant we had to flash the entire device before starting. After flashing, we learned that Dragonboards (and Raspberry Pis) lack support for analog input, meaning the circuit required some sort of ADC (analog-to-digital converter). Afterwards, we decided to use the ESP8266 Wi-Fi boards to send data, as they better reflected the form factor of a realistic prototype and because the board itself supports analog input. In addition, we used an Arduino UNO to power the moisture sensor because it required 5V and the ESP outputs 3.3V (the Arduino acts as a 5V regulator). **The Software**: The app was made in Android Studio and was built with user interaction in mind: users authenticate themselves and add their corresponding plants, which in the future would each have sensors. The app is built with scalability in mind, as it uses Google Firebase for user authentication and sensor data logging. ## Challenges we ran into The lack of support for the Dragonboard left us with many setbacks: endless boot cycles, lack of IO support, flashing multiple OSs on the device. What put us off the most was having people tell us not to use it because of its difficulty. However, we still wanted to incorporate it in some way. ## Accomplishments that we're proud of * Flashing the Dragonboard and booting it with Windows IoT Core * A working hardware/software setup that tracks the life of a plant using sensory input. ## What we learned * Learned how to program the Dragonboard (in both Linux and Windows) * Learned how to incorporate Firebase into our hack ## What's next for Dew Drop * Take it to the garden world, where users can track multiple plants at once and even support a self-watering system
losing
## Inspiration In real life, we always find ourselves jam-packed with assignments and exams, and all kinds of tasks that make it easy for us to lose track of our performance. A calendar marks the deadlines, but it fails to show our progress on each task. When it comes to teamwork, it can be even harder to keep everything organized. Increasing productivity at work or in our studies is very important, so we chose to build an AI chatbot for project managers. ## What it does It is a rule-based bot that assigns tasks to team members every morning, lets you check each member's performance at any time, updates task progress every evening, and generates a report of project completion at the end of the day, highlighting the tasks close to their deadlines. ## How we built it We used MongoDB and Node.js ## Challenges we ran into The documentation for Slack was a little confusing; there isn't clear documentation for integrating slash commands with Slack interactive components ## Accomplishments that we're proud of This is the first time we tried building a bot, and it can actually be deployed to help make a workplace more productive ## What we learned We learned how to build an app for Slack and create a bot, which is really useful since it doesn't have to be a workplace organizer. Now that we know how to build it, we can easily change its functionality and turn it into a study/assignment tracker that we can actually use in real life. ## What's next for Project Manager Many other functionalities and commands can be added, or some intelligence can be added so that it is not just rule-based: it can be trained with NLP techniques so that it actually 'understands' the text input sent by the user and gives back smarter responses.
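The bot itself is built with Node.js; the Python/Flask sketch below just illustrates how a Slack slash command such as a hypothetical `/assign` could be handled, using the form fields Slack posts to a configured request URL.

```python
# Handle a Slack slash command like "/assign @dana Write the API docs by Friday".
from flask import Flask, request, jsonify

app = Flask(__name__)
TASKS = {}  # member -> list of tasks (a real version would use MongoDB)

@app.route("/slack/assign", methods=["POST"])
def assign():
    text = request.form.get("text", "")          # everything typed after the command
    assigner = request.form.get("user_name", "")
    member, _, task = text.partition(" ")
    TASKS.setdefault(member, []).append(task)
    return jsonify({
        "response_type": "in_channel",
        "text": f"{assigner} assigned to {member}: {task}",
    })
```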
## Inspiration We were inspired by the various free food groups on Facebook. We realized that, on the one hand, everyone loves free food but it is not easy to find information about it; on the other hand, event organizers or individuals who order too much food often have to deal with food waste problems. We aim to build a mobile web app that solves both problems and creates a sharing economy in the food industry, just like Uber in transportation and Airbnb in the housing industry. ## What it does The web app allows users who are looking for free food to: 1) access real-time free food locations on a map or in a list; 2) search for free food nearby; 3) contact the donor and pick up the food. It also allows users who are giving away food to: 4) post about extra food that needs to be taken. ## How I built it The project is a Django site. The models that will be used in conjunction with GeoDjango are still being written, but they should allow us to easily visualize free food nearby given a person's location. ## Challenges I ran into Static files are always tricky with web apps. Git caused some problems when large files were committed. Also, configuring a database can be tricky. Setting up the right kind of database for GeoDjango was very troublesome. ## Accomplishments that we're proud of We figured out the bugs and tricky predicaments we found ourselves in. ## What I learned More about Django forms and models. Also, UI/UX design. ## What's next for *shareat* Fleshing out the functionality!
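A hedged sketch of the GeoDjango models mentioned above, plus a "free food near me" query, might look like the following; the model fields are assumptions, and distance lookups like this require a spatial backend such as PostGIS.

```python
# GeoDjango model for a free-food post and a nearby-food query.
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

class FoodPost(models.Model):
    description = models.CharField(max_length=200)
    contact = models.CharField(max_length=100)
    location = models.PointField()          # longitude/latitude of the free food
    posted_at = models.DateTimeField(auto_now_add=True)

def nearby_food(lng: float, lat: float, km: float = 2.0):
    """Free food posts within `km` kilometers of the user's location."""
    here = Point(lng, lat, srid=4326)
    return FoodPost.objects.filter(location__distance_lte=(here, D(km=km)))
```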
**Finding a problem** Education policy and infrastructure tend to neglect students with accessibility issues. They are oftentimes left on the backburner while funding and resources go into research and strengthening the existing curriculum. Thousands of college students struggle with taking notes in class due to various learning disabilities that make it difficult to process information quickly or write down information in real time. Over the past decade, Offices of Accessible Education (OAE) have been trying to help support these students by hiring student note-takers and increasing ASL translators in classes, but OAE is constrained by limited funding and low interest from students to become notetakers. This problem has been particularly relevant for our TreeHacks group. In the past year, we have become notetakers for our friends because there are not enough OAE notetakers in class. Being note writers gave us insight into what notes are valuable for those who are incredibly bright and capable but struggle to write. This manual process where we take notes for our friends has helped us become closer as friends, but it also reveals a systemic issue of accessible notes for all. Coming into this weekend, we knew note taking was an especially interesting space. GPT3 had also been on our mind as we had recently heard from our neurodivergent friends about how it helped them think about concepts from different perspectives and break down complicated topics. **Failure and revision** Our initial idea was to turn videos into transcripts and feed these transcripts into GPT-3 to create the lecture notes. This idea did not work out because we quickly learned the transcript for a 60-90 minute video was too large to feed into GPT-3. Instead, we decided to incorporate slide data to segment the video and use slide changes to organize the notes into distinct topics. Our overall idea had three parts: extract timestamps the transcript should be split at by detecting slide changes in the video, transcribe the text for each video segment, and pass in each segment of text into a gpt3 model, fine-tuned with prompt engineering and examples of good notes. We ran into challenges every step of the way as we worked with new technologies and dealt with the beast of multi-gigabyte video files. Our main challenge was identifying slide transitions in a video so we could segment the video based on these slide transitions (which signified shifts in topics). We initially started with heuristics-based approaches to identify pixel shifts. We did this by iterating through frames using OpenCV and computing metrics such as the logarithmic sum of the bitwise XORs between images. This approach resulted in several false positives because the compressed video quality was not high enough to distinguish shifts in a few words on the slide. Instead, we trained a neural network using PyTorch on both pairs of frames across slide boundaries and pairs from within the same slide. Our neural net was able to segment videos based on individual slides, giving structure and organization to an unwieldy video file. The final result of this preprocessing step is an array of timestamps where slides change. Next, this array was used to segment the audio input, which we did using Google Cloud’s Speech to Text API. This was initially challenging as we did not have experience with cloud-based services like Google Cloud and struggled to set up the various authentication tokens and permissions. 
We also ran into the issue of the videos taking a very long time to process, which we fixed by splitting the video into smaller clips and then implementing multithreading approaches to run the speech-to-text processes in parallel. **New discoveries** Our greatest discoveries lay in the fine-tuning of our multimodal model. We implemented a variety of prompt engineering techniques to coax our generative language model into producing the type of notes we wanted from it. In order to overcome the limited context size of the GPT-3 model we utilized, we iteratively fed chunks of the video transcript into the OpenAI API. We also employed both positive and negative prompt training to incentivize our model to produce output similar to our desired notes in the output latent space. We were careful to manage the external context provided to the model to allow it to focus on the right topics while avoiding extraneous tangents that would be incorrect. Finally, we sternly warned the model to follow our instructions, which did wonders for its obedience. These challenges and solutions seem seamless, but our team was on the brink of not finishing many times throughout Saturday. The worst was around 10 PM. I distinctly remember my eyes slowly closing, a series of crumpled papers scattered near the trash can. Each of us was drowning in new frameworks and technologies. We began to question: how could a group of students, barely out of intro-level computer science, think to improve education? The rest of the hour went in a haze until we rallied around a text from a friend who sent us some amazing CS notes we had written for them. Their heartfelt words of encouragement about how our notes had helped them get through the quarter gave us the energy to persevere and finish this project. **Learning about ourselves** We found ourselves, after a good amount of pizza and a bit of caffeine, diving back into documentation for React, Google Text-to-Speech, and Docker. For hours, our eyes grew heavy, but their luster never faded. More troubles arose. There were problems implementing a payment system and never-ending CSS challenges. Ultimately, our love of exploring technologies we were unfamiliar with helped fuel our inner passion. We knew we wanted to integrate Checkbook.io's unique payments tool, and though we found their API well architected, we struggled to connect to it from our edge-compute-centric application. Checkbook's documentation was incredibly helpful, however, and we were able to adapt the code that they had written for a NodeJS server-side backend into our browser runtime to avoid needing to spin up an entirely separate finance service. We are thankful to Checkbook.io for the support their team gave us during the event! Finally, at 7 AM, we connected the backend of our website with the fine-tuned GPT-3 model. I clicked on CS106B and was greeted with an array of lectures to choose from. After choosing last week's lecture, a clean set of notes was exported in LaTeX, perfect for me to refer to when working on the PSET later today! We jumped off of the couches we had been sitting on for the last twelve hours and cheered. A phrase bounced inside my mouth like a rubber ball, "I did it!" **Product features** * Real-time video-to-notes upload * Multithreaded video upload framework * Database of lecture notes for popular classes * Neural network to organize video into slide segments * Multithreaded video-to-transcript pipeline
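For illustration, the initial slide-change heuristic described earlier might be sketched as below: step through the video with OpenCV, score consecutive frames by the logarithmic sum of their bitwise XOR, and flag a boundary when the score jumps. The stride and threshold are made-up values, and the team ultimately replaced this heuristic with a trained PyTorch classifier to cut false positives.

```python
# Heuristic slide-boundary detection via log-sum of bitwise XOR between frames.
import cv2
import numpy as np

def slide_change_timestamps(video_path: str, stride: int = 30, threshold: float = 15.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    timestamps, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            gray = cv2.cvtColor(cv2.resize(frame, (320, 180)), cv2.COLOR_BGR2GRAY)
            if prev is not None:
                score = np.log1p(np.sum(cv2.bitwise_xor(gray, prev), dtype=np.float64))
                if score > threshold:
                    timestamps.append(idx / fps)  # seconds into the lecture
            prev = gray
        idx += 1
    cap.release()
    return timestamps
```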
## Inspiration Inspired by the challenges posed by complex and expensive tools like Cvent, we developed Eventdash: a comprehensive event platform that handles everything from start to finish. Our intuitive AI simplifies the planning process, ensuring it's both effortless and user-friendly. With Eventdash, you can easily book venues and services, track your budget from beginning to end, and rely on our agents to negotiate pricing with venues and services via email or phone. ## What it does EventEase is an AI-powered, end-to-end event management platform. It simplifies planning by booking venues, managing budgets, and coordinating services like catering and AV. A dashboard shows costs and progress in real time. With EventEase, event planning becomes seamless and efficient, transforming complex tasks into a user-friendly experience. ## How we built it We designed a modular AI platform using Langchain to orchestrate services. AWS Bedrock powered our AI/ML capabilities, while You.com enhanced our search and data retrieval. We integrated Claude, Streamlit, and Vocode for NLP, UI, and voice features, creating a comprehensive event planning solution. ## Challenges we ran into We faced several challenges during the integration process: we encountered difficulties integrating multiple tools, particularly with some open-source solutions not aligning with our specific use cases. We are actively working to address these issues and improve the integration. ## Accomplishments that we're proud of We're thrilled about the strides we've made with Eventdash. It's more than just an event platform; it's a game-changer. Our AI-driven system redefines event planning, making it a breeze from start to finish. From booking venues to managing services, tracking budgets, and negotiating pricing, Eventdash handles it all seamlessly. It's the culmination of our dedication to simplifying event management, and we're proud to offer it to you. **Eventdash could potentially achieve a market cap in the range of $2 billion to $5 billion in the B2B sector alone**; with a broader reach and a larger number of potential users, the market cap could be even higher. ## What we learned Our project deepened our understanding of AWS Bedrock's AI/ML capabilities and Vocode's voice interaction features. We mastered the art of seamlessly integrating 6-7 diverse tools, including Langchain, You.com, Claude, and Streamlit. This experience enhanced our skills in creating cohesive AI-driven platforms for complex business processes. ## What's next for EventDash We aim to become the DoorDash of event planning, revolutionizing the B2B world. Unlike Cvent, which offers a more traditional approach, our AI-driven platform provides personalized, efficient, and cost-effective event solutions. We'll expand our capabilities, enhancing AI-powered venue matching, automated negotiations, and real-time budget optimization. Our goal is to streamline the entire event lifecycle, making complex planning as simple as ordering food delivery.
## Inspiration and What it does We often go out with a lot of amazing friends for trips, restaurants, tourism, weekend expeditions, and whatnot. Every encounter has an associated Messenger group chat. We wanted a way to split money that is better than discussing it on the group chat, asking people for their public keys/usernames, and paying on a different platform. We've integrated the two so that we can do transactions and chat in a single place. We (our team) believe that **"The future of money is digital currency"** (Bill Gates), and so we've integrated payment with Algorand's AlgoCoins into the chat. To make the process as simple as possible without being less robust, we extract payment information out of text as well as voice messages. ## How I built it We used the Google Cloud NLP and IBM Watson Natural Language Understanding APIs to extract the relevant information. Voice messages are first converted to text using Rev.ai speech-to-text. We complete the payment using the blockchain, set up with the Algorand API. All scripts and the database will be hosted on an AWS server. ## Challenges I ran into It turned out to be unexpectedly hard to accurately identify the payer and payee. Dealing with the blockchain part was a great learning experience. ## Accomplishments that I'm proud of We were able to make it work in less than 24 hours. ## What I learned A lot of different APIs ## What's next for Mess-Blockchain-enger Different kinds of currencies, more messaging platforms
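As a toy illustration of the extraction step (the real pipeline relies on Google Cloud NLP and Watson NLU rather than regexes), a chat message could be parsed roughly as follows; the message format, mention syntax, and equal-split rule are assumptions:

```python
import re

AMOUNT = re.compile(r"\$?(\d+(?:\.\d{1,2})?)\s*(?:dollars|bucks|algos)?", re.I)
MENTION = re.compile(r"@(\w+)")

def extract_payment_intent(message, sender):
    """Turn a message like 'I paid $45 for dinner, split with @alice and @bob'
    into a list of payer -> payee obligations."""
    amount_match = AMOUNT.search(message)
    participants = MENTION.findall(message)
    if not amount_match or not participants:
        return None
    total = float(amount_match.group(1))
    share = round(total / (len(participants) + 1), 2)  # the sender pays a share too
    return [{"payer": p, "payee": sender, "amount": share} for p in participants]

print(extract_payment_intent("I paid $45 for dinner, split with @alice and @bob", "carol"))
```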
## Inspiration With multiple members of our team having been a part of environmental conservation initiatives and even running some of our own, an issue we have continually recognized is the difficulty in reaching out to community members that share the same vision. Outside of a school setting, it's difficult to easily connect with initiatives and to find others interested in them, and so we wanted to solve that issue by centralizing a space for these communities. ## What it does The demographic here is two-fold. Users who are interested in volunteering can log in, and the app uses their provided location to narrow down nearby events to a radius of their choosing. This makes sorting through hundreds of events quick and easy, and provides a clear pathway to convert the desire to help into tangible change. Users interested in organizing their own events can create accounts and use a simple process to create an event with all its information and post it both to their own page's feed and to the main initiatives list that volunteers are able to browse through. With just a few clicks, an event can be made available to the many volunteers eager to make a difference. ## How we built it As this project is a website, and many of our team are beginners, we worked mostly with HTML, CSS, and JS. We also integrated Bootstrap to help with styling and formatting for the pages to improve user experience. ## Challenges we ran into As relative beginners, one challenge we ran into was working with JavaScript files across multiple HTML pages, and finding that parts of our functionality were only accessible using Node.js. To work around this, we focused on rebranching our website pages to ensure easier connections and finding ways to make our code simpler and more comprehensive. ## Accomplishments that we're proud of We're proud of the community that we built with each other during this hackathon. We truly had so much passion for making this a working product, and loved our logo so much we even made stickers! On a technical level, as first-time users of JavaScript, we're particularly proud of our work with connecting HTML input, using JavaScript for string handling, and then creating new elements on the website. Being able to collect input initiatives into our database and display them with live updates was, for us, the most difficult technical work, but also by far the most rewarding. ## What we learned For our team as a whole, the biggest takeaway has been a strongly renewed interest in web development and the intricacies behind connecting so many different aspects of functionality using JavaScript. ## What's next for BranchOut Moving forward, we're looking to integrate Node.js to supplement our implementation, and to increase connectivity between the different inputs available. We truly believe in our mission to promote nature conservation initiatives, and hope to further expand this into an app to increase accessibility and improve user experience.
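The radius filtering described above boils down to a great-circle distance check; here is an illustrative sketch of that logic (shown in Python for brevity, even though the site itself is plain JavaScript, and the event field names are assumptions):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_events(events, user_lat, user_lon, radius_km):
    """Keep only the initiatives that fall within the user's chosen radius."""
    return [e for e in events
            if haversine_km(user_lat, user_lon, e["lat"], e["lon"]) <= radius_km]
```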
## Inspiration 🐳 The inception of our platform was fueled by the growing water crises and the lack of accessible, real-time data on water quality. We recognized the urgent need for a tool that could offer immediate insights and predictive analyses on water quality. We aimed to bridge the gap between complex data and actionable insights, ensuring that every individual, community, and authority is equipped with precise information to make informed decisions. ## What it does❓ Our platform offers a dual solution of real-time water quality tracking and predictive analytics. It integrates data from 11 diverse sources, offering live, metric-based water quality indices. The predictive model, trained on a rich dataset of over 18,000 points, including 400 events, delivers 99.7% accurate predictions of water quality influenced by various parameters and events. Users can visualize these insights through intuitive heat maps and graphs, making the data accessible and actionable for a range of stakeholders, from concerned individuals and communities to governments and engineers. We also developed an AR experience that allows users to interact with and visualize real time data points that the application provides, in addition to heat map layering to demonstrate the effectiveness and strength of the model. ## How we built it 🛠️ We harnessed the power of big data analytics and machine learning to construct our robust platform. The real-time tracking feature consolidates data from 11 different APIs, databases, and datasets, utilizing advanced algorithms to generate live water quality indices. The predictive model is a masterpiece of regression analysis, trained on a dataset enriched with 18,000 data points on >400 events, webscraped from three distinct big data sources. Our technology stack is scalable and versatile, ensuring accurate predictions and visualizations that empower users to monitor, plan, and act upon water quality data effectively. ## Challenges we ran into 😣 Collecting and consolidating a large enough dataset from numerous sources to attain unbiased information, finding sufficiently detailed 3D models, vectorizing the 1000s of text-based data points into meaningful vectors, hyperparameter optimization of the model to reduce errors to negligible amounts (1x10^-6 margin of error for values 1-10), and using the model's predictions and mathematical calculations to interpolate heat maps to accurately represent and visualize the data. ## Accomplishments that we're proud of 🔥 * A 99.7% accurate model that was self-trained on >18000 data points that we consolidated! * Finding/scraping/consolidating data from turbidity indices & pH levels to social gatherings & future infrastructure projects! * Providing intuitive, easily-understood visualizations of incredibly large and complex data sets! * Using numerous GCP services ranging from compute, ML, satellite datasets, and more! ## What we learned 🤔 Blender, data sourcing, model optimization, and error handling were indubitably the greatest learning experiences for us over the course of these 36 hours!
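As an illustration of the heat-map interpolation step mentioned in the challenges, sparse per-station predictions can be spread onto a dense grid roughly like this (a generic sketch, not the exact pipeline; the grid resolution and interpolation methods are assumptions):

```python
import numpy as np
from scipy.interpolate import griddata

def water_quality_heatmap(lats, lons, scores, resolution=200):
    """Interpolate sparse station predictions onto a dense grid for a heat-map layer."""
    points = np.column_stack([lons, lats])
    values = np.asarray(scores, dtype=float)
    grid_x, grid_y = np.meshgrid(
        np.linspace(min(lons), max(lons), resolution),
        np.linspace(min(lats), max(lats), resolution),
    )
    grid = griddata(points, values, (grid_x, grid_y), method="cubic")
    # Fill gaps outside the convex hull with a nearest-neighbour pass so the
    # rendered layer has no holes.
    nearest = griddata(points, values, (grid_x, grid_y), method="nearest")
    return np.where(np.isnan(grid), nearest, grid)
```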
## Inspiration In the United States, every 11 seconds, a senior is treated in the emergency room for a fall. Every 19 minutes, an older adult dies from a fall, directly or indirectly. Deteriorating balance is one of the direct causes of falling in seniors. This epidemic will only increase, as the senior population will double by 2060. While we can’t prevent the effects of aging, we can slow down this process of deterioration. Our mission is to create a solution to senior falls with Smart Soles, a shoe sole insert wearable and companion mobile app that aims to improve senior health by tracking balance, tracking number of steps walked, and recommending senior-specific exercises to improve balance and overall mobility. ## What it does Smart Soles enables seniors to improve their balance and stability by interpreting user data to generate personalized health reports and recommend senior-specific exercises. In addition, academic research has indicated that seniors are recommended to walk 7,000 to 10,000 steps/day. We aim to offer seniors an intuitive and more discrete form of tracking their steps through Smart Soles. ## How we built it The general design of Smart Soles consists of a shoe sole that has Force Sensing Resistors (FSRs) embedded on it. These FSRs will be monitored by a microcontroller and take pressure readings to take balance and mobility metrics. This data is sent to the user’s smartphone, via a web app to Google App Engine and then to our computer for processing. Afterwards, the output data is used to generate a report whether the user has a good or bad balance. ## Challenges we ran into **Bluetooth Connectivity** Despite hours spent on attempting to connect the Arduino Uno and our mobile application directly via Bluetooth, we were unable to maintain a **steady connection**, even though we can transmit the data between the devices. We believe this is due to our hardware, since our HC05 module uses Bluetooth 2.0 which is quite outdated and is not compatible with iOS devices. The problem may also be that the module itself is faulty. To work around this, we can upload the data to the Google Cloud, send it to a local machine for processing, and then send it to the user’s mobile app. We would attempt to rectify this problem by upgrading our hardware to be Bluetooth 4.0 (BLE) compatible. **Step Counting** We intended to use a three-axis accelerometer to count the user’s steps as they wore the sole. However, due to the final form factor of the sole and its inability to fit inside a shoe, we were unable to implement this feature. **Exercise Repository** Due to a significant time crunch, we were unable to implement this feature. We intended to create a database of exercise videos to recommend to the user. These recommendations would also be based on the balance score of the user. ## Accomplishments that we’re proud of We accomplished a 65% success rate with our Recurrent Neural Network model and this was our very first time using machine learning! We also successfully put together a preliminary functioning prototype that can capture the pressure distribution. ## What we learned This hackathon was all new experience to us. We learned about: * FSR data and signal processing * Data transmission between devices via Bluetooth * Machine learning * Google App Engine ## What's next for Smart Soles * Bluetooth 4.0 connection to smartphones * More data points to train our machine learning model * Quantitative balance score system
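For a sense of how raw FSR readings can be turned into a simple balance metric, here is an illustrative sketch; the sensor names and the scoring formula are assumptions and stand in for the trained RNN described above:

```python
def balance_score(readings):
    """
    readings: list of dicts of per-step FSR values, e.g.
    {"heel_left": 512, "toe_left": 498, "heel_right": 530, "toe_right": 470}.
    Returns a 0-100 score, where 100 means pressure is evenly split left/right.
    """
    asymmetries = []
    for r in readings:
        left = r["heel_left"] + r["toe_left"]
        right = r["heel_right"] + r["toe_right"]
        total = left + right
        if total:
            asymmetries.append(abs(left - right) / total)  # 0 = even, 1 = one-sided
    if not asymmetries:
        return None
    mean_asymmetry = sum(asymmetries) / len(asymmetries)
    return round(100 * (1 - mean_asymmetry), 1)
```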
## Inspiration Our inspiration for this project is that we, like many people, enjoy taking long showers. However, this is bad for the environment and it wastes water as well. This app is a way for us and other people to monitor their water usage effectively and take steps to conserve water, especially in the face of worsening droughts. ## What it does Hydrosaver is a solution for effective day-to-day monitoring of water usage. Most people only know how much water they use monthly from their water bill. This project is meant to help residents effectively locate which areas of their home are using too much water, potentially helping them assess leaks. It also helps them save money by locating areas that are conserving water well so they can continue to save water in that area. ## How we built it We chose to approach this project by using machine learning to help a user conserve their water by having an algorithm predict which sources of water (such as a toilet or a faucet) are using the most water. We came up with a concept for the app and created a wireframe to outline it as well. The app would be paired with the use of sensors, which can be installed in the pipes of the various water sources in the house. We built this project using Google Colab, Canva for the wireframe, and Excel for the data formatting. ## Challenges we ran into We spent a bit of time trying to look for datasets to match our goals, and we ended up having to format some of them differently. We also had to change how our graphs looked because we wanted to display the data in the clearest way possible. ## Accomplishments that we're proud of We formatted and used datasets pretty well. We are glad that we created a basic wireframe to model our app, and we are proud we created a finished product, even if it's not in a functional form yet. ## What we learned We learned how to deal with datasets, how to plot lines on graphs, and how to analyze graphs at a deeper level to make sure they matched the goals of our project. ## What's next for Hydrosaver In the future we hope to create the sensor for this project and find a way to pair it with an accompanying app. We also hope to gain access to or collect more comprehensive data about water consumption in homes.
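To give a concrete sense of the per-source analysis, a sketch like the following (the CSV name and column names are assumptions) aggregates sensor readings by source and ranks sources by total consumption:

```python
import pandas as pd

# Assumed layout: one row per sensor reading with columns timestamp, source
# (e.g. "toilet", "shower", "faucet"), and liters.
usage = pd.read_csv("water_usage.csv", parse_dates=["timestamp"])

daily_by_source = (
    usage.set_index("timestamp")
         .groupby("source")["liters"]
         .resample("D")
         .sum()
         .reset_index()
)

# Sources ranked by total consumption over the period, to flag likely leaks
# or the best candidates for conservation.
print(daily_by_source.groupby("source")["liters"].sum().sort_values(ascending=False))
```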
## Inspiration There are many scary things in the world ranging from poisonous spiders to horrifying ghosts, but none of these things scare people more than the act of public speaking. Over 75% of humans suffer from a fear of public speaking, but what if there was a way to tackle this problem? That's why we created Strive. ## What it does Strive is a mobile application that leverages voice recognition and AI technologies to provide instant actionable feedback in analyzing the voice delivery of a person's presentation. Once you have recorded your speech, Strive will calculate various performance variables such as: voice clarity, filler word usage, voice speed, and voice volume. Once the performance variables have been calculated, Strive will then render your performance variables in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. In the settings page, users have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive will also send the feedback results via text message to the user, allowing them to share/forward an analysis easily. ## How we built it Utilizing the collaboration tool Figma, we designed wireframes of our mobile app. We used tools such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity, we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit in order to calculate the performance variables and used stdlib's cloud function features for text messaging. ## Challenges we ran into Given that our skillsets come from technical backgrounds, one challenge we ran into was developing a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user. ## Accomplishments that we're proud of We created a fully functional mobile app while leveraging an unfamiliar technology stack, providing a simple application that people can use to start receiving actionable feedback on improving their public speaking skills. Anyone can use our app to improve their public speaking skills and conquer their fear of public speaking. ## What we learned Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs. ## What's next for Strive - Your Personal AI Speech Trainer * Model voices of famous public speakers for a more realistic experience in giving personal feedback (using the Lyrebird API). * Ability to calculate more performance variables for an even better analysis and more detailed feedback
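Two of the performance variables, voice speed and filler word usage, can be computed from a transcript with very little code; this sketch is illustrative only (the filler list and metric definitions are assumptions, and the app itself relies on IBM Watson's toolkit):

```python
FILLERS = {"um", "uh", "like", "basically", "actually"}

def speech_metrics(transcript, duration_seconds, custom_fillers=()):
    """Compute words-per-minute and filler-word usage from a transcript."""
    words = transcript.lower().split()
    fillers = FILLERS | set(custom_fillers)  # users can register their own filler words
    filler_count = sum(w.strip(".,!?") in fillers for w in words)
    wpm = len(words) / (duration_seconds / 60)
    return {
        "words_per_minute": round(wpm, 1),
        "filler_count": filler_count,
        "filler_rate": round(filler_count / max(len(words), 1), 3),
    }

print(speech_metrics("So um basically our product is like really great", 6))
```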
## Inspiration The post-COVID era has increased the number of in-person events and need for public speaking. However, more individuals are anxious to publicly articulate their ideas, whether this be through a presentation for a class, a technical workshop, or preparing for their next interview. It is often difficult for audience members to catch the true intent of the presenter, hence key factors including tone of voice, verbal excitement and engagement, and physical body language can make or break the presentation. A few weeks ago during our first project meeting, we were responsible for leading the meeting and were overwhelmed with anxiety. Despite knowing the content of the presentation and having done projects for a while, we understood the impact that a single below-par presentation could have. To the audience, you may look unprepared and unprofessional, despite knowing the material and simply being nervous. Regardless of their intentions, this can create a bad taste in the audience's mouths. As a result, we wanted to create a judgment-free platform to help presenters understand how an audience might perceive their presentation. By creating Speech Master, we provide an opportunity for presenters to practice without facing a real audience while receiving real-time feedback. ## Purpose Speech Master aims to provide a practice platform for practice presentations with real-time feedback that captures details in regard to your body language and verbal expressions. In addition, presenters can invite real audience members to practice where the audience member will be able to provide real-time feedback that the presenter can use to improve. While presenting, presentations will be recorded and saved for later reference for them to go back and see various feedback from the ML models as well as live audiences. They are presented with a user-friendly dashboard to cleanly organize their presentations and review for upcoming events. After each practice presentation, the data is aggregated during the recording and process to generate a final report. The final report includes the most common emotions expressed verbally as well as times when the presenter's physical body language could be improved. The timestamps are also saved to show the presenter when the alerts rose and what might have caused such alerts in the first place with the video playback. ## Tech Stack We built the web application using [Next.js v14](https://nextjs.org), a React-based framework that seamlessly integrates backend and frontend development. We deployed the application on [Vercel](https://vercel.com), the parent company behind Next.js. We designed the website using [Figma](https://www.figma.com/) and later styled it with [TailwindCSS](https://tailwindcss.com) to streamline the styling allowing developers to put styling directly into the markup without the need for extra files. To maintain code formatting and linting via [Prettier](https://prettier.io/) and [EsLint](https://eslint.org/). These tools were run on every commit by pre-commit hooks configured by [Husky](https://typicode.github.io/husky/). [Hume AI](https://hume.ai) provides the [Speech Prosody](https://hume.ai/products/speech-prosody-model/) model with a streaming API enabled through native WebSockets allowing us to provide emotional analysis in near real-time to a presenter. The analysis would aid the presenter in depicting the various emotions with regard to tune, rhythm, and timbre. 
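The streaming pattern looks roughly like the sketch below: audio is chunked, sent over a WebSocket, and emotion scores come back per chunk. The endpoint URL, message schema, and missing authentication are placeholders, not Hume's documented API:

```python
import base64
import json

import websockets

# Placeholder URL and schema for a streaming prosody-analysis endpoint.
PROSODY_WS_URL = "wss://example.invalid/prosody-stream"

async def stream_prosody(audio_chunks):
    """Send short audio chunks over a WebSocket and yield the top emotions per chunk."""
    async with websockets.connect(PROSODY_WS_URL) as ws:
        for chunk in audio_chunks:  # raw bytes from the recorder, e.g. ~5 s each
            await ws.send(json.dumps({"data": base64.b64encode(chunk).decode()}))
            reply = json.loads(await ws.recv())
            emotions = reply.get("emotions", [])
            # Surface only the strongest few emotions so the presenter isn't overwhelmed.
            yield sorted(emotions, key=lambda e: e.get("score", 0), reverse=True)[:3]
```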
Google and [Tensorflow](https://www.tensorflow.org) provide the [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:%7E:text=MoveNet%20is%20an%20ultra%20fast,17%20keypoints%20of%20a%20body.) model is a large improvement over the prior [PoseNet](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) model which allows for real-time pose detection. MoveNet is an ultra-fast and accurate model capable of depicting 17 body points and getting 30+ FPS on modern devices. To handle authentication, we used [Next Auth](https://next-auth.js.org) to sign in with Google hooked up to a [Prisma Adapter](https://authjs.dev/reference/adapter/prisma) to interface with [CockroachDB](https://www.cockroachlabs.com), allowing us to maintain user sessions across the web app. [Cloudinary](https://cloudinary.com), an image and video management system, was used to store and retrieve videos. [Socket.io](https://socket.io) was used to interface with Websockets to enable the messaging feature to allow audience members to provide feedback to the presenter while simultaneously streaming video and audio. We utilized various services within Git and Github to host our source code, run continuous integration via [Github Actions](https://github.com/shahdivyank/speechmaster/actions), make [pull requests](https://github.com/shahdivyank/speechmaster/pulls), and keep track of [issues](https://github.com/shahdivyank/speechmaster/issues) and [projects](https://github.com/users/shahdivyank/projects/1). ## Challenges It was our first time working with Hume AI and a streaming API. We had experience with traditional REST APIs which are used for the Hume AI batch API calls, but the streaming API was more advantageous to provide real-time analysis. Instead of an HTTP client such as Axios, it required creating our own WebSockets client and calling the API endpoint from there. It was also a hurdle to capture and save the correct audio format to be able to call the API while also syncing audio with the webcam input. We also worked with Tensorflow for the first time, an end-to-end machine learning platform. As a result, we faced many hurdles when trying to set up Tensorflow and get it running in a React environment. Most of the documentation uses Python SDKs or vanilla HTML/CSS/JS which were not possible for us. Attempting to convert the vanilla JS to React proved to be more difficult due to the complexities of execution orders and React's useEffect and useState hooks. Eventually, a working solution was found, however, it can still be improved to better its performance and bring fewer bugs. We originally wanted to use the Youtube API for video management where users would be able to post and retrieve videos from their personal accounts. Next Auth and YouTube did not originally agree in terms of available scopes and permissions, but once resolved, more issues arose. We were unable to find documentation regarding a Node.js SDK and eventually even reached our quota. As a result, we decided to drop YouTube as it did not provide a feasible solution and found Cloudinary. ## Accomplishments We are proud of being able to incorporate Machine Learning into our applications for a meaningful purpose. We did not want to reinvent the wheel by creating our own models but rather use the existing and incredibly powerful models to create new solutions. 
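For a sense of how approachable these off-the-shelf models are, here is an illustrative Python/TensorFlow Hub sketch of MoveNet keypoint extraction plus a toy posture cue (the app runs MoveNet in the browser via TensorFlow.js, and the slouching heuristic below is an assumption, not the project's logic):

```python
import tensorflow as tf
import tensorflow_hub as hub

# MoveNet Lightning from TF Hub; expects a 192x192 int32 image batch.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def keypoints_from_frame(frame_rgb):
    """frame_rgb: HxWx3 uint8 array. Returns 17 keypoints as (y, x, confidence)."""
    inp = tf.image.resize_with_pad(tf.expand_dims(frame_rgb, axis=0), 192, 192)
    outputs = movenet(tf.cast(inp, tf.int32))
    return outputs["output_0"].numpy()[0, 0]  # shape (17, 3), coords normalized to [0, 1]

def is_slouching(keypoints, min_conf=0.3):
    """Crude cue: the nose dropping close to shoulder height suggests slouching."""
    nose, l_sh, r_sh = keypoints[0], keypoints[5], keypoints[6]
    if min(nose[2], l_sh[2], r_sh[2]) < min_conf:
        return None  # not confident enough to judge
    shoulder_y = (l_sh[0] + r_sh[0]) / 2
    return (shoulder_y - nose[0]) < 0.08  # y grows downward in image coordinates
```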
Although we did not hit all the milestones that were hoping to achieve, we are still proud of the application that we were able to make in such a short amount of time and be able to deploy the project as well. Most notably, we are proud of our Hume AI and Tensorflow integrations that took our application to the next level. Those 2 features took the most time, but they were also the most rewarding as in the end, we got to see real-time updates of our emotional and physical states. We are proud of being able to run the application and get feedback in real-time, which gives small cues to the presenter on what to improve without risking distracting the presenter completely. ## What we learned Each of the developers learned something valuable as each of us worked with a new technology that we did not know previously. Notably, Prisma and its integration with CockroachDB and its ability to make sessions and general usage simple and user-friendly. Interfacing with CockroachDB barely had problems and was a powerful tool to work with. We also expanded our knowledge with WebSockets, both native and Socket.io. Our prior experience was more rudimentary, but building upon that knowledge showed us new powers that WebSockets have both when used internally with the application and with external APIs and how they can introduce real-time analysis. ## Future of Speech Master The first step for Speech Master will be to shrink the codebase. Currently, there is tons of potential for components to be created and reused. Structuring the code to be more strict and robust will ensure that when adding new features the codebase will be readable, deployable, and functional. The next priority will be responsiveness, due to the lack of time many components appear strangely on different devices throwing off the UI and potentially making the application unusable. Once the current codebase is restructured, then we would be able to focus on optimization primarily on the machine learning models and audio/visual. Currently, there are multiple instances of audio and visual that are being used to show webcam footage, stream footage to other viewers, and sent to HumeAI for analysis. By reducing the number of streams, we should expect to see significant performance improvements with which we can upgrade our audio/visual streaming to use something more appropriate and robust. In terms of new features, Speech Master would benefit greatly from additional forms of audio analysis such as speed and volume. Different presentations and environments require different talking speeds and volumes of speech required. Given some initial parameters, Speech Master should hopefully be able to reflect on those measures. In addition, having transcriptions that can be analyzed for vocabulary and speech, ensuring that appropriate language is used for a given target audience would drastically improve the way a presenter could prepare for a presentation.
## Inspiration Our innovative platform combines the power of conversational AI with the fun of interactive games, transforming traditional language learning into an engaging and playful experience. With a vibrant, child-friendly interface and exciting games like 'Fruit Slash' for vocabulary building and a math challenge that lets kids solve basic arithmetic problems in Spanish, we make learning feel like play. The magic lies in our use of voice technology. ## What it does Kids respond to challenges by speaking, and our integration with Gladia’s speech-to-text engine ensures that their responses are instantly processed and validated. This conversation-based approach allows young learners to practice speaking Spanish in a natural and fun way, making language learning more interactive than ever. We believe that the best learning happens when you're having fun, and we’re changing the way kids learn languages—one conversation at a time. ## How we built it • Design & Prototyping: Used Figma to create a fun and interactive user interface, focusing on a kid-friendly design. • Frontend Development: Built the frontend using React and Next.js, implementing responsive components and game interfaces. • Speech-to-Text Integration: Integrated Gladia for live audio processing, converting speech into text in real time. • Text Processing Logic: Implemented simpdjson to handle JSON queries and manage data from the speech-to-text responses. • Game Functionality: Developed game UIs that utilize the processed text to validate responses and provide feedback within interactive games. ## Challenges we ran into • Limited Time and Expertise: Couldn't fine-tune voice models to the desired accuracy due to time constraints and lack of expertise. • Latency Issues: Experienced delays with API calls between the speech-to-text service and the conversational agent, affecting response times. • Poor Internet Connectivity: Encountered frequent connectivity issues throughout the hackathon, costing valuable development time. • Limited Workspace: Faced difficulties collaborating effectively due to cramped working conditions and lack of sufficient space. ## Accomplishments that we're proud of • Team Collaboration: Successfully worked together despite various struggles, demonstrating strong teamwork and resilience. • Rapid Prototyping: Quickly tested different voice software to validate the idea and identify a feasible solution. • UI Design and User Experience: Created a friendly, appealing, and fun interface that makes learning enjoyable for kids. • Clean and Interactive Interface: Developed a clean, user-friendly interface that enhances the learning experience through engaging interactions. ## What we learned • Real time streaming ain't easy, if you want low latency! We promise awesome latency streaming! ## What's next for Funlingo • Character-Based Conversation Agents: Train voice models to incorporate popular TV and movie characters, allowing kids to learn from their favorite personalities and making the experience more engaging. • Personalized Pronunciation Tutor: Develop a conversation agent that listens to kids’ pronunciation and provides real-time feedback, acting as a personalized tutor to help them improve their speaking skills. • Research-Backed Educational Content: Collaborate with educators and researchers to design games that are not only fun but also scientifically proven to enhance language learning outcomes.
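The response-validation step after speech-to-text can be as simple as normalizing the transcript and checking for an accepted answer; this sketch is illustrative and not the exact game logic:

```python
import unicodedata

def normalize(text):
    """Lowercase, strip accents and punctuation so 'Plátano!' matches 'platano'."""
    text = unicodedata.normalize("NFD", text.lower())
    return "".join(c for c in text if c.isalnum() or c.isspace()).strip()

def check_answer(transcribed_speech, accepted_answers):
    """Return True if the child's transcribed response contains an accepted answer."""
    heard = normalize(transcribed_speech)
    return any(normalize(ans) in heard for ans in accepted_answers)

# e.g. after the speech-to-text step returns "es una manzana"
print(check_answer("es una manzana", ["manzana"]))  # True
```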
## Inspiration Integrating and playing with the hotels API exposed by the travel company Priceline. It uses the BandsInTown API for band information. ## What it does It looks for where your favorite band is performing and then helps you look for hotels in that place. It also redirects you to buy tickets for that event. ## How I built it Android Studio, Android SDK ## Challenges I ran into API integration and UI development ## Accomplishments that I'm proud of Use of some Material Design icons. Although the app is not fully Material Design compliant, I would say it's a good construct for 24 hours. ## What I learned Some aspects of Material Design ## What's next for MusicoHotel Look for more integrations, like detailed hotel booking and searching hotels by user preferences, and finding those preferences intelligently.
## Inspiration For first-time travelers coming into big cities or countries for business or other purposes, deciding where to stay is a difficult and sometimes stressful choice. They may check online for different hotel options, but even with 360-degree views of the rooms, they can't get the authentic experience. ## What it does We are building an AR application, which enables a user to enter any room in a 3D space and actually navigate inside of it.By doing so, they can participate in a first-hand experience to help them decide the best hotel for themselves. Sitting at home, they can get an in-the-moment visual of all hotel rooms. Therefore, it becomes easier to decide rooms of their choice. We also plan to implement the instant addition of new room items catered toward the user's preferences for a more personal and comfortable trip. ## How we built it The project utilizes Amadeus's Hotel Information API to analyze the hotels in a particular area. We also used Google AR Core and Unity 3D to build our AR product. The rest of our Android Product was built using Java and Android Studio. ## Challenges we ran into The biggest challenge has been getting 3D footage of a room without a 3D camera because even conversion of such a footage to a 3D model is a difficult task. ## Accomplishments that we're proud of We were able to deploy our 3D model and actually interact inside the room rather than simply looking at a room., providing a simple but compact solution. ## What we learned Learning to work collectively in a team despite of the cultural differences is something we learnt really quickly. Effective communication is super important when designing any project. To avoid lack of clarity and build up of arguments, it is very important to listen to everyone carefully, before putting up your arguments. ## What's next for ARound As discussed earlier, we would like to make the 360 footage of the hotel rooms into models for our AR app in a more optimized and cost effective manner. We also think the application can include Point of Interests as well such as wedding halls, restaurants, therefore we can use Amadeus API for these experiences as well.
## Inspiration We want to fix healthcare! 48% of physicians in the US are burned out, which is a driver for higher rates of medical error, lower patient satisfaction, and higher rates of depression and suicide. Three graduate students at Stanford have been applying design thinking to the burnout epidemic. A CS grad from USC joined us for TreeHacks! We conducted 300 hours of interviews and learned iteratively using low-fidelity prototypes to discover that: i) there was no “check engine” light that went off warning individuals to “re-balance”; ii) current wellness services weren’t designed for individuals working 80+ hour weeks; iii) employers will pay a premium to prevent burnout. And Code Coral was born. ## What it does Our platform helps highly-trained individuals and teams working in stressful environments proactively manage their burnout. The platform captures your phone’s digital phenotype to monitor the key predictors of burnout using machine learning. With timely, bite-sized reminders we reinforce individuals’ atomic wellness habits and provide personalized services from laundry to life-coaching. Check out more information about our project goals: <https://youtu.be/zjV3KeNv-ok> ## How we built it We built the backend using a combination of APIs for Fitbit/Google Maps/Apple Health/Beiwe, built a machine learning algorithm, and relied on an app builder for the front end. ## Challenges we ran into APIs not working the way we want. Collecting and aggregating "tagged" data for our machine learning algorithm. Trying to figure out which features are the most relevant! ## Accomplishments that we're proud of We had figured out a unique solution to addressing burnout but hadn't written any lines of code yet! We are really proud to have gotten this project off the ground! i) Setting up a system to collect digital phenotyping features from a smartphone; ii) building machine learning experiments to hypothesis-test going from our digital phenotype to metrics of burnout; iii) figuring out how to detect anomalies using an individual's baseline data on driving, walking, and time at home using the Microsoft Azure platform; iv) building a working front end with actual data! Note: login information for codecoral.net: username - test, password - testtest ## What we learned We learned how to set up AWS and a functioning back end, build supervised learning models, and integrate data from many sources to give new insights. We also flexed our web development skills. ## What's next for Coral Board We would like to connect the backend data and validate our platform with real data!
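A simple stand-in for the baseline anomaly detection described above (the project used the Microsoft Azure platform; the z-score rule and one-week minimum here are assumptions) might look like this:

```python
import statistics

def flag_anomalies(history, today, z_threshold=2.0):
    """
    history: dict of metric -> list of an individual's baseline daily values
             (e.g. minutes driving, minutes walking, hours at home).
    today:   dict of metric -> today's value.
    Returns the metrics whose value deviates sharply from that person's own baseline.
    """
    flags = {}
    for metric, values in history.items():
        if len(values) < 7 or metric not in today:
            continue  # need at least a week of baseline data
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values) or 1e-9  # avoid division by zero
        z = (today[metric] - mean) / stdev
        if abs(z) >= z_threshold:
            flags[metric] = round(z, 2)
    return flags
```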
## Inspiration We were inspired by the various free food groups on facebook. We realized that on the one hand, everyone loves free food but it is not easy to find information about them; on the other hand, event organizers or individuals who order too much food often have to deal with food waste problems. We aim to build a mobile web that solve both problems and create the sharing economy in the food industry, just like Uber in transportation and Airbnb in the housing industry. ## What it does The web allows users who are looking for free food to: 1) access to real-time available free food locations in a map or list; 2) search for free food nearby; 3) contact the donor and pick up the food; It also allows the users who are giving away food to: 4) post about extra food that needs to be taken; ## How I built it The project is a Django site. The models that will be used in conjunction with geodjango are still being written, but they should allow us to easily visualize free food nearby given a person's location. ## Challenges I ran into Staticfiles are always tricky with webapps. Git causes some problems when large files were committed. Also, configuring a database can be tricking. Setting up the right kind of database for geodjango was very troublesome. ## Accomplishments that we're proud of We figured out the bugs and tricky predicaments we found ourselves in. ## What I learned More about Django forms and models. Also, UI/UX design. ## What's next for *shareat* Fleshing out the functionality!
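A GeoDjango query for nearby posts might look roughly like the sketch below; the model fields and the 2 km default radius are assumptions, and a spatial database backend such as PostGIS is required:

```python
# models.py
from django.contrib.gis.db import models

class FoodPost(models.Model):
    description = models.TextField()
    location = models.PointField(geography=True)
    created_at = models.DateTimeField(auto_now_add=True)

# query code (e.g. in a view)
from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

def free_food_near(lat, lon, km=2):
    """Posts within `km` kilometres of the user, newest first."""
    here = Point(lon, lat, srid=4326)  # note: Point takes (x=lon, y=lat)
    return (FoodPost.objects
            .filter(location__distance_lte=(here, D(km=km)))
            .order_by("-created_at"))
```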
## Inspiration As students, we sometimes have to compromise on lunch with ramen cup noodles to save money, a problem faced by many students. So, as software engineers, my roommate and I thought of a solution to this problem and set out to develop a prototype for the application. ## What it does Demeter allows you to post your extra perishable food on the website, and also allows you to perform a location-based search using zip codes so you can look up free food near you. ## How we built it We built scalable backend REST services in the Flask framework, with business logic to authenticate and authorize users. We then built an Angular app to talk to the REST services, and for the database we used CockroachDB (Postgres). The entire system is deployed on Google Cloud. ## Challenges we ran into 1. Developing the UI/UX for the application. 2. Network issues ## Accomplishments that we're proud of ## What we learned We learned to build and deploy an end-to-end scalable solution for a problem and to work in a team. ## What's next for Demeter 1. Attach images of food. (image recognition with a neural network) 2. Use of Google Maps for directions. (for data clustering and location) 3. User notifications. 4. DocuSign eSign. 5. A "claim the free food before you get there" feature, which will allow users to claim food by paying a minimal charge. 6. iOS and Android apps.
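A minimal sketch of the zip-code search endpoint in Flask (the route, field names, and in-memory data are assumptions standing in for the CockroachDB query) could look like this:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In the real app this would be a database query; an in-memory list keeps the
# sketch self-contained.
POSTS = [
    {"id": 1, "title": "Leftover pizza", "zip_code": "95616"},
    {"id": 2, "title": "Event sandwiches", "zip_code": "95618"},
]

@app.route("/api/food", methods=["GET"])
def food_by_zip():
    """GET /api/food?zip=95616 returns free-food posts for that zip code."""
    zip_code = request.args.get("zip", "")
    matches = [p for p in POSTS if p["zip_code"] == zip_code]
    return jsonify(matches)

if __name__ == "__main__":
    app.run(debug=True)
```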
## Inspiration Everyone can relate to the scene of staring at messages on your phone and wondering, "Was what I said toxic?", or "Did I seem offensive?". While we originally intended to create an app to help neurodivergent people better understand both others and themselves, we quickly realized that emotional intelligence support is a universally applicable concept. After some research, we learned that neurodivergent individuals find it most helpful to have plain positive/negative annotations on sentences in a conversation. We also think this format leaves the most room for all users to reflect and interpret based on the context and their experiences. This way, we hope that our app provides both guidance and gentle mentorship for developing the users' social skills. Playing around with Co:here's sentiment classification demo, we immediately saw that it was the perfect tool for implementing our vision. ## What it does IntelliVerse offers insight into the emotions of whomever you're texting. Users can enter their conversations either manually or by taking a screenshot. Our app automatically extracts the text from the image, allowing fast and easy access. Then, IntelliVerse presents the type of connotation that the messages convey. Currently, it shows either a positive, negative or neutral connotation to the messages. The interface is organized similarly to a texting app, ensuring that the user effortlessly understands the sentiment. ## How we built it We used a microservice architecture to implement this idea The technology stack includes React Native, while users' information is stored with MongoDB and queried using GraphQL. Apollo-server and Apollo-client are used to connect both the frontend and the backend. The sentiment estimates are powered by custom Co:here's finetunes, trained using a public chatbot dataset found on Kaggle. Text extraction from images is done using npm's text-from-image package. ## Challenges we ran into We were unfamiliar with many of the APIs and dependencies that we used, and it took a long to time to understand how to get the different components to come together. When working with images in the backend, we had to do a lot of parsing to convert between image files and strings. When training the sentiment model, finding a good dataset to represent everyday conversations was difficult. We tried numerous options and eventually settled with a chatbot dataset. ## Accomplishments that we're proud of We are very proud that we managed to build all the features that we wanted within the 36-hour time frame, given that many of the technologies that we used were completely new to us. ## What we learned We learned a lot about working with React Native and how to connect it to a MongoDB backend. When assembling everyone's components together, we solved many problems regarding dependency conflicts and converting between data types/structures. ## What's next for IntelliVerse In the short term, we would like to expand our app's accessibility by adding more interactable interfaces, such as audio inputs. We also believe that the technology of IntelliVerse has far-reaching possibilities in mental health by helping introspect upon their thoughts or supporting clinical diagnoses.
## Stylete.app Project Overview ## Inspiration I've been inspired to create Stylete.app after learning about the massive amounts of fabric scraps, unsold inventory, and discarded clothing that end up in landfills. I saw an opportunity to connect waste generators with potential users, turning what was once considered trash into valuable resources for creatives, as well as anyone interested in being eco-conscious. ## What it does Stylete.app is a mobile marketplace that connects fashion companies, individual sellers, and eco-conscious buyers. It allows: * Fashion companies to list and sell excess fabric, unsold inventory, and materials * Individual sellers to offer secondhand clothing and craft supplies * Buyers to purchase these items at discounted rates ## How we built it We built Stylete.app using a combination of technologies: * Frontend: HTML, Tailwind CSS * Backend: PHP * Authentication, Payment Processing, etc. ## Challenges we ran into 1. Balancing user experience for diverse user types (companies, individual sellers, buyers) 2. Bringing in the smaller, working picture. I was considering the bigger picture, and that led to a harder time trying to get features perfected one at a time. ## Accomplishments that we're proud of 1. Creating a user-friendly interface that simplifies the listing and buying process 2. Fully functioning MVP, with both user and merchant side completed! 3. Custom onboarding based on device used. ## What we learned I not only learned how to persevere, but I've also learned how to use PHP for backend. I've received a lot of guidance from not only mentors, but my friend, Andrew as well, and they've all taught me so much on how to integrate frontend and backend together. ## What's next for Stylete.app 1. Implementing AI-powered recommendations for buyers 2. Creating an educational component with upcycling tutorials and sustainability tips 3. Expanding to include home textiles and accessories 4. Create events for best use of leftover fabrics, community engagement
## Inspiration Every year there are **one in 50 Americans** who suffer from severe allergic reactions; however, studies have shown that no more than 20% of them or their surrounding know how to act in those situations. During the past summer, such a situation happened to two members of our team which inspired us to tackle this problem and provide immersive VR experience to learn what to do in a situation like Anaphilact shock. ## What it does The application provides an immersive VR tutorial to learn how to provide the first medical aid, created with the incorporation of principles of the science of learning. For the hackathon scale, we created a lesson to teach how to react in the case of anaphylactic shock. ## How we built it Throughout the weekend we worked with Oculus Quest, Unity, 3D modeling and Houdify to create our app. ## Challenges we ran into On the ideation process, we had troubles identifying obstacles and constraints for the project itself, as well as the constraints that we have for the development. Considering the time limit, we decided to focus on creating one detailed tutorial. One of the problems of education project is that a lot of people are not interested in intensive learning; to get people’s in learning using our product, we made the content interactive and funny. ## Accomplishments that we're proud of We came up with the idea of the app currently not available to the general public (patients) and executed it in a short amount of time. ## What's next for Sonder For now, we created one tutorial for the case of anaphylactic shock; however, we can expand it to every other medical emergency case such as CPR.
## Inspiration: Our journey began with a simple, yet profound realization: sorting waste is confusing! We were motivated by the challenge many face in distinguishing recyclables from garbage, and we saw an opportunity to leverage technology to make a real environmental impact. We aimed to simplify recycling, making it accessible and accurate for everyone. ## What it does: EcoSort uses a trained ML model to identify and classify waste. Users present an item to their device's webcam, take a photo, and our website instantly advises whether it is recyclable or garbage. It's user-friendly, efficient, and encourages responsible waste disposal. ## How we built it: We used Teachable Machine to train our ML model, feeding it diverse data and tweaking values to ensure accuracy. Integrating the model with a webcam interface was critical, and we achieved this through careful coding and design, using web development technologies to create a seamless user experience. ## Challenges we ran into: * The most significant challenge was developing a UI that was not only functional but also intuitive and visually appealing. Balancing these aspects took several iterations. * Another challenge we faced, was the integration of our ML model with our UI. * Ensuring our ML model accurately recognized a wide range of waste items was another hurdle, requiring extensive testing and data refinement. ## Accomplishments that we're proud of: What makes us stand out, is the flexibility of our project. We recognize that each region has its own set of waste disposal guidelines. To address this, we made our project such that the user can select their region to get the most accurate results. We're proud of creating a tool that simplifies waste sorting and encourages eco-friendly practices. The potential impact of our tool in promoting environmentally responsible behaviour is something we find particularly rewarding. ## What we learned: This project enhanced our skills in ML, UI/UX design, and web development. On a deeper level, we learned about the complexities of waste management and the potential of technology to drive sustainable change. ## What's next for EcoSort: * We plan to expand our database to accommodate different types of waste and adapt to varied recycling policies across regions. This will make EcoSort a more universally applicable tool, further aiding our mission to streamline recycling for everyone. * We are also in the process of hosting the EcoSort website as our immediate next step. At the moment, EcoSort works perfectly fine locally. However, in regards to hosting the site, we have started to deploy it but are unfortunately running into some hosting errors. * Our [site](https://stella-gu.github.io/EcoSort/) is currently working
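For reference, a Teachable Machine Keras export can be loaded and queried with only a few lines; this sketch assumes the default export file names and preprocessing, and the region hook is only a placeholder:

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# File names follow Teachable Machine's Keras export; treat them as assumptions.
model = tf.keras.models.load_model("keras_model.h5", compile=False)
with open("labels.txt") as f:
    labels = [line.strip().split(" ", 1)[-1] for line in f]

def classify_waste(image_path, region="default"):
    """Return the predicted class (e.g. recyclable vs. garbage) and its confidence."""
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0  # scale pixels to [-1, 1]
    probs = model.predict(x[np.newaxis, ...])[0]
    label = labels[int(np.argmax(probs))]
    # Region-specific guidelines could remap labels here (placeholder hook).
    return label, float(probs.max())

print(classify_waste("photo_from_webcam.jpg"))
```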
## Inspiration We were looking for a hack to really address the need for quick and easy access to mental health help. ## What it does Happy Thoughts is a companion service. Whenever in need or desiring a pick-me-up a user can text the app. ## How we built it Using a React-made webclient we accepted user inputted preferences. Via a Google Firebase realtime database we transmitted the preference data to a Node.js web server hosted on Google App Engine. From there the server (using Twilio) can text, and receive texts from the user to have valuable conversations and send happy thoughts to the user. Also made with courage, focus, determination, and a gross amount of puffy style Cheetos. ## Challenges we ran into We didn't React well. None of us knew React. But we improvised, adapted, and overcame the challenge ## Accomplishments that we're proud of A team that pushed through, and made a hack that we're proud of. We stacked cups, learned juggling, and had an overall great time. The team bonded and came through with an impactful project. ## What we learned That a weekend is a short time when you're trying to make in impact on mental health. ## What's next for Happy Thoughts Happier Thoughts :) ## Enjoy these happy thoughts <https://www.youtube.com/watch?v=DOWbvYYzAzQ>
## Inspiration Everyone has dreams and aspirations. Whether it’s saving up for education, breaking into the music industry as a small artist, or travelling the world. Yet, too often, we don’t have the financial capacity for them to be a reality. We wanted to create an app that helps bridge this gap. Introducing Dream with Us, a network reshaping how people achieve their dreams through decentralized, peer-to-peer transaction funding. ## What it does Dream with Us is a platform that enables individuals to support the aspirations of creators, entrepreneurs, and everyday dreamers. Through our app, users can browse dreams and aspirations of others. Users can show their support and donate money in the form of cryptocurrency. At the same time, it’s a space for individuals to share their own aspirations and gain support. Our app helps connect and accelerate a community of diverse dreamers. ## How we built it Our frontend is built using Svelte, JavaScript, and TailwindCSS. The backend is built using Motoko and Coinbase API. Collectively, these allow our app to run smart contracts and manage decentralized transactions. Lastly, the entire app is hosted and deployed on the ICP blockchain. We used two canisters (which act like smart containers similar to Docker) to separate the frontend and backend. ## Challenges we ran into * Blockchain Transaction APIs: There were limitations with the transaction APIs for blockchain payments and we had to develop custom solutions. * Integration of ICP with UI Libraries: Since ICP project folder requires a lot of specific version dependencies, this led to compatibility issues with most popular UI libraries. We had to forgo using these libraries entirely during our development. * Deployment and Testing: Deploying and testing decentralized applications (dApps) on the ICP, especially containerizing both frontend and backend canisters, was tricky. It required in-depth knowledge of the ICP’s architecture and its nuances with smart contract deployment. * Initial Development Environment Setup: Setting up the development environment and coordinating between Docker and dfx CLI took time and troubleshooting. ## Accomplishments that we're proud of We’re proud to have fully deployed both the frontend and backend canisters on the ICP blockchain, making our app completely decentralized. Additionally, we successfully integrated the Coinbase API to handle secure, real-time ICP token transactions between users and dreamers, allowing seamless donations. ## What we learned We learned about Web3, the decentralized nature of blockchain technology, and how it compares to traditional systems. This gave us a deeper understanding of its potential for peer-to-peer interactions. Furthermore, we gained hands-on experience with the ICP's communication mechanisms between devices and smart contracts, allowing us to build scalable, secure decentralized apps. We also got to meet awesome teams, and people along the way :) ## What's next for Dream with Us * Implement direct peer-to-peer (P2P) transactions on the blockchain with added security layers like checksums and multi-signature verification * Introduce tiered subscriptions and investment caps to allow different levels of engagement, from small perks to larger equity-like arrangements for backers. * Provide analytics for dreamers to track their funding and backer engagement and tools to update supporters on their progress.
## Inspiration Our co-founder's ability to never get straight to the point. ## What it does It has 3 different components and uses speech recognition and NLP to summarize text. It also gets creative to build song lyrics or even a blog post. ## How we built it We used various libraries to put the project together. These include speech recognition libraries and the Cohere API for NLP. The whole thing was written in Python. ## Challenges we ran into Limited knowledge of NLP ## Accomplishments that we're proud of Designing and developing a functioning application ## What we learned Using the libraries for NLP and speech recognition is easier than expected ## What's next for That's Crazy Better-trained AI models
## Inspiration CartMate is a web-based application that enables users to share their shopping carts across multiple webpages and with other people. The inspiration behind CartMate was to provide a simple and convenient way for users to share their shopping carts and collaborate on their purchases. ## What it does CartMate allows users to create a shopping cart and add items to it from any webpage they visit. They can then share their cart with others and make changes to it in real-time. This makes it easier for people to work together on purchasing items and saves time by reducing the need for manual tracking. ## How we built it The CartMate application was built using Tailwind and React in the front end and Firebase for authentication and storing data. These technologies were chosen to provide a modern and flexible solution that can scale as the application grows. ## Challenges we ran into The biggest challenge we ran into was syncing the shopping cart between all the users, it took a lot of thinking to find a smart and efficient way to do this but we did learn a lot from it. ## Accomplishments that we're proud of Our goal was to create a shopping cart sharing application that was both functional and visually appealing, and I believe that we have succeeded in that regard. One of our biggest accomplishments was working with technologies that were new to us. This was our first time using Tailwind and creating a Chrome extension, and we were able to quickly get up to speed and create a polished final product. Another accomplishment that we are proud of is our ability to overcome the challenges that we faced during the development process. Deploying to GitHub and syncing cart data between users presented some difficulties, but we were able to work through them and create a solution that meets our needs. Finally, we are proud of the product that we have created. CartMate allows users to share their shopping carts across multiple webpages and with other people, making the shopping experience easier and more convenient. We believe that this product has the potential to make a real difference in people's lives and we are excited to continue improving it in the future. ## What we learned Through the development process, we learned about the process of creating a chrome extension, how to use Firebase for the back-end, and how to use Netlify to host the application. We also worked heavily on developing our collaborative programming skillsets using Github, as we created deployments and integrated testing with each push / pull to ensure smooth CI/CD. We also gained experience in using Tailwindand React, further expanding our skill set. ## What's next for CartMate The team has plans to improve the functionality of CartMate and add new features such as sharing shopping carts with links. We aim to continue to enhance the user experience and provide a reliable and convenient solution for users to collaborate on their shopping carts.
## Inspiration With the spread of the COVID-19 pandemic, many high risk individuals are unable to go out and purchase essential goods such as groceries or healthcare products. Our app aims to be a simple, easy-to-use solution to facilitate group buys of such goods amongst small communities and neighbourhoods. Group purchasing of essential goods not only limits the potential spread of COVID-19, but also saves waste and limits GHG emissions. ## What it does At its core, the app is meant to promote small communities to participate in and host group purchases of essential goods within themselves. Users input wanted items into a list that everyone in the community can see, and then a designated purchaser uses this list to purchase items for the neighbourhood. ## How we built it On the front-end, we used Android Studio alongside Java to create a simple UI for users to input their desired purchases into lists that everyone in the group/neighbourhood can add to. With respect to the back-end, we used Mongoose for schema validation, MongoDB Atlas to host our database, Express for routing, and developed a custom-made authentication module for protecting our endpoints. Finally, Postman was used to test and debug endpoints, and AWS to host the server. ## Challenges we ran into Our team was completely new to nearly every aspect of this project. We had little experience in database management and user authentication, and next to none in mobile development. It took us about 5 hours just to get our environments set up and get up to speed on the technologies we chose to use. When it came to authenticating users, we had lots of trouble getting Google Authentication to work. We sank a lot of time into this issue and finally decided to develop a novel authentication methodology of our own. ## Accomplishments that we're proud of We're incredibly humbled to have learned so much in such little time. We chose this project because we felt we would be challenged as software developers in our choice of technologies and implementation. Nearly every single technology used in this project was completely new to each of us, and we feel like we learned a lot of new things, such as how to use Android Studio, developing a custom API overnight, and the principles of user authentication. We're also proud of having come into the hackathon with an initial idea and being able to pivot quickly in another direction, scrapping our original idea in favor of grocerWE. ## What we learned We learned that mobile development can be rewarding and principles of software construction learned in junior courses were invaluable to the creation of this project. Additionally, we also learned how important user authentication is and how it's prevalent in nearly all of the apps that we use today. Creating this app also helped us realize the impact of technology on society today, and how a simple idea can help unite people together in a global pandemic. ## What's next for grocerWE Given the time constraints of the hackathon, and how inexperienced our group was with these new technologies, there are many things that we wanted for grocerWE that we weren't quite able to implement. We'd like to be able to add Google Maps integration, where users are able to add their address to their profile in order to make delivery of groceries easier on the designated purchaser. Additionally, user roles such as purchaser or orderer were not really implemented.
For the above reasons, we considered these issues out of scope and focused our time on other fundamental aspects of grocerWE.
## Tweet-Mood We're Tweet-Mood, a web-based, geolocated sentiment analysis application using live tweets posted in the US. Our goal was to create a more powerful tool for understanding how various socioeconomic and other demographic factors play into the overall sentiments expressed in a community. Our application is built on a Python Flask server hosted on an AWS EC2 instance. Data is streamed through a Redis-backed ElastiCache to a naive Bayes classification model. Frontend data visualization is done in LeafletJS and Mapbox. [tweetmood.me](http://happinessmap-dev.us-west-2.elasticbeanstalk.com)
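As an illustration of the classification step described above (a sketch only, using toy data rather than the live tweet stream), a naive Bayes sentiment model can be set up in a few lines with scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder training data; the real model is trained on labeled tweets.
train_tweets = ["love this city", "terrible traffic today", "great weather", "awful service"]
train_labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_tweets, train_labels)

print(model.predict(["what a lovely afternoon"]))  # -> ['positive']
```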
losing
## Inspiration The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw away recyclable items in the trash. Additionally, the sheer amount of restrictions related to what items can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recyclable objects with machine learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability, offering an additional source of motivation to recycle. ## What it does RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid in the direction of one compartment or another depending on whether the AI model determines the object to be recyclable or not. Once the object slides into the compartment, the lid re-aligns itself and prepares for the next item of waste. Ultimately, RecyclAIble autonomously helps people recycle as much as they can and waste less without them doing anything different. ## How we built it The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64 and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and inputs it into a TensorFlow convolutional neural network to identify whether the object seen is recyclable or not. This data is then stored in an SQLite database and returned back to the hardware. Based on the AI model's analysis, the servo motor driven by the Raspberry Pi flips the lid one way or the other, allowing the waste item to slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind.css, and React. This interface provides the user with insight into their current recycling statistics and how they compare to the nationwide averages of recycling. ## Challenges we ran into The prototype model had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid spun by a single servo motor and to prop up the Logitech camera for a top view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources and research.
Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble. ## Accomplishments that we're proud of We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help keep our environment clean. ## What we learned First and foremost, we learned just how big of a problem under-recycling is in America and throughout the world, and how important recycling is to the environment. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to the process of recycling. The hackathon motivated us to learn a lot more about our respective technologies - whether it be new errors or desired functions, new concepts and ideas had to be introduced to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals. ## What's next for RecyclAIble RecyclAIble has a lot of potential as far as development goes. RecyclAIble's AI can be improved with further training on images of more varied items of trash, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality, like dates, tracking features, trends, and the weights of trash, that expand on the existing information and capabilities offered. And we're already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware/sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come.
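To illustrate the base64-image-to-prediction flow described in RecyclAIble's "How we built it", here is a condensed sketch of such a Flask endpoint; the model file name, input size, and table schema are assumptions rather than the team's actual code.

```python
import base64, io, sqlite3
import numpy as np
from PIL import Image
from flask import Flask, request, jsonify
import tensorflow as tf

app = Flask(__name__)
model = tf.keras.models.load_model("recyclable_cnn.h5")  # hypothetical model file

@app.route("/classify", methods=["POST"])
def classify():
    # Decode the base64 image sent by the Raspberry Pi.
    img_b64 = request.json["image"]
    img = Image.open(io.BytesIO(base64.b64decode(img_b64))).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img) / 255.0, axis=0)
    recyclable = float(model.predict(batch)[0][0]) > 0.5

    # Log the decision so the web GUI can show recycling statistics.
    with sqlite3.connect("recyclaible.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS items (recyclable INTEGER)")
        db.execute("INSERT INTO items VALUES (?)", (int(recyclable),))

    return jsonify({"recyclable": recyclable})
```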
# RecycPal ![](https://i.imgur.com/2UEDT6v.png) ## Summary RecycPal is your pal for creating a more sustainable world! RecycPal uses machine learning and artificial intelligence to help you identify what can or cannot be recycled. This project was developed during DeltaHacks 8. Please check out our DevPost here: <https://devpost.com/software/recycpal> ## Motivation The effects of climate change are already being felt, especially in recent times with record-breaking temperatures being recorded in alarming numbers each year [1]. According to the Environmental Protection Agency [2], Americans generated 292.4 million tons of municipal solid waste in 2018. Out of that amount, 69 million tons of waste were recycled and another 25 million tons were composted. This resulted in a 32.1 percent recycling and composting rate. These numbers must improve if we want a greener and more sustainable future. Our team believes that building software that will educate the public on the small things they can do to help will ultimately create a massive change. We developed RecycPal in pursuit of these greener goals and a desire to help other eco-friendly people make the world a better place. ## Meet the Team * Denielle Abaquita (iOS Front-end) * Jon Abrigo (iOS Front-end) * Justin Esguerra (ML, Back-end) * Ashley Hart (ML, Back-end) ## Tech Stack RecycPal was designed and built with the following technologies: * Figma * CoreML * Xcode We also utilize some free art assets from Flaticon. [3] ## Frontend ![](https://i.imgur.com/B3JL1yw.jpg) ### History Tab | History Tab Main | Previous Picture | | --- | --- | | | | The purpose of this tab is to let the user see the pictures they have taken in the past. At the top of this tab will be a cell that leads to finding the nearest recycling center for easy access to this important feature. Each cell in this section will lead to a previously taken picture by the user and will be labeled with the date the user took the picture. ### Camera Tab | Pointing the Camera | Picture Taken | | --- | --- | | | | The purpose of this tab is to take a picture of the user's surroundings to identify any recyclable objects in the frame. Each picture will be automatically saved into the user's history. We utilized Apple's CoreML and Vision APIs to complete this section. [4, 5] After the user takes a picture, the application will perform some machine learning algorithms in the backend to identify any objects in the picture. The user will then see the object highlighted and labeled within the picture. Afterwards, the user has the option to take another picture. ### Information Tab | Information Tab | More Info on Paper | | --- | --- | | | | The purpose of this tab is to provide the user information on the nearest recycling centers and the best recycling practices based on the materials. We consulted resources provided by the Environmental Protection Agency to gather our information [6]. In this case, we have paper, plastic, and metal materials. We will also include glass and non-recyclables with information on how to deal with them. ## Backend ### Machine Learning This was our team's first time tackling machine learning, and we were able to learn about neural networks, dataset preparation, the model training process and so much more. We took advantage of CoreML [7] to create a machine learning model that would receive a photo of an object taken by the user and attempt to classify it into one of the following categories: 1. Cardboard 2. Paper 3. Plastic 4. Metal 5. Glass 6. Trash
The training process introduced some new challenges that our team had to overcome. We used datasets from Kaggle [8, 9] and the TACO project [10] to train our model. In order to test our data, we utilized a portion of our data sets that we did not train with and took pictures of trash we had in our homes to give the model fresh input to predict on. We worked to ensure that our results would have a confidence rating of at least 80% so the front-end of the application could take that result and display proper information to the user. ## What We Learned ### Denielle RecycPal is the result of the cumulative effort of 3 friends wanting to build something useful and impactful. During this entire project, I was able to solidify my knowledge of iOS development after focusing on web development for the past few months. I was also able to learn AVFoundation and CoreML. AVFoundation is a framework in iOS that allows developers to incorporate the camera in their applications. CoreML, on the other hand, helps with training and developing models to be used in machine learning. Overall, I learned so much, and I am happy to have spent the time to work on this project with my friends. ### Justin Starting on this project, I had a general idea of how machine learning models work, but nothing prepared me for the adventure that ensued these past 36 hours. I learned CoreML fundamentals, how to compile and annotate datasets, and expanded my knowledge of Xcode. These are just the tip of the iceberg considering all of the prototypes we had to scrap, but it was a privilege to grind this out with my friends. ### Jon I have learned A TON of things, to put it simply. This was my first time developing on the frontend, so most of the languages and process flow were new to me. I learned how to navigate and leverage the tools offered by Figma and helped create the proof of concept for RecycPal's application. I also learned how to develop with Xcode and Swift and assisted in creating the launch screen and home page of the application. Overall, I am thankful for the opportunity that I have been given throughout this Hackathon. ### Ashley This project served as my first hands-on experience with machine learning. I learned about machine learning tasks such as image classification, experimented with the many utilities that Python offers for data science, and learned how to organize, label, create, and utilize data sets. I also learned how libraries such as NumPy and Matplotlib can be combined with frameworks such as PyTorch to build neural networks. I was also able to experiment with Kaggle and Jupyter Notebooks. ## Challenges We Ran Into ### Denielle The biggest challenges I ran into were the latest updates to Xcode and iOS. Because it has been some time since I last developed for iOS, I had little familiarity with the updates to iOS 15.0 and above. In this case, I had to adjust to learn UIButton.Configuration and Appearance configurations for various components. As a result, that slowed down development a little bit, but I am happy to have learned about these updates! In the end, the updates are a welcome change, and I look forward to learning more and seeing what's in store in the future. ### Justin I didn't run into challenges. The challenges ran over me. From failing to implement PyTorch into our application, to struggling to create NumPy (Python) based datasets, to realizing that using Google Cloud Platform for remote access to the database was too tedious and too far out of scope for our project.
Despite all these challenges, we persevered until we found a solution: CoreML. Even then, we still ran into Xcode and iOS updates and code deprecations, which made this infinitely more frustrating but ten times more rewarding. ### Jon This was my first time developing on the front end, as I have mainly developed on the backend prior. Learning how to create prototypes like the color scheme of the application, creating and resizing the application's logos and button icons, and developing with both the programmatic approach and Swift's storyboards were some of the challenges I faced throughout the event. Although this really slowed the development time, I am grateful for the experience and knowledge I have gained throughout this Hackathon. ### Ashley I initially attempted to build a model for this application using PyTorch. I chose this framework because of its computing power and accessible documentation. Unfortunately, I ran into several errors when I had to convert my images into inputs for a neural network. On the bright side, we found Core ML and utilized it in our application with great success. My work with PyTorch is not over, as I will continue to learn more about it for my personal studies and for future hackathons. I also conducted research for this project and learned more about how I can recycle waste. ## What's Next for RecycPal? Future development goals include: * Integrating computer vision, allowing the model to see and classify multiple objects in real time. * Bolstering the accuracy of our model by providing it with more training data. * Getting user feedback to improve user experience and accessibility. * Conducting research to evaluate how effective the application is at helping people recycle their waste. * Expanding the classifications of our model to include categories for electronics, compostables, and materials that need to be taken to a store/facility to be processed. * Adding waste disposal location capabilities, so the user can be aware of nearby locations where they can process their waste. ### Conclusion Thank you for checking out our project! If you have suggestions, feel free to reach out to any of the RecycPal developers through the socials we have attached to our DevPost accounts. ## References [1] Climate change evidence: How do we know? 2022. NASA. <https://climate.nasa.gov/evidence/>. [2] EPA. 2018. National Overview: Facts and Figures on Materials, Wastes and Recycling. <https://www.epa.gov/facts-and-figures-about-materials-waste-and-recycling/national-overview-facts-and-figures-materials>. [3] EPA. How Do I Recycle?: Common Recyclables. <https://www.epa.gov/recycle/how-do-i-recycle-common-recyclables>. [4] Apple. Classifying Images with Vision and Core ML. Apple Developer Documentation. <https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml>. [5] Chang, C. 2018. Garbage Classification. Kaggle. <https://www.kaggle.com/asdasdasasdas/garbage-classification>. [6] Sekar, S. 2019. Waste classification data. Kaggle. <https://www.kaggle.com/techsash/waste-classification-data>. [7] Pedro F. Proença and Pedro Simões. 2020. TACO: Trash Annotations in Context for Litter Detection. arXiv preprint arXiv:2003.06975 (2020).
## Inspiration This system was designed to make waste collection more efficient, organized and user-friendly. Keeping the end users in mind, we created a system that detects what type of waste has been inserted in the bin and categorizes it as recyclable or garbage. The system then opens the appropriate chute (using motors) and turns on an LED corresponding to the type of waste that was just disposed of, to educate the user. ## What it does The system sorts waste into recycling or garbage, using the Google Vision API to identify the waste object, Python to classify the object as recycling or garbage, and an Arduino to move the bin and light an LED showing the appropriate waste bin. ## How we built it We built our hack using the Google Cloud Vision API, with Python converting the data received from the API and transmitting to the Arduino which bin to open. The bin was operated using a stepper motor and an LED that indicated the appropriate bin, recycling or garbage, so that the waste object could automatically be correctly disposed of. We built our hardware model using cardboard. We split a box into 2 sections and attached a motor onto the centre of a platform that allows it to rotate to each of the sections. ## Challenges we ran into We were planning on using a camera interfaced with the Arduino to analyze the garbage at the input; unfortunately, the hardware component that was going to act as our camera ended up failing, forcing us to find an alternative way to analyze the garbage. Another challenge we ran into was getting the Google Cloud Vision API working, but we stayed motivated and got it all to work. One of the biggest challenges we ran into was trying to use the Dragonboard 410c; due to inconsistent WiFi and the controller crashing frequently, it was hard for us to get anything concrete. ## Accomplishments that we're proud of Something that we are really proud of is that we were able to come up with the hardware portion of our hack overnight. We finalized our idea late into the hackathon (around 7pm) and took up most of the night splitting our resources between the hardware and software components of our hack. ## What we learned We learned a lot through our collaboration on this project. What stands out is our exploration of APIs and attempts at using new technologies like the Dragonboard 410c and sensors. We also learned how to use serial communications, and that there are endless possibilities when we look to integrate multiple different technologies together. ## What's next for Eco-Bin In the future, we hope to have a camera that is built into our hardware to take pictures and analyze the trash at the input. We would also like to add more features like a counter that keeps track of how many items have been recycled and how many have been thrown into the trash. We can even go into specifics like counting the number of plastic water bottles that have been recycled. This data could also be used to help track the waste production of certain areas and neighbourhoods.
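A simplified sketch of the flow just described (the keyword list, serial port, and command bytes are assumptions): label the image with the Google Cloud Vision API, decide recycling vs. garbage, and send a one-byte command to the Arduino over serial.

```python
import serial
from google.cloud import vision

RECYCLABLE_HINTS = {"bottle", "plastic", "paper", "cardboard", "can", "glass"}

def classify_waste(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = {l.description.lower() for l in client.label_detection(image=image).label_annotations}
    return "recycling" if labels & RECYCLABLE_HINTS else "garbage"

# Send 'R' or 'G' to the Arduino, which rotates the platform and lights the LED.
arduino = serial.Serial("/dev/ttyACM0", 9600)  # port name is an assumption
arduino.write(b"R" if classify_waste("waste.jpg") == "recycling" else b"G")
```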
winning
# SpeakEasy ## Overview SpeakEasy: AI Language Companion. Visiting another country but don't want to sound like a robot? Want to learn a new language but can't get your intonation to sound like other people's? SpeakEasy can make you sound like, well, you! ## Features SpeakEasy is an AI language companion which centers around localizing your own voice into other languages. If, for example, you wanted to visit another country but didn't want to sound like a robot or Google Translate, you could still talk in your native language. SpeakEasy can then automatically repeat each statement in the target language in exactly the intonation you would have if you spoke that language. Say you wanted to learn a new language but couldn't quite get your intonation to sound like the source material you were learning from. SpeakEasy is able to provide you with phrases in your own voice so you know exactly how your intonation should sound. ## Background SpeakEasy is the product of a group of four UC Berkeley students. For all of us, this is our first submission to a hackathon and the result of several years of wanting to get together to create something cool. We are excited to present every part of SpeakEasy, from the remarkably accurate AI speech to just how much we've all learned about rapidly developed software projects. ### Inspiration Our group started by thinking of ways we could make an impact. We then expanded our search to include using and demonstrating technologies developed by CalHacks' generous sponsors, as we felt this would be a good way to demonstrate how modern technology can be used to help everyday people. In the end, we decided on SpeakEasy and used Cartesia to realize many of the AI-powered functions of the application. This enabled us to make something which addresses a specific real-world problem (robotic-sounding translations) many of us have either encountered or are attempting to avoid. ### Challenges Our group has varying levels of software development experience, and especially given our limited hackathon experience (read: none), there were many challenging steps. For example: deciding on project scope, designing high-level architecture, implementing major features, and especially debugging. What was never a challenge, however, was collaboration. We worked quite well as a team and had a good time doing it. ### Accomplishments / Learning We are proud to say that despite the many challenges, we accomplished a great deal with this project. We have a fully functional Flask backend with a React frontend (see "Technical Details") which uses multiple different APIs. This project successfully ties together audio processing, asynchronous communication, artificial intelligence, UI/UX design, database management, and so much more. What's more is that many of our group members learned this from base fundamentals. ## Technical Details As mentioned in an earlier section, SpeakEasy is designed with a Flask (Python) backend and React (JavaScript) frontend. This is a very standard setup that is used often at hackathons due to its easy implementation and relatively limited required setup. Flask only requires two lines of code to make an entirely new endpoint, while React can make a full audio-playing page with callbacks that looks absolutely beautiful in less than an hour. For storing data, we use SQLAlchemy (backed by SQLite). 1. When a user opens SpeakEasy, they are first sent to a landing page. 2. After pressing any key, they are taken to a training screen.
Here they will record a 15-20 second message (ideally the one shown on screen) which will be used to create an embedding. This is accomplished with the Cartesia "Clone Voice from Clip" endpoint. A Cartesia Voice (abbreviated as "Voice") is created from the returned embedding (using the "Create Voice" endpoint) which contains a Voice ID. This Voice ID is used to uniquely identify each voice, which itself is in a specific language. The database then stores this voice and creates a new user which this voice is associated with. 3. When the recording is complete and the user clicks "Next", they will be taken to a split screen where they can choose between the two main program functions of SpeakEasy. 4. If the user clicks on the vocal translation route, they will be brought to another recording screen. Here, they record a sound in English which is then sent to the backend. The backend encodes this MP3 data into PCM, sends it to a speech-to-text API, and then passes the result to a text translation API. Separately, the backend trains a new Voice (using the Cartesia Localize Voice endpoint, wrapped by get/create Voice since Localize requires an embedding instead of a Voice ID) with the intended target language and uses the Voice ID it returns. The backend then sends the translated text to the Cartesia "Text to Speech (Bytes)" endpoint using this new Voice ID. This is then played back to the user as a response to the original backend request. All created Voices are stored in the database and associated with the current user. This is done so returning users do not have to retrain their voices in any language. 5. If the user clicks on the language learning route, they will be brought to a page which displays a randomly selected phrase in a certain language. It will then query the Cartesia API to pronounce that phrase in that language, using the preexisting Voice ID if available (or prompting to record a new phrase if not). A request is made to the backend to submit some microphone input, which is then compared to Cartesia's estimation of your speech in a target language. The backend then returns a set of feedback using the difference between the two pronunciations, and displays that to the user on the frontend. 6. After each route is selected, the user may choose to go back and select either route (the same route again or the other route). ## Cartesia Issues We were very impressed with Cartesia and its abilities, but noted a few issues which, if addressed, would improve the development experience. * Clone Voice From Clip endpoint documentation + The documentation for the endpoint in question details a `Response` which includes a variety of fields: `id`, `name`, `language`, and more. However, the endpoint only returns the embedding in a dictionary. It is then required to send the embedding into the "Create Voice" endpoint to create an `id` (and other fields), which are required for some further endpoints. * Clone Voice From Clip endpoint length requirements + The clip supplied to the endpoint in question appears to require a duration of greater than a second or two. See "Error reporting" for further details. * Text to Speech (Bytes) endpoint output format + The TTS endpoint requires an output format be specified. This JSON object notably lacks an `encoding` field in the MP3 configuration which is present for the other formats (raw and WAV). The solution to this is to send an `encoding` field with the value for one of the other two formats, despite this functionally doing nothing.
* Embedding format + The embedding is specified as a list of 192 numbers, some of which may be negative. Python's JSON parser does not like the dash symbol and frequently encounters issues with this. If possible, it would be good to allow this embedding to be base64 encoded, hashed, or otherwise transformed to avoid negatives. Optimally, embeddings would not have negatives, though this seems difficult to realize. * Response code mismatches + Some response codes returned from endpoints do not match their listed function. For example, a response code of 405 should not be returned when there is a formatting error in the request. Similarly, 400 is returned before 404 when using invalid endpoints, making it difficult to debug. There are several other instances of this but we did not collate a list. * Error reporting + If (most) endpoints return in JSON format, errors should also be returned in JSON format. This prevents many parsing issues and would simplify design. In addition, error messages are too vague to glean any useful information. For example, 500 is always "Bad request" regardless of the underlying error cause. This is the same thing as the error name. ## Future Improvements In the future, it would be interesting to investigate the following: * Proper authentication * Cloud-based database storage (with redundancy) * Increased error checking * Unit and integration test coverage, with CI/CD * Automatic recording quality analysis * Audio streaming (instead of buffering) using WebSockets * Mobile device compatibility * Reducing audio processing overhead
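To illustrate the voice bookkeeping described in "Technical Details" (returning users never retrain a voice in a given language), here is a minimal Flask + SQLAlchemy sketch; the table columns and endpoint shape are assumptions, not the project's actual schema.

```python
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///speakeasy.db"
db = SQLAlchemy(app)

class Voice(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.String(64), nullable=False)
    language = db.Column(db.String(8), nullable=False)
    cartesia_voice_id = db.Column(db.String(64), nullable=False)

with app.app_context():
    db.create_all()

@app.route("/voices", methods=["POST"])
def get_or_create_voice():
    data = request.json
    voice = Voice.query.filter_by(user_id=data["user_id"], language=data["language"]).first()
    if voice is None:
        # In the real app, the Cartesia Create/Localize Voice calls would run
        # here; this sketch just stores a voice ID supplied by the caller.
        voice = Voice(user_id=data["user_id"], language=data["language"],
                      cartesia_voice_id=data["cartesia_voice_id"])
        db.session.add(voice)
        db.session.commit()
    return jsonify({"voice_id": voice.cartesia_voice_id})
```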
## Inspiration Since the beginning of the hackathon, all of us were interested in building something related to helping the community. Initially we began with the idea of a trash bot, but quickly realized the scope of the project would make it unrealistic. We eventually decided to work on a project that would help ease the burden on both teachers and students through technologies that not only make learning new things easier and more approachable, but also give teachers more opportunities to interact with and learn about their students. ## What it does We built a Google Action that gives Google Assistant the ability to help the user learn a new language by quizzing the user on words in several languages, including Spanish and Mandarin. In addition to the Google Action, we also built a very PRETTY user interface that allows a user to add new words to the teacher's dictionary. ## How we built it The Google Action was built using the Google DialogFlow Console. We designed a number of intents for the Action and implemented robust server code in Node.js and a Firebase database to control the behavior of Google Assistant. The PRETTY user interface to insert new words into the dictionary was built using React.js along with the same Firebase database. ## Challenges we ran into We initially wanted to implement this project by using both Android Things and a Google Home. The Google Home would control verbal interaction and the Android Things screen would display visual information, helping with the user's experience. However, we had difficulty with both components, and we eventually decided to focus more on improving the user's experience through the Google Assistant itself rather than through external hardware. We also wanted to interface with the Android Things display to show words on screen, to strengthen the ability to read and write. An interface is easy to code, but a PRETTY interface is not. ## Accomplishments that we're proud of None of the members of our group were at all familiar with natural language parsing or interactive projects like this. Yet, despite all the early and late bumps in the road, we were still able to create a robust, interactive, and useful piece of software. We all second-guessed our ability to accomplish this project several times through this process, but we persevered and built something we're all proud of. And did we mention again that our interface is PRETTY and approachable? Yes, we are THAT proud of our interface. ## What we learned None of the members of our group were familiar with any aspects of this project. As a result, we all learned a substantial amount about natural language processing, serverless code, non-relational databases, JavaScript, Android Studio, and much more. This experience gave us exposure to a number of technologies we would've never seen otherwise, and we are all more capable because of it. ## What's next for Language Teacher We have a number of ideas for improving and extending Language Teacher. We would like to make the conversational aspect of Language Teacher more natural. We would also like to have the capability to adjust the Action's behavior based on the student's level. Additionally, we would like to implement a visual interface that we were unable to implement with Android Things. Most importantly, we would like to add analysis of students' performance and responses to better help teachers learn about the level of their students and how best to help them.
## Inspiration Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves. ## What it does The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback and a complete rundown of your results after you're done. ## How we built it We used Flask for the backend and used OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations. ## Challenges we ran into We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time. ## Accomplishments that we're proud of We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches. ## What we learned We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start. ## What's next for RealTalk We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with WebSockets so we can push data directly to the client rather than spamming requests to the server. ![Image](https://i.imgur.com/aehDk3L.gif) Tracks movement of hands and face to provide real-time analysis on expressions and body-language. ![Image](https://i.imgur.com/tZAM0sI.gif)
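A stripped-down sketch of the video/voice multi-threading mentioned in the challenges: run the OpenCV capture loop and the audio capture loop on separate threads so neither blocks the other. The analysis steps are placeholders for the real models, not the app's actual code.

```python
import threading
import cv2
import speech_recognition as sr

def video_loop():
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Hand-gesture and facial-expression analysis would run on `frame` here.
    cap.release()

def audio_loop():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        while True:
            audio = recognizer.listen(source, phrase_time_limit=5)
            # The audio chunk would be sent to the speech-to-text API here.

threading.Thread(target=video_loop, daemon=True).start()
audio_loop()
```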
partial
## Inspiration With the coming of the IoT age, we wanted to explore the addition of new experiences in our interactions with physical objects and facilitate crossovers from the digital to the physical world. Since paper is a ubiquitous tool in our day-to-day life, we decided to try to push the boundaries of how we interact with paper. ## What it does A user places any piece of paper with text/images on it on our clipboard, and they can now work with the text on the paper as if it were hyperlinks. Our (augmented) paper allows users to physically touch keywords and instantly receive Google search results. The user first needs to take a picture of the paper being interacted with and place it on our enhanced clipboard, and can then go about touching pieces of text to get more information. ## How I built it We used ultrasonic sensors with an Arduino to determine the location of the user's finger. We used the Google Cloud API to preprocess the paper contents. In order to map the physical (ultrasonic data) to the digital (vision data), we use a standardized 1x1 inch token as a 'measure of scale' of the contents of the paper. ## Challenges I ran into So many challenges! We initially tried to use an RFID tag but later found that SONAR works better. We struggled with Mac-Windows compatibility issues and also struggled a fair bit with the 2D location and detection of the finger on the paper. Because of the time constraint of 24 hours, we could not develop more use cases and had to resort to just one. ## What I learned We learned to work with the Google Cloud Vision API and interface with hardware in Python. We learned that there is a LOT of work that can be done to augment paper and similar physical objects that all of us interact with in the daily world. ## What's next for Augmented Paper Add new applications to enhance the experience with paper further. Design more use cases for this kind of technology.
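A sketch of the physical-to-digital mapping described above (the token-width measurement and the finger coordinates from the ultrasonic sensors are assumed to be already computed): OCR the page with the Vision API, convert the finger position from inches to pixels using the 1x1 inch token as the scale, and return the word under the finger.

```python
from google.cloud import vision

def word_under_finger(image_bytes: bytes, finger_x_in: float, finger_y_in: float,
                      token_width_px: float):
    # The 1x1 inch token gives the pixels-per-inch scale of the photographed page.
    fx = finger_x_in * token_width_px
    fy = finger_y_in * token_width_px

    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    # Annotation 0 is the full-page text block; the rest are individual words.
    for word in response.text_annotations[1:]:
        xs = [v.x for v in word.bounding_poly.vertices]
        ys = [v.y for v in word.bounding_poly.vertices]
        if min(xs) <= fx <= max(xs) and min(ys) <= fy <= max(ys):
            return word.description  # keyword to feed into a Google search
    return None
```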
## Inspiration As we have seen through our university careers, there are students who suffer from disabilities who can benefit greatly from accessing high-quality lecture notes. Many professors struggle to find note-takers for their courses, which leaves these students at a great disadvantage. Our mission is to ensure that their notes increase in quality, thereby improving their learning experiences - STONKS! ## What it does This service automatically creates and updates a Google Doc with text-based notes derived from the professor's live handwritten lecture content. ## How we built it We used Google Cloud Vision, OpenCV, a camera, a Raspberry Pi, and the Google Docs API to build a product using Python, which is able to convert handwritten notes to text-based online notes. At first, we used a webcam to capture an image of the handwritten notes. This image was then parsed by the Google Cloud Vision API to detect the various characters, which were then transcribed into text-based words in a new text file. This text file was then read to collect the data, which was then sent to a new Google Doc that is dynamically updated as the professor continues to write their notes. ## Challenges we ran into One of the major challenges that we faced was strategically dividing tasks amongst the team members in accordance with each individual's expertise. With time, we were able to assess each other's skills and divide work accordingly to achieve our goal. Another challenge that we faced was that the supplies we originally requested were out of stock (Raspberry Pi camera); however, we were able to improvise by getting a camera from a different kit. One of the major technical challenges we had to overcome was receiving permissions for the utilization of the Google Docs API to create and get access to a new document. This was overcome by researching, testing and debugging our code to finally get authorization for the API to create a new document using an individual's email. ## Accomplishments that we are proud of The main goal of STONKS was accomplished, as we were able to create a product that will help disabled students to optimize their learning through the provision of quality notes. ## What we learned We learned how to utilize Google Cloud Vision and OpenCV, which are both extremely useful and powerful computer vision systems that use machine learning. ## What's next for STONKS? The next step for STONKS is distinguishing between handwritten text and visual representations such as drawings, charts, and schematics. Moreover, we are hoping to implement a math-based character recognition set to be able to recognize handwritten mathematical equations.
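A condensed sketch of the capture-transcribe-publish loop described above; credential setup is omitted and the document ID is a placeholder.

```python
import cv2
from google.cloud import vision
from googleapiclient.discovery import build

def capture_and_ocr() -> str:
    # Grab one frame from the webcam and run handwriting OCR on it.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    _, buf = cv2.imencode(".jpg", frame)
    client = vision.ImageAnnotatorClient()
    response = client.document_text_detection(image=vision.Image(content=buf.tobytes()))
    return response.full_text_annotation.text

def append_to_doc(docs_service, document_id: str, text: str) -> None:
    # Insert the transcribed text at the top of the shared Google Doc.
    requests = [{"insertText": {"location": {"index": 1}, "text": text + "\n"}}]
    docs_service.documents().batchUpdate(documentId=document_id,
                                         body={"requests": requests}).execute()

docs = build("docs", "v1")  # assumes application default credentials are configured
append_to_doc(docs, "YOUR_DOCUMENT_ID", capture_and_ocr())
```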
## Inspiration One of our teammate's grandfathers suffers from diabetic retinopathy, which causes severe vision loss. Looking on a broader scale, over 2.2 billion people suffer from near or distant vision impairment worldwide. After examining the issue more closely, it can be confirmed that it disproportionately affects people over the age of 50. We wanted to create a solution that would help them navigate the complex world independently. ## What it does ### Object Identification: Utilizes advanced computer vision to identify and describe objects in the user's surroundings, providing real-time audio feedback. ### Facial Recognition: It employs machine learning for facial recognition, enabling users to recognize and remember familiar faces, and fostering a deeper connection with their environment. ### Interactive Question Answering: Acts as an on-demand information resource, allowing users to ask questions and receive accurate answers, covering a wide range of topics. ### Voice Commands: Features a user-friendly voice command system accessible to all, facilitating seamless interaction with the AI assistant: Sierra. ## How we built it * Python * OpenCV * GCP & Firebase * Google Maps API, pyttsx3, Google's Vertex AI toolkit (removed later due to inefficiency) ## Challenges we ran into * Slow response times with Google products, resulting in some replacements of services (e.g., pyttsx3 was replaced by a faster, offline model from Vosk). * Due to the hardware capabilities of our low-end laptops, there is some amount of lag and slowness in the software, with average response times of 7-8 seconds. * Due to strict security measures and product design, we faced a lack of flexibility in working with the Maps API. After working together and viewing some tutorials, we learned how to integrate Google Maps into the dashboard. ## Accomplishments that we're proud of We are proud that by the end of the hacking period, we had a working prototype and software, and that both were able to integrate properly. The AI assistant, Sierra, can accurately recognize faces as well as detect settings in the real world. Although there were challenges along the way, the immense effort we put in paid off. ## What we learned * How to work with a variety of Google Cloud-based tools and how to overcome potential challenges they pose to beginner users. * How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations. * How to create Docker containers to deploy Google Cloud-based Flask applications to host our dashboard. * How to develop Firebase Cloud Functions to implement cron jobs. We tried to develop a cron job that would send alerts to the user. ## What's next for Saight ### Optimizing the Response Time Currently, the hardware limitations of our computers create a large delay in the assistant's response times. By improving the efficiency of the models used, we can improve the user experience in fast-paced environments. ### Testing Various Materials for the Mount The physical prototype of the mount was mainly a proof-of-concept for the idea. In the future, we can conduct research and testing on various materials to find out which ones are most preferred by users. Factors such as density, cost and durability will all play a role in this decision.
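A minimal sketch of the kind of offline voice-command loop that Vosk enables, as mentioned in the challenges; the model path and command keywords are assumptions, not the project's actual wiring.

```python
import json
import pyaudio
from vosk import Model, KaldiRecognizer

model = Model("vosk-model-small-en-us-0.15")  # model files downloaded separately
recognizer = KaldiRecognizer(model, 16000)

mic = pyaudio.PyAudio().open(format=pyaudio.paInt16, channels=1, rate=16000,
                             input=True, frames_per_buffer=4000)
while True:
    data = mic.read(4000, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        text = json.loads(recognizer.Result()).get("text", "")
        if "describe" in text:
            pass  # hand off to the object-identification pipeline
        elif "who is this" in text:
            pass  # hand off to facial recognition
```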
partial
## Inspiration There has never been a more relevant time in political history for technology to shape our discourse. Clara AI can help you understand what you're reading, giving you political classification and sentiment analysis so you understand the bias in your news. ## What it does Clara searches for news on an inputted subject and classifies its political leaning and sentiment. She can accept voice commands through our web application, searching for political news on a given topic, and if further prompted, can give political and sentiment analysis. With 88% accuracy on our test set, Clara is highly accurate at predicting political leaning. She was trained using a random forest and many hours of manual classification. Clara gives sentiment scores with the help of the IBM Watson and Google Sentiment Analysis APIs. ## How we built it We built a fundamental technology using a plethora of Google Cloud Services on the backend, trained a classifier to identify political leanings, and then created multiple channels for users to interact with the insight generated by our algorithms. For our backend, we used Flask + Google Firebase. Within Flask, we used the Google Search Engine API, Google Web Search API, Google Vision API, and Sklearn to conduct analysis on the news source inputted by the user. For our web app, we used React + the Google Cloud Speech Recognition API (the app responds to voice commands). We also deployed a Facebook Messenger bot, as many of our users find their news on Facebook. ## Challenges we ran into Lack of WiFi was the biggest challenge, along with putting together all of our APIs, training our ML algorithm, and deciding on a platform for interaction. ## Accomplishments that we're proud of We've created something really meaningful that can actually classify news. We're proud of the work we put in and our persistence through many caffeinated hours. We can't wait to show our project to others who are interested in learning more about their news! ## What we learned How to integrate Google APIs into our Flask backend, and how to work with speech capability. ## What's next for Clara AI We want to improve upon the application by properly distributing it to the right channels. One of our team members is part of a group of students at UC Berkeley that builds these types of apps for fun, including BotCheck.Me and Newsbot. We plan to continue this work with them.
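As a toy illustration of the classifier described above (placeholder articles and labels, not the team's hand-labeled training set), TF-IDF features can be fed into a random forest like so:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

articles = ["tax cuts spur growth", "expand public healthcare",
            "deregulation benefits business", "raise the minimum wage"]
leanings = ["right", "left", "right", "left"]

clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(articles, leanings)
print(clf.predict(["new bill proposes universal childcare"]))
```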
## Inspiration Star Wars inspired us. ## What it does The BB-8 droid navigates its environment based on its user's gaze. The user simply has to change their gaze and BB-8 will follow their eyes. ## How we built it We built it with an RC car kit placed into a styrofoam sphere. The RC car was created with an Arduino and a Raspberry Pi. The Arduino controlled the motor controllers, while the Raspberry Pi acted as a Bluetooth module and sent commands to the Arduino. A separate laptop was used with eye-tracking glasses from AdHawk to send data to the Raspberry Pi. ## Challenges we ran into We ran into issues with the magnets not being strong enough to keep BB-8's head on. We also found that BB-8 was too top-heavy, and the RC car on the inside would sometimes roll over, causing the motors to spin out and stop moving the droid. ## Accomplishments that we're proud of We are proud of being able to properly develop the communication stack with Bluetooth. It was complex connecting the Arduino -> Raspberry Pi -> Computer -> Eye-tracking glasses. ## What we learned We learned about AdHawk's eye tracking systems and developed a mechanical apparatus for BB-8. ## What's next for BB8 Droid We would like to find stronger magnets to attach the head to BB-8.
## Inspiration Reading the news helps people expand their knowledge and broaden their horizons. However, it can be time-consuming and troublesome to find quality news articles and read lengthy, boring chunks of text. Our goal is to make news **accessible** to everyone. We provide **concise**, **digestible** news **summaries** in a **conversational** manner to make it as easy as possible for anyone to educate themselves by reading the news. ## What it does News.ai provides a concise and digestible summary of a quality article related to the topic you care about. You can easily ask **follow-up questions** to learn more information from the article or learn about any related concepts mentioned in the article. ## How we built it 1. We used *React.js* and *Flask* for our web app. 2. We used *NewsAPI* to recommend the most up-to-date news based on preferences. 3. We used *Monster API's OpenAI-Whisper API* for speech-to-text transcription. 4. We used *Monster API's SunoAI Bark API* for text-to-speech generation. 5. We used *OpenAI's GPT 4 API* large language model (LLM) to provide summaries of news articles. ## Challenges we ran into We ran into the challenge of connecting multiple parts of the project. Because of its inherent complexity and interconnectivity, making the different APIs, frontend, and backend work together was our most difficult task. ## Accomplishments that we're proud of We're happy that we established a strong pipeline of API calls using AI models. For example, we converted the user's audio input to text using the Whisper API, generated text in response to the user's request using the GPT API, and finally converted the generated text to audio output using the Bark API. We are also proud to have integrated the NewsAPI in our recommendation system so we can display the latest news for each user tailored to their preferences. ## What we learned Each of our team members had a deep understanding of a specific part of our tech stack, whether that be the frontend, backend, or usage of AI/LLM models and APIs. We learned a lot about how these tools can be integrated and applied to solve real-world problems. Furthermore, by spending the first day going booth to booth and speaking individually to every sponsor, we learned about the intricacies of each platform and API. This allowed us to build a platform that synthesized the strengths of various tools and technologies. For example, we were able to take advantage of the ease and scalability of Monster API's Whisper and Bark APIs. ## What's next for News.ai Moving forward, we hope to allow for more personalized search of news articles beyond generic topics. Furthermore, we hope to collect additional personalized characteristics that improve the podcast content and understanding for users.
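A sketch of the summarization step in the pipeline above; the Monster API speech calls are shown only as placeholder functions (their exact request shapes are not reproduced here), while the GPT-4 call uses the standard OpenAI Python client. The prompt wording is an assumption.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_article(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize news articles in three short, conversational sentences."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

def transcribe_with_monster(audio_path: str) -> str:
    raise NotImplementedError  # Whisper speech-to-text call via Monster API goes here

def speak_with_monster(text: str) -> bytes:
    raise NotImplementedError  # Bark text-to-speech call via Monster API goes here
```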
winning
## Inspiration The three of us believe that our worldview comes from what we read. Online news articles serve as that engine, and for something as crucial as learning about current events, an all-encompassing worldview is not so accessible. Those new to politics and just entering the discourse may perceive an extreme partisan view on a breaking news story to be the party's general take; on the flip side, those with entrenched, radicalized views miss out on having productive conversations. Information is meant to be shared, and perspectives from journals big and small should be heard. ## What it does WorldView is a Google Chrome extension that activates whenever someone is on a news article. The extension shows the overall sentiment of the article, describes "clusters" of other articles discussing the topic of interest, and provides a summary of each article. A similarity/dissimilarity score is displayed between pairs of articles so readers can read content with a different focus. ## How we built it Development was broken into three components: scraping, NLP processing + API, and Chrome extension development. Scraping involved using Selenium, BS4, DiffBot (an API that scrapes and sanitizes text from websites), and Google Cloud Platform's Custom Search API to extract similar documents from the web. NLP processing involved using NLTK and the K-Prototypes clustering algorithm. The Chrome extension was built with React, which talked to a Flask API. The Flask server is hosted on an AWS EC2 instance. ## Challenges we ran into Scraping: Getting enough documents that match the original article was a challenge because of the rate limiting of the GCP API. NLP processing: one challenge here was determining metrics for clustering a batch of documents. Sentiment scores + top keywords were used, but more robust metrics could have been developed for more accurate clusters. Chrome extension: Figuring out the layout of the graph representing clusters was difficult, as the library used required an unusual way of stating coordinates and edge links. Flask API: One challenge in the API construction was figuring out relative imports. ## Accomplishments that we're proud of Scraping: Recursively discovering similar documents by repeatedly searching the headline of the original article. NLP processing: Being able to quickly get a similarity matrix for a set of documents. ## What we learned We learned a lot about data wrangling and shaping scraped data for both the front end and the back end. ## What's next for WorldView Explore the possibility of letting those unable to bypass the paywalls of various publishers still get insights on different perspectives.
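A small sketch of the clustering step described above: a numerical sentiment score combined with a categorical top keyword, clustered with K-Prototypes from the `kmodes` package. The feature values are placeholders, not the project's real metrics.

```python
import numpy as np
from kmodes.kprototypes import KPrototypes

# Each row: [sentiment_score (numeric), top_keyword (categorical)]
docs = np.array([
    [0.8, "economy"],
    [-0.6, "economy"],
    [0.7, "election"],
    [-0.5, "election"],
], dtype=object)

kproto = KPrototypes(n_clusters=2, init="Cao", random_state=0)
clusters = kproto.fit_predict(docs, categorical=[1])
print(clusters)  # cluster assignment for each article
```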
## Inspiration False news. False news. False news everywhere. Before reading your news article in depth, let us give you a brief overview of what you'll be ingesting. ## What it does Our Google Chrome extension will analyze the news article you're about to read and give you a heads up on the article's sentiment (what emotion the article is trying to convey), the top three keywords in the article, and the categories the article's topic belongs to. Our extension also allows you to fact check any statement by simply highlighting the statement, right-clicking, and selecting Fact check this with TruthBeTold. ## How we built it Our Chrome extension pulls the URL of the webpage you're browsing and sends it to our Google App Engine Python server hosted on Google Cloud Platform. Our server is then able to parse the content of the page and determine the content of the news article through processing by the Newspaper3k API. The scraped article is then sent to Google's Natural Language API client, which assesses the article for sentiment, categories, and keywords. This data is then returned to your extension and displayed in a friendly manner. Fact checking follows a similar path in that our extension sends the highlighted text to our server, which checks it against Google's Fact Check Explorer API. The consensus is then returned and shown as an alert. ## Challenges we ran into * Understanding how to interact with Google's APIs. * Working with Python Flask and creating new endpoints in Flask. * Understanding how Google Chrome extensions are built. ## Accomplishments that I'm proud of * It works!
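A sketch of the server-side flow described above: pull the article text with Newspaper3k, then run sentiment, entity (keyword), and category analysis through the Google Cloud Natural Language API. Credential setup is assumed to be configured in the environment.

```python
from newspaper import Article
from google.cloud import language_v1

def analyze_url(url: str) -> dict:
    # Fetch and parse the article content from the page the user is reading.
    article = Article(url)
    article.download()
    article.parse()

    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=article.text,
                               type_=language_v1.Document.Type.PLAIN_TEXT)

    sentiment = client.analyze_sentiment(document=doc).document_sentiment
    entities = client.analyze_entities(document=doc).entities[:3]
    categories = client.classify_text(document=doc).categories

    return {
        "sentiment": sentiment.score,
        "keywords": [e.name for e in entities],
        "categories": [c.name for c in categories],
    }
```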
## Inspiration Our team met during the 2022 SHAD summer program at the *University of Calgary*. During our time at SHAD, we learned that finding a problem to address is just as, if not more, important than creating a solution. When we came to Hack the North, we all knew that we wanted to tackle a big issue that has a serious impact on society. Given the opportunity to participate in such a unique event, we wanted to find a novel way to use technology. We were all drawn to the problem of growing political and social polarization, which is largely caused by modern media. Echo-chambers and extremist outlets are becoming increasingly prevalent, and we wanted to work towards shielding society from these harms! ## What it does **News Shield** is a web-based platform that provides users with analytics about news articles from many different sources, which all have different opinions and leanings. The website presents users with popular topics that are often in the news, and uses the NewsAPI to dynamically find news articles relating to a chosen topic. Multiple APIs and modules are used to scrape and analyze these articles, returning information such as the articles': * url * author * description * word count * reading time * "reading ease" * appropriate grade level (for readers) **and most importantly...** * raw sentiment value & * relative sentiment value ## How we built it The News Shield platform is built with a Python back-end and React front-end, using Flask to bridge both elements. The front-end allows the user to navigate pages and select overall news topics. When a topic is selected, key phrases are passed to the back-end, where a main function uses the key phrases, among other conditions, to search a database of news article titles using the NewsAPI. The newspaper API copies the content from the identified articles, and the OneSimpleApi "Readability, Reading Time and Sentiment for Texts" endpoint is used to conduct text analysis. All of this information is then formatted and passed back to the website using Flask. A primary function of News Shield is the relative sentiment value measurement, which can be used to see how different authors/news sources address the same topic. Our team also built an NLP algorithm to identify REAL or FAKE news. We decided not to implement this directly into the prototype at this stage, as it may come with biases based on the training data and the model, which would counteract the purpose of News Shield. Our team primarily used GitHub to collaborate, in addition to VSCode, and we divided tasks based on our unique skills and strengths. ## Challenges we ran into **Front-End & Back-End!!** As a team with limited software-dev experience, we ran into many challenges when attempting to combine front-end and back-end elements. We chose to work in React and Python because of our experience with these languages, but we failed to look ahead to how we would merge elements written in two different languages. Our initial solution used Flask and ngrok to host the back-end and front-end on separate machines, which ended up not working with our methods. We pivoted and ran both elements on one machine, which allowed us to better develop our proof of concept. **APIs** The designated back-end developers did not have prior experience dealing with APIs, so using multiple in this project proved to be a challenge. It was an interesting challenge to learn how different APIs operate, and how to use documentation effectively.
## Accomplishments that we're proud of We are proud that we were able to develop an (almost) complete solution in the designated time. As programmers with limited experience, we were very happy that we were all able to work together to build a solution that could possibly have real benefits. ## What we learned We learned how to develop on-the-fly, and how important collaboration is when developing elements individually! ## What's next for News Shield We would love to find better ways to analyze text, possibly with non-biased NLP, and improve on our solution until it is ready to release!
partial
## Inspiration In the maze of social interactions, we've all encountered the awkward moment of not remembering the name of someone we've previously met. Face-Book is our solution to the universal social dilemma of social forgetfulness. ## What it does Face-Book is an iPhone app that--discreetly--records and analyzes faces and conversations using your phone's camera and audio. Upon recognizing a familiar face, it instantly retrieves their name gathered during past saved conversations, as well as past interactions, and interesting tidbits – a true social lifesaver. ## How we built it Swift, Xcode, AWS Face Rekognition & Diarization, OpenAI. ## Challenges we ran into We navigated the uncharted waters of ethical boundaries and technical limitations, but our vision of a seamlessly connected world guided us. We didn't just build an app; we redefined social norms. ## Accomplishments that we're proud of We take pride in Face-Book's unparalleled ability to strip away the veil of privacy, presenting it as the ultimate tool for social convenience. Our app isn't just a technological triumph; it's a gateway to omnipresent social awareness. ## What we learned Our journey revealed the immense potential of data in understanding and predicting human behavior. Every interaction is a data point, contributing to an ever-growing atlas of human connections. ## What's next for Face-Book The future is limitless. We envision Face-Book as a standard feature in smartphones, working hand-in-hand with governments worldwide. Imagine a society where every face is known, every interaction logged – a utopia of safety and social convenience. Porting it to an AR platform would also be nice.
## Inspiration As our world becomes more digitalized and interactions become more permanent, our team noticed a rise in online anxiety stemming from an innate fear of being judged or making a mistake. In Elizabeth Armstrong's book *Paying for the Party*, Armstrong mentions the negative impacts of being unique at a school where it pays to not stand out. This exact sentiment can now be seen online, except now, everything can be traced back to an identity indefinitely. Our thoughts, questions, and personal lives are constantly being ridiculed and monitored for mistakes. Even after a decade of growth, we will still be tainted by the person we were years before. Contrary to this social fear, many of us started off childhood with a confidence and naivety about social norms that allowed us to simply make friends based on interests. Every day was made for show-and-tell and asking questions. Through this platform, we seek to develop a web app that allows us to reminisce about the days when making friends was as easy as turning to a stranger on the playground and asking to play. ## What it does Our web app is designed to make befriending strangers with shared interests easier and making mistakes less permanent. When opening the app, users will be given a pseudonym and will be able to choose their interests from a word cloud. Afterwards, the user can then follow one of three paths. The first is a friend-matching path where the user will be shown eight different people who share common interests with them. In these profiles, each person's face would be blurred and the only things shared would be interests and age. The user can select up to two people to message per day. The second path allows for learning. Once a user selects a topic they'd like to learn more about, they will then be matched to someone who is volunteering to share information. The third consists of a random match in the system for anyone who is feeling spontaneous. This was inspired by Google's "I'm feeling lucky" button. Once messaging begins, both people will have the ability to reveal their identity at any point, which would unblur the image on their profile for the user they are unlocking it for. The overall objective is to create a space for users to share without their identity being attached. ## How we built it Our team built this by taking time to learn UI design in Figma and then beginning to implement the frontend in HTML and CSS. We then attempted to build the back-end in Python using Flask. We then hosted the web app on Azure as our server. ## Challenges we ran into Our team is made up of 100% beginners with extremely limited coding experience, so finding the starting point for web app development was the biggest challenge we ran into. In addition, we ran into a significant number of software installation issues, which we worked with a mentor to resolve over several hours. Due to these issues, we never fully implemented the program. ## Accomplishments that we're proud of Our team is extremely proud of the progress we have made thus far on the project. Coming in, most of us had very limited skills, so being able to have learned Figma and launch a website in 36 hours feels incredible. Through this process, all of us were able to learn something new, whether that be a software tool, a language, or simply the process of website design and execution.
As a group coming from four different schools in different parts of the world, we are also proud of the general enthusiasm, friendship, and team skills we built through this journey. ## What we learned Coming in as beginner programmers, our team learned a lot about the process of creating and designing a web app from start to finish. Through talking to mentors, we were able to learn more about the different software, frameworks, and languages many applications use, as well as the flow of going from frontend to backend. In terms of technical skills, we picked up Figma, HTML, and CSS through this project. ## What's next for Playground In the future, we hope to continue designing the frontend of Playground and then implement the backend in Python, since we never got to the point of completion. As a web app, we hope to later implement better matching algorithms and expand into communities for different "playgrounds."
## Inspiration With the excitement of blockchain and the ever-growing concerns regarding privacy, we wanted to disrupt one of the largest technology standards yet: email. Email accounts are mostly centralized and contain highly valuable data, so one small breach or corrupt act can seriously jeopardize millions of people. The solution lies with the blockchain, providing encryption and anonymity, with no chance of anyone but you reading your email. Our technology is named after Soteria, the goddess of safety and salvation, deliverance, and preservation from harm, which we believe perfectly represents our goals and aspirations with this project. ## What it does First off is the blockchain and message protocol. Similar to the PGP protocol, it offers *security* and *anonymity*, while also **ensuring that messages can never be lost**. On top of that, we built a messenger application loaded with security features, such as our facial recognition access option. The only way to communicate with others is by sharing your 'address' with each other through a convenient QR code system. This prevents anyone from obtaining a way to contact you without your **full discretion** - goodbye spam/scam email. ## How we built it First, we built the blockchain with a simple Python Flask API interface. The overall protocol is simple and can be built upon by many applications. Next, all that remained was making an application to take advantage of the blockchain. To do so, we built a React Native mobile messenger app, with quick testing through Expo. The app features key and address generation, which can then be shared through QR codes; we implemented a scan-and-be-scanned flow for engaging in communications, a fully consensual agreement, so that not just anyone can message anyone. We then added an extra layer of security by harnessing Microsoft Azure's Face API cognitive services for facial recognition. Every time the user opens the app they must scan their face for access, ensuring only the owner can view their messages, if they so desire. ## Challenges we ran into Our biggest challenge came from the encryption/decryption process that we had to integrate into our mobile application. Since our platform was React Native, running testing instances through Expo, we ran into many specific libraries which were not yet supported by the combination of Expo and React. Learning about cryptography and standard practices also played a major role and was a challenge, as total security is hard to find. ## Accomplishments that we're proud of We are really proud of our blockchain for its simplicity, while taking on a huge challenge. We also really like all the features we managed to pack into our app. None of us had too much React experience but we think we managed to accomplish a lot given the time. We also all came out as good friends still, which is a big plus when we all really like to be right :) ## What we learned Some of us learned an appreciation for React Native, while some learned the opposite. On top of that we learned so much about security and cryptography, and furthered our beliefs in the power of decentralization. ## What's next for The Soteria Network Once we have our main application built we plan to start working on the tokens and distribution. With a bit more work and adoption we will find ourselves in a very possible position to pursue an ICO. This would then enable us to further develop and enhance our protocol and messaging app.
We see lots of potential in our creation and believe privacy and consensual communication are essential factors in our increasingly social, networked world.
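For readers curious what "a simple Python Flask API interface" over a blockchain can look like, here is a minimal hash-chained ledger sketch; the field names and structure are illustrative assumptions, not the Soteria protocol itself.

```python
# Minimal hash-chained ledger sketch; field names and structure are illustrative
# assumptions, not the Soteria protocol itself.
import hashlib
import json
import time

class Blockchain:
    def __init__(self):
        self.chain = []
        self.add_block(payload="genesis")  # first block anchors the chain

    def _hash(self, block_body):
        # Hash a canonical JSON encoding of the block contents.
        encoded = json.dumps(block_body, sort_keys=True).encode()
        return hashlib.sha256(encoded).hexdigest()

    def add_block(self, payload):
        previous_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "payload": payload,           # e.g. an already-encrypted message blob
            "previous_hash": previous_hash,
        }
        block["hash"] = self._hash(block)
        self.chain.append(block)
        return block

    def is_valid(self):
        # Recompute every hash and check each link to its predecessor.
        for i, block in enumerate(self.chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != self._hash(body):
                return False
            if i > 0 and block["previous_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True
```

A Flask route could simply wrap add_block for posting messages and return the chain for reads, which is roughly the shape of API the write-up describes.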
losing
## Inspiration Selina was desperately trying to get to PennApps on Friday after she disembarked her Greyhound. Alas, she had forgotten that only Bolt Bus and Megabus end their journeys directly next to the University of Pennsylvania, so she was a full 45-minute walk away from Penn Engineering. Full of hope, she approached the SEPTA stop marked on Google Maps, but was quickly rebuffed by the lack of clear markings and options for ticket purchase. It was dark and cold, so she figured she might as well call a $5 Lyft. But when she opened the app, she was met with the face of doom: "Poor Network Connectivity". But she had five bars! If only, she despaired as she hunted for Wi-Fi, there were some way she could book that Lyft with just a phone call. ## What it does Users can call 1-888-970-LYFF, where an automated chatbot will guide them through the process of ordering a Lyft to their final destination. Users can simply look at the street name and number of the closest building to acquire their current location. ## How I built it We used the Nexmo API from Vonage to handle the voice aspect, Amazon Lex to create a chatbot and parse the speech input, AWS Lambda to implement the internal application logic, the Lyft API for obvious reasons, and the Google Maps API to sanitize the locations. ## Challenges I ran into Nexmo's code to connect the phone API to Amazon Lex was overloading the buffer, causing the bot to become unstable. We fixed this issue, submitting a pull request for Nexmo's review. ## Accomplishments that I'm proud of We got it to work end to end! ## What I learned How to use AWS Lambda functions, how to set up an EC2 instance, and that APIs don't always do what the documentation says they do. ## What's next for Lyff Instead of making calls in Lyft's sandbox environment, we'll try booking a real Lyft on our phone without using the Lyft app :) Just by making a call to 1-888-970-LYFF.
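To sketch how the pieces above could fit together, here is a hedged Lambda fulfillment function: it reads the slots Lex extracted from the caller's speech, geocodes the spoken addresses with the googlemaps client, and hands off to a hypothetical request_lyft_ride placeholder. The event and response shapes follow the Lex V1 fulfillment format as we understand it, and the slot names are assumptions.

```python
# Hedged sketch of a Lex fulfillment Lambda; the Lyft call is a placeholder.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_GOOGLE_MAPS_KEY")  # placeholder key

def request_lyft_ride(start, end):
    """Hypothetical stand-in for the real Lyft API ride-request call."""
    return {"eta_minutes": 5}

def lambda_handler(event, context):
    # Lex V1 passes extracted slot values under currentIntent.slots.
    slots = event["currentIntent"]["slots"]
    pickup_text = slots["PickupAddress"]      # slot names are assumptions
    dropoff_text = slots["Destination"]

    # Sanitize the spoken addresses into coordinates with Google Maps geocoding.
    pickup = gmaps.geocode(pickup_text)[0]["geometry"]["location"]
    dropoff = gmaps.geocode(dropoff_text)[0]["geometry"]["location"]

    ride = request_lyft_ride(pickup, dropoff)

    # Tell Lex the intent is fulfilled and what to say back to the caller.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"Your Lyft is booked and about {ride['eta_minutes']} minutes away.",
            },
        }
    }
```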
## Inspiration Old-school bosses don't want to see you slacking off and always expect you to go all movie-hacker in the terminal 24/7. As professional slackers, we also need our fair share of coffee and snacks. We initially wanted to create a terminal app to order Starbucks and deliver it to the E7 front desk. Then bribe a volunteer to bring it up using directions from Mappedin. It turned out that it's quite hard to reverse engineer Starbucks. Thus, we tried UberEats, which was even worse. After exploring bubble tea, cafes, and even Lazeez, we decided to order pizza instead. Because if we're suffering, might as well suffer in a food coma. ## What it does Skip the Walk brings food right to your table with the help of volunteers. In exchange for not taking a single step, volunteers are paid in what we like to call bribes. These can be the swag hackers received, food, or money. ## How we built it We used commander.js to create the command-line interface, Next.js to run MappedIn, and Vercel to host our API endpoints and frontend. We integrated a few Slack APIs to create the Slack bot. To actually order the pizzas, we employed Terraform. ## Challenges we ran into Our initial idea was to order coffee through a command line, but we soon realized there weren’t suitable APIs for that. When we tried manually sending POST requests to Starbucks’ website, we ran into reCaptcha issues. After examining many companies’ websites and nearly ordering three pizzas from Domino’s by accident, we found ourselves back at square one—three times. By the time we settled on our final project, we had only nine hours left. ## Accomplishments that we're proud of Despite these challenges, we’re proud that we managed to get a proof of concept up and running with a CLI, backend API, frontend map, and a Slack bot in less than nine hours. This achievement highlights our ability to adapt quickly and work efficiently under pressure. ## What we learned Through this experience, we learned that planning is crucial, especially when working within the tight timeframe of a hackathon. Flexibility and quick decision-making are essential when initial plans don’t work out, and being able to pivot effectively can make all the difference. ## Terraform We used Terraform this weekend for ordering Domino's. We had many close calls and actually did accidentally order once, but luckily we got that cancelled. We created a Node.js app and wrote Terraform files to run it. We also used Terraform to order Domino's using template .tf files. Finally, we used TF to deploy our map on Render. We always thought it funny to use infrastructure as code to do something other than pure infrastructure. Gotta eat too! ## Mappedin Mappedin was an impressive tool to work with. Its documentation was clear and easy to follow, and the product itself was highly polished. We leveraged its room labeling and pathfinding capabilities to help volunteers efficiently deliver pizzas to hungry hackers with accuracy and ease. ## What's next for Skip the Walk We plan to enhance the CLI features by adding options such as reordering, randomizing orders, and providing tips for volunteers. These improvements aim to enrich the user experience and make the platform more engaging for both hackers and volunteers.
## Inspiration Jessica here - I came up with the idea for BusPal out of the expectation that the skill already existed. With my Amazon Echo Dot, I was already doing everything from checking the weather to turning my lights on and off with Amazon skills and routines. The fact that she could not check when my bus to school was going to arrive was surprising at first - until I realized that Amazon and Google have one of the biggest rivalries there is between two tech giants. However, I realized that the combination of Alexa's genuine personality and the powerful location abilities of Google Maps would fill a need that I'm sure many people have. That was when the idea for BusPal was born: to be a convenient Alexa skill that will improve my morning routine - and everyone else's. ## What it does This skill enables Amazon Alexa users to ask Alexa when their bus to a specified location is going to arrive and to text the directions to a phone number - all hands-free. ## How we built it Through the Amazon Alexa skill builder, Google APIs, and AWS. ## Challenges we ran into We originally wanted to use stdlib; however, with a lack of documentation for the new Alexa technology, the team made an executive decision to migrate to AWS roughly halfway into the hackathon. ## Accomplishments that we're proud of Completing Phase 1 of the project - giving Alexa the ability to take in a destination and deliver a bus time, route, and stop to leave for. ## What we learned We learned how to use AWS, work with Node.js, and how to use Google APIs. ## What's next for Bus Pal Improve the texting ability of the skill, and enable calendar integration.
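A minimal sketch of the Google Maps side of such a skill, assuming the googlemaps Python client and a valid API key; the Alexa/AWS wiring and slot handling are not shown, and the example locations are placeholders.

```python
# Sketch: fetch the next transit departure for a spoken destination.
# Assumes the `googlemaps` client library and a valid API key.
from datetime import datetime
import googlemaps

gmaps = googlemaps.Client(key="YOUR_GOOGLE_MAPS_KEY")  # placeholder

def next_bus(origin, destination):
    routes = gmaps.directions(
        origin,
        destination,
        mode="transit",
        transit_mode="bus",
        departure_time=datetime.now(),
    )
    if not routes:
        return None

    leg = routes[0]["legs"][0]
    # Find the first transit step to report the line and departure stop.
    for step in leg["steps"]:
        if step["travel_mode"] == "TRANSIT":
            details = step["transit_details"]
            return {
                "line": details["line"].get("short_name", details["line"]["name"]),
                "stop": details["departure_stop"]["name"],
                "departure": details["departure_time"]["text"],
                "total_duration": leg["duration"]["text"],
            }
    return None

# Example (placeholder locations):
# info = next_bus("University of Waterloo", "Conestoga Mall, Waterloo")
```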
partial
## Inspiration Our inspiration was the inability of some dementia patients to even recognize their own family members. Izzie's grandpa had dementia, and he would at times mistake his wife for his sister, or forget his grandchildren. With that in mind, we wanted to create a device that would remind a dementia patient of who the person is that they are talking to. ## What it does There are two code versions on the repository. One is a computer version that listens to 7 seconds of audio, where a person would say their name. It then converts that speech to text using rev.ai, and pulls up the person's name, their relationship with the dementia patient, and a picture of them together to jog the memory and recall previous emotions. The second version is for the hardware. It has most of the same functionality, but the speech input was not completed, so it currently runs using preinstalled sound files. ## How we built it We programmed the Raspberry Pi with Python via an SSH connection. We also connected an Arduino for the purpose of analog-to-digital conversion as we attempted to add the sound input to our hardware. ## Challenges we ran into We ran into a lot of challenges with installing the necessary Python packages, especially NumPy. Configuration issues with the Raspberry Pi we were using meant we spent hours trying to uninstall and reinstall packages instead of moving forward with our code. ## Accomplishments that we're proud of We are proud of having a finished product, as this was each member's first time at a hackathon. ## What we learned We learned how to use APIs and extended our knowledge of Linux and the command line. ## What's next for María The next step would be to get the audio data from the onboard microphone transferring correctly to the Raspberry Pi. More importantly, however, we would need to create an easy way for users to upload text with information about the person in question and relevant photos to the program. In addition, we’d like to use a feature similar to how “Hey Siri” or “Alexa” works, in that the device listens for someone’s name after you say “my name is” and pulls up the corresponding information on the screen.
## Inspiration Alzheimer’s impacts millions worldwide, gradually eroding patients’ ability to recognize loved ones, creating emotional strain for both patients and families. Our project aims to bridge this emotional gap by simulating personal conversations that evoke familiarity, comfort, and connection. These calls stimulate memory recall and provide emotional support, helping families stay close even when separated by distance, time zones, or other obligations. ## What it does Dear simulates conversations between Alzheimer’s patients and their family members using AI-generated voices, recreating familiar interactions to provide comfort and spark memories. The app leverages Cartesia for voice cloning and VAPI for outbound agent calls, building personalized voice agents for each family member. These agents are engineered to gently manage memory lapses and identity questions, ensuring every interaction feels natural and empathetic. Family members can upload their voices, and the system automatically generates a unique agent for each one through VAPI. Over time, these agents, powered by their own large language models (LLMs), learn from interactions, creating increasingly personalized and meaningful conversations that strengthen emotional connections. ## How we built it The frontend was built with Next.js and TailwindCSS, focusing on an intuitive, responsive design to ensure families can easily upload voices and initiate conversations. We connected multiple APIs to streamline the workflow, ensuring a smooth and engaging user experience. Our backend was developed using Flask, which allowed us to efficiently handle API requests, manage voice data, and coordinate multiple services such as Cartesia and VAPI. The backend plays a crucial role in connecting the user-facing frontend with the voice cloning and call management APIs, ensuring a seamless experience. ## Challenges we ran into The biggest challenge was managing complexity. As the idea evolved, we had to simplify our approach without sacrificing impact. Integrating VAPI and managing voice data at scale posed technical challenges, requiring creative problem-solving and iteration. Streamlining the agent-creation process became essential to deliver a seamless experience for users. ## Accomplishments that we're proud of 1. Successfully integrated multiple APIs to create a smooth user experience. 2. Overcame technical challenges to build and test functional voice agents within a short timeframe. 3. Developed a system that could redefine how Alzheimer’s patients connect with their families, promoting emotional well-being through meaningful conversations. ## What we learned We learned the value of pivoting when things became overly complex. Instead of building everything from scratch, we utilized existing APIs to accelerate development. While we explored Speech-to-Speech models, we chose a Speech-to-Text/Text-to-Speech pipeline for efficiency during the hackathon. This approach allowed us to focus on delivering a working prototype while considering future enhancements. ## What's next for DEAR . . . Our next goal is to implement a Speech-to-Speech solution for more natural, real-time conversations. As the agents interact more, they will accumulate context, improving memory stimulation and tracking emotional well-being over time. We also plan to enhance remote monitoring, enabling families to stay connected and informed about their loved one’s emotional health, even when they can’t be physically present. ## Tech Stack! 
* Next.js and TailwindCSS: Frontend development * VAPI: Agent creation and outbound call management * Cartesia: Voice cloning * Flask: Backend and API handling
# Summary Echo is an intelligent, environment-aware smart cane that acts as assistive tech for the visually or mentally impaired. --- ## Overview Over 5 million Americans are living with Alzheimer's. In fact, 1 in 10 people of age 65 and older has Alzheimer's or dementia. Often, those afflicted will have trouble remembering names from faces and recalling memories. **Echo helps with exactly that!** Echo is a piece of assistive technology that helps the owner keep track of people he/she meets and provides a way for the owner to stay safe by letting them contact the authorities if they feel like they're in danger. Using cameras, microphones, and state of the art facial recognition, natural language processing, and speech to text software, Echo is able to recognize familiar and new faces, allowing patients to confidently meet new people and learn more about the world around them. When Echo hears an introduction being made, it uses its camera to continuously train itself to recognize the person. Then, if it sees the person again it'll notify its owner that the acquaintance is there. Echo also has a button that, when pressed, will contact the authorities - this way, if the owner is in danger, help is one tap away. ## Frameworks and APIs * Remembering Faces + OpenCV Facial Detection + OpenCV Facial Recognition * Analyzing Speech + Google Cloud Speech-To-Text + Google Cloud Natural Language Processing * IoT Communications + gstreamer for making TCP video and audio streams + SMTP for email capabilities (to contact authorities) ## Challenges There are many moving parts to Echo. We had to integrate an interface between Natural Language Processing and Facial Recognition. Furthermore, we had to manage a TCP stream between the Raspberry Pi and our ML backend on a computer. Ensuring that all the parts work together seamlessly involved hours of debugging and unit testing. Furthermore, we had to fine-tune parameters such as stream quality to ensure that the facial recognition worked without high latency, and to synchronize the audio and video TCP streams from the Pi. We wanted to make sure that the form factor of our hack could be experienced just by looking at it. On our cane, we have a Raspberry Pi, a camera, and a button. The button is a distress signal, which will alert the selected contacts in the event of an emergency. The camera is part of the TCP stream that is used for facial recognition and training. The stream server and recognition backend are managed by separate Python scripts on either end of the stack. This results in a stable connection between the smart cane and the backend system. ## Echo: The Hacking Process Echo attempts to solve a simple problem: individuals with Alzheimer's often forget faces easily and need assistance in order to help them socially and functionally in the real world. We rely on the fact that by using AI/ML, we can train a model to help the individual in a way that other solutions cannot. By integrating this with technology like Natural Language Processing, we can create natural interfaces to an important problem. Echo's form factor shows that its use in the real world is viable. Furthermore, since we are relying heavily on wireless technologies, it is reasonable to say that it is successful as an Internet of Things (IoT) device. ## Empowering the impaired Echo empowers the impaired to become more independent and engage in their daily routines.
This smart cane acts both as a helpful accessory that can catalyze social interaction and as a watchdog that can quickly call for help in an emergency.
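As an illustration of the button-plus-SMTP emergency path listed under IoT Communications, here is a small Raspberry Pi sketch; the GPIO pin, SMTP server, credentials, and addresses are placeholder assumptions.

```python
# Sketch of the distress-button path: a GPIO button press sends an alert email.
# Pin number, SMTP host, credentials, and addresses are placeholder assumptions.
import smtplib
from email.message import EmailMessage
from signal import pause

from gpiozero import Button

DISTRESS_PIN = 17
CONTACTS = ["caregiver@example.com"]

def send_alert():
    msg = EmailMessage()
    msg["Subject"] = "Echo distress signal"
    msg["From"] = "echo-cane@example.com"
    msg["To"] = ", ".join(CONTACTS)
    msg.set_content("The Echo distress button was pressed. Please check in immediately.")

    # Authenticated submission over STARTTLS on the standard submission port.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("echo-cane@example.com", "APP_PASSWORD")
        server.send_message(msg)

button = Button(DISTRESS_PIN)
button.when_pressed = send_alert

pause()  # keep the script alive, waiting for button presses
```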
losing
## Inspiration Reducing North American food waste. ## What it does Food for All offers a platform granting food pantries and restaurants the ability to connect. With a very intuitive interface, pantries and restaurants are able to register their organizations to request or offer food. Restaurants can estimate their leftover food, and instead of it going to waste, they are able to match with food pantries to make sure the food goes to a good cause. Depending on the quantity of food requested and available to offer, as well as location, the restaurants are given a list of the pantries that best match their availability. ## How we built it Food for All is built using a full Node.js stack. We used Express, BadCube, React, Shard and Axios to make the application possible. ## Challenges we ran into The main challenges of developing Food for All were learning new frameworks and languages. Antonio and Vishnu had very little experience with JavaScript and nonrelational databases, as well as Express. ## Accomplishments that we're proud of We are very proud of the implementation of the Google Maps API on the frontend and our ranking and matching algorithm for top shelters. ## What we learned We learned how to make REST APIs with Express. We also realized a decent way through our project that our nonrelational local database, BadCube, worked best when the project was beginning, but as the project scaled it had no ability to deal with nuanced objects or complex nested relationships, making it difficult to write and read data. ## What's next for Food for All In the future, we aim to work out the legal aspects to ensure the food is safely prepared and delivered to reduce the liability of the restaurants and shelters. We would also like to tweak certain aspects of the need determination algorithm used to find the shelters that are in greatest need of food. Part of this involves more advanced statistical methods and a gradual transition from algorithmic to machine-learning-oriented methods.
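The matching described above can be sketched as a simple score over distance and quantity fit. Food for All's real implementation is in Node.js; the Python below is only an illustration of the idea, and the weights and field names are arbitrary assumptions.

```python
# Illustrative ranking of pantries for a restaurant's surplus offer.
# The real project is Node.js; weights and fields here are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def rank_pantries(offer, pantries, max_results=5):
    """Score pantries by how well requested quantity and distance fit the offer."""
    scored = []
    for p in pantries:
        distance = haversine_km(offer["lat"], offer["lon"], p["lat"], p["lon"])
        # Fraction of the pantry's request this offer can cover, capped at 1.
        coverage = min(offer["meals_available"] / max(p["meals_requested"], 1), 1.0)
        score = 0.7 * coverage - 0.3 * (distance / 50)  # arbitrary weighting
        scored.append((score, distance, p))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [
        {"pantry": p["name"], "distance_km": round(d, 1), "score": round(s, 3)}
        for s, d, p in scored[:max_results]
    ]
```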
## Inspiration : We wanted to solve a practical problem we'd both faced in our day-to-day lives. Food wastage is a big issue in Canada, with over 2.2 million tonnes of edible food being wasted each year. We wanted to make something that not only makes our lives easier, but makes a difference to the environment as well. ## What it does Our project accepts a URL link to a picture of the contents of your fridge. Using the Clarifai API, which utilizes machine learning to recognize specific ingredients in pictures, we take the ingredients identified and return recipes on the site AllRecipes.com that use those same ingredients. ## How we built it Our front end was built using HTML/CSS. We used Node.js for our server-side architecture, and Express.js as a link between the front-end and back-end. In order to recognize the ingredients in the picture, we utilized the Clarifai API. ## Challenges we ran into One of our challenges was parsing the output JSON file correctly to retrieve the specific data points we needed for our program. Another challenge we had was linking the Node.js variable values with the corresponding HTML (EJS) file to be displayed on screen, or taken from the screen. ## Accomplishments that we're proud of HackWestern6 marks the first hackathon for both of us, and we are very proud of what we've managed to build. It involved a big learning curve on our parts, given we'd never worked with any technologies other than HTML, CSS and Vanilla JavaScript before, and we're especially proud of managing to parse our JSON as required, as well as managing to redirect our front-end to the required AllRecipes page when the user hits submit, rather than making them manually click on a link appearing on the screen. ## What we learned This entire project was about learning for us. We learned how to implement APIs, server-side programming using Node.js and Express.js, and how to work as a team through frustrations and bugs. ## What's next for Best with what's Left We definitely want to advance this project further because we really believe in the practicality and simplicity of this concept and it could become very useful to the everyday family if developed into an app. We want to include the capability to take photos and submit them instead of submitting URL links, which we understand is a hassle. We also want to implement our own trained ML model instead of drawing from an API to get better and more customized results. In the future, we also want to add health benefits to this product, like calorie counting and macronutrient information.
## Inspiration Our inspiration came from the desire to address the issue of food waste and to help those in need. We decided to create an online platform that connects people with surplus food to those who need to address the problem of food insecurity and food waste, which is a significant environmental and economic problem. We also hoped to highlight the importance of community-based solutions, where individuals and organizations can come together to make a positive impact. We believed in the power of technology and how it can be used to create innovative solutions to social issues. ## What it does Users can create posts about their surplus perishable food (along with expiration date+time) and other users can find those posts to contact the poster and come pick up the food. We thought about it as analogous to Facebook Marketplace but focused on surplus food. ## How we built it We used React + Vite for the frontend and Express + Node.js for the backend. For infrastructure, we used Cloudflare Pages for the frontend and Microsoft Azure App Service for backend. ## Security Practices #### Strict repository access permissions (Some of these were lifted temporarily to quickly make changes while working with the tight deadline in a hackathon environment): * Pull Request with at least 1 review required for merging to the main branch so that one of our team members' machines getting compromised doesn't affect our service. * Reviews on pull requests must be after the latest commit is pushed to the branch to avoid making malicious changes after a review * Status checks (build + successful deployment) must pass before merging to the main branch to avoid erroneous commits in the main branch * PR branches must be up to date with the main branch to merge to make sure there are no incompatibilities with the latest commit causing issues in the main branch * All conversations on the PR must be marked as resolved to make sure any concerns (including security) concerns someone may have expressed have been dealt with before merging * Admins of the repository are not allowed to bypass any of these rules to avoid accidental downtime or malicious commits due to the admin's machine being compromised #### Infrastructure * Use Cloudflare's CDN (able to mitigate the largest DDoS attacks in the world) to deploy our static files for the frontend * Set up SPF, DMARC and DKIM records on our domain so that someone spoofing our domain in emails doesn't work * Use Microsoft Azure's App Service for CI/CD to have a standard automated procedure for deployments and avoid mistakes as well as avoid the responsibility of having to keep up with OS security updates since Microsoft would do that regularly for us * We worked on using DNSSEC for our domain to avoid DNS-related attacks but domain.com (the hackathon sponsor) requires contacting their support to enable it. 
For my other projects, I implement it by adding a DS record on the registrar's end using the nameserver-provided credentials * Set up logging on Microsoft Azure #### Other * Use environment variables to avoid disclosing any secret credentials * Signed up for GitHub Dependabot alerts to receive updates about any security vulnerabilities in our dependencies * We were in the process of implementing an authentication service using an open-source service called Supabase to let users sign in using multiple OAuth methods and implement 2FA with TOTP (instead of SMS) * For all the password fields required for our database and Azure service, we used the Bitwarden password generator to generate 20-character random passwords, and used 2FA with TOTP to log in to all services that support it * Used SSL for all communication between our resources ## Challenges we ran into * Getting the Google Maps API to work * Weird errors deploying on Azure * Spending too much time trying to make CockroachDB work. It seemed to require certificates for connection even for testing. It seemed like their docs for using Sequelize with their DB were not updated since this requirement was put into place. ## Accomplishments that we're proud of Winning the security award by CSE! ## What we learned We learned to not underestimate the amount of work required and to do better planning next time. Meanwhile, maybe go to fewer activities, though they are super fun and engaging! Don't get us wrong, we did not regret doing them! XD ## What's next for Food Share Food Share was built within a limited time. Some implementations that couldn't be included in time: * Location of available food on the interactive map * More filters for the search for available food * Accounts and an authentication method * Implement live chat with Microsoft Azure Web PubSub * Cleaner UI
losing
## Inspiration 3-D Printing. It has been around for decades, yet the printing process is often too complex to navigate, labour intensive and time consuming. Although the technology exists, it is only used by those who are trained in the field because of the technical skills required to operate the machine. We want to change all that. We want to make 3-D printing simpler, faster, and accessible for everyone. By leveraging the power of IoT and Augmented Reality, we created a solution to bridge that gap. ## What it does Printology revolutionizes the process of 3-D printing by allowing users to select, view and print files with the touch of a button. Printology is the first application that allows users to interact with 3-D files in augmented reality while simultaneously printing them wirelessly. This is groundbreaking because it allows children, students, healthcare educators and hobbyists to view, create and print effortlessly from the comfort of their mobile devices. For manufacturers and 3-D farms, it can save millions of dollars because of the drastically increased productivity. The product is composed of a hardware and a software component. Users can download the iOS app on their devices and browse a catalogue of .STL files. They can drag and view each of these items in augmented reality and print them to their 3-D printer directly from the app. Printology is compatible with all models of printers on the market because of the external Raspberry Pi that generates a custom profile for each unique 3-D printer. Combined, the two pieces allow users to print easily and wirelessly. ## How I built it We built an application in Xcode that uses Apple's ARKit and converts STL models to USDZ models, enabling the user to view 3-D printable models in augmented reality. This had never been done before, so we had to write our own bash script to convert these models. Then we stored these models on a local server using Node.js. We integrated functions into the local server which are called by our application in Swift. In order to print directly from the app, we connected a Raspberry Pi running OctoPrint (web-based software to control the 3-D printer). We also integrated functions into our local server using Node.js to call and interact with OctoPrint. Our end product is a multifunctional application capable of previewing 3-D printable models in augmented reality and printing them in real time. ## Challenges I ran into We created something that had never been done before, hence we did not have a lot of documentation to follow. Everything was built from scratch. In other words, this project needed to be incredibly well planned and executed in order to achieve a successful end product. We faced many barriers and each time we pushed through. Here were some major issues we faced. 1. No one on our team had done iOS development before, and we learned a lot through online resources and trial and error. Altogether we watched more than 12 hours of YouTube tutorials on Swift and Xcode - it was quite a learning curve. Ultimately, with insane persistence, a full all-nighter and the generous help of the Deltahacks mentors, we troubleshot errors and found new ways of getting around problems. 2. No one on our team had experience in bash or Node.js. We learned everything from Google and our mentors. It was exhausting and sometimes downright frustrating.
Learning the connection between our JavaScript server and our Swift UI was extremely difficult, and we went through loads of troubleshooting for our networks and IP addresses. ## Accomplishments that I'm proud of and what I've learned We're most proud of learning to integrate multiple languages, APIs and devices into one synchronized system. It was the first time this had been done, and most of the software was made in-house. We learned command line functions and figured out how to centralize several applications to provide a solution. It was so rewarding to learn an entirely new language and create something valuable in 24 hours. ## What's next for Print.ology We are working on a scan feature in the app that allows users to do a 3-D scan of any object with their phone and produce a 3-D printable STL file from the photos. This has also never been accomplished before, and it would allow for major advancements in rapid prototyping. We look forward to integrating machine learning techniques to analyze a 3-D model and generate settings that reduce the number of support structures needed. This would reduce the waste involved in 3-D printing. A future step would be to migrate our STL files to a cloud-based service to which users can upload their 3-D models.
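To show what driving OctoPrint from the Raspberry Pi side can look like, here is a hedged Python sketch against OctoPrint's REST API (upload a file to local storage, select it, and start the job); the host, API key, and file name are placeholders, and Printology's actual integration was written in Node.js.

```python
# Hedged sketch of OctoPrint's REST API from Python; the real integration was
# Node.js. Host, API key, and file name are placeholders.
import requests

OCTOPRINT_URL = "http://octopi.local"
HEADERS = {"X-Api-Key": "YOUR_OCTOPRINT_API_KEY"}

def upload_and_print(gcode_path):
    # Upload the sliced file to OctoPrint's local storage, select it, and print.
    with open(gcode_path, "rb") as f:
        resp = requests.post(
            f"{OCTOPRINT_URL}/api/files/local",
            headers=HEADERS,
            files={"file": f},
            data={"select": "true", "print": "true"},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()

def job_status():
    # Poll the current print job (progress, time remaining, etc.).
    resp = requests.get(f"{OCTOPRINT_URL}/api/job", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example (placeholder file name):
# upload_and_print("phone_stand.gcode")
# print(job_status()["progress"]["completion"])
```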
## Inspiration How did you feel when you first sat behind the driving wheel? Scared? Excited? All of us on the team felt a similar way: nervous. Nervous that we'll drive too slow and have cars honk at us from behind. Or nervous that we'll crash into something or someone. We felt that this was something that most people encountered, and given the current technology and opportunity, this was the perfect chance to create a solution that can help inexperienced drivers. ## What it does Drovo records average speed and composite jerk (the first derivative of acceleration with respect to time) over the course of a driver's trip. From this data, it determines a driving grade based on the results of a SVM machine learning model. ## How I built it The technology making up Drovo can be summarized in three core components: the Android app, machine learning model, and Ford head unit. Interaction can start from either the Android app or Ford head unit. Once a trip is started, the Android app will compile data from its own accelerometer and multiple features from the Ford head unit which it will feed to a SVM machine learning model. The results of the analysis will be summarized with a single driving letter grade which will be read out to the user, surfaced to the head unit, and shown on the device. ## Challenges I ran into Much of the hackathon was spent learning how to properly integrate our Android app and machine learning model with the Ford head unit via smart device link. This led to multiple challenges along the way such as figuring out how to properly communicate from the main Android activity to the smart device link service and from the service to the head unit via RPC. ## Accomplishments that I'm proud of We are proud that we were able to make a fully connected user experience that enables interaction from multiple user interfaces such as the phone, Ford head unit, or voice. ## What I learned We learned how to work with smart device link, various new Android techniques, and vehicle infotainment systems. ## What's next for Drovo We think that Drovo should be more than just a one time measurement of driving skills. We are thinking of keeping track of your previous trips to see how your driving skills have changed over time. We would also like to return the vehicle data we analyzed to highlight specific periods of bad driving. Beyond that, we think Drovo could be a great incentive for teenage drivers to be proud of good driving. By implementing a social leaderboard, users can see their friends' driving grades, which will in turn motivate them to increase their own driving skills.
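A small sketch of the kind of classifier Drovo describes: an SVM trained on [average speed, composite jerk] pairs that maps a trip to a letter grade. The sample data, labels, and scaling below are invented placeholders, not Drovo's actual model.

```python
# Sketch of an SVM grading trips from [average speed, composite jerk] features.
# The sample data and labels are invented placeholders, not Drovo's real model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [average speed (km/h), composite jerk (m/s^3)]; label is a grade.
X_train = np.array([
    [45.0, 0.8], [52.0, 1.1], [60.0, 0.9],   # smooth driving
    [48.0, 2.9], [70.0, 3.4], [55.0, 2.5],   # jerky driving
    [95.0, 4.8], [88.0, 5.2],                # fast and jerky
])
y_train = np.array(["A", "A", "A", "B", "B", "B", "C", "C"])

# Scale features so speed doesn't dominate jerk, then fit an RBF-kernel SVM.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

def grade_trip(avg_speed_kmh, composite_jerk):
    return model.predict([[avg_speed_kmh, composite_jerk]])[0]

print(grade_trip(50.0, 1.0))  # likely "A" on this toy data
```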
## Inspiration We got our inspiration from the idea provided by Stanley Black & Decker, which is to show users how a product would look in a real place and at real size using AR techniques. We chose to solve this problem because we also encounter the same problem in our daily lives. When we browse websites to buy furniture or other space-taking products, the first two questions we have are always: how much room would it take, and would it suit the overall arrangement? ## What it does It provides customers with 3D models of products they might be interested in and enables them to place, arrange (move and rotate), and interact with these models at their exact size in real space, to help them decide whether or not to buy. ## How we built it We used Apple's ARKit on iOS. ## Challenges we ran into Plane detection; how to open and close the drawer; how to build a 3D model ourselves from nothing. ## Accomplishments that we're proud of We are able to open and close the drawer. ## What we learned How to make AR animations. ## What's next for Y.Cabinet We want to enable changing the size and color of a series/set of products directly in the AR view, without the need to go back and choose again. We also want to make the products look more realistic by finding a way to add light and shadow to them.
winning
## Inspiration Fractals are cool; there are tons of videos on YouTube zooming into the Mandelbrot set, showing the patterns that emerge as you zoom in. While there are many examples of these, they all use the exact same equation that defines the Mandelbrot set, with fixed values for everything. However, if there's anything university math has taught us, it's that if there are any letters of the alphabet not in your equation, you don't have enough variables. Taking inspiration from this, we added several variables to the equations of a few fractals, so you can see how the fractal changes with slight changes to how the fractal's created. ## What it does The classic equation for the Mandelbrot set is x_{n+1} = x_n^2 + x_1. We've added 4 variables to it, changing the equation to x_{n+1} = a*x_n^power + b*x_1 + c instead. Each of these variables can be changed, with the fractal being quickly generated with whatever's chosen. We've also implemented the Buddhabrot (somewhat of an inverse of the Mandelbrot set), and Newton's fractal for quartic polynomials. When users find a fractal pattern that they like, we have a feature to download it as an image in various aspect ratios. This will allow users to use their fractal creations as wallpapers, profile pictures or whatever else they want. ## How we built it The entire project was done in JavaScript, with some HTML and CSS for a webpage. The page is currently only locally hosted, using Python's http.server module to host the directory. ## Challenges we ran into One major class of challenges was math errors in the fractals. When dealing with so many variables, one small error can completely corrupt the image, and tracking down some of those issues was challenging. Another challenge was web design, since neither of us was very experienced with graphic design. As a result, the site has just enough functionality to create the fractals, but not much additional design. Finally, there was a lot of trouble with Chrome and Web Workers. The Chrome implementation of Web Workers is currently not the best, with an unknown issue causing the multithreading of Web Workers to leave performance the same as, if not worse than, running it on the main browser thread. Updating to the most recent version of Chrome minimized this problem, but it still exists and is something to look into at a future date. The best fix was switching to Firefox, where using Web Workers results in a significant speed-up in render times. ## Accomplishments that we're proud of Getting so much done in a group of 2 people was awesome. We weren't sure if we would be able to implement Newton's fractal in time, but we managed to figure it out and get our implementation working. ## What we learned We learned that Web Workers are way weirder than initially anticipated. Thomas was also not as well versed in JavaScript, so there was a lot of learning about JavaScript's quirky behavior. We also learnt that tweaking the variables in the Mandelbrot set does result in really cool patterns! ## What's next for Fractal Playground Add some more color gradients for Newton's fractal and the Mandelbrot set, as well as refining the user interface. A potential addition would be the use of WebGL or WebGPU to render the images on a GPU, which should speed things up more than Web Workers do.
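To make the generalized iteration concrete, here is a small escape-time sketch of x_{n+1} = a*x_n^power + b*x_1 + c over the complex plane. The project itself is written in JavaScript; this Python/NumPy version, and the particular bounds and iteration cap, are only an illustration.

```python
# Escape-time sketch of the generalized Mandelbrot iteration
# x_{n+1} = a * x_n**power + b * x_1 + c, where x_1 is the pixel's complex value.
# The project is JavaScript; this NumPy version is only an illustration.
import numpy as np

def generalized_mandelbrot(a=1.0, b=1.0, c=0.0, power=2,
                           width=400, height=300, max_iter=100):
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.2, 1.2, height)
    x1 = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]   # one complex value per pixel

    z = np.zeros_like(x1)
    escape = np.full(x1.shape, max_iter, dtype=int)    # iteration count at escape

    for n in range(max_iter):
        active = escape == max_iter                    # pixels that haven't escaped
        z[active] = a * z[active] ** power + b * x1[active] + c
        escaped = active & (np.abs(z) > 2.0)
        escape[escaped] = n
    return escape  # a 2-D array you can colour-map and save as an image

# Example: the classic Mandelbrot set is a=1, b=1, c=0, power=2.
img = generalized_mandelbrot()
```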
## Inspiration The internet and current text-based communication simply do not promote neurodiversity. People, especially children, with developmental disabilities such as autism have a great deal of difficulty recognizing the emotions of others, whether verbal or written. The internet gave us the ability to communicate with each other easily. In the new wave of technology, we believe that all humans should be able to understand each other easily as well. ## What it does AllChat works like any other messaging application. However, on top of sending and receiving messages, when you receive a message it displays the emotion of the given text so that those with developmental disabilities can gain more insight and more easily understand other people's messages. ## How we built it The NLP system uses TensorFlow and BERT to categorize text into 5 different emotions. BERT computes vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after. BERT is usually used to classify text as just negative or positive, so I had to fine-tune it to get the model we have, which classifies text into multiple categories. Sockets were used to communicate between different IP addresses and ports. Threading was used to stream text in and out at the same time. The frontend system uses Kivy, a Python front-end library meant for cross-platform devices and multi-touch displays. ## Challenges we ran into There were a lot of firsts for this group. We are a bunch of first years after all. Whether it was someone's first time using BERT, or first time using Kivy, there was a lot of pain in setting things up to a point where we were comfortable with the results. It was especially difficult to find good training data for BERT. It was also difficult to connect the front-end to the back-end given the time difference between some of our group members. ## Accomplishments that we're proud of For training the NLP system we had to read a lot of research papers about how labs have done similar things. It was extremely cool to apply something out of research papers in our own work. All things considered, the front-end system looks very good; considering none of us are designers and it was that member's first time using Kivy, a lot of progress was made. ## What we learned A big lesson that continues to be relevant in the space of data science and machine learning is garbage in, garbage out. A model is only as good as the training data you provide it with. On top of that, we learned to work better as a group despite our time difference by using GitHub better and writing more meaningful commit messages. ## What's next for AllChat Some next steps would be to move to a server instead of having messages analyzed on-device, as long messages can become time-intensive to process on a mobile phone. On top of that, some security features such as end-to-end encryption would also be necessary.
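Since the write-up leans on sockets and threading for simultaneous send and receive, here is a minimal sketch of that pattern; the host, port, and newline framing are placeholder assumptions, and classify_emotion is just a stand-in for the fine-tuned BERT model.

```python
# Minimal threaded socket client: one thread receives while the main thread sends.
# Host, port, and the simple newline framing are placeholder assumptions.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050

def classify_emotion(text):
    # Stand-in for the fine-tuned BERT model's prediction.
    return "neutral"

def receive_loop(sock):
    buffer = b""
    while True:
        chunk = sock.recv(1024)
        if not chunk:
            break
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            text = line.decode("utf-8", errors="replace")
            # Show the predicted emotion tag next to each incoming message.
            print(f"[{classify_emotion(text)}] {text}")

def main():
    with socket.create_connection((HOST, PORT)) as sock:
        # Receive on a daemon thread so sending and receiving happen concurrently.
        threading.Thread(target=receive_loop, args=(sock,), daemon=True).start()
        while True:
            message = input()
            sock.sendall(message.encode("utf-8") + b"\n")

if __name__ == "__main__":
    main()
```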
## Inspiration The name of our web app, Braeburn, is named after a lightly colored red apple that was once used with green Granny Smith apples to test for colorblindness. We were inspired to create a tool to benefit individuals who are colorblind by helping to make public images more accessible. We realized that things such as informational posters or advertisements may not be as effective to those who are colorblind due to inaccessible color combinations being used. Therefore, we sought to tackle this problem with this project. ## What it does Our web app analyzes images uploaded by users and determines whether or not the image is accessible to people who are colorblind. It identifies color combinations that are hard to distinguish for colorblind people and offers suggestions to replace them. ## How we built it We built our web app using Django/Html/Css/Javascript for the frontend, and we used python and multiple APIs for the backend. One API we used was the Google Cloud Vision API to help us detect the different colors present in the image. ## Challenges we ran into One challenge we ran into is handling the complexity of the different color regions within an image, which is a prevailing problem in the field of computer vision. Our current algorithm uses an api to perform image segmentation that clusters areas of similar color together. This allowed us to more easily create a graph of nodes over the image, where each node is a unique color, and each node's neighbors are different color regions on the image that are nearby. We then traverse this graph and test each pair of neighboring color regions to check for inaccessible color combinations. We also struggled to find ways to simulate colorblindness accurately as RGB values do not map easily to the cones that allow us to see color in our eyes. After some research, we converted RGB values to a different value called LMS, which is a more accurate representation of how we view color. Thus, for an RGB, the LMS value may be different for normal and colorblind vision. To determine if a color combination is inaccessible, we compare these LMS values. To provide our color suggestions, we researched a lot to figure out how to best approximate our suggestions. It ultimately led us to learn about daltonizers, which can color correct or simulate colorblind vision, and we utilize one to suggest more accessible colors. Finally, we ran into many issues integrating different parts of the frontend, which ended up being a huge time sink. Overall, this project was a good challenge for all of us, given we had no previous exposure to computer vision topics. ## Accomplishments that we're proud of We're proud of completing a working product within the time limits of this hackathon and are proud of how our web app looks! We are proud of the knowledge we learned, and the potential of our idea for the project. While many colorblindness simulators exist, ours is interesting for a few reasons . Firstly, we wanted to automate the process of making graphics and other visual materials accessible to those with colorblindness. We focused not only on the frequency of colors that appeared in the image; we created an algorithm that traverses the image and finds problematic pairs of colors that touch each other. We perform this task by finding all touching pairs of color areas (which is no easy task) and then comparing the distance of the pair with typical color vision and a transformed version of the pair with colorblind vision. 
This proved to be quite challenging, and we created a primitive algorithm that performs this task. The reach goal of this project would be to create an algorithm sophisticated enough to completely automate the task and return the image with color correction. ## What we learned We learned a lot about complex topics such as how to best divide a graph based on color and how to manipulate color pixels to reflect how colorblind people perceive color. Another thing we learned is that it's difficult to anticipate challenges and manage time. We also realized we were a bit ambitious and overlooked the complexity of computer vision topics. ## What's next for Braeburn We want to refine our color suggestion algorithm, extend the application to videos, and provide support for more types of colorblindness.
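For readers curious about the RGB-to-LMS step mentioned above, here is a hedged sketch: linearize sRGB, convert to XYZ with the standard sRGB/D65 matrix, then to LMS with the Hunt-Pointer-Estevez matrix. This is one common choice of matrices, not necessarily the exact transform or daltonizer Braeburn uses.

```python
# Hedged sketch of an sRGB -> LMS conversion: one common matrix choice,
# not necessarily the exact transform Braeburn uses.
import numpy as np

# Standard sRGB (D65) -> XYZ matrix.
SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

# Hunt-Pointer-Estevez XYZ -> LMS matrix (D65-normalized).
XYZ_TO_LMS = np.array([
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0000, 0.0000,  0.9182],
])

def srgb_to_linear(rgb_0_255):
    """Undo the sRGB gamma so the values represent linear light."""
    c = np.asarray(rgb_0_255, dtype=float) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def rgb_to_lms(rgb_0_255):
    linear = srgb_to_linear(rgb_0_255)
    xyz = SRGB_TO_XYZ @ linear
    return XYZ_TO_LMS @ xyz

def lms_distance(rgb_a, rgb_b):
    """Euclidean distance in LMS space, a rough proxy for distinguishability."""
    return float(np.linalg.norm(rgb_to_lms(rgb_a) - rgb_to_lms(rgb_b)))

# Example: compare a red/green pair that is often problematic.
print(lms_distance((200, 30, 30), (30, 160, 30)))
```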
losing
## Inspiration After observing the news about the use of police force for so long, we considered to ourselves how to solve that. We realized that in some ways, the problem was made worse by a lack of trust in law enforcement. We then realized that we could use blockchain to create a better system for accountability in the use of force. We believe that it can help people trust law enforcement officers more and diminish the use of force when possible, saving lives. ## What it does Chain Gun is a modification for a gun (a Nerf gun for the purposes of the hackathon) that sits behind the trigger mechanism. When the gun is fired, the GPS location and ID of the gun are put onto the Ethereum blockchain. ## Challenges we ran into Some things did not work well with the new updates to Web3 causing a continuous stream of bugs. To add to this, the major updates broke most old code samples. Android lacks a good implementation of any Ethereum client making it a poor platform for connecting the gun to the blockchain. Sending raw transactions is not very well documented, especially when signing the transactions manually with a public/private keypair. ## Accomplishments that we're proud of * Combining many parts to form a solution including an Android app, a smart contract, two different back ends, and a front end * Working together to create something we believe has the ability to change the world for the better. ## What we learned * Hardware prototyping * Integrating a bunch of different platforms into one system (Arduino, Android, Ethereum Blockchain, Node.JS API, React.JS frontend) * Web3 1.0.0 ## What's next for Chain Gun * Refine the prototype
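Since manually signing and sending raw transactions was the pain point noted above, here is a hedged web3.py sketch of that flow; the RPC URL, keys, and recipient are placeholders, the data payload format is our own invention, and attribute names vary slightly between web3.py versions.

```python
# Hedged web3.py sketch of signing and sending a raw transaction.
# RPC URL, keys, and recipient are placeholders; attribute names differ
# slightly between web3.py versions (e.g. rawTransaction vs raw_transaction).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-ethereum-node.example"))  # placeholder RPC

SENDER = "0xYourSenderAddress"
PRIVATE_KEY = "0xYourPrivateKey"          # never hard-code this in real code
RECORDER = "0xContractOrRecipientAddress"

def record_shot(gun_id, latitude, longitude):
    # Pack the event as hex data; a real deployment would call a contract method.
    payload = f"{gun_id}|{latitude:.6f}|{longitude:.6f}".encode().hex()

    tx = {
        "to": RECORDER,
        "value": 0,
        "gas": 100_000,
        "gasPrice": w3.eth.gas_price,
        "nonce": w3.eth.get_transaction_count(SENDER),
        "data": "0x" + payload,
        "chainId": w3.eth.chain_id,
    }

    # Sign locally with the private key, then broadcast the raw transaction.
    signed = w3.eth.account.sign_transaction(tx, private_key=PRIVATE_KEY)
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    return tx_hash.hex()
```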
## Inspiration To introduce the most impartial and assured form of vote submission in response to the controversy around democratic electoral polling following the 2018 US midterm elections, an event clouded by doubt and by citizen voters questioning the authenticity of the results. This propelled the idea of bringing enforced and much-needed decentralized security to the polling process. ## What it does Allows voters to vote through a web portal backed by a blockchain. The web portal is written in HTML and JavaScript using the Bootstrap UI framework, with jQuery sending Ajax HTTP requests to a Flask server written in Python that communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses that passphrase to cast their ballot anonymously and securely. The vote and passphrase then go to the Flask web server, where they are parsed and sent to the ARK blockchain, recording the vote as a transaction. Each transaction is delegated one ARK coin, which represents the count. Finally, a paper trail is generated after the vote is submitted on the web portal in case public verification is needed. ## How we built it The initial approach was to use Node.js; however, we opted for Python with Flask, as it proved to be a more readily implementable solution. Visual Studio Code was used to build the HTML and CSS front end that presents the voting interface. The ARK blockchain was run in a Docker container. These pieces were used together to deliver the web-based application. ## Challenges I ran into * Integrating the front end and back end into a seamless app * Using Flask as the intermediary layer for the back end * Understanding the incorporation, use, and capability of blockchain for security in this application ## Accomplishments that I'm proud of * Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern ## What I learned * Application of the ARK.io blockchain and its security protocols * The multiple stages of encryption involved in converting passphrases into private and public keys * Utilizing jQuery to compile a comprehensive program ## What's next for Block Vote Expand Block Vote's applicability to other areas requiring decentralized and trusted security, introducing a universal initiative.
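A minimal sketch of the Flask layer sitting between the voting portal and the ARK blockchain might look like the following; `submit_to_ark` is a hypothetical stand-in for whatever ARK client call builds and broadcasts the one-coin transaction, and the route and field names are assumptions:

```python
# Sketch: receive a ballot from the jQuery/Ajax front end and hand it to the blockchain layer.
from flask import Flask, jsonify, request

app = Flask(__name__)

def submit_to_ark(passphrase: str, candidate: str) -> str:
    # Hypothetical helper: in the real app this would sign and broadcast a 1-ARK
    # transaction whose recipient or vendor field encodes the chosen candidate.
    raise NotImplementedError

@app.route("/vote", methods=["POST"])
def cast_vote():
    ballot = request.get_json(force=True)
    passphrase, candidate = ballot.get("passphrase"), ballot.get("candidate")
    if not passphrase or not candidate:
        return jsonify({"error": "missing passphrase or candidate"}), 400
    tx_id = submit_to_ark(passphrase, candidate)
    return jsonify({"status": "accepted", "transaction": tx_id}), 201
```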
## What it does Using our app, users can scan an item and use the provided passcode to make sure that the item they have is legitimate. Using just the QR scanner in our app, it is very easy to verify the goods you bought, as well as the location where the drugs were manufactured. ## How we built it We started off wanting to ensure immutability for our users; after all, our whole platform is built so users can trust the items they scan. What came to mind was blockchain technology, which would allow us to ensure each and every item remains immutable and publicly verifiable by any party. This way, users would know that the data we present is always true and legitimate. After building the blockchain technology with Node.js, we started working on the actual mobile platform. To create both iOS and Android versions simultaneously, we used Angular to create a shared codebase so we could easily adapt the app for both platforms. Although we didn't have any UI/UX experience, we tried to make the app as simple and user-friendly as possible. We incorporated the Google Maps API to track and plot the locations where items are scanned and add that to our metadata, and we added native packages like QR code scanning and generation to make things easier for users. Although we weren't able to publish to the app stores, we tested our app using emulators to ensure all functionality worked as intended. ## Challenges we ran into Our first challenge was learning how to build a blockchain ecosystem within a mobile app. Since the technology was somewhat foreign to us, we had to learn the ins and outs of what "makes" a blockchain and how to ensure its immutability. After all, trust and security are our number one priorities, and without them our app would be meaningless. In the end, we found a way to create this ecosystem and performed numerous unit tests to ensure it was up to industry standards. Another challenge we faced was getting the app to work in both iOS and Android environments. Since each platform has its own set of "rules and standards", we had to make sure that our functions worked in both and that no errors arose from platform differences. ## What's next for Trail We hope to expand our target audience to secondhand commodities and the food industry. In today's society, markets such as eBay and Alibaba are flooded with counterfeit luxury goods such as clothing and apparel. When customers buy these goods from secondhand retailers on eBay, there's currently no way they can know for certain whether the item is as legitimate as claimed; they rely solely on the seller's word. We hope to disrupt this and allow customers to immediately view where an item was manufactured and whether it truly is from Gucci rather than a counterfeit market in China. Another industry we hope to expand to is food. People care about where the food they eat comes from and whether it's kosher, organic, or non-GMO. Although the FDA regulates this to a certain extent, the data isn't easily accessible to customers. We want to provide a transparent and easy way for users to learn about the food they are eating by showing them data such as where the honey was produced, where the cows were raised, and when their fruits were picked. Outbreaks such as the Chipotle E. coli incident could be pinpointed, since officials could see where the incident started and warn customers not to eat food coming from that area.
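The chain itself was written in Node.js; the sketch below shows the same core idea in Python so the immutability argument is concrete: every scanned item becomes a block whose hash covers its data and the previous block's hash, so tampering anywhere breaks validation. Field names are illustrative.

```python
# Minimal append-only chain: each item scan is a block chained to the previous hash.
import hashlib, json, time

class Block:
    def __init__(self, index, data, prev_hash):
        self.index, self.data, self.prev_hash = index, data, prev_hash
        self.timestamp = time.time()
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"i": self.index, "d": self.data, "p": self.prev_hash, "t": self.timestamp},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class Chain:
    def __init__(self):
        self.blocks = [Block(0, {"genesis": True}, "0")]

    def add_item(self, item_id, passcode, lat, lng):
        data = {"item": item_id, "passcode": passcode, "lat": lat, "lng": lng}
        self.blocks.append(Block(len(self.blocks), data, self.blocks[-1].hash))

    def is_valid(self):
        # Any edit to a past block changes its hash and breaks the link to its successor.
        return all(
            b.prev_hash == prev.hash and b.hash == b.compute_hash()
            for prev, b in zip(self.blocks, self.blocks[1:])
        )
```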
winning
## Inspiration This project was a response to the events that occurred during Hurricane Harvey in Houston last year, the wildfires in California, and the events that occurred during the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We are also informing people of the risk factors in certain areas so that they will be more knowledgeable when making decisions about travel, their futures, and preventative measures. ## What it does Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on a map using the react-native-maps API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics. Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fires, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previously resolved incidents of all types, we can generate a risk factor that gauges how safe a user's area is relative to the most dangerous range within our database. ## How we built it We used React Native, MongoDB, JavaScript, Node.js, the Google Cloud Platform, and various open-source libraries to help build our hack. ## Challenges we ran into Ejecting our React Native app from Expo took a very long time and blocked the member of our group who was working on the client side of the app. This left us with a lot more work to divide amongst ourselves once it finally ejected. Getting acquainted with React Native in general was difficult. It was fairly new to all of us, and some of the libraries we used did not have documentation, which required us to learn from their source code. ## Accomplishments that we're proud of Implementing the heat map analytics feature was something we are happy we were able to do, because it is a nice way of presenting the information regarding disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Writing scripts to test our app and debug issues also helped us get past many challenges, and we are proud of that. ## What we learned We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times they have glaring drawbacks and limitations which may make another, more 'complicated' framework a better choice in the long run.
## What's next for Supermaritan In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends over time and displaying risk factors for each individual incident type is something we definitely plan to do in the future.
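As a back-of-the-envelope illustration of the risk factor described above, the calculation might look like the sketch below: a weighted average of resolved-incident severities in a lat/lng range, normalized against the most dangerous range in the database. Field names and the severity scale are assumptions, not the app's exact schema.

```python
# Sketch of the risk-factor and most-common-incident calculations.
def risk_factor(incidents_in_range, max_weighted_severity):
    """incidents_in_range: dicts like {"type": "flood", "severity": 1-5, "resolved": True}."""
    resolved = [i for i in incidents_in_range if i["resolved"]]
    if not resolved or max_weighted_severity == 0:
        return 0.0
    weighted = sum(i["severity"] for i in resolved) / len(resolved)
    return min(weighted / max_weighted_severity, 1.0)  # 1.0 = as risky as the worst range

def most_common_type(incidents_in_range):
    counts = {}
    for i in incidents_in_range:
        counts[i["type"]] = counts.get(i["type"], 0) + 1
    return max(counts, key=counts.get) if counts else None
```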
## Inspiration Disasters can strike quickly and without notice. Most people are unprepared for situations such as earthquakes, which occur with alarming frequency along the Pacific Rim. When Wi-Fi and cell service are unavailable, medical aid, food, water, and shelter become hard to share because the community can only communicate and connect in person. ## What it does In disaster situations, Rebuild allows users to share and receive information about nearby resources and dangers by placing icons on a map. Rebuild uses a mesh network to automatically transfer data between nearby devices, ensuring that users have the most recent information in their area. What makes Rebuild a unique and effective app is that it does not require Wi-Fi to share and receive data. ## How we built it We built it with Android and the Nearby Connections API, a built-in Android library which manages peer-to-peer connections between nearby devices. ## Challenges we ran into The main challenges we faced while making this project were updating the device location so that the markers are placed accurately, and establishing a reliable mesh-network connection between the app users. While these features still aren't perfect, after a long night we managed to reach something we are satisfied with. ## Accomplishments that we're proud of A WORKING MESH NETWORK! (If you heard the scream of joy last night, I apologize.) ## What we learned ## What's next for Rebuild
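One piece that still needs refining is the merge rule applied when two phones exchange data over the mesh: the most recent record for each marker should win. A minimal sketch of that rule, assuming each marker carries an ID, a kind, and an update timestamp (the real app's schema may differ):

```python
# Keep the newest version of each marker when merging a peer's data into ours.
def merge_markers(local, received):
    merged = {m["id"]: m for m in local}
    for m in received:
        existing = merged.get(m["id"])
        if existing is None or m["updated_at"] > existing["updated_at"]:
            merged[m["id"]] = m
    return list(merged.values())
```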
# Inspiration During the information session presented by the Wharton Risk Center, we were inspired by the story of a little girl who recognized the early signs of a tsunami but had difficulty spreading awareness. After finally contacting the right person, they managed to get everyone off the beach, and it became the only place where no one died in the tsunami. Other problems we found were a lack of data on the extent of flooded areas and other natural disasters, and difficulties in safely delivering supplies to sites in areas affected by disasters. We wanted to build a solution that gives users the power to alert others nearby to disasters, collects missing data in a non-intrusive manner, and provides critical support and directions during disasters. # What it does ## Mobile Application We connect our users. Our iOS mobile app gives users the power to report a disaster in their area, whether it's flooding, fire, earthquake, tsunami, etc. Other users of the application within a certain radius are notified of a potential threat. They can quickly and conveniently respond yes or no with two clicks of the notification. Whether they answer yes, answer no, or don't respond, the data is stored and mapped on our web application. After sending a report, the application offers tips and resources for staying safe, and the user can prepare accordingly for the disaster. ## Web Application ### Data Visualization Our web application makes it extremely easy to visualize the extent of an incident. If someone responds yes, they are plotted on a red heatmap. If someone responds no, they are plotted on a blue heatmap. As a result, the affected area should be plotted in red by users who were affected and bounded in blue where users reported they didn't see anything. This data can be analyzed and visualized even further on the visualization page of our web app. ### Mission Control Delivering supplies to sites can be dangerous, so first responders need a safe and efficient way to quickly deliver supplies. Safe routes are provided such that, in order to get from point A to point B, only unaffected routes are taken. For example, if the shortest path from point A to point B goes through a road that users reported as flooded, the route avoids that road entirely. # How we built it We built the front end using React + Redux for the web app and Swift for the mobile app, with Firebase for the backend. We mainly had two people working on each platform, but we all helped each other integrate our separate moving parts. Most importantly, we had a lot of fun! (And lots of basketball with Bloomberg.) # Challenges we ran into Coordinating live updates across platforms. # Accomplishments that we're proud of Making a coherent and working product! # What we learned Lots of Firebase. # What's next for foresite Endless analytical opportunities with the data. Machine learning for verification of disaster reports (we don't want users to falsely claim disasters in areas, but we also don't want to deter anyone from reporting them).
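The "notify everyone within a certain radius" step above reduces to a distance filter over users' last known coordinates. A minimal sketch, assuming users are stored with lat/lng and the actual push notification is handled elsewhere:

```python
# Haversine great-circle distance plus a simple radius filter.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_to_notify(report_lat, report_lng, users, radius_km=5.0):
    return [
        u for u in users
        if haversine_km(report_lat, report_lng, u["lat"], u["lng"]) <= radius_km
    ]
```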
winning
## Inspiration The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to: commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but we do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet so as not to become a burden on those around them. Over time, this takes a toll on one's well-being, so we decided to tackle this issue in a creative yet simple way. ## What it does VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match two users who are looking to have a similar conversation. VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others. ## How we built it We began by building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm that detects common words in the user inputs and pairs up two users in the queue to start messaging. Then we integrated, tested, and refined the app. ## Challenges we ran into One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work, we finally got the API working with Axios. ## Accomplishments that we're proud of In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of mind, it's a good idea to have some resources available to them. ## What we learned Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are associated with emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result! ## What's next for VenTalk There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even building a web app version. We also want to add more personal features, such as a personal locker of things that make you happy (such as a playlist, a subreddit, or a Netflix series).
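A stripped-down version of the keyword-matching idea described above might look like this sketch: score each queued user by topic overlap with the newcomer and pair the best match. The real matcher would also take the mental-health-scale input and queue age into account; the data shapes here are assumptions.

```python
# Pair the incoming user with the queued user whose topics overlap the most.
def best_match(new_topics, queue):
    """new_topics: set of lowercased keywords; queue: list of (user_id, set_of_keywords)."""
    best_id, best_score = None, 0
    for user_id, topics in queue:
        score = len(new_topics & topics)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id  # None means no overlap; fall back to first-come-first-paired
```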
![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png) # What is gitStarted? GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules. ## Installation To install: ``` npm install ``` ## Usage To run: ``` gulp ``` ## Credits Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)! Made with :heart: in Kingston, Ontario for QHacks 2016
## Inspiration Mental health has become one of the most prominent issues today, impacting a high percentage of people. It has taken a negative toll on their lives and has made people feel like they do not belong anywhere. Because of this, our group decided to assist these people by creating a phone application that minimizes these negative feelings by providing a helping hand and guiding them to additional aid if necessary. ## What it does Our application uses a chat bot with voice recognition to communicate with users and respond according to their mood. It strives to help their overall mentality and guide them toward greater overall personal satisfaction. ## How we built it We used Android Studio to create an Android application that incorporates Firebase for user authentication and data management using its database. In addition, the chat bot uses Dialogflow's machine learning capabilities and dialog intents to simulate a real-life conversation while providing the option for anonymity. In conjunction with Dialogflow, Avaya's API was utilized for its voice recognition and for reaching emergency contacts through SMS and phone calls. ## Challenges we ran into It was very challenging for us to implement the Avaya API because of its compatibility with the Java JDK, which made it difficult to get the correct HTTP connection. This required specific Java versions as well as Maven to integrate it with the data output from Avaya's API. In addition, the Firebase implementation was difficult because it is a NoSQL database, which made it tough to retrieve and interact with the data. ## Accomplishments that we're proud of Despite the challenges we faced, we were still able to implement both the Avaya API, which is now able to both call and send text messages, and the Firebase database to store all the user data. This all came together in our final product, where the chat bot is able to interact with users and call or send text messages when required. ## What we learned The biggest takeaway is learning to think outside the box and understand that there is always another way around a seemingly unsolvable problem. For example, the Avaya API library was difficult to implement because it required downloading a library and using an intermediary such as Maven to access it. Despite this obstacle, our team was still able to find an alternative: accessing the API through curl calls to retrieve the needed data. A similar obstacle arose with the Firebase database, where pull requests for data would not process as required, but we were able to find an alternative way to connect to Firebase and still retrieve the needed data. ## What's next for ASAP Assistance The more the chat bot is utilized, the better the communication will be between the user and the bot. Further training will improve the bot's capabilities, which means it could use many more intents to improve the overall user experience. With continued contribution to the logical capabilities of the bot, a wider range of communication can be supported between the user and the bot.
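The "find an alternative way to connect to Firebase" workaround above amounts to plain HTTP calls: the Firebase Realtime Database exposes every path as JSON over REST. A minimal sketch of that alternative path, shown in Python rather than the app's Java, with the project URL and token as placeholders:

```python
# Read a user record straight from the Firebase Realtime Database REST endpoint.
import requests

FIREBASE_URL = "https://example-project.firebaseio.com"  # placeholder project URL

def fetch_user_profile(uid: str, id_token: str) -> dict:
    resp = requests.get(
        f"{FIREBASE_URL}/users/{uid}.json",
        params={"auth": id_token},  # auth token obtained via Firebase Authentication
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json() or {}
```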
winning
## Inspiration With the increase in Covid-19 cases, the healthcare sector has experienced a shortage of PPE supplies. Many hospitals have turned to the public for donations. However, people who are willing to donate may not know what items are needed, which hospitals need them urgently, or even how to donate. ## What it does Corona Helping Hands is a real-time website that sources data directly from hospitals and ranks their needs based on bed capacity and the urgency of necessary items. An interested donor can visit the website and see which hospitals in their area are accepting donations, which specific items are needed, and how to donate. ## How we built it We built the donation web application using: 1) HTML/CSS/Bootstrap (frontend web development) 2) Flask (backend web development) 3) Python (backend language) ## Challenges we ran into We ran into issues integrating our map with the HTML page. Taking data and displaying it on the web application was not easy at first, but we were able to pull it off in the end. ## Accomplishments that we're proud of None of us had much experience in frontend web development, so that was challenging for all of us. However, we were able to complete a web application by the end of this hackathon, which we are all proud of. We are also proud of creating a platform that helps users help hospitals in need and gives them an easy way to figure out how to donate. ## What we learned This was the first time most of us had worked with web development, so we learned a lot about that aspect of the project. We also learned how to integrate an API with our project to show real-time data. ## What's next for Corona Helping Hands We hope to further improve our web application by integrating data from across the nation. We would also like to further improve the UI/UX of the app to enhance the user experience.
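As a rough illustration of the ranking described above, ordering hospitals by urgency and occupancy might look like this sketch; the weights and field names are assumptions rather than the site's exact formula.

```python
# Rank hospitals so the most urgent, most occupied ones surface first for donors.
def rank_hospitals(hospitals):
    """hospitals: list of dicts with 'urgency' (0-10), 'bed_capacity', 'beds_occupied'."""
    def need_score(h):
        occupancy = h["beds_occupied"] / max(h["bed_capacity"], 1)
        return 0.7 * (h["urgency"] / 10.0) + 0.3 * occupancy
    return sorted(hospitals, key=need_score, reverse=True)
```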
Team channel #43 Team Discord users - Sarim Zia #0673, Elly #2476, (ASK), rusticolus #4817, Names - Vamiq, Elly, Sarim, Shahbaaz ## Inspiration When brainstorming an idea, we concentrated on problems that affect a large population and that mattered to us. Topics such as homelessness, food waste, and a clean environment came up in discussion. FULLER was able to incorporate all our ideas and ended up being a multifaceted solution to help support the community. ## What it does FULLER connects charities and shelters with local restaurants that have uneaten food and unused groceries. As food prices increase along with homelessness and unemployment, we decided to create FULLER. Our website serves as a communication platform between both parties. Restaurants input a scheduled pick-up time, and charities can easily access a listing of restaurants with available food or groceries for contactless pick-up later in the week. ## How we built it We used React.js to create our website, coding in HTML, CSS, and JavaScript, with MongoDB, bcrypt, Node.js, and Express.js (the MERN stack). We also used a backend database. ## Challenges we ran into A challenge that we ran into was communicating how the code was organized. This led to setbacks, as we had to fix up the code, which sometimes required us to rewrite lines. ## Accomplishments that we're proud of We are proud that we were able to finish the website. Half our team had no prior experience with HTML, CSS, or React; despite this, we were able to create a fair outline of our website. We are also proud that we were able to come up with a viable, potentially implementable solution to help out our community. ## What we learned We learned that when collaborating on a project it is important to communicate, specifically about how the code is organized. As previously mentioned, we had trouble editing and running the code, which caused major setbacks. In addition, two team members were able to learn HTML, CSS, and JavaScript over the weekend. ## What's next for us We want to create more pages on the website to make it fully functional, as well as clean up the front end of our project.
## Inspiration There was a lot of free food at HackMIT that we did not know about - that is usually the case at MIT! We think that it is way more convenient to be alerted physically when free food is available than to keep refreshing Outlook. ## What it does It lights up a seven-segment display to show the building and room number where free food is available RIGHT NOW. ## How we built it An IFTTT service forwards email from the free-food mailing list to the Particle Photon. It analyzes the messages from the list and identifies the location. Then, a sequence of LEDs lights up to present the exact numbers of the location. ## Challenges we ran into 1. Finding a number in an email is challenging. 2. Lighting up three LEDs from one Photon output cannot be done directly. 3. There were no seven-segment displays available - making one takes time! ## Accomplishments that we're proud of We got it to work! And the numbers look like numbers. ## What we learned How to make physical devices interact with the digital world. ## What's next for Food Alert! Helping college students around MIT not to starve!
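The "finding a number in an email" step comes down to pulling an MIT-style building-room token (e.g. "32-123" or "W20-307") out of free text. The parsing on the Photon is done in its own firmware; a regex-based sketch of the same idea looks like this:

```python
# Extract a building-room location like "32-123" or "W20-307" from an email body.
import re

ROOM_PATTERN = re.compile(r"\b([A-Z]?\d{1,3})-(\d{1,4}[A-Z]?)\b")

def extract_location(email_body: str):
    match = ROOM_PATTERN.search(email_body)
    if not match:
        return None
    building, room = match.groups()
    return building, room

# extract_location("Free pizza in 32-123 right now!") -> ("32", "123")
```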
partial
## Inspiration Our team loves games. Before even coming to HW, we wanted to make something we knew we could enjoy and share with others. After a quick brainstorming session at the event itself, we decided to get out of our comfort zone and try experimenting with technologies we've never used before. The result of this is PocketPup: A virtual pet with real needs. ## What it does PocketPup is a virtual pet whose needs change according to the weather (in real time, thanks to The Weather Network's cool API). It encourages players to perform real-world physical tasks to ensure that their virtual pet is happy and in good health. ## How we built it PocketPup is built entirely using Unity3D and C#, and the character model for the pup is created using Blender. ## Challenges we ran into One challenge was figuring out how to integrate the Weather Network API into Unity. We also ran into some trouble when we had to create particle effects in sync with the real-time weather. ## Accomplishments that we're proud of Our Pup model is very expressive, and the fact that the in-game weather mirrors the real world is, in our opinion, simply awesome. ## What we learned We learned how to effectively use version control tools to distribute tasks in a team, as well as integrating new technologies into our existing skillset. Also, this has kinda become our team motto: Nothing is impossible if you work as a team. ## What's next for PocketPup It will require deeper knowledge and experience with the tools we are using, but we hope to implement a system through which your virtual pet can be taught some tricks, and an in-game currency system, the currency for which is earned through real-world tasks.
# YouDub: Expanding YouTube Accessibility Through Customizable Voice-Overs Our inspiration for developing YouDub came from a unique blend of experiences within our diverse team. With three members having Hindi as their mother tongue and one member being an English speaker, a casual conversation revealed an interesting challenge. Our English-speaking teammate shared his fondness for math tutorials taught by Indian educators, who often explain concepts in an incredibly straightforward way. However, these videos were mostly in Hindi, which meant he had to rely on subtitles to follow along. This sparked the idea: **What if we could create a Chrome extension that adds voice-overs to YouTube videos, allowing viewers to hear a dubbed version of the content in their preferred language?** Not only would this help break language barriers, but it would also assist students who struggle with videos where the accent is hard to understand, the pace is too fast, or the explanation style is unclear. By providing customizable voice-overs, YouDub enhances accessibility, making it easier for users to engage with educational content. ## What We Learned Throughout this journey, we learned a lot. On the technical side, we gained expertise in designing and building a Chrome extension, integrating APIs, and refining our use of Git for collaborative development. Beyond the code, we learned the value of teamwork, developed stronger friendships, and discovered fascinating things about each other's perspectives and working styles. ## Stack and Functionality YouDub is built using **JavaScript**, **HTML**, and **CSS**. We leveraged the **ElevenLabs API**, first feeding transcription data from YouTube videos into the Google Cloud Translation API and then into ElevenLabs' voice models, generating synchronized audio in the user's chosen voice. This opens up various use cases, such as replacing the voice of a content creator whose delivery might be off-putting, or translating and dubbing tutorial videos, allowing users to immerse themselves in the content without relying on subtitles. The result is a more engaging experience and access to a broader range of content on YouTube. ## Challenges We Faced Building YouDub wasn't without its challenges. One of the biggest hurdles was defining the exact scope of what we wanted to achieve. YouTube's frequent updates to its site security also presented difficulties, causing multiple console errors that required extensive debugging. Additionally, the demanding pace of development, especially during **CalHacks**, pushed us out of our comfort zones, requiring long hours and constant problem-solving. ## Conclusion & What's next The experience was incredibly rewarding. We leave CalHacks not only with a functioning product but also with a sense of pride and accomplishment in what we've achieved. In the end, **YouDub empowers users to engage with YouTube content in new and exciting ways**, breaking language barriers and enhancing the viewing experience for people around the globe. With our extension under review for the Chrome Web Store, our product will soon be available to users around the world!
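A condensed, server-side view of the translate-then-dub pipeline is sketched below (the extension itself runs in JavaScript): translate a caption chunk with Google Cloud Translation, then request audio from the ElevenLabs text-to-speech endpoint. The voice ID, key handling, and exact request fields are assumptions that should be checked against current ElevenLabs documentation.

```python
# Translate one transcript chunk and synthesize dubbed audio for it.
import requests
from google.cloud import translate_v2 as translate

def dub_chunk(text: str, target_lang: str, voice_id: str, eleven_key: str) -> bytes:
    translated = translate.Client().translate(text, target_language=target_lang)["translatedText"]
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": eleven_key},
        json={"text": translated},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content  # audio bytes to sync against the video's caption timestamps
```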
## Inspiration We wanted to create a solution that would ease mental health issues among kids, young adults, and adults by allowing them to play with a Tamagotchi-style virtual pet that grows and plays with them. ## What it does We created a web application that allows users to customize and play with their own pet by logging in occasionally and taking care of it by bathing, playing, feeding, and sleeping, which increases the mood and overall health of the bear. Doing all of the chores increases your pet's experience, leveling it up! You can collect badges and customize your own bear as well. Each action has its own unique animation for the bear. Make sure you try it out! ## How I built it We created a backend server to store user data, including the bear's state and the last user login date. Comparing that date to the current date, we render the bear's new state based on how long the user has been logged off. The animations were done through a customized zdog library. We used Material-UI and our own custom CSS buttons to help create a good, smooth user experience. ## Challenges I ran into Animations are hard :( ## Accomplishments that I'm proud of Getting more than 6 animations completed and creating a full-stack web application that is customized for every single user. ## What I learned The MERN stack, especially React hooks, and creating custom animations. ## What's next for Bear Buddies More animations, support for mobile, etc.
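The "how long has the user been away" update described above can be sketched like this: decay the bear's stats according to days since last login, then clamp to the 0-100 range. The decay rates and stat names here are made up for illustration.

```python
# Compute the bear's rendered state from its saved state and the time since last login.
from datetime import datetime

DECAY_PER_DAY = {"hunger": 15, "cleanliness": 10, "energy": 8, "mood": 12}

def render_state(saved_state: dict, last_login: datetime, now: datetime) -> dict:
    days_away = max((now - last_login).total_seconds() / 86400, 0)
    new_state = {}
    for stat, value in saved_state.items():
        drop = DECAY_PER_DAY.get(stat, 0) * days_away
        new_state[stat] = max(0, min(100, round(value - drop)))
    return new_state
```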
losing
## Inspiration Although Canada has free health care, it can be very difficult to meet with a doctor regularly, and many people find it difficult to start and adhere to a nutrition plan. By being held accountable to their doctor, we hope to increase the chances of the patient sticking to the doctor-recommended regime, while streamlining the communication process as much as possible. Our goal was to create an app that not only helps a patient stick to a nutrition plan, but also provides them with support and guidance. ## What it does The website allows doctors to track their patients' progress while on the specialized nutrition plan. The app allows patients to access their doctor's recommended dietary nutrition plan on a day-to-day basis. The application also tracks their daily nutritional intake and compares it to their personalized nutrition plan, and it lets users create a grocery list to encourage compliance with their doctor's recommendation. The app also recommends healthy recipes using ingredients found in your refrigerator. ## How we built it After a brainstorming session on Friday evening, we divided tasks among team members. We built a SQL database - using the PostgreSQL database engine - to work as the link between the website accessed by doctors and the app that patients use. We built the Android app in Java using Android Studio and simultaneously built the website using HTML, CSS, and JavaScript. ## Challenges we ran into After connecting the website to the database, we found that we were unable to connect to the database using the traditional Android Studio method. We then attempted to connect the database to the app using alternate solutions such as calling an API; however, due to time constraints, we were unable to finish this step. ## Accomplishments that we're proud of As first-time hackathoners, we are very proud to have produced a working website, database, and application without any prior experience in app or database development. We all learned a lot individually, and as a group we are very proud of the progress we made. ## What we learned In addition to technical knowledge such as programming in Android Studio or SQL, we learned the importance of dividing tasks and individual time management. We learned a lot about using a database in connection with a website or app and the challenges and benefits that it provides. ## What's next for FoodMD FoodMD has a bright future ahead of it. Our next step is hooking the database up to the mobile app and providing increased functionality for both patients and doctors.
## Inspiration According to the CDC's second nutrition report, nutritional deficiency affects less than 10 percent of the American population. However, in some ethnic or age groups, more than one third of the population suffers from nutritional deficiency. The lack of certain nutrients can lower one's quality of life and even lead to fatal illness. ## What it does To start off, a user can simply use an email address and set a password to create a new account. After registering, the user may log in to the website and access more functions. After login, the user can view the dashboard and record their daily meals. The left section allows the user to record each food intake and measures the nutrients in each portion of the food. The user can add multiple entries if they consume more than one food. All entries in a day are saved in this section, and the user may review the entry history. On the right side, the user's daily nutrient intake is analyzed and presented as a pie chart. A calorie count and the percentage of each nutrient appear at the top left of the pie chart, providing health-related info. The user can learn about their nutrient intake through this section of the dashboard. Finally, we have a user profile page, allowing the user to store and change personal information. In this section, the user can update health-related information such as height, weight, and age. By providing this information, the system can better assess the user and provide more accurate advice. ## How we built it We constructed a Firebase Realtime Database to store user data. On the front-end side, our team used React with CSS and HTML to build the website. We also used Figma for UI design and prototyping. As a team, we collaborated using GitHub, with Git as our version control tool. ## Challenges we ran into One of the major challenges we had was assembling the project using React, which caused technical issues for everyone. Since everyone was working on an individual laptop, there were sometimes conflicts between commits. For example, someone might build a section differently from another person, which caused the program to crash. ## Accomplishments that we're proud of Even though our project is not as complete as we had hoped, we are proud of what we accomplished. We built something in an attempt to raise awareness of health issues, and we will continue with this project. ## What we learned Before this, we had basically no experience with full-stack web applications. Although this project is not technically full-stack, we learned a lot. Some of our teammates learned to use React, while others learned to use a NoSQL database like the Firebase Realtime Database. ## What's next for N-Tri By collecting users' information, such as height, weight, and age, we can personalize their nutrition plans and provide better services. In addition, we will add a recipe recommendation section so that everyone can have a better daily meal plan. We would also like to create a more robust UI/UX design.
## Inspiration Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year due to nutrition- and obesity-related diseases such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track everything they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a Big Mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS. ## What it does macroS integrates seamlessly with the Google Assistant on your smartphone and lets you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating every day, conveniently and without hassle, with the powerful built-in natural language processing model. They can view their account in a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve. ## How we built it DialogFlow and the Google Actions Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a request to log a food entry and a simple request for a nutritional breakdown. We deployed our functions, written in Node.js, to the Firebase cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to Nutritionix, which provides natural language processing for querying a database of over 900k grocery and restaurant foods. A MongoDB database is to be used to store user accounts and pass data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/JavaScript. ## Challenges we ran into Learning how to use the different APIs and the Google Actions Console to create intents, contexts, and fulfillment was challenging on its own, but the challenges amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, the data we needed for the Nutritionix queries was often nested deep within the various JSON objects being thrown back and forth between the voice assistant and the cloud functions.
The team was finally able to find what we were looking for after spending a lot of time in the Firebase logs. In addition, the entire team lacked any experience using natural language processing and voice-enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all. ## Accomplishments that we're proud of We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important - self-monitoring of your health and nutrition - much more convenient and even more accessible, we're confident that we can help large numbers of people finally start making sense of what they're consuming on a daily basis. We're able to get full nutritional breakdowns of combinations of foods in a matter of **seconds** that would otherwise take upwards of 30 minutes of tedious Google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice-enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering a product in the short amount of time that we had, given the levels of experience we came into this hackathon with. ## What we learned We made and deployed the cloud functions that integrate with our Google Actions Console and trained the NLP model to differentiate between a food log and a nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation for the power of voice-enabled technologies. Team members who were interested in honing their front-end skills also got the opportunity to do so by working on the actual web application. This was also most team members' first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project, but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities. ## What's next for macroS We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable, and once the database is complete it can be made valuable to anybody who would like to monitor their health and nutrition more closely. Being able, as a user, to identify your own age, gender, weight, height, and possible dietary diseases could help macroS give users suggestions on what their goals should be; in addition, we could build custom queries for certain profiles of individuals. For example, if a diabetic person asks macroS whether they can eat a chocolate bar for lunch, macroS would tell them no, because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this!
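The nutrient-lookup step in the cloud function reduces to one call to Nutritionix's natural-language endpoint. A Python sketch of that call is below (the deployed function is Node.js); the header names and response fields should be checked against current Nutritionix documentation.

```python
# Ask Nutritionix to parse a free-text meal and sum the macros it returns.
import requests

def nutrition_breakdown(query: str, app_id: str, app_key: str) -> dict:
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": app_id, "x-app-key": app_key},
        json={"query": query},  # e.g. "a cup of grapes, 12 almonds and a big mac"
        timeout=15,
    )
    resp.raise_for_status()
    foods = resp.json().get("foods", [])
    return {
        "calories": sum(f.get("nf_calories", 0) for f in foods),
        "protein_g": sum(f.get("nf_protein", 0) for f in foods),
        "carbs_g": sum(f.get("nf_total_carbohydrate", 0) for f in foods),
        "fat_g": sum(f.get("nf_total_fat", 0) for f in foods),
    }
```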
losing
Learning a new language doesn't happen inside an app or in front of a screen. It happens in real life. So we created Dojo, an immersive and interactive language learning platform, powered by artificial intelligence, that supports up to 10 different languages. Our lessons are practical. They're personalized to your everyday life, encouraging you to make connections with the world around you. Go on a scavenger hunt and take photos of objects that match the given hint to unlock the next level. Open the visual dictionary and snap a photo of something to learn how to describe it in the language you're learning.
## Inspiration The inspiration behind Mentis emerged from a realization of the vast potential that a personalized learning AI platform holds in transforming education. We envisioned an AI-driven mentor capable of adapting to individual learning styles and needs, making education more accessible, engaging, and effective for everyone. The idea was to create an AI that could dynamically update its teaching content based on live user questions, ensuring that every learner could find a path that suits them best, regardless of their background or level of knowledge. We wanted to build something at the intersection of accessibility, interactivity, and audiovisual learning. ## What it does Mentis is an AI-powered educational platform that offers personalized learning experiences across a wide range of topics. It generates and teaches animated lesson plans with both visuals and audio, produces checkpoint questions for the user, and listens to users' questions, dynamically adjusting the remainder of the teaching content and teaching methods to suit their individual learning preferences. Whether it's mathematics, science, or economics, Mentis provides tailored guidance, ensuring that users not only receive answers to their questions but also gain a deep understanding of the subject matter. ## How we built it At its core, a fast API backend powers the intelligent processing and dynamic delivery of educational content, ensuring rapid responses to user queries. This backend is complemented by our use of advanced Large Language Models (LLMs), which have been fine-tuned to understand a diverse range of educational topics and to specialize in code generation for animation, enhancing the platform's ability to deliver tailored learning experiences. We curated a custom dataset in order to leverage LLMs to the fullest and reduce errors in both script and code generation. Using our curated datasets, we fine-tuned models with MonsterAPI, tailoring our LLMs and improving accuracy. We implemented several API calls to ensure smooth and dynamic operation of the platform: overall organization of the lesson plan, script generation, audio generation with ElevenLabs, and code generation for the Manim library we use to create the animations on our front end in Bun and Next.js. [Fine tuned open source model](https://huggingface.co/generaleoley/mixtral-8x7b-manim-lora/tree/main) [Curated custom dataset](https://huggingface.co/datasets/generaleoley/manim-codegen) ## Challenges we ran into Throughout the development of Mentis, we encountered significant challenges, particularly in setting up environments and installing various dependencies. These hurdles consumed a considerable amount of our time, persisting until the final stages of development. Every stage of our application had issues we had to address: generating dynamic sections for our video scripts, ensuring that the generated code actually renders the animation, and integrating the text-to-speech component to generate audio for our educational content all introduced layers of complexity, requiring precise tuning and a lot of experimentation to set up. The number of API calls needed to fetch, update, and manage content dynamically, coupled with ensuring seamless interaction between the user and our application, demanded a meticulous approach. We found ourselves in a constant battle to maintain efficiency and reliability as we tried to keep latency low for the practicality and interactivity of our product.
## Accomplishments that we're proud of Despite the setbacks, we are incredibly proud of: * Overcoming Technical Hurdles: working through significant technical problems, learning from them, and enhancing our problem-solving capabilities. * Versatile System: enabling our platform to cover a broad range of topics, making learning accessible to everyone. * Adaptive Learning: developing a system that can truly adapt to each user's unique learning style and needs. * User-Friendly UI: creating a user-friendly design and experience, keeping our application as accessible as possible. * API Management: successfully managing numerous API calls and smoothing the backend operation as much as possible for a seamless user experience. * Fine-Tuned, Tailored Models: going through the full process of data exploration and cleaning, model selection, and configuring the fine-tuned model. ## What we learned On the backend, our biggest challenge and learning point was the setup, coordination, and training of multiple AI agents and APIs. For all of us, this was our first time fine-tuning an LLM, and we learned many things through the process, such as dataset selection, model selection, and fine-tuning configuration. We gained an appreciation for all the great work being done by researchers in this area. With careful tuning and prompting, we were able to greatly increase the efficiency and accuracy of the models. We also learned a lot about coordinating multi-agent systems and how to run them together efficiently and concurrently. We tested many architectures and ended up settling on one that optimizes first for accuracy and then for speed. To accomplish this, we set up an asynchronous query system where multiple "frames" can be generated at once, allowing us not to be blocked by cloud computation time. ## What's next for mentis.ai Looking ahead, Mentis.ai has exciting plans for improvement and expansion: **Reducing Latency:** We're committed to enhancing efficiency, aiming to further minimize latency and optimize performance across the platform. **Innovative Features:** Given more time, we plan to integrate cutting-edge features, like using the HeyGen API to create natural videos of personalized AI tutors, combining custom images, videos, and audio for a richer learning experience. **Classroom Integration:** We're exploring opportunities to bring Mentis into classroom settings, testing its effectiveness in a real-world educational environment and tailoring its capabilities to support teachers and students alike.
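The "multiple frames at once" idea above reduces to firing the per-section generation calls concurrently while keeping the results in lesson order. A minimal asyncio sketch follows; `generate_frame` is a stand-in for the real script + Manim-code + audio pipeline, not the production code.

```python
# Generate all lesson frames concurrently instead of one after another.
import asyncio

async def generate_frame(section: dict) -> dict:
    # Hypothetical placeholder: call the fine-tuned model, render Manim, synthesize audio.
    await asyncio.sleep(0)
    return {"title": section["title"], "video": b"", "audio": b""}

async def generate_lesson(sections: list[dict]) -> list[dict]:
    # gather() preserves input order, so frames come back in lesson order even though
    # the slow cloud calls overlap rather than run sequentially.
    return await asyncio.gather(*(generate_frame(s) for s in sections))

# frames = asyncio.run(generate_lesson(lesson_plan["sections"]))
```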
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore background noise and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface where restaurant staff can view and fulfill customer orders. There will also be individual kiosk devices to handle order input. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast-food chains.
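The order-extraction step can be sketched as handing the transcript to a model with a strict JSON-only prompt and parsing the result. The provider, model name, and schema below are assumptions for illustration; the deployed backend may use a different service or prompt.

```python
# Turn a free-form voice transcript into a structured order.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Extract the food order from this transcript as JSON: "
    '{"items": [{"name": str, "size": str | null, "modifications": [str], "quantity": int}]}. '
    "Ignore background chatter that is not part of an order. Transcript: "
)

def extract_order(transcript: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT + transcript}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```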
partial
## Inspiration Our team grew up in the Bay Area and Chicago, watching our beautiful cities, surrounded by natural landscapes, continually polluted and littered. It's heartbreaking to see the parks and streets you love trashed by people who don't seem to care. But we've all been there: you need to throw something out but don't see a waste bin for blocks in either direction. Do you wait for a bin or gently set your empty coffee cup next to the curb? No one wants to scorn the environment; it's just convenience, and cities can't afford to place trash bins everywhere. That's why we created LitterBug to help tackle cities' trash problem. LitterBug uses easily accessible and collectible data to produce heat maps of where trash is concentrated in a city and gives smart suggestions, such as minimizing the number of trash bins while keeping them a short distance from clusters of litter. ## What it does We decided to try to attack the dirtiest city in America: New York City (travelandleisure.com). LitterBug can use data from either a vehicle recording geotagged images (for up-to-date coverage) or Google Street View static images. We used the Google Maps Street View Static API to collect images in a grid over Manhattan corresponding to the city blocks. We oriented the heading of each image to be perpendicular to the street it's on so the image faces the sidewalks. Each image is placed in a graph of nodes and edges. The images are held on a Microsoft Azure cloud computing server and used to run image recognition for litter. This is done using TensorFlow with an open-source model we found. The object detection is run on GPUs, and the number of detections (pieces of litter) is stored for that coordinate. Finally, the map is put together with the litter detections to create a heat map of litter in New York. This can be used for deploying volunteers or street cleaners as well as placing new recycling and waste bins. We created a novel algorithm that groups areas of litter into clusters and places a small number of waste bins so as to minimize the walking distance along each street from pieces of litter to the nearest bin. This allows the municipal body to only maintain trash bins where they are needed, while reducing litter and inconvenience for residents. ## How we built it The project has 3 major parts: 1-Transforming the map of a geofenced area into a graph model and requesting Google Street View images along each edge 2-Setting up litter object detection for city images on a Microsoft Azure server and labeling them using GPU processing 3-Importing the graph and labels into a heat map of litter and running waste bin optimization algorithms Together, these pieces make up a pipeline that can help city governments make smart decisions about keeping their city clean. ## Challenges we ran into 1-The main issue we ran into was graphics drivers for the Azure server. Without them, running image recognition on so many images would have taken over a day. 2-Making decisions in the waste bin algorithm on how to cluster litter and balance the number of bins against the quantity of litter 3-Working with the Street View Static API to save images and give them the right heading (from a 360° sphere) ## Accomplishments that we're proud of We are super proud of the scalability of the project. Compared to looking at a few pictures from Mass Ave, covering part of a huge city is very exciting. We are also proud of the use of cloud computing to make the image recognition easier, as well as our novel waste bin optimization algorithm!
## What we learned We learned a lot about APIs and cloud computing while working on LitterBug. It's one thing to write code in an IDE, but another altogether to integrate several libraries, APIs, and hosts into an efficient pipeline. Working with APIs was syntactically difficult, especially when you don't fully understand the code you're working with, but in the end it makes the project much more powerful. We also learned to think big. Originally, we planned to use a dozen pictures from Mass Ave, but partway through the project we realized that through Google Cloud we could run data on a whole city despite being hundreds of miles away. ## What's next for LitterBug The most important addition to make LitterBug useful is a good GUI for either geofencing an area on Google Maps or importing data from a municipal vehicle that circuits the city. With Azure, the pipeline can run without demanding much from the user or the user's computer.
## Inspiration My teammate and I grew up in Bolivia, where recycling has not taken much of a hold in society unfortunately. As such, once we moved to the US and had to deal with properly throwing away the trash to the corresponding bin, we were a bit lost sometimes on how to determine which bin to use. What better way to solve this problem than creating an app that will do it for us? ## What it does By opening EcoSnap, you can take a picture of a piece of trash using the front camera, after which the image will be processed by a machine learning algorithm that will classify the primary object and give the user an estimate of the confidence percentage and in which bin the trash should go to. ## How we built it We decided to use Flutter to make EcoSnap because of its ability to run on multiple platforms with only one main source file. Furthermore, we also really liked its "Hot Reload" feature which allowed us to see the changes in our app instantly. After creating the basic UI and implementing image capturing capabilities, we connected to Google's Cloud Vision and OpenAI's GPT APIs. With this done, we fed Vision the image that was captured, which then returned its classification. Then, we fed this output to GPT, which told us which bin we should put it in. Once all of this information was acquired, a new screen propped up informing the user of the relevant information! ## Challenges we ran into Given this was our first hackathon and we did not come into it with an initial idea, we spent a lot of time deciding what we should do. After coming up with the idea and deciding on using Flutter, we had to learn from 0 how to use it as well as Dart, which took also a long time. Afterwards, we had issue implementing multiple pages in our app, acquiring the right information from the APIs, feeding correct state variables, creating a visually-appealing UI, and other lesser issues. ## Accomplishments that we're proud of This is the first app we create, a huge step towards our career in the industry and a nice project we can add to our resume. Our dedication and resilience to keep pushing and absorbing information created an experience we will never forget. It was great to learn Flutter given its extreme flexibility in front-end development. Last but not least, we are proud by our dedication to the end goal of never having to doubt whether the trash we are throwing away is going into the wrong bin. ## What we learned We learned Flutter. We learned Dart. We learned how to implement multiple APIs into one application to provide the user with very relevant information. We learned how to read documentation. We learned how to learn a language quickly. ## What's next for EcoSnap Hopefully, win some prizes at the Hackathon and keep developing the app for an AppStore release over Thanksgiving! Likewise, we were also thinking of connecting a hardware component in the future. Basically, it would be a tiny microprocessor connected to a tiny camera connected to an LED light/display. This hardware would be placed on top of trash bins so that people can know very quickly where to throw their trash!
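The app itself is written in Flutter/Dart, but the Vision-then-GPT hand-off it describes can be sketched server-side in Python (kept in Python for consistency with the other examples); the model name, prompt wording, and three-bin vocabulary are assumptions:

```python
from google.cloud import vision
from openai import OpenAI

def classify_trash(image_bytes: bytes) -> str:
    # 1) Cloud Vision labels the photo's primary contents
    labels = vision.ImageAnnotatorClient().label_detection(
        image=vision.Image(content=image_bytes)
    ).label_annotations
    top = ", ".join(l.description for l in labels[:5])   # e.g. "plastic bottle, drink, ..."
    # 2) a chat model decides which bin those labels belong to
    answer = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"An item labeled '{top}' is being thrown away. "
                              "Answer with one word: recycling, compost, or landfill."}],
    )
    return answer.choices[0].message.content.strip().lower()
```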
## Inspiration Bill - "Blindness is a major problem today and we hope to have a solution that takes a step in solving this" George - "I like engineering" We hope our tool gives nonzero contribution to society. ## What it does Generates a description of a scene and reads the description for visually impaired people. Leverages CLIP/recent research advancements and own contributions to solve previously unsolved problem (taking a stab at the unsolved **generalized object detection** problem i.e. object detection without training labels) ## How we built it SenseSight consists of three modules: recorder, CLIP engine, and text2speech. ### Pipeline Overview Once the user presses the button, the recorder beams it to the compute cluster server. The server runs a temporally representative video frame through the CLIP engine. The CLIP engine is our novel pipeline that emulates human sight to generate a scene description. Finally, the generated description is sent back to the user side, where the text is converted to audio to be read. [Figures](https://docs.google.com/presentation/d/1bDhOHPD1013WLyUOAYK3WWlwhIR8Fm29_X44S9OTjrA/edit?usp=sharing) ### CLIP CLIP is a model proposed by OpenAI that maps images to embeddings via an image encoder and text to embeddings via a text encoder. Similiar (image, text) pairs will have a higher dot product. ### Image captioning with CLIP We can map the image embeddings to text embeddings via a simple MLP (since image -> text can be thought of as lossy compression). The mapped embedding is fed into a transformer decoder (GPT2) that is fine-tuned to produce text. This process is called CLIP text decoder. ### Recognition of Key Image Areas The issue with Image captioning the fed input is that an image is composed of smaller images. The CLIP text decoder is trained on only images containing one single content (e.g. ImageNet/MS CoCo images). We need to extract the crops of the objects in the image and then apply CLIP text decoder. This process is called **generalized object detection** **Generalized object detection** is unsolved. Most object detection involves training with labels. We propose a viable approach. We sample crops in the scene, just like how human eyes dart around their view. We evaluate the fidelity of these crops i.e. how much information/objects the crop contains by embedding the crop using clip and then searching a database of text embeddings. The database is composed of noun phrases that we extracted. The database can be huge, so we rely on SCANN (Google Research), a pipeline that uses machine learning based vector similarity search. We then filter all subpar crops. The remaining crops are selected using an algorithm that tries to maximize the spatial coverage of k crop. To do so, we sample many sets of k crops and select the set with the highest all pairs distance. ## Challenges we ran into The hackathon went smoothly, except for the minor inconvenience of getting the server + user side to run in sync. ## Accomplishments that we're proud of Platform replicates the human visual process with decent results. Subproblem is generalized object detection-- proposed approach involving CLIP embeddings and fast vector similarity search Got hardware + local + server (machine learning models on MIT cluster) + remote apis to work in sync ## What's next for SenseSight Better clip text decoder. Crops tend to generate redundant sentences, so additional pruning is needed. Use GPT3 to remove the redundancy and make the speech flower. 
Real-time operation can be achieved by using proper networking protocols instead of scp + time.sleep hacks. To accelerate inference on crops, we can run multi-GPU inference. ## Fun Fact The logo is generated by DALL-E :p
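The coverage-maximizing crop selection described in the write-up above can be sketched with plain NumPy: sample random subsets of k filtered crops and keep the subset whose centers have the largest all-pairs distance. The parameter values are illustrative only:

```python
import numpy as np

def select_crops(centers: np.ndarray, k: int = 5, trials: int = 200, seed: int = 0):
    """centers: (N, 2) array of (x, y) crop centers that survived the fidelity filter, N >= k."""
    rng = np.random.default_rng(seed)
    best_idx, best_score = None, -1.0
    for _ in range(trials):
        idx = rng.choice(len(centers), size=k, replace=False)   # one candidate set of k crops
        pts = centers[idx]
        diffs = pts[:, None, :] - pts[None, :, :]               # (k, k, 2) pairwise offsets
        score = np.sqrt((diffs ** 2).sum(-1)).sum()             # total all-pairs distance
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```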
losing
## Inspiration GeoGuesser is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations in addendum to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers! ## What it does The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with the location, like the name of the movie from which we selected the location. You get points for how close you are to the location and if you got the bit of trivia correct or not. ## How we built it We used the *discord.py* library for actually coding the bot and interfacing it with discord. We stored our playlist data in external *excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* python libraries for accessing the google maps streetview APIs. ## Challenges we ran into For initially storing the data, we thought to use a playlist class while storing the playlist data as an array of playlist objects, but instead used excel for easier storage and updating. We also had some problems with the Google Maps Static Streetview API in the beginning, but they were mostly syntax and understanding issues which were overcome soon. ## Accomplishments that we're proud of Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine Formula for Distances on Spheres was also an accomplishment we're proud of. ## What we learned We learned better syntax and practices for writing Python code. We learnt how to use the Google Cloud Platform and Streetview API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about Human Computer Interaction as designing an interface for gameplay was rather interesting on Discord. ## What's next for Geodude? Possibly adding more topics, and refining the loading of streetview images to better reflect the actual location.
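The Haversine-based scoring mentioned above looks roughly like this; the Earth radius is standard, but the falloff constant and point scale are assumed examples rather than the bot's actual tuning:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def score(guess, answer, max_points=5000, falloff_km=2000):
    """Linearly decaying points: full marks at the exact spot, zero beyond falloff_km."""
    d = haversine_km(*guess, *answer)
    return round(max_points * max(0.0, 1 - d / falloff_km))
```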
## Inspiration Many hackers cast their vision forward, looking for futuristic solutions for problems in the present. Instead, we cast our eyes backwards in time, looking to find our change in restoration and recreation. We were drawn to the ancient Athenian Agora -- a marketplace; not one where merchants sold goods, but one where thinkers and orators debated, discussed, and deliberated (with one another?) pressing social-political ideas and concerns. The foundation of community engagement in its era, the premise of the Agora survived in one form or another over the years in the various public spaces that have been focal points for communities to come together -- from churches to community centers. In recent years, however, local community engagement has dwindled with the rise in power of modern technology and the Internet. When you're talking to a friend on the other side of the world, you're not talking a friend on the other side of the street. When you're organising with activists across countries, you're not organising with activists in your neighbourhood. The Internet has been a powerful force internationally, but Agora aims to restore some of the important ideas and institutions that it has left behind -- to make it just as powerful a force locally. ## What it does Agora uses users' mobile phone's GPS location to determine the neighbourhood or city district they're currently in. With that information, they may enter a chat group specific to that small area. Having logged-on via Facebook, they're identified by their first name and thumbnail. Users can then chat and communicate with one another -- making it easy to plan neighbourhood events and stay involved in your local community. ## How we built it Agora coordinates a variety of public tools and services (for something...). The application was developed using Android Studio (Java, XML). We began with the Facebook login API, which we used to distinguish and provide some basic information about our users. That led directly into the Google Maps Android API, which was a crucial component of our application. We drew polygons onto the map corresponding to various local neighbourhoods near the user. For the detailed and precise neighbourhood boundary data, we relied on StatsCan's census tracts, exporting the data as a .gml and then parsing it via python. With this completed, we had almost 200 polygons -- easily covering Hamilton and the surrounding areas - and a total of over 50,000 individual vertices. Upon pressing the map within the borders of any neighbourhood, the user will join that area's respective chat group. ## Challenges we ran into The chat server was our greatest challenge; in particular, large amounts of structural work would need to be implemented on both the client and the server in order to set it up. Unfortunately, the other challenges we faced while developing the Android application diverted attention and delayed process on it. The design of the chat component of the application was also closely tied with our other components as well; such as receiving the channel ID from the map's polygons, and retrieving Facebook-login results to display user identification. A further challenge, and one generally unexpected, came in synchronizing our work as we each tackled various aspects of a complex project. With little prior experience in Git or Android development, we found ourselves quickly in a sink-or-swim environment; learning about both best practices and dangerous pitfalls. 
It was demanding and often frustrating early on, but it paid off immensely as the hack came together and the night went on. ## Accomplishments that we're proud of 1) Building a functioning Android app that incorporated a number of challenging elements. 2) Being able to make something that is really unique and really important. This is an issue that isn't going away and that is at the heart of a lot of social deterioration. Fixing it is key to effective positive social change -- and hopefully this is one step in that direction. ## What we learned 1) Get Git to Get Good. It's incredible how much of a weight off our shoulders it was to not have to worry about file versions or maintenance, given the sprawling size of an Android app. Git handled it all, and I don't think any of us will be working on a project without it again. ## What's next for Agora First and foremost, the chat service will be fully expanded and polished. The most obvious next step is expansion, which could be easily done by incorporating further census data. StatsCan has data for all of Canada that could be easily extracted, and we could rely on similar data sets from the U.S. Census Bureau to move internationally. Beyond simply expanding our scope, however, we would also like to add various other methods of engaging with the local community. One example would be temporary chat groups that form around given events -- from arts festivals to protests -- which would be similarly narrow in scope but not constrained to pre-existing neighbourhood definitions.
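The neighbourhood lookup at the heart of Agora (GPS point, to census-tract polygon, to chat channel) can be sketched with Shapely, assuming the tract polygons parsed from the .gml are available as vertex lists keyed by an ID; the data layout here is an assumption:

```python
from shapely.geometry import Point, Polygon

def build_tracts(tract_vertices: dict[str, list[tuple[float, float]]]) -> dict[str, Polygon]:
    """tract_vertices maps a tract/channel ID to its (lng, lat) boundary vertices."""
    return {tract_id: Polygon(verts) for tract_id, verts in tract_vertices.items()}

def channel_for_location(lng: float, lat: float, tracts: dict[str, Polygon]):
    here = Point(lng, lat)
    for tract_id, poly in tracts.items():
        if poly.contains(here):
            return tract_id        # used as the chat channel ID
    return None                    # outside all known neighbourhoods
```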
## Inspiration This idea was inspired by the online game 'city guesser' [link](https://virtualvacation.us/multiplayer), in which a player, or multiple players watch a video walking through a specific place anywhere in the world and must guess the video's location on a world map. We wanted to do something similar, in the sense that we wanted to give people a chance to learn about what different parts of the world are like. As such, we decided to explore various cuisine around the globe in a simpler fashion when compared to city guesser (as this is our first hackathon) using a discord bot. ## What it does When prompted by the user with the command 'guess', the bot responds with a randomly generated image of food. The user is then given 3 attempts to guess which Country the dish is from, with the bot noting how many guesses the user has remaining. If the user gets the answer correct within 3 attempts, the bot responds with "Correct!", else the bot responds with "Unfortunate, you ran out of guesses, the answer was", then providing the correct answer to the user via a spoilered message (spoilered messages on discord are messages that need to be clicked on in order to view). ## How we built it We built this bot using Python and Discord's developer portal. ## Challenges we ran into Since this was our first time creating a Discord bot with Python, we ran into some initial issues of figuring how to set up the bot and getting it to carry out specific actions. After conducting some research and having perseverance, we were able to work together to create a working final product. ## Accomplishments that we're proud of Creating a working Discord bot, learning how to format certain aspects of the bot such as embeds, figuring out how to allow the user multiple guesses, and working as a team to construct the project creatively and efficiently. ## What we learned How different functions of a discord bot can be implemented, general discord bot documentation, how to write more efficient code, and how to collaborate effectively. ## What's next for Botplup Our future plans for Botplup involve the implementation of a leaderboard and profile where users can view their overall past score, win rate, and ranking against other users within the server. As well, we may add more images to our database for greater variation and may choose to add more questions about the dish, such its specific name to increase difficulty.
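A hedged sketch of the guess flow with discord.py 2.x, matching the three-attempt, spoilered-answer behaviour described above; the command name, messages, and the one-entry placeholder database are assumptions:

```python
import os
import random
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

DISHES = [("https://example.com/pad_thai.jpg", "thailand")]   # placeholder image/answer database

@bot.command()
async def guess(ctx):
    image_url, answer = random.choice(DISHES)
    embed = discord.Embed(title="Which country is this dish from?").set_image(url=image_url)
    await ctx.send(embed=embed)
    for attempt in range(3):
        msg = await bot.wait_for(
            "message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel
        )
        if msg.content.strip().lower() == answer:
            await ctx.send("Correct!")
            return
        if attempt < 2:
            await ctx.send(f"Not quite! {2 - attempt} guesses remaining.")
    await ctx.send(f"Unfortunate, you ran out of guesses, the answer was ||{answer.title()}||")

bot.run(os.environ["DISCORD_TOKEN"])
```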
winning
## Inspiration Retinal degeneration affects 1 in 3000 people, slowly robbing them of vision over the course of their mid-life. The need to adjust to life without vision, often after decades of relying on it for daily life, presents a unique challenge to individuals facing genetic disease or ocular injury, one which our teammate saw firsthand in his family, and inspired our group to work on a modular, affordable solution. Current technologies which provide similar proximity awareness often cost many thousands of dollars, and require a niche replacement in the user's environment; (shoes with active proximity sensing similar to our system often cost $3-4k for a single pair of shoes). Instead, our group has worked to create a versatile module which can be attached to any shoe, walker, or wheelchair, to provide situational awareness to the thousands of people adjusting to their loss of vision. ## What it does (Higher quality demo on google drive link!: <https://drive.google.com/file/d/1o2mxJXDgxnnhsT8eL4pCnbk_yFVVWiNM/view?usp=share_link> ) The module is constantly pinging its surroundings through a combination of IR and ultrasonic sensors. These are readily visible on the prototype, with the ultrasound device looking forward, and the IR sensor looking to the outward flank. These readings are referenced, alongside measurements from an Inertial Measurement Unit (IMU), to tell when the user is nearing an obstacle. The combination of sensors allows detection of a wide gamut of materials, including those of room walls, furniture, and people. The device is powered by a 7.4v LiPo cell, which displays a charging port on the front of the module. The device has a three hour battery life, but with more compact PCB-based electronics, it could easily be doubled. While the primary use case is envisioned to be clipped onto the top surface of a shoe, the device, roughly the size of a wallet, can be attached to a wide range of mobility devices. The internal logic uses IMU data to determine when the shoe is on the bottom of a step 'cycle', and touching the ground. The Arduino Nano MCU polls the IMU's gyroscope to check that the shoe's angular speed is close to zero, and that the module is not accelerating significantly. After the MCU has established that the shoe is on the ground, it will then compare ultrasonic and IR proximity sensor readings to see if an obstacle is within a configurable range (in our case, 75cm front, 10cm side). If the shoe detects an obstacle, it will activate a pager motor which vibrates the wearer's shoe (or other device). The pager motor will continue vibrating until the wearer takes a step which encounters no obstacles, thus acting as a toggle flip-flop. An RGB LED is added for our debugging of the prototype: RED - Shoe is moving - In the middle of a step GREEN - Shoe is at bottom of step and sees an obstacle BLUE - Shoe is at bottom of step and sees no obstacles While our group's concept is to package these electronics into a sleek, clip-on plastic case, for now the electronics have simply been folded into a wearable form factor for demonstration. ## How we built it Our group used an Arduino Nano, batteries, voltage regulators, and proximity sensors from the venue, and supplied our own IMU, kapton tape, and zip ties. (yay zip ties!) I2C code for basic communication and calibration was taken from a user's guide of the IMU sensor. Code used for logic, sensor polling, and all other functions of the shoe was custom. All electronics were custom. 
Testing was done on the circuits by first assembling the Arduino Microcontroller Unit (MCU) and sensors on a breadboard, powered by laptop. We used this setup to test our code and fine tune our sensors, so that the module would behave how we wanted. We tested and wrote the code for the ultrasonic sensor, the IR sensor, and the gyro separately, before integrating as a system. Next, we assembled a second breadboard with LiPo cells and a 5v regulator. The two 3.7v cells are wired in series to produce a single 7.4v 2S battery, which is then regulated back down to 5v by an LM7805 regulator chip. One by one, we switched all the MCU/sensor components off of laptop power, and onto our power supply unit. Unfortunately, this took a few tries, and resulted in a lot of debugging. . After a circuit was finalized, we moved all of the breadboard circuitry to harnessing only, then folded the harnessing and PCB components into a wearable shape for the user. ## Challenges we ran into The largest challenge we ran into was designing the power supply circuitry, as the combined load of the sensor DAQ package exceeds amp limits on the MCU. This took a few tries (and smoked components) to get right. The rest of the build went fairly smoothly, with the other main pain points being the calibration and stabilization of the IMU readings (this simply necessitated more trials) and the complex folding of the harnessing, which took many hours to arrange into its final shape. ## Accomplishments that we're proud of We're proud to find a good solution to balance the sensibility of the sensors. We're also proud of integrating all the parts together, supplying them with appropriate power, and assembling the final product as small as possible all in one day. ## What we learned Power was the largest challenge, both in terms of the electrical engineering, and the product design- ensuring that enough power can be supplied for long enough, while not compromising on the wearability of the product, as it is designed to be a versatile solution for many different shoes. Currently the design has a 3 hour battery life, and is easily rechargeable through a pair of front ports. The challenges with the power system really taught us firsthand how picking the right power source for a product can determine its usability. We were also forced to consider hard questions about our product, such as if there was really a need for such a solution, and what kind of form factor would be needed for a real impact to be made. Likely the biggest thing we learned from our hackathon project was the importance of the end user, and of the impact that engineering decisions have on the daily life of people who use your solution. For example, one of our primary goals was making our solution modular and affordable. Solutions in this space already exist, but their high price and uni-functional design mean that they are unable to have the impact they could. Our modular design hopes to allow for greater flexibility, acting as a more general tool for situational awareness. ## What's next for Smart Shoe Module Our original idea was to use a combination of miniaturized LiDAR and ultrasound, so our next steps would likely involve the integration of these higher quality sensors, as well as a switch to custom PCBs, allowing for a much more compact sensing package, which could better fit into the sleek, usable clip on design our group envisions. 
Additional features might include the use of different vibration modes to signal directional obstacles and paths, and indeed expanding our group's concept of modular assistive devices to other solution types. We would also look forward to making a more professional demo video. Current example clip of the prototype module taking measurements: <https://youtube.com/shorts/ECUF5daD5pU?feature=share>
## Inspiration Have you ever met a blind engineer? Odds are you haven't, because it is much harder for blind people to acquire information compared to those with sight. Some of the current best solutions are text-to-braille devices, which can scan in letters and translate them to tactile braille dots. However, these devices can translate just a handful of letters at a time, and cost upwards of $4,000, making them an extremely niche item. If the cost can be driven down and the number of letters increased, then the playing field can begin to be leveled for the blind. ## What it does Sight Stone can take in an image file, or it can take the picture by itself. The image is then processed, parsed to characters, and translated to braille. From there, control signals are sent to mechanical actuators, which raise a grid of dots and allow the user to feel the letters in braille with their fingertips. ## How we built it Sight Stone is built off a Raspberry Pi. The image is taken using a Pi camera, and processed using OpenCV. Control signals are sent to linear shift registers. Each mechanical actuator is connected to a power MOSFET, whose gate is driven by an output of a linear shift register. (Note: due to resource constraints, we were only able to manufacture one actuator. The rest are simulated by LEDs, driven identically to how the actuators would be if there were more). The mechanical actuator is custom-designed and built from scratch by our team, and backed by in-depth calculations and detailed Solidworks drawings. The vertical motion is produced by two counter opposing components- a spring and a shape memory alloy (SMA). SMA, also dubbed "memory metal," can return to a certain shape when heated. That "certain shape" can be set by annealing the metal, or heating it to an even higher temperature. To put the actuator in the "up" position, simply leave it be. The spring pushes the pad on top up, and the SMA provides only a small opposing force. To pull the pad down, we drive the nMOS gate high, allowing current to run through the SMA, heating it up, and contracting it into a coil shape. The SMA exerts a force in the downward direction larger in magnitude than the spring, and so the pad is pulled down. When combined with linear shift registers, we can independently actuate a large number of dots quickly, cheaply, and effectively. ## Challenges we ran into 1) The linear shift register did not work for a very long time. We eventually realized that we needed Schmidt triggers and resistors to account for the internal capacitance of each MOSFET. 2) We had to scrap and rebuild much of our hardware. We attempted to heat shrink some connections on our proto board to make sure different nodes did not short, which ended in the MOSFETs overheating and frying. 3) When we first started testing the SMA, we noticed that after many runs the metal lost its shape. At first, we just assumed the material was poor and had a very short cycle life span. However, we eventually realized that in our excitement we ran too much current through it, heating it up to such a point that it annealed into the shape it was currently in. From then on we were careful to use the least amount of current possible. ## Accomplishments that we're proud of The design and execution of the mechanical actuator is our primary accomplishment. It is an innovative solution to a well-known problem, and is cheaper to produce than the competition's best technology by two orders of magnitude. 
Scaling the actuator is extremely easy (simply vary the size of the pad, spring, and SMA), and thus Sight Stone can be manufactured at different sizes for different ages and preferences. The number and configuration of the dots can also be altered easily between different models. ## What we learned Next time, we need to plan out our hardware components before competition, and potentially order parts. We spent the first several hours of competition looking through drawers and googling what the parts did, which was valuable time that could have been saved through some prior research. ## What's next for Sight Stone Firstly, we would like to perfect our actuators. With some extra hardware we should be able to control additional parameters such as how fast the pad raises and lowers, and the height to which the pad raises. We would also like to expand our current methods of reading in text to include reading handwriting and scraping from web pages.
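The character-to-dots and shift-register steps described above might look like the following on a Raspberry Pi with RPi.GPIO; the pin numbers, bit ordering, and three-letter braille table are illustrative stand-ins, not the team's actual wiring:

```python
import RPi.GPIO as GPIO

DATA, CLOCK, LATCH = 17, 27, 22          # assumed BCM pins wired to the shift registers
BRAILLE = {"a": 0b100000, "b": 0b110000, "c": 0b100100}   # dots 1..6 as bits (tiny subset)

def setup():
    GPIO.setmode(GPIO.BCM)
    for pin in (DATA, CLOCK, LATCH):
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def show_char(ch: str):
    bits = BRAILLE.get(ch.lower(), 0)
    for i in range(6):                    # shift the six dot bits out one at a time (order depends on wiring)
        GPIO.output(DATA, (bits >> i) & 1)
        GPIO.output(CLOCK, GPIO.HIGH)     # clock the bit into the register
        GPIO.output(CLOCK, GPIO.LOW)
    GPIO.output(LATCH, GPIO.HIGH)         # latch all six outputs to the actuator MOSFETs
    GPIO.output(LATCH, GPIO.LOW)
```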
## Inspiration When I was a kid, I was particularly curious. I would ask my parents a lot of questions, and they'd meet me with enough of an answer for me to leave them alone. This week, I desperately asked my dad if he had any ideas for what to hack, and his first answer, with no hesitation, was an AI toy for children to ask questions to. Meet, corgAI. ## What it does corgAI is a little corgi stuffy that will listen when you wave to it, and then respond to the query. Designed for kids, corgAI is always there to listen, and exists to inspire curiosity in children everywhere. ## How we built it In the course of 36 hours, we built and programmed a custom pcb with a mic, a speaker + amplifier, an IR sensor, and two ESP32s connected over UART. One ESP32 runs the speech-to-text using Google Cloud API and sends the question to ChatGPT's API, while the other handles text-to-speech with ChatGPT's response and the speaker. The IR sensor is used to indicate when to start listening. The circuit board is powered by an elegoo 3.3/5v power module which can be connected directly to a laptop/wall, or to a battery (for when its in the stuffy!) ## Challenges we ran into Literally every step we got to we encountered a new problem. To start, we both have never touched hardware. We winged/yoloed every single part of this pcb, and it works! (kinda..). While soldering, we continuously soldered things incorrectly and had to redo parts, or our plans didn't pan out. We drew out the schematic of how we wanted to wire things, but didn't account for the height of the overlapping parts. Also, we broke the speaker multiple times by burning it. It does not stop there! Even after its first power on, and after diagnosing all the problems that made different parts not turn on, we were met with every software issue possible. Things like drivers that weren't installed, the wrong version of our board, incorrect libraries, + much more. ## Accomplishments that we're proud of Everything! This whole project in and of itself is a huge accomplishment, we completed our first hardware hack, we learned SO much about PCBs and electrical components. Even further, we learned a lot about using Google Cloud API and ChatGPT API. Both of us have used some API's before, but on a much simpler scale. We also got to get both API's to work in tandem, which is super cool to see. ## What we learned READ THE DOCS!! Don't spend 6 hours trying to get an access token you don't need :) ## What's next for corgAI * get a new speaker
partial
## Inspiration In an unprecedented time of fear, isolation, and *can everyone see my screen?*, no ones life has been the same since COVID. We saw people come together to protect others, but also those who refused to wear cloth over their noses. We’ve come up with cutting edge, wearable technology to protect ourselves against the latter, because in 2022, no one wants anyone invading their personal space. Introducing the anti anti-masker mask, the solution to all your pandemic related worries. ## What it does The anti anti-masker mask is a wearable defense mechanism to protect yourself from COVID-19 mandate breakers. It detects if someone within 6 feet of you is wearing a mask or not, and if they dare be mask-less in your vicinity, the shooter mechanism will fire darts at them until they leave. Never worry about anti-maskers invading your personal space again! ## How we built it The mask can be split into 3 main subsystems. **The shooter/launcher** The frame and mechanisms are entirely custom modeled and built using SolidWorks and FDM 3D Printing Technology. We also bought a NERF Gun, and the NERF launcher is powered by a small lipo battery and uses 2 brushless drone motors as flywheels. The darts are automatically loaded into the launcher by a rack and pinion mechanism driven by a servo, and the entire launcher is controlled by an Arduino Nano which receives serial communications from the laptop. **Sensors and Vision** We used a single point lidar to detect whether a non mask wearer is within 6 ft of the user. For the mask detection system, we use a downloadable app to take live video stream to a web server where the processing takes place. Finally, for the vision processing, our OpenCV pipeline reads the data from the webserver. **Code** Other than spending 9 hours trying to install OpenCV on a raspberry pi 🤡 the software was one of the most fun parts. To program the lidar, we used an open source library that has premade methods that can return the distance from the lidar to the next closest object. By checking if the lidar is within 500 and 1500mm, we can ensure that a target that is not wearing a mask is within cough range (6ft) before punishing them. The mask detection with OpenCV allowed us to find those public anti-maskers and then send a signal to the serial port. The Arduino then takes the signals and runs the motors to shoot the darts until the offender is gone. ## Challenges we ran into The biggest challenge was working with the Pi Zero. Installing OpenCV was a struggle, the camera FPS was a struggle, the lidar was a struggle, you get the point. Because of this, we changed the project from Raspi to Arduino, but neither the Arduino Uno or the Arduino Nano ran supported dual serial communication, so we had to downgrade to a VL53L0X lidar, which supported I2C, a protocol that the nano supported. After downloading DFRobot’s VL53L0X’s lidar library, we used their sample code to gather the distance measurement which was used in the final project. Another challenges we faced was designing the feeding mechanism for our darts, we originally wanted to use a slider crank mechanism, however it was designed to be quite compact and as a result the crank caused too much friction with the servo mount and the printed piece cracked. In our second iteration we used a rack and pinion design which significantly reduced the lateral forces and very accurately linearly actuated, this was ultimately used in our final design. 
## Accomplishments that we're proud of We have an awesome working product that's super fun to play with / terrorize your friends with. The shooter, albeit after many painful hours of getting it working, worked SO WELL, and the fact that we adapted and ended up with robust and consistently working software was a huge W as well. ## What we learned Install your Python libraries before the hackathon starts 😢 but also interfacing with lidars, making wearables, all that good stuff. ## What's next for Anti Anti-Masker Mask We would want to add dart targeting and a turret to track victims. During our prototyping process we explored running the separate flywheels at different speeds to try to curve the dart; this would have ensured more accurate shots at our 2 meter range. Ultimately we did not have time to finish this process; however, we would love to explore it in the future. Improve wearability → reduce the laptop by using something like a Jetson or a Pi, and maybe try to shrink the dart shooter or create a more compact “punishment” device. Try to mount it all to one clothing item instead of 2.5.
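A rough laptop-side sketch of the loop described above (read the phone's stream, run mask detection, poke the Arduino over serial). The `is_unmasked` stub stands in for the team's OpenCV pipeline, and the stream URL, serial port, and one-byte protocol are all assumptions:

```python
import cv2
import serial

def is_unmasked(frame) -> bool:
    """Placeholder for the OpenCV mask-detection pipeline described above."""
    raise NotImplementedError

def patrol(stream_url="http://192.168.0.10:8080/video", port="/dev/ttyUSB0"):
    arduino = serial.Serial(port, 9600, timeout=1)
    cap = cv2.VideoCapture(stream_url)
    while True:                          # runs until the process is killed
        ok, frame = cap.read()
        if not ok:
            continue
        if is_unmasked(frame):
            arduino.write(b"F")          # Arduino checks the lidar (500-1500 mm) before firing
```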
## Inspiration In recent times, we have witnessed undescribable tragedy occur during the war in Ukraine. The way of life of many citizens has been forever changed. In particular, the attacks on civilian structures have left cities covered in debris and people searching for their missing loved ones. As a team of engineers, we believe we could use our skills and expertise to facilitate the process of search and rescue of Ukrainian citizens. ## What it does Our solution integrates hardware and software development in order to locate, register, and share the exact position of people who may be stuck or lost under rubble/debris. The team developed a rover prototype that navigates through debris and detects humans using computer vision. A picture and the geographical coordinates of the person found are sent to a database and displayed on a web application. The team plans to use a fleet of these rovers to make the process of mapping the area out faster and more efficient. ## How we built it On the frontend, the team used React and the Google Maps API to map out the markers where missing humans were found by our rover. On the backend, we had a python script that used computer vision to detect humans and capture an image. Furthermore, For the rover, we 3d printed the top and bottom chassis specifically for this design. After 3d printing, we integrated the arduino and attached the sensors and motors. We then calibrated the sensors for the accurate values. To control the rover autonomously, we used an obstacle-avoider algorithm coded in embedded C. While the rover is moving and avoiding obstacles, the phone attached to the top is continuously taking pictures. A computer vision model performs face detection on the video stream, and then stores the result in the local directory. If a face was detected, the image is stored on the IPFS using Estuary's API, and the GPS coordinates and CID are stored in a Firestore Database. On the user side of the app, the database is monitored for any new markers on the map. If a new marker has been added, the corresponding image is fetched from IPFS and shown on a map using Google Maps API. ## Challenges we ran into As the team attempted to use the CID from the Estuary database to retrieve the file by using the IPFS gateway, the marker that the file was attached to kept re-rendering too often on the DOM. However, we fixed this by removing a function prop that kept getting called when the marker was clicked. Instead of passing in the function, we simply passed in the CID string into the component attributes. By doing this, we were able to retrieve the file. Moreover, our rover was initally designed to work with three 9V batteries (one to power the arduino, and two for two different motor drivers). Those batteries would allow us to keep our robot as light as possible, so that it could travel at faster speeds. However, we soon realized that the motor drivers actually ran on 12V, which caused them to run slowly and burn through the batteries too quickly. Therefore, after testing different options and researching solutions, we decided to use a lithium-poly battery, which supplied 12V. Since we only had one of those available, we connected both the motor drivers in parallel. ## Accomplishments that we're proud of We are very proud of the integration of hardware and software in our Hackathon project. 
We believe that our hardware and software components would be complete projects on their own, but the integration of both makes us believe that we went above and beyond our capabilities. Moreover, we were delighted to have finished this extensive project in a short period of time and met all the milestones we set for ourselves at the beginning. ## What we learned The main technical learning we took from this experience was implementing the Estuary API, considering that none of our team members had used it before. This was our first experience using blockchain technology to develop an app that could benefit from the use of public, decentralized data. ## What's next for Rescue Ranger Our team is passionate about this idea and we want to take it further. The ultimate goal of the team is to actually deploy these rovers to save human lives. The team identified areas for improvement and possible next steps. Listed below are objectives we would have loved to achieve but were not possible due to the time constraint and the limited access to specialized equipment. * Satellite Mapping -> This would be more accurate than GPS. * LIDAR Sensors -> Can create a 3D render of the area where the person was found. * Heat Sensors -> We could detect people stuck under debris. * Better Cameras -> Would enhance our usage of computer vision technology. * Drones -> Would navigate debris more efficiently than rovers.
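The detect, upload, and record loop described above could be sketched as below; the Haar cascade face detector, the Estuary `/content/add` endpoint, and the `markers` Firestore collection name are assumptions based on the description, not the team's exact code:

```python
import cv2
import requests
from firebase_admin import credentials, firestore, initialize_app

initialize_app(credentials.Certificate("serviceAccount.json"))   # assumed credentials file
db = firestore.client()
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def process_frame(frame, lat, lng, estuary_token):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return                                               # nobody found in this frame
    ok, jpg = cv2.imencode(".jpg", frame)
    resp = requests.post(
        "https://api.estuary.tech/content/add",              # assumed Estuary upload endpoint
        headers={"Authorization": f"Bearer {estuary_token}"},
        files={"data": ("found.jpg", jpg.tobytes(), "image/jpeg")},
    )
    cid = resp.json()["cid"]                                 # content ID on IPFS
    db.collection("markers").add({"cid": cid, "lat": lat, "lng": lng})
```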
## DEMO WITHOUT PRESENTATION ## **this app would typically be running in a public space** [demo without presentation (judges please watch the demo with the presentation)](https://youtu.be/qNmGr1GJNrE) ## Inspiration We spent **hours** thinking about what to create for our hackathon submission. Every idea that we had already existed. These first hours went by quickly and our hopes of finding an idea that we loved were dwindling. The idea that eventually became **CovidEye** started as an app that would run in the background of your phone and track the type and amount of coughs throughout the day, however we discovered a successful app that already does this. About an hour after this idea was pitched **@Green-Robot-Dev-Studios (Nick)** pitched a variation of this app that would run on a security camera or in the web and track the coughs of people in stores (anonymously). A light bulb immediately lit over all of our heads as this would help prevent covid-19 outbreaks, collect data, and is accessible to everyone (it can run on your laptop as opposed to a security camera). ## What it does **CovidEye** tracks a tally of coughs and face touches live and graphs it for you.**CovidEye** allows you to pass in any video feed to monitor for COVID-19 symptoms within the area covered by the camera. The app monitors the feed for anyone that coughs or touches their face. **\_For demoing purposes, we are using a webcam, but this could easily be replaced with a security camera. Our logic can even handle multiple events by different people simultaneously. \_** ## How we built it We used an AI called PoseNet built by Tensorflow. The data outputted by this AI is passed through through some clever detection logic. Also, this data can be passed on to the government as an indicator of where symptomatic people are going. We used Firebase as the backend to persist the tally count. We created a simple A.P.I. to connect Firebase and our ReactJS frontend. ## Challenges we ran into * We spent about 3 hours connecting the AI count to Firebase and patching it into the react state. * Tweaking the pose detection logic took a lot of trial and error * Deploying a built react app (we had never done that before and had a lot of difficulty resulting in the need to change code within our application) * Optimizing the A.I. garbage collection (chrome would freeze) * Optimizing the graph (Too much for chrome to handle with the local A.I.) ## Accomplishments that we're proud of * **All 3 of us** We are very proud that we thought of and built something that could really make a difference in this time of COVID-19, directly and with statistics. We are also proud that this app is accessible to everyone as many small businesses are not able to afford security cameras. * **@Alex-Walsh (Alex)** I've never touched any form of A.I/M.L. before so this was a massive learning experience for me. I'm also proud to have competed in my first hackathon. * **@Green-Robot-Dev-Studios (Nick)** I'm very proud that we were able to create an A.I. as accurate as it in is the time frame * **@Khalid Filali (Khalid)** I'm proud to have pushed my ReactJS skills to the next level and competed in my first hackathon. ## What we learned * Posenet * ChartJS * A.I. basics * ReactJS Hooks ## What's next for CovidEye -**Refining** : with a more enhanced dataset our accuracy would greatly increase * Solace PubSub, we didn't have enough time but we wanted to create live notifications that would go to multiple people when there is excessive coughing. 
* Individual tallies for each person instead of one shared tally (we didn't have enough time) * Accounts (we didn't have enough time)
winning
## Inspiration Natural disasters do more than just destroy property—they disrupt lives, tear apart communities, and hinder our progress toward a sustainable future. One of our team members from Rice University experienced this firsthand during a recent hurricane in Houston. Trees were uprooted, infrastructure was destroyed, and delayed response times put countless lives at risk. * **Emotional Impact**: The chaos and helplessness during such events are overwhelming. * **Urgency for Change**: We recognized the need for swift damage assessment to aid authorities in locating those in need and deploying appropriate services. * **Sustainability Concerns**: Rebuilding efforts often use non-eco-friendly methods, leading to significant carbon footprints. Inspired by these challenges, we aim to leverage AI, computer vision, and peer networks to provide rapid, actionable damage assessments. Our AI assistant can detect people in distress and deliver crucial information swiftly, bridging the gap between disaster and recovery. ## What it Does The Garuda Dashboard offers a comprehensive view of current, upcoming, and past disasters across the country: * **Live Dashboard**: Displays a heatmap of affected areas updated via a peer-to-peer network. * **Drones Damage Analysis**: Deploy drones to survey and mark damaged neighborhoods using the Llava Vision-Language Model and generate reports for the Recovery Team. * **Detailed Reporting**: Reports have annotations to classify damage types [tree, road, roof, water], human rescue needs, site accessibility [Can response team get to the site by land], and suggest equipment dispatch [Cranes, Ambulance, Fire Control]. * **Drowning Alert**: The drone footage can detect when it identifies a drowning subject and immediately call rescue teams * **AI-Generated Summary**: Reports on past disasters include recovery costs, carbon footprint, and total asset/life damage. ## How We Built It * **Front End**: Developed with Next.js for an intuitive user interface tailored for emergency use. * **Data Integration**: Utilized Google Maps API for heatmaps and energy-efficient routing. * **Real-Time Updates**: Custom Flask API records hot zones when users upload disaster videos. * **AI Models**: Employed MSNet for real-time damage assessment on GPUs and Llava VLM for detailed video analysis. * **Secure Storage**: Images and videos stored on Firebase database. ## Challenges We Faced * **Model Integration**: Adapting MSNet with outdated dependencies required deep understanding of technical papers. * **VLM Setup**: Implementing Llava VLM was challenging due to lack of prior experience. * **Efficiency Issues**: Running models on personal computers led to inefficiencies. ## Accomplishments We're Proud Of * **Technical Skills**: Mastered API integration, technical paper analysis, and new technologies like VLMs. * **Innovative Impact**: Combined emerging technologies for disaster detection and recovery measures. * **Complex Integration**: Successfully merged backend, frontend, and GPU components under time constraints. ## What We Learned * Expanded full-stack development skills and explored new AI models. * Realized the potential of coding experience in tackling real-world problems with interdisciplinary solutions. * Balanced MVP features with user needs throughout development. ## What's Next for Garuda * **Drone Integration**: Enable drones to autonomously call EMS services and deploy life-saving equipment. 
* **Collaboration with EMS**: Partner with emergency services for widespread national and global adoption. * **Broader Impact**: Expand software capabilities to address various natural disasters beyond hurricanes.
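A minimal sketch of the custom Flask API mentioned above that records hot zones when users upload disaster clips; the route names, the Firebase Storage bucket, and the in-memory zone list are assumptions for illustration:

```python
from flask import Flask, request, jsonify
from firebase_admin import credentials, initialize_app, storage

initialize_app(credentials.Certificate("serviceAccount.json"),
               {"storageBucket": "garuda-demo.appspot.com"})   # assumed bucket name
app = Flask(__name__)
hot_zones: list[dict] = []                                     # feeds the heatmap layer

@app.post("/hotzone")
def report_hotzone():
    lat, lng = float(request.form["lat"]), float(request.form["lng"])
    clip = request.files["video"]
    blob = storage.bucket().blob(f"clips/{clip.filename}")
    blob.upload_from_file(clip.stream, content_type=clip.mimetype)   # store the raw footage
    hot_zones.append({"lat": lat, "lng": lng, "clip": blob.name})
    return jsonify(ok=True, zones=len(hot_zones))

@app.get("/heatmap")
def heatmap():
    return jsonify(hot_zones)       # consumed by the Google Maps heatmap on the dashboard
```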
## Inspiration We took inspiration from the multitude of apps that help to connect those who are missing to those who are searching for their loved ones and others affected by natural disasters, especially flooding. We wanted to design a product that not only helped to locate those individuals, but also to rescue those in danger. Through the combination of these services, the process of recovering after natural disasters is streamlined and much more efficient than other solutions. ## What it does Spotted uses a drone to capture and send real-time images of flooded areas. Spotted then extracts human shapes from these images, maps the location of each individual onto a map, and assigns each victim a volunteer so that everyone in need of help is covered. Volunteers can see the location of victims in real time through the mobile or web app and are provided with the best routes for the recovery effort. ## How we built it The backbone of both our mobile and web applications is HERE.com’s intelligent mapping API. The two APIs that we used were the Interactive Maps API, to provide a forward-facing client for volunteers to get an understanding of how an area is affected by flooding, and the Routing API, to connect volunteers to those in need via the most efficient route possible. We also used machine learning and image recognition to identify victims and where they are in relation to the drone. The app was written in Java, and the mobile site was written with HTML, JS, and CSS. ## Challenges we ran into All of us had little experience with web development, so we had to learn a lot because we wanted to implement a web app that was similar to the mobile app. ## Accomplishments that we're proud of We are most proud that our app can collect and store data that is available for flood research and provide real-time assignments to volunteers in order to ensure everyone is covered in the shortest time. ## What we learned We learned a great deal about integrating different technologies, including Xcode. We also learned a lot about web development and the intertwining of different languages and technologies like HTML, CSS, and JavaScript. ## What's next for Spotted We think the future of Spotted is going to be bright! Certainly, it is tremendously helpful for the users, and at the same time, the program improves its own functionality as the available data increases. We might implement a machine learning feature to better utilize the data and predict the situation in target areas. What's more, we believe the accuracy of this prediction function will improve rapidly as the data size increases. Another important feature is that we will be developing optimization algorithms to provide the most efficient real-time solution for the volunteers. Other future development might involve working with specific charity groups and research groups and focusing on specific locations outside the US.
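The volunteer-to-victim assignment via the HERE Routing API could be sketched like this; the v8 endpoint, parameter names, and pedestrian transport mode reflect our reading of the HERE docs and should be verified, and the brute-force closest-volunteer loop is a simplification of whatever assignment logic the team used:

```python
import requests

def best_volunteer(victim, volunteers, api_key):
    """victim: (lat, lng); volunteers: list of (id, lat, lng). Returns the id with the shortest travel time."""
    best_id, best_seconds = None, float("inf")
    for vol_id, lat, lng in volunteers:
        resp = requests.get("https://router.hereapi.com/v8/routes", params={
            "transportMode": "pedestrian",
            "origin": f"{lat},{lng}",
            "destination": f"{victim[0]},{victim[1]}",
            "return": "summary",
            "apiKey": api_key,
        }, timeout=10)
        summary = resp.json()["routes"][0]["sections"][0]["summary"]
        if summary["duration"] < best_seconds:               # duration in seconds
            best_id, best_seconds = vol_id, summary["duration"]
    return best_id
```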
## Inspiration One day Saaz was sitting at home thinking about his fitness goals and his diet. Looking in his fridge, he realized that, on days when his fridge was only filled with leftovers and leftover ingredients, it was very difficult for him to figure out what he could make that followed his nutrition goals. This dilemma is something Saaz and others like him often encounter, and so we created SmartPalate to solve it. ## What it does SmartPalate uses AI to scan your fridge and pantry for all the ingredients you have at your disposal. It then comes up with multiple recipes that you can make with those ingredients. Not only can the user view step-by-step instructions on how to make these recipes, but also, by adjusting the nutrition information of the recipe using sliders, SmartPalate caters the recipe to the user's fitness goals without compromising the overall taste of the food. ## How we built it The scanning and categorization of different food items in the fridge and pantry is done using YOLOv5, a single-shot detection convolutional neural network. These food items are sent as a list of ingredients into the Spoonacular API, which matches the ingredients to recipes that contain them. We then used a modified natural language processing model to split the recipe into 4 distinct parts: the meats, the carbs, the flavoring, and the vegetables. Once the recipe is split, we use the same NLP model to categorize our ingredients into whichever part they are used in, as well as to give us a rough estimate on the amount of ingredients used in 1 serving. Then, using the Spoonacular API and the estimated amount of ingredients used in 1 serving, we calculate the nutrition information for 1 serving of each part. Because the amount of each part can be increased or decreased without compromising the taste of the overall recipe, we are then able to use a Bayesian optimization algorithm to quickly adjust the number of servings of each part (and the overall nutrition of the meal) to meet the user's nutritional demands. User interaction with the backend is done with a cleanly built front end made with a React TypeScript stack through Flask. ## Challenges we ran into One of the biggest challenges was identifying the subgroups in every meal(the meats, the vegetables, the carbs, and the seasonings/sauces). After trying multiple methods such as clustering, we settled on an approach that uses a state-of-the-art natural language model to identify the groups. ## Accomplishments that we're proud of We are proud of the fact that you can scan your fridge with your phone instead of typing in individual items, allowing for a much easier user experience. Additionally, we are proud of the algorithm that we created to help users adjust the nutrition levels of their meals without compromising the overall taste of the meals. ## What we learned Using our NLP model taught us just how unstable NLP is, and it showed us the importance of good prompt engineering. We also learned a great deal from our struggle to integrate the different parts of our project together, which required a lot of communication and careful code design. ## What's next for SmartPalate We plan to allow users to rate and review the different recipes that they create. Additionally, we plan to add a social component to SmartPalate that allows people to share the nutritionally customized recipes that they created.
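The team used Bayesian optimization for the serving-size adjustment; as a simpler stand-in, the same idea can be sketched as a bounded least-squares fit of the four part multipliers (meat, carb, flavoring, vegetable) to the user's macro targets. All numbers here are made up for illustration:

```python
import numpy as np
from scipy.optimize import lsq_linear

# per-serving macros of each part: rows = [calories, protein_g, carbs_g], columns = parts
PART_MACROS = np.array([
    [220.0, 180.0, 60.0, 35.0],   # calories
    [ 25.0,   4.0,  1.0,  2.0],   # protein (g)
    [  2.0,  35.0,  5.0,  6.0],   # carbs (g)
])
TARGET = np.array([700.0, 45.0, 60.0])   # values from the user's nutrition sliders

# each part may scale between 0.5x and 2.5x of the base recipe without ruining the taste
result = lsq_linear(PART_MACROS, TARGET, bounds=(0.5, 2.5))
print({part: round(x, 2) for part, x in
       zip(["meat", "carb", "flavoring", "vegetable"], result.x)})
```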
winning
## Inspiration You see a **TON** of digital billboards at NYC Time Square. The problem is that a lot of these ads are **irrelevant** to many people. Toyota ads here, Dunkin' Donuts ads there; **it doesn't really make sense**. ## What it does I built an interactive billboard that does more refined and targeted advertising and storytelling; it displays different ads **based on who you are** ~~(NSA 2.0?)~~ The billboard is equipped with a **camera**, which periodically samples the audience in front of it. Then, it passes the image to a series of **computer vision** algorithm (Thank you *Microsoft Cognitive Services*), which extracts several characteristics of the viewer. In this prototype, the billboard analyzes the viewer's: * **Dominant emotion** (from facial expression) * **Age** * **Gender** * **Eye-sight (detects glasses)** * **Facial hair** (just so that it can remind you that you need a shave) * **Number of people** And considers all of these factors to present with targeted ads. **As a bonus, the billboard saves energy by dimming the screen when there's nobody in front of the billboard! (go green!)** ## How I built it Here is what happens step-by-step: 1. Using **OpenCV**, billboard takes an image of the viewer (**Python** program) 2. Billboard passes the image to two separate services (**Microsoft Face API & Microsoft Emotion API**) and gets the result 3. Billboard analyzes the result and decides on which ads to serve (**Python** program) 4. Finalized ads are sent to the Billboard front-end via **Websocket** 5. Front-end contents are served from a local web server (**Node.js** server built with **Express.js framework** and **Pug** for front-end template engine) 6. Repeat ## Challenges I ran into * Time constraint (I actually had this huge project due on Saturday midnight - my fault -, so I only **had about 9 hours to build** this. Also, I built this by myself without teammates) * Putting many pieces of technology together, and ensuring consistency and robustness. ## Accomplishments that I'm proud of * I didn't think I'd be able to finish! It was my first solo hackathon, and it was much harder to stay motivated without teammates. ## What's next for Interactive Time Square * This prototype was built with off-the-shelf computer vision service from Microsoft, which limits the number of features for me to track. Training a **custom convolutional neural network** would let me track other relevant visual features (dominant color, which could let me infer the viewers' race - then along with the location of the Billboard and pre-knowledge of the demographics distribution, **maybe I can infer the language spoken by the audience, then automatically serve ads with translated content**) - ~~I know this sounds a bit controversial though. I hope this doesn't count as racial profiling...~~
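The targeting step alone (attributes in, ads out) can be sketched as a small rule table; the attribute keys mimic what the Face and Emotion APIs return, but the categories, thresholds, and dim-screen fallback are assumed examples:

```python
def pick_ads(viewers: list[dict]) -> list[str]:
    if not viewers:
        return ["dim_screen"]                        # nobody watching: save energy
    ads = []
    for v in viewers:
        if v.get("glasses"):
            ads.append("optometrist")
        if v.get("facial_hair", 0) > 0.5:
            ads.append("razors")                     # the promised shaving reminder
        if v.get("dominant_emotion") == "sadness":
            ads.append("comfort_food")
        ads.append("toys" if v.get("age", 30) < 13 else "coffee")
    return ads or ["generic"]

print(pick_ads([{"age": 24, "glasses": True, "dominant_emotion": "happiness"}]))
```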
## Inspiration When we were struggling to learn HTML and basic web development, the tools provided by browsers like Google Chrome were hidden away, making it hard to even learn of their existence. As avid gamers, we thought it would be a great idea to create a game built around the inspect element tool provided by browsers, so that more people could learn about this nifty feature and start their own hacks. ## What it does The project is a series of small puzzle games that rely on the user modifying the webpage DOM in order to complete them. When the user reaches the objective, they are automatically redirected to the next puzzle to solve. ## How we built it We used a game engine called craftyjs to run the game as DOM elements. These elements could be deleted, and an event would be triggered so that we could handle any DOM changes. ## Challenges we ran into Catching DOM changes made through inspect element is incredibly difficult. We also worked with craftyjs at version 0.7.1, which is unreleased, so some built-ins (e.g. collision detection) are not fully supported. Finally, handling events such as adding and deleting elements without recursively creating a ton of new elements took some care. ## Accomplishments that we're proud of EVERYTHING ## What we learned JavaScript was not designed to run a game engine on DOM elements, and modifying anything has been a struggle. We learned that canvases are black boxes and are impossible to interact with through DOM manipulation. ## What's next for We haven't thought that far yet You give us too much credit. But we have thought that far. We would love to do more with the inspect element tool, and in the future, if we could get support from one of the major browsers, we would love to add more puzzles based on the tools provided by the inspect element option.
## Inspiration It all started a couple of days ago when my brother told me he'd need over an hour to pick up a few items from a grocery store because of the weekend checkout line. This led to us reaching out to other friends of ours and asking them about the biggest pitfalls of existing shopping systems. We got a whole variety of answers, but the overwhelming response was the time it takes to shop, and more particularly to check out. This inspired us to ideate and come up with an innovative solution. ## What it does Our app uses computer vision to add items to a customer's bill as they place items in the cart. Similarly, removing an item from the cart automatically subtracts it from the bill. After a customer has completed shopping, they can check out on the app with the tap of a button and walk out of the store. It's that simple! ## How we built it We used React with Ionic for the frontend and Node.js for the backend. Our main priority was the completion of the computer vision model that detects items being added and removed from the cart. The model we used is a custom YOLO-v3Tiny model implemented in TensorFlow. We chose TensorFlow so that we could run the model using TensorFlow.js on mobile. ## Challenges we ran into The development phase had its fair share of challenges. Some of these were: * Deep learning models can never have too much data! Scraping enough images to get accurate predictions was a challenge. * Adding our custom classes to the pre-trained YOLO-v3Tiny model. * Coming up with solutions to security concerns. * Last but not least, simulating shopping while quarantining at home. ## Accomplishments that we're proud of We're extremely proud of completing a model that can detect objects in real time, as well as our rapid pace of frontend and backend development. ## What we learned We learned and got hands-on experience with transfer learning. This was always a concept that we knew in theory but had never implemented before. We also learned how to host TensorFlow deep learning models in the cloud, as well as make requests to them. Using the Google Maps API with Ionic React was a fun learning experience too! ## What's next for MoboShop * Integrate with customer shopping lists. * Display ingredients for recipes added by the customer. * Integration with existing security systems. * Provide analytics and shopping trends to retailers, including insights based on previous orders and customer shopping trends, among other statistics.
winning
## Inspiration Natural disasters do more than just destroy property—they disrupt lives, tear apart communities, and hinder our progress toward a sustainable future. One of our team members from Rice University experienced this firsthand during a recent hurricane in Houston. Trees were uprooted, infrastructure was destroyed, and delayed response times put countless lives at risk. * **Emotional Impact**: The chaos and helplessness during such events are overwhelming. * **Urgency for Change**: We recognized the need for swift damage assessment to aid authorities in locating those in need and deploying appropriate services. * **Sustainability Concerns**: Rebuilding efforts often use non-eco-friendly methods, leading to significant carbon footprints. Inspired by these challenges, we aim to leverage AI, computer vision, and peer networks to provide rapid, actionable damage assessments. Our AI assistant can detect people in distress and deliver crucial information swiftly, bridging the gap between disaster and recovery. ## What it Does The Garuda Dashboard offers a comprehensive view of current, upcoming, and past disasters across the country: * **Live Dashboard**: Displays a heatmap of affected areas updated via a peer-to-peer network. * **Drones Damage Analysis**: Deploy drones to survey and mark damaged neighborhoods using the Llava Vision-Language Model and generate reports for the Recovery Team. * **Detailed Reporting**: Reports have annotations to classify damage types [tree, road, roof, water], human rescue needs, site accessibility [Can response team get to the site by land], and suggest equipment dispatch [Cranes, Ambulance, Fire Control]. * **Drowning Alert**: The drone footage can detect when it identifies a drowning subject and immediately call rescue teams * **AI-Generated Summary**: Reports on past disasters include recovery costs, carbon footprint, and total asset/life damage. ## How We Built It * **Front End**: Developed with Next.js for an intuitive user interface tailored for emergency use. * **Data Integration**: Utilized Google Maps API for heatmaps and energy-efficient routing. * **Real-Time Updates**: Custom Flask API records hot zones when users upload disaster videos. * **AI Models**: Employed MSNet for real-time damage assessment on GPUs and Llava VLM for detailed video analysis. * **Secure Storage**: Images and videos stored on Firebase database. ## Challenges We Faced * **Model Integration**: Adapting MSNet with outdated dependencies required deep understanding of technical papers. * **VLM Setup**: Implementing Llava VLM was challenging due to lack of prior experience. * **Efficiency Issues**: Running models on personal computers led to inefficiencies. ## Accomplishments We're Proud Of * **Technical Skills**: Mastered API integration, technical paper analysis, and new technologies like VLMs. * **Innovative Impact**: Combined emerging technologies for disaster detection and recovery measures. * **Complex Integration**: Successfully merged backend, frontend, and GPU components under time constraints. ## What We Learned * Expanded full-stack development skills and explored new AI models. * Realized the potential of coding experience in tackling real-world problems with interdisciplinary solutions. * Balanced MVP features with user needs throughout development. ## What's Next for Garuda * **Drone Integration**: Enable drones to autonomously call EMS services and deploy life-saving equipment. 
* **Collaboration with EMS**: Partner with emergency services for widespread national and global adoption. * **Broader Impact**: Expand software capabilities to address various natural disasters beyond hurricanes.
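As a rough sketch of the "custom Flask API records hot zones" piece described above: the route names and in-memory list below are assumptions standing in for the real SingleStore/Firebase storage, but they show the shape of an endpoint the dashboard heatmap could poll.

```python
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)
HOT_ZONES = []  # stand-in for the real SingleStore/Firebase storage

@app.post("/hot-zones")
def report_hot_zone():
    """Record a hot zone when an operator uploads disaster footage for a location."""
    data = request.get_json(force=True)
    zone = {
        "lat": float(data["lat"]),
        "lng": float(data["lng"]),
        "severity": data.get("severity", "unknown"),
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }
    HOT_ZONES.append(zone)
    return jsonify(zone), 201

@app.get("/hot-zones")
def list_hot_zones():
    """Feed the dashboard heatmap with every zone reported so far."""
    return jsonify(HOT_ZONES)

if __name__ == "__main__":
    app.run(debug=True)
```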
## Inspiration In recent times, we have witnessed indescribable tragedy occur during the war in Ukraine. The way of life of many citizens has been forever changed. In particular, the attacks on civilian structures have left cities covered in debris and people searching for their missing loved ones. As a team of engineers, we believe we could use our skills and expertise to facilitate the process of search and rescue of Ukrainian citizens. ## What it does Our solution integrates hardware and software development in order to locate, register, and share the exact position of people who may be stuck or lost under rubble/debris. The team developed a rover prototype that navigates through debris and detects humans using computer vision. A picture and the geographical coordinates of the person found are sent to a database and displayed on a web application. The team plans to use a fleet of these rovers to make the process of mapping the area out faster and more efficient. ## How we built it On the frontend, the team used React and the Google Maps API to map out the markers where missing humans were found by our rover. On the backend, we had a Python script that used computer vision to detect humans and capture an image. For the rover, we 3D printed the top and bottom chassis specifically for this design. After 3D printing, we integrated the Arduino and attached the sensors and motors. We then calibrated the sensors to get accurate values. To control the rover autonomously, we used an obstacle-avoider algorithm coded in embedded C. While the rover is moving and avoiding obstacles, the phone attached to the top is continuously taking pictures. A computer vision model performs face detection on the video stream, and then stores the result in the local directory. If a face was detected, the image is stored on IPFS using Estuary's API, and the GPS coordinates and CID are stored in a Firestore database. On the user side of the app, the database is monitored for any new markers on the map. If a new marker has been added, the corresponding image is fetched from IPFS and shown on a map using the Google Maps API. ## Challenges we ran into As the team attempted to use the CID from the Estuary database to retrieve the file through the IPFS gateway, the marker that the file was attached to kept re-rendering too often on the DOM. However, we fixed this by removing a function prop that kept getting called when the marker was clicked. Instead of passing in the function, we simply passed the CID string into the component attributes. By doing this, we were able to retrieve the file. Moreover, our rover was initially designed to work with three 9V batteries (one to power the Arduino, and two for two different motor drivers). Those batteries would allow us to keep our robot as light as possible, so that it could travel at faster speeds. However, we soon realized that the motor drivers actually ran on 12V, which caused them to run slowly and burn through the batteries too quickly. Therefore, after testing different options and researching solutions, we decided to use a lithium-polymer battery, which supplied 12V. Since we only had one of those available, we connected both the motor drivers in parallel. ## Accomplishments that we're proud of We are very proud of the integration of hardware and software in our hackathon project. 
We believe that our hardware and software components would be complete projects on their own, but the integration of both makes us believe that we went above and beyond our capabilities. Moreover, we were delighted to have finished this extensive project in a short period of time and met all the milestones we set for ourselves at the beginning. ## What we learned The main technical learning we took from this experience was implementing the Estuary API, considering that none of our team members had used it before. This was our first experience using blockchain technology to develop an app that could benefit from the use of public, decentralized data. ## What's next for Rescue Ranger Our team is passionate about this idea and we want to take it further. The ultimate goal of the team is to actually deploy these rovers to save human lives. The team identified areas for improvement and possible next steps. Listed below are objectives we would have loved to achieve but could not, due to the time constraint and limited access to specialized equipment. * Satellite Mapping -> This would be more accurate than GPS. * LIDAR Sensors -> Can create a 3D render of the area where the person was found. * Heat Sensors -> We could detect people stuck under debris. * Better Cameras -> Would enhance our usage of computer vision technology. * Drones -> Would navigate debris more efficiently than rovers.
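A minimal sketch of the rover's detection loop described above, using OpenCV's stock Haar cascade face detector; the Estuary/IPFS upload and Firestore write are stubbed out as comments, since those calls depend on project-specific keys and schemas.

```python
import cv2

# Haar cascade face detector shipped with OpenCV; a heavier model could be swapped in.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def detect_and_save(frame, out_path="found_person.jpg"):
    """Return True and save the annotated frame if at least one face is detected."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite(out_path, frame)
    # The real pipeline would now upload out_path to IPFS via Estuary and write
    # the returned CID plus GPS coordinates to Firestore; stubbed out here.
    return True

cap = cv2.VideoCapture(0)          # phone/webcam stream
ok, frame = cap.read()
if ok:
    print("person found:", detect_and_save(frame))
cap.release()
```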
## Inspiration As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process! ## What it does Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points! ## How we built it We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API. ## Challenges we ran into Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we are trying to do multiple things at the same time! Besides API integration, definitely working without any sleep though was the hardest part! ## Accomplishments that we're proud of Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :) ## What we learned I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weakness (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth). ## What's next for Savvy Saver Demos! After that, we'll just have to see :)
winning
## Inspiration Exercising is very important to living a healthy and happy life. Motivation and consistency are key factors that prevent people from reaching their fitness goals. There are many apps to try to help motivate aspiring athletes, but they often just pit people against each other and focus on raw performance. People, however, are not equally athletic and therefore the progress should not be based on absolute performances. Ananke uses a new approach to encourage people to improve their fitness. ## What it does Ananke does not determine your progress based on absolute miles ran or the time spent on a road bike, but more on invested efforts instead. If a 2 mile run is exhausting for you, that is fine! And if you managed to run 3 miles the next day, we reward your progress. That's how Ananke will continuously empower you to achieve more - every single day. The strongest competitor is always yourself! To suggest the optimal workouts for you that suit your performance level, Ananke takes your fitness history into account. Ananke will provide you with a suggestion for a light, medium and challenging workout, and it is up to you to choose the preferable workout depending on your mood and well-being. Whatever workout you choose, it is going to be an advancement and propel you forward on your journey to a blossomed life. Our app-architecture has a functional, minimalistic design. By completing suggested workouts and pushing yourself to your limits, you will grow plants. The more you exercise, the greener your profile becomes. Ananke will analyse the fitness data corresponding to the accomplished workout and determine an intensity-score based on factors other than pure data - we use an algorithm to determine how hard you have worked. This score will have an influence on the growth of your plants. Not only do more efforts make your plants grow faster, but also make you happier! We also want to incentivize rest and community, so we built a friendship system where friends can help water your plants and encourage you to keep working hard. ## How we built it Ananke is a (web-)app which has been mainly built in React. We use an API provided by TerraAPI which enables us to draw fitness data from mobile wearables. In our case, we integrated a FitBit as well as an Apple Watch as our data sources. Once the fitness data is extracted from the API it is transferred to a web-hook which works as the interface between the local and cloud server of TerraAPI. The data is then processed on a server and subsequently passed to an artificial intelligence application called OdinAI which is also a product of TerraAPI. The AI will determine suitable workout suggestions based on the API data. Ultimately, the output is presented on the frontend application. ## Challenges we ran into The API provided us with A LOT of data, so we encountered many challenges with data processing. Also the server architecture, as well as the communication between front- and backend posed some challenges for us to overcome. ## Accomplishments that we're proud of We managed to create a beautiful frontend design while maintaining ultimate functionality in the code. The server architecture is stable and works suitably for our purposes. Most importantly we are most proud of our team work: Everybody contributed to different components of our application and we worked very efficiently within our team. 
We are located across the world from each other and half the team hadn't met prior, yet we were able to work together very well and create an app we are very proud of. ## What we learned The Devpost submission form doesn't auto-save :/ ## What's next for Ananke To determine the intensity-score of a workout it is advantageous to integrate an AI-driven tool which can recognise tendencies i.e. in your heart-rate dynamic and thereby indicate progress more precisely. If your heart-rate decreases while other factors like distance, pace, etc. are taken as constant, it would indicate an extended endurance. This is important for further workout suggestions. We would also like to incorporate more of TerraAPI's data access tools. For example, one interesting feature that could be pushed in the future is the ability to start a workout from the web app. TerraAPI has a function that can start workouts remotely by providing activity and time. This would further integrate the user into the app and allow them to start their suggested workouts easily. We'd also like to integrate a more robust community and messaging system, as well as more rewards like customizing plants.
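The intensity score itself isn't spelled out above, so this is only an illustrative sketch of the idea of scoring effort relative to your own recent history rather than absolute performance; the field names and weights are invented.

```python
def intensity_score(workout, history):
    """Score a workout relative to the athlete's own recent history (roughly 0-150).

    `workout` and each entry in `history` are dicts with avg_hr (bpm),
    duration_min and distance_km; the 50/50 weighting below is illustrative.
    """
    if not history:
        return 50.0  # no baseline yet: neutral score
    base_hr = sum(w["avg_hr"] for w in history) / len(history)
    base_load = sum(w["duration_min"] * w["distance_km"] for w in history) / len(history)

    hr_effort = workout["avg_hr"] / base_hr                        # cardiovascular strain
    load_effort = (workout["duration_min"] * workout["distance_km"]) / base_load

    score = 50 * hr_effort + 50 * load_effort                      # 100 ~= a typical session
    return round(min(score, 150.0), 1)                             # cap runaway values

history = [{"avg_hr": 140, "duration_min": 30, "distance_km": 4.0},
           {"avg_hr": 145, "duration_min": 35, "distance_km": 4.5}]
today = {"avg_hr": 155, "duration_min": 40, "distance_km": 5.5}
print(intensity_score(today, history))  # above 100 means you beat your usual effort
```

A score like this would then drive how fast the plants grow, independent of how fast anyone else runs.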
## Inspiration TerraFit, a clever play on the word "terrific," is born from a passion to inspire self-confidence among gym-goers. Recognizing the common struggles of feeling unsure or lacking guidance during workouts, the TerraFit team embarked on a mission to provide users with more than just instructions. While existing apps offer workout routines and interaction with trainers, they often fall short in terms of affordability, accessibility, and the timely communication of exercises. Drawing inspiration from these pain points, TerraFit aspires to create a more enjoyable and empowering fitness journey. ## What it does TerraFit is a **terrific**, evolutionary fitness app that leverages innovative features. Users can scan or manually input gym equipment available to them, ensuring accessibility and adaptability to various workout environments. Furthermore, the app offers a post-workout feedback form and a platform for users to upload videos for form checks, promoting improvement and accountability. ## How we built it 1. **Brainstorm Ideas**: We kicked off our journey with an extensive brainstorming session. During this creative process, we explored various concepts and ideas, eventually refining them to shape the core vision of TerraFit. 2. **Team Collaboration**: Our team is a diverse mix of individuals with different backgrounds and skill sets, which we leveraged to our advantage. We recognized the unique strengths each team member brought to the table. 3. **Backend and Frontend development**: Keagan, our resident tech expert, took charge of the backend and frontend development. His expertise was instrumental in bringing the core functionalities of TerraFit to life. Our dedicated team has successfully established user accounts, implemented login and signup functionalities, and crafted a dynamic webpage for selecting and customizing workouts. Leveraging the power of GPT AI and essential APIs, we have integrated key features, such as creating a leaderboard and enabling users to save their favorite workouts. The tangible outcomes of our efforts are readily available for review on our GitHub repository. 4. **UI/UX Mastery**: Aman, our UI/UX maestro, played a pivotal role in crafting the user experience. He meticulously designed Figma wireframes to visualize and communicate our app's vision. Moving forward he created the high fidelity prototype of the app to help enhance user understanding of the functionality. 5. **Content Creation and Learning**: Aarthi and Kevin helped create a mini database of workouts. Aarthi focused on creating engaging, captivating content on the app screen. Meanwhile, Kevin took charge of video editing, breathing life into TerraFit's concept and vision through compelling visual content. ## Challenges we ran into Navigating a steep learning curve was indeed a notable aspect of our journey in building TerraFit. Many of our team members were relatively new to the technical intricacies of app development, which made embracing these challenges all the more significant. Our commitment to mastering new technologies and coding languages demonstrated our dedication to the project's success. We thrived on the opportunity to learn and grow. As most of us were first-time participants in a major hackathon, we encountered uncertainty and the unknown, yet these experiences ultimately strengthened our team and bolstered our confidence. We believe in the unique potential of TerraFit to stand out from existing fitness apps. 
Our core mission revolves around enhancing the confidence of aspiring and seasoned fitness enthusiasts and improving accessibility. This commitment is evident in the distinctive features showcased in our wireframes, such as the "scan your gym" and "check your form" functionalities. However, we were unable to implement these features as we are still waiting for the GPT image API. We view these challenges as opportunities and possibilities so we are excited to work on these after the hackathon. ## Accomplishments that we're proud of Our proudest achievement is designing TerraFit as a comprehensive solution to make health and fitness accessible, convenient, and comfortable for all. Each team member played a crucial role in contributing to our shared goal. We are also really proud that we have created something that we all genuinely believe in and are hopeful that this is the first step to addressing the challenges brought about by the plethora of fitness apps. Our commitment to enhancing accessibility, confidence, and convenience in the fitness world is what drives us. Additionally, we embarked on a valuable learning journey throughout the development of TerraFit. Some of our team members, who didn't have prior backgrounds in software development, acquired fundamental knowledge in frontend development. This newfound understanding allowed us to contribute effectively to the project's success, demonstrating our commitment to personal growth and skill development. Our collective willingness to expand our horizons was a crucial aspect of our journey. ## What we learned In the crucible of our hackathon experience, we unearthed the priceless gems of perseverance and the indomitable spirit of teamwork. Our journey was nothing short of remarkable, as we forged an unbreakable bond while sharing every moment—whether it was devouring late-night pizza, nodding off to sleep, burning the midnight oil in intense work sessions, or eagerly participating in enlightening workshops. Astonishingly, despite hailing from diverse backgrounds and being relative strangers at the start, we seamlessly gelled into a harmonious team. The magic lay in our shared purpose and relentless drive, transcending the barriers of familiarity. Our journey was a testament to the incredible power of unity, which not only helped us conquer technical challenges but also formed the bedrock of lasting friendships and unforgettable memories. ## What's next for TerraFit Our next steps involve eagerly awaiting the integration of the updated GPT API, which will enable groundbreaking functionalities like equipment scanning and form checks through image and videos. While we've thoughtfully incorporated these features into our wireframes, we're excited to bring them to life after the hackathon. Stay tuned for an even more remarkable journey!
## Inspiration Many people want to stay in shape, so they want to go work out and achieve their ideal physique. However, most don't, due to the hassle of creating unique workout plans: it can be time-consuming, and generic plans aren't tailored to a specific body type, resulting in poor outcomes. What if you could build a plan that focuses on ***you***? Specifically, your body type, your schedule, the workouts you want to do and your dietary restrictions? Meet Gain+, where we create the fastest way to make big gains! ## What it does Gain+ creates a custom workout and meal plan based on what you want to look like in the future (i.e. 12 weeks). You will interact with a personal trainer created with AI to discuss your goal. First, you would load two pictures: one based on what you look like now and another based on what you hope to somewhat achieve after you finish your plan. Then, you'll give answers to any questions your coach has before generating a full workout and meal plan. The workout plan is based on the number of days you want to go to the gym, while the meal plan is for every day. You can also add workouts and meals before finalizing your plan as well. ## How we built it For our website, we've built the frontend in **React and Tailwind CSS** while **Firebase** provides our backend and database to store chats and users. As for the model creating the workout plans, there's a custom model that was created from a [Kaggle Dataset](https://www.kaggle.com/datasets/trainingdatapro/human-segmentation-dataset) and trained on **Roboflow** that classifies images based on gender, the three main types of bodies (ectomorph, mesomorph and endomorph) and the various subtypes. The best classes from that model are then sent to our chatbot, which was trained and deployed with **Databricks Mosaic AI** and based on **LLaMA 3.1**. ## Challenges we ran into Some challenges we ran into were the integration of the frontend, backend, and AI and ML components. This was quite a large project where we used a lot of new technologies that we had little to no experience with. For example, there was a huge CORS issue in the final hours of hacking that plagued our project, which we tried to solve with some help from the internet and from our mentors, Paul and Sammy. ## Accomplishments that we're proud of This was Kersh and Mike's first time doing something in Databricks and Ayan's first time using Firebase at a more professional scale. The fact that we actually implemented these technologies into a final project from little to no experience was a big accomplishment for all of us. ## What we learned We learned a lot throughout this hackathon, such as working with external APIs for LLMs and Databricks; we gained hands-on experience with prompt engineering; and we learned to adjust to unexpected roadblocks along the way. ## What's next for Gain+ Next steps would definitely be to improve the UI and UX and also implement some new features. One of these is a dedicated focus for people preparing for bodybuilding or powerlifting meets, which we'll implement through a separate toggle.
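A hedged sketch of how the Roboflow classification might be handed to the served LLaMA model: the workspace URL, endpoint name, and payload shape below are assumptions based on the usual Databricks model-serving pattern, not the project's actual configuration.

```python
import requests

DATABRICKS_URL = "https://<workspace>.cloud.databricks.com"   # placeholder workspace URL
ENDPOINT = "gainplus-llama"                                   # hypothetical endpoint name
TOKEN = "dapi-..."                                            # personal access token

def plan_from_body_type(body_class: str, days_per_week: int) -> str:
    """Ask the served model for a plan, given the Roboflow classification result."""
    prompt = (
        f"You are a personal trainer. The client is classified as '{body_class}'. "
        f"Write a {days_per_week}-day/week workout plan and a daily meal outline."
    )
    resp = requests.post(
        f"{DATABRICKS_URL}/serving-endpoints/{ENDPOINT}/invocations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"messages": [{"role": "user", "content": prompt}],
              "max_tokens": 800},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example (assumes the endpoint above actually exists):
# print(plan_from_body_type("male mesomorph", 4))
```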
losing
## Inspiration More money, more problems. Lacking an easy, accessible, and secure method of transferring money? Even more problems. An interesting solution to this has been the rise of WeChat Pay, allowing for merchants to use QR codes and social media to make digital payments. But where does this leave people without sufficient bandwidth? Without reliable, adequate Wi-Fi, technologies like WeChat Pay and Google Pay simply aren't options. People looking to make money transfers are forced to choose between bloated fees or dangerously long wait times. As designers, programmers, and students, we tend to think about how we can design tech. But how do you design tech for that negative space? During our research, we found of the people that lack adequate bandwidth, 1.28 billion of them have access to mobile service. This ultimately led to our solution: **Money might not grow on trees, but Paypayas do.** 🍈 ## What it does Paypaya is an SMS chatbot application that allows users to perform simple and safe transfers using just text messages. Users start by texting a toll free number. Doing so opens a digital wallet that is authenticated by their voice. From that point, users can easily transfer, deposit, withdraw, or view their balance. Despite being built for low bandwidth regions, Paypaya also has huge market potential in high bandwidth areas as well. Whether you are a small business owner that can't afford a swipe machine or a charity trying to raise funds in a contactless way, the possibilities are endless. Try it for yourself by texting +1-833-729-0967 ## How we built it We first set up our Flask application in a Docker container on Google Cloud Run to streamline cross OS development. We then set up our database using MongoDB Atlas. Within the app, we also integrated the Twilio and PayPal APIs to create a digital wallet and perform the application commands. After creating the primary functionality of the app, we implemented voice authentication by collecting voice clips from Twilio to be used in Microsoft Azure's Speaker Recognition API. For our branding and slides, everything was made vector by vector on Figma. ## Challenges we ran into Man. Where do we start. Although it was fun, working in a two person team meant that we were both wearing (too) many hats. In terms of technical problems, the PayPal API documentation was archaic, making it extremely difficult for us figure out how to call the necessary functions. It was also really difficult to convert the audio from Twilio to a byte-stream for the Azure API. Lastly, we had trouble keeping track of conversation state in the chatbot as we were limited by how the webhook was called by Twilio. ## Accomplishments that we're proud of We're really proud of creating a fully functioning MVP! All of 6 of our moving parts came together to form a working proof of concept. All of our graphics (slides, logo, collages) are all made from scratch. :)) ## What we learned Anson - As a first time back end developer, I learned SO much about using APIs, webhooks, databases, and servers. I also learned that Jacky falls asleep super easily. Jacky - I learned that Microsoft Azure and Twilio can be a pain to work with and that Google Cloud Run is a blessing and a half. I learned I don't have the energy to stay up 36 hours straight for a hackathon anymore 🙃 ## What's next for Paypaya More language options! English is far from the native tongue of the world. By expanding the languages available, Paypaya will be accessible to even more people. 
We would also love to do more with financial planning, providing a log of previous transactions for individuals to track their spending and income. There are also a lot of rough edges and edge cases in the program flow, so patching up those will be important in bringing this to market.
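A simplified sketch of the SMS command flow: a Flask webhook that Twilio can call for inbound texts, with an in-memory wallet standing in for MongoDB Atlas, and the PayPal transfer and voice-authentication steps stubbed out.

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
WALLETS = {}  # phone number -> balance; the real app kept this in MongoDB Atlas

@app.post("/sms")
def sms_webhook():
    """Twilio calls this URL for every inbound text; we reply with TwiML."""
    sender = request.form["From"]
    words = request.form.get("Body", "").strip().upper().split()
    balance = WALLETS.setdefault(sender, 0.0)

    reply = MessagingResponse()
    if not words or words[0] == "BALANCE":
        reply.message(f"Your Paypaya balance is ${balance:.2f}")
    elif words[0] == "DEPOSIT" and len(words) > 1:
        WALLETS[sender] = balance + float(words[1])
        reply.message(f"Deposited ${float(words[1]):.2f}. New balance: ${WALLETS[sender]:.2f}")
    elif words[0] == "SEND" and len(words) > 2:
        # e.g. "SEND 15 +15551234567" -- PayPal transfer + voice check would run here
        reply.message(f"Sending ${float(words[1]):.2f} to {words[2]} once your voice is verified.")
    else:
        reply.message("Commands: BALANCE, DEPOSIT <amount>, SEND <amount> <number>")
    return str(reply), 200, {"Content-Type": "application/xml"}
```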
## Inspiration Ideas for interactions from: * <http://paperprograms.org/> * <http://dynamicland.org/> but I wanted to go from the existing computer down, rather from the bottom up, and make something that was a twist on the existing desktop: Web browser, Terminal, chat apps, keyboard, windows. ## What it does Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever one is closest to the keyboard. Goal is to make something you might use day-to-day as a full computer. ## How I built it A webcam and pico projector mounted above desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard. ## Challenges I ran into * Reliable tracking under different light conditions. * Feedback effects from projected light. * Tracking the keyboard reliably. * Hooking into macOS to control window focus ## Accomplishments that I'm proud of Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system. Cool emergent things like combining pieces of paper + the side ideas I mention below. ## What I learned Some interesting side ideas here: * Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect * Would be fun to use a deep learning thing to identify and compute with arbitrary objects ## What's next for Computertop Desk * Pointing tool (laser pointer?) * More robust CV pipeline? Machine learning? * Optimizations: run stuff on GPU, cut latency down, improve throughput * More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once
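A small sketch of the "binary thresholds are great" pipeline: threshold the calibrated camera frame, find external contours, and keep the four-cornered ones as candidate pages. The threshold and area values are arbitrary and would need tuning to the actual lighting.

```python
import cv2

def find_papers(frame, min_area=5000):
    """Find bright quadrilaterals (sheets of paper) in a calibrated camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # A simple binary threshold: paper is much brighter than the desk surface.
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    papers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                     # four corners -> looks like a page
            papers.append(approx.reshape(4, 2))
    return papers

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    corners = find_papers(frame)
    print(f"found {len(corners)} page(s)")
cap.release()
```

Each detected quad's corners can then drive the projector warp and the window-to-paper mapping.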
## Inspiration We've worked on e-commerce stores (Shopify/etc), and managing customer support calls was tedious and expensive (small businesses online typically have no number for contact), even though ~60% of customers prefer calls for support questions and personalization. We wanted to automate the workflow to drive more sales and save working hours. Existing solutions require custom workflow setup for chatbots, and humans still end up answering about 20 percent of questions, many of them simple confirmation questions (IBM). Customers also develop question fatigue from working through a bot just to reach an actual human. ## What it does It's an embeddable JavaScript widget and phone number for any e-commerce store or online product catalog that lets customers call, text, or message via on-site chat for personalized product questions, return processing, and general support. We plan to expand out of e-commerce after signing on 100 true users who love us. Boost sales while you are asleep instead of directing customers to a support ticket line. We plan to pursue routes of revenue with: * % of revenue from boosted products * Monthly subscription * Cost savings from reduced call center capacity requirements ## How we built it We used an HTML/CSS frontend connected to a backend of Twilio (phone call, transcription, and text-to-speech) and OpenAI APIs (LLMs, Vector DBQA customization). ## Challenges we ran into * Deprecated Python functionality for Twilio that we did not initially notice; we eventually discovered it while browsing the documentation and switched to JS * Accidentally dumped our TreeHacks shirt into a pot of curry ## Accomplishments that we're proud of * Developed real-time transcription connected to a phone call, which we then streamed to a custom-trained model -- while maintaining conversational-level latency * Somehow figured out a way to sleep * Became addicted to Pocari Sweat ## What we learned We realized the difficulty of navigating documentation while traversing several different APIs. For example, real-time transcription was a huge challenge. Moreover, we learned about embedding functions that allowed us to customize the LLM for our use case. This enabled us to provide a performance improvement to the existing model while also not adding much compute cost. During our time at TreeHacks, we became close with the Modal team as they were incredibly supportive of our efforts. We also greatly enjoyed leveraging OpenAI to provide this critical website support. ## What's next for Ellum We are releasing the service to close friends who have experienced these problems, particularly e-commerce distributors, and beta-testing it with them. We know some Shopify owners who would be down to demo the service, and we hope to work closely with them to grow their businesses. We would love to dig even deeper into the pain points around instantly providing support and making setup effortless. Valuable features such as real-time chat, which can help connect us to more customers, can be added in the future. We would also love to test out the service with brick-and-mortar stores like Home Depot, Lowe's, and CVS, which also have a high need for customer support. Slides: <https://drive.google.com/file/d/1fLFWAgsi1PXRVi5upMt-ZFivomOBo37k/view?usp=sharing> Video Part 1: <https://youtu.be/QH33acDpBj8> Video Part 2: <https://youtu.be/gOafS4ZoDRQ>
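To make the transcription-to-answer step concrete, here is a hedged sketch that feeds a transcribed caller utterance plus a snippet of store knowledge to a chat model; the model name and the inlined store context are placeholders, since the real system retrieved context from a vector store.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Stand-in for the store knowledge the real system retrieves from a vector DB.
STORE_CONTEXT = (
    "Shipping: 3-5 business days. Returns: 30 days with receipt. "
    "Bestseller: 'Trail Runner 2' sneakers, sizes 6-13, $89."
)

def answer_caller(transcript_chunk: str) -> str:
    """Turn the latest transcribed caller utterance into a spoken-ready answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name; swap for whatever is available
        messages=[
            {"role": "system",
             "content": "You are a friendly phone support agent for an online store. "
                        "Answer in one or two short sentences.\n" + STORE_CONTEXT},
            {"role": "user", "content": transcript_chunk},
        ],
    )
    return response.choices[0].message.content

print(answer_caller("Hi, do you have the Trail Runner 2 in a size 11, and how long is shipping?"))
```

The returned text would then be handed to Twilio's text-to-speech to keep the conversation latency low.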
winning
## Inspiration Planning vacations can be hard. Traveling is a very fun experience but often comes with a lot of stress around curating the perfect itinerary with all the best sights to see, foods to eat, and shows to watch. You don't want to miss anything special, but you also want to make sure the trip is still up your alley in terms of your own interests - a balance that can be hard to find. ## What it does explr.ai simplifies itinerary planning with just a few swipes. After selecting your destination, the duration of your visit, and a rough budget, explr.ai presents you with a curated list of up to 30 restaurants, attractions, and activities that could become part of your trip. With an easy-to-use swiping interface, you choose what sounds interesting or not to you, and after a minimum of 8 swipes, let explr.ai's power convert your opinions into a full itinerary of activities for your entire visit. ## How we built it We built this app using React TypeScript for the frontend and Convex for the backend. The app takes in user input from the homepage regarding the location, price point, and time frame. We pass the location and price range into the Google API to retrieve the highest-rated attractions and restaurants in the area. Those options are presented to the user on the frontend with React and CSS animations that allow you to swipe each card in a Tinder-style manner. Taking into consideration the user's swipes and initial preferences, we query the Google API once again to get additional similar locations that the user may like and pass this data into an LLM (using Together.ai's Llama2 model) to generate a custom itinerary for the user. For each location outputted, we string together images from the Google API to create a slideshow of what your trip would look like and an animated timeline with descriptions of the location. ## Challenges we ran into Front-end and design require a LOT of skill. It took us quite a while to come up with our project, and we originally were planning on a mobile app, but it's quite difficult to learn completely new languages such as Swift along with new technologies all in a couple of days. Once we started on explr.ai's backend, we were also having trouble passing in the appropriate information to the LLM to get back proper data that we could inject back into our web app. ## Accomplishments that we're proud of We're proud of the overall functionality and our ability to get something working by the end of the hacking period :') More specifically, we're proud of some of our frontend, including the card swiping and timeline animations, as well as the ability to parse data from various APIs and put it together with lots of user input. ## What we learned We learned a ton about full-stack development overall, whether that be the importance of Figma and UX design work, or how to best split up a project when every part is moving at the same time. We also learned how to use Convex and Together.ai productively! ## What's next for explr.ai We would love to see explr.ai become smarter and support more features. explr.ai, in the future, could get information from hotels, attractions, and restaurants to be able to check availability and book reservations straight from the web app. Once you're on your trip, you should also be able to check in to various locations and provide feedback on each component. explr.ai could have a social media component for sharing your itineraries, plans, and feedback with friends so you can help each other plan better trips.
# Travel Itinerary Generator ## Inspiration Traveling is an experience that many cherish, but planning for it can often be overwhelming. With countless events, places to visit, and activities, it's easy to miss out on experiences that could have made the trip even more memorable. This realization inspired us to create the **Travel Itinerary Generator**. We wanted to simplify the travel planning process by providing users with curated suggestions based on their preferences. ## What It Does The **Travel Itinerary Generator** is a web application that assists users in generating travel itineraries. Users receive tailored suggestions on events or places to visit by simply entering a desired location and activity type. The application fetches this data using the Metaphor API, ensuring the recommendations are relevant and up-to-date. ## How We Built It We began with a React-based frontend, leveraging components to create a user-friendly interface. Material-UI was our go-to library for the design, ensuring a consistent and modern look throughout the application. To fetch relevant data, we integrated the Metaphor API. Initially, we faced CORS issues when bringing data directly from the front end. To overcome this, we set up a Flask backend to act as a proxy, making requests to the Metaphor API on behalf of the front end. We utilized the `framer-motion` library for animations and transitions, enhancing the user experience with smooth and aesthetically pleasing effects. ## Challenges We Faced 1. **CORS Issues**: One of the significant challenges was dealing with CORS when trying to fetch data from the Metaphor API. This required us to rethink our approach and implement a Flask backend to bypass these restrictions. 2. **Routing with GitHub Pages**: After adding routing to our React application, we encountered issues deploying to GitHub Pages. It took some tweaking and adjustments to the base URL to get it working seamlessly. 3. **Design Consistency**: Ensuring a consistent design across various components while integrating multiple libraries was challenging. We had to make sure that the design elements from Material-UI blended well with our custom styles and animations. ## What We Learned This project was a journey of discovery. We learned the importance of backend proxies in handling CORS issues, the intricacies of deploying single-page applications with client-side routing, and the power of libraries like `framer-motion` in enhancing user experience. Moreover, integrating various tools and technologies taught us the value of adaptability and problem-solving in software development. ## Conclusion This journey was like a rollercoaster - thrilling highs and challenging lows. We discovered the art of bypassing CORS, the nuances of SPAs, and the sheer joy of animating everything! It reinforced our belief that we can create solutions that make a difference with the right tools and a problem-solving mindset. We're excited to see how travelers worldwide will benefit from our application, making their travel planning a breeze! ## Acknowledgements * [Metaphor API](https://metaphor.systems/) for the search engine. * [Material-UI](https://mui.com/) for styling. * [Framer Motion](https://www.framer.com/api/motion/) for animations. * [Express API](https://expressjs.com/) hosted on [Google Cloud](https://cloud.google.com/). * [React.js](https://react.dev/) for web framework.
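A minimal sketch of the Flask proxy described above: the browser calls this server, and the server calls Metaphor so the API key stays off the client and CORS is handled with flask-cors. The Metaphor endpoint path, header name, and payload fields are assumptions based on its public docs, not a copy of the project's code.

```python
import os
import requests
from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # let the React frontend on another origin call this proxy

METAPHOR_URL = "https://api.metaphor.systems/search"   # assumed endpoint path
METAPHOR_KEY = os.environ["METAPHOR_API_KEY"]

@app.post("/api/search")
def search():
    """Forward the browser's query to Metaphor so the API key never leaves the server."""
    body = request.get_json(force=True)
    query = f"things to do and {body.get('activity', 'places to visit')} in {body['location']}"
    upstream = requests.post(
        METAPHOR_URL,
        headers={"x-api-key": METAPHOR_KEY, "Content-Type": "application/json"},
        json={"query": query, "numResults": 10},
        timeout=15,
    )
    upstream.raise_for_status()
    return jsonify(upstream.json())

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```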
## Inspiration It is a truth universally known that eating food is much better when you're eating with other people. If only it were that easy deciding where to eat, though — and unfortunately, it gets harder the larger your party is... ## Problem statement There is a lot of difficulty in deciding where to eat and where to go. Factors such as restaurant hours, dining options (which were particularly impacted after the pandemic), the location, and individual dietary restrictions come into play when deciding where to go for dinner with friends and family. Planning a meetup over food results in long chains of messages exchanged in group chats. Ray and I believe that sharing meals with the people you care about is one of the things that make life worth living. We also believe technology should be used to make our lives a little bit easier, and there are few things more difficult than deciding where to eat. ## Our solution ...is Nommers! ## What it does Nommers is a web application that is responsive for both desktop and mobile breakpoints. ### How do you use it? 1. Designate a trusted individual in a group of diners to be your party owner. This party owner will make a lobby and filter restaurants based on location, price, star rating, available dietary restrictions, and so on. 2. Once the party is created, the party owner will share a party code with everyone. This code is used to enter the lobby. Once everyone is in the lobby, the party owner will commence the voting period. 3. Everyone in the lobby will be able to vote on the restaurants they would like to eat at by swiping through the options. Once everyone in the lobby is done voting, the voting period ends. 4. Lastly, the results are displayed! Nommers shows you the top restaurants that everyone voted on. This hopefully helps make the decision-making a lot smoother and easier. ## How we built it Christina: I started off by creating low-fidelity wireframes on Figma, carefully selecting a color palette and font choices, and brainstorming ideas for a logotype. While I did that, Ray was working on the backend and started by deploying the server. He had... some difficulty with it to say the least. He had a lot of success with writing the swiping animation during the voting process, though, and I was pleased with how it turned out! After I finished creating low-fidelity wireframes, I created all of the web pages with mobile responsiveness, and then proceeded to implement them with desktop responsiveness. While I was doing this, Ray was working on fetching data from the Google Maps API. ## Technologies used Figma, Adobe Illustrator, Adobe Photoshop, Svelte, HTML, CSS, JavaScript, GitHub, Postman, AWS Elastic Beanstalk, AWS S3, EC2, RDS, ElastiCache, PostgreSQL, Redis, Django, Django Channels, Django REST Framework, Google Maps API ## Challenges we ran into Christina: Perhaps we bit off more than we could chew. I worked on developing the frontend of all the web pages and making them mobile and desktop responsive. This proved to be very difficult, especially since I also had to make assets for the webpages, write copy, and prepare the presentation. ## Accomplishments that we're proud of Christina: I learned Svelte for the first time; I made some very cute graphics on Adobe Illustrator and Figma and crafted the branding and personality of Nommers. I think it looks adorable and I am very proud of the work that I performed! Ray: Overall, I am pleased with the backend deployment, despite the challenges. 
## What we learned Christina: Svelte, definitely! I had never used Svelte, but Ray insisted it was the hot new JavaScript framework. I tried it out and, unfortunately, I did not find it very enjoyable and prefer Next.js; however, it was worth a try and I am glad I had that experience! Ray learned that it is particularly difficult to set up AWS servers... and all things web-related in general. ## What's next for Nommers Ray would like to smooth out the swiping animations. Ray would also like to fix many of the server and website state management issues. ## Resources used Freepik, Dafont, Unsplash
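As an illustration of the lobby voting step, here is a small Redis-backed tally using a sorted set per party code; the key names and vote semantics are invented for the sketch and are not the project's actual schema.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cast_vote(party_code: str, restaurant_id: str, liked: bool) -> None:
    """Add one swipe result for a restaurant inside a party's lobby."""
    # A right-swipe adds a point; a left-swipe still registers the option with 0 points.
    r.zincrby(f"party:{party_code}:votes", 1 if liked else 0, restaurant_id)

def top_picks(party_code: str, n: int = 3):
    """Return the n highest-voted restaurants once the voting period ends."""
    return r.zrevrange(f"party:{party_code}:votes", 0, n - 1, withscores=True)

cast_vote("TACO42", "place_abc", True)
cast_vote("TACO42", "place_abc", True)
cast_vote("TACO42", "place_xyz", False)
print(top_picks("TACO42"))   # [('place_abc', 2.0), ('place_xyz', 0.0)]
```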
partial
## Inspiration Our inspiration was a project online detailing the "future cities index," a statistic that aims to calculate the viability of building a future city. After watching the Future Cities presentation, we were interested to see *where* Future Cities would be built if a project like the one we saw were funded in the US. This prompted us to create a tool that may help social scientists answer that question — as many people work to innovate the various components of future cities, we tried to find possible homes for their ideas. ## What it does The tool allows social scientists and amateur researchers to access aggregated census and economic data through the Lightbox API, without writing a single line of code. The program calculates a Future Cities Index based on the resilience of a census tract to natural disasters, housing availability, and social vulnerability in the area. ## How we built it An interactive UI built with ReactJS, with data parsed from the Lightbox API using JavaScript. ## Challenges we ran into Loading the census tracts into our interactive map, finding appropriate data to display for each tract, and calculating the Future Cities Index. ## Accomplishments that we're proud of Creating a working interactive map and successfully displaying a real-time Future Cities Index. ## What we learned How to use geodata to make interactive maps that behave as we wish. We are able to overlay different raster images and polygons onto a map. ## What's next for Future Cities Index Using more parameters in the Future Cities Index, displaying data at the county and city level, linking each census tract to available census data, and allowing users to easily compare tracts
## Inspiration Remember the thrill of watching mom haggle like a pro at the market? Those nostalgic days might seem long gone, but here's the twist: we can help you carry out the generational legacy. Introducing our game-changing app – it's not just a translator, it’s your haggling sidekick. This app does more than break down language barriers; it helps you secure deals. You’ll learn the tricks to avoid the tourist trap and get the local price, every time. We’re not just reminiscing about the good old days; we’re rebooting them for the modern shopper. Get ready to haggle, bargain, and save like never before! ## What it does Back to the Market is a mobile app specifically crafted to enhance communication and negotiation for users in foreign markets. The app shines in its ability to analyze quoted prices using local market data, cultural norms, and user-set preferences to suggest effective counteroffers. This empowers users to engage in informed and culturally appropriate negotiations, without being overcharged. Additionally, Back to the Market offers a customization feature, allowing users to tailor their spending limits. The user-interface is simple and cute, making it accessible for a broad range of users regardless of their technical interface. Its integration of these diverse features positions Back to the Market not just as a tool for financial negotiation, but as a comprehensive companion for a more equitable, enjoyable, and efficient international shopping experience. ## How we built it Back to the Market was built by separating the front-end from the back-end. The front-end consists of React-Native, Expo Go, and Javascript to develop the mobile app. The back-end consists of Python, which was used to connect the front-end to the back-end. The Cohere API was used to generate the responses and determine appropriate steps to take during the negotiation process. ## Challenges we ran into During the development of Back to the Market, we faced two primary challenges. First was our lack of experience with React Native, a key technology for our app's development. While our team was composed of great coders, none of us had ever used React prior to the competition. This meant we had to quickly learn and master it from the ground up, a task that was both challenging and educational. Second, we grappled with front-end design. Ensuring the app was not only functional but also visually appealing and user-friendly required us to delve into UI/UX design principles, an area we had little experience with. Luckily, through the help of the organizers, we were able to adapt quickly with few problems. These challenges, while demanding, were crucial in enhancing our skills and shaping the app into the efficient and engaging version it is today. ## Accomplishments that we're proud of We centered the button on our first try 😎 In our 36 hours journey with Back to the Market, there are several accomplishments that stand out. Firstly, successfully integrating Cohere for the both the translation and bargaining aspects of the app was a significant achievement. This integration not only provided robust functionality but also ensured a seamless user experience, which was central to our vision. Secondly, it was amazing to see how quickly we went from zero React-Native experience to making an entire app with it in less than 24 hours. We were able to create both an aesthetically pleasing and highly functional. 
This rapid skill acquisition and application in a short time frame was a testament to our team's dedication and learning agility. Finally, we take great pride in our presentation and slides. We managed to craft an engaging and dynamic presentation that effectively communicated the essence of Back to the Market. Our ability to convey complex technical details in an accessible and entertaining manner was crucial in capturing the interest and understanding of our audience. ## What we learned Our journey with this project was immensely educational. We learned the value of adaptability through mastering React-Native, a technology new to us all, emphasizing the importance of embracing and quickly learning new tools. Furthermore, delving into the complexities of cross-cultural communication for our translation and bargaining features, we gained insights into the subtleties of language and cultural nuances in commerce. Our foray into front-end design taught us about the critical role of user experience and interface, highlighting that an app's success lies not just in its functionality but also in its usability and appeal. Finally, creating a product is the easy part, making people want it is where a lot of people fall. Thus, crafting an engaging presentation refined our storytelling and communication skills. ## What's next for Back to the Market Looking ahead, Back to the Market is poised for many exciting developments. Our immediate focus is on enhancing the app's functionality and user experience. This includes integrating translation features to allow users to stay within the app throughout their transaction. In parallel, we're exploring the incorporation of AI-driven personalization features. This would allow Back to the Market to learn from individual user preferences and negotiation styles, offering more tailored suggestions and improving the overall user experience. The idea can be expanded by creating a feature for users to rate suggested responses. Use these ratings to refine the response generation system by integrating the top-rated answers into the Cohere model with a RAG approach. This will help the system learn from the most effective responses, improving the quality of future answers. Another key area of development is utilising computer vision so that users can simply take a picture of the item they are interested in purchasing instead of having to input an item name, which is especially handy in areas where you don’t know exactly what you’re buying (ex. cool souvenir). Furthermore, we know that everyone loves a bit of competition, especially in the world of bargaining where you want the best deal possible. That’s why we plan on incorporating a leaderboard for those who save the most money via our negotiation tactics.
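A hedged sketch of the counteroffer step: the quoted price, budget, and city are folded into a prompt for Cohere's chat endpoint. The prompt wording and the use of `co.chat` with default model settings are assumptions, not the app's exact calls.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")   # placeholder key

def suggest_counteroffer(item: str, quoted_price: float, budget: float, city: str) -> str:
    """Ask the model for a culturally aware counteroffer and a short haggling line."""
    prompt = (
        f"You are a local haggling coach in {city}. A vendor quoted {quoted_price:.2f} "
        f"for '{item}'. The shopper's budget is {budget:.2f}. Suggest a fair counteroffer "
        "price and one polite, culturally appropriate sentence to say to the vendor."
    )
    response = co.chat(message=prompt, temperature=0.4)
    return response.text

print(suggest_counteroffer("hand-woven scarf", 30.0, 18.0, "Marrakech"))
```

Top-rated responses could later be fed back into the model's retrieval context, as described in the RAG idea above.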
## Inspiration Hurricane Milton, the most devastating disaster in over 30 years, left more than **3 million people** without power and overwhelmed emergency services scrambling to respond. **While drone technology now floods us with vast amounts of data, the real challenge lies in making sense of it in rapidly evolving environments**—operators are still stuck manually sifting through critical information when time is running out. In moments of chaos, the ability to scale autonomous search-and-rescue missions and intelligently uncover patterns from data becomes essential. SkySearch enables operators to uncover hidden insights about the environment in vast seas of data by integrating real-time environmental data, telemetry, and previous missions into a single pane of glass (a software mission control system). Operators can deploy fleets of drones to investigate regions and collect video feed used to reconstruct the scene, enabling them to drill down on areas of interest through a semantic search engine. ## What it does Our goal is to enable operators to **interact with data** and uncover hidden patterns effortlessly. SkySearch is built around the end-to-end search-and-rescue workflow in the following use cases: 1. Search: Drones are deployed through the software by operators and autonomously navigate through terrain to identify objects of interest in real-time. 2. Rescue: Operators can interact with live data to isolate hazards and locate people through a unified search interface. Based on this data, the system then recommends risk-aware, optimized rescue routes for first responders. Core features * Environment reconstruction of damaged regions and infrastructure with Gaussian splatting * Risk-aware pathfinding for rescue operations * Semantic search through disparate data sources to uncover patterns and recommend actions ## How we built it We designed an embedded architecture that enables software and hardware interfaces to bidirectionally communicate information and commands. * Drone SDK used for live video streaming * TP-Link antennas for a local Wi-Fi system to create a more robust data pipeline between the drone and the software interface, rather than relying on satellites and Wi-Fi * OpenCV and Apple Depth Pro used to process footage and classify data * SingleStore for real-time database management ## Challenges we ran into * Accounting for low-battery drones * Integration between hardware and software interfaces * Balancing human judgement with autonomy ## Accomplishments that we're proud of * Implemented an autonomous swarming framework to detect objects of interest * Integrated Gaussian splatting * Risk-aware map traversal and recommended "safe routes" for emergency responders * Dynamic data generation to create and query data on the fly, allowing for efficient testing and analysis and improving the app's responsiveness and visibility into critical information during rescue missions.
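To make SkySearch's risk-aware routing idea concrete, here is a minimal sketch of pathfinding over a risk-scored grid, in the spirit of the recommended "safe routes". The grid values, risk weighting, and function names are assumptions, not the project's actual implementation.

```python
# Minimal sketch: Dijkstra over a 2D grid where step cost blends distance and risk.
import heapq

def safest_route(risk, start, goal, risk_weight=5.0):
    """Return a start->goal path; step cost = 1 + risk_weight * cell risk (0..1)."""
    rows, cols = len(risk), len(risk[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + risk_weight * risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from goal to start to reconstruct the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

if __name__ == "__main__":
    # 0 = safe, close to 1 = hazardous (e.g., flooded or debris-covered cells).
    grid = [
        [0.0, 0.0, 0.9, 0.0],
        [0.0, 0.9, 0.9, 0.0],
        [0.0, 0.0, 0.0, 0.0],
    ]
    print(safest_route(grid, (0, 0), (0, 3)))  # detours around the high-risk cells
```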
winning
## Inspiration The inspiration for digifoot.ai stemmed from the growing concern about digital footprints and online presence in today's world. With the increasing use of social media, individuals often overlook the implications of their online activities. We aimed to create a tool that not only helps users understand their digital presence but also provides insights into how they can improve it. ## What it does digifoot.ai is a comprehensive platform that analyzes users' social media accounts, specifically Instagram and Facebook. It aggregates data such as posts, followers, and bios, and utilizes AI to provide insights on their digital footprint. The platform evaluates images and content from users’ social media profiles to ensure there’s nothing harmful or inappropriate, helping users maintain a positive online presence. ## How we built it We built digifoot.ai using Next.js and React for the frontend and Node.js for the backend. The application integrates with the Instagram and Facebook Graph APIs to fetch user data securely. We utilized OpenAI's API for generating insights based on the collected social media data. ## Challenges we ran into API Authentication and Rate Limits: Managing API authentication for both Instagram and OpenAI was complex. We had to ensure secure access to user data while adhering to rate limits imposed by these APIs. This required us to optimize our data-fetching strategies to avoid hitting these limits. Integrating Image and Text Analysis: We aimed to analyze both images and captions from Instagram posts using the OpenAI API's capabilities. However, integrating image analysis required us to understand how to format requests correctly, especially since the OpenAI API processes images differently than text. The challenge was in effectively combining image inputs with textual data in a way that allowed the AI to provide meaningful insights based on both types of content. ## Accomplishments that we're proud of We are proud of successfully creating a user-friendly interface that allows users to connect their social media accounts seamlessly. The integration of AI-driven analysis provides valuable feedback on their digital presence. Moreover, we developed a robust backend that handles data securely while complying with privacy regulations. ## What we learned Throughout this project, we learned valuable lessons about API integrations and the importance of user privacy and data security. We gained insights into how AI can enhance user experience by providing personalized feedback based on real-time data analysis. Additionally, we improved our skills in full-stack development and project management. ## What's next for digifoot.ai Looking ahead, we plan to enhance digifoot.ai by incorporating more social media platforms for broader analysis capabilities. We aim to refine our AI algorithms to provide even more personalized insights. Additionally, we are exploring partnerships with educational institutions to promote digital literacy and responsible online behavior among students.
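To illustrate the combined image-and-caption analysis digifoot.ai describes: the project's backend is Node.js, but a Python sketch of the same kind of request might look like the following. The model name, prompt, and `review_post` helper are assumptions, not the actual implementation.

```python
# Minimal sketch of sending one post's image URL and caption to the OpenAI API together.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_post(image_url: str, caption: str) -> str:
    """Ask the model whether a post could harm the user's digital footprint."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Caption: {caption}\nIs anything in this post potentially "
                         "harmful or inappropriate for a public profile? Answer briefly."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_post("https://example.com/post.jpg", "Late night out with friends!"))
```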
## Inspiration The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed DisasterAid.ai to provide concise, accurate, and timely updates. ## What it does DisasterAid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico Wildfires and Floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies. ## How we built it 1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises. 2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG). 3. AI Processing: The compiled information was input into Anthropic AI's Claude 3.5 model. 4. Output Generation: The AI model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations. ## Challenges we ran into 1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge. 2. Real-Time Processing: Developing a system capable of processing and summarizing information in real time required sophisticated algorithms and infrastructure. 3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret the information presented by the platform. ## Accomplishments that we're proud of 1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations. 2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform. 3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts. ## What we learned 1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management and the complexities involved in verifying information from various sources. 2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately. 3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve these needs. ## What's next for DisasterAid.ai 1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates. 2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses. 3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface. 4. Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of DisasterAid.ai. 5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
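A minimal sketch of DisasterAid.ai's AI-processing step (step 3) might look like the following, assuming the Anthropic Python SDK. The model identifier, prompt wording, and `summarize_updates` helper are illustrative assumptions rather than the project's actual code.

```python
# Minimal sketch: pack retrieved crisis snippets into a prompt and ask Claude 3.5 for an info-bite.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_updates(crisis: str, snippets: list[str]) -> str:
    """Condense retrieved snippets about a crisis into a short, factual summary."""
    context = "\n\n".join(snippets)
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model identifier
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Summarize the latest verified updates on {crisis} in 3 short "
                       f"bullet points, citing locations where possible.\n\n{context}",
        }],
    )
    return message.content[0].text

if __name__ == "__main__":
    demo = ["Evacuation ordered near Ruidoso as the fire spreads north.",
            "Flash-flood warnings issued for burn-scar areas through Friday."]
    print(summarize_updates("the New Mexico wildfires", demo))
```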
## Inspiration Studies have shown that social media thrives on emotional and moral content, particularly content that is angry in nature. Similar studies have shown that these types of posts affect people's well-being, mental health, and view of the world. We wanted to let people take control of their feed and gain insight into the potentially toxic accounts on their social media feed, so they can ultimately decide what to keep and what to remove while putting their mental well-being first. We want to make social media a place for knowledge and positivity, without the anger and hate that it can fuel. ## What it does The app performs an analysis on all the Twitter accounts the user follows and reads the tweets, checking for negative language and tone. Using machine learning algorithms, the app can detect negative and potentially toxic tweets and accounts to warn users of the potential impact, while giving them the option to act how they see fit with this new information. In creating this app, the goal is to **put the user first** and empower them with data. ## How We Built It We wanted to make this application as accessible as possible, and in doing so, we made it with React Native so both iOS and Android users can use it. We used Twitter OAuth to be able to access who they follow and their tweets while **never storing their token** for privacy and security reasons. The app sends the request to our web server, written in Kotlin, hosted on Google App Engine, where it uses Twitter's API and Google's Machine Learning APIs to perform the analysis and send the data back to the client. By using a multi-threaded approach for the tasks, we streamlined the workflow and reduced response time by **700%**, now being able to manage an order of magnitude more data. On top of that, we integrated GitHub Actions into our project, and, for a hackathon mind you, we have a *full continuous deployment* setup from our IDE to Google Cloud Platform. ## Challenges we ran into * While library and API integration was no problem in Kotlin, we had to find workarounds for issues regarding GCP deployment and local testing with Google's APIs * Since being cross-platform was our priority, we had issues integrating OAuth with its requirement for platform access (specifically for callbacks). * If each tweet was sent individually to Google's ML API, each user could have easily required over 1000 requests, exceeding our limit. Using our technique to package the tweets together, even though it is unsupported, we were able to reduce those requests to a maximum of 200, well below our limits. ## What's next for pHeed pHeed has a long journey ahead: from integrating with more social media platforms to new features such as account toxicity tracking and account suggestions. The social media space is one that is rapidly growing and needs a user-first approach to keep it sustainable; ultimately, pHeed can play a strong role in user empowerment and social good.
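To illustrate the request-batching technique pHeed describes (the real backend is written in Kotlin), here is a rough Python sketch that packs many tweets into a single Google Cloud Natural Language request and scores them per sentence. The function name and the simple averaging choice are assumptions.

```python
# Minimal sketch: join tweets into one document so a single API call covers many tweets.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def score_tweets_in_one_request(tweets: list[str]) -> float:
    """Return the average sentiment score for a batch of tweets (negative = toxic-leaning)."""
    document = language_v1.Document(
        content="\n".join(tweets),
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    scores = [s.sentiment.score for s in response.sentences]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    sample = ["This is outrageous, I can't believe it.", "What a lovely morning!"]
    print(score_tweets_in_one_request(sample))
```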
partial
## Inspiration Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese Aravind is sad. We want to solve this problem for all the Aravinds in the world -- not just for Chinese though, for any language! ## What it does TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to other people speaking a foreign language. This is an augmented reality app which means the subtitles will appear floating in front of you! ## How we built it We used Microsoft Cognitive Services's Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes. ## Challenges we ran into One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source. ## Accomplishments that we're proud of Our biggest achievement is definitely multi-threading the app to be able to translate a lot of different languages at the same time using different endpoints. This makes real-time multi-lingual conversations possible! ## What we learned We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system that works very well from scratch using OpenCV libraries and Python Imaging Library. ## What's next for TranslatAR We want to launch this App in the AppStore so people can replicate VR/AR on their own phones with nothing more than just an App and an internet connection. It also helps a lot of people whose relatives/friends speak other languages.
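A minimal sketch of TranslatAR's thread-per-input-source pattern might look like this. The `transcribe_and_translate` placeholder stands in for the Cognitive Services transcription and translation pipeline and is not a real API call; the source names are also assumptions.

```python
# Minimal sketch: one worker thread per microphone/language, all feeding one caption queue.
import threading
import queue

captions = queue.Queue()  # the AR renderer would drain this queue

def transcribe_and_translate(audio_source: str, language: str) -> str:
    """Placeholder: capture audio from the source, transcribe it, translate it to English."""
    return f"[{audio_source}/{language}] translated caption"

def caption_worker(audio_source: str, language: str) -> None:
    # Each worker listens to one input source in one language and posts captions.
    for _ in range(3):  # the real app would loop until shutdown
        captions.put(transcribe_and_translate(audio_source, language))

sources = [("mic-left", "zh-CN"), ("mic-right", "es-ES")]
threads = [threading.Thread(target=caption_worker, args=src, daemon=True) for src in sources]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not captions.empty():
    print(captions.get())
```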
## 💫 Inspiration Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally. We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours. We present to you.... **Locall!** ## 🏘 What it does Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks, and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours! For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help out! By using Locall, she's saving money on fees that the big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours. ## 🛠 How we built it We first prototyped our app design using Figma, and then moved on to using Flutter for the actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase. ## 🦒 What we learned Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app! ## 📱 What's next for Locall * We would want to train a TensorFlow model to better recommend services to users, as well as improve the user experience * Implementing chat and payment directly in the app would be helpful to improve requests and offers of services
## Inspiration All three teammates had independently converged on an idea of glasses with subtitles for the world around you. After we realized the impracticality of the idea (how could you read subtitles an inch from your eye without technology that we didn't have access to?) we flipped it around: instead of subtitles (with built-in translation!) that only you could see for *everybody else*, what if you could have subtitles for *you* that everyone else could see? This way, others could understand what you were saying, breaking barriers of language, distance, and physical impairments. The subtitles needed to be big so that people could easily read them, and somewhere prominent so people you were conversing with could easily find them. We decided on having a large screen in the front of a shirt/hoodie, which comes with the benefits of wearable tech such as easy portability. ## What it does The device has three main functions. The first is speech transcription in multiple languages, where what you say is turned into text and you can choose the language you're speaking in. The second is speech translation, which currently translates your transcribed speech into English. The final function is displaying subtitles, and your translated speech is displayed on the screen in the front of the wearable. ## How we built it We took in audio input from a microphone connected to a Raspberry Pi 5, which sends packets of audio every 100 ms to the Google Cloud speech-to-text API, allowing for live near real-time subtitling. We then sent the transcribed text to the Google Cloud translate API to translate the text into English. We sent this translated text to a file, which was read from to create our display using pygame. Finally, we sewed all the components into a hoodie that we modified to become our wearable subtitle device! ## Challenges we ran into There were no microphones, so we had to take a trip (on e-scooters!) to a nearby computer shop to buy microphones. We took one apart to be less bulky, desoldering and resoldering components in order to free the base components from the plastic encasing. We had issues with 3D printing parts for different components: at one point our print and the entire 3D printer went missing with no one knowing where it went, and many of our ideas were too large for the 3D printers. Since we attached everything to a hoodie, there were some issues with device placement and overheating. Our Raspberry Pi 5 reached 85 degrees C, and some adapters were broken due to device placement. Finally, a persistent problem we had was using Google Cloud's API to switch between recording different languages. We couldn't find many helpful references online, and the entire process was very complicated. ## Accomplishments that we're proud of We're proud of successfully transcribing text from audio from the taken-apart microphone. We were so proud, in fact, that we celebrated by going to get boba! ## What we learned We learned four main lessons. The first and second were that the materials you have access to can significantly increase your possibilities or difficulty (having the 7" OLED display helped a lot) but that even given limited materials, you still have the ability to create (when we weren't able to get a microphone from the Hardware Hub, we went out and bought a microphone that was not suited for our purposes and took it apart to make it work for us). 
The third and fourth were that seemingly simple tasks can be very difficult and time-consuming to do (as we found in the Google Cloud's APIs for transcription and translation) but also that large, complex tasks can be broken down into simple doable bits (the entire project: we definitely couldn't have made it possible without everyone taking on little bits one at a time). ## What's next for Project Tee In the future, we hope to make the wearable less bulky and more portable by having a flexible OLED display embedded in the shirt, and adding an alternative power source of solar panels. We also hope to support more languages in the future (we currently support five: English, Spanish, French, Mandarin, and Japanese) both to translate from and to, as well as a possible function to automatically detect what language a user is speaking. As the amount of language options increases, we will likely need an app or website as an option for people to change their language options more easily.
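As a rough sketch of Project Tee's translate-and-display handoff, assuming the `google-cloud-translate` client and a subtitle file polled by the pygame display loop (the file name and helper name are assumptions, and the speech-to-text streaming step is omitted here):

```python
# Minimal sketch: translate a transcribed phrase to English and write it for the display to read.
from google.cloud import translate_v2 as translate

SUBTITLE_FILE = "subtitle.txt"  # assumed path read by the pygame subtitle screen
client = translate.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS

def push_subtitle(transcript: str, source_language: str) -> str:
    """Translate one transcribed phrase to English and hand it to the display."""
    result = client.translate(
        transcript, source_language=source_language, target_language="en"
    )
    text = result["translatedText"]
    with open(SUBTITLE_FILE, "w", encoding="utf-8") as f:
        f.write(text)
    return text

if __name__ == "__main__":
    print(push_subtitle("¿Dónde está la biblioteca?", "es"))
```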
winning
## Inspiration SustainaPal is a project that was born out of a shared concern for the environment and a strong desire to make a difference. We were inspired by the urgent need to combat climate change and promote sustainable living. Seeing the increasing impact of human activities on the planet's health, we felt compelled to take action and contribute to a greener future. ## What it does At its core, SustainaPal is a mobile application designed to empower individuals to make sustainable lifestyle choices. It serves as a friendly and informative companion on the journey to a more eco-conscious and environmentally responsible way of life. The app helps users understand the environmental impact of their daily choices, from transportation to energy consumption and waste management. With real-time climate projections and gamification elements, SustainaPal makes it fun and engaging to adopt sustainable habits. ## How we built it The development of SustainaPal involved a multi-faceted approach, combining technology, data analysis, and user engagement. We opted for a React Native framework, and later incorporated Expo, to ensure the app's cross-platform compatibility. The project was structured with a focus on user experience, making it intuitive and accessible for users of all backgrounds. We leveraged React Navigation and React Redux for managing the app's navigation and state management, making it easier for users to navigate and interact with the app's features. Data privacy and security were paramount, so robust measures were implemented to safeguard user information. ## Challenges we ran into Throughout the project, we encountered several challenges. Integrating complex AI algorithms for climate projections required a significant amount of development effort. We also had to fine-tune the gamification elements to strike the right balance between making the app fun and motivating users to make eco-friendly choices. Another challenge was ensuring offline access to essential features, as the app's user base could span areas with unreliable internet connectivity. We also grappled with providing a wide range of educational insights in a user-friendly format. ## Accomplishments that we're proud of Despite the challenges, we're incredibly proud of what we've achieved with SustainaPal. The app successfully combines technology, data analysis, and user engagement to empower individuals to make a positive impact on the environment. We've created a user-friendly platform that not only informs users but also motivates them to take action. Our gamification elements have been well-received, and users are enthusiastic about earning rewards for their eco-conscious choices. Additionally, the app's offline access and comprehensive library of sustainability resources have made it a valuable tool for users, regardless of their internet connectivity. ## What we learned Developing SustainaPal has been a tremendous learning experience. We've gained insights into the complexities of AI algorithms for climate projections and the importance of user-friendly design. Data privacy and security have been areas where we've deepened our knowledge to ensure user trust. We've also learned that small actions can lead to significant changes. The collective impact of individual choices is a powerful force in addressing environmental challenges. SustainaPal has taught us that education and motivation are key drivers for change. ## What's next for SustainaPal The journey doesn't end with the current version of SustainaPal. 
In the future, we plan to further enhance the app's features and expand its reach. We aim to strengthen data privacy and security, offer multi-language support, and implement user support for a seamless experience. SustainaPal will also continue to evolve with more integrations, such as wearable devices, customized recommendations, and options for users to offset their carbon footprint. We look forward to fostering partnerships with eco-friendly businesses and expanding our analytics and reporting capabilities for research and policy development. Our vision for SustainaPal is to be a global movement, and we're excited to be on this journey towards a healthier planet. Together, we can make a lasting impact on the world.
## Inspiration Around 40% of the lakes in America are too polluted for aquatic life, swimming, or fishing. Although children make up 10% of the world’s population, over 40% of the global burden of disease falls on them. Environmental factors contribute to more than 3 million children under age five dying every year. Pollution kills over 1 million seabirds and 100 million mammals annually. Recycling and composting alone prevented 85 million tons of waste from being dumped in 2010. Currently there are over 500 million cars in the world; by 2030 the number will rise to 1 billion, thereby doubling pollution levels. High-traffic roads have more concentrated levels of air pollution, so people living close to these areas have an increased risk of heart disease, cancer, asthma, and bronchitis. Inhaling air pollution takes away at least 1-2 years of a typical human life. 25% of deaths in India and 65% of deaths in Asia are the result of air pollution. Over 80 billion aluminium cans are used every year around the world. If you throw away aluminium cans, they can stay in that form for up to 500 years or more. People aren’t recycling as much as they should; as a result, the rainforests are being cut down at approximately 100 acres per minute. On top of this, with me being near the Great Lakes and Neeral being in the Bay Area, we have both seen not only tremendous amounts of air pollution, but also marine pollution and pollution in the great freshwater lakes around us. This inspired us to create this project. ## What it does The React Native app connects with the website Neeral made in order to create a comprehensive solution to this problem. There are five main sections in the React Native app: The first section is an area where users can collaborate by creating posts in order to reach out to others, meet up, and organize events to reduce pollution. One example of this could be a passionate environmentalist who is organizing a beach trash pickup and wishes to bring along more people. With the help of this feature, more people would be able to learn about this and participate. The second section is a petitions section where users have the ability to support local groups or sign a petition in order to push for change. These petitions include placing pressure on large corporations to reduce carbon emissions and so forth. This allows users to take action effectively. The third section is the forecasts tab, where users are able to retrieve various pollution data points. This includes the ability for the user to obtain heat maps of air quality, pollution, and pollen levels, and to retrieve recommended procedures for not only the general public but also for special-case scenarios, using APIs. The fourth section is a tips and procedures tab for users to be able to respond to certain situations. They are able to consult this guide and find the situation that matches theirs in order to find the appropriate action to take. This helps the end user stay calm during situations such as the one happening in California with dangerously high levels of carbon. The fifth section is an area where users are able to use machine learning in order to figure out whether they are in a place of trouble. In many instances, people do not know exactly where they are, especially when travelling or going somewhere unknown.
With the help of machine learning, the user is able to enter certain information regarding their surroundings and the algorithm decides whether they are in trouble. The algorithm has 90% accuracy and is quite efficient. ## How I built it For the React Native part of the application, I will break it down section by section. For the first section, I simply used Firebase as a backend, which allowed a simple, easy, and fast way of retrieving and pushing data to the cloud storage. This allowed me to spend time on other features, and due to my ever-growing experience with Firebase, this did not take too much time. I simply added a form which pushed data to Firebase, and when you go to the home page it refreshes to show that the cloud was updated in real time. For the second section, I used NativeBase in order to create my UI and found an assortment of petitions, which I then linked and added images from their website in order to create the petitions tab. I then used expo-web-browser to deep link the website, opening the link in Safari from within the app. For the third section, I used breezometer.com’s pollution, air quality, pollen, and heat map APIs in order to create an assortment of data points, health recommendations, and visual graphics to represent pollution in several ways. The APIs also provided information such as the most common pollutant and the protocols that different age groups and people with certain conditions should follow. With such an extensive API, there were many endpoints I wanted to add in, but not all were added due to lack of time. The fourth section is very much similar to the second section, as it is an assortment of links, proofread and verified to be truthful sources, so that the end user has a procedure to turn to for extreme emergencies. As we see horrible things happen, such as the wildfires in California, air quality becomes a serious concern for many, and as a result these procedures help the user stay calm and informed. The fifth section uses the machine learning algorithm described above, which takes the information the user enters about their surroundings and predicts whether they are in trouble. ## Challenges I ran into API query bugs were a big issue, particularly formatting the queries and mapping the returned data into the UI. It took some time and had us working right up to the end, but we were still able to complete our project and meet our goals. ## What's next for PRE-LUTE We hope to use this in areas that commonly suffer due to extravagantly large amounts of pollution, such as Delhi, where it is often practically hard to see due to the amount of pollution. We hope to create a finished product and release it on the App Store and Play Store.
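A minimal sketch of PRE-LUTE's fifth-section classifier is shown below, with made-up feature names and toy training data; the team's actual features, dataset, and model choice are not described in detail in the write-up.

```python
# Minimal sketch of an "am I in a trouble spot" classifier (illustrative features and data).
from sklearn.ensemble import RandomForestClassifier

# Assumed features: [air quality index, visibility in km, nearby traffic level 0-10]
X_train = [
    [40, 10.0, 2], [55, 8.0, 3], [60, 9.0, 1],    # safe surroundings
    [180, 1.5, 9], [220, 0.8, 8], [160, 2.0, 7],  # hazardous surroundings
]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = safe, 1 = in trouble

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

def in_trouble(aqi: float, visibility_km: float, traffic: int) -> bool:
    """Predict whether the user's current surroundings look hazardous."""
    return bool(model.predict([[aqi, visibility_km, traffic]])[0])

if __name__ == "__main__":
    print(in_trouble(aqi=200, visibility_km=1.0, traffic=9))  # likely True
```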
## 🌱 Inspiration With the ongoing climate crisis, we recognized a major gap in the incentives for individuals to make greener choices in their day-to-day lives. People want to contribute to the solution, but without tangible rewards, it can be hard to motivate long-term change. That's where we come in! We wanted to create a fun, engaging, and rewarding way for users to reduce their carbon footprint and make eco-friendly decisions. ## 🌍 What it does Our web app is a point-based system that encourages users to make greener choices. Users can: * 📸 Scan receipts using AI, which analyzes purchases and gives points for buying eco-friendly products from partner companies. * 🚴‍♂️ Earn points by taking eco-friendly transportation (e.g., biking, public transit) by tapping their phone via NFC. * 🌿 See real-time carbon emission savings and get rewarded for making sustainable choices. * 🎯 Track daily streaks, unlock milestones, and compete with others on the leaderboard. * 🎁 Browse a personalized rewards page with custom suggestions based on trends and current point total. ## 🛠️ How we built it We used a mix of technologies to bring this project to life: * **Frontend**: Remix, React, ShadCN, Tailwind CSS for smooth, responsive UI. * **Backend**: Express.js, Node.js for handling server-side logic. * **Database**: PostgreSQL for storing user data and points. * **AI**: GPT-4 for receipt scanning and product classification, helping to recognize eco-friendly products. * **NFC**: We integrated NFC technology to detect when users make eco-friendly transportation choices. ## 🔧 Challenges we ran into One of the biggest challenges was figuring out how to fork the RBC points API, adapt it, and then code our own additions to match our needs. This was particularly tricky when working with the database schemas and migration files. Sending image files across the web also gave us some headaches, especially when incorporating real-time processing with AI. ## 🏆 Accomplishments we're proud of One of the biggest achievements of this project was stepping out of our comfort zones. Many of us worked with a tech stack we weren't very familiar with, especially **Remix**. Despite the steep learning curve, we managed to build a fully functional web app that exceeded our expectations. ## 🎓 What we learned * We learned a lot about integrating various technologies, like AI for receipt scanning and NFC for tracking eco-friendly transportation. * The biggest takeaway? Knowing that pushing the boundaries of what we thought was possible (like the receipt scanner) can lead to amazing outcomes! ## 🚀 What's next We have exciting future plans for the app: * **Health app integration**: Connect the app to health platforms to reward users for their daily steps and other healthy behaviors. * **Mobile app development**: Transfer the app to a native mobile environment to leverage all the features of smartphones, making it even easier for users to engage with the platform and make green choices.
winning
## Inspiration We're tired of seeing invasive advertisements and blatant commercialization all around us. With new AR technology, there may be better ways to help mask these advertisements and put them out of sight, or in a new light. ## What it does By utilizing Vuforia technologies and APIs, we can use image tracking to locate advertisements and overlay them with dynamic content. ## How we built it Using Unity and Microsoft's Universal Windows Platform, we created an application that has robust tracking capabilities. ## Challenges we ran into Microsoft's UWP platform has many challenges, such as its various requirements and dependencies. Even with experienced Microsoft personnel, due to certain areas lacking implementation, we were ultimately unable to get certain portions of our code running on the Microsoft HoloLens device. ## Accomplishments that we're proud of Using the Vuforia API, we implemented a robust tracking solution while also pairing it with the powerful Giphy API to display dynamic, networked content in lieu of these ads. ## What we learned While Unity and Microsoft's UWP are powerful platforms, sometimes there can be significant issues that hinder development in a big way. Using a multitude of devices and supported frameworks, we managed to work around our blocks the best we could in order to demonstrate and develop the core of our application. ## What's next for Ad Block-AR We're hoping to extend this software to run on a multitude of devices and technologies, with the ultimate aim of creating one of the most tangible and effective image recognition programs for the future.
## Inspiration As a team, we've been increasingly concerned about the *data privacy* of users on the internet in 2022. There’s no doubt that the era of vast media consumption has resulted in a monopoly of large tech firms who hold a strong grasp over each and every single user input. When every action you make results in your data being taken and monetized to personalize ads, it’s clear to see the lack of security and protection this can create for unsuspecting everyday users. That’s why we wanted to create an all-in-one platform to **decentralize the world of advertising** and truly give back **digital data ownership** to users. Moreover, we wanted to increase the transparency of what happens with this data, should the user opt-in to provide this info to advertisers. That’s where our project, **Sonr rADar**, comes in. ## What it does As the name suggests—Sonr rADar, integrated into the **Sonr** blockchain ecosystem, is a mobile application which aims to decentralize the advertising industry and offer a robust system for users to feel more empowered about their digital footprint. In addition to the security advantages and more meaningful advertisements, users can also monetarily benefit from their interactions with advertisers. We incentivize users by rewarding them with **cryptocurrency tokens** (such as **SNR**) for their time spent on ads, creating a **win/win situation**. Not only that, but it can send advertisers anonymous info and analytics about how successful their ads are, helping them to improve further. Upon opening the app, users are met with a clean dashboard UI displaying their current token balance, as well as transfer history. In the top right corner, there exists a dropdown menu where the user can choose from various options. This includes the opportunity to enter and update their personal information that they would like to share with advertisers (for ad personalization), an ads dashboard where they can view all relevant ads, and a permissions section for them to opt in/out of where and how their data is being shared (e.g. third parties). Their information is then carefully stored using **data schemas, buckets, and objects**, which directly utilize Sonr’s proprietary **blockchain** technology to offer a *fully transparent* data management solution. ## How we built it We built this application using the Flutter SDK (more specifically, **motor\_flutter**), programming languages such as Dart & C++, and a suite of tools associated with Sonr; including the Speedway CLI! We also utilized a **testing dataset of 500** user profiles (names, emails, phone numbers, locations) for the **schema architecture**. ## Challenges we ran into Since our project leverages tools from an early-stage startup, some documentation was quite hard to find, so there were many instances when we were stuck during the backend implementation. Luckily, we were able to visit the Sonr sponsor booth and get valuable assistance with any troubleshooting—as they were very helpful. It was also largely our first time working with Flutter, Go, and a lot of the other components of blockchain and Sonr, so naturally there was a lot to process and overcome. ## Accomplishments that we're proud of We're proud to have created a proof-of-concept prototype for our ambitious idea; working together as a team to overcome the many obstacles we faced along the way! We hope that this project will create a lasting impact in the advertising space. 
## What we learned How to work together as a team and delegate tasks, balance our time amongst in-person activities + working on our project, and create something exciting in only 36 hours! By working with feedback and workshops offered by the sponsor CEO and Chief of Staff, we got an insightful look into the eventful startup experience and what that journey may look like. It's hard to think of a better way to dive into the exciting opportunities of Blockchain! ## What's next for Sonr rADar The state of our project is only the beginning; we plan to further build out the individual pages on the application and eventually deploy it to the App Store! We also hope to receive guidance and mentorship from the Sonr team, so that we can utilize their products in the most effective way possible. As we gain more traction, more advertisers and users will join—making our app even better. It's a positive feedback loop that can one day allow us to even bring the decentralized advertisement platform into AI! ## Team Member Badge IDs **Krish:** TSUNAMI TRAIT CUBE SCHEMA **Mihir:** STADIUM MOBILE LAP BUFFER **Amire:** YAM MANOR LIABILITY BATH **Robert:** CRACKERS TRICK GOOD SKEAN
## Inspiration We wanted to take advantage of AR and object detection technologies to give people safer walking experiences and to communicate distance information that helps people with vision loss navigate. ## What it does SeerAR augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts them to speech to alert the user. ## How we built it ARKit and RealityKit use the LiDAR sensor to detect distance; AVFoundation provides text-to-speech; Core ML runs a YOLOv3 real-time object detection model; the UI is built with SwiftUI. ## Challenges we ran into Computational efficiency. Going through all pixels from the LiDAR sensor in real time wasn’t feasible, so we had to optimize by cropping the sensor data to the center of the screen. ## Accomplishments that we're proud of It works as intended. ## What we learned We learned how to combine AR, AI, LiDAR, ARKit, and SwiftUI to make an iOS app in 15 hours. ## What's next for SeerAR Expand to Apple Watch and Android devices; improve the accuracy of object detection and recognition; connect with Firebase and Google Cloud APIs.
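SeerAR itself is written in Swift, but the center-cropping optimization can be sketched in a few lines of Python. The array shape, crop fraction, and function name below are assumptions for illustration only.

```python
# Minimal sketch: check only a central crop of the depth map for the nearest obstacle.
import numpy as np

def nearest_obstacle_distance(depth_map: np.ndarray, crop_fraction: float = 0.2) -> float:
    """Return the closest distance (in meters) within the central crop of a depth map."""
    h, w = depth_map.shape
    ch, cw = int(h * crop_fraction), int(w * crop_fraction)
    top, left = (h - ch) // 2, (w - cw) // 2
    center = depth_map[top:top + ch, left:left + cw]
    return float(center.min())

if __name__ == "__main__":
    fake_depth = np.full((192, 256), 4.0)   # 4 m everywhere...
    fake_depth[90:100, 120:130] = 0.8       # ...except an obstacle near the center
    print(nearest_obstacle_distance(fake_depth))  # -> 0.8
```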
partial
## Inspiration My friend and I needed to find an apartment in New York City during the summer. We found it very difficult to look through multiple listing pages at once, so we thought a bot that suggests apartments would be helpful. However, we did not stop there. We realized that we could also use Machine Learning so the bot would learn what we like and suggest better apartments. That is why we decided to build RealtyAI. ## What it does It is a Facebook Messenger bot that allows people to search through Airbnb listings while learning what each user wants. By giving feedback to the bot, we learn your **general style** and thus we are able to recommend the apartments that you are going to like, under your budget, in any city of the world :) We can also book the apartment for you. ## How I built it Our app used a Flask app as a backend and Facebook Messenger to communicate with the user. The Facebook bot was powered by api.ai and the ML was done on the backend with sklearn's Naive Bayes classifier. ## Challenges I ran into Our biggest challenge was using Python's SQL ORM to store our data. In general, integrating the many libraries we used was quite challenging. The next challenge we faced was time: our application was slow and timing out on multiple requests. So we implemented an in-memory cache of all the requests, but most importantly we modified the design of the code to make it multi-threaded. ## Accomplishments that I'm proud of Our workflow was very effective. Using Heroku, every commit to master immediately deployed on the server, saving us a lot of time. In addition, we all managed the repo well and had few merge conflicts. We all used a shared database on AWS RDS, which saved us a lot of database schema migration nightmares. ## What I learned We learned how to use Python in depth, integrating it with MySQL and sklearn. We also discovered how to spawn a database with AWS. We also learned how to save classifiers to the database and reload them. ## What's next for Virtual Real Estate Agent If we win hopefully someone will invest! It can be used by companies to automatically arrange accommodations for people coming in for interviews, or by individuals who just want to find the best apartment for their own style!
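A minimal sketch of RealtyAI's preference learner is shown below: a Naive Bayes classifier over simple listing features, pickled so it could be stored in and reloaded from a database. The feature choices, toy data, and helper names are illustrative assumptions, not the actual model.

```python
# Minimal sketch: learn like/dislike from feedback and filter future listings accordingly.
import pickle
from sklearn.naive_bayes import GaussianNB

# Assumed features: [price per night, bedrooms, distance to center in km]
liked    = [[120, 1, 2.0], [150, 2, 3.5], [110, 1, 1.0]]
disliked = [[300, 1, 8.0], [260, 3, 9.5], [90, 0, 12.0]]

model = GaussianNB()
model.fit(liked + disliked, [1] * len(liked) + [0] * len(disliked))

def recommend(listings):
    """Return only the listings the user is predicted to like."""
    return [listing for listing in listings if model.predict([listing])[0] == 1]

# Serialize/deserialize so the trained model could live in a database column.
blob = pickle.dumps(model)
restored = pickle.loads(blob)

if __name__ == "__main__":
    print(recommend([[130, 1, 2.5], [280, 2, 10.0]]))
```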
## Inspiration The inspiration for Green Cart is to support local farmers by connecting them directly to consumers for fresh and nutritious produce. The goal is to promote community support for farmers and encourage people to eat fresh and locally sourced food. ## What it does GreenCart is a webapp that connects local farmers to consumers for fresh, nutritious produce, allowing consumers to buy directly from farmers in their community. The app provides a platform for consumers to browse and purchase produce from local farmers, and for farmers to promote and sell their products. Additionally, GreenCart aims to promote community support for farmers and encourage people to eat fresh and locally sourced food. ## How we built it The GreenCart app was built using a combination of technologies including React, TypeScript, HTML, CSS, Redux and various APIs. React is a JavaScript library for building user interfaces, TypeScript is a typed superset of JavaScript that adds optional static types, HTML and CSS are used for creating the layout and styling of the app, Redux is a library that manages the state of the app, and the APIs allow the app to connect to different services and resources. The choice of these technologies allowed the team to create a robust and efficient app that can connect local farmers to consumers for fresh, nutritious produce while supporting the community. ## Challenges we ran into The GreenCart webapp development team encountered a number of challenges during the design and development process. The initial setup of the project, which involved setting up the project structure using React, TypeScript, HTML, CSS, and Redux, and integrating various APIs, was a challenge. Additionally, utilizing GitHub effectively as a team to ensure proper collaboration and version control was difficult. Another significant challenge was designing the UI/UX of the app to make it visually appealing and user-friendly. The team also had trouble with the search function, making sure it could effectively filter and display results. Another major challenge was debugging and fixing issues with the checkout balance not working properly. Finally, time constraints were a challenge as the team had to balance the development of various features while meeting deadlines. ## Accomplishments that we're proud of As this was the first time for most of the team members to use React, TypeScript, and other technologies, the development process presented some challenges. Despite this, the team was able to accomplish many things that they were proud of. Some examples of these accomplishments include: Successfully setting up the initial project structure and integrating the necessary technologies. Implementing a user-friendly and visually appealing UI/UX design for the app. Working collaboratively as a team and utilizing GitHub for version control and collaboration. Successfully launching the web app and getting positive feedback from users. ## What we learned During this hackathon, the team learned a variety of things, including: How to use React, TypeScript, HTML, CSS, and Redux to build a web application. How to effectively collaborate as a team using GitHub for version control and issue tracking. How to design and implement a user-friendly and visually appealing UI/UX. How to troubleshoot and debug issues with the app, such as the blog page not working properly. How to work under pressure and adapt to new technologies and challenges.
They also learned how to build a web app that can connect local farmers to consumers for fresh, nutritious produce while supporting the community. Overall, the team gained valuable experience in web development, teamwork, and project management during this hackathon. ## What's next for Green Cart Marketing and Promotion: Develop a comprehensive marketing and promotion strategy to attract customers and build brand awareness. This could include social media advertising, email campaigns, and influencer partnerships. Improve User Experience: Continuously gather feedback from users and use it to improve the app's user experience. This could include adding new features, fixing bugs and optimizing the performance. Expand the Product Offerings: Consider expanding the range of products offered on the app to attract a wider customer base. This could include organic and non-organic produce, meat, dairy and more. Partnership with Local Organizations: Form partnerships with local organizations such as supermarkets, restaurants, and community groups to expand the reach of the app and increase the number of farmers and products available. ## Git Repo <https://github.com/LaeekAhmed/Green-Cart/tree/master/Downloads/web_dev/Khana-master>
## Inspiration A member of our core team is very close with his cousin, who is severely disabled. Thus, we approached YHack with a socially conscious hack that would assist those who don't have the same opportunities to attend hackathons as we do. Although our peers who are visually impaired use current methods and technology including echolocation, seeing-eye dogs, and a white cane for assistance, existing aids fall short of the potential presented by today's technology. We decided to design and construct a revolutionary product that allows those who are blind to have a greater sense of their surroundings rather than what lies five feet ahead. Looking to our community, we reached out and spoke with a prominent economics professor from Brown University, Professor Roberto Serrano. He explained that, "The cane isn't perfect. For example, if an obstacle is not on the floor, but is up above, you are likely to bump into it. I would think that some electronic device that alerts me to its presence would help." Thus, Louis was born, a proprietary, mobile braille reader that not only alerts but also locates and describes one's surroundings from a small, integrated camera. ## What it does Louis uses a Raspberry Pi camera to take images that are then uploaded and processed by the Microsoft Azure (vision) API, Google Cloud (vision) API, and Facebook Graph API to provide short-text summaries of the image. This text is converted to a Braille matrix which is transformed into a series of stepper motor signals. Using two stepper motors, we translate the image into a series of Braille characters that can be read simply by the sliding of a finger. ## How we built it The hardware was designed using SolidWorks run on Microsoft Remote Desktop. Over a series of 36 hours we ventured to Maker Spaces to prototype our designs before returning to Yale to integrate them and refine our design. ## Challenges we ran into In order to make an economically feasible system rather than creating actuators for every braille button, we devised a system using a series of eight dot-combinations that could represent an unlimited number of Braille characters. We designed our own braille discs that are turned into a recognizable Braille pattern. We ran into a huge roadblock of how to turn one Braille piece at a time while keeping the rest constant. We overcame this obstacle and devised and designed a unique, three-part inner turning mechanism that allowed us to translate the whole platform horizontally and rotate a single piece at a time. At first, we attempted to transform a visual input to an audio headset or speaker, but we realized we were making a product rather than something that actually makes a difference in people's lives. When someone loses one of their senses, the others become incredibly more precise. Many people in the world who are visually impaired count on the sounds we hear every day to guide them; therefore, it was imperative that we look towards touch: a sense that is used far less for reference and long-range navigation. ## What we learned In 36 hours we were able to program and generate a platform that takes the images we see and others cannot, and converts them into a physical language on a 3D-printed, completely self-designed system. In addition, we explored the numerous applications of Microsoft Azure and the burgeoning field of image processing. ## What's next for Louis We are going to Kinect!
Unfortunately, we were unable to gain access to a Microsoft Kinect; nevertheless, we look forward to returning to Brown University with Louis and integrating the features of the Kinect into a Braille output. We hope to grant our peers and colleagues with visual impairment unparalleled access to their surroundings using touch and the physical language of braille.
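For illustration, Louis's text-to-Braille step could be sketched as below. Only a handful of letters are included, and the dot table and helper names are assumptions rather than the project's actual firmware; the dot numbering follows the standard 6-dot convention (1-2-3 down the left column, 4-5-6 down the right).

```python
# Minimal sketch: map caption text to 3x2 Braille dot matrices for the motor controller.
BRAILLE_DOTS = {
    "l": {1, 2, 3},
    "o": {1, 3, 5},
    "u": {1, 3, 6},
    "i": {2, 4},
    "s": {2, 3, 4},
}
DOT_POSITION = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (0, 1), 5: (1, 1), 6: (2, 1)}

def braille_cell(char: str):
    """Return a 3x2 matrix of 0/1 dots for one character (unknown chars give a blank cell)."""
    cell = [[0, 0], [0, 0], [0, 0]]
    for dot in BRAILLE_DOTS.get(char.lower(), set()):
        row, col = DOT_POSITION[dot]
        cell[row][col] = 1
    return cell

def text_to_braille(text: str):
    return [braille_cell(c) for c in text]

if __name__ == "__main__":
    for ch, cell in zip("louis", text_to_braille("louis")):
        print(ch, cell)
```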
partial
## Inspiration Vision—our most dominant sense—plays a critical role in every facet and stage of our lives. Over 40 million people worldwide (and increasing) struggle with blindness and 20% of those over 85 experience permanent vision loss. In a world catered to the visually-abled, developing assistive technologies to help blind individuals regain autonomy over their living spaces is becoming increasingly important. ## What it does ReVision is a pair of smart glasses that seamlessly intertwines the features of AI and computer vision to help blind people navigate their surroundings. One of our main features is the integration of an environmental scan system to describe a person’s surroundings in great detail—voiced through Google text-to-speech. Not only this, but the user is able to have a conversation with ALICE (Artificial Lenses Integrated Computer Eyes), ReVision’s own AI assistant. “Alice, what am I looking at?”, “Alice, how much cash am I holding?”, “Alice, how’s the weather?” are all examples of questions ReVision can successfully answer. Our glasses also detect nearby objects and signal with buzzing when the user approaches an obstacle or wall. Furthermore, ReVision is capable of scanning to find a specific object. For example—in an aisle of the grocery store—“Alice, where is the milk?” will have Alice scan the view for milk to let the user know of its position. With ReVision, we are helping blind people regain independence within society. ## How we built it To build ReVision, we used a combination of hardware components and modules along with CV. For hardware, we integrated an Arduino Uno to seamlessly communicate back and forth between some of the inputs and outputs, like the ultrasonic sensor and the vibrating buzzer for haptic feedback. Our features that helped the user navigate their world heavily relied on a dismantled webcam that is hooked up to a COCO-SSD model and GPT-4 to identify objects and describe the environment. We also used text-to-speech and speech-to-text to make interacting with ALICE friendly and natural. As for the prototype of the actual product, we used stock paper and glue—held together with the framework of an old pair of glasses. We attached the hardware components to the inside of the frame, which pokes out to retain information. An additional feature of ReVision is the effortless attachment of the shade cover, covering the lens of our glasses. We did this using magnets, allowing for a sleek and cohesive design. ## Challenges we ran into One of the most prominent challenges we conquered was soldering by ourselves for the first time, as well as DIYing our USB cord for this project. As well, our web camera somehow got ripped once we had finished our prototype and stopped working. To fix this, we had to solder the wires and dissect our goggles to fix their composition within the frames. ## Accomplishments that we're proud of Through human design thinking, we knew that we wanted to create technology that not only promotes accessibility and equity but also does not look too distinctive. We are incredibly proud of the fact that we created a wearable assistive device that is disguised as an everyday accessory. ## What we learned With half our team being completely new to hackathons and working with AI, taking on this project was a large jump into STEM for us. We learned how to program AI, wearable technologies, and even how to solder since our wires were all so short for some reason.
Combining and exchanging our skills and strengths, our team also learned design skills—making the most compact, fashionable glasses to act as a container for all the technologies they hold. ## What's next for ReVision Our mission is to make the world a better place, step by step. For the future of ReVision, we want to expand our horizons to help those with other sensory disabilities, such as deafness and even impairments of touch.
## Foreword Before we begin, a **very big thank you** to the NVIDIA Jetson team for their generosity in making this project submission possible. ## Inspiration Nearly 100% of sight-assistance devices for the blind fall into just two categories: Voice assistants for navigation, and haptic feedback devices for directional movement. Although the intent behind these devices is noble, they fail in delivering an effective sight-solution for the blind. Voice assistant devices that relay visual information from a camera-equipped computer to the user are not capable of sending data to the user in real time, making them very limited in capability. Additionally, the blind are heavily dependent on their hearing in order to navigate environments. They have to use senses besides vision to the limit to make up for their lack of sight, and using a voice assistant clogs up and introduces noise to this critical sensory pathway. The haptic feedback devices are even more ineffective; these simply tell the user to move left, right, backwards, etc. While these devices provide real-time feedback and don’t introduce noise to one’s hearing like with the voice assistants, they provide literally no information regarding what is in front of the user; it simply just tells them how to move. This doesn’t add much value for the blind user. It's 2021. Voice assistant and haptic feedback directional devices are a thing of the past. Having blind relatives and friends, we wanted to create a project that leverages the latest advancements in technology to create a truly transformative solution. After about a week's worth of work, we've developed OptiLink; a brain machine interface that feeds AI-processed visual information **directly to the user's brain** in real-time, eliminating the need for ineffective voice assistant and directional movement assistants for the blind. ## What it does OptiLink is the next generation of solutions for the blind. Instead of using voice assistants to tell the user what’s in front of them, it sends real-time AI processed visual information directly to the user’s brain in a manner that they can make sense of. So if our object detection neural network detects a person, the blind user will actually be able to tell that a person is in front of them through our brain-machine interface. The user will also be able to gauge distance to environmental obstacles through echolocation, once again directly fed to their brain. Object detection is done through a camera equipped NVIDIA Jetson Nano; a low-power single board computer optimized for deep learning. A Bluetooth enabled nRF52 microcontroller connected to an ultrasonic sensor provides the means to process distances for echolocation. These modules are conveniently packed in a hat for use by the blind. On the Nano, an NVIDIA Jetpack SDK accelerated MobileNet neural network detects objects (people, cars, etc.), and sends an according output over Bluetooth via the Bleak library to 2 Neosensory Buzz sensory substitution devices located on each arm. These devices, created by neuroscientists David Eagleman and Scott Novich at the Baylor School of Medicine, contain 4 LRAs to stimulate specific receptors in your skin through patterns of vibration. The skin receptors send electrical information to your neurons and eventually to your brain, and your brain can learn to process this data as a sixth sense. 
Specific patterns of vibration on the hands tell the user what they’re looking at (for example, a chair will correspond to pattern A, a car will correspond to pattern B). High-priority objects like people and cars will be relayed through feedback from the right hand, while low-priority objects (such as kitchenware and laptops) will be relayed via feedback from the left hand. There are ~90 such possible objects that can be recognized by the user. Distance processed from the ultrasonic sensor is fed through a third Neosensory Buzz on the left leg, with vibrational intensity corresponding to distance to an obstacle. ## How we built it OptiLink's object detection inferences are all done through the NVIDIA Jetson Nano running MobileNet. Through the use of NVIDIA's TensorRT to accelerate inferencing, we were able to run this object detection model at a whopping 24 FPS with just about 12 W of power. Communication with the 2 Neosensory Buzz feedback devices on the arms was done through Bluetooth Low Energy via the Bleak library and the experimental Neosensory Python SDK. Echolocation distance processing is done through an Adafruit nRF52840 microcontroller connected to an ultrasonic sensor; it relays processed distance data (via Bluetooth Low Energy) to a third Neosensory Buzz device placed on the leg. ## Challenges we ran into This was definitely the most challenging project to execute that we've made to date (and we've made quite a few). Images have tons of data, and processing, condensing, and packaging this data in an understandable manner through just 2 data streams is a very difficult task. However, by grouping the classes into general categories (for example, cars, motorcycles, and trucks were all grouped into motor vehicles) and then sending a corresponding signal for the grouped category, we could condense information into a form that is more user-friendly. Additionally, we included a built-in frame rate limiter, which prevents the user from receiving way too much information too quickly from the Neosensory Buzz devices. This allows the user to far more effectively understand the vibrational data from the feedback devices. ## Accomplishments that we're proud of We think we’ve created a unique solution to sight-assistance for the blind. We’re proud to have presented a fully functional project, especially considering the complexities involved in its design. ## What we learned This was our first time working with the NVIDIA Jetson Nano. We learned a ton about Linux and how to leverage NVIDIA's powerful tools for machine learning (the JetPack SDK and TensorRT). Additionally, we gained valuable experience with creating brain-machine interfaces and learned how to process and condense data for feeding into the nervous system. ## What's next for OptiLink OptiLink has room for improvement in its external design, user-friendliness, and range of features. The device currently has a learning curve when it comes to understanding all of the patterns; of course, it takes time to properly understand and make sense of new sensory feedback integrated into the nervous system. We could create a mobile application for training pattern recognition. Additionally, we could integrate more data streams in our product to allow for better perception of various vibrational patterns corresponding to specific classes. Physical design elements could also be streamlined and improved. There’s lots of room for improvement, and we’re excited to continue working on this project!
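A minimal sketch of the detect-then-vibrate loop described above, assuming the jetson-inference Python bindings and the Bleak BLE client. The Buzz device address, characteristic UUID, motor-frame format, and class-to-pattern table are all placeholders for illustration, not the team's actual values.

```python
# Sketch: run MobileNet detections on the Nano and push vibration frames over BLE.
# Assumptions: hypothetical BLE address/UUID and an illustrative pattern table.
import asyncio
import jetson.inference
import jetson.utils
from bleak import BleakClient

BUZZ_ADDRESS = "AA:BB:CC:DD:EE:FF"                           # placeholder BLE MAC
MOTOR_CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"     # placeholder UUID

# Illustrative grouping: detected class -> 4-motor vibration frame (0-255 each)
PATTERNS = {
    "person":  bytes([255, 0, 0, 0]),
    "car":     bytes([0, 255, 0, 0]),
    "bicycle": bytes([0, 0, 255, 0]),
}

async def main():
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.videoSource("csi://0")   # CSI camera on the Nano

    async with BleakClient(BUZZ_ADDRESS) as client:
        while True:
            img = camera.Capture()
            for det in net.Detect(img):
                label = net.GetClassDesc(det.ClassID)
                frame = PATTERNS.get(label)
                if frame:
                    # One vibration frame per detection of interest
                    await client.write_gatt_char(MOTOR_CHAR_UUID, frame)
            await asyncio.sleep(0.25)   # crude rate limiter, as described above

asyncio.run(main())
```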
## Inspiration The inspiration for our project, CLIF (Communicative Learning Intelligent Friend), stemmed from a deep, personal connection with a deaf friend who navigated daily communication challenges with resilience and determination. This individual, along with some 20% of the global population who struggle with hearing loss on a day-to-day basis, inspired our team to bring CLIF to life. ## Project Story Originally CLIF started off as just a Python program and a webcam; we later expanded on the idea, built it into a module, and added cosmetics to give it a friendly appearance that can fit into a variety of households. ## What it does CLIF (Communicative Learning Intelligent Friend) is a device designed to facilitate communication for mute and deaf individuals. By leveraging computer vision and machine learning, CLIF interprets sign language gestures captured by a camera in real time. It then translates these gestures into spoken words, which are audibly relayed through a speaker, and simultaneously displayed on an LCD screen. This innovative device bridges the communication gap, empowering individuals with hearing and speech impairments to engage more effectively with others daily. ## How we built it Initially, we crafted a sturdy physical prototype, prioritizing approachability and user-friendliness in its design. We then curated a diverse dataset consisting of 30 distinct categories of images, encompassing the alphabet and common sign language phrases. With each class containing 200 images, our dataset comprised a total of 6,000 images, all captured by our team. Utilizing the capabilities of the MediaPipe library, we processed each image and its corresponding label, using the library's robust functionality to landmark the hands depicted in every image. With 42 landmark values identified for each hand, and 84 when considering both hands, our program achieved exceptional precision in hand gesture recognition. Subsequently, we employed Scikit-learn, coupled with a random forest classifier, to train an AI module. This module endowed our creation, affectionately named CLIF, with the ability to swiftly and accurately recognize various sign language gestures in real time. To enhance user interaction, we integrated a Bluetooth speaker into our design. This feature empowered CLIF to translate American Sign Language gestures into spoken English, providing users with instant comprehension and accessibility. Lastly, we employed PySerial to facilitate communication between CLIF and an Arduino board. This enabled CLIF to transmit analyzed words to an LCD screen in real time, ensuring users received immediate visual feedback on the interpreted sign language gestures. ## Accomplishments that we’re proud of * Development of a real-time interpreter and display system. * Implementation of hand detection capability, enabling the device to detect two hands using 84 landmark values, as opposed to the standard 42. * Achieving 100% accuracy on test cases, demonstrating the robustness and reliability of the system. ## Challenges we ran into Throughout this project, our team faced challenges including ideating a prototype that could realistically be used in daily life, developing a working solution within the constraints of available electronics and materials, and connecting and configuring a variety of new integrated electronics.
Despite the hurdles we faced, we embraced these challenges as opportunities for growth and innovation, ultimately creating a communication device that we believe has the potential to significantly improve the lives of mute and deaf individuals. ## What we learned Throughout this project's journey, we learned a variety of invaluable lessons. In the initial phase, our team learned about developing an idea that can become a fully functional prototype. Aspects such as problem definition, usage goals, ergonomics, and more were considered. As participants in the MakeUofT Hackathon, we gained expertise in formulating a plan that functions within the material constraints. In addition, our team explored several different approaches, investigating and integrating various hardware components, such as cameras, Bluetooth modules, and Arduino Uno boards, into our design. This led us to gain knowledge in various technology applications, communication methods, and more. ## What's next for CLIF The future of CLIF is bright. We are committed to refining its functionality and accessibility, ensuring it becomes increasingly useful for empowering individuals with hearing and speech impairments in various settings and communities. More specifically, incorporating Bluetooth components and having better wire control can help further simplify the usage and portability of CLIF. Furthermore, a wider variety of languages and recognizable ASL phrases can introduce CLIF internationally. Lastly, changes such as improving the UI and optimizing the accuracy and speed of the processing can bring CLIF to new heights of effectiveness and usability.
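A rough sketch of the landmark-extraction and classification pipeline described above, assuming images are organized in folders named after their class and that the 84-value feature vector is built from the x/y coordinates of 21 MediaPipe landmarks per hand, zero-padded when only one hand is visible. The folder layout and test image name are assumptions.

```python
# Sketch: MediaPipe hand landmarks -> scikit-learn random forest gesture classifier.
import os
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2)

def extract_features(image_path):
    image = cv2.imread(image_path)
    result = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    features = []
    if result.multi_hand_landmarks:
        for hand in result.multi_hand_landmarks[:2]:
            for lm in hand.landmark:            # 21 landmarks per hand
                features.extend([lm.x, lm.y])   # 42 values per hand
    features += [0.0] * (84 - len(features))    # pad to 84 values for two hands
    return features

X, y = [], []
for label in os.listdir("dataset"):             # hypothetical folder layout
    for name in os.listdir(os.path.join("dataset", label)):
        X.append(extract_features(os.path.join("dataset", label, name)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(np.array(X), y)
print(clf.predict([extract_features("test_sign.jpg")]))  # e.g. ['HELLO']
```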
winning
💡 ## Inspiration 49 percent of women reported feeling unsafe walking alone after nightfall according to the Office for National Statistics (ONS). In light of recent sexual assault and harassment incidents in the London, Ontario and Western community, women now feel unsafe travelling alone more than ever. Light My Way helps women navigate their travel through the safest and most well-lit path. Women should feel safe walking home from school, going out to exercise, or going to new locations, and taking routes with well-lit areas is an important precaution to ensure safe travel. It is essential to always be aware of your surroundings and take safety precautions no matter where and when you walk alone. 🔎 ## What it does Light My Way visualizes London, Ontario’s street lighting data and recent nearby crimes in order to calculate the safest path for the user to take. Upon opening the app, the user can access “Maps” and search up their destination or drop a pin on a location. The app displays the safest route available and prompts the user to “Send Location”, which sends the path that the user is taking to three contacts via messages. The user can then click on the Google Maps button in the lower corner, which switches over to the Google Maps app to navigate the given path. In the “Alarm” tab, the user has access to emergency alert sounds that the user can use when in danger; upon clicking, the sounds play at a loud volume to alert nearby people that help is needed. 🔨 ## How we built it React, JavaScript, and Android Studio were used to make the app. React Native maps and directions were also used to allow user navigation through Google Cloud APIs. GeoJSON files of street lighting data were imported from the open data website for the City of London to visualize street lights on the map. Figma was used for designing the UX/UI. ⚠️ ## Challenges we ran into We ran into a lot of trouble visualizing such a large amount of data from the exported GeoJSON street lights. We overcame that by learning about useful mapping functions in React that made marking the locations easier. 🥇 ## Accomplishments that we're proud of We are proud of making an app that can be of potential help in making women safer walking alone. It is our first time using and learning React, as well as using Google Maps, so we are proud of our unique implementation of our app using real data from the City of London. It was also our first time doing UX/UI on Figma, and we are pleased with the results and visuals of our project. 🧠 ## What we learned We learned how to use React, how to implement Google Cloud APIs, and how to import GeoJSON files into our data visualization. Through our research, we also became more aware of the issue that women face daily in feeling unsafe walking alone. 💭 ## What's next for Light My Way We hope to expand the app to include more data on crimes, as well as expand to cities surrounding London. We want to continue developing additional safety features in the app, as well as a chatting feature with the close contacts of the user.
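The app itself is built in JavaScript with React Native, but the kind of scoring it performs can be sketched language-agnostically. The snippet below (in Python, for illustration only) shows one way a candidate route could be rated against the City of London street-lighting GeoJSON; the file name, the 30 m radius, and the scoring rule are assumptions rather than the team's implementation.

```python
# Illustrative sketch: score a route by how many of its points sit near a street light.
import json
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

with open("street_lighting.geojson") as f:       # hypothetical open-data export
    lights = [feat["geometry"]["coordinates"]    # GeoJSON points are [lon, lat]
              for feat in json.load(f)["features"]]

def lit_fraction(route_points, radius_m=30):
    """Fraction of route points with at least one street light within radius_m."""
    lit = 0
    for lat, lon in route_points:
        if any(haversine_m(lat, lon, l_lat, l_lon) <= radius_m
               for l_lon, l_lat in lights):
            lit += 1
    return lit / len(route_points)

# The candidate route with the highest lit fraction would be shown as the safest path.
```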
## Inspiration How many times have you gone to a website, looked at the length of the content, and decided that it was not worth your time to read the entire page? The inspiration for my project comes from this exact problem. Imagine how much easier it would be if website owners could automatically generate a summary of their webpage for their users to consume. ## What it does Allows the website owner to easily generate a summary of their webpage and place it anywhere on their webpage. ## How I built it I used Cloudflare so that it would be easily available to many website owners as an app. I used JavaScript to fetch the summary from a 3rd-party API and to update the HTML of the page. ## Challenges I ran into It was hard trying to find a summarizer that was suitable for my needs. Many of them only accept URLs as input, and many are just not good enough. Once I found one that was suitable, I also had difficulty incorporating it within my app. I had also never done anything with HTML or CSS before, so it was hard trying to create the view that the user would see. ## Accomplishments that I'm proud of It works, looks nice, and is easy to use ## What I learned How to make a Cloudflare app, HTML, and CSS ## What's next for TL;DR Add more options for the app (maybe show relevant tweets?) and possibly create a Chrome extension that does the same thing.
## Inspiration We wanted to create a project that would challenge both our front end and back end skills. ## What it does It finds the safest path between two points in London. ## How I built it It uses the Google Maps API, data from the city of London, and statistical analysis to make recommendations. ## Challenges I ran into React was a challenge as we couldn't pass props to our maps components. Challenges with Python packages and getting statistically significant results. ## Accomplishments that I'm proud of We got this done. ## What I learned In future front end projects, we shouldn't use React. ## What's next for SafeWay A possible sale to Google ;)
winning
## Inspiration Booking appointments over the phone can be a chore sometimes. Verbal communication is less clear and concise and is easily mistranslated, and it can be a hassle to wait for reception to be available, wait for callbacks, and pick the best available time. In this day and age, we live in a world where talking is often not required to get things done. With just a few clicks on a screen, we can get transportation, food, or things delivered right to our doorsteps. So why not make booking appointments just as easy? Our project is inspired by our desire to make booking our appointments quicker and easier and to eliminate any miscommunication with just a touch of a button. ## What it does You don't have to call up your dental office or your car repair centre. Just open BookBetter, and in 2 clicks you have got yourself an appointment scheduled! ## How we built it We used HTML, Bootstrap, and CSS for the front end and used JavaScript, Firebase, jQuery, Moment.js, and the FullCalendar API to implement the back end. In regards to Firebase, we specifically used it for authentication (i.e. registering users), the cloud database, and hosting. ## Challenges we ran into * Asynchronous programming with JavaScript despite JavaScript being single-threaded! * It took us a long time to read through and fully understand the FullCalendar API and integrate it with our app * Designing a complex database structure: it's our first time dealing with so many different types of data to store. We had to design a data structure that is efficient for lookups and reduces redundancy. * Trying to host it using Domain.com ## Accomplishments that we're proud of * Fully implemented a working minimum viable product. We have implemented all the core functions of the app that we had planned * To have successfully used Firebase as our cloud database platform, since a good cloud database structure can scale for the future * Visually pleasing front end with a cute logo. We are happy with the overall look and minimalistic design of the app. Its clean and intuitive UI makes this app easy to use for everyone. * Responsive web design allows accessibility on any type of device ## What we learned * Understanding how Firebase works (i.e. how they store data, the API): we are all very new to Firebase, so implementing our app with Firebase was a real challenge for us * JavaScript, learning more about its JavaScript-ness. Having built mobile apps in the past, transitioning into web apps has been quite the challenge, especially with a language (i.e., JavaScript) that works differently than the one we're used to (Java) * Constructing a good database structure for the core functions of the app * Allowing the authorization of two types of users, each leading to their respective interface ## What's next for BookBetter * Mobile app * More flexibility and powerful options in booking (i.e. change bookings, share appointments with someone, customize alerts) * Recruiting organizations to register with BookBetter to digitize their booking process * Promoting a sustainable relationship between businesses and their clients * Colour coding events of different categories
## Inspiration --- I recently read Ben Shneiderman's paper on Human-Centered Artificial Intelligence, where he argues that AI is best used as a tool that accelerates humans rather than trying to *replace* them. We wanted to design a "super-tool" that meaningfully augmented a user's workday. We felt that current calendar apps are a convoluted mess of grids, flashing lights, alarms, and events all vying for the user's attention. The chief design idea behind Line is simple: **your workday and time are linear, so why shouldn't your calendar be linear?** We take this base and augment it with *just the right amount* of AI. ## What it does You get a calendar that tells you about an upcoming lunch with a person at a restaurant and gives you some information about the restaurant along with links to reviews and directions that you can choose to view. No voice-to-text frustration, no generative clutter. ## How we built it We used React.js for our frontend, along with a Docker image for certain backend tasks and for hosting a small language model for on-metal event summarization (**you can self-host this too for an off-the-cloud experience**). If provided, the you.com API key is used to get up-to-date and accurate information via the smart search query. ## Challenges we ran into We tackled a lot of challenges, particularly around the interoperability of our tech stack. One was a potential multi-database system that would allow users to choose which database they wanted to use; we simply ran out of time to implement this, so for our demo we stuck with a Firebase implementation. We also wanted to ensure that the option to host your own Docker image to run some of the backend functions was present, and as a result a lot of time was put into making both an appealing frontend and backend. ## Accomplishments that we're proud of We're really happy to have been able to use the powerful you.com smart and research search APIs to obtain precise data! Currently, even voice assistants like Siri or Google use a generative approach, and if quizzed on subjects that are out of their domain of knowledge they are likely to just make things up (including reviews and addresses), which could be super annoying on a busy workday; we're glad that we've avoided this pitfall. We're also really happy with how transparent our tech stack is, leaving the door open for the open-source community to assist in improving our product! ## What we learned We learnt a lot over the course of two days, everything from RAG technology to Dockerization, Hugging Face Spaces, React.js, Python, and so much more! ## What's next for Line Calendar Improvements to the UI, the ability to swap out databases, and connections via the Google Calendar and Notion APIs to import and transform calendars from other software. Better context awareness for the you.com integration. Better backend support to allow organizations to deploy and scale on their own hardware.
## Inspiration Living in the big city, we're often torn between the desire to get more involved in our communities and the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights at a glance. This application enables the user to take a photo of a noticeboard filled with posters and, after specifying their preferences, select the events that are predicted to be of highest relevance to them. ## What it does Our application uses computer vision and natural language processing to filter notice board information and deliver relevant postings to our users based on their selected preferences. This mobile application lets users first choose the different categories that they are interested in knowing about; they can then either take or upload photos, which are processed using Google Cloud APIs. The labels generated from the APIs are compared with the chosen user preferences to display only applicable postings. ## How we built it The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then, they are prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision text detection to obtain blocks of text, which are then labelled appropriately with the Google Natural Language API. The categories this returns are compared to user preferences, and matches are returned to the user. ## Challenges we ran into One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken for being part of the same paragraph. The JSON object had many subfields which, viewed from the terminal, took a while to make sense of in order to parse properly. We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, and in finding the proper method of comparing categories to labels before the final component is rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation. ## Accomplishments that we're proud of We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user. ## What we learned We were at most familiar with React.js; all other technologies were new experiences for us. Most notable were the opportunities to learn how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances of each as we passed user data between them. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols. ## What's next for notethisboard Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity.
The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board and return more reliable results. The app could also be extended to identify logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input.
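A hedged sketch of the two-step pipeline described above, using the official google-cloud-vision and google-cloud-language Python clients (the app itself calls these APIs from React Native, so this is an approximation). Credentials, the example image name, and the preference list are assumptions.

```python
# Sketch: Vision text detection -> Natural Language classification -> preference match.
from google.cloud import vision
from google.cloud import language_v1

def poster_categories(image_bytes):
    vision_client = vision.ImageAnnotatorClient()
    response = vision_client.text_detection(image=vision.Image(content=image_bytes))
    full_text = response.full_text_annotation.text

    language_client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=full_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    result = language_client.classify_text(request={"document": document})
    return [category.name for category in result.categories]

user_preferences = {"/Arts & Entertainment", "/Food & Drink"}   # example preferences
with open("noticeboard.jpg", "rb") as f:
    matches = [c for c in poster_categories(f.read())
               if any(c.startswith(p) for p in user_preferences)]
print(matches)
```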
losing
## Away From Keyboard ## Inspiration We wanted to create something that anyone can use: AFK for Chrome. Whether it be for accessibility reasons -- such as for those with disabilities who can't use the keyboard -- or for daily use when you're cooking, our aim was to make scrolling and Chrome browsing easier. ## What it does Our app AFK (Away From Keyboard) helps users scroll and read, hands off. You can control the page by saying "go down/up", "open/close tab", "go back/forward", "reload/refresh", or by reading the text on the page (it will autoscroll once you reach the bottom). ## How we built it Stack Overflow and lots of panicked googling -- we also used Mozilla's Web Speech API. ## Challenges we ran into We had some difficulties scraping the text from sites for the reading function, as well as some difficulty integrating the APIs into our extension. We started off with a completely different idea and had to pivot mid-hack. This cut down a lot of our time, and we had trouble re-organizing and gauging the situation. However, as a team, we all worked on contributing parts to the project and, in the end, we were able to create a working product despite the small road bumps we ran into. ## Accomplishments that we are proud of As a team, we were able to learn how to make Chrome extensions in 24 hours :D ## What we learned We learned how to build Chrome extensions and use APIs within them, and also had some side adventures with Vue.js and Vuetify for web apps. ## What's next for AFK We want to include other functionality like taking screenshots and taking notes by voice.
## 💡Inspiration In a world of fast media, many youth struggle with short attention spans and find it difficult to stay focused. We want to help overcome this phenomenon with the help of a baby leaf sprout and automated browser proctoring! We hope this tool will help users ace their tests and achieve their dreams! ## 🔍 What it does Don't Leaf Me! utilizes a camera and monitors the user throughout the study session. If the user leaves their computer (leaves the camera), we know the user is not studying, and the browser opens a page to let the user know to come back and study! The tool also allows users to add URLs that they wish to avoid during their study session and monitors which pages the user goes to. If the user travels to one of those pages, the tool alerts them to go back to studying. The user can also use the built-in study session timer, which lets them keep track of their progress. ## ⚙️ How we built it We constructed the front end with React, TypeScript, Chakra UI, styled-components, and webpack. We built the backend with Node, Express, and the Cloud Vision API from the Google Cloud Platform to detect whether or not a user was in front of the camera. ## 🚧 Challenges we ran into It was quite tricky to get the camera working to detect a person's face. There aren't many examples of what we were going for online, and much of our progress was made through trial and error. There were even fewer resources on how to connect the camera to the Chrome extension. It was also difficult to extract the URL of a browser tab from the Chrome extension page, since they are two separate pages. ## ✔️ Accomplishments that we're proud of * Creating a beautiful UI that creates a comfortable and motivating environment for users to be productive in * Successfully grappling with Google's Cloud Vision API * We managed to program the vision we had and also implemented most of the features we had planned to. ## 📚 What we learned * Through creating this project we gained a deeper understanding of the browser and DOM. * This was also the first time working with the camera and Chrome extensions for many of our team members ## 🔭 What's next for Don't Leaf Me! * Don't Leaf Me! would like to add the ability for users to input the amount of time they wish to stay focused on the timer. * Explore options for publishing our extension to the Chrome Web Store.
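A minimal sketch of the presence check described above, using the Cloud Vision face-detection endpoint from Python (the extension itself calls the API from its Node/Express backend, so this is an approximation). The webcam index and polling interval are assumptions.

```python
# Sketch: poll the webcam and flag when no face is detected in the frame.
import time
import cv2
from google.cloud import vision

client = vision.ImageAnnotatorClient()
camera = cv2.VideoCapture(0)   # default webcam

def user_present(frame) -> bool:
    ok, encoded = cv2.imencode(".jpg", frame)
    response = client.face_detection(image=vision.Image(content=encoded.tobytes()))
    return len(response.face_annotations) > 0

while True:
    ok, frame = camera.read()
    if ok and not user_present(frame):
        print("Come back and study!")   # the extension would open a reminder page here
    time.sleep(5)                       # poll every few seconds
```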
## Inspiration In Canada, **$160 million** is lost every year to scams and phishing attacks in the financial industry. Scammers target an older population that isn't tech-savvy and immigrants who struggle with a language barrier. **I wanted to do my part and use technology to give an advantage back to Canadians and help protect vulnerable members of our society from scammers.** ## What it does I created an app that uses Machine Learning to help you figure out if a message you received is fake or real. ## How we built it The **ML backend** started with cleaning a dataset of spam and regular messages and tokenizing the text. Then a Keras Sequential model was trained and tested to make the ML predictions using 5,000 keywords. The ML backend was built using Jupyter Notebook. I created an **ML Prediction API** to host the prediction calculation and serve an endpoint where the front end can query the machine learning model. The query is tokenized and given to the ML model for prediction, and the result is passed back to the front end. The ML Prediction API is hosted using a Google Colab notebook & Google Compute Engine. I also created a **Front End** to serve users the interface to paste a copy of the message they are suspicious about. The front end is a static HTML website that is served using Flask and Jinja2 templates. The hosting of the Front End is done using Google App Engine. ## Challenges we ran into 1. I hadn't done any **Natural Language Processing** before, and there are a lot of concepts 2. The datasets for spam are very dirty and needed a lot of cleaning and formatting 3. There are a lot of moving parts to this hack: **Data Science**, **Back End**, **Front End**, ... need more coffee. ## Accomplishments that we're proud of I'm proud that I was able to figure out how to get my ML model to correctly detect spam and scams with **90% accuracy**. Given that it was my first time trying a **Fin-Tech** hack and using NLP, I'm very proud of that outcome. The website is live, so feel free to try some messages out! ## What we learned I learned how to create a **Keras Sequential model** to do **NLP modelling**. I also learned how to set up and **productionize an ML backend**, and I learned that *I hate front end work*. :) ## What's next for Does It Make Cents? I want to present "Does It Make Cents" to Scotiabank and show them my amazing **FinTech hack**. I hope to blow their socks off and get in contact with a recruiter for my last co-op term.
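A hedged sketch of the training step described above: tokenize messages with a 5,000-word vocabulary and fit a small Keras Sequential model. The layer sizes, sequence length, and the "spam.csv" file with "text"/"label" columns are assumptions, not the author's exact setup.

```python
# Sketch: tokenizer + small Keras Sequential model for spam/scam classification.
import pandas as pd
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

df = pd.read_csv("spam.csv")                 # assumed columns: "text", "label" (0/1)

tokenizer = Tokenizer(num_words=5000, oov_token="<OOV>")
tokenizer.fit_on_texts(df["text"])
X = pad_sequences(tokenizer.texts_to_sequences(df["text"]), maxlen=100)
y = df["label"].values

model = Sequential([
    Embedding(input_dim=5000, output_dim=16, input_length=100),
    GlobalAveragePooling1D(),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),          # probability that the message is a scam
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2)

# At prediction time, the API endpoint tokenizes the incoming query the same way:
query = pad_sequences(
    tokenizer.texts_to_sequences(["You won a free prize, click here"]), maxlen=100
)
print(model.predict(query))
```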
partial
## Inspiration Many visually impaired individuals face challenges due to their dependence on others. They lack the ability to do things on their own and often miss out on the simple pleasures of life. We wanted to create an app that helps the visually impaired fully experience life -- on their own terms. ## What it does "Sight" is essentially a navigation app for the blind. It uses image recognition and ultrasonic sensors attached to an Arduino in order to help visually impaired people "see" what is around them. It also provides audio-based directions to the user. ## How we built it Our team first designed visual and audio-based interfaces with a visually impaired audience in mind. We then 3D-printed an iPhone case designed to hold an Arduino board and ultrasonic sensor. Then, our team developed an image recognition model using Apple's Core ML. Lastly, the model was implemented in the iOS application and any remaining flaws were removed. ## Challenges we ran into The main issue our team ran into involved implementing our ML models in the final iOS application. We had a TensorFlow model already trained; however, our team was not able to use this model in the final application. ## Accomplishments that we're proud of Although our team was not able to use the TensorFlow model as initially planned, we were able to come up with an alternate solution that worked. Our team is proud that we were able to come up with a working app that has the potential to impact the modern world. ## What we learned Our team primarily learned how to combine iOS development with ML models and how Arduino/iOS communication works. ## What's next for Sight Moving forward, our team needs to improve Sight's image-recognition software. We created our own dataset since we lacked the computing power to use larger, more official datasets. We used Core ML's pre-trained MobileNet model. Although the model works to an extent, the safety of the blind individual is paramount, and improving the image-recognition software directly benefits the safety of the user.
## Inspiration To improve the quality of life and safety of individuals who are vision-impaired or blind. A new solution that provides further independence in everyday life. ## What it does A vibrational language that collects real-time visual information (e.g., cars, people, traffic lights) about a user’s surroundings and generates unique, identifying vibrations on their smart device to indicate nearby objects. All processing is done on-device, with no connection to the cloud, making it very secure. Our implementation paves the way for the app to communicate the location and distance of increasingly complex object libraries as Apple Watch technology improves. ## How we built it Using Joseph Redmon’s (aka pjreddie) YOLOv3, converted from Darknet to Keras/TensorFlow and then to Core ML, we created an iPhone app that allows users to sense the world around them. The machine learning model detects people, cars, and birds (for demo purposes) and tracks them in real time through the iPhone's iSight camera. The conversion was done with YAD2K and coremltools. Through the Taptic Engine on the iPhone 6S and later, users are able to sense which specific object is around them. The Taptic Engine is much more precise than a regular vibration motor and can be programmed with a variety of combinations and frequencies. We controlled it through the Piano framework, which allows much easier access to the Taptic Engine and the ability to create symphonies of different vibration styles. The app was made in Xcode 9.4.1 and ran on an iPhone 6S running iOS 11.0. ## Challenges we ran into * Balancing speed and image processing accuracy * Working with Fitbit Studio and its various limitations * YOLOv3’s conversion from Darknet to Keras to Core ML, since Keras is one of the only conversion paths Core ML officially supports. ## Accomplishments that we're proud of * Creating something with the ability to positively impact lives * Image recognition was smooth and snappy ## What we learned * Application of ML * Real-time object detection ## What's next for 6ixth Sense Open-sourcing the project so people can contribute different vibrational patterns to detect different objects. Eventually, have all objects in a database and create a functional vibrational language for the blind.
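A hedged sketch of the final conversion step described above, written against the modern coremltools unified converter; the team used YAD2K plus the older Keras converter path, so the exact calls below are an approximation, and the 416x416 input shape, file names, and scale factor are assumptions based on the standard YOLOv3 setup.

```python
# Sketch: convert a Keras YOLOv3 model to a Core ML .mlmodel for on-device inference.
import coremltools as ct
from tensorflow import keras

keras_model = keras.models.load_model("yolov3.h5")   # assumed output of the YAD2K step

mlmodel = ct.convert(
    keras_model,
    inputs=[ct.ImageType(name="image", shape=(1, 416, 416, 3), scale=1 / 255.0)],
)
mlmodel.short_description = "YOLOv3 object detector for 6ixth Sense"
mlmodel.save("SixthSense.mlmodel")
```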
## Inspiration The idea was to help people who are blind to discreetly gather context during social interactions and general day-to-day activities ## What it does The glasses take a picture and analyze it using Microsoft's, Google's, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame ## How I built it We took an RPi Camera and increased the length of the cable. We then made a hole in the lens of the glasses and fit it in there. We added a touch sensor to discreetly control the camera as well. ## Challenges I ran into The biggest challenge we ran into was Natural Language Processing, as in trying to piece together a human-sounding sentence that describes the scene. ## What I learned I learnt a lot about the different vision APIs out there and about creating/training your own neural network. ## What's next for Let Me See We want to further improve our analysis and reduce our analysis time.
losing
## Inspiration Ever wish you could hear your baby cry wherever you are in the world? Probably not, but it's great to know anyway! Did you know that babies often cry when at least one of their needs is not met? How could you possibly know about your baby's needs without being there watching the baby sleep? ## What it does Our team of 3 visionaries presents to you **the** innovation of the 21st century. Using just your mobile phone and an internet connection, you can now remotely receive updates on whether or not your baby is crying, and whether your baby has reached a dangerously high temperature. ## How we built it We used Android Studio for building the app that receives the updates. We used Socket.io for the backend communication between the phone and the Intel Edison. ## Challenges we ran into Attempting to make push notifications work accounted for a large portion of time spent building this prototype. In future versions, push notifications will be included. ## Accomplishments that we're proud of We are proud of paving the future of baby-to-mobile communications for fast-footed parents around the globe. ## What we learned As software people, we are proud that we were able to communicate with the Intel Edison. ## What's next for Baby Monitor Push notifications. Stay tuned!!
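The write-up does not include the team's code, so the snippet below is only an illustrative sketch of what the Edison-side Socket.io client could look like in Python (python-socketio). The server URL, event names, thresholds, and the sensor-reading helpers are all assumptions.

```python
# Sketch: emit Socket.io events from the Edison when crying or high temperature is detected.
import time
import socketio

sio = socketio.Client()
sio.connect("http://example-baby-monitor-server:3000")   # hypothetical backend server

CRY_SOUND_THRESHOLD = 0.7     # normalized microphone level, assumed
MAX_SAFE_TEMP_C = 38.0        # assumed alert threshold

def read_sound_level() -> float:
    # placeholder standing in for the real microphone driver on the Edison
    return 0.2

def read_temperature_c() -> float:
    # placeholder standing in for the real temperature sensor driver
    return 36.5

while True:
    if read_sound_level() > CRY_SOUND_THRESHOLD:
        sio.emit("baby_crying", {"time": time.time()})
    temp = read_temperature_c()
    if temp > MAX_SAFE_TEMP_C:
        sio.emit("temperature_alert", {"celsius": temp})
    time.sleep(1)
```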
## Inspiration We created this app to address a problem that we ourselves were facing: waking up in the morning. As students, the stakes of oversleeping can be very high. Missing a lecture or an exam can set you back days or greatly detriment your grade. It's too easy to sleep past your alarm. Even if you set multiple, you can simply turn them all off, knowing that there is no human intention behind each alarm. It's almost as if we've forgotten that we're supposed to get up after our alarm goes off! In our experience, what really jars you awake in the morning is another person telling you to get up. Now, suddenly there is consequence and direct intention behind each call to wake up. Wakey simulates this in an interactive alarm experience. ## What it does Users sync their alarm up with their trusted peers to form a pact each morning to make sure that each member of the group wakes up at their designated time. One user sets an alarm code with a common wake-up time associated with it. The user's peers can use this alarm code to join their alarm group. Everybody in the alarm group will experience the same alarm in the morning. After each user hits the button when they wake up, they are sent to a soundboard interface, where they can hit buttons to try to wake those who are still sleeping with real-time sound effects. Each time one user in the server hits a sound effect button, that sound registers on every device, including their own device, to provide auditory feedback that they have indeed successfully sent a sound effect. Ultimately, users exit the soundboard to leave the live alarm server and go back to the home screen of the app. They can finally start their day! ## How we built it We built this app using React Native as the frontend, Node.js as the server, and Supabase as the database. We created files for the different screens that users will interact with in the front end, namely the home screen, goodnight screen, wakeup screen, and the soundboard. The home screen is where they set an alarm code or join using someone else's alarm code. The "goodnight screen" is the screen the app stays on while the user sleeps. When the app is on, it displays the current time, when the alarm is set to go off, who else is in the alarm server, and a warm message, "Goodnight, sleep tight!". Each one of these screens went through its own UX design process. We also used Socket.io to establish connections between those in the same alarm group. When a user sends a sound effect, it goes to the server, which relays it to all the users in the group. As for the backend, we used Supabase as a database to store the users, alarm codes, current time, and the wake-up times. We connected the front and back end, and the app came together. All of this was tested on our own phones using Expo. ## Challenges we ran into We ran into many difficult challenges during the development process. It was all of our first times using React Native, so there was a little bit of a learning curve in the beginning. Furthermore, incorporating Sockets with the project proved to be very difficult because it required a lot of planning and experimenting with the server/client relationships. The alarm ringing also proved to be surprisingly difficult to implement. If the alarm was left to ring, the "goodnight screen" would continue ringing and would not terminate. Many of React Native's tools like setInterval didn't seem to solve the problem. This was a problematic and recurring issue.
Secondly, the database in Supabase was also quite difficult and time-consuming to connect, but in the end, once we set it up, using it simply entailed brief SQL queries. Thirdly, setting up the front end proved quite confusing and problematic, especially when it came to adding alarm codes to the database. ## Accomplishments that we're proud of We are super proud of the work that we’ve done developing this mobile application. The interface is minimalist yet attention-grabbing when it needs to be, namely when the alarm goes off. The hours of debugging, although frustrating, were very satisfying once we finally got the app running. Additionally, we greatly improved our understanding of mobile app development. Finally, the app is also just amusing and fun to use! It’s a cool concept! ## What we learned As mentioned before, we greatly improved our understanding of React Native, as for most of our group this was the first time using it for a major project. We learned how to use Supabase and Socket.io. Additionally, we improved our general JavaScript and user experience design skills. ## What's next for Wakey We would like to put this app on the iOS App Store and the Google Play Store, which would take more extensive and detailed testing, especially regarding how the app will run in the background. Additionally, we would like to add some other features, like a leaderboard for who gets up most immediately after their alarm goes off, who sends the most sound effects, and perhaps other ways to rank the members of each alarm server. We would also like to add customizable sound effects, where users can record themselves or upload recordings that they can add to their soundboards.
## Inspiration We saw multiple problems with pre-existing commercial smart home monitoring devices: they require separate devices and also rely on your existing Wi-Fi network to function. ## What it does We-cu (mind the Carleton University pun) monitors your home using a multitude of different sensors, ranging from temperature sensors all the way to a gas detector used for detecting smoke, allowing you to remotely monitor the status of your house and receive notifications via text message or push notification, or even have it post on your Facebook wall if anything happens in your house. ## How we built it We used a Particle Electron, an IoT board with support for 3G connectivity, directly connected to a range of sensors, connected to our Firebase database, and tied into the connectors that provide our notification and monitoring services. We used C++ to code the Particle Electron and React Native for the mobile app that allows you to monitor your house. In addition, we used the Twilio API to send SMS messages to alert you of changes in your home even if you do not have data. ## Challenges we ran into This was the first hardware hack that any of us had done; we had to learn how to connect all of the different sensors to the Particle Electron, properly handle the data coming in from all the sensors, and redirect it to the proper connectors that we had made. ## Accomplishments that we're proud of We are proud to present we-cu, a fully functional, independent home monitoring solution built in 24 hours without any prior hardware experience (and without it blowing up or catching on fire). ## What we learned We got to play around with a lot of different modules and the Particle Electron, and learned the basics of building a simple circuit and handling data from all of the different modules. ## What's next for we-cu Next, we would make a better enclosure for our system (maybe something 3D-printed?).
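A hedged sketch of the SMS notification connector described above: a small Python helper that texts the homeowner via the Twilio API when a sensor reading crosses a threshold. The account credentials, phone numbers, and the smoke-reading threshold are placeholders, not the team's actual values.

```python
# Sketch: send an SMS alert through Twilio when a gas/smoke reading exceeds a threshold.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder
AUTH_TOKEN = "your_auth_token"                        # placeholder
client = Client(ACCOUNT_SID, AUTH_TOKEN)

SMOKE_THRESHOLD = 400   # assumed raw gas-sensor reading

def notify_if_needed(reading: int, homeowner_number: str):
    if reading > SMOKE_THRESHOLD:
        client.messages.create(
            body=f"we-cu alert: possible smoke detected (reading={reading}).",
            from_="+15550001111",        # Twilio number, placeholder
            to=homeowner_number,
        )

notify_if_needed(512, "+15552223333")
```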
winning
# Things2Do Minimize time spent planning and maximize having fun with Things2Do! ## Inspiration The idea for Things2Do came from the difficulties that we experienced when planning events with friends. Planning events often involves venue selection, which can be a time-consuming, tedious process. Our search for solutions online yielded websites like Google Maps, Yelp, and TripAdvisor, but each fell short of our needs and often had complicated filters or cluttered interfaces. More importantly, we were unable to find event planning that accounts for the total duration of an outing, much less planning that schedules multiple venue visits while accounting for travel time. This inspired us to create Things2Do, which minimizes time spent planning and maximizes time spent at meaningful locations for a variety of preferences on a tight schedule. Now, there's always something to do with Things2Do! ## What it does Share quality experiences with people that you enjoy spending time with. Things2Do provides the top 3 suggested venues to visit given constraints on the time spent at each venue, distance, and the selected category of place to go. Furthermore, the requirements surrounding a complete event plan across multiple venues can become increasingly complex when trying to account for the tight schedules of attendees, a wide variety of preferences, and travel time between venues throughout the duration of the event. ## How we built it The functionality of Things2Do is powered by various APIs for retrieving venue details and spatiotemporal analysis, with React for the front end and Express.js/Node.js for the backend. APIs: * openrouteservice to calculate travel time * Geoapify for location search autocomplete and geocoding * Yelp to retrieve names, addresses, distances, and ratings of venues Languages, tools, and frameworks: * JavaScript for compatibility with React, Express.js/Node.js, Verbwire, and other APIs * Express.js/Node.js backend server * Tailwind CSS for styling React components Other services: * Verbwire to mint NFTs (for memories!) from event pictures ## Challenges we ran into Initially, we wanted to use the Google Maps API to find venue locations, but these features were not part of the free tier, and even if we were to implement them ourselves, it would still put us at risk of spending more than the free tier would allow. This resulted in us switching to Node.js for the backend so we could work in JavaScript, which had better support for the open-source APIs that we used. We also struggled to find a free geocoding service, so we settled for Geoapify, which is open source. JavaScript was also used so that Verbwire could be used to mint NFTs based on images from the event. Researching all of these new APIs and scouring documentation to determine if they fulfilled the desired functionality that we wanted to achieve with Things2Do was an enormous task, since we had never worked with them before and were forced to do so for compatibility with the other services that we were using. Finally, we underestimated the time it would take to integrate the front end with the back end and add the NFT minting functionality, on top of debugging. Another challenge we faced was coming up with an efficient method of computing an optimal event plan in consideration of all required parameters. This involved looking into algorithms like the Travelling Salesman Problem, Dijkstra's, and A\*.
## Accomplishments that we're proud of Our team is most proud of meeting all of the goals that we set for ourselves coming into this hackathon and tackling this project. Our goals consisted of learning how to integrate front-end and back-end services, creating an MVP, and having fun! The perseverance that was shown while we were debugging into the night and parsing messy documentation was nothing short of impressive, and no matter what comes next for Things2Do, we will be sure to walk away proud of our achievements. ## What we learned We can definitively say that we learned everything that we set out to learn during this project at DeltaHacks IX. * Integrate front-end and back-end * Learn new languages, libraries, frameworks, or services * Include a sponsor challenge and design for a challenge theme * Time management and teamwork * Web3 concepts and application of technology ## Things to Do The working prototype that we created is a small segment of everything that we would want in an app like this, but there are many more features that could be implemented. * Multi-user voting feature using WebSockets * Extending categories of hangouts * Custom restaurant recommendations from attendees * Ability to have a vote of "no confidence" * Send out invites through a variety of social media platforms and calendars * Scheduling features for days and times of day * Incorporate hours of operation of venues
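A minimal sketch of the Yelp lookup step described above, calling the Yelp Fusion businesses search endpoint directly with requests (the app itself does this from its Node.js backend). The API key, example coordinates, category, and ranking rule are assumptions for illustration.

```python
# Sketch: fetch nearby venues from Yelp Fusion and keep the top three by rating.
import requests

YELP_API_KEY = "YOUR_YELP_API_KEY"   # placeholder
HEADERS = {"Authorization": f"Bearer {YELP_API_KEY}"}

def top_venues(latitude, longitude, category, radius_m=2000, limit=20):
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "categories": category,      # e.g. "restaurants", "museums"
        "radius": radius_m,
        "limit": limit,
        "sort_by": "rating",
    }
    resp = requests.get("https://api.yelp.com/v3/businesses/search",
                        headers=HEADERS, params=params)
    resp.raise_for_status()
    businesses = resp.json()["businesses"]
    # Keep the top three, matching the app's "top 3 suggested venues"
    return [(b["name"], b["rating"], b["location"]["address1"])
            for b in businesses[:3]]

print(top_venues(43.2557, -79.8711, "restaurants"))   # example coordinates: Hamilton, ON
```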
## Inspiration Travelling can be a pain. You have to look up attractions ahead of time and spend tons of time planning out what to do. Shouldn't travel be fun, seamless and stress-free? ## What it does SightSee takes care of the annoying part of travel. You start by entering the city that you're visiting, your hotel, a few attractions that you'd like to see and we take care of the rest. We provide you with a curated list of recommendations based on TripAdvisor data depending on proximity to your attractions and rating. We help you discover new places to visit as well as convenient places for lunch and dinner. Once you've finalized your plans, out pops your itinerary for the day, complete with a walking route on Google Maps. ## How we built it We used the TripAdvisor API and the Google Maps Embed API. It's built as a single-page Web application, powered by React and Redux. It's hosted on an Express.js-based web server on an Ubuntu 14.04 VM in Microsoft Azure. ## Challenges we ran into We ran into challenges with the TripAdvisor API and its point of interest data, which can be inaccurate at times. ## Accomplishments that we're proud of The most awesome user interface ever!
# **THIS IS HOW WE CONNECT THE DOTS** **youngdumbandreallbroke.tech** ## Inspiration We love to travel. But having to manage all our traveling needs across different services ourselves is a lot of work. Traveling is still considered a luxury for many. We believe that travel costs can be estimated down to tens of dollars for digital-enough countries. All we need is a visual planner that can help us plan better and estimate our costs to the tenth. ## What it does We built a service where all a user has to input is destination and time (optionally, this can also be intelligently suggested). Once it knows when and where you want to go, it visualizes and plans the trip for you: the flights (by getting quotes from Skyscanner), the Airbnb (by narrowing down the Airbnbs that are closest to your destinations, with a minimum preferred rating and nice enough reviews), the Uber (books what you like to ride), your events (pulls up the type of events you'll like), the dinner reservation. Once booked, it'll show a complete map of the user's whole travel itinerary (and the path) using a map service, along with a very close cost estimate. And the best part: it does it all for you! * It pulls out your tickets automatically when you need them. * Books your Uber when you land at the airport. * For public transport: pulls up directions on Google Maps with the destination automatically. ## How we built it Express.js with Node.js is used to develop and serve the backend logic. It uses Google's Directions API, Places API, and Place Autocomplete API. The frontend is developed in React.js and served at the public web address above, and Google Cloud services were used for hosting the website. ## Challenges we ran into * Team building * Tech stack selection * Time constraints ## Accomplishments that we're proud of MVP SHIPPED AND READY TO USE! ## What we learned SLEEP IS IMPORTANT. Efficient planning and what is to be shipped for an MVP have to be clearly decided. The importance of a robust project architecture ## What's next for online travel agent Additional features can include: 1. the ability to compare friends' schedules to allow meeting up (or it can tell the friends/colleagues that a friend/colleague is coming); 2. booking the event tickets you planned to attend as per the timings that best suit you; 3. confirming the dinner reservations with your friends/colleagues for you; 4. if your flight is delayed, suggesting everything that can still be changed and doing it for you - all of it (with just a confirmation tap within the app).
partial
## Why Type 2 diabetes can be incredibly tough, especially when it leads to complications. I've seen it firsthand with my uncle, who suffers from peripheral neuropathy. Watching him struggle with insensitivity in his feet and go to the doctor regularly for new insoles just to manage the pain and prevent further damage is really painful. It's constantly on my mind how easily something like a pressure sore could become something more serious, risking amputation. It's heartbreaking to see how diabetes quietly affects his everyday life in ways people do not even realize. ## What Our goal is to create a smart insole for patients living with type 2 diabetes. This insole is designed with several pressure sensors placed at key points to provide real-time data on the patient’s foot pressure. By continuously processing this data, it can alert both the user and their doctor when any irregularities or issues are detected. What’s even more powerful is that, based on this data, the insole can adjust to help correct the patient’s walking stance. This small but important correction can help prevent painful foot ulcers and, hopefully, make a real difference in their quality of life. ## How we built it We built an insole with 3 sensors on it (the sensors are a hackathon project on their own) that checks the plantar pressure exerted by the patient. We stream and process the data and feed it to another model sole that changes shape based on the gait analysis, so it helps correct the patient's walk in real time. Concurrently, we stream the data out to our dashboard to show recent activity, alerts, and live data about a patient's behavior, so that doctors can monitor them remotely and step in at any early signs of neural degradation. ## Challenges we ran into and Accomplishments that we're proud of So, we hit a few bumps in the road since most of the hackathon projects were all about software, and we needed hardware to bring our idea to life. Cue the adventure! We were running all over the city—Trader Joe's, Micro Center, local makerspaces—you name it, we were there, hunting for parts to build our force sensor. When we couldn’t find what we needed, we got scrappy. We ended up making our own sensor from scratch using PU foam and a pencil (yep, a pencil!). It was a wild ride of custom electronics, troubleshooting hardware problems, and patching things up with software when we couldn’t get the right parts. In the end, we’re super proud of what we pulled off—our own custom-built sensor, plus the software to bring it all together. It was a challenge, but we had a blast, and we're thrilled with what we made in the time we had! ## What we learned Throughout this project, we learned that flexibility and resourcefulness are key when working with hardware, especially under tight time constraints, as we had to get creative with available materials. On top of this, we learnt a lot about preventative measures that can be taken to reduce the symptoms of diabetes, and we are optimistic about how we can continue to help people with diabetes. ## What's next for Diabeteasy Everyone in our team has close family affected by diabetes, meaning this is a problem very near and dear to all of us. We strive to continue developing the prototype and delivering it to those around us, where we can see its impact first-hand and make improvements to refine the design and execution.
We aim to build relationships with remote patient monitoring firms to assist with elderly healthcare, since we can provide one value above all: health.
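The write-up does not include the team's processing code, so the following is only an illustrative sketch of how the three-sensor pressure stream could be checked for an imbalance before adjusting the sole or alerting the dashboard. The sensor ordering, the 25% imbalance rule, and the read_sensors() stand-in are all assumptions.

```python
# Sketch: flag a pressure imbalance across heel / midfoot / forefoot readings.
from statistics import mean

def read_sensors():
    # placeholder for the real heel, midfoot, and forefoot sensor readings
    return [310, 295, 450]

def gait_alert(window):
    """Flag a step window where one region carries far more load than average."""
    heel, mid, fore = (mean(region) for region in zip(*window))
    avg = mean([heel, mid, fore])
    return any(p > 1.25 * avg for p in (heel, mid, fore))

window = [read_sensors() for _ in range(50)]   # roughly one step's worth of samples
if gait_alert(window):
    print("Pressure imbalance detected: adjust sole and notify dashboard")
```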
## Inspiration As a team, we've all witnessed the devastation of muscular-degenerative diseases, such as Parkinson's, on the family members of the afflicted. Because we didn't have enough money, resources, or time to research and develop a new drug or other treatment for the disease, we wanted to make the medicine already available as effective as possible. So, we decided to focus on detection: the earlier the victim can recognize the disease and report it to his/her physician, the more effective the treatments we have become. ## What it does HandyTrack uses three tests: a Flex Test, which tests the ability of the user to bend their fingers into a fist, a Release Test, which tests the user's speed in releasing the fist, and a Tremor Test, which measures the user's hand stability. The results of all three tests are stored and used, over time, to look for trends that may indicate symptoms of Parkinson's: a decrease in muscle strength and endurance (ability to make a fist), an increase in time spent releasing the fist (muscle stiffness), and an increase in hand tremors. ## How we built it For the software, we built the entirety of the application in the Arduino IDE using C++. As for the hardware, we used 4 continuous-rotation servo motors, an Arduino Uno, an accelerometer, a microSD card, a flex sensor, and an absolute abundance of wires. We also used a 3D printer to make some rings for the users to put their individual fingers in. The 4 continuous-rotation servos were used to provide resistance against the user's hands. The flex sensor, which is attached to the user's palm, is used to control the servos; the more bent the sensor is, the faster the servo rotation. The flex sensor is also used to measure the time it takes for the user to release the fist, i.e., the time it takes for the sensor to return to its original position. The accelerometer is used to detect changes in the position of the user's hand, and those changes represent the user's hand tremors. All of this data is sent to the SD card, which in turn allows us to review trends over time. ## Challenges we ran into Calibration was a real pain in the butt. Every time we changed the circuit, the flex sensor values would change. Also, developing accurate algorithms for the functions we wanted to write was kind of difficult. Time was a challenge as well; we had to stay up all night to put out a finished product. Also, because the hack is so hardware-intensive, we only had one person working on the code for most of the time, which really limited our options for front-end development. If we had an extra team member, we probably could have made a much more user-friendly application that looks quite a bit cleaner. ## Accomplishments that we're proud of Honestly, we're happy that we got all of our functions running. It's kind of difficult only having one person code for most of the time. Also, we think our hardware is on point. We mostly used cheap products and Arduino parts, yet we were able to make a device that can help users detect symptoms of muscular-degenerative diseases. ## What we learned We learned that we should always have a person dedicated to front-end development, because no matter how functional a program is, it also needs to be easily navigable. ## What's next for HandyTrack Well, we obviously need to make a much more user-friendly app.
We would also want to create a database to store the values of multiple users, so that we can not only track individual users but also build a dataset of our own and compare trends across different users against each individual, in order to create more accurate diagnostics.
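As a rough illustration of the trend analysis the SD-card logs enable, the sketch below fits a line to hypothetical per-session fist-release times and flags a rising slope; the data, threshold, and use of NumPy are assumptions rather than the team's actual code (which runs on the Arduino in C++).

```python
# Illustrative sketch of the trend check described above: given per-session
# fist-release times logged to the SD card, a positive slope over sessions could
# indicate increasing muscle stiffness. All values here are made up.
import numpy as np

def release_time_trend(release_times_s: list[float]) -> float:
    """Least-squares slope of release time (seconds) per session."""
    sessions = np.arange(len(release_times_s))
    slope, _intercept = np.polyfit(sessions, release_times_s, deg=1)
    return float(slope)

if __name__ == "__main__":
    history = [0.42, 0.44, 0.47, 0.51, 0.55]  # hypothetical session log
    slope = release_time_trend(history)
    if slope > 0.02:  # threshold is an assumption, not a clinical value
        print(f"Release time rising by ~{slope:.3f} s/session - flag for physician review")
```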
## Inspiration Stroke costs the United States an estimated $33 billion each year, reducing mobility in more than half of stroke survivors age 65 and over, or approximately 225,000 people a year. Stroke motor rehabilitation is a challenging process, both for patients and for their care providers. On the patient side, occupational and physical therapy often involves hours of training patients to perform functional movements. This process can be mundane and tedious for patients. On the physician's side, medical care providers need quantitative data and metrics on joint motion while stroke patients perform rehabilitative tasks. To learn more about the stroke motor skill rehabilitation process and pertinent needs in the area, our team interviewed Dr. Kara Flavin, a physician and clinical assistant professor of Orthopaedic Surgery and Neurology & Neurological Sciences at the Stanford University School of Medicine, who helps stroke patients with recovery. We were inspired by her thoughts on the role of technology in the stroke recovery process to learn more about this area, and ultimately design our own technology to meet this need. ## What it does Our product, Kinesis, consists of an interconnected suite of three core technologies: an Arduino-based wearable device that measures the range of motion exhibited by the joints in users' hands and transmits the data via Bluetooth; a corresponding virtual reality experience in which patients engage with a virtual environment with their hands, during which the wearable transmits range of motion data; and a data collection and visualization pipeline (from Bluetooth to an Android application to MongoDB) that stores this information and provides it to health care professionals. ## How we built it Our glove uses variable-resistance flex sensors to measure joint flexion of the fingers. We built circuits powered by the Arduino microcontroller to generate range-of-motion data from the sensor output, and transmit the information via a Bluetooth module to an external data collector (our Android application). We sought a simple yet elegant design when mounting our hardware onto our glove. Next, we built an Android application to collect the data transmitted over Bluetooth by our wearable. The data is collected and sent to a remote server for storage using MongoDB and Node.js. Finally, the data is saved in .csv files, which are convenient for processing and would allow for accessible and descriptive visuals for medical professionals. Our virtual-reality experience was built in the Unity engine for Google Cardboard, deployable to any smartphone that supports Google Cardboard. ## Challenges Our project proved to be an exciting yet difficult journey. We quickly found that the various aspects of our project - the Arduino-based hardware, Android application, and VR with Unity/Google Cardboard - were a challenging application of the internet of things (IoT) and hard to integrate. Sending data via Bluetooth from the hardware (Arduino) to the Android app and parsing that data into useful graphs was an example of how we had to combine different technologies. Another major challenge was that none of our team members had prior Unity/VR-development/Google Cardboard experience, so we had to learn these frameworks from scratch at the hackathon. ## Accomplishments that we’re proud of We hope that our product will make motor rehabilitation a more engaging and immersive process for stroke patients while also providing insightful data and analytics for physicians.
We’re proud of learning new technologies to put our hack together, building a cost-effective end-to-end suite of technologies, and blending together software as well as hardware to make a product with incredible social impact. ## What we learned We had a very well-balanced team and were able to effectively utilize our diverse skill sets to create Kinesis. Through helping each other with coding and debugging, we familiarized ourselves with new ways of thinking. We also learned new technologies (VR/ Unity/Google Cardboard, MongoDB, Node.js), and at the same time learnt something new about the ones that we are familiar with (Android, hardware/Arduino). We learnt that the design process is crucial in bringing the right tools together to address social causes in an innovative way. ## What's next for Kinesis We would like to have patients use Kinesis and productize our work. As more patients use Kinesis, we hope to add more interactive virtual reality games and to use machine learning to derive better analytics from the increasing amount of data on motor rehabilitation patterns. We would also like to extend the applications of this integrated model to other body parts and health tracking issues.
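One small piece of the glove pipeline described above is turning a flex sensor's voltage-divider reading into a joint angle. The sketch below shows one plausible way to do that with linear interpolation between two calibration points; the ADC range and calibration values are assumptions, not measurements from the actual device.

```python
# Rough sketch of one step the glove needs: mapping a flex sensor's ADC reading
# to an approximate joint angle via linear interpolation between two calibration
# points. The ADC range and calibration readings are illustrative assumptions.

def adc_to_angle(adc_value: int, flat_adc: int = 300, fist_adc: int = 700,
                 max_angle_deg: float = 90.0) -> float:
    """Linearly interpolate an ADC reading to a flexion angle in degrees."""
    span = fist_adc - flat_adc
    fraction = (adc_value - flat_adc) / span
    return max(0.0, min(max_angle_deg, fraction * max_angle_deg))

if __name__ == "__main__":
    for reading in (300, 500, 700):
        print(reading, "->", round(adc_to_angle(reading), 1), "degrees")
```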
## Inspiration No one likes being stranded at late hours in an unknown place with unreliable transit as the only safe, affordable option to get home. Between paying for an expensive taxi ride yourself or sharing a taxi with random street goers, the current options aren't looking great. WeGo aims to streamline taxi ride sharing, creating a safe, efficient and affordable option. ## What it does WeGo connects you with people around you with similar destinations who are also looking to share a taxi. The application aims to reduce taxi costs by splitting rides, improve taxi efficiency by intelligently routing taxi routes and improve sustainability by encouraging ride sharing. ### User Process
1. The user logs in to the app/web.
2. Nearby riders requesting rides are shown.
3. The user then may choose to "request" a ride by entering a destination.
4. Once the system finds a suitable group of people within close proximity, the user will be sent the taxi pickup and rider information. (The taxi request is initiated.)
5. The user hops on the taxi, along with other members of the application!
## How we built it The user begins by logging in through their web browser (ReactJS) or mobile device (Android). Through API calls to our NodeJS backend, our system analyzes outstanding requests and intelligently groups people together based on location, user ratings & similar destinations - all in real time. ## Challenges we ran into A big hurdle we faced was the complexity of our ride analysis algorithm. To create the most cost efficient solution for the user, we wanted to always try to fill up taxi cars completely. This, along with scaling up our system to support multiple locations with high taxi request traffic, was definitely a challenge for our team. ## Accomplishments that we're proud of Looking back on our work over the 24 hours, our team is really excited about a few things about WeGo. First, the fact that we're encouraging sustainability on a city-wide scale is something really important to us. With the future leaning towards autonomous vehicles & taxis, having a system like WeGo in place is something we see as necessary for the future. On the technical side, we're really excited to have a single, robust backend that can serve our multiple front end apps. We see this as something necessary for mass adoption of any product, especially for solving a problem like ours. ## What we learned Our team members definitely learned quite a few things over the last 24 hours at nwHacks! (Both technical and non-technical!) Working under a time crunch, we really had to rethink how we managed our time to ensure we were always working efficiently and towards our goal. Coming from different backgrounds, team members learned new technical skills such as interfacing with the Google Maps API, using Node.JS on the backend or developing native mobile apps with Android Studio. Through all of this, we all learned that persistence is key when solving a new problem outside of your comfort zone. (Sometimes you need to throw everything and the kitchen sink at the problem at hand!) ## What's next for WeGo The team wants to look at improving the overall user experience with better UI, figuring out better tools for specifically what we're looking for, and adding improved taxi & payment integration services.
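As a simplified stand-in for the grouping logic described in "How we built it" (the real backend is in Node.js and also weighs user ratings), the sketch below greedily bundles riders whose pickups and destinations are both nearby, up to a taxi's capacity; the radii and capacity are illustrative assumptions.

```python
# Simplified sketch of the grouping idea: greedily bundle riders whose pickups
# and destinations are both within a radius, up to taxi capacity. Radii and
# capacity are illustrative assumptions, not WeGo's actual parameters.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) tuples."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def group_riders(requests, pickup_km=1.0, dest_km=2.0, capacity=4):
    """requests: list of dicts with 'id', 'pickup' (lat, lon), 'dest' (lat, lon)."""
    groups, used = [], set()
    for r in requests:
        if r["id"] in used:
            continue
        group = [r]
        used.add(r["id"])
        for other in requests:
            if other["id"] in used or len(group) >= capacity:
                continue
            if (haversine_km(r["pickup"], other["pickup"]) <= pickup_km
                    and haversine_km(r["dest"], other["dest"]) <= dest_km):
                group.append(other)
                used.add(other["id"])
        groups.append(group)
    return groups
```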
## Inspiration The vicarious experiences of friends, and some of our own, immediately made clear the potential benefit to public safety that the City of London's dataset provides. We felt inspired to use our skills to make this data more accessible, to improve confidence for those travelling alone at night. ## What it does By factoring in the location of street lights and the greater presence of traffic, safeWalk intuitively presents the safest options for reaching your destination within the City of London. Guiding people along routes where they will avoid unlit areas, and are likely to walk beside other well-meaning citizens, the application can instill confidence for travellers and positively impact public safety. ## How we built it There were three main tasks in our build.
1) Frontend: Chosen for its flexibility and API availability, we used ReactJS to create a mobile-to-desktop scaling UI. Making heavy use of the available customization and data presentation in the Google Maps API, we were able to achieve a cohesive colour theme and clearly present ideal routes and streetlight density.
2) Backend: We used Flask with Python to create a backend that serves as a proxy for connecting to the Google Maps Directions API and ranking the safety of each route. This was done because we had more experience as a team with Python and we believed the data processing would be easier with Python.
3) Data Processing: After querying the appropriate dataset from London Open Data, we had to create an algorithm to determine the "safest" route based on streetlight density. This was done by partitioning each route into subsections, determining a suitable geofence for each subsection, and then storing each light in the geofence. Then, we determine the total number of lights per km to calculate an approximate safety rating.
## Challenges we ran into:
1) Frontend/Backend Connection: Connecting the frontend and backend of our project together via a RESTful API was a challenge. It took some time because we had no experience with using CORS with a Flask API.
2) React Framework: None of the team members had experience in React, and only limited experience in JavaScript. Every feature implementation took a great deal of trial and error as we learned the framework and developed the tools to tackle front-end development. Once concepts were learned, however, it was very simple to refine.
3) Data Processing Algorithms: It took some time to develop an algorithm that could handle our edge cases appropriately. At first, we thought we could develop a graph with weighted edges to determine the safest path. Edge cases such as handling intersections properly and considering lights on either side of the road led us to dismiss the graph approach.
## Accomplishments that we are proud of Throughout our experience at Hack Western, although we encountered challenges, through dedication and perseverance we made multiple accomplishments. As a whole, the team was proud of the technical skills developed when learning to deal with the React framework, data analysis, and web development. In addition, the levels of teamwork, organization, and enjoyment/team spirit reached in order to complete the project in a timely manner were great achievements. Given the hack we developed and our limited knowledge of the React framework, we were proud of the sleek UI design that we created.
In addition, the overall system design, with a separate back-end and front-end, lent itself well to protecting our algorithm and off-loading processing. Overall, although a challenging experience, the hackathon allowed the team to reach new heights. ## What we learned For this project, we learned a lot more about React as a framework and how to leverage it to make a functional UI. Furthermore, we refined our web-based design skills by building both a frontend and backend while also using external APIs. ## What's next for safewalk.io In the future, we would like to add more safety factors to safewalk.io. We foresee factors such as:
* Crime rate
* Pedestrian accident rate
* Traffic density
* Road type
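The lights-per-km rating described under "Data Processing" could be computed roughly as in the sketch below: count the streetlights that fall within a geofence radius of any point on the route and normalize by the route's length. The 50 m radius is an illustrative assumption.

```python
# Sketch of the safety-rating idea: count streetlights within a geofence radius
# of the route points and normalize by route length to get lights per km.
# The 50 m (0.05 km) radius is an illustrative assumption.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) tuples."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def lights_per_km(route_points, lights, geofence_km=0.05):
    """route_points and lights are lists of (lat, lon) tuples."""
    route_length = sum(haversine_km(a, b) for a, b in zip(route_points, route_points[1:]))
    if route_length == 0:
        return 0.0
    nearby = {
        light
        for light in lights
        for point in route_points
        if haversine_km(point, light) <= geofence_km
    }
    return len(nearby) / route_length
```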
## Inspiration With all team members living in urban cities, it was easy to use up all of our mobile data while on the go. From looking up nearby restaurants to playing Pokémon Go, it was easy to chew through the limited data we had. We would constantly be looking for a nearby Tim Hortons, just to leech their wifi and look for a bus route to get home safely. Therefore, we drew our inspiration from living out the reality that our phones simply are not as useful without mobile data, and we know that many people around the globe depend on mobile data for both safety and convenience with their devices. With **NAVIGATR**, users will not have to rely on their mobile data to find travel information, weather, and more. ## What it does NAVIGATR uses machine learning and scrapes real-time data to respond to any inquiry you might have when data isn't available to you. We have kept in mind that the main issues people may have when on the go and out of mobile data are travel times, bus routes, destination information, and weather information. So, NAVIGATR is able to pull all this information together to allow users to use their phone to the fullest even if they do not have access to mobile data; additionally, we are able to give users peace of mind when on the go - they will always have the information they need to get home safely. ## How we built it We built NAVIGATR using a variety of different technical tools; more specifically, we started by using Twilio. Twilio catches the SMS messages that are sent and invokes a webhook to reply back to the message. Next, we use BeautifulSoup to scrape and provide data from Google searches to answer queries; additionally, our machine learning model, GPT-3, can respond to general inquiries. Lastly, this is all tied together using Python, which facilitates communication between tools and catches user input errors. ## Challenges we ran into Mainly, the TTC API was outdated by twelve years; therefore, we had to shift our focus to web scraping. Web scraping is more reliable than the TTC API, and we were able to create our application knowing all information is accurate. Furthermore, the client is allowed to input any starting point and destination they wish, and our application is now not limited to just the Toronto area. ## Accomplishments that we're proud of We believe that we were able to address a very relevant issue in modern day society, which is safety in urban environments and mobile-data paywalls. With the explosion of technology in the last two decades, there is no reason why innovation cannot be used to streamline information in this way. Moreover, we wanted to try and create an application that has genuine use for people around the globe; this goal led us to innovate with the aim of improving the daily lives of a variety of people. ## What we learned We learned how to handle web scraping off Google, as well as creating webhooks and utilizing machine learning models to bring our ideas to life. ## What's next for NAVIGATR Next, we would like to implement a wider variety of tools that align with our mission of providing users with simple answers to questions that they may have. Continuing on the theme of safety, we would like to add features which provide a user with information about high-density areas vs. low-density areas, weather warnings, as well as secure travel routes vs. low-risk travel routes. We believe that all of these features would greatly increase the impact NAVIGATR would have in a user's everyday life.
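A minimal sketch of the SMS entry point described above: Twilio posts each incoming text to a webhook, and the reply goes back as TwiML. The route name and the lookup_answer() helper are hypothetical stand-ins for the scraping and GPT-3 logic.

```python
# Minimal sketch of the SMS webhook: Twilio posts incoming texts here, and the
# reply is returned as TwiML. The /sms route name and lookup_answer() are
# hypothetical stand-ins for NAVIGATR's scraping / GPT-3 logic.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def lookup_answer(query: str) -> str:
    # Placeholder for the web-scraping / language-model lookup.
    return f"Sorry, I don't have an answer for: {query}"

@app.route("/sms", methods=["POST"])
def sms_reply():
    incoming = request.form.get("Body", "")
    resp = MessagingResponse()
    resp.message(lookup_answer(incoming))
    return str(resp)

if __name__ == "__main__":
    app.run(port=5000)
```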
## Inspiration Automation is at its peak when it comes to technology, but one area that has failed to keep up is daily medicine. We saw many moments when our family members had trouble keeping up with their prescription timelines. In a decade dominated by cell phones, we saw the need to develop something fast and easy that wouldn't require anything too complicated to keep track of all their prescriptions and timelines, and that would be accessible at their fingertips. ## What it does CapsuleCalendar is an Android application that lets one take a picture of their prescriptions or pill bottles and have them saved to their calendars (as reminders) based on the recommended intake amounts (on prescriptions). The user will then be notified based on the frequency outlined by the physician on the prescription. The application simply requires taking a picture; it has been developed with the user in mind and does not require one to go through the calendar reminder setup, as everything is pre-populated for the user through optical-character recognition (OCR) processing when they take a snap of their prescription/pill bottle. ## How we built it The application was built for Android purely in Java, including the integration of all APIs and frameworks. First, authorization of individualized accounts was done using Firebase. We implemented and modified Google's optical-character recognition (OCR) Cloud Vision framework to accurately recognize text on labels, and to process and parse it in real time. The Google Calendar API was then applied to the parsed data, and with further processing, we used intents to set reminders based on the data from the prescription labels (e.g. take X tablets X times daily - where X is some arbitrary number which is accounted for in one or multiple reminders). ## Challenges we ran into The OCR Java framework was quite difficult to implement into our personalized application due to various dependency failures - it took us way too long to debug and get the framework to work *sufficiently* for our needs. Also, the default OCR graphics toolkit only captures very small snippets of text at a single time, whereas we needed multiple lines to be processed at once, as well as text from different areas within the label (e.g. the default implementation would allow one set of text to be recognized and processed - we needed multiple sets). The default OCR engine wasn't quite effective for multiple lines of prescriptions, especially when identifying both the prescription name and the intake procedure - tweaking this was pretty tough. Also, when we tried to use the Google Calendar API, we had extensive issues using Firebase to generate OAuth 2.0 credentials (Google documentation wasn't too great here :-/). ## Accomplishments that we're proud of We're proud of being able to implement a customized Google Cloud Vision based OCR engine and successfully process, parse and post text to the Google Calendar API. We were just really happy we had a functional prototype! ## What we learned Debugging is a powerful skill we took away from this hackathon - it was pretty rough going through complex, pre-written framework code. We also learned to work with some new Google APIs, and Firebase integrations. Reading documentation is also very important… along with reading lots of StackOverflow. ## What's next for CapsuleCalendar We would like to use a better, stronger OCR engine that is more accurate at reading labels in a curved manner, and does not get easily thrown off by multiple lines of text.
Also, we would like to add functionality to parse pre-taken images (if the patient doesn't have their prescription readily available and only happens to have a picture of it). We would also like to improve the UI. ## Run the application Simply download/clone the source code from the GitHub link provided and run it in Android Studio. A physical Android device is required, since the app uses the camera, which is not possible on an emulator.
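After OCR, the parsing step has to pull a dose and frequency out of label text such as "TAKE 2 TABLETS 3 TIMES DAILY". The sketch below shows one plausible regex-based approach; real labels vary widely, so the single pattern here is only an assumption, not the app's actual Java parser.

```python
# Illustrative sketch of the parsing step after OCR: pull the dose and daily
# frequency out of a label line such as "TAKE 2 TABLETS 3 TIMES DAILY".
# Real labels vary widely, so this single regex is only an assumption.
import re

PATTERN = re.compile(
    r"take\s+(?P<dose>\d+)\s+(tablet|capsule|pill)s?\s+(?P<freq>\d+)\s+times?\s+(a\s+day|daily)",
    re.IGNORECASE,
)

def parse_label(text: str):
    match = PATTERN.search(text)
    if not match:
        return None
    return {"dose": int(match.group("dose")), "times_per_day": int(match.group("freq"))}

if __name__ == "__main__":
    print(parse_label("TAKE 2 TABLETS 3 TIMES DAILY WITH FOOD"))
    # {'dose': 2, 'times_per_day': 3}
```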
## Inspiration How many times have you forgotten to take your medication and damned yourself for it? It has happened to us all, with different consequences. Indeed, missing out on a single pill can, for some of us, throw an entire treatment process out the window. Being able to keep track of our prescriptions is key in healthcare, which is why we decided to create PillsOnTime. ## What it does PillsOnTime allows you to load your prescription information, including the daily dosage and refills, as well as reminders into your local phone calendar, simply by taking a quick photo or uploading one from the library. The app takes care of the rest! ## How we built it We built the app with React Native and Expo, using Firebase for authentication. We used the built-in Expo module to access the device's camera and store the image locally. We then used the Google Cloud Vision API to extract the text from the photo. We used this data to then create a (semi-accurate) algorithm which can identify key information about the prescription/medication to be added to your calendar. Finally, the event was added to the phone's calendar with the built-in Expo module. ## Challenges we ran into As our team has a diverse array of experiences, the same can be said about the challenges that each of us encountered. Some had to get accustomed to new platforms in order to design an application in less than a day, while figuring out how to build an algorithm that will efficiently analyze data from prescription labels. None of us had worked with machine learning before, and it took a while for us to process the incredibly large amount of data that the API gives back to you. Working with the permissions for writing to someone's calendar was also time consuming. ## Accomplishments that we're proud of Going into this challenge, we faced a lot of problems that we managed to overcome, whether it was getting used to unfamiliar platforms or figuring out the design of our app. We ended up with a rather satisfying result given the time constraints, and we learned quite a lot. ## What we learned None of us had worked with ML before but we all realized that it isn't as hard as we thought!! We will definitely be exploring more of the similar APIs that Google has to offer. ## What's next for PillsOnTime We would like to refine the algorithm to create calendar events with more accuracy.
Domain name: MedicationDedication.io ## Inspiration Drugs are often taken incorrectly due to infrequency of consumption, and can be misused or stolen. Furthermore, drugs can be stored at poor temperatures and go bad. ## What it does Medication Dedication is an IoT project that aims to monitor pill usage and storage. Our project monitors usage through an attachment of sensors on a pill bottle. The sensors log whenever the bottle is opened, how full the bottle is, and what temperature the bottle is at. Data is stored in a database and viewable on a phone app. ## How it's built The pill bottle attachment is made with an IR sensor, temperature sensor, and velostat. These sensors are attached to an Arduino/Particle Photon. The Particle Photon logs the data to Azure IoT Hub, which then outputs the data using a Stream Analytics job. The job sends data to an Azure SQL Database. Our phone application is built using Flutter. The phone app is able to show an activity log and set alarms. The app accesses data through a NodeJS server running on Azure App Service. The server is connected to the Azure SQL Database that stores all the IoT data. An SMS reminder is sent through Twilio whenever an alarm is triggered. ## Challenges we ran into Our team had some hardware issues. It was also the first time we used Flutter, Azure, and Twilio. ## Accomplishments that we're proud of We used a lot of new technologies!
## What it does Khaledifier replaces all quotes and images around the internet with pictures and quotes of DJ Khaled! ## How we built it A Chrome web app written in JS interacts with live web pages to make changes. The app sends a quote to a server, which tokenizes words into types using NLP. This server then makes a call to an Azure Machine Learning API that has been trained on DJ Khaled quotes to return the closest matching one. ## Challenges we ran into Keeping the server running with older Python packages, and for free, proved to be a bit of a challenge.
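The matching step in the real project is an Azure Machine Learning API trained on DJ Khaled quotes; as a local, hedged stand-in, the sketch below picks the quote whose bag-of-words is most similar to the input using cosine similarity. The quote list and tokenization are simplified assumptions.

```python
# Local stand-in for the matching step (the real project calls an Azure ML API):
# pick the DJ Khaled quote whose bag-of-words is most similar to the input.
# The quote list and tokenization are simplified assumptions.
import re
from collections import Counter
from math import sqrt

KHALED_QUOTES = [
    "Another one.",
    "They don't want you to win.",
    "Major key to success.",
]

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest_quote(text: str) -> str:
    return max(KHALED_QUOTES, key=lambda q: cosine(tokens(text), tokens(q)))

if __name__ == "__main__":
    print(closest_quote("The key is to want success"))  # "Major key to success."
```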
Ever wonder where that video clip came from? Probably some show or movie you've never watched. Well, with RU Recognized, you can do a reverse video search to find out what show or movie it's from. ## Inspiration We live in a world rife with movie and TV show references, and not being able to identify these references is a sign of ignorance in our society. More importantly, the feeling of not being able to remember what movie or show that one really funny clip was from can get really frustrating. We wanted to enable every single human on this planet to be able to seek out and enjoy video-based content easily but also efficiently. So, we decided to make **Shazam, but for video clips!** ## What it does RU Recognized takes a user-submitted video and uses state-of-the-art algorithms to find the best match for that clip. Once a likely movie or TV show is found, the user is notified and can happily consume the much desired content! ## How we built it We took on a **3-pronged approach** to tackle this herculean task:
1. Using **AWS Rekognition's** celebrity detection capabilities, potential celebs are spotted in the user-submitted video. These identifications have a harsh confidence-value cut-off to ensure only the best matches are kept.
2. We scrape the video using **AWS' Optical Character Recognition** (OCR) capabilities to find any identifying text that could help in identification.
3. **Google Cloud's** Speech-to-Text API allows us to extract the audio into readable plaintext. This info is threaded through Google Cloud Custom Search to find a large unstructured data dump.
To parse and extract useful information from this amorphous data, we also maintained a self-curated, specialized, custom-made dataset built from various data banks, including **Kaggle's** actor info, as well as IMDB's incredibly expansive database. Furthermore, due to the uncertain nature of the recognition APIs, we used **clever tricks** such as cross-referencing celebrities seen together, and only detecting those that had IMDB links. Correlating the information extracted from the video with the known variables stored in our database, we are able to make an educated guess at the origins of the submitted clip. ## Challenges we ran into Challenges are an obstacle that our team is used to, and they only serve to make us stronger. That being said, some of the (very frustrating) challenges we ran into while trying to make RU Recognized a good product were:
1. As with a lot of new AI/ML algorithms on the cloud, we struggled a lot with getting our accuracy rates up for identified celebrity faces. Since AWS Rekognition is trained on images of celebrities from everyday life, being able to identify a heavily costumed/made-up actor is a massive challenge.
2. Cross-connecting across various cloud platforms such as AWS and GCP led to some really specific and hard-to-debug authorization problems.
3. We faced a lot of obscure problems when trying to use AWS to automatically detect the celebrities in the video without manually breaking it up into frames. This proved to be an obstacle we weren't able to surmount, and we decided to sample the frames at a constant rate and detect people frame by frame.
4. Dataset cleaning took hours upon hours of work and dedicated picking apart. IMDB datasets were too large to parse completely and ended up costing us hours of our time, so we decided to make our own datasets from this and other datasets.
## Accomplishments that we're proud of Getting the frame-by-frame analysis to (somewhat) accurately churn out celebrities and being able to connect a ton of clever identification mechanisms was a very rewarding experience. We were effectively able to create an algorithm that uses 3 to 4 different approaches to, in a way, 'peer review' each option and eliminate incorrect ones. ## What I learned
* Data cleaning is very, very cumbersome and time-intensive
* Not all AI/ML algorithms are magically accurate
## What's next for RU Recognized Hopefully, integrate all this work into an app that is user-friendly and way more accurate, with the entire IMDB database to reference.
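The constant-rate frame sampling and celebrity detection mentioned in the challenges could look roughly like the sketch below, which grabs about one frame per second with OpenCV and sends it to Amazon Rekognition's celebrity recognition; the sampling interval and 90% confidence cut-off are assumptions, and AWS credentials/region are taken from the environment.

```python
# Sketch of the frame-sampling approach: grab roughly one frame per second with
# OpenCV and send its bytes to Amazon Rekognition's celebrity recognition.
# The one-second interval and 90% confidence cut-off are assumptions; AWS
# credentials and region come from the environment.
import boto3
import cv2

rekognition = boto3.client("rekognition")

def celebrities_in_video(path: str, min_confidence: float = 90.0) -> set[str]:
    names = set()
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % int(fps) == 0:  # roughly one frame per second
            ok_jpg, jpg = cv2.imencode(".jpg", frame)
            if ok_jpg:
                result = rekognition.recognize_celebrities(Image={"Bytes": jpg.tobytes()})
                for celeb in result.get("CelebrityFaces", []):
                    if celeb.get("MatchConfidence", 0) >= min_confidence:
                        names.add(celeb["Name"])
        frame_index += 1
    cap.release()
    return names
```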
## Inspiration We're tired of plain old video players. Too much utilizable space goes to waste, and ads that *do* pop up distract the user from the viewing experience. We created a better way for video players and companies to both achieve their goals: present everything in a pleasant, concise format but also promote a user experience that focuses on the commerce that drives the open video industry. ## What it does Our process utilizes a conversion method to take YouTube videos, asynchronously query the associated audio via Google Cloud Platform's Speech-to-Text and Storage services, and create a searchable transcript for keywords associated with the video. Finally, we also utilize machine learning models to acquire visual data from video frames, such as the faces of individuals or visual factors to predict environments (e.g. changes in environment during a flood to forewarn individuals). From there, we utilize the Kensho API to obtain information about the products visualized or cited and their associated organizations or corporations. ## How we built it We began by wireframing the presentation aspects of video visualizations we aimed to solve, specifically those of marketing individual products, displaying company trends, and identifying the video environments, namely the people and natural context of the video. We then used a modular software design pattern to develop the individual aspects of our project independently before merging these individual modules into our final product. In order to allow our APIs to function as expected, we first needed to acquire the video metadata, including the associated text and images. To best manage our time, we broke up the work of designing the frontend components, writing the API calls, and acquiring the necessary functional data so it could proceed in parallel. Upon requesting and converting the video data from YouTube into WAV audio format, we used Google Cloud's Storage platform to effectively store our audio. Then, we transcribed said audio file into text with Google Cloud's Speech API. Chosen video frames were also analyzed with a neural network for cascade classification and filters for edge detection. We then created appropriate functions and display formats for the outputs of the API calls, ranging from text to graphs. Finally, we created a front-end web app using React.js to serve as the user interface, serving the results of the API calls to the UI platform. ## Challenges we ran into The GCP solution took time to appropriately instantiate, as documentation on some aspects of reading from files stored in GCP Buckets directly was out of date (using Python 2 instead of 3). Instead, we read data using GCP's speech-to-text functions directly, which do so asynchronously on cloud machines. YouTube's video players were not implemented for visualization on external players, as much of the associated information for "playability" was not presented in a simple format. We counteracted this issue by building a signal processing unit that collects information from asynchronous events and handles them appropriately. ## Accomplishments that we're proud of We are proud of our use of new technologies, having had little familiarity with any of the APIs given, and of successfully integrating these technologies together in an elegant and manageable platform. We are also pleased with the simple, clean design of the platform and the structural basis that allowed us to build our applications.
## What we learned
* Using Python libraries and mathematical models for computer vision and file conversion (imageai, youtube-dl, cv2)
* Using a cloud platform for process management, namely Google Cloud Platform (google-cloud-{storage, speech})
* Playing YouTube videos via personal handlers
* Embedding events upon keyword or image analysis events
## What's next for Lensflare We hope to further expand our plans for Lensflare by involving more APIs, increasing the quantity and quality of information available to the user. We were encouraged by the scalability our system makes possible. Apart from requesting the YouTube video itself and preprocessing it, all required data is stored solely on GCP and queried asynchronously, which gives us a simple method to request and transfer information, with multithreading for better efficiency. We hope to develop Lensflare into a full-force solution for video viewing with the appropriate foundational and financial support.
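The asynchronous transcription step described above (audio already uploaded to a Cloud Storage bucket, then transcribed by the Speech API) could be driven by something like the sketch below; the bucket URI, encoding, and sample rate are illustrative assumptions.

```python
# Sketch of the asynchronous transcription step: point the Speech-to-Text client
# at a WAV file already uploaded to a Cloud Storage bucket and wait for the
# long-running job. The bucket URI and sample rate are illustrative assumptions.
from google.cloud import speech

def transcribe_gcs(gcs_uri: str) -> str:
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    operation = client.long_running_recognize(config=config, audio=audio)
    response = operation.result(timeout=300)  # transcription runs on Google's side
    return " ".join(r.alternatives[0].transcript for r in response.results)

if __name__ == "__main__":
    print(transcribe_gcs("gs://example-bucket/video-audio.wav"))  # hypothetical path
```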
## Inspiration The inspiration for the project was to design a model that could detect fake loan entries hidden amongst a set of real loan entries. Also, our group was eager to design a dashboard to help see these statistics - many similar services are good at identifying outliers in data but are unfriendly to the user. We wanted businesses to be able to look at and understand fake data immediately, because it's important to recognize it quickly. ## What it does Our project handles back-end and front-end tasks. Specifically, on the back-end, the project uses libraries like Pandas in Python to parse input data from CSV files. Then, after creating histograms and linear regression models that detect outliers in the given input, the data is passed to the front-end to display the histogram and present the outliers to the user for an easy experience. ## How we built it We built this application using Python on the back-end. We utilized Pandas for efficiently storing data in DataFrames. Then, we used Numpy and Scikit-Learn for statistical analysis. On the server side, we built the website in HTML/CSS and used Flask and Django to handle events on the website and interaction with other parts of the code. This involved taking a CSV file from the user, parsing it into a string, running our back-end model, and displaying the results to the user. ## Challenges we ran into There were many front-end and back-end issues, but they ultimately helped us learn. On the front-end, the biggest problem was using Django with the browser to bring this experience to the user. Also, on the back-end, we found using Keras to be an issue during the start of the process, so we had to switch our frameworks mid-way. ## Accomplishments that we're proud of An accomplishment was being able to bring both sides of the development process together. Specifically, creating a UI with a back-end was a painful but rewarding experience. Also, implementing cool machine learning models that could actually find fake data was really exciting. ## What we learned One of our biggest lessons was to use libraries more effectively to tackle the problem at hand. We started creating a machine learning model using Keras in Python, which turned out to be ineffective for implementing what we needed. After much help from the mentors, we played with other libraries that made it easier to implement linear regression, for example. ## What's next for Financial Outlier Detection System (FODS) Eventually, we aim to use more sophisticated statistical tools to analyze the data. For example, a Random Forest could have been used to identify key characteristics of the data, helping us decide on our linear regression models before building them. Also, one cool idea is to search for linearly dependent columns in the data. They would help find outliers and eliminate trivial or useless variables in new data quickly.
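One plausible version of the regression-based outlier check is sketched below: fit a linear regression on the numeric loan columns and flag rows whose residuals sit far from the rest. The column names, demo data, and cutoffs are illustrative assumptions rather than the project's actual model.

```python
# Sketch of a residual-based outlier check: fit a linear regression on numeric
# loan columns and flag rows with unusually large residuals. Column names,
# demo data, and cutoffs are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

def flag_outliers(df: pd.DataFrame, feature_cols, target_col, z_cutoff: float = 3.0):
    """Return the rows whose regression residuals are unusually large."""
    X = df[feature_cols].to_numpy()
    y = df[target_col].to_numpy()
    model = LinearRegression().fit(X, y)
    residuals = y - model.predict(X)
    z_scores = (residuals - residuals.mean()) / residuals.std()
    return df[abs(z_scores) > z_cutoff]

if __name__ == "__main__":
    loans = pd.DataFrame({
        "income": [40_000, 55_000, 60_000, 52_000, 58_000],
        "loan_amount": [10_000, 14_000, 15_500, 13_000, 95_000],  # last row looks suspicious
    })
    # Tiny demo sample, so use a lower cutoff than the 3-sigma default.
    print(flag_outliers(loans, ["income"], "loan_amount", z_cutoff=1.5))
```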
## Inspiration As avid Canva fans, we noticed that when we needed to make QR codes in bulk, the native features were often cumbersome. Having to independently click to form each QR code would waste a lot of time. ## What it does Our app allows users to upload a list and generate QR codes in bulk, thus saving users a lot of time. We also created a feature where users can choose the theme of the QR codes, allowing them to customize the aesthetics of the codes according to a theme, like rainbow or black/white. ## How we built it We used the native Canva developer toolkit, and we simply built additional buttons on top of it. We used React Native as our tech stack, which integrates into Canva. We also used the QR generator API, which generates the actual QR code. ## Challenges we ran into Because this was our first hackathon, we had 2 main challenges: (1) Ideating: we had some trouble coming up with an idea, so we discussed our experiences with Canva and did research online to find several problems with Canva. (2) Programming: We had difficulty using React Native, since it was our first time using it. We had to resort to documentation and simple problem-solving whenever we ran into issues, such as when we added new buttons. ## Accomplishments that we're proud of Simply building our first-ever hackathon product - and having the perseverance to overcome any issues.
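The actual app runs inside Canva and calls a QR generator API from the front end; as a hedged local stand-in for the bulk-generation idea, the sketch below turns a list of links into one PNG per entry using the third-party Python `qrcode` package.

```python
# Local stand-in for the bulk-generation idea (the actual app is a Canva app
# calling a QR generator API): turn a list of links into one PNG per entry.
# Requires the third-party 'qrcode' package (pip install qrcode[pil]).
import qrcode

def generate_bulk(links, fill_color="black", back_color="white"):
    paths = []
    for i, link in enumerate(links):
        qr = qrcode.QRCode(box_size=10, border=4)
        qr.add_data(link)
        qr.make(fit=True)
        img = qr.make_image(fill_color=fill_color, back_color=back_color)
        path = f"qr_{i}.png"
        img.save(path)
        paths.append(path)
    return paths

if __name__ == "__main__":
    print(generate_bulk(["https://example.com/a", "https://example.com/b"]))
```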
## Inspiration Public washrooms can be a rough time for users thanks to lots of traffic, clogged toilets, missing toilet paper and more. There are rarely opportunities to easily provide feedback on these conditions. For management, washrooms often get cleaned too frequently or rarely at all. Let's bring simple sensors to track the status of frequently used facilities to optimize cleaning for businesses and help users enjoy their shit. ## What it does A system of IoT sensors, with an opportunity for gesture-based user feedback, streamed to a live dashboard to give management updates.
We track:
* Traffic into the washroom
* Methane levels
* Fullness of trash cans
We provide:
* Refill buttons in each stall
* A prompt for a facility rating using Leap Motion gesture tracking
## How we built it We used a Snapdragon board with three attached sensors to collect data and track cleanliness in the washroom. We used Leap Motion to create an interactive dashboard where users rate their experience through gestures - a more sanitary and futuristic approach that incentivizes users to participate. All data is pushed up to Freeboard, where management can see the status of their washrooms. ## Challenges we ran into The Leap Motion documentation was tricky to work with; we didn't take into account that web apps need time to load, so the body wouldn't load, and it was also challenging to build hand recognition from the ground up. Getting the hardware to work was another hurdle: the Snapdragon with its shield doesn't have documentation, and normal Arduino code wasn't working. Integrating the data with the IoT dashboard was a challenge as well. ## Accomplishments that we're proud of We found a cool solution to get data from the sensors to the dashboard: we wrote to serial, read that with a Python script, and then used dweet.io to publish to Freeboard. We also built hand recognition in Leap Motion. ## What we learned We learned to use Leap Motion, transmit live data from multiple sensors, and integrate a ton of IoT data. ## What's next for Facile
* Store data online so that we can use ML to understand trends in facility behaviour and cluster types of facilities to better estimate cleaning times
* Talk to facilities management staff to figure out how they currently rate and clean their washrooms, and base our sensors on that
* Partner with large malls, municipal governments, and office complexes to see if they'd be interested
* Apply the system to other environments in smart cities beyond bathrooms
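The serial-to-dweet.io bridge mentioned above might look roughly like the sketch below: read comma-separated sensor values off the serial port with pyserial and publish them to dweet.io for Freeboard to poll. The device path, baud rate, payload fields, and thing name are all illustrative assumptions.

```python
# Sketch of the serial-to-dweet.io bridge: read comma-separated sensor values
# off the serial port and publish them to dweet.io, which Freeboard can poll.
# The serial device path, baud rate, fields, and thing name are assumptions.
import requests
import serial  # pyserial

THING = "facile-washroom-demo"  # hypothetical dweet.io thing name

def run(port: str = "/dev/ttyUSB0", baud: int = 9600):
    with serial.Serial(port, baud, timeout=2) as conn:
        while True:
            line = conn.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue
            try:
                traffic, methane, trash = (float(x) for x in line.split(","))
            except ValueError:
                continue  # skip malformed lines
            requests.post(
                f"https://dweet.io/dweet/for/{THING}",
                json={"traffic": traffic, "methane": methane, "trash_fullness": trash},
                timeout=5,
            )

if __name__ == "__main__":
    run()
```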
## Inspiration 🍪 We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks... Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock. ## What it does 📸 Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see. ## How we built it 🛠️ * **Backend:** Node.js * **Facial Recognition:** OpenCV, TensorFlow, DLib * **Pipeline:** Twilio, X, Cohere ## Challenges we ran into 🚩 In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time. Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision. Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders. ## Accomplishments that we're proud of 💪 * Successfully bypassing Nest’s security measures to access the camera feed. * Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm. * Fine-tuning Cohere to generate funny and engaging social media captions. * Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner. ## What we learned 🧠 Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application. ## What's next for Craven 🔮 * **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates. 
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy. * **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves. * **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
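A hedged sketch of the K-nearest-neighbours classification step: train scikit-learn's KNeighborsClassifier on precomputed face embeddings (for example, 128-dimensional vectors from a dlib-style encoder) and treat low-confidence predictions as intruders. The embeddings, names, and probability threshold here are made-up placeholders, not the project's tuned model.

```python
# Sketch of the KNN classification step over precomputed face embeddings.
# Embeddings, names, and the probability threshold are made-up placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_classifier(embeddings: np.ndarray, labels: list[str], k: int = 3):
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(embeddings, labels)
    return model

def identify(model, embedding: np.ndarray, min_prob: float = 0.6) -> str:
    """Return the best-matching name, or 'intruder' if confidence is too low."""
    probs = model.predict_proba(embedding.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    return model.classes_[best] if probs[best] >= min_prob else "intruder"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.1, (5, 128)), rng.normal(1, 0.1, (5, 128))])
    y = ["owner"] * 5 + ["roommate"] * 5
    model = train_classifier(X, y)
    print(identify(model, rng.normal(1, 0.1, 128)))  # likely "roommate"
```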
## Inspiration We were heavily focused on the machine learning aspect and realized that we lacked any datasets which could be used to train a model. So we tried to figure out what kind of activity might impact insurance rates while also being something we could collect data on right from the equipment we had. ## What it does Insurity takes a video feed of a person driving and evaluates it for risky behavior. ## How we built it We used Node.js, Express, and Amazon's Rekognition API to evaluate facial expressions and personal behaviors. ## Challenges we ran into This was our third idea. We had to abandon two other major ideas because the data did not seem to exist for machine learning purposes.
## Inspiration Large corporations are spending more and more money nowadays on digital media advertising, but their data collection tools have not been improving at the same rate. Nike spent over $3.03 billion on advertising alone in 2014, which amounted to approximately $100 per second, yet they only received a marginal increase in profits that year. This is where Scout comes in. ## What it does Scout uses a webcam to capture facial feature data about the user. It sends this data through a facial recognition engine in Microsoft Azure's Cognitive Services to determine demographic information, such as gender and age. It also captures facial expressions throughout an Internet browsing session, say a video commercial, and applies sentiment analysis machine learning algorithms to instantaneously determine the user's emotional state at any given point during the video. This is also done through Microsoft Azure's Cognitive Services. Content publishers can then aggregate this data and analyze it later to determine which creatives generated a positive sentiment and which generated a negative one. Scout follows an opt-in philosophy, so users must actively turn on the webcam to be a subject in Scout. We highly encourage content publishers to incentivize users to participate in Scout (something like $100/second) so that both parties can benefit from this platform. We also take privacy very seriously! That is why photos taken through the webcam by Scout are not persisted anywhere and we do not collect any personal user information. ## How we built it The platform is built on top of a Flask server hosted on an Ubuntu 16.04 instance in Azure's Virtual Machines service. We use nginx, uWSGI, and supervisord to run and maintain our web application. The front-end is built with Google's Materialize UI and we use Plotly for complex analytics visualization. The facial recognition and sentiment analysis intelligence modules are from Azure's Cognitive Services suite, and we use Azure's SQL Server to persist aggregated data. We also have an Azure Chatbot Service for data analysts to quickly see insights. ## Challenges we ran into **CORS CORS CORS!** Cross-Origin Resource Sharing was a huge pain in the head for us. We divided the project into three main components: the Flask backend, the UI/UX visualization, and the webcam photo collection and analysis. We each developed our modules independently of each other, but when we tried to integrate them, we ran into a huge number of CORS issues with the REST API endpoints on our Flask server. We were able to resolve this with a couple of extra libraries, but it was definitely a challenge figuring out where these errors were coming from. SSL was another issue we ran into. In 2015, Google released a new WebRTC policy that prevented webcams from being accessed on insecure (HTTP) sites in Chrome, with the exception of localhost. This forced us to use OpenSSL to generate self-signed certificates and reconfigure our nginx routes to serve our site over HTTPS. As one can imagine, this caused havoc for our testing suites and our original endpoints. It forced us to sift back through most of the code we had already written to accommodate this change in protocol. We don't like implementing HTTPS, and neither does Flask apparently. On top of our code, we had to reconfigure the firewalls on our servers, which only added to the time wasted in this short hackathon.
## Accomplishments that we're proud of We were able to multi-process our consumer application to handle the massive amount of data we were sending back to the server (two photos taken by the webcam each second, each relatively high quality and memory-heavy). We were also able to get our chatbot to communicate with the REST endpoints on our Flask server, so any metric in our web portal is also accessible in Messenger, Skype, Kik, or whatever messaging platform you prefer. This allows marketing analysts who are frequently on the road to easily review the emotional data on Scout's platform. ## What we learned When you stack cups, start with a 3x3 base and stack them in inverted directions. ## What's next for Scout You tell us! Please feel free to contact us with your ideas, questions, comments, and concerns!
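The aggregation step that turns per-second emotion scores into a per-creative summary could be as simple as the sketch below; the emotion keys and the positive/negative split are assumptions about what the face API returns rather than Scout's actual schema.

```python
# Sketch of the aggregation step: roll per-second emotion scores into an average
# profile and a single net-sentiment number for a creative. The emotion keys and
# positive/negative split are assumptions, not Scout's actual schema.
from collections import defaultdict

POSITIVE = {"happiness", "surprise"}
NEGATIVE = {"anger", "disgust", "sadness", "contempt", "fear"}

def summarize(frames: list[dict]) -> dict:
    totals = defaultdict(float)
    for frame in frames:
        for emotion, score in frame.items():
            totals[emotion] += score
    n = max(len(frames), 1)
    averages = {emotion: total / n for emotion, total in totals.items()}
    net = sum(averages.get(e, 0.0) for e in POSITIVE) - sum(averages.get(e, 0.0) for e in NEGATIVE)
    return {"averages": averages, "net_sentiment": net}

if __name__ == "__main__":
    samples = [
        {"happiness": 0.7, "sadness": 0.1},
        {"happiness": 0.4, "anger": 0.3},
    ]
    print(summarize(samples))
```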
## Inspiration With iOS rolling out a built-in QR code scanner in the camera, this provides a ubiquitous, easy-to-use (very low barrier to use) way to interact with real-world objects. We envision a world where we can access information about any real-world object and interact with these objects simply by scanning and clicking. The use cases are endless: scanning a bus stop to get the bus schedule and pay for a ticket, scanning medications to get drug contents and directions, scanning school equipment to get instructions and request maintenance, scanning a restaurant check to pay and give feedback. Furthermore — this paves the way for the upcoming AR revolution! ## What it does We're building the first online, crowdsourced registry for physical objects. Users, organizations, and manufacturers can snap an image of an object in the world, assign it some metadata, then print out a QR code representing that digital record of the object. Other users simply scan a QR code and access the page associated with the object. Imagine seeing a washing machine that's broken — what if you could scan its QR code, read about how to fix it, and contact a service person right in that UI? ## How We built it We built this with ReactJS, Node, Express, and MongoDB. ## Challenges I ran into Trying to figure out what features to double down on, and also how to work on the front-end and back-end concurrently (without getting out of sync with our API design). ## Accomplishments that I'm proud of Getting this to actually work! ## What I learned That ideas are cheap — execution is everything. Also, that your idea evolves as you build it. ## What's next for IoTize Nail the QR code interaction user flow. A more robust UI. Get some sleep. Getting a more robust and idiot-proof MVP. Add a security layer. Invest more in data analytics. We're hoping to also come up with an idea of how to represent "Events" or "Interactions". What if you could scan a QR code and engage in an interaction that taught you more about financial literacy? Or credit cards? Or what if you could scan a QR code for an event or promotion?
## Inspiration Our inspiration for TapIt came from the potential of bus tickets, where a simple single-use ticket that would otherwise be thrown away (how wasteful!) can be configured to store information and interact with cell phones through Near Field Communication (NFC). We were intrigued by how this technology, often associated with expensive systems, could be repurposed and made accessible for everyday users. Additionally, while exploring networking features at Hack the North, we recognized the need for a more seamless and efficient way to exchange information. Traditional methods, like manually typing contact details or scanning QR codes, often feel cumbersome and time-consuming. We saw an opportunity to not only drastically simplify this process but also to reduce waste by giving disposable objects, like bus tickets, a new life as personalized digital cards. Our goal was to democratize this powerful technology, allowing anyone to easily share their information without the need for costly hardware or complex setups. ## What it does TapIt turns any NFC-enabled object, such as bus tickets, NFC product tags, or even your student card, into a personalized digital card. Users can create profiles that include their contact details, social media links, and more, which can then be written onto NFC tags. When someone taps an NFC-enabled object on their phone, the profile information is instantly shared. This makes networking, sharing information, and staying connected easier and more intuitive than ever. Just tap it! ## How we built it We used React Native and Expo to create a mobile app for Android and iOS. We used npm packages for NFC writing, and we used Flask to write a backend to create short profile URLs to write onto the NFC cards. ## Challenges we ran into We had issues with device compatibility and NFC p2p software restrictions. We also had trouble setting up an auth0 authentication system. It was very difficult to compile applications at first with React Native. ## Accomplishments that we're proud of We learned a lot about mobile application development and React Native in a very short period of time. Working with NFC technology was also really cool! ## What we learned NFC/HCE technologies and mobile development were our main focuses - and we're proud to have created a product while learning about these things on the fly. ## What's next for TapIt Features to support a wider range of NFC-enabled tags! We want to create an ecosystem that supports quick contact exchange with repurposed but readily accessible materials.
## Inspiration I dreamed about the day we would use vaccine passports to travel long before the mRNA vaccines even reached clinical trials. I was just another individual, fortunate enough to experience stability during an unstable time, having a home to feel safe in during this scary time. It was only when I started to think deeper about the effects of travel, or rather the lack thereof, that I remembered the children I encountered in Thailand and Myanmar who relied on tourists to earn $1 USD a day from selling handmade bracelets. 1 in 10 jobs are supported by the tourism industry, providing livelihoods for many millions of people in both developing and developed economies. COVID has cost global tourism $1.2 trillion USD, and this number will continue to rise the longer people are apprehensive about travelling due to safety concerns. Although this project is far from perfect, it attempts to tackle vaccine passports in a universal manner in hopes of buying us time to mitigate the tragic repercussions caused by the pandemic. ## What it does
* You can log in with your email and generate a personalised interface with yours and your family's (or whoever you're travelling with's) vaccine data
* Universally generated QR code after the input of information
* To-do list prior to travel to increase comfort and organisation
* Travel itinerary and calendar synced onto the app
* Country-specific COVID-related information (quarantine measures, mask mandates, etc.) all consolidated in one destination
* Tourism section with activities to do in a city
## How we built it The project was built using Google QR-code APIs and Glideapps. ## Challenges we ran into I first proposed this idea to my first team, and it was very well received. I was excited for the project; however, little did I know that many individuals would leave for a multitude of reasons. This was not the experience I envisioned when I signed up for my first hackathon, as I had minimal knowledge of coding and volunteered to work mostly on the pitching aspect of the project. However, flying solo was incredibly rewarding, and visualising the final project containing all the features I wanted gave me lots of satisfaction. The list of challenges is long, ranging from time-crunching to figuring out how QR code APIs work, but in the end, I learned an incredible amount with the help of Google. ## Accomplishments that we're proud of I am proud of the app I produced using Glideapps. Although I was unable to include more intricate features in the app as I had hoped, I believe that the execution was solid and I'm proud of the purpose my application held and conveyed. ## What we learned I learned that a trio of resilience, working hard and working smart will get you to places you never thought you could reach. Challenging yourself and continuing to put one foot in front of the other during the most adverse times will definitely show you what you're made of and what you're capable of achieving. This is definitely the first of many hackathons I hope to attend, and I'm thankful for all the technical as well as soft skills I have acquired from this experience. ## What's next for FlightBAE Utilising Geotab or other geographical software to create a logistical approach to solving the distribution of oxygen in India, as well as other pressing and unaddressed bottlenecks that exist within healthcare. I would also love to pursue a tech-related solution regarding vaccine inequity, as it is a current reality for too many.
# About Us Discord Team Channel: #Team-25 secretage001#6705, Null#8324, BluCloos#8986 <https://friendzr.tech/> ## Inspiration Over the last year the world has been faced with an ever-growing pandemic. As a result, students have faced increased difficulty in finding new friends and networking for potential job offers. Based on Tinder's UI and LinkedIn's connect feature, we wanted to develop a web application that would help students find new people to connect and network with in an accessible, familiar, and easy-to-use environment. Our hope is that people will be able to use Friendzr to network successfully through our web application. ## What it does Friendzr allows users to log in with their email or Google account and connect with other users. Users can record a video introduction of themselves for other users to see. When looking for connections, users can choose to connect or skip on someone's profile. Selecting to connect allows the user to message the other party and network. ## How we built it The front-end was built with HTML, CSS, and JS using React. On our back-end, we used Firebase for authentication, CockroachDB for storing user information, and Google Cloud to host our service. ## Challenges we ran into Throughout the development process, our team ran into many challenges. Determining how to upload videos recorded in the app directly to the cloud was a long and strenuous process, as there are few resources about this online. Early on, we discovered that the scope of our project may have been too large, and towards the end, we ended up being in a time crunch. Real-time messaging also proved incredibly difficult to implement. ## Accomplishments that we're proud of As a team, we are proud of our easy-to-use UI. We are also proud of getting the video to record users and then directly upload to the cloud. Additionally, figuring out how to authenticate users and develop a viable platform was very rewarding. ## What we learned We learned that when collaborating on a project, it is important to communicate and manage time. Version control is important, and code needs to be organized and planned in a well-thought-out manner. Video and messaging are difficult to implement, but rewarding once completed. In addition to this, one member learned how to use HTML, CSS, JS, and React over the weekend. The other two members were able to further develop their database management skills as well as both front-end and back-end development. ## What's next for Friendzr Moving forward, the messaging system can be further developed. Currently, the UI of the messaging service is very simple and can be improved. We plan to add more sign-in options to allow users more ways of logging in. We also want to implement AssemblyAI's API for speech-to-text on the profile videos so the platform can reach people who are less able. Friendzr functions on both mobile and web, but our team hopes to further optimize each platform.
## Inspiration
I've always been inspired by the notion that even as just **one person** you can make a difference. I really took this to heart at DeltaHacks in my attempt to individually create a product that could help individuals struggling with their mental health by providing **actionable and well-studied techniques** in a digestible little Android app. My educational background as a neuroscientist and my research in addiction medicine have shown me the incredible need for more accessible tools for addressing mental health, as well as the power of simple but elegant solutions to make mental health more approachable. I chose to employ a technique used in Cognitive Behavioral Therapy (CBT), one of the most well-studied (if not the most well-studied) mental health interventions in psychological and medical research. This technique is called automatic negative thought (ANT) records. Central to CBT is the principle that psychological problems are based, in part, on faulty or unhelpful thinking and behavior patterns. People suffering from psychological problems can learn better ways of coping with them, thereby relieving their symptoms and becoming more effective in their lives. CBT treatment often involves efforts to change thinking patterns and challenge distorted thinking, thereby enhancing problem-solving and allowing individuals to feel empowered to improve their mental health. CBT automatic negative thought (ANT) records and CBT thought-challenging records are widely used by mental health workers to provide a structured way for patients to keep track of their automatic negative thinking and challenge these thoughts, so they can approach their lives with greater objectivity and fairness to their well-being. See more about the widely studied Cognitive Behavioral Therapy at this American Psychological Association link: [link](https://www.apa.org/ptsd-guideline/patients-and-families/cognitive-behavioral)

Given the app's focus on finding objectivity in a sea of negative thinking, I really wanted the UI to be simple and direct. This led me to take heavy inspiration from a familiar and nostalgic brand recognized for its bold simplicity, objectivity, and elegance: "noname". [link](https://www.noname.ca/) This is how I arrived at **noANTs**, i.e., no (more) automatic negative thoughts.

## What it does
**noANTs** is a *simple and elegant* solution for tracking and challenging automatic negative thoughts (ANTs). It combines worksheets from research and clinical practice into a more modern Android application to make automatic negative thought tracking more accessible. See the McGill worksheet, one of many resources that informed some of the questions in the app: [link](https://www.mcgill.ca/counselling/files/counselling/thought_record_sheet_0.pdf)

## How I built it
I really wanted to build something that many people would be able to access, and an Android application just made the most sense for something where you may need to track your thoughts on the bus, at school, at work, or at home. I challenged myself to utilize the newest technologies Android has to offer, building the app entirely in Jetpack Compose. I had some familiarity with the older Fragment-based navigation, but I really wanted to learn how to use Compose Navigation, and I can excitedly say I implemented it successfully. I also used Room, a data persistence library that provides an abstraction layer over the SQLite database I needed to store the thought records the user generates.
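The app itself persists thought records with Room on top of SQLite in Kotlin. As a purely conceptual stand-in (not the actual Room entity), here is a Python/SQLite sketch of what a thought-record table might hold; the column names are assumptions loosely based on the linked McGill worksheet.

```python
# Conceptual sketch of thought-record persistence; not the app's Kotlin/Room code.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("noants.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS thought_record (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        created_at TEXT NOT NULL,
        situation TEXT,
        automatic_thought TEXT,
        emotion TEXT,
        intensity INTEGER,          -- e.g. a 0-100 slider value
        balanced_thought TEXT       -- the "challenged" re-framing
    )
""")

def save_record(situation, thought, emotion, intensity, balanced):
    # Insert one completed worksheet entry.
    conn.execute(
        "INSERT INTO thought_record "
        "(created_at, situation, automatic_thought, emotion, intensity, balanced_thought) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (datetime.now().isoformat(), situation, thought, emotion, intensity, balanced),
    )
    conn.commit()
```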
## Challenges I ran into
This is my first ever hackathon, and I wanted to challenge myself to build a project alone to truly test my limits in a time crunch. I surely tested them! Designing this app with a strict adherence to noname's branding meant that I needed to get creative, making many custom components from scratch to fit the UI style I was going for. This made even ostensibly simple tasks, like creating a slider, incredibly difficult, but rewarding in the end. I also had far loftier goals for how much I wanted to accomplish, with aspirations of creating a detailed progress screen, an export functionality to share with a therapist or mental-health support worker, editing and deleting entries, and more. I am nevertheless incredibly proud to showcase a functional app that I truly believe could make a significant difference in people's lives, and I learned to prioritize creating an MVP, which I would love to continue building upon in the future.

## Accomplishments that I'm proud of
I am so proud of the hours of work I put into something I can truly say I am passionate about. There are few things I think should be valued more than an individual's mental health, and knowing that my contribution could make a difference to someone struggling with unhelpful or negative thinking patterns, which I myself often struggle with, makes the sleep deprivation and hours of banging my head against the keyboard eternally worthwhile.

## What I learned
Being under a significant time crunch for DeltaHacks challenged me to be as frugal as possible with my time and design strategies. I think what I found most valuable about the time crunch, my inexperience in software development, and working solo was that it forced me to come up with the simplest solution possible to a real problem. I think this mentality should be adopted more often, especially in tech. There is no doubt a place for, and an incredible allure to, deeply complex solutions with tons of engineers and technologies, but I think being forced to innovate under constraints like mine reminded me of the work even one person can do to drive positive change.

## What's next for noANTs
I have countless ideas on how to improve the app to be more accessible and helpful to everyone. This would start with my lofty goals described in the challenges section, but I would also love to extend this app to iOS users as well. I'm itching to learn cross-platform tools like KMM and React Native, and I think this would be a welcome challenge to do so.
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts, but finding the motivation to do the same alone can be difficult.

## Solution
To solve this issue, we have created an easy-to-join, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm together.

## About
Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page to have access to all the tools on one screen and to make transitioning between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real time.

## Features
Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers.

## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.

## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.

## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit, to add more relevant tools and widgets, and to expand into other fields of work to grow our user demographic. We would also include interface customization options to allow users to personalize their rooms.

Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
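Study Buddy's real-time sync runs on a Node.js/Express server with Socket.IO. As an illustrative sketch only (in Python, using the compatible `python-socketio` package rather than the team's actual stack), this is roughly how room-scoped broadcasting of whiteboard updates can work; the event and room names are assumptions.

```python
# Sketch of room-based real-time sync with python-socketio.
# Requires: pip install python-socketio eventlet
import socketio
import eventlet

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join_room(sid, data):
    # Each study room is identified by a simple shareable room id/URL slug.
    sio.enter_room(sid, data["room"])

@sio.event
def whiteboard_update(sid, data):
    # Re-broadcast only to other clients in the same room, skipping the sender,
    # so clients are not flooded with echoes of their own strokes.
    sio.emit("whiteboard_update", data, room=data["room"], skip_sid=sid)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 5000)), app)
```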
partial
## What it does
Alzheimer's disease and dementia affect many of our loved ones every year; in fact, **76,000 diagnoses** of dementia are made every year in Canada. One of the largest issues caused by Alzheimer's is the loss of ability to make informed, cognitive decisions about their finances. This makes such patients especially vulnerable to things such as scams and high-pressure sales tactics. Here's an unfortunate real-life example of this: <https://www.cbc.ca/news/business/senior-alzheimers-upsold-bell-products-source-1.6014904>

We were inspired by this heartbreaking story to build HeimWallet. HeimWallet is a digital banking solution that allows for **supervision** over a savings account owned by an individual incapable of managing their finances, and is specifically **tailored** to patients with Alzheimer's disease or dementia. It can be thought of as a mobile debit card linked to a savings account that only allows spending if certain conditions set by a designated *guardian* are met. It allows a family member or other trusted guardian to set a **daily allowance** for a patient and **keep track of their purchases**. It also allows guardians to keep tabs on the **location of patients via GPS** every time a purchase is attempted, and to authorize or refuse attempted purchases that go beyond the daily allowance. This ensures that patients and their guardians can have confidence that the patient's assets are in safe hands. Further, the daily allowance feature empowers patients to be independent and **shop with confidence**, knowing that their disease will not be able to dominate their finances. The name "HeimWallet" comes from "-Heim" in "Alzheimer's". It also alludes to Heimdall, the mythical Norse guardian of the bridge leading to Asgard.

## How we built it
The frontend was built using React Native and Expo, while the backend was made using Python (Flask) and MongoDB. SMS functionality was added using Twilio, and location services were added using the Google Maps API. The backend was also deployed to Heroku. We chose **React Native** because it allowed us to build our app for both iOS and Android using one codebase. **Expo** enabled rapid testing and prototyping of our app. **Flask**'s lightweightness was key in getting the backend built under tight time constraints, and **MongoDB** was a natural choice for our database since we were building our app using JavaScript. **Twilio** enabled us to create a solution that worked even for guardians who did not have the app installed. Its text message-based interactions enabled us to build a product accessible to those without smartphones or mobile data. We deployed our backend to **Heroku** so that Twilio could access our backend's webhook for incoming text messages. Finally, the **Google Maps API**'s reverse geocoding feature enables guardians to see the addresses of where patients are located when a transaction is attempted.

## Challenges we ran into
* Fighting with Heroku for almost *six hours* to get the backend deployed. The core mistake ended up being that we were trying to deploy our Python-based backend as a Node.js app.. oops.
* Learning to use React Native -- all of us were new to it, and although we all had experience building web apps, we didn't quite have that same foundation with mobile apps.
* Incorporating Figma designs on React Native in a way such that it is cross-platform between Android, iOS, and Web. A lot of styling works differently between these platforms, so it was tricky to make our app look consistent everywhere.
* Managing a mix of team members who were hacking in-person and online. Constant communication to keep everyone in the loop was key!

## Accomplishments that we're proud of
We're super proud that we managed to come together and make our vision a reality! And we're especially proud of how much we learned and took away from this hackathon. From learning React Native, to Twilio, to getting better with Figma and sharpening our video-editing skills for our submission, it was thrilling to have gained exposure to so much in so little time. We're also proud of the genuine hard work every member of our team put in to make this project happen -- we worked deep into the A.M. hours, and constantly sought to improve the usability of our product with continuous suggestions and improvements.

## What's next for HeimWallet
Here are some things we think we can add to HeimWallet in order to bring it to the next level:
* Proper integration of SOS (e.g. call 911) and Send Location functionality in the patient interface
* Ability to have multiple guardians for one patient, so that there are many eyes safeguarding the same assets
* Better security and authentication features for the app; of course, security is vital in a fintech product
* Feature to allow patients to send a voice memo to a guardian in order to clarify a spending request
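To make the guardian-notification flow above concrete, here is a minimal Flask sketch of how an attempted purchase could trigger a reverse-geocoded Twilio text when it exceeds the daily allowance. This is only an illustration of the described design, not HeimWallet's production code: the route name, phone numbers, credentials, and the hard-coded allowance are placeholders.

```python
# Sketch of the allowance check + guardian SMS path.
# Requires: pip install flask twilio googlemaps
from flask import Flask, request, jsonify
from twilio.rest import Client
import googlemaps

app = Flask(__name__)
sms = Client("TWILIO_SID", "TWILIO_TOKEN")          # placeholder credentials
gmaps = googlemaps.Client(key="GOOGLE_MAPS_KEY")    # placeholder key

DAILY_ALLOWANCE = 40.00  # set by the guardian; hard-coded here for brevity

@app.route("/purchase", methods=["POST"])
def attempt_purchase():
    data = request.get_json()
    amount, lat, lng = data["amount"], data["lat"], data["lng"]

    # Reverse-geocode the patient's GPS fix so the guardian sees a readable address.
    results = gmaps.reverse_geocode((lat, lng))
    address = results[0]["formatted_address"] if results else "unknown location"

    if amount > DAILY_ALLOWANCE:
        sms.messages.create(
            body=f"Purchase of ${amount:.2f} attempted near {address}. Reply YES to approve.",
            from_="+15550000000",   # placeholder Twilio number
            to="+15551111111",      # placeholder guardian number
        )
        return jsonify({"status": "pending_guardian"})
    return jsonify({"status": "approved"})
```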
## Inspiration ⚡️
Given the ongoing effects of COVID-19, we know lots of people don't want to spend more time than necessary in a hospital. We wanted to be able to skip a large portion of the waiting process and fill out the forms ahead of time from the comfort of our homes, so we came up with the solution of HopiBot.

## What it does 📜
HopiBot is an accessible, easy-to-use chatbot designed to make the process of admitting patients more efficient, transforming basic in-person processes into digital ones and saving not only your time but also the time of doctors and nurses. A patient uses the bot to fill out their personal information, and once they submit, the bot uses the provided mobile phone number to send a text message with the current wait time until check-in at the hospital nearest to them. As pandemic measures begin to ease, HopiBot will allow hospitals to socially distance non-emergency patients, significantly reducing exposure and time spent around others, as people can enter the hospital at or close to the time of their check-in. In addition, this would reduce the potential risks of exposure (to COVID-19 and other transmissible airborne illnesses) for other hospital patients who may be immunocompromised or more vulnerable.

## How we built it 🛠
We built our project using HTML, CSS, JS, Flask, Bootstrap, the Twilio API, the Google Maps API (Geocoding and Google Places), and SQLAlchemy. HTML, CSS/Bootstrap, and JS were used to create the main interface. Flask was used to create the form functions and SQL database. The Twilio API was used to send messages to the patient after they submit the form. The Google Maps API was used to include a Google Maps link to the nearest hospital within the text message.

## Challenges we ran into ⛈
* Trying to understand and use Flask for the first time
* How to submit a form and validate at each step without refreshing the page
* Using new APIs
* Understanding how to use an SQL database from Flask
* Breaking down a complex project and building it piece by piece

## Accomplishments that we're proud of 🏅
* Getting the form to work after much deliberation over its execution
* Being able to store and retrieve data from an SQL database for the first time
* Expanding our hackathon portfolio with a completely different project theme
* Finishing the project within a tight time frame
* Using Flask, the Twilio SMS API, and the Google Maps API for the first time

## What we learned 🧠
Through this project, we were able to learn how to break a larger-scale project down into manageable tasks that could be done in a shorter time frame. We also learned how to use Flask, the Twilio API, and the Google Maps API, all of which were completely new to us. Finally, we learned a lot about SQL databases made in Flask, how we could store and retrieve data, and even how to present it so that it could be easily read and understood.

## What's next for HopiBot ⏰
* Since we have created the user side, we would like to create a hospital side to the program that can take information from the database and present all the patients to staff visually.
* We would like to have a stronger validation system for the form to prevent crashes.
* We would like to implement an algorithm that can more accurately predict a person's waiting time by accounting for the time it would take to get to the hospital and the time a patient would spend waiting before their turn.
* We would like to create an AI that can analyze a patient database and predict wait times based on patient volume and appointment type.
* Along with a hospital side, we would like to send update messages that warn patients when they are approaching the time of their check-in.
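Tying the pieces of HopiBot's build together, here is a hedged sketch of the "text the patient their nearest hospital" step using the `googlemaps` and `twilio` Python clients. The keys, phone numbers, and the flat 30-minute wait estimate are placeholders, and the exact message wording is an assumption rather than the team's implementation.

```python
# Sketch: geocode the patient's address, find the nearest hospital, text a maps link.
# Requires: pip install googlemaps twilio
import googlemaps
from twilio.rest import Client

gmaps = googlemaps.Client(key="GOOGLE_MAPS_KEY")   # placeholder key
sms = Client("TWILIO_SID", "TWILIO_TOKEN")         # placeholder credentials

def notify_patient(patient_phone: str, home_address: str) -> None:
    # Geocode the address the patient typed into the chatbot form.
    loc = gmaps.geocode(home_address)[0]["geometry"]["location"]

    # Find the closest hospital via the Places API (ranked by distance).
    nearby = gmaps.places_nearby(location=(loc["lat"], loc["lng"]),
                                 rank_by="distance", type="hospital")
    hospital = nearby["results"][0]
    maps_link = ("https://www.google.com/maps/search/?api=1&query="
                 f"{hospital['geometry']['location']['lat']},"
                 f"{hospital['geometry']['location']['lng']}")

    sms.messages.create(
        body=f"Estimated wait: 30 min at {hospital['name']}. Directions: {maps_link}",
        from_="+15550000000",   # placeholder Twilio number
        to=patient_phone)
```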
**OUR PROJECT IS NOT COMPLETE**

## Inspiration
Due to the pandemic, lots of people order in food instead of going to restaurants to stay safe. There are many popular food delivery applications available, and many people scroll through multiple apps to search for the cheapest price for the same items. It is always nice to save money, and our app can definitely help people with this. Our proof-of-concept application uses dummy data from our own database, because there is a lack of publicly available APIs for gathering the required food delivery company information.

## What it does
The user enters a delivery address, which produces a list of restaurants. Then, the user selects a restaurant, selects the menu items and the quantity of each item, and is shown a price breakdown and total across the available food delivery services.

## How we built it
We decided to create a Flutter application to challenge ourselves. None of us had worked with Flutter and the Dart language before, and this was a fun and difficult process. Three of us developed the frontend. The backend was created using Express.js, with the database on Google Cloud SQL and the server hosted on Heroku. One of us developed the backend (which was amazing!).

## Challenges we ran into
As we were all unfamiliar with Dart and Flutter, development took us more time than it would have with a familiar tool. The time pressure of the hackathon was also a challenge. Although we didn't finish on time, this was still a wonderful experience developing something cool with a new technology.

## Accomplishments that we're proud of
We are proud to have learned a bit about Dart and Flutter, and to have been able to develop for most of the hackathon. We accomplished a lot, but if we had more time, we could have finished the project.

## What we learned
Dart and Flutter. Working with API calls in Dart.

## What's next for our app
There are a few features on the roadmap. If we were to continue working on this app, we would:
* add promotions. This is a key feature because the prices between the services vary greatly when promotions are taken into account
* add login functionality
* web scrape (or find publicly available APIs for) popular food delivery services and obtain real data to utilize
* add images for restaurants and menu items
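The core comparison step described above (a per-service price breakdown and total) is easy to illustrate. The sketch below is in Python for brevity, although the team's backend is Express.js; the fee rates, the 13% tax figure, and the service names are made-up illustrations, not data from their Cloud SQL database.

```python
# Conceptual sketch of the per-service price breakdown and comparison.
def price_breakdown(subtotal: float, service: dict) -> dict:
    delivery = service["delivery_fee"]
    fees = subtotal * service["service_fee_rate"]
    tax = (subtotal + fees) * 0.13  # illustrative sales-tax rate
    return {"subtotal": subtotal, "delivery": delivery,
            "fees": round(fees, 2), "tax": round(tax, 2),
            "total": round(subtotal + delivery + fees + tax, 2)}

services = {  # hypothetical fee structures for two delivery services
    "ServiceA": {"delivery_fee": 2.99, "service_fee_rate": 0.12},
    "ServiceB": {"delivery_fee": 0.99, "service_fee_rate": 0.18},
}

cart_subtotal = 24.50
totals = {name: price_breakdown(cart_subtotal, s) for name, s in services.items()}
cheapest = min(totals, key=lambda n: totals[n]["total"])
print(cheapest, totals[cheapest])
```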
winning
## Inspiration
Cryptocurrencies are a very recent trend, and very few people envisioned Bitcoin's meteoric price increase. However, there are hundreds of other cryptocurrencies, some of which have shown the explosive potential of Bitcoin. Our goal was to use machine learning to detect what separates a cryptocurrency that booms from one that busts.

## What it does
Instead of just using an algorithm that relies on the price of a cryptocurrency itself to predict future prices, our team decided it would be very valuable to track online sentiment about cryptocurrencies. We believe this will be especially effective since cryptocurrencies' platforms, as well as the chatter about buying and selling them, are entirely online. Additionally, since cryptocurrencies' prices tend to be very momentum-based, we believe this increases the accuracy of our predictions. Using data from Reddit and news sources about each cryptocurrency, along with its price history, we predict which new, unfamiliar cryptocurrencies have the most promise.

## How we built it
We scraped the entirety of Reddit for mentions of the cryptocurrencies with the 200 highest market caps. We then built a classifier using a decision tree algorithm and the Google Cloud Natural Language API, training it on Bitcoin data.

## Challenges we ran into
We got massive amounts of data from Reddit and had to build our own server and then parse through the data (500 GB) to find posts pertaining to cryptocurrencies. Even though we built a model, we did not have the computing power to run it. Additionally, none of our team members had extensive experience with machine learning.

## Accomplishments that we're proud of
We are proud of the fact that none of us knew ML, but we were able to teach ourselves the basics and apply them to find a model for the data.

## What's next for Crypto-current-cy
For this project, we focused on cryptocurrencies since we believed that their prices would be correlated relatively strongly with online news compared to other stocks or technologies. However, moving forward we would like to take this experience and think about how to apply news and social media sentiment to predicting the success of new technologies.
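A rough sketch of the pipeline described above: score scraped posts with the Google Cloud Natural Language API, then feed aggregate features into a decision tree classifier. The feature choice, tiny training set, and labels below are purely illustrative assumptions, not the team's actual data or model.

```python
# Sketch: sentiment scoring + decision tree for boom/bust classification.
# Requires: pip install google-cloud-language scikit-learn
from google.cloud import language_v1
from sklearn.tree import DecisionTreeClassifier

nl = language_v1.LanguageServiceClient()

def sentiment_score(text: str) -> float:
    """Return the document-level sentiment score (-1.0 .. 1.0) for one post."""
    doc = language_v1.Document(content=text,
                               type_=language_v1.Document.Type.PLAIN_TEXT)
    return nl.analyze_sentiment(request={"document": doc}).document_sentiment.score

# Hypothetical per-coin features: [mean_sentiment, mention_count, 30-day return]
X = [[0.41, 1200, 0.35], [-0.22, 90, -0.10], [0.10, 300, 0.05]]
y = [1, 0, 0]  # 1 = "boomed", 0 = "busted" (labels derived from later prices)

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[0.30, 800, 0.12]]))  # classify a new, unfamiliar coin
```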
## Inspiration
Stock price movements are said to be unpredictable. We wanted to test whether machine learning and big data can do what only a few well-trained financial analysts can: predict stock price movements based on recent macroeconomic news.

## What it does
The scraper module takes text content from news pages. The data is sent to a sentiment analysis API to compute a sentiment score that tells whether the article reports positive or negative news. Then we aggregate these scores into one global score to determine buy or sell.

## How we built it
We used the google and x-ray Node modules to build the scraper module, a sentiment analysis API to compute the sentiment score, and Node.js for the server setup.

## Challenges we ran into
Implementing and incorporating machine learning into finance: joint knowledge and a new paradigm were required from all team members.

## Accomplishments that we are proud of
We were able to work together even though we all come from different schools and it was the first time we had met each other. Also, despite having two first-time hackers, we could all work together to build a fully functional application.

## What we learned
The two first-time hackers had an opportunity to learn JavaScript from scratch and build a fully functioning Node.js module. The two more experienced hackers also learned a lot about advanced concepts such as machine learning and integrating the whole system with Node.js.

## What's next for Stock It
Compute the theoretical fair price of the stock using the Capital Asset Pricing Model (and other valuation models) and compare it to the current value of the stock to determine the theoretical mispricing; potentially the machine learning algorithm will be able to tell what the reason behind that mispricing is. Furthermore, this application could be used for predicting option prices, currencies, and even political events such as elections.
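Stock It's aggregation step is simple to show in isolation. The team's version runs in Node.js; this Python sketch only illustrates the idea of collapsing per-article sentiment scores into a single trading signal, and the 0.1 threshold is an arbitrary assumption.

```python
# Minimal sketch of aggregating article sentiment into a buy/sell/hold signal.
def trade_signal(article_scores: list[float]) -> str:
    avg = sum(article_scores) / len(article_scores)
    if avg > 0.1:
        return "BUY"
    if avg < -0.1:
        return "SELL"
    return "HOLD"

print(trade_signal([0.6, -0.1, 0.3, 0.2]))  # -> BUY
```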
## Inspiration
We met at the event, on the spot. We are a diverse team from different parts of the world and of different ages, with up to 7 years of difference! We believe there is an important evolution towards using data technology to make investment decisions, and towards data applications that move us to designing new financial products and services we have not even considered yet.

## What it does
Trendy analyzes over 300,000+ projects from Indiegogo, a crowd-funding website, from the last year. Trendy monitors and evaluates on average 20+ data points per company; we focus on 6 main variables and illustrate the use case with statistical models. To keep the interface as user-friendly as possible while still gathering as much information as we can, we decided to build a chatbot through which the investor interacts with our platform. The user can see graphs and trend analyses, and adjust their preferences accordingly.

## Challenges we ran into
We had a lot of trouble setting up the cloud to host everything. We also struggled to build the bot, due to the many restrictions Facebook has set. These challenges kept us from innovating more on our product.

## Accomplishments that we're proud of
We are very proud to have produced a very acute data analysis and a great interface. Our results are logical, and we seem to have one of the best interfaces.

## What we learned
We learned a lot about cloud hosting, data management, and chatbot setup. More concretely, we have built ourselves a great platform to facilitate our financial wealth plan!

## What's next for Trendy
We foresee adding a couple of predictive analytics concepts to our trend-hacking platform, like random forests, the Kelly criterion, and a couple of others. Moreover, we envisage strengthening our database and improving our analysis' accuracy by implementing some machine learning models.
losing
## Inspiration
We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool.

## What it does
AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures. The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch.

## How we built it
In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between the body parts, mapping them to a dance move. Although this was successful with a few gestures, more complex gestures like the "shoot" were not well suited to this method. We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model achieves 98% accuracy on the test data set. We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done with the use of dlib and OpenCV to detect facial features and map a static image over them.

## Challenges we ran into
We came in with a completely different idea for the Hack for Resistance route, and we spent the first day basically working on that until we realized it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with Leap Motion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time. It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in.

## Accomplishments that we're proud of
It was one of our first experiences training an ML model for image recognition, and it's a lot more accurate than we had even expected.

## What we learned
All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new!

## What's next for AirTunes
The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
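For readers curious what a small TensorFlow CNN for 10 gesture classes might look like, here is a hedged sketch. The input size, layer sizes, and training call are assumptions for illustration, not the team's exact architecture.

```python
# Sketch of a small CNN for classifying 10 dance-move gestures from frames.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # assumed: grayscale, downscaled frames
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(10, activation="softmax"),   # one output per gesture class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_frames, train_labels, epochs=10, validation_split=0.1)
```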
## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.

## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that is automatically played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which produces different images for the same audio input.

## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began this process by applying Firmata to our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals. Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours with certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value recorded by the Arduino. We also decided to build a Figma-based prototype to present how our app would prompt the user for inputs and display the final output.

## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in Python, getting the sound sensor to work, and learning how to work with the Pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to work without achieving our initial goal of producing unique outputs for each audio input.

## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were able to tackle all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning, as none of us had experience working with the sound sensor. Another accomplishment of ours was our Figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with Figma.

## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical, and communicational aspects of this challenge in a timely manner.

## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front-end app design. The next step would be to combine them and streamline their connections.
Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece, and make the drawings more artistically appealing, which would require a lot of trial and error to see which systems work best together to produce an artistic output. The use of the Pygame module limited the types of shapes we could use in our drawings, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces.
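To illustrate the Firmata-plus-Pygame loop Voltify describes, here is a hedged sketch using `pyfirmata` and `pygame`. The serial port name, the colour threshold, and the circle-based drawing rule are placeholders, not the team's exact values or theme logic.

```python
# Sketch of reading analog pin 2 via Firmata and drawing shapes from the voltage.
# Requires: pip install pyfirmata pygame (and StandardFirmata flashed on the board)
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyACM0")           # adjust to your serial port
it = util.Iterator(board)
it.start()
mic = board.get_pin("a:2:i")              # analog pin 2, input mode

pygame.init()
screen = pygame.display.set_mode((800, 600))
x = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    v = mic.read()                        # 0.0-1.0, or None before the first sample
    if v is not None:
        colour = (255, 80, 80) if v > 0.5 else (80, 80, 255)   # illustrative mapping
        radius = int(5 + v * 40)
        pygame.draw.circle(screen, colour, (x % 800, 300), radius)
        pygame.display.flip()
        x += 10
pygame.quit()
board.exit()
```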
## Inspiration
All of our teammates love music, so we decided to make something fun that involves music!

## What it does
AirPiano recognizes your hand gestures, with different gestures representing different sounds, turns them into MIDI input, and then outputs the signals to music apps to form a melody.

## How we built it
We used the Myo Gesture Control Armband to recognize hand/arm gestures and map them to corresponding sounds.

## Challenges we ran into
We had many great ideas, so we struggled for a long time with the brainstorming process.

## Accomplishments that we're proud of
None of us had any experience with the Myo Gesture Control Armband before, so we are really proud that we made AirPiano work!

## What we learned
Dealing with the Myo SDK and APIs.

## What's next for AirPiano
Incorporating more gestures and revolutionizing the way young DJs (like one of our team members) make music.
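The gesture-to-MIDI mapping itself can be sketched with the `mido` library, assuming a MIDI output port is available and that some callback delivers gesture names (the Myo SDK integration is not shown here, and the gesture names and note numbers below are illustrative).

```python
# Sketch of mapping recognized gestures to MIDI note-on messages with mido.
# Requires: pip install mido python-rtmidi
import mido

GESTURE_TO_NOTE = {          # illustrative mapping, not AirPiano's actual one
    "fist": 60,              # middle C
    "wave_in": 62,
    "wave_out": 64,
    "fingers_spread": 67,
}

out = mido.open_output()     # opens the default MIDI output port

def on_gesture(name: str) -> None:
    # Called whenever the armband reports a pose; sends the mapped note.
    note = GESTURE_TO_NOTE.get(name)
    if note is not None:
        out.send(mido.Message("note_on", note=note, velocity=100))

# Example: wire this up as the Myo pose callback, e.g. on_gesture("fist")
```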
winning
## Inspiration
At the start of their third-to-last semester, every college junior starts talking about their "senior trip" ideas: some want to travel to Thailand, others to Italy, and others to Zion National Park. We realized that a social media platform could effectively aggregate everyone's travel wishlists and thus make vacation planning easier for users. Enter WeTrip!

## What it does
WeTrip is a social media site exclusively designed for travel. Users can share their travel wishlists with others, bookmark destinations, create vacation groups, post pictures, and leave reviews. This creates an interactive environment where 1) users can easily form travel groups based on their friends' wishlists, and 2) users benefit from crowdsourcing.

## How we built it
The front end of the website was built using HTML, CSS, and Bootstrap. The back end of the website used Python and Django to create databases capturing the many relationships between attributes. Key attributes in our databases included the users, their reviews, the destinations, the groups, the bookmarks, and the photos. The backend was built on Amazon's cloud services: we used S3 for hosting media files and Elastic Beanstalk to deploy our Django application. The model for our website was housed in RDS in a Postgres database.

## Challenges we ran into
This was the first time one of us had ever done web dev, so naturally it took a bit of time getting used to HTML and CSS. Designing a social media website with so many attributes all interconnected with each other was also not an easy task to plan and accomplish within the allotted 36 hours.

## Accomplishments that we're proud of
From a technical perspective, we brushed up a lot on Django, HTML, and CSS. From a personal satisfaction standpoint, this was Mickey's first ever hackathon and Kevin's most dedicated hackathon: we were able to finish our hack!

## What we learned
Creating a multifaceted website from scratch is no easy task, and we have certainly gained an appreciation for those who work in web development. Tasks need to be efficiently split between front end and back end, with a smooth integration afterward.

## What's next for WeTrip
Integration of the Yelp API for more granular tourist attractions, using NLP to add news stories for different destinations, and machine learning recommendations for users.
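The interconnected attributes WeTrip describes (users, reviews, destinations, groups, bookmarks, photos) map naturally onto Django models. The sketch below is only a plausible shape for such a schema, with field names that are assumptions rather than the team's actual code.

```python
# Minimal Django models.py sketch of the relationships described above.
from django.contrib.auth.models import User
from django.db import models

class Destination(models.Model):
    name = models.CharField(max_length=120)
    country = models.CharField(max_length=80)

class TravelGroup(models.Model):
    name = models.CharField(max_length=120)
    members = models.ManyToManyField(User, related_name="travel_groups")

class Bookmark(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    destination = models.ForeignKey(Destination, on_delete=models.CASCADE)

class Review(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    destination = models.ForeignKey(Destination, on_delete=models.CASCADE)
    rating = models.PositiveSmallIntegerField()
    text = models.TextField(blank=True)

class Photo(models.Model):
    destination = models.ForeignKey(Destination, on_delete=models.CASCADE)
    image = models.ImageField(upload_to="photos/")  # served from S3 in production
```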
## Inspiration
Recently, security has come to the forefront of the media with the events surrounding Equifax. We took that fear and distrust and decided to make something to secure and protect data such that only those who should have access to it actually do.

## What it does
Our product encrypts QR codes such that, if scanned by someone who is not authorized to see them, they present an incomprehensible amalgamation of symbols. However, if scanned by someone with proper authority, they reveal the encrypted message inside.

## How we built it
This was built using Cloud Functions and Firebase as our back end and a React Native front end. The encryption algorithm was RSA, and the QR scanning was open sourced.

## Challenges we ran into
One major challenge we ran into was writing the back-end cloud functions. Despite how easy and intuitive Google has tried to make it, it still took a lot of man-hours of effort to get it operating the way we wanted it to. Additionally, making React Native compile and run on our computers was a huge challenge, as every step of the way it seemed to want to fight us.

## Accomplishments that we're proud of
We're really proud of introducing encryption and security into this previously untapped market. Nobody, to our knowledge, has tried to encrypt QR codes before, and being able to segment the data in this way is sure to change the way we look at QR.

## What we learned
We learned a lot about Firebase. Before this hackathon, only one of us had any experience with Firebase, and even that was minimal; however, by the end of this hackathon, all the members had some experience with Firebase and appreciate it a lot more for the technology that it is. A similar story can be told about React Native, as that was another piece of technology that only a couple of us really knew how to use. Getting both of these technologies off the ground and making them work together, while not a gargantuan task, was certainly worthy of a project in and of itself, let alone rolling cryptography into the mix.

## What's next for SeQR Scanner and Generator
Next, if this gets some traction, is to try and sell this product on the marketplace. Particularly for corporations with, say, QR codes used for labelling boxes in a warehouse, such a technology would be really useful to prevent people from gaining unnecessary and possibly sensitive information.
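SeQR's core idea, RSA-encrypt the payload and put only the ciphertext in the QR code, can be sketched as follows. The team's implementation lives in the JS/Firebase stack; this Python version with the `cryptography` and `qrcode` packages is only an illustration, and the warehouse-label payload is hypothetical.

```python
# Sketch of encrypt-then-encode: RSA-OAEP ciphertext inside a QR code.
# Requires: pip install cryptography qrcode[pil]
import base64
import qrcode
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

secret = b"box 42: payroll documents"          # hypothetical label contents
ciphertext = public_key.encrypt(secret, oaep)

# Unauthorized scanners see only this base64 blob; authorized apps can decrypt it.
qrcode.make(base64.b64encode(ciphertext).decode()).save("secure_qr.png")

# Authorized path: scan the code, base64-decode, then decrypt with the private key.
assert private_key.decrypt(ciphertext, oaep) == secret
```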
## Inspiration
We were inspired by the resilience of freelancers, particularly creative designers, during the pandemic. As students, it's easy to feel overwhelmed and not value our own work. We wanted to empower emerging designers and remind them of what we can do with a little bit of courage. And support.

## What it does
Bossify is a mobile app that cleverly helps students adjust their design fees. It focuses on equitable upfront pay, which in turn increases the amount of money saved. This can be put towards an emergency fund. On the other side, clients receive high-quality, reliable work. The platform has a transparent rating system, making it easy to find quality freelancers. It's a win-win situation.

## How we built it
We got together as a team the first night to hammer out ideas. This was our second idea, and everyone on the team loved it. We all pitched in ideas for product strategy. Afterwards, we divided the work into two parts: 1) userflows, UI design, and prototype; 2) writing and testing the algorithm. For the design, Figma was the main software used. The designers (Lori and Janice) used a mix of iOS components and icons for speed. Stock images were taken from Unsplash and Pexels. After quickly drafting the storyboards, we created a rapid prototype. Finally, the pitch deck was made to synthesize our ideas. For the code, Android Studio was the main software used. The developers (Eunice and Zoe) together implemented the back end and front end of the MVP (minimum viable product), where Zoe developed the intelligent price prediction model in TensorFlow and deployed the trained model on the mobile application.

## Challenges we ran into
One challenge was not having the appropriate data immediately available, which was needed to create the algorithm. On the first night, it was a challenge to quickly research and determine the types of information and factors that contribute to design fees. We had to cap off our research time to figure out the design and algorithm. There were also technical limitations, where our team had to determine the best way to integrate the prototype with the front end and back end. As there was limited time, and after consulting with the hackathon mentor, the developers decided to aim for the MVP instead of spending too much time and energy on turning the prototype into a real front end. It was also difficult to integrate the machine learning algorithm into our mini app's backend, mainly because we don't have any experience implementing machine learning algorithms in Java, especially as part of the back end of a mobile app.

## Accomplishments that we're proud of
We're proud of how cohesive the project reads. As the first COVID-era hackathon for all the team members, we were still able to communicate well and put our synergies together.

## What we learned
Although it is a simple platform with minimal pages, we learned that it was still possible to create an impactful app. We also learned the importance of making a plan and timeline before we start, which helped us keep track of our progress and allowed us to use our time more strategically.

## What's next for Bossify
Making partnerships to incentivize clients to use Bossify! #fairpayforfreelancers
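Bossify's price prediction model is trained in TensorFlow and then deployed inside the Android app. One common way to do that is to train a small regression model and convert it to TensorFlow Lite for on-device inference; the sketch below assumes that approach, and the features, data, and layer sizes are illustrative placeholders rather than the team's actual model.

```python
# Sketch of a fee-prediction regressor and its conversion for on-device use.
import numpy as np
import tensorflow as tf

# Hypothetical features: [years_experience, project_hours, revisions, usage_scope]
X = np.array([[1, 10, 2, 1], [3, 25, 3, 2], [5, 40, 1, 3]], dtype=np.float32)
y = np.array([150.0, 600.0, 1500.0], dtype=np.float32)   # suggested fee (CAD)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

# Convert to TensorFlow Lite so the Android app can run the model on-device.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("bossify_fee_model.tflite", "wb") as f:
    f.write(tflite_model)
```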
partial