Dataset columns:
id — int64, values approximately 1.57k to 21k
project_link — string, 30 to 96 characters
project_description — string, 1 to 547k characters
10,007
https://devpost.com/software/ai-assistant-with-better-communication-capabilities
Login through a Twitter/Facebook account. WIT gives you different topics to talk about based on your interests. WIT gives users the real feel of a human on the other side through facial expressions and quick replies in voice mode. It even recommends friends: it syncs your friends and makes recommendations after processing them with an algorithm.
Inspiration Have you heard that one in twenty Indians suffers from depression? It is even worse to know that more than 300 million people worldwide suffer from depression, and in some cases the outcome is tragic. Also, some introverted people like to relieve their boredom with things that tell them more about what they love. So what is the solution? A person needs someone to talk to in order to shake off boredom. Is a chatbot a good option? Yes, of course! But most chatbots rely on a time-consuming typing process that makes the interaction less engaging and even more boring, and they do not recommend or inspire you to meet new people who share your interests or do similar things. So we are building a more interactive AI-powered bot to overcome this problem.
Why a speech bot: automated support for similar queries and conversation; saves human resources for qualitative tasks; better user interaction; easy to use; cost-effective and time-efficient.
What it does WIT Chat AI is a web-based AI bot. It gives you a set of topics to talk about. After logging in through your Facebook or Twitter account, it shows you some interesting topics to discuss with WIT Chat AI. You can interact through speech, which makes it feel like genuine human interaction on the other side. Also, once you click the Connect Friend card, it syncs the friends you follow (from Facebook or Twitter, depending on how you signed in) and shows them in the recommended friends list. The user ends up feeling more connected to his friends and has a bot-generated list from which he can initiate a chat. WIT Chat also lets you ask stock-related questions in real time. So it is a complete package as a web-based solution: superior to some other voice assistants, free, and easy to use.
How we built it We use Wit.ai to find the intent of the question, which gives us a confidence score for the question asked. Once the user clicks the mic button, the app records the voice for a few seconds and sends it to the server, where we use the wit.ai API to fetch the intent of the query. Once we have the real meaning of the question, we use a MySQL database to produce the proper answer. The answer comes back as text, and we use the gTTS library to convert that processed text into speech (a short Python sketch of this wit.ai + gTTS flow follows at the end of this entry). WIT Chat provides various cards as options: Music, Sports, Stocks, Movies, Connect People. We also wrote an algorithm to recommend friends with similar interests with whom the user can start a discussion; it continuously looks at users' bios and filters friends by interest and field.
Challenges we ran into Overall, building the project during lockdown was a bit challenging. The challenges we faced were as follows. The main challenge was how users could communicate with WIT through speech; dividing the work and coming up with good logic for it was something new for us. Because of lockdown it was a completely remote project, and we had to take several Zoom calls to decide the approach.
Limited time was also a pressure: we had to finish our office work, then make a good start after office hours and work over the weekends. The feature we wanted, enabling lip sync while WIT processes the answer, was a bit difficult for us, so we ended up with a good alternative that keeps the conversation human-friendly.
Accomplishments that we're proud of A great team that worked within the stipulated time frame was a good achievement for us. We also think we did a good job deciding the architecture and flow of the code. This gave us a good command of Django for features like social authentication, plus JS. That was a very good part of the project, and it was exciting to see it up and running in the end.
What we learned This was the first project we worked on together, and we were completely new to secret management: how to manage tokens so that they are not exposed in the code. We found configparser to be a better way to hide secrets, and we used our Django and competitive-programming skills to write good backend logic, which ultimately got the project finished.
What's next for WIT CHAT AI We want to do a little work on the UI and add a feature where WIT Chat holds the intent and sends a greeting message to the highest-priority user. How will we decide a user's priority? We have to write an algorithm for that: it will decide the most appropriate user for the bot to approach, and WIT will even initiate a chat on behalf of the user. We also want to deploy it so people can take advantage of WIT.
Built With: Wit-ai, Django, Django-social-auth, Twitter friends API, Facebook Login API, Django REST framework, JS, HTML5, Bootstrap 3, gTTS, PyAudio. Github repo Built With cnn django gtts html5 pyaudio python wit Try it out github.com
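The write-up above describes the answer pipeline only in prose. Below is a minimal Python sketch of that flow, assuming the public wit.ai /message endpoint and the gTTS library; the intent names, the ANSWERS table (standing in for the project's MySQL lookup), and WIT_TOKEN are placeholders, not the project's actual code.

# Minimal sketch: utterance -> wit.ai intent -> canned answer -> spoken reply.
import requests
from gtts import gTTS

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"          # placeholder credential
ANSWERS = {                                   # stands in for the MySQL lookup
    "get_stock_price": "Let me fetch that stock price for you.",
    "greeting": "Hi! Pick a topic card to get started.",
}

def reply_to(utterance):
    data = requests.get(
        "https://api.wit.ai/message",
        params={"q": utterance},
        headers={"Authorization": "Bearer " + WIT_TOKEN},
    ).json()
    intents = data.get("intents", [])
    intent = intents[0]["name"] if intents else "greeting"
    text = ANSWERS.get(intent, "Sorry, I did not catch that.")
    gTTS(text=text, lang="en").save("reply.mp3")   # spoken reply, as in the project
    return text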
10,007
https://devpost.com/software/voiceover
ICON. Inspiration: I came across wit.ai through Facebook. What it does: it integrates NLP data. How I built it: wit.ai. Challenges I ran into: training. Accomplishments that I'm proud of: the end product. What I learned: NLP. What's next for VoiceOver: CERTIFICATION. Built With wit.ai
10,007
https://devpost.com/software/ebuddy-7jxaz2
Sentiment analysis from the audio. Talk to the buddy. A nasty joke. Inspiration Mental health has been of prime importance over the past few years, a concern accelerated by the current pandemic. What it does It listens to anyone, for any length of time, about anything. It analyzes the sentiment in the user's speech and responds with either a random joke or an inspirational quote. How I built it The application is built using ReactJS without any backend technologies. The ReactMic library is used to record the voice. Axios makes the REST API call to the wit.ai app and fetches the sentiment, and based on the response either a joke or a quote is displayed (a rough sketch of that branch follows this entry). It is hosted as a static web app in AWS S3. Challenges I ran into JavaScript event life cycles, wit.ai app training. Accomplishments that I'm proud of A clean, simple interface. What I learned Wit.ai's capabilities. What's next for eBuddy Store the user's recordings in a database and display them as text notes in the app. Built With amazon-web-services javascript react wit.ai Try it out witaiapp.s3-website.ap-south-1.amazonaws.com
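The app itself is ReactJS with Axios in the browser; purely to illustrate the sentiment branch described above, here is the same decision sketched in Python against the wit.ai /message endpoint. The built-in sentiment trait is assumed to arrive under the wit$sentiment key, and the token, jokes, and quotes are placeholders.

# Sketch: transcript -> wit.ai sentiment trait -> joke (negative) or quote (otherwise).
import random
import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"
JOKES = ["Why did the developer go broke? He used up all his cache."]
QUOTES = ["Every day may not be good, but there is something good in every day."]

def respond(transcript):
    data = requests.get(
        "https://api.wit.ai/message",
        params={"q": transcript},
        headers={"Authorization": "Bearer " + WIT_TOKEN},
    ).json()
    traits = data.get("traits", {}).get("wit$sentiment", [])
    sentiment = traits[0]["value"] if traits else "neutral"
    # A negative mood gets a joke; anything else gets an inspirational quote.
    return random.choice(JOKES) if sentiment == "negative" else random.choice(QUOTES)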
10,007
https://devpost.com/software/voice-order-mvso3t
Voice Order app logged into Burger Boy merchant account. Admin interface to manage menu. Voice Order in kiosk mode ready to take an order. Voice Pay app - bring near kiosk to begin order. Creating a new order by speaking to kiosk. Spoken item identified by wit.ai and highlighted in app. Order complete and ready to pay. Voice Pay app - bring near kiosk to confirm and begin payment. Make a payment in Voice Pay app. Order confirmation displayed in Voice Pay app. Inspiration As a care giver, I have been hyper-sensitive about my contacts in this COVID-19 world. In looking for ways to use technology to improve on processes, I considered my local small businesses. Order taking requires risky face-to-face dwell time that can be limited with the right solution. What it does Voice Order implements an automated contactless order taking and payment service using a voice-enabled kiosk and Square services for a catalog of items, order logging, payments, and loyalty rewards. Voice Order uses wit.ai to provide a natural language experience for placing an order. The kiosk runs on an iPad and communicates with a Brother printer for the kitchen or fulfillment area to prepare the order. There is a companion Voice Pay app that interfaces over bluetooth with the kiosk Voice Order app. The Voice Pay app identifies you to the service, enables contactless payment via a credit card with Square, or Apple Pay, and provides order confirmation. The merchant logs into the Voice Order app and can set up their service by identifying their printer and reviewing their menu. They can also review recent orders. When ready, they enter kiosk mode in the app to take orders. A user launches the Voice Pay app and brings their phone near the kiosk to sync. They are then presented with a menu. As they speak to the kiosk, it processes their speech on wit.ai to assemble their order and shows the total cost. The Voice Order app automatically applies any earned rewards, such as a free item. The user can add a tip, and finish their order. The kiosk then prompts to make a payment on their Voice Pay app. When that's done, the kiosk shows confirmation and prints the order on the prep area Brother printer. The Voice Pay app shows their complete order with an order number so they can claim the order when it's brought forward. The system is then ready for the next customer. The whole system uses the Square service to host the order item catalog, to record orders, take payments, track customer loyalty, etc. The wit.ai service is used to process what the user says in a natural manner to compose their order, and handle tasks such as adding a tip, and confirming the order. How I built it I created 2 iOS apps in Swift - Voice Order and Voice Pay. I used the Brother SDK to interface to the printer. The Voice Order app uses iOS voice recognition technology to create the text sent to the wit.ai service. The Voice Order app interfaces with wit.ai using their API, receiving back intents, entities and related data. It interprets this information to formulate the order and take associated actions. Both apps use bluetooth technology to synchronize information between them. They both use the Square SDK to interface with the payments service. The Voice Order app further interfaces with the Square service over their REST API to manage the menu (via a catalog), order taking, loyalty rewards, and payment fulfillment. Challenges I ran into There were numerous challenges in building this service from scratch. 
First, getting the voice interface working so that I could associate intents and entities from the wit.ai service with a catalog of menu items from the Square backend took a lot of experimentation and testing (a rough sketch of that matching follows this entry). Second was interfacing with the Brother printers, which took a while to learn, but I had some help from Brother support. Third was implementing the Bluetooth sync interface between the two apps, which made any issues very hard to debug. Finally, interfacing with the Square service took a while to figure out, debug, and test. The payment SDK was somewhat easier to integrate, and there was some good example code, but the REST API required a lot of experimentation with the API explorer, trial and error in the code implementation, and proper sequencing of the calls to effect the transactions. Accomplishments that I'm proud of I'm very happy with the progress I was able to make in just a few weeks. While it still needs work before deployment, it's working well as a demonstration. There were a lot of pieces that all had to work together to even get to that stage. What I learned I learned a great deal about each of the various pieces: how to implement a voice-controlled interface, how to talk to Brother printers, how to sync information over Bluetooth, and how to work with the Square service through their SDK and REST API. What's next for Voice Order I want to identify a local merchant where I can field-test the service. There is still work to do on account creation, merchant definition, menu editing, and some other components before Voice Order is ready for deployment. Built With bluetooth brother ios square swift wit.ai
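The apps themselves are written in Swift; purely as an illustration of the intent/entity handling described above, here is a Python sketch of matching resolved wit.ai entities against a menu to total an order. MENU, the entity key menu_item:menu_item, and the sample utterance are hypothetical, and the real service resolves items against the Square catalog instead.

# Sketch: wit.ai entities -> known menu items -> running order total.
MENU = {"cheeseburger": 5.99, "fries": 2.49, "shake": 3.25}   # hypothetical catalog

def build_order(wit_response):
    items, total = [], 0.0
    for ent in wit_response.get("entities", {}).get("menu_item:menu_item", []):
        name = ent["value"].lower()
        if name in MENU:                  # charge/highlight only recognized items
            items.append(name)
            total += MENU[name]
    return items, round(total, 2)

# Example: entities extracted from "I'd like a cheeseburger and fries"
print(build_order({"entities": {"menu_item:menu_item": [
    {"value": "cheeseburger"}, {"value": "fries"}]}}))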
10,007
https://devpost.com/software/t-o876jv
Inspiration We wanted to build an integrated experience for Facebook users to have real-time financial data at their fingertips. What it does Given a stock symbol, the app delivers real-time stock prices along with the price change as well as the percentage change. How I built it I built the app through many rounds of trial and error and testing to obtain a well-performing model. I am using a live Glitch app to handle the Wit requests, and the stocks API I am using is from IEX Cloud (link); a minimal sketch of the quote lookup follows this entry. Challenges I ran into Firstly, it was a challenge setting up a Python app to communicate with my Wit and Facebook apps, as there are not many example projects out there yet. Moreover, even though I was using some sample code from the Wit GitHub repository, I discovered a few minor errors that were preventing me from receiving responses in my Messenger app. Lastly, it was a challenge sourcing a well-designed API, as I wanted something that could add value for users by providing insights beyond the typical price data. However, as far as free APIs go, I decided to settle on IEX Cloud's in the end. Accomplishments that I'm proud of I am proud to have overcome the challenges I faced, especially when I figured out a bug in the sample code after hours of debugging. Through the debugging process, I am thankful to have interacted with the Wit.ai community, which has been really helpful and encouraging and allowed me to push through to complete a fully functional app in a short period of time. What I learned I should not be afraid of not knowing things and should view such situations as opportunities to learn and grow. By being curious and reaching out to the community, I have gained many insights into what Wit can do and also discovered more about different tools for app deployment such as Glitch, ngrok, and Heroku. What's next for RTSP I would definitely look towards providing more insights with data such as recent news, trends, and the like. However, this would require a rich and reliable data source. On the user-experience side, I think more work could be done, especially with regard to the possibility of images and visualisations. Built With iexcloud wit.ai Try it out www.facebook.com
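A minimal Python sketch of the stock lookup described above, assuming IEX Cloud's public quote endpoint and its latestPrice, change, and changePercent fields. The token is a placeholder, and the reply formatting is illustrative rather than the project's actual template.

# Sketch: ticker symbol -> IEX Cloud quote -> one-line reply for Messenger.
import requests

IEX_TOKEN = "YOUR_IEX_CLOUD_TOKEN"   # placeholder credential

def quote_reply(symbol):
    data = requests.get(
        f"https://cloud.iexapis.com/stable/stock/{symbol}/quote",
        params={"token": IEX_TOKEN},
    ).json()
    # changePercent is a fraction (e.g. 0.0045), so format it as a percentage.
    return (f"{symbol.upper()}: ${data['latestPrice']} "
            f"({data['change']:+.2f}, {data['changePercent']:+.2%})")

# e.g. quote_reply("fb") might return "FB: $265.28 (+1.20, +0.45%)"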
10,007
https://devpost.com/software/edizon
Home Screen of Edizon. Help Menu where you look for instructions. Cute little golden retriever.
Inspiration Edizon is a voice-controlled photo-editing web application that lets users edit photos with just their voice. No more sliders and curves. Photo editing has always been a difficult task. Photo-editing applications like Photoshop have interfaces that look like a spaceship control panel: way too many buttons and sliders! (Photo illustrating a user trying to edit a photo. Just kidding, this is an astronaut named Chris Ferguson.) Here's where wit.ai kicks in: speech recognition and natural language processing let us issue commands with just our speech, removing all those controls. No more confusion, just pure talking.
What it does Edizon helps users edit their awesome photography with their voice alone. The user commands the application by saying what they want the photo to be, for example: "Blur the background by five", "Grayscale the photo", and so on.
How I built it The application is built using React for the UI, Recoil for state management, fabric.js for canvas manipulation, and Wit.ai for natural language processing. I designed the layout and the architecture from scratch and implemented it step by step. The architecture isn't perfect, however, and I had to revise it as I continued to add functionality. The result is quite fascinating, though: it is working like a charm now! (A small sketch of the command-parsing step follows this entry.)
Challenges I ran into Before joining this hackathon, I had no idea about pixels and knew little about canvas. After hours and hours of drowning in articles and documents, I finally got to understand how these things work and applied them correctly. Moreover, it was my first time using Recoil for state management. Recoil is a new library and its documentation is not quite complete, but the concept behind how it manages state appeals to me; I recommend you check it out if you haven't. The concepts of selectors and atoms are not hard to understand; the documentation is just missing some cases, such as using a selector as an atom's default value, which took me some trial and error to figure out.
Accomplishments that I'm proud of I used to think pixel manipulation was hard work and that I was too stupid to understand it. But now I have made an application that manipulates pixels. Nothing is impossible.
What I learned Recoil, fabric.js, canvas.
What's next for Edizon Edizon is still a very young application and is still lacking a lot of functionality. I hope to improve its architecture, making it more extensible so that users and the community can write their own plugins/filters. Open-sourcing the application could definitely help achieve that goal and allow everyone interested in the application to contribute. Built With docker heroku node.js react recoil wit.ai Try it out edizon.herokuapp.com github.com
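The app's editing pipeline runs on fabric.js in the browser; purely to illustrate the voice-command step described above, here is a Python sketch that sends an utterance to the wit.ai /message endpoint and pulls out an operation and an amount. The intent name "blur" and the built-in number entity key wit$number:number are assumptions about the app's training, and WIT_TOKEN is a placeholder.

# Sketch: spoken command text -> (operation, amount) pair for the canvas layer.
import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"

def parse_edit_command(utterance):
    data = requests.get(
        "https://api.wit.ai/message",
        params={"q": utterance},
        headers={"Authorization": "Bearer " + WIT_TOKEN},
    ).json()
    intents = data.get("intents", [])
    numbers = data.get("entities", {}).get("wit$number:number", [])
    operation = intents[0]["name"] if intents else None    # e.g. "blur"
    amount = numbers[0]["value"] if numbers else None       # e.g. 5
    return operation, amount

# "Blur the background by five" would ideally come back as ("blur", 5),
# which the front end then applies to the canvas.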
10,007
https://devpost.com/software/speakbot
Inspiration The inspiration for my app is virtual assistant technologies such as Siri and Alexa. I also wanted to create a virtual assistant that answers questions related to the current COVID situation. I really want people to be able to get raw data and information as quickly and simply as asking a question. What it does The app is a chatbot where you can type a message or speak directly to it. The A.I. answers questions related to COVID, such as the number of deaths per country, what to do if you are sick, and time graphs of cases. How I built it I built it using JavaScript for the frontend and Flask for the backend. The app was containerized using Docker and deployed to Heroku as a demo app. Wit.ai was used to create intents from a user's sentence. An API was used from this link, WebSockets were used, and other libraries handled audio and charts (a condensed sketch of the backend flow follows this entry). Challenges I ran into The main challenge I faced was deploying the app. Although my app worked in development, I wasn't able to make it work in production at first; fortunately, I was able to use Docker to resolve the issues. Another problem I faced was building the chatbot with wit.ai: I had to work out how to create it, classify certain sentences, and extract certain keywords. In the end, wit.ai was easy to use and I was able to polish my product effectively. Accomplishments that I'm proud of I am proud that I was able to build this project, since it is my biggest project so far. I was determined enough to spend around 10 hours a day on it for about a month. I am proud that Facebook gave me the opportunity to join this hackathon even though I am a high school student. What I learned I learned a lot of things, mostly patience and constant learning. I also learned about not giving up despite the difficulties, and about determination. Win or lose, I am still happy to be part of this. What's next for Covid Chat Bot Since my app is already built for production, I can add more features to it. I am not really looking to scale it, but I might use some of the app's features in future projects. Built With backplane-javascript docker flask heroku javascript jquery websockets wit.ai Try it out daduyafbhack.herokuapp.com github.com
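A condensed Python/Flask sketch of the backend flow described above: receive a message, ask wit.ai for the intent, then answer from a COVID data source. COVID_API_URL, the intent name cases_by_country, and the route are placeholders rather than the project's actual names.

# Sketch: POST /chat -> wit.ai intent -> reply built from a COVID data source.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"
COVID_API_URL = "https://example.com/covid/summary"   # stand-in data source

@app.route("/chat", methods=["POST"])
def chat():
    text = (request.get_json(silent=True) or {}).get("message", "")
    wit = requests.get("https://api.wit.ai/message",
                       params={"q": text},
                       headers={"Authorization": "Bearer " + WIT_TOKEN}).json()
    intents = wit.get("intents", [])
    if intents and intents[0]["name"] == "cases_by_country":
        stats = requests.get(COVID_API_URL).json()
        return jsonify(reply=f"Latest figures: {stats}")
    return jsonify(reply="Ask me about COVID cases, deaths, or what to do if sick.")

if __name__ == "__main__":
    app.run()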
10,007
https://devpost.com/software/new-world-sf4hgj
I have seen many websites using chatbots, and they work much as human beings do. People get stressed when they are asked questions, and it irritates them; this increases the tension or stress in their mind. So a chatbot can take over some of that and reduce stress, leading to a calmer life. A chatbot is artificial intelligence (AI) software that can simulate a conversation (or chat) with a user in natural language through messaging applications, websites, mobile apps, or the telephone. A chatbot is often described as one of the most advanced and promising expressions of interaction between humans and machines. However, from a technological point of view, a chatbot only represents the natural evolution of a question-answering system leveraging Natural Language Processing (NLP). Formulating responses to questions in natural language is one of the most typical examples of NLP applied in enterprises' end-use applications. Basically, I downloaded some Q/A pairs and used them in my project (a minimal sketch of this kind of matching follows this entry). As I am a beginner in ML/AI, I had many challenges, but somehow I managed to gather the pieces of information together and build it. I'm proud that I learned many things about chatbots. They are very useful and will replace humans in the upcoming future. Built With python
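The description only says that downloaded Q/A pairs were used; one minimal way to build such a bot is to fuzzy-match the user's question against the stored questions, sketched below in Python with difflib. The QA pairs are made-up examples, not the project's data.

# Sketch: match the user's question to the closest stored question and answer it.
import difflib

QA = {
    "what is a chatbot": "A chatbot is software that simulates a conversation in natural language.",
    "how do chatbots work": "Most chatbots map a user question to the closest known question and reply with its stored answer.",
}

def answer(question):
    key = question.lower().strip("?! ")
    match = difflib.get_close_matches(key, QA.keys(), n=1, cutoff=0.5)
    return QA[match[0]] if match else "Sorry, I don't know that one yet."

print(answer("What is a chatbot?"))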
10,007
https://devpost.com/software/stocker-a4kcwn
Main Screen. Query input. The Result. Inspiration As an investor trading in forex, equities, and cryptocurrencies, I wanted to continuously know the exchange rates of cryptocurrencies against a fiat currency. I wanted to build something for voice assistants such as Alexa or Google Assistant, but I felt a cross-platform mobile application would be more accessible to users. What it does Stocker provides the exchange rate of a cryptocurrency against a fiat currency, or vice versa. It takes input as natural-language text and provides the current exchange rate for the named currencies. How I built it Stocker is built using Flutter, Dart, Wit.ai, and CoinAPI. The interface is built with Flutter and the functionality with Dart; packages such as http, convert, and io have been used, and these can be found at this link. Wit.ai is used to process the user input and return the two quotes (crypto and fiat); an API call is made using the http.get method. The two quotes are passed to CoinAPI and the current rate is returned (a rough sketch of this flow follows this entry). Challenges I ran into I tried using Swift for the project but found building the data model for the JSON difficult, as wit.ai returned the entities under keys of the form "quotes:crypto" and "quotes:currency". Following this issue, I switched to Flutter with Dart, then referenced the JSON and accessed it using arrays. Accomplishments that I'm proud of Training my own model with such ease, thanks to the convenience of wit.ai. What I learned I learned about using multiple APIs and about using wit.ai to develop chatbots easily. What's next for Stocker I intend to implement speech-to-text in the future. I also intend to develop Stocker natively for iOS using Swift. Built With coinapi dart flutter wit.ai Try it out github.com
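The app itself is Flutter/Dart; as a language-neutral illustration of the flow described above, here is a Python sketch that reads the two quote entities out of a wit.ai response and asks CoinAPI for the rate. The CoinAPI exchange-rate endpoint and its "rate" field are taken from its public REST API, and COINAPI_KEY is a placeholder.

# Sketch: wit.ai entities (quotes:crypto, quotes:currency) -> CoinAPI exchange rate.
import requests

COINAPI_KEY = "YOUR_COINAPI_KEY"

def exchange_rate(wit_response):
    ents = wit_response.get("entities", {})
    crypto = ents["quotes:crypto"][0]["value"].upper()     # e.g. "BTC"
    fiat = ents["quotes:currency"][0]["value"].upper()     # e.g. "USD"
    data = requests.get(
        f"https://rest.coinapi.io/v1/exchangerate/{crypto}/{fiat}",
        headers={"X-CoinAPI-Key": COINAPI_KEY},
    ).json()
    return f"1 {crypto} = {data['rate']:.2f} {fiat}"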
10,007
https://devpost.com/software/askoro
Inspiration I wanted to create my own version of Siri or Alexa. What it does It answers most, if not all, of your questions. Examples include, but are not limited to: How many miles are in a kilometer? What is the chemical formula of salt? How far is Mars from Earth? Who is the current prime minister of Canada? How I built it The frontend is based on Electron, a JavaScript framework for creating desktop applications using web technologies. For the backend I used Node.js, which handled things like saving audio files to the native file system. The logic of the program is roughly this: the user asks a question using their microphone -> the audio gets saved to the system -> wit.ai speech-to-text converts the audio question to text -> the text question gets processed through the Wolfram Alpha API -> the response is saved as an audio file with Google text-to-speech -> the audio file is played and the user gets their answer (a rough sketch of this pipeline follows this entry). Challenges I ran into Programming the APIs was fairly straightforward. What I did have trouble with, though, was the communication between the website and the renderer (Electron). I had to send the addresses of the audio files (after they were saved) from the backend to the frontend, and I ended up using Electron's ipcRenderer API for that. ipcRenderer is an EventEmitter which 'communicates asynchronously from a renderer process to the main process'. Accomplishments that I'm proud of I had wanted to learn Electron for a while and this was the perfect opportunity to do so. I am proud that I was able to create my first Electron app. What I learned For the most part, I got to learn Electron. However, that wasn't all; I learned to make requests to APIs, use Google TTS to convert the text-based answer to audio, use the 'fs' JavaScript module to read files on the file system, and more. What's next for Askoro When you start the app it says 'I am your personal voice assistant', but all it can do now is answer basic questions; it's not actually personalized. In the future, I could add a user base to the app so it could create events on users' calendars, run a timer, make calls, send text messages, control their smart devices, and so on. Built With electron ffmpeg google-web-speech-api p5.js wit.ai wolfram-technologies Try it out github.com
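The project runs on Electron/Node with Google text-to-speech; this Python sketch only illustrates the pipeline described above (wit.ai speech-to-text, Wolfram|Alpha Short Answers, then TTS via gTTS as a stand-in). WIT_TOKEN and WA_APPID are placeholders, and the wit.ai /speech endpoint is assumed here to return one JSON object with the final transcription (newer API versions may stream partial results instead).

# Sketch: WAV question -> transcription -> Wolfram|Alpha answer -> spoken reply.
import requests
from gtts import gTTS

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"
WA_APPID = "YOUR_WOLFRAM_ALPHA_APPID"

def ask(wav_path):
    with open(wav_path, "rb") as f:
        stt = requests.post("https://api.wit.ai/speech",
                            headers={"Authorization": "Bearer " + WIT_TOKEN,
                                     "Content-Type": "audio/wav"},
                            data=f).json()
    question = stt.get("text", "")
    answer = requests.get("https://api.wolframalpha.com/v1/result",
                          params={"appid": WA_APPID, "i": question}).text
    gTTS(text=answer, lang="en").save("answer.mp3")   # played back to the user
    return question, answer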
10,007
https://devpost.com/software/aisha-bot-to-help-everyday-trader-get-stock-prices
Stock prices of Facebook. Majority holders of Facebook stock. Expert recommendation on the Facebook stock. Greeting message from Aisha. Inspiration Every day, thousands of day traders spend tens of hours pulling up information on current stock prices, the history of a stock, and expert recommendations on it. I wanted to automate this process; it might only save them a few seconds on each stock they look up, but in the grand scheme it would help them save a lot of time and effort. This would be especially helpful to people who trade as a hobby or a side job, as they do not have the time resources of a full-time trader. What it does Aisha is a bot made using Wit.ai which helps you get up and running with basic information about a company's stock. You can look up stock prices, the history of the share, the majority holders of the share, advice on the share from industry experts, and so on. How I built it Aisha is built using the Python API for Wit.ai. Aisha also utilises the yfinance API to fetch data relating to the stock (a minimal sketch follows this entry). Challenges I ran into The first challenge was to form a problem statement. Once I had that sorted out, figuring out Python for Wit was another challenge, which turned out to be quite simple. Integrating with Messenger was another challenge, but I overcame it by doing some research and finding some really helpful tutorials. Accomplishments that I'm proud of This is my first time working on a chatbot and it was really cool. Wit.ai is super easy and convenient to use. I am proud of figuring out how to get a basic chatbot up and running. What I learned This project was a huge learning curve for me. It gave me a lot of insight into how NLP works and made me more curious to research NLP further, along with the important things that can be automated using chatbots. Sentiment extraction using Wit seemed really cool to me, even though I didn't use it in my project. What's next for Aisha - Bot to help everyday trader get stock prices Aisha, in this form, is very basic and I would like to extend its functionality further. I think what I could do is find a way to predict share prices using Facebook Prophet. More functionality could be added to find the best-performing stock and recommend it to the user, and other helpful things like that. Note: please use the NASDAQ names of stocks during testing, otherwise the program crashes. Built With chatbot heroku messenger ngrok python wit.ai yfinance Try it out www.facebook.com
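A minimal Python sketch of the flow described above, using the wit.ai Python client and yfinance. The intent name get_stock_price and the ticker entity key are assumptions about how the app was trained, and the token is a placeholder.

# Sketch: message -> wit.ai intent/entity -> yfinance price lookup -> reply text.
import yfinance as yf
from wit import Wit

client = Wit("YOUR_WIT_SERVER_TOKEN")

def handle(message):
    resp = client.message(message)
    intents = resp.get("intents", [])
    if intents and intents[0]["name"] == "get_stock_price":
        ents = resp.get("entities", {}).get("ticker:ticker", [])
        symbol = ents[0]["value"].upper() if ents else "FB"
        price = yf.Ticker(symbol).history(period="1d")["Close"].iloc[-1]
        return f"{symbol} last closed at {price:.2f} USD."
    return "Try asking me for a stock price, e.g. 'What is Facebook trading at?'"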
10,007
https://devpost.com/software/hemi
Inspiration I wanted to see how flexible the wit.ai API is, and for this simple project it did great. What it does You can read Bible verses, search, and get information from Facebook Messenger / a Facebook page. How I built it I built it on top of the sample Glitch demo, using an open-source Bible SQLite database (a rough sketch of the verse lookup follows this entry). Challenges I ran into wit.ai is smart but could be smarter; it does most of the stuff pretty well, but there may be some edge cases where it could be improved. Accomplishments that I'm proud of Creating a PoC in a short time. What I learned That JavaScript sucks, and that writing Messenger apps is not that hard. What's next for HeMi Who knows; let's see if people like it. Built With facebook-messenger glitch.com javascript Try it out www.facebook.com
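The exact schema of the open-source Bible SQLite database isn't given; as a rough Python sketch of the verse lookup the bot performs, assuming a hypothetical verses(book, chapter, verse, text) table:

# Sketch: look up one verse by book, chapter, and verse number.
import sqlite3

def get_verse(db_path, book, chapter, verse):
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT text FROM verses WHERE book = ? AND chapter = ? AND verse = ?",
            (book, chapter, verse),
        ).fetchone()
        return row[0] if row else "Verse not found."
    finally:
        conn.close()

# e.g. get_verse("bible.db", "John", 3, 16)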
10,007
https://devpost.com/software/textcollect
So we decided to create TextCollect (our project) to concurrently meet the requirements of both social distancing and healthcare. This is what makes USSD unique and more effective than mobile applications, to which the vulnerable don't have access. We aim for the global expansion of this model, which is much needed in countries with dense populations.
Inspiration Imagine Peter. He is a 67-year-old male living in Johannesburg, South Africa. He has been on the same treatment for HIV and hypertension for the past 2 years and his medication has not changed in this time. On the surface, he is healthy. He is due for his 6-monthly follow-up at the clinic soon and he is scared, as he will need to take 2 taxis to get there and will not be able to practice appropriate social distancing throughout, making him vulnerable to the virus due to his age, even though the visit is unlikely to change the management of his diseases. We realise the indispensable need to stay at home to curb the spread of COVID-19, but more importantly to look after our health. So we decided to create TextCollect to concurrently meet the requirements of both social distancing and healthcare.
What it does TextCollect triages patients with chronic illnesses, accurately and reliably, using a USSD model in which a patient answers a questionnaire similar to the one used by a doctor. Depending on his responses, which are relayed to our server and matched against our machine learning data, we notify him whether his medication is sufficient for the disease or whether a change in dosage/medicine is needed. This helps patients access health care while maintaining social distancing norms.
Technical functioning (a bare-bones sketch of the server side follows this entry):
1) A USSD session is initialised by the mobile user.
2) The gateway sends an HTTP GET message to the third-party server address.
3) An XML response string containing the menus is relayed back from the server.
4) The USSD menus are displayed on the mobile handset.
5) The responses given by the user are relayed back to our server and run against the machine learning data sets.
6) The result of the evaluation is provided to the customer via SMS.
We have done extensive case research on this model in South Africa, where the majority doesn't have access to smartphones and using USSD will be most effective. We have a team of data scientists, developers, and health care professionals working on this and are ready with the prototype, keen to implement it for the greater good.
Accomplishments that we're proud of We are proud to have managed all the technicalities and logistics with minimal resources and to have progressed by leaps within minimal time as well. We are registered as a Delaware corporation and are in the process of patenting the idea. We are extremely proud of our well-estimated statistical prediction that this USSD model would prevent 800,000 visits annually with a mere 10% implementation, implying a massive curb on the spread of COVID-19 while patients are peacefully able to access healthcare.
Finances We aim to break even in Year 1 and have plans for payback in Year 2, as explained in the financials document. Our conservative projection is based on a gradual adoption of our solution, starting at 2% of the target population. Our revenue system is based on a monthly subscription of $0.5, which can be financed by an NGO. Our system is profitable from the first year.
What's next for TextCollect Target the South African government's medical response budget, and look to local NGOs like the Praekelt Foundation and foreign agencies like PEPFAR and DFID.
Build partnerships with private pharmaceutical companies, as 26% of all South Africans take at least 1 medication regularly - i.e. 15.8 million for HIV alone. Among those most at risk for complications of COVID-19, those above 65 years amount to 59% of the total patients who take at least 1 medication. But 71% of all medications are prescribed in the public sector, and among those accessing health care in the public sector, 20% reported not being able to fill a script in the past year due to stockouts at their clinic. It is that gap we are hoping to fill. We aim for the global expansion of this model, which is much needed in countries with dense populations. We look forward to a happy, peaceful world enabled by contactless interaction and minimal travel, maintaining social distancing.
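The write-up describes the USSD flow as six steps; below is a bare-bones Python/Flask sketch of steps 2-4 (the gateway's HTTP GET answered with an XML menu string). Flask, the request parameters, and the XML layout are assumptions for illustration; real USSD gateways each define their own request and response formats.

# Sketch: the third-party server endpoint that answers the gateway's GET with menus.
from flask import Flask, Response, request

app = Flask(__name__)

MENU_XML = """<?xml version="1.0" encoding="UTF-8"?>
<response>
  <menu title="TextCollect triage">
    <item id="1">My symptoms have not changed</item>
    <item id="2">I have new symptoms</item>
    <item id="3">I missed doses this month</item>
  </menu>
</response>"""

@app.route("/ussd", methods=["GET"])
def ussd():
    session_id = request.args.get("sessionId", "")   # supplied by the gateway
    user_input = request.args.get("input", "")       # the subscriber's last reply
    # In the real service the reply would be chosen from the questionnaire flow
    # using session_id and user_input; here we always return the first menu.
    return Response(MENU_XML, mimetype="application/xml")

if __name__ == "__main__":
    app.run()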
10,007
https://devpost.com/software/todos-mas-cerca-9h0u43
The first collaborative and fair-trade platform connecting users and businesses within 1 km. Inspiration We want to support every kilometre of the planet to revive the economy. What it does Our platform gives visibility to all the businesses close to users, helping small businesses reactivate their sales and preventing them from going bankrupt while large chains keep those revenues, with a focus on the collaborative economy and fair trade. How we built it A group of Ecuadorian entrepreneurs developed a platform to connect the owners of commercial establishments with their clients: www.todosmascerca.com. Ricardo Mancero and Andrés Dueñas, the project leaders, have been working for several years on digital projects, and during the quarantine caused by Covid-19 they looked for a technological solution to help those most economically affected by the crisis. The premise is to prevent people from crowding into supermarkets and to help revive the economy of small and medium-sized companies, which have been forced to close their operations due to a lack of customers. Challenges we ran into Reviving the small-business economy. Avoiding crowds. Digitizing small businesses. Connecting users with shops within 1 km. Making it easy for businesses and users to connect through a web app. Giving visibility to small businesses that have scarce advertising resources. Accomplishments that we're proud of This startup has been very well received internationally, being chosen by UNESCO to promote it worldwide within its #YouthOfUnesco program. The Inter-American Development Bank has also placed the project on its IDB LAB "COVID-19 Map of Innovators of Latin America and the Caribbean". In Mexico, the venture was chosen as the winner of the "Hack The Crisis Mx" call from among more than 400 proposals. It was also chosen as the winner in the economic reactivation category of the "Hacking Covid" call organized by the government of the Canary Islands, Spain. The project is currently a finalist in the "Ecuador Post-Crisis Hackathon" call and a finalist in the "Ideas for Our Mexico" contest, reaching the final from among more than 1500 proposals. What we learned We learned to get to know people more closely, understanding their needs and anxieties, and we keep learning because we want to reactivate the planet's economy. What's next for todos mas cerca 1. The next step is to have a multichannel bot that connects users with businesses within 1 km. 2. Implement payment gateways so that users can buy through the platform. 3. Develop an ERP for shop inventory, so that when a user searches for, say, Nutella, they can see by geolocation which store has it. 4. Create our own delivery app. 5. Connect farmers with businesses to generate fair trade. 6. We have a freemium model, which means any business can have an e-commerce website if they wish; we want to democratize electronic commerce. Built With amazon-web-services angular.js api digitalocean google-maps javascript maps mongodb php whatsapp Try it out www.todosmascerca.com
10,007
https://devpost.com/software/iris-6n9oau
Sample_Matches. All_Mismatches.
Contents: Load train data; Size of first image in dataset; Load test data; Calculate the number of images in each category; Define network architecture; Specify training options; Train network using training data; Classify validation; Compute accuracy; Move diary file; Test one at a time; Find the square root of the false matches to determine the subplots; Recalibrate true and false; Start plotting mismatches; Start plotting sample of correct match.

% Biometric Systems
% CYS616
% CONVOLUTIONAL NEURAL NETWORK FOR IRIS RECOGNITION
clc;              % Clear the command window.
close all;        % Close all figures (except those of imtool.)
clear;            % Erase all existing variables. Or clearvars if you want.
workspace;        % Make sure the workspace panel is showing.
reset(gpuDevice); % Reset GPU memory
diary Progress.txt
diary on

%% Load train data
categories = { '001','002','003','004','005','006','007','008','009','010', ...
    '011','012','013','014','015','016','017','018','019','020', ...
    '021','022','023','024','025','026','027','028','029','030', ...
    '031','032','033','034','035','036','037','038','039','040', ...
    '041','042','043','044','045','046','047','048','049','050', ...
    '051','052','053','054','055','056','057','058','059','060', ...
    '061','062','063','064','065','066','067','068','069','070', ...
    '071','072','073','074','075','076','077','078','079','080', ...
    '081','082','083','084','085','086','087','088','089','090', ...
    '091','092','093','094','095','096','097','098','099','100', ...
    '101','102','103','104','105','106','107','108','109','110', ...
    '111','112','113','114','115','116','117','118','119','120', ...
    '121','122','123','124','125','126','127','128','129','130', ...
    '131','132','133','134','135','136','137','138','139','140', ...
    '141','142','143','144','145','146','147','148','149','150', ...
    '151','152','153','154','155','156','157','158','159','160', ...
    '161','162','163','164','165','166','167','168','169','170', ...
    '171','172','173','174','175','176','177','178','179','180', ...
    '181','182','183','184','185','186','187','188','189','190', ...
    '191','192','193','194','195','196','197','198','199','200', ...
    '201','202','203','204','205','206','207','208','209','210', ...
    '211','212','213','214','215','216','217','218','219','220', ...
    '221','222','223'};
imdsTrain = imageDatastore(fullfile(pwd,'Dataset/TrainData', categories), ...
    'IncludeSubfolders',true,'FileExtensions','.bmp','LabelSource','foldernames');
num_train = size(imdsTrain.Files,1);

%% Size of first image in dataset
img = readimage(imdsTrain,1);
[x, y, z] = size(img);

%% Load test data
imdsValidation = imageDatastore(fullfile(pwd,'Dataset/TestData', categories), ...
    'IncludeSubfolders',true,'FileExtensions','.bmp','LabelSource','foldernames');
num_test = size(imdsValidation.Files,1);

%% Calculate the number of images in each category
labelCount = countEachLabel(imdsTrain);

%% Define network architecture
layers = [
    imageInputLayer([x y z]);
    convolution2dLayer(3,8,'Padding','same')
    batchNormalizationLayer
    reluLayer();
    maxPooling2dLayer(5,'Stride',2)
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer();
    averagePooling2dLayer(5,'Stride',2)
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer();
    maxPooling2dLayer(5,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer();
    averagePooling2dLayer(5,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer();
    fullyConnectedLayer(size(categories,2),'BiasLearnRateFactor',2);
    softmaxLayer
    classificationLayer];

%% Specify training options
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.001, ...
    'ValidationData', imdsValidation, ...
    'ValidationFrequency', 100, ...
    'Shuffle', 'every-epoch', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', 8, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.5, ...
    'LearnRateDropPeriod', 50, ...
    'ExecutionEnvironment', 'gpu', ...
    'Verbose', true, 'VerboseFrequency', 10);

%% Train network using training data
[net_Wael, info] = trainNetwork(imdsTrain, layers, options);
save('TrainedNetwork.mat','net_Wael')
movefile('TrainedNetwork.mat','results')

Initializing input data normalization.
Training log (validation checkpoints only; base learning rate 0.0010 throughout):
Epoch | Iteration | Time Elapsed | Validation Accuracy | Validation Loss
1     | 1         | 00:00:03     | 0.30%               | 5.8493
1     | 100       | 00:00:11     | 20.48%              | 4.1891
2     | 200       | 00:00:18     | 48.13%              | 2.4462
2     | 300       | 00:00:26     | 70.70%              | 1.4797
3     | 400       | 00:00:33     | 86.10%              | 0.8252
3     | 500       | 00:00:41     | 91.33%              | 0.5355
4     | 600       | 00:00:49     | 94.47%              | 0.4107
4     | 700       | 00:00:57     | 95.81%              | 0.3357
5     | 800       | 00:01:04     | 95.22%              | 0.3024
5     | 900       | 00:01:12     | 96.71%              | 0.2745
6     | 1000      | 00:01:20     | 96.56%              | 0.2753
6     | 1100      | 00:01:27     | 96.11%              | 0.2617
7     | 1200      | 00:01:35     | 95.96%              | 0.2605
7     | 1300      | 00:01:43     | 96.56%              | 0.2477
8     | 1400      | 00:01:50     | 96.86%              | 0.2382
8     | 1500      | 00:01:58     | 95.96%              | 0.2491
9     | 1600      | 00:02:06     | 96.41%              | 0.2439
9     | 1700      | 00:02:14     | 95.96%              | 0.2346
10    | 1800      | 00:02:22     | 96.56%              | 0.2341
10    | 1900      | 00:02:29     | 96.71%              | 0.2256
10    | 1950      | 00:02:35     | 96.41%              | 0.2316
(Mini-batch accuracy climbed from 0% to 100% and stayed at 100% from iteration 520 onward; mini-batch loss fell from about 5.9 to about 0.01.)

%% Classify validation
labels = classify(net_Wael, imdsValidation);

%% Compute accuracy
YValidation = imdsValidation.Labels;
accuracy02 = sum(labels == YValidation)/numel(YValidation)*100;
dis = ['The final accuracy rate is: ', num2str(accuracy02,'%2.2f'), '%'];
disp(dis);
diary off

The final accuracy rate is: 97.16%

%% Move diary file
movefile('Progress.txt','results')

%% Test one at a time
true=0; false=0;
for i = 1:size(imdsValidation.Files,1)
    if labels(i) == imdsValidation.Labels(i)
        true=true+1;
    else
        false=false+1;
    end
end

%% Find the square root of the false matches to determine the subplots
f0=floor(sqrt(false));
if f0 == sqrt(false)
    f1=f0;
else
    f1=f0+1;
end

%% Recalibrating true and false
true=0; false=0;

%% Start plotting mismatches
falsefig = figure('Name','All Mismatch Pictures','Visible','off','Units','Normalized','OuterPosition',[0, 0.04, 1, 0.96]);
for i = 1:size(imdsValidation.Files,1)
    imf = imread(imdsValidation.Files{i});
    if labels(i) ~= imdsValidation.Labels(i)
        colorText = 'r';
        false=false+1;
        if false > f0*f1
            break
        else
            subplot(f1,f0,false);
            imshow(imf);
            title(char(labels(i)),'Color',colorText);
        end
    end
end
xlabel('ALL mismatch incidents');
saveas(falsefig,'All Mismatches.png');
movefile('All Mismatches.png','results');

%% Start plotting sample of correct match
truefig = figure('Name','Sample of Correct Match','Visible','off','Units','Normalized','OuterPosition',[0, 0.04, 1, 0.96]);
true=0; false=0;
for i = 1:size(imdsValidation.Files,1)
    ii = randi(size(imdsValidation.Files,1));
    imt = imread(imdsValidation.Files{ii});
    if labels(ii) == imdsValidation.Labels(ii)
        colorText = 'g';
        true=true+1;
        if true > f0*f1
            break
        else
            subplot(f1,f0,true);
            imshow(imt);
            title(char(labels(ii)),'Color',colorText);
        end
    end
end
xlabel('Sample of Correct Incidents');
saveas(truefig,'Sample Matches.png');
movefile('Sample Matches.png','results');

Built With matlab Try it out codeocean.com github.com
10,007
https://devpost.com/software/moosha-ai
GIF. This is Moosha, my pet project, inspired by the Jarvis AI in the Iron Man movies. My AI is a full-stack assistant and can do complex calculations. I built it with Python and TensorFlow; building the GUI was stressful for me. The AI currently has every feature of a future AI. I learnt a lot of things, but mainly I became better at GUI programming. What's next for Moosha-AI Built With numpy tensorflow wolfram-technologies Try it out github.com
10,007
https://devpost.com/software/born-to-storms-discussion
JanetA, AI. Iron Seas. Inspiration I am a technical designer and worked for NASA for 30 years designing scientific instruments for studying Earth. I favor a design approach that jumps back and forth between the bottom (what technology can now do) and the top (the big picture). I have often found it very hard to define the highest level of the top of a design. So it is with our climate crisis; so it is with human/AI interaction. One way to do this is to study the life's work of a beloved professor, in this case Joseph Campbell, and apply that work to the new problem. Here we are applying the "Hero's Journey" to giving our young people a foundation for action. Logline: Set in the 2020s, a young woman, supported by an Artificial Intelligence and driven from her home by storms and rising seas, embarks on a life-affirming struggle to find and support the many people acting on our climate crisis. Since we needed some pizzazz, we followed Arthur C. Clarke's advice and added an Artificial Intelligence, JanetA, as a main character. What it does The task for this hack is to have discussions with people of diverse backgrounds to ensure that the novel's characters from similar backgrounds ring true. These characters include: 1. An Afro-American young woman, a high school student 2. A person from Bangladesh 3. An older Afro-American woman and man 4. A Native American man How I built it All top-level designs are built out of words. I now have available: (1) an example chapter, (2) short segments describing each character. Challenges I ran into Most of the AIs in movies and TV are horrible monsters ("Terminator"). Most of the available climate fiction is pure dystopia and worse than useless for our young people. Accomplishments that I'm proud of This is my third proper book. What I learned What you know how to do is not always what is needed. Learn something new. What's next for Born to Storms Table Discussions This is a rare learning element at the very top level of design and in communication on technology. Built With word Try it out bigmoondig.com
10,008
https://devpost.com/software/little-foodies
Inspiration I love to try new restaurants and consider myself a foodie. I take photos to record what I have eaten, and I usually share those food photos on food critic social channels. But I felt my photos looked quite similar compared to others' photos, so I wondered if there was a way to make my food review photos a bit different. That's how I came up with this idea -- put a little foodie (representing myself) on the table to take photos with the dishes! What it does This filter puts a little character on the table/ground. The special thing about this filter is that you can customize a lot of things. You can pick the character's style, movement, facial emotion, speech bubble style, and give ratings. So with this filter, you can express how much you love (or hate) the food within the photo itself. The little foodie speaks for you! How I built it I drew a lot of images, made a lot of animations, and built some 3D models. Some of the art assets were purchased, but they were usually still modified by me to fit my needs. Challenges I ran into The main challenge was figuring out a nice way for users to do the customization. There are so many options to pick, and users can also adjust both the character's and the speech bubble's position, rotation, and scale. In the end, I used multiple levels of the NativeUI Picker to do the trick. But it seems the picker acts differently on Android and iOS. I was developing on Android, and I found I might have some bugs to fix on iOS ... Accomplishments that I'm proud of I'm so happy to have made this filter. This filter is just what I would want for myself! I also felt satisfied with the customization functionality! What I learned I think I dug deeply into the NativeUI Picker functionality. I also spent a lot of time managing assets and complex editor patches. I think I have improved my asset-management skills! What's next for Little Foodies Maybe the next version should let users customize more details, like the hairstyle or switching clothes. That might be more useful. The making of the 3D model is also going to be much more complex! Built With javascript sparkarstudio Try it out www.instagram.com
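Illustrative sketch (not from the original project): a minimal Spark AR NativeUI Picker setup of the kind described above. The texture names icon_happy and icon_sad are assumptions for illustration, not the filter's actual assets.
const NativeUI = require('NativeUI');
const Textures = require('Textures');
const Diagnostics = require('Diagnostics');

// Hypothetical icon textures for two face emotions (names are assumptions).
Promise.all([
  Textures.findFirst('icon_happy'),
  Textures.findFirst('icon_sad')
]).then(([happyIcon, sadIcon]) => {
  const picker = NativeUI.picker;
  picker.configure({
    selectedIndex: 0,
    items: [
      {image_texture: happyIcon},
      {image_texture: sadIcon}
    ]
  });
  picker.visible = true;

  // React to the user's choice, e.g. swap the character's emotion.
  picker.selectedIndex.monitor().subscribe(event => {
    Diagnostics.log('Selected emotion option: ' + event.newValue);
  });
});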
10,008
https://devpost.com/software/blow-soap-bubbles-in-ar
About the creators of this project Data Sapiens is a creative collaborative, living in the cloud. We are a spare-time collective of creatives from all around the world, specializing in different disciplines. Inspiration For the past months, Data Sapiens has been experimenting with how we can get users to interact with augmented reality experiences not only with touch gestures but also with physical body gestures. We got the idea of detecting whether a user is blowing on the phone from watching footage from a trip to the windy west coast of Norway and hearing the wind blowing into the microphone. Using our findings to simulate soap bubbles felt natural, since you blow on the display of the device. What it does It enables users to blow soap bubbles in augmented reality. After following three easy steps to calibrate and initialize the experience, the filter will emit bubbles when the user blows on the screen of their device. How we built it Detecting how devices react when you blow on them The moment Spark AR's new audio reactivity features launched, we got to work. We started by seeing how the different frequencies were split and interpreted by Spark AR. We analyzed how sound reacts when a user blows on the screen. Devices have different microphone placements, and users hold their phones differently. We noticed that some users use their digitus minimus (little finger) as support when holding their device. If a left-handed user does this on an iPhone, the user might be blocking the microphone, as iPhones have their microphone on the bottom left side of the phone. Google Pixel devices have their microphone on top of the display. These are some of the many factors we considered when selecting a sound frequency and threshold value for our custom "wind on the screen" trigger. In our lab, we managed to test on five devices from different manufacturers. To test our detection even more, we decided to release an early prototype of our filter and ask the community for help, so we could map how different phones behave. We learned that our detection works nicely on most phones, but some old Huawei and Samsung devices do not always work, as there are some bugs in the core, not in the filter. Positioning and triggering a particle emitter We linked an emitter that is a child of a plane tracker to share the same world coordinates and rotation as the camera.
objEmitter.worldTransform.position = objCamera.worldTransform.position;
objEmitter.transform.rotationX = objCamera.rotationX.sub(Math.PI);
objEmitter.transform.rotationY = objCamera.rotationZ.sub(trackerPlane.worldTransform.rotationZ);
objEmitter.transform.rotationZ = objCamera.rotationY;
With the emitter positioned, we used our findings from our research on audio reactivity to trigger the emitter to emit particles that were to end up as our bubbles. Making it look like bubbles One of our main goals was to make our filter as authentic as possible. To learn more about how soap bubbles behave, we shot many videos of real soap bubbles in slow motion and systematically analyzed the recordings. Some factors we were looking for include bubble life span, acceleration, velocity, size, and how the color changes based on the backdrop. Using the data from our analysis, we managed to set forces in the emitter that move the bubbles in a way that feels natural. We thought that bubbles reflect the environment like a 360-degree photo in a sphere.
We were surprised when we saw that soap bubbles reflect 180 degrees of the environment in a semisphere that is inverted to create a full sphere. Richard Heeks has documented this really well in his photo series, Zubbles . We used this opportunity to use the camera texture to recreate the same reflection feeling instead of loading an environment texture. Using the camera texture has two advantages. First, the look of the bubbles will change based on the environment and be more relevant to the user's lighting conditions. We had to fake the reflection look by using pincushion distortion, as the angle of view on devices is not wide. The camera texture also points in the wrong direction relative to what the reflection texture would be, so we only used the high-luminance areas and blended these with our gradient ramps for iridescence. The second and biggest advantage of not using environment textures is size. Our filter is fully procedural, meaning we are not relying on any assets besides our patches and scripts. The size of the export is only 14kb, making the filter fast to load and more accessible for users in locations with a bad internet connection. Involving the community Filters need to be tested, and we believe the best way to do this is to involve the community. After 3 days of development, we released a prototype and asked the community for feedback. You can read the post here . 253 captures were made on the first day, and people from the community trying the filter gave us feedback. One common piece of feedback was to make better instructions. So we did; we also saw the opportunity to file a feature request with the Spark AR team for better audio reactivity instructions such as "Start recording and make some noise". Another piece of feedback was to have the bubbles pop more randomly, which resulted in the research we did on soap bubble behavior. The community has helped and pushed us to make an augmented reality experience we are truly proud of. Challenges we ran into Plane tracker stability Since we rely on plane tracking to position the bubbles in world space, we needed the tracker to be stable so the bubbles don't jump around when the user moves. To solve this we disabled auto-start on the tracker and added an instruction for the user to look around for 3 seconds before tapping a surface to track. This resulted in much more stable tracking across different environments, even in low light. Scale bug after Instagram update The prototype broke on Instagram after an update. It turned out to be Reactive.scale() not working anymore. We rely on inverting the plane tracker scale to keep a constant scale for the bubbles. To solve this we scaled each axis separately (see the sketch at the end of this write-up). We also submitted a bug report on this. Instructions for audio reactivity Today we can't show instructions while recording, and the user has to record for the audio reactivity to work. This makes it tricky to make the user understand that they need to make some noise or blow on their phone to trigger the emitter. We have submitted a feature request with a proposed solution for this. To work around it we made sure to have a clear demo video. This seems to be working, as we can tell from the stats that people are using and sharing the filter. Outro We are really proud of the realistic look of the bubbles, and even more proud to be able to do this in such a small filter size. As a creative collaborative with a focus on building bidirectional bridges between the analog and digital worlds, we are striving to make experiences accessible for everyone.
Having a filter of this quality that loads in an instant, even on bad network connections, is a milestone for us. We are looking forward to seeing more users use our filter, learning from them, and improving our filter. We are also proud of the process of building this filter. We learned a lot from the community, a lot about world space in Spark AR, and about how we can make AR experiences more physical. We are already looking forward to the next challenge. Resources Link to the filter on Instagram Case study video on Vimeo Project on Github. Data Sapiens on Instagram Data Sapiens Website. Built With ar javascript sparkar Try it out www.instagram.com github.com www.instagram.com
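Illustrative sketch (not from the original project) of the per-axis scale workaround mentioned in the Challenges section above, after Reactive.scale() broke. The object names planeTracker0 and bubbleEmitter are assumptions, not the project's actual identifiers.
const Scene = require('Scene');
const Reactive = require('Reactive');

Promise.all([
  Scene.root.findFirst('planeTracker0'),   // assumed name of the plane tracker
  Scene.root.findFirst('bubbleEmitter')    // assumed name of the emitter object
]).then(([tracker, emitter]) => {
  // Invert each axis of the tracker's scale separately instead of calling Reactive.scale(),
  // so the bubbles keep a constant apparent size regardless of the tracked plane's scale.
  emitter.transform.scaleX = Reactive.div(1, tracker.worldTransform.scaleX);
  emitter.transform.scaleY = Reactive.div(1, tracker.worldTransform.scaleY);
  emitter.transform.scaleZ = Reactive.div(1, tracker.worldTransform.scaleZ);
});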
10,008
https://devpost.com/software/pirate-dash-360
Inspiration As recent university graduates, each of us is about to step into the working world. Looking back on our educational journey, we thought of the things that propelled us into the world of technology, and some highlights include playing puzzle games like Rush Hour, Circuit Maze and Carcassonne. These brain-stimulating games formed the foundation of our technology backgrounds, as they strengthened our logical thinking, planning, memory capacity and problem-solving skills. As a team, we wanted to pay it forward and contribute to society, especially in the education space. We wanted to do so through a fun and fuss-free medium. After all, learning is most productive when it is fun! To maximize the fun, we wanted to incorporate elements of cutting-edge technology for stronger user engagement and a better experience. During this difficult period, as people around the world are staying home and observing social distancing measures due to COVID-19, we have come to realize that some parents are finding it difficult to keep their children occupied productively. To tackle this, we decided to design a solution to keep youths engaged while developing their logical reasoning and problem-solving skills. After several rounds of ideation, Pirate Dash 360 was born. What it does Pirate Dash 360 provides an immersive 360° Augmented Reality (AR) puzzle experience through a platformer game which brings the player into the world of treasure hunting by solving puzzles. Players are tasked with formulating a path which leads the pirate from the start tile to the end tile (where the treasure chest is located) while avoiding various traps and obstacles. This is done by swapping directional tiles. Through this game, we hope to strengthen the player's logical thinking, as they have to derive an answer from randomized tile patterns. Currently, Pirate Dash 360 offers a selection of 3 different worlds (Grass, Snow and Desert) of varying difficulty, each comprising 5 separate levels. Each level features traps, obstacles, tile variations and much more! From left to right: Grass World Level 1, Snow World Level 1, Desert World Level 1, Snow World top view How we built it Each world was designed with a theme and a different set of challenges in mind. To project a fun and unified experience, we carefully curated a range of assets such as 3D objects, images and sounds from multiple sources. We then positioned and scaled the assets in Spark AR Studio. We also made use of Spark AR's native picker as a user interface, as well as particle systems to enhance the environment. To ensure that the tiles are placed precisely, we rendered them programmatically through a script, where all of the game logic is written. Our animations and interactions with materials/textures were also achieved with scripting, using the Spark AR Studio API. We also used the Patch Editor in Spark AR Studio to bridge our script and pass variables into the program to manipulate the animation controllers of our objects. Challenges we ran into As most of us were new to augmented reality and game programming, we had to learn Spark AR from scratch. Fortunately, the tutorials were informative and provided us with what we needed to complete our game. Some of our main challenges include the following: Sourcing and curating assets that were within our stipulated budget, whilst maintaining the 4 MB size requirement for deployment of Pirate Dash 360 on Instagram.
Our lack of knowledge of 3D modelling and animation concepts such as blending, culling and animation curves made it difficult for us to configure the aesthetics of our game in the earlier weeks. Despite multiple attempts, we were not able to do so due to time constraints and bandwidth limitations, so we decided to utilize the wealth of resources available from third parties such as SketchFab. While we used scripting to create most of our animations, we were not able to change the animation controller attached to our 3D objects without the Patch Editor. We managed to integrate the script and Patch Editor via passing of variables (see the sketch below), but the Option Picker was limited to only 5 options, and hence we could not fully utilize the range of animations of our objects. We had multiple ambitious ideas around the usage of AR during our planning stage, but many proved infeasible due to the size limit. However, this is fully understandable and we are glad to have overcome this challenge towards the end. Some of us agreed that the Patch Editor had an initial learning curve for animation tweaks. This was resolved over time through practice, and we realized the immense leverage that it provides for developers. We had issues with positioning and piecing together assets in the World View, as we were pretty new to AR technology. This got better over time, through practice and playing around with the different tooling available in Spark AR Studio. Accomplishments that we're proud of Overall, we are delighted with our end product, and only wish we had had time to implement more of the fascinating features we had in mind. That being said, we are really proud of what we have achieved as a team. We started out with a single level, moved on to create 5 levels, and ended up with 3 worlds of 5 levels each. It has been an incredible journey and we are glad to have created a fully functional AR game. What we learned We learnt so much about Spark AR Studio and how scripting can be used to enhance the overall AR experience. Through our various brainstorming and ideation sessions, we learnt that there are endless possibilities when it comes to AR, and that it can be leveraged in many settings including, but not limited to, education and gaming. Some key learning points include: Learnt how animations work and how they can be created in Spark AR with the Patch Editor. Learnt how to create Spark AR projects and script using JavaScript with reactive programming. Learnt about UI/UX, theming and A/B testing; through this, we were able to piece together various curated UI assets, which helped us in conceptualizing and theming the game, thus providing a natural and intuitive experience for our gamers. Learnt about the benefits of AR and education-based gaming, and how we could potentially bring social good and value to the masses through cutting-edge technology. What's next for Pirate Dash 360 We have great plans for Pirate Dash 360 ! We intend to add more world themes and levels, and include new obstacles and more complexity in the game. Beyond this, we also plan to introduce elements that will further challenge short-term memory capacity as well as visual and spatial reasoning. Examples include jigsaw puzzles, rotating tiles and an eight-directional tile system as compared to our current four-directional tile system. Look forward to our updates!
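Illustrative sketch (not from the original project) of the script-to-Patch-Editor bridge described above, assuming a recent Spark AR scripting API. The variable name tileState and the "From Script" patch wiring are assumptions for illustration.
const Patches = require('Patches');
const TouchGestures = require('TouchGestures');

// Hypothetical game state: which animation option the patch graph should play.
let tileState = 0;

// Push an initial value to a "From Script" variable read by the patch graph,
// e.g. wired into an Option Picker patch that drives the animation controller.
Patches.inputs.setScalar('tileState', tileState);

// Update the value on interaction; the patch graph reacts and switches animations.
TouchGestures.onTap().subscribe(() => {
  tileState = (tileState + 1) % 5;   // the Option Picker patch offers only 5 slots
  Patches.inputs.setScalar('tileState', tileState);
});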
Acknowledgements SketchFab Quaternius.Itch.IO Kenney Facebook Sound Design Global Genius Spark AR Studio 3D Assets https://www.kenney.nl/assets/platformer-kit https://www.kenney.nl/assets/pirate-kit https://sketchfab.com/3d-models/3d-sidescroller-little-pirate-7329a3297b374a8ca0bbb032eb49a3aa https://sketchfab.com/3d-models/treasure-chest-b46fd9edd44e412fa76f9b9a2b86281c https://sketchfab.com/3d-models/chevron-f4a277b8d9cd47e0bea272eb58c1c5b0 https://sketchfab.com/3d-models/bomberman-bomb-4a109903cbd34bef9d48e427a2d4da78 https://sketchfab.com/3d-models/bomberman-fire-8e482145eeed419980fabf073fcb13c9 https://quaternius.itch.io/platformer-pack/ Audio Assets Imported from within Spark AR Studio. Sources: Facebook Sound Design Global Genius Built With javascript sparkar sparkarstudioapi Try it out github.com www.instagram.com
10,008
https://devpost.com/software/canada-eh
Inspiration For the Canada Day celebration What it does It's a world AR effect with animation. How I built it By using a free maple leaf 3D FBX model, Canadian flag colors and 3D text Challenges I ran into I originally wanted to create another effect, but time constraints made me create something simple. I hope to complete it the next time around. Accomplishments that I'm proud of Learning and creating this effect. What I learned There is so much to learn and I have just scratched the surface. What's next for Canada Eh A theme for social distancing and gestures Built With ar particle
10,008
https://devpost.com/software/facebook-gaming-with-gamear-stream
Select the user Live streaming Live streaming Live streaming Inspiration Nowadays everyone likes to share everything, and one of the best platforms for sharing is Facebook. But on Facebook a user cannot share his/her gaming experience with everyone. After looking at this, I decided to build something for gamers through which they can share their experience with others and let others enjoy the game too. I decided to use my background and knowledge in image & video processing to fix that issue by building an easy-to-use framework that enables developers to capture and record ARKit videos, photos, Live Photos and GIFs; I called it ARVideoKit. The amazing feedback from developers motivated me to build something even better using Facebook technologies, and I'm calling it GameAR Stream. What it does GameAR Stream is an iOS framework built specifically to enable mobile developers to implement real-time AR content rendering and allow users to share their augmented reality gaming experience on Facebook Live. How I built it Using the Facebook LoginKit, CoreKit, ShareKit (AccountKit) and real-time messaging protocol (RTMP) technologies, I was able to connect to the Facebook Live stream API and push audio & rendered game scenes in real time. Then I used the ARKit, SparkAR, AVFoundation, Metal, and CoreMedia frameworks in order to render the phone camera stream with the augmented reality components and game scenes in real time. GameAR Stream is a Cocoa Touch framework written in Swift, which means it's very simple to implement in any iOS application! Challenges I ran into I ran into many challenges while building this framework; here are 2 main challenges and how I solved them: Rendering 2D game & augmented reality scenes – solved by developing functionality that uses the Metal framework and a 3D renderer to diffuse the 2D scenes' materials. Creating an RTMP session and pushing rendered buffers to the RTMP session – solved by developing functionality that used AVFoundation, Foundation, and the rtmpdump C library in order to start an RTMP session and push the rendered buffers to the session! Accomplishments that I'm proud of I'm mainly proud of how I was able to figure out creating RTMP sessions and pushing rendered audio and video buffers in real time from a mobile device! What I learned While building this framework, I learned many new things related to real-time messaging protocols and networking. What's next for Facebook Gaming with GameAR Stream I am looking to expand the domain of virtual sharing, such as live sharing the experience of museums, monuments and movies with augmented reality. Built With ar arkit facebook-account-kit facebook-core-sdk facebook-login-api facebook-share-sdk particle swift Try it out github.com
10,008
https://devpost.com/software/planet-match-multiplayer-world-ar-game-for-instagram-calls
Planet Match Game - Cover Planet Match Game - Spark AR Screengrab Planet Match Game - Spark AR Script Planet Match Game - Instagram Call Gameplay Screengrabs Inspiration During the lockdown I really miss getting together with friends to play board games. I am currently learning Spark AR and am very interested in the evolution of XR; I use the Oculus Quest and can't wait for FB glasses one day! During the lockdown people are also not moving much, so I thought of an active game that sparks the imagination. I also set out to use as much as possible of what I was learning in tutorials, where I believed there was a strong reason to use it in my concept. I wanted my World AR project to not make sense unless it was truly happening as an interaction with the real world, unlike other World AR projects I have seen where the background is not important. What it does It makes a planet pop out from the floor, and your opponent needs to match its texture, fast! Or they lose lives and matches! There is a welcome screen after which you can select: Player 1 - will see a planet popping out from the floor Player 2 - has 30'' to find a material or object that matches the surface of their opponent's planet. This is visualised as a transparent planet that gets filled with whatever Player 2's camera is pointing at. To continue to the next round and switch roles, both players need to tap their screen. Speed is key! Players can lose their 3 lives and earn negative points when their time is up. Luckily, lives recover as your opponent loses theirs. How I built it I used Spark AR and the Patch Editor, doing a lot of trial and error; then I realised some things needed scripting, so I watched many tutorials until I finally got it working. It has a little bit of script, but it is mostly developed with the Patch Editor. Challenges I ran into This is my first Spark AR project, so I had many challenges. Where to start! I think that the main challenge, not just for me but perhaps for Facebook (and what I wanted to learn), is that I don't know if it actually works on a call ... as I had not submitted the filter for publishing until I finished it today. It was very hard to design the interaction when the two filters don't communicate with each other, and this is where FB could perhaps release something (if it exists, I didn't find it) that allows filters used in the same call to communicate with each other and exchange variable values. So I had to make up a new way to do the scoring that made sense in the game, without the filters speaking to each other. I really enjoyed the challenge! Accomplishments that I'm proud of The whole thing, honestly; I never thought I could make it work! I am so excited; this is new territory for me and I can't wait to do more and more! What I learned I learned Spark AR from scratch, the Patch Editor and scripting and all; it is amazing that one can imagine something and make it a reality just like that! What's next for Planet Match - Multiplayer World-AR Game for Instagram calls Well, to take the world by storm! If it does work during a call, it would be amazing and I believe many people would want to try it out. I believe it is the first multiplayer World-AR game that would work on a phone call. Next is to see if filters can exchange variable values on the same call, to make more complex games! THANK YOU! Built With particle patch script Try it out www.instagram.com docs.google.com github.com
10,008
https://devpost.com/software/arc-4ahnks
Inspiration Fashion as we know it has become obsolete. The Covid-19 pandemic has demonstrated how unsustainable the classic fashion business model is. With digital, nothing physical is ever needed, only data. Design can be distributed across a multitude of designers, and people will be able to directly access and adopt the designs of those they prefer to patronize. Making a statement will become both more accessible and less impactful to the environment, as when fashion changes the design can be changed without having to send a garment to landfill. As technology improves, people will be able to display their fashion choices to others without having to worry about losing control of their design. Digital garment signatures could prove authenticity and protect copyright. Certificate authorities could be formed from fashion houses to lend their trusted brand to designers to form a curated digital range of fashion whose lineage can be traced and proved using some of the same methods used to verify the identity of the websites we use to conduct our business every day online. For now we have curated a digital gallery of fashion that can be viewed in concert with a gallery t-shirt. When perceiving the gallery through this filter, different realities can be contemplated and a future of augmented fashion brought just a little closer. How I built it I created a logo to use as a tracking marker and then combined it with a UI picker to allow users to select and view different pieces of art in the effect. Logic was used in the Patch Editor to enable animation for some pieces, and also music where appropriate. We used hand-drawn illustrations and Illustrator to create unique artwork, as well as work from other artists to show the potential of a shared design. Challenges I ran into The restrictions on QR codes and external textures limited the ability to connect unique user markers with unique content that the wearer could control and change. This means the effect user has the power to change how the garment wearer is perceived, rather than the garment wearer being able to choose the fashion in which they are perceived. The code itself is not very advanced for this tracking. Accomplishments Delivering on time, working with a team formed only 3 days before. What I learned Learned what is and what is not possible within Spark AR, as well as how to make digital fashion filters. What's next for ARC Using this to create a more advanced filtered garment with more customization, so that the consumer is the designer
10,008
https://devpost.com/software/draw-your-own-sign
About. Built With augmented-reality
10,008
https://devpost.com/software/repnatural
Natural surroundings Inspiration What it does It replicates a natural landscape How we built it Using the Plane Tracker and some 3D objects Challenges I ran into Inserting so many 3D objects and giving them animations made the platform lag Accomplishments that I'm proud of Completed it in about 24 hours What I learned Using the producer patch to animate. What's next for RepNatural Trying to further replicate more such landscapes and more developments. Built With sparkar Try it out github.com
10,008
https://devpost.com/software/cpr-trainer-cz6mvl
C P R Inspiration It's a shocking fact that almost 17 million people die globally of cardiac arrest. Currently, about 9 in 10 people who have a cardiac arrest outside the hospital die. But CPR can help improve those odds. If it is performed in the first few minutes of an arrest, CPR can double or triple a person's chance of survival. Surprisingly, not many people are aware of the intricate steps needed to perform CPR successfully. This project is something very close to [Satvik], as one of his closest friends passed away due to cardiac arrest. It goes without saying that having the knowledge to give CPR is extremely important; it's very possible that the situation could have had a more fortunate outcome. Motivated to spread CPR knowledge, we created CPR Trainer. What it does CPR Trainer is a handy (proof-of-concept) tool, which aims to aid folks in first aid ;) How we built it Coordinating across both the EST and IST timezones has its challenges for sure. This necessitated constant and effective communication, which was KEY! Google Docs is your best friend. Challenges we ran into Spark AR is powerful and bleeding-edge, but its feature set isn't yet completely robust. Figuring out how to work around issues like body tracking was a doozy, but definitely rewarding! Accomplishments that we're proud of Within just a few weeks, we've built something that truly has the potential to be a huge benefit to folks down the road. That's just awesome! What we learned Spark AR, and AR in general, are still so untapped. There's so much they can do, and our imaginations are only just beginning to grasp what's possible! What's next for CPR Trainer We'd love to refine CPR Trainer and turn it into a pro-grade experience, suitable for anyone new to CPR. Looking forward to seeing where we can take this :) Built With javascript patch sparkar Try it out github.com
10,008
https://devpost.com/software/arc-67p8bl
Project on SPARK AR Project on SPARK AR Inspiration Fashion as we know it has become obsolete. The Covid-19 pandemic has demonstrated how unsustainable the classic fashion business model is. With digital, nothing physical is ever needed, only data. Design can be distributed across a multitude of designers, and people will be able to directly access and adopt the designs of those they prefer to patronize. Making a statement will become both more accessible and less impactful to the environment, as when fashion changes the design can be changed without having to send a garment to landfill. As technology improves, people will be able to display their fashion choices to others without having to worry about losing control of their design. Digital garment signatures could prove authenticity and protect copyright. Certificate authorities could be formed from fashion houses to lend their trusted brand to designers to form a curated digital range of fashion whose lineage can be traced and proved using some of the same methods used to verify the identity of the websites we use to conduct our business every day online. For now we have curated a digital gallery of fashion that can be viewed in concert with a gallery t-shirt. When perceiving the gallery through this filter, different realities can be contemplated and a future of augmented fashion brought just a little closer. We were inspired by Lego and how it has a system. Fashion needs a system. Fashion is one of the most polluting industries in the world. The rise of AR shows the promise of a digital-based design that doesn't create any waste or chemicals. Fashion needs to be shareable and editable, so that the consumer is the designer and has full customization. What it does Using marker recognition, it places AR filters atop a t-shirt to give the wearer different options to be styled in, without the need to buy anything physical. There are 10 filters to choose from that can be used on Facebook and Instagram. How I built it We used hand-drawn illustrations and Illustrator to create unique artwork, as well as work from other artists to show the potential of a shared network. We created a logo to use as a tracking marker and then combined it with a UI picker to allow users to select and view different pieces of art in the effect. In total we have 10 different filters that are applied to the garment according to the user's choice. Logic was used in the Patch Editor to enable animation for some pieces, and also music where appropriate. Challenges I ran into Learning how to use Spark AR. Our team members have very different backgrounds, with a member who has worked with AR previously but within the aerospace industry, a member who is a fashion designer and another member who is a frontend engineer. It was our first time using Spark AR and also the first time making filters for the general public on Facebook and Instagram, so we learnt a lot in the process. Accomplishments that I'm proud of Delivering on time, working with a team formed only 3 days before, working remotely with the 3 of us in different countries (United Kingdom & France). What I learned We learnt how to use Spark AR. None of the team had used it before the team was formed. What's next for ARC We plan to use this to create a more advanced filtered garment with more customization, so that the consumer is the designer.
Instagram effect URL: https://www.instagram.com/ar/2658206224394673/ Facebook effect ID: 261222311773350 Built With ar particle sparkar Try it out github.com www.instagram.com www.facebook.com
10,008
https://devpost.com/software/a-ja3nou
Inspiration Color blindness (color vision deficiency or CVD) affects approximately 1 in 12 men and 1 in 200 women worldwide. It is caused by the absence or impairment of one or more cone cells within the human eye. Due to a deficiency in these cells' ability to detect certain wavelengths, the retina itself is unable to distinguish between these colors. The most common of these deficiencies is red-green color blindness, in which the individual has trouble distinguishing between the red and green wavelengths. Within this category, there are four types of color blindness: deuteranomaly (green-weak), deuteranopia (green-blind), protanomaly (red-weak), and protanopia (red-blind). Both protanopia and deuteranopia make the individual completely unable to differentiate between red and green. The next most common form of color blindness is blue-yellow color blindness. People with this condition have difficulty distinguishing between the colors blue and green, as well as the colors yellow and red. Tritanomaly is the mild form of the condition and is characterized by the impairment of the relevant color receptors. Tritanopia is the more severe form and is characterized by the complete absence of the relevant color receptors, which makes it much more difficult to tell the difference between blue and green, purple and red, and yellow and pink. Despite the large segment of the population affected by color blindness, many people do not realize the everyday struggles that color-blind people face. Color blindness impacts simple tasks such as the thorough cooking of meat (due to difficulty distinguishing between the color of cooked and uncooked meat), distinguishing between fresh and spoiled foods (such as moldy bread), distinguishing between ripe and unripe fruits, and choosing what clothes to wear. Color blindness also makes more complex tasks, such as painting, graphic design, or jobs involving colored wires, much more difficult than they would be for those with normal color vision. Therefore, Hue Helper will act as a simple tool that color-blind people can use to differentiate the colors of their surroundings, thus aiding them in accomplishing everyday tasks that so many people take for granted. What it does Hue Helper provides color correction for the three most common forms of color blindness: protanomaly/protanopia , deuteranomaly/deuteranopia , and tritanomaly/tritanopia . Although it cannot restore normal color vision to those who are color-blind, it can help them differentiate between colors in a way that they would not normally be able to. In addition, Hue Helper provides simulations of protanopia and deuteranopia for the benefit of those with normal color vision. These simulations can be used by web developers, graphic designers, and others who are interested in creating products that are accessible to color-blind people. How I built it Both the color-blind correction filters and the color-blind simulation filters were created using lookup tables. The simulation filters use the same algorithm as Adobe Photoshop's colorblind simulation filters, while the color correction filters use the same algorithm as Google Chrome's inbuilt color filter schemes. In addition, we added buttons in order to allow users to toggle between each type of filter. Challenges I ran into It was difficult to understand how to convert the images through use of the Daltonization algorithm.
This was an obstacle because the Daltonization algorithm doesn't work with the RGB model, which is the model used in the majority of image processing algorithms. The Daltonization algorithm instead makes use of the Long-Medium-Short wavelength (LMS) color space, a model that attempts to emulate the way in which our brains process the varying wavelengths of light. Accomplishments that I'm proud of The main obstacle of this project was overcoming the restrictions set by the Daltonization algorithm due to its incompatibility with standard RGB models. However, our team worked diligently to find a way to implement the algorithm in an LMS color space form that could be interpreted via Adobe Lightroom. This process did not go as smoothly as originally planned; however, these setbacks allowed us to regroup and further refine our development schedule. While the project was hectic, we became closer and learned how to effectively communicate with each other. Overall, I am really proud of our ability to come together and apply our different strengths towards a common goal. What I learned Throughout the process of this hackathon, we learned various techniques and skills that will no doubt help us in furthering our careers. Of course the biggest hurdle we faced was figuring out how to create images via the Daltonization algorithm, due to the algorithm's restrictions. In addition to learning how to make use of the algorithm, attempting to implement it in Spark AR and JavaScript made the task even more challenging. However, this challenge gave us the opportunity to actively seek out tutorials on the techniques needed to properly create and execute the code needed to reach our goal of creating a tool that the color blind can use in their daily lives. What's next for us Augmented reality is an up-and-coming tool that, while having many applications, is unfortunately limited by how early in its infancy it currently is. While the idea of catching a virtual monster or playing card games where our cards come to life is amazing in itself, the true potential of augmented reality lies in its use to enhance the human experience in our daily lives. The ability of AR to innovate the methods by which future generations learn at school is limitless. What if our glasses came loaded with an AR tool that would allow electricians to view schematics in real time? What if surgeons could visualize a patient's blood vessels in a 3-dimensional space prior to surgery based on imaging? This project attempts to act as a prosthetic, an emulation, for one of the most vital of human senses: sight. That is the overall goal of this project as well as the general goal of this group: to allow innovation in medicine to be married to innovation in coding. We are hopeful that this project inspires others to consider other ways we can help simplify the lives of those who suffer from various forms of health problems or deficiencies. Built With javascript sparkar Try it out github.com www.instagram.com
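Illustrative sketch (not from the original project) of the RGB-to-LMS detour described above, showing the protanopia simulation step in plain JavaScript. The matrix constants are the values commonly quoted in public daltonization reference implementations; treat them as assumptions to verify, not the exact numbers this project used.
// RGB -> LMS, simulate protanopia in LMS space, then LMS -> RGB.
// Matrix values are commonly circulated daltonization constants (assumed; verify against a reference).
const RGB2LMS = [
  [17.8824,    43.5161,   4.11935],
  [3.45565,    27.1554,   3.86714],
  [0.0299566,  0.184309,  1.46709]
];
const LMS2RGB = [
  [ 0.0809444479,   -0.130504409,    0.116721066],
  [-0.0102485335,    0.0540193266,  -0.113614708],
  [-0.000365296938, -0.00412161469,  0.693511405]
];
const PROTANOPIA = [        // L channel reconstructed from M and S
  [0, 2.02344, -2.52581],
  [0, 1,        0      ],
  [0, 0,        1      ]
];

function mul(m, v) {        // 3x3 matrix times 3-vector
  return m.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
}

function simulateProtanopia(rgb) {
  const lms = mul(RGB2LMS, rgb);
  const lmsSim = mul(PROTANOPIA, lms);
  return mul(LMS2RGB, lmsSim);
}

// Example: how a pure red pixel would appear to a protanope.
console.log(simulateProtanopia([255, 0, 0]));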
10,008
https://devpost.com/software/black-lives-matter-e963vk
Inspiration In light of current events, we have created a filter that unifies all those who support communities and individuals of color, and those who have experienced discrimination due to their skin color. We want to remind people about the ideals of the free world and a free America. It is important to remember that during this tough time we have to live in harmony and support those who are in need. What it does We've created a virtual world that portrays a few of the most significant figures of color in the world. People can use this filter to show their support to these communities and individuals while learning famous quotes from Martin Luther King and George Floyd. How we built it For building this project we used software like Blender (3D modeling) and Spark AR. Firstly, we had to create the different assets for the effect. For that we used Blender. We created the 3D models of the personalities, then the environment, and finally the textures. The final step was to compile all the assets together while adding the different required animations. For the effect, we used the plane tracker and the animation sequence. Challenges we ran into We really wanted to make an impact in spreading awareness of racial inequality through this project, but had a difficult time finding the right way. To create our filter, we first had to make a 3D object using Blender, which was a really challenging task considering we were not so familiar with 3D modelling. Next came the Spark AR part, where we spent hours trying to find the right assets and objects and trying to make the filter look like what we had in mind. Even finding the right images took a significant amount of time. And, after all that, the filter looked nothing like we wanted it to. The positions as well as sizes were messed up, and we had to spend more time making it look good on a real device. But, after all that hard work, we finally made what we wanted, helped the community, and had fun along the way. Accomplishments that we're proud of Initially, none of us was acquainted with Augmented Reality, but after a lot of determination and hard work we ended up with a wonderful filter which unifies all the protesters against the discrimination and challenges faced by the Black community. Our team member Satvik Kapoor came up with a mind-boggling 3D model, which was certainly a proud moment for the team. What we learned We found that educational AR representations were beneficial for learning specific knowledge and increasing participants' self-efficacy. After gaining a ton of knowledge about Augmented Reality, we believe that AR can radically transform STEM learning by making challenging concepts accessible to students. We even gained some knowledge about animation and about the lives of Black people while researching them. Moreover, being introduced to Spark AR was an awesome experience for the team and we hope to continue gaining wisdom in this field. What's next for Black Lives Matter We intend to promote and create awareness about the movement at a large scale by uploading our filter to various social websites such as Facebook and Instagram. Built With blender sparkar Try it out github.com
10,008
https://devpost.com/software/newspickar
Inspiration The inspiration for this project came from a food policy course I am currently taking. In this course we talk about the struggles lower-income households face in accessing adequate amounts of nutritious food. Not only is there an economic barrier to obtaining adequate food, but a social barrier as well. We hope that by creating this filter, we can reduce the stigmatization associated with food bank usage and encourage others to donate in a fun and interactive way. What it does This filter gives users a fun way to choose which item to donate to their local food bank. By removing the burden of researching which items are most in need and making it easy to decide, we hope to encourage more donations. Additionally, the use of the filter may help inspire other people in the user's group of friends to also donate. How we built it This project was built in SparkAR with graphics done in Adobe Photoshop. We used SparkAR features such as Person Segmentation, Face Tracking, Emitters, and the Patch Editor for logic. Challenges we ran into Our original plan was to look up trends using a Facebook API, but when we discovered that API calls were not allowed, we needed to change our idea quickly. Accomplishments that we're proud of This was our first SparkAR hackathon, and half of our team had no coding experience (despite extensive knowledge of food insecurity). What we learned We learned that relatively simple filters can have a strong effect on changing user behaviour. People who tested our filter said that they were much more likely to donate now that they had something to help them decide on the item(s). What's next for FoodBankAR We plan to publish the effect and work with local food banks to raise awareness for donations. In addition, the accessibility of FoodBankAR allows us to easily spread awareness about this issue in the community at no net cost. This can be incorporated into future health awareness virtual conferences and webinars. Built With sparkar Try it out github.com www.instagram.com
10,008
https://devpost.com/software/tony-penguin-pro-skater
Inspiration We tried our best to think of an interactive effect that we could build with Spark AR that would be fun for the user and encourage them to share the resulting video/picture. We explored our team's likes, and skateboarding came up. An endless runner seemed to fit that idea naturally, and we skinned it in a silly, cool 2000s skateboarding vibe. The rainbow text is a throwback to the WordArt text of the early 2000s. The penguin is there because he's cute, silly, and we needed someone on that skateboard. :) What it does You tilt your phone to move the penguin player left and right and tap the screen to jump. You avoid the obstacles as long as you can and try to reach a totally gnarly awesome radical high score! How I built it Using Spark AR and Blender. Challenges I ran into Keeping under that 4 MB limit was tough. In the end we were able to fit it under 2 MB, but that took a lot of crunching of the textures and shortening of the music. It didn't take long for any 3D model to push past the size limit, so that was a struggle. The scripting was also a new paradigm for us, as we hadn't done reactive coding before. The Patch Editor was also interesting to learn. About 90% of the making of this was reading up on SparkAR's documentation for scripting and patches. Last but not least, we had to keep in mind all of Facebook's policies as we developed it. We found out about the instruction issue at the last minute, rather than doing custom text. Accomplishments that I'm proud of That we were able to produce something playable for the hackathon that was actually fun and what we envisioned. That's thanks to the power of SparkAR's Patch Editor. It can do a lot once you understand it. What I learned SparkAR's patch system makes it possible to get by with minimal coding. We also learned how to interact with objects in a way that makes sense in an AR space. You can't predict where the player will always be, so we had to make it make sense from several angles. What's next for Tony Penguin Pro Skater More obstacles to avoid and SFX! Credits Hiya Gupta Christopher F. Arnold Facebook Effect Link Built With ar particle Try it out github.com
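Illustrative sketch (not from the original project) of the tilt-to-steer control described above, assuming the Spark AR DeviceMotion module and a scene object named penguin; the names and tuning constants are illustrative assumptions.
const Scene = require('Scene');
const Reactive = require('Reactive');
const DeviceMotion = require('DeviceMotion');

Scene.root.findFirst('penguin').then(penguin => {
  // Map the phone's roll angle (radians) to a clamped left/right lane offset (meters).
  const roll = DeviceMotion.worldTransform.rotationZ;
  penguin.transform.x = Reactive.clamp(roll.mul(0.5), -0.2, 0.2);
});
// Jumping on tap would be handled separately, e.g. with TouchGestures.onTap().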
10,008
https://devpost.com/software/chill-roomz
Table Frame 03 Frame 04 Frame 02 Frame 01 Inspiration We drew inspiration from the great classic games based on escape room logic. What it does Opening the camera, you will find a desk in front of you; on the desk there is a scale model of the rooms to be completed, along with some useful notes and a laptop. Tap on the model to start the game. Once the game has started, you will be catapulted into the first room. Solve the puzzle to get to the next level. The first level is easy; you will recognize the clickable objects because they flash. Find the hidden key! How I built it We built ChillRoomz entirely in Spark AR Studio, making the most of the Patch Editor. Challenges I ran into The main challenge was to design the various game 'levels' in the Patch Editor. Also, hiding objects and designing paths to allow the user to complete the game was quite a challenge. Accomplishments that I'm proud of We are very happy to have managed to develop a compelling and, above all, not obvious or easy game path. What's next for Chill Roomz We are evaluating the idea of adding more rooms, and therefore more levels and more fun! Built With ar photoshop sparkar studio Try it out www.instagram.com
10,008
https://devpost.com/software/forbidden-mask
Fire part of the mask with flames Ice part of the mask with flames Fire part of the mask without flames Ice part of the mask without flames Inspiration I just wanted to learn something new during this lockdown, so I learnt the software from scratch to accomplish the assigned task What it does It gives users the ability to transform someone's face completely via a sophisticated mask which can change its appearance according to user input How I built it I built it using Spark AR Studio with the help of Adobe Photoshop and Adobe Illustrator Challenges I ran into I found it troublesome to put more than 2 instructions on the user's screen that were each visible for an equal amount of time. I also put a lot of effort into building PBR materials and utilizing the true potential of environment textures in the Materials tab. Accomplishments that I'm proud of I finally solved most of my issues on my own and also improved my existing project manyfold because of the community What I learned How to create a realistic-looking face mask in Spark AR with physically based materials What's next for Forbidden Mask I'll add an animation sequence to both the fire and ice parts of the mask and make the background respond to those changes as well Built With adobe-illustrator blender particle photoshop Try it out github.com www.instagram.com
10,008
https://devpost.com/software/stark-reminder
Snippet of Real-World effect Sanitizer Object in 3D Space Message Screen Objects in 3D Plane Patch Editor showing Animations and Object Gesture Inspiration While the world is dealing with a pandemic, i.e. Covid-19, there are some people who are not following the protocol of maintaining 6 feet of distance, using hand sanitizer, etc. The CDC has been publishing guidelines frequently about the importance of masks and sanitizers since the outbreak in China, followed by Italy and Iran, but unfortunately, even during lockdown, some people fail to follow these guidelines and are a threat to the public in general. I wanted to build a filter that sends a message to the user that sanitizers are important not just for their health, but for the hygiene of people around them! What it does When people move around, they will see viruses everywhere. Somewhere in between, there is a hand sanitizer that pops up a message when tapped. How I built it -SparkAR -AR Libraries Challenges I ran into Object and plane tracking becomes difficult if it involves 3D elements. SparkAR is pretty flexible that way. It lets users test the effects they built on their devices through the SparkAR Player app. Also, for publishing these filters, there is a guideline for both Facebook and Instagram. If your project size exceeds the limit, you won't be able to publish your effect on Spark AR Hub. Accomplishments that I'm proud of This filter is able to produce the effect that was intended, and the features do not look out of place either. What I learned I explored some really cool SparkAR features and I have used them in my project as well. I would like to explore some advanced features too! References: https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/cloth-face-cover.html#:~:text=In%20light%20of%20this%20new,community%2Dbased%20transmission . Built With javascript sparkar Try it out github.com
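Illustrative sketch (not from the original project) of the tap-to-show-message interaction described above, assuming scene objects named sanitizer and messagePanel; the names are illustrative assumptions, not necessarily the project's own.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

Promise.all([
  Scene.root.findFirst('sanitizer'),      // the 3D sanitizer object (assumed name)
  Scene.root.findFirst('messagePanel')    // the message screen object (assumed name)
]).then(([sanitizer, messagePanel]) => {
  messagePanel.hidden = true;             // keep the message hidden until the sanitizer is tapped

  TouchGestures.onTap(sanitizer).subscribe(() => {
    messagePanel.hidden = false;          // pop up the hygiene message
  });
});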
10,008
https://devpost.com/software/asdaw
Original Sketch example of the frame scaled up to the size of a 6ft door GIF first successful test of vertical plane tracking As part of our research we looked at world filters that were already available on Instagram, Facebook, and Snapchat for some inspiration on how to support a large social good and/or enrich communities. We found a filter by Maximkuzlin where users can upload images onto a billboard, and we thought this feature had the potential to be very powerful. When we have the user insert themselves into the filter, it allows them to shape what story they want to be heard and makes them more connected to their creation. This became the core of our project. While we were sketching ideas, our team imagined that users should be able to place their images on a wall in their home or blow them up to the size of a building. We thought about how it is common during a wake for the family to display photos from the loved one's life, and how this could be presented in a collage. A memorial collage became our target goal. Starting off small, we planned for the user to upload 5 images and have 3 different collage layout variants to choose from. Our team started writing some scripts to upload images to the filter, but we couldn't figure out how to start. In the "Facebook Online Hackathon" group on Facebook we asked the community where a good place to start looking for our answer would be. Thankfully someone responded and pointed us towards the Gallery Texture documentation. It seemed intimidating at first, but looking at some tutorials, it was actually a lot simpler than we expected. To our surprise, and in a happy turn of events, the Gallery Texture also supported video formats, which added another layer of storytelling at the user's disposal. The biggest challenge we faced when developing this filter was with Spark AR's native Plane Tracker, which out of the box is optimized for the horizontal plane. What's quite excellent about Spark AR is that you can extend and modify the available functionality of most things by writing code, and so we spent some time reading the documentation and writing our own custom JavaScript for tracking vertical surfaces (in this case, walls). To do this, we studied how the SceneModule works, what classes it ships with and the methods it works with. We were particularly interested in the PlaneTracker sub-module, along with its getters and setters. The final piece of the puzzle was the TrackingMode enums, unique keys that set the focal point of the plane object. For our project's purposes, we wrote a piece of JS code that looked for the plane object and found its trackingMode property. It is writable, so we set its TrackingMode enum to 'VERTICAL_TRACKING', allowing us to bypass the default plane tracker and map our world objects to vertical surfaces instead (see the sketch below). Once we had our base of uploading 1 image and our vertical plane tracking, we went back to our original sketch to include multiple images on different 3D planes in a collage layout. This was a more formidable challenge, however. We weren't getting reliable results from uploading the couple of images we chose, and the images were not going into the planes we wanted them in. Unhappy with our progress and with time running out, we begrudgingly had to drop the idea of allowing multiple images and, by extension, the idea of a collage. We still had a very valuable base; we just needed to adapt it a bit.
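Based on the approach described above, a minimal sketch of switching the plane tracker to vertical tracking from script might look like the following; the object name planeTracker0 is an assumption, and the writable trackingMode property and the 'VERTICAL_TRACKING' value are taken from the write-up itself.
const Scene = require('Scene');

Scene.root.findFirst('planeTracker0').then(tracker => {
  // The trackingMode property is writable; switch from the default horizontal
  // tracking to vertical tracking so world objects map onto walls.
  tracker.trackingMode = 'VERTICAL_TRACKING';
});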
Looking at artwork artists had made for George Floyd, we decided we should frame the user's image instead. We also found some interesting silhouette artwork and thought it could be a very compelling way to frame a photo. Once we learned more about alpha masks, we were able to achieve the silhouetted look we wanted. With both influences in mind, we made a handful of frame variants for the user to choose from. We were also able to find a way to keep the one photo the user chose and have it easily transfer to each of the frames. In the future we want to have a wide variety of frames for users to choose from that fit the memorial style they envision. Built With blender javascript sketchfab sparkar Try it out github.com
10,008
https://devpost.com/software/domino-picture
Default domino image for the peace Created my own domino using my gallery picture Inspiration People like to make their own creations. I like that too, and I especially like dominoes. Building a domino run is difficult: you have to think about what to make and how to make it. But it's very exciting when it starts to fall. Maybe most people like that too, but they don't want to build it themselves. So I thought that if they could make their own domino run easily and enjoy just the thrill of it falling over, it could be a really fun creative tool. What it does Users can build their own domino run using a photo from their gallery and enjoy playing with it. How I built it First, to make this effect I made domino models and used the world object template. In the script I made a class for each domino's data model to handle its falling angle and sequence, and I added a callback that starts an interval timer when the front pieces are touched, so I can control the dominoes' fall every time the callback is called. I used a Gallery Picker so users can choose the texture they want, and finally I applied a pixelated effect to the texture to get clean colors for the dominoes. Challenges I ran into I'm not a designer, I'm a developer, so creating the 3D resources was hard for me. While struggling with that I decided to generate them with a Python script in Blender, and of course I studied a lot to do it. Now I can make 3D models by myself! Furthermore, to optimize the file size, I controlled the dominoes' movement in the script; if it had been handled entirely by the model's frame animation, the size would have been enormous. What I learned As mentioned above, it became an opportunity to do 3D modeling by myself and also an opportunity to think like a user. What's next for Domino Maker I want to make more varied and complex domino shapes. Built With javascript patch-editor Try it out www.instagram.com
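A rough sketch of the script-driven domino sequencing described in "How I built it" above: a small per-domino data model plus an interval timer started on touch. The object names 'domino0'..'domino4', the tick rate, and the fall step are illustrative assumptions, not taken from the project.

```javascript
// Sketch of a domino chain driven from script (assumed object names).
const Scene = require('Scene');
const Time = require('Time');
const TouchGestures = require('TouchGestures');

const FALL_STEP_DEG = 9;   // degrees added per tick (assumption)
const TICK_MS = 33;        // roughly 30 updates per second
const toRad = (deg) => (deg * Math.PI) / 180;

// Simple data model: one entry per domino, tracking how far it has fallen.
class Domino {
  constructor(sceneObject) {
    this.sceneObject = sceneObject;
    this.fallenDeg = 0;
  }
  isDown() {
    return this.fallenDeg >= 90;
  }
  tick() {
    this.fallenDeg = Math.min(90, this.fallenDeg + FALL_STEP_DEG);
    this.sceneObject.transform.rotationX = toRad(this.fallenDeg);
  }
}

(async function () {
  const objects = await Promise.all(
    ['domino0', 'domino1', 'domino2', 'domino3', 'domino4'].map((name) =>
      Scene.root.findFirst(name)
    )
  );
  const dominoes = objects.map((obj) => new Domino(obj));
  let current = 0;
  let started = false;

  // Tapping starts the interval timer; each tick advances the current
  // domino and hands over to the next one once it is fully down.
  TouchGestures.onTap().subscribe(() => {
    if (started) return;
    started = true;
    const timer = Time.setInterval(() => {
      dominoes[current].tick();
      if (dominoes[current].isDown()) {
        current += 1;
        if (current >= dominoes.length) {
          Time.clearInterval(timer);
        }
      }
    }, TICK_MS);
  });
})();
```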
10,008
https://devpost.com/software/say-u9dnj5
The Tunnel Pics of others in front of tunnel doors based on background segmentation Inspiration I decided to join this Hackathon to learn SparkAR and started with a “virtual presence” effect idea based on the gallery texture to offer an escape for people who got isolated due to Covid. Few days later, George Floyd got murdered, and my psyche changed. Instead of working on a project to technically challenge myself, I decided to build an effect for paying some sort of tribute to Black Lives lost due to police brutality. What it does See, Hear, Say Their Names immerses the user in acknowledging 93 African Americans killed in USA under police custody. The effect does the following: • User starts the effect surrounded by a “tunnel” of which walls are filled with the scrolling names ,and hearing two voices reading the names. • User can move inside the tunnel in three directions and rotate her camera up and down to read different names. • While "in the tunnel", user can take photos/videos of other people "outside the tunnel" through the tunnel doors. • While "outside the tunnel", user can take photos/videos of other people "in front of " tunnel doors. How I built it I used the SparkAR plane tracker in a different way. To enable user to move inside the tunnel in three directions, besides rotating her camera, I needed the plane tracker. However, I wanted the effect to start immediately, not waiting for a plane detection in the environment, so that the user finds herself in the tunnel when the effect begins. Experimenting with SparkAR, I observed the following: When a plane tracker is placed into the scene even without placing any object under the plane tracker, a horizontal plane at the device’s starting position is assumed, and SparkAR’s environmental understanding tracks this point. I also wanted to offer the user some way of sharing her experience with this effect with other people by enabling her taking pictures/videos of people in the environment with the names on the tunnel walls. Since implementing this completely convincingly is still an open Augmented Reality problem, I implemented the following compromise: 1- When user is “outside the tunnel,” I used “SparkAR” background segmentation to take other people’s pictures as if they were standing in front of the tunnel doors covered with scrolling names. 2- When user is “in the tunnel," other people “cannot enter” the tunnel, but user can take their pictures/videos with the names as if they were outside the tunnel. I implemented three different approaches for the first case, including a portal approach (see the Challenges section) to get a feel which one is more satisfying as an experience. The one I submitted is as follows: Once the user is outside the tunnel, she sees the tunnel doors as two screens, and can take pictures of others in front of these screens. I chose this option since it eliminates the possible experience that user may see others walking through the tunnel walls. (Of course, user still may see herself may walking through the tunnel walls// under the current state of AR). User instructions: Not to interrupt the user’s experience especially inside the tunnel, I used only one instruction to remind about the audio. I also intentionally did not allow stopping he audio, but only lowering the volume by a screen tap, I believe AR experiences should be self explanatory and discovered by user. To create the audio, I used text2speech.org free text to speech converter. I took the names from NPR. 
Challenges I ran into If I had chosen not to have doors, a good number of the challenges I faced would have disappeared. However, for the type of feeling I wanted to evoke, I envisioned a tunnel-like structure with openings at both ends. To decide what the user should experience when she is outside the tunnel, I implemented the following approaches: First, I built a portal-like implementation by occluding the tunnel and leaving only the doors visible from outside, so that when the user is outside, the inside of the tunnel is only visible through the doors. Second, I made the tunnel wall material back-culled but the door material double-sided. When the user is outside and looking into the tunnel through the walls, she sees the inside of the tunnel like a portal, as in the first case. But looking from the side, the user can see inside the tunnel with that wall missing, which gives more background options for taking others' pictures. The third, and submitted, implementation is the one I described in the previous section. Not a challenge, but a time-consuming task, was transforming the textures so that they exactly fit the structure and remain readable. Accomplishments that I'm proud of I am most satisfied with making an effect that acknowledges 93 African Americans, from Eric Garner to George Floyd, who died because of police brutality in the US. Being "in the tunnel", surrounded by and hearing their names, has not lost its power on me even after playing the effect many times. I believe it can also have a healing effect on other users. What I learned I learned SparkAR. I teach classes free of charge, and I always ask not only “what did I learn?”, but “what can I teach?” SparkAR can be a great tool for teaching Augmented Reality. What's next for Say I’d like to make the name entry and audio of this effect dynamic, so that with the unfortunate increase in the number of lives lost to police brutality and other unjust causes, this effect can be easily updated. Built With javascript probuilder sparkar unity Try it out github.com
10,008
https://devpost.com/software/krupoorva-an-ar-tutor
Krupoorva User Space Interaction Prism Gravity Virtual Space Experience Inspiration Augmented Reality can change the way we learn concepts: it makes learning more fun, engaging and effective. Even the toughest topics can be made easy to visualize and understand with the help of augmented reality. The main intention behind creating Krupoorva is to help students learn concepts by bringing extra creativity, interactivity and engagement 😎 Educational institutes were among the first places that had to be shut down due to the current pandemic, leaving a gap between effective learning techniques and students. During the pandemic, educational institutions have adopted online teaching and learning methods, and this motivated us to bridge that gap using Augmented Reality. We thought of giving the whole process a new dimension by bringing the classroom into the user's space. With the help of augmented reality, students can learn concepts by seeing them in action and interacting with them anywhere, anytime! 😍 What it does Krupoorva, the personal AR tutor, guides the user to choose among the different physics concepts available. On choosing a particular concept, it helps the user understand it with audio and text tutorials. Along with that, the user can see the AR objects in action and interact with them. To make the whole process more fun, we have also added face filters that represent the concept the user is currently learning. The user can switch to any topic at any point simply by tapping on the AR tutor, which navigates to the available topics. We have covered the topics in physics that students usually find hard to visualize and understand. We believe that during times like these, when students are stuck at home and can't attend their institutions, AR can help enrich the process of learning and make it more fun 🎉 How we built it With Compassion, Healthy Teamwork, Creative minds, a Never-Give-Up Attitude & with ❤ using SparkAR Challenges we ran into 1. Plane tracker optimization ✈ 2. UX designing for the best experience 😁 3. The process of making the experience interactive and engaging required us to work intensively on logic development and scripting 🤔 4. Optimizing and reducing the size of the objects and resources 😫 Accomplishments that we're proud of We are super happy and proud that we could bring our idea of using AR to teach complex concepts to life. The use of AR in education will breathe new life into the learning and teaching process 👨‍🏫 A seamless experience across most devices, both those with device motion capabilities and those without, through what we call a Virtual Space immersive experience 😯 Logic development for each of the ideas we had for this project. A voice guide for easy navigation throughout the AR effect ➿ What we learned Honestly, we were both very naive when we first started working on this project. We started with the patch editor and then slowly learned all the components of SparkAR Studio as well as advanced scripting 💻 We have also taken advantage of newer features like block creation and sharing. 
It really helped us work collaboratively in a more efficient way 👩🏻‍🤝‍🧑🏽 We also learned how to work under tight deadlines and in a competitive environment 😈 What's next for Krupoorva - The AR Tutor Looking forward to bringing in more concepts and subjects to benefit a larger group of students 👨‍👩‍👧‍👦 Using more advanced features of SparkAR Studio to further enhance the current experience 💯 We have already started working on introducing new topics under each subject 😛 Built With blender photoshop reactive sparkar tinkercad typescript Try it out www.instagram.com www.instagram.com github.com
10,008
https://devpost.com/software/blockbuilder-ar
Thumbnail Starting Screen Built Pyramid Inspiration I've spent a lot of time playing with my one-year-old nephew during quarantine. If there's one thing I learned, it's that stacking things and knocking them down is fun , for anyone of any age. We decided to build an augmented reality experience to allow anyone to experience that nostalgia. What it does Our world effect provides an enjoyable simulator of a nearly universal childhood pastime. Build block structures that are limited only by the stretches of your imagination. Create a work of art and then throw a ball at it! Knock it to the ground! Take videos of your unique creations to share with your friends. BlockBuilder AR allows you to build towers with colorful building blocks, utilizing object physics to bring the creations to life. You can add blocks and change their colors. Grab and move them around by moving your device. When you're done with your creation, BlockBuilder features two destruction objects: a ball to throw at your tower from different angles, or a car that races toward your tower for a more dramatic destruction. Our effect unlocks that lost childhood joy of building something simple and creative that you’re proud of, and gleefully knocking it to pieces! Whether it’s an impossibly tall tower that defies gravity or an elaborate and colorful pyramid, it happens within your real surroundings. And the best part is: there’s no cleanup! How we built it The glue that holds this project together is the Spark AR Physics library , which integrates Spark AR scene objects with physics objects built with the cannon.js physics library. This allowed us to provide magical AR interactions that follow the laws that govern our real world. We started by creating our blocks in Spark AR Studio, and getting the physics library set up so that they can fall to the ground. We added our buttons to reset the scene, make new blocks, and turn on and off the gravity simulation. Since Spark AR does not include the capability to instantiate new scene objects, we created a pool of 25 blocks pre-added to the scene. We then developed the ability to move the blocks around by selecting them. At first we attempted to do this with a pan gesture that calculated the direction in which to move them based on the angle of the camera. We changed this and opted for a physical movement of the device because reactive math made the first method difficult, and it is also more intuitive to use. Next, we added our destruction objects . We added models for a ball and a toy car and the buttons to use them. The physics libraries allowed us to shoot them precisely and realistically. Finally, we utilized the patch editor not only for UI and audio purposes, but also for touch gesture interactions and simple animations that give our items more personality. It was an extremely useful tool due to the large scope of this project. Challenges we ran into A big challenge that we dealt with throughout the entirety of development was the challenge of binding all of the visible components and their invisible physics driven counterpart. It became clear very quickly that manipulating objects precisely while physics were enabled was going to be extremely difficult, so we made a design decision to do all the building before physics is enabled- which led to the implementation of the gravity button. This turned out to be a useful design change since we could make the blocks return to their pre-physics positions when the gravity button is pressed again, allowing for further editing. 
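As a rough illustration of the cannon.js integration described above, the sketch below shows the underlying pattern the spark-ar-physics library wraps: a physics world stepped on a timer, with body positions copied back onto Spark AR scene objects. The object name 'block0', the block size, and the manual binding are illustrative assumptions, not the project's actual code.

```javascript
// Simplified manual version of the Spark AR <-> cannon.js binding.
const Scene = require('Scene');
const Time = require('Time');
const CANNON = require('cannon'); // bundled with the project (e.g. via webpack)

(async function () {
  const blockObj = await Scene.root.findFirst('block0'); // hypothetical name

  // Physics world with normal gravity.
  const world = new CANNON.World();
  world.gravity.set(0, -9.82, 0);

  // Static ground so the block has something to land on.
  const ground = new CANNON.Body({ mass: 0, shape: new CANNON.Plane() });
  ground.quaternion.setFromAxisAngle(new CANNON.Vec3(1, 0, 0), -Math.PI / 2);
  world.addBody(ground);

  // Dynamic box body mirroring the visible block (5 cm cube, in metres).
  const half = 0.025;
  const blockBody = new CANNON.Body({
    mass: 1,
    shape: new CANNON.Box(new CANNON.Vec3(half, half, half)),
    position: new CANNON.Vec3(0, 0.5, 0),
  });
  world.addBody(blockBody);

  // Step the simulation and copy the body position back onto the scene object.
  const FIXED_STEP = 1 / 60;
  Time.setInterval(() => {
    world.step(FIXED_STEP);
    blockObj.transform.x = blockBody.position.x;
    blockObj.transform.y = blockBody.position.y;
    blockObj.transform.z = blockBody.position.z;
  }, FIXED_STEP * 1000);
})();
```

Turning gravity on and off, as the project's gravity button does, then amounts to starting and stopping this stepping loop and restoring the saved pre-physics positions.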
Another challenge that we had to overcome was the fact that none of us had any experience with reactive programming, and figuring out how to do even simple logic with the Reactive module was a challenge. Perhaps the most frustrating of the challenges that we encountered however was the fact that while oftentimes our code would work perfectly in the editor, it would completely break when running on our phones, much to our dismay. For example, getting the ball to stay fixed in front of the user before being thrown took several overhauls before it worked in any capacity on our phones, while almost every version worked almost flawlessly in the editor. Through many long nights of bug fixing and lots of teamwork we were able to iron out almost every bug and have the functional AR block building experience we dreamt of over a month ago! Accomplishments that we're proud of What we’re most proud of is the fact that we worked together as a team so effectively despite being separated due to covid. This was really the first time any one of us had worked collaboratively on a big project like this. Learning to use GitHub to make sure all of our additions merged smoothly was a huge learning curve, but we stuck it out and we were all able to flex our developer muscles. We also worked hard doing detailed reading of the documentation, as there are few Spark AR Studio tutorials to be found online due to it being such a new technology, especially regarding advanced scripting. The Reactive paradigm was brand new to all of us, and combining our implementation with the non-reactive cannon.js was tricky. With tenacity and lots of snacks, we were able to persist and overcome. What we learned First and foremost we learned how to use SparkAR altogether. This was our first time using such a tool. We learned all about patch editor, the way it talks to the script, and when it should and shouldn't be used. We definitely learned a lot about reactive programming and how to make it work with the non reactive cannon.js. But most of all we learned that teamwork really does make the dream work. Or, maybe it was just tremendous amounts of caffeine. What's next for BlockBuilder AR When planning this project we had high ambitions for a social experience where you could build structures with your friends in real time. Due to the fact that the Networking was unavailable at the time of this Hackathon, we weren’t able to implement multiplayer, but it’s still something we’d love to add. We’d like to add the capability to spawn different shapes, as well as new textures and even more colors. We are also planning to add an undo button and the ability to erase individual blocks. Lastly, more interactive destruction objects would make this even more fun than it already is! Built With cannon.js javascript node.js spark-ar-physics sparkar webpack Try it out github.com www.instagram.com
10,008
https://devpost.com/software/the-corona-game
Scan and Play Game Logic Map (in SparkAR) Inspiration It's often said that "Play is our brain's favourite way of learning." That's why I created an educational COVID-19 AR game on Instagram, with the intention of providing some baseline knowledge about this dangerous virus and the prevention needed to stop the spread and flatten the curve. As the game developed, I received extremely positive feedback from testers, especially children who used to go out and play in fields but are no longer able to. This AR game actually created a sense of awareness amongst the children and helped put a more tangible face to the virus and this very serious situation. In addition to the cause behind this effect, I also wanted to see how far I could push myself within the 4MB size limitation and bring a new perspective to the table by introducing something that has not been introduced on this platform before. What it does The COVID-19 game is a 20-second world-based educational augmented reality game that informs viewers about the SARS-CoV-2 coronavirus currently spreading around the world. It teaches various ways of preventing coronavirus, such as maintaining physical distance, washing your hands regularly, wearing masks, etc. The game starts when you tap on the floating icon. As soon as you tap, five different objects mysteriously appear in your 3D space and you have 20 seconds to scan through the environment and collect them all. At the end, a score out of 5 is displayed on your screen along with the preventive measures recommended by the World Health Organisation. How I built it I built the AR effect in Facebook's SparkAR platform throughout most of June 2020, spending roughly 56 hours writing the script, editing audio, Photoshopping textures, importing and optimizing 3D models (from the AR Library) and animating components. After importing all the assets I first created the initial screen and animated the objects using the loop animation and transition patches from the patch editor. Then, using the object-on-tap patch, I scaled up and placed all the 3D models and 2D planes in the surrounding environment. For counting the score, a counter increases each time you collect an item, and using multiple add gates the final score gets stored in a value which is then connected to the script that changes the score text on the screen. For the timer, a runtime patch is used which gets reset each time you tap the play icon. Then, through the "round" and "minimum (rounded runtime value, 20)" patches, the time is stored in a value which is connected to the script and displayed as the timer. Finally, the end screen is displayed when the timer runs out; it contains your final score and a remark corresponding to your score (by checking the final score using the "exactly equal" patch and displaying the corresponding text). Challenges I ran into One of the major challenges was to fit all the prefabs, textures, audio, etc. into the 4MB file size restriction for the entire effect. There were times when I had to remodel all the prefabs from scratch again and again just to save those precious few KBs. Removing unused textures also helped a lot in saving memory and optimising the game. Thinking out of the box to spread awareness of the coronavirus and its prevention in a non-boring, intriguing way was also one of the challenges I ran into, which was solved by developing this game in Spark AR and using its capabilities to bring a new perspective to the table. 
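The patch-to-script score and timer bridge described in "How I built it" above could look roughly like the sketch below: the Patch Editor exposes two scalar values via "To Script" patches and the script writes them into 2D Text objects. The patch output names ('score', 'timeLeft'), the text object names, and the countdown arithmetic are illustrative assumptions, not the project's actual wiring.

```javascript
// Sketch of reading patch outputs and updating on-screen text (assumed names).
const Scene = require('Scene');
const Patches = require('Patches');

(async function () {
  const [scoreText, timerText, score, timeLeft] = await Promise.all([
    Scene.root.findFirst('scoreText'),
    Scene.root.findFirst('timerText'),
    Patches.outputs.getScalar('score'),
    Patches.outputs.getScalar('timeLeft'),
  ]);

  // Update the on-screen score whenever the patch value changes.
  score.monitor({ fireOnInitialValue: true }).subscribe((event) => {
    scoreText.text = 'Score: ' + event.newValue + ' / 5';
  });

  // Show the remaining time, counting down from the 20-second round length.
  timeLeft.monitor({ fireOnInitialValue: true }).subscribe((event) => {
    timerText.text = Math.max(0, 20 - Math.round(event.newValue)) + 's';
  });
})();
```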
Accomplishments that I'm proud of Considering this is the only game I have built on any platform, I am extremely proud that I built it in less than a week and brought an intriguing new way of spreading awareness of the coronavirus and showing people how to take preventive measures against it. What I learned I learned how to use this software, design the game and implement it in less than a week. I also realized how powerful social media is these days. Especially during this pandemic, when everything is under lockdown, social media can be used heavily to spread awareness globally and empower people to begin preparing for a new (potential) reality of widespread infection. What's next for The Corona Game Next I want to randomize the spawn locations of all the items and add more stability. I also want to add more 3D models and new features, such as changing the screen colour gradient according to the number of Covid-19 patients in the area. Built With blender javascript photoshop sparkar Try it out github.com www.instagram.com
10,008
https://devpost.com/software/foldables
3D net of a cube View of the effect opened in Instagram View in Spark AR Inspiration Teachers often have to painstakingly draw out geometry on whiteboards to help students see what shapes look like in 3D space. Unfortunately, a 2D solution like a whiteboard or drawings on paper may not be ideal, especially for learners who have difficulty visualizing drawn 2D shapes in 3D form. I narrowed my idea down to using AR to solve this by displaying 3D nets of common solids which can be animated and controlled by the user. What it does Foldables presents learners with a choice of basic solids like a cube, cuboid, prism and pyramid. After tapping to place the net of the solid, the user can interact with it by tapping and moving it around and adjusting its scale. Long-pressing on the screen triggers the net object of, for example, a cube to start folding into a 3D cube. Lifting the finger from the screen reverses the animation. There is also a slider which the user can use to control the animation: the user can move the slider to any point and stop to inspect the solid being formed. The Native UI picker options at the bottom provide access to the other solids for the user to play around with as well. How I built it Creating the nets of the solids and their respective folding animations was done in Blender before being brought into Spark AR. I used the world sticker template in Spark AR to quickly get started with scene setup for a world AR effect. Challenges I ran into Implementing a way for the user to access the various types of nets for one solid, like the cube. I decided to have another 3D object in the scene that appears alongside the cube net (a round button with arrows on it); tapping on it switches between the various cube net layouts. This helped solve the problem. Accomplishments that I'm proud of Implementing the Native UI picker, the slider and the 3D button object to swap between different 3D nets of the cube was something I did not expect to fully achieve, but I am glad I managed to do it. What I learned Asset management (still learning this) and also how to use the option sender to toggle between more than the default 5 options so the user has more variety to play with and explore. What's next for Foldables To implement a greater variety of nets for the other solids (cuboid, prism, pyramid) and maybe add more types of solids of increasing visual complexity. Built With sparkar Try it out www.instagram.com
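For reference, a Native UI picker like the one described above can be configured from script roughly as in the sketch below. The texture names ('iconCube', ...) and solid object names ('cubeNet', ...) are assumptions for illustration; the real project may wire the picker up differently.

```javascript
// Minimal sketch: Native UI picker options mapped to solid nets in the scene.
const NativeUI = require('NativeUI');
const Textures = require('Textures');
const Scene = require('Scene');

(async function () {
  const [iconCube, iconCuboid, iconPrism, iconPyramid, cube, cuboid, prism, pyramid] =
    await Promise.all([
      Textures.findFirst('iconCube'),
      Textures.findFirst('iconCuboid'),
      Textures.findFirst('iconPrism'),
      Textures.findFirst('iconPyramid'),
      Scene.root.findFirst('cubeNet'),
      Scene.root.findFirst('cuboidNet'),
      Scene.root.findFirst('prismNet'),
      Scene.root.findFirst('pyramidNet'),
    ]);

  const solids = [cube, cuboid, prism, pyramid];
  const picker = NativeUI.picker;

  picker.configure({
    selectedIndex: 0,
    items: [
      { image_texture: iconCube },
      { image_texture: iconCuboid },
      { image_texture: iconPrism },
      { image_texture: iconPyramid },
    ],
  });
  picker.visible = true;

  // Show only the solid that matches the currently selected picker option.
  picker.selectedIndex.monitor({ fireOnInitialValue: true }).subscribe((event) => {
    solids.forEach((solid, i) => {
      solid.hidden = i !== event.newValue;
    });
  });
})();
```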
10,008
https://devpost.com/software/the-green-ribbon
The green ribbon Inspiration I have wanted to help friends and family with their mental health issues for a long time, but due to the stigma around the topic it has never been easy to talk about. What it does It is a simple filter which puts forth some mind-boggling facts about ignored mental health issues and some of the statistics associated with them. How I built it I built the filter using Spark AR Studio. Challenges I ran into Because mental health is a very sensitive topic, I am still categorising and filtering out topics that might trigger anyone. Accomplishments that I'm proud of I am still figuring out Spark AR Studio, so using it to build something like this is a huge accomplishment. What I learned I learned all the basics of the studio, the different capabilities of a project and how the script is integrated with the project. What's next for The Green Ribbon Oh, tons more! I hope to add more 3D and interactive assets, and add a way for people to help themselves by making a gratitude chart that helps them recall some good stuff! Built With javascript sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/quiz-mania-3pqj54
Inspiration Tell me and I forget, teach me and I may remember, involve me and I learn. Learning never exhausts the mind; it only gets boring. So, to make learning more interesting and engaging in a way that involves the user and keeps them entertained, I created Quiz Mania. It helps users enhance their learning by involving them in solving quizzes in a more interesting way, engaging them in the learning process and letting them know their score for further improvement. What it does Quiz Mania provides quizzes on various subjects, where one can select the subject of their interest. Questions appear on your screen and a whole lot of answers appear in the real world, where you have to search for the correct answer and hit that target. If you hit the correct target you are rewarded with a point, and if you hit the wrong target a point is deducted. At the end of the quiz you see your score and can share it on social media for comparison and further improvement. This method makes learning fun and keeps users engaged. What I learned Learning can never be fully expressed in words, but it was a great experience working with SparkAR. Working in the patch editor to create logic and calculations improved my logic-building skills. Implementing the complete idea in SparkAR made me more familiar with the software. Creating animations, models and UI improved my creativity and thinking skills. The more projects we build, the more they help with the development of future ideas and projects, and I am sure all the learnings during this period will help me further. What's next for Quiz Mania Further versions will include more levels, and I will try to make it more interesting and user-friendly. I will include some more topics and personalize it according to age and background. Built With blender paint3d photoshop sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/ctos-phone
Demo Picture Inspiration I am a huge fan of Watch Dogs which came out back in 2014. I always had this idea at the back of my mind, "What if you were able to get your hands on a Hacker's phone and be able to Profile People while walking across the Streets" and this is what I wanted to achieve with this Effect. What it does This effect gives you access to a CtOS Phone which is what the protagonist uses in Watch Dogs which enables him to hack into pretty much anything in his environment and profile People and extract confidential information about them. Let’s say you are walking across the street and people are walking towards you so when you activate this Effect, it blurs their face and displays a custom information board similar to Watch Dogs along with giving you that cool Hacker Phone feel. How I built it I created a Colour grading LUT in Adobe Photoshop and Adobe Lightroom which applies a bluish tint to the environment to give it the shady feel. Then I replicated the Watch Dogs information board animation in Adobe After Effects. I created multiple information boards for each Face Tracker so that the users don’t get bored (I’ve included 2). I created a Boot Animation so when the user activates the Effect, he actually feels the phone is booting into a Hacking Ecosystem. I have added immersive audio to back the effect. Challenges I ran into One of the biggest challenges was to actually replicate the Watch Dogs information board animation. It would have been a really tedious task if I would have gone for scripting and still maybe would have struggled to finish it. Instead, as I am comfortable with Video Editing, I chose After Effects and built it finally after 15 different layers and 6 hours of rendering time while keeping in mind that it is going to be pretty heavy in terms of size. Another challenge was to use the patch editor because this was the first time I have ever touched Spark AR but being an Electronics student, the logic was quite easy to pick up and I absolutely loved the experience. Accomplishments that I'm proud of This was my first ever Instagram Filter and I am happy that I was introduced to Spark AR. I had a lot of fun playing around with the patch editors, materials, textures etc. I am really happy that I was able to use my Video Editing and Illustrator skills and integrate that with the effect to make it even cooler. What I learned This was my first step into the World of AR, as I am more of a Web and App Development Programmer with absolutely no AR knowledge. This Hackathon taught me what are the factors which play an important role in AR, how do various objects work, choosing between 3D and 2D models in various situations, how to integrate my own works into AR etc. Overall it was a really fun experience and I am definitely looking forward to participate in the upcoming AR Hackathons. What's next for CtOS Phone One of the features I have been thinking about is building an interactive bridge in between people who have the Effect activated on their phones. They can challenge each other within a particular radius and play mini Cyberpunk themed games against each other in the AR world. Built With aftereffects lightroom photoshop premierepro sparkar vegas Try it out www.instagram.com github.com
10,008
https://devpost.com/software/project-crimson-pearl
Inspiration Considering the current epidemic, we wanted to make something positive and fun to improve people's social experiences while maintaining adequate safety. We were inspired by emojis and how they allow users to better communicate and express themselves via text. What it does Pearls is a World AR effect that lets users add smiles to up to five mask-wearing family members and friends. How we built it We used the Spark AR face mask demo as a starting point. Unfortunately, there isn't a block for the position of the mouth, so we used the face finder block and positioned the 2D image from the center. Our 2D images were found on the internet and cropped to our needs using GIMP. Also, we added the UI picker block to allow users to change the texture/2D image. Five face finders where used so that up to five individuals can be in the same picture/video. What we learned Spark AR is a flexible and well-designed app with great tutorials and community support for building out a variety of AR effects. Blender is a powerful but complicated solution for building 3D models. What's next for Pearls The Pearls team has already created 3D models of teeth and lips to animate within Spark AR to create 3D smiles and is working on perfecting and importing a sufficient armature/rigging system to properly transform our 3d models within Spark AR based on eye movements. Challenges we ran into One hurdle was understanding what the FaceFinder block uses to track a face. Was it possible to find a face with a mask? Would it find a face from a side profile? We found that the algorithm focuses on the eyes through trial and error, and we were able to place a 2D image at a fixed position on the face. We also used Blender for the first time to build a 3D model of lips and teeth to use within Spark AR. Learning how to use the Blender application, build a model, and incorporate sufficient armature consumed a substantial amount of time, and we've been unable to determine how to transfer the armature as created in Blender into Spark AR without issues. When trying to import our Blender creation, the OBJ format excluded the armature and the FBX format simplified the armature by automatically removing joints and limbs. Accomplishments that we're proud of: We believe we have created a simple and effective solution to allow users to express themselves safely in a world with Covid-19. Built With adobe gimp sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/aretail
Portal door with occlusion effect Virtual shopping experience Covid-19 safety guidelines Inspiration The current Covid-19 situation has put a hold on a lot of things we assumed were normal. Going to a store and picking up clothes is one of them. Online apparel shopping currently results in a whopping 40% return rate because users do not get a real sense of how a dress might look until it is in their hands. What it does We bring the physical store to the user with the power of Augmented Reality. ARetail, as we call it, enables users to visualize the clothes they like in a store-like experience. This helps users make a more confident purchase decision, thus reducing returns. How we built it Spark AR Studio's documentation helped us get around the interface and features very quickly. We designed a 3D boutique in SketchUp, added proper materials in Blender and brought it into Spark AR Studio. The sample template gave us a quick way to get the basics ready with respect to plane tracking, so we could quickly focus on the storyboard and content placement in the scene. Logic was applied using the Patch Editor, which seemed better and faster than having to code everything up in JavaScript. Challenges we ran into The 4MB size limitation for Instagram is a painful one, and we learned that the hard way. We had to strip a lot of detail from our 3D environment to bring the size and poly count down, and then further reduce the mesh resolution of the mannequin and clothes using decimation in Blender as well as Simplygon. The next major challenge is that the plane tracker feature isn't quite stable. There are a lot of stability issues, especially on Android. iPhone seems to have more stable tracking, but I believe SparkAR has a lot of work to do here to fix the issues with SLAM, and maybe use the native ARCore and ARKit APIs for ground plane detection and spatial awareness. Accomplishments that we're proud of What we learned Learning a new tool is always challenging and fun, and Spark AR Studio was no different. We even fiddled around making funny facial effects with goofy glasses and realized how easy it is to get started, but to perfect an effect you need a lot of creative skill. What's next for ARetail We want to get a lot of reach for our AR project with Spark AR and show that social media is the true powerhouse in enabling the new normal, even in retail businesses, while also making e-commerce more efficient. We will also work on more demos with SparkAR and maybe find a way to monetize our skills. Built With blender simplygon sketchup sparkarstudio Try it out github.com www.instagram.com www.facebook.com
10,008
https://devpost.com/software/distortion-world
Distortion World cover Inspiration VFX of Inception and Doctor Strange What it does Distorting the ground in the real world to provide a new view of the world How I built it Transformed the vertex coordinates of the plane detected by the plane tracker into the clip space Deformed the vertex position. Challenges I ran into Converting world space to clip space Vertex displacement UI/UX design for this filter Accomplishments that I'm proud of Found a new way to use SparkAR What I learned Vertex shader in SparkAR Matrix transform What's next for Distortion World Distort buildings Built With matrixprojection shaders sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/oculus-tasks
Icon Work, not just Jobs: Helping AR Help People Help Others ** AR can help unskilled people learn and perform tasks of all sorts. And 30-40 million Americans face job losses right now due to Coronavirus. So let's match them with work, and not just jobs. Look, it takes years and years to train people to do work. From taking care of the elderly, or working in healthcare or public health (e.g. contact tracing), assembling furniture, or simpler tasks like picking out groceries to bring to people during social distancing -- there's a lot of work and a lot of help needed. And like new two-sided marketplaces for labor, like Amazon Local Services, Uber or TaskRabbit, instead of needing a $30,000 car and expensive carbon-based fuel, or a $30,000/year education... all you'll need is yourself, an Internet connection, and an AR-capable device -- to do all sorts of useful tasks on behalf of other people. And ideally get paid for it. What it does AR Tasks solves for two classes of users: Customers and Workers. Customers can specify or select from pre-entered Task sets. These may have suggested prices for the total Task set to be done. For example, picking out a set of vegetables from the greengrocer or fresh air farmer's market could be a $5 task. They then enter payment information and this payment is held in escrow, along with an administrative fee, until a Worker is found and can be assigned to the task. The Customer can also specify if they want a new worker or an experienced one, see reviews, and potentially set thresholds on what level of experience they will require -- but also pay more for more experience if they want. Workers can choose to work on certain Task types, see estimates of how much time it'll take to do the work from previous Workers, and what minimum amount they'd like to be paid for the work. For example, they might see a Task for a neighbor asking to pick up some vegetables from the Farmer's Market for $5.00. They might even know the neighbor on Facebook, so they'll do the job for free since they'll be walking past their house today anyway. But they don't know what the vegetable looks like and aren't familiar with shopping at the Farmer's Market. The Worker can also update the Customer with status, and ask questions for clarification especially if there are unexpected events. In this example, the Worker launches the Tasks app on their AR device and navigates to the Farmer's Market down the block. They then see a task list - a simple description of the item accompanied by an image, in this case an image of the vegetable the Customer wants. The farmer is offering a great two-for-one deal. The Worker asks the Customer if they want an extra vegetable; the Customer doesn't reply quickly so the Worker picks up the extra vegetable and thanks the Farmer. They point the camera of the AR device at the item as they're completing the task, and confirm that these are the same thing. They use the Tasks credit card to pay for the item with the Merchant (Tasks has already escrowed the funds), and then they head to the Customer's house -- who happens to be a neighbor on Facebook -- and they drop off the vegetables. They then take a photo of the completed task (in this case, the deliver on the stoop of their neighbor's house) and the funds are transferred to their account. In this case, the neighbors are friend. So the Worker decides to donate the proceeds ($5.00 in this case) to a charity they care about: the COVID-19 Solidarity Response Fund at the World Health Organization. 
The Worker shares this on Facebook with their friends so their friends can donate too. Finally, another 3rd class of users, "Sponsors" can sponsor Tasks to be performed. Maybe you're a health insurer, and Dr. Freddy Abnousi from Facebook is wondering how Oculus AR could help health insurers. Well it turns out you'd love to make sure that elderly people can recover safely at home, rather than in hospitals or even nursing homes. AR devices can help make this happen, by dividing up tasks and letting AR Tasks source the very best people to take care of elderly patients who need more help. How I built it I used SparkAR Studio to load in graphics assets that contained the "Task List" for the Worker. I used a simple Google Form to handle Customer and Worker requests for the time being, as I don't know how to code very well (and ran out of time to re-learn). But I can have people do the back-end matching for now, and Google Forms has a back-end API that I can eventually integrate into. I can also modify the Form quickly without production interruptions. Challenges I ran into SparkAR Studio didn't really have a way for me to add in arbitrary assets e.g. Textures (2D graphics), which I wanted to use to render the Task list and its updates. I wanted to use SparkAR to display dynamic text from another application, and dynamic images in order to show people instructions or a new photo of a vegetable/shopping target etc. Accomplishments that I'm proud of I did this project with my daughter, who, with my other daughter also, really wants a puppy. She is not old enough to enter the competition but I really wanted her to get an idea of the new world of Augmented Reality. Her name is Kimberly and she's learning computer science as a freshman. What I learned I learned that AR is not the easiest thing to work with right now. The tools are built mostly for Instagram and Snapchat-type filters. On the other hand, tools like Microsoft Hololens are almost over-built for super enterprise-y uses. But there's almost nothing in between. My guess is that Apple's AR device will probably land right in the sweet spot in between, which will be a great opportunity for Spark AR Studio and other frameworks like Spark AR. What's next for Oculus Tasks I could use some more teammates. I couldn't really find anyone who wanted to code with me or join my team, and I'm not good at coding myself -- I'm a doctor, after all. Also I'm not sure my current employer will be too happy with my side project. But I do hope that I can get involved in stuff like this; it seems like the right thing to do, and a way to help lots of people in need do work, get paid, and help others. Especially in healthcare and aging -- we don't have enough people to help what the world needs right now, let alone the aging population even before this pandemic happened. I hope that AR Tasks and/or something like that can come online quickly. Built With ar forms google particle Try it out sites.google.com
10,008
https://devpost.com/software/grow-your-flower
Grow Your Flower! Step 1 Step 2 Step 5 Step 4 Step 3 Inspiration This AR effect is inspired by the popular toy from Japan, Tamagotchi, where one can nurture and grow their own pets in a digital device. Instead of growing a pet, our team decided to allow everyone to experience growing flowers easily as growing one in real life is hard. We hope that this filter can get more people to reconnect with nature and even encourage themselves to grow a real flower in real life. What it does This effect allows the user to place up to 3 flower pots on a flat surface, growing up to 3 different types of flower at one time. The first time the effect is used, only 1 pot will be seen and 1 type of flower will grow randomly to full once the user taps the pot and animation of rain finished watering the pot. The 2nd time the effect is open, the user will see 2 pots and be able to grow 2 different types of flowers. A 3rd pot will appear 3rd time the effect is used and the user will be able to grow 3 different types of flower. The flowers will be selected randomly among 5 different types of flower, each with 4 different colours of combination. Altogether, there are 20 different flowers that the user can grow. On each use, events such as rainbow & butterflies appearing will occur randomly to provide surprises and encourage repeated use. UI buttons are also available for the users to choose their own texture for the flower pots. How we built it The project is divided into 3 parts among the team. Eri modelled all the 3D flowers, pots and prepared all the texture assets (including visual design) needed for the project with blender. Boon imported in all the assets, create lighting shaders of the flowers with patch editor and organises them in blocks for easy importing. Finally, Pofu compiled all the blocks and program all the logic of the effect in Javascript, including using the persistence module to check how many times the effect has been used by a user. Challenges we ran into 1) Working together remotely The team consist of Eri from Japan, Pofu from Taiwan and Boon from Singapore. Hence, it is challenging to find a common time to meet & discuss on top of our day job. Thanks to the internet, this challenge is manageable though. 2) Compiling the projects using blocks Blocks are heavily used in this project for Boon to prepare the shaders of the 3D assets before passing it on for Pofu to finalise the project with scripting. However, we encounter several errors when Pofu tried to import the blocks that Boon has created into his project file. In the end, Pofu had to redo his part using Boon's project file because the blocks can't be imported into a different project file for some reason. Accomplishments that we are proud of We are proud that all 3 of us managed to connect and collaborate together even though we are based in 3 different countries. Even more so, we are proud to have completed and submitted the project as a team. What we learned We learned how to collaborate with creators of different skills and combine our forces to create an AR effect. What's next for Grow your flower More random events can be added Built With sparkar Try it out github.com
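The "how many times has this effect been opened" check mentioned above can be sketched with the Persistence module as below. The storage key 'flowerState' and the cap of three pots are assumptions for illustration; note that the key also has to be whitelisted in the project's Persistence capability.

```javascript
// Sketch: count effect launches with the Persistence module (assumed key name).
const Persistence = require('Persistence');
const Diagnostics = require('Diagnostics');

const KEY = 'flowerState';
const storage = Persistence.userScope;

(async function () {
  let useCount = 0;

  // Read the previous count if the user has opened the effect before.
  try {
    const stored = await storage.get(KEY);
    if (stored && typeof stored.useCount === 'number') {
      useCount = stored.useCount;
    }
  } catch (e) {
    // First run: nothing stored yet.
  }

  // Increment and cap at 3, matching the "up to 3 pots" behaviour described above.
  useCount = Math.min(useCount + 1, 3);
  await storage.set(KEY, { useCount: useCount });

  // useCount (1..3) can now drive how many pots to reveal.
  Diagnostics.log('Pots to show: ' + useCount);
})();
```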
10,008
https://devpost.com/software/the-spark-arms
Snug View Bar View Patch Graph Mood Board Instant Gaming View The Spark Arms is a virtual meeting place for the members of the Spark AR community. Covid 19 has seriously impaired peoples ability to meet in person but for global online communities this was already a big challenge. The Spark Arms Instagram effect will give our community a taster of the space which I am going on to build in full immersive VR for the Oculus. Virtual meeting places will become increasingly valuable to communities and organisations as the uptake of immersive tech continues to rise but, currently, VR spaces that are available tend to be very generic which is why I foresee a huge rise in demand for branded digital environments. Looking at the success of recent virtual events such as the Travis Scott experience suggests that virtual branded environments within established digital channels such as Fortnite offer brands a huge potential revenue stream. The Spark ARms may be a digital space, but what if it wasn't? What if half of this space was built in the real world and the other half in AR. I've deliberately designed the space to include the physically possible and the physically impossible as I believe AR can be incredibly powerful when designed in tandem with interior spaces and architecture as opposed to being applied afterwards. I have a background in retail design and have wanted to create a branded environment in Spark for a while now. My process started by creating mood boards for the space which not only represented Spark as a brand but also the Spark community and it's creators. I then went on to design the space in plan and elevation before modelling and rendering in 3D. Once I was happy with the 3D renders I set about optimising the models and building the space in Spark. Once this was complete I used the patch editor to program the interactions. Unfortunately I was unable to get three key interactions into the space as I ran out of time. These were a particle based painting/drawing feature allowing users to concept face effects on the 'sketch wall', a working laser darts game and a working ping pong game in the 'instant gaming' zone, although I will be adding these in future. Built With blender particle photoshop vectorworks Try it out www.instagram.com
10,008
https://devpost.com/software/spacex-crew-dragon
Falcon 9 B5 Payload and the astronauts Inspiration Space exploration, in essence, is pure curiosity and the urge for survival. On May 30th, SpaceX and NASA made history with the launch of Falcon 9 Block 5 rocket to the ISS. It’s one of the most important missions in human history as it was the first time, a reusable rocket was used in a mission sending astronauts to orbit. The F9 B5 carried the astronauts from Kennedy Space Centre to the International Space Station. The whole mission was a remarkable idea on paper which came to fruition. What it does Our filter explores the journey and tries to visualize it with 3D models and animation. We wanted to simulate the experience and also teach about the mission to others. AR is one of the best tools for teaching topics that can be better explained visually. Anyone with access to Instagram or Facebook will be able to visualize and understand this piece of history using this filter. How we built it SparkAR was used as the filter builder and we designed the 3D models in Blender. Challenges we ran into Stabilization of the filter Using animations in Spark AR Accomplishments that we're proud of We are really proud of being able to choreograph this piece of history into a medium that is one step higher than watching a video and showing the capability of AR in learning. What we learned It was a profound learning experience for us to build something from scratch. We also learnt about how hard it is to make a polished filter. By trying to simulate the mission, we came to know about how previous launches stacked up to this one. What's next for SpaceX Crew Dragon AR We plan to make the whole experience more intricate and detailed as possible. Citations SpaceX Demo - 2 Mission: https://en.wikipedia.org/wiki/Crew_Dragon_Demo-2 Falcon 9: https://www.spacex.com/vehicles/falcon-9/ NASA: https://www.nasa.gov/ Built With blender sparkar Try it out github.com www.facebook.com www.instagram.com
10,008
https://devpost.com/software/plants-r-us
Small Plant Large Plant Medium Plant Inspiration There are so many benefits to having plants at home. Plants in your home can boost mood, productivity, concentration, and creativity. With many of us sheltering in place during the COVID-19 pandemic plants can improve the environment where we are spending a lot more of our time. What it does This filter aims to help you determine places in your home where you might want to add a plant to improve the environment. It provides a small selection of plants that you can add to your world. You can resize them and take photos of the places in your home where a plant could be a good fit. How I built it I built it by leveraging the Spark AR World Effect template to start then using documentation to learn to add the Native Picker UI. I imported plant images from the AR Library built into SparkAR via SketchFab. Challenges I ran into I ran into a challenge swapping out objects when the picker was chosen. I looked online and eventually found a helpful youtube video . the example code uses deprecated APIs so I used the SparkAR documentation to learn about the new methods I should be using to improve my code. I also ran into issues trying to get display images for the 3D models I was using in the effect. I wanted to use these images as the labels for the picker buttons so that people can see the plant options. Initially, I had trouble finding a 3D object viewer for my Mac but later I found an online option via AutoDesk that I used to create screenshots of the models. Accomplishments that I'm proud of I am most proud of getting the basic effect working in only a few hours although I've never used the SparkAR platform before. There were moments where I was stuck and wanted to quit but I kept reading and experimenting and now that I've had a few breakthroughs I'm sure I can build so much more! What I learned This was my first effect so I learned a lot about the SparkAR platform and how to create filters. I also learned a lot about where to get help and how to use patches and scripting to add interactivity to my effect. The SparkAR Community provides a lot of support and inspiration. There are also a lot of videoes tutorials online to see demos on how to create different things. What's next for Plants at Home Allow people to place multiple objects in the world at the same time (not sure it's possible) Improving the variety of plants Ensure new items spawn in the person's field of view Improve the picker icons to display the plant images Built With instagram javascript sparkar Try it out github.com www.instagram.com
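The resize interaction mentioned above typically follows the pinch-to-scale pattern from the Spark AR world effect template, sketched below. The object name 'plant' is an illustrative assumption.

```javascript
// Sketch: pinch gesture scales a placed plant relative to its size at pinch start.
const Scene = require('Scene');
const Reactive = require('Reactive');
const TouchGestures = require('TouchGestures');

(async function () {
  const plant = await Scene.root.findFirst('plant'); // hypothetical name

  // Snapshot the scale when the pinch begins, then multiply by the gesture scale.
  TouchGestures.onPinch().subscribeWithSnapshot(
    {
      lastScaleX: plant.transform.scaleX,
      lastScaleY: plant.transform.scaleY,
      lastScaleZ: plant.transform.scaleZ,
    },
    (gesture, snapshot) => {
      plant.transform.scaleX = Reactive.mul(gesture.scale, snapshot.lastScaleX);
      plant.transform.scaleY = Reactive.mul(gesture.scale, snapshot.lastScaleY);
      plant.transform.scaleZ = Reactive.mul(gesture.scale, snapshot.lastScaleZ);
    }
  );
})();
```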
10,008
https://devpost.com/software/spark-ar-workout
Inspiration Well, its always said that when you are thinking about an idea one should be open-minded for one doesn't know when and where the inspiration may come from. For Building Tizi Tizi My inspiration came from youtube. Yes, Youtube Video just came across a youtube video as I was searching for some inspiration and I thought, what a good Idea can it be to have a character exercising in AR and Boom the Idea was Born. What it does Spark AR Workout employs the power of Facebook SparkAR by using the camera to superimpose Kocha the Digital Exercise guru into your world. The Tizi Tizi workout Shows the right way of doing a different kind of exercises, helps make a light moment with a friend for sharing on social media by trying to imitate what Kocha the digital character is doing as the model can be scaled up and down and can be moved around by a tap. For Tizi Tizi it’s all about having fun while working out How I built it We built Tizi Tiziwith Facebook Spar AR Augmented reality application as our main tool while employing other tools that helped customized some of the assets that we used in building the platform. Our effect employees the use of animation in fbx file with the different animation saved within one fbx file. For the audio considering that the app needed to be small enough to achieve the required size of 4mb for Instagram effect, we did quite a double compression of the .M4A Mono audio so as to be able to reduce the size to a minimal. We found this site to be good in audio compression link For the Character used we created the character in adobe fuse, customized it to our liking and then exported it to the online adobe rigging website mixamo where we did an auto rig and downloaded the rigged character ready for animation. Using motion capture clips provided by we managed to create one sequence of our character in 3d studio max which we used to create our fbx file that we used in our effect. For the textures, To make them small enough we converted all our textures to .jpg from .png and omitted using some of the texture maps like specular since we didn’t see the need of them as we were ok with just the diffuse and the normal map. In order to manage to achieve minimal texture size we compressed them using Photoshop and did a second compression online on the site link and link Challenges I ran into Well, well, well The first challenge that was that all the animation that we had could not fit into our effect as it was making the effect be out of range of the required size. So we opted to remove the majority of the effect and be left with a few that will help us prove our concept Secondly, we would have loved to have different audio sunk to the animation but the issue of escalating the size became an issue to, so we opted to just remain with a single track as per now and maybe consider doing some additional input in the near future Re-thinking the whole idea from the initial thought and conceptualization as I couldn't achieve what I had in mind first When uploading the effect on Sparkarhub I kept getting an error that an un know error has occurred while processing the request while saving and couldn’t manage to find a way around it for hours luckily I figured out that my effect had already been saved by going back to the effect panel Accomplishments that I'm proud of Through the problems stated above, We managed to learn a lot as when something wasn't working we had to rethink on how to achieve it. 
The biggest achievement we had is seeing something that we conceptualized from scratch come into being. Becoming super compressors: with the challenges we faced, we learned a couple of ways of achieving good compression while still maintaining good quality in our assets. Learning to use a couple of nodes in the patch editor was also an achievement, as was managing to optimize the app size. We are very happy to have achieved Spark Tizi Tizi_V1; at least it gives us hope that we are grasping the ins and outs of Spark AR, and it's just a matter of time before we build more awesome effects and applications. What I learned Number 1: troubleshooting and being patient. Using animation in Spark AR. Customizing assets. What's next for Tizi Tizi As more integrations and features are added to Spark AR, we will be looking to add more features and to learn from other creatives, techies, and end users what they think we can improve on, as we continue to build on it and look at how it can be more widely adopted by the community. Built With 3ds-max adobe-audition adobe-fuse adobe-illustrator mixamo photoshop sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/we-re-all-water-anyway
Inspiration With the elevated racism and race matters in the world right now, we wanted to speak to the issue from a different perspective. To avoid being too political and to provide a more lighthearted experience, we came up with "we're all water, anyway". Water is something that everyone in the world interacts with. The earth is 71% water, our bodies are 60% water...we are all water. With all that's going on, we want to remind each other that we are all one. Because in the end...we're all water, anyway. What it does As the rain falls from the sky, it turns into puddles on the "ground". When users tap on the screen, our message appears on the ground from the water ringlets that form. How we built it (1) Starting with typography design, we selected a typeface that reflects fluidity and water. Next, we modelled the text and water ringlet in Maya. (2) We used the particle emitter to simulate rain falling from the sky. (3) Animated the text and water ringlets. (4) Added a screen tap gesture that creates "puddles" where one word of the message appears with each ringlet. Challenges we ran into Coming from the different disciplines of design (Amy), animation (Niru), and programming (Laith), we teamed up to build in Spark AR. However, even with the three of us combined, we were either beginners or completely new to the program. Our challenge was figuring out the parameters within which we could create according to our vision. Accomplishments that we're proud of Learning the software in such a short time and watching our idea come to life in Spark AR. What we learned We all got a feel for what Spark AR can do. What's next for we're all water, anyway. Our project was put together in one week's time. In the future, we hope to refine our skills and elevate the experience. Keep up with us: Amy - amywu.ca // Niru - instagram.com/niru67 // Laith - laith.wtf Built With maya sparkar Try it out www.instagram.com github.com www.instagram.com
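A minimal sketch of how the tap-to-reveal interaction described above could be scripted in Spark AR; object names such as word0-word3 and ringlet0-ringlet3 are assumptions, and the actual effect may have been built with patches instead.

```javascript
// Sketch: reveal one word of the message (and its puddle ringlet) per tap.
// Assumes scene objects word0..word3 and ringlet0..ringlet3, and that the
// Touch Gestures capability is enabled in the project.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

(async function () {
  const words = await Promise.all(
    ['word0', 'word1', 'word2', 'word3'].map((n) => Scene.root.findFirst(n))
  );
  const ringlets = await Promise.all(
    ['ringlet0', 'ringlet1', 'ringlet2', 'ringlet3'].map((n) => Scene.root.findFirst(n))
  );

  // Start with everything hidden.
  words.forEach((w) => (w.hidden = true));
  ringlets.forEach((r) => (r.hidden = true));

  let next = 0;
  TouchGestures.onTap().subscribe(() => {
    if (next < words.length) {
      ringlets[next].hidden = false; // puddle ringlet forms...
      words[next].hidden = false;    // ...and the word appears inside it
      next += 1;
    }
  });
})();
```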
10,008
https://devpost.com/software/sdfsdf-z51fy4
We have come up with an interactive game for the backpacker community. We've tried to stick to what most backpackers and travelers seek: the thrill and adventure of traveling solo or with friends. The game starts on a screen tap with a distressed parajumper trying to land safely; the user has to blink in order to rotate the parajumper until he's in a standing position. The user gets three tries, based on which the score is counted. On a successful landing, the score is incremented and another parajumper starts falling. At the end of the game, the user can tap on the screen to reset the score and restart the game. We added the Screen Tap patch at the beginning to start the game, and also the Background Movement patch to constantly move the background and create a more immersive and fun experience. To detect eye blinks we added an eye blink patch that returns true if the user blinks. We also added a player rotation patch which increases the rotation speed every round, making it harder for the user to stabilize the parajumper. For incrementing the score, we added a Scoring patch to monitor the user's score throughout the play duration. We also added some Logic patches to detect when all three jumpers have completed their landings, after which we redirect to the game-over screen where the user can tap the screen to reset and restart the game. Built With sparkar Try it out www.instagram.com github.com
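For illustration, here is a hedged sketch of how blink-driven rotation could look if done in script rather than the eye blink patch the write-up describes; the object name and rotation step are assumptions.

```javascript
// Sketch: rotate the parajumper by a fixed step each time the user blinks.
// Assumes a scene object named 'paraJumper' and the Face Tracking capability.
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');
const FaceGestures = require('FaceGestures');

(async function () {
  const jumper = await Scene.root.findFirst('paraJumper');
  let angle = 0;
  const step = Math.PI / 8; // rotation applied per blink (assumed value)

  FaceGestures.onBlink(FaceTracking.face(0)).subscribe(() => {
    angle += step;
    jumper.transform.rotationZ = angle; // rotate toward a standing position
  });
})();
```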
10,008
https://devpost.com/software/pot-quiz
Logo Inspiration Creating a funny pop quiz app using AR, with feedback for each correct answer. What it does The application rewards you by increasing the plant size with each correct answer you get and gives you a score for precision. How I built it Using Spark AR. It was initially going to be a pop quiz using random general-knowledge questions called from an API, but now it is random mathematical equations to be solved. Challenges I ran into Facebook has closed the Networking API, so we had to change to a local mathematical approach. Accomplishments that I'm proud of Managing the risk of the situation. What I learned Using Spark AR. What's next for Pot Quiz Creating the initial version of the game vision using random knowledge questions. Polishing the game: adding sound, adding an animation for the flower when resizing, adding better particle effects. Built With blender javascript sparkar Try it out github.com www.instagram.com
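A small sketch of how locally generated math questions could replace the API-based trivia described above; the function and field names are assumptions, not the project's actual code.

```javascript
// Sketch: generate a random arithmetic question and check the answer locally,
// replacing the remote trivia API that is no longer available.
function makeQuestion() {
  const a = Math.floor(Math.random() * 10) + 1;
  const b = Math.floor(Math.random() * 10) + 1;
  const ops = {'+': a + b, '-': a - b, 'x': a * b};
  const symbols = Object.keys(ops);
  const op = symbols[Math.floor(Math.random() * symbols.length)];
  return {text: a + ' ' + op + ' ' + b + ' = ?', answer: ops[op]};
}

// Usage: reward a correct answer, e.g. by growing the plant.
function checkAnswer(userValue, question, onCorrect) {
  if (userValue === question.answer) {
    onCorrect(); // e.g. send a pulse to a patch that scales the plant up
  }
}

const q = makeQuestion();
checkAnswer(q.answer, q, () => {
  // plant grows here
});
```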
10,008
https://devpost.com/software/myfirstfilter
Inspiration I created my first project with Spark AR, a makeup filter. What it does How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for MyfirstFilter Built With ar particle Try it out www.instagram.com
10,008
https://devpost.com/software/anywhere-door-41rvo8
The portal! A glimpse of the stylized representation we made of Vagamon! Another glimpse. The 2D map we use for navigation. 2D map of the place obtained from Google. Inspiration We have all been sitting in our homes, wishing we had an escape. What if we had a window? A portal through which we could escape and explore the wilderness out there? With Spark AR, we wanted to give this dream a shot. What it does As soon as you open the filter, a portal appears; you just have to tap on the portal and you will be transported into it. We chose this beautiful place called Vagamon. We have also provided a 2D map of the same place, which shows your current position. You can move the red dot to move through the place, and you can tap on objects to learn more about them. You can also turn on rain by choosing different weather options in the UI picker. How we built it Vagamon is a lesser-known place, full of beautiful valleys, waterfalls, animals, flower farms, and trekking sites; simply a haven for anyone who visits it. First we gathered all the information about this place. Then we built the basic layout, taking reference from Google Maps and other relevant websites. Then we picked out the salient features that uniquely represent the place, e.g. tea farms, small waterfalls, beautiful hills, and paragliders. We then emulated those features by building low-poly assets of them with love and care. We obtained the actual terrain of Vagamon using the Blender GIS add-on. We also used its texture, but later realised that building a stylised texture would be better as it would suit the art style. We also prepared a stylized 2D map of the place to help us know what is where, and then started placing the assets on the 2D map. Then we rigged the objects and added some relevant animations. We also built a 2D version of the place and put it on the map, so that you can navigate it using the red dot. Challenges we ran into Compressing so many assets and textures was a challenge. Putting those interactions on the UI picker was difficult. Accomplishments that we're proud of We have never done anything like this before. Modelling, scripting, rigging, and animation: we knew none of it. Now we can do some of all of it! Bringing an idea to the surface was also something we are proud of. What we learned The importance of proper planning. Maintaining good communication with the team and working together (even if teammates are physically far apart). The importance of properly naming the animation playback controllers. What's next for The Portal One day, we might expect to have portals for all the wonderful places in the world! This is an ambitious one. :) We see printed maps all around us; we can build portals for all of them! It would be fun, educative, and so entertaining! Built With blender javascript macbook sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/anatomy-for-real
Front cam effect Treasure hunt Back camera effect Inspiration As kids we always wanted superpowers and technology, be it X-ray vision like Superman or holograms like Iron Man. Why not combine those with educational material and make learning fun? What it does The effect has two aspects. The front camera gives the user a realistic feel, as if they are looking at themselves with X-ray vision. The back camera allows the user to place a 3D virtual model of the human body, interact with it, and switch between views like muscles, bones, and organs. PS: There is also an Easter egg, a treasure hunt game, to make learning even more interactive. How I built it It was built using Spark AR. The front filter is a face tracker with the relevant models laid out according to the user's face. The back camera effect is a plane tracker which allows the user to place 3D objects in their environment and interact with them using object taps. Challenges I ran into A multi-faceted effect with tracking and so many aspects led to a big challenge of asset management and keeping the size to a minimum. Accomplishments that I'm proud of The effect successfully went down to around 3 MB, leaving space for much more content. What I learned Making complex effects using Spark AR and best practices. What's next for Anatomy for Real Adding more models and more info to make the effect even more informative and interactive. Built With sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/box-shooter-q02wfm
Inspiration I know how important physics simulation is for game development. The example in the Spark documentation is very simple and I don't know of many filters using it, so I decided to learn and explore the possibilities. What it does How I built it I used Spark AR, the Cannon physics package, and some simple 3D objects. Challenges I ran into Using Cannon physics without specific documentation on how to use it with Spark AR. Combining the asynchronous nature of Spark AR with Cannon physics to exchange information back and forth between scripts and patches. Synchronizing the scales of the simulated objects and the objects displayed in Spark. Accomplishments that I'm proud of Reaching the deadline and learning the basics of physics inside Spark AR. What I learned Lots about asynchronous programming, and I also improved my knowledge of using scripts in Spark AR. What's next for Box Shooter Shoot many balls. Aim the balls using the camera as a reference. Use touch gestures to shoot the balls. Spread lots of boxes around the environment. Create a timer and a score to record how many boxes the user managed to hit. Built With cannon javascript particle physics Try it out www.instagram.com
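A hedged sketch of one way to step a cannon.js world and mirror a simulated body onto a Spark AR scene object; the object name, time step, and units are assumptions, and the actual filter may wire things differently.

```javascript
// Sketch: drive a Spark AR scene object from a cannon.js rigid body.
// Assumes the cannon.js package has been added to the project and that a
// scene object named 'box' exists.
const Scene = require('Scene');
const Time = require('Time');
const CANNON = require('cannon');

(async function () {
  const boxObject = await Scene.root.findFirst('box');

  // Physics world with gravity pointing down.
  const world = new CANNON.World();
  world.gravity.set(0, -9.82, 0);

  // A small cube dropped from 1 m (units are meters; the displayed scene may
  // need a scale factor to match).
  const boxBody = new CANNON.Body({
    mass: 1,
    shape: new CANNON.Box(new CANNON.Vec3(0.05, 0.05, 0.05)),
    position: new CANNON.Vec3(0, 1, 0),
  });
  world.addBody(boxBody);

  // Static ground plane facing up.
  const ground = new CANNON.Body({mass: 0, shape: new CANNON.Plane()});
  ground.quaternion.setFromAxisAngle(new CANNON.Vec3(1, 0, 0), -Math.PI / 2);
  world.addBody(ground);

  // Step the simulation ~30 times per second and copy the result to the scene.
  const timeStep = 1 / 30;
  Time.setInterval(() => {
    world.step(timeStep);
    boxObject.transform.x = boxBody.position.x;
    boxObject.transform.y = boxBody.position.y;
    boxObject.transform.z = boxBody.position.z;
  }, timeStep * 1000);
})();
```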
10,008
https://devpost.com/software/face-privacy
anon Inspiration Going to a ballgame with my friend, I wanted to share an IG story to get on the megascreen, but because my friend called in sick to work they couldn't be in the picture. What it does You can tap a face for pixelation, so people have a choice regarding their privacy. How I built it Patch editor and deep diving through the Spark AR Facebook group. Challenges I ran into Balancing the color, pixels and smoothness to allow for privacy in an elegant way. Accomplishments that I'm proud of It's my 3rd filter! What I learned It's hard to test multiple face trackers, so I uploaded my own test video to Spark. What's next for Face Privacy Include options to add glasses and blur hair with segmentation. Built With sparkar Try it out www.instagram.com github.com
10,008
https://devpost.com/software/insta-yoga
Inspiration People who prefer to transition to a healthy lifestyle are often presented with tough choices when it comes to picking up fitness activities. The year 2020 has presented a series of challenges, especially when it comes to outdoor activities as well as trying out fitness solutions outside of the home. While yoga is something that can be learned and practiced in your personal space at home, the fear of learning the techniques wrong is a big letdown for a lot of beginners. InstaYoga tries to solve the problem by bringing a virtual yoga instructor into your living space. What it does InstaYoga is a World AR experience that helps users learn beginner yoga poses with the help of a virtual yoga trainer. The trainer demonstrates various yoga poses that can be viewed from different angles, thereby improving the user's understanding of the posture to be maintained during each of the poses. Tapping on the screen allows the user to switch to a different pose. In addition to the learning experience, users can personalize the experience by customizing the outfit of the trainer, allowing them to capture and share videos of themselves training alongside the virtual trainer. How I built it Modeling, rigging, and posing of the virtual trainer was done using Blender. Texturing was done using Substance Painter. The final animation was exported as FBX and brought into Spark AR. Since it's a World AR effect, I started with a Plane Tracker. The first feature I added was Tap To Change the yoga pose. This was achieved using a Screen Tap patch that triggers a counter, which allows me to switch between animation actions in the FBX file using an Option Picker. To improve the visual impact, a particle emitter was added to the yoga mat on which the trainer performs the different poses. An Animation patch was used to turn on the emitter by changing its birthrate from zero to an acceptable number. As I didn't want the particle effects to last beyond a few seconds, the emitter was turned off using a Delay patch that triggers an Option Switch to turn it off. Outfit selection was added using the Picker UI, as it felt like the most natural place to give the user more choices to personalize their experience. Since it's a world effect, the ability to move and rotate the scene was added to help users position the effect in their living room better. Additional props like a spare yoga mat and a water bottle were added from the Asset Library (Sketchfab). 3D Text, a new feature introduced in the latest version of Spark AR, was added to inform the user of the name of the pose they were viewing. Finally, ambient music and additional feedback sounds were added for when the user taps on the screen. Challenges I ran into Texture swapping based on the Picker UI selection was tricky, as I could not find built-in patches to accomplish it (it's possible that I missed one that already exists). I got around the problem by creating a texture sequence and using an Option Picker to pick the frame corresponding to the user's selection. Accomplishments that I'm proud of This was the first time that I built a character from scratch, from modeling all the way through posing and bringing it inside Spark AR. While there was the option to download and use models, I took this as a challenge to self-evaluate how much effort it takes to build an effect from ideation to publishing. Overall I feel I gave my best considering the pockets of time I got during my weekends.
What I learned This is the first time I tried to finish a World AR effect. While I had practiced some of the techniques based on the developer documentation, this was the first time I tried to build something with a set target in mind. During the course of building this, I got more comfortable with bringing in skeletal animation as well as using the built-in patches along the way. What's next for Insta Yoga The immediate update that I would be working on would be to add animation for pose transitions instead of snapping to the pose. While the current version includes the features I had initially thought of, adding more poses and more ways to personalize the experience would improve engagement. The current trainer lacks facial features; adding those would definitely improve the overall experience. Built With blender gimp Try it out www.instagram.com github.com
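A rough sketch of the tap-to-change-pose flow described above, written as script-to-patch bridging: taps advance a pose index that an Option Picker patch can use to select the animation clip. The patch input name 'poseIndex' and the pose count are assumptions; the original effect used a Screen Tap patch and counter directly.

```javascript
// Sketch: count screen taps and expose the current pose index to the patch
// editor, where an Option Picker selects the matching animation clip.
// Assumes a patch editor input named 'poseIndex' (number) and that the
// Touch Gestures capability is enabled.
const TouchGestures = require('TouchGestures');
const Patches = require('Patches');

const POSE_COUNT = 5; // number of yoga poses in the FBX (assumed)
let pose = 0;

TouchGestures.onTap().subscribe(() => {
  pose = (pose + 1) % POSE_COUNT;              // cycle through the poses
  Patches.inputs.setScalar('poseIndex', pose); // drives the Option Picker patch
});
```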
10,008
https://devpost.com/software/roland-gokeys-ar
Play along with this Play along with this Inspiration I was just thinking of the different things that unite people, and music was definitely one of them. Music is one thing that brings the world together irrespective of caste, language, colour, country, or age. I've attempted to bring the world closer together with music. I've used this opportunity to teach music while learning it myself. While quarantined at home, this was the only musical instrument at my disposal, so I thought I'd make the most of it. What it does As soon as the effect picks up the Roland Go Keys keyboard, it introduces you to the keys on the keyboard, teaches you the minor and major scales, and also has two pieces you can play along with. How I built it I used a target tracker with a picture of the Roland Go Keys keyboard. I used blocks and the UI picker in Spark AR to let users go through what they want to learn, and I've used sound to demonstrate what the chords will actually sound like. Challenges I ran into The file was too heavy to load and my internet connectivity was too slow. I was technically working blind, with nothing but the reference image I used on the fixed target tracker. I had to do a lot of testing (or tried to) for accuracy and to use the patch editor optimally. Accomplishments that I'm proud of I've always wanted to participate in a hackathon, and the fact that I actually completed the file in the specified time is an achievement. Working on this project helped me realise the potential of AR and how it could help change lives for the better. What I learned The potential of AR is immense and we're just getting started. What's next for Roland GoKeys AR I want to add more material to the educational AR. I would also like to experiment with other instruments. Built With adobe-audition adobe-illustrator sparkar Try it out github.com github.com www.facebook.com
10,008
https://devpost.com/software/ffface-ar-gallery
Hi! We're FFFACE, an Instagram effects production studio. We're excited to present our new project. Problem The COVID-19 pandemic affected everyone, including the art world: artists from all around the world can't exhibit their new works in person at galleries. Art lovers also can't enjoy the opportunity to appreciate curated collections at their favorite art spaces. Solution We created a 3D art gallery in augmented reality, with placements for paintings and other works, in an Instagram filter that anyone can open using their smartphone. Art pieces can be imported into the gallery project in no time, which makes it easily customizable for any gallery or artist's Instagram account. How we did it: First, we built a 3D scene of the gallery at real-scale size in Blender. Lights and shadows were baked into the textures. Finally, the file was converted into Spark AR via the glb format. How to activate: 1) Open this effect link on your smartphone: https://www.instagram.com/a/r/?effect_id=300454300950295 2) Tap on the floor to open the gallery 3) Come in and move around 4) Share your experience in your Stories Impact Our solution will help: Keep art exhibitions accessible to a wide audience; art lovers from all around the world will be able to visit exhibitions from wherever they are. Reach new and reactivate existing audiences for both galleries and artists, helping new artists get noticed and make their first sales. Implement an innovative project with the ability to scale worldwide. After the hackathon, we'd like to create AR galleries for all interested artists as well as famous galleries like MoMA, Tate Museum, Saatchi Gallery, SMK, Van Gogh Museum, etc. FFFACE AR gallery - the most engaging way to expose art online. Built With blender sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/emojifun
Image shows different modes of the island, sequentially arranged as Normal (front view), Ozone (top view with Ozone layer), and Cloud (top view). Initial view of the effect, when the island appears. The island's state when Afforestation is at level 1 and Global Warming at 5; at this stage, trees have disappeared and the ice caps have melted. This is the Ozone layer's state when Afforestation is at level 4 and Global Warming at 2; here blue depicts maximum depletion of the layer. At this stage the major portion of the Ozone layer is depleted, when Global Warming is at level 4; also note that green depicts minimum depletion. The effect also shows fact cards; this fact card is displayed when the user taps on the green platform. This fact card is for the water body. This fact card is for the mountains. This fact card is displayed when the user is in Ozone mode and taps on it. Effect Id 872923989858530 Inspiration The filter is inspired by the Australian bushfires. Our team was devastated by the loss of wildlife and the amount of deforestation the fires caused. Along with that, we also took into account the vicious deforestation taking place in the Amazon forest while creating the effect. We decided to design an interaction for the user, to let them understand the consequences of Deforestation and Global Warming (which is another attack of mankind on Earth). What it does This is an interactive AR effect which shows how the world is going to look if we continue to misuse our green gold, i.e. our forests. The effect provides interactive buttons on either side to select the level of Afforestation and Global Warming; as the user selects a higher Afforestation level, the level of Global Warming decreases, and vice versa. The effect not only shows the effects on Earth, but also on our protective coat, the Ozone layer. The user can easily switch between Ozone, Cloud, or Front view (or mode) using the options present at the bottom of the effect. How we built it The effect was made possible using Spark AR Studio to create the AR experience. We built all the 3D models (like those of the Earth, trees, and mountains) on our own, using Blender 3D. Next, the interactive UI elements were created in Adobe Illustrator with some support from Photoshop. Challenges we ran into Representing the Earth, or a part of the Earth, which consists of all the major elements affected by Global Warming. Earlier we started with a complete AR environment with mountains and trees all around, but it being a complex and congested display, we agreed to show just a part of the Earth. Building the system of levels: this being our first time creating a World AR effect, we wanted it to interact with the user in the best way possible. Also, we tried to make the system give a gaming experience to the user, with smooth animations and tap responses. Working with Spark AR Studio and bridging between its patches and scripts was another challenge for us, which was also an accomplishment, as it helped us achieve interactions which may not be possible or feasible through either approach alone. Accomplishments that we're proud of We are proud of creating effective, minimum-size 3D models in Blender. Animating the model as we wanted. Scripting the interactive parts of the effect successfully, so that they work as intended. Also, using the Native UI effectively. Successfully launching the filter on the intended platforms. What we learnt How to make use of both scripts and patches in Spark AR Studio at the same time, using bridging.
Also, using Blocks to reuse an effect in multiple projects. Building a gaming experience in the AR environment which not only interacts with the user but also responds in the best way possible, animating the appropriate elements with smooth animations. Working in a team on an AR effect: dividing the work according to each teammate's experience, or according to the requirements (which is what we mostly followed, by dividing the work for 3D models, UI element creation, and working in Spark AR Studio). What's next for the effect We next intend to spread the effect as much as possible, thus spreading the message of growing trees and teaching people the consequences of their actions on nature. We have also planned to expand the environment of the effect to cover wildlife and the effect of mankind's actions on them. Along with that, we also intend to add more fact cards, which can presently be seen by tapping on the blinking sphere near each object (like the mountains, trees, and Ozone layer). Built With adobe-illustrator blender javascript photoshop sparkar Try it out www.instagram.com
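A hedged example of the script-patch bridging mentioned above, reading the selected afforestation level from the patch editor and reacting in script; the output name 'afforestationLevel', the scene object names, and the thresholds are assumptions.

```javascript
// Sketch: read a level chosen via patch-editor UI and toggle scene content.
// Assumes a patch editor output named 'afforestationLevel' (number) and
// scene objects named 'trees' and 'iceCaps'.
const Scene = require('Scene');
const Patches = require('Patches');

(async function () {
  const [trees, iceCaps, level] = await Promise.all([
    Scene.root.findFirst('trees'),
    Scene.root.findFirst('iceCaps'),
    Patches.outputs.getScalar('afforestationLevel'),
  ]);

  level.monitor({fireOnInitialValue: true}).subscribe((event) => {
    const value = event.newValue;
    // Low afforestation (high global warming): hide trees, melt the ice caps.
    trees.hidden = value < 2;
    iceCaps.hidden = value < 3;
  });
})();
```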
10,008
https://devpost.com/software/find-the-pairs
Inspiration The main inspiration was to find some way to use Spark as more than just a tool to create filters, but also to create any type of game. What it does 'Medieval Shoot' is a game in which you have to shoot at the enemies that want to attack you within a certain time; your only weapon is a cannon. Use it wisely. The game has two different levels in which you'll be able to interact with physics inside a game made in Spark. How I built it The first thing was the design of the game. I found plenty of limitations for many of the things that I had in mind, but in the end I settled on making a vertical slice showing a little of what the final project could become. The main mechanic used was shooting. Two different levels were designed, where the biggest differences lie in the target and the complexity of the level. The game was built in Spark AR Studio; I used JavaScript to write all the game logic and the patch editor to control the interaction with the objects in the scene. Predesigned levels had to be generated and displayed only when the player completes the previous level (see the sketch after this entry). 3D modeling was done in Blender, and a low-poly design was chosen to optimize resources. There are around 10 different objects in the game, such as floors, bushes, the cannon, and characters. Challenges I ran into The main challenges were controlling physics inside Spark and creating objects in real time. Accomplishments that I'm proud of I am proud of all the work done and of having accomplished the main goal of building a physics-based game. What I learned The most important thing I learned was to control my development times and not give up. What's next for 'Medieval Shoot' The next thing is to add more levels, improve the art, optimize resources, improve performance, and add more innovative mechanics. Built With adobe-illustrator blender javascript photoshop spark-ar Try it out www.instagram.com
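A small sketch of the "show a level only when the previous one is completed" behaviour referenced above; the object names 'level1' and 'level2' and the completion signal are assumptions about how the game logic could be exposed.

```javascript
// Sketch: keep level 2 hidden until level 1 reports completion.
// Assumes scene objects 'level1' and 'level2' and a patch editor output
// named 'level1Complete' (boolean) produced by the game-logic patches.
const Scene = require('Scene');
const Patches = require('Patches');

(async function () {
  const [level1, level2, level1Complete] = await Promise.all([
    Scene.root.findFirst('level1'),
    Scene.root.findFirst('level2'),
    Patches.outputs.getBoolean('level1Complete'),
  ]);

  level2.hidden = true; // only the first level is visible at the start

  level1Complete.monitor().subscribe((event) => {
    if (event.newValue) {
      level1.hidden = true;  // retire the finished level
      level2.hidden = false; // and reveal the next one
    }
  });
})();
```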
10,008
https://devpost.com/software/4-c
The most challenging thing we had to address just to get started was aligning the lighter, fun (bordering on frivolous) medium of FB/Insta AR filters with the task of addressing a more serious social good issue, something that inevitably requires a longer conversation and engagement beyond our 8-second attention span. The topic we chose was climate change, because it is inexorably intertwined with issues of social good as well. By 2060, our earth is projected to warm by 4 degrees Celsius. This has devastating effects across the planet, including rising sea levels and unstable, extreme weather patterns. However, not everyone will be directly or equally affected by climate change. Poorer communities will be disproportionately hurt. The only way to proceed is to not leave anyone behind. But sometimes it's hard to care about an issue that doesn't affect one directly, so we took this opportunity to make a speculative vision of your surroundings if we continue on the path we are currently on. Making use of an often-used digital feature helps to make these topics feel more present in people's day-to-day lives. We believe in the power of mere exposure. We're inspired by the nonprofits and research institutes that work tirelessly in this area. We hope to amplify their voices. The technical challenge we faced was trying to make the environment as real as possible to give urgency to the issue of rising sea levels. To tackle this challenge, we made sketches and storyboards of the underwater scene. None of us had previous experience with visual node-based authoring environments like Spark AR, or even much experience with 3D. It took downloading Blender for the first time, learning about materials, shaders, specular qualities, UV maps, inverse normals, and one unfortunate hour despairing over quaternion math. We photoshopped assets, created movements with animation rigs, and added sound and a bubble effect through Spark AR to create a multimodal immersive space. A sign with dynamic data and a "call to action" message was placed in the scene to encourage users to continue taking action offline after interacting with the AR experience. Built With blender cinema4d sparkar Try it out github.com
10,008
https://devpost.com/software/ar-u-dance
ARUDance is an alternative version of a "rhythm game" on Instagram to keep you moving and uplift your mood. Inspiration ARUDance is inspired by "rhythm games" that challenge a player to sense the rhythm. These games typically focus on dance or the simulated performance of musical instruments, and require the player to press buttons in a sequence dictated on the screen. What it does You can play this filter with your feet. Simply scan the floor using the back camera, tap the screen, and follow the rhythm. There are four buttons with different colors, and you just need to step on the buttons at precise times. The screen shows which button you have to step on, for accuracy and for synchronization with the beat. How we built it Our idea is to create an alternative version of a "rhythm game" on Instagram to keep you moving and uplift your mood. We enable the effect on the back camera. We use plane tracking to place an outline rectangle on the ground. The position and size of the outline rectangle can be adjusted so that it appears right in front of your feet. Then, when you are ready to play, you can tap the screen to start the music. A series of rectangles will move closer to the outline rectangle, and you should step on it when the rectangle and the outline rectangle are in the same position. We developed the effect mostly using the patch editor within Spark AR Studio version 90. The 3D model was modeled using the Blender application. We implemented the moving rectangles using the patch editor, and we use a plane tracker to track the ground. All of this was done on a personal computer running Windows 10 with a 7th-generation Intel Core i5 and 16 GB of RAM. The effect was tested using Instagram and the Spark AR Player application for Android. Challenges we ran into These are the challenges we faced in the process of developing the effect: Spark AR doesn't support tracking our feet, so we can't implement the point system. Spark AR's ground tracking is not really stable if the ground is plain, so we created a function to move and zoom the object by panning and pinching the screen to adjust its position. There are still very few Q&A forums available on the internet. Accomplishments that we're proud of We're proud that we were able to develop ARUDance with Spark AR. The reason is that we can bring happiness to people through dancing in this pandemic situation. Since we move our bodies around, it helps us exercise at home. Also, people can share their excitement in Instagram and Facebook stories. What we learned We learned that there are still many ideas and products which can be delivered with AR technology. This filter can help people enjoy their #stayathome moment. What's next for ARUDance All that we want for ARUDance is to bring a new experience to Instagram users and to be the pioneer of "rhythm games" using Augmented Reality (AR) technology on smartphones. Built With 3dmodelling android audio blender blender282 color-filter instruction javascript patcheditor planedetection sparkar sparkar90 windows-10 Try it out github.com www.instagram.com
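A hedged sketch of the pan-and-pinch adjustment mentioned in the challenges, following the common Spark AR World-AR pattern; the object names 'planeTracker0' and 'outlineRect' are assumptions, and the shipped effect was built mostly with patches.

```javascript
// Sketch: let the user drag the tracked content along the plane and pinch to
// scale the outline rectangle, to compensate for unstable plane tracking.
// Assumes a plane tracker named 'planeTracker0', an object 'outlineRect',
// and the Touch Gestures capability enabled.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

(async function () {
  const planeTracker = await Scene.root.findFirst('planeTracker0');
  const outline = await Scene.root.findFirst('outlineRect');

  // Dragging moves the tracked point on the detected plane.
  TouchGestures.onPan().subscribe((gesture) => {
    planeTracker.trackPoint(gesture.location, gesture.state);
  });

  // Pinching rescales the outline relative to its scale when the pinch began.
  TouchGestures.onPinch().subscribeWithSnapshot(
    {
      lastScaleX: outline.transform.scaleX,
      lastScaleY: outline.transform.scaleY,
      lastScaleZ: outline.transform.scaleZ,
    },
    (gesture, snapshot) => {
      outline.transform.scaleX = gesture.scale.mul(snapshot.lastScaleX);
      outline.transform.scaleY = gesture.scale.mul(snapshot.lastScaleY);
      outline.transform.scaleZ = gesture.scale.mul(snapshot.lastScaleZ);
    }
  );
})();
```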
10,008
https://devpost.com/software/community-garden
Inspiration With more people getting into cooking and gardening during the pandemic, we realized gardening is such a cool way to help people come together. This community garden filter is meant to bring people together to create the sense of community we all need during COVID-19, the Black Lives Matter movement, and more. With the reopening in NY reaching phase three, we want to share the joy of soon being with our friends through this filter. Concept People help people. It is important to appreciate the community and friend circles we have created for ourselves. What it does The types of flowers change depending on the number of people in the world AR view. People have to gather together in order to unlock different types of flowers. It also segments the person(s) and the background to create a dreamy effect. How I built it We used Blender to create two 3D flowers each, and I worked mostly on the Spark AR side with the help of Melissa. We imported the 3D models with textures and created a 3D particle effect following the person, while changing the background to create a dreamy effect. For the environment, we used the 3x3 convolution patch to achieve the outline effect. Challenges I ran into I wanted to create a particle system with the 3D models using the existing particle system in Spark AR. However, the particle system only uses 2D materials. Therefore, we created the effect of a 3D particle system by creating different looping animations of positions and rotations. Accomplishments that I'm proud of I thought of a way to solve the problem of finding four people by photoshopping my face four times and using that to record my world AR effect. I used Blender to build 3D flowers and I have since been learning more Blender for more of my work in Spark AR. I did most of the work in Spark AR in terms of the background effect as well as the tweaking of the 3D particle systems, and I am more confident in using Spark AR moving forward. What I learned It is always good to start with one element, figure out how to program it in Spark, and then apply it to the rest of the elements. What's next for Community Garden I will make the background effect more dreamy and try to work on the interaction more to make it more intuitive and interactive. Since it is a world AR effect, the people in front of the camera cannot see the real-time changes of the environment. I will try to find a way to make it more interactive and immersive. Built With 3d blender particle Try it out www.instagram.com github.com www.instagram.com
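One possible way to drive the flower variety from how many faces the camera sees; this is an assumption about the detection mechanism (the original may count people differently), and the scene object names are hypothetical.

```javascript
// Sketch: unlock a different flower set depending on how many faces are
// detected. Assumes face counting is the people-detection mechanism and that
// scene objects 'flowersSolo', 'flowersPair', 'flowersGroup' exist.
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');

(async function () {
  const [solo, pair, group] = await Promise.all([
    Scene.root.findFirst('flowersSolo'),
    Scene.root.findFirst('flowersPair'),
    Scene.root.findFirst('flowersGroup'),
  ]);

  FaceTracking.count.monitor({fireOnInitialValue: true}).subscribe((event) => {
    const people = event.newValue;
    solo.hidden = people !== 1;
    pair.hidden = people !== 2;
    group.hidden = people < 3; // three or more unlocks the full garden
  });
})();
```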
10,008
https://devpost.com/software/safe-hands
Inspiration Washing your hands is one of the most effective actions you can take to reduce the spread of pathogens and prevent infections, including the COVID-19 virus. So I built an augmented reality world effect that promotes correct handwashing technique using the information provided by the World Health Organization on how to wash your hands properly. WHO hand washing The image below shows the most frequently missed parts of the hands when the correct hand washing technique is not adopted. What it does Safe Hands will show you how to wash your hands correctly using a pair of 3D virtual hands and AI-generated text-to-speech that gives you instructions. The effect will guide you through all the steps required to wash your hands in an interactive way. Users can interact with the faucet, hands, and soap dispenser by tapping on them. How I built it The effect was built with Spark AR Studio, using JavaScript and the patch editor. I created the handwashing animations in Blender and imported the models into Spark AR Studio. To play the animations I used Spark AR's script-to-patch bridging. In JavaScript, I wrote the logic required to switch between each step of the handwashing process. For the voice-overs, I used Amazon Polly, an AI service that turns text into lifelike speech and generates the audio files in mp3 format. I converted the mp3 files to m4a and imported them into Spark AR Studio. I downloaded the 3D hand model from TurboSquid and the rest of the 3D models from Poly. Challenges I ran into Animating the hands was challenging, as was figuring out the best way to import them into Spark AR Studio. Accomplishments that I'm proud of Bringing this idea from concept to a working application and creating something that will help combat COVID-19. What I learned I learned about all the new Spark AR features and also how to animate 3D objects. What's next for Safe Hands Next, I plan to add a way of visualizing the most frequently missed parts of the hands when the correct hand washing technique is not adopted, as shown in the image above. And also some minor improvements. Built With amazon-polly blender javascript spark-ar Try it out github.com www.instagram.com
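A rough sketch of the step-switching logic described above: tapping the faucet advances the handwashing step, which is sent to the patch editor to play the matching animation and voice-over. The object name 'faucet', the patch input 'stepIndex', and the step count are assumptions, not the project's actual code.

```javascript
// Sketch: advance through handwashing steps when the user taps the faucet,
// and let patches play the corresponding animation clip and audio.
// Assumes a scene object 'faucet', a patch input 'stepIndex' (number),
// and the Touch Gestures capability enabled.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');
const Patches = require('Patches');

const TOTAL_STEPS = 7; // handwashing steps modelled in the effect (assumed)

(async function () {
  const faucet = await Scene.root.findFirst('faucet');
  let step = 0;

  TouchGestures.onTap(faucet).subscribe(() => {
    step = Math.min(step + 1, TOTAL_STEPS - 1);  // don't run past the last step
    Patches.inputs.setScalar('stepIndex', step); // patches pick animation + audio
  });
})();
```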
10,008
https://devpost.com/software/metro-ar
Overall LA Metro System Map Directions Capabilities from Point A to Point B Confirmation Window to Transition from System to Station map Time until the next train's arrival IT HAS ARRIVED! Inspiration The inspiration for this project originally came from our own personal experiences and hardships using the LA Metro subway system. As it stands right now, we found that the system as a whole was pretty hard to navigate, and that it's hard to have a spontaneous adventure to explore a specific station. To validate this hypothesis we decided to launch a Google survey on social media that asked people to rate various factors of the LA Metro subway system. The results were pretty shocking. For the question "How efficient is the current metro system in terms of getting from point A to point B?", only 1 person (out of 51 respondents) gave a 5 out of 5. The average response score on this question was a dismal 2.71. For the following question, "How would you rate the metro system overall?", nobody answered with a 5 out of 5 and the average score ended up at 2.82. All of this really motivated us to create a simple and unique solution that would both help users better navigate the metro system and also understand the value of individual stations. What it does The solution we ended up making was Metro AR. Metro AR is designed to help the end user through several small features that are tied into two overall views: an overall map view and a station map view. The overall map view is the first thing a user sees, and this view displays the full LA Metro map laid out with its corresponding lines and keys. The user can then scroll around and zoom in and out to view the map as they want. The user can also focus on specific lines on the map by tapping on the lines within the key and toggling the map keys on and off. There is another button on the left side with a curved arrow that is responsible for providing directions. Upon pressing that button, a user sees a popup where they can enter a start station ("from" field) and a destination station ("to" field) and then press submit. This will then create another popup that shows the text directions from the entered start station to the entered destination station. This feature, in conjunction with the toggle-line feature, can help a user really understand the details of their planned trip and how to effectively get "from point A to point B". A user can also click in on specific stations, and that ultimately brings them to the second view, the station map view. Similar to the overall map, a user is able to scroll around and zoom in and out on this station map. On this view, there is a little knob at the bottom with four different icons that all control the state of a pop-up box. Selecting the first icon hides the pop-up box if it is visible. Selecting the second icon shows the minutes and seconds until the next train arrives at the station. A train also appears on the station map when the timer on this screen drops to 0. The next two icons are meant to help the user understand the best parts of the area where that station is located. Selecting the third icon shows the top-ranked attraction in the area, and selecting the fourth (and final) icon shows the top-ranked restaurant in the area. How we built it This whole project was built out with the assistance of two major tools: Blender and Spark AR. We used Blender to create some of the amazing assets used in this project from scratch.
The full map of the LA Metro subway system (with all of its lines, stations, and other intricacies) was completely created and styled from scratch in Blender. The keys on the sides of the overall map (showing the line colors and map details) were also created from scratch in Blender. Finally, the metro train asset that pops up as the next-train timer hits 0:00 on the station map screen was modeled and textured in Blender. On the programmatic side, we experimented with several different Spark AR features to create the various interactions within the project. The native patch editor was used to zoom and scroll the different maps as well as toggle the metro lines on and off. From there, scripting was used for all of the more complex interactions and functionalities. The TouchGestures module was used to check whether or not a user tapped on different UI components (e.g. the directions box and individual stations). The Reactive module was used to constantly monitor the time that had passed since the filter was opened (from a Time patch) and the status of the choice made on the station screen (from a Native UI picker). The Patches module was used to read those two values from the patch editor as well as to output whether or not the Native UI picker should be visible. The NativeUI module was used to allow the user to input start and end locations in the directions feature. We wanted to use the Networking module to connect to API services such as the LA Metro API and the TripAdvisor API, but that module was deprecated due to security issues. As a workaround, we built a "pseudoAPI" within the script as the single source of truth for the data used in directions, best attraction, best food, and time until the next train. Challenges we ran into There were actually a lot of challenges that we ran into because we were beginners at both Spark AR and Blender. At the start of this competition both of us had zero experience working with AR development in Spark AR and zero experience working on asset development with Blender. Even though both of us were inexperienced, we have always admired advances in augmented reality and 3D asset development from afar. When the Spark AR beta was announced a while back, we both signed up, but we never found the time to really dive in until this hackathon. Besides the natural hurdles that typically come with learning new software, Spark AR posed some unique challenges. For starters, Spark AR uses a specific style of JavaScript where everything is reactive (not linear), and data is transmitted as streams rather than constants. Wrapping our minds around just the basics of that proved to be tricky. Even past the basics, we ran into quite a few hurdles, and interestingly enough, most of those hurdles were overcome with the help of the native documentation (as opposed to forums like StackExchange). One problem that we had to creatively overcome was the deprecation of the Networking module. One of our big value propositions was supposed to be access to real-time data (such as the time until the next train), and without Networking, we weren't able to hit the API endpoint required. To overcome this issue, we "hardcoded" a pseudoAPI inside the script to give us the ability to easily add and manipulate data as well as simulate a JSON response. Another big challenge that we faced came in the form of deploying our project to Spark AR Hub.
We tried using a multitude of different Facebook accounts and different browsers, but we were still met with an "Unknown Error" message on the final screen. After a lot of repetition and persistence, we finally got the project to pass through and submit. Even though the project is submitted and the link is now available, the filter itself occasionally crashes in the Facebook app (maybe due to different phones). If you are having trouble loading and running that demo link, feel free to check out this project on the linked GitHub repository and download it from there. Accomplishments that we're proud of This section heavily overlaps with the previous section because our biggest challenge forced us to learn Spark AR and Blender from scratch in order to succeed. Learning Spark AR was a pretty interesting experience because, in terms of script debugging, it proved to be different from learning a lot of other programming languages. This is because there weren't a lot of natural side resources to help (e.g. StackExchange, YouTube, etc.) outside of the native documentation. One example of this was editing and reading from the editable text feature. We had a little bit of trouble finding out how to actually edit the editable text field and then read from it. After a bit of trial and error, we looked online for help, and outside of one page in the Spark AR documentation, there were no helpful related resources. Ultimately though, this hackathon was great for both of us because it pushed us to learn this AR skillset that we have looked at and admired for quite some time now. What we learned This section also overlaps with both of the previous sections. We are both really proud of the learning opportunity we gained by entering this hackathon. We can now take these abilities to create 3D assets and AR experiences and use them in different projects that we want to pursue. After the experience from this hackathon, both of us agreed that we want to do a lot more Spark AR projects, both casually for fun and competitively for all the upcoming Spark AR hackathons. What's next for Metro AR Metro AR is a cool personal project but there is still a lot of work that needs to be done before it is ready to be deployed in the real world. First off, we need to connect all of the metro stations both to the script and to appropriate station map assets. Right now, only Arcadia and 7th Street/Metro Center actually hold relevant data and link up to a corresponding station map. Since the framework is basically set up (via the internal pseudoAPI), adding stations should be fairly easy. Secondly, we would like to build out the directions feature in a more robust and intuitive manner. Right now, text directions are displayed whenever a user enters a start and end location (based on our very limited internal API). We need to build this feature out so that a user can find the directions between any two stations and the route returned from the directions is highlighted and easy to understand. Finally, we would like to integrate APIs (LA Metro, TripAdvisor) when the Networking module gets restored so that we can get actual real-time information on all of the stations (e.g. time until the next train, best local attraction, best local restaurant). If we can successfully complete all of those tasks, the next natural step would be to reach out to the Los Angeles County Metropolitan Transportation Authority and ask for some sort of partnership.
We can offer them a really cool project based on real gathered data and made with clean homemade assets, and they can offer us a unique stamp of approval and a marketing platform. Built With blender javascript sparkar Try it out www.facebook.com drive.google.com
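A hedged illustration of what a hardcoded "pseudoAPI" like the one described above might look like; the fields, placeholder values, and function name are assumptions, while the two station names come from the write-up.

```javascript
// Sketch: a hardcoded stand-in for the LA Metro / TripAdvisor endpoints the
// deprecated Networking module would have reached, shaped like a JSON response.
const PSEUDO_API = {
  'Arcadia': {
    nextTrainSeconds: 420,                 // placeholder value
    bestAttraction: 'Sample Attraction',   // placeholder value
    bestRestaurant: 'Sample Restaurant',   // placeholder value
  },
  '7th Street/Metro Center': {
    nextTrainSeconds: 180,
    bestAttraction: 'Sample Attraction',
    bestRestaurant: 'Sample Restaurant',
  },
};

// Mimics an async API call so the rest of the script can keep the same shape
// if the real Networking-based calls are ever restored.
function fetchStationData(stationName) {
  return Promise.resolve(PSEUDO_API[stationName] || null);
}

// Usage:
fetchStationData('Arcadia').then((data) => {
  if (data) {
    // e.g. push data.nextTrainSeconds into the countdown shown on the station map
  }
});
```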
10,008
https://devpost.com/software/changemakers-training
Initial message explaining to the user that he/she has to catch 5 Gems Front view of the ship. We named this ship "We are all created equal" Every time you catch a Gem you will see the Changemaker Action After catching the Recycling Gem, it will recover the sea colors and make the soda cans disappear Inspiration This effect is inspired by the need to improve society. What it does This effect lets the user explore a Changemaker game where we change the world for the better. We wanted all the main aspects of our society to be shown in the scenario, so every detail was meticulously created. We got the idea of inserting 5 GEMS that symbolize the actions a Changemaker should take to save the world. How we built it The base 3D ship model was bought on Sketchfab and modified by us through Photoshop. Then we proceeded to create the storytelling to be implemented. The initial message tells the user that the world has been environmentally destroyed, which is not so far from current reality, and then we show the user that he/she needs to catch 5 Gems. The order in which the Gems are caught doesn't matter. Instead, it is important to notice that some of the Gems will make the scenario change. For instance: The Gem related to recycling will restore the sea to its original colors, enable animal and sea sounds, and make the soda cans disappear. The Gem related to planting trees will recover the palms' colors. Challenges we ran into Asset dimensions: I think the main constraint for any medium-complexity effect for Instagram is the size. Publishing: A current challenge is the fact that our filter hasn't been approved yet by Instagram. It's been rejected and we can't figure out why. Plane tracking: Oh boy, we struggled a lot with this functionality! In the end we realized that increasing the dimensions of the 3D objects would make the user move around and unconsciously track the plane, which was what we wanted. Accomplishments that we're proud of We are proud of the result achieved because the deliverables are exactly what we wanted. We would be thrilled if the filter got accepted, so that it can help somehow to raise awareness of the need for change. What we learned A lot. For sure we learned how to manage 3D objects. What's next for Changemaker Game The next step is to create new worlds and new challenges for people, to make this world a better place. Built With photoshop sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/microbes-anatomy
Testing with the Spark AR Player Spark AR Studio Blender - 3D Model of COVID-19 Inspiration I always wanted to see microorganisms, but a microscope is expensive and out of my reach. Later on, I got to know about Spark AR and got the idea to make microbes at a bigger scale. Anyone who faces the same problem as I do can have the experience of looking at microbes as if under a microscope. What it does It shows the different parts of a microbe, in this case the COVID-19 virus. When tapping on the screen, the microbe splits in half and shows the inner anatomy. It will only show up when the camera is pointed at the COVID-19 poster made by our team. How we built it It was done in two parts: the 3D model of the microbe was made in Blender, then the animations and functionality were done in the patch editor in Spark AR. Challenges we ran into While exporting the 3D model in .fbx format we could not export the transparent material of the orb that was done in Blender, but it was later added back using the material features of Spark AR. Accomplishments that we're proud of Because of Spark AR, we gained experience working with AR and turned our imagination into reality. What we learned We learned how to take a digital 3D model and create the experience of observing it in the real world using Spark AR. While doing it we also learned a lot about the internal structure of the microbe. What's next for Microbes Anatomy We want to add more microbes such as bacteria, archaea, fungi, protozoa, algae, and viruses, with detailed information. We also want to reach schools with no lab or equipment so that they can also experience and learn about microbes. Built With blender spark-ar Try it out www.instagram.com github.com
10,008
https://devpost.com/software/dress-runway-ar
First display. You can place the object anywhere on the detected plane. The first color and pose the dress shows. The back view of the dress. Color variation: red. Color variation: blue. Color variation: white. Color variation: green. Pose variation: "A" pose. Inspiration Dress Runway addresses the problem of online shoppers who are not convinced by just the photos and videos shown on a website. We aim to let shoppers get a better feel for how their dress would look in their everyday environment. What it does This AR project is a marketing solution for fashion brands using Spark AR on Instagram. It lets users see a three-dimensional render of a dress they choose in the AR filter gallery, so they can see how the dress would look in their everyday environment. First the user decides where to place the model by tapping; the dressed model then appears on the tapped area. Next, the user has the choice to change the color of the dress through the UI picker. He or she can also change the pose of the model by tapping and holding on the model, which lets them see how the dress looks in different poses. The rendered dress has a realistic-looking texture thanks to the normal map placed on the dress material. The user can adjust the lighting cast on the model through the UI slider so that they can match the model's brightness to the actual room brightness, making the dress look more immersed in the scene. How we built it We used Marvelous Designer to create the dressed avatar and lowered its polygon count using Blender. The 3D model itself is reduced to around 1.2 megabytes, which is less than half of the maximum limit of 4 megabytes. The 3D model was then uploaded to Mixamo to be automatically rigged, and we downloaded several animations from there to display in Spark AR. In Spark AR Studio, we used both scripting and the patch editor to implement the user interface, controls, and animations. Color changing was done with script. Patch editor: animation selection with long-hold, object movement, brightness control, and particle effects and animation for 3D transitions. We also created an animated floor with white dots using shader control in the patch editor. Challenges we ran into Making a dressed 3D character that is minimized enough to fit within the 4 MB size. Beyond the learning curve, it is important to keep the level of detail and aesthetics high enough. Strategizing the best approach for code aspects across the patch editor and script: with each new feature to implement comes the question of where and how to code it. Experimenting with how every feature could be done in both paradigms, and thinking about how they could be seamlessly integrated, were unfamiliar challenges. Accomplishments we're proud of As the leader, I am most proud of being able to break down one big abstract goal into small concrete tasks. This makes it easier to split the work among my teammates, and it gives me motivation to strive for several short-term goals instead of one big goal. What we learned We learned that creating and sharing AR effects in Spark AR is very quick and user friendly. We learned about the possibility for AR artists to reach hundreds of millions of people through this platform. What is next The current Dress Runway addresses the user's initial problem of having only a 2D photo as a reference. The following can be done to improve the user experience. Realize better lighting for the dress.
We can make use of the camera texture to extract the major colors and adjust the ambient lighting and point lights so that the dress has a color that is more physically accurate to reality. Refining the 3D model. With a better 3D artist, we can get a more realistic 3D dress that is still size efficient. The current dress is weight-painted to a human rig, but loose cloth should be rigged separately to look more realistic. Another problem for online shoppers is size measurement. While we are not sure of the technical capabilities of the current Spark AR, we do believe that AR has the possibility of granting these features: Automatically measuring a person's main measurements to check if a dress fits, including bust, waist, and hip sizes, and also height. Augmenting a person with a 3D dress, which involves matching a rigged dress to a detected human pose and size. Built With blender figma javascript marvelous-designer mixamo spark-ar Try it out www.instagram.com
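A hedged sketch of the tap-and-hold pose change described above, done in script rather than the long-hold patches the write-up mentions; the object name, patch input name, and pose count are assumptions.

```javascript
// Sketch: cycle the model's pose on a long press of the dressed avatar,
// handing the chosen index to the patch editor to play the matching clip.
// Assumes a scene object 'dressedModel', a patch input 'poseIndex' (number),
// and the Touch Gestures capability enabled.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');
const Patches = require('Patches');

const POSE_COUNT = 3; // e.g. standing, "A" pose, a third pose (assumed)

(async function () {
  const model = await Scene.root.findFirst('dressedModel');
  let pose = 0;

  TouchGestures.onLongPress(model).subscribe(() => {
    pose = (pose + 1) % POSE_COUNT;
    Patches.inputs.setScalar('poseIndex', pose); // patches play the pose animation
  });
})();
```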
10,008
https://devpost.com/software/safe-zone-2ojerb
Thumbnail for my project Actual result Inspiration While we cannot completely eliminate COVID-19 at this point in the pandemic, we can at least control its spread and negative impacts by keeping ourselves in a safe zone, i.e., by taking all precautionary measures and maintaining good hygiene. What it does Outside the sphere is the disease; when a person is outside the sphere, they are in danger. The sphere is the safe zone where they can be safe from the disease. It is kept colorful to show that safety from the disease can make life happier and more colorful. How I built it I used Spark AR to make my filter. In addition, I imported a 3D object from Sketchfab to use as the sphere. Challenges I ran into The major challenge was understanding how Spark AR works, as I had never used it before. After watching some videos I gained a basic understanding. The challenge throughout has been to express the idea on the platform as accurately as possible. Accomplishments that I'm proud of I got more comfortable using new software (Spark AR). I could also express my ideas more accurately, if not perfectly. What I learned I learnt to use Spark AR and how to shape ideas into something I could express realistically. What's next for Safe zone Getting new projects or collaborations in the future based on my work, that is my wish :) Built With sketup sparkar unity Try it out github.com
10,008
https://devpost.com/software/expariment
Labelled diagram of a standard titration set-up Choose your chemicals! Choose the indicator! Titrate! A student using expARiment for his chemistry homework! Acceptable ranges of titration values Inspiration Learning science without experiments is like listening to a bad joke: you don't get it. (haha) But in the midst of this pandemic, with schools closed and national examinations inching closer every second, where do students and teachers go to find an alternative to physical laboratories? The answer, as always, is Augmented Reality. What it does Our virtual laboratory provides a platform for students to carry out these experiments from the safety of their homes without compromising on rigour and detail, dutifully educating our scientists of the future. Focusing on the acid-base titration experiment (a standard chemistry experiment for middle to high school students worldwide), our project aims to illuminate its procedure and underlying concepts, which are fundamental to a student's chemistry education. How we built it Spark AR Studio provided the perfect platform to realise this idea. It begins by using the Plane Tracker to identify a surface (e.g. a table) where the user (the student) can conduct the experiments. The experiment apparatus — retort stand, burette, conical flask, etc. — is then presented to the user, who sets it up by panning and dragging the objects around in the World AR scene. The Native UI picker then allows the user to pick the chemicals to be placed in the burette and the conical flask respectively, and also the indicator to be added to the solution in the conical flask. Text displays on the right of the screen show the user the chemicals chosen, as well as the current burette reading and the pH of the solution in the conical flask. The user then has two controls available for conducting the experiment: long-press and single-tap. Ideally, the user would start by long-pressing the burette to let a stream of chemicals flow from the burette to the conical flask (this is equivalent to opening the burette tap fully). The stream of liquid is visualized with a coloured particle emitter in the World AR scene. Then, when nearing the end-point of the titration, the colour in the conical flask changes, signalling the user to switch from long-press to single-tap control. By single-tapping, the user has much finer control over the flow rate from the burette, allowing 0.1 cm^3 of solution to flow "drop-by-drop" with each tap. Finally, when the user is within 0.2 cm^3 of the end-point, the colour changes again, indicating that the user should stop the titration and record the result. However, if the user happens to exceed the end-point, the colour of the solution still continues to change, as the colour of the indicator directly reflects the pH of the solution in the conical flask. Challenges we ran into For both of our group members, this was our first experience with Spark AR. Due to our unfamiliarity with the studio, it was difficult to grasp the capabilities and limitations of the Spark AR modules from the limited documentation and examples available online. But we were able to find workarounds for many of the problems we faced, such as changing the colour of the solution at discrete intervals instead of continuously, as we could not directly edit the RGB colour of the material within the script. 
When working on the details of the project, we got stuck managing the pH calculations for an arbitrary titration, where the mathematics turned out to be quite involved, so we decided to reduce the complexity by mainly considering strong-strong and strong-weak titrations. Most of the problems we faced were resolved by finding compromises like this. Accomplishments that we're proud of Getting through the whole project without using the patch editor. What we learned Reactive programming, the Native UI picker, animations in Spark AR Studio, 3D modelling. What's next for expARiment Implement more chemicals, such as strong/weak acids/bases, and different types of chemistry reactions besides titration, such as gas collection, thermal decomposition and crystallization. Physics experiments are also possible! Mechanics simulations can be done easily using cannon-js, where students can experiment with falling objects to measure gravitational acceleration, or set up a simple pendulum to measure the period of its oscillations. The potential for expARiment is endless! Built With javascript sketchup sparkar Try it out github.com www.instagram.com www.facebook.com
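Since the team says they scripted everything without the patch editor, the long-press/single-tap burette controls described above can be sketched with the TouchGestures and Time modules. This is only a guess at the shape of that logic, not the project's code; the object name 'burette', the pour rates, and the logging are placeholder assumptions.

// A minimal sketch of the two burette controls: tap for a 0.1 cm^3 drop,
// long-press to pour continuously until the finger is lifted.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');
const Time = require('Time');
const Diagnostics = require('Diagnostics');

(async function () {
  const burette = await Scene.root.findFirst('burette');
  let volumeAdded = 0;   // cm^3 of titrant delivered so far
  let pouring = null;    // interval handle while the tap is held open

  // Single tap: fine control, 0.1 cm^3 per tap.
  TouchGestures.onTap(burette).subscribe(() => {
    volumeAdded += 0.1;
    Diagnostics.log('Burette reading: ' + volumeAdded.toFixed(1) + ' cm^3');
  });

  // Long press: open the tap fully and keep pouring until the gesture ends.
  TouchGestures.onLongPress(burette).subscribe(gesture => {
    pouring = Time.setInterval(() => { volumeAdded += 0.5; }, 250);
    gesture.state.monitor().subscribe(e => {
      if (e.newValue === 'ENDED' && pouring !== null) {
        Time.clearInterval(pouring);
        pouring = null;
      }
    });
  });
})();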
10,008
https://devpost.com/software/plant-i2zmko
World without greenery! Colourless and Dull World when we think green. Grow Green! Be green! Inspiration Do you know the best time to plant a tree? It's now! Inspired by this thought, we began to bring it to reality by creating an interactive PlantAR experience on Instagram using Spark AR. Have you ever wondered what we give our environment in return, when it provides us so much, from a molecule of oxygen to breathe to a beautiful Earth to live on? Instead, we keep depleting its resources. In this era of industrialization and urbanization, this AR effect aims to motivate people to spare some of their time to take care of Mother Earth by keeping her green, so that future generations have a better world to live in. What it does The PlantAR effect is an interactive AR experience that allows users to beautify their surroundings by making them green, view their surroundings from a more ecological perspective, and at the same time promote the idea of afforestation. This effect will let users get close to nature and motivate them to think green and dedicate some of their deeds towards protecting it. To plant a garden is to believe in tomorrow. How we built it We built the AR effect in Facebook's Spark AR platform throughout most of the month of June (2020), spending hours creating and optimizing 3D models in Blender and designing the AR logic to both run and consistently reset the experience. This is in addition to nearly 40 hours of research starting back in May and continuing alongside development in June. Materials for the 3D models were created in Spark AR. We started by creating different types of trees (using 4 models) and a particle effect in Blender; the characters were made in Blender too. Once we were happy with the models, we imported all of them into Spark AR and began piecing together the organization of our scene and the effect logic. At first, we were only going to allow the user to draw, edit, and view their natural scenery in the back-facing camera, so we got to work adding our tracked objects to a plane tracker. But eventually we settled on screen tap and screen long-press to interact with the environment. Having come from a game development background, we knew we would be more comfortable with patch editing as opposed to using JavaScript. Challenges we ran into There were three main platform-specific challenges that we had to work around in Spark: • Spark does not allow dynamic instantiation / reparenting of scene objects • Facebook/Instagram is strict when it comes to custom UI elements • Spark does not allow custom shapes for the particle system Drawing and editing your own natural elements: particle systems play an important role in the process, and not being able to customize the particle shape proved to be a major setback for the project, but it was not the end, so we instead came up with a tap interaction to create a beautiful environment. Accomplishments that we're proud of We are pretty proud of the workarounds that we came up with and the challenges we aced. One of the main achievements of this project is that we got to know and learn a new platform, create a fun AR experience and improve our knowledge. Overall, I'm just proud that we accomplished a personally ambitious goal and that we incorporated a lot of the direct feedback we got from people who used the effect. 
What we learned Of course, we learned more about building 3D experiences, about implementing feedback from users, and about navigating Spark AR, but the most important thing we learned through building this effect is that you are much more likely to put the work into building something full and complete if you're personally excited by it, conceptually, technically and otherwise; without that it will never feel complete to you. As Sir Edmund Hillary once said, "It is not the mountain we conquer but ourselves." Also, working as a team led us to ideate and brainstorm more, giving us insight into working together and achieving goals together. What's next for plant Since nature and its components are never-ending, this effect can be refurbished with many possible inputs. We aim to add a feature that allows the user to add other environmental components like butterflies, different flowers, and some aesthetic water bodies, thus making it more customizable. We also intend to include more than one background frame, for example the interior of a house to promote indoor planting, or a terrace to motivate terrace farming. It would also be fun to associate it with a game, so that children can enjoy themselves and at the same time gain the wisdom to preserve nature and maintain the greenery of their surroundings. Built With blender sparkar Try it out www.instagram.com github.com
10,008
https://devpost.com/software/back-to-the-park
Teeter Totter Hoop and ball Hoop Swing Inspiration Argentina is going through the longest coronavirus quarantine in the world, 100 days and still counting. In the metropolitan area of Buenos Aires there have been no exceptions made for children, meaning that children in Buenos Aires are dealing with some of the most severe isolation conditions in the world. This affects their mood and health just as much as their social skills and education. What it does It brings popular activities from Buenos Aires' parks into the reality of their homes, allowing children to play and interact with their parents and giving them some sense of the outdoor activity they need. How we built it We built the models with Blender and did all the programming within Spark AR. We used the patch editor to set up the interaction with the objects and allow the user to select the type of game and color through object tap; the animations were handled by vector transitions. Challenges we ran into The main challenge was understanding that we needed to make elements that were really familiar to kids if we wanted them to play with their bodies without watching the screen, so we stayed as close as possible to the most popular games in Buenos Aires' parks. Accomplishments that we're proud of We are proud of doing something to ease the confinement of families; even if it's just for a few moments of fun, it's really important. It's been interesting as well to address a real problem Argentinians are dealing with and translate it into an AR challenge. What we learned We learned that if you have a clear idea you can develop really fast; the key is to have an idea that addresses the problem before starting to produce. Spark AR is a great tool to make ideas come to life in augmented reality. What's next for Back to the Park We want to develop a gamification side of the experience so that the kids have something to play for and share with their friends, similar to how their social interaction really works at the park. Built With blender javascript particle Try it out www.instagram.com
10,008
https://devpost.com/software/ar-u-fit
AR-U-FIT, #VirtualOutfitting for Everyone!! Inspiration During COVID-19, we saw a problem: there are limitations when people want to try on new clothes in a fitting room, and it's hard to imagine all the risks you take while doing it. Because of that, most people buy their clothes through e-commerce, but new problems occur: How do they measure their body for clothing size, and what size should they buy, M or L? It's really hard to decide! How do they know whether or not the clothes they'll buy are going to match their other outfits? "Will it fit? Will it look good on my body? Will it match my style?" Those are the questions people ask nowadays. To answer them we drew on some memories from our childhood, a Barbie outfit sticker game ( https://www.amazon.in/Barbie-22303-Sticker-Stylist/dp/B006ZUUUA4 ) that has always been played by children. And so, with the help of Blender, Spark AR, and a little bit of JavaScript, we made a brand new, authentic, usable Instagram filter that can change how fashion products are perceived by customers in the future. What it does You can try different kinds of t-shirts that we have prepared for you, and also add a hat if you like. The t-shirt is resizable using a pinch gesture or the UI slider, and you can also adjust its height by tap-holding and moving it up and down to make sure it fits your body. To change the hat design, just tap on the face to rotate between the several hat designs available. How we built it Since our concept is to style the other person, we only enable the effect on the back camera. We use a face tracker to detect the face. We attached the t-shirt and hat 3D objects as children of the face tracker, so that their position and rotation follow the face. Then, we use JavaScript to customize the rotation of the t-shirt by controlling each bone of the t-shirt's rig. We mainly used the patch editor within Spark AR Studio version 90 to build the effect. A script in the JavaScript programming language was also used to create more amazing experiences. The 3D objects were modeled using Blender. We used the Spark AR Face Reference Assets as the size reference to build the 3D objects. All of this was done on a personal computer running Windows 10 with an Intel Core i5 7th Generation and 16 GB RAM. The effect was tested using Instagram and the Spark AR Player application for Android. Challenges we ran into There were 3 main challenges we ran into in the process of development: Spark AR does not really support a double UI picker; Spark AR only tracks the face, which makes the outfit position less smooth (the addition of shoulder tracking could make the filter look smoother in the future); and the guided instruction options are too limited for the full experience of AR-U-FIT; it would be great if we could have fully customized instructions to help the user explore the full feature set of AR-U-FIT. Accomplishments that we're proud of We are proud to have a proof of concept of how #VirtualOutfitting can be done using Spark AR and Instagram Stories, and we saw that this could be one of the Instagram filters with high usability and great potential for future development in the fashion industry. What we learned We learned that there are still many opportunities to be explored in the AR industry; this filter can change customer behavior and the future of the fashion industry, especially during and after the COVID-19 era. 
What's next for AR-U-FIT First, we want #VirtualOutfitting to become the new normal in the fashion industry; then we want AR-U-FIT to drive changes in how fashion products are delivered to customers in the future. Built With 3dmodelling android blender blender282 javascript patcheditor sparkar sparkar90 sparkarfacereferenceasset windows-10 Try it out www.instagram.com github.com
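The pinch-to-resize and tap-hold height adjustment described in this entry can be approximated with Spark AR's TouchGestures module. The sketch below is an assumption-laden illustration rather than AR-U-FIT's actual script: 'tshirt' is a placeholder object name, the vertical drag is approximated with a pan gesture, and the pixel-to-scene-unit factor is a guess.

// Hypothetical sketch: pinch to scale the shirt, drag to move it vertically.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

(async function () {
  const tshirt = await Scene.root.findFirst('tshirt');

  // Pinch: resize the shirt relative to its scale when the gesture started.
  TouchGestures.onPinch().subscribeWithSnapshot(
    { sx: tshirt.transform.scaleX, sy: tshirt.transform.scaleY, sz: tshirt.transform.scaleZ },
    (gesture, snapshot) => {
      tshirt.transform.scaleX = gesture.scale.mul(snapshot.sx);
      tshirt.transform.scaleY = gesture.scale.mul(snapshot.sy);
      tshirt.transform.scaleZ = gesture.scale.mul(snapshot.sz);
    }
  );

  // Drag on the shirt (approximating tap-hold): move it up or down along Y.
  // The -0.001 factor maps screen pixels to scene units and would need tuning.
  TouchGestures.onPan(tshirt).subscribeWithSnapshot(
    { y: tshirt.transform.y },
    (gesture, snapshot) => {
      tshirt.transform.y = gesture.translation.y.mul(-0.001).add(snapshot.y);
    }
  );
})();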
10,008
https://devpost.com/software/klurdy-8nkb92
Inspiration We started klurdy as a basic e-commerce store for African fashion and used data mining to onboard 100K+ products from the interwebs, and we started getting loads of traffic. People would place orders via WhatsApp and we would work with local tailors in Nairobi to deliver the product. We realized there is usually an issue with the supply chain for fabrics, so we would end up sending customers a list of the fabrics currently available and they would have to pick the ones that appealed to them. The challenge was that the customer had to imagine the design they wanted with the fabric they picked, and this might be the root cause of customer complaints where products ordered don't match what's delivered. We received multiple lawsuit threats from fashion houses whose products were on our platform without consent, which demonstrated that there was an issue with copyright in the industry. We decided to ditch the standard image-based e-commerce site in 2019 and started learning and exploring the use of 3D in the fashion industry, as we offer design services in Africa and Europe. We came across Spark AR and its amazing features in January this year, and it had that WOW factor. The pandemic has not been kind to us, as the majority of our design gigs came to a halt, having a massive impact on our revenue streams. We decided to go back to our roots, fashion, and use this hackathon to leapfrog our efforts to develop the MVP for our 3D experience. Due to social distancing, people can't go to the malls, so we want to take the clothes to them using AR/VR. What it does Klurdy is a PWA that allows end-users to visualize fashion designs with various materials that are currently available in the market. We upload custom 3D models of fashion designs and materials into our proprietary backend system and make them available in our easy-to-use product configurator. After the user has personalized the apparel to their liking, they can 'hang' it in their living room, office, or wherever they are through Instagram or Facebook stories. This allows us to take online shopping experiences offline and get feedback from friends or family. In short, we have developed an advanced hanger for any garment we choose to incorporate into our platform. How I built it As a design studio, we strive to be minimalists and find the shortest path to success. The web app is built with Angular and NestJS to prototype the web-based experience, with MongoDB as the database. We chose Babylon.js as our 3D engine due to its ease of use and its support for many features and browsers. We integrated the web app experience with Spark AR filters by deep-linking with the Facebook app. However, this did not work on IG because deep linking is unavailable there, so the IG filter was separate from the Facebook filter. This submission only includes the IG filter. The AR filter is built using JS scripting, utilizing a multitude of features. The process used in the IG filter is as follows: we added a plane tracker to detect surfaces and added a null object as the child of the tracker; this is used for touch gestures. We added the 3D model as a child of that null object. For this demo, we chose to use a simple t-shirt for girls with a minimalistic design. The 3D model loaded had different mesh parts that can be customized to offer different designs. We added textures for the various fabrics available. 
We added 4 African prints and 4 solid color textures. We then created standard materials for all of these textures and assigned default materials to the various meshes of the t-shirt's 3D model. We digitized some African prints using Adobe Illustrator, which was a fun way to see how we can manifest some of our existing skills as a design studio. We then started scripting and fetched all the textures, materials, and mesh objects from the scene. We created a native UI picker with selection options the user can use to personalize the t-shirt; on monitoring the selection, we apply the appropriate materials to the meshes of the t-shirt. We monitor the touch gestures to move, rotate and scale the t-shirt appropriately, and we added music to play in the background. All Spark AR features were implemented with scripting to achieve greater creative freedom. Challenges I ran into Since the user should be able to personalize different parts of the apparel, the format for the 3D model was a challenge, but finally the glTF format emerged as the winner for both the web and Spark AR experiences. The size of the 3D models was a challenge. We wanted to use as many polygons as we could for detail, but we had to scale down, so please don't mind the quality. IG's file size requirements were lower than FB's. We felt that the documentation for native UI module scripting is not so straightforward. You have to have a keen eye to notice that the picker textures need to be uncompressed, and for assigning visibility we had to read more on the Reactive module to make things work. The textures needed dimensions that are powers of two. We ended up using 512 by 512, though not all textures can be tiled seamlessly. It was difficult to capture the material change in the video when a texture was selected in the picker, so we just took screenshots. Many people in Nairobi use phones without gyroscopes, so it was difficult to set up a test group for feedback. As UXers, we believe in the feedback loop. The network module was not working, so we couldn't communicate with our APIs. We wanted to make one filter that downloads 3D models and materials dynamically based on which product the user interacts with on our e-commerce site. Accomplishments that I'm proud of We had to devise our own process flow for making the experience work for desktop users using QR codes. This way we are able to let users continue the experience on their phones through our custom QR code scanner page. This is a real demonstration that Klurdy is a design studio with UX engineering capabilities. What I learned We learned how to take fashion design patterns and create 3D models by ourselves using CLO software. This was a fun experience and we are starting to feel comfortable in 3D garment design. We also got to have a deep dive into Spark AR, gyroscopes, and a different style of reactive programming (we're only used to Rx). We also learned how to use Illustrator to make seamless patterns for African prints. What's next for Klurdy We want to port the web app to mobile platforms (iOS and Android) using Flutter and to desktop platforms (Microsoft and Apple app stores) using Electron. We are experimenting with Draco, a 3D compression library, so that we are able to provide amazing experiences to our users via faster download times. However, we are not sure if we'll be able to run the decoder in the AR effect as a package. 
We are moving forward to look for 50 fashion brands in Europe to work with us in our beta program as we build the next set of features, like our own AR viewport with full-body tracking and trying on clothing virtually in real time, which will allow generated video clips to be shared on social media. We also want to explore selling clothes using VR by building a 3D shopping mall that hosts fashion brands. In short, take a deep dive into AR/VR for the fashion industry before everyone realizes the huge opportunity. Add avatars to model our clothes, opening possibilities for virtual fashion runways and gamification. Add sophisticated material change animations. Built With angular.js babylonjs node.js Try it out www.instagram.com github.com beta.klurdy.com
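Klurdy's entry describes a plane tracker with a null object that receives touch gestures to move, rotate and scale the garment. The sketch below follows the commonly documented Spark AR gesture patterns for exactly that setup; 'planeTracker0' and 'placer' are placeholder names, and this is an illustration of the technique rather than Klurdy's own script.

// Sketch of plane-tracker touch handling: pan to re-anchor, rotate to turn,
// pinch to scale the null object that parents the t-shirt.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

(async function () {
  const planeTracker = await Scene.root.findFirst('planeTracker0');
  const placer = await Scene.root.findFirst('placer'); // null object holding the t-shirt

  // Pan: re-anchor the tracker to wherever the finger points on the detected plane.
  TouchGestures.onPan().subscribe(gesture => {
    planeTracker.trackPoint(gesture.location, gesture.state);
  });

  // Rotate: a two-finger twist turns the garment around its vertical axis.
  TouchGestures.onRotate().subscribeWithSnapshot(
    { r: placer.transform.rotationY },
    (gesture, snapshot) => {
      placer.transform.rotationY = gesture.rotation.mul(-1).add(snapshot.r);
    }
  );

  // Pinch: scale the garment uniformly from its size at gesture start.
  TouchGestures.onPinch().subscribeWithSnapshot(
    { s: placer.transform.scaleX },
    (gesture, snapshot) => {
      const scale = gesture.scale.mul(snapshot.s);
      placer.transform.scaleX = scale;
      placer.transform.scaleY = scale;
      placer.transform.scaleZ = scale;
    }
  );
})();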
10,008
https://devpost.com/software/sci-fi-portal
Inspiration We were inspired by watching some sci-fi movies that show cool portals. What it does It depicts a small portal to Mars using Spark AR Studio. How we built it We built it using Spark AR and Autodesk Fusion 360. Challenges we ran into We were complete beginners with Spark AR Studio, so for the first few days we had to learn how to use it. Accomplishments that we're proud of We finally completed this effect and learned how to use Spark AR Studio despite being complete beginners. What we learned Spark AR Studio, using patches in it, and displaying 3D objects in Spark AR. What's next for Sci-Fi Portal We are aiming to improve it further. Built With autodesk-fusion-360 sparkar Try it out github.com
10,008
https://devpost.com/software/covid-shooter
Launch Page patches Maya Render2 maya model progress SparkAR view Maya Render1 Inspiration Do you know that handwashing is one of the best things you can do to stop the spread of COVID-19 and other diseases? According to the CDC, “washing hands can keep you healthy and prevent the spread of respiratory and diarrheal infections from one person to the next.” And when soap and water aren’t available, hand sanitizer can come to your rescue, since it can quickly reduce the number of germs on hands in many situations when it contains at least 60% alcohol. However, as the COVID-19 pandemic goes on for months, a lot of people have begun to slack off. Therefore, I made this AR filter/game to raise awareness of the easiest thing you can do to prevent coronavirus: just press the soap/hand sanitizer bottle and wash your hands! What it does The filter starts with a hand sanitizer bottle as the “weapon” in your first-person view, using the back camera. After double-tapping the screen, the game starts. Several viruses are generated relative to the plane the camera captures (using the plane tracker) and move to random positions. The player’s goal is to shoot down all the viruses surrounding them. Press the bottle for the bottle motion, and press a virus to shoot it down. After all the viruses are gone, you win! How I built it I first built models of viruses and a hand sanitizer bottle in Maya. Then, I imported the obj files into Spark AR and added the shooting mechanisms, object motion, and game logic using patches. Challenges I ran into Getting familiar with Spark AR was a challenge, since it’s unlike 3D modeling software in terms of manipulating objects, and it’s unlike coding in terms of adding patches to achieve certain animations and behavior on objects. It is still a very nice piece of software to use, but I did need to watch a lot of tutorials in order to get to the effect I wanted. Another challenge was trying to make the virus look not too daunting while still resembling a virus, so I decided to use a 2D rendering approach to make the color more like an animation, so it’s more kid- and user-friendly to play. Using the patches to build the boolean logic is a bit challenging because it’s different from coding, and it’s easy to get really messy with the patches to achieve simple logic. Therefore it’s important to plan ahead and keep the patches organized. What's next for COVID Shooter It would be more fun and playable if I could add a scoring system and levels that generate more and faster viruses. Creating filter effects if the user is “hit” by a virus. Adding an information page to show the “keep it hygienic” message I want to spread. References https://www.cidrap.umn.edu/news-perspective/2020/04/studies-hand-sanitizers-kill-covid-19-virus-e-consults-appropriate https://www.cdc.gov/healthywater/hygiene/hand/handwashing.html https://sparkar.facebook.com/ar-studio/ https://sparkar.facebook.c Built With maya photoshop sparkar Try it out github.com
10,008
https://devpost.com/software/league-of-legends-ar-puzzle-game
Volibear Tracking Image Garen Tracking Image Ashe Tracking Image Volibear Attack Animation Garen Attack Animation Ashe Attack Animation Inspiration I really think AR is a tool that bridges the old and the new. Some classic board games and puzzles are still really fun, and they shouldn't be discounted just because they were created before the digital age. I think adding new components to established games could breathe new life into them. People wrote Tetris off as an old-school game until they developed a competitive mode. People laughed at the concept of Wizard Chess in Harry Potter when the pieces attacked each other, and everyone loved the idea of their Pokemon and Yu-Gi-Oh cards coming to life, and now with AR this is all possible. It just surprises me that companies with successful IPs haven't made these features accessible to the mass casual gaming market, especially after the monumental success of Pokemon GO. What it does The camera searches for the allocated image, that being a correctly completed Rubik's cube side. Once it recognises the target image, the animation is triggered and the League of Legends character pops up and attacks. How I built it There's Windows-friendly software called Obsidian, which allows you to fish the various animations out of League of Legends once you download it. I then imported the models and animations into Blender or Maya, scaled the animations and checked the imported textures, imported the reworked assets into Spark AR Studio, applied the animation controller (for a final working version, probably turn off loop animation) and set the target image tracker to a captured photo of the Rubik's cube's completed sides. (Disclaimer: the image tracking tends to work best with iPhones, and the test links and videos are in the order they're posted: the 1st link is for the yellow side, which is Volibear, the 2nd link for the red side, which is Garen, and the 3rd link for the blue side, which is Ashe.) Challenges I ran into I'm still yet to find animation software that imports 3D animations with their textures and materials; the champion textures and materials have a lot of parts, so finding what goes where can be a bit of a mission sometimes. It also turns out image tracking works best on iPhones. Accomplishments that I'm proud of Learning about AR and building a filter portfolio with 90 million impressions, when I have an arts degree, not one in engineering or computer science. What I learned That I enjoy interactive, physically engaging games, and that just sitting at a PC clicking a mouse isn't too enjoyable for me. Hopefully I find a market of people out there who think differently about the types of play and entertainment there are out there. What's next for League of Legends AR Puzzle game I developed a fully fledged demo for mobile in Unity with 2D puzzles, a life counter, a timer, a marketing plan and rules, which I sent to Riot Forge, Riot Games' indie publishing branch, but never received a response. I'd love to get their attention so that augmented reality puzzle games could maybe become a game category moving forward. If I continue with the concept of a Rubik's cube, maybe it would be cool to get to a point where different patterns play different animations as well, like if you make an X pattern it heals you, or certain colour combinations do more damage. That way it becomes a memory game as well, so that just memorising the algorithm to solve one side of the Rubik's cube won't guarantee a win. 
I'm honestly just experimenting with games I would like to see in the market, in the hope of gaining the attention of either Riot or Wizards of the Coast. Built With blender cinema4d maya odsidian sparkar Try it out www.instagram.com www.instagram.com www.instagram.com
10,008
https://devpost.com/software/travelar-ucme7a
Inspiration In current times, when travelling is restricted and unsafe due to the pandemic, we yearn for exotic outdoor experiences. We long to take in the scenic beauty of various tourist spots that are beyond our reach, and we feel the urge to travel to various destinations while still having the comfort and safety of our homes. Let our filter beam you from your living room to a series of destinations around the world. What it does The filter presents options (labelled spheres) on various stands in the camera view. When you tap on any option it ports you to the labelled place, where you can experience a panoramic view of the tourist spot. To come back to your camera view, tap anywhere on the screen. How I built it I built it with Spark AR using flat spheres and 3D models, where tapping a sphere initializes an animation that takes up the camera space, creating the illusion of a panoramic view. Challenges I ran into Proper scaling and toggling between the visible states of the game objects were a challenge. Accomplishments that I'm proud of Making my idea come to life. What I learned I learned a lot about the Spark AR interface and the patches. What's next for TravelAR A range of categorised destinations can be added. The user interface will gain extended features, along with interactive points of interest and infographics within the panorama, in a minimalistic design. Built With sparkar Try it out www.instagram.com
10,008
https://devpost.com/software/black-brasil
View of the art exhibition View of the concrete barriers with the pieces of art we have created Example of a piece of art created Text explaining one institution that tackles different and specific aspects of racism. Inspiration The inspiration for this filter came through the recent protests against racism all over the world, but especially in Brazil. For that reason we wanted to give Instagram users the opportunity to experience an augmented reality art exhibition whose goal is to inspire and enrich communities through the human necessity of experiencing art, and also to empower individuals to act for change. What it does While coming up with the AR filter we created art pieces for the exhibition, each of which promotes a Black Brazilian institution that tackles different and specific aspects of racism. How I built it Using Spark AR Studio and JavaScript scripts. Challenges I ran into We had to learn the Spark AR platform and create several textures for the AR art exhibition. Accomplishments that I'm proud of We are proud of our augmented reality filter prototype, since we had to work hard to create it. And we are even more proud of our research about art and the importance of empowering individuals to act for change. What I learned We learned that every individual can act for change and that, using our new skills in creating augmented reality filters for Instagram, we can support and spread messages that help enrich communities and support a larger social good. What's next for Black Brasil Our goal is to continue working on the filter so we can improve the user experience and create textures for more Black Brazilian institutions to be promoted. Built With javascript particle Try it out github.com www.instagram.com
10,008
https://devpost.com/software/ecomo
Inspiration Trees and their ecosystems are threatened in Australia. The two main reasons are bushfires and deforestation. We are astonished at how much vegetation was destroyed by one of the worst bushfires in Australian history last summer. Millions of hectares of land burned, with an estimated 445 deaths and more than 4000 people admitted to hospitals. Besides that, according to WWF, Australia is the only developed country on the list of deforestation hotspots. Therefore, we would like to take on this World AR challenge and raise awareness of tree preservation and an eco-friendly environment in an interactive and interesting way. What it does To convey the message of eco-friendly awareness, our filter EcoMore consists of three green symbols. Rotating trees represent the eternal lifecycle of Nature. Flying flowers symbolize the prosperity of living organisms. The image of hands holding the Earth means the future of our mother nature is in our hands. Everyone is responsible for taking care of our one and only planet. How I built it EcoMore is a filter that aims to raise awareness of nature preservation, so the first step was to find related eco-friendly materials, mainly pictures. The creation of this filter can generally be divided into three steps. Firstly, we insert the tree objects: we add a Face Mesh object under the Face Tracker, then drag a tree object from the Spark AR resource library onto the Face Mesh. A null object is added and offset; because we want five trees shown around the person's head, we enter the number 72, which is 360 degrees divided by 5. After that, we can rotate these trees to represent the eternal lifecycle. Next, we create flying flowers in the background, using the particle feature. Different layers for the background panel and the user panel need to be set, then a particle emitter is inserted in the same place. After that, we add a new material to the particles, which is the flower, so it will fly around. The last part is to add face paint. Here, we add a Face Mesh under the Face Tracker again. From the right-side panel, we add materials. Then, under the Help tab, we download the Face Assets. We open the masculine face, then add the image of hands holding the Earth onto the face. When finished, we hide the bottom layer of the face and save the new image as a .png. Then we drag this image onto the material we just created and change the texture. There we go: the image matches our face. Challenges I ran into One of our challenges was how to express our ideas via a filter with World AR effects. After exploring on the Internet, we brainstormed and found a way to present it. Another challenge was that when creating the particles we did not separate the layers, so the particles were stuck to the face mesh. However, after looking through online resources, we troubleshot this problem. Accomplishments that I'm proud of Our favorite part of this project is the creativity. We have an idea, and then we need to integrate our thoughts into a filter with World AR effects, which is hard. But we found an interesting way to raise everyone's awareness of nature preservation. What I learned We had used Spark AR before, but this project made us more familiar with some hidden features, especially particles. Also, there are abundant online tutorials and resources about Spark AR. We joined the Spark AR Community on Facebook to connect with other Spark AR professionals and enthusiasts. What's next for EcoMore EcoMore is more of a presentation to raise awareness of nature preservation. 
In the future, we would like to add more interactive features to EcoMore 2.0, such as changing the background or choosing different plants to rotate around the head. Built With adobe-illustrator particle photoshop Try it out www.instagram.com
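EcoMore's rotating-trees setup was built in the Spark AR editor (five trees spaced 72 degrees apart on a parent null that spins). Purely as an illustration of the same arithmetic and motion in script form, and not the authors' approach, the sketch below places five placeholder tree objects on a circle and rotates their parent with the Animation module; all names and values are assumptions.

// Script-form equivalent of the carousel idea: 360 / 5 = 72 degrees between trees.
const Scene = require('Scene');
const Animation = require('Animation');

(async function () {
  const carousel = await Scene.root.findFirst('treeCarousel');
  const trees = await Promise.all(
    ['tree0', 'tree1', 'tree2', 'tree3', 'tree4'].map(n => Scene.root.findFirst(n))
  );

  // Space the five trees evenly on a circle around the head.
  const radius = 0.15; // scene units; tune to taste
  trees.forEach((tree, i) => {
    const angle = (i * 72) * Math.PI / 180;
    tree.transform.x = radius * Math.cos(angle);
    tree.transform.z = radius * Math.sin(angle);
  });

  // Spin the parent null forever to suggest the eternal lifecycle.
  const driver = Animation.timeDriver({ durationMilliseconds: 8000, loopCount: Infinity });
  const sampler = Animation.samplers.linear(0, 2 * Math.PI);
  carousel.transform.rotationY = Animation.animate(driver, sampler);
  driver.start();
})();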
10,008
https://devpost.com/software/emoji-ddr
Inspiration It’s sad that we couldn’t get out to arcades to play our favorite Taiko Master once coronavirus social distancing and city lockdowns kicked in. We came across Beat Saber mixed reality on YouTube, but figured that we couldn’t afford such expensive VR headsets… That’s how we decided to make our own beats/rhythm game. And thanks to the Audio Analyzer released in v89, creating a music-themed effect has been a really cool journey. What it does An innovative take on Dance Dance Revolution, Emoji DDR drops stickers according to the beats of a song. If the user successfully makes the corresponding facial expressions and/or head rotations as the emojis hit the score bar, they get points. How we built it We used Spark AR Studio v89 to build the Emoji DDR game. Most of the game logic and animations were created in the patch editor. We used almost all of the built-in patches, including the newly released Audio Analyzer. Game elements Emojis: we imported beat parameters (on a time scale) into the patch editor to control the falling of emojis for different songs. Here we used Loop Animation to generate random numbers and trigger a random pattern of emoji-falling animations. Tracks: we used the Audio Analyzer to change the texture of the background tracks in response to the power of certain frequency bands of the music. User: we placed the user in the game using the camera texture and person segmentation. Score logic We used the Face Tracker to track user faces and detect interactions like leaning the head, opening the mouth and blinking. Once an interaction is detected, it sends out a trigger to compare the emoji position with the hit-bar position. If they match (within some tolerance), we add one point to the total score. Meanwhile, the bar flashes and a scoring sound is played. Game flow To provide a complete game flow, we designed a two-page onboarding process, a replay button during the game, and a restart option on the game-over page. We applied Offset to Runtime to manage the timeline of the game. The visibility of the various game elements was made possible with the Counter patch. Challenges we ran into Our two-person team was based on opposite sides of the world: Aileen was in Beijing and Xiaozhi was in St. Louis, with a 13-hour time difference. Sometimes we were awake at 3 AM to make it work for the other person’s schedule. It’s all worth it now that our game is complete! One technical challenge was that we only realized each effect can’t exceed 4 MB in size after we had finished making the game. Thus, we had to cut the number of songs to fall within the allowed size. Accomplishments that we're proud of We are most proud of the fact that we could turn our idea into a real, working effect, given that we had zero knowledge of Spark AR before entering the hackathon. And ~200 views on our demo video within 1 day wasn’t bad, either! What we learned Kudos to Facebook for making Spark AR such a lovely low-code developer tool that we enjoyed our development process 100%. In addition to working out the technicalities of Spark AR, we learned how to decide on and refine our project goal, as well as its details, according to what’s accessible in the developer tool, so that bringing our ideation to reality wasn’t built on a whim. It was also amazing to talk with people in the game and music production professions to pick their brains on beat design. On a side note, we also learned how to use Git and GitHub (a first ever for Aileen!) thanks to this project. 
What's next for Emoji DDR — Hit ’em all Improve game playability and fun: beats extraction and design, combo effects, song selection. Figure out the song copyright issue and then publish the effect on Facebook and Instagram. Work on compression methods to pack as many songs as possible into our effect. Promote the lovely, low-code Spark AR developer tool. We’re also thinking of making a series of emoji effects/games that wouldn’t run into the size problem. A few ideas include: On-screen emojis for video chats: detect the user’s facial expression and generate emojis accordingly; we envision this effect being integrated into video chats. Mimic Emoji Game: the user tries to mimic as many emojis as possible as they float across the screen. Built With javascript sparkar Try it out github.com
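Emoji DDR's scoring was built entirely with patches, so the following is only a script-form sketch of the hit-window comparison the team describes: when the player opens their mouth, check whether the falling emoji is close enough to the score bar and, if so, add a point. Object names, the mouth-openness threshold, and the tolerance are placeholder assumptions.

// Hedged sketch of the "compare emoji position to hit bar within a tolerance" idea.
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');
const Diagnostics = require('Diagnostics');

(async function () {
  const emoji = await Scene.root.findFirst('emoji_mouth');   // falling sticker
  const scoreBar = await Scene.root.findFirst('score_bar');  // target bar

  const HIT_TOLERANCE = 0.03; // scene units on either side of the bar
  let score = 0;

  // True while the tracked face's mouth is open past an arbitrary threshold.
  const mouthOpen = FaceTracking.face(0).mouth.openness.gt(0.4);

  // Fire once each time the mouth transitions from closed to open.
  mouthOpen.onOn().subscribeWithSnapshot(
    { emojiY: emoji.transform.y, barY: scoreBar.transform.y },
    (event, snapshot) => {
      if (Math.abs(snapshot.emojiY - snapshot.barY) <= HIT_TOLERANCE) {
        score += 1;
        Diagnostics.log('Hit! Score: ' + score);
      }
    }
  );
})();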
10,008
https://devpost.com/software/perfect-weather-prmtqa
This shows the demo of autumn with maple leaves, location, temperature and time. Inspiration Not everyone likes every kind of weather, and mostly we don't get the weather we want. So we wanted to change that and let people get whatever weather they want around them in a virtual experience. What it does It provides the user with a unique weather experience, along with the location, time and temperature of the user's area. The user can experience different weather conditions such as rain, snow and autumn with maple leaves, and can also mix them up and experience them together to get a unique visual. How we built it We used the particle system for a realistic experience of rainfall, snow and falling maple leaves. Textures were used both as the selectable options and in the particle system, and the patch editor was used to take care of the visibility of the effect. Challenges we ran into Making a menu for users to select the weather and making the particle system look more realistic were challenges for us. Accomplishments that we're proud of We learned how particle systems work and made them as realistic as we could. What we learned We learned how the particle system works and also how to make a UI that improves the filter and the user experience. What's next for Perfect Weather We can add more weather types to it over time. Built With sparkar Try it out www.facebook.com
10,008
https://devpost.com/software/thrill-me
Spark project Inspiration - my love of dance, AR and Jacko. What it does - it's an audio-reactive dancing 3D model of Michael Jackson. How I built it - I designed and rigged the avatar in C4D, then imported it into Spark and linked it all up with the energy meter patch and the tombstone animations so it all moves in time. Challenges I ran into - bones, bones, bones. Also I don't have licensing for his track, so I had to make a Casio keyboard version!! Accomplishments that I'm proud of - I think his dance moves are pretty close to the Thriller music video. What I learned - lots of new things about the challenges of designing 3D animation and then importing it correctly into Spark. What's next for Thrill Me - I would like to approach record labels about creating other characters. Maybe Madonna or Beyonce would be fun? The 3D animation was all designed by myself in C4D and is fan art. It's not taken from any official Michael Jackson assets. The Michael Jackson head is from TurboSquid but has also been edited slightly in C4D - https://www.turbosquid.com/FullPreview/Index.cfm/ID/568006 The track playing in the video I made for this submission is the original 'Thriller', but in the actual project and published effect it's a free fan version made by this guy who gives it away free on his YouTube - https://www.youtube.com/watch?v=T1OxhoKhNOQ Built With c4d particle Try it out www.instagram.com
10,008
https://devpost.com/software/tsby
After rounds of refinement, still much work to do Inspiration There are many occasions when, during a friends' meet-up, one or two people are not able to make it. A lot of the time, we print the faces of the friends who can't make it and use them for a group picture. We have lots of fun taking those pictures. What if we could use AR to replace those printed A4 papers? This is especially relevant during this pandemic, when gatherings are not encouraged. Being able to virtually spawn a friend on your screen is awesome! On top of that, our team spans 3 different countries and 3 different time zones. We are having fun hacking AR to make it seem like we are working together in the same location. What it does When you turn on the Instagram filter, it is supposed to let you upload a picture of a person, translate it into AR and make it appear virtually next to you. For hackathon purposes, it spawns Dara and Margo next to Tien so it seems like we are working together for the hackathon. How we built it We built it with Spark AR. We also used Blender for our 3D model. Challenges we ran into 3D modeling is hard on our mediocre laptops; they become very toasty and it is time-consuming to render. Besides that, we had an issue with moving two objects on the screen. Another huge challenge keeping us from building the solution we envision is that we have no idea how to enable the user to upload any picture and convert that picture into a 3D model on the fly. Accomplishments that I'm proud of Being able to complete this challenge despite having 0 knowledge of Spark AR. On top of that, we are a team from 3 different countries and 3 different time zones! What we learned Spark AR is an awesome tool for creating AR filters for Facebook and Instagram. What's next for FriendAR We hope that we can at least get some acknowledgment from the Facebook hackathon team. We will continue to improve our 3D model to make it look more realistic. We would like to explore the solution we envision, which is to enable the user to upload any face/whole-body image and convert it into a 3D model on the fly for the filter. Ideally, we foresee the following project milestones: We need to introduce the possibility of selecting one or several photos of a friend from the user's collection. An automatic or semi-supervised algorithm such as Graphcut [1] or a modern segmentation net (such as MobileNet) will extract the person from the collection of images. A 3D avatar model is then created, using recent works that propose single-shot 3D avatar creation [2]. Then the obtained 3D model can be used in our AR project. Friends are always near! References Vicente, Sara, Vladimir Kolmogorov, and Carsten Rother. "Graph cut based image segmentation with connectivity priors." 2008 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2008. Li, Zhong, et al. "3D Human Avatar Digitization from a Single Image." The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. 2019. Built With blender sparkar Try it out www.instagram.com www.facebook.com github.com
10,008
https://devpost.com/software/sonic-in-3d-world
Sonic in the real world Inspiration I am inspired a lot by the creators of Facebook and Instagram filters. What it does It is a funny cartoon. Children can play with it. How I built it I built it with Spark AR Studio and also used the AR Library. It took me about 4 hours to build. Challenges I ran into Publishing and submitting were a great challenge for me. Accomplishments that I'm proud of I am proud because it's my first 3D filter project. What I learned I learned the whole process and applied it too. What's next for Sonic in 3D world I'll make it better, and it will be available on the front camera as well. Built With sparkar Try it out www.instagram.com www.instagram.com
10,008
https://devpost.com/software/desgining-room
Room of Lion Study room Store your valuables Bird's eye Rest room Inspiration Earlier I was not aware of what Augmented and Virtual Reality were, but I got some knowledge from one of my friends and then started working on Instagram face filters and Snapchat filters. What it does My model gives you a brief idea of how augmented reality works on a 3D model, where we used different types of rooms. How I built it We built it using Spark AR, and for the 3D models we took the help of Blender and Sketchlab. Challenges I ran into There were many challenges because this is my first Facebook AR hackathon, and we faced many problems using the patch editor. Accomplishments that I'm proud of At last I am proud that I have completed this challenge, and I am happy about it. What I learned I learned how to make a world AR effect, how to build 3D models, and how to deal with the patch editor. What's next for Designing Room Well, it's a surprise; when you see the video you will get it. The effect is full of different types of rooms. Built With 3dmodel ar arspark blender planetracker sketchlab worldeffect Try it out github.com
10,008
https://devpost.com/software/world-filter
Day-time environment Monster during day-time Night-time environment with snowfall Monster during night-time Snowfall during night Inspiration Since we have to make a world filter, it is kind of cool to be in another place, so I thought I would work on this. What it does As the filter loads, it first checks whether the front or back camera is open. If the front camera is open, it asks the user to switch cameras through custom instructions, and after changing to the back camera it asks the user to move the device to have a look. Then the environment or surroundings of the user change. It has a UI Picker with two options: one for day-time and another for night-time. During the night, snowfall also takes place using a particle emitter. There is slow background music to match the theme. How I built it First, the 3D model was prepared. The sky is made from a sphere, while the ground is made from a plane in Blender, along with the rest of the scenery. Then the textures were applied. In Spark AR, I placed the object in the world environment along with a particle emitter. The emitter was given a snow texture to give the effect of snowfall. A speaker was added, which plays the background music. I used a script for the UI Picker, and the variable from the script is used in the patches to change the textures. Custom instructions are implemented using patches. Challenges I ran into Firstly, the 3D object was not so easy to work with; compression took most of the time. Then, I was planning to use the DeviceMotion module to move things with device movement, but no result was obtained, so I switched to the UI Picker to give a good world AR effect. And as my system is old, all the work took a long time to complete. Accomplishments that I'm proud of The filter I made is what I am proud of, because these were all new concepts for me, e.g. the UI Picker, the DeviceMotion module, scripting, 3D models, exporting, compression. But after the effect was completed, it was a satisfying feeling. What I learned The things I learned are: Blender, Spark AR - the UI Picker, sound, speakers, textures, compression, filter uploading, using patches, working with scripting and many more things. What's next for World filter I will keep working on this filter if somehow I find a way to move a 3D object based on device motion, and I will create somewhat more enhanced scenery. Built With blender sparkar Try it out www.instagram.com github.com
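The entry above describes a script-driven UI Picker whose value is read by the patch editor to swap the day/night textures. As a rough, hedged illustration of that wiring (not the author's script), the sketch below forwards the picker's selected index to a "From Script" patch variable; the texture names and the 'weatherIndex' variable are placeholders that would need matching assets and a patch variable in the project.

// Send the NativeUI picker selection into the patch graph, where patches swap
// textures, toggle the snow emitter and switch the ambient audio.
const NativeUI = require('NativeUI');
const Textures = require('Textures');
const Patches = require('Patches');

(async function () {
  const [dayIcon, nightIcon] = await Promise.all([
    Textures.findFirst('dayIcon'),
    Textures.findFirst('nightIcon'),
  ]);

  const picker = NativeUI.picker;
  picker.configure({
    selectedIndex: 0,
    items: [{ image_texture: dayIcon }, { image_texture: nightIcon }],
  });
  picker.visible = true;

  // 0 = day, 1 = night; the patch editor reads this scalar and reacts to it.
  picker.selectedIndex.monitor({ fireOnInitialValue: true }).subscribe(e => {
    Patches.inputs.setScalar('weatherIndex', e.newValue);
  });
})();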
10,008
https://devpost.com/software/audio-playar
Inspiration When sharing music on Instagram stories, people usually use screenshots or the share option in their music app with a plain background. Even if they use the music sticker, it is not well integrated with the photo or video they have in the background. With stories like this, people, myself included, skip to the next story because it is not interesting. I wanted to come up with a way of sharing music that attracts attention and looks nice in any environment. I named this effect audio playAR in the hope that people won't feel restricted to just sharing their favourite song; there is room for users to get creative and share other forms of audio such as podcasts, presentations, speeches, etc. What it does Users can share their song by first selecting the album cover from their camera roll and playing the song in the background. Once they press record, audio visualiser bars start reacting to the sound. By tapping and pinching, users can reposition and resize the effect. With its sleek look, this effect will go with any background. How I built it I built this using 21 cubes from the AR Library. The animations and textures are all made using patches. Challenges I ran into It took a while to work out the placement of each object. I also had difficulty making the bars expand only on the y axis with the audio analyser, but I managed to come up with a solution. Accomplishments that I'm proud of I am happy and proud that I managed to build this effect with just patches and assets found in the AR Library. This also allowed me to achieve the simplicity I wanted for the effect, so I am happy about that too. What I learned I learned that null objects are quite useful. What's next for audio playAR I want to make the effect work with Instagram Music once it is available in all countries and on all devices. Built With patches Try it out www.instagram.com
10,008
https://devpost.com/software/physics-ar
Overview Ambulance Wave Machine Newton's Cradle Inspiration We love physics and wanted to help others learn about these important natural properties, as well as give people in these (or future) trying times access to a way to view physics experiments and see them demonstrated visually and easily. What it does Demonstrates various properties of physics, like the Doppler effect, transverse waves, and conservation of energy, with each of the experiments. How I built it I used Blender to create the 3D model of the ambulance as well as its animation. Then I imported it into Spark AR alongside Michael Li's wave machine and Newton's cradle, where I positioned each of the objects so that they looked good, using the plane tracker to detect when there was a space in which the experiments could be shown. Following that, I made the materials for each component of each object in Spark AR to give them life and intrigue. Lastly, I used the animation playback controller and the audio playback controller to let each object play the animation and audio needed to demonstrate its physical property. Challenges I ran into The lack of support for materials in Spark AR when importing from Blender, and the file size limit for Spark AR world effects on Instagram (4 MB). Accomplishments that I'm proud of Starting the hackathon and being organized in our milestones throughout it, and learning how to use Blender and Spark AR. I am also proud of creating my ambulance and animating it from scratch. Furthermore, I am proud of how I positioned the ambulance, wave machine and Newton's cradle in Spark AR to make it look nice and make it easy to look around at each of the experiments. Lastly, I am proud of using Spark AR's plane tracker technology, the animation playback controller, and the audio playback controller to let people see the experiments with their animations and audio. What I learned How to use Blender, how to use Spark AR, how to use Audacity. What's next for Physics AR Make an app. Built With audacity blender spark-ar Try it out github.com drive.google.com www.instagram.com
10,008
https://devpost.com/software/physics-ar-part-2
AR Overview
Inspiration
We were inspired to make this effect because we love physics and wanted to let other people see these physics models in action.
What it does
The effect displays three physics animations: Trebuchet, Rocket, and Free Fall.
How we built it
We built our animations in Blender and then created baked keyframe animations for these models.
Challenges we ran into
There were some compatibility issues between our Blender models and Spark AR when we imported them. In addition, we spent a lot of time whittling down our file size to fit the 4 MB limit allowed for Instagram effects.
Accomplishments that we're proud of
Our physics animations took hours of work, from making the 3D models to animating them realistically, which proved challenging.
What we learned
We learned that Spark AR is a great tool for making AR effects for both Facebook Camera and Instagram Camera. We also gained experience making 3D models in Blender.
What's next for Physics AR Part 2
We want to expand this into an AR app.
Built With blender sparkar
Try it out www.instagram.com github.com
10,008
https://devpost.com/software/garden-ahsu4p
Inspiration
Connecting with friends is especially hard nowadays during the COVID-19 pandemic. Since we are unable to physically meet with friends or travel back to our hometowns, we wanted to come up with a fun way for people to feel a sense of connection while having fun. This is how we came up with the idea for gARden: gardening is both a therapeutic and a fun activity, and we wanted to bring a sense of community, just like how people come together in community gardens.
What it does
gARden is an Instagram filter where you can grow your very own plant. You start out with a seed and are given 3 actions to grow your plant into a healthy flower: (1) a watering can to water your plant, (2) a pesticide to get rid of those pesky pests trying to kill your plant, and (3) a dose of sunlight to feed your plant and make it happy. After tending to your plant, you can share your accomplishment (or failure, if your plant dies) with your community and challenge them to grow their own plants and share them as well.
How we built it
Our tech stack consisted of SparkAR Studio, Maya, and JavaScript. We had the Patch Editor from SparkAR Studio and JavaScript working in tandem to create animations and interactive effects. The script was used specifically for the rotating seed and the NativeUI Picker (a hedged sketch of such a picker script appears after this entry). We initially had all the options from the picker interface visible to the user at the same time, but to create a stronger narrative, the script executes a set of commands based on what the user has selected. Maya was used for 3D modelling. All the objects were exported as .obj files along with their material layers and imported as assets into Spark AR Studio. We also used Git to work remotely and stay on top of versioning.
Challenges we ran into
The biggest challenge we ran into was using GitHub with SparkAR Studio; SparkAR projects can't seem to be worked on in parallel in Git without overwriting each other. Another challenge was learning SparkAR Studio and understanding the strengths and limitations of the software. As we realized that the features in SparkAR Studio are relatively restrictive compared to Unity, we had to work our way around some constraints.
Accomplishments that we're proud of
We are very proud of the outcome and what we were able to make. Given our busy schedules, we were happy to deliver a whole finished product.
What we learned
We learned that working remotely during a hackathon is much slower than doing it in person, but at the same time we learned a great deal about managing our time and efficiently picking up new skills and tools in a limited amount of time.
What's next for gARden
Ability to save your plant in order to view other users' plants on the same screen. Implement a leaderboard to be used between users.
Built With javascript maya sparkar
Try it out github.com
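The project's actual script lives in its GitHub repository; as an illustration only, here is a minimal sketch of the NativeUI Picker pattern described above. The texture names ('waterIcon', 'pesticideIcon', 'sunIcon') and the script-to-patch scalar 'action' are hypothetical, the Picker capability must be enabled, and exact configuration keys can vary between Spark AR versions.

```javascript
// Hedged sketch: drive the watering-can / pesticide / sunlight choice with the
// NativeUI Picker and hand the selection to the Patch Editor, which runs the
// matching animation. All asset and patch names here are illustrative.
const NativeUI = require('NativeUI');
const Textures = require('Textures');
const Patches = require('Patches');

(async function () {
  const [water, pesticide, sun] = await Promise.all([
    Textures.findFirst('waterIcon'),
    Textures.findFirst('pesticideIcon'),
    Textures.findFirst('sunIcon'),
  ]);

  const picker = NativeUI.picker;
  picker.configure({
    selectedIndex: 0,
    items: [
      { image_texture: water },
      { image_texture: pesticide },
      { image_texture: sun },
    ],
  });
  picker.visible = true;

  // Forward the chosen action index (0, 1 or 2) to the patch graph.
  picker.selectedIndex.monitor().subscribe((event) => {
    Patches.inputs.setScalar('action', event.newValue);
  });
})();
```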
10,008
https://devpost.com/software/air-quality-index-lxtb8n
Inspiration
What it does
It simulates the air particles in the surroundings.
How we built it
Using three particle systems in world AR.
Challenges
Integrating a weather API, which we plan to do in the future.
What we learned
Using Spark AR to build really cool effects.
What's next for the Air Quality Index
In the next phase, we will integrate it with a weather API to get the Air Quality Index and adjust the weather particles based on that data.
Built With react-native sparkar
Try it out www.instagram.com github.com
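The weather API integration is still only planned, so the following is an illustrative sketch of how a fetched AQI value could drive one of the three particle systems. The emitter name 'dustEmitter' and the hard-coded AQI stand in for the real API call, and it is assumed that the emitter's birthrate is scriptable in the Spark AR version used.

```javascript
// Hedged sketch of the planned idea: scale a dust emitter's birthrate with an
// AQI value. 'dustEmitter' and the hard-coded AQI below are illustrative; the
// real effect would obtain the AQI from a weather / air-quality API.
const Scene = require('Scene');

(async function () {
  const emitter = await Scene.root.findFirst('dustEmitter');

  // Placeholder value until the API integration exists (0 = clean, 500 = hazardous).
  const aqi = 180;

  // Emit more particles per second as the air gets worse, capped at 600/s.
  emitter.birthrate = Math.min(600, aqi * 2);
})();
```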
10,008
https://devpost.com/software/running-robot
Inspiration
Make Instagram a fun place to play and share with other people. The way we came up with is to make use of AR in a game. We also hope this game is a key step toward future AR game development: since AR content is a digital asset, people can enjoy a more realistic experience without real assets, so AR games have a potential market.
What it does
A robot and a road appear in the game, along with commands to control the robot. Users must steer the robot to the goal without stepping off the road.
How we built it
We used Spark AR and mainly scripting to generate random roads and to detect whether the robot is on the road or has reached the goal (a hedged sketch of this kind of check appears after this entry).
Challenges we ran into
Comparing the positions of the different objects in the game was challenging since their scales were very different. Understanding planes was also important for keeping everything tracked to the same plane while users move their devices. Animating the robot's motion was challenging since we had to apply the running animation only while the robot is actually running.
Accomplishments that we're proud of
Understanding how to use Spark AR and quickly getting to work with the platform. Realistic movement of the robot.
What we learned
How to use Spark AR, general concepts in AR, and how to use scripting for Spark AR.
What's next for Running Robot
We are thinking about letting friends play interactively on Instagram, since it is really a social platform; users could also live-share it with friends. Add more options such as a time attack mode or a variety of stages.
Built With particle
Try it out www.instagram.com
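The on-road and goal checks are described only at a high level, so the following framework-agnostic JavaScript sketch shows the kind of geometry test involved. The tile size, tile positions, and the choice of the last tile as the goal are all invented for illustration and are not taken from the project's actual script.

```javascript
// Hedged sketch: decide whether the robot is still on the road and whether it
// has reached the goal. Tile size and positions are illustrative only.
const TILE_SIZE = 0.1; // metres; assumed width/depth of one road tile

// The road is a list of tile centres in the tracker's local space.
const roadTiles = [
  { x: 0.0, z: 0.0 },
  { x: 0.0, z: 0.1 },
  { x: 0.1, z: 0.1 },
];

function isOnRoad(robotX, robotZ) {
  // On the road if the robot lies inside any tile's footprint.
  return roadTiles.some(
    (tile) =>
      Math.abs(robotX - tile.x) <= TILE_SIZE / 2 &&
      Math.abs(robotZ - tile.z) <= TILE_SIZE / 2
  );
}

function hasReachedGoal(robotX, robotZ) {
  // The last tile is treated as the goal in this sketch.
  const goal = roadTiles[roadTiles.length - 1];
  return (
    Math.abs(robotX - goal.x) <= TILE_SIZE / 2 &&
    Math.abs(robotZ - goal.z) <= TILE_SIZE / 2
  );
}

// Example: a robot at (0.02, 0.11) is on the road but not yet at the goal.
console.log(isOnRoad(0.02, 0.11), hasReachedGoal(0.02, 0.11));
```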