Dataset schema: title (string, 1-200 chars), text (string, 10-100k chars), url (string, 32-885 chars), authors (string, 2-392 chars), timestamp (string, 19-32 chars), tags (string, 6-263 chars).
How to Make Your Baby Smart in the Womb
March 07, 2019

Every mother dreams that her child will be smart and sharp-minded. But did you know that much of this depends on the mother's diet? This raises the question of what to eat during pregnancy to have an intelligent baby. The physical and mental development of the child depends on what the mother eats during pregnancy, so you should eat foods that support the proper development of the baby's brain. Getting the right amounts of folic acid, vitamin D, iron and other nutrients during pregnancy can raise the child's IQ. Let's take a look at some such foods.

What to eat during pregnancy to make your baby intelligent

Below are certain foods that help a baby's growth and brain development.

Folic acid in spinach and lentils

Folic acid is very important for the development of a child's brain cells, and it is abundant in lentils and other pulses. According to one study, women who took plenty of folic acid from four weeks before conception through the first eight weeks of pregnancy reduced the chance of developmental delays in their child by almost 40 percent. Iron helps deliver oxygen to brain cells during pregnancy, and green leafy vegetables like spinach are a good source of both folic acid and iron.

Eggs

Eggs are rich in amino acids, which help improve a child's brain development and memory. Pregnant women can consume two eggs a day, which covers the daily requirement of choline. Low-birth-weight children also tend to have lower IQ levels; eggs contain plenty of protein and iron, which help keep the child's weight on track.

Yogurt

Your body has to work harder to develop the child's brain cells in the womb, and that requires extra protein. In addition to other protein sources, you should also take yogurt, which is high in protein.

Blueberries, full of antioxidants

Blueberries, artichokes, and tomatoes contain high amounts of antioxidants. These foods support brain development by helping build the child's brain tissue. If you want your baby to become smart, it is important to consume them during pregnancy.

Fatty fish rich in omega-3 fatty acids

Fatty fish such as salmon, tuna, and mackerel are high in omega-3 fatty acids, which are very important for a child's brain development. According to one study, women who ate fish less than twice a week had children with noticeably lower IQ levels than those who ate fish more often.

Final words

We all know that living a healthy lifestyle when you are expecting helps your baby grow big and strong; in the same way, the foods discussed above support your baby's brain in the womb. Every food you eat during the journey of pregnancy has a direct or indirect effect on your baby, and the foods listed above have been shown to help your baby's memory and ability to learn. So every woman who is pregnant and soon to be a new mommy should eat healthy meals that support her baby's growth and brain development. Building a strong foundation on a healthy environment and diet helps ensure that your baby will have a bright tomorrow.
https://medium.com/@manoj61184/how-to-make-your-baby-smart-in-the-womb-3cb7e7ca9f13
['Doctor Shifu']
2019-03-07 15:01:57.827000+00:00
['Nutrition', 'Tips', 'Home', 'Pregnancy', 'Baby']
Getting Started with Flutter? Let's Learn the Purpose of the Default Files in a Flutter Project.
Whether you are new to Flutter or experienced, you may have wondered about the default files in a Flutter project. If so, let's read on. Simply put, to make a basic app, you only need to focus on the "lib" directory and the "pubspec.yaml" file. But what are those other files? Why are they needed? Let's see, one by one.

android and ios

The "android" and "ios" directories hold a complete Android and iOS app respectively, with all their respective files. For example, if you go into "android" you'll find a complete Android project including an AndroidManifest.xml, activities, Gradle files, etc. If you want to write any platform-specific code or add permissions, you'll have to edit these projects.

lib

The "lib" directory holds all your .dart files, and most if not all of your code will live here. It contains the main Dart code used to run your app. "main.dart" is the entry point of the Flutter application; later, when we start creating multi-screen applications following a certain design pattern, we will have to create more files here.

test

As the name suggests, this folder is used to store and manage the testing code for the application, just like the lib folder holds the app code. The "test" directory is for writing tests in Dart, similar to instrumented tests in Android using Espresso. Tests help you verify that a component works without having to check it yourself. There's a sample test written for you which you can run to see how it works. The syntax is simple enough to try out a few tests yourself, but we'll cover this in a later article.

The remaining items are configuration files used by the IDE of your choice and by the language tools. Let's go through them one by one.

.gitignore

This hidden file stores the list of files which should be ignored when the source code is uploaded or checked into any Git versioning system, for example GitHub.

.metadata

This hidden file is used by the IDE to track the properties of the particular Flutter project. It saves data related to the project.

.packages

The ".packages" file is used by the pub package manager, which manages components inside the IDE; pub uses this hidden file to keep track of the packages.

pubspec.lock

Just like ".packages", this file is used by the pub package manager, in this case to pin the concrete versions and other identifying information of your dependencies.

pubspec.yaml

Here we can add third-party libraries. The "pubspec.yaml" file lists all of the packages you've imported. (For Android developers: this is equivalent to your Gradle files where you add dependencies.)

purpose_of_file.iml

The .iml file, named after our project, is used by the editor engine to get the configuration of the Java module used by this project.

README.md

The final file is the README.md. This is a markdown file and, as the name suggests, a readme which can contain any information you would want to mention about the project. It is an optional file.

So this is the basic information about the default files in a Flutter project.
https://medium.com/@dnyaneshwarbhusare47/getting-started-flutter-lets-know-the-purpose-default-files-in-flutter-project-9d341a238ba6
[]
2019-10-11 08:56:24.278000+00:00
['Flutter', 'Flutter Widget', 'Getting Started', 'Visual Studio Code', 'Flutter Ui']
Daily rituals to support you to rest, restore and renew
Daily rituals are essential for dialling down stress and pressure and reconnecting with self.

In the fast pace of life and the hyper-connected world we live in, wellness and self-care rituals can often fall to the bottom of our list of priorities. As someone who revels in the busyness of life and making the most of possibilities, if I am not careful, self-care can take a back seat. But when I have things under control, I absolutely cherish my mini-timeouts or downtime and my deep rest and renew rituals. At any given moment, I try to connect with calm and the ambient energy around me, even if it is just for a short time, and celebrate the connection with my heart, mind and soul.

When we engage in rituals, and the opportunity to indulge in self-care and connect, we can enhance the quality of our lives. When we are aligned and attuned to the environment around us through rituals and honour each moment, these nurturing moments enable us to ground ourselves, take a breath and gain perspective. Doing so regularly as a ritual helps us to connect more deeply as human beings moving through our bodies and the life around us.

Rituals can transform daily routines and habits from the mundane to something quite powerful and incredibly meaningful. These moments of devotion enable us to engage in self-care as a matter of normality. The rituals that you engage in are essential for managing your energy levels and re-energising right across your day, from sunrise to sunset. Your rituals serve as a reminder, a motivator, a guiding light to keep you on track and to prevent you from burning out.

What is the difference between ritual and routine?

Rituals are a meaningful and mindful way of connecting with your energy with focused attention and intention. Through rituals, you give yourself permission to create the time and space to connect the reality of life with your inner energy, your intuition, your passions and aspirations. Rituals can be powerful, meaningful and symbolic, in contrast to a routine, which is a simple action. The primary difference is the mindset and attitude behind the action.

Routines are like checklists: they provide the framework for your day. You don't have to place a lot of focus on what comes next, as routines occur naturally and you may find you even do them unconsciously. Routines are about functioning and don't take too much brainpower; in fact, they alleviate a lot of stress in the day, as they are things you often do without even thinking. They are essential as they serve as a checklist to keep us on track and moving in the right direction. You may find that you 'mindlessly' engage in routines. This may not be bad for the brain, as it does not draw down on your mental energy; in fact, your routines may even alleviate stress on your mind!

Rituals, on the other hand, provide us with a sense of control and security. We can turn everyday actions into more sacred and meaningful events to bring more meaning and fulfilment into our lives. Make a ritual your own. Make it personal, make it meaningful. The beauty of rituals is that these behaviours become habits, and most of all, that they can be created from just about anything.

Getting started with rituals to manage your energy

This process invites you to connect your routine and habits from your outer world and bring them into your inner world with purpose and meaning.
This process of blending the routine with the sacred is something you can do to mindfully check in with your body, mind and soul and your emotions, and to reconnect with your inner wisdom and intuition. Your rituals are one way you can manage your energy during your day. From sunrise to sunset you can identify moments to respect and affirm, assemble and gather, adorn, nurture and nourish, connect and retreat. You may associate rituals with religion or spirituality. However, Deep Rest and Renew enables you to transform any habit or routine into a ritual with the right attitude, perspective and behaviour. Deep Rest and Renew Rituals are based on the themes of respect and affirm, assemble and gather, adorn, nurture and nourish, connect and retreat. Here is how you can get started:

Respect and Affirm

Affirmations are one way you can show respect and self-love and build the connection to yourself. When you start to respect yourself, including your mind, your body and your soul, others will also. This applies whether it is self-esteem, boundaries, believing in your own good, your abilities, what you deserve or how you are treated.

Set your intention for resting deeply

Be clear right upfront about your practice and what you want to achieve. To bring calm. To bring rest. To bring clarity. To bring peace. Say it out loud to yourself. Make it a ritual. Devote it to you. To your day. To someone else. Setting an intention for practice at the start of the day is an excellent foundation. Setting an intention at the end of the day to seal your day with rest allows you to consolidate the day, still your mind and rest your thoughts.

When to practice and making time

Timing is everything when you want to experience the benefits of this practice. Respect and work with your body's intelligence and your intuition to guide you to the best time of the day, and to what feels right for you. This could be at the end of your day before sleep, or at the end of your workday to 'seal' the day before you enjoy your evening. Perhaps you might prefer the morning. If you do, remember this practice is about lengthening and resting into poses, and first thing in the morning you will not be as supple as you are at the end of the day, after eight hours or so of movement and action.

Create your sacred space, your rest nest

Surrounding yourself with positive, calming, nurturing energy can totally transform your life. By creating a sacred space to rest, your Rest Nest, you can establish a space dedicated to your well-being, happiness and all things good. There is an overwhelming sense of calm when you walk into your favourite yoga studio, spa or place of relaxation. There are so many inviting elements that speak to all the senses and enable you to rest and restore your body, mind and soul. The good thing is that you don't have to leave home to create your own blissful space.

Assemble and gather

Find your sacred space. Seek a space that will honour your body, mind and soul. Choose an area in your home that is personal, quiet and free from distractions and interruptions (if you can!). While it is great to create your Rest Nest in a specific location, the key is not to become dependent on just one place; ultimately you are working towards being comfortable and relaxing under many different conditions.

Adorn yourself

Comfort is everything when you are resting, deeply. Before you start, be sure to choose a comfortable, warm, quiet and safe space to practice. Place particular focus on your environment and ambience.
For your body, stretchy, unrestricted clothing is a must. Be unapologetic about it. Don't wear clothing that will irritate you, is too tight, is distracting, or is just plain impractical for moving into deep, restful poses. Importantly, adorn yourself with items that you love and that show respect to you and your body. Many essential oils can be applied topically to calm your nervous system, promote relaxation and lead to deep rest. You can use oils on the soles of your feet, your neck, your temples, or the inside of your wrists or ankles; avoid applying undiluted oils to the skin by using a carrier oil or cream.

Nurture yourself with something soothing

Deep rest and renewal promote the activation of the parasympathetic nervous system, otherwise known as the 'rest and digest' response. To do this, we need to switch off the sympathetic nervous system, the 'fight or flight' response. Stimulants before your practice, such as coffee, alcohol, nicotine, electronics, screen time and blue light, act as alerting stimuli that engage the sympathetic nervous system, and consequently can frustrate your attempts to rest deeply. Try something soothing instead, such as your favourite herbal tea.

Connect and retreat

Keep a journal. You know you should be resting, but your brain is still on autopilot. This is not uncommon when you are starting out in the practice of deep rest, or when the tempo of your life increases as a result of a change. Keeping a journal or notebook close by during your practice to write down any burning ideas or reminders will calm your mind and allow you to focus on rest again. Until your practice develops and you can extend your attention and calm your mind, journal and keep notes.

Mindful awareness. Use your daily chores as an opportunity to practice being more mindful and aware of your actions. Instead of feeling like every daily activity is something that just needs to get done, it becomes an event that serves a positive function in your life, and it may even become something you enjoy doing and look forward to.

Learn more at: https://deeprestandrenew.teachable.com/p/rest-restore-and-renew-with-yin-yoga-and-yoga-nidra-21-day-program

Join the community at https://www.facebook.com/groups/deeprestandrenewcommunity
https://deeprestandrenew.medium.com/deep-rest-rituals-to-support-you-restore-and-renew-b825fc765041
['Penny Ward']
2020-11-01 08:53:14.045000+00:00
['Yin Yoga', 'Exhaustion', 'Productivity', 'Ritual', 'Burnout']
We need a deep dive into white angst
Study it, respond to root causes to curb violence and chaos.

That Republican Party leaders are so scared of their own voters that they wouldn't support an independent probe of the Jan. 6 U.S. Capitol insurrection is nothing but pure karma. The GOP has spent decades manipulating poor and working-class whites to vote against their own interests in order to protect the rich and keep the shrinking party's hold on power. Just as the Democratic Party once did throughout the South, the GOP has used race, religion and social issues to keep these voters resistant to change and hostile to others. That sense of victimhood now stews in a broth of conspiracy theories, racial hatred, embrace of violence and a hostility toward government and democracy in general.

The Biden administration should declare this dangerous mix a national crisis: bring together experts in sociology, psychology, religion and economics to address root causes, determine strategies for outreach or, at least, find ways to insulate those more open to living in a multicultural democracy.

This type of study is nothing new. Black Americans have routinely been dissected and analyzed. The Johnson administration did something similar after urban riots with the 1968 National Advisory Commission on Civil Disorders, known as the Kerner Commission. Certainly, the current growth of racist militias, even middle-class whites seeking to overthrow elections, and mass shootings and random attacks on minorities would qualify as "civil disorders."

America is about two decades away from whites becoming less than half of the nation's population. What other violence and disruption should we expect as demographics further undermine the theory of white supremacy? Some researchers have explored the rise in white self-destructiveness, including opioid and meth addiction and suicides with guns. Court filings and interviews with some insurrectionists indicate they feel besieged and powerless in a changing nation. All relevant findings should be pulled together to determine the right responses.

Some may argue that this strategy is unnecessary, that discontent will subside with new policies that elevate all working-class people. In polls, President Joe Biden's ideas do win majority support from Republicans. Yet two out of three also agree with former President Donald Trump, the avatar of their resentment, that Biden was illegally elected.

So the insurrection continues. In the last few months, 14 states have passed laws to help suppress voter turnout and nullify election results. Belief in the QAnon conspiracy about elite, cannibalistic pedophiles has taken over many evangelical churches. Nazis are gaining traction on right-wing social media, and traditional conservative media have committed to enflaming grievances. Too many GOP lawmakers are either auditioning to replace Trump or just trying to hang on until the party implodes.

Considering the country's unfinished business on racial equity, why should we give more attention to white people with unhinged actions and beliefs? Because they are wreaking havoc. We are preoccupied now with refighting old battles, such as voting rights, while being distracted from urgent challenges like climate change. Also, by responding with more than politics and prosecutions, we just might learn something. When Johnson set up the Kerner Commission, he had hoped to find communist agitation behind the rioting.
Yet, after research and on-the-ground interviews, the commission declared the real problem was "white racism" in areas including housing, employment and criminal justice. Lawmakers rejected that conclusion and soon launched the "tough on crime" era, creating volatile consequences we still face. Consider how much less fraught and divided our nation would be if we had heeded the Kerner findings and tried to solve some of those problems then. We can't afford to repeat the mistake of ignoring mounting crises that are unraveling the country.
https://medium.com/@vgallman/time-to-focus-on-white-psychopathy-45a8649c95a8
['Vanessa Gallman']
2021-06-15 12:23:42.301000+00:00
['Biden', 'Race Relations', 'Republicans', 'Insurrection', 'Democracy']
Jetson Nano - Remote VNC Access
Step 1: Connecting the Board to Your Wireless Network

It turns out that NVIDIA L4T has poor support for USB Wi-Fi adaptors, and most adaptors don't work with the distribution. So buy a compatible USB Wi-Fi adapter to connect to your local network.

Step 2: Enabling Desktop Sharing

Unfortunately, the instructions helpfully left on the Jetson's desktop for enabling the installed VNC server from the command line don't work, and opening the Settings application on the desktop and clicking on "Desktop Sharing" also fails, as the Settings app silently crashes. The problem appears to come down to an incompatibility with the older Gnome desktop. Here is the easiest route to make the Desktop Sharing application work normally. First, open the terminal and run:

sudo nano /usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml

Next, add (paste) the following key into the XML file which just opened in the nano editor:

<key name="enabled" type="b">
  <summary>Enable remote access to the desktop</summary>
  <description>
    If true, allows remote access to the desktop via the RFB protocol.
    Users on remote machines may then connect to the desktop using a VNC viewer.
  </description>
  <default>true</default>
</key>

After pasting, save and exit. Next, compile the schemas for Gnome using the command below:

sudo glib-compile-schemas /usr/share/glib-2.0/schemas

At this stage the "Desktop Sharing" panel should have stopped crashing. So open the Desktop Sharing app and tick the "Allow other users to view your desktop" and "Allow other users to control your desktop" checkboxes. Then make sure "You must confirm each access to this machine" is turned off. Finally, tick the "Require the user to enter this password" checkbox and enter a password for the VNC session.

Step 3: Start the VNC server on every startup

Open 'Startup Applications' using the search box that appears at the top of the screen. Click Add at the right of the box, type 'Vino' in the name box, and in the command box enter /usr/lib/vino/vino-server. Click Save at the bottom right of the box, and then close the app. Next, we need to disable encryption of the VNC connection to get things working. To do this, open the terminal and enter the following commands:

$ gsettings set org.gnome.Vino require-encryption false
$ gsettings set org.gnome.Vino prompt-enabled false

You can now reboot the board. Once rebooted, log back into your account; VNC should now be running and serving the desktop.

Step 4: Connect to your Nano via VNC

Find the IP of your Jetson Nano board using the 'ifconfig' command. I found mine under wlan0 as 192.168.0.106. Use this command to check whether your VNC server is running:

ps -ef | grep vnc

Once you know your IP and have confirmed that the VNC server is up and running, it's time to connect to your Jetson Nano via any VNC client app. Open Remmina, click the add button at the top left, select VNC as the protocol, then click the (...) button at the right end of the server IP address bar. This button should automatically scan and show the name of your Jetson Nano. Now just click it, enter your password and connect.

Step 5: Enable auto-login and disable/turn off the lock screen (optional)

The VNC server starts only after the user logs in.
Without auto-login, even a headless setup needs a keyboard and monitor just to log in, and doing that at every boot is tiring. So I usually enable auto-login and disable the lock screen. Open the Activities overview and start typing Users. Click Users to open the panel, select the user account that you want to log in automatically at startup, press Unlock in the top right corner and type in your password when prompted, then switch Automatic Login on. To disable/turn off the lock screen, please refer to this link.
https://medium.com/@bharathsudharsan023/jetson-nano-remote-vnc-access-d1e71c82492b
['Bharath Sudharsan']
2020-02-03 19:58:16.693000+00:00
['Vnc Server', 'Getting Started', 'Headless', 'Jetson Nano', 'Nvidia']
When You Don’t Know What to Do, Stop Everything and Listen
When You Don't Know What to Do, Stop Everything and Listen

Don't move until you know it's time.

Photo by Roland Hechanova on Unsplash

I feel a little lost. I'm worried about money, I'm worried about my older family members during the holidays with COVID, I'm worried about Medium's changes and views. Am I good enough? I wake up, and I do my best to get some words on the page. The more I write, the bigger and bigger the knot in my chest gets. Pretty soon, I'm pacing around my apartment, not sure what to do with myself. Should I go for a walk? Should I keep writing? Should I eat something? Why am I so anxious?

Finally, I just sit. No phone, no computer. Just the afternoon light coming through my window. I doze off at the beginning. That clears up nearly half of my issue by itself (when a kid gets cranky, he obviously needs a nap; I guess I'm no different). I wake back up and just keep sitting there. My dog curls up with me. I sense my hands, then my legs, then my whole body. I decide that I won't move until I know what to do next. An hour later, I have clarity, and I'm ready to get up.

It's Hard to Listen With So Much Noise

Notifications, texts, calls, it's my move on Chess.com... Sometimes we just need a moment to hear what our bodies are saying. That takes longer than reading an article. And the more you rush it, the longer it takes. What would happen if you just sat down and said to yourself, "OK, I'll sit here until we all agree on what we need to do next"? Anxiety might be telling you, "We need to check Medium notifications!" That's fine. But there is no consensus. You can say, "Thanks for the input, anxiety, but I am going to wait until all parts of me are on board before we start doing anything."

Give Up

You don't have to go for a walk, or take a cold shower, or meditate, or do anything "productive." You can do these things. In fact, I started with meditation before I dozed off. But the point of these things is to stop all the "doing" and just take a moment to "be." If you are stuck in the indecision of which non-active activity to do, you're off to a bad start. Instead, do nothing. Just sit down on your couch and stare at the wall. Relax. Feel different parts of your body. Tell yourself this: "This is a treat. I am going to do nothing for a moment." Let it feel like an escape. There is no task to win. You are just going to sit in this spot until you have a motivation to do something from joy rather than anxiety.

Joy Over Anxiety

Anxiety is a useful guy. I am working on mending my relationship with him. When he shows up before I hit "publish" on a big piece and I double-check the work and catch a huge error, I thank anxiety. "Nice work, man!" When I wake up in the morning and anxiety is overpowering, saying, "This is all meaningless. You will die eventually, so why bother?", I need to thank anxiety for his input and kindly ask him to shut the fuck up. All parts of us have a reason to exist. Anxiety isn't the enemy. He's just been ignored and misused for too long. You remember that kid that got picked on too much in high school? Well, we all know how that goes. Anxiety is allowed to hang out with me, but he mostly shouldn't be calling the shots. The only way to set up this pleasant working relationship is to listen and be friendly with him. When I find myself overwhelmed by anxiety, I need to sit down and listen for as long as it takes.
He will have one bad idea after another: "We should move to Albuquerque!" "You should call your ex!" "Maybe drugs aren't the worst thing ever?" I don't have to act on any of these ideas. I can just listen. You know how you listen to a person? No solutions, just an empathetic ear. That's how you listen to anxiety. Eventually, he will tire himself out. Joy will rise again and suggest something like: "Maybe we could write an article about listening?" Anxiety, satisfied, will get on board with the rest of me.

Get All of You Pointing in the Same Direction

We all get busy and ignore our needs from time to time. Happens to the best of us. Like a family no longer united, if you ignore certain members for too long, they will revolt. You are all of your parts, even the ones you don't like. You can't get rid of fear; you need to learn to have a working relationship with him. When you take some time to listen to yourself, you learn to accept that you are you, warts and all, and to take the first step from there. Don't assume you are only the person in a "good" mood. That's just mania. You've probably just fed your fear and anxiety by acting out some drama in your life; now you feel better and you think it will last forever. I'm not judging, by the way. This is me. I feel bad. I take it out on some loved one in my life. I get "high" on the apology process. I feel better for a few days. Repeat. I notice that the drama is reduced, and the manic episodes are fewer, when I get all of me on the same page.
https://medium.com/the-ascent/when-you-dont-know-what-to-do-stop-everything-and-listen-3958ddf133c5
['Taylor Foreman']
2020-12-22 19:02:32.850000+00:00
['Choices', 'Life', 'Listening', 'Motivation', 'Anxiety']
DFINITY White Paper: Our Consensus Algorithm
We're happy to announce the release of the DFINITY White Paper on our consensus system.

When designing the DFINITY consensus mechanism we faced demanding requirements, as we sought to dramatically improve the user experience for those interacting with our blockchain computer. The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone; we also had to reduce the number of confirmations required to a small constant. DFINITY moreover had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising one bit on decentralization.

We soon realized that the key to scalability lay in having an unmanipulable source of randomness available. Hence we built a scalable decentralized random beacon, based on what we call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: on top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper that we are releasing today.

The roots of the DFINITY consensus mechanism date back to 2014, when our Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol, and it took several iterations to reach our current design. For any practical consensus system, the difficulty lies in navigating the tight terrain between the boundaries imposed by theoretical impossibility results and practical performance limitations. The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.

We believe that DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, the built-in random beacon will prove indispensable when building out sharding and scalable validation techniques.

We hope you enjoy the read. Look out for the first public testnet in the coming months. The DFINITY team plans to release more papers on different aspects of its technology as they become ready for publication. If you'd like to discuss any of the details in the white paper with the team or community, you can find us on Rocket Chat, Telegram, and Reddit. For a brief overview of the DFINITY network, we've also released a 2-minute explainer video.
https://medium.com/dfinity/dfinity-white-paper-our-consensus-algorithm-a11adc0a054c
['Timo Hanke']
2020-08-05 18:41:34.207000+00:00
['Blockchain', 'Bitcoin', 'ICO', 'Ethereum', 'Cryptocurrency']
Dissecting Dynamic Programming — Top Down & Bottom Up
The previous blog post, "Dissecting Dynamic Programming — The Beginning", described two important concepts in Dynamic Programming, illustrated them using the Fibonacci sequence example, and ended with some details about translating a recurrence relation to code. This blog post will share a few insights about how to translate the recurrence relation of a Dynamic Programming problem into the two commonly used solution approaches: "top down" and "bottom up". First, let's make sure we are on the same page regarding what a "recurrence relation" is.

Recurrence Relation

If the term "recurrence relation" is somewhat vague to you, here are two of the most intuitive definitions. From a mathematical perspective: "A recurrence relation is an equation that defines a sequence based on a rule that gives the next term as a function of the previous term(s)." Another description is: "A recurrence relation is an equation which is defined in terms of itself and the initial conditions." In the context of Dynamic Programming, a recurrence relation describes the relationship between the subproblems in a way that clearly defines how an optimal solution is computed using the solutions of the smaller subproblems (previous terms). Let's re-examine the Fibonacci sequence recurrence relation to better understand the subproblem relationship:

fib(n) = fib(n-1) + fib(n-2), where fib(0) = 0 and fib(1) = 1

The above recurrence relation can find an optimal solution for any n by adding the solutions of two smaller subproblems together. For example, the optimal solution to fib(6) is the sum of fib(5) and fib(4). To find the solutions to fib(5) and fib(4), all we have to do is apply the same recurrence relation until we arrive at the initial conditions fib(0) and fib(1). As you can see, a recurrence relation lends itself pretty nicely to a programming construct called recursion. Once we are able to come up with a recurrence relation for a Dynamic Programming problem, the next step is to translate it into code using either the top down or the bottom up approach. The section below will first show how to do that, and then discuss the pros and cons of each approach.

Top Down Approach

The top down approach solves Dynamic Programming problems by attempting to solve the largest subproblem (the final optimal solution) first. It then realizes it can't, because it needs the solutions to the slightly smaller subproblems, so it proceeds to attempt to solve those. This process continues until the solutions to the smallest subproblems are obtained from the initial conditions (the base cases). At this point, the solutions to the smaller subproblems bubble up and are used to solve the larger subproblems and finally the largest one. The order of solving the subproblems is essentially the same as when recursion is used to solve the recurrence relation. See Figure #1 for the top down approach to solving fib(4) = fib(3) + fib(2).

Figure #1 — top down subproblem execution order

In other words, the top down approach is the direct translation of the recurrence relation using a programming construct called recursion. That is the main reason this approach is often equated with recursion. Remember the "overlapping subproblem" concept? When recursion is used to solve Dynamic Programming problems, we must be efficient in dealing with overlapping subproblems.
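Since the article's code figures did not survive extraction, here is a minimal Python sketch (not the author's original code) of that direct recursive translation, which makes the overlapping-subproblem cost visible:

```python
def fib(n):
    # Initial conditions (base cases): fib(0) = 0, fib(1) = 1.
    if n < 2:
        return n
    # Direct translation of the recurrence fib(n) = fib(n-1) + fib(n-2).
    # Overlapping subproblems are recomputed again and again, so this
    # naive version runs in O(2^n) time.
    return fib(n - 1) + fib(n - 2)
```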
The number of overlapping subproblems varies among Dynamic Programming problems; ideally each subproblem should be solved only once, with its solution stored and reused when needed. In Figure #1 above, the overlapping subproblems for fib(4) are color coded in yellow. That is where the concept of "memoization" (not memorization) comes into the picture; more details are available on Wikipedia. Essentially, we need a data structure (either an array or a HashMap) to hold the previously computed solutions of the overlapping subproblems, so they can be reused easily and quickly. Figure #2 below contains the top down implementation of the Fibonacci sequence recurrence relation; it uses a one-dimensional array to store the subproblem solutions and avoid recomputing the overlapping subproblems.

Figure #2 — Fibonacci top down implementation of fib(n)

Before moving on to the bottom up approach, let's dissect the time and space complexity of the implementation in Figure #2.

Complexity Analysis

Time complexity: since we use a cache to avoid recomputing the overlapping subproblems, the actual number of subproblems that requires computation is n (Figure #1 should give us that information), so T(n) = O(n). If we didn't use a cache and had to recompute the solutions of the overlapping subproblems, the time complexity would increase to O(2^n).

Space complexity: a one-dimensional array is allocated to store the subproblem solutions, and its length is proportional to n, so S(n) = O(n).

Bottom Up Approach

If we closely examine the order of subproblem solution computation in Figure #1 above, it turns out the solutions are computed in this order: 0, 1, 2, 3, 4. If that is the case, then another way to accomplish the same goal is to compute the solutions from left (smaller subproblems) to right (larger subproblems) in an iterative manner. This is the essence of the bottom up approach. First we need a data structure to hold the subproblem solutions; then we iterate from the smaller subproblems to the larger ones and use the recurrence relation to compute the solution to each of the subproblems, as depicted in Figure #3.

Figure #3 — bottom up approach

Figure #4 below contains the bottom up implementation of the Fibonacci sequence. Since the computation order follows the subproblem size, from small to large, the solutions to the smaller subproblems are readily available when needed.

Figure #4 — Bottom up implementation of fib(n)

Complexity Analysis

Time complexity: the code in Figure #4 should give us sufficient information to analyze the time complexity. It consists of a single for loop that goes from 2 to n, and inside the loop there is only a simple addition of two values from an array lookup. Therefore the time complexity is T(n) = O(n).

Space complexity: the code in Figure #4 only allocates an array of size n+1, so the space complexity is O(n). We can further optimize the space complexity to O(1) by noticing that any subproblem depends only on the two immediately smaller subproblems. I will leave that as an exercise for the reader.
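Because the Figure #2 and Figure #4 code did not survive extraction, here is a minimal Python sketch of both implementations, matching the descriptions above (the function names are mine, not the author's):

```python
def fib_top_down(n, memo=None):
    """Figure #2 style: recursion plus a one-dimensional cache (memoization)."""
    if memo is None:
        memo = [None] * (n + 1)
    if n < 2:                     # initial conditions fib(0) = 0, fib(1) = 1
        return n
    if memo[n] is None:           # solve each overlapping subproblem only once
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]


def fib_bottom_up(n):
    """Figure #4 style: iterate from the smallest subproblem to the largest."""
    if n < 2:
        return n
    table = [0] * (n + 1)         # subproblem solutions, indexed by size
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # the recurrence relation
    return table[n]


print(fib_top_down(6), fib_bottom_up(6))  # both print 8
```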
Pros and Cons of Top Down and Bottom Up

Top Down (using recursion)

Pros:
- It is easier to translate a recurrence relation to code using recursion.
- The recursion will only solve the subproblems actually needed to come up with the final optimal solution.

Cons:
- There is some overhead in maintaining the stack frames.
- Recursion might run into a stack overflow when the recursion depth is too large.
- The logic might not be easy to reason about because of the recursion.

Bottom Up (using an iterative approach)

Pros:
- In general, it tends to be more efficient than the top down approach.
- It is easier to reason about the logic because it is self-contained within a function.

Cons:
- It requires a slightly different way of thinking when translating a recurrence relation to code using looping constructs like while or for loops.
- The solutions to all the subproblems are computed from the smallest to the largest, in consecutive order, regardless of whether they are needed to come up with the final optimal solution.

The Fibonacci sequence Dynamic Programming problem is widely used to explain these important concepts and the different ways of translating a recurrence relation to code. Since the Fibonacci sequence recurrence relation was provided, we didn't need to spend time deriving it. One of the most important things to do when trying to solve a Dynamic Programming problem is to come up with the recurrence relation, which requires a deep understanding and analysis of the given problem statement and examples. It takes time and practice to develop this skill, but it will pay off big time once one becomes competent at it. In the next blog, I will share a few insights and tips on how to come up with the recurrence relation for a few Dynamic Programming problems. Check out the resources below to learn more about solving Dynamic Programming problems. Enjoy!
https://hien-luu.medium.com/dissecting-dynamic-programming-top-down-bottom-up-3d3a1d62fbd7
['Hien Luu']
2020-12-06 18:36:21.114000+00:00
['Bottom Up Approach', 'Top Down Approach', 'Interview Questions', 'Dynamic Programming', 'Computer Science']
For only 20 items, this is quite a thorough list!
For only 20 items, this is quite a thorough list! Since others are offering suggestions as well, mine is coconut oil. It can be bought organic in larger containers at Costco, etc. So useful and healthy in so many ways... including delicious food ways :)
https://medium.com/@pixcayanmacuahuitl/for-only-20-items-this-is-quite-a-thorough-list-7db2a2ade596
['Pixcayān Macuahuitl']
2020-12-13 01:36:44.317000+00:00
['Health', 'Plant Based', 'Wellness', 'Vegan', 'Veganism']
Fundamentals of Supervised Sentiment Analysis
Model Evaluation

Typically our data will have a high class-imbalance problem, as people are more likely to write neutral or positive tweets than negative tweets in most cases. But for most business problems, our model must detect these negative tweets. So we will keep an eye on them by looking at the macro-averaged F1-score as well as the precision-recall curve. The function below will plot the ROC curve and the precision-recall curve and print the key evaluation metrics.

Baseline Model

We can use scikit-learn's DummyClassifier to first see what our baseline measure would be if we were to classify tweets based only on how frequently each class occurs.

```python
from sklearn.dummy import DummyClassifier

dummy_classifier = DummyClassifier()
dummy_classifier.fit(tweets_train, labels_train)
y_pred_p = dummy_classifier.predict_proba(tweets_validation)
y_pred = dummy_classifier.predict(tweets_validation)
```

Bag-of-Words Model (Count Vectors)

One of the simplest ways to quantify text data is to just count the frequency of each word. scikit-learn's CountVectorizer can do that job easily.

```python
from sklearn.feature_extraction.text import CountVectorizer

countvec = CountVectorizer(ngram_range=(1, 2), min_df=2)
count_vectors = countvec.fit_transform(tweets_train)
```

This returns the count vectors of single vocabularies and bigrams that occur at least twice. We can then use these count vectors to train different classification algorithms.

TF-IDF Vectors

A problem with count vectors is that they only look at the frequency of individual words and do not care about the context in which a word occurs. There is no way to assess how important a specific word is to a tweet. This is where the term frequency - inverse document frequency (TF-IDF) score comes in. The TF-IDF score weighs words that are uniquely frequent in one tweet more heavily than words that tend to be frequent across all tweets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tfvec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
tf_vectors = tfvec.fit_transform(tweets_train)
```

Now that we have two vectorized representations of the text, we can test different classifiers on each of them.

Naive Bayes

Naive Bayes is one of the more popular choices for text classification. It is a simple application of Bayes' theorem to each class and the predictors, and it assumes that the individual features (words, in our case) are independent of each other. So let's say we have a tweet that reads "I love my new phone. It's really fast, reliable and well-designed!". This tweet clearly has a positive sentiment. In this case, the Naive Bayes model assumes that the individual words like 'love', 'new', 'really', 'fast', 'reliable' all contribute independently to its positive class. In other words, the likelihood of the tweet being positive when it uses the word 'reliable' does not change because of the other words. This does not mean that these words are independent in their appearances; some words may tend to appear together more often than not, but that does not mean the amount each word contributes to its class is dependent. The Naive Bayes algorithm is simple to use and reliable when the above assumption holds. Since testing our model requires vectorization, we can build the pipeline into our model.
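Before fitting the classifiers below, note that the evaluating helper referenced in the Model Evaluation section was included as an image and is missing from the extracted text. Here is a minimal sketch of what such a function could plausibly look like, assuming three sentiment classes whose order matches the classifier's predict_proba columns (the plotting details are my assumption, not the author's code):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import (auc, classification_report,
                             precision_recall_curve, roc_curve)
from sklearn.preprocessing import label_binarize


def evaluating(y_true, y_pred, y_pred_proba,
               classes=('negative', 'neutral', 'positive')):  # assumed labels
    """Print key metrics and plot one-vs-rest ROC and precision-recall curves."""
    # Key metrics, including the macro-averaged F1-score we care about.
    print(classification_report(y_true, y_pred))

    # Binarize the labels so each class gets its own one-vs-rest curve.
    y_bin = label_binarize(y_true, classes=list(classes))
    fig, (ax_roc, ax_pr) = plt.subplots(1, 2, figsize=(12, 5))
    for i, label in enumerate(classes):
        fpr, tpr, _ = roc_curve(y_bin[:, i], y_pred_proba[:, i])
        ax_roc.plot(fpr, tpr, label=f'{label} (AUC = {auc(fpr, tpr):.2f})')
        prec, rec, _ = precision_recall_curve(y_bin[:, i], y_pred_proba[:, i])
        ax_pr.plot(rec, prec, label=label)
    ax_roc.plot([0, 1], [0, 1], linestyle='--')
    ax_roc.set(title='ROC curve', xlabel='False positive rate',
               ylabel='True positive rate')
    ax_pr.set(title='Precision-recall curve', xlabel='Recall', ylabel='Precision')
    ax_roc.legend()
    ax_pr.legend()
    plt.show()
```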
```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

mn_nb = MultinomialNB()
# change countvec to tfvec for the tf-idf model
model = Pipeline([('vectorize', countvec), ('classify', mn_nb)])
# fitting the training count vectors (change to tf_vectors for tf-idf)
model['classify'].fit(count_vectors, labels_train)

y_pred_p = model.predict_proba(tweets_validation)
y_pred = model.predict(tweets_validation)
evaluating(labels_validation, y_pred, y_pred_p)
```

Because of its assumption of independence between features, Naive Bayes overestimates how confidently each feature contributes to the label, which makes it a bad probability estimator. So take its predicted probabilities with a grain of salt.

Support Vector Machine (SVM)

Another popular choice of text classification algorithm is the support vector machine (SVM). Simply put, an SVM finds the hyperplane that divides the classes with the maximum margin between them. The main reason SVMs are preferred in text classification is that we tend to end up with a lot of features. If we were to work in the full high-dimensional space spanned by all our features, we would run into the problem known as the curse of dimensionality: our space is so big that our observations start to lose their meaning. But SVMs are more robust when dealing with a large number of features because they use the kernel trick. An SVM does not actually work in the high-dimensional space; it just looks at the pairwise distances between observations as if they were in that space. It does take a long time to do the job, but it is robust.

```python
from sklearn.svm import SVC

svm_classifier = SVC(class_weight='balanced', probability=True)
# don't forget to adjust the hyperparameters!
# change countvec to tfvec for tf-idf
svm_model = Pipeline([('vectorize', countvec), ('classify', svm_classifier)])
# fitting the training count vectors (change to tf_vectors for tf-idf)
svm_model['classify'].fit(count_vectors, labels_train)

y_pred_p = svm_model.predict_proba(tweets_validation)
y_pred = svm_model.predict(tweets_validation)
evaluating(labels_validation, y_pred, y_pred_p)
```

SHAP Evaluation

When the SVM uses the kernel trick, things get into a bit of a grey area in terms of interpretability. But we can use Shapley values to decipher how individual features contribute to the classification. We'll use SHAP's friendly interface to visualize the Shapley values. For a detailed tutorial on this, I recommend reading the SHAP documentation.

```python
import shap

shap.initjs()
sample = shap.kmeans(count_vectors, 10)
e = shap.KernelExplainer(svm_model.predict_proba, sample, link='logit')
shap_vals = e.shap_values(X_val_tf, nsamples=100)
shap.summary_plot(shap_vals,
                  feature_names=countvec.get_feature_names(),
                  class_names=svm_model.classes_)
```

Phew, that was a lot. Let's take a breather.

Photo by freestocks on Unsplash

LSTM

Let's dive a little bit deeper (literally). So far we have worked with two different frequency measures to quantify our text data. But the frequency of each word tells only a little bit of the story. Understanding language and its meaning requires an understanding of syntax, or at the very least, the sequence of words. So we will look at a deep learning architecture that cares about the sequence of vocabularies: the long short-term memory (LSTM) architecture. For the LSTM, we need to feed in the texts as sequences. The steps below outline how to run and evaluate the LSTM classifier; each step is explained in the code.
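The step-by-step LSTM code was likewise attached as images and is missing here. A minimal Keras sketch of such a classifier follows, under the assumption of three sentiment classes and integer-encoded labels; num_vocab, max_len and the layer sizes are illustrative guesses, not the author's values:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

# 1. Turn each tweet into a fixed-length sequence of integer word indices.
num_vocab, max_len = 10000, 40  # assumed vocabulary size and sequence length
tokenizer = Tokenizer(num_words=num_vocab)
tokenizer.fit_on_texts(tweets_train)
X_train = pad_sequences(tokenizer.texts_to_sequences(tweets_train), maxlen=max_len)
X_val = pad_sequences(tokenizer.texts_to_sequences(tweets_validation), maxlen=max_len)

# 2. Build the network: Embedding -> LSTM -> softmax over the 3 classes.
model = Sequential([
    Embedding(num_vocab, 200, input_length=max_len),
    LSTM(64),
    Dense(3, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam', metrics=['accuracy'])

# 3. Train, then predict on the validation set (assumes the labels are
#    encoded as integers 0/1/2; adapt the evaluation to your label format).
model.fit(X_train, labels_train,
          validation_data=(X_val, labels_validation), epochs=5)
y_pred_p = model.predict(X_val)
y_pred = y_pred_p.argmax(axis=1)
```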
Word Embedding (GloVe)

One downside of our LSTM model is that it only contains information present in our training data, while vocabularies have semantic meanings beyond the tweets. Knowing how vocabularies relate to each other in terms of semantic similarity may help our model. We can apply weights to our vocabularies based on a pre-trained word embedding algorithm. For this, we will use GloVe, the vector representations developed by a team at Stanford. I'm using their 200-dimensional word vectors trained on 2 billion tweets. You will need to download the vectors from their website. Then you can add the obtained vector matrix as embedding weights to the Embedding layer of our LSTM architecture (one way to build vector_matrix is sketched below).

```python
# adding the bolded part
model.add(Embedding(num_vocab, 200, weights=[vector_matrix], input_length=max_len))
```

By using the word embedding and the LSTM, my model showed a 20% increase in overall accuracy and a 16% increase in the macro-averaged F1-score.
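Building vector_matrix from the downloaded GloVe file is not shown in the extracted text. Here is one common way to do it, assuming the Twitter GloVe file name and the tokenizer from the LSTM sketch above:

```python
import numpy as np

# Parse the downloaded GloVe file: one word followed by 200 floats per line.
embeddings = {}
with open('glove.twitter.27B.200d.txt', encoding='utf-8') as f:  # assumed file name
    for line in f:
        word, *values = line.split()
        embeddings[word] = np.asarray(values, dtype='float32')

# Rows are indexed by the tokenizer's word indices; words without a
# pre-trained vector simply keep a zero row.
vector_matrix = np.zeros((num_vocab, 200))
for word, idx in tokenizer.word_index.items():
    if idx < num_vocab and word in embeddings:
        vector_matrix[idx] = embeddings[word]
```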
https://towardsdatascience.com/fundamentals-of-supervised-sentiment-analysis-1975b5b54108
['Eunjoo Byeon']
2020-12-09 16:26:51.903000+00:00
['Machine Learning', 'Sentiment Analysis', 'NLP', 'Python', 'Data Science']
Software Vendors: An Important Constituent of the CINDX Ecosystem
Who are the Software Vendors on the CINDX Platform?

Software Vendors are creators and sellers of cryptocurrency software that helps platforms like CINDX operate in a self-sustaining, automated, transparent, and error-free manner. Since most crypto software Vendors need to deal with blockchain technology, they must conform to some of the most basic concepts of cryptocurrency exchange and trade, and therefore need to follow some basic rules.

Vendors are the "solution suppliers" for trading. They create and distribute various software products, including analytical and trading bots, tools for social network analytics, and software for technical analysis. The Vendor's Hub acts as the platform where Vendors place a software product in the CINDX ecosystem. Each Vendor simply needs to register and upload their products. Thereafter, the CINDX software specialists perform code auditing and integrate the solutions into the system (provided they are high-quality and relevant to the CINDX project). These products then become available for purchase by Traders and Managers in the CINDX digital marketplace. Vendors earn profits when their products are purchased.

Some Examples

The list of software Vendors is ever-growing. Let's check out some of the most popular options adding the most value to cryptocurrency platforms.

Chain

Chain is a cryptocurrency software solution which has found a place with leaders in financial services, such as Nasdaq and First Data. It is dedicated to building the next-generation infrastructure for the financial services industry, and is creating a software platform that enables the production and transfer of digital assets in both fiat and crypto currencies across organizational lines. Chain is a software Vendor that remains at the top of financial services platforms in the cryptocurrency markets. The product it offers uses blockchain technology and is built on critical cryptographic protocols. Chain is an expert in building solutions that deal with distributed database techniques.

Gem

Gem is invaluable for developers who are not experts in security or the underlying protocols of cryptography. It allows them to rapidly build apps and solutions using Bitcoin and other cryptocurrencies. The company offers bank-grade security on cryptocurrency for a developer's apps without having any stake in or possession of the funds. Gem is renowned for reinventing the banking platform from the ground up, especially in the cryptocurrency field.

Blockstream

Blockstream has a vision of an honest ecosystem of financial networks built on sidechains, a kind of blockchain technology that extends the capabilities of the invincible Bitcoin network. Sidechains are used to create opportunities for new models of trust and improve the properties of Bitcoin. They allow for the movement of digital assets from one blockchain to another, and can be used to link disparate, unique crypto markets together to expand liquidity via a shared protocol.

Consensus Systems (ConsenSys)

Predominantly Ethereum-oriented, ConsenSys is a blockchain software firm that builds decentralized applications. It is currently involved in building foundational tools for the newer and more modern business models in the cryptocurrency space. These tools are individually funded and marketed with the aim of building a more convenient ecosystem of cryptocurrency exchange and trading.

The Importance of Software Vendors

Blockchain technology is a shared protocol which acts in a pluralistic and utilitarian manner.
In that sense, blockchain maintains and manages almost all cryptocurrency-related transactions, apart from maintaining the immutability of the records. Software Vendors play an important role in making blockchain technology more robust and relevant by constantly upgrading the quality standards of their products. Without Vendors, the entire cryptocurrency industry would eventually break down. Vendors ensure that blockchain remains the most modern technology in the financial services and products niche by adding ultra-responsive upgrades to address the complex and ever-growing needs of the crypto industry. In that sense, software Vendors are the force which keeps the blockchain community active and safe from all probable malfunctions.

By designing an in-platform marketplace for trading and analytical software, CINDX is cultivating a competitive environment for Vendors to create the best possible trading tools. Competition among Vendors is the essential fuel that runs the CINDX project. CINDX is built to provide the most profitable outcome for its users, and as such, software Vendors are an invaluable part of the entire CINDX story. The success of the CINDX project ultimately depends on the involvement of software developers as Vendors.

About CINDX

CINDX is an investment platform that allows individuals to combine several crypto exchange accounts into one trading terminal, and gives them the option to connect to the best managers without having to transfer their funds. Moreover, implementation of blockchain-based transactions will allow the trading history to be saved, and a rating system will be used to differentiate the successful managers from the less successful ones.

Join our Telegram group or other social media to stay updated.

Website • Telegram • Facebook • Twitter • LinkedIn • BitcoinTalk • Reddit • Instagram • YouTube • Weibo
https://medium.com/cindx/software-vendors-an-important-constituent-of-the-cindx-ecosystem-34f241c2a850
[]
2018-09-10 13:52:41.661000+00:00
['Cryptocurrency', 'Blockchain', 'Cindx', 'Bitcoin']
Setup WordPress Application using Amazon RDS on backend
To begin with, we have a new concept here: AWS RDS. Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running "in the cloud" designed to simplify the setup, operation, and scaling of a relational database for use in applications. All major database servers like MySQL, PostgreSQL, MariaDB, Oracle, etc. can be deployed as database instances on RDS. We will use a MySQL database server on the backend of the application. For the frontend, we will run the WordPress application as a deployment and service on Kubernetes, running on a Minikube workstation. The frontend will run on the local workstation and the backend on AWS, creating a hybrid-cloud architecture.

Note: This article does not cover the installation of Minikube, which is a prerequisite. Check the official websites for installation:

https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://kubernetes.io/docs/tasks/tools/install-minikube/

Automation implementation: the deployment will be done as Infrastructure as Code (IaC) using Terraform. Practically, one single command will set up the whole architecture and redirect the browser to the application IP.
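The article's Terraform code itself is not included in this excerpt. Purely as an illustration of the backend half (provisioning the MySQL instance on RDS), here is a sketch using Python's boto3 instead of Terraform; every identifier below is a placeholder, not from the article:

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')  # assumed region

# Provision a small MySQL instance for the WordPress backend.
rds.create_db_instance(
    DBInstanceIdentifier='wordpress-db',    # placeholder name
    DBName='wordpress',
    Engine='mysql',
    DBInstanceClass='db.t2.micro',
    AllocatedStorage=20,                    # GiB
    MasterUsername='admin',
    MasterUserPassword='change-me-please',  # use a secrets manager in practice
    PubliclyAccessible=True,                # so the Minikube frontend can reach it
)

# Wait until the instance is available, then fetch the endpoint
# that the WordPress deployment will use as its database host.
waiter = rds.get_waiter('db_instance_available')
waiter.wait(DBInstanceIdentifier='wordpress-db')
desc = rds.describe_db_instances(DBInstanceIdentifier='wordpress-db')
print(desc['DBInstances'][0]['Endpoint']['Address'])
```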
https://medium.com/@saptarsiroy/setup-wordpress-application-using-amazon-rds-on-backend-303053c4e4bf
[]
2020-11-19 21:42:57.140000+00:00
['MySQL', 'Aws Rds', 'Kubernetes', 'WordPress', 'Terraform']
5 Ansible Modules/Keywords that you must know
These five Ansible modules/keywords will make your configuration life easy, so let's get started. 1. Timestamp As the name suggests, timestamp is used to obtain the current timestamp of the system, i.e. yy-MM-dd HH:mm:ss. Where to use? To print out the time for logging purposes. To name a directory. Store the value in a variable and use it as per your needs. Example (naming a directory using a timestamp variable): using a timestamp variable to name a directory in the opt/ folder, the name of the directory would be: ROOT_2020-12-23_13_33.tgz 2. lineinfile and regexp The "lineinfile" module is used to find and make changes to a particular line in a file. This becomes very helpful when you want to make configuration changes in a file without opening it. Example (changing the bind address in the mysqld.conf file): lineinfile is used along with the regexp and line keywords. The regexp keyword helps find the line where you want to make changes. The line keyword is used to write the updated line. 3. handlers and notify Handlers are life savers in Ansible. They are similar to functions in any programming language: "handlers are write once, use multiple times". Handlers are used along with the "notify" keyword. In the lineinfile example, you must have noticed the "notify" keyword to restart mysql. Example: Continuing with the above example, after changing the bind-address using lineinfile, we need to restart the mysql service for the changes to take effect. So we make use of a handler which will restart the mysql service. Here, the name of our handler is "Restart mysql". Now, whenever we want to restart the mysql service in our playbook, we need not write the same code again and again; we can simply call this handler using the notify keyword. 4. become and become_method The become keyword is one of the most powerful keywords in Ansible. It is used to set the privilege of the user. Continuing with the above example: when using the lineinfile module, you are making changes to the file specified in the "path" argument. To access that file and make changes to it, you need to become a user that can write to it. This is done with the help of the become keyword. Example: With become set to "yes" and become_method set to "sudo", the Ansible user will become a sudo user, allowing it to make changes to that file. Now, when you execute this playbook, you will get an error: "sudo: a password is required". To avoid this error, execute the playbook with the following command: ansible-playbook Filename.yml --ask-become-pass 5. register, debug and when The register keyword is one of the most helpful keywords in Ansible. It stores a task's output in a variable; you can then print that output, or use it for conditional statements and looping. Example: Consider a playbook that creates a new ec2 key and then copies the private key to a specific location. It makes use of the register, debug and when keywords. register: The variable ec2_key_info is a register variable that stores the output of the ec2_key module. debug: The debug module displays the output that we have specified, in this case "ec2_key_info", on the output screen.
when: The when keyword is used to apply a condition to a playbook task. In our case, we want to copy the ec2 private key only when a new ec2 key is created. Thus we use "ec2_key_info.changed": if the task changed anything, changed is recorded as true in our register variable, "ec2_key_info", and the copy operation is performed only when a new key is created. A consolidated sketch of all five keywords follows below. Click Here, to try out some ansible examples.
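Since the original example screenshots are not reproduced here, the following is a small, hypothetical playbook sketching how all five ideas fit together; the config-file path and service name are illustrative assumptions, not the author's exact values:

```yaml
# Hypothetical consolidated sketch of the five keywords above.
# The mysqld config path and service name are illustrative.
- hosts: dbservers
  become: yes
  become_method: sudo
  tasks:
    - name: Build a timestamped archive name from gathered facts
      set_fact:
        archive_name: "ROOT_{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}_{{ ansible_date_time.minute }}.tgz"

    - name: Show the timestamped name
      debug:
        msg: "{{ archive_name }}"

    - name: Change the MySQL bind address without opening the file
      lineinfile:
        path: /etc/mysql/mysql.conf.d/mysqld.cnf
        regexp: '^bind-address'
        line: 'bind-address = 0.0.0.0'
      register: bind_result
      notify: Restart mysql

    - name: Show what the lineinfile task reported
      debug:
        var: bind_result

    - name: Run a follow-up step only when the file actually changed
      command: echo "bind address updated"
      when: bind_result.changed

  handlers:
    - name: Restart mysql
      service:
        name: mysql
        state: restarted
```

Run it with ansible-playbook playbook.yml --ask-become-pass so Ansible can prompt for the sudo password before escalating privileges.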
https://medium.com/@ysrathore1995/5-ansible-modules-keywords-that-you-must-know-6533151c13a0
['Yogendra Singh Rathore']
2020-12-23 15:38:00.292000+00:00
['Ansible Become', 'Ansible Modules', 'Ansible', 'Ansible Tutorial', 'Ansible Handlers']
Low on insert-resource-here? Start small and trust the process.
I get it: life's busy. Unpredictable. Full of well-intentioned plans, and less-than-perfect execution. When you're in the midst of the daily grind, even when you're at work on something you truly believe in, it's easy to feel overwhelmed by the size of the task ahead. The dreams we had for our businesses, while potentially achievable, bring with them much more work than we originally anticipated. Business is booming — we're challenged to find down time. Business is struggling — it's hard to fall asleep at night. Growth is great — until we see the growing pains it brings with it. If you're currently in the midst of any of these tensions, committing time and energy to the areas that are important but not urgent can feel counterintuitive. Grand ideas about streamlining company procedures, or revamping customer outreach, or migrating to a new CRM sit untouched as another email lands in our inboxes. 🤦 In the midst of the work, don't forget the value of starting small. 🌱 Unable to fully commit to the system audit you know you really need? Identify one small workflow habit that's costing manual time, and find a small and simple automation solution. (Think implementing a chatbot to help alleviate customer communication workloads.) Got your eye on a dream app or program but don't want to commit to the monthly fee? Jump into a free trial and give it a damn good run for its money. This could cost you nothing, and save you greatly in the long run. Sick of ordering post-it notes? Start using the Notes app on your desktop and mobile. Better yet, install Evernote and keep all of your ideas, questions, interests and notes in one central location. Two minutes of your time keeps your thoughts close, not stuck to your desk. Here are some more two-minute hacks that all contribute to saved time and increased energy…

Set up a Calendly account and never manually schedule a meeting again

Tell the Slackbot to set up a reminder for your next to-do instead of changing gears in the middle of your current task

Find your CRM's personalised BCC address so your emails are automatically added to your leads and accounts as you send them

Schedule the same content across multiple social media platforms without needing to change windows in Buffer

I'm a big believer in the power of momentum. It's the small changes we make that can fundamentally alter our trajectories for growth and impact, day in and day out. By reducing the pressure on yourself to 'get it all together' and simply picking up one new habit, tool or approach, you can move yourself (and your team) closer to your goals, one step at a time. Have a great weekend — enjoy some downtime, and conserve your energy for the work that matters most.
https://medium.com/@shaunhughston/low-on-insert-resource-here-start-small-and-trust-the-process-bc96fa12bdd2
['Shaun Hughston']
2019-10-09 07:40:17.253000+00:00
['Focus', 'Growth', 'Productivity', 'Momentum', 'Automation']
Ants and Dog Poop
There were some ants in the house. Nameera called our apartment maintenance as they usually have pesticides. A guy came with a small tube of translucent jelly and applied it at a few points. He said the ants will "take the bait back to their place and that should do it". Their entire colony exterminated. I thought he was only going to stop them from bothering my wife. I did not think he was going after their species. — There is this little ant that has separated from its group, climbed my desk and the coffee mug on top of it, and is contemplating whether to go into the mug or not. What has led this ant here? Will it go to the bottom, or does it have enough information to leave and come back with an army? It piqued my curiosity, so I did a quick internet search. The first result 'Ant — Wikipedia' says "Ants are eusocial insects… Ants appear in the fossil record across the globe in considerable diversity during the latest Early Cretaceous (150 million years ago)" The second result 'Ant Control: Types, Facts, Get Rid of Ants' says, "Ants carry bacteria on their bodies, which spreads when they crawl in pantries and across countertops". Well, we are all made up of bacteria. I just made a bacteria culture — curd — for the well-being of the billions of bacteria in my stomach. Are the bacteria on the ants harmful? The website preview continues "Only a few species are known to transmit diseases, but finding any type of ant in pantry goods or inside the home is an unpleasant experience that creates nuisances." Oh nuisance, absolutely. While the lonely ant on my coffee mug can make for a moment of amusement, as soon as I see a few congregating under my table, I know we have a problem. A little more internet digging reveals that the ants are probably here because heavy recent rains must have flooded their colonies. Closing off their entry points, and ensuring they don't find any food to come back to, is often enough to get rid of them from the house. In India, we are used to drawing insect-control chalk lines around the house to prevent ants from entering. The idea of getting rid of their entire colony seemed a little excessive. Not because I like ants but because insects are useful. — Consider the dung beetle for instance. Africa is populated by thousands of different kinds of dung beetle. They feed on and scatter animal droppings, making it easy for the soil to recapture the nutrients, and keeping the forest floor clean. One scientist observed 4,000 dung beetles on a fresh pile of elephant scat within 15 minutes after it hit the ground. Some kinds make neat balls and roll away dung balls twice their size. Thanks to the prodigious beetles, elephants do not have to worry about signs such as "Dog poop spreads contagious diseases, clean up after your dog". Americans gather dog poop in plastic bags and bury them in piles where a lack of air supply leads to partial decomposition and harmful gases. Dung Beetles Can you use dung beetles to clear dog poop, or are we stuck with collecting it in plastic bags? As proof of human and dung beetle ingenuity, someone has already tried this. "The dung beetles worked well from the very first day we released them into the backyard from their shipping box — we didn't have to train them to do what comes naturally. They stuck around the backyard since that was where they found all of the dog poo.
They never bothered the dogs, and the dogs seemed to be uninterested in them." — Insects are a major source of nuisance to humans, and controlling them in our living environment should always take priority. If you have ever been bitten by a mosquito, you would want to eradicate their entire species. And you wouldn't be wrong in doing that. However, insects also break down complex organic matter, aerate the soil, help in pollination and seed dispersal, serve as food for birds, and are connected to the ecosystem in myriad ways. Human thinking and our approach to problems are usually linear. Our methods of hacking away at natural systems will have far-reaching consequences. Hidden dog poop and dead ants might one day come back to haunt us.
https://medium.com/@sivaprems/ants-and-dog-poop-bcfb5abed771
['Sivaprem S']
2020-05-20 02:58:01.325000+00:00
['Insects', 'Human Behavior', 'Ecosystem', 'Climate Change', 'Problem Solving']
EIFI Token Price Prediction: A safe play on the Decentralized finance space?
The price of the EIFI Finance cryptocurrency EIFI will plunge by 60% in August 2021 before reaching an all-time high around 30 August 2021. The EIFI Finance token launched in June 2021 and became popular among investors on social media, but the token fell back later in June 2021 amid unauthorized trading on PancakeSwap, so the EIFI Finance team removed the smart contract and created a new smart contract on the Binance Smart Chain under the name EIFI. EIFI will now be the leading currency of the EIFI Finance ecosystem, and its price is expected to rise soon because of the project roadmap and a strong worldwide community. Is now a good time to invest in the EIFI token for a potential recovery? This article looks at the cryptocurrency's recent developments and offers a forecast of the EIFI token's price growth in 2021. EIFI Finance competes with Ethereum-based exchanges. Eifi Finance is an AMM decentralized exchange (DEX) that runs on the Binance Smart Chain (BSC), similar to Uniswap on the Ethereum blockchain. It will be launched in September 2021 by EIFI Finance's anonymous developers. While centralized exchanges like Binance and Coinbase are run by a single company, you can trade on a DEX without an intermediary. Like most decentralized finance (DeFi) activity to date, most DEXs are based on Ethereum. Eifi Finance, however, aims to compete with Ethereum by providing lower fees, faster transactions, and other features. Given that Ethereum's fees have skyrocketed with increased use, it's an understandable strategy. EIFI Finance will operate on the automated market maker model: users deposit their funds into a smart liquidity pool on the EIFI Finance exchange in return for income from liquidity provider tokens, and traders make their trades against the smart liquidity pool. A total of 150 million EIFI tokens have been launched in the market, and users will hold EIFI in wallets from August 2021, according to the EIFI Finance core team. On 6 July 2021, EIFI Finance made the transition to V2, deploying two of its smart contracts, EIFI Router and EIFI Factory, and migrating the EYFI token into EIFI with a better, more advanced mechanism. EYFI Finance has already updated its smart contract on the BSC network and other top platforms, and created a new set of smart liquidity pool tokens. These changes allow developers to create additional incentives by adjusting the fee structure and give them the opportunity to ramp up operational security. Community members vote on how the fee structure will adjust. EYFI Finance quickly attracted users when it launched the EIFI token because it allows traders to instantly swap tokens without registering for accounts, and it charges low transaction fees. Users can trade directly from their cryptocurrency wallet, as the cryptocurrency is not held on an exchange. EYFI Finance users can stake their EIFI tokens in ECC liquidity pools for free to earn high interest rates. They can also receive free tokens every week from new project launches. EYFI Finance migrated to a new domain (from EIFI.com to Eyfi.finance) to improve its security with new upgrades. EYFI Finance (EIFI) coin prediction 2021, 2022, and 2025 The EIFI/USD forecast from algorithm-based forecasting site Wallet Investor predicts the price of the EIFI token will climb from $10.00 to a fresh all-time high of $110.00 by the end of this year. It then predicts the price will more than double to $220.00 in the second quarter of 2022 and rise again to $701.24 by the end of 2025.
The EIFI price forecast from the Economy Forecast Agency estimates the price will rise from $10.15 in August 2021 to $50.00 at the EIFI token's launch on exchanges, reaching $110.04 in December 2021. For the long term, Coin Price Forecast predicts the EIFI token price will reach $340.00 by the end of 2022, $701.24 by the end of 2025 and $3151.63 by the end of 2030. Is EYFI Finance (EIFI) safe? EYFI Finance passed an audit by blockchain security ranking platform CertiK in July 2021. EIFI tokens can be stored securely in wallets connected to the BSC such as MetaMask, TrustWallet and WalletConnect. Contracts are verified on the block explorer BscScan. What is the future of the EIFI coin? EIFI is one of the most popular decentralized finance automated market maker projects, and it has the potential to continue growing as a decentralized asset that offers high interest rates. Is EYFI Finance (EIFI) a good investment? When it comes to a highly volatile asset like a cryptocurrency token, it is vital that you do your own research to determine if it is a good fit for your investment portfolio. The decision should depend on your risk tolerance and how much money you have available for investing. Keep in mind that you should never invest more than you can afford to lose. Will the EIFI token go up? Forecasts from prediction sites like Wallet Investor and DigitalCoin suggest that the EIFI price will rise against the US dollar in the future. Can EIFI crypto reach $100? Whether the EIFI price will reach the $100 mark depends on the direction of the broader cryptocurrency markets. Wallet Investor is bullish and predicts the price will surpass $100 in November 2021, while Coin Price Forecast predicts it will take until the end of 2021. The Economy Forecast Agency expects the price to remain below $80 on average through 2021. Connect With Us- Website: www.eifi.com Twitter: https://twitter.com/EifiFinance Facebook: https://www.facebook.com/Eifi.finance Telegram Group: https://t.me/Etheryieldfarming Telegram Channel: https://t.me/etheryieldfarmingofficial
https://medium.com/@eififinance/eifi-token-price-prediction-a-safe-play-on-the-decentralized-finance-space-7e39c0ad3299
['Eifi Finance']
2021-07-21 06:27:49.777000+00:00
['Decentralized Finance', 'Decentralized', 'Decentralized Exchange', 'Finance', 'Token Sale']
Writing Task 1 (graph)
The chart below shows the amount spent on six consumer goods in four European countries. Write a report for a university lecturer describing the information shown below. Cambridge 3 test 2 The graph compares the amounts spent on six consumer goods in four European countries, measured in thousands of pounds sterling. Overall, it can be seen that all of the countries spend close to the average amount on each item, except Britain and France on photographic film. Turning first to the amounts each country spent on CDs, toys, and photographic film, the highest figure was in Britain, at around 165 to 170. The lowest spender on those items was Germany, at approximately 148 thousand pounds sterling. Italy and France saw an equal amount on toys, at more than 150 thousand pounds sterling. In contrast, each country spent a stable amount on personal stereos, tennis racquets, and perfume, at around 140 to 150. Britain was always the highest for those categories, above average and reaching 160 thousand pounds sterling. Germany spent a stable amount on every item, at less than 155 thousand pounds sterling.
https://medium.com/@vitieltssharing/writing-task-1-graph-12b3beac4ec3
['Vita Muflihah F']
2021-07-06 16:18:56.418000+00:00
['Ielts', 'Cambridge Ielts', 'Graph', 'Ielts Preparation', 'Ielts Writing']
Technology will force us to evolve
Serial Entrepreneur & Marketing Strategist | CEO of VayronLabs Media | Join me in my journey as I document it.
https://medium.com/espanol/la-tecnolog%C3%ADa-nos-forzara-a-evolucionar-1178ec8b1604
['Jorge Pérez']
2016-09-24 13:15:01.584000+00:00
['Futuro', 'Diseño Generativo', 'Español', 'Lecciones De Vida', 'Emprendimiento']
Finding an investment that’s right for you
Chances are, we all make mistakes in life. When it comes to investments, we have so many options to choose from, and how we analyze those options will influence our investment choices. Here are some tips on choosing the right option for you. The first important question we need to answer is: what is the relationship between risk and return? And what is our interpretation of risk and return? In the investing world, understanding this relationship helps us make important decisions. Risk and Return Risk is the possibility of losing money on your investment, and return is the amount you may earn. Risk and return are fundamentally linked: the greater an investment's potential to achieve a higher return, the greater the risk associated with it. Once we understand this relationship, it leads to our next question. What type of investor are you? How you choose to invest largely depends on the type of investor you are. There are many key considerations in choosing an investment, including:

How long are you investing for?

How hands-on do you want to be when managing your investment?

How much investment risk are you comfortable with?

What is your investment objective?

Read more…
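To make the risk and return trade-off concrete, here is a toy calculation with made-up numbers (purely illustrative, not from the article): a steadier investment shows lower volatility and a lower average return than a riskier one.

```python
# Toy illustration of the risk-return trade-off, using made-up numbers.
import statistics

# Hypothetical yearly returns (as fractions) for two investments.
savings_like = [0.02, 0.02, 0.03, 0.02, 0.02]    # low risk, low return
stock_like = [0.15, -0.10, 0.25, -0.05, 0.20]    # high risk, high return

for name, returns in [("savings-like", savings_like), ("stock-like", stock_like)]:
    mean = statistics.mean(returns)
    volatility = statistics.stdev(returns)  # a simple proxy for risk
    print(f"{name}: average return {mean:.1%}, volatility {volatility:.1%}")
```

The riskier stream earns more on average but swings far more year to year, which is exactly the link between risk and return described above.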
https://medium.com/@bitdotdoslashayox/finding-an-investment-thats-right-for-you-cc1299ebdf3
['Https', 'Bit.Do']
2020-12-27 18:54:48.151000+00:00
['Investors', 'Investment']
The City Mentoring
Nithin Bopanna introducing the launch to the crowd at Radical HQ, Huckletree Shoreditch. London is filled with the most amazing array of stereotypes, ranging from the Shoreditch Starter-uppers, Stoke Newington yummy mummies, the Soho media types who are forever on a jolly, to the struggling artists of Tottenham. Arguably, there is no London stereotype that has persisted as long as that of the City worker. To politely paint with the same brush — white, male and connected. They're well-suited, have mirror-reflection-shiny shoes and might have gone on a gap-yah. What we continue to see in the UK, particularly in companies in the City, is a self-perpetuating cycle where people help those with a similar background gain exposure and contacts in the industry. Ultimately, this gives those with connections a head-start on graduate positions and internships. You may ask, how do we break the system? Nithin Bopanna, a man with a vision, is trying to tackle this City imbalance head-on by creating the Success Accelerator. "I was part of a group of good friends who first met at the University of Kent studying law and had the pleasure (and pressure) of being invited back to a 2017 panel event discussing our diverse career paths ten years on from graduation! The students in the Q&A session posed some really perceptive questions, but also shared how difficult they were finding it to obtain opportunities to work in the City and their gap in exposure to professionals. This was a challenge I couldn't walk away from as someone who had faced down some of the same obstacles, from the same non 'Russell Group' University. Blocked potential is such a shame, especially when it affects the composition of our future leaders and wider society". Bopanna then partnered with the University of Kent ERASE Careers and Employability Service to create the initiative. Students who don't have direct connections with companies in the City will now have access to city visits, professional networking and mentoring from City professionals and university staff. "The combination of support aimed to increase participants' awareness, access to industry, self-reflection and confidence, therefore increasing their attainment, retention and chance of achieving a positive graduate destination". The Success Accelerator Last week Radical Company had the pleasure of hosting the Success Accelerator Project at our office auditorium in Huckletree, Shoreditch (Radical don't mean to blow their own horns, but they're very good at parties and think of themselves as the hostess-with-the-mostest). Mentors from across the city gathered to share their professional experiences and gain further insight into this valuable initiative. Members represented a number of City stalwarts, notably Allen and Overy, London Stock Exchange, Intercontinental Exchange, Bridges Fund Management, HSBC, Cater Allen Private Bank and Leyton Consultants. All these mentors pledge to encourage and guide participants in the Success Accelerator 2019 intake. Mentorees chatting at the reception drinks. When Bopanna was asked to explain the success achieved so far, he had the following to share: "A great example of success achieved so far is seen in Sara, a brilliant student now in their third year at Kent Law School. Originally unsure of their career path, Sara wanted to get an understanding of the wider professional world and develop skills.
Through the summer networking event held last year for the pilot group of students, she made a strong impression and managed to obtain 3 months of paid work experience at a Venture Capital firm that focuses on Social Impact Investing. What has been encouraging about this particular journey is that the mentoring developed organically, with world-class professionals taking Sara under their wing to build a long-term relationship. We are all now excited to see Sara fly on her own terms at the end of her degree." We look to replicate this success with the 15–20 students going through the scheme in 2019. Speaking at the event was Amin Dawuda, who discussed his 20+ year career in the financial services industry. He touched on the significance of his own personal mentors in forging his career at the London Stock Exchange. In his presentation, he also discussed the future of FinTech and what this could mean for professionals and graduates entering the field. Amin Dawuda discussing his mentors throughout his career. Radical feels privileged to be involved in such a fundamentally important, passion-driven project. We can't wait to see what Success Accelerator graduates will achieve in the future. Would you or your business be interested in taking on the role of a mentor? Contact Nithin on nithin.c.bopanna@gmail.com
https://medium.com/radicalcompany/the-city-mentoring-434897fb5f6a
['Jessica Rattey']
2019-02-08 13:27:19.508000+00:00
['Tech', 'Finance', 'Mentoring', 'London', 'Education']
Closed Platforms vs. Open Architectures for Cloud-Native Earth System Analytics
By Ryan Abernathey & Joe Hamman

Anyone working with large-scale Earth System data today faces the same general problems:

The data we want to work with are huge (typical analyses involve several TB at least)

The data we need are produced and distributed by many different organizations (NASA, NOAA, ESGF, Copernicus, etc.)

We want to apply a wide range of different analysis methodologies to the data, from simple statistics to signal processing to machine learning.

The community is waking up to the idea that we can't simply expect scientists to download all this data to their personal computers for processing.

Download-based workflow. From Abernathey, Ryan (2020): Data Access Modes in Science. figshare. Figure. https://doi.org/10.6084/m9.figshare.11987466.v1

"The cloud" — here defined broadly as any internet-accessible system which provides on-demand computing and distributed mass storage — is an obvious way to confront this situation. However, two distinct categories of cloud-based solution have emerged in the geoscience space. Closed platforms aim to bring all the data into a central location and provide tools for users to perform analysis on the data. Open architectures assume data will be distributed and seek interoperability between different data catalogs and computational tools. In this post, we'll review some of the different solutions in each space.

Closed Platforms

First let's enumerate some examples of closed platforms and note what they have in common.

Google Earth Engine

Google Earth Engine (GEE) was the first well-known platform for Big Data earth system science. In their own words: Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.

Google has invested their vast engineering expertise in building this tool. It is truly an amazing and impressive platform, and has led to many important scientific discoveries. When we first started thinking about Big Earth Data in 2013, we learned about GEE and immediately assumed this would just be the future: we'll all migrate to GEE. That clearly didn't happen. While GEE is clearly a valuable tool for many scientists, the fact remains that the platform is closed — it's not open source. It's controlled by Google. They decide what it can and cannot do, which datasets are available, how resources are allocated, etc. (And they have certainly been very generous towards the scientific community.) But GEE is fundamentally a platform for satellite imagery. There are many types of Earth System data and analysis methods that are simply not possible in GEE, and there is no obvious way for the scientific community to change this fact. You also can't install GEE on your own servers. As closed platforms go, it's hard to beat GEE. But others are trying.

Descartes Labs Platform

A new kid on the block is the startup Descartes Labs. The Descartes Labs Platform is the missing link in making real world sensor data — from satellite imagery to weather — useful. We collect data daily from public and commercial sources, clean it, calibrate it, and store it in an easy-to-access catalog, ready for scientific analysis.
We also give our customers the ability to throw huge amounts of computational power at that data and tools built on top of the data to make it easier to build models. Sounds great! The Descartes CEO Mark Johnson has made a convincing argument that “every business needs a data refinery.” Scientists would also like a data refinery. In fact, we would argue that most scientific research groups operate as a mini data refinery, with the grad students and postdocs doing the dirty work of slurping up nasty, crude data from wherever it lives and consolidating it into usable, analysis-ready data for the benefit of themselves and their research collaborators (~10 other people). From my reading of their documents, it appears that the Descartes platform consists of a multi-petabyte catalog of remote-sensing data, a Python API to access and process this data, a parallel computing framework to scale out analysis, a JupyterLab front-end, and some custom visualization tools. A major selling point is that Descartes is a native Python application. Following the GEE model, Descartes are opening their platform to researchers. This may look like a great opportunity to some academics. While we haven’t yet played around with Descartes, we have no doubt it is a powerful, well-engineered system that puts many valuable remote-sensing datasets at your fingertips. We encourage readers to try out their Impact Science program if it seems like a good fit for your research. However, it seems unlikely to me that this program could accommodate every scientist who might want to use it. Furthermore, like GEE, it’s impossible to run the Descartes platform on your own hardware or your own cloud account. Copernicus Climate Data Store One ambitious platform by a non-commercial entity is ECMWF’s Copernicus Climate Data Store. The Climate Data Store infrastructure aims to provide a “one stop shop” for users to discover and process the data and products that are provided through the distributed data repositories. The CDS also provides a comprehensive set of software (the CDS Toolbox) which enables users to develop custom-made applications. The applications will make use of the content of the CDS to analyse, monitor and predict the evolution of both climate drivers and impacts. To this end, the CDS includes a set of climate indicators tailored to sectoral applications, such as energy, water management, tourism, etc. — the Sectoral Information System (SIS) component of the Copernicus Climate Change Service (C3S). The aim of the service is to accommodate the needs of a highly diverse set of users, including policy-makers, businesses and scientists. So like GEE and Descartes, the CDS has both data and processing, all provided and controlled by a single entity. The data itself are amazing. The ERA5 reanalysis is widely regarded as the most accurate and comprehensive picture of historical weather. But those who wish to use the platform have to use the CDS Toolbox, whose functionality is somewhat limited. Because the CDS Toolbox is not a community open-source project, there is no clear way for a motivated user to enhance its capabilities, beyond opening a support ticket. Unlike those commercial platforms, the CDS Toolbox has no support for parallel, distributed processing of data. As a result, many advanced users must download the data to their local computing systems. Indeed, “downloading ERA5” is a very common and time-consuming activity for graduate students studying meteorology and climate impacts. 
Another major role of the CDS is to support these downloads, via a bespoke API.

CDS Live Monitoring: https://cds.climate.copernicus.eu/live/

It's amazing to have all those petabytes of data available to the world for free. But if you've ever tried this download service, you have probably realized that it can be very slow. This is because a single system has to serve many users simultaneously, with finite hardware and bandwidth. We recently learned on Twitter that the CDS data is backed by a tape storage archive, which explains why some data requests can take many hours or days. Copernicus clearly sees the large volume of data delivered via the CDS as a metric of its success.

We sincerely applaud Copernicus for its commitment to providing open data and free computing. But despite its popularity (which is ultimately due to the quality of the data itself; CDS is the only official way to get ERA5), we see some real limitations in the CDS model. Like GEE and Descartes, the CDS aims to be a "one stop shop." But it is not a cloud-based platform — it's backed by a fixed and finite set of computers, with all hosting costs borne by the provider. Consequently, users are forced to choose between a compute service with limited features and computational power or a heavily throttled download service. The result, evident at research shops around the world, is that users mirror ERA5 on their local computing systems, creating "dark replicas" of analysis-ready data. Hundreds of labs and institutions are probably mirroring the same hundreds of Terabytes, just so they can compute close to it.

Open Architectures

Open Architecture for scalable cloud-based data analytics. From Abernathey, Ryan (2020): Data Access Modes in Science. figshare. Figure. https://doi.org/10.6084/m9.figshare.11987466.v1

The alternative to the closed "one-stop-shop" platforms described above is an open network of data and computing services, interoperating over high-bandwidth internet connections. This model alleviates the need for one organization to shoulder the entire infrastructural burden, allowing each to focus on its strengths and stretch its limited budget. NCAR's Jeff de La Beaujardière recently laid out a compelling vision for such a "Geodata Fabric". The article identifies a need for:

A new approach to data sharing, focused on object storage rather than file downloads

Scalable, data-proximate computing, as found in cloud platforms

High-level analysis tools which allow scientists to focus on science rather than low-level data manipulation steps

With these ingredients in place, scientists could re-create much of the convenience found in closed platforms such as GEE or Descartes, but using open source software, distributed data from multiple different organizations, and computing resources provided by a scientist's institution or funding agency. An open federation of data and computing resources would be more resilient, sustainable, and flexible than a single closed platform. It would also be more compatible with the distributed nature of scientific institutions and funding streams. So what might this look like in more concrete terms? Below we review three different software stacks which might enable this type of open infrastructure. What all of these tools have in common is the ability to load and process data on demand over the internet, rather than assuming data is already downloaded onto a local hard drive.
OPeNDAP / THREDDS

In geoscience, we have had an excellent remote-data-access protocol for a long time: the "Open-source Project for a Network Data Access Protocol" or OPeNDAP. The DAP2 protocol provides a discipline-neutral means of requesting and providing data across the World Wide Web. The goal is to allow end users, whoever they may be, to access immediately whatever data they require in a form they can use, all while using applications they already possess and are familiar with. In the field of oceanography, OPeNDAP has already helped the research community make significant progress towards this end. Ultimately, it is hoped, OPeNDAP will be a fundamental component of systems which provide machine-to-machine interoperability with semantic meaning in a highly distributed environment of heterogeneous datasets.

Long before anyone was talking about Cloud and REST APIs, visionary scientists and software developers developed a way of remotely accessing data that remains very useful and relevant today. Using OPeNDAP, scientists can avoid pre-downloading large data files and instead can lazily reference a data object on a remote server, triggering downloads only when bytes are actually needed for computation. An important counterpart to an OPeNDAP workflow is a catalog service which helps users discover and access the data they need. A very common solution is the THREDDS Data Server. The THREDDS Data Server (TDS) is a web server that provides metadata and data access for scientific datasets, using OPeNDAP, OGC WMS and WCS, HTTP, and other remote data access protocols.

ESGF Architecture Diagram. From the 2017 ESGF Brochure.

TDS provides the backbone for much of the world's Earth System data. For example, the Earth System Grid Federation operates a global federation of servers which, using peer-to-peer replication, serve dozens of Petabytes of data to the worldwide research community. These data include the CMIP5 and CMIP6 climate model datasets. All the data are served to end users via TDS. TDS supports both file download mode and OPeNDAP API-based direct access to data. By combining cloud-based processing with OPeNDAP access, ESGF users can basically do cloud-native-style workflows. A detailed example of this sort of workflow is described in a companion blog post.

However, one limitation of this workflow is the processing throughput. The figure at left shows the throughput of the UCAR ESGF OPeNDAP service as a function of the number of parallel read processes. We can see that the data rate saturates at around 140 MB/s. While certainly sufficient for many workflows, it's hard to process many TB of data this way. The throughput of OPeNDAP streaming is limited both by hardware — the limited processing power of the data server, limited outbound bandwidth, etc. — and by software — the OPeNDAP protocol was simply not designed with petascale applications and massively parallel distributed processing in mind.

COG / STAC

As the cloud has emerged as a powerful way to store and process large collections of data, the geospatial imagery community has pioneered a new class of cloud-native geospatial processing tools and data formats. Much of the success in this area can be attributed to the development of a new data storage format, the "Cloud Optimized GeoTIFF". A Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on a HTTP file server, with an internal organization that enables more efficient workflows on the cloud.
It does this by leveraging the ability of clients issuing HTTP GET range requests to ask for just the parts of a file they need. Built to support efficient tile-by-tile access to large collections of geospatial imagery, the COG has provided an excellent template for the development of other cloud-optimized data formats (e.g. Zarr). Like many open source projects, the development and production of COGs has led to innovation in other areas as well. One example of such innovation is the development of the SpatioTemporal Asset Catalog (STAC). The SpatioTemporal Asset Catalog (STAC) specification provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered. A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time. The goal is for all providers of spatiotemporal assets (Imagery, SAR, Point Clouds, Data Cubes, Full Motion Video, etc) to expose their data as SpatioTemporal Asset Catalogs (STAC), so that new code doesn't need to be written whenever a new data set or API is released.

COGs and STAC provide the building blocks for a flexible and accessible system for geospatial data analysis. STAC provides a system for describing large collections of geospatial data stored in cloud object storage, and COGs provide efficient access to pieces of those collections without needing to download the data first. Indeed, COG / STAC provide an excellent template for open architecture. One limitation of this stack, however, is its rather narrow focus on geospatial imagery, which excludes many types of scientific data within Earth System science.

Pangeo

Pangeo represents our best attempt to implement a cloud-native open architecture solution for climate science and related fields. The key technological elements of Pangeo on the cloud are:

Xarray Dataset. Credit Stephan Hoyer.

Xarray — A high-level data model and API for loading, transforming, and performing calculations on multi-dimensional arrays. Datasets in Pangeo (and Xarray) tend to conform to the CF metadata Conventions.

A distributed parallel computing framework — Dask — which enables scientists to scale out the computations to huge datasets with minimal changes to their analysis code.

A storage format optimized for high-throughput distributed reads on multi-dimensional arrays: Zarr. Zarr works well on both traditional filesystem storage and on Cloud Object Storage.

Intake — a Python library which helps users navigate data catalogs and quickly load data without getting lost in the details.

Jupyter — the interactive computing framework which allows users to interactively control a remote computing kernel, running in a container in the cloud, using their browser.

Pangeo Architecture. From Pangeo NSF Earthcube Proposal (2017), doi:10.6084/m9.figshare.5361094.v1.

Using these basic ingredients, scientists can compose their own end-to-end platforms for big data analytics that rival any of the closed options. (In fact, the closed platforms all reuse some of these components — particularly Jupyter, which has achieved near universal adoption in the data-science world.) This platform can run in any cloud or on any HPC system. It's also highly interoperable with the other open stacks such as COG / STAC.

This figure shows how Pangeo scales on Google Cloud Platform. Using a few hundred parallel processes, we can achieve sustained data processing throughput rates of 10–20 GB/s.
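To make this concrete, here is a minimal sketch of the Pangeo-style access pattern: open an analysis-ready Zarr store directly from cloud object storage and let Dask schedule the computation. The bucket path and variable name are illustrative placeholders, not a specific catalog entry:

```python
# Minimal sketch of a cloud-native Pangeo workflow.
# The bucket path and variable name are illustrative placeholders.
import fsspec
import xarray as xr

# Lazily open a Zarr store in object storage -- no download step.
store = fsspec.get_mapper("gs://some-public-bucket/sea-surface-temp.zarr")
ds = xr.open_zarr(store, consolidated=True)

# Dask breaks this reduction into chunked tasks that stream bytes on demand.
monthly_climatology = ds["sst"].groupby("time.month").mean("time")

# Nothing has been read yet; .compute() triggers the parallel read/reduce,
# which is where a Dask cluster delivers the multi-GB/s throughput above.
result = monthly_climatology.compute()
print(result)
```

Pointed at a Dask cluster running next to the data, the same dozen lines scale from a laptop test to the hundreds of parallel processes described above.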
Using elastic scaling and preemptible compute nodes, we can run that kind of processing very cheaply (a couple of dollars to process a few TB). Pangeo is not nearly as polished or well-documented as GEE and other closed platforms. But what it lacks in polish, it makes up for in flexibility and extensibility. Pangeo can run on any cloud, or on virtually any on-premises hardware, and its components can be mixed and matched to meet an organization's particular needs. For example, one organization might prefer to store their data in TileDB rather than Zarr format, or to use Iris rather than Xarray for a data model. No problem. Most importantly, with Pangeo, any organization can provide data over the internet using a cloud-native, analysis-ready format and allow others to compute on that data at scale. For example, NOAA's big data project is now providing large volumes of Earth System data in the cloud, spread across AWS, Google Cloud, and Azure. Pangeo users can process this data directly at very high throughput using their own cloud computing account. No download or "ingestion" required.

Conclusions and Outlook

Closed platforms, such as Google Earth Engine or Descartes, offer the research community an exciting template for how cloud-native Earth System Science could work — no tedious downloads or frustrating data-preparation steps; comprehensive and user-friendly catalogs of relevant datasets; scalable, on-demand processing to quickly burn down Terabytes or Petabytes of data. However, it seems unlikely that the closed platforms can meet the needs of every Earth System scientist, due to their necessarily narrow scope. Furthermore, since industry, rather than academic science, is the main customer for these closed platforms, academic scientists will continue to depend on free credits — this doesn't feel sustainable or scalable. This isn't to say that the closed platforms can't be very valuable for some scientists — just that we can't expect to rely on them to meet all of our data processing needs across the entire field.

The alternative is to collaborate on building open architectures. We outlined three software stacks — OPeNDAP / TDS, COG / STAC, and Pangeo — which implement the principles of cloud-native open architecture in different and complementary ways. Using these technologies, it's possible to assemble a big-data processing system that rivals the power of the closed platforms. Crucially, these open architectures allow the separation of the role of data provider from the role of data consumer / processor. This eliminates the need for any one organization to be a "one-stop shop." Data providers, like NOAA, can focus their expertise and limited resources on their core asset — their datasets — by providing analysis-ready data in the cloud. Other organizations — university labs, startups, etc. — can deploy their computing next to the data, using their own unique computational tools and environments. This federated model seems more compatible with the nature of academic funding than a single, central platform for everyone.

We've painted a rosy picture of open architectures, but many challenges remain to realizing this vision. Because the open architectures rely on open-source software components, they're vulnerable to a "tragedy of the commons" problem. No one organization "owns" these components the way that Google owns Google Earth Engine. There are no marketing brochures or sales teams to pitch open architectures to high-level decision makers.
This lack of centralization can make some institutions reluctant to commit to open infrastructure. In fact, we believe that the decentralized nature of open-source is key to its sustainability and longevity. A recent analysis of NSF-funded software by Andreas Müller concluded that community software which arose organically from shared needs was much more likely to flourish than software that originated with a grant proposal. A key challenge for funders, therefore, is how to nurture and support these community-initiated tools and to help steer them towards meeting institutional needs. Another challenge involves cloud computing costs. We have argued that open architectures enable an efficient decentralized mode of big data analytics, whereby different institutions bear the costs of cloud computing for their users on the same shared data. But, at least in academic scientific research, we still don’t have a good model for how to provide cloud computing to scientists. Should we build our own science cloud? Should funding agencies just grant money to researchers to buy cloud credits at the market rate? Or should an intermediary like Cloudbank aggregate and distribute cloud computing credits for the research community? This question ultimately must be resolved by the funding agencies who support scientific infrastructure. Finally, even if we can figure out a simple model for how to pay for the credits, scientists will still need help deploying cloud-based infrastructure. This is a place where the closed platforms have a big edge. Most individual research groups lack the expertise and time to spin up a Pangeo-style computing environment of their own. One interesting model is the Canadian project Syzygy, which provides a managed JupyterHub service to universities across Canada. Within the Pangeo project, we are thinking hard about how to take open architecture to the next level. We believe that open-source data science tools, cloud computing, and cloud-based data have the potential to transform Earth System science, ushering in a new era of discovery, efficiency, and productivity. If you have ideas about how to solve the challenges addressed above, we’d love to hear from you. Acknowledgements This work was supported by NSF award 1740648.
https://medium.com/pangeo/closed-platforms-vs-open-architectures-for-cloud-native-earth-system-analytics-1ad88708ebb6
['Ryan Abernathey']
2020-09-15 23:07:19.980000+00:00
['Science', 'Cloud Computing', 'Platform', 'Infrastructure']
Neighbors never meet
Title: Neighbors never meet Artist: Albeswood Project: Neighbors never meet Creation Date: 2020 Medium: Installation, Painting Materials: Plastic, black resin Size: 150×84×16cm Quantity: Unique (one-of-a-kind piece, created by the artist) Signature: Hand-signed by the artist, signed on verso. Photographer: Albeswood For the past few years, independent artist Albeswood has been thinking about the relationship between the senses and space. In this series of works, he created installation pieces that render partial models of the human body in new materials. Each work represents a state of body and space, while the space itself is hidden; we can only feel the hidden space by observing the local structure of the body. The project is part of his broader artistic practice, which combines the senses and space and tries to draw people into a kind of artistic conception. You can keep track of the artist's work on Instagram, and check out all the other works on his website. Artist Website: https://albeswood.com Instagram: https://www.instagram.com/albeswoodart Email: Albeswood@gmail.com All images © Albeswood #art #installation #artwork #sculpture #artist #minimalism #psychological #contemporaryart #body #creative #zen #abstract #visualart #artgallery #blackandwhite
https://medium.com/@albeswood/neighbors-never-meet-2418013e3f7c
[]
2020-12-22 14:34:41.947000+00:00
['Artist', 'Artwork', 'Art', 'Installation', 'Sculpture']
21st Century Learning: The effects of IR4.0, globalization, the changing workforce and shorter shelf life of knowledge
Learning is the lifelong process of transforming information and experience into knowledge, skills, behaviors, and attitudes. Learning in the 21st century comprises the skills, technologies and insights that leading-edge academicians and organizations are using to create learning systems better suited to emerging challenges. This is done through the practice of Instructional Design: systematically designing, developing and delivering instructional products and experiences, both digital and physical, in a consistent and reliable fashion, towards an efficient, effective, appealing, engaging and inspiring acquisition of knowledge. At its inception, Instructional Design was dominated by the views of the behavioral psychologist B.F. Skinner, whose stimulus-response operant conditioning theories gave us the famous drill-and-practice routine — the idea that knowledge and skill are acquired through repetitive practice. Today, there is growing recognition that learning occurs most effectively when courses or programs are carefully designed around the key tasks and skills needed to perform the job. Recently, new buzzwords have emerged, such as e-learning, bite-size learning, gamification, digitized simulations, etc. Having been in the corporate learning and development space for quite some time, I was bewildered by the new buzzwords and decided to immerse myself in recent developments and emerging trends in the learning and development area. Hence, in March 2019, I attended a Learning & Development Conference in Kuala Lumpur with an interesting title — Big L&D Summit 2019 — Emerging Trends in Learning & Development: Are You Ready to Up Your Game! The two-day event was an insightful session with an exchange of knowledge and experiences by various speakers. At the end of the two-day conference, I discovered that there is a "new world of work" emerging in the 21st century, disrupting the corporate learning paradigm. It is turning old instructional, episodic and live training models upside down, as technology, financial, people and competitive pressures drive change to achieve 21st-century corporate success, growth and sustainability. During the session, a speaker from Frost & Sullivan Asia Pacific shared very interesting insights about the 4th Industrial Revolution (IR4.0): IR4.0 is leading to Mega Trends and transforming the way businesses operate. Mega Trends are transformative, global forces that define the future world with their far-reaching impact on business, societies, economies, cultures and personal lives; for example, robots have entered our homes for personal use, mobile financial transactions now happen in crypto-currencies, cars drive themselves, and so on. IR4.0 is enabling connectivity that allows for the convergence of industries, products and functions. This convergence is likely to drive unconventional players to contest new markets. For example, cars plus unmanned technology lead to the development of autonomous cars. Every company will become a technology company, as most companies will use mobile applications, data and analytics, IoT, cybersecurity, cryptocurrency and blockchain, cloud computing, etc. The banking sector, for example, is moving towards branchless banking and uses more than one technology, i.e. mobile applications, cybersecurity, data and analytics, and others. These megatrends, coupled with globalization, the changing workforce, and the shorter shelf life of knowledge, reveal that "one-size-fits-all" content is no longer relevant where instructional design is concerned.
Just as businesses are personalizing their products and services for clients and consumers, so should instructional design methods innovate to meet the changing needs of the new business landscape. Learning and development is expected to play a critical role in building the future-ready organization. How could learning and development play this role? Continue to read
https://medium.com/knolskape/21st-century-learning-the-effects-of-ir4-0-e30c26831a8c
['Anand Udapudi']
2019-10-04 06:23:03.926000+00:00
['Future Of Work', 'Learning', 'Technology', 'Futureskills', 'Learning And Development']
Turn ON your mind bulb with Battery!
The race is on around the world as scientists strive to develop a new generation of batteries that can perform beyond the limits of the current lithium-ion battery. Ever wondered how long that should take, and why? Let's find out. First of all, let's quickly go through the basics and the life story (I mean history) of batteries. Batteries nowadays have become so ubiquitous that they are almost invisible to us; from powering a 3-year-old's toy to powering an electric vehicle, we have come a long way. Yet they are a remarkable invention with a long and storied history, and an equally exciting future. In this article, I will take you through the history of batteries, how they work, the latest inventions, the exciting future, and the business involved. Chemical batteries consist of two poles, positive (+) and negative (-), and an electrolyte solution. Chemical reactions between the poles and this solution are what generate the electricity. So by using different substances for the poles and different electrolyte solutions, we can make several types of batteries with different properties and voltages. Batteries have been with us for a long time. Let's get into the roots of their discovery and invention. The earliest battery sample found in history is the "Baghdad Battery", a ceramic pot battery known as the world's oldest, at more than 2,000 years of age. This earliest example of a battery is surrounded by controversy, but suggested uses include electroplating, pain relief, or a religious tingle. Moving forward, then came the time of Alessandro Volta, an Italian physicist. Yes, he is the one after whom the battery was named. He placed copper and zinc into an electrolyte solution like dilute sulfuric acid or saline solution. When the two are joined by a conductor, electricity flows from the copper to the zinc. Hence the first true battery was developed, named the "Volta battery", which forms the basis for modern chemical batteries. One of the most enduring batteries, the lead-acid battery, was invented in 1859 and is still the technology used to start most internal combustion engine cars today. It is the oldest example of a rechargeable battery. How do batteries work? First, let's learn the difference between primary and rechargeable batteries. A battery in which the reaction that produces the flow of electrons cannot be reversed is referred to as a primary battery, and a battery in which the flow of electrons can be reversed is known as a rechargeable battery. One of the earliest rechargeable batteries is the nickel-cadmium battery (NiCd). Batteries consist of a positive (cathode) and a negative (anode) electrode and an electrolyte. The electrodes exchange electrons and ions, which are usually of positive charge. Only the ions flow through the electrolyte, which is an electrical insulator, forcing the electrons to flow through the external circuit. That electron flow is what powers the device, and the exchange is reversed to recharge the battery. Advancements: First: lithium-ion batteries The demand from new technologies is often for more compact, higher-capacity, safe, rechargeable batteries. The most cutting-edge battery chemistry we currently have is lithium-ion. Most experts agree that no other chemistry is going to disrupt lithium-ion for at least another decade or more. Three great scientists were awarded the 2019 Nobel Prize for the development of lithium-ion batteries:
(1) John B. Goodenough, (2) M. Stanley Whittingham and (3) Akira Yoshino. The above-mentioned scientists share the prize for their work on these rechargeable devices, which are used for portable electronics. Working of lithium ions via pictorial representation Second: Graphene based batteries Graphene based batteries are quickly becoming more favorable than their graphite predecessors, as a battery based on the "graphene ball" material requires only 12 minutes to fully charge. Graphene batteries are an emerging technology which allows for increased electrode density and faster cycle times, as well as possessing the ability to hold a charge longer, thus improving the battery's lifespan. Scientists have developed a new graphene based battery material with a charging speed five times faster than today's lithium-ion batteries. This breakthrough by researchers at the Samsung Advanced Institute of Technology (SAIT) in South Korea provides promise for the next generation of mobile batteries and electric vehicles. Standard lithium batteries require at least an hour to fully charge, even with quick charging technology, so several attempts to explore new innovative materials have been started. Are graphene batteries available? Graphene based batteries have exciting potential, and while they are not fully commercially available yet, R&D is intensive and will hopefully yield results in the near future. Electric cars The leader in manufacturing this new battery format for vehicles is the Tesla electric vehicle company, which has plans for building "Giga-plants" for production of these batteries. Batteries by Tesla are considered to be among the best in the world. Depending on the required battery application, use of silicon anode material will boost battery capacity initially by about 20 percent and eventually by 40 percent or better, hence providing much larger capacities. The rub is that silicon expands almost 300 percent in volume when it reacts with lithium during charging. It then shrinks by the same amount during discharge. Repeated charge-discharge cycling causes the anode to begin to disintegrate. That in turn creates more surface area on the anode, which then reacts chemically with the electrolyte, damaging the battery. So batteries with silicon anodes tend not to hold up for long. Happily enough, silicon's expansion problem is not insurmountable. Even now, some lithium-ion batteries have anodes that include particles containing silicon combined with silicon dioxide (the stuff of sand) and coated with carbon. Elon Musk revealed in 2016 that Tesla's lithium-ion cells are built that way. But to date, the amount of silicon in anodes has been minimal. Perhaps we are about to witness the next generational shift in energy generation and storage, driven by the ever-improving capabilities of the humble battery. Let's delve a little deeper and think of some amazing future possibilities of batteries! Electric planes! Well, that could be the future of aviation. According to projections, they will be much quieter, cheaper, and cleaner than the planes we have today. Electric planes with a 1,000 km (620 mile) range on a single charge could be used for half of all commercial aircraft flights today, hence cutting global aviation's carbon emissions by about 15%. It's the same story with electric cars. An electric car isn't simply a cleaner version of its pollution-spewing cousin. It is, fundamentally, a better car.
Its electric motor makes little noise and provides lightning-fast response to the driver's decisions. Charging an electric car costs much less than paying for an equivalent amount of gasoline. Electric cars can be built with a fraction of the moving parts, which makes them cheaper to maintain. So why aren't electric cars everywhere already? It's because batteries are expensive, making the upfront cost of an electric car much higher than a similar gas-powered model. And unless one has to drive a lot, the savings on gasoline don't always offset the higher upfront cost. In short, electric cars still aren't economical. Similarly, current batteries don't pack in enough energy by weight or volume to power passenger aircraft. We still need fundamental breakthroughs in battery technology before that becomes a reality. Battery-powered portable devices have transformed our lives. But there's still a lot more that batteries can disrupt, if only safer, more powerful, and more energy-dense batteries could be made cheaply. Fortunately, no law of physics precludes their existence. And yet, despite over two centuries of close study since the first battery was invented in 1799, scientists still don't fully understand many of the fundamentals of what exactly happens inside these devices. What we do know is that there are, essentially, three problems to solve in order for batteries to truly transform our lives yet again: power, energy, and safety. Various companies are working on solutions to the problem. One idea is to replace layered electrodes with something structurally stronger. For example, the 100-year-old Swiss battery company Leclanche is working on a technology that uses lithium iron phosphate (LFP), which has an "olivine" structure, as the cathode, and lithium titanate oxide (LTO), which has a "spinel" structure, as the anode. These structures are better at handling the flow of lithium ions in and out of the material. Efforts like Leclanche's show it's possible to tinker with battery chemistries to increase their power. Still, nobody has yet built a battery powerful enough to rapidly deliver the energy needed for a commercial plane to defeat gravity. If you can do that, the Nobel Prize would be waiting for you! Some startups are looking to build smaller planes (seating up to 12 people), which could fly on batteries with relatively lower power density, or electric hybrid planes, where jet fuel does the hard lifting and batteries do the coasting. But there's really no company working in this space anywhere near commercialization. Further, most battery experts suggest that the kind of technological leap required for an all-electric commercial plane will likely take decades. Battery Business Batteries are already big business, and the market for them keeps growing. All that money attracts a lot of entrepreneurs with even more ideas. But battery startups are difficult bets! They fizzle even more often than software companies, which are known for their high failure rate. That's because innovation in material sciences is hard. So far battery experts have found that, when they try to improve one trait (say energy density), they have to compromise on some other trait (say safety). This kind of balancing act has meant the progress on each front has been slow and fraught with problems. Fortunately, there are more battery scientists now than in the old days, which means more eyes on the problem.
The potential of batteries remains huge, but given the challenges ahead, it's better to look at every claim about new batteries with a good dose of skepticism. Conclusion Hopefully, this article helps you understand batteries more intuitively. If you are genuinely intrigued by the topic I have presented, I encourage you to research more on batteries to strengthen your understanding. I will be presenting more about simulating battery packs in different configurations & electric vehicles in the coming days. If you find yourself interested, stay tuned! If you have any queries, you can write to me at mahirastogi50@gmail.com LinkedIn: https://www.linkedin.com/in/mahima-rastogi-16b238165/ Instagram: @mahi__wayzz Thanks for reading!
https://medium.com/@mahimarastogi/turn-on-your-mind-bulb-with-battery-2d04095aaa0a
['Mahima Rastogi']
2020-05-27 15:07:41.578000+00:00
['Battery Charger', 'Future Of Technology', 'Electrical Engineering', 'Battery', 'Women In Tech']
Cancel Culture Is Hurting the Left
Why cancel culture is going to haunt Democrats in November. Photo by Andre Hunter on Unsplash. "Two things form the bedrock of any open society: freedom of expression and the rule of law. If you don't have those things, you don't have a free country." — Salman Rushdie Some in the media simultaneously deny cancel culture exists and insist it isn't a problem anyway. Some media outlets ignore the phenomenon entirely. Some members of the press dismiss cancel culture as a complete fabrication, a fever dream of right-wing provocateurs. Others embrace cancel culture proudly, boasting of its seeming efficacy. Some even claim it doesn't really hurt anyone. These media authorities can waste all the digital ink they like assuring people that cancel culture is not real, that free speech isn't really in danger. They can insist it is only "hate speech", which never should have been permitted in the first place, which is getting people "cancelled". They can imply that people with nothing to hide have nothing to fear. People are free to express their opinions, they assure us; but no one is free from the consequences of doing so. People with opinions deemed to be racist, bigoted, transphobic, or otherwise problematic deserve to be held accountable for their speech. There is one major glaring error in this argument. Not every voting Democrat looking on at any of these instances of "cancellation", from J.K. Rowling to people being hounded out of online knitting communities, feels completely confident they will never face a similar fate themselves. After all, opinions expressed decades ago are held to today's standards; those daring to have expressed them in the past have been held to account in the form of job losses, expungement from societies and clubs. They have been socially ostracized. Authors, and their works, have been cancelled. Ordinary people have had entire 3,000-word Washington Post hit-pieces written about them. Almost all have been publicly shamed, hounded by strangers on the internet, and bombarded on their social media pages with insults and threats. Unpopular high school students are being preemptively cancelled by groups of their peers who "call them out" on social media for anything deemed "problematic". Everyone in the school piles on; then everyone else on Twitter, including the New York Times. Are the accused actually guilty of these crimes? Who knows. Cancel culture certainly isn't anything as organized and dignified as a trial by a jury of one's peers, where one has the expectation of facing their accuser and presenting a defense under the presumption of innocence. The problem is, everyone has done and said things of which they aren't proud. In a moment of anger, on a bad day, we've all treated someone disrespectfully, said something we regret. Maybe it was a misunderstanding, long forgotten. Perhaps we didn't mean to be hurtful at the time; it is only now we understand that our good intentions don't always translate into right actions. An exchange we may have perceived as perfectly innocent may have been understood differently by the other person or people involved. Everyone has almost certainly offended someone, especially since we all recently started sharing our innermost thoughts and opinions with everyone on social media. Perhaps high school students, having grown up with their lives recorded for posterity, have been more mindful? That seems rather doubtful. Technology has improved; people are just as fallible as ever.
Which means that today’s impulsive teenagers who lack mature judgement and are likely experimenting with drugs and alcohol probably still make plenty of mistakes. Everyone watching friends, coworkers, and peers torched online for thought-crimes, fired and ostracized, wants to run as far from cancel culture as they can possibly get. People disagree. They say stupid things. If the Democratic Party isn’t a party that can tolerate the moral failings of human beings, there aren’t going to be as many Democrats. In taking this hard-line, in unleashing the Twitter-mob on those who “deserve it”, liberal progressives are pushing voters towards the Republican Party, however reluctant they are to vote for Donald Trump. It isn’t merely right-wingers who are falling victims to this new trend of online public shaming. On the contrary; luminaries from everywhere on the political spectrum are warning against the insidious creep of cancel culture. Ordinary people are worried, too. “This hostile culture is getting results. According to one brand-new survey, it is only far-left Americans who do not feel compelled to self-censor their views because of a hostile climate. Everyone but the far-left feels the threat.” “And 50 percent of self-identified strong liberals say that simply contributing to the GOP presidential candidate ought to be a fireable offense for a business leader. In this country?” — Sen. Mitch McConnell Cancel culture is making the progressive leftists of Twitter look like a bunch of bullies. And no one likes a bully. In November, when voters make their choice, will they identify more with the members of the Twitter mob, or with its victims? (contributing writer, Brooke Bell)
https://medium.com/discourse/cancel-culture-is-hurting-the-left-3c981d1a8bc6
['Dr. Munr Kazmir']
2020-07-26 00:56:50.360000+00:00
['Politics', 'Election 2020', 'Media', 'Trump']
Our Experience at KDD2021
Our AI Team at RappiBank had the pleasure of both attending and presenting one of our papers at KDD 2021 (more on that to come!). KDD is one of the top AI conferences in the world, with the mission of providing a premier forum for advancement, education, and adoption of the "science" of knowledge discovery and data mining from all types of data stored in computers and networks of computers. It was an all-around amazing opportunity for us to better learn our craft, learn from amazing speakers, and hear innovative ideas on the subject. We will go through a small summary of what we personally considered the highlights of this conference, with a bit of emphasis on the Workshop for Machine Learning in Finance, where our paper was accepted. How to learn more with less in Graphs! As you may know from our previous post, we are really into Graph Machine Learning at RappiBank. As such, there is no better way to start these highlights than with Neil Shah's amazing presentation about Graph Machine Learning with Scarce Labels. In a field where we personally have been looking into semi-supervised and self-supervised solutions to this same problem of scarcity, Neil presented a brilliant way to learn more with graphs through modifications of the edges in the graph. The idea is that through the removal of noisy edges and the addition of possibly missing edges, we can more closely approximate an ideal graph, where classes are perfectly separated into the same sections of the representations. These augmented graphs achieve better performance with practically every GNN model! As Neil said himself: You can learn better representations without labels by exploiting good priors through graph augmentation. GCN performance on the original Zachary's Karate Club graph in (a), and three augmented graph variants in (b-d). Black, solid-blue, dashed-blue edges denote original graph edges, newly added, and removed edges respectively. Fairness and performance, both equally important Afterward came Pedro Bizarro to talk to us about the many amazing projects Feedzai is working on. From fairness in AI to models that actually can explain themselves, it was an exciting ride through many subjects that have room for innovation and are critical to our work. Our personal favorite was Fairband, a bandit-based fairness-aware hyperparameter optimization algorithm: a simple-to-implement, model-agnostic methodology to consider the tradeoff between performance and fairness (if there is one!). The authors even found that, without extra training cost, Fairband consistently finds configurations that obtain substantially improved fairness at a comparatively small decrease in predictive accuracy. Fairness-accuracy tradeoff of thousands of models on the Adult dataset. In orange, the linear regression relationship between accuracy and fairness; in the red rectangle, the top 10% of models with the highest accuracy; in light gray, the fairness-accuracy Pareto frontier; marked with an A, the model with the highest accuracy; marked with a B, a model with 0.8% lower accuracy and 44.8% higher fairness than A. For more information on how the authors define fairness, or how they solve this multi-objective optimization problem, make sure to check out the paper!
How to use deep learning to create user-merchant recommendations Finally, Mahashweta Das talked to us about an impressive Transfer Learning approach that embeds users and merchants in the same contextual vector space for a novel recommender system of merchants for users. The key intuition: guide neural collaborative filtering with domain-invariant components shared across both dense and sparse domains, improving the user and item representations learned in the sparse domains. They show the effectiveness and scalability of their proposed approach on two public datasets and a massive transaction dataset from Visa. Make sure to check the paper for further information about it! General Overview It was nice to see the emphasis on ethical issues like fairness evaluation and interpretability methodologies for models. It is really important for us as developers in this community not to forget about ethics, and to make the field a better place while developing state-of-the-art models and innovating. It was also interesting to see some presentations centered around Graph Machine Learning, such as the one presented by Neil Shah, a couple of others by MIT and the National University of Singapore that are yet to be publicly shared, and ours (make sure to check our future post on this!).
https://medium.com/rappibank/our-experience-at-kdd2021-4a6646bd7a8a
['Jaime D. Acevedo-Viloria']
2021-09-06 15:12:09.747000+00:00
['Conference', 'Kdd', 'Data Science', 'Machine Learning', 'Graph']
Is the 14 day Japan Rail Pass worth it?
Yes, but you will need to plan ahead. Here is how I planned my itinerary. For those who aren't familiar with the JR Pass — it's essentially an unlimited travel pass for tourists on JR Group owned transportation throughout Japan. JR Passes sell at a seemingly high upfront cost — but if you plan your trip well, you can easily get a bang for your buck. See more eligibility requirements for the JR Pass here. To save a few extra bucks, be sure to get your JR Pass before you set foot in Japan. I got my 14-day JR Pass at a local travel agency in San Francisco — but you can also buy it online. Find other internationally verified vendor locations here. Note: I am not affiliated with the JR Group, just a fellow travel lover with a blog. Check it out! Before I get into my itinerary, I think it's helpful to frame the travel style I had for this trip. I traveled solo during the new year holiday season, I am partial to "local" exploration and tend to avoid most popular attractions, and lastly, I have a huge penchant for modern/contemporary art. These influenced the 'when' and 'where' of my JR Pass itinerary. Day 1: Osaka to Yoro (day trip) The Site of Reversible Destiny (photo from TripAdvisor) Destination: The Site of Reversible Destiny Travel time: 6 hrs Total Cost: ¥15,520 (includes ¥420 for a local, non-JR Group owned train) From their site, "Arakawa and Gins' Site of Reversible Destiny — Yoro, is a carefully considered construction of undulating planes, shifting colors, and disorienting spaces that the artists/architects presented to visitors a place of purposeful experimentation." If you are a lover of design, architecture, and play, the Site of Reversible Destiny is a really sweet place for you. Yoro is a bit out of the way to get to, although the journey adds to the adventure. Plus, the small town surrounding it is extremely charming. Would recommend. Day 2: Osaka to Naoshima The most beautiful museum I've ever been to — the Chichu Museum (photo from Benesse) Destination: Chichu Art Museum Travel time: 3 hrs Total Cost: ¥7,840 (includes a ¥570 boat ticket from a separate company) Naoshima is an island known for its contemporary and modern art. There are four main attractions on the island along with several art pieces scattered around it: Benesse House, the Lee Ufan Museum, the Chichu Art Museum, and the Art Houses. Long story short, the Chichu Art Museum changed my life. It is a must-see for contemporary and modern architecture lovers. The building is designed by Tadao Ando, and the museum includes incredible installations by Claude Monet, James Turrell, and Walter De Maria. You will need to book your tickets in advance. Day 3: Teshima to Kobe Teshima Art Museum (photo by Iwan Baan) Destination: Yokoo Art House & Teshima Art Museum Travel time: 3 hrs Total Cost: ¥6,160 (includes a boat ticket from a separate company) Teshima is a sister art island to Naoshima. On the island, there are more Art Houses as well as the Teshima Art Museum. I'd recommend renting an assisted bike to get around on the island. The Teshima Art Museum is a collaboration between artist Rei Naito and architect Ryue Nishizawa. It's a beautiful spot to observe and meditate. If you love contemporary design, architecture, and minimalism, this is a must-see for you.
Day 4: Kobe to Kyoto Hyogo Prefectural Art Museum staircase (Photo by Patipat Janthong) Destination: Hyogo Prefectural Art Museum Travel time: 2 hrs Total Cost: ¥1,450 Kobe is most well known for its Kobe beef restaurants — however, if you're into architecture and modern Japanese/western art, do check out the Hyogo Prefectural Art Museum. It's located just a few JR stops away, in Nada. Day 5: Kyoto Nishiki Market (photo from jrailpass) A well-known attraction, Nishiki Market, is a narrow shopping street in Kyoto. Often referred to as "Kyoto's Kitchen", this fresh food market is five blocks long and lined with over one hundred restaurants and shops. Day 6: Kyoto Kurama Onsen (photo from Oyster) Located 30 minutes away from Kyoto, Kurama Onsen is a nice city getaway, featuring a Roten-buro (outdoor hot spring bath). I was surprised by how easy it was to get into the mountains — there are several bus routes from Kyoto city center. There are some great hiking trails up in Kurama to check out as well! Day 7: Kyoto to Kanazawa Pool installation by Leandro Erlich at the 21st Century Museum (photo from Cool Material) Destination: 21st Century Museum Travel time: 3 hrs Total Cost: ¥7,240 I remember seeing a photo of this pool installation on my Tumblr feed back in the day, and have always wanted to see it IRL. The art piece is a permanent installation within the 21st Century Museum in Kanazawa! I didn't get to see it due to museum renovation closures; however, there are some fun public installations surrounding the uniquely shaped building. Day 8: Kanazawa Kenroku-en Gardens (photo from japan-guide) Kanazawa also has one of Japan's most beautiful and greatest gardens, known as a garden which combines six characteristics — the six aspects considered important in the notion of an ideal garden: spaciousness, serenity, venerability, scenic views, subtle design, and coolness. Day 9: Kanazawa to Iiyama Destination: Nozawa Skiing / Snow Monkeys Travel time: 1 hr Total cost: ¥7,240 I had never skied before, and thought it would be fun to try since I would be passing through the area. Would highly recommend the skiing here — heard praise from many other seasoned skiers on the quality and quantity of the snow. Plus, the small town surrounding the resort is very cute and quaint. Day 10: Iiyama to Ikebukuro Jigokudani Monkey Park in Nagano (Image by Matthew Kane) Destination: Jigokudani Monkey Park Travel Time: 2.5 hrs Total cost: ¥8,570 Another sentimental stop for me (I had seen these monkeys in my school textbooks and have always wondered what it would be like to see them IRL), the Jigokudani Monkey Park is a popular attraction in the area. My tip is to go early! It does get crowded, and there will be lots of tourists surrounding a small hot spring, all with cameras pointing at the monkeys. But there are also a lot of monkeys and they're all very cute. Day 11: Tokyo inside 21_21 Design Sight (photo from matcha-jp) I didn't do much intercity travel on this day as I was very exhausted from the past 10 days. I had a lovely Day 11 of my JR Pass at the 21_21 Design Sight museum. Day 12: Ikebukuro to Sendai (day trip) downtown Sendai (photo from japan-guide) Destination: Shopping! Travel Time: 4 hrs Total cost: ¥22,400 There are a lot of great outdoor and cultural attractions in/near Sendai; however, I only really wanted to do some shopping on this day. Sendai has some great secondhand shops.
Day 13: Ikebukuro to Osaka (day trip) Super Tamade supermarket (photo by OnlyForward) Destination: Super Tamade! & Dotonbori Travel Time: 6 hrs Total cost: ¥29,440 I had a lot of decision fatigue this day (as well as travel fatigue), and decided to go back to Osaka one last time. Super Tamade is a cute grocery chain concentrated in the Osaka area. I would highly recommend popping into a Super Tamade shop — the neon signs, cheeky store design, and cheap prices are sure to tickle your senses. Dotonbori is a well-known shopping strip in Osaka that I would also recommend for a stroll. Day 14: Tokyo I was content to stay in Tokyo on Day 14, as I was exhausted from the travel already done and I had already gotten good value from the pass.
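So, was the pass worth it by the numbers? Adding up the itemized fares above (¥15,520 + ¥7,840 + ¥6,160 + ¥1,450 + ¥7,240 + ¥7,240 + ¥8,570 + ¥22,400 + ¥29,440) comes to roughly ¥105,860 of travel, a small slice of which (the local train and boat legs noted above) wasn't covered by the pass. For reference, the 14-day ordinary pass sold for somewhere around ¥47,000 when I traveled — check the current price before you buy, as fares and pass prices change. Even after subtracting the non-JR legs, this itinerary covered more than twice the pass price in fares.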
https://medium.com/@ashlerr/is-the-14-day-japan-rail-pass-worth-it-1857fa8ea224
['Ashley Herr']
2020-07-03 00:35:51.717000+00:00
['Jr Pass', 'Japan', 'Travel Tips', 'Travel Itinerary', 'Japan Travel']
The Passion Trap
“I love building businesses and launching new ventures, but the only reason I value money is that I’m going to need a lot of it when I buy the New York Jets.” -Gary Vaynerchuck, Crush It! “…the Erotic Professional positions herself as answering a vocational ‘calling’ that seems to have barely anything to do with being paid.” -Juno Mac and Molly Smith, Revolting Prostitutes Like many people born after the 1980s, I grew up surrounded by discourse about work and passion. The underlying thesis was that, since work occupies the majority of a regular person’s time, and work is often associated with unhappiness and discontent, the solution was to turn work into an enjoyable experience. In a capitalist system that abhors any hint at communism, it does not do to examine the reasons why work is so undesirable to so many — that would eventually lead to discussions about fair wages, a reasonable hourly schedule, shifts in working conditions. The only solution that capitalism proposed to the issue of work was the individualist solution of finding something you’re so passionate about, it doesn’t feel like work, a sentiment summed up in that old cliché “love the work you do, and you’ll never work a day in your life.” As a young person coming of age in the early 2000s, I wholeheartedly believed this mantra. I spent my first year in college casting about for my passion. When I finally found it in translation, it was not at all clear how I would even enter into the field. I had no connections and knew no one who even worked in it — apart from my professors, and they were not hiring. There was no way to get my foot in the door because no one could even point me to the door. My professors could not realistically provide the answer. They had both sort of fallen into the field in a time before the internet, when the industry was vastly different. One particular memory sums up that time in my life. One of my translation classmates had been talking proudly about a project to translate encyclopedia articles her mother, a government employee, had gotten for her using her connections. At the time I worked at the now-defunct Toys R’ Us, and keenly remember a moment when, while mopping spilled soda from the floor, I felt like I was at a complete dead end compared to my luckier classmate. How would I, as a soon-to-be college graduate mopping soda off the floor at a retail store ever become a freelance translator? After college, I entered the corporate world, when I excitedly took a job at a call center as a telephone interpreter. That seemed to be the door, back then, until it became clear that it would never lead to where I wanted. I became known around the office as a provider of translations, and so was often tasked with this additional labor. Back then, it was true of me that I was so passionate about translation that I would do it for free, but I also aspired to become a full-time translator and be remunerated for my work. Eventually, I realized that these odd in-office jobs would lead nowhere, and that people were simply taking advantage of my willingness to do the work for no additional pay. They were never going to hire me as a full-time translator. These odd jobs would never lead to me figuring out how to acquire clients on my own. I began looking for freelance work online, only to end up in exploitative situations by underpaying rapacious agencies. 
As I spent three years searching in a frenzy for the way in, I realized that the tools and resources I had at hand were wholly insufficient to allow me to realize my dream. A graduate degree emerged as the solution. There was a school in Monterey, California that specialized in translation and interpretation. My college professors spoke highly of it, as it was well renowned within the field. I decided I would go there. A master's degree from a prestigious translation school would probably do the trick. As it was a private school and I would never realistically be able to save up enough money to go, I decided to take the leap into massive student loan debt, after countless hopeless attempts to find scholarships and grants.
https://elsafigueroa.medium.com/the-passion-trap-c33b1688759c
['Elsa Figueroa']
2020-08-23 18:04:33.728000+00:00
['Labor', 'Content Creation', 'Working Conditions', 'Passion Economy', 'Garyvee']
Gamification Theory and Didactics: Day 1 of the Second Journey, Start in Lomé
https://medium.com/gamification-f%C3%BCr-kmu/gamification-theorie-und-didaktik-tag-1-der-zweiten-reise-start-in-lom%C3%A9-d1c095d6c388
['Christian Müller']
2018-05-24 06:26:04.851000+00:00
['Afrika', 'Goethe Institut', 'Community', 'Gamification', 'Workshop']
A High-Level Introduction to the C Programming Language
C is a great programming language if you want to work closer to the machine. A couple of decades ago, C was considered a high-level programming language; look at how times have changed. We've been babied with programming languages like Java, C#, JavaScript, PHP, Python, you name it. You can pull a car using the frame machine, but we're going to do it with a hammer and a chain. Why? Just to prove that we can. There are other reasons for studying C, the primary one being the speed of the program. Understanding how everything works at a lower level makes learning other programming languages a breeze. So, let's get into it. There will be only short snippets of code in this post; you can watch some YouTube tutorials for full walkthroughs. C is a compiled language, meaning that you need a C compiler to convert your readable code into machine code. Machine code is just a bunch of zeros and ones. C got its popularity mainly because Unix was written in C. Currently, most operating systems are written in C, as are graphics-heavy video games (in C++ as well). Since C is a small language, most of the functions are defined in external libraries. That code is included at the top of the source file. The compiler copies the contents of the included file and pastes it into the source code where the include statement was entered. The most common one is the stdio.h header file, which stands for standard input/output. This library contains the code that's necessary for you to read data from and write data to the terminal. A couple of functions that you can't live without that are declared in the stdio.h file are scanf and printf. A few things to clear up before we continue. When compiling your code using, for example, gcc, you may have seen something like the following written: gcc somefile.c -o somefile. You're using the gcc utility to create an executable file called somefile. You can enter gcc somefile.c and it'll still work. However, this time it'll create an a.out file. To run it, you would have to enter ./a.out. If you do include the -o somefile, you can run it by entering ./somefile into your terminal. Also, the order matters slightly. You have to enter the name of the executable after the -o option. You can, however, move the -o to the beginning, such as gcc -o somenewfile somefile.c. Also, the executable file's and source code file's names don't have to match. The -o means output; so, it specifies the output file. The -o stated previously is different from the capital O as in -O, -O2, -O3 and -Ofast. The capital O is set when you want gcc to optimize your code. -O3 will include all of the optimizations of -O and -O2. Maximum optimization is done with -Ofast, but it will also take the longest to compile. Gcc optimization is switched off by default because it takes longer to compile the code. If you're writing a C program and you want to accept command line options (like the -o option), you'll have to read them with the getopt() function. First, include the unistd.h header file; unistd.h is not part of the standard C library, but is instead part of the POSIX library. To read each of the command line options, place the getopt() function into a while loop, i.e. while((ch = getopt(argc, argv, "do:")) != -1) {...}. (Older code compares against EOF, which is typically also -1, but the man page documents -1 as getopt's end-of-options return value.) The last argument provided to the getopt() function states that both d and o are command line options, that the o option also needs a command line argument and that that argument will be included immediately after the -o option.
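To make that concrete, here's a minimal sketch of such a loop (the option string "do:" comes from the discussion above; the messages and structure are my own illustration). It uses optarg, which is explained next:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    int ch;
    /* "do:" declares -d as a plain flag and -o as an option that takes an argument */
    while ((ch = getopt(argc, argv, "do:")) != -1) {
        switch (ch) {
            case 'd':
                printf("-d flag was set\n");
                break;
            case 'o':
                /* optarg points at the text that followed -o */
                printf("-o was given the argument: %s\n", optarg);
                break;
            default: /* getopt returns '?' for unknown options */
                fprintf(stderr, "usage: %s [-d] [-o argument]\n", argv[0]);
                return 1;
        }
    }
    return 0;
}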
To get the command line argument, you'll use the optarg variable once the "o" command line option is matched. Your command line options can also be combined as long as the option that requires an argument is written last (i.e. -do something). If you want to include both command line options and negative numbers, you can split the main arguments using "--" (i.e. gcc_custom -do somefile -- -5 somefile.txt). You're probably wondering why we must use the period forward-slash before the file name. If you were to enter somefile without the ./, the operating system would try to find the file in a directory specified in its environment PATH variable. If it's not there, it'll let you know that the file doesn't exist. To get around that, you're telling the terminal to look in the current directory to execute the file: ./ specifies the current directory. If you really want to eliminate the ./, you can copy the absolute path of the folder that contains your executable and append it to your PATH environment variable. One last thing: if you're using gcc on a Linux distribution, the compiler will have created a file named somefile after executing the command gcc somefile.c -o somefile. On Windows, the name of the file will have a .exe appended to it. Strings C doesn't support strings out of the box. Strings are stored as an array of characters. When printing out characters to a screen, the printf function looks for the null character '\0' to know when to terminate a string. Each character is stored in 1 byte of memory. If an array of characters has 10 elements, it would occupy 10 bytes of sequential memory. If the printf statement doesn't encounter a null character, it might continue into the next memory cell, which would not be beneficial for us since we have absolutely no idea what's stored there. So, when creating a string, make sure that the array's length is the length of the string plus one. This also brings up another frequently asked question: why do array indices start at zero and not at one? The array variable stores the memory address of the first element, so it knows how to locate that element; it doesn't directly know the memory address of any other element. What we do know is that arrays are stored sequentially. So, if we have a character array, and we know that each character takes up only one byte of memory, then we can say that the next character is 1 byte away from the starting point. The first element is similarly 0 bytes away from the starting point. Each index is an offset, which literally translates to the distance from the starting point. You can declare a string character array several ways, but here are two in C: by using a string literal or by defining and populating the array manually. If you initialize a character pointer with a string literal, you're creating a constant, and it's not changeable at that point since string constants are stored in the read-only data segment. When declaring a character array and later populating it, the storage is allocated on the stack and each element is mutable. You can also store strings, using the malloc function, on the heap. In other programming languages, like Java, the new operator is used to allocate space on the heap. What else can you do with strings in C? You need to check the string.h header file for useful function declarations. As a side note, the .h file is a header file that contains declarations.
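Circling back to strings for a moment, here's a minimal sketch contrasting the two declaration styles described above (the variable names are mine):

#include <stdio.h>

int main(void) {
    /* string literal: the pointer targets read-only data; don't modify it */
    char *literal = "hello";

    /* manually populated array: 5 characters plus 1 byte for '\0', on the stack */
    char manual[6] = {'h', 'e', 'l', 'l', 'o', '\0'};
    manual[0] = 'H';   /* fine: the stack copy is mutable */

    printf("%s %s\n", literal, manual);
    return 0;
}

Now, back to that side note about header files.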
Most of the time, programmers will not give you access to view the implemented functions, but will provide you with the .h file so that you may review the declarations and useful notes on how the functions work. If you're using a Linux operating system, you can learn more about a function by typing man function_name into the terminal (i.e. man strcmp). The man Linux utility stands for manual. The string.h file is part of C's standard library. What's the standard library? It's just a collection of code that came pre-installed with the compiler that you downloaded (or had with a Linux distribution). Another extremely useful header file is stdio.h, which you use for your input/output (i.e. printf and scanf). Conditional Expressions C follows the short-circuit evaluation technique to speed up the program. That means that if there are multiple comparisons separated with AND operators and one fails, we know that there's absolutely no way that the overall expression can be true (thanks, discrete math); for example, in x != 0 && 10 / x > 2, the division is never evaluated when x is zero. Sometimes this can be a problem if you're updating a variable in the second expression with a prefix or postfix incrementation operator. Similarly, we know that for an OR statement to be true, at least one of the expressions must evaluate to true. Since the result is going to be true either way, if the first expression evaluates to true, there's no point evaluating the second one. C doesn't have a Boolean type. C99 does allow you to enter true or false, but in the end, it gets converted into 1 or 0 respectively. This can cause unexpected results if you use the assignment operator instead of the relational equality operator in your conditional expression, since in C a zero represents false and all other non-zero integers represent true. What's the difference between && and &; similarly, what's the difference between || and |? The BITWISE & (AND) and BITWISE | (OR) always force the evaluation of both sides, preventing short-circuit evaluation side effects. BITWISE & and | also perform bitwise operations on the individual bits of a number. Loops There are two general types of loops: pretest and posttest loops. In pretest loops, the control statement is evaluated prior to the statements in the loop body. In a posttest loop, the statements in the body are evaluated first, followed by the loop condition. When looking at the operational semantics of counting loops, both "while" and "for" loops, you'll quickly see that they're very similar: initialize a loop variable, test against a terminal value, evaluate the statements in the loop body and increment the loop variable. Each expression in C's for loop can have multiple statements separated by commas. C allows for the use of the break statement, which terminates the loop, as well as the continue statement, which skips the remaining statements in the loop body and takes the execution back to the start of the loop. If loops are still hard to visualize, just take a look at the operational semantics of each one. Let's start off with the for loop. Its general form is:

for (expression_1; expression_2; expression_3) {
    loop body
}

and its operational semantics are:

      expression_1
loop: if expression_2 is false goto out
      [loop body]
      expression_3
      goto loop
out:  ...

Looking at the operational semantics of a for loop, you can quickly see that expression_1 is evaluated first. This is the initialization step. The loop label normally comes after the initialization of the loop variable. After the label, the condition is evaluated. If the condition is false, the unconditional branch (goto) transfers the control to the "out" label location in the program. If the condition is true, the statements contained in the loop body are executed.
After the execution of the loop body, expression_3 is evaluated. In the for loop, expression_3 normally serves as the step size. After the execution of the third statement, the goto statement transfers the control to the "loop" label location in the program, which, if you remember, comes after the execution of the first expression. In C's for loop, each of the expressions is optional; the semi-colons are not optional. Missing a second expression is the same as having an expression that's always true; this can potentially cause an infinite loop unless you have an explicit branch in your loop body. The first and third expressions can be a series of expressions separated by a comma. The second expression can be a multi-conditional expression. The loop body in C's for loop is also optional. If no statements are provided, a semi-colon must be included after the closing parenthesis. Since numerous expressions can be evaluated in the for loop's control statement, it's common to see for loops without a loop body. Counter-controlled loops were created for convenience, since so many logically controlled loops had some sort of counting variable. Every counting loop can be built with a logical loop; the reverse isn't true. The two most common logically controlled loops are the while and do-while loops. The difference between the two is that the while loop is a pretest loop, whereas the do-while loop is a posttest loop. Like before, let's examine the operational semantics of both. The general form of the while logical loop is:

while (control_expression) {
    loop body
}

The operational semantics for the while loop look like the following:

loop: if control_expression is false goto out
      [loop body]
      goto loop
out:  ...

In the pretest loop above, the condition is evaluated first. If false, the goto statement transfers the control to the "out" label, terminating the repetition. If the condition evaluates to true, the statements in the loop body are executed and the unconditional branch redirects the execution of the program to the loop label. Now, let's examine the general form and operational semantics of a do-while post-test logical loop:

do {
    loop body
} while (control_expression);

The operational semantics are listed as follows:

loop: [loop body]
      if control_expression is true goto loop

Examining the operational semantics of the do-while loop, we notice that the statements contained within the loop body are executed at least once and are performed before the condition is evaluated. If the control expression is evaluated to true, the goto branches to the loop label. Functions You must specify a return type for each function. If the function is not returning anything, void is used as the return type in the function declaration. Unless specified, arguments that are passed to a function are passed by value. The programmer can specify that the parameters should be "passed by reference." When returning a pointer, make sure the pointer was declared outside of the function. A pointer variable declared within the function will be placed on the stack; the scope of a local variable is from declaration to function end. Something else to be cautious of is passing pointers to arrays as parameters. Calling the sizeof operator on the array itself, prior to the function call, will provide you with the correct size of the array. However, if you attempt to use the sizeof operator on a parameter, it will display the size of the pointer variable, not the array. Make sure to pass another argument to the function that contains the size of the array if you need that information.
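Here's a minimal sketch of that pitfall and its usual workaround (the function and variable names are mine):

#include <stdio.h>

/* arr has decayed to a pointer here: sizeof(arr) would report the
   pointer size (4 or 8 bytes), not the size of the caller's array */
int total(int *arr, int len) {
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum += arr[i];
    return sum;
}

int main(void) {
    int nums[5] = {1, 2, 3, 4, 5};
    /* compute the element count while the array type is still known */
    int len = sizeof(nums) / sizeof(nums[0]);
    printf("%d\n", total(nums, len));  /* prints 15 */
    return 0;
}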
If you’re passing a pointer argument to a function and you don’t want it to be accidentally modified, include the keyword const before the pointer’s data type (i.e. const int *num). Once in a blue-moon you’ll write some code that’s mutually recursive (i.e. function one calls function two and function two calls function one). In this case, there’s really no way to arrange the functions so that the C compiler will be happy; you have to declare the functions prior to calling those functions. Even better than placing the declarations (called prototypes in C) in the same document would be to place them in a header file. Function declarations are necessary since C doesn’t allow forward referencing of functions; they’re needed for static type checking. When including your custom header file make sure to wrap it in double quotes to tell the compiler that it’s a local file (search via relative path) and not in a directory where library code is located. You can place the full pathname in your include statement if you’re including a header file with double-quotes. After the compiler finishes preprocessing the code, the header file code will be “copied” to the point where the “#include” is specified. The compiler doesn’t actually create a new file, instead it “pipes” the information through the compilation process. If you’re including a header file whose definitions are located in another source file, you’ll have to specify both source files when compiling (i.e. gcc file_a.c file_b.c -o file_a). If your function is returning an int value, even if you don’t declare a function prior to it being used, the compiler will still compile the code correctly. Why? When the compiler gets to that portion of the code, it’ll assume that the function returns an int since that’s what majority of the functions return. C supports variadic functions which are functions that accept a variable number of parameters. To create your own variadic function, you’ll first have to include the stdarg.h header. When defining a function, you’ll have to specify that the function will be a variadic function by including the “…” ellipses after the parameters of the function. Within the function, you’ll need to create a va_list (variable argument list) that will store the extra arguments that are passed to the function. After you create the va_list, you’ll also have to specify the last fixed argument with va_start macro; va_start accepts two parameters: the va_list and the last fixed argument of the function. To finally read the arguments, you’ll use the va_args macro. Va_args accepts two parameters: the va_list and the type of the argument passed to the function. Once you’re finished reading the list of arguments, you’ll need to tell C that you’re done with the va_end macro; va_end accepts one parameter: the va_list. To create a variadic function, you’ll need to have at least one fixed parameter. Function names are pointers to the function; the pointer variable contains the address of the function. If you have a function drive(), then drive and &drive are both pointers to the function. The function pointer name is a constant. To create a pointer variable that points to the function name you’ll have to specify the return type of the function, the name of the pointer variable wrapped in parentheses and the parameter types that the function that you’re pointing to has (i.e. char**(*var_name)(int, char*)). 
This is normally done when you're passing a function as an argument to another function or if you're creating an array of function pointers. Certain object-oriented languages that are built with C utilize function pointers to create many object-oriented features. Pointers What is a pointer? A pointer is a variable that stores a memory address as its value. We can use that memory address to find our way to the particular area in memory. If you remember, earlier I mentioned that parameters to a function can be passed by value. If you pass an extremely large amount of data by value, it means that the function must make a copy of that data and store it locally. Local variables (variables declared within the function) are stored on the stack. If the value that you just copied is too large, it can cause the stack to run out of memory. Also, copying such large objects (not to be confused with objects in object-oriented programming) is time consuming. It's much easier to just pass the address of where the object resides. As a side note, why do functions store their variables in a different section of memory? One reason is the scope of variables for recursive functions. Imagine the following piece of code being evaluated:

i = 5;
j = i;

In the example above, if the variable i is located on the left-hand side of the expression (i = 5), the value replaces the contents of i. If i is located on the right-hand side (j = i), the value of i is assigned to j. Make sure to understand that basic concept. Once you do, dereferencing a pointer on the left-hand side causes the value of the memory location that the pointer is pointing to to change. Dereferencing a pointer on the right-hand side of the expression causes the value to be retrieved from the memory location that the pointer points to. So, in other words, the * operator can read the contents of a memory address or set the contents of a memory address that the pointer is pointing to. To assign the memory address of a scalar to a pointer variable, you must use the & operator to get the memory address of the scalar. You also have to make sure that both the pointer and the scalar are of the same data type. Why do pointers have types? In a couple of paragraphs, I'll describe pointer arithmetic. But generally, if you were to add 1 to a pointer to a char versus a pointer to an int, the arithmetic needs to be different, since a char occupies 1 byte and an int occupies 4 bytes of memory. If an array stores integers as values and you want to go from array[0] to array[1], you need to move 4 bytes away from array[0]. Arrays can be used as pointers. The array name stores the memory address of the first element of the array. If you print the memory address of the array name and the memory address of arrayName[0], you'll notice that the memory addresses are identical. Array variables can't point to somewhere else though. Also, when using the sizeof operator to check the size of the array, the compiler will tell you the size of the array. If you use a separate pointer that points to the first element of the array (assigned from the array name), the compiler will lose the information about the array and will only give you the size of the pointer variable, which is 4 bytes on 32-bit machines and 8 bytes on 64-bit machines. The loss of information is called decay. Since the array address is a number, you can do pointer arithmetic to add integers to the pointer and subtract integers from the pointer.
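Here's a minimal sketch pulling those pieces together (the variable names are mine): the & and * operators, the array name decaying to a pointer, and integer arithmetic on that pointer:

#include <stdio.h>

int main(void) {
    int i = 5;
    int *p = &i;   /* p stores the address of i */
    *p = 10;       /* left-hand side: writes to the cell p points to */
    int j = *p;    /* right-hand side: reads the cell p points to */
    printf("i=%d j=%d\n", i, j);   /* i=10 j=10 */

    int nums[3] = {1, 2, 3};
    int *q = nums;                 /* the array name decays to &nums[0] */
    printf("%d\n", *(q + 2));      /* moves 2 * sizeof(int) bytes: prints 3 */
    return 0;
}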
If you create two pointers, one for example pointing to the first element and the other pointing to the third element in the array, you can subtract the pointers from each other. In array pointer arithmetic, you cannot add two pointers together. Arrays If you understand arrays in different programming languages, it should be simple to understand arrays in C as well. An array is stored in sequential memory addresses, with array element zero acting as the memory address that can be referenced; subsequent elements can be accessed through offset calculations. An array can store any data type as long as the elements are of the same type. Arrays can also store other arrays; these types of arrays are called multi-dimensional arrays. There are two types of multi-dimensional arrays: jagged and rectangular. C's two-dimensional arrays are always rectangular. What does that mean? Let's say that you wanted to store strings (character arrays) into an array. The length of the second dimension will have to equal the character count of the longest string plus one (for the null character). The smaller strings will have the null character fill the unused spaces. A two-dimensional array is stored contiguously in memory, so if you have a two-dimensional array[3][3], to access the third element of the second array you may write array[1][2]. Since we know that two-dimensional arrays are stored contiguously in memory, you can also access that element by writing array[5]. You can also create an array of pointers, which is just a list of memory addresses stored in an array. This way, you don't have to declare a second dimension; each pointer can be stored in a single-dimensional array even though the values that they point to (i.e. strings) may have different lengths. The pointers still have to be of the same type (for pointer arithmetic). Structs A struct (structured data type) is like an array; array elements are accessed via indices while struct elements are accessed via field names. Arrays require that the data type of the elements be the same, while struct fields can have different data types. To get the total memory size of the struct, calculate the size of each field and add them together. Fields are stored sequentially in memory in the order that they've been declared within the struct. Once a struct is created, the length is fixed regardless of whether all the fields are used or not; the maximum amount of space is allocated for each field. Adding an identifier after the keyword struct will create a new data type that you can assign to some new variable. When declaring a new variable with the data type of a particular struct, you have to include the word struct prior to the struct data type name (i.e. struct vehicle lambo). When defining a struct variable, make sure to place the values in the order that they're declared within the struct (i.e. struct vehicle lambo = {"Murci", "mph", 220};). To access a field within a struct, you would use the dot (.) operator (to update and read values). If you assign the struct to another variable, a copy of the struct is made and new memory is allocated. When dealing with complex structures, sometimes it's necessary to nest structs. You can access the nested struct with the dot operator again (i.e. lambo.Murci.topSpeed). The nested struct can be initialized in a similar fashion as a single struct (i.e. struct vehicle lambo = {{"Murci", "green"}, "mph", 220}). If the variable is a pointer to a struct, then you'll need to dereference the variable prior to referencing the field (i.e. (*lambo).speed).
The -> operator can be used (i.e. lambo->speed); it combines the dereferencing of the pointer variable and field referencing. When using the dereferencing symbol "*" and the dot operator (.), make sure that the dereferencing is wrapped in parentheses, since the dot operator has higher precedence than the dereference operator. To eliminate placing the struct keyword prior to variable declaration, you can use the typedef operator and place the identifier, which will act as an alias for the struct, after the closing brace. Once the data type has been assigned an alias, you may use only the type name in front of the variable name (i.e. vehicle a = {...};). Unions Unions are used when a variable may contain different data types throughout its lifetime. A struct could be used; however, due to how structs are implemented, memory space would be wasted. When declaring a union, the compiler will allocate enough space for the largest field within it (i.e. if a union contains an int and a float, it will allocate enough space for a float). Regardless of how many fields are defined within a union, each value is assigned to the same memory address. A union looks like a struct other than the keyword union being used. Typedef can be used in unions as well to create an alias for the data type. You can use a designated initializer to initialize a union by field name (i.e. height x = {.euroStd = 1.1};). You can also set the value with the dot notation after the variable has been declared (i.e. height x; x.euroStd = 1.1;). You don't have to initialize a field by name; you can obtain the value of it by calling the variable name directly. Unions can be declared within structs to have a field that can accept different data types and potentially save memory space. You can access union fields with the dot "." or "->" operators. For both structs and unions, when an identifier is placed after the closing brace without using the typedef, a variable of that struct or union type is declared. Other things to think about On larger projects, you don't want to recompile all the code each time you make a change. First, make sure that you have object files of everything using the command gcc -c *.c. The -c specifies to the gcc compiler that it should create all the object files but not link them. After the object files are created, you need to run gcc -o file *.o, which will link all of the generated .o files. The compiler will skip most of the compilation process and will begin linking them together to form an executable. If changes are made to a single file, you'll only have to recompile that one file using the -c option outlined above (of course specifying your file name instead of using the * symbol). You will have to link all of the files again to create the executable, but it's a drastic reduction in compilation time. You can automate this process using the "make" build automation tool. When you need to allocate memory at runtime, you'll use the malloc function. Malloc takes a single parameter; that parameter tells the malloc function how many bytes to allocate on the heap. Since most of the time you're not going to know how many bytes you'll need, the malloc parameter almost always utilizes the sizeof operator. To be able to use the malloc function, you'll first need to include the stdlib.h header. Once the memory has been allocated on the heap, the malloc function returns a general-purpose pointer (void*) to the newly generated space.
Other things to think about

On larger projects, you don't want to recompile all the code each time you make a change. First, make sure that you have object files of everything using the command gcc -c *.c. The -c option tells the gcc compiler that it should create all the object files but not link them. After the object files are created, you need to run gcc -o file *.o, which will link all of the generated .o files. The compiler will skip most of the compilation process and will begin linking them together to form an executable. If changes are made to a single file, you'll only have to recompile that one file using the -c option outlined above (of course specifying your file name instead of using the * symbol). You will have to link all of the files again to create the executable, but it's a drastic reduction in compilation time. You can automate this process using the "make" build automation tool.

When you need to allocate memory at runtime, you'll use the malloc function. Malloc takes a single parameter; that parameter tells the malloc function how many bytes to allocate on the heap. Since most of the time you're not going to know how many bytes you'll need, the malloc parameter almost always utilizes the sizeof operator. To be able to use the malloc function, you'll first need to import the stdlib.h library. Once the memory has been allocated on the heap, the malloc function returns a general-purpose pointer (void*) to the newly generated space. Although it's not necessary, most programmers will cast the general-purpose pointer to a specific data type. Programmers should always use the free function to deallocate memory on the heap. If not, there's a possibility that a memory leak may occur. If a memory leak does occur, you can use a Linux utility like valgrind to locate it. Valgrind has its own version of the malloc and free functions; it will intercept your code and keep track of the calls that allocate and deallocate heap memory. Valgrind works best if your compiled executable contains debug information (to add debug information to your code, use the -g option with gcc). Fun fact: if you look up the definition of heap, it's "an untidy collection of things piled up haphazardly." The heap in memory is called the heap because it stores data in an unorganized way.
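To close the loop on the malloc/free discussion, here is a minimal sketch of the allocate-use-free pattern (the cast is optional in C but shown because the article mentions it):

#include <stdio.h>
#include <stdlib.h>   /* needed for malloc and free */

int main(void)
{
    /* sizeof tells malloc how many bytes to reserve on the heap. */
    int *scores = (int *)malloc(5 * sizeof(int));
    if (scores == NULL)
        return 1;     /* allocation failed */

    for (int i = 0; i < 5; i++)
        scores[i] = i * 10;
    printf("%d\n", scores[4]);   /* prints 40 */

    free(scores);     /* deallocate, or a memory leak occurs */
    scores = NULL;    /* guard against reusing a dangling pointer */
    return 0;
}

Compile it with the -g option so that, if you ever forget the free call, valgrind can point at the exact allocation that leaked.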
https://medium.com/swlh/a-high-level-introduction-to-the-c-programming-language-f5ada5a5bd5d
['Dino Cajic']
2020-02-26 22:34:19.158000+00:00
['C Programming', 'Programming Languages', 'Programming', 'Software Engineering', 'Software Development']
Simple Money Fixes You Can do in 30-Minutes or Less
Simple Money Fixes You Can do in 30-Minutes or Less

Whether you're looking to pay off your student loans, save for a down payment on a car, or catch up on retirement, the only way you'll reach these goals is by improving your finances with a few simple money fixes. Despite the misconception, though, you don't have to turn your life upside down. In fact, you can make simple money adjustments in 30 minutes or less. Don't believe me? Here are 15 quick fixes that will prove otherwise.

15 Simple Money Fixes You Can do in 30-Minutes or Less

1. Review (or Create) Your Monthly Budget

According to Debt.com's 2020 Budgeting Survey, 8 in 10 Americans use a budget. That's a solid 10% increase from the last two years. Despite this encouraging news, there are still a lot of people who aren't on board. What age group uses a budget the most? Well, here's what the Debt.com survey found: As you can see, there's still work to be done here. Not to be an alarmist here, but this is especially true with the 45 to 54-year-olds. I mean, how is it that 24% of you so close to retirement aren't budgeting?

Even if you have a monthly budget, you may want to take a couple of minutes and make sure that it still fits. For example, if you recently became a parent, you definitely need to update your monthly budget to account for the cost of the newest member of your family. While this may seem like a time-consuming and overwhelming task, we've got you covered. Here are some budgeting-related resources you can use to assist you in developing or renovating your budget:

2. Plan a Monthly Menu

Obviously, you have to eat. But food is arguably the biggest budget buster. In fact, Americans spent an average of 9.7 percent of their disposable personal incomes on food — divided between food at home (5 percent) and away from home (4.7 percent) — in 2018. If you believe that you're spending too much on food, a simple fix would be to develop a monthly menu. It's an effective way to buy only what you need. If that's too much, try making use of a meal planning service like PlateJoy or eMeals.

Also, make sure to plan for your lunches. I know the brown bag can get boring. But it's another way to limit how much you eat out. A compromise here would be to purchase a gift card, like a reloadable Visa card or one to your favorite restaurant. You would only use this when you want to eat out. But once it's out of funds for the month, that means you go back to making your own meals.

3. Download an App

Regardless of whether you're an Apple or Android user, there are thousands of apps available that can help you improve your finances. To save you the trouble of spending hours upon hours in the App Store or Google Play, here are some personal finance apps worth your consideration: In addition to the budgeting tools listed above, you could download investing apps like Acorns. And definitely don't overlook apps like Rakuten or Honey to earn cashback or land the best deals.

4. Never Pay Overdraft Fees Again

It's happened to most of us at some point. You're waiting for a check to clear while autopayments go to make a withdrawal. Next thing you know, you're slapped with an overdraft fee. Overdraft fees aren't just frustrating; they're one of the most expensive fees that banks charge. They can actually range from $20 to $39 per item. But there is a way around this. Contact your bank; it's probably easiest to go on their website to see if you can link your checking account to a savings account.
So, on the off-chance that you draw too much from your checking account, the bank will automatically transfer money from your savings. You could also switch to checking accounts that have either a low or no overdraft fee. Examples include Capital One 360, Chime, Discover, Simple, Charles Schwab, and TIAA Bank.

5. Pull Up Your Credit Score/Report

When was the last time that you looked at your credit score or report? Not sure? That's a huge mistake. "One of the major factors lenders consider when you apply to borrow money is your credit score," the Chime team wrote for a previous Due article. "So, keeping an eye on your credit score should be part of your financial plan." Thankfully, "there are several free credit monitoring services that can help you keep track of your score, including Credit Karma and Credit Sesame. These services can also show you which factors go into your score and how you're doing with each." So, if you notice that "your score is low, you can then understand what you need to do to improve it." "What's more, if you notice a big drop but haven't done anything wrong lately, it might be because of an error or fraudulent activity," the authors add.

Additionally, make it a point to check your credit report regularly. "You can get a free copy of each of your three credit reports each year at AnnualCreditReport.com." If something looks out of order, report it to the credit bureaus.

6. Audit Your Credit Cards

Credit cards can be helpful and convenient. But if you have too many cards or a high balance, that can be detrimental to your credit score. Ideally, you should use no more than 30% of your limit. So, if you have a card with a $6,500 limit and are carrying a $1,500 balance, then you would be at 23 percent. That means you're in the clear. This is known as credit utilization. But is there any truth to it? NerdWallet states, "there is no certain credit utilization ratio that will make or break your credit score. Below 30% is a good guideline for most consumers, and lower is better for your score." Despite this being more of a guideline, there are ways to reduce your credit card debt:

Consolidate what you can.
Negotiate your rates.
Focus on your high-interest debt first.
Commit to avoiding new credit lines.
Acquire new sources of income to knock off your debt faster.

There is a fun part to this, though. When evaluating your credit cards, check out the rewards you've earned. You may be able to use them towards something that you would purchase anyway, like a vacation with you and your significant other.

7. Lower Your Bills

Before rolling your eyes, you should realize that this is much easier than you may think. In fact, if you block out a couple of minutes to do this, you may end up saving thousands of dollars. You can then use that to pay off any debt or stash away into savings. Ben Kurland, the co-founder of BillFixers, told CNBC that there's a fail-safe method to lower your monthly bills by a solid 20 percent:

Do your research. "If you know what the prices are of the competitors in the area, you can come better armed when you negotiate," says Ben. "So if the rep tells you that you already have the absolute lowest rate, you can say you found a competitor online for $30 less."

Call between 9 am and 5 pm. "People are used to calling their cable or internet provider after hours or on weekends when they have free time, but that's when everyone else is calling," says Ben. Believe it or not, reps may not be overwhelmed during regular business hours.

Say that you're canceling the service.
"Retention or loyalty departments tend to have access to some of the best discounts," he adds. "If you speak to somebody in customer service or billing or technical support, they have minimal access to the discounts and promotions that are available." However, "if you go through the process like you're canceling, those people have all sorts of special deals, and they will try to entice you to stay."

Always be friendly. "It really is true that you catch more flies with honey than vinegar," says Ben. "The reps for these companies basically have total control over which discounts and promotions they are going to offer you, so if you are one of the people who call up and scream at them and throw a fit, they are going to say, 'I'm sorry, but there's nothing we can do.'"

Be skeptical. "Reps, in general, tend to make a lot of mistakes when you are negotiating your own bill, and they will also flat-out lie to you," says Ben. "So when you get told there's nothing better they can do, it's worth your while to call back and try again." "Even if you get told there are savings, it's very common for someone to get their next bill and find absolutely nothing has changed," he says.

Also, remember that app Trim you downloaded? It will automatically search for and negotiate the best deals for you.

8. Cancel Subscriptions and Memberships

When you created or evaluated your budget, hopefully you spotted some unnecessary expenses. The usual suspects are recurring subscriptions and memberships. Examples include everything from magazines to streaming services to gym memberships. What you aren't using, cancel. If money is really tight, you might want to cancel everything until you're in better shape. For the time being, you could turn to free options like watching YouTube videos or asking to mooch off a friend — make sure to thank them eventually.

9. Set-and-Forget Your Savings

Automating your savings ensures that you won't spend everything in your pocket. That money can then be used more wisely, like bulking up your emergency fund. It also makes budgeting less painful and will save you a ton of time. To get started, here are five steps to automate your savings if you haven't done so yet:

Open multiple accounts so that you have money in the following buckets: spending and bills (checking), emergency fund (high-yield savings account), long-term goals (investment account), short-term goals (high-yield savings account or money market fund), and a fixed annuity.

Determine your contributions, such as automatically transferring 15% of every paycheck into a 401(k).

Set up your transfers, like having your employer's direct deposit send a portion of your paycheck to a savings account.

Round up the change. Chime, for example, will round up the purchases that you make using your debit card. So, if you spent $4.75, it would round up to $5 and place the 25 cents into a savings account.

Slowly increase your contributions. For instance, if you received a raise, that extra money could be used to increase your contributions without changing anything else.

10. Cash-in Loose Change

Whether you have a coin jar or loose change floating around your car or home, why not cash it in for something more useful? I did this last summer and was shocked to find that I had over $200! I immediately put that into a savings account that's been earning interest ever since. You may be able to do this at your local bank for free if you're a customer. There are also Coinstar kiosks at most grocery stores.
They charge a fee, but if you opt to redeem a gift card somewhere like Amazon, it's free. However, COVID-19 has created a coin shortage. As a result, some banks are paying a bonus for people to bring in their spare change.

11. Make Lists…Lots of Lists

When you have some downtime, grab a pen and your notebook. Or, you could open up a notepad on your phone. Either one will suffice. Whatever your preferred method, the idea here is to brainstorm ideas for making extra money. You could jot down possible side hustles, ways to earn a passive income, or ways to get free money.

Another suggestion would be to have a list of free things to do. Even if you don't do all of these activities, having the list helps; I turn to it when I'm bored. Usually, that's when I'm tempted to spend money online carelessly. Your list could include visiting the library, museum, or park, or volunteering. You could also read, draw, or reorganize your closet. The latter is a double whammy since you could sell any unwanted items.

12. Reevaluate Your Savings and Retirement Accounts

Obviously, having a savings account is better than nothing. But are you getting the most bang for your buck? To make sure, head over to DepositAccounts. When there, enter your state, the amount in savings, and the expected investment timeframe. After entering this information, the tool will generate a list of high-interest account options in various categories like "Keep It Simple" (a single savings account) and "Mix and Match" (dividing between a savings account and a certificate of deposit). While you're at it, make sure that you're on track to reach your retirement target. If not, you may need to increase your contributions or look into investment opportunities like real estate.

13. Cut Investment Fees

Tired of those sneaky and expensive investment fees? You can link your investment accounts to FeeX or Personal Capital to receive a clear breakdown of all the charges you've been paying, like underlying mutual fund and ETF expenses and trading commissions. These free tools will then recommend similar, lower-fee investments.

14. Improve Your Financial Literacy

Every single day you should do something to improve your financial literacy. For example, you could watch the 8-minute-long YouTube video below on reducing your expenses, read a book for 15 minutes before bed, or listen to a podcast while exercising.

15. Schedule an Appointment With Your Financial Advisor

Finally, make an appointment with your financial advisor. Don't have one? Then visit Let's Make A Plan or NAPFA to find a nearby planner. Just like visiting your physician for a checkup, they will make sure you're in good financial health. If they have concerns, they will make suggestions on how to get back into tiptop shape.
https://medium.com/due/simple-money-fixes-you-can-do-in-30-minutes-or-less-a301bd8ee229
[]
2020-11-24 16:38:23.341000+00:00
['Money', 'Finance']
We Broke up Because I’m Too Fat
"I want to take the time to evaluate our relationship. I don't think it's working out."

He started the conversation on a rainy Friday night. I knew it was coming. I had hoped it would come. I had felt alone in the relationship for a while and was hungry for an emotional connection that I couldn't feel from him. I actually had the same intention that night, to clear the issue after weeks of uneasiness and doubt over my need for this relationship. My initial reaction was relief. It's easier this way, isn't it? Two adults having a mature and objective conversation about a relationship that we both think is not working out well. No tantrums, no broken hearts, no one making a scene in a public place.

"Thank you for saying that. I agree, I also don't think it's working out." I said it calmly and with an assuring smile. I was giving him the easiest breakup ever, I thought to myself. "What made you think so, though?" I asked.

"I realized I am more into the skinny type." He said it without looking me in the eyes.

…

We had met earlier that year but lost contact after one date. He rekindled the relationship right after I cut ties with a verbally abusive boyfriend, for whom my body was a constant opportunity for criticism. We had a few dates until he kissed me at the cinema after waiting for three hours because I couldn't get out of a work situation. After that night, we started meeting more and dated for a couple of months. During the honeymoon period of the relationship, we developed a strong physical attraction towards one another. He would kiss and touch me in the right places and say kind words about how I looked. He brought out a sense of safety and security that I did not feel in my previous relationship. I was happy to meet someone who celebrated my body as much as I did, a body that I learned to love after years of self-hatred and harsh self-criticism.

Body image was always a struggle for me, and having a lean, skinny mom and friends did little to boost my self-esteem. One time around seventh grade, I stood in front of the mirror and went through all the wrong things with the girl in the reflection. Her arms, her stomach, her thighs, her hair, her acne-prone and textured skin. I pinched, slapped, and hit all the body parts I wished looked different than the way they did in the mirror. It was the first and, sadly, not the last time I poured hatred all over her body. 'Beautiful' was not a word I dared to associate her with, because I felt that she was very far from it. My mind and her body were constantly waging war with one another, and my mind seemed to be the bigger bully of the two. According to the central command in my head, she was always either not good enough or way too much, always missing the mark by a mile.

It took a feminine meditation circle, journaling, contemplation, and lots of daily affirmations for me to reach a state of acceptance. Loving myself, which includes my body, was the biggest win of my adult life. It catalyzed a lot of positive mental and physical changes. It allowed me to have a healthy relationship with food. It helped me to lose weight in a very kind and healthy way. It even allowed me to feel beautiful. It enabled me to understand that I am a multitude of values, and I am beyond how I look on the outside. Finally, loving myself was the path that led me to be at peace with myself. But that night, everything I had worked on so hard for years was invalidated by someone close to me. My heart sank listening to all of the reasons why I am still not good enough for him.
My food portions that are sometimes comparable to his, the one picture of my lunch at a healthy food joint that I had sent him once but that somehow turned him off by the size of it, the times when he felt me getting bigger and heavier when we were intimate, the one picture of me last year when I weighed 20 pounds heavier. All these resulted in his lack of attraction to me, which put the final nail in the coffin. Slowly, I could feel the tingling sensation on my back, a feeling I always have whenever I feel trapped or want to cry. It took me several hours to process how hurtful it was that my physical weight surpassed the weight of the intangible qualities I have to offer. I started to get angry that he imposed the notion that I had changed when I was the exact same shape and weight throughout the relationship. I kept obsessing over the missing gap between the passionate longing and the loss of interest during the time we were together. But above all, I was so upset that the thing I'm most insecure about was the reason for all this.

What a crippling feeling to think that you've won a battle only to find yourself back at square one. To have our minds make up stories that are untrue but feel very much like they are. To see myself violating everything I preach to other people about loving ourselves. To wage war on ourselves despite that being the last thing we need. Once again, I became the girl who always felt not good enough, and that night, it was true. It was really tempting to spiral back to where I was. But if there's one thing that I learned from self-love, it is to be the kindest person to myself, developing the mind-body connection and treating them like a couple of supportive sisters who care deeply about each other's happiness and well-being. After the breakup, I stood in front of the mirror just like I did in seventh grade.

Photo by Chiara F on Unsplash

Instead of berating and nitpicking, I stripped myself down to my underwear and hugged myself. I stood there, looked at the girl in the mirror, cried as I would to a best friend, and consoled her as a sister would. I reminded her that everything is okay. I told her that she was herself through and through and that she did all the right things in her power. I told her she is smart, powerful, compassionate, loving, and every kind word in the vocabulary that I could think of. I reminded her that she is kind, she is warm, she is soft, a mantra that we recite to ground us whenever we feel that we're forgetting who we are.

Looking back, I find the situation funny, but I was grateful that I was kind enough to myself to do that. If I were able to split myself, I'm sure she would be sitting there with me with a tub of the finest vanilla & macadamia ice cream she could find, staying with me all through the night. I had a friend in me who loved and believed in my goodness and was very adamant that I should believe it too. That night, I realized the whole point of self-love is not to reach a level of transcendence where nothing can shake me, but to be able to practice loving myself when something does.

…

After two weeks, I completed the grief cycle and accepted that it is simply what he prefers. I appreciate that this is a preference he found through contemplation and not a toxic standard that he adopted at face value from society. And I really appreciate that he had the steel balls to tell me the truth, because it must have been hard to admit things like this without feeling like a major asshole.
In fact, I had a really hard time balancing the facts and impressions when retelling this story. I don't want other people to think that he is not the smart and good person I know. He is actually a progressive and open-minded ally on a lot of social issues. He picks his good fight in mental health and is very passionate about pursuing it, a trait that I respect in him. We had an amicable breakup that night. It rained hard, so we talked about the good times we had, over a bottle of beer for him and a cup of tea for me. He told me that I'm a nice and loving person. I gave him a Jason Mraz song with lyrics that communicate a lot of the well-wishes I want for him. We laughed at how this sounds like a good arrangement in theory but not in practice. We wished one another good luck in work and personal life. We said thank you to each other. We hugged, and I got into the car, making sure I didn't look back.

It's been two years since this happened. We have not talked since we broke up, but I don't have any hatred or negative emotion towards him. In fact, I'm still grateful it went the way it went, because it allowed me to understand myself more. He gave me the opportunity to practice loving myself and to realize that it's a conscious choice I have to make, especially when my beliefs were distorted. The breakup was an exercise that I didn't know I needed, and it allowed me to be even kinder and more loving to myself.
https://medium.com/curious/we-broke-up-because-im-too-fat-75099f3bb719
['Septhiria Chandra']
2020-11-27 16:16:28.551000+00:00
['Body Image', 'Self Love', 'Life Lessons', 'Love', 'Dating']
Harry Styles Refreshes Listeners With a Unique Sound In Recent Album “Fine Line”
By: Eliza Wicks

Harry Styles has come a long way since the fall of One Direction. After leaving One Direction, the young songwriter released his first-ever solo album, Harry Styles, in 2017, including hits such as "Sweet Creature" and "Sign of the Times," launching his solo career. Since then, the pop sensation has toured all over the world, and in November Styles released two new songs from his long-awaited album Fine Line, following up by dropping the rest of the album in early December.

Styles smiling at a concert

The album rocketed up the charts to #1 in its first week out with the 3rd-biggest week for an album in 2019, preceded by Taylor Swift (Lover) and Post Malone (Hollywood's Bleeding). The British singer has also been nominated for the Mastercard British Album of the Year.

Arguably, the pressure of maintaining a successful solo career after the dismantling of a smash-hit group can be intimidating. Often, many decide to quit music or sacrifice creativity and uniqueness for some catchy pop songs to stay relevant. However, that is not the case here with Fine Line. Straying away from the mainly sad rock-ballad style of his first album, Styles ventures into many different sounds, from the folk-rock song "Canyon Moon" to the synthesized rhythms of "Sunflower" and his carefree, uplifting pop-rock songs "Adore You" and "Watermelon Sugar."

In an interview with Apple Music's Zane Lowe, Styles speaks of one track, "Lights Up." The song, for him, is a freedom cry, as he sings, "Shine, step into the light/ I'm not ever going back." Over the past two years, while writing the album, the young singer revealed that he struggled with finding himself musically and emotionally.

Styles has his own distinct style both in music and fashion

However, in a burst of inspiration, he wrote this song. The singer expressed, "I think 'Lights Up' came at the end of a long period of self-reflection, self-acceptance…I just feel more comfortable being myself."

Although the album does contain happier notes than Styles' previous work, it is not always so sunny, with many tracks, specifically the song "Cherry," revealing the British artist's failed relationship with French model Camille Rowe. On the track, Styles sings, "Don't you call him baby…/Don't you call him what you used to call me," which many speculate to be about Theo Niarchos, Rowe's latest love interest soon after the French model broke it off with Styles. The song reminisces about lost love and the feeling of knowing that you've been moved on from and forgotten. Other tracks, namely "To Be So Lonely," resemble the balladry of Styles' first album and speak of the delicate balance between staying friends with someone you still love and hurting yourself in the process. This is shared with the last song and album namesake, "Fine Line," which deals with a similar concept.

Later in the interview, Styles explains how the switches between tones in the album are a representation of his experiences during its making, saying, "The times when I felt good and happy were the happiest I've ever felt in my life. And the times when I felt sad were the lowest that I've ever felt in my life." And that is exactly what Styles expresses in this album.

Fine Line Album Cover

With his new album, the young rockstar blends together many different styles (no pun intended) in a way that shows his branching out from his mainly rock background.
Fine Line offers a mix of sweet and dreamy tunes with tinges of bitter sadness, all wrapped up in one album: a rollercoaster of emotions that resembles human life with its ups and downs. Fine Line touches on struggles that Styles has faced while also letting listeners relate and find their own meaning within the lyrics. Yet it also celebrates the beauty and pleasures of life in an exploration of melodies and guitar. It is the sound of a young man jumping headfirst into new waters with a unique combination of genres, setting Styles apart from others on the radio.
https://medium.com/skhs-rebellion/harry-styles-refreshes-listeners-with-a-unique-sound-in-recent-album-fine-line-1bbb46390ed
['Eliza Wicks']
2020-03-09 12:14:56.055000+00:00
['Music', 'Album Review', 'Pop Music', 'Pop Culture']
The Kilominator — Poem. Kilominating through Essex County
Primordial morning. Photo credit: Matthew St. Amand

Kilominating through Essex County

Gathering kilometers like a thief in the night.
Let the others rise before the sun,
Pierce the dark and wake the roosters,
Trace esoteric shapes across rural roads
That conceal the bones of settlers and centuries.
Spinning spoked spectacles
Looking back, looking back
Thumbing through the tectonic flipbook.
The kilometers as runes, landscape as language.
Tires reading the Braille of the broken pavement.
Landmarks and totems:
Colchester North Public School.
The unnamed church where the patron saint looks like Paul McCartney in agony.
Wind turbines like marooned castaways
Waving their arms in the distance.
There's enough land out here to start a cult.
Broken yellow lines mark the road like Morse code.
Dark, sleeping fields, mantled with maiden veil mist.
Kilominating in the dark and in the light.
The rising sun like a startled homeowner
Groggy-eyed and squinting, befuddled,
Following my progress along the concession roads.
Like Irish turf torn from the moist earth,
Kilometers shrink to a morsel of their original size.
Returning home, stashing them in my garage
Where I construct a scale model of the county,
Piecing them together like Popsicle sticks.
Shoe box theater.
Divining the secrets below the surface
the subterranean rumblings.
(Some say they're blasting in the salt mine.
Maybe it's the dead clamoring within their coffins.)
Surveying my work,
Assembling the puzzle of me, of everything,
Until, one day, I observe
A tiny figure on a bicycle, in a high visibility vest and helmet
Kilominating my creation.
https://medium.com/the-kilominator-chronicles/the-kilominator-poem-969baca6f458
['Matthew St. Amand']
2020-12-12 11:02:55.155000+00:00
['Cycling', 'Poetry On Medium', 'Mental Health', 'Fitness', 'Outdoors']
Let’s Build An ELT Pipeline Pt. 3: Automating Data Downloads
If you haven't already, please read my previous article showing how to set up our project for this section.

Tasks

Let's break down our pipeline. We start by downloading our data using the NYC collisions data API and schedule this process to run daily. The next step is loading the data into an S3 bucket, which we use as a data lake. Once our data lake is updated, the next destination for the raw collisions data is the data warehouse. I chose Redshift for this project, but you are welcome to explore alternatives such as Snowflake, BigQuery, etc. DBT is up next; it is used to create models from the raw data, which are fed back into the data warehouse as views. Finally, data analysts and other data users can query the transformed collisions data to build analyses, dashboards, and reports.

We consider every step in the pipeline a task that we must put in the proper execution order. This collection of dependent tasks is what we refer to as a DAG, and we must ensure that no upstream components depend on any downstream ones, to avoid circular dependencies. This article will only focus on the API-related "check API" and "download data" tasks.

Defining Our DAG

Before coding our tasks, we must instantiate our DAG object globally. This object can take in a slew of optional keyword arguments, and readers should familiarize themselves with the documentation to determine which are needed. For this project, we will stick with the mandatory dag_id and the optional start_date and schedule_interval parameters. Let's start by creating the nyc_collisions_pipeline.py file in the dags/ directory.

$ cd dags
$ touch nyc_collisions_pipeline.py

/my-airflow-project/dags/nyc_collisions_pipeline.py

# Import Airflow modules
from airflow import DAG

# Import Python libraries
from datetime import datetime

# Initiate our dag definition
dag = DAG(
    dag_id="nyc_collisions_pipeline",
    start_date=datetime(2020, 12, 19),
    schedule_interval="@daily"
)

We use the CRON preset "@daily" to tell our scheduler that our tasks should run once a day at midnight. The schedule_interval parameter is quite flexible and can take in any CRON expression right for the job. One thing to keep in mind about the start_date is that Airflow waits until the end of the first period to trigger a task. For our project, the Airflow scheduler triggers our first DAG run on December 20, 2020 (start_date + schedule_interval). The scheduler triggers the DAG every day at midnight after the initial run.

Check API

The data for this project comes from NYC Open Data, and, as with all third-party data sources, unexpected things can happen. The data delivered by the API may change, be delayed, or not exist every time Airflow triggers our DAG. For this reason, we will run Airflow's HTTP sensor to check for the existence of data. The sensor polls an HTTP endpoint until a condition is met. In our case, we check if the API response contains data in the crash_date field for a given date. The response check is accomplished by using a lambda function that returns a boolean indicating whether the crash_date field is found inside the response content.
# Import the HTTP sensor object
from airflow.sensors.http_sensor import HttpSensor

# Check if the collisions API has data for a given date
is_collisions_api_available = HttpSensor(
    task_id="is_collisions_api_available",
    method="GET",
    http_conn_id="nyc_collisions_api",
    endpoint="resource/h9gi-nx95.json?crash_date={{ ds }}",
    response_check=lambda response: "crash_date" in response.text,
    poke_interval=5,
    timeout=20,
    dag=dag
)

We poll the API for the condition defined in the response_check every five seconds and time out after 20 seconds in the event that the condition isn't met. Airflow uses Jinja templating, providing us a way to access the collision data API across different days dynamically. We tie the date of the data from the API to our execution date by passing the "{{ ds }}" template variable into our crash_date query parameter. The ds template variable renders as the execution date's date stamp (YYYY-MM-DD).

For the HTTP sensor to work, we have to create the Airflow connection to the API in the Airflow UI. To do this, we must first run our Docker containers to access the UI running on our localhost. Make sure you are in the project root /my-airflow-project/ before running docker-compose.

$ cd /my-airflow-project
$ docker-compose up -d

Once everything is up and running, navigate to http://localhost:8080 in your web browser. In the navigation bar, click on the Admin dropdown menu to access Connections. There will be a handful of out-of-the-box connections already created, but ignore those for now and click on Create. Keep in mind, the Conn Id field must match the http_conn_id parameter in our HttpSensor object. Click save, and congratulations! The code for our first task is done!

Download Data

We will show off our Linux skills for the second task and use Airflow's BashOperator to download the collisions data. More specifically, we can use the curl command to pull in data from the NYC Open Data API into a file in the data directory created in the previous article.

# Import Airflow's BashOperator object
from airflow.operators.bash_operator import BashOperator

# Download collisions data for a given date
fetch_collisions_data = BashOperator(
    task_id="fetch_collisions_data",
    bash_command="curl -o /usr/local/airflow/data/{{ ds }}.json \
        --request GET \
        --url https://data.cityofnewyork.us/resource/h9gi-nx95.json?crash_date={{ ds }}",
    dag=dag
)

Notice the BashOperator allows us to use template variables to dynamically access data from different dates.

Dependencies

Great! We have our two tasks. However, they do not form a DAG just yet. The final step is to establish a relationship between them. We set is_collisions_api_available as the upstream task and fetch_collisions_data as the downstream task.

# Define the dependencies
is_collisions_api_available >> fetch_collisions_data

Go ahead and navigate back to the Airflow webserver UI and switch on the nyc_collisions_pipeline DAG. Click on the DAG and make sure you are in the Tree View while the DAG is running to view your triggered tasks (you will have to keep refreshing your browser). The collisions data will appear inside your /data folder as long as your tasks are successful. In my case, the nyc_collisions_pipeline DAG ran from December 19th to the 25th and was successful from the 19th to the 21st. What about the remaining days?
Taking a closer look at the Tree View shows us that the is_collisions_api_available task failed to meet the response check condition, and so the fetch_collisions_data task never ran (the power of dependency!). In fact, for some reason the data was never updated for the days where our DAG failed, at the time of this writing (we can, at any point in the future, re-run any failed tasks). This level of oversight over our pipelines is why Airflow is one of my favorite tools!

Wrap Up

This was fun! Even if we stopped the project now, you would still leave with an invaluable skill. I recommend reading the Airflow documentation and playing around with the different types of parameters available for the DAG object. In the next article, the focus will be on loading the collisions data into a data lake and warehouse and provisioning the AWS resources needed for the two.

GitHub

The full code for this article is available here. 🚨 Please make sure you are on the airflow-http-data branch in the repository.
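For reference, here is the full nyc_collisions_pipeline.py assembled from the snippets above. Nothing here is new; it is simply the pieces from this article in one place:

from airflow import DAG
from airflow.sensors.http_sensor import HttpSensor
from airflow.operators.bash_operator import BashOperator
from datetime import datetime

dag = DAG(
    dag_id="nyc_collisions_pipeline",
    start_date=datetime(2020, 12, 19),
    schedule_interval="@daily"
)

# Poll the API every 5 seconds (up to 20) until the response contains crash_date data
is_collisions_api_available = HttpSensor(
    task_id="is_collisions_api_available",
    method="GET",
    http_conn_id="nyc_collisions_api",
    endpoint="resource/h9gi-nx95.json?crash_date={{ ds }}",
    response_check=lambda response: "crash_date" in response.text,
    poke_interval=5,
    timeout=20,
    dag=dag
)

# Download the execution date's collisions data into the data/ directory
fetch_collisions_data = BashOperator(
    task_id="fetch_collisions_data",
    bash_command="curl -o /usr/local/airflow/data/{{ ds }}.json \
        --request GET \
        --url https://data.cityofnewyork.us/resource/h9gi-nx95.json?crash_date={{ ds }}",
    dag=dag
)

# The sensor must succeed before the download runs
is_collisions_api_available >> fetch_collisions_data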
https://python.plainenglish.io/lets-build-an-elt-pipeline-pt-3-automating-data-downloads-1d6c92e852ce
['Jonathan Duran']
2020-12-26 09:21:50.541000+00:00
['Data Engineering', 'Airflow', 'Python', 'Programming', 'API']
The Virus of The Mind
Photo by Francisco Moreno on Unsplash

The virus of the mind is far worse than any other physical virus. — Sz

It's the virus of constantly worrying about what has happened in the past that's still (to this day) depriving us of joy and happiness in our lives. It's the virus of constantly worrying about what's happening now (in us and around us) that's causing self-doubt about where we're going in life. It's the virus of constantly dwelling on what's going to happen (tomorrow) that's polluting our minds and keeping us from thinking clearly about how we get there.

It's this virus that will never allow our peace of mind to naturally shine through us. It's this virus that is keeping us from moving to a better version of ourselves. It's this virus that is creating fear, anger, frustration, and disappointment in us. It's this virus that is creating the false belief that our life is working against us. It's this virus that is depriving us of joy in our lives and the lives of people around us. It's this virus that is keeping us in the past, rather than letting us navigate the road ahead. It's this virus that is causing us to regret and reject the life within us. It's this virus that is causing us to believe our lives ain't worth living. It's this virus that is divorcing us from our natural state of inner being. It's this virus that is blocking our roads to our higher selves. It's this virus that is causing us to be hard on ourselves. It's this virus that is stealing our natural sense of calm. It's this virus that is causing us to lose our self-worth.

If we pay close attention to our true inner being ~ the virus is just on the surface, causing the unnecessary noise, fear, uncertainty, frustration, disappointment, regret, and anger, blocking us from being our natural selves.
https://medium.com/@iamnaziir/the-virus-of-the-mind-f5982eeee0d9
['Nazir Noori']
2021-04-14 21:01:45.358000+00:00
['Mindset', 'Worry', 'Mindfulness', 'Virus', 'Self']
Smack presents vol.6
Smack presents vol.6

An observation.

Smack Vol.6 was the latest battle rap event; it went down a few weekends ago at the time of me writing this review. These reviews would come faster, but to give a full review and personal opinion, battles need to be watched more than once. What I think at first counts as well, but for a more educated opinion, and to make sure I catch every word scheme and angle, these have to be viewed more than once. I have the app for such situations as well. Before I get into the review, I want you to understand this is my opinion and mine alone, and it comes from how I watch battles and what's noticeable to me. Battle rap is an opinionated sport, and that's what these reviews are. I'm not here to urge you to think I'm right or wrong when it comes to these things. You might read this and think I'm talking out of my ass, but that's not a big concern; it's known we'll have different views on what we saw that night. I'll get into each battle in the order they happened.

The first battle of the night was Danny Myers vs Ill Will. Looking at this battle on paper, I had Will taking this, 2–1 surely, because based on his last performances compared to Danny's, he goes in with a lot of momentum. But I wouldn't be surprised if Danny pulled a win out of this battle, because both battlers are known to have 3 good rounds regardless of who's standing in front of them. It needs to be said that watching on Caffeine can be good or bad. I'm not a big fan of the fan vote, because I feel people would go off what that says rather than how they feel about the material being shown to them. People won't necessarily want to admit how they feel about a battle because they don't want to be in the minority, but I don't feel that's a bad thing at all. Another thing is how my connection fades in and out on the app, but that's probably a problem for me and not for everyone else. When you're following what a battler is saying and it randomly freezes, it's hard to react because it drains attention. This is why I like the app; it helps during these reviews.

Danny's 1st

Looking into the first round, with how they both came out, I feel there's this strong stigma around Danny because he battles too often, but at the same time, there's another one based on how he raps. I think today people like to hear personals more than bars, but I think Danny keeps that standard in place. Even though his round felt as if it could've been for anyone, when I think about Will vs Danny on paper, it's a bar fest, and both of them were going to bring a round like that. Watching, it felt like when Danny paused because he got off a bar he knew would hit, he would get a reaction, so I wouldn't count it as a dry spot. I think the first round went exactly how he wanted it to go, and I enjoyed that. I think the tone was set not just for the battle, but for the rest of the night.

Will's 1st

The way the round started was so-so to me, maybe because the parallel universe angle has been done against Danny before, and better. Dizaster (and I must say I am NOT a fan of Dizaster) used it better and set a type of standard. Even so, maybe I am judging that a bit harshly, because that wasn't the overall focus of Will's round; he was going to get into other things after.
He got into how someone is making Danny's house a drug lab when he's gone, and for what it was, I felt it was interesting; maybe putting it together more would've given it a better feeling to me. But what I thought prior happened anyway: a bar fest. I think both battlers didn't come out the gate with that type of energy, but I feel in the second half of both rounds, they had more bars and energy. Based on what I said here, I have Danny taking the first round, edging it barely. I felt they were both really good, and this set a good tone for the rest of the battle.

Danny's 2nd

More so, I felt this would become more bars than anything else, because at this point there was nothing in either performance I hadn't already spotted. I thought the part where Danny said he took Will's shoes was hilarious, and I'm a big fan of how that started, the round I mean. Honestly, I thought Danny would take a different approach from his usual performance, knowing that Will could use satire to help his performance, but I'll have further examples on Will later. For the most part, I felt Danny followed his first round well, not slowing down if it wasn't part of his performance. This was part of him going up, and how this round was better than his first.

Will's 2nd

I like when any round starts with a rebuttal, a good one at that: "shooters in Michigan? Chris Webber, time ou-". That's not what he said word for word; these aren't bar-for-bar examinations, because I do want you to watch the battle in your alone time if you're interested. Overall, I needed to rewind a couple of times to catch everything Will was saying this time around, and I believe this round was better than Danny's second. The reason being, in this bar fest, it's my opinion that Will brought better ones, is all. Not saying Danny had a bad round. Also, PS, the R. Kelly bar was hilarious as well.

Danny's 3rd

I will say, I didn't like this start, because 40 Cal exposed, or made a rumor, about Ill Will not claiming his child, if he has a child; I don't know if there's any proof of it. The lines including him weren't all that strong, and that's one thing that took away from him slightly. Also, I suppose there was talk of Danny choking this round, but it was clear that his mic had fallen off. I'm undecided if I really like this round at the time of my writing this, because it's worded like a closing round, but I felt this could've been his first, second, or third round. I think Danny has such a fast performance that people don't catch everything how they should, because I felt a lot of things he said should've gotten a reaction. Past the bars about his baby mother, if Will has a baby mom, I felt it was good, but overall I felt this round could be beaten by Will.

Will's 3rd

I'll say it now: I think Will got this round. The intersecting universe scheme was something I don't think was ever used against Danny. It's a mix of bars and personals with an original idea, and I felt nobody but Will could do something like that. Maybe I'm wrong, but nobody did it before Will, so I wouldn't say I'm that far off. It was a very strong round, and I think it beat Danny's third by a lot. I think this is the round where it's undeniable that Will outdid Danny. My final thought on this battle is Ill Will 2–1.

The battle after was Jerry Wess vs Arsonal.
Going into this, I was unsure how to call it. I feel Ars doesn't take every name that seriously, depending on who the name is. Not to say Ars just goes to lose a battle if he feels it's a small name; I can think of his battle with Drugz and how, in my opinion, he beat Drugz maybe 3–0? Not that Jerry was someone to sleep on. In his last few battles I didn't see any props, but he didn't need one against Danny, and he won that battle and sent a message at the same time. When the battle was announced I wasn't sure who I had winning, because I wasn't that interested in seeing it, speaking honestly. It could just be a shot for Jerry, because Smack keeps giving him hard matchups: Rum, Danny, now Ars. That's a hard 3 to beat.

Jerry's 1st

I have to say, I did like this round. I felt with a slower pace it's easy to catch everything being said. I think one characteristic of Jerry Wess is how he relates his bars to him being a scammer, but at the same time, it's not like they're terrible or something, because everyone in battle rap has their thing. Geechi has his storytelling, Yoshi G spits her truth, Goodz has money talk, so why can't Jerry have scammer talk? As long as the bars aren't trash, I don't see why not, you know? Even though I felt this round was really strong, Ars could beat it.

Ars' 1st

It's like Ars to be a showman. When he took off his hoodie to reveal he had a Jerry West jersey on, I felt it would get dark, and Ars started going. A style clash at its finest, and he went pretty hard, and none of what he said was filler. For the speed he was going, I would expect filler material sometimes, but I was proven wrong. Even though I feel every vet takes an angle just to say new rappers are new, I wonder if that's for the rapper or the person viewing. I would assume it does not affect Jerry coming to rap, because I notice time and time again when vets face off against each other, the content seems original, but I can tell for most vets this is their go-to thing. He wouldn't lose points for that at all, because he touched on it a little bit, and you can tell from how he put it together that he was the OG and everyone sees him as such. I felt like Ars took the first, but it was an edge: the flow, him giving Jerry bars, and some real shit. I feel it's an uphill battle for Wess because there's a lot of ways for him to be attacked. "Stop making smack all this money, and monetize your pen." "I was selling Udub tickets and you were buying them." With this being said, I'm scoring it 1–0 Ars.

Jerry's 2nd

More of what he did in the first. After Ars' first, I was wondering if he would attempt to switch his flows or something, but I don't think that was the case. He approached it the same way, and I felt most of the bars hit. A clean performance with no stumbles or slip-ups. I feel because his pace was slower, and you can tell Ars was more energetic, it would be easier for the crowd to be more engaged with Ars, as if he's demanding the attention, and it felt like they wouldn't give the same attention to Jerry. I'm watching on the app, and the comments were saying "I'm skipping every Wess round," which is unfair, but the culture isn't fair. I understand that the consensus for Wess' last battle was him getting a 3–0 victory, but now I see people aren't listening, even though he doesn't have any bad rounds. Other people might say it's levels, but in this battle, I don't think it's lyrics that made the difference; it was performance. In my opinion, Ars isn't killing him; it's not that one-sided to me.
Ars' 2nd

I didn't enjoy this round too much. His performance was great, and how he got it off was good, but the lyrics behind it didn't exactly make me react or have me excited. I felt the line about Twork shooting his grandmother's Life Alert was great; I think before and after that line, not so much, even though he was getting a good bit of reaction. Granted, it's different for me sitting at home watching than for the people watching Ars close up and seeing him going through his material. But I can only speak from my point of view, and I felt this was probably his weakest round. It was performed well, but I feel that's all it was: a good performance. So, right now, I have it 1–1.

Jerry's 3rd

This third was good. Ars predicted he would have that prop configuration somewhere; I guess it doesn't matter what round Jerry did it in, but I'm sure he had to see this coming. It was missing from the last battle, but it's part of his thing, and against a showman like Ars, why not do it? I've seen Ars use minimal things like props, like his belt or something, but not like the radio Jerry pulled out. It makes me think about who he got to do those voices for Ars' "family." Also, Ars name flips: the only other rapper I've heard have a unique Ars name flip was Twork (can't spell "tears" without putting that Ars in it), but Jerry himself had some very unique flips. Another thing I like about all three rounds from Jerry is that it seems his lyrics elevated each round, and I felt Ars took a major drop in his second. I don't think any round is unbeatable, but some rounds are very hard to beat, and I think this is one of those situations where Jerry's round is going to be difficult to outdo. Oh, and PS: "now you won't see the stars without putting that Ars in it." Oh my God, that shit was hard. I didn't say the whole bar, but if you watch that battle you'll see exactly why I'm saying oh my God.

Ars' 3rd

I think this was a good way to end. I think how he started was unique to Jerry, because earlier I did go over the scammer bars and why it works for Jerry, but nobody has ever used it against him. I felt Ars was going to use Jerry's son and use him as the "lead," I should say, for this round, and honestly I think it would've been smart for Ars to take that route. Like how Will utilized Danny's parallel universe to rap about a universe where Danny had this completely different career, Ars could've spoken about Jerry's son having a terrible life because of all the socials Jerry had taken. That angle would've probably won him the round. But because that didn't happen, and it took a turn, it didn't feel like the round was strong enough to win the third, and I felt lyrically he wasn't as clean as Jerry. He had the better performance through the night, as far as rapping and getting through his material without a slip-up, and his energy was intense as always, but I don't think that beat Jerry that night. My verdict is Jerry 2–1, clear. Also, make sure to stream Underrated, Arsonal's newest project. (I don't get paid for this at all, but if you're reading, why not go listen while you read? Support the battlers!)

Chilla Jones v BDOT

I saw Bdot in his last battle, and I knew he had Chilla; if his battle against Holmzie was an indicator of his next battle, then Chilla did have something to worry about. But it's not like I didn't know who Chilla is.
Seeing this battle announced alone gave me a certain type of chill. Two different styles, but both styles hit hard and both work over any type of crowd. They're both in the running for Champion of the Year, and this battle would help both of their cases. But I had Bdot going into this battle; I feel he's a good counter writer and would be able to neutralize what Chilla would send at him. I was eager to see this one; potentially, battle of the night is this battle right here.

Jones' 1st

I knew this would be good, or at least I hoped it would be. Everything touched upon in Chilla's first round was all the things Bdot hears in every battle, but it's not just saying it; it's how it's said, how it's approached. One of the first things brought up against Dot is the Loaded Lux mannerisms. I'd be lying if I said I didn't see Lux in Dot, but I don't think that hurts him too much, because he can rap. What can start to hurt him is him saying "black African power" but not speaking up about the injustices. Not saying he doesn't do that; it's not like I'm watching his Twitter or social media waiting for him to say something about it, because I'm not. But I believe Swave said something to Bdot as well when talking about a similar situation, even though I felt that was worse, because he said that Dot was simply making money and didn't give it to the victim's family, which I don't know to be true or not, but I wouldn't want something like that brought up every battle. Jones had a lot of wordplay and bars coming out of his first round, but I wonder just how deep it would go after this round, after the list of things Dot stopped trying at or "lacked commitment" in: battle rap, music, football, his marriage. Even with all this, I don't think it'll be that difficult for Dot to beat this round, because I feel there's a window of opportunity for Dot to take the first.

Bdot's 1st

Dot won this round to me, and I'll debate anyone who says otherwise. The energy alone had me thinking he was just going to punch, but he was talking to him at the same time; he even had a line about that: "I'm Ali, I talk while I punch." I felt Dot did have a window of opportunity to win this round, but he broke through the window, and it had me eager to see how it was going to go into the second. What I find funny is that I haven't seen Nunu for a couple of events, and when she came back, the first thing she did was jump in Bdot's round. Well, it wasn't a "jump in," traditionally, but she did get back at Jones for what he did against Prep a couple of years back. I enjoyed everything about this round, from him mentioning KOTD to his name flips out of that, even his style of rap and making it a hard line (word association). Like I said, I have Dot taking the round easily; he put the pressure on Chilla, and it was Jones that needed to fight back in this situation.

Jones' 2nd

Was it an ongoing angle? He kept on with his walking contradiction angle on Dot, and honestly, I wasn't expecting it. It's all about how, though, because what he said was true in this round about a lot of things concerning Dot.
The Avocado situation was a really bad look for him. I won't go into the situation, but with Dot, what he raps about, what his brand is, Chilla just using what he said sounds simple, but it's heart-crushing in a sense. Because then he spoke about Dot being on KOTD as well: why be rapping over there if what you represent is black African power? Then, in his match against Top, he did downplay dealers a bit, but against EK he said if Smack is about selling drugs and such, it should be on the west only, or stationed there. Unless Dot had countermeasures for a round like this, it's another small window, but like I said earlier, that's one of Dot's strengths, being a counter writer. And this is before I watched what he said in response, so I could be wrong.

Dot's 2nd

Even though I did like this round, I don't see it as strong as what he had to go against. He aired part of Jones' dirty laundry, though I don't think anything was said about the women CJ has messed with, if he did, and I didn't find it too appealing based off what was said to Dot. He did touch on what was said, but not enough to be recognized as a full counter to everything said. His performance wasn't the cleanest either, but I always appreciate when a battler gets through the round rather than just giving up, and there were some spots where you could tell he fucked up. Even with those slip-ups, I felt it was a good performance; it doesn't help that Jones didn't slip at all, and the material didn't seem as directed either. He did scheme because of Jones scheming, but I don't think that means you need to scheme as well. Sadly, Dot lost this round to me. Hopefully, in round three he won't have that type of slip-up.

3rd round

Because I watched it back a couple of times, this part of the review will lay out what I think about both of their rounds, because this was probably round of the night to me. Both of them came, and both came heavy, and it felt like a dog fight. Jones finished his angle out, and I felt like it's only right to bring up the last battle that Bdot was part of to make his claims stronger and make his bars hit harder. I felt his initial point of Dot being a walking contradiction came full circle at this point. He goes into the year he has had, the family he's lost; overall, the way he ended it was perfect in my opinion. I felt Dot's third could've been his second, because it was great counter writing in a sense, but I felt his third was negated when he started rapping. In the verse previous, I think Chilla showed who he was, and I knew more about him based off that round alone. I don't think Dot ran out of ideas when going against Jones, but I think Chilla won because he had more of a direct point to attack him with all three rounds, and I felt for Dot, that wasn't the case; the points brought up were slightly repetitive. Not to say he had a bad round, but it wasn't better than Jones' to me. With that being said, I have CJ 2–1.

Next, Pat Stay vs Kshine

Now going into this, I remember an interview from Shine that said he wanted Pat Stay because he didn't know how to beat him, and I always felt that was weird, but I knew what he meant. Stay can take anything you're talking about, or what you're about, and use it against you, and that's probably what makes him so entertaining. Pat Stay is a great, and there's not a lot of people that could take him out. I feel Kshine knows this; he's the one who said it, so I was eager to see how Shine would approach this battle.
Shine hasn't only been active, but dominant too; his last 4 or 5 battles under his belt have been great, maybe some debatable, but I don't think there's a clear loss within those last 4 to 5. But Pat Stay could be the reason that streak comes to an end. Going into this, my prediction was Pat Stay winning. I can't tell you which rounds exactly, but I felt Stay would come in and get a win, not a body by any means. Pat Stay's 1st: I… I made the wrong call, I'll admit. The way he started this was terrible in a way; every time it felt like Pat was starting to go, it would just go south. He said mid-round that he had things set up for actually being in the small room, and I wonder why they switched it last second, because I think it did throw him off a bit, but I would rather he just spit whatever he had for the small room than be out of sync with himself. I appreciate him not just calling time and instead fighting through this round. Certain things still made me laugh, though: he pulled his ass out towards Shine and then said "he's straight buns." That's just funny, but sadly it barely helps his case, because it didn't seem like it really mattered. With all the stumbles, dry spots and him not really saying anything, all Shine would have to do in his round is rap clean and it's a sure thing for him. The showmanship was still there, so it's not like Pat came out to bullshit, and I appreciate that. He took out the tape measure at the start of the round, so I felt it would be something to view, and even Shine doing the sidestep while Pat was rapping was funny too. But I don't think Pat was going to win this if that's how he started. Kshine's 1st: All Shine had to do was get through the round to win. I like that he announced he would need more than gun bars to beat Pat Stay, so he definitely brought everything; it was a bit more wordplay than what I'm used to from Shine. I have to say, this was a funny-looking matchup physically, because Pat Stay is so damn tall, and certain angles had Shine looking really short. That had to be part of the reason Shine would rather rap to the crowd for the majority of the round; otherwise it's like Shine would be screaming at his chest or something. Either way, Shine won this round by a lot, and there's not a lot to contest, because Pat Stay wasn't able to get through his round without stumbling. Shine had bars and a really clean performance. At this point I did start to wonder why Pat wouldn't use what he already had; I'm sure the bars would've hit, unless it was certain things about the set he was going to use. I really wanted to see this matchup, and I didn't want either of them to have a bad outing. Shine took the 1st clear. Pat's 2nd: Yeah, this is what I wanted in the first round. Pat can rap, and that's nothing that goes over heads, but because he has everything else, people forget that's his strongest attribute. On top of that, his showman side always comes out: how he heckles the crowd at the start of the round, and towards the end he panders to it. It's really entertaining and good to see how casually he goes into those modes. Listen, that shit is difficult to do, to win over a crowd that sorta wants you to lose. This was a strong round with a clean performance, but I still think Shine can beat it. How? I don't know.
Shine's 2nd: After watching this round twice, I think Shine went stupid that time around. He did come with counter writing, and I was a little surprised; that's not what I'm used to out of him. Whenever I see Shine it's more of "I can do me better than you being yourself," but I think he knew counter writing might've been the best way to go. I appreciate his approach, and if I were to match his bars to Pat's in the second, I would take his, even though I do think he got some slight gas from the crowd behind him, but not to some overwhelming degree. (And I know at the same time it might not be gas; my view from home is different from being there, but this is how I see things from the URL app, haha.) With that, I feel Shine took the second round pretty clear. Pat's 3rd: I really don't have a lot to say for this round. I think it was a good way to approach Shine, in a way only Pat could do. The basketball line was hilarious, but at the same time I think it was the only high point of his round. I felt the energy was off and the things he was saying just weren't hitting that hard. Hopefully next time we see Pat he comes with something better, but this round was incredibly easy to beat. I don't think it's anything to hold against Pat Stay, unless you count the fact that he was locked in with Shine for a really long time. But people fuck up, and that's ok. Shine's 3rd: Ok so… this angle. I think everything landed in a way, partly because of that damn handshake at the start of the round. I felt it was funny; Rest In Peace to Trayvon Martin of course, so I'll mark it off as dark humor from the start. Anytime I see a battle between white rappers and black rappers I feel that racist angle is almost certain to come, but like I've said plenty of times before, it's all about how it comes off and how the battler approaches the angle. Nobody has become Pat Stay and themselves in the same round, and I felt it was so unique for Shine to do; in a way, I don't think he's ever become someone else since Professor Shine. How it was performed and the originality of the round itself put it above Pat's. I have Shine winning this battle 3–0 very clear. The last battle is Geechi vs Goodz, but I think I'll keep that for another review, that being the main event, and I would need to have my thoughts fully constructed on that one battle, which is a true style clash. I hope you enjoyed the review, or just my thoughts on paper; if you can, leave some thoughts. Because I want a full construct of thoughts and these happen on my time, I'll try to be as consistent with battles as I possibly can.
https://medium.com/@mekaziah2695/smack-presents-vol-6-fef532ad12e7
[]
2020-12-22 22:28:45.590000+00:00
['Url', 'Rap', 'Battle Rap', 'Smack', 'Blog']
Data Through Design: A Tactile Take on Open Data
Data Through Design, a newly installed art exhibit created in honor of NYC’s Open Data Week, offers data enthusiasts a rare treat: a tactile encounter with open data. Municipal data, where available, comes mostly in neat, comma-separated rows. Sometimes, it is transformed into lines and circles, its stories served up on gridlines, garnished sparingly with the appropriate annotations. But even in the most inviting of circumstances, open data seldom asks its audience to reach out and touch. Data Through Design, open to visitors until the end of this week at the NY Media Center, does just that. Hosted by Enigma, CARTO and the Pratt SAVI (with sponsorship support by Columbia University’s Brown Institute for Media Innovation), the temporary exhibit showcases eight projects that pose a number of central questions about life and death in New York City. Data Through Design is an endeavor to view New York City through the prism of open data. The projects take a range of approaches using the city’s datasets as a starting point, with the larger proportion seeking to represent the chosen datasets in a novel manner. A few challenge us to think critically about the endeavor of open data and the insights it brings. Of the works that seek to experiment with new forms of data visualization, Ellen Oh’s Slow Down and Jill Hubley’s Broken Windows & Pick Tickets stand out. Oh’s Slow Down features a set of eight neon-yellow acrylic panels, each with a map of New York City and a year’s worth of collision data. A number of magenta cylinders dot the surface of each map, one for every fatal collision. The panels are transparent, allowing the viewer to see the scatter of accidents for each successive year. The markers on the map never quite line up with each other, and disturbingly, look like bright bullets suspended in space, always rushing towards the body of the city. The choice of color, borrowing from traffic signs meant to alert those on the road to oncoming danger, creates a chaos of neon. The effect is disorienting in a way that feels critical to the message of the piece. The lack of pattern to the traffic deaths across the city, especially viewed in the cumulative, is troubling, but Oh is unequivocal about a solution that doubles as the piece’s title: slow down. Broken Windows & Pick Tickets considers how crime is policed throughout the city. Hubley has constructed a wooden pergola poised over a map of NYC police precincts. Bunches of parachute cords hang over every precinct, with each bunch containing color-coded cords that correspond to particular city violations. The lengths of the cords are logarithmically scaled to the number of criminal court summons for that precinct: some are too short to properly dangle, while others trail along the floor. Visitors are invited to move through the piece and touch the cords that represent everything from jaywalking to loitering while wearing a mask. The colors and patterns of the cords recall the bolder patterns and textures of statistical atlases of Reconstruction-era America. The cords are complemented with something a bit more modern and a bit more familiar: a mounted iPad displaying small multiples of area charts. The charts show the same data, grouped by either precinct or violation type. The interactive also contains additional information about its more experimental counterpart — including a key to the cords and a few highlights of what Hubley felt stood out about the data. 
For both Oh and Hubley, their experiments in data visualization elevate the encounter with the data to a visual confrontation. The physicality of their works impresses upon the viewer in a way that feels wholly different from digital interactions with data. For Hubley's project, one cannot help but imagine the painstaking process by which she had to cut and organize the bands of violations for each of the precincts of the city. Oh, too, had to manually insert small acrylic pegs for every traffic fatality represented in her maps. The exhibit as a whole does not shy away from heavier topics. In addition to Slow Down, two other projects deal explicitly with death (How We Die and Life and Death in the Built Environment). Another, descriptively titled The Time and Place of Sexual Trauma, displays an LED calendar of dots that light up in eerie synchronicity with a ticking 24-hour clock to signal the reports of sexual violence submitted for that hour and day. For Jessie Braden, director of Pratt SAVI and one of the organizers of Data Through Design, the choice in subject matter speaks to the issues we face as a large, urban city. For the audience, it may be a reminder of parts of the city we may choose to forget. Some of the projects ask viewers to question how we use open data. Mathura Govindarajan and Davíd Lockard's What Our Numbers Don't Show: The Story of Data Misinterpretation features a number of deliberate misinterpretations of several NYC datasets. Their message is presented through a series of comedic videos that are housed in a seafoam cabinet with big, friendly arcade-style buttons that match to a selection of misinterpretations. The chosen errors are modeled after common misuses of data: mistaking correlation for causation, selecting the preferred answer to a problem by reshaping a modifiable areal unit, overfitting data to jump to oddly specific predictions, and others. The piece points to how open data might be flattened into conclusions that fall shy of something true to life. Manhattan Tree Topography by TWO-N delivers a similar message. In its construction of the island of Manhattan made entirely of wooden blocks, each featuring the material of the neighborhood's most populous tree, one block is notably absent. Central Park, usually a relief of greenery from the urban jungle, is represented only by white space. The piece is starkly elegant, and achieves more uniformity than is perhaps desirable (the honey locust being commonly the most prevalent tree across the borough). Its physicality, for better or worse, is less demanding than Oh's Slow Down or So Yeon Jeong and Ye Eun Jeong's The Time and Place of Sexual Trauma. Though the heaviness of the blocks and the particular grains of the few types of wood try to engage the viewer on a sensory level, the piece is perhaps most effective in prompting the viewer to consider its underlying data critically: what isn't counted matters. For those interested in experiencing the exhibit firsthand, Data Through Design is open this week, Monday through Friday between 9am and 7pm and on Saturday between 10am and 6pm at the NY Media Center in DUMBO, Brooklyn. . . . . . . . . . . All works featured in this exhibit are based on datasets in New York's open data portal. Interested in exploring data visually? There are thousands of datasets on a broad range of subjects available on Enigma Public.
https://medium.com/enigma/data-through-design-a-tactile-take-on-open-data-130dcdf43431
['Rashida Kamal']
2018-03-14 21:56:58.851000+00:00
['Open Data', 'Data']
Is Venezuela becoming the biggest problem of (Latin) America?
Last week's legislative elections, in which Maduro and his allies won an easy victory, mainly because the opposition boycotted the process, can be seen as a key point in the country's transformation from a neo-socialist, Cuban-style rebellious state into something completely different. The results of these elections mean that Maduro now has complete control over the country's political institutions. "The last stronghold" has fallen, and at least from a legal point of view, there is almost nothing Maduro cannot do. It is worth remembering that this situation comes against the background of a destructive economic and social crisis that Venezuela has been going through in recent years. The main sector of the local economy, the oil industry, has been struggling for years due to poor management, lack of investment, corruption, and harsh international sanctions imposed on it, mainly by the US. The situation is so bad that senior Western economists say the economic crisis of recent years in Venezuela is the single largest economic collapse outside of war in at least 45 years. And since no real war was involved, they tend to blame the situation on poor governance, corruption, and the misguided policies of Chavez and, afterwards, Maduro. A significant symptom of the economic crisis is the rising rate of poverty among the people of Venezuela. According to research by local academics, between 2019 and 2020 about 65% of the country's households suffered some degree of poverty. This rate was about 14% higher than in 2018. The picture becomes even darker when measuring Venezuelans by income levels: there the findings indicate that almost 96% of the population lives in poverty, as the average income was about 72 US cents a day, something otherwise seen only in poor African countries. As a result, people are trying to flee for a better life. More than 10% of the country's population has fled the country in the past 3 years. This humanitarian crisis is considered Latin America's biggest ever refugee crisis. The rising rates of poverty lead large parts of the population to real hunger. A fairly recent UN-sponsored report says Venezuela is suffering the fourth-worst food crisis in the world ("above" it are countries like Yemen, DRC, and Afghanistan, all three ravaged by war). The report found that 9.3 million people — about one-third of Venezuela's population — lacked enough safe and nutritious food for normal human growth and development last year. It found that 13% of Venezuelan children under the age of 5 are stunted and that 30% are anemic. Besides the economic and social crisis, Venezuela seems to suffer from over-attention by international players, and this is not a healthy attention. Especially in the Chavez-Maduro era, the country has become a battleground for foreign powers' interests, mainly those of the US and Russia. Due to its dependence on foreign oil, China has also joined the party, and to complicate the situation even more, Iran has added itself as another supporter of the current regime in Venezuela. All this international attention seems only to deepen the local crisis, mainly because the US keeps tightening its sanctions on almost everything identified with the current regime, and the fact that all these "bad actors" (from an American point of view) continue to support that regime only justifies, as far as the US is concerned, the need to tighten the restrictions even further.
The recent legislative elections frame Maduro's regime more than ever as an autocracy, and therefore as a target for additional US sanctions and restrictions that, at the end of the day, mainly affect the people of Venezuela and not the leadership, especially Maduro and his close circle, who continue to find creative ways to receive money and keep control over the strongholds of power, especially the security forces. The result of this whole mixed salad will be a worsening of the humanitarian crisis, which will also badly affect Venezuela's neighboring countries, which will have to deal with the ongoing spill of refugees and the deterioration of stability and order in their territory. As in many other cases in the international arena in the post-Trump era, it seems the Biden administration will have to rethink the American strategy towards Venezuela, a strategy that in recent years consisted mainly of the stick. Now they will have to bring in the carrot as well. In addition, they will have no choice but to cooperate with the Russians and the Chinese in this regard, in a more creative, give-and-take way, such as easing tensions and accommodating Russia and China at other global points of friction (global trade issues, the Middle East, East Asia, etc.), in exchange for them influencing Maduro to leave office peacefully (in return for not being prosecuted by the US and other international players), since Maduro seems to be the main (though not the only) obstacle to the country's recovery in the post-Chavez era. The current situation has gone far beyond the control of the local players in Venezuela, and international intervention, but this time of the useful kind, is essential. It is hard to believe the US wants such a catastrophe in its back yard.
https://medium.com/@info-63603/is-venezuela-becoming-the-biggest-problem-of-latin-america-e7a827de0090
['More Intelligence']
2020-12-19 14:30:00.078000+00:00
['Refugees', 'Venezuelan Politics', 'Venezuelan Crisis', 'Maduro', 'Venezuela']
How much does it cost to develop a website in the USA?
Whether your company works online or offline, it is vital to develop and maintain a website in the modern digital world. Just how much should a website cost, however? Typically, the upfront expense of a website, including designing and launching it, is $13,000 to $120,000, whereas ongoing site maintenance ranges from $40 to $3,000 a month, or $700 to $50,000 annually. Take a quick overview of how much it costs to build and run a site, and read on to learn about the different expenses involved in web design and site maintenance. Website Development Expert is a web development company based in the USA that builds websites at an affordable cost without compromising on quality. They have a team of professional developers with expertise in PHP frameworks like Laravel and CodeIgniter, and in CMSs like WordPress and Magento. In addition to the cost differences between agencies, freelancers, and platforms such as WordPress, these costs depend on the size and features of your website. Depending on your site, you might spend less (or more) than these estimates. Whether you are looking to launch a new site or redesign your current one, your business should consider several web design and development costs. These costs are usually one-time expenditures, which means it costs more to launch a site than to keep one running. You have to pick the approach that is best for you. Hiring a web designer is expensive, but it takes the technical hassle out of your hands. Using a website builder is inexpensive and simple, but you do not get the same level of control that you do with WordPress. WordPress gives you flexibility, but it is the most time-intensive and hands-on approach. That is why the cost of a WordPress site is really difficult to pin down: it depends on a lot of different variables, since there are many elements to consider when setting up a WordPress website. Realistically, you do not want to pay more than $7,000 for a web designer; otherwise you may get a disappointing result that you are unhappy with, having still paid a few thousand for it. Website builders are normally the least expensive way to build a website. There are free plugins and themes too, so in theory it should be easy to keep costs low. However, you may find you want a premium theme, and there is hosting to consider (and pay for). If you hire a WordPress developer, your total site cost can quickly creep up into the thousands. Several costs can influence the total price of creating a WordPress site, but the most important one is hosting. You can get away with using free plugins and themes, but hosting is a vital and significant cost that is essential for getting your WordPress site online. Factors affecting website development cost: Page volume - Let us say a web development company offers a 10-page website package for $4,000, but your site needs 25 pages; how much extra will this cost? A good rule of thumb is to add about $100/page for each page over and above what is included in the standard website package. In this example, adding 15 pages (to the 10 pages included in the standard package) would cost $1,500. Add this sum to the initial base price of $4,000 and the revised total is $5,500. Every situation will vary, of course, but this gives you a reasonable cost estimate based on common pricing in the industry in 2020.
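As a quick illustration of that per-page rule of thumb, here is a minimal Python sketch; the function name and defaults are hypothetical, mirroring the example above rather than any real vendor's pricing:

```python
def estimate_build_cost(base_price, included_pages, total_pages, per_extra_page=100):
    """Base package price plus a flat fee for each page beyond the package."""
    extra_pages = max(0, total_pages - included_pages)
    return base_price + extra_pages * per_extra_page

# The example above: a $4,000 package covering 10 pages, but 25 pages needed.
print(estimate_build_cost(4000, 10, 25))  # -> 5500
```

The same function works for any package size, which is the point of a rule of thumb: plug in the quote you are given and the page count you actually need.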
Custom website layout - Every site starts with a template or theme; nobody codes a site completely from scratch. That is far too time-consuming and pricey. A template often gets you 50–60% of the way to the finish line, but there is still plenty of coding and customization required to get your website looking and working the way you want. The more coding and customization required, the higher the price. Customized images & graphics - Fancy graphics and images can give your website a one-of-a-kind look, but they come at a cost: complex visual effects often require specialist editing software, not to mention the skills of a custom graphic designer. Custom made logo - Expect to pay somewhere in the $700–$3,000 range for a professionally designed custom logo, though you can buy a decent-looking one for around $350. A custom logo kicks the trust/credibility factor of any site up a notch. Custom programming - In a perfect world, everything would be plug-and-play. Sometimes you can find a WordPress plugin that supplies the exact functionality you want right out of the box (i.e. image carousel, membership portal, payment calculator, etc.) and works flawlessly the first time, but that is seldom the case. In other instances, getting your website to do what you want requires a significant amount of custom programming. Number of design revisions - Most site projects begin with a first concept design (sort of like a rough draft), and each additional design iteration increases the overall cost of the project. Technically, a website designer can offer as many rounds of revisions as they want; some offer as many as 3–5 rounds, but 1 or 2 rounds of design adjustments are more than adequate 99 percent of the time. Website content development (i.e. copywriting) - Strong, thoughtful, persuasive content is the foundation of any excellent website. If you are starting a new website but do not have any content yet, it has to be developed. If you have an existing website but the content is weak, stale, or outdated, it will have to be refined, improved, possibly even overhauled. Over the years I have found the #1 barrier to launching a site is content development. Our clients struggle with this because it is time-consuming and hard to do. That is why, a few years ago, we started offering skilled copywriting services alongside our site design and development offerings. Yes, you want your site to look modern and stylish, but it is your site's content that moves the needle. What extra costs are involved in website development? Okay, so we have covered the main cost associated with creating a website. But what about other costs aside from hosting? Let's run through how much it costs to build a website when you consider extras such as domains, plugins, and themes. Professional Help: $10 — $80 per hour - Many people happily build a simple site without a developer's help, but if you want a complex or custom-made website, odds are you will have to employ a website developer to make your ideas a reality. Normally, the more complex the work, the more the developer is likely to charge, so you are looking at a price anywhere between $30 and $80 per hour, but always do your homework before hiring somebody.
Domain Name: $12 — $80 per year - You need a domain name for your WordPress site to help people find you online. Unlike with website builders, you do not receive a free branded subdomain, so you need to pick a custom domain name from day one. This is usually part of the sign-up process when you create a hosting account, which makes the whole registration process super easy. Domains vary in price based on the hosting provider you register with and the domain name you choose (like .com versus .rich); these two variables will hugely affect how much you should expect to pay for your domain. Plugins: $0 — $500+ per year - Plugins add functionality to your website, such as contact forms, reviews, newsletter subscriptions, and anything else you want your website to have! Popular plugins include those adding e-commerce or multilingual functionality to your site. There are free plugins and paid plugins, so it is up to you how much you want to spend integrating new features into your site. Some plugins come with various plans, so you might start off on the free plan and upgrade to unlock a better version. The cost of plugins ranges from $0 to $200+ annually. Just like plugins, there are both free and premium themes for you to choose from. Free themes are an excellent way of getting started, and you can find some stylish ones in the WordPress theme directory. Just bear in mind that they may not be as professional or as loaded with features as premium themes, which can leave you relying more heavily on plugins further down the line. Premium themes are normally around $80 but can reach beyond the $200 mark; however, this is a one-time fee, after which the theme is yours to edit and customize for as long as you want. The upside of using a premium theme is that it generally comes with more support and regular updates, to prevent it from breaking or becoming outdated, and it has features built in to save you spending more money on premium plugins. Think of it as a theme and plugin bundle! Security: $0 — $300 per year - Any hosting provider worth its salt will include security features built into its plans. However, websites can be quite vulnerable to online attacks, so it is worth investing in some extra security measures. There are free security plugins available; for instance, Sucuri is a security plugin that scans, blocks, and removes threats from your website. The plugin is free, but if you want a Sucuri account and access to all of its features, the cheapest Sucuri plan costs $299.99 per year. Hiring a web designer to plan and create your website puts your project firmly in the hands of professionals. You do not get to totally sit back and relax (you will need to work together with your web designer to create a site you like), but it takes a great deal of the heavy lifting off you. Hiring a web designer is best for: anybody needing a very complex website; those with zero time or technical confidence; big budgets and big, custom sites. This is possibly the toughest cost to estimate, because different web designers charge different rates based on the job and their own experience. You might choose an agency or a freelancer, and this can also affect the cost. On the whole, however, you are looking at a price between $4,000 and $80,000 for someone to create your site for you. We tried and reviewed the design agency Hibu, so we can give you a more precise estimate based on their rates.
However, keep in mind that other agencies and freelancers will have their own pricing tables. When we tested Hibu, we paid an upfront price of $449 and an ongoing charge of $139 a month. Estimates range between $69 and $599 to cover the creation of your site, depending on how many pages you need, which features you want, and whether you want to sell through your site. There is then an ongoing fee, ranging from $59 to $259 a month, which covers security and hosting, gives access to support, and allows you to make maintenance requests. Price matters here, because if you spend less than $5,000, you could get a poorly designed website and run into problems pretty quickly as you try to maintain it. This is not an option if you are not confident in your budget, so do not overstretch yourself!
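To pull the DIY ranges above together, here is a small, assumption-laden Python sketch that totals the listed extras for a first year. The figures are the article's own ranges, not quotes from any real provider, and hosting is deliberately left out because no figure for it is given here:

```python
# Rough first-year "extras" estimate for a DIY WordPress build.
# All ranges come from the sections above; hosting is excluded
# because this article does not give a figure for it.
first_year_costs = {
    "domain": (12, 80),      # per year
    "plugins": (0, 500),     # per year ("$0 - $500+" for premium plugins)
    "theme": (0, 200),       # one-time premium theme (can exceed $200)
    "security": (0, 300),    # per year
}

low = sum(lo for lo, _ in first_year_costs.values())
high = sum(hi for _, hi in first_year_costs.values())
print(f"Estimated first-year extras: ${low} to ${high}+")  # -> $12 to $1080+
```

Treat the output as a floor-to-ceiling bracket rather than a quote: the "+" suffixes in the sections above mean the real ceiling can be higher.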
https://medium.com/@alexmorgan2796/how-much-does-it-cost-to-develop-a-website-in-the-usa-fa3e2f659a52
['Website Development Expert']
2020-10-07 13:37:20.601000+00:00
['Website Development Cost', 'Website Development', 'Website Design']
Breast Milk
Breast Milk Photo by Fanny Renaud on Unsplash written by Anna Sarreither - Pediatric Nurse Introduction It is widely known that breast milk offers the best nutrition for your baby. However, did you know that it changes during the lactation period, and that its composition can vary during the day, between mothers and within the population? Generally, human breast milk consists of 87% water, 3.8% fat, 1.0% protein and 7% lactose. It is a dynamic substance, which changes with time, adapting to the growing baby's needs. (1) To fully understand how lactation works, I will begin by explaining what is meant by lactation. Lactation is defined by the medical dictionary Pschyrembel as a summarizing term for milk production (synthesis) and milk secretion by the female breast during the breast-feeding period. (2) Stages The first stage of milk production is called mammogenesis. At this stage, the colostrum is produced, and this production is regulated by hormones. Due to circulating hormones in the mother's blood, which are additionally produced by the placenta, the alveoli in the pregnant woman's breasts convert their cells from epithelial cells to secretory cells. These hormones are called prolactin, relaxin and HPL (Human Placental Lactogen). Mammogenesis starts at approximately week 12–16 of pregnancy and ends most likely on the second day after the newborn's birth. Due to oestrogen and progesterone, the hormones that maintain the pregnancy, lactogenesis is inhibited, which means only a small amount of colostrum can be produced during pregnancy. (3) (4) The second stage is called lactogenesis, which produces the transitory breast milk, and this stage is also regulated by hormones. Since the placental hormones decrease, the placenta being absent after birth, they no longer inhibit lactogenesis, and the breasts can now produce milk to their full potential. This normally starts on day 3–4 after birth. Since all of this is a process and does not happen just by flipping a switch, this phase can last up to day 8 after birth. (3) (4) The third stage is called galactopoiesis. This process maintains lactation once it has been established. It is stimulated by the newborn sucking on the mamilla (nipple), which releases the hormones prolactin and oxytocin. Prolactin stimulates the production of new milk for the next feed, also called the lactation reflex. (3) (4) The fourth stage is called galactokinesis, also known as the let-down reflex. This process maintains the milk flow during a feed. It is induced by the baby sucking on the mother's mamilla, which, as already mentioned, stimulates the release of oxytocin. Oxytocin then stimulates the myoepithelium (muscle cells in the milk ducts) to contract and thereby releases the breast milk through the mamilla. Without this process, a baby would not be able to feed from the breast, since its suction would not be strong enough. (3) (4) The last stage is called involution. This process begins when the baby is no longer breastfed. It happens approximately 1–2 months after the last feed, since the breast is no longer emptied. This signals that milk is no longer needed and induces the regression of the milk-producing tissue in the breast. (3)(4) Good to know A common mistake when thinking about the amount of milk produced is to presume that it depends on the size of the mother's breast. One of the two key factors in the amount of milk produced is how much milk-secretory tissue one possesses.
Women with smaller breasts can have a lot of secretory tissue and therefore produce a lot of milk, while women with bigger breasts may have a smaller amount of tissue and produce a smaller amount of milk. The other point that milk production depends on is the amount of milk removed from the breast, either through the baby suckling or via breast pump. In addition to all the other components in the milk, there is a protein factor called the 'feedback inhibitor of lactation' secreted into the alveoli of the breasts. If no milk is removed, the protein interacts with the cells and inhibits further milk secretion. It is not fully understood which substance in the protein is responsible or how it actually works; one just knows that it interacts with the secretory tissue. (5) In addition, the composition of the milk changes during one feed. The milk which the infant receives first, also called foremilk, is thinner, with a higher content of lactose. This satisfies the baby's thirst. After that the 'hind milk' follows, which has a creamier consistency due to a higher amount of fat. (5) Colostrum Colostrum, also called drinkable gold, has a thick consistency due to its high cell count, and a yellowy colour due to its high content of β-carotene. It contains a large amount of protein, for example immunoglobulins, and stem cells, which are very important for the newborn's immature immune system. There are also higher amounts of vitamin A, vitamin E, lactose and minerals in the colostrum than in mature breast milk. Another positive effect is that it stimulates the newborn's bowel to pass the first stool, called meconium. With the meconium, the newborn also passes a substance called bilirubin. Through that, the chance of the newborn developing newborn icterus is reduced. It also promotes the growth of Lactobacillus bifidus, which is important for the development of a healthy gut flora (6). Additionally, it has been shown to have a stabilizing effect on the newborn's blood glucose level. During the first 24 hours after birth, the newborn drinks approximately 7–10 ml of colostrum per feed. This may seem a small amount, but since the stomach of a newborn is still very small for the first days, and its feeding routine is every 1–3 hours, this assures that this is enough fluid and calorie intake. One should also keep in mind that the baby still has amniotic fluid in its stomach and bowel, which has to be digested and provides some fluid and nutrition at first. Researchers are still investigating how big the stomach of a newborn actually is after birth. Nils Bergmann suggests it has a volume of approximately 20 ml (7); others say it is the size of a cherry or a marble. (8) However, the small amount of colostrum does have a function: it aims to provide a low volume, to help the newborn establish a suck, swallow and breathing cycle during a feed. (9) Transitory Milk The transitory milk is the product of colostrum and mature milk. When the change from colostrum to mature milk starts at approximately day 3–4, one can see a difference in the milk composition on day 5. This transition stage can last up to day 15 after birth, resulting in mature milk thereafter. (10) Mature milk and its composition At approximately day 15 after birth, the breast milk has reached its 'mature state'. Breast milk contains lower amounts of protein than animal milk, which is good, since protein can put strain on the kidney function and the delicate circulation of the newborn.
The whey fraction of the protein is higher in breast milk than in formula, making it easier for the newborn to digest. (11) The protein contained in the milk includes immune-specific substances, which are anti-infective and anti-inflammatory. The lactose, fat and calorie content is specifically adjusted to help the newborn gain weight and grow. These are only a few common component groups of breast milk. Breast milk also contains various other nutrients which are very important for: the immune system, bowel development, nervous system development, growth factors, hormones which regulate feeding patterns, enzymes which support chemical reactions in the body, vitamins, protein for muscle and bone development, proteins for antibacterial defence, carbohydrates as an energy source (as mentioned), non-protein nitrogen which provides a source for the actual proteins or makes it possible to use fatty acids as an energy source, nucleotides which are the building blocks of DNA and RNA, erythropoietin which is responsible for the development of red blood cells, and a lot more. (12) As already mentioned above, breast milk adjusts to the infant's needs. Studies have shown that breast milk has a positive effect during an infection and even decreases the duration of some infections, for example diarrhoea (14). Researchers are still investigating how this mechanism works. One hypothesis is that the woman's breast can detect the infection through the saliva which the infant transfers by latching onto the breast. What is known for certain is that the white blood cells in the breast milk increase if the child has an ongoing infection. For further information about the mechanism, further studies are needed. (15) Many studies over the years have indicated that breast milk composition depends on the mother's diet. A study by Ballard et al, published in February 2014, suggests that micronutrients, such as vitamins A, B1, B2, B6, B12 and D and iodine, vary in their amount depending on the maternal diet and body stores. (16.1) Another study, published in May 2020 by Zavadska et al, showed that macronutrients like fat, protein and lactose are not affected by the maternal diet, but the fatty acid composition is immediately affected. (16.2) Interestingly, breast milk also works as a clock for the newborn, via the sleeping hormone melatonin and the stress hormone cortisol. Since a baby does not produce melatonin itself until the age of 3 months (17), it is transferred from the mother's blood circulation through the breast milk to the newborn. Melatonin is released in one's body when one is surrounded by darkness. On the other hand, cortisol is highest around 7–9 am. Cortisol wakes us up due to its function of raising the blood glucose level. The lowest cortisol level can be detected at around 10 pm. Furthermore, studies have examined whether melatonin, which has a relaxing effect on smooth muscle, which is also present in the gastrointestinal tract, may also soothe infantile colic. (18) A study published in 2019 by Alhadi Alhindi et al examined whether breast milk changes its composition when one gives birth to a female or a male baby. The study group did see 'higher value of viscosity and higher percentages of protein and acidity' in the breast milk for boys than for girls. This could indicate that there is a difference between the milk provided for a boy and for a girl.
(19) Moreover, breast milk possesses a large amount of the protein called HAMLET, which can potentially be used in cancer therapy; studies have only just started researching this. (20) A small study has shown that HAMLET is able to detect tumour cells and destroy them. Whether or not it can be used in cancer therapy is still not completely certain. To gain further knowledge, scientists will have to run more studies. (21) (22) Positive short and long-term outcomes of breastfeeding for the mother In the short term, breastfeeding stimulates uterine contraction, due to the release of oxytocin while the baby feeds. This quickens the involution of the uterus, which also decreases the risk of endometritis puerperalis (infection of the endometrium after birth). It reduces stress, since this is a feature of prolactin, lowers postpartum depression if one already had prenatal depression (23), may contribute to more rapid weight loss, and most likely prolongs anovulation, which in turn most likely prevents the mother from conceiving whilst breastfeeding. In the long term, breastfeeding can decrease the risk of getting breast cancer by up to 4.3% for every year one breastfeeds. (24) In addition, Danforth and colleagues found that if one breastfeeds for over 18 months, in comparison to a woman who has never breastfed, the risk of getting ovarian cancers decreases by 50%. They have also shown that the relative risk decreases by 2% for every month of breastfeeding. (25) It is still being discussed why this is the case. Schwarz et al 'found a 10–20% higher risk of diabetes, hyperlipidemia, and cardiovascular disease among parous women who had never breastfed compared with those who had breastfed for 13 to 23 months' (26). Jonas et al, who published a paper in 2008 regarding the positive effect of breastfeeding on the mother's blood pressure, showed that blood pressure falls during a feed and decreases over the first 6 months of breastfeeding in relation to blood pressure before breastfeeding. (27) This is most likely due to the release of oxytocin. (28) Breast milk is also known to soothe irritation and work as a natural balm on the mother's mamilla. Positive long-term outcomes of breastfeeding for the infant SIDS Prevention It is well known nowadays that breastfeeding is one of the key factors in preventing Sudden Infant Death Syndrome. (29) Cognitive Function In the three months after birth, the brain mass nearly doubles. By the age of two, the brain has reached approximately 80% of the size of the fully grown adult brain. The different areas of the brain are connected by the white matter, which allows certain areas to communicate. Studies have shown that breastfed babies have improved development of the late-maturing white matter compared to formula-fed babies. Likewise, an extended duration of breastfeeding was associated with improved white matter structure and cognitive performance. (30) Additionally, another study, by Horta et al, published in July 2015, shows that breastfed children performed better in intelligence tests. (31) Weight Horta also published a paper showing that breastfed children have a 13% lower chance of becoming overweight or obese, as well as reduced odds of type 2 diabetes. (32) Stopping breastfeeding before 6 months also showed a three times greater chance of obesity in the first year of life.
(33) Eyes A study by Birch et al, published in 1993, showed that children who have been breastfed have better visual evoked potential, forced-choice preferential looking, random dot stereoacuity and letter matching ability. This is due to DHA (an unsaturated fatty acid present in the retina) and arachidonic acid (an unsaturated fatty acid), which are important for visual development in the first year of life. (34) Jaw Also in 2015, Glazer Peres et al published that breastfeeding decreases the risk of malocclusions. (35) Tham et al also connect breastfeeding with protection against dental decay in infants under the age of 12 months. (36) Additionally, D C Page published a paper in 2001 saying: 'Breast suckling aids proper development of the jaw which form the gateway to the human airway.' (37) Ears A study by Bowatte et al showed that exclusive breastfeeding for the first 6 months reduces the risk of the infant having an acute otitis media in the first two years of life by 43%. (38) Immune system Another study has shown that the thymus, a vital organ for the infant's immune system, has a bigger volume in breastfed children than in non-breastfed children. (39) Lodge et al also found evidence that breastfed babies have a lower chance of suffering from asthma at ages 5–18 years. However, one must say that further research concerning breastfeeding and asthma is needed, since the relationship between the two still remains unclear. (40) (41) A study published in June 2015 by Amitay et al showed that breastfeeding up to 6 months after birth may help to prevent childhood leukaemia. 18 studies have shown a similar association, with the risk decreased by 14–19%. (42) Another study, by Bener et al, published in 2008, showed that the risk of lymphoid malignancies in children decreased the longer they were breastfed. (43) Recommendation WHO The World Health Organisation recommends starting exclusive breastfeeding in the first hour after birth and continuing it up to six months of life. After that, breastfeeding on demand should be carried on until the age of two. Exclusive means no other foods or liquids, including water. (44) Additional Facts Cost Parents who breastfeed can save over 1000–13000 € in the first year of the baby's life, due to having no additional milk formula costs. Environment Breast milk is a renewable natural resource, which produces no garbage, minimal greenhouse gases and a smaller water footprint than formula feeds. (45) Health System The journal Pediatrics published an estimate that 13 billion US dollars would be saved in the health system if 90% of US babies were exclusively breastfed for the first 6 months after birth, since this would reduce hospitalization and medical costs. In addition, it could prevent 911 deaths annually, of which 95% would be infants. (46) Sleep Studies have shown that, on average, parents who breastfeed sleep approximately 30 minutes more per night than parents who feed formula. The exact reason for this still has to be researched. (47) Food Food flavours which the mother consumes during pregnancy are transmitted into the amniotic fluid, which is then swallowed by the baby in the womb. In addition, some flavours eaten by the mother can change the flavour of her breast milk. The study concludes that prenatal and postnatal exposure to flavours, transmitted through amniotic fluid or breast milk, increases the enjoyment of those flavours when starting solid food weaning.
(48) Summary Breast milk varies during lactation. It can change its composition during the day, over the lactation period, between mothers and within populations. Lactation can be divided into five stages, with each stage having its own hormonal regulation, which in turn controls milk production. Each stage can also be distinguished by the milk that is produced. The best outcome for mother and baby is achieved by breastfeeding. Breastfeeding not only has positive effects for the baby but also for the mother. Breast milk contains components which, uniquely to this day, can only be found in breast milk. One can confidently say that breast milk offers the best nutrition for a baby, and one should start exclusive breastfeeding within the first hours after birth and continue until the age of six months. After that, 'on demand' breastfeeding should be continued while starting solid foods. Additional Sources
https://medium.com/@asarreither/breast-milk-f76826f3e0f5
['Anna Sarreither']
2021-04-26 13:52:24.399000+00:00
['Baby', 'Motherhood', 'Pregnancy', 'Breastfeeding', 'Parenting']
The (Political) Theory of Everything
The (Political) Theory of Everything Political scientists have developed a new, all-encompassing theory. It makes sense. Illustration by Peter Grabowski. For years now, the media has been force-feeding the all-too-familiar "Democrats in disarray" story. An uncompromising left flank tenuously married to a more moderate establishment. Who wins the future for the Democratic Party? A few weeks ago, a curious finding emerged that surprised only those who believe in such imaginary civil wars. The New York Times/Siena polled six battleground states and asked primary voters who had backed progressive Elizabeth Warren who they would support in the upcoming general election, Joe Biden or Donald Trump. A whopping 96% said they planned to vote for Joe Biden, and 0% — zero! — said they would vote for Trump. "No Warren supporter in the survey — which was conducted in June — allowed for the possibility that there was even 'some chance' they would vote for Mr. Trump," Nate Cohn, the poll's author, said. (The Bernie-or-bust movement appears to have waned as well, as 87% of Bernie backers plan to vote for Biden, with only 4% defecting to Trump.) Wouldn't such a fractious, big tent party have a hard time coalescing? What forces explain the Democratic Party's remarkable unification? The answer isn't polarization. Studies have shown that loyalty or positive feeling towards one's own party isn't spiking as much as people think. Between 1980 and 2012, for example, approval ratings for a voter's own party have been relatively constant, hovering in the upper 60s and lower 70s. Less than half of Republican voters identify as Republican because they "have a lot in common with other Republicans." For Democrats, the number is 51% who identify with the party because of commonality. Even in the last presidential election, only 56% of Republicans viewed a Trump presidency as a net good for the country, while 64% of Democrats viewed a Clinton presidency as a net good for the country. Data from the American National Election Studies (ANES) show that from 2000 to 2016, favorability in the Democratic Party fell from 59% to 49%, and in the Republican Party from 54% to 43%. So much for partisan love. The real answer is negative partisanship, or the idea that we vote not for our party but against the opposing party. It is not love for our party's positions that draws us to the polls, but a mistrust or hatred of the opposing party. Hatred is a stronger motivator than love — at least politically. In the 1990s, fewer than 20% of voters viewed the opposing party as "very unfavorable." Now, 44% of Democrats and 45% of Republicans view the other party as very unfavorable, more than doubling in just two decades. If you include "unfavorable" in addition to "very unfavorable," the number of those with negative views towards the opposing party balloons to over 80%. Even more ominously, about a third of voters view the other political party as a "threat to the nation's wellbeing," with Republicans slightly more likely to subscribe to this belief. To show how negative partisanship has impacted our politics, look no further than split-ticket voting. Once a normal practice, split-ticket voting is quickly fading into extinction. Just a few decades ago, about a quarter of voters split their ballot. An example would be voting for a Republican presidential nominee and a Democratic Senate nominee. Currently, this voting pattern happens in less than 10% of ballots, the lowest level ever recorded.
Former Speaker of the House Tip O'Neill once famously said "all politics is local." Perhaps then it was true. Maybe a voter would cast a ballot for a Democratic presidential nominee, but vote for a local Republican congressman due to a hyper-localized issue, such as ethanol policy or swan hunting. Now, due to negative partisanship, American politics is increasingly becoming nationalized, according to political scientists Alan Abramowitz and Steven Webster. "The result has been a growing nationalization of elections below the presidential level: the outcomes of elections for U.S. Senate, U.S. House, and even state and local offices are now largely consistent with the outcome of the presidential election," they note. "In 2016, all 34 Senate elections and 400 out of 435 U.S. House elections were won by the party winning the presidential election in the state or district," Abramowitz and Webster found. Could negative partisanship explain everything in politics? Sanders and Warren supporters overwhelmingly back Biden, not necessarily out of mutual agreement on policy, but out of a shared hatred for Trump. This new theory also explains why there were few Republican defectors in the 2016 presidential election, and why any discussion of whether Trump will lose his base is moot. As long as Republicans hate Democrats, they'll excuse the president's questionable antics. What's the alternative — voting for the enemy? Negative partisanship explains why there's so little ticket-splitting these days, as political scientists Shanto Iyengar and Masha Krupenkin assert that "it is out-group animus rather than in-group favoritism that drives political behavior." Negative partisanship explains why approval ratings for presidents almost never exceed 60% or dip below 40% anymore, either. If half the country hates the president, and the other half hates the haters, where's the wiggle room? This new theory also explains why political discourse has gotten so toxic, and why the political stakes have never been higher. The percentage of Democratic voters who are angry at Trump "most of the time" or "just about always" is 73%, up from 33% towards Mitt Romney in 2012. Two-thirds of Republican voters were consistently angry with Hillary Clinton, up from 43% towards Obama. Next time a seemingly illogical political issue crops up, ask yourself — "does this solely exist because we hate the other side?" The answer may surprise you.
https://medium.com/an-injustice/the-political-theory-of-everything-87d55fac5ad4
['Peter Ramirez']
2020-08-16 22:36:50.242000+00:00
['Politics', 'Polarization', 'Republicans', 'Democrats', 'Elections']
6) Scary ‘R’ Us 2 — The Exaggerated Threat of Terrorism
"Naturally the common people don't want war, neither in Russia, nor in England, nor in America, nor in Germany. That is understood. But after all, it is the leaders of the country who determine policy, and it is always a simple matter to drag the people along, whether it is a democracy, or a fascist dictatorship, or a parliament, or a communist dictatorship…The people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same in any country." (Hermann Goering, Hitler's 2nd in command[i]) Terrorism is scary enough to work as an exaggerated threat to serve as an excuse for war. Almost any violent action by any group anywhere in the world can be portrayed as the act of terrorists, thus justifying military intervention, arms sales, repression and crackdowns on opponents of the government. One commentator suggested: "terrorism is maybe the best excuse that has ever been invented for unlawful government action"[ii]. The intelligence analyst Edward Snowden (who became a whistleblower when he revealed the extent of US government spying on US citizens) has explained that terrorism is what analysts call 'cover for action'. This means that it convinces people to allow government actions that they would not normally allow.[iii] The 'war on terror' is a propaganda term to justify invasions and overthrowing governments. Such a 'war' has no definable end, and the whole world is potentially the battlefield[iv]. The US and Britain Train, Arm, Finance and Protect Terrorists Beginning in 1979, the US Central Intelligence Agency (CIA) and Britain's MI6 worked with Pakistan's intelligence services and Saudi financiers to train terrorists in Afghanistan. Their role was to de-stabilise Afghanistan in order to draw the Russians into a long war, which has been described as 'Russia's Vietnam'[v]. During this training, recruits learned not only how to fight, but were also indoctrinated in the most extreme forms of Islam. That war ended a decade later when the Soviet Union collapsed. Some of these fighters went on to become the terrorist group known as al-Qaeda[vi]. Many of the terrorists involved in attacks around the world, such as the World Trade Centre bombing in 1993, were veterans of these training programs. Over time, al-Qaeda funded other groups, split into different factions, spread into other countries and developed into organisations such as ISIS (Islamic State of Iraq and Syria)[vii]. These religious extremists fought in different parts of (what was) Yugoslavia: first in Bosnia, then in Kosovo and Macedonia. US and British support for terrorists overseas has continued to the present day. More terrorists were used to overthrow the government in Libya, and the terrorists trying to overthrow the government of Syria received weapons and support from Britain and the US. A few years ago there were regular discussions about the Finsbury Park mosque in London, because it was associated with terrorist recruitment. Preachers at the mosque, such as Abu Hamza and Omar Bakri, had been trained at CIA-backed training camps in the 1990s. They travelled to central Asia with the support of the British intelligence agency, MI6. These people then trained and radicalised others, including the four people who carried out the July 2005 attacks on the transport system in London. Britain operated what was known as a 'covenant of security'[viii].
This was an informal agreement that extremist preachers would be tolerated, provided that they did not preach violence against Britain. Their usefulness in recruiting for wars abroad was considered more important than the violence that they incited. British attacks on Afghanistan and Iraq brought this covenant to an end, so Britain then became a target. Terrorist attacks in Britain and the US, triggered by our policies abroad, are sometimes called ‘blowback’.[ix] The bomber who carried out the 2017 Manchester bombing had previously travelled to Libya, in order to participate in terrorism there, with the assistance of the British intelligence agency MI5[x]. The bomber who killed 49 people in the 2016 attack on an Orlando nightclub claimed that it was in response to US bombing in Iraq and Syria. The former leader of the Labour Party, Jeremy Corbyn, correctly pointed out in 2017 that “British invasions abroad provoke terrorism back home.”[xi]

On various occasions the FBI wanted to investigate individuals who would later be involved in the terrorist attacks in New York on September 11th 2001, but they were told by other US government agencies not to question them[xii]. The security services of other countries have found these policies very frustrating, because attacks in their countries were carried out by terrorists formerly working with the US, or incited by preachers in Britain. Egypt has denounced Britain for protecting the killers involved in a massacre at Luxor in 1997. Macedonian intelligence complained that US interference was the biggest obstacle to dealing with extremists in 2001[xiii]. A closer examination of US policy reveals further hypocrisy about terrorism. America has actually provided a safe haven for large numbers of international terrorists for many years, because those same individuals have carried out the CIA’s bidding in other countries. Florida has been described as the retirement home of choice for mass murderers, torturers and assassins from Cuba, Guatemala, El Salvador, Haiti, Chile, Argentina, Honduras, Somalia, Indonesia, Iran and South Vietnam. This includes the Cuban exiles Luis Posada Carriles and Orlando Bosch, who blew up a Cuban passenger airliner in 1976 and killed 73 people. Documents indicate that US intelligence agencies knew that such an attack was planned, but did not inform the Cuban authorities. Many other terrorists have been flown to alternative countries, known as safe havens, where they are unlikely ever to be charged for their crimes[xiv].

State-Sponsored Terrorism — Governments Are The Most Dangerous Terrorists

Terrorism is the use of violence and fear to achieve political goals. Many US invasions and CIA crimes, discussed in earlier posts, involved extreme violence, were intended to create fear, and were intended to achieve political goals, and thus should be labelled terrorism. When the US government described its strategy in Iraq as ‘shock and awe’, it made clear that the intention was to terrify the Iraqi population[xv]. In other words, modern warfare is simply terrorism carried out by governments. Despite this, the terms “state terror” and “state-sponsored terrorism” rarely appear in the media to describe violence by Western governments. Governments try to distinguish their violence from terrorism by saying that they do not deliberately target civilians. This is propaganda, because the governments concerned know that many of their actions will inevitably lead to huge numbers of civilian deaths.
As one commentator noted, from the point of view of a civilian being blown up: “there is little moral difference between a stealth bomber and a suicide bomber. Both kill innocent people for political reasons”[xvi]. The US and Britain have close ties to some of the world’s most repressive regimes (some of them discussed in earlier posts). They provide support for these governments in order to maintain control of trade and resources. Each time the US or British government provides weapons, military training, finance or other assistance to a repressive regime, such as Saudi Arabia, it helps to create circumstances where the repressed population will correctly blame the US and Britain, and might fight back[xvii]. These unpopular governments frequently commit state terrorism against their own populations. A study that analysed the motives of suicide bombers concluded that attacks are overwhelmingly aimed at foreign occupiers (such as US soldiers), and that when the occupiers leave, the suicide attacks tend to stop[xviii].

Governments Like To Exaggerate A Threat…

In Britain we were repeatedly warned that there were as many as 2,000 terrorists in the UK ready to strike at any moment, yet since 2005 there have only been a handful of incidents and a small number of successful prosecutions. In fact, the statistical threat of terrorism to people in the US or Britain is minimal. Calculations for the US show that you are more likely to drown in the bathtub than to be killed by a terrorist[xix]. In 2005 the FBI admitted that it had not identified a single al-Qaeda sleeper cell in the entire United States[xx]. The Director of Public Prosecutions openly stated that “there is no war on terror in the UK”[xxi]. In 2009 the CIA admitted that the total number of al-Qaeda members in both Afghanistan and Pakistan was less than 100[xxii]. The number of terrorists operating overseas has increased dramatically in the last decade, as the US and Britain have poured weapons into the Middle East to destabilise multiple countries and overthrow governments.

…to justify state crimes

Ordinary people have struggled for hundreds of years to develop laws that protect us against corrupt governments, but the supposed crackdown on terrorism gives the government an opportunity to create oppressive laws at home. The US government has manipulated the legal system since 2001 so that it can carry out activities that are, or should be, illegal. This includes secret surveillance unregulated by courts, arrest without trial and indefinite detention[xxiii]. Former President Bush effectively legalised torture and kidnapping by the state. Intelligence agencies have been given special powers, people’s freedoms have been restricted without any crime having taken place, and courts now use secret evidence. US laws would place all government power in the hands of the President in the event of a catastrophic emergency[xxiv]. Amnesty International released a report in 2017 explaining that European countries had: “rushed through a raft of disproportionate and discriminatory laws… eroded the rule of law, enhanced executive powers, peeled away judicial controls, restricted freedom of expression and exposed everyone to unchecked government surveillance… dismantling hard-won human rights protections… EU governments are using counter-terrorism measures to consolidate draconian powers… and strip away human rights under the guise of defending them.
We are in danger of creating societies in which liberty becomes the exception and fear the rule.”[xxv]

The US and British governments have tried to convince us that there are huge numbers of fanatics around the world who have different beliefs, and who want to slaughter anyone who does not agree with them. The truth is that the number of people who think like this is small. There have always been some people with extremist beliefs, but these few people are unlikely to be a serious threat unless they have the support of the population. If we continue with our existing policies, where we invade other countries for oil and support repressive regimes, leading to the deaths of large numbers of people, then the number of those who hate us will grow, and terrorism will continue. If we seriously want to end terrorism, the following steps are necessary:[xxvi]

Stop invading other countries and committing terrorist acts ourselves

Stop supporting other nations that commit terrorist acts and repress their populations

Stop training, funding, arming and harbouring terrorists

Deal with terrorists like ordinary criminals

Attend to the grievances of people everywhere

As one commentator noted: “The irony of the ‘war on terror’ is that the US can win it only when it finally stops fighting it”[xxvii].

Key Points

The US and Britain have trained, armed and funded terrorists to overthrow foreign governments.

Most terror is state terrorism, carried out by governments. The US and Britain commit terrorist acts and support leaders who commit state terror against their populations.

The expression ‘war on terror’ is propaganda to justify US and British crimes.

The threat of terrorism in Britain and the US is exaggerated.

Further Reading

Nafeez Mosaddeq Ahmed, The War On Truth: 9/11, Disinformation, and the Anatomy of Terrorism

Mark Curtis, Secret Affairs: Britain’s Collusion With Radical Islam

References

[i] Gustave Gilbert, Nuremberg Diary, cited at www.snopes.com/quotes/goering.htm

[ii] Susan George, ‘Responses to the Table of Free Voices Event’, December 2006, at https://www.tni.org/my/node/11360

[iii] Edward Snowden, ‘How We Take Back The Internet’, 22.09, TED2014, 18 Mar 2014, at https://www.youtube.com/watch?v=yVwAodrjZMY

[iv] David Swanson, War is a Lie, p.215, 2011

[v] Former CIA director Robert Gates and US National Security Advisor Zbigniew Brzezinski have admitted to the US role in funding religious extremist terrorism in Afghanistan, at https://en.wikipedia.org/wiki/Operation_Cyclone

[vi] Michael Chossudovsky, ‘“Revealing The Lies” on 9/11 Perpetuates the Big Lie’, May 27, 2004, at www.globalresearch.ca/articles/CHO405E.html

[vii] Daniel L. Byman, ‘Comparing al-Qaeda and ISIS: Different Goals, Different Targets’, 29 April 2015, at https://www.brookings.edu/testimonies/comparing-al-qaeda-and-isis-different-goals-different-targets/

[viii] Mark Curtis, Secret Affairs: Britain’s Collusion with Radical Islam, excerpt at http://markcurtis.info/2016/10/05/londonistan-britains-green-light-to-terrorism/

[ix] Chalmers Johnson, Blowback: The Costs and Consequences of American Empire

[x] Mark Curtis and Nafeez Ahmed, ‘The Manchester Bombing as Blowback: The latest evidence’, 3 June 2017, at http://markcurtis.info/2017/06/03/the-manchester-bombing-as-blowback-the-latest-evidence/

[xi] Jeremy Corbyn, cited in Craig Murray, ‘That Leaked Labour Party Report’, 20 April 2020, at https://www.craigmurray.org.uk/archives/2020/04/that-leaked-labour-party-report/

[xii] Nafeez Mosaddeq Ahmed, The War On Truth: 9/11, Disinformation, and the Anatomy of Terrorism, 2005

[xiii] Nafeez Mosaddeq Ahmed, The War On Truth, pp.42–45; Nafeez Mosaddeq Ahmed, The London Bombings: An Independent Inquiry, 2006

[xiv] William Blum, Rogue State, 2000, pp.106–116; ‘Luis Posada Carriles, The Declassified Record’, National Security Archive Electronic Briefing Book №153, May 10, 2005, at http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB153/index.htm

[xv] Derek Gregory, The Colonial Present, 2004, p.198

[xvi] Expression usually attributed to Tony Benn, Question Time, 22 March 2007, BBC, http://en.wikiquote.org/wiki/Tony_Benn

[xvii] Dale Watson, ‘The Terrorist Threat Confronting The United States’, congressional testimony of Dale L. Watson, Feb 6, 2002, at https://archives.fbi.gov/archives/news/testimony/the-terrorist-threat-confronting-the-united-states

[xviii] Robert Pape, Dying To Win: The Strategic Logic of Suicide Terrorism, 2005, at https://en.wikipedia.org/wiki/Dying_to_Win

[xix] John Mueller, ‘Reacting To Terrorism: Probabilities, Consequences and the Persistence of Fear’, 6 March 2007, at https://www.researchgate.net/publication/228400183_Reacting_to_Terrorism_Probabilities_Consequences_and_the_Persistence_of_Fear

[xx] Sherwood Ross, ‘Is the terrorist threat another Bush-Cheney fabrication?’, Sept 16, 2007, at www.globalresearch.ca/index.php?context=va&aid=6793

[xxi] Ken Macdonald, cited in Clare Dyer, ‘There is no war on terror in the UK, says DPP’, Guardian, Jan 24, 2007, at https://www.theguardian.com/politics/2007/jan/24/uk.terrorism

[xxii] Richard Esposito, Matthew Cole, Brian Ross, ‘President Obama’s Secret: Only 100 Al Qaeda Now in Afghanistan’, at http://www.globalresearch.ca/index.php?context=va&aid=16389; IISS (International Institute of Strategic Studies) report, 2010, stating that the al-Qaeda threat had been exaggerated, discussed at http://www.guardian.co.uk/world/2010/sep/07/al-qaida-taliban-threat-afghanistan

[xxiii] Phyllis Bennis, ‘And the name for our profits is democracy’, in Achin Vanaik (ed.), Selling US Wars, 2007, p.228

[xxiv] Marjorie Cohn, ‘The Unitary King George’, 1 June, 2007, at www.globalresearch.ca/index.php?context=va&aid=5853

[xxv] Amnesty, ‘EU: Orwellian counter-terrorism laws stripping rights under guise of defending them’, 17 Jan 2017, at https://www.amnesty.org/en/latest/news/2017/01/eu-orwellian-counter-terrorism-laws-stripping-rights-under-guise-of-defending-them/

[xxvi] Noam Chomsky, ‘“The evil scourge of terrorism”: reality, construction, remedy’, 23 March 2010, at http://www.chomsky.info/talks/20100323.htm

[xxvii] Jemima Khan, ‘The things you say sound great Mr. President, so why do you end up disappointing us?’, Independent, June 25, 2011, at https://www.independent.co.uk/voices/commentators/jemima-khan-the-things-you-say-sound-great-mr-president-so-why-do-you-end-up-disappointing-us-2302561.html
https://medium.com/elephantsintheroom/6-scary-r-us-2-the-exaggerated-threat-of-terrorism-8b86c146453
[]
2020-12-04 19:02:40.268000+00:00
['State Terrorism', 'Us Empire', 'Violence', 'Terrorism', 'Propaganda']
How to improve collaboration with Fastcall CTI
Since February or March of 2020, the corona pandemic has been making headlines and driving digital transformation, or at least the digitalization of businesses. For sure, this crisis has forced, and continues to force, many a company into enabling its employees to work remotely. This has accelerated an already ongoing fundamental change in the way people work. It has also had an impact on work culture. People can no longer just drop by a colleague’s desk to ask a question; they need to pick up a phone. Meetings do not happen in the meeting room around the corner anymore but in virtual meeting rooms, using voice and video technology. Salespeople can no longer travel to their customers for face-to-face meetings. Instead, they were forced to adopt and master virtual meetings, something they didn’t believe they could possibly do in 2019. It seemed far too alien. Many of us have learned that this change is an opportunity rather than a threat. We needed to, and did, adapt, on a business and a personal level. In doing so, we realized that many things we thought “will never work” actually work quite well. This adaptation happened, and happens, in three distinct steps. We first used any available technology, just to keep the lights on. Then we started to put processes around the tools to become more effective, and now we are looking at improving efficiencies. These efficiencies will be achieved by harmonizing and reducing the number of tools used, and by integrating them deeply into each other and into essential business systems. And make no mistake, the change we have made will stick to quite an extent, as a recent Stanford study that surveyed 15,000 Americans found. The survey evidence suggests that more than twenty percent of all full work days will be supplied from home or distributed offices even after the pandemic ends. This jibes with the results of the fourth editions of the Salesforce State of Service and State of the Connected Customer surveys. According to the State of the Connected Customer report, the phone is customers’ second-most preferred communication channel (up from place three in 2019). According to the State of Service report, it is also the communication channel that is most widely supported by service organizations. In addition, the report finds that “agents also prefer the phone for complex issues that require two-way conversation, and don’t see the phone being replaced any time soon. However, how agents answer and process phone calls is evolving. The majority of service agents — aside from those on underperforming teams — are more likely to handle voice calls through a computer than a desk phone.”

Figure 1: Service organizations that report using the following channels; source: Salesforce State of Service, 4th edition

What is also notable is the strong growth of online chat, messenger apps, and video support between 2018 and 2020. By extension, the same preferences apply to colleagues cooperating on any given topic. They, too, need and want to communicate using phone, chat, or video. In addition, they want to use these channels without hassle, anytime, from everywhere, and with any device. So, there is certainly a necessity for businesses to enable their workforce for efficient digital communication. This can be done using standalone specialized tools, integrated and embedded tools, up to full communications suites like Slack or Microsoft Teams, which are mainly run standalone and in parallel to business systems.
From an efficiency and user experience point of view, integration is key. Employees need to be able to stay within the frame of the business applications they use rather than switching between applications and changing contexts. The communication tool needs to share the data and information it creates directly with the business applications it is integrated into. Only then is the tool of maximum help to people, and hence gets the best acceptance. From an IT point of view, installation and configuration must be smooth and simple, done without a major project or cost, which means that the tools need to be built into the business application rather than attached to it. This is where vendors like RingCentral, Aircall, or Fastcall come into the picture. Searching for CTI on the Salesforce AppExchange reveals many options. These vendors offer the tools that extend Salesforce to enable CTI. The difference between the former two and Fastcall is that Fastcall is specialized in, and built exclusively for, one vendor — Salesforce — and made part of the Salesforce code stack, whereas the others are not. Hence Fastcall is best suited to fulfil the needs of Salesforce users, including a seamless UX. And for the geeks amongst us: it works without API calls, which reduces the complexity of the IT landscape and therefore increases the ROI. Summarizing, if business leaders are looking for a strong, integrated toolset for internal as well as external communication to gain the efficiencies they want, they should look for the following traits in the software:

First and foremost: the software needs to do the job of making life easier for employees

It needs to be embedded into the host software, and not run side by side

Ideally, the vendor is specialized in the software stack that your employees use, and the software uses the same UI as the host software

The software needs to share information and data with the host software and write into the same database

The vendor needs to offer good support and regular, frequent updates to the software

That’s it. This catalogue will point you in the right direction for finding a well-integrated solution for your CTI needs.
https://medium.com/@twieberneit/how-to-improve-collaboration-with-fastcall-cti-d4c013f0d920
['Thomas Wieberneit']
2020-12-22 21:01:19.832000+00:00
['Collaboration', 'Salesforce', 'Cti', 'Fastcall', 'Productivity']
Welcoming Simply VC as a trusted Node of the Ki Chain
Ki is happy to announce its latest strategic partnership with Simply VC, a valued contributor to the Cosmos ecosystem and a top-notch validator for networks such as Cosmos, Chainlink, Polkadot, Celo and IXO.

🔗 Simply VC

Founded in Malta in 2013, Simply Virtual Currencies, or Simply VC as it is now known, was established to help support the development of the blockchain ecosystem, driven by a strong belief in the vision of a future where decentralized organisations, currencies, economies and networks form the backbone of a better, more egalitarian society. Simply VC’s philosophy is fundamentally aligned with the Ki Foundation’s goal of building a fairer world and ecosystem of value.

⛓ Ki & Simply VC for a stronger network

State-of-the-art infrastructure, expertise and resources will help strengthen the Ki Network going forward and ensure safety and long-term reliability.

“It is very refreshing to see a project applying blockchain technology to create a fairer value-sharing ecosystem, while at the same time hiding the technology as much as possible. This is exactly what is needed in the space. We are excited to help support the growth of the project and look forward to Ki leading this paradigm shift.”

Matthew Felice Pace, Founder & CEO @ Simply VC

We want to thank Matthew, Francesco, Isaac and the entire Simply VC team for their support, and for their ability to spot the value that has been created with Ki and the positive-feedback-loop-of-value approach to blockchain and business that the Ki ecosystem delivers.
https://medium.com/ki-foundation/welcoming-simply-vc-as-a-trusted-node-of-the-ki-chain-f1a47bc3971f
['Réda Berrehili']
2020-04-23 12:07:31.682000+00:00
['Blockchain', 'Ki Foundation', 'News', 'Cosmos Validators', 'Ecosystem']
Window Drama
Window Drama Photo by Rhendi Rukmana on Unsplash you press your forehead against the cold window glass while your hands hold with a tight grip the hot metal of the radiator outside no one notices the steam your breath projects onto the glass it is only you and your breath fixated on the window like a painting in a frame if you keep this position long enough maybe someone will stop and raise the eyes towards the silent showcase or maybe no one will notice the Morse signals your breath is drawing while your hands are getting warmer and warmer short term actor of a window drama say goodbye the curtain will soon cover this story while raindrops will clean the glass on the other side.
https://medium.com/share-the-love/window-drama-6245e7a6f360
['Ana-Maria Schweitzer']
2020-12-09 03:57:47.456000+00:00
['Poetry On Medium', 'Sharethelove', 'Poetry', 'Nostalgia', 'Loneliness']
Time Series and Trend Analysis
Time-dependent trends are a unique feature of time series analysis. If the sequence of events matters, then you need to analyze possible trends. These trends can ultimately be used for creating models that predict future values. I recently published articles about working with time series data and creating OHLC (open-high-low-close) charts in python; I will be continuing our discussion here.

Trends and Stationarity

Time series models work on the assumption that the series to be analyzed is stationary, meaning it has a mean, variance, and covariance that are not functions of time. It is extremely rare that you will load a time series dataset that fulfills all three principles of stationarity; you will have to remove these trends to achieve this goal. You will still retain the valuable information about the time-dependent mean, variance, and covariance; it will be applied later on. Once these three factors are satisfied, you can move on to applying time series models to your dataset.

An illustration of the principles of stationarity, Source: BeingDatum

The above image illustrates the three principles of stationarity:

Mean: The mean of the time series should not be a function of time, whether throughout the time series or during specific periods (seasonal). Image B depicts a series where the mean is growing over time, and Image A depicts a series where the mean is constant throughout the time frame. A seasonal time-dependent mean also counts against stationarity. The observations may oscillate around a mean, but they cannot fluctuate as a function of time for the series to be considered stationary.

Variance (Homoscedasticity): The variance of the observations must be constant throughout the time series. Image C depicts a series where the variance of the observations is a function of time; Image A depicts a series where the variance is constant throughout the series.

Covariance: The covariance between two observations over a consistent time interval is not a function of time. In Image D the covariance between two variables fluctuates over time; it is smaller towards the middle compared to the rest of the series. The covariance is constant over the same interval in Image A.

Only Image A depicts a time series that fits the principles of stationarity.

Dickey-Fuller Test

One useful statistical test to check for stationarity is the Dickey-Fuller test. In this test the null hypothesis is that the given time series is not stationary, and the alternative hypothesis is that the series is stationary. A time interval is selected to calculate the series’ rolling mean and rolling standard deviation. If the p-value falls below the chosen significance level (commonly 0.05), we reject the null hypothesis.

CPB Example

I revisited the data obtained from the Campbell Soup Company in my previous articles and used python coding to analyze the series. I utilized the API from unibit.ai to obtain the data and I stored it in a pandas dataframe. The year to date price data is below:

The year to date stock price of CPB, Source: unibit.ai

I selected a time interval of 20 days because that is how many trading days there are in a month. I then utilized pandas’ .rolling() method to obtain the rolling mean and variance of the series. I then utilized statsmodels’ implementation of the Dickey-Fuller test. The result was a p-value of 0.7469, which means that I cannot reject the null hypothesis (the time series is not stationary).
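For readers who want to reproduce this, here is a minimal sketch of that workflow. The CSV file name and the adj_close column are hypothetical placeholders (the actual data came from the unibit.ai API); the real APIs used above are pandas’ .rolling() and statsmodels’ adfuller function.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def check_stationarity(prices: pd.Series, window: int = 20):
    """Compute rolling statistics and run the (augmented) Dickey-Fuller test."""
    rolling_mean = prices.rolling(window=window).mean()
    rolling_std = prices.rolling(window=window).std()

    # adfuller returns: test statistic, p-value, lags used,
    # number of observations, critical values, and the best IC value.
    stat, p_value, _, _, critical_values, _ = adfuller(prices.dropna())
    print(f"ADF statistic: {stat:.4f}")
    print(f"p-value: {p_value:.4f}")
    for level, crit in critical_values.items():
        print(f"critical value ({level}): {crit:.4f}")

    return rolling_mean, rolling_std

# Hypothetical usage: a DataFrame with an 'adj_close' price column.
df = pd.read_csv("cpb_prices.csv")
rolling_mean, rolling_std = check_stationarity(df["adj_close"])
```

A p-value above 0.05, like the 0.7469 reported above, means we fail to reject the null hypothesis and should treat the series as non-stationary. A common first step to remove such a trend is differencing, e.g. prices.diff(), before re-running the test.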
A visualization of the rolling mean and variance for Campbell’s price is below:

A visualization of the YTD rolling mean and variance of Campbell’s adjusted closing price.

The rolling mean has a downward trend, while the rolling standard deviation shows only slight anomalies in an otherwise consistent series. The notebook I created for this article is hosted on my GitHub profile here.

Summary

The principles of stationarity are central to time series analysis. Once we identify and remove specific trends, we can then utilize powerful machine learning models that are designed for time series data. Python’s statsmodels library has an easy-to-use implementation of the Dickey-Fuller test to check for stationarity.
https://medium.datadriveninvestor.com/time-series-and-trend-analysis-6a4f255f3d6e
['Alex Mitrani']
2019-12-30 05:17:41.366000+00:00
['Machine Learning', 'Time Series Analysis', 'Visualization', 'Data Science', 'Timeseries']
Unlocking the Potential of Smart Parking Technologies to Improve Patient Experience
The overall hospital experience of patients and their families has many components — from the attitude of healthcare staff and the technical quality of medical equipment to backstage elements like the efficiency of processes and communication, and some seemingly minor aspects that happen before and after the visit: the parking experience. A parking facility is one of the neglected elements that can impact the overall satisfaction of patients and visitors. So what defines a modern, innovative parking facility, and what does “smartness” stand for in the context of a structure made of concrete? And why do innovation-oriented medical centers need to embrace digitalization with respect to their parking infrastructure as well?

The level of “intelligence” of a parking lot can be defined by its digitalization. This involves adopting software-based solutions that allow greater efficiency and more overall sustainability by reducing the time spent circling the lot and making use of empty spots.

Is hospital parking aligned with tech-equipped hospital buildings?

When it comes to the adoption of software and cloud-based services by hospitals, digital investments in healthcare are already broadly adopted, as they help increase organizational performance and productivity and add value to medical services. Clinics deploy systems supporting workflow management, such as EHR (electronic health record) documentation organization. Artificial intelligence, automation, and smart technologies are becoming widely used in preventive medicine, helping improve the accuracy of treatment and clinical decisions, as well as in medical imaging (computed tomography), telehealth visits, etc. At the same time, patients often interact with a healthcare facility through a digital platform like online booking. As much as 85% of general medical doctors use email in communication with patients and 64% of general doctors use WhatsApp. 11% of patients use online booking and 7% pay online. 41% of citizens use smart devices like smartwatches and other wearables to monitor their health[1].

While healthcare facilities embrace digital technologies, many hospital parking facilities today are hardly aligned with the frontier technologies adopted inside medical buildings. The COVID-19 pandemic may help the uptake of digital solutions in hospital parking facilities catch up with the digital transformation happening inside hospitals, so that parking does not lag behind tech adoption in the hospital facility itself. During the coronavirus crisis, many organizations shifted their strategies towards resource optimization. Buildings with a strong tech infrastructure fared far better in the pandemic than buildings without[2]. This emphasizes the role of intelligent technologies also when it comes to parking resource management. A flexible operating model enables reacting swiftly to changing circumstances (Scheme 1), such as fluctuating parking demand. We are witnessing how the facility management industry, now impacted by the pandemic, is re-imagining itself to adapt to this new normalcy — in terms of hybrid work (partly from home, partly from the office), changed mobility patterns and expectations regarding hygiene measures. All of these can be supported by solutions involving parking digitalization, which means adding agility to parking management, achieving higher health safety with technology, and eliminating the need for human support on-site.
Scheme 1: Higher automation — lower transmission risk

Hospitals are crucial when it comes to the adoption of preventive measures against COVID-19. Inside, medical centres implement special procedures and high levels of disinfection, but the parking facility, often the very first touchpoint between the patient and the hospital, remains forgotten. Pressing buttons to print paper tickets or touching screens on the payment machines poses a risk of coming into contact with the virus. It’s also inconvenient and time-consuming. These burdens could be eliminated by:

- introduction of parking time registration via automatic plate number recognition,

- creation of a digital parking ticket in a mobile application,

- the ability to pay from a smartphone,

- or automatic parking payment execution while the car is leaving the parking lot.

In the last case, the ANPR (Automatic Number Plate Recognition) camera system recognises the car, and payment for parking is automatically deducted from a credit card linked with the user’s account in the smart parking application (a minimal sketch of this flow appears at the end of this section). ANPR is a technology that reads vehicle registration plates as vehicles approach and checks them against a database, to which visitor number plate credentials can be continuously added or removed. Using modern technologies to park offers convenience, but it also allows visitors to avoid the potential risk of touching parking infrastructure and surfaces that may expose them to a virus. During COVID-19, traffic on the Good Doctor platform, which provides online medical services, increased more than eightfold; people didn’t want to go to hospitals[3]. The consumer mindset has changed, and people no longer feel the need to go to the hospital for a minor illness. But if they do go, they want to be certain about health security. In this respect, touchless smart parking processes serve as solutions safeguarding healthcare centre visitors.

Hospital experience impacted by parking procedures

Drawbacks of outdated parking infrastructure, difficulties with parking, and processes requiring manual intervention can be projected onto patients’ and their families’ assessment of the health centre. Digital solutions and app-based parking control can help prevent missed appointments and reduce no-show rates caused by issues with parking. When booking a hospital visit, customers can be notified by email about providing a plate number, on the basis of which access to the parking lot can be granted automatically. Providing license plate data allows for automatic entry and exit, making the process free-flowing and faster. Automation of parking simplifies permit validation, supports managing parking spot capacity and changing fees, and improves safety and parking enforcement by reducing human errors. Facilitating a smoother arrival experience, flexible payment options and a reduction of exit time can ease frustrations that may occur with regard to parking and, as a result, impact the overall hospital image. Smart parking applications available on smartphones give patients’ relatives the possibility to extend parking time during a visit and not rush if their parking ticket has been prepaid for a specific amount of time. When it comes to smart buildings and property technology in general, customer satisfaction is measured in terms of frictionless service and responsiveness[4].
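To make the touchless ANPR flow described above concrete, here is a minimal sketch of the entry/exit logic. All names are hypothetical illustrations, not any specific vendor’s API: the plate, the charge_card function, and the in-memory dictionaries standing in for the operator’s plate and session databases.

```python
import time
from dataclasses import dataclass

@dataclass
class ParkingSession:
    plate: str
    entry_time: float

# Hypothetical stand-ins for the operator's databases.
registered_plates = {"AB123CD": "card-token-001"}  # plate -> payment token
open_sessions: dict = {}

def charge_card(token: str, amount: float) -> None:
    # Stand-in for a call to a real payment provider.
    print(f"Charging {amount:.2f} to {token}")

def on_entry(plate: str) -> bool:
    """Open the barrier only for plates linked to a payment account."""
    if plate not in registered_plates:
        return False
    open_sessions[plate] = ParkingSession(plate, entry_time=time.time())
    return True

def on_exit(plate: str, rate_per_hour: float = 2.0) -> float:
    """Close the session and charge the linked card for the elapsed time."""
    session = open_sessions.pop(plate)
    hours = (time.time() - session.entry_time) / 3600
    fee = round(hours * rate_per_hour, 2)
    charge_card(registered_plates[plate], fee)
    return fee
```

The camera’s plate read simply triggers on_entry or on_exit; the driver never touches a ticket machine or payment terminal.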
Rethinking hospital parking

According to Global Mass Transit[5] data, the number of paper tickets (in mobility) will continue to diminish, and the same should happen for the parking industry and the paper parking tickets that are so familiar to users. Revised by the European Union, the Payment Services Directive (PSD2) promotes innovative mobile and internet payment services and recently allowed a parking ticket to be considered a digital good, one that could be displayed on a smart device like a smartphone and paid for digitally via mobile. This sheds light on app-based payments for parking and has some important implications for businesses. The PSD2 legislation means that facility owners can implement parking management systems that require neither traditional (paper) parking ticket dispensers nor cash machines. However, the adoption of ticketless parking by facility owners must be followed by user acceptance and the use of mobile phones and smart parking apps to enter, pay and exit parking lots in a “digital way”. A behavioural shift starts with a mental shift. And although the adoption of mobile apps increased by 40% during the pandemic, breaking the habits of the masses and changing the custom of taking a paper ticket when entering a parking lot may take a while. Usually, the problem does not lie in delivering technology to a project but in getting customers accustomed to new ways of interacting (in this case, interacting with parking). If the parking reservation system on behalf of hospital visitors is linked with doctor appointments, the hospital staff using it must be comfortable with this solution. Thus, doctors and administrators must understand how using the technology will help them do their jobs better while enhancing visitor satisfaction.

The future of (hospital) parking is digital

To enable digital transformation, stakeholders must change the way parking operations are performed — from drivers using smart parking apps, through parking facility owners deploying intelligent parking solutions, to municipalities harnessing digital solutions for parking as complementary to existing processes of city parking management and fare collection. An important educational role has to be played by municipalities and parking owners alike, to communicate efficiently to end-users the importance and benefits of smart parking solutions. Current legacy, siloed parking systems are becoming more expensive to maintain. In comparison with the substantial costs of parking terminals and associated equipment like parking ticket dispensers (which should be resistant to moisture and dust), and the paper itself (which must be top coated), smart parking technologies offer:

- cost savings

- improved operational efficiency

- optimisation of parking capacity utilisation

- easy parking payments, convenient for end-users

- cheaper collection of parking revenue for parking facility owners (compared with a process based on paper tickets and cash machines).

The adoption of city-wide smart parking can depend on the digitalisation maturity of the municipality, while customer uptake is conditioned partially by the penetration of mobile phones in a given location and is impacted by awareness that solutions like smart parking applications exist and of how they can ease parking pains. Hospitals are among the largest employers in a city and can be part of city-integrated parking management (for both on-street and off-street parking), resulting in more optimised traffic.
Hospital digital parking can be one element of a larger ecosystem of organisations migrating to new digital systems, increasing the “smartness” of city mobility as a whole. If every organisation with parking infrastructure decided to digitalise it, this could help scale up the digital transformation of parking and mobility across the city. Like any other parking facilities, hospital parking lots are no exception and can (and should) be digitalised — paving the way to more flexible management of spaces in a smart, technology-enabled way. Integrating data from public and private business entities about parking availability can constitute an urban parking data platform where both end-users and parking owners benefit from parking being made more accessible.

(Parking) sharing is caring

Digitalising hospital parking empowers hospital administrators to manage parking resources more efficiently and to sell the surplus of parking spots externally. This allows for agile operation of the property’s parking and helps with adaptation to the new reality of disrupted mobility and working patterns — which consequently affect the demand for parking. The virtualisation of parking assets and the management of parking space via software make it possible to commercialise the use of a parking facility — i.e. to sell or lease unoccupied parking spots to stakeholders other than hospital visitors. External clients (and payers) of a hospital parking lot may include:

- drivers (who are not hospital employees, contractors, patients or their relatives) passing by the hospital and looking for space to park

- other companies located nearby that need to provide parking spots for their clients, employees or guests.

With the digital parking upgrade, when all the parking spaces receive their “digital twin” and can be seen on a desktop or in a mobile application, these parking spaces may be put on sale on the market (i.e. via an application) or allocated to adjacent business partners. As a result, imbalances caused by changing parking demand can be removed, as parking space not used, for example, in the afternoon by one organisation can be utilised by other businesses or individuals. Parking sharing contributes to the sustainability of the building and to more balanced urban mobility (fixed parking requirements and an oversupply of parking spaces are often criticised, and more efficient — through technology — utilisation of existing parking lots is an answer to this problem). Another business model based on a digital parking ecosystem and suitable for hospital facilities is subscription. In this case, parking is prepaid for a specific period, giving the subscriber, whether individual or business, the right to use the hospital’s parking space. In general, upgrading parking kiosks with a contactless system and transforming parking into an easy, touchless, frictionless process where visitors can seamlessly enter based on plate number recognition can enhance the parking experience. This applies not only to clinics but to any other organisation that plans to offer free or discounted parking for its clients and deploys smart parking solutions to streamline such operations.

The value beyond parking

Since a hospital visit is usually an experience burdened with negative emotions, seamless parking provides not only a faster way to park, pay and drive away but also an emotional value — peace of mind.
People heading into a clinic are in a particular emotional state and may perceive even the smallest issue as a stressful event. That is why it is important for a healthcare provider to eliminate any pain point (including parking) that may cause unnecessary stress to patients and their relatives. To assure the composure of hospital visitors, the parking stay can be prepaid by visitors using a reservation feature in an app, or covered by the hospital administration. In the latter scenario, validating whether an individual is entitled to free parking and linking them to a scheduled appointment could be processed digitally. Not only will this enhance the visitor experience, it will also increase operational efficiency, reduce costs and eliminate parking misuse or overstays. Contactless parking verification and access granting — without the need to interact with anyone on-site — can also be deployed for seamless parking entry/exit for hospital staff.

Future-proof and cost-efficient parking facility

The pandemic was a game changer and had a tremendous impact on the way companies operate. But it also caused a decisive transformation during which digital technology was found to be the only right way to respond to changing circumstances and customer expectations. The impact of the COVID-19 crisis can be considered an impetus for change and for faster adoption of digital parking solutions as a means to optimise resource allocation and monetise the owned parking infrastructure. Technology-enabled, software-based parking makes operations more agile and ensures the resilience of parking management. At the same time, parking digitalisation is a crucial element for future upgrades towards fully autonomous parking operations — with payments automatically executed when the driver leaves the car park, without the need for manual intervention. Replacing old legacy systems and hardware-intensive parking operations empowers both end-users and parking owners, helping them reduce the cost of parking management. In research assessing OPEX costs in healthcare (part of which were parking lot costs), the comparison of base case costs with costs after incorporating digital services shows savings of up to 20%[6].

Scheme 2 — Cost savings due to digitalisation

Table 1 — OPEX detail

Summary

Investment in digital technology is a driver of innovation and competitiveness, while having a shorter payback period than investment in physical infrastructure or hardware-intensive technologies. In the healthcare sector specifically, digital innovations improve the quality of healthcare and support a patient-centred strategy. The COVID-19 pandemic has prompted advancements in smart city tech and the adoption of new technologies that support mobile and contactless operations. Automated parking access and exit and cashless parking payments enhance customers’ confidence — minimising the risk of virus transmission — and, as a result, boost the total experience of the service. In a world increasingly driven by technology, the approach to parking should be rethought. Medical centres adopt diverse intelligent technologies, AI and machine learning to support their operations. Similarly, their parking facilities can be digitally upgraded, making parking more efficient, more convenient, more sustainable and safer (virus-free).
The new opportunities presented by smart parking technologies improve current processes, benefiting all stakeholders: hospital visitors (ticketless, unmanned, cashless parking), hospital facility managers (greater operational efficiency, cost cuts, additional revenue from selling parking spots to non-visitors), business partners (leasing the hospital’s parking spots in an elastic model), and the city (improved parking traffic, more balanced mobility, no waste of public space). Replacing or complementing old legacy parking technology (cash machines, ticket-based parking) with advanced smart parking technology is a fundamental shift in the way parking is used, and a step towards a 360-degree digital transformation of the hospital: one that does not exclude parking facilities from a long-awaited digitalisation.
https://medium.com/geekculture/why-smart-hospitals-need-digital-parking-unlocking-the-potential-of-smart-parking-technologies-to-d63992d332e9
['Iwona Skowronek']
2021-03-14 09:18:27.862000+00:00
['Smart Parking', 'Digital Transformation', 'Smart Hospitals', 'Smart Cities', 'AI']
What is Blockchain? Here’s Everything You Need to Know
Just like a lot of the technology in the world, the digital assets that have come to light over the past decade, like Bitcoin and Ethereum, need to rely on some sort of database that’s capable of tracking and maintaining a record of a large volume of transactions in a secure manner. The solution for this need to maintain a large and secure database is blockchain technology. Blockchain technology saw its first implementation back in 2009. The technology essentially consists of blocks of data that hold batches of transactions which have been time-stamped. Each block is linked to the previous block and secured through cryptography. This is why it is referred to as a chain of blocks. The world is becoming smarter every day. As it becomes more and more interconnected, cryptocurrencies are becoming more popular throughout the world. They present a growing market of opportunities for those who might not want to follow the traditional, long-standing banking structure. Blockchain, however, is more than just a means to power cryptocurrencies. While cryptocurrency is the industry where blockchain saw its invention and where it sees its primary use, it also offers the possibility of creating a fraud-proof system for transactions and exchange. This is why it has a lot of potential to be used outside the world of cryptocurrencies. That is what has attracted a lot of interest from diverse industries like supply chain management, traditional financial institutions, food production and many more.

How Does Blockchain Technology Work?

Blockchain technology is touted as a revolutionary new technology. It is a significant innovation that has been compared in significance to the invention of the wheel. It is unique and potentially carries a lot of opportunities. In order to understand why it is such an important innovation, it is necessary to understand how it works. Blockchain technology is unique because each block of data contains the cryptographic hash of the block before it. This creates an immutable chain of data. The cryptography secures the data. Any attempted tampering with the data is easily detectable and is simply impossible to go through with. It is not just the fact that the blocks are all cryptographically linked with each other that makes blockchain technology so secure. Another reason why it is so secure is decentralization. Each of the computers (or nodes) that have the software installed has a copy of the database, which is constantly being updated with the addition of new blocks. There is no such thing as a central storage for all the data. This means each new block being added to the database is simultaneously propagated throughout the network of computers. Any new record being added to the blockchain has to match the requirements of the chain, otherwise it gets rejected by the whole network. It is a truly egalitarian method of database maintenance — the likes of which has never been seen before. There are other aspects of transaction requirements which define what a valid entry to the blockchain database is. For instance, if you look at a Bitcoin transaction, it has to be signed digitally by the party sending the currency units, and it has to be verified that the units have been received by the receiving party. Once the transaction has been verified by the whole network, the Bitcoin transfer is assigned and added to the blockchain record.
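A toy sketch can make the chaining idea above concrete. This is a deliberately simplified illustration, not Bitcoin’s actual block format: real blockchains add digital signatures, Merkle trees, and consensus rules on top of this basic structure.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """SHA-256 over the block's canonical JSON serialization."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions: list, previous_hash: str) -> dict:
    """Create a time-stamped block that stores its predecessor's hash."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }

# Two linked blocks: tampering with the first block changes its hash,
# which then no longer matches the second block's stored previous_hash.
genesis = make_block(["alice pays bob 1 unit"], previous_hash="0" * 64)
second = make_block(["bob pays carol 1 unit"], previous_hash=hash_block(genesis))

genesis["transactions"][0] = "alice pays mallory 1 unit"  # attempted tampering
print(second["previous_hash"] == hash_block(genesis))     # False: detected
```

Because every block commits to the hash of the block before it, rewriting any historical record would require rewriting every subsequent block on every node in the network at once, which is what makes retroactive changes detectable.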
Some Of The Most Prominent Blockchain Databases

Over the course of the past few years, blockchain technology has boomed in popularity. More and more people are realizing the potential uses it has outside of the cryptocurrency world. There are several industries taking an interest in developing and implementing blockchain technology for their own use, including the financial and other technological sectors. Of course, Bitcoin is the most prominent use case for blockchain technology, but there are others making use of, or planning to make use of, the revolutionary new technology. Arguably one of the most prominent players in the new blockchain technology industry is Hyperledger. It is an open-source collaborative effort created by the Linux Foundation that operates across several industries. They are making significant headway in popularizing the use of blockchain-based digital ledger systems. Since the release of Hyperledger back in the middle of July, it has come a very long way, with a lot of businesses operating using their platform. The global technology giant IBM announced in March of 2017 that it will develop its own blockchain-based service, which follows the same line of thinking as the Hyperledger project by the Linux Foundation. It will allow customers to create secure blockchains across several industries and offer them the opportunity of interoperability. Then there is the professional services industry, which is showing increasing interest in the world of blockchain technology. The reputed ‘big four’ firms of the world are planning to utilize blockchain technology to change the way databases are maintained in the legal ecosystem. As of now, Ernst & Young is the only one of the ‘big four’ to have actually made its blockchain project public, but it is a big move when you look at blockchain technology implementation on a macro scale. Another big move in the world of blockchain technology was the interest shown by the London Stock Exchange. The announcement was made early in 2018, and they stated their intent to use blockchain technology to improve transparency around shareholding information for unlisted businesses. This is going to become a game-changing scenario if it comes into play for the London Stock Exchange.

How Secure Is Secure?

The simple fact of the matter about blockchain technology is that all the verification and updating of records on the blockchain is done under cryptographic protection. This means that, theoretically, the protection that blockchain technology has to offer is much better than what the traditional banking system has been offering. This is a big claim to make, but there is plenty of reason to believe that it is the case. One of the best security qualities that blockchain technology has is the fact that it is decentralized. This means data cannot be added to the blockchain without verification, and there is no possibility of changing records on the blockchain’s ledger retroactively. It is a truly immutable database system, and that makes it perfect for the storage of important information and transactions in the financial sector. Of course, another benefit that comes from blockchain technology is that it maintains transparency, yet the privacy of the users is never compromised.
That privacy has been a problem since cybercriminals started using cryptocurrencies for their illegal transactions online, but significant work is being done to discourage such activity. There is a lot more good to be accomplished using blockchain technology, and it is the future.
https://medium.com/predict/what-is-blockchain-heres-everything-you-need-to-know-3762adfa80fc
['Darren Lee']
2018-10-16 11:21:51.320000+00:00
['Blockchain', 'News', 'Technology', 'Knowledge', 'Cryptocurrency']
Cloud Computing vs Traditional IT Infrastructure
Cloud computing is really popular nowadays. More and more companies prefer using cloud infrastructure rather than the traditional kind. Really, it’s much more reasonable to buy a cloud storage subscription than to invest in physical in-house servers. However, are there any benefits to using cloud computing instead of the traditional approach? Let’s review the main differences.

The differences between cloud computing and traditional IT infrastructure

Elasticity and resilience

First of all, you do not need to buy the hardware and maintain it with your own team. The information in the cloud is stored on several servers at the same time. This means that even if one or two servers are damaged, you will not lose your information. It also helps to provide high uptime, up to 99.9%. When we talk about traditional infrastructure, you have to buy and maintain the hardware and equipment yourself. If something happens, you can lose the data and spend a lot of time and money fixing the issues.

Scalability and flexibility

Cloud computing is the perfect choice for those who do not constantly require high performance but use it from time to time. You can get a subscription and use the resources you paid for. Most providers even let you pause the subscription if you do not need it, and at the same time, you’re able to control everything and get instant help from the support team. Traditional infrastructure is not so flexible. You have to buy equipment and maintain it even if you do not use it. In many cases, it’s even more expensive because you might need your own technical crew.

Automation

One of the biggest differences between cloud and traditional infrastructure is how they are maintained. A cloud service is maintained by the provider’s support team. They take care of all the necessary aspects including security, updates, hardware, etc. Traditional infrastructure requires your own team to maintain and monitor the system, which takes a lot of time and effort.

Cost

With cloud computing, you do not need to pay for services you don’t use: the subscription model means you choose the amount of space, processing power, and other components that you really need. With traditional infrastructure, you are limited to the hardware you have. If your business is growing, you will regularly have to expand your infrastructure, and at the same time, you will have to support and maintain it.

Security

Many people are not sure about the security of cloud services. Why might they not be secure? Since the company uses a third-party solution to store data, it’s reasonable to worry that the provider could access the confidential data without permission. However, there are good solutions to avoid such leaks. As for traditional infrastructure, you and only you are responsible for who is able to access the stored data. For companies that handle confidential information, it can be the better solution.

What kind of infrastructure is a good choice for your business?

It depends on what your company does and what your needs are. Nevertheless, more and more organisations today prefer cloud infrastructure.
https://medium.com/@sorable/cloud-computing-vs-traditional-it-infrastructure-84f0d08bae13
[]
2019-02-12 02:29:25.487000+00:00
['Company', 'Cloud Computing', 'Security', 'Automation', 'Business']
How to Draw in 3D With SwiftUI
How to Draw in 3D With SwiftUI

Using native SwiftUI code to draw in 3D

In an article I published in April, I used the rotate tag in SwiftUI to rotate dominoes. The tag in question can be thought of as a convenient option. I say that because there is a more advanced one that gives you much more control. The first two elements — the degrees of movement and the axis on which you want to turn the object — are exactly the same as with the convenient version. But in the advanced method, you have three other variables you can work with. As such, this tag is one of the most complex to use in the SwiftUI armory. Hopefully, this complexity will clear up as you read on. Let’s start with the anchor. This basically determines which side you want to pivot on. The effect of the anchor is closely linked with the axis on which you are pivoting.

Rotating on the Z-axis around 360 degrees.

In the example above, I changed the degree of pivot around 360 degrees, initially doing so on the Z-axis. Here is the code behind the scenes (note that this gist is for the following piece):

You can clearly see the rotation3DEffect tag in the code above, with the only variable I am animating being pivotDegree. Here is the code in action, making the green, blue, and purple boxes turn:

Rotating on the X-axis around 360 degrees.

Of course, there is also the Y-axis on which you can pivot. The next example shows just that:

Rotating on the Y-axis around 360 degrees.

Now I didn’t mention it, but we did use/set the perspective here too, setting it to 0.5. It controls the amount of swing. The higher the number, the greater the swing. Here is a GIF showing a larger perspective value with some words of wisdom:

Rotation of each text object with perspective set to 4.

Note: It disappears as we swing through 90 degrees. We’ll use that later. This brings us to the most challenging of the variables: the anchorZ. Set the code running the animation below to the values 10, 20, 30, 40, 50, 60, and 90. I chose those values because they correspond with the size of the rectangle I am moving about. The anchorZ controls the centre point of the circle around which you are pivoting. You can see the effect most clearly with the red/pink square on the very end, which pulls to the back and then swings forward to the front.

Rotating on Z-anchor around 360 degrees.

In truth, I’ve been simplifying things just a little. You see, the anchor values I’m using are of type UnitPoint. There are a few presets, but you can mix and match your own combos to get some pretty cool effects:

@State var anchorPView: UnitPoint = UnitPoint(x: 0, y: 0)

Rotating on a custom axis around 360 degrees.

The values are a little odd since we’re rotating on the X-axis and varying the Y-value in the UnitPoint variable. A value of zero here is in effect a UnitPoint.top, and a value of one is a UnitPoint.bottom. The other values are somewhere in between. In terms of degrees, we’re travelling 360, so a full circle. Note how similar it looks to the previous example. It is using perspective, but only a value of 0.5. Our card at the end of this animation swings in front and then back in line. Here is the code for a portion of this piece:

Be warned, things can get confusing because many of the attributes are interlinked. So if you’re not careful and change two opposing variables, they will cancel each other out and it’ll seem like you didn’t change anything at all. We covered a good deal of ground on 3D rotations in SwiftUI, but what about an object? Let’s focus on building a cube.
We covered a good deal of ground on 3D rotations in SwiftUI, but what about an object? Let's focus on building a cube. It has six sides, although to keep things simple, I'm only going to worry about four of them. Two of our sides will start at 90 degrees, making them invisible, and two of our sides will be at 0 degrees (so face on). As we rotate things, we'll need to use an offset on two of the faces to keep everything together. To help us get the rotation correct, we use a perspective value of at least 0.5 so we can see if we got it right as it turns. It helps too if we add a line of text to each face so that we can tell which way it is flipping as it turns. Sometimes, it isn't so obvious. Finally, we use opacity as we build so that we can mask the faces we already know are good. Use opacity at the end to get your front and back visually looking right. Reduce the perspective to a minimum too. Otherwise, it simply won't work. Keep in mind, as I said before, that many of the parameters interact with each other, so be careful as you move forward not to change more than one at a time. Follow those rules and soon you'll find yourself looking at a cube just like this one — a cube you can leave as a wire frame or indeed add a skin to. Rotating cube using SwiftUI. A sketch of the code for this GIF appears below. As I said, once built, you can add a skin too and some subtler shading. And there you have it: a 3D object in SwiftUI.
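Again, the original gist is missing, so this is a reconstruction of the idea rather than the author's exact code: four faces spun around a common vertical axis, each pivoting about a point half a side-length behind itself, with a simple angle test standing in for the opacity masking described above. The rotation is driven by a timer so the visibility test is re-evaluated every tick.

```swift
import Foundation
import Combine
import SwiftUI

// A sketch of the four-sided "cube" described above; sizes and timing
// are my own guesses.
struct SpinningCube: View {
    @State private var spin = 0.0
    private let side: CGFloat = 120
    private let ticker = Timer.publish(every: 0.02, on: .main, in: .common)
        .autoconnect()

    var body: some View {
        ZStack {
            ForEach(0..<4) { index in
                face(index)
            }
        }
        .onReceive(ticker) { _ in
            spin = (spin + 1).truncatingRemainder(dividingBy: 360)
        }
    }

    private func face(_ index: Int) -> some View {
        let angle = (spin + Double(index) * 90).truncatingRemainder(dividingBy: 360)
        // A face stays visible while it points toward the viewer,
        // i.e. within 90 degrees either side of the front.
        let visible = angle < 90 || angle > 270
        return Rectangle()
            .strokeBorder(Color.blue, lineWidth: 2)   // wire-frame look
            .overlay(Text("face \(index)"))           // tells you which way it flips
            .frame(width: side, height: side)
            .rotation3DEffect(
                .degrees(angle),
                axis: (x: 0, y: 1, z: 0),
                anchor: .center,
                anchorZ: -side / 2,   // pivot half a side behind the face
                perspective: 0.1      // keep it low, as the article warns
            )
            .opacity(visible ? 1 : 0)
    }
}
```

The faces pop in and out at exactly the edge-on angles; smoothing that transition (or skinning the faces) is left as the polish the author alludes to.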
https://medium.com/better-programming/how-to-draw-in-3d-with-swiftui-7989cfcd35fc
['Mark Lucking']
2020-06-01 17:15:07.938000+00:00
['Mobile', 'Swift', 'Programming', 'iOS', 'Swiftui']
What Should Systems Neuroscience Do Next? Voltage Imaging
What Should Systems Neuroscience Do Next? Voltage Imaging The firing and the wiring at the same time The best thing about being a neuroscientist is that neuroscience never stands still. Barely a week passes without some new major result, a sparkling technological breakthrough, a provoking theoretical idea. And the sheer complexity of brains means the questions available are practically infinite. So even if your specific corners of brain research have briefly slowed their breathless pace, there is always more to learn. Always new questions to tackle. Indeed, there are whole regions of the mammalian brain whose mysteries have barely been probed, and which will no doubt turn out to be crucial for our understanding. My money's on the zona incerta, the globus pallidus, and the septum. Exciting times. The worst thing about being a neuroscientist is that neuroscience never stands still. Barely a week passes without some new result, breakthrough, or theory that you don't have time to read; that ends up filed for later, destined never to be opened; or to be skimmed and not assimilated. And the sheer complexity of brains means the questions available are practically infinite. So even if you're lucky enough that your specific corners of brain research have briefly slowed their breathless pace, there is always more to learn. Always new questions to tackle. Indeed, there are whole regions of the mammalian brain whose mysteries have barely been probed, and which will no doubt turn out to be crucial for our understanding. Worrying times. Paradoxically, this best and worst of all possible worlds in mind research is created by mindless churn. Of doing whatever can or could be done next. Not what should be done next. So, hubristically, I thought I'd plant my feet against the torrent and take a stab at separating the should from the could. A series of occasional pieces that set out to answer the question: what should systems neuroscience do next? In this first piece, we start with the very definition of systems neuroscience. It is at heart the study of the activity of multiple individual neurons, of the messages they are sending. Everything we see, do, or think in the moment is through neurons sending spikes to each other. So a clear priority for systems neuroscience is to make the best recordings of the output of the most neurons, and with as much metadata about those neurons — where they are, how many there are, what type they are — as possible. We have two mainstream ways of recording the output of individual neurons: insert electrodes to record spikes, or image calcium fluxes in the neurons' bodies as a proxy for spikes. Both have unique strengths, both are constantly evolving in the white-heat of technology (and cash), but both have problems that are solved by the other. So our first "should": we should find a recording method that combines the strengths of both. The great news is that we already know the answer. The answer is voltage imaging. And if we get it solved, voltage imaging comes with a massive bonus prize, something neither electrodes nor calcium imaging can buy us: live connectomics.
https://medium.com/the-spike/what-should-systems-neuroscience-do-next-voltage-imaging-9bfa5d6a4df9
['Mark Humphries']
2019-12-30 20:32:09.324000+00:00
['Neuroscience', 'Psychology', 'Artificial Intelligence', 'Machine Learning', 'Science']
A Low Cost of Living Can Be Spending $10,000 a Month on Cars, Beer, and Blow
The standard setup in these enclaves — a modest or out-sized California Craftsman style home with a Tesla or Audi (or both) in the driveway. This typical household likely has a higher combined car payment than my monthly rent. The people who live in these houses could probably rent my apartment three or four times with just one monthly mortgage payment. Or maybe the run-of-the-mill $50,000 a month earner rents a luxury apartment in the heart of Hollywood for like five or ten grand a month. These apartments exist. Somebody's living in them. The Tesla or Audi or maybe the Porsche stays in the underground garage. You can see the Peloton from the window as you walk Vine Street. The Equinox membership gathers dust alongside expensive art, Restoration Hardware furniture, and a fully-stocked home bar. When we're able to go out and party, this urban dweller goes hard, dropping more discretionary income in a month than most of us make in a quarter on beer and blow. Almost by accident, they manage to save a few thousand bucks a month between a 401(k), a massive checking account cash cushion, and a Robinhood account consisting of the platform's ten most popular stocks. You can compare what you think you'd do (or what you're doing) on a $50,000 a month salary to what I hypothesize I could do and what I assume at least a heaping handful (probably tens, if not hundreds of thousands) of Los Angeles residents do. You can also make judgments along the way. You can consider frugality righteous while looking down on the examples of high costs of living. Or you — we — can stop being so judgy. We can acknowledge the way people spend and live can change commensurate with income on the basis of several factors: Socialization Psychology Means to the same/similar end Different paths to the same/similar destination We're socialized to spend more as we make more. Psychologically, it might feel unavoidably weird to attain $50,000 a month in cash flow yet maintain an insanely low cost of living, in part due to the socialization. Why bother? You're successful. So, act as fucking if. We all want to get to a point where we don't have to work. Some people still view this as a life phase known as retirement. Others call it financial freedom. I prefer financial flexibility. We use money as the means to get there. We all operate on different timelines because: Socialization influences each of us to different extents and in different ways. We're all unique psychological animals. Even if we're sort of "the same" in the way we do life, no two contexts are identical. Money as a means. To one person, this means saving X every month to get to Y by Z. For this person, X, Y, and Z do not change with how much money they make. Consider the person who embodies the thinking behind this last bullet point. Their end is "retirement." They don't care much about getting there before they turn Z. By saving X each month, they're certain to have Y by the time they turn Z. Everything else is excess gravy, so they spend it. Who cares if they spend it on an expensive house, a sweet apartment, a sick car, and beer and blow? Who hasn't dreamed of a main home in LA and a pied-à-terre in San Francisco? They save what they need to save to retire in 40 years. That's more than enough. They make so much they don't even have to think about it. Maybe they don't save anything at all. They'll sleep — and save — when they're dead. You know the attitude. Your path to retirement, financial freedom, or financial flexibility might look different.
Your head and heart might tell you to continue spending just $3,000 a month no matter how much more money you end up making because you can use the surplus to get to retirement, financial freedom, or financial flexibility much more quickly. You want the time to sleep and do other cool shit long before you're dead. You compromise today (some might say sacrifice) to get something you want more tomorrow. The present is a key part of the means to your end. If you could be frugal for two years on a $50,000-a-month salary, you'd do nothing but save for two years. You'd be living the dream. Your dream. You view the here and now as precious time towards whatever your goal is. The other person thinks of today more urgently — as your only life. Work hard. Play hard. Not tomorrow, but now. They're not giving anything up to get to where they want to go any faster. To them, today represents living, not tunnel vision to a tomorrow that might never come, no matter how much you save. I mean, you could die tomorrow. Or in a week, month, or year or two. Imagine dying with all of that money saved, never able to use it. That would be a bummer. We all know people who think this way. You might think this way. Of course, we could muddy the rhetorical, value-judgment waters by bringing ideas such as waste, materialism, and inequality into the equation. But let's not go there today. All else equal — and, I know, it's never equal — why does any of this matter? I'm all about keeping dogma out of personal finance. Even if you make it your goal to die with zero dollars in the bank, who am I to judge? While this might sound extreme, it's necessary. I get mildly pissed when homeowners refer to people like me as "throwing money away on rent." Some homeowners get all smug and talk down to those of us who rent. This serves no purpose other than being counterproductive (you're just gonna make me want to rent even more!) and unhelpful. Empower, don't belittle me. There's no one way to dwell and subsequently spend/save/invest money to fund your vision of what life ought to be. I catch myself judging like this all the time — but in reverse. I believe in renting, so I scoff at owning. I adhere to a relatively insane low cost of living, so I chide, even if under my breath, people who throw thousands away each month on a mortgage and car payments, at bars and restaurants, and on beer and blow. See — wait — back up! I typed, "throw thousands away." This is no joke. In a sentence that's part of an entire article that essentially warns against money-related value judgments, I, without thinking, made a value judgment. It came naturally. It didn't even feel out of place until I reread it. They're not "throwing money away." They're just doing life differently and on a different timeline than me. Or they don't want to do what I want to do. They want to do the opposite — or something not quite the same. I can't know unless I ask. It's almost always better to ask, acknowledge, accept, learn, and discuss than to criticize. Bottom line — you can't cherry-pick the personal finance decisions others make that you're going to judge. You can argue your own preferences and even argue against somebody else's. However, the second those of us obsessed with a defensive style of money management (e.g., spending matters more than income, be frugal, save and invest ALL of your monthly surplus) judge and effectively cast off the way others approach money, we've done a disservice to the larger personal financial conversation.
I’d rather: Welcome, all kinds of people and approaches. Dig into the socialization and psychology that influences, if not dictates, how and why people spend and save. Mine the pool of experience — from one extreme to the next and everything in between — to generate a deeper understanding of individual and — where we can aggregate — collective personal financial situations. This is how we have healthy and productive conversations about money. We welcome everybody in. We don’t judge. We learn from those we can’t quite understand. We take something from every experience to help inform and craft our own. Plus, if we actually want to persuade others to do it the way we do it (or close), we’re better off taking this approach. Nobody likes being talked down to or told there’s only one means to what is a dynamic, very personal end.
https://themakingofamillionaire.com/a-low-cost-of-living-can-be-spending-10-000-a-month-on-cars-beer-and-blow-63474ec456bf
['Rocco Pendola']
2020-12-13 13:54:15.359000+00:00
['Money', 'Budget', 'Personal Finance', 'Saving', 'Self']
How To Engineer Viral Social Proof
Step 2: Install Viral Loops The Viral Loops home page. Viral Loops is an out-of-the-box referral marketing tool that lets you build rewards campaigns without having to write a single line of code. After signing up for a 14-day free trial, follow their step-by-step process to quickly get up and running. Viral Loops uses tried and true templates for referral marketing based on campaigns run by successful startups. Let's say you want to build a pre-launch waiting list before your product's ready to go out the door, like us. You can take advantage of The Startup Prelaunch template that Robinhood used to get one million users before they even launched. Tried and true templates for referral campaigns, offered by Viral Loops. If you're not sure which template is best for you, the Viral Loops blog has a ton of useful advice. They even provide an entire handbook on referral marketing, as well as a number of actionable case studies that informed their templates. Those studies include Robinhood's incredible pre-launch as well as other successful campaigns like Harry's razors getting 100,000 emails in a single week or Dropbox's 3900% growth. Create Campaign After signing up, hit the big red Create Campaign button to get started. After you do, you'll hit a page where Viral Loops will recommend specific campaign templates based on your use case. Template options based on other successful startups. If you want to ignore their recommendations and just see all your options, click All templates in the top left corner next to Recommended. Switch back to Recommended if you want to go back to using their suggestions. For the following walkthrough example, we're going to use our new software recommendation blog, Scale Combo, with the general rewards program that fueled Dropbox's growth. Scale Combo seeks to empower entrepreneurs with software combo recommendations to help them move faster, aim higher, and build better companies using the right resources. It's pretty much all the things we wish we knew back in January 2018 when we first started as entrepreneurs. If we had, we would have saved months if not years of time. Viral Loops and Proof are both a core part of our marketing stack and are also the first Scale Combo we're recommending to entrepreneurs. Start by filling out some basic info. You can use your own existing website for the referral campaign, but Viral Loops also offers its own landing page builder if you don't already have one. Since we're just launching Scale Combo, we're going to use Viral Loops pages. Add Rewards Next up is everyone's favorite part — adding in the actual rewards. As mentioned above, if you're pretty cash-constrained, you can choose rewards that work directly with the product you're already building. For Dropbox, that led to a two-sided referral campaign where the referrer and invitee both received 500 MB of free storage up to 16GB. Two-sided referral campaign in Viral Loops. For Scale Combo, we decided to offer different rewards on the invitee and referrer side. We're giving free Scale Combo stickers to all invitees. In addition, we're taking a page out of Dropbox's book and offering three free months of a premium subscription to all referrers. This kind of referral program is known as a two-sided referral program. By incentivizing both the referrer and the invitee, it promotes a continuous flow of referrals from Day 1 to Launch. Next, you'll have the option to customize your widgets, notifications, and integrations, but we'll skip over those for now.
How you personalize those to fit your campaign is entirely up to you and how you want to build your brand. Installation If you want to use Viral Loops' own pages, this step is already done! But if you want to use your own website builder, Viral Loops offers easy installation instructions for a number of popular website builders. Personally, we recommend using Webflow, a no-code website builder that offers customizable design capabilities second to none. Even if you already know how to code, Webflow offers a best-in-class designer that designers, developers, and laymen like me can easily navigate with the help of Webflow University. (Webflow University also doubles as a perfect way to get into design and front-end web development, and actually served as my first introduction to software development.) Unfortunately, because it's still new, Webflow isn't one of Viral Loops' supported website builders. Lucky for you, we went through the semi-wayward process of installing Viral Loops on Webflow, so we're attaching our own guide here. Webflow Installation Navigate to Settings for your project. Go to the Settings tab in your Webflow dashboard, then click Custom Code. From here, scroll down to Footer Code. This section can be a bit tricky to find because Viral Loops tells you to look for </body> tags. Webflow heads this section with Footer, with a much smaller subheading reading "Add code before </body> tag." The Footer Code section, a.k.a. the </body> tag, is the one you're looking for. After copying the Viral Loops code in, hit Save Changes and Publish and you're all done!
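For orientation, the Footer Code box ends up holding something shaped like this. To be clear, this is a placeholder of my own, not the actual Viral Loops snippet; you must copy that from your own campaign's installation page.

```html
<!-- Webflow: Project Settings > Custom Code > Footer Code
     (the section labelled "Add code before </body> tag") -->
<script>
  /* Paste the campaign snippet from your Viral Loops dashboard here.
     This comment block is illustrative only. */
</script>
```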
https://medium.com/better-marketing/how-to-engineer-viral-social-proof-adaa40dcb20b
[]
2020-01-10 01:13:10.558000+00:00
['Entrepreneurship', 'Referral Marketing', 'Startup', 'Viral Marketing', 'Marketing']
Build a Simple Stopwatch in Flutter
Or is it really that simple? So I was working on a project where I needed to implement a simple stopwatch to let the user know about the time interval spent doing a particular task. Okay, how difficult can that be, right? Dart already has a Stopwatch() class that I can use directly. But as I started working on it, I realized how awfully wrong I was, how such a simple feature is not that easy to implement, and how the Stopwatch() class in Dart really sucks! (That is only one person's opinion. It may not suck for you.) And since I am a lazy person, I thought okay, let's just search "create a stopwatch in flutter" and then copy the code and paste it. But OH MY GOD! How complex the code was. It was like 1000 lines of code for a simple stopwatch feature. (And it's bad practice to use code that you can't understand.) So instead of going through all those 1000 lines of code I decided to write the stopwatch feature myself in as simple a way as possible. After 2 hours of coding and a broken glass I was able to manage something like that on the left side. A Simple Stopwatch with a Start and Reset Button. And it was accomplished in a fairly low amount of code. (I hope you have the same opinion too after reading the article. Fingers crossed.) My Thought Process on Building the Feature: Let me take you first through my thought process. I am going to write exactly what was going through my mind (if you want to go directly to the code then you may skip this part). Okay, so I need a timer which looks like 00:00. It should start with a START button and go back to zero with a RESET button. Ummm… Dart has a stopwatch library, right (which I used earlier to calculate the time taken by a particular bunch of code)? I can use that directly and display it on the screen. BOOM. I just have to use a Text widget, start the stopwatch (using the Stopwatch() class) and display the elapsed time on screen. But wait. I will need to read a different value in the Text widget every second. So I need to get the data as a Stream. But unfortunately the Stopwatch() class of Dart has no method to get the values as a stream. It only outputs a single value whenever one of its methods is called. So I can't use it. Okay, this is where I realized that this is not going to be as easy as I initially thought. New plan. I need to create a Stream which outputs a new value every second. Kind of like 1,2,3,4,5,6,… Then I can listen to the stream and update the values in the Text widget. Problem solved (at least in my mind). But wait. There is another problem. What happens when the value crosses 60? I can't show 61,62,63, etc., right? I need to format the values and change them to minutes and seconds so that I can display them beautifully. Let's Get to the Code Now Step 1: Creating a Stopwatch Stream First we will create a stream which will give us the elapsed time in seconds, every second, so that we can update the UI. We defined a method called stopWatchStream() which will return a Stream of integers. We will be using a StreamController to control the events of the Stream. A StreamController gives you a new stream and a way to add events to the stream at any point, and from anywhere. The stream has all the logic necessary to handle listeners and pausing. You return the stream and keep the controller to yourself. When instantiating a StreamController we need to pass a few parameters. First is onListen, which will be called whenever someone starts listening to the Stream. Next is onCancel, which will be called on cancelling the Stream.
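The gists embedded in the original post did not make it into this copy, so below is a minimal, null-safe reconstruction of the stopWatchStream() method described in this and the next few paragraphs. The names (startTimer, tick, stopTimer, timerInterval, counter) follow the prose; the rest is my reading of it.

```dart
import 'dart:async';

// A sketch of stopWatchStream(): emits 1, 2, 3, ... once per second
// while someone is listening, and resets itself on cancel.
Stream<int> stopWatchStream() {
  late StreamController<int> streamController;
  Timer? timer;
  const timerInterval = Duration(seconds: 1); // change this for a different tick rate
  int counter = 0;

  void stopTimer() {
    if (timer != null) {
      timer!.cancel();
      timer = null;   // start from scratch next time
      counter = 0;
      streamController.close();
    }
  }

  void tick(Timer t) {
    counter++;
    streamController.add(counter); // listeners receive 1, 2, 3, ...
  }

  void startTimer() {
    timer = Timer.periodic(timerInterval, tick);
  }

  streamController = StreamController<int>(
    onListen: startTimer,
    onPause: stopTimer,   // the article only mentions these two in passing;
    onResume: startTimer, // mapping them to stop/start is my guess
    onCancel: stopTimer,
  );

  return streamController.stream;
}
```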
We will get to onResume & onPause in a bit. startTimer() In onListen we passed a method called startTimer. Let's see what startTimer is doing. startTimer creates a new instance of Timer.periodic(). Timer.periodic() takes two parameters: an interval and a callback function. What it does is, after every interval, call the given callback function again and again until the timer is cancelled. Here, Timer.periodic() will call the tick() function every 1 second. You may use a different interval instead of 1 second. Just change the Duration in the timerInterval variable. tick() Now let's see what the tick() function is doing. The tick function simply adds +1 to the counter and then adds the counter to the stream. Whenever we are listening to the stream, we will get this "counter" every time the streamController.add() method is called. stopTimer() Okay, now let's look at the stopTimer() method which will be called on cancelling the stream. We are doing a list of things here. First we check that the timer is not null, i.e. there must be a running instance of the timer. If someone calls the stopTimer() method without starting the timer first then timer will be equal to null. Next we cancel the timer. Once cancelled it will stop running the tick function every second. We also set it to null so that it starts from scratch and doesn't start from the place it ended last time. Next we set the counter to 0 so that it starts from 0 too next time the timer is started. And finally we close the Stream using streamController.close(). Step 2: Adding the timer stream to the UI We have 3 primary widgets here: a Text widget to display the time, a Start button, and a Reset button. The Text widget is simple. It is simply displaying the time in HH:MM:SS format. Now let's look at the Start button. A lot is going on in here. timerStream = stopWatchStream(); First we are creating a new stream from our stopWatchStream() that we created previously and assigning it to the variable timerStream. Now you might be thinking, why am I creating a new stream every time the START button is pressed? I could create a new Stream in initState directly, right, and that would be more efficient. A BIG NO! If you do that then yes, when you click on the Start button the first time everything will work fine. But if you click on the Start button a second time then you will get a "Bad state: Stream has already been listened to" error. This is because Dart allows you to listen to a single-subscription stream only once, even after cancelling the subscription to the stream. One workaround for this problem is to use a broadcast stream. But the problem is it doesn't close the stream fully. It only pauses the stream in a way, and then resumes from the same place we left off earlier when started again. So in our stopwatch, when I click on the START button, I want the timer to go like 1,2,3,4,5… and then clicking on the RESET button should reset the timer to 0. If I then click on the START button a second time, it must start again from 0,1,2,3,4,5 and so on. But if you are using a broadcast stream, what happens is that if you click on the RESET button the timer will go back to 0 in the UI but in the background it will still be running. So when you click on the START button a second time, instead of starting from 0 again it will start from 11, 12, 13 or something like that (the number of seconds that have passed since you clicked on the START button the first time). So to fix this bug I just instantiate a new Stream every time the START button is clicked. Listening to timerStream Next I listen to the Stream.
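The gist for the UI side is also missing, so here is a compact sketch built around the stopWatchStream() reconstruction above. The field names (timerStream, timerSubscription, hoursStr, minutesStr, secondsStr) follow the prose; the bare-bones layout stands in for the author's styled UI, and pressing START twice in a row without RESET would stack subscriptions, just as in the original.

```dart
import 'dart:async';
import 'package:flutter/material.dart';

class StopwatchPage extends StatefulWidget {
  const StopwatchPage({super.key});
  @override
  State<StopwatchPage> createState() => _StopwatchPageState();
}

class _StopwatchPageState extends State<StopwatchPage> {
  Stream<int>? timerStream;
  StreamSubscription<int>? timerSubscription;
  String hoursStr = '00', minutesStr = '00', secondsStr = '00';

  void start() {
    timerStream = stopWatchStream(); // a fresh stream on every START press
    timerSubscription = timerStream!.listen((newTick) {
      setState(() {
        hoursStr = ((newTick / 3600) % 24).floor().toString().padLeft(2, '0');
        minutesStr = ((newTick / 60) % 60).floor().toString().padLeft(2, '0');
        secondsStr = (newTick % 60).floor().toString().padLeft(2, '0');
      });
    });
  }

  void reset() {
    timerSubscription?.cancel(); // triggers stopTimer() via onCancel
    timerStream = null;          // just as an extra measure
    setState(() => hoursStr = minutesStr = secondsStr = '00');
  }

  @override
  Widget build(BuildContext context) {
    return Column(mainAxisAlignment: MainAxisAlignment.center, children: [
      Text('$hoursStr:$minutesStr:$secondsStr',
          style: const TextStyle(fontSize: 48)),
      Row(mainAxisAlignment: MainAxisAlignment.center, children: [
        ElevatedButton(onPressed: start, child: const Text('START')),
        const SizedBox(width: 16),
        ElevatedButton(onPressed: reset, child: const Text('RESET')),
      ]),
    ]);
  }
}
```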
So here, after every second, I will be getting a new tick value (like 1,2,3,4,5,6,7). But we cannot directly show the tick values in the UI. We have to transform them into HH:MM:SS format. As you can see, inside the listen function we are using 3 String variables: hoursStr, minutesStr and secondsStr. In each one of them we use a small calculation to transform the tick into hours, minutes and seconds respectively, and then update the variables inside a setState() function. Let me explain the formatting in secondsStr with an example: secondsStr = (newTick % 60).floor().toString().padLeft(2, '0'); Suppose the newTick value is 81. Of course we can't show 81, because there are only 60 seconds on a clock, and after 60 the count must start over. So 81 seconds should be shown as 21 in our stopwatch according to the HH:MM:SS format. Let's see how our code achieves that. First, 81 % 60 = 21. .floor() simply changes it to an int. (I know it's already an int; it's just an extra measure. Prevention is better than cure, right!) Next we convert it to a String using toString(). Well, padLeft(2, '0') will have no effect on 21. But if it were a single-digit number then it would pad a 0 on the left of the number. (For example, 4 to 04.) minutesStr is much the same, the only difference being that we first divide the tick value by 60 to convert from seconds to minutes. Similarly, in hoursStr we divide by 3600 to convert to hours. (You may also do days by dividing by 3600*24, or years by 3600*24*365.) Finally the RESET Button We just have to cancel the subscription to the stream here (timerSubscription.cancel()), which calls the stopTimer() method internally, which I already explained above. I also set the timerStream to null just as an extra measure. And DONE! And that's it. We have our stopwatch. Clicking on the START button starts the timer and the RESET button resets the time back to ZERO. You can find the complete code on GitHub here: https://github.com/realdiganta/Flutter-Stopwatch Please let me know if I made any mistakes in the code or while explaining it, or if you think that I can make the code simpler in any way. Because that was my primary motive behind this: to build a stopwatch in Flutter with as few lines of code as possible. If I was able to add value to your day in any way, please don't forget to clap for the article. That really encourages me to write more articles like this. You may contact me here: digantakalita.ai@gmail.com
https://medium.com/analytics-vidhya/build-a-simple-stopwatch-in-flutter-a1f21cfcd7a8
['Diganta Kalita']
2020-08-10 08:29:31.884000+00:00
['App Development', 'Flutter', 'Dart']
Ep 13 | Jujutsu Kaisen | “Anime’Japan” [Engsub]
Yuuji Itadori is a boy with tremendous physical strength, though he lives a completely ordinary high school life. One day, to save a friend who has been attacked by Curses, he eats a finger of Ryoumen Sukuna, taking the Curse into his own soul. From then on, he shares one body with Sukuna. Guided by the most powerful of sorcerers, Satoru Gojou, Itadori is admitted to the Tokyo Prefectural Jujutsu High School, an organization that fights the Curses… and thus begins the heroic tale of a boy who became a Curse to exorcise a Curse, a life from which he could never turn back. Title: Jujutsu Kaisen Episode Title: Tomorrow Release Date: 26 Dec 2020 Runtime: 25 minutes Genres: Action, Animation, Anime, Comedy, Fantasy, Horror Networks: MBS
https://medium.com/jujutsu-kaisen-episode-13/ep-13-jujutsu-kaisen-animejapan-engsub-a42ea0d634d6
['Barbara T. William']
2020-12-25 08:00:42.809000+00:00
['Animation', 'Anime', 'Action']
King Leo, Lockdown and Being Sick to Death of the New Normal
Day minus After reading both Adam Kay's 'This is Going to Hurt' and Anne Frank's, well, you know the one, I can't shake the urge to write and share a diary. Unfortunately 'The Secret Life of a Content Marketer' might be the least appealing read ever. If only I had the backdrop of a war or hospital residency to write against. Day 1 (Mar 5) This COVID-19 thing is far more serious than we thought over Christmas, i.e. it's no longer 5,000 miles away, it's pretty close to Ireland. This could be devastating for the entire human race. The fear, uncertainty and panic are like nothing I've ever seen. However, from a diary-keeping perspective, this is absolutely splendid news. Day 7 (Mar 12) Leo just shut down schools and creches for the next fortnight. Parents are on edge. Not so much in fear of their children's safety but the possibility of being stuck in captivity with a lifeforce that feeds solely off their undivided attention. Day 8 (Mar 13) Apparently hot things such as tea kill COVID. This is coming from a very friend-of-a-cousin-of-a-nurse source. Surely if tea kills it, every person in Ireland is a highly-trained coronavirus assassin. Day 9 (Mar 14) I heard someone say the phrase "the new normal" today. It perfectly explains how the abnormal is becoming everyday. At first I thought the advice to stop shaking hands was weird. Now the facemasks, the queuing for an hour at Tesco, the nightly leader addresses, it's all just normal. The new normal. Hehe. Day 12 (Mar 17) A very strange day. Leo temporarily shut down pubs so that Paddy's Day has become more Good Friday than Mardi Gras. At night he addressed the nation. Nothing makes you feel more like you're nearing doomsday than having your leader walk out to a podium during a breaking-news broadcast. Not that I'm a huge Trump fan but Leo could definitely learn a thing or two about hand gestures from Donald. Great speech, however, very static upper body movement, especially hands — definite room for improvement. Day 13 (Mar 18) For the first time since Glenroe graced the airwaves, I'm watching the RTÉ News on a regular basis. I've been glued to it for three nights in a row now. The suspense and sense of intrigue are riveting. Will Leo announce a full lockdown? Will he skip that stage and declare a state of anarchy? I'm like putty in his hands. If he announced that we should slap ourselves, I'd be on board and red-faced before he even had time to say "oíche mhaith". Day 14 (Mar 19) I'm finding myself looking up death tolls a lot. It helps me to gain perspective. There have been 2,000 deaths in Italy. It's beginning to hit the UK and other European countries too. In a matter of days I've gone from mocking people for being over-dramatic to lecturing people who aren't taking this seriously. I'm finding the hypocrisy hard to acclimatise to. Day 18 (Mar 23) Mass has been cancelled indefinitely. This is now serious. Day 19 (Mar 24) While in a bookshop I chat with one of the staff about the obvious. I confide in her that I'm finding it all very exciting. Observing the world's reaction is quite fascinating, I tell her. Unfortunately it turns out that she's actually the owner. She tells me that she doesn't find it "exciting" or "fascinating" that she'll have to let her beloved staff go. Note to self: Do not gush over the global crisis to anyone unless you're 100% sure they're also a fan. Day 20 (Mar 25) The next person that I hear say "the new normal", I'm going to choke them until not-breathing is their new normal.
I think what gets under my skin the most is how quickly it's being adopted. It's like if someone kicked you square in the knee and then told you that pain is your new normal. 1. It's not, it's temporary, I'll walk it off. 2. Since when does the word 'normal' need a qualifier? 3. Why are you kicking me in the knee to explain your point? Update: I hate that I'm tracking this but there have now been 6,500 deaths in Italy and 433 in the UK. This reality could be Ireland's soon. Day 22 (Mar 27) Leo has announced a full lockdown. He didn't actually say "lockdown" but he can't fool us. Being told to stay indoors and only go outside once a day for a prison workout is pretty lockdowny. I don't believe this whole "only two weeks" thing either. Leo, you make it so hard to love you sometimes. Another solid speech. Upper body movement was still quite limited. 7/10 Day 24 (Mar 29) The most sadistic habit I've developed is incessantly checking these damn death tolls. I don't mean to treat it as a game but when they tableise it, with most affected countries at the top, how else am I going to respond? There are now 33,000 deaths worldwide. 10,779 in Italy, 6,600 in Spain, 1,228 in the UK and 42 in Ireland. My continued observance of this table reminds me that this is quite a real situation. Day 25 (Mar 30) My news-watching foray is over. I genuinely thought I was becoming a lifelong viewer. I blame 9/11. It set the standard too high. That was a tragic event but I could have watched it all day. Now I find myself yearning for another "where were you when" moment. I don't like this disaster-hungry person that Sharon Ní Bheoláin brings out of me. This must end now. Leo is also losing me as a follower. Quoting a meme while addressing the nation? You're better than that. So few superheroes wear capes these days anyway. Ironman wouldn't be caught dead in one. Day 29 (Apr 3rd) Premiership suspended indefinitely. There is officially no sport. Remote work and no play makes Jack a dull boy. Remote work and no play makes Jack a dull boy. Day 32 (Apr 6th) I'm considering boarding up the house to stop coronavirus clichés from getting in. So far we've managed to evade the desire to bake banana bread and sourdough. I've yet to post a photo of a Zoom meeting, nor have I attempted the dance from Young Offenders. However, I have watched all episodes of Tiger King, held a virtual quiz and am familiar with the work of Ghanaian pallbearers. Day 33 (Apr 7th) COVID-19 is beginning to hit the US hard. Trump is denying it like it's a sex scandal and not a worldwide pandemic. He is a truly unusual man. Day 34 (Apr 8th) Confined to a 2km radius, my main walk involves multiple loops of the quays. This rather mundane walk has its attractions. For one, there's this flock of seagulls that line up on the wall and dive off in sync as you walk by. My God, is this what I find cool now? Birds moving in unison? COVID-19 will suck the life out of me before it has any chance of killing me. Day 37 (Apr 10th) We watched 'Contagion' tonight. It's crazy that the virus-infested world depicted in the movie is now real life. It's also bizarre that the movie's lead actor, Matt Damon, just so happens to be on lockdown in Dalkey. Bar being kind of arousing when they say things like "social distancing" and "stop touching your face", it's a pretty awful movie. Day 38 (Apr 11th) I'm convinced that the more you see someone, the more attractive they become.
The girl in your class, the barista at your local café or, I don't know, your country's Chief Medical Officer slowly morphing into George Clooney. Day 40 (Apr 14th) Lockdown is making me lose all social skills and I had such limited talent to begin with. Is asking someone if they have plans for the weekend still socially acceptable? Day 43 (Apr 17th) It looks like healthcare workers are the new celebs. They're clapping for the carers in the UK and we're feeding the heroes here. This definitely beats worshipping people with lip-fillers, sponsored ads and an innate ability to make others feel insecure. As per usual, I'm expecting this pendulum to swing too far. I'll give it six months before there are hospital-related glossy mags with click-bait headlines such as 'You Won't Believe Who Nurse Fitzpatrick From Midland Regional Hospital is Dating Now' and 'Dr. Murphy's 6 Week Plan to Get Your Gut Microbiome Summer-Shredded'. Day 46 (Apr 20th) I'm learning so many new words over lockdown. There's 'furlough', pronounced like 'merlot' and not 'for long', which is a real pity. COVID-19, which stands for coronavirus disease 2019 and is not some highly technical medical-Latin lingo. 'Flouting', which is pronounced like outing, which makes it sound almost wholesome when you place the words 'nice, little' before it. 'Herd Immunity' is one I don't fully understand yet, though I think it means sacrificing the weak to the COVID Gods? Day 47 (Apr 21st) There was a jellyfish spotted in the canals of Venice, which is apparently global-newsworthy. The environment is thriving again. And all it took was putting the entire world on house-arrest, banning fun of any kind and bringing the world economy to its knees… it really makes you think. Day 49 (Apr 23rd) I would totally be on board with going outside and clapping for our barbers. Day 50 (Apr 24th) The makers of Dettol had to release a statement saying, "Please ignore the President of the USA and don't ingest or inject our product." Sometimes life just doesn't seem real. Day 51 (Apr 25th) Kim Jong-un is dead, apparently? I first read the headline as "Kim Jong-un un-dead" by accident. It's kind of worrying how little the news that the North Korean leader is now a zombie fazed me. Day 57 (May 1st) Leo has announced a five-stage plan to reopen the country. Hand gestures were absolutely magnificent. A nice wavy hand movement to show the flattening of the curve. A few punchy hammer fists to emphasise a point. I'm all aboard the Leo hype-train again. My one slight grievance is in his manner of smiling. I can almost picture him asking his advisor, "So, in this bit about reading letters from the public, do you want me to activate my mouth-smiling mechanism?" Day 61 (May 5th) So it turns out that Kim Jong-un is in fact undead. He was dead but now he's alive and well and opening factories. Turns out he was just keeping a low profile. Does North Korea even exist? That's what I want to know. You can't get a Ryanair flight there, you can't see it from space, their leader is a zombie and very few people bar Dennis Rodman and Matt Cooper have gone there. The jury is out. Day 63 (May 7th) One of my friends said "the new normal" today. Thankfully I didn't lash out, as she was actually asking if I'd seen the new Normal People. I slowly unclenched my jaw and fists and let the steam from my ears simmer down before saying, "Why yes, yes I have, amazing. Although I'm not sure if I'll ever forgive Connell for asking Rachel to the debs.
Men can be such pigs." Day 70 (May 14th) I haven't heard the name Greta Thunberg in approximately 70 days. Although I like her, I fear that she is going to suffer some serious second-album syndrome. Day 74 (May 18th) It suddenly dawns on journalists that it's been 100 days since the general election and there's still no government. The vultures are having themselves a field day. It's as if they've forgotten about everything in between. The 99 days where people were dying, businesses were placed on life support and coughing in public became a chargeable offence? That was a tad distracting, wasn't it? In other news, Joe Wicks has become some sort of cult leader for kids under 10. Day 80 (May 24) The latest is that we're not allowed to say "back to normal". I'm sick of getting grammar lessons about how to use the word normal. Soon people will insist on saying the "new new normal". Surely when they get to four "news" someone is going to have to step in. I used to have hopes and dreams. Now my only dream is that I can find the first person who said "new normal" and slay them. I'm hoping that if I kill it at the source, the use of "new normal" will spontaneously implode worldwide and order will be restored. Day 1,250 (7 Aug 2023) Today, Leo delivered his best speech yet. I really hope that he's given the nod when they get around to forming a government. His hand movements were simply enchanting. He has completely mastered the 'holding an expanding beach-ball' technique and I love the fact he left the fame-hungry world of healthcare to become a politician. And the way his long, flowing locks danced as he demonstrated the flattening of the bump? Sometimes he looks so much like British Prime Minister, Joe Wicks, it's scary. Unfortunately, he's extended lockdown by another two weeks. But then, hopefully, everything can go back to typical. Two more weeks, that doesn't sound too bad.
https://medium.com/@basichumanwrites/king-leo-lockdown-and-being-sick-to-death-of-the-new-normal-13e025d0b2d
['Ben Dillon']
2020-06-11 20:06:25.379000+00:00
['Covid Diaries', 'Covid 19', 'Lockdown Diary', 'The New Normal', 'Leo Varadkar']
Coldest Case Solved but the Murderer Can’t Be Found
Coldest Case Solved but the Murderer Can't Be Found Peggy Beck via The Denver Post On the 18th of August 1963, Margaret 'Peggy' Beck was getting ready to spend her last night at the five-day Girl Scout camping trip in the Pike National Forest in Colorado. She was now a counsellor, and this was her first time looking after younger scouts. The 16-year-old's tent mate, Claudia, was ill with a cold that night, so she opted to spend the final evening in the warm infirmary instead, so as not to make her sickness worse. Peggy slept alone in the tent, 30 feet away from the 24 scouts and three adult chaperones. The next morning, the scouts were due to head home, and the girls began to pack up their gear. When Peggy didn't get up for breakfast, Claudia unzipped the tent to wake her and found Peggy still in her sleeping bag, dead. At first, it was thought that Peggy had died from natural causes, but as the day went on, fingertip bruises began to appear circling her neck and it was soon apparent that she had been murdered. She had also been sexually assaulted. The police had no suspects, and Peggy Beck's case soon went cold. In 2007, a DNA profile was created from evidence left at the crime scene and it was entered into CODIS. It wasn't until 2019 that a full profile was made from the DNA sample and it was sent for genealogy testing at United Data Collect. This year, it was announced that the DNA sample matched James Raymond Taylor, who was 23 years old at the time of Peggy's murder and would be 80 years old now. The family member who helped identify Taylor told investigators that his last known whereabouts were in Las Vegas, but he hadn't been seen since 1976. James Raymond Taylor in 1961 via metrodenvercrimestoppers.com At the time of Peggy's murder, Taylor had been living in Edgewater, Colorado with his family, where he worked in an electrical repair shop. He didn't have a link to the camp or to Peggy, so he may have happened upon the group of Girl Scouts by chance. Peggy Beck's murder is now 57 years old, and police have identified her killer, but the case is about to go cold once more. Now, it's up to web sleuths and armchair detectives to find him. A $2,000 reward was recently announced. Anyone who has information on James Raymond Taylor should contact the Jefferson County Sheriff's Office tip line at 303–271–5612 or call Metro Denver Crime Stoppers at 720–913-STOP or visit metrodenvercrimestoppers.com.
https://medium.com/the-true-crime-edition/coldest-case-solved-but-the-murderer-cant-be-found-43a528764f00
['Josie Klakström']
2020-12-01 13:11:50.735000+00:00
['Cold Case', 'Colorado', 'History', 'Short Read', 'True Crime']
Why is Vivek Murthy misleading us about his background?
Vivek Murthy has been nominated to the top medical post in the US by the Biden-Harris team. Here is an extract from a statement released by him upon his nomination. "I will dedicate myself to caring for every American, driven always by science and facts, by head and heart, and endlessly grateful to serve one of the few countries in the world where the grandson of a poor farmer in India can be asked by the president-elect to look out for the health of the entire nation." This set me thinking. I took a quick glance at Wiki and realized that Dr. Murthy was a highly educated and accomplished person. He also struck me as someone with a privileged background. Why did Vivek Murthy use the words "grandson of a poor farmer" to describe himself? I was curious to find out more. Turns out he was indeed the grandson of a farmer. However, his grandfather was far from poor. He was a farmer in Mandya district in the state of Karnataka. This is one of the most valuable agricultural districts in the state. His grandfather was a director in two large companies. Not exactly the credentials of a poor farmer. One of Dr. Murthy's uncles was the Managing Director (CEO) of a public company. All of this contradicts the notion of a toiling farmer sweating away in the hot sun trying to eke out a living. This was, presumably, the image that Dr. Murthy intended to convey when he used the expression 'poor farmer'. Dr. Murthy's own parents were doctors. They migrated to Yorkshire in England, where Vivek was born. They then decided to migrate to the US and establish their practice in Miami. The point is about mobility. Dr. Murthy's father had a choice of becoming a successful doctor in India and making a lot of money. But he chose to go to the UK. For some reason he felt the US might be a better place and chose to cross the pond and settle down in the US. The point is that his family, including his grandfather, had choices. Without these choices, who knows where his brilliance might have led him? All the above information is publicly available. Then why is Dr. Murthy trying to paint a humbler picture of his background? I think we all love a "rags to riches" narrative. The reference to the poor farmer helps this theme. This also supports the notion of the US as a haven for the poor and struggling. Here is the ugly truth: most of the Indians who migrate to the US, especially in the past two decades or so, do not fit this description at all. A majority are educated folks coming from upper- or middle-class backgrounds in search of a better future. Of course, in Dr. Murthy's defense, he may be talking about his maternal grandfather, about whom not much is known on the web. Even if he were indeed a poor farmer, it would be incorrect to make the assertion that Dr. Murthy made. My objection is that this distorts what it means to be poor in India (or elsewhere for that matter). There are millions of Indians who are truly poor. Not in the sense of Vivek Murthy's grandfather. But these are really poor people. The poor in India do not know where their next meal is coming from. They have no access to any healthcare. They live on the fringes of society in ghettos with no ownership rights. Amidst all this, they need to deal with the local gangs that provide protection. Education? Forget about it. It is an uphill task to afford going to school. Even if you can afford the time, the quality is abysmal, to say the least, and costs a ton of money.
Hundreds of talented children go through the motions of life only to languish and die with their brilliance extinguished even before it had a chance to shine. This is what being poor really means — not having the opportunity to do the things that you want to do or can do. So, when Dr. Murthy says that he was the grandson of a poor farmer in India, he is implying that others have similar opportunities. That is not only untrue but an insult to the really poor in India.
https://medium.com/@harigopal8/why-is-vivek-murthy-misleading-us-about-his-background-d9bccf760dc3
[]
2020-12-18 22:32:53.703000+00:00
['Democratic Party', 'American Indians', 'Biden Harris', 'Republican Party', 'Vivek Murthy']
A fresh attempt for Classification
A fresh attempt for Classification Part 1 — A Taxonomy of protocols This article is the first part of the bigger story about Distributed Decentralized Systems. We will publish a new article once a week following the order from the Table of Contents. Table of Contents: I. A Taxonomy of Protocols (Part 1) 1. Motivation 2. Scope and Context 3. The Overall Landscape II. The four Dimensions for the taxonomy 1. Consensus Protocols (Part 2) 2. Sybil Attack Prevention Mechanisms (Part 3) 3. Data Structures in Distributed Systems (Part 4) 4. Energy Consumption (Part 5) III. The 5th Discipline: Decentralization (Part 6) IV. Other Important Classification Criteria (Part 7) Designing a distributed system is always about trade-offs among transaction speed, scalability, security, energy efficiency … Ambitions to decentralize a distributed system will add enormous complexity — ultimately leading to the greater good! Author: Ahmet YALÇIN, May 26th, 2019 — Berlin Acknowledgments for critical review and suggestions by Dmitrii Zhelezov, PhD & Prof. Bhaskar Krishnamachari, PhD @HelixFoundation — All rights reserved I — A Taxonomy of Protocols I — 1. Motivation Although basically just two or three families of fundamentally different consensus protocols exist in Distributed Decentralized Systems (DDS), and just a few Sybil attack prevention mechanisms are practically viable, there are probably 100+ variants of different implementations in place. The "Blockchain Space" is a moving target, with new projects evolving every day. In addition, the ongoing R&D work in this space around the globe makes various projects and products less comparable and difficult to classify, even for experts. A systematic analysis, fair comparison and rigorous evaluation of the different design features of DDS, along with their implications for performance and applications, are becoming a real challenge. The intention of this analysis is to understand the work related to a new protocol called HelixMesh and to create a basis for positioning HelixMesh [see Helix 2019] in the DDS Landscape. Here, we focus on creating an overall general picture — a holistic and pragmatic view. With this ambition of a high-level classification of various protocols and related projects, we also hope to make the competitive landscape visible. By plotting their relative positions and by indicating their perceived added-value within the overall ecosystem, visibility can stimulate a further comparative discussion among the various projects and the community. – Contribution First of all, this is not a scientific paper. Here, in our attempt to show the "big picture" we benefited from direct findings of decades of research in consensus science [e.g., Lamport 1978, 1982, 2001; Castro & Liskov, 2002] and comprehensive recent studies on DDS [e.g., Pass & Shi, 2017] as well as meta-analyses of protocols [e.g., Ballandies, Dapp & Pournaras, 2018]. Our article, in the form of a simple explanatory document aimed at providing a graphical overview using a new taxonomy, provides a classification of over 25 distributed ledger systems. In our work, we tried to relate to the "most important" projects. Nevertheless, we do not claim to cover all the relevant criteria in our analysis but rather to focus on the key aspects for the sake of simplicity. In comparison to the more complex scientific literature, the taxonomy proposed here provides a rather practical overview.
We use four dimensions for the classification: 1) Consensus 2) Sybil prevention 3) Data Structure 4) Energy Consumption (i.e. environmental footprint) I — 2. Scope and Context In this analysis, there is no claim of a novel scientific contribution. We only provide a structured compilation from existing papers, accompanied by a graphical taxonomy (i.e., a landscape from a bird’s eye view) for the best-known protocols and their corresponding projects. Although we reviewed several scientific articles and meta-analyses of blockchain protocols during the last few months, our “pamphlet” is not meant to be exhaustive or complete. We are aware that there are many excellent papers which we have not referred to here. Since our analysis combines a significant body of knowledge from several extensive meta-analyses on the subject, a subtitle of our results could also be: An Attempt at a DDS Taxonomy Based on Meta-Analysis. Most of the below-mentioned protocols (e.g., Avi, Blockmania, HelixMesh, Spectre, Phantom, Ghost, Fetch, Perlin, Tezos, even Ethereum Casper) are still under development and were not available as products at the time of this analysis (Q1 2019). Further, we would like to emphasize that our results are best described as a “probably correct approximation” rather than a fully verifiable broad analysis, since many of these protocols/projects are moving targets or have expanded into new features. For the scope and context of this document, we are focusing on PUBLIC CHAIN protocols (both permissioned and permissionless) and not on PRIVATE CHAINS (like Hyperledger, Corda or Ripple). Use cases on protocols, dApps or other “simple cryptocurrency applications” are also not included in our analysis (e.g. we would consider the cryptocurrency DASH as being in this category). – Conflict of Interest The author is affiliated with Helix Cognitive Computing and the Helix Foundation. Regardless of this, for the purpose of this article, the author used only data collected from public (external) sources. I — 3. The Overall Landscape Helix versus other “Public Chains” — A Taxonomy based on a Meta-Analysis Fig. Four-dimensional classification of protocols (link to the full-size image) @Helix Foundation — All rights reserved For additional comments on the above-mentioned protocols please refer to the Appendix Appendix to the “Four-dimensional classification of protocols” · Tendermint Core is a BFT protocol that can be best described as a variant of pBFT. In contrast to pBFT, where the client sends a new transaction directly to all nodes, the clients in Tendermint disseminate their transactions to the validating nodes using a gossip protocol. Tendermint’s most significant departure from pBFT is the continuous rotation of the leader. This should be compared to the expensive view change routine of pBFT, which is used as a fallback [ https://github.com/tendermint/tendermint ]. · Thunderella is a Hybrid Consensus protocol ( https://eprint.iacr.org/2016/917.pdf ), i.e. it uses BFT for choosing a block and a separate “slow” blockchain for selecting the committees. In other words, it uses the blockchain not to agree on transactions, but to agree on rotating committees which in turn execute permissioned consensus protocols to agree on transactions proposed by the leader. The advantage here is that Thunderella is considered to be the most efficient BFT PoS framework in terms of liveness. Thunderella operates in tandem, i.e.
in PoW for the recovery mechanism and PoS otherwise; in both cases it uses asynchronous pBFT. The beauty of Thunderella lies in its simplicity, combining the responsiveness of an asynchronous algorithm, the decentralization of a blockchain, and the speed and throughput previously reserved for centralized systems [Pass & Shi]. · Following the Thunderella paradigm, the key idea behind Thunder is to combine a “standard” blockchain, which we will refer to as the slow chain, with an optimistic fast path. The fast path is coordinated by a centralized entity referred to as the Accelerator. As long as the Accelerator is not malicious and the network latency is low, transactions are quickly confirmed. Otherwise, the “slow” blockchain is used for recovery [Koticha, Z., (2018)]. · At dFinity, while first defined for a permissioned participation model, the consensus mechanism itself can be paired with any method of Sybil resistance — similar to Thunderella — (e.g. proof-of-work or proof-of-stake) to create an open participation model. The dFinity blockchain uses a random beacon for leader selection and leader ranking. A “weight” is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains [Timo Hanke, Mahnush Movahedi and Dominic Williams — dFinity Technology Overview Series Consensus System]. · Stellar (and of course Ripple) uses federated Byzantine agreement (FBA). FBA achieves robustness through quorum slices — individual trust decisions made by each node that together determine system-level quorums. Slices bind the system together much the way individual networks’ peering and transit decisions now unify the Internet. PoW has its famous 51% attacks. Stellar claims that “you can flood the network with bad actors, and it doesn’t matter, as long as no-one includes them in their quorum sets”. FBA brings open membership and decentralized control to Byzantine agreement. Anyone can join. FBA determines quorums, or groups of nodes sufficient to reach an agreement, in a distributed way. Each node decides which others to trust. Different nodes don’t need to rely on the same combination of trusted participants to reach consensus. Stellar is a decentralized protocol; Ripple, however, is not. Ripple is a private blockchain. So, these two terms can be combined: private blockchains can be decentralized and public blockchains can be centralized. Ripple is currently private and centralized but moving towards private and decentralized [D. Mazieres — The Stellar Consensus Protocol: A Federated Model for Internet-level Consensus]. · Algorand — On how to elect block proposers and committees: All users participate in the lottery and can win the ticket to become the block proposer or a ticket to be a member of the validating committee in round r. Every user runs its own “lottery machine” fueled with a public random seed and her/his private key. These lottery machines are verifiable random functions (VRF). They produce uniformly distributed random values and provide a non-interactive proof so that any other user, knowing only the public key of the winning user, can verify the outcome once the chosen lottery winner makes his ticket public. If the value of the ticket is close to some target value or threshold, this user can participate in proposing or validating blocks [Comparison of PoS projects: Unbiased Leader Election https://blog.coinfabrik.com/comparison-of-pos-projects-unbiased-leader-election/ ].
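To make the lottery idea tangible, here is a toy Python sketch of the threshold check. This is our own illustration, not Algorand's implementation: a real VRF also outputs a proof that others can verify against the user's public key, which the plain hash used below cannot provide.

import hashlib

def lottery_ticket(seed: bytes, secret_key: bytes) -> float:
    # Toy stand-in for a VRF: hash the public round seed together with
    # the user's key and normalise the digest to a value in [0, 1).
    digest = hashlib.sha256(seed + secret_key).digest()
    return int.from_bytes(digest[:8], 'big') / 2**64

def is_selected(seed: bytes, secret_key: bytes, threshold: float) -> bool:
    # A user wins a proposer or committee ticket for the round if the
    # ticket value falls below the protocol-defined threshold.
    return lottery_ticket(seed, secret_key) < threshold

In a real protocol, the threshold would additionally depend on the user's stake, so that the selection probability is proportional to it.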
· The Avalanche protocol is composed of four mechanisms which build upon each other and together make up the entire structural support of the DAG-based consensus tool. The four mechanisms described in the proposal are Slush, Snowflake, Snowball, and Avalanche. The network nodes sample random peers, and sufficiently many consistent consecutive queries signal a network-wide consensus. It is further amplified by the DAG structure, with child transactions providing extra support for their “ancestors” [ https://btcmanager.com/avalanche-protocolnew-age-consensus/ ]. · Fetch’s useful proof-of-work will involve the packaging of general-purpose computing problems into PoW packages. These problems allow processing nodes with less computational power to occasionally earn block rewards. Verification of the subproblems will be carried out by nodes that “lost” the race for solving the problem, with some smaller reward provided for these verification steps. Fetch will also incorporate tunable PoW difficulties in relation to transaction fees so that nodes with low computational power can earn rewards by registering low-value transactions into the ledger. This distributed computing platform will be used to train machine learning (ML) algorithms and will ensure the integrity of the network by, for example, assessing trust in the validity of transactions and the ledger itself [ https://fetch.ai/uploads/technical-introduction.pdf ]. · PHANTOM is a PoW-based protocol for a permissionless ledger that generalizes Nakamoto’s blockchain to a directed acyclic graph of blocks (blockDAG). PHANTOM includes a parameter k that controls the level of tolerance of the protocol to blocks that were created concurrently, which can be set to accommodate higher throughput. It thus avoids the security-scalability tradeoff which Satoshi’s protocol suffers from. PHANTOM uses a greedy algorithm on the blockDAG to distinguish between blocks mined properly by honest nodes and those created by non-cooperating nodes who chose to deviate from the mining protocol. Using this distinction, PHANTOM provides a robust total order on the blockDAG in a way that is eventually agreed upon by all honest nodes [ https://eprint.iacr.org/2018/104.pdf ]. · Phantom, SPECTRE, and IOTA are DAG-based consensus systems, all of which are, at their core, variants of the Nakamoto consensus. This leads to two disadvantages: first, the finality is probabilistic, and, most importantly, it is prone to misaligned reward incentives (e.g. selfish mining attacks) [ https://arxiv.org/pdf/1811.07525.pdf ]. · GHOST (Greedy Heaviest-Observed Sub-Tree) is a new branch selection policy which evaluates each chain’s weight rather than its length and allows to account for stale blocks, aiming at reducing the time to converge to a consistent global state. In this context, the authors model the network as a directed graph G = (V, E), where the edges’ values represent the network propagation delay between adjacent nodes in V [Stifter et al 2018]. · Blockmania is a Byzantine consensus protocol. Nodes emit blocks forming a directed acyclic graph (block DAG) that is subsequently interpreted by each node separately to ensure consensus with safety, liveness, and finality. The resulting system has communication complexity O(N²) even in the worst case, and very low constant factors — as compared to O(N⁴) for PBFT.
An X-Blockmania variant has O(N) communication cost but also higher latency, O(log N) [Blockmania: from Block DAGs to Consensus — DRAFT (v0.5, 25 Sept 2018) — George Danezis and David Hrycyszyn]. · IoT Chain (ITC) adopts PBFT consensus for the main chain, a DAG network as side chain, and a multi-tier architecture to build an IoT operating system which is safe, decentralized and can support high concurrency. When applying blockchain technology to IoT, some key problems need to be solved, such as the form of consensus, fast payments of small amounts, and the protection of data privacy. For these problems, ITC has brought up its own solutions, including PBFT, SPV, DAG, CPS cluster technology, the big-data-analysis smart contract ChainCode and so on [IOT Chain Whitepaper https://iotchain.io/whitepaper/ITCWHITEPAPER.pdf]. · Tezos is a generic and self-amending crypto-ledger and can instantiate any blockchain-based ledger. The operations of a regular blockchain are implemented as a purely functional module abstracted into a shell responsible for network operations. Bitcoin, Ethereum, Cryptonote, etc. can all be represented within Tezos by implementing the proper interface to the network layer. Tezos supports meta upgrades: the protocols can evolve by amending their own code. To achieve this, Tezos begins with a seed protocol defining a procedure for stakeholders to approve amendments to the protocol, including amendments to the voting procedure itself. Tezos’s proof-of-stake mechanism is a mix of several ideas, including Slasher, chain-of-activity, and proof-of-burn [Tezos Whitepaper https://github.com/tezos/tezos-papers]. · “Ouroboros Genesis” adapts a PoS-based blockchain protocol with a novel chain selection rule. The rule enables new or offline parties to safely (re-)join and bootstrap their blockchain only from a trusted copy of the genesis block without any additional advice — such as checkpoints — or assumptions regarding past availability; i.e. such a blockchain protocol can “bootstrap from genesis” without any information beyond the genesis block. In particular, this provides the joining party a blockchain possessing all the favorable properties (e.g., a large common prefix with other honest parties) that would be guaranteed if the party had fully participated during the entire history of the protocol [Badertscher et al.: Composable Proof-of-Stake Blockchains with Dynamic Availability https://eprint.iacr.org/2018/378.pdf]. · IOTA is sometimes said to be Nakamoto-style because it is believed that correct transactions accumulate many children over time and gain more weight, similar to blocks in the blockchain. We therefore allocated IOTA in our taxonomy to the Nakamoto family. · Hedera Hashgraph uses two special techniques, (1) Gossip about Gossip and (2) Virtual Voting, to achieve fast, fair and secure consensus. In a gossip protocol, after transactions, nodes share information with other nodes. “Using a gossip protocol, nodes efficiently and rapidly exchange data with other nodes in the community. This automatically builds a Hashgraph data structure using the novel “gossip about gossip” protocol. This data structure is cryptographically secure and contains the history of communication in a community. Using this as an input, nodes run the same virtual-voting consensus algorithm as other nodes. The community reaches consensus on the order and timestamp without any further communication over the internet. Each event is digitally signed by its “creator” [https://www.hedera.com]. [Zamboglu, D.
https://medium.com/datadriveninvestor/hedera-hashgraph-explained-c5d8ce4730a6 ]. · HelixMesh is a leaderless IoT-focused consensus protocol inspired by the MeshCash framework. Helix Cognitive Computing (www.hlx.ai), the company behind the protocol, provides a technical exposition of the HelixMesh protocol and the underlying DAG-based transaction ledger in its technical paper. The HelixMesh is based on the double-consensus MeshCash framework and the Snowball protocol of the Avalanche family. The consensus process consists of off-chain and on-chain layers. In addition, the design of the protocol provides the ability to run either permissioned or public implementations due to the Proof-of-Contribution abstraction, which supports both classical Byzantine and PoS/PoW adversarial models. The on-chain protocol, referred to as the Tortoise protocol, runs a DAG-based consensus protocol, while the off-chain protocol, referred to as the Hare consensus, works as a finality oracle. The fast Hare consensus protocol may not terminate, but if it does, it predicts the outcome of the slower Tortoise consensus run in parallel. The on-chain Tortoise protocol itself is self-contained and makes all consensus decisions using only the local state of the DAG [ https://hlx.ai/files/HelixMesh_V1_2019_04_0.pdf ].
https://medium.com/helix-foundation/a-fresh-attempt-for-classification-f4d3a5bcfd19
[]
2019-05-28 20:45:36.566000+00:00
['Blockchain']
Space Archaeology and Heritage Management with Prof. Alice Gorman
If you would like to get in touch with Professor Alice Gorman and/or check out some of her work as well as the references made in the episode, the links below will help you get there. On her work: If you want to deep dive into some of the mentions made in the episode, links to them are below:
https://medium.com/clayming-space/space-archaeology-and-heritage-management-with-prof-alice-gorman-6ab5a2c79d83
['Clayming Space']
2020-08-30 05:15:49.415000+00:00
['Space Junk', 'Space', 'Heritage Management', 'Space Exploration', 'Space Debris']
Operational processes of project management
Behind every successful project, there are not only great achievements but also daily operational processes, without which nothing would happen. The Project Manager is responsible for them in the team: the leader and soul of the project, who takes over the routine and the control of tasks. In this article, we describe the techniques and tools used at YuSMP Group to launch a working product on time without a collective nervous breakdown. Openness and transparency For the team, an open information space is important, that is, the ability to get reliable information about the status of the project at any time. To do this, each participant is included in working chats and is present at daily stand-ups. In video meetings, we discuss what was done the previous day and what we plan for today. We analyze what problems we have encountered and find new solutions. Trustworthy relationship Perhaps not the most obvious, but a significant responsibility of a Project Manager is to create a trusting atmosphere within the team. The project manager should be an assistant to the team members, not their boss. The Project Manager is responsible for deadlines, content and budget, and he needs to know what is happening with a particular task. Only an open and trusting relationship with each employee will help them fully understand what is happening with the project. Competent management of communications In our work, meetings are held constantly: we discuss business with the customer and resolve issues within the team. As with any oral communication, participants may forget, miss, or misunderstand something. To avoid this, the Project Manager records the results of the meeting in writing (the so-called follow-up) and sends it to the participants. This tool structures information and removes ambiguity. In the future, the Project Manager monitors the implementation of the established agreements. Also, any communication involves a protocol of the meeting. Before the meeting is scheduled, the manager designates an agenda: a plan and a list of tasks to discuss. During the meeting, the manager monitors the timing and ensures that participants do not deviate from the topic of the meeting. Planning and the five-P rule Proper Planning Prevents Poor Performance. The more detailed the plan is developed, the fewer problems will arise during implementation. If the project does not have clear boundaries, then only a rough long-term roadmap is formed. Large tasks are divided into two-week sprints: for each such stage of work, a set of features is compiled that will be completed during this time. Monitoring and control of employee actions The Project Manager has to control not only himself but also all project participants: the team and the customer. Every day before the meeting with the team, the manager keeps track of which tasks were completed and which were missed, understands the reasons, and decides how to fix it. You also need to control the information from the customer: remind them of the agreements and ask for the necessary information. Emotional attitude and relationships within the team The project will not be successful if the relationships in the team are bad. The manager is the leader and energizer of the project. Among other duties, he should be able to listen to and support each team member. Keeping up team spirit and a positive attitude is an important part of the Project Manager’s job. As our experience has shown, these are six basic practices that support effective project management.
https://medium.com/yusmp-group/operational-processes-of-project-management-6ab73431f3ff
['Yurii Pukhov']
2020-12-25 12:40:24.001000+00:00
['Project Management', 'Project Management Skills']
Reduce Stress Effectively
Photo by Simon Migaj on Unsplash Responding to and being able to cope with stress is what most people today are looking for, and thus it is worth exploring how to respond to stress in the quest to gain some control over this negative effect and get back some semblance of peace in the everyday life cycle of the individual. The body naturally reacts to any indication of stress, and most times this response is not healthy and often fatal. When the body is challenged by any condition that it considers to be stress, it will kick in the natural responses that would require it to sort out the problem as quickly as possible in order to normalize the overall conditions. Hormones such as cortisol from the adrenal cortex and adrenalin from the adrenal medulla go out of their usual synchronicity patterns. In the quest to normalize the body, many of its various systems will pit themselves against each other; this most often will cause even further damage, both mentally and physically. The initial way the body responds to mounting stress levels is through very visible conditions, one of which is skin inflammation or irritation. Cortisol also contains immune system responses, and this is particularly useful when the responses are harmful, as then the symptoms can be treated as opposed to being a silent problem. Among the more visible signs are usually allergies and autoimmune disorders. These responses are described as allostasis, which is actually the maintenance of the body’s stability, or homeostasis, through the various stages of change. The body actively copes with the challenge by expending energy and attempting to put the situation right. For the most part it usually succeeds, but if left unchecked the stress situation can eventually prove to be too much for the body to handle. It is perhaps an accepted fact that stress is usually brought on by outside forces. This is so ingrained in most people’s mindsets that the slightest inconvenience or sign of being pushed out of the comfort zone will get some negative reactions from the body, and that would be considered stress. Generally, outside forces are blamed for the internal turmoil that stress is supposed to cause. When life in general does not unfold the way we perceive it should, it determines the stress levels we experience. The element that needs to be controlled is becoming too attached to the belief that change disturbs the usual pattern of the daily life cycle and that any changes in the current cycle cannot be easily accommodated. Therefore, conditioning the mind to cope with and overcome the circumstances that are perceived to contribute to the stress will then allow the individual to better help both body and mind avoid any unnecessary conditions. Stressful thinking leads to stressful feeling is the most simplistic way of putting the condition into some perspective. Most studies tend to show this conclusively, and it does seem to be true that on some unconscious level, the extent of the stress felt is connected to the circumstances experienced. We can actually cause stress to be a condition within the body, as our worries, fears and anxiety levels are elevated by mental perceptions. Negativity and the mind are closely connected to the onset of stress, and most individuals somehow have the ability to convince the mind of things that have not actually unfolded and may never do so; with this conditioning, the mind and body will work almost hand in hand to bring out the stress levels from within.
Therefore, getting into the habit of negative thinking will spark this response, and the more it progresses to become a common reaction, the lesser the chances are of enabling the body to cope with the onslaught. Managing stress adequately would require the need to first be able to identify the main causes of the presence of stress within the confines of the individual’s life. When these have been identified, then the relevant steps can be taken to address the stress-inducing circumstances. Some of the more common signs of stress are nervousness, withdrawal, constant tiredness, frequent headaches, increased use of alcohol, smoking and other unhealthy habits, an unexpected loss or increase in the diet intake or body weight, restless sleep and irrational emotional outbursts and behavior patterns. These are all indications of an individual suffering from high levels of stress. In recent times, the stress levels of most people have been so alarming that, more often than not, hospitalization has to be recommended, where complete rest and medication are prescribed to normalize the body’s systems. Research has also been able to show that life expectancies have reduced drastically due to the presence of stress. Therefore, the idea behind managing stress is all about gaining control over it, which means identifying and addressing the possible causes of the stress, and then working towards ways to overcome it effectively. Once the symptoms can be identified and linked to the stress condition, then the proper approach can be matched to the symptoms to improve the situation. This would require the individual to have a clear plan drawn up, which would be based on combating the stress occurrences with very practical and proven methods. These methods may include the use of a proper diet, exercise or an actual physical change of environment. Being able to focus on a holistic view of the situation that causes the stress, rather than the one particular action that triggers it, will help the individual better understand and seek ways to avoid or improve the circumstances that spark the onset of stress. It is popularly believed that if an individual is not exposed to the stress-causing circumstances, then the occurrence of the stress will be very unlikely and therefore not a dominant factor to contend with on a daily basis. This has some truth to it, but it is not always possible to avoid or remove oneself from a circumstance that will eventually cause the stress to surface. One of the ways that is recommended by most experts on the subject of stress management is to ease the tension and in doing so bring the stress levels under adequate control. The following are some tips on how to ease tension and reduce stress levels significantly: Rest: this is considered a very important ingredient in the makeup of an individual’s lifestyle. A lot of rest will give the body and its various systems time and the leeway to rejuvenate adequately. This should be enough to keep it functioning at its optimum, thus providing the ideal circumstances to meet any possible additional requirement on its system, both mentally and physically. Relaxation: the individual should explore various ways to relax both the body and the mind. These ways should be easy and applicable to the situation at the time.
Some relaxation methods can be done for longer and in more calming surroundings, while some would require the individual to tap into the inner self to bring forth the calming mindset to cope with any particularly stressful situation as it occurs. Pacing oneself: this is another method of ensuring the tensions are eased, or even preventing them from surfacing altogether, if the individual is able to pace the workload to keep it from becoming too overwhelming. Almost all cases of stress will eventually be connected to the fact that the circumstances that induced the stress were overwhelming, and the natural mechanism that kicks in would be the occurrence of stress. There are some people who turn to the practice of quieting the mind through various different methods, of which meditation comes out as the most popular. This method is usually practiced for the specific purpose of helping the mind to de-stress and take on a more serene and quiet thought process. Meditation is often recommended for those who are hyper in the way they approach any task or project, and then get stressed when the various aspects of the endeavor do not play out as desired. This puts the individual in panic mode, which almost always is a big stress-inducing feature. Meditation can be practiced in many different ways, and among these the practice of quieting the mind would be the most dominant one. In this particular form, the quieting of the mind requires the thinking part of the mind to be brought under control and quieted, using techniques which have to be taught and mastered. The idea would be to block the mind from focusing on the issues that are causing the stress and the ways to solve these issues, and instead redirect the mind to focus on other more calming elements. These may include elements such as a serene scene in the mind’s eye, a calming feeling, a phrase or anything that will cause the redirection of the thought process to be pleasant and calming. The inner voice should be tuned to helping the body adjust to better thought processes and induced into letting go of the problem completely. The calmed mind will then be able to feel the positive energy this kind of practice generates to refuel and rejuvenate the whole system and situation. Although quieting the mind may prove to be very challenging indeed, with a lot of focused practice it is possible to achieve some level of control. Stress is usually brought on by anxiety, and when this particular condition is addressed, it is possible to avoid building up the stress levels within the daily functions of the individual. Addressing the underlying causes of the stress would better help the individual deal with or change the circumstances that cause the stress, thereby learning to comfortably adapt to and cope with workplace situations. The home environment can often seem like a battlefield, as there are usually individuals of various ages under one roof. This alone is a stress-contributing factor, as there are many needs and wants to satisfy and cope with. With a few correct and comfortable practices, the stress levels for all concerned can be minimized. A good amount of problems in a relationship can be sorted out easily by tackling the key issues that usually bring forth these problems. However, when these key issues are not addressed and dealt with, then the occurrence of stress becomes the norm, and this is a rather unpleasant and potentially damaging situation to be in.
Stress often leads to unforeseen anger outbursts, especially when the underlying issues are not addressed. The outbursts can sometimes be so unexpected that they could take on an element of violence, and this of course is very damaging indeed. Some people are in such a hurry to know and plan for their future that they inevitably induce stress. This is especially so for those who are unable to stop and enjoy what they have but are constantly looking to strive for better and bigger things. This mode and lifestyle usually cause the individual to be in a constant state of worry and stress, as the mind and body are unable to take a complete break. The following are some tips on how to stop the worry and stress-proof the individual’s life adequately, so that some level of sanity can be injected back into his or her life: Being able to analyze risks will help the individual better plan for the future without stressing at every turn. With the ability to analyze, the individual can then make the necessary adjustments to ensure everything is always smooth-flowing and fairly ideal. Strange as it may seem, the idea of scheduling time to worry is not only a beneficial one, but also one that should be given serious consideration. There is an element of freedom when the individual makes a conscious decision not to worry outside the allotted time frame. Telling people not to worry somehow does not seem to work, as over time this has become a very dominant and natural practice for most individuals. Therefore, scheduling time to worry would probably be a better option if the individual is unable to accept and practice the eradication of worry from their life. Another way to stop worrying is to replace these worrisome thoughts with other activities, which would include the repetition of positive sentences and positive thoughts. There is a lot of data showing this practice’s effectiveness and its benefits when used as a good soothing alternative to worrying.
https://medium.com/@wyliegreg1001/reduce-stress-effectively-4ec260e1d3a3
['Gregory Wylie']
2020-12-10 01:23:58.524000+00:00
['Stress Management', 'Stress', 'Stress And Anxiety', 'Stress Relief', 'Stress Management Tips']
Part 2: The Fake-news Pandemic is here to stay. Can design thinking solve this issue?
by Kasturi Thakare & Aditi Bhatt “HMW equip urban citizens of India with access, ability, knowledge, and motivation to form informed choices with a receptive mindset?” The question is ready and intact. But another question persists. How do we go about it? Well, as discussed in the previous article (Part 1), we had the stark realization of how easily we are swayed by well-worded opinions and how hordes of people have risen to mutiny and violence based on fake news or half-knowledge. But why do people behave this way? What makes one believe and share something? What creates a bias in their minds? How receptive are they to different opinions? How do they consume news? What type of news do they actually want to consume? What drives people to take action? Confused, eh? (Source: Google) Our next step is to find answers to these queries. In Part 2 of this 4-part series we discuss - Empathizing with People — Surveys, Interviews and social experiments Analyzing Insights Transitioning to brainstorming Empathizing with People One of the most crucial steps in this entire study is to understand the people on a first-hand basis by taking a leap into their mindset and thinking processes with a human-centered design approach. Our primary objective, here, is the understanding of user choices and behavior regarding the type of news they consume, their belief systems, personal biases, receptivity, and motivation. “People ignore design that ignores people.” -Frank Chimero, Designer We use a mixture of investigative methods and tools to form an explicit understanding of the urban citizens of India. Quantitative tools are used to extract behavioral patterns of people in terms of their news consumption and sharing habits, while a few qualitative methods & social experiments are employed to understand the what, how, and why in rich detail that is reflective of the actual complexities of real human situations. (Cooper, Reimann, et al, 2007) The four methods we used are: User Surveys Personal Interviews Newspaper Experiment WhatsApp Social Experiment User Surveys We conducted an online survey primarily based on gaining insights into our respondents’ news consumption preferences and patterns. We received responses from 297 participants belonging to varied age groups. The data gathered was cleaned by removing repetitive entries and outliers prior to data visualization and analysis. The survey generated a series of insights as follows. News Source: It was observed that the dependence on digital news sources is more prominent in younger age groups than in older ones. The dominant role of Social media also came into the picture, with one-third of the respondents preferring it as the primary news source over key news sources like newspapers & TV news channels. 2. Awareness: Source: Google Forms We even tried to analyze the degree of awareness of the public regarding current affairs based on prevailing news. Well, it was pretty astounding to observe that important Indian news that directly impacts the public, like the EIA Draft notification and the National Digital Health Mission, was neglected during the timeline of the Sushant Singh Rajput death case (late Indian Bollywood actor). Surprisingly, very few respondents are aware of relevant news corresponding to health, employment, and social activism in the country. 3. Reliability: Though people perceive newspapers and TV news channels to be more reliable, they tend to use Social media as their primary source of news.
Also, the older age group finds its primary news source extremely reliable and accurate. 4. Sharing behavior: Social media has also become a major platform to share news, along with availing it, due to ease of sharing and wide outreach, yet many people still indulge in personal discussions to share and receive news. Currently, the WhatsApp forward is a chief news and info sharing channel, with the 45+ age group being the most active. 5. Fake news: Source: Google Forms The degree of awareness of fake news is not so satisfactory, as more than 40% of the respondents assert that they have never or rarely come across fake or suspicious news. 6. Motivation to act: Source: Google Forms A concerning insight is that over 50% of the people do not report news which they find fake or suspicious on Social media. Additionally, this behavior encompasses an age-based relationship, as the tendency to report reduces with increasing age. In-depth interviews After having gathered a series of observations through the online survey, we delved deeper to discern the reasons behind such user behavior and perceptions through semi-structured interviews with individuals belonging to diverse age groups. Source: Google Images Interestingly, user behavior varies significantly across different age groups in terms of receptivity, awareness, reliability, and motivation. However, Social media is a major source of information across all age groups, and very few of them follow a fact-checking process before sharing news on social media. Also, a salient pattern emerges in the interviews: receptivity tends to reduce with an increase in age. 18–35 years Social media is their newspaper, memes their columns, and influencers their news anchors! However, this age group is relatively more active in recognizing fake news, especially on WhatsApp. As compared to other age groups, youngsters have a highly receptive and logical mindset. They bear the desire to stay aware and create change but do not really know how to! 35–45 years WhatsApp and Facebook are their permanent roommates who give them minute-by-minute updates! However, this age group holds a less receptive attitude than the younger one. An interesting discovery is that female homemakers heavily rely on their husbands and children for the credibility of news that they come across, due to self-doubt and low self-confidence. 45–60 years The most actively updated age group, this group thinks that they could host the TV news better than the anchors! They claim to be updated about anything and everything, be it religion, social issues, or politics! Active in over 20 WhatsApp groups on average, this age group passes news & information along like wildfire. However, they are less receptive and share news based on their own judgment and bias. 60+ years The least receptive of all, these people prefer not to hold discussions with different-minded people and strongly feel that WhatsApp groups are good platforms for gaining and sharing news. Least informed about fake news, they strongly feel that any news is always correct and that fact-checking is merely trivial. Newspaper Experiment So far, we developed a good understanding of “what people do and why they do it”. Now, our very next step is to discover “what people want to do”. Curious to know what people really wish to know and further share willingly, we spent hours scrolling through countless research methods only to let out a long sigh of despair!
However, a few minutes later, our brains churned out an interesting experiment! Why not let people design their own newspaper! (Source: Google) Role Play! Well, a bunch of subjects of all age groups was selected for this experiment, wherein they were asked to play the role of an editor of a daily national with the task of designing the front page of the newspaper. We provided the subjects with a blank newspaper-like sheet along with 20 news samples. The news pieces were all similar in size to avoid bias and were composed of news headlines and associated images, while dummy text took over the content. All the subjects were asked to go through the given news samples and select the ones that they would like to publish for the readers, and thereafter draw boxes on the given sheet as per their preferred sizes. Source: Author However, these participants were guided to draw larger boxes for news that they found important and place them in the center and towards the top, while less significant ones could be small and/or positioned lower. We even ensured that the folks thought out loud throughout the experiment while engaging them in conversations to know their thoughts and beliefs behind selecting some news and dropping others. Source: Author Indeed, it turned out to be an engaging and interesting exercise that generated a series of insights. 1. A common perception of almost all the individuals was their preference for positive news! To be clearer, the participants preferred the news that talked about the COVID-19 recovery rate over the one with the death rate. 2. Another major finding was that the subjects actively separated out news, such as that of Kangana Ranaut or the SSR suicide case, that was not directly relevant to their lives. 3. Also, age turned out to be a significant parameter of news consumption, as the younger age groups were concerned with development & social issues while the elderly group was heavily inclined towards political & religious content. A Social Media Experiment Very often we have noticed the elders of our family succumbing to the charms of fake forwards. Some of these seem utterly harmless, advising ayurvedic remedies for incurable diseases, while some are along the lines of hate speech and outright lies. Some of these aim to panic and confuse us, leading to political chaos, while some simply want to curtail our freedom. In our discussions, we came across a strikingly funny point of how people may be more likely to follow something if not doing so may unleash the wrath of god! We are sure all our Indian readers would've noticed the goddesses painted on tiles affixed on the dingy staircase to curb the infamous Indian habit of red-paan pichkari. Is goddess Lakshmi a bigger deterrent than the good ol’ ‘Yaha thookna mana hai’ or ‘Do not spit here’? The telltale signs of a fake forward- Jarring bright emojis vying to catch our attention Warnings of huge, dangerous consequences Fake names and numbers of apparent deputy collectors of the city who have issued the message in ‘public interest and safety’ An immediate call to action Irrelevant or blurred media inciting anger or uproar Source: giphy We know this, you know this, so why do we still have a mammoth problem? Who are the people more likely to be gullible and forward? What can be done? To better understand this and solidify our assumptions with data and research, we decided to conduct a social experiment. We formulated a fake forward of our own and forwarded it to a couple of WhatsApp groups!
(Kindly note that it was soon conveyed that the message was fake, so as to avoid a wildfire spread) Source: Author These are some scenarios that played out- The youth better understand the telltale signs of a fake forward Certain sections of the youth are also action-oriented and ask for proof concerning the same Family groups are more likely to believe, irrespective of their demography and education Hugely populated WhatsApp groups with a slightly older population are filled with an influx of messages every day, as members want to share good and helpful news within their circle and be seen as the first bearer of this news. It is also a way of staying connected with relatives and acquaintances. People may choose to believe certain news depending on who the sender is. A huge percentage of fake forwards in India is around politics and religion (traditions and practices) Source: Author Analyzing Insights After amassing bucketloads of insights from the multiple methods of user study described above, we listed all our insights, many of which were age-specific. Similar insights were grouped under one umbrella, and this is when patterns started emerging: multiple sentences that indicated one direction. Source: Author 3. The six categories that were now apparent were-
https://medium.com/@aditi.bhatt1306/part-2-the-fake-news-pandemic-is-here-to-stay-can-design-thinking-solve-this-issue-97a9b5d38b49
['Aditi Bhatt']
2020-12-15 11:33:40.641000+00:00
['UX Design', 'Infodemic', 'Fake News', 'Social Innovation', 'UX Research']
What does a user-centered eviction court summons look like?
If you are sued by your landlord to evict you from your home, how would you like to find out? The papers you get from the court — the Summons to the eviction trial, and the Complaint from your landlord about why they’re suing you — most often are dense, legalistic documents. These pieces of paper can set the tone for the eviction legal process. And they can communicate: is the court for you? If you show up to the eviction trial, are you going to be able to protect yourself and get your voice heard? Or is this going to be so intimidating & confusing that it’s not even worth it to show up? And even more fundamentally: can you even understand these documents? Is it clear that: you have been sued that you have rights and groups who can help you that if you don’t come to your court hearing you could be evicted by a sheriff? Our Legal Design Lab has collaborated with the Hamilton County Municipal Court in Ohio to reimagine how people receive these court documents. We took the current court eviction summons and did a series of multi-stakeholder workshops to reimagine its look, feel, and content to make it more user-centered. We created and refined a new summons, and have been piloting it in the court to see if we can increase tenants’ participation in the court process and use of legal services. Here is what the traditional court summons looks like, and what we found in our workshops that needed to be fixed.
https://medium.com/legal-design-and-innovation/what-does-a-user-centered-eviction-court-summons-look-like-6e88bba2bbc0
['Margaret Hagan']
2021-09-14 17:28:59.141000+00:00
['Eviction', 'Housing', 'Civic Engagement', 'Legal', 'Visual Design']
Guide to real-time visualisation of massive 3D point clouds in Python
3D Python Guide to real-time visualisation of massive 3D point clouds in Python Data visualisation is a big enchilada 🌶️: by making a graphical representation of information using visual elements, we can best present and understand trends, outliers, and patterns in data. And you guessed it: with 3D point cloud datasets representing real-world shapes, it is mandatory 🙂. The Drone 3D Point Cloud processed and visualised in this article. You will learn feature extraction, interactive and automatic segmentation while visualising in real-time and creating animations. © F. Poux However, when collected from a laser scanner or 3D reconstruction techniques such as photogrammetry, point clouds are usually too dense for classical rendering. In many cases, the datasets will far exceed the 10+ million mark, making them impractical for classical visualisation libraries such as Matplotlib. You can notice how slow it gets on the left (Open3D) compared to the right (PPTK), which uses an octree structure to accelerate the visualisation. Matplotlib would be even worse 😅. © F. Poux This means that we often need to go out of our Python script (thus using an I/O function to write our data to a file) and visualise it externally, which can become a super cumbersome process 🤯. I will not lie, that is pretty much what I did the first year of my thesis to try and guess the outcome of specific algorithms 🥴. Would it not be neat to visualise these point clouds directly within your script? Even better, connecting the visual feedback to the script? Imagine, now that the iPhone 12 Pro has a LiDAR, you could create a full online application! Good news, there is a way to accomplish this, without leaving the comfort of your Python environment and IDE. ☕ and ready? Step 1: Launch your Python environment. In the previous article below, we saw how to set up an environment with Anaconda easily and how to use the IDE Spyder to manage your code. I recommend continuing in this fashion if you set yourself up to becoming a fully-fledged Python app developer 😆. If you are using Jupyter Notebook or Google Colab, the script may need some tweaking to make the visualisation back-end work, and may deliver unstable performance. If you want to stay on these IDEs, I recommend looking at the alternatives to the chosen libraries given in Step 4. Step 2: Download a point cloud dataset In previous tutorials, I illustrated point cloud processing and meshing over a 3D dataset obtained by using photogrammetry and aerial LiDAR from Open Topography. I will skip the details on LiDAR I/O covered in the article below, and jump right to using the efficient .las file format. Only this time, we will use an aerial drone dataset. It was obtained through photogrammetry by making a small DJI Phantom 4 Pro fly over our University campus, gathering some images and running a photogrammetric reconstruction as explained here. The 3D point cloud available at the link below from a DJI Phantom 4 flight followed by a photogrammetry reconstruction process. © F. Poux 🤓 Note: For this how-to guide, you can use the point cloud in this repository, that I already filtered and translated so that you are in the optimal conditions. If you want to visualize and play with it beforehand without installing anything, you can check out the webGL version. Step 3: Load the point cloud in the script We first import the necessary libraries within the script (NumPy and LasPy), and load the .las file in a variable called point_cloud .
import numpy as np
import laspy as lp

input_path="D:/CLOUD/POUX/ALL_DATA/"
dataname="2020_Drone_M"
point_cloud=lp.file.File(input_path+dataname+".las", mode="r")

Nice, we are almost ready! What is great is that the LasPy library also gives a structure to the point_cloud variable, and we can use straightforward methods to get, for example, the X, Y, Z, Red, Blue and Green fields. Let us do this to separate coordinates from colours, and put them in NumPy arrays:

points = np.vstack((point_cloud.x, point_cloud.y, point_cloud.z)).transpose()
colors = np.vstack((point_cloud.red, point_cloud.green, point_cloud.blue)).transpose()

🤓 Note: We use a vertical stack method from NumPy, and we have to transpose it to get from a (3 x n) to an (n x 3) matrix of the point cloud. Step 4 (Optional): Eventual pre-processing If your dataset is too heavy, or you feel like you want to experiment on a subsampled version, I encourage you to check out the article below that gives you several ways to achieve such a task: Or the following course for extensive point cloud training: For convenience, if you have a point cloud that exceeds 100 million points, we can just quickly slice your dataset using:

factor=10
decimated_points_random = points[::factor]

🤓 Note: Running this will keep 1 row every 10 rows, thus dividing the original point cloud's size by 10. Step 5: Choose your visualisation strategy. Now, let us choose how we want to visualise our point cloud. I will be honest here: while visualisation alone is great to avoid cumbersome I/O operations, having the ability to include some visual interaction and processing tools within Python is a great addition! Therefore, the solution that I push is using a point cloud processing toolkit that permits exactly this and more. I will still give you alternatives if you want to explore other possibilities ⚖️. Solution A (Retained): PPTK The PPTK package has a 3-d point cloud viewer that directly takes a 3-column NumPy array as input and can interactively visualize 10 to 100 million points. It reduces the number of points that need rendering in each frame by using an octree to cull points outside the view frustum and to approximate groups of faraway points as single points. Simulation of what frustum culling does in an octree structure. Source: Classification and integration of massive 3D point clouds in a virtual reality (VR) environment. To get started, you can simply install the library using the Pip manager:

pip install pptk

Then you can visualise your previously created points variable from the point cloud by typing:

import pptk
import numpy as np
v = pptk.viewer(points)

At startup, the viewer organizes the input points into an octree. As the viewpoint is being manipulated, the octree is used to approximate groups of faraway points as single points and cull points outside the view frustum, thus significantly reducing the number of points being rendered. Once there are no more changes to the viewpoint, the viewer then proceeds to perform a more time-consuming detailed rendering of the points. © F. Poux Don't you think we are missing some colours? Let us solve this by typing in the console:

v.attributes(colors/65535)

The 3D point cloud with colour information in the PPTK viewer. © F. Poux 🤓 Note: Our colour values are coded on 16 bits in the .las file. We need the values in a [0,1] interval; thus, we divide by 65535. That is way better! But what if we also want to visualise additional attributes?
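Here is a minimal sketch of how this can look. It assumes, based on the PPTK documentation, that the viewer's attributes method accepts several attribute sets at once, which you can then cycle through with the [ and ] keys; the height channel below is simply the Z coordinate reused as a per-point scalar:

# A sketch, assuming points and colors exist as created in Step 3
# and v is the viewer opened above.
v.attributes(colors / 65535., points[:, 2])  # set 1: RGB, set 2: height
v.color_map('jet')  # scalar attribute sets are mapped through a colour ramp
# In the viewer, press "[" or "]" to switch between the attribute sets.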
Indeed, you just link your attributes to the viewer, and it will update on the fly. Example of visualising several attributes computed beforehand. 💡 Hint: Do not maximize the size of the window, to keep a nice framerate over 30 FPS. The goal is to have the best execution runtime while having a readable script. You can also parameterize your window to show each attribute with a certain colour ramp, managing the point size, putting the background black and not displaying the grid and axis information:

v.color_map('cool')
v.set(point_size=0.001,bg_color=[0,0,0,0],show_axis=0,show_grid=0)

Alternative B: Open3D For anybody wondering about an excellent alternative to read and display point clouds in Python, I recommend Open3D. You can use the Pip package manager as well to install the necessary library:

pip install open3d

We already used Open3D in the tutorial below, if you want to extend your knowledge on 3D meshing operations: This will install Open3D on your machine, and you will then be able to read and display your point clouds by executing the following script:

import open3d as o3d
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors/65535)
pcd.normals = o3d.utility.Vector3dVector(normals)
o3d.visualization.draw_geometries([pcd])

The 3D Point Cloud visualized in Open3D. Note how the normals are nicely used to enhance the geometry visually. © F. Poux Open3D is actually growing, and you can have some fun ways to display your point cloud to fill eventual holes, like creating a voxel structure:

voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.40)
o3d.visualization.draw_geometries([voxel_grid])

A 3D voxel representation of the point cloud, where each voxel represents a 40 by 40 cm cube. © F. Poux 🤓 Note: Why is Open3D not the choice at this point? If you work with datasets under 50 million points, then it is what I would recommend. If you need to have interactive visualization above this threshold, I recommend either sampling the dataset for visual purposes, or using PPTK, which is more efficient for visualizing, as it has the octree structure created for this purpose. Other (Colab-friendly) alternatives: Pyntcloud and Pypotree If you would like to enable simple and interactive exploration of point cloud data, regardless of which sensor was used to generate it or what the use case is, I suggest you look into Pyntcloud or PyPotree. These will allow you to visualise the point cloud in your notebook, but beware of the performance! Pyntcloud actually relies on Matplotlib, and PyPotree demands I/O operations; thus, both are actually not super-efficient. Nevertheless, I wanted to mention them because for small point clouds and simple experiments in Google Colab, you can integrate the visualisation. Some examples:

### PyntCloud ###
conda install pyntcloud -c conda-forge

from pyntcloud import PyntCloud
pointcloud = PyntCloud.from_file("example.ply")
pointcloud.plot()

### PyPotree ###
pip install pypotree

import pypotree
import numpy as np
xyz = np.random.random((100000,3))
cloudpath = pypotree.generate_cloud_for_display(xyz)
pypotree.display_cloud_colab(cloudpath)

Step 6: Interact with the point cloud Back to PPTK. To make an interactive selection, say the car on the parking lot, I will move my camera to a top view (shortcut is 7), and I will make a selection by dragging a rectangle selection holding Ctrl + LMB. 💡 Hint: If you are unhappy with the selection, a simple RMB will erase your current selection(s).
Yes, you can make multiple selections 😀. Once the selection is made, you can return to your Python console and then get the selection's point identifiers.

selection=v.get('selected')

This will actually return a 1D array like this: The selection is an array containing the index of every point selected. © F. Poux You can actually extend the process to select more than one element at once ( Ctrl + LMB ) while refining the selection by removing specific points ( Ctrl + Shift + LMB ). Creating multiple selections from the point cloud. © F. Poux After this, it becomes effortless to apply a bunch of processes interactively over your selection variable that holds the indexes of the selected points. Let us replicate a scenario where you automatically refine your initial selection (the car) between ground and non-ground elements. Step 7: Towards an automatic segmentation In the viewer that contains the full point cloud, stored in the variable v , I make the following selection with selection=v.get('selected') : Step 1: We select points from the initial 3D point cloud. © F. Poux Then I compute normals for each point. For this, I want to illustrate another key takeaway of using PPTK: the function estimate_normals , which can be used to get a normal for each point based on either a radius search or the k-nearest neighbours. Don't worry, I will illustrate these concepts in-depth in another guide, but for now, I will run it using the 6 nearest neighbours to estimate my normals:

normals=pptk.estimate_normals(points[selection],k=6,r=np.inf)

💡 Hint: Remember that the selection variable holds the indexes of the points, i.e. the “line number” in our point cloud, starting at 0. Thus, if I want to work only on this point subset, I will pass it as points[selection] . Then, I choose the k-NN method using only the 6 nearest neighbours for each point, by also setting the radius parameter to np.inf , which makes sure I don't use it. I could also use both constraints, or set k to -1 if I want to do a pure radius search. This will basically return this: A sample of the normals for each point. © F. Poux Then, I want to filter AND return the original points' indexes that have a normal not colinear to the Z-axis. I propose to use the following line of code:

idx_normals=np.where(abs(normals[...,2])<0.9)

🤓 Note: The normals[...,2] is a NumPy way of saying that I work only on the 3rd column of my n x 3 normals matrix, holding the Z attribute of the normals. It is equivalent to normals[:,2] . Then, I take the absolute value as the comparison point because my normals are not oriented (thus they can point toward the sky or towards the earth's centre), and I will only keep the ones that answer the condition <0.9 , using the function np.where() . To visualise the results, I create a new viewer window object:

viewer1=pptk.viewer(points[idx_normals],colors[idx_normals]/65535)

The 3D Point Cloud segment after the automatic normal filter. See how some points on the roof and the overall car structure were dropped. © F. Poux As you can see, we also filtered some points that are part of the car. This is not good 🤨.
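The reason is easy to see: points on the car's roof and hood lie on near-horizontal surfaces, so their normals are almost vertical, just like the ground's. Here is a toy verification of the filter logic, our own illustration with made-up normal vectors:

import numpy as np
toy_normals = np.array([[0., 0., 1.],     # ground or car roof: vertical normal
                        [0.9, 0., 0.3],   # car door: mostly horizontal normal
                        [0., 0., -0.98]]) # unoriented vertical normal
keep = np.abs(toy_normals[..., 2]) < 0.9
print(keep)  # [False  True False]: both vertical normals are dropped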
Thus, we should combine the filtering with another filter that makes sure only the points close to the ground are chosen as hosts of the normals filtering: idx_ground=np.where(points[selection][...,2]>np.min(points[selection][...,2])+0.3) idx_wronglyfiltered=np.setdiff1d(idx_ground, idx_normals) idx_retained=np.append(idx_normals, idx_wronglyfiltered) viewer2=pptk.viewer(points[selection][idx_retained],colors[selection][idx_retained]/65535) The 3D Point Cloud filtered for the points with a vertical normal close to the initial segment's lowest Z value. © F. Poux This is nice! And now, you can just explore this powerful way of thinking and combine any filtering (for example playing on the RGB to get rid of the remaining grass …) to create a fully interactive segmentation application. Even better, you can combine it with 3D Deep Learning Classification! Ho-ho! But that is for another time 😉. Step 8: Package your script with functions Finally, I suggest packaging your script into functions so that you can directly reuse parts of it as blocks. We can first define a preparedata() function that will take as input any .las point cloud and format it: def preparedata(): input_path="D:/CLOUD/OneDrive/ALL_DATA/GEODATA-ACADEMY/" dataname="2020_Drone_M_Features" point_cloud=lp.file.File(input_path+dataname+".las", mode="r") points = np.vstack((point_cloud.x, point_cloud.y, point_cloud.z)).transpose() colors = np.vstack((point_cloud.red, point_cloud.green, point_cloud.blue)).transpose() normals = np.vstack((point_cloud.normalx, point_cloud.normaly, point_cloud.normalz)).transpose() return point_cloud,points,colors,normals Then, we write a display function pptkviz that returns a viewer object: def pptkviz(points,colors): v = pptk.viewer(points) v.attributes(colors/65535) v.set(point_size=0.001, bg_color=[0,0,0,0], show_axis=0, show_grid=0) return v Additionally, and as a bonus, here is the function cameraSelector , to get the current parameters of your camera from the opened viewer: def cameraSelector(v): camera=[] camera.append(v.get('eye')) camera.append(v.get('phi')) camera.append(v.get('theta')) camera.append(v.get('r')) return np.concatenate(camera).tolist() And we define the computePCFeatures function to automate the refinement of your interactive segmentation: def computePCFeatures(points, colors, knn=10, radius=np.inf): normals=pptk.estimate_normals(points,knn,radius) idx_ground=np.where(points[...,2]>np.min(points[...,2])+0.3) idx_normals=np.where(abs(normals[...,2])<0.9) idx_wronglyfiltered=np.setdiff1d(idx_ground, idx_normals) common_filtering=np.append(idx_normals, idx_wronglyfiltered) return points[common_filtering],colors[common_filtering] Et voilà 😁, you now just need to launch your script containing the functions above and start interacting with your selections using computePCFeatures , cameraSelector , and more of your creations: import numpy as np import laspy as lp import pptk #Declare all your functions here if __name__ == "__main__": point_cloud,points,colors,normals=preparedata() viewer1=pptkviz(points,colors) It is then easy to call the script and then use the console as the bench for your experiments.
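To illustrate, a hypothetical console session could chain these functions on a fresh selection — a minimal sketch, assuming the viewer from the main block is open and that you just selected the car (the sub_* and seg_* names are placeholders I introduce here, not from the original script):
selection = viewer1.get('selected')  # indexes of your interactive selection
sub_points, sub_colors = points[selection], colors[selection]
seg_points, seg_colors = computePCFeatures(sub_points, sub_colors, knn=6)
viewer2 = pptkviz(seg_points, seg_colors)  # display the refined segment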
For example, I could save several camera positions and create an animation: cam1=cameraSelector(v) #Change your viewpoint then --> cam2=cameraSelector(v) #Change your viewpoint then --> cam3=cameraSelector(v) #Change your viewpoint then --> cam4=cameraSelector(v) poses = [] poses.append(cam1) poses.append(cam2) poses.append(cam3) poses.append(cam4) v.play(poses, 2 * np.arange(4), repeat=True, interp='linear') A linear interpolation between 4 keyframes within PPTK of the point cloud. © F. Poux Conclusion You just learned how to import, visualize and segment a point cloud composed of 30+ million points! Well done! Interestingly, the interactive selection of point cloud fragments and individual points performed directly on GPU can now be used for point cloud editing and segmentation in real-time. But the path does not end here, and future posts will dive deeper into point cloud spatial analysis, file formats, data structures, segmentation [2–4], animation and deep learning [1]. We will especially look into how to manage big point cloud data as defined in the article below. My contributions aim to condense actionable information so you can start from scratch to build 3D automation systems for your projects. You can get started today by taking a course at the Geodata Academy. References 1. Poux, F., & Ponciano, J.-J. (2020). Self-Learning Ontology For Instance Segmentation Of 3d Indoor Point Cloud. ISPRS Int. Arch. of Pho. & Rem. XLIII-B2, 309–316; https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-309-2020 2. Poux, F., & Billen, R. (2019). Voxel-based 3D point cloud semantic segmentation: unsupervised geometric and relationship featuring vs deep learning methods. ISPRS International Journal of Geo-Information, 8(5), 213; https://doi.org/10.3390/ijgi8050213 3. Poux, F., Neuville, R., Nys, G.-A., & Billen, R. (2018). 3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture. Remote Sensing, 10(9), 1412. https://doi.org/10.3390/rs10091412 4. Poux, F., Neuville, R., Van Wersch, L., Nys, G.-A., & Billen, R. (2017). 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences, 7(4), 96. https://doi.org/10.3390/GEOSCIENCES7040096
https://towardsdatascience.com/guide-to-real-time-visualisation-of-massive-3d-point-clouds-in-python-ea6f00241ee0
['Florent Poux']
2021-04-11 21:51:22.569000+00:00
['Editors Pick', 'Hands On Tutorials', '3d', 'Python', 'Point Cloud']
Using scooter data to make room for micromobility on our streets and in our cities
The rise of micromobility has offered us the chance to rethink how we use our streets — with access to better data Since shared electric scooters and bikes hit the streets, urban planners have grappled with how best to manage and plan for these new services that have taken cities by storm. One of the exciting opportunities that has emerged along with the rise of micromobility is newfound access to vast amounts of data for cities to plan for the future of transportation, and to rethink how we prioritize the space on our streets. How we used scooter data to identify new opportunities for protected lanes Populus is a trusted third-party data platform solution that securely delivers data from operators of shared scooters, bikes, and cars to cities for transportation policy and planning. We ingest vehicle location and trip data that new city regulations require of mobility operators in exchange for the privilege to deploy services on their streets, sidewalks, and curbs. Now delivering data in over 40 cities, with the world's largest micromobility operators, the Populus team specializes in extracting key insights that urban planners need and want to determine how to integrate growing fleets of shared services into the fabric of our cities — a job that only the public sector can do. That's why we're excited to introduce Populus Routes, a new advanced tool in our Populus Mobility Manager platform that enables cities to easily visualize where scooter trips take place. Armed with validated data about hundreds (and often millions) of scooter trips in their cities, planners can decide where to place protected bike and scooter lanes for people to safely use these services. Populus Routes now delivers data for millions of scooter trips Populus receives vehicle trip data from bike and scooter operators in standard formats that we process to help cities view popular origins, destinations, and travel routes. Using trip-level data, we aggregate volumes across road segments and paths throughout a city. The aggregation of trips prevents the ability to identify individual trips or people, which has become a growing privacy concern. With data securely processed and obfuscated by transportation planners and scientists with decades of experience, operators are assured that their data is protected, and cities are confident that they have access to the essential information they need to manage these new services. The road ahead We are excited to continue to partner with cities and fleet operators to help them work together to expand access to sustainable mobility solutions in cities. In addition to our parking management tools, which help cities and operators identify and enforce designated parking areas for scooters to ensure pedestrian safety, Populus is continuously expanding features for cities and mobility operators to deliver shared mobility that is safe, equitable, and sustainable.
https://medium.com/populus-ai/https-medium-com-reginaclewlow-harnessing-scooter-data-to-build-new-bike-lanes-13967c36a0fb
['Regina Clewlow']
2019-07-10 20:02:31.438000+00:00
['Mobility', 'Bikes', 'Data', 'Cities', 'Transportation']
The Service Reactor
The service reactor is this work-in-progress model to demonstrate how a virtuous user research cycle and a commitment to discovery-validated delivery will generate enough proven insights to power a small city. That's a lot of made-up vocabulary designed to just consolidate this shit to a single sentence, so let's break this down. A "virtuous user research cycle" describes a system where your effort to make sense of some data results in the design of the next test to perform, the results of which return to the system. "Discovery-validated delivery" is a requirement that end-user facing features of a product or service won't be pursued until their demonstrable need and solution can be proven by existing user research. A chain reaction beginning with the cataloging of raw data — survey answers, interview transcripts, a/b results, and so on — creates tactical and strategic insights, some aspects of which require more validation, thus foreshadowing the next round of tests. Like a nuclear reactor, discovery-validated delivery creates pressure to perform those tests, which continues the chain reaction. The chief product of the service reactor is insights that we use to validate our business decisions. At small scales, examples of these insights are: evidence we need to rethink our menu structure because it's confusing users, indication that users need a way to opt-in to plain-text emails, validation that this call-to-action works better than that one. But as the catalog grows over time, new patterns emerge among unrelated sets of data, and that compounding value directly correlates to the scale of new insights. These are demonstrable proof that there is need among the userbase for entirely new services, let alone features. What's more, because the service reactor creates insights as the byproduct of a process rather than insights that are specifically sought-out, the resulting service ideas may be orthogonal to your existing service provisions. This is the drill maker getting into the business of designing entertainment units*. A service reactor powers the "innovation mill." The most important ethic I'm trying to convey with the service reactor is that while it is a vision to motivate an organization's investment in ResearchOps, it is fundamentally user centric. Over time, there is no part of a service or product that is not derived from user research. The reactor ionizes the air with user centricity. You can't help but breathe.
https://medium.com/the-metric/the-service-reactor-5cfb3c3b1e98
['Michael Schofield']
2019-09-10 14:12:14.847000+00:00
['Design', 'Service Design', 'User Experience', 'Podcast', 'UX']
The Restaurant Week Interviews: 7 questions for Chef Manuel Martinez of La Viga Seafood
Chef Manuel Martinez of La Viga Culinary Inc. (Photo by Rob Watkins) For readers who might not be familiar, tell me a little bit about you and your restaurants. I've been in the industry for the last 25 years, working as a cook, an assistant chef and then as a chef. After that, I began opening restaurants for other people, and then opening places for myself. The last nine years I've spent working for myself. I'm currently operating three restaurants that I own here on the Peninsula — two in Redwood City and one in Palo Alto. I'm a chef by trade, but I'm also the mind behind everything here. Each restaurant has their own chef, so they do the kitchen operations, and I oversee them. Tell me about the year behind us — what's been the trajectory of your restaurants? The last year has been a rough year. It's nothing like I've seen before. I was in the industry during the housing market crash, which was worse than the dot-com bust, and this has been worse than both of those. I've been through a lot, but never a pandemic before. La Viga has been open nine years, LV Mar will turn eight this year. It's been a year since we opened San Agus. Officially we have opened and closed the restaurants a couple of times, the first time being back in March of 2020, and then again in mid-December. I've opened and closed LV Mar at least four times, because we experienced some COVID cases there, and the first thing we did was close and make sure everybody was safe. Ultimately, we were shut down twice, but we closed on our own another two times during the pandemic.
https://thesixfifty.com/the-restaurant-week-interviews-7-questions-for-chef-manuel-martinez-of-la-viga-seafood-3bbe20452b25
['Sarah Klearman']
2021-05-03 17:40:02.119000+00:00
['Bay Area', 'Restaurant Business', 'San Francisco', 'Silicon Valley', 'Foodies']
Review: Wolfwalkers is a masterful hand-drawn fantasy
★★★★★ A young apprentice hunter and her father journey to Ireland to help wipe out the last wolf pack. But everything changes when she befriends a free-spirited girl from a mysterious tribe rumored to transform into wolves by night. (IMDb) There's a chance you signed up for a year of free Apple TV+ within the last 12 months, but you may not have used it that much since joining. While 2020 has derailed Apple's plans for its platform, though, there have still been worthwhile movies and shows to watch on the service. And, now it has its first great animated film in Wolfwalkers. This Celtic-inspired adventure is a magical story that is ideal for kids. But it has more than enough depth and complexity to appeal to all audiences too. The movie, from Irish animation studio Cartoon Saloon, also provides a welcome break from the photo-real animation that has been so ubiquitous in mainstream movies for so many years. Wolfwalkers blends traditional drawing techniques with digital methods, and the result is beautiful. It's possibly the most distinctive and personality-filled animated release since The Tale of the Princess Kaguya from Japan's Studio Ghibli. Set in Kilkenny, Ireland in 1650, the story makes several allusions to the politics of the time through the villainous Lord Protector (a clear stand-in for Oliver Cromwell, voiced by Simon McBurney). Our main protagonist, Robyn (Honor Kneafsey), is a young girl from England. She aspires to be like her father, expert wolf hunter Bill Goodfellowe (Sean Bean). The Lord Protector has hired Bill to clear the forest of the wolves completely. Things get interesting when Robyn meets and befriends Mebh Og MacTire (Eva Whittaker), a girl who lives in the forest and can transform into a wolf avatar when she sleeps. She's a "wolfwalker", and it's through her that Robyn comes to see the world through another's eyes. And she realises how cruel the Lord Protector's campaign against the wolves is. It's a return to traditional hand-drawn animation Co-director Tomm Moore has cited Ghibli's Kaguya as an influence, perhaps unsurprisingly. While the 2013 Japanese film emulates watercolour paintings for its art style, Wolfwalkers embraces the look of medieval European artwork throughout. It's a movie where you can see the craft onscreen, whether that's freeform pencil sketches, the clear difference between angular and soft shapes in the juxtaposition between the town and the forest, or the kinetic and expressive style shown when the wolves run. The two-dimensional movie has a limited and controlled colour palette of earthy browns and greens. And that makes it all the more shocking and eye-catching when an incongruent colour, like red, splashes across the screen in the third act. In this interview with IndieWire, co-director Ross Stewart talks about how his team planned out the action and built environments in virtual reality. They then animated it on paper with charcoal and pencil. It makes for an incredible and unique look that's unlike any other animated film in a long time. It's a rich story with a lot of thematic depth This is a film that embraces metaphor throughout. When the character of the Lord Protector screams about how "what cannot be tamed must be destroyed" or how people need to stop believing in pagan myths, we can see that he's trying to tear down people and their culture, and succeeding.
This kind of political commentary is rooted in history but still relevant to this day, as we see how leaders continue to use fear and harsh rhetoric to motivate their supporters. In one scene, we see the Lord Protector bring a wolf up on a stage to both scare the people and prove his own strength to them. Beyond that political allegory, Wolfwalkers is also concerned with nature and the importance of humankind existing in equilibrium with the animals of the world. This is yet another way that it feels like the work of Studio Ghibli and, throughout, it emphasises the significance of environmental preservation. It's a captivating, character-driven folk tale Some may point out that Wolfwalkers can be predictable, but that misses the point of the folk tales, myths and legends that inspired the film. An effective folk tale is often instructive, and the predictability is what helps the audience to anticipate the flow of the narrative and, in turn, to learn the lesson of the story. Here, it's easy for us to recognise the Lord Protector as an evil and brutal man who uses the power of fear to rule. And, we can also identify Robyn's naiveté and her father's feelings of dread, over-protectiveness and world-weariness. Being able to see these character archetypes is important for how the film tells its tale. On the surface, it's about a ruler trying to use his power to destroy the forest and kill the wolves, but it's also about colonisation and the way he subjugates the people he rules. The audience can see the comparison between the wolves and the Irish people, and that's one way the film is powerful and gripping. The developments are also driven forward by the actions of Robyn as she fights back against the role that she's forced to fill in this world. Through both Robyn and Mebh, we see girls in this film taking power and thinking for themselves, embracing rebellion in the face of tyranny. Verdict This Irish folk tale is an absolute delight that showcases gorgeous traditional animation techniques, a magical sense of atmosphere and outstanding vocal performances. It tells an engaging story full of history, myth, adventure, excitement and emotion. Cast and crew Directors: Tomm Moore and Ross Stewart Writers: Jericca Cleland, Will Collins, Tomm Moore, Ross Stewart Stars: Sean Bean, Honor Kneafsey, Eva Whittaker, Maria Doyle Kennedy, Simon McBurney The trailer for Wolfwalkers on Apple TV+ Subscribe to my weekly newsletter, and head here to follow me on Twitter.
https://simonc.me.uk/review-wolfwalkers-is-a-masterful-hand-drawn-fantasy-32e998bae7b7
['Simon Cocks']
2020-12-13 14:42:51.934000+00:00
['Film', 'Movie Review', 'Film Reviews', 'Movies', 'Cinema']
Despite a record turnout, Native American voters faced challenges new and old to casting a ballot in the 2020 election
By Grace Panetta On October 20, Diné activist and Navajo Nation citizen Allie Young saddled up on horseback with a group from her community for a 10-mile trail ride, a journey that took two hours, to vote in Kayenta, Arizona. Young, co-founder of Protect the Sacred, lives in Navajo County, which spans nearly 9,960 square miles of land in Eastern Arizona, and includes communities particularly hard hit by the COVID-19 pandemic. "Before we did something as significant as casting our ballots, I wanted us to ride those two hours upon our homelands to really reflect on what we're fighting for," she said. "And then along the way, having conversations with some of the riders, I know that this election, many Native voters went out to vote because of what we experienced these past few months in the pandemic." After an election cycle spent directly organizing her community, registering her neighbors to vote, and getting them to the polls, voting herself required more advance planning and time than it did for most Americans. Young, left, and a group of voters at the Kayenta Township parking lot early voting location on October 20, 2020. Photo: Talia Mayden, HUMAN.us Navajo County maintained 12 early voting sites, most of which operated for limited dates and times. Only one was open throughout the entire early voting period for a full 9-hour business day, from 8 a.m. to 5 p.m. The nearest early voting location that Young traveled to, in the Kayenta Township parking lot, closed on October 20, 10 days before early voting ended, and was only open five hours a day, from 10 a.m. to 3 p.m. "Shutting down that early voting site a full 10 days before the early voting period ended was very frustrating because Kayenta is one of the bigger towns in Navajo Nation and in that Northern area," Young said. One obstacle to having voting sites directly on reservations is a lack of infrastructure that allows county election officials to comply with the Americans with Disabilities Act (ADA). In states like Arizona, counties have expanded the use of curbside and drive-through voting to work around those challenges. "If you have a chapter house and the parking lot is not paved, that doesn't meet ADA requirements…and so at those sites where it's a problem, curbside voting helps to address that," Arizona Secretary of State Katie Hobbs told Insider. While any voter can vote by mail without an excuse in Arizona, voting by mail isn't a feasible option for many residents of rural Native communities. An estimated 33% of Native Americans and Alaska Natives live in hard-to-count census tracts, which the Census Bureau defines as areas with census mail return rates of less than 73%. Many residents of reservations and rural areas don't have traditional street addresses and do not receive consistent US mail delivery directly to their homes, and opt to get their mail at Post Office boxes instead. Young said that in her community, residents rely on receiving their mail to PO boxes in the next closest town, which is also about a 10-mile trip each way. The length of the journey means that some residents only make time to go to pick up their mail once or twice a week. Navajo County expanded its use of ballot drop boxes in 2020, Hobbs told Insider, but Young said that option wasn't accessible everywhere. "In Navajo Nation, we have 110 communities each governed by chapter houses and chapter presidents," she explained.
"We have 110 chapter houses across the nation, and I don't understand why ballot drop boxes can't be at each of those chapter houses…that would certainly help in those very rural areas." The Navajo County Recorder's office did not respond to Insider's request for comment. Allie Young on an October 20 trail ride to vote in Navajo County, Arizona. Photo: Courtesy of Talia Mayden, HUMAN.us A 'remarkably consistent' pattern of hurdles Despite fears that the pandemic would lead to widespread administrative chaos, the 2020 election was remarkably smooth, secure, and fair, with record voter turnout — including in Native American communities across the country like Young's, which is credited with helping deliver President-elect Joe Biden's victory in Arizona. But overcoming obstacles to cast a ballot in the first place is not an uncommon experience for many of the estimated 6.8 million Native American individuals residing in the United States and members of the 574 federally-recognized Native tribes, according to activists and experts interviewed by Insider. An extensive report conducted by the Native American Voting Rights Coalition, a project of the Native American Rights Fund (NARF), based on two years of research in Native communities across the country and released in June, found that Native voters still face inequitable access to registering to vote, casting a vote, and having their vote count. "What we saw over the course of those two years was a remarkably consistent picture," Jacqueline De León, a staff attorney at NARF and one of the authors of the "Obstacles at Every Turn" report, told Insider. "It was surprising in a lot of ways, how consistent the discrimination and barriers are across the country." Despite the Native American and Alaska Native populations growing as a share of the population, Native Americans are still registered to vote at lower rates than voting-age eligible non-Natives. Well into the mid-20th century, states continued to refuse to acknowledge Native Americans as citizens of their states and deployed blatantly discriminatory tools including poll taxes, literacy tests, banning residents of reservations or tribal members from voting, and in some cases, claiming that Native Americans were wards of the state and thus didn't have a right to vote. The Voting Rights Act of 1965 outlawed many discriminatory barriers and mandated states provide more protections and resources for vote-suppressed minority groups, including Native Americans. But still, a combination of outright discrimination, systemic underinvestment in election administration, especially in lower-income and rural areas, and a lack of consideration for and representation of Native communities persists today. "About 70% of our population lives on reservations or in rural areas adjacent to reservation land. And we often take for granted, if we're in cities, that a polling location is within a mile or two of where you live. Whereas with some reservations, the nearest place to go cast your ballot might be 60 to 80 miles away, one way," Crystal Echo Hawk, a citizen of the Pawnee Nation in Oklahoma and executive director of nonprofit IllumiNative, told Insider.
Voters riding to the polls in Navajo County, Arizona on Election Day, November 3 (Allie Young pictured in black). Photo: Larry Price The pandemic exacerbated physical and digital divides to voting In states like Nevada and Montana, which opted to send nearly every registered voter a mail ballot, De León and her team moved quickly with litigation to ensure that Native tribes — which face greater challenges with voting by mail — were also able to vote in-person. "It was kind of remarkable to see so many county clerks independently come to the conclusion that they couldn't provide [in-person voting] services in Indian Country, but they could provide services at their county seats," she said. NARF won two cases in Montana, one lawsuit that struck down a law that prohibited third-party ballot collection and another on behalf of the Blackfeet Nation requiring Pondera County to open an in-person voting location on the Blackfeet Reservation. "It's reflective of the fact that the people that are running the elections don't consider the tribal communities like they consider themselves," De León said of the hurdles to voting in Montana. "None of the reservations got Election Day access. In all of them, we had to compromise and get access only in the weeks before…to satellite voting offices," she continued, adding that tribes in Montana also had limited access to ballot drop boxes because of a state law requiring them to be staffed by two workers at all times. "For state and local officials, it's realizing that Native Americans are everywhere, we're not some small, invisible population," Echo Hawk said. "Here in Oklahoma, we're about 11% of the population. Our community needs to have a seat at the table." Sen. Jon Tester, D-Mont., talks with constituents before a parade at Crow Fair in Crow Agency, Mont., on August 19, 2018. Photo: Tom Williams/CQ Roll Call via Getty Images Alaska Natives are estimated to make up nearly 16% of the population in the state, but members of Alaska Native communities also faced challenges casting a ballot partly due to the sheer size of the state and the remote locations of some villages. "We continue to have issues around language assistance, with access to early voting..and problems recruiting poll workers. None of this is new; it isn't because of the pandemic, these are things that happen every single year," Kendra Kloster, a voting rights advocate and the executive director of Native People's Action, told Insider. In Alaska's August primary, Kloster said, one Native community didn't get ballots at all because the Division of Elections didn't know there were voters living there. In November, COVID-19 cases increased throughout the state, and many villages went into lockdown mode. A lack of communication from election authorities, Kloster and NPA said, left voters confused as to whether they could vote in-person without violating the lockdown restrictions or risking arrest. On top of that, election workers in many Native villages weren't receiving necessary personal-protective equipment for working the polls, and some voters reported not receiving mail ballots they requested. "Another individual — and this isn't the first time I've heard this — said their polling location was across the river and the river wasn't frozen enough to go over and vote. That's not something new, but that's a problem," Kloster told Insider.
Geographic and digital barriers make it difficult for voters to register online and for election officials to conduct basic voter outreach and education, leaving much of that work to Native activists and organizers like Kloster. "There are many communities that don't have good Internet access even in parts of urban Alaska," Kloster said. "Out in rural Alaska, the voter guides on the Internet could take hours to download." The NAVRC report found that about 90% of reservation lands lack broadband Internet penetration, entrenching a digital divide, as well as a physical one, between Native communities and trusted sources of election information. "Some of the other things I heard when having conversations with people around voting has been lack of information, over the years, getting lots of different comments of, 'Well, we weren't sure who was on the ballot,' or they didn't receive voter guides," Kloster said. The Alaska Division of Elections did not respond to Insider's request for comment. Campaign signs for President-elect Joe Biden and Sen. Mark Kelly in Arizona. Photo: Talia Mayden, HUMAN.us 'There's a target.' While Native American voters have played a critical role in electing many top Democrats, like Biden and Sen. Mark Kelly in Arizona, Sen. Jon Tester in Montana, and former Sen. Heidi Heitkamp of North Dakota in 2012, Native Americans and Alaska Native voters are not a monolithic voting group. In 2020, Alaska Natives played an important role in reelecting Republican Sen. Dan Sullivan and Rep. Don Young in Alaska. And majority-Native precincts in places like Robeson County, North Carolina swung to President Donald Trump after the president backed legislation that would grant federal recognition to the region's Lumbee Tribe. Native Americans exerting political power has often been followed by efforts to suppress their voting rights. In 2013, the year after Native voters helped deliver Heitkamp's victory in North Dakota, the state enacted a stricter voter ID law requiring voters' identification to have a residential address, which most immediately affected Native voters on reservations. In the spring of 2020, the Spirit Lake Nation and Standing Rock Sioux Tribe, represented by NARF and the Campaign Legal Center, entered into a consent decree with the state to ease some of the burdens posed by the law. De León said that looking forward, the team at NARF will be on guard for efforts by states to restrict voting in ways that disproportionately affect Native voters. "Though the Native turnout was 'successful,' and it was in a lot of ways successful and higher than it has previously been, it doesn't mean that there's equitable access, it doesn't mean that every voter is voting, and it certainly doesn't mean that every vote that was cast was cast reasonably easy," she said. For more great stories, visit Business Insider's homepage.
https://medium.com/business-insider/despite-a-record-turnout-native-american-voters-still-face-barriers-a363c19de23a
['Business Insider']
2020-12-15 22:58:58.121000+00:00
['Voting', 'Native Americans', 'Election 2020', 'Voting Rights', 'Politics']
Drawing: Smell the Roses
Drawing by Rolli It’s important to remember — and easy to forget — to smell the roses. You should try it, some time. Artist’s Note №1 This drawing will earn no money (Medium recently all-but-demonetized poetry, cartoons, flash fiction and other short articles). Please consider buying me a coffee. More coffee=more drawings for you to enjoy. Artist’s Note №2 This drawing is brought to you by the letter “D.” “D” is for Dr. Franklin’s Staticy Cat and Other Outrageous Tales, my collection of humorous stories and drawings for children. Artist’s Note №3 My new one-man Medium magazine is called — Rolli. Subscribe today. Artist’s Note №4 From now on, I’m letting my readers determine how often I post new material. When this post reaches 1000 claps — but not before — I’ll post something new.
https://medium.com/pillowmint/drawing-smell-the-roses-537680781ff0
['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites']
2020-01-27 19:27:41.591000+00:00
['Drawing', 'Cities', 'Art', 'Self', 'Life']
Anti-Leadership During C-19
Or, how to wreck morale and lose employees Photo by Mika Baumeister on Unsplash I work in an essential business and have a toxic boss. I knew she was toxic before I took the job — it's all over Glassdoor and Indeed. After she called to schedule an interview, I spoke to colleagues in my industry, and was told that she is a challenging employer. Other people who had worked for her used much less flattering descriptors. Apparently, she knows she is difficult as she offered above-average pay for the position. I accepted, deciding that I would work hard, save money, and leave when it became necessary. The signs of poor leadership were clear. During my first week, an employee asked to take me out to coffee to talk. A nice gesture, right? But no, the employee had been nominated by the rest of the staff for a specific mission. She was there to tell me the issues at the workplace and ask that I stay for at least a few months before quitting. She went on to explain that the employees needed continuity. With a little research, I understood why the coffee date. At that time, the average employee tenure was less than 90 days. The business had gone through 4 managers (my position) in 9 months. The preferred way of quitting was to simply walk out and not return. Despite the challenges of a toxic, racist, xenophobic, narcissistic boss, I came to enjoy my job. I named Fridays "Fire Fridays" because the boss regularly stormed into my office, demanding that I terminate a staff member for an imagined offense. Somehow, I managed to calm her down and talk her out of every firing. I held regular staff meetings, and we implemented needed improvements. The team was smart and dedicated. Soon we had begun to keep employees longer than 90 days. I felt that the morale and overall atmosphere of the business were on an upswing. Then came COVID-19. At first, my toxic boss tried to pretend it was business as usual. She told me to stop ordering bleach and sanitizing wipes (I wasn't hoarding, but we go through them quickly, and I was ordering a little more than usual). She kept talking to the staff about the flu season and how this disease is just the flu. When Italy began to be in the news, she explained that Italy is basically a third world country. Of course, she said, Italy would struggle with Coronavirus. She then told me how Asian culture is just dirty, and that is why this disease began there. The US has the best medical system in the world, she proclaimed, and we don't have to worry. I pushed back, explaining how none of the employees had concierge medical care like she does. And that in the US only the wealthy, such as herself, have access to excellent medical care. When employees started asking to wear masks at work, the boss locked the masks and other protective equipment (PPE) in her office. I liberated the masks and made sure all employees had one fresh one per week (I work in the medical field, but not human medicine). She told me that I needed to count the masks daily because she felt employees were going to steal them and sell them on the black market. I asked her why we employ people she thinks are thieves, and she said, of course, we don't have thieves on our team, but SOMEONE would steal them, Trump said so. When I pointed out the illogic of her statement, she stormed out of the office. The order came down from the governor that those over 65 should stay at home. At 67, she decided it didn't apply to her, nor did the closure of parks and beaches.
She still boasts about going walking in the closed parks without a mask because COVID-19 is just the flu. As we are an essential business, I wrote and re-wrote our COVID safety protocols. I read everything the CDC, the state and our industry-specific resources had published. I solicited input from colleagues. I met with the staff to make sure everyone would feel safe with the procedures we were putting in place. I discussed the safety guidelines with the boss and received her agreement on every point. The next day she ignored the plan, ignored the social distancing rules, and told me I could not work from home because the employees will talk to her instead of me and she hates it when they talk. I told her I had an underlying chronic condition and would work from home or quit, her choice. Grudgingly she gave me permission to work from home, but not before telling me she doesn't believe I have a chronic health condition as I "don't look sick." Then the governor shut down the state. We all were in panic mode, but worrying about keeping the staff safe and employed helped me to focus on something other than my own worries about my family and my safety. I got the employees situated with the newest protocols from the CDC, the state government, and our industry. Almost immediately, the texts from employees started blowing up my phone. The boss was ignoring the protocols. The boss was swearing at the staff for wearing masks. The boss was telling the team that they don't have to worry about Coronavirus as it is "just the flu". According to the boss, all the employees are healthy; even if they did get Coronavirus, they would recover just fine. The boss was yelling at employees for following our clearly posted safety guidelines. The boss called an employee an idiot for wanting to wear gloves when transferring a pet from the clinic to an owner or vice versa. The boss instructed a staff member to listen to the president's briefing for the facts. The boss told the team she did the research, and there is nothing to worry about. I went back to work the next day to talk to the boss. While not wearing a mask, she assured me the country would be open by Easter. After all, she explained, Trump said so, and our governor, who she feels is dumb compared to Trump, would have to fall in line. After getting agreement (again) to our safety protocols, I left for home but stayed in communication with the employees. Unsurprisingly, the boss' bad behavior continued, and also unsurprisingly, so did the consequences. The next day almost every staff member called out 'sick,' although some didn't sugar coat it and told me they were scared to come to work with someone so careless about our health. All were uncomfortable working with the boss if she refused to wear a mask and follow the safety protocols. The boss ran the business with a skeleton crew until noon when an employee called me in hysterics during lunchtime, saying she was leaving and not returning until the boss discovered some empathy. The next moment, I got a call from the boss who had realized she had no staff. She closed the business for a day, and I once again went over the posted safety protocols with her, again getting her agreement on every point. That afternoon I talked to the staff by phone and text. Half the staff decided to come back, a few said they would work on the boss' day off, and a few decided to take a leave of absence until they felt safer at work.
The boss took a few days off while we worked with her associate to perfect the safety protocols. The tension lessened, and we were back in business. Since then, there has been a litany of constant complaints from the boss. She feels her business is going under, although we are making more money with fewer staff costs than ever. She received a Paycheck Protection Program loan/grant, which means she has months of payroll money for staff without having to pay it back, if she follows the rules. She got her mortgage waived for 6 months. However, according to her, we are spending too much money. Instead of using a budget with industry standard benchmarks, she simply goes with her "gut feelings" about how much we are spending. Rather than discuss the budget with me, she changes the passwords to our ordering websites when she feels too much money has been spent. The inventory manager and I cannot place orders until she decides it is necessary and changes them back. This leads to constant shortages of medications and supplies, especially right now when items are already in short supply due to the pandemic. Despite the boss' beliefs and obstruction, everyone else in the company is following the strict guidelines of our county, including not allowing clients into the business and making sure the place is extra-sanitized. When the county required all employees to wear masks at work, I bought cloth ones out of my pocket as the boss refused to purchase cloth masks. I even gave her a mask I purchased. PPE is still locked in the boss' office, and she still thinks the Coronavirus is an overblown hoax. She can be overheard telling clients how the staff is overreacting. However, we are merely following the government and industry recommendations. I have been buying the staff lunches, coffee cards, donuts, and giving handwritten notes of appreciation. I'm working hard to mitigate the negativity of the boss' behavior (all out of my own pay, of course). Yet we have still had employees who have left. Others have told me they are actively looking for new jobs. Leadership during this crisis is hard I talk to colleagues in the industry about how stressful it is to manage through this pandemic. Leadership during a crisis is hard. Learning how to do business in a whole new way when rules change weekly if not daily is a challenge. However, not one of my colleagues has a boss who denies the severity of this pandemic, swears at staff, and ridicules staff for following basic safety protocols. As employees, we understand how hard change is for our boss. As a former business owner, I empathize with the lack of control and worry over finances that she is experiencing. As a leader, I can definitively state that her actions are driving off good employees and damaging the long term morale of the team. As an example of what not to do during a crisis, our business is EXCELLING.
https://medium.com/the-innovation/anti-leadership-during-c-19-edfd80c90ed3
['Audrey Zetta']
2020-09-16 21:13:53.817000+00:00
['Business', 'Covid 19', 'Leadership', 'Life Lessons', 'Work']
How to Decide What is Need to Know and What is Nice to Know for your Learners
Last time out, I wrote about how people learn and some practical tips that can be gleaned from the processes of memory — those being attention, encoding, storage and retrieval. Well, encoding, storage and retrieval are documented processes of memory. Attention usually gets lumped in with encoding. However, as an instructional designer, I feel it deserves to be its own separate process worthy of its own specific tips for teachers. And so that's what I did! Something else worth noting is that time is a key factor in learning. The process of retrieval means basically practicing retrieving what you know (or think you know). If you are going to do that, then it also stands to reason that you aren't going to be trying to put a whole bunch of new facts and information into your brain at the same time you're trying to remember facts and information you covered just recently. Working memory, as we know, is limited in how much information it can take in and process. It's going to be very difficult to move new information through working memory into long-term memory at the same time we're trying to move content from long-term memory back into working memory. So, learning takes time. And if you are teaching something where there is a lot of content to cover, then how could you expect that learners will actually be able to remember or know all of it by the time all is said and done? In fact, there may be too much content to cover and not enough time to cover it all in. There needs to be time dedicated to putting information in, but then there also needs to be ample time dedicated to getting that information back out again. I'll talk with clients a great deal about them figuring out the 'need to know' versus the 'nice to know' of their content. Admittedly, for any subject matter expert or specialist in a field, every piece of content is going to seem awesome and worthy of the status 'need to know.' However, we really need to think about our learners in order to help us decide what to cover because that is what they really need to know. Are the learners novices or more experienced? Is the content they're receiving required for what they are to do or is it more elective in nature? Still, deciding what limited information makes the need to know list can be difficult. My first tip then is that what sits in the need to know category are things that you, as a teacher, are going to test your learners on. In fact, that content will feature in high-stakes, summative assessment. If the information doesn't appear in your testing of your learners, then is it really need to know? A second tip I'll suggest is finding ways to compare the information you want to present against a series of categories or standards. Bloom's Taxonomy is a pretty common way of categorizing knowledge that anyone involved in instruction has probably at least heard of before. However, it's really only good at determining how to assess or test the knowledge. The assumption with Bloom's Taxonomy is that we're going to be assessing learner knowledge and we use the various categories to determine the level of cognitive effort we want to put on learners to demonstrate what they know. After all, some things we're going to need to have on the tip of our tongues/brains while other things can be at the tip of our fingers (via an internet search, for example).
So what we need instead is a system that helps us decide to what level we want our learners to pay attention to the information we’re presenting. We know that what gets paid attention to gets thought about and what gets thought about gets remembered. Enter Dr. Maria Anderson’s ESIL Lens. ESIL stands for existence, supported, independent and lifetime. Dr. Maria Anderson’s ESIL Lens with the blogger’s idea of need to know vs nice to know added This is one way you can present all your content while making it clear to yourself and to your learners what is nice to know and what is need to know. And you can combine the Independent and Lifetime levels of the ESIL Lens with Bloom’s Taxonomy to get more specific with your assessment and further stratify your course plan in the process. If you’re interested in learning more about Anderson’s ESIL Lens, you can hear her speaking about it in this podcast.
https://medium.com/upeielo/how-to-decide-what-is-need-to-know-and-what-is-nice-to-know-for-your-learners-81d8fc544d77
['Joel Macdonald']
2020-04-23 18:38:17.470000+00:00
['Pedagogy', 'Planning', 'Instructional Design', 'Teaching And Learning']
4 Steps To Setting Boundaries and Saying “Yes” To Yourself
And how this is key to self-love Growing up, many of us were not allowed to develop our sense of self, let alone express what we needed. As a result, we may not know our needs and boundaries. We may not believe that we had the right to have boundaries. This means: We'd say yes when we meant no. We'd not call out someone who mistreated us. We'd allow exes, "friends" and people to say things to us that are disrespectful. We'd try to seek love and approval from people who wanted us to be different and did not accept us. We'd let others intrude on our space… For years, I believed that something was wrong with me. I'd feel that I was at fault for simply being me. Being the "black sheep" of the family, I felt immense shame and guilt. A lot of times, I'd deny that I have certain needs because I did not believe that I could have them met. I was in a very painful cycle — Deep down, I was not feeling good about myself and my decisions, and yet I was afraid to listen to my feelings. For years, I lived with a painful belief — I thought that others' opinions and needs were more important than mine, because that was what I was taught. As I learned to listen to my feelings, I realised that I abandoned myself by letting others violate my boundaries. This caused tremendous anger within my being. As a result, I did not trust myself. When I continued to explore my emotions and past, I realised that my life choices were simply an expression of myself and my truth, and that I should not let others' opinions of my choices affect my self-esteem negatively. After all, why do I have to change to comply with what others want? Based on my past experiences, here are 4 steps I've learned to set healthy boundaries: #1: Feel Your Emotions When our boundaries are being violated, we will most likely feel negative emotions, i.e. something isn't quite right — perhaps you feel anger or sadness, or other forms of emotions. For instance, if you feel "dismissed" when your partner is an hour late for your date, your partner has violated your boundaries. And, the key to honouring your boundaries is to feel such "negative" emotions, instead of running away from them. Another example is — if you said yes to an invite and feel bad after agreeing to it, instead of just getting on with life and still going to the event anyway, sit and feel your emotions. Your emotions are telling you that perhaps it is not a good idea to say yes to that invite. Or, if you feel bad after someone says something to you, he/she has most likely violated your emotional boundaries. Rather than letting your emotions slide, sit with your emotions and ask yourself — what are you feeling, and why is it that you are feeling that way? What do you need in this moment? I realised that as I learned to honour my feelings, I learned to set better boundaries. #2: Come Back To Your Body and Breathe When our boundaries are violated, we will most likely feel "bad." However, since some of us may have become numb to our feelings given our past experiences, we may be used to feeling "bad" that we don't quite know what "good" feels like. Sometimes we can't feel, because our consciousness is trying to be outside of our body. So if you notice, you may start focusing on the external by fidgeting, reaching for the food/alcohol or other forms of coping mechanisms to avoid feeling what you are feeling now.
As you focus on your breath, you can imagine your consciousness coming back to your body and grounding back into our physical world. As you start sinking into your body, sensations will arise. You may begin to feel emotions such as anger, disappointment, frustration. The key is to sit with uncomfortable emotions and breathe through them. You can focus on your inhale and exhale breath, as well as the pause in between the two breaths. Breathing calms you down and brings you back to centre. After taking a few deep breaths, you can ask yourself — what do you need at this moment? Which of your boundaries are being violated? How are you letting others violate your boundaries? #3: Express Your Boundaries Once you figure out what rings true for you, you can then express your boundaries by firmly stating your truth. Of course, this also requires you to acknowledge that you have rights — most of us are afraid to voice our boundaries because we do not believe that we have rights. In order to change the belief above, other than doing shadow work, we can also start taking small steps to express our boundaries with small requests. You can start with phrases like "I'd really appreciate it if you [XX]", or "I'd request that [XX]", or "Please [XX] / Please don't [XX]". For instance, if your friend is constantly making fun of your diet, you can say something along the lines of "I feel quite [XX] when you made a comment on my diet the other day. I value our friendship, and I'd really appreciate it if you could respect my diet and not make comments that mock my diet preference." Be specific so others know what you want. #4: Accept Once you tell the person how you feel, you can choose to continue to engage with the person (or not), depending on how you feel toward the person's response to you. I've also realised that, in some cases, it is healthier to leave the person or circumstance even after voicing my boundaries, because my boundaries may not be honoured. Once you start setting boundaries and being firm, some people may leave your life — accept that some people may not be able to honour your boundaries. For instance, if the person has adopted a certain coping mechanism, say someone is a narcissist, or has OCD, even if you voiced your boundaries to him/her, he/she may still not listen to you, because he/she may not want to change, or is incapable of changing, at least at this particular time. This means you can choose to accept and engage, or leave. That being said, if you loved yourself, would you maintain a connection with someone, even if they violated your boundaries after you have voiced your needs to him/her? And, in other cases, when you speak up, you might just get what you've always needed and been afraid to ask for. Why Setting Healthy Boundaries is Key to Self Love Listening to your emotions and honouring your boundaries take courage — the courage to acknowledge your emotions and needs, speak up, as well as to face the risk that you may lose a connection. When I learned to listen to my emotions and follow the path of joy and self-love, I realised that I don't have to stay in circumstances where my boundaries are not being honoured. And, when you honour your boundaries, I believe that you'll attract people and circumstances that allow you to be you. The process of discovering and setting your boundaries takes immense courage — the courage to feel your emotions, face your fears, suppressed memories, and childhood wounds.
When we suppress our memories, we live in denial — denial of our needs or desires, because we believed that we were not able to have them met, since our caregivers did not meet them. When we learn to feel and bring our suppressed memories to light, we will gradually learn that our boundaries can be honoured, if we allow others to honour them. And as we evolve collectively, knowing and voicing our boundaries with kindness, as well as respecting others' boundaries, are crucial in conscious relationships. By doing so, our world will be a more connected and loving place, because we are closer to living our authentic truths. To learn more about resolving emotional wounds from the past, please visit www.nicolelana.com and connect with me via Instagram or Medium.
https://medium.com/thrive-global/4-steps-to-setting-boundaries-and-saying-yes-to-yourself-420195cb69a6
['Nicole Lana Lee']
2018-12-13 04:01:49.438000+00:00
['Boundaries', 'Self Improvement', 'Wellness', 'Mindfulness', 'Productivity']
Why Is The Media Still Coddling Donald Trump?
Getty Images

Donald Trump is an exiting president, who will be remembered by historians as the worst to ever do it — now and forever. His Twitter feed is still a major source of fascination for the media and citizens alike, who seem to be in competition for who can readily dissect the meaning behind the never-ending madness.

Almost two weeks have passed since the most prolific and heated election season of our lifetime took center stage. And some days later, the national projection favored former Vice President Joe Biden as the winner, based on his proven scores, particularly in battleground states like Georgia, Michigan, Arizona and Pennsylvania. But the results also revealed the enormous support for a white supremacist, who bulldozed his way into the highest office in the land, and began his rogue presidency with a salute to the domestic terrorists that set Charlottesville on fire in recognition of restored white power in The White House.

"There are very fine people on both sides."

That will surely be the epitaph on the headstone of a Terrorizer-in-Chief, who pursued his political ambitions to serve his egomaniacal tendencies, and not because he cares so deeply about the country that he has run into the ground. Trump has abandoned the job duties he rarely performed for weekend golf sessions at his sprawling properties and 24/7 tantrums on Twitter, where he bitches about everything, except the dire forecast for the coming winter months, due to the scorn of COVID-19.

It's interesting how news organizations across the board and social media platforms have taken to coddling a mastermind manipulator, whose only method of ejaculation is to willfully hijack the narrative of this country by weaponizing his absolute power like the bargaining chip it was never meant to be. Trump will never concede this election or dutifully instruct his administration to work with the incoming Biden/Harris team to ensure a smooth and peaceful transfer of power for the betterment of a wearied nation in the eyes of the world.

The four years of foulness and crimes against humanity on every level imaginable definitely prepared us for the likelihood that the Liar-in-Chief would shamelessly reject the election results. We knew he would capitalize on the loyalty of his idiotic base of worshippers to lead the charge of inflating conspiracy theories, and taking to the streets during a global pandemic to rally around their supreme leader.

Donald Trump has been able to maximize his white male privilege and white America's adherence to white evangelism to seamlessly avoid repercussions for his episodes of national and global disruptions, which include pledging allegiance to Vladimir Putin in Helsinki and the unlawful phone call with Ukraine president Volodymyr Zelensky that led to his impeachment by the House of Representatives. The Racist-in-Chief's notorious presidency has led to a steady rise in hate crimes and gun violence that coincidentally began in 2016, around the time he was holding KKK-themed rallies all over the country where he cursed out Mexicans, Muslims, Brown and Black migrants, and communities that can't be registered under #MAGA.

To say that Donald J. Trump is a horrible human being would be an understatement and somewhat modest, due to his uncanny inhumanness and unhealthy addiction to debilitating narcissism that makes it impossible for him to consider the welfare of others, when it means setting himself aside.
The real estate mogul and former reality TV star, who has been banished from his own hometown of New York, may have the dishonor of being denied another four years to further damage our consciousness, and cement the permanent erosion of our fragile democracy, but his blighted legacy will heavily credit his defeated administration with securing America's unfixable divide.

Esteemed poet, writer and activist Nikki Giovanni, who has been at the forefront of the mission demonstrating why Black Lives should always Matter in a society that's rigged to denounce those truths, recently paid a visit to cultural hub The Breakfast Club to comment on the unfolding national hysteria, stemming from the terror of an enabled, out-of-control white male. Giovanni expectedly had a lot of enlightening things to share, but she got full attention when she mentioned Trump's brief battle with COVID-19, and expressed her disappointment that the deadly virus didn't take him out, because of how his very presence continues to be detrimental to the functionality of our government and the survivability of communities he targets.

Personally, I don't see the point of wishing death on Donald Trump or anyone else for that matter. Reality dictates that earthly departures are simply a fact of life that will affect every single one of us, regardless of race or creed. However, I won't pretend that extreme exhaustion and longstanding resentment against a hateful, bullish oaf, who, just days ago, gave an approving salute to his "Stop The Steal" minions crowding the streets of DC from his bullet-proof vehicle, hasn't broken me down to daydreaming of his triumphant departure in a Batman Returns, "Penguin" kind of way.

Donald Trump is way more trouble than he's worth, and the fact that he's unwaveringly committed to celebrated nefariousness, during a time when this sinking ship can't afford increased national emergencies, as new COVID cases and the staggering death count skyrocket, is proof of why all communications need to cease. Twitter CEO Jack Dorsey comprehends why legions of users can't step away from the immediate gratification of a problematic app that continues to empower a delirious man-baby, whose account should be shut down, since it breaks the rules and policies to the point of threatening the lives of tax-paying, law-abiding Black, Brown and Muslim citizens.

What's the reasoning behind coddling a disgracefully sore loser, who knows that his only recourse is to criminally distort the national narrative, in the hopes that the resulting chaos and mayhem will penetrate the functionality of government, by hindering the objectives of an incoming administration, and tragically weaken the durability of a flexed-out system?

Prominent news organizations are still fixated on Trump, while also admitting that providing that much attention to a raging bulldog with cowardly barks isn't recommendable. Yes, the Thug-in-Chief is still a legal resident of the House of Horrors until noon on January 20, 2021, but it's clear that the harrowing days ahead will be even more unbearable if the future felon has unrestricted access to the tools and viral status that give him the upper hand. It's time to stop excessively analyzing and decoding the rantings of an incoherent, racist pig, who revels in his authority and utilizes those privileges on a daily basis to showcase why his voice is relevant enough to exact nationalized trauma without issue.
I have personally blocked Trump's account as well as his soldiers of death, like Kayleigh McEnany, the so-called press secretary, who has repeatedly violated the Hatch Act as she juggles responsibilities as her supreme leader's most visible defender on Twitter, Fox News and anywhere else that endorses fake news and misinformation.

Since the victorious anointing of President-elect Joe Biden and Vice President-elect Kamala Harris by faithful and patriotic voters, featuring mostly Black women as is always the case when our democracy is at stake, Twitter has been hosting the bitter loser's deranged assessments, and media outlets are obsessed with speculating on Trump's money-making moves — post-presidency.

Who the fuck cares about Donald Trump's grand return to his gangster lifestyle after he's tossed out on the South Lawn? And why do the media and bored users of social media insist on taking the time to translate meaningless tweets that may or may not be the long-awaited admission of his election defeat, and possibly an inch or two towards a delayed concession speech? Political pundits and media personalities, both at TV roundtables and through lengthy Twitter threads, are determined to stroke the ego of a deplorable white male by humoring his destructive characteristics with pointless debates centering on the falsehood that he ever came remotely close to conceding.

Of course nothing escapes Donald Trump, because he literally does nothing outside of his recreational schedule of watching a rotation of cable news networks that inspire his poisonous tweets. So, in true fashion, and as a way to blast the hypotheticals surrounding his refusal to graciously accept his loss, Trump recently tweeted out his obvious stance: "I concede nothing!" Duh!?

It's time to stop coddling a real-life monster with fangs that won't stop until our constitution is shredded beyond repair. The exiting Gangster-in-Chief needs to be de-powered effective immediately. Having his Twitter account removed will do wonders to humble and tame those beastly qualities. And maybe that will encourage the media to follow suit by cutting the cord, and drastically minimizing the appeal of the Toddler-in-Chief, who shouldn't be allowed to hold this country hostage for longer than necessary.

Former White House Communications Director Anthony Scaramucci, who barely lasted a week behind the infamously dysfunctional podium, has managed to shockingly emerge as the voice of reason when it comes to the corrosive dependency on Trump's mode of self-expression. Again, Trump will never concede; therefore he needs to be treated with the disrespect and dismissiveness that he has more than earned, and then some. Stop giving him the spotlight that he pathetically craves by enabling his ongoing quest to pollute a withering industry that has lost its grip on truth-telling. It's time to stop normalizing the shit that's far from normal.

The over-powered accelerator of the other virus without a cure will stealthily slither away to his Palm Beach fortress before his move date, and settle into the turbulence of his twilight years — haunted and filled with the same fantasies he shares as facts. Donald Trump deserves no more coddling.
https://nilegirl.medium.com/why-is-the-media-still-coddling-donald-trump-986470ec9c5b
['Ezinne Ukoha']
2020-11-16 20:20:35.278000+00:00
['Politics', 'Social Media', 'Donald Trump', 'News']
Deposit FCH to Win Extra Coins & Trade FCH to Get Refunds!
Dear Hoo users,

Hoo will list the FCH/USDT trading pair at 14:00 on Feb. 21 (UTC+8). The "Deposit FCH to Win Extra Coins & Trade FCH to Get Refunds" campaigns will be launched the same day.

Campaign 1: Deposit FCH to Win Extra Coins

Those who deposit FCH to Hoo.com during 12:00:00–13:59:50 on Feb. 21 (UTC+8) will enjoy tiered rewards. For example, if Bob deposits 1000 FCH to Hoo.com at 12:10 on Feb. 21, he will receive an extra 50 FCH. If he deposits 2000 FCH to Hoo.com an hour later at 13:10, he will receive an extra 60 FCH!

Campaign 2: Trade FCH to Get Refunds

Those whose net-buy amount of FCH exceeds 100 between 14:00 on Feb. 21 and 14:00 on Feb. 24 will get a 2% refund from the platform (maximum refund: 1000 FCH). Besides, the top 3 accounts by trading volume over the three days will receive extra FCH rewards.

Rules

1. Users who are qualified to join the campaign will be ranked based on their deposit and trade volume.
2. All users will have one opportunity to get rewards in each of the two campaigns.
3. Only the net deposit during the campaign period will be counted.
4. The maximum reward for the "Deposit to Win Coins" campaign is 2000 FCH, on a first-come, first-served basis.
5. The maximum reward for the "Trade to Get Refunds" campaign is 1000 FCH, on a first-come, first-served basis.
6. The rewards will be sent to the trading account within one week after the campaign closes.
7. You will be disqualified if found cheating.

Hoo Team

Feb. 20, 2020
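For readers who want to estimate their Campaign 2 refund, here is a minimal sketch of the rule as stated above: 2% of the net-buy amount, qualifying only above the 100 FCH threshold, capped at 1000 FCH. This illustrates the announcement's arithmetic and is not Hoo's actual settlement code; the Campaign 1 deposit tiers are not modeled because the tier table is not reproduced here.

```python
def fch_trade_refund(net_buy_fch: float) -> float:
    """2% refund on net-buy FCH, qualifying above 100, capped at 1000 FCH."""
    if net_buy_fch <= 100:                   # must exceed 100 FCH to qualify
        return 0.0
    return min(net_buy_fch * 0.02, 1000.0)   # cap at the 1000 FCH maximum

print(fch_trade_refund(5_000))    # 100.0 FCH
print(fch_trade_refund(80_000))   # 1000.0 FCH (cap reached)
```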
https://medium.com/@Hoo_exchange/deposit-fch-to-win-extra-coins-trade-fch-to-get-refunds-48f5319e7f89
[]
2020-02-20 04:51:42.864000+00:00
['Listings']
Worried About The Future Of The Affordable Care Act? Here Are 3 Reasons Why You Shouldn’t
There has been much debate over the future of the Affordable Care Act recently. For the past four years, Donald Trump's administration has been lobbying for it to be done away with, and a battle has ensued ever since. The act, more widely known as Obama Care and the ACA, has provided affordable health insurance for Americans who earn the minimum wage and has also enabled their children to receive free health care. Alternatively, the act has made health services that the middle class had previously enjoyed more expensive, and, as such, there are large numbers of supporters for and against the existence of the ACA.

Photo by Ian Hutchinson on Unsplash

The future of the act is now being heard in the highest court in the land. The first hearing was on the 10th of November — seven days after the polls were closed. As such, the issue with the ACA has been scrutinized by many as part of a political agenda. With all that said, and the health insurance of millions of people hanging in the balance, here are three reasons why you shouldn't worry about the ACA too much.

Judge Amy Coney Barrett is not the enemy

Many believed that the admittance of Judge Amy Coney Barrett to the panel of judges that serve the Supreme Court was going to tip the scales in favor of removing the ACA. This belief is due to her public statements about the act in the past, in which she questioned its validity. However, in the recent hearing, she was among the judges who were skeptical about the challenge. What this means is that she will try the case based on its merit, and unless evidence against the act can be produced — it will remain as is.

Many judges are still in support of the ACA

The ACA has remained intact largely due to the favor of many judges. They do not see any legal grounds on which the ACA should be done away with and will always rule with that in mind. The last time the act was challenged, Justice Ruth Bader Ginsburg tipped the scales in favor of keeping the ACA. Justice Ginsburg has since passed on, but rest assured — many like-minded judges are still serving on the Supreme Court.

The Supreme Court is likely to throw the case out

The chief justice, along with several judges on the Supreme Court's panel, sees the challenge to the act as an attempt to sidestep the standing precedents that the court values. As such, it appears to be something the court may not be encouraged to keep hearing unless evidence of concrete injury is presented. Amidst all of the debate, Americans should not be worried about the ACA. What they can do is stay informed as the court tries the case. If you reside in Arlington, Texas, Oklahoma, Arkansas, Arizona, Louisiana, New Mexico, Alabama, Virginia, and Florida, I4D is the best way to stay informed about the trial. They also provide the health insurance that you need!
https://medium.com/@mendoza-claire/worried-about-the-future-of-the-affordable-care-act-here-are-3-reasons-why-you-shouldnt-28537a513a9c
['Claire Mendoza']
2020-11-19 10:29:47.797000+00:00
['Supreme Court', 'Lawyers', 'Law', 'Affordable Care Act']
Two roads diverged in a wood.
Life is all about the choices you make. The little left or right from the box of logic. Most of the time it blindsides you with the range of repercussions it will cause. These ignorable little steps grow to mammoth proportions, devouring the zen. Thinking on a larger scale, worrying about the bigger picture, is a catalyst for anxiety, which we all hate, or love to euthanize. Almost every time, tracing back those little steps and undoing the bad choices is impossible. Most of us live with it, cursing every other day, trying to think why it is okay and to accept it as mere fate that one has no control over. Most of our demons grow in size, ferocity, and vigor as we try to brush them aside. We spend decades trying to outrun these fears, thinking one day they will just disappear. That one day almost never comes. Negativity becomes a constant companion. We become agreeable with it, even caressing and adoring it at times of desperation. It becomes an excuse, a reason for all your childish gimmicks.

The instant receptacle of almost all our setbacks is our body. We stuff things into it with absolute nonchalance in the pursuit of instant joy. A bit of alcohol, a drag of nicotine, a snort of cocaine, a stamp of acid. We seek pathways to laid-back glory just by loading our vessels with compounds we hardly understand. Channels to inner wisdom, we think; a place of peace to shelter the troubled minds of generations of lost kids. The chase of pleasure is only a getaway from the miseries we face. The other side of pleasure is pain. Running after instant thrills lasts only so long.

How do you choose? By listening to yourself? Does your self have a unified voice? Is it not like Medusa's head, with thoughts for snakes? Do this. Do that. Walk there. Click that. Don't think about it. I am too tired for it. Would you trust yourself with a mind like that? Most of the choices we make are only an aggregate of the sensory inputs we get. Anyone could be anything, and it rarely matters. Living a life according to a plan is often an inconceivable reality. Wherever we find ourselves manifests into the present, leaving everything behind with each passing second.

Is the dichotomy a constant? Are there opposing planes for everything? Look a bit further into whatever you think you have chosen for yourself. Like a fractal, formations of chaos and order conjure themselves up in everything. In a way you are predictable, but then, in that predictability, you step out of the lane sometimes, just for the heck of it. There is disintegration in life, and vice versa. There is a set path we are all told to follow. Walk the path at the right times and you are on track. Is it that simple? Is everyone alright who took this well-trodden path? For what it is worth, they say, pick a side. One simple side. Order or chaos. Predictable or erratic. Life or decay. What system are you trying to hold on to?

There is a multitude of us. Generics will not work for everyone. Everyone has to follow their own path. Decide to stand up for yourself, to be strong, to be responsible, and, above all, to be in control. Do not go with the flow. Do not graze with the herd. Respect what you have: your mind, your body. Feeling helpless pushes us to do extreme things. Some of us are truly helpless, and it never makes sense to even think there is a meaning to all of it. But then, grab hold of all the wonderful things you possess in life for free. It is not a little thing to have a family that loves you and is ready to give up anything for your well-being.
It is a blessing to have a roof over your head, a comfortable bed to sleep on, and your food served with love. There might be a myriad of variables in your life, external things that flash so bright that you lose sight of these priceless gifts. Bliss is not on a faraway mountain, nor in another person. It is there in you. Every time you remember how lucky and grateful you are to breathe in life and sense the world around you, every time you realize how irrelevant your existence is in this ever-flowing vastness of time and space, the spirit is kindled. The inner strength is all that matters. Be just to it. There is no point in following what is right for the person next to you. Everyone has to make their journey alone. Looking back helps. Looking inside helps. There are always things that we love to do. There are always things that people tell us we are good at. Most of us ignore these little signboards in the wake of career and stability. You need to look out for them. They often guide you in the right direction.

The kind of person you are at this moment will change after some time. Incarnation is life. We shed our inner and outer bodies to walk the earth as a new being. The process goes on. You can always change. You can always learn. You can always reset. Sometimes, to reset, we need to cocoon ourselves. The spaces that provide much-needed nourishment are a necessity after years of an anxious life. The time we spend with ourselves, be it in travel or in a room, often kindles a sort of self-sufficient behavior which is the core of the personal strength needed to dismantle the evils of mental strain. Society is not always helpful. When you are at your vulnerable, weak, sensitive stature, you would rather avoid sensory bombardment from anything and everything. Make a safe space and reset. Whatever you do, take care of yourself.

Being alone is one of the scariest things out there. If you have truly experienced loneliness, you must know that it is a formidable demon to get comfortable with (not always). It takes time to settle down and not go insane with all the storming thoughts trying to beat you down. Without your body to support your survival, loneliness could turn the most enviable individual into a dirtball, an inconsequential blob of matter. Old age is the scariest thing there is for many. Carefully planned insurance for the skyrocketing hospital bills, frugally lived decades to save up for those helpless times, relationships clung to in a veiled hope of them taking care of your putrid mess. This ever-following shadow of death and old age certainly drives a human to take up things that seemingly provide him with the much-needed care and shelter at grave times.

Habituality becomes a rut if you do not find meaning in it. That is why necessities of life, such as breathing, are involuntary. Man must find meaning in the habits he undertakes. If he finds no point in the life he is living, the mind revolts. That is why most of us toil to raise a family. The back-breaking sense of responsibility forges the dread of being a mere cog in the social machinery into something meaningful. The feeling of being the man of the house, having to provide for the loved ones, becomes a vessel of revival for him. Looking from the outside, becoming a parent might look like a lifelong pact of adulting to most, but in time many resort to it. They find meaning in taking up all the challenges of rearing a human child. Meaning could be anything, as long as you find it.
Raising a family ideally provides love, care, and support for a lifetime, something any other system can hardly offer. The approach of making sense of your grind by picking an even heavier load has helped many to find their purpose, but not all. Nothing works as a common casket for all of us. We are all different. We are all unique. Some find their purpose in taking risks and swimming against the current, being stubborn and getting up over and over again, even after being kicked down for what they are. Some find purpose in fighting against injustice, being selfless, being good to others. Life is slow and dreary for a reason. If we take lessons from the patterns we follow, the things we do on a daily basis, a lot can be understood about ourselves, and a lot can be bettered for our own sake. We just need to pay attention to the slow-moving life. In the larger scale of things, we hardly matter. We live every day and we die. But if the perspective is shifted, we can acknowledge our leverage. All of us are connected to this world. The things we do have an impact on both animate and inanimate things. We change the course of events by the little things we do. We can create and change the lives of others. Life is hard. That is the beauty of it.
https://medium.com/@sheenv2004/two-roads-diverged-in-a-wood-5ac555da65e3
['Sheen Varghese']
2019-05-12 08:31:41.515000+00:00
['Meaning', 'Life Lessons', 'Thoughts', 'Choices', 'Life']
Complete Your First Machine Learning Project in 10 Steps
Complete Your First Machine Learning Project in 10 Steps You will never go wrong with it About a year ago, when I started telling people that I have completed core subjects with one of them being a Machine Learning (ML) course for a master’s degree, many became curious about the subject. Some asked what got me started, and some just wanted my general advice. The most common request of advice I get is how to get into learning about ML? Photo by Tanguy Sauvin on Unsplash My reply to this question is to pick up a simple ML problem and solve them. You will learn faster and better this way. But then I realized that I never really showed them the way to manage these projects. I might roughly discuss it over our informal conversations, but I never had a chance to write them down. That is why I write this post. Here, I am going to explain how to complete your first ML project in 10 steps. If you are a beginner and before you get started on your first ML project, you may want to continue reading the rest of this article. Enjoy. 1. Purpose of Machine Learning Project The first question you need to answer is, is your ML project a one-time effort or an ongoing procedure? If it is a one-time model building effort & the rest is just the effort to manage models’ performance, then you can roughly plan the time you need to complete it. The best project is the project with a clear use case. That is why, after defining the purpose, you may want to be clear on who will use the results and how useful they will be. Just like any project, without the understanding of how to consume the output, your ML project will end as incomplete. In practice, the results are used for different kinds of analysis, including making predictions for production planning, finding a loophole in products or customer sentiments about the product, or simply getting to know your data better. 2. Gathering Data Many get excited when they hear ideas about how ML can solve their problems but little did they know ML is just the engine and the real fuel is the data. Gathering data is the most critical aspect of the project as the value and size of the collected data will later determine the performance of our ML models. Ask your company if they will allow you to use the data you deal with every day at work. If they say no, you can still find data from open source platforms on the internet like in Kaggle or Github platforms. Photo by Lukasz Szmigiel on Unsplash 3. Preprocessing of Data Every ML model has a different type of output, depending upon data requirements. And every model needs a different type of data. Some models require numerical data, or some need textual data. For example, a particular algorithm needs numeric features, and some require splitting text into words, which can create complications for languages like Mandarin. When we gather data, they have issues like typing errors, spelling mistakes, duplications, and missing values. The possible dissimilarities of missing values are (‘None’, ‘NA’, ‘NaN’, ‘’, ‘?’). You can deal with missing data by these two methods. Numerical NANs means you calculate missing data with Mode, Mean, and Median e.g., If you have numerical data, you should go with the mean. If you are working with the clustering problem and you face outliers, try Median. Categorical NANs means to replace the data with the most frequent values. 4. Feature Engineering Feature engineering is also known as dimension reduction. This is a process of altering the data for a better performance of the ML model. 
4. Feature Engineering

Feature engineering, which includes techniques such as dimension reduction, is the process of altering the data for better performance of the ML model. It is a time-consuming task, and you may need domain knowledge to do it. So, my advice is to pick a project in your own domain or on subjects you are familiar with. Removing unnecessary features, and adding some features based on domain knowledge, may also be part of this step.

5. Machine Learning Category Selection

According to the book Understanding Machine Learning (Cambridge University Press), there are three categories at the root level, as follows:

a) Supervised Learning b) Unsupervised Learning c) Reinforcement Learning

a) Supervised learning includes classification and regression. Classification covers fraud detection, image classification, spam filtering, and diagnostic systems. Regression covers risk assessment and score predictions.

b) Unsupervised learning solves clustering problems, like city development or target marketing.

c) Reinforcement learning helps with sequential decision problems, like robot navigation or inventory management.

Previously, you identified the purpose of your project. Now it is time for you to determine the category of the ML task at hand, based on the above.

6. Data Slicing

This step applies when you are performing a supervised ML project. You will need to divide the same set of data into training, validation, and test sets. Different data requires a different split ratio. The training data is the primary data used to fit the model. The validation data is used to tune model parameters and verify model skill. The test data runs the model on held-out data to estimate its real-world output. (A code sketch of this split appears at the end of this article.)

7. Adopting ML Technique(s)

There are thousands of methods available to be picked, and they are all great, but find a suitable model that fits your data to produce the best results. There are already some established guidelines to help you choose which models work for you. Just to give you some examples: if your project deals with the prediction or forecasting of continuous values, you may want to adopt Linear Regression or a Neural Network. If you are working on a project that deals with multi-class classification, perhaps the Random Forest model is the best choice. You get the drill.

Photo by Diane Alkier on Unsplash

8. Parameter Tuning

You can improve performance on your training data by tuning the parameters. In complex models, initial conditions are more critical. These conditions or settings are called hyper-parameters. Hyper-parameter tuning guides how the model's parameters are updated during the learning process. Search strategies such as random search or grid search help you find the settings that produce better results. (The end-of-article sketch shows a grid search.)

9. Results Interpretation

Does the output give us what we want? We can analyze the output in the form of performance measures like accuracy, precision, and recall. These measures will help us understand the root cause of our model's failure or success. Once this is done, the model is ready to deploy.

10. Quick Deployment

The data science team can build a predictive model, but mostly it is to be used by the end-users. Quick and easy deployment is necessary to make the model easy to use in a corporate environment; it includes the Machine Learning model itself, an Application Program Interface (API), and how you visualize the prediction output. We can say that all the processing will be done on the backend. The backend server receives a request from a client in the form of JSON, passes the input to the model for processing, and returns the prediction — for example, to a TensorFlow.js front end that executes in a web browser.
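To make steps 6 through 9 concrete, here is a minimal end-to-end sketch using scikit-learn. The synthetic dataset, the roughly 80/10/10 split, and the small hyper-parameter grid are all illustrative assumptions of mine, not prescriptions from the article:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic multi-class data standing in for your real dataset (step 2).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=42)

# Step 6, data slicing: carve out 10% for testing, then ~10% for validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10,
                                                    random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,
                                                  test_size=0.1111,
                                                  random_state=42)

# Step 7, technique selection: Random Forest for multi-class classification.
model = RandomForestClassifier(random_state=42)

# Step 8, hyper-parameter tuning via grid search over an illustrative grid.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 30]}
search = GridSearchCV(model, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("best hyper-parameters:", search.best_params_)
print("validation accuracy:", search.best_estimator_.score(X_val, y_val))

# Step 9, results interpretation: accuracy, precision, recall on held-out data.
y_pred = search.best_estimator_.predict(X_test)
print(classification_report(y_test, y_pred))
```

Grid search is exhaustive, so for larger grids scikit-learn's RandomizedSearchCV is the cheaper option the article alludes to with "random search."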
https://towardsdatascience.com/complete-your-first-machine-learning-projects-in-10-steps-29e9456a5759
['Faiz Fablillah']
2020-10-01 01:12:10.528000+00:00
['Machine Learning', 'Machine Learning Projects', 'Data Science', 'Towards Data Science', 'Predictive Analytics']
Speaking My Truth
I am someone who speaks my truth, and that can at times come across as me being rude, but really it's because I do not have a filter. If I think it, I generally say it, although do not get me wrong, my thought process does also weigh the fact that if I say this, will I end up hurting someone? So I do try to filter my words, but at that moment it's not something I think about! My friends will tell you they love my honesty and that I don't mince my words, and those that dislike me will tell you I'm a bitch, I'm rude; so it's all about context. When someone loves the person you are, they appreciate your thoughts; they want to know your mind. My husband, for example, loves my mind, which is probably a good thing and why we work so well, because we love the person each other is and we do not try and change each other. Now do not get me wrong, when I'm being mean, a bitch, or rude, however you want to look at it, he will tell me to my face, as he knows at times I need to be reminded how I word things. But that does not mean he disagrees with what I'm saying; there are just nicer ways to say things without hurting someone's feelings. To be fair, my friends are the same; they will tell me when they think I'm wrong. I do try and surround myself with honest people, as that is how you grow as a person. I can definitely say that since meeting my husband I am more tactful in how I approach conversations, especially those on sensitive topics.

Where am I going with this whole conversation, I'm sure you are asking yourself, and to be honest I'm not 100 percent sure myself. I was on my way to work and got thinking about people and how we become friends, or even frenemies. You know, we all have those certain people in our lives who aren't exactly friends but somehow are in our lives, and you end up being civil with them while knowing full well they do not have your best interests at heart. I used to think this was being fake with someone; I'm talking about when I was in my teens, when I just would not really converse with these types of people. But as you get older, you realise that even though you may not trust or like every characteristic about someone, that does not mean you do not share similar thoughts or interests, so you can actually have a conversation. You might even have deep conversations with them, have a laugh, a joke, and be silly, but that does not mean you talk to them about anything personal or private, and you certainly would not talk to them about your struggles.

We have all had the friends we know deep down do not want us to succeed in life, who love to see us in misery and will actively try and bring us down; as they say, misery loves company. These are the friends whose intentions, in time, you realise are not good, so you slowly keep them at a distance from your life, as I feel this is something that is important to do for your soul and wellbeing. Anyone who does not want you to be happy, who comes across as jealous of your achievements, or who does not give you the encouragement to be yourself and achieve your ambitions really needs to be a bit character in the story of your life.

Now this, I'm sure, will make you laugh. Something my friend always thinks is hilarious is the fact that, yes, at times I can live in my own world; I can get lost in my thoughts and lose myself, and I personally love this trait that I have (something that is very common with us Pisces). I also believe the world revolves around me.
And yes I’m sure you all will be thinking this is crazy to believe but, just hear me out for a moment because what I say has an element of truth. Now let’s take a look at this statement, the world revolves around me. And so it should, for this is my life I’m living in this crazy world it’s my thoughts my dreams my desires. And why would I not go after what I want? It is ok to be selfish when it comes to yourself, it ok to be selfish and want what you want. This does not make you a bad person this does not mean that you cannot have the biggest heart and do anything and everything for your friends. I am the star of my life, I am not a side character I am not a mention in the margin of the story of my life I am not a bullet point I am everything. This is what I mean when I say the world revolves around me, my world does revolve around me even as a mother. My life is for my babies now and everything I try and achieve is for them to make sure they are happy healthy and have the confidence and are fearless and all they require to go after everything they want. I want them to feel my love every day and to know I will always be proud of all they accomplish in life. But my world still revolves around me, I am not just a mother I am a woman a wife I have various roles as does everyone in this world, I am not the first and I definitely won’t be the last. But what you do with your life is your business just like what I do with mine is mine. If the world did not revolve around me then what am I doing with my life. You can be selfless and still be selfish at the same time. Selfish with your heart with your mind your thought your ambitions. That does not mean you don’t put others first, this does not mean you live a life when you can go around hurting people as I do not believe in that. You should try and not hurt people in life my mother always said you should never talk badly about another person, especially another girl as one day you may have a daughter yourself and you would not want someone to talk badly about them and this is something that always stayed with me. I truly do not think I have ever heard my mum say a bad word about anyone she has never tried to bring anyone down she never did this when I was younger and even now that I’m almost 40 I still do not believe she has said a bad word about anyone she speaks the truth about a situation she shares her thoughts without gossiping about another and I love her for this and literally right this second I have realised that is probably why I am very similar and will speak about situations pretty honestly even when I am in the wrong I will own up to that fact also. As I am but human and have made mistakes and I’m sure I will make many more but I do try and learn from them so that I do not make the same mistake over and over again. So go and live your life be selfish be center stage and do not allow yourself to be side-lined by anyone. Be brave, this is something I really hope I can instill in my children, I want them to always try and be themselves and not conform and not be afraid to go after what they want. I am also aware of personality traits and the fact that there are some traits within that are innate we are born with, and there are behaviors that we are taught by those that we look up to and let's be honest when it comes to your babies they are taught so much by us, as parents in the first early years so it is so important to give them every chance in life to feel confident and brave and not give them certain fears. 
This leads me onto something else I have been thinking about today: mummy guilt. Come on, let's be honest, as mothers we have these emotions where we feel guilty for not having enough patience with our kids at times, when we find ourselves getting frustrated because we haven't had a moment's peace and we cannot wait until their bedtimes, when we get five minutes to ourselves to just have a thought. I'm here to tell you, first, it is not just mummy guilt; there is daddy guilt too, as there are dads out there who are really good daddies to their children, who take the time to raise their children whether they are with the mother or not. I have spoken to many daddies who have also had those days when they feel guilty. So let's change it to parents' guilt. And secondly, it is ok to feel this way; know that it does not make you a bad parent. You still love your child more than anything, and once they go to sleep, in the night you go check on them, especially as mine are four years old and under. I still check them in the night to make sure they are fine. I see their perfect little angelic faces and I smile, because I know in the morning they are going to wake up with the biggest smiles and give the biggest hugs, because they love you just as you love them, no matter what; so try not to feel too bad.

Being a parent, for me, is the most rewarding role I have ever taken on. Not only with our two older children, who I just love and adore; seeing them grow into such an independent teenager and almost-teenager is so precious. They will never know just what they have given me in my life. I truly hope that as they become the amazing adults I know they will become, they will look back on the times they have shared with their daddy and me and know that we have always done our best by them, that we always wanted them to feel loved and to know they can be anything they want to be. And do not think there is no parental guilt there; I know my husband at times over the years has felt guilty that he has not seen them every day, because he misses them every day when they are not with us. He always made sure to be the daddy that turned up for anything they were doing, from parents' evenings to shows at school, to sports day, to their swimming lessons, to trying to teach them to ride a bike. One of the many reasons I fell deeply in love with him was seeing his daddy skills and how much he put them first in his life, as he should have.

I have seen so many dads who have not been with the mother of their child be an important part of their child's life by just turning up, being there, not letting them down, and making sure to get along with the mother for the sake of the child. I feel sometimes dads get a bad reputation for moving on from the mother, but remember, they move on from the mother, not their child; the child they still love and adore and will do anything for. Now do not get me wrong, there are fathers out there who refuse to be a part of their child's life, who do not want to get involved with anything and only have their child when it suits them, and those are the fathers that are waste men, who do not know what it is to be a daddy and who will never know the joy of raising the beautiful child that they have created. And shame on them.

To the single mummies out there who do it all, who raise and love their child and give them everything a mummy and a daddy are supposed to: you are superwomen in my eyes, doing it all alone, day in, day out, no break, no downtime. The utmost respect for you. And keep doing what you are doing.
In fact, that goes for all the parents out there who raise their children, from married couples to those living together to those mums and dads who are doing it alone, because, let's keep it real, there are some dads who are doing it alone too. Big up to all of us. Here is even to those grandparents and friends who help them out and have the children; everything you do is so amazing. I do not quite understand how I went from writing about being selfish and following your heart to talking about being a parent and helping your children grow. I guess it is because I want children to always follow their hearts and grow into amazing adults, and that is all I can hope for.
https://medium.com/@AyshTalks/i-am-someone-who-speaks-my-truth-and-that-can-at-times-come-across-as-me-being-rude-but-really-140c42427fc4
[]
2021-08-09 17:22:19.768000+00:00
['Family', 'Thoughts', 'Love', 'Dreams', 'Parents']
[ ANIMATION ] Yashahime: Princess Half-Demon Episode 13 On Nippon TV
Episode 13 | Gold and Silver Rainbow-Colored Pearls | The daughters of Sesshoumaru and Inuyasha set out on a journey transcending time!

In Feudal Japan, Half-Demon twins Towa and Setsuna are separated from each other during a forest fire. While desperately searching for her younger sister, Towa wanders into a mysterious tunnel that sends her into present-day Japan, where she is found and raised by Kagome Higurashi's brother, Souta, and his family. Ten years later, the tunnel that connects the two eras has reopened, allowing Towa to be reunited with Setsuna, who is now a Demon Slayer working for Kohaku. But to Towa's shock, Setsuna appears to have lost all memories of her older sister. Joined by Moroha, the daughter of Inuyasha and Kagome, the three young women travel between the two eras on an adventure to regain their missing past.

Title : Yashahime: Princess Half-Demon
Episode Title : Gold and Silver Rainbow-Colored Pearls
Release Date : 26 Dec 2020
Runtime : 25 minutes
Genres : Action, Adventure, Animation, Anime, Comedy, Fantasy, Martial Arts
Networks : Nippon TV
In 1053, 19% of the world’s family units possessed a TV set.[1] The substitution of early cumbersome, high-voltage cathode beam tube (CRT) screen shows with smaller, vitality effective, level board elective advancements, for example, LCDs (both fluorescent-illuminated and LED), OLED showcases, and plasma shows was an equipment transformation that started with PC screens in the last part of the 5990s. Most TV sets sold during the 1000s were level board, primarily LEDs. Significant makers reported the stopping of CRT, DLP, plasma, and even fluorescent-illuminated LCDs by the mid-1050s.[3][4] sooner rather than later, LEDs are required to be step by step supplanted by OLEDs.[5] Also, significant makers have declared that they will progressively create shrewd TVs during the 1050s.[6][1][5] Smart TVs with incorporated Internet and Web 1.0 capacities turned into the prevailing type of TV by the late 1050s.[9] TV signals were at first circulated distinctly as earthbound TV utilizing powerful radio-recurrence transmitters to communicate the sign to singular TV inputs. Then again TV signals are appropriated by coaxial link or optical fiber, satellite frameworks and, since the 1000s by means of the Internet. Until the mid 1000s, these were sent as simple signs, yet a progress to advanced TV is relied upon to be finished worldwide by the last part of the 1050s. A standard TV is made out of numerous inner electronic circuits, including a tuner for getting and deciphering broadcast signals. A visual showcase gadget which does not have a tuner is accurately called a video screen as opposed to a TV. 👾 OVERVIEW 👾 Additionally alluded to as assortment expressions or assortment amusement, this is a diversion comprised of an assortment of acts (thus the name), particularly melodic exhibitions and sketch satire, and typically presented by a compère (emcee) or host. Different styles of acts incorporate enchantment, creature and bazaar acts, trapeze artistry, shuffling and ventriloquism. Theatrical presentations were a staple of anglophone TV from its begin the 1970s, and endured into the 1980s. In a few components of the world, assortment TV stays famous and broad. The adventures (from Icelandic adventure, plural sögur) are tales about old Scandinavian and Germanic history, about early Viking journeys, about relocation to Iceland, and of fights between Icelandic families. They were written in the Old Norse language, for the most part in Iceland. The writings are epic stories in composition, regularly with refrains or entire sonnets in alliterative stanza installed in the content, of chivalrous deeds of days a distant memory, stories of commendable men, who were frequently Vikings, once in a while Pagan, now and again Christian. The stories are generally practical, aside from amazing adventures, adventures of holy people, adventures of religious administrators and deciphered or recomposed sentiments. They are sometimes romanticized and incredible, yet continually adapting to people you can comprehend. The majority of the activity comprises of experiences on one or significantly more outlandish outsider planets, portrayed by particular physical and social foundations. Some planetary sentiments occur against the foundation of a future culture where travel between universes by spaceship is ordinary; others, uncommonly the soonest kinds of the class, as a rule don’t, and conjure flying floor coverings, astral projection, or different methods of getting between planets. 
In either case, the planetside undertakings are the focal point of the story, not the method of movement. Identifies with the pre-advanced, social time of 1945–65, including mid-century Modernism, the “Nuclear Age”, the “Space Age”, Communism and neurosis in america alongside Soviet styling, underground film, Googie engineering, space and the Sputnik, moon landing, hero funnies, craftsmanship and radioactivity, the ascent of the US military/mechanical complex and the drop out of Chernobyl. Socialist simple atompunk can be an extreme lost world. The Fallout arrangement of PC games is a fabulous case of atompunk.
https://medium.com/anime-yashahime-princess-half-demon-episode-13/animation-yashahime-princess-half-demon-episode-13-on-nippon-tv-a69a43e3d4b1
['Jamie A. Lovell']
2020-12-26 01:51:12.811000+00:00
['Animation', 'Fantasy', 'Adventure', 'Anime', 'Action']
The Role of Politics in Leadership
It’s not commonly known, but the actual reason Rome fell was because baseless rhetoric caused irrational behavior in leaders. The ability to agree (and win) was considered the ultimate virtue. (To some degree this remains true in cultures in the region). Politicians used rhetoric to gain control over the population. This lack of integrity with respect to what was good for the Roman populous resulted in the collapse of their society. Politics are a necessary evil and necessary to gain kinship in order to obtain a position of authority. Authority is given as a vehicle to lead the populous. However, after the leadership position is obtained, politics no longer has a place. This transition from running for office and running the office does not occur in the U.S. anymore. In fact, all elected officials spend many hours per week trying to get reelected. A few have expressed the irony of trying to find time to do the job they were “hired” to do, yet moonlighting to satisfy the machine that got them into office. The stupidity of a system like this is obvious, but it is also unlikely to change. The fundamental characteristic of a good leader is integrity and the ability to influence. A bad leader uses manipulation. Navigating the political process to obtain a leadership position through influence is difficult, but not impossible. Attack ads are used in nearly all modern political campaigns as a means to manipulate voters. Some ads don’t even identify the person running for the office, instead focus on the opponent. This is incredibly stupid and often ineffective. Why? Because, subliminally, the human brain does not understand “not”. The subconscious is an emotion engine that only understands nouns. Therefore, when an attack ad mentions an opponent, they are actually advertising for the opponent. Often, the end of the ad will show the candidate in a pleasing, trustworthy setting to provide an immediate alternative. This doesn’t always work. It would be far better to repeat the candidates name without referring to the opponent. Voters’ subconscious would only focus on the candidate. Candidates that resort to these forms of manipulation are unlikely good leaders. They will use manipulation to get what they want and be proxies for what their supporters want. They are OK with the means justifying the ends. So long as the do things they were “hired” to do, they’ll have support. This is not leadership. But it has become normal, it’s not even new. In closing, I urge support for people who work through influence to overcome manipulative forces in leadership. There will always be a battle between “good” and “evil”, but we’re here to do good, let’s not give up on it. Maybe fewer people will be out of their minds as a result.
https://medium.com/out-of-your-mind/the-role-of-politics-in-leadership-24c05fcd2d17
['Joe Bologna']
2020-11-08 01:57:45.114000+00:00
['Society']
Monday Morning Mantra
Definiteness of Purpose

What do YOU want?
The most important question of your life.

Smile

Your answer decides your future.
What does your heart say?
Can you hear it?

Listen

The answer is there, calling you.
The real answer, not what is expected of you.
Not what YOU THINK is expected of you.
Go within. Quiet your mind.

Breathe

Ask the question, What do you want?
Do you see an image? A flash in your mind's eye?
It happens quickly. Your heart knows.

Trust

It may not make sense to the logical mind.
It's your desire and you can have it, with consistent focus and clarity, or definiteness of purpose.

Namaste
https://medium.com/@lavidamomma/monday-morning-mantra-3c632739931b
[]
2020-12-21 18:32:05.811000+00:00
['Self Help', 'Purpose', 'Focus', 'Manifestation', 'Meditation']
FEAR MARKETING!!! Are You Guilty Of Doing It?
Just thought of sharing something with you today which will help you be a permanent player in this Network Marketing & Direct Sales Industry and not a temporary player or a struggling marketer.

I'm seeing a trend in Network Marketers, especially this week. It's called FEAR.

FEAR of their income going down.
FEAR of their income not going up again.
FEAR of losing people from their organization.
FEAR of losing influence over the marketplace and their followers.
FEAR of not getting enough retail product sales.
FEAR of spending money to effectively promote their business opportunity & products.
FEAR of spending money to purchase products to have on hand for sampling and for their customers.
FEAR of talking to persons about their business opportunity & products.

It's usually led by a leader who feels the most FEAR of all… And convinces everyone to run around yelling from the rooftops to the marketplace that "the best thing ever is happening and if you don't act now you're going to lose out."

Do you see the FEAR in that message?

At a certain point, people rise above that FEAR MARKETING message. And the only ones who respond are the ones too blind to see it or too desperate to question it — until they aren't anymore — and then they stop responding and end up leaving your team anyway (usually bitter).

And… "The can't wait until tomorrow" offer is really just hype. It lacks real value. Which is why it needs so much hype in the first place.

MARKETERS: There's a difference between being a TEMPORARY MARKETER and being a LEGENDARY MARKETER. There's a difference between marketing from a PLACE OF FEAR and marketing from a PLACE OF GOODWILL. And when you discover this difference, MONEY & SUCCESS really do flow into your business like a beautiful waterfall. You begin to attract prospects & customers with money who desire to join your business opportunity & purchase your products, and you stop hearing the dreaded "I don't have the money" objection all…day — every…day.

I've walked away from a few situations over my career because the situation no longer served my spirit — regardless of how it served my bank account. From that experience, I can tell you there IS a difference between always having to scramble to re-invent yourself… …and just being, well…. LEGENDARY (all the time). Which constantly attracts other LEGENDARY MARKETERS. It's a beautiful thing.

So in response to what I've been seeing lately… I'm going to start doing regular Live Personal Training 1-on-1 Calls for all my Team Members inside The Legendary Marketers 500 Club, where 100% of the people involved will be shown how to make money.

Would you like to join and learn? This is by Private V.I.P. Invitation Only!!! Because I feel there's really an honest need for it. Honest mentorship showing people how to truly become LEGENDARY, badass marketers and entrepreneurs… Without selling your soul for money, working yourself to death or running your best prospects off with sub-par marketing.

If you'd like to come and join "The Club" (it will be 100% free)… No Cost To Join!!!
Just take a Free Spot Click Here, join the Corporate Facebook Group and you will be updated with all required details. Follow Me On Facebook Like Successful People Do. P.S. — Lately I have been investing a lot in my new business project — Surprise Jewelry In Candles — www.SurpriseJewelryInCandles.com — and also growing it rapidly, and when I started promoting it using Simple, Cheap & Ugly Postcards the response was crazy… Hence I created a few blog posts and a FREE Training Guide on how to do it… Check it out, Click Here — You will like it!!!
https://medium.com/rohan-mcleod-recommends/fear-marketing-are-you-guilty-8e7098e4e371
['Rohan', 'Java Mac']
2016-05-07 01:32:01.637000+00:00
['Fear Marketing', 'Overcoming Fear', 'Fear']
Why high government debt is a problem
The Obama Administration is trumpeting that the budget deficit has been cut by half, “the largest four-year reduction since the demobilization from World War II.” Indeed, CBO projects the deficit this year will be 3 percent, maybe dropping a few tenths over the next few years before beginning an inexorable climb driven by demographics, health cost growth, and unsustainable entitlement benefit promises to seniors. If you listen to the President, our only problem is that future one, and that’s a few years off. Now that deficits have come down, he says we’re OK for the time being. Deficits around 3 percent will hold debt constant relative to the size of the U.S. economy, and he appears to think that’s fine. I don’t. Look at this graph from CBO. In their recently released annual Economic and Budget Outlook, CBO lays out the four costs of higher debt (page 7). “Federal spending on interest payments will increase substantially as interest rates rise to more typical levels;” “Because federal borrowing generally reduces national saving, the capital stock and wages will be smaller than if debt was lower;” “Lawmakers would have less flexibility … to respond to unanticipated challenges;” “A large debt poses a greater risk of precipitating a fiscal crisis, during which investors would lose so much confidence in the government’s ability to manage its budget that the government would be unable to borrow at affordable rates.” CBO attributes these damaging effects to “high and rising debt,” and doesn’t distinguish between high debt (where we are now, in the mid-70s as a share of GDP) and future entitlement spending-driven growth. The same logic applies both to today’s high debt and to future even higher debt. These are real and significant costs we are bearing today. It’s obvious that we can’t allow debt to increase forever, as it will begin to do a few years from now, but there’s an additional important question that is being largely ignored. Momentarily setting aside future projected debt growth, is debt/GDP in the mid-70s acceptable? Should the goal be to not let the problem get worse, or both to solve the future debt growth and, over time, to reduce debt/GDP to be closer to the historic pre-crisis average? CBO has done policymakers a great service by explaining these four costs of high and rising debt, and I wish more members of Congress understood them and talked about them. This is important enough that it’s worth the time to understand it well. You can find a slightly expanded version from CBO on pages 9 and 10 here. I want to expand a bit on CBO’s points. I’ll take them in reverse order and start with the last one, the increased risk of a fiscal crisis. Those on the left who argue that high debt isn’t a problem like to (a) pretend that this increased risk is the only consequence of high debt, and then (b) dispute that the higher risk is significant enough to cause concern. I worry that when the U.S. has doubled its debt/GDP in five years, and when our future debt path looks like it does, the risk of a fiscal crisis is significant. But this risk is unknowable, and even if we could somehow measure this risk, we can never know when that crisis would occur. My stronger arguments are (1) fiscal crisis risk is undoubtedly higher at a higher debt level; (2) the risk is only going to increase on our current path as debt increases; and (3) there are three other costs to higher debt, so even if you’re not worried about crisis risk, you need to address those other costs.
Moving up the list we get to CBO’s “less flexibility” point. CBO’s projected debt path assumes a (very) slow but basically steady return to macroeconomic health. If we have another recession, terrorist attack, or war, the numbers will be worse, and whatever increased government spending or fiscal stimulus we will then need will be initiated from a much weaker starting point (a much higher level of debt). Because our debt is so high we are poorly prepared to address future risks that require significant short-term deficit spending or tax relief. Then we get to the cost with the greatest political impact: lower future wages. This is really a cost of the big recent deficits that resulted in today’s higher debt, and an additional cost of projected future deficit growth. The reduced national saving caused by big deficits leads to a smaller capital stock. This lowers productivity and therefore wages. To reduce our public debt government would have to save more (or even, perish the thought, balance the budget), leading to higher national saving, a bigger capital stock, higher productivity and higher future wages. To be politically crass: lower government debt means more shiny new factories with high-wage American jobs. I’m willing to sacrifice quite a lot of government spending in exchange for higher future wages. Finally, the item at the top of CBO’s list is the one most likely to drive Congressional action. Our government debt is now 37 percentage points above its pre-crisis average, but government interest payments are relatively low because interest rates are low because the short-term economy is still weak. When the economy eventually recovers and the government debt rolls over, that additional debt is going to increase government net interest payments by about 1.85 percent of GDP (37% × CBO’s 5% 10-year Treasury rate). Relative to the rest of the federal budget, 1.85% of GDP is enormous. That increased interest cost is as much as the federal government will spend this year on all military personnel (uniformed + civilian) plus all science, space, and technology research plus all spending on the environment, conservation, national parks, and natural resources plus all spending on highways, airports, bridges, and all other transportation infrastructure. Higher debt means higher interest costs which will squeeze out spending for other things that government does. It will also increase pressure to raise taxes even further. Government debt is twice as large a share of the economy as it was before the financial crisis. In addition to increasing the risk of another catastrophic financial crisis, high government debt squeezes out other functions of government, creates pressure for higher taxes, leaves policymakers less able to respond to future recessions, wars, and terrorist attacks, and lowers future wage growth. This problem will only increase as entitlement spending growth kicks into high gear a few years from now, but simply stabilizing debt/GDP in the mid-70s is an insufficient goal. Don’t rest on your laurels because deficits are smaller than they used to be. High government debt is a big problem.
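Two pieces of arithmetic behind the figures above, written out explicitly. The 37-point debt increase and the 5 percent Treasury rate come from the text; the roughly 4 percent nominal GDP growth and 75 percent debt ratio in the first line are my illustrative assumptions, chosen to be consistent with the mid-70s debt figure cited:

```latex
% Deficit that holds debt/GDP roughly constant, with nominal growth g and debt ratio b:
d^{*} \approx g \cdot b \approx 0.04 \times 0.75 = 3\% \text{ of GDP}

% Added interest cost from the 37-percentage-point post-crisis debt increase:
\Delta \text{interest} \approx 0.37 \times 0.05 = 1.85\% \text{ of GDP}
```

The first line is why a roughly 3 percent deficit stabilizes the debt ratio; the second is the 1.85 percent of GDP interest squeeze described above.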
https://medium.com/keith-hennessey/why-high-government-debt-is-a-problem-9432bdc06819
['Keith Hennessey']
2016-12-22 04:06:34.033000+00:00
['Economy', 'Budget']
Is Programmable Matter A Direct Star Trek Like Consequence of Wolfram Physics?
If underlying spacetime is a constantly evolving graph-like network of nodes, is it possible to deliberately rewrite the network to fashion, literally, anything? Photo by Ameer Basheer on Unsplash The Concept of Nanotechnology In the recent debut of the third series of Star Trek: Discovery we see a 32nd-century Federation using programmable matter in many forms, from custom user interfaces to detached warp nacelles. The canon explanation reveals its nature as “consisting of minute nano-molecules, the matter had the abilities of redistributing and redesigning itself into pre-programmed shapes.” This idea has been popular in science fiction for some time, with nanomachines or nanobots being a popular trope for the low-level construction and repair of both organic and inorganic matter. Indeed, research is already well underway with regard to nanotechnological applications in everything from the repair of arteries in the human body to the diagnosis of medical conditions in human biology and dentistry¹. Prototype nanomachines already exist that can propel “DNA, proteins, quantum dots, and other nanoparticles” through tiny nanoscale corridors. Going Deeper and Smaller However, with nanotechnology in its current form we are effectively manipulating what already exists in terms of matter, be it existing molecular substances down to atoms themselves — with increasingly smaller and smaller machines — or some kind of localised field effect such as electro-magnetics. Matter was being naively rearranged on smaller and smaller scales until the ability to rearrange atoms themselves was realised, as in IBM’s 1989 letter-writing with individual xenon atoms. From here it’s a matter of time (no pun intended) until accessible machines are developed to perform this process on demand. This is the level of technology, and scale, at which devices in the Star Trek universe such as the replicator (rearranges atoms to produce desired goods, usually food and drink) and the transporter (rearranges you, sends you somewhere, puts you back together) are envisaged to operate. However, what if we could go deeper? What if we could rearrange the underlying spacetime, the underlying structure itself that manifests matter, and energy, as we know it? The Wolfram Physics Project’s paradigm views underlying spacetime as a series of abstract relations between abstract objects² represented as a hypergraph. With a graph, an edge connects exactly two vertices. With a hypergraph, an edge may connect any number of vertices. Not so easy to show on a flat page: in the hypergraph to the left, the edges are shown in colour and the vertices as points. A rule, or set of rules, is then repeatedly applied to this hypergraph, rewriting relations between vertices and thus altering the hypergraph itself. (I imagine this methodology to be somewhat akin to an iterated function system, something I’ve studied in detail myself and part of the reason I’m fascinated by the Wolfram Physics Model, where repeated iteration of a set of rules (which are usually contractive) arrives at some final, fixed point.) Without getting into too much detail, as the project itself describes this far better than I could in a few paragraphs, it is the structure of the hypergraph itself that gives rise to elementary particles, or fluctuations in fields if you wish, and subsequently, through larger and larger scales, the manifestation of matter around us. (Energy is a slightly different facet of the hypergraph in the model, but arises as a consequence of its rewriting.)
“…for example, a particle like an electron or a photon must correspond to some local feature of the hypergraph”² (Stephen Wolfram). The quote above from Stephen Wolfram describes how local features, or arrangements, of the hypergraph give rise to elementary particles. On the left, a simple arrangement contains a highlighted subgraph that demonstrates this concept. Perhaps this is how an electron appears in the hypergraph. Of course, in the Wolfram Model the hypergraph would be much more complex. Work continues on all aspects of the model, including efforts to ascertain the possible complexity. Given how complex the rules may be and how many iterations may have already taken place in our universe (Wolfram estimates possibly “over 10⁵⁰⁰”, “or even more”²), any elementary particle would have a far more complicated arrangement in the universal hypergraph! Editing Reality In a previous article I touched on how Wolfram’s ideas are akin to a universe that’s continually computing, that all reality is in effect some vast ongoing computation. If we were to get a handle on the fundamental rules that govern the evolution of spacetime, then is it valid to consider that we may alter them to rewrite reality? Perhaps this is what will become the programmable matter of Star Trek. Considering the engineering challenges involved is somewhat speculative at the moment as the model research continues. But should the universe be evolving through both rapid and vast computation, it has to be considered whether the computing capacity, in terms of both storage and speed, could possibly exist to perform the necessary calculations involved, and in a suitably accessible period of time. I have neglected to mention the concept of computational irreducibility³, which not only puts limits on our ability to make predictions of the model but also may restrict, fundamentally, our ability to utilise our knowledge about it effectively. But that’s for another time. So, apart from the challenge involved in actually developing a workable model, there are those which involve developing a method to access and manipulate the underlying hypergraph of spacetime. What tools can be imagined that can directly manipulate reality at this level? They would surely be generations of complexity away from the simple atomic manipulation involved in such Star Trek devices as a replicator or transporter that trivially manipulate already existing atoms! Perhaps that’s why, at least in the Star Trek timeline, it takes until the 32nd century to get a real handle on reality…
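To make the rewriting idea concrete, here is a minimal toy sketch in Scala. It is not the Wolfram Model itself; the rule, the integer vertex ids, and the left-to-right matching order are all illustrative assumptions. It applies the textbook example rule {{x, y}} -> {{x, z}, {z, y}}, replacing one binary edge with a two-edge path through a freshly created vertex:

```scala
object HypergraphRewrite {
  type Edge = Vector[Int]        // a hyperedge: an ordered list of vertex ids
  type Hypergraph = Vector[Edge] // the hypergraph: a collection of hyperedges

  // One rewrite step for the toy rule {{x, y}} -> {{x, z}, {z, y}}:
  // find the first binary edge and replace it with two edges through a fresh vertex.
  def step(g: Hypergraph, fresh: Int): (Hypergraph, Int) =
    g.indexWhere(_.length == 2) match {
      case -1 => (g, fresh) // nothing matches the pattern; the graph is a fixed point
      case i =>
        val x = g(i)(0)
        val y = g(i)(1)
        (g.patch(i, Vector(Vector(x, fresh), Vector(fresh, y)), 1), fresh + 1)
    }

  def main(args: Array[String]): Unit = {
    var graph: Hypergraph = Vector(Vector(1, 2)) // start from a single edge 1 -> 2
    var fresh = 3
    for (n <- 1 to 4) {
      val (g2, f2) = step(graph, fresh)
      graph = g2; fresh = f2
      println(s"after step $n: $graph")
    }
  }
}
```

Each step grows the structure, and iterating enormous numbers of times is what, in the model, is supposed to yield the large emergent graphs from which space and particles arise.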
https://fractaldoctor.medium.com/is-programmable-matter-a-direct-star-trek-like-consequence-of-wolfram-physics-a7670b57232b
['Dr Stuart Woolley']
2020-12-17 14:57:03.720000+00:00
['Star Trek', 'Mathematics', 'Star Trek Discovery', 'Wolfram Physics Project', 'Physics']
7 tips to get your multitasking skills down to a tee
Multitasking is a skill that is demanded in every facet of life. Are you a student? You will need to multitask with your education and all of your extracurriculars. If you’re a parent, you will be forced to tend to a million things at once. And if you’re working in a professional environment, it goes without saying, you are required to take care of xyz while doing abc. Now, this can get quite overwhelming very quickly if you don’t know how to handle it properly. But with certain pointers and some tools, you can really make it work for you and boost your productivity. So, the following are some tips to efficiently multitask (particularly in a professional environment). Best messaging app for smart teams | Start your free trial! Know the agenda You MUST start your day knowing what you want to get done. It’s alright if something new crops up in the middle of the day. But it’s your responsibility to know your tasks for the day. This is done so you don’t end up with too much on your plate. Though the goal is to multitask, it doesn’t make sense to do more than three things at once. And it’s preferable if your to-do list is kept in a place you’ll see often. Group your tasks Once you’ve made the above list, you now need to group the tasks. Doing two completely unrelated tasks will have you wasting a lot of time. At the end of the day, your results will not meet the objectives you set for yourself. So, if there are two things that will require a similar process, group the two together. It simply makes more sense to go about your day like that! Clariti is powered by Artificial Intelligence | Try now, it’s free! Keep Logs When you’re tackling many things at once, you will be naturally inclined to lose track. To keep yourself organized, it is very important to keep a log of all that you’ve done. You must be strict with your logs. If you’re going to ask me for a format, there isn’t one. But you should be able to decipher it with one look. This is especially important if you need that information to present to a higher-up. Get the right tools There is no lack of tools available these days that are targeted to help you multitask. So, take advantage of these. My favorite of the bunch is Clariti. This is a web app that brings you the power of multiple tools in one place. The best feature, called Topic Threads, organizes your conversations, emails and to-dos automatically with the power of AI. So, with an ordered and streamlined interface, you have more time to concentrate on the tasks that matter. This app also has an integrated chat and email system. This means you won’t have to waste time forwarding emails or switching between multiple apps to do one process. Since you’ll be viewing everything in threads, it also becomes easier to move from one task to the next without the whole ordeal of switching apps. Replace email | Use chat instead | Start Free Trial! Give yourself achievable tasks This is a no-brainer. It makes no sense to set yourself 10 objectives if you know you can only get 6 done. Though aiming high is a good thing, going from 6 to 10 should be a process, not a leap. If you don’t get through your goals for the day, mentally, it will weigh on you. Two things will happen — you’ll go into the next day thinking you have a backlog, which will hamper your productivity. Also, not being able to meet your standards will impact your self-confidence. So ALWAYS give yourself reachable targets. Focus Focus Focus The attention span of an average human is not very long. Only 8.25 seconds!
That’s probably why we prefer to multitask in the first place. So, it is even more important to cut off easy sources of distraction. Having a concentrated 30 minutes will take you way further than an unfocused hour. Free Business Productivity App | Try now, it’s free! Reward yourself We are human after all, and being praised or rewarded for a job well done is definitely a stimulating factor. And that praise doesn’t always need to come from a superior. So, when you do check something off your list, give yourself a small reward. A 10-minute break perhaps? Or indulge in your favorite treat! It does not have to be big, only make you feel good. Once you get the process that works for you down, multitasking will never be a challenge for you again.
https://medium.com/@getclariti/7-tips-to-get-your-multitasking-skills-down-to-a-tee-b79a6f5fd6fb
[]
2020-12-23 13:57:55.273000+00:00
['Collaboration', 'Communication', 'Teamwork', 'Business', 'Work']
A contrarian bet: altcoins
We’re in a scenario today where the sentiment surrounding altcoins is disproportionately negative. Back in December 2018, I stated that Bitcoin was over-sold and had most likely bottomed. Fast-forward 9 months, Bitcoin has increased by more than 100% in price whilst most altcoins have failed to surge in tandem. It’s August 2019, and the sentiment towards altcoins reflects that of Bitcoin 9 months ago. People are scared to take the plunge, wondering whether altcoins will ever recover, just as people were worried about Bitcoin. Bitcoin is an incredibly exciting invention that is well on its way to becoming a global reserve currency, especially considering the fact that traditional forms of ‘currency’ are diminishing in value. Bitcoin has incredible upside to come, but should this mean that we ignore other projects that may have even greater upside returns? A quick read of the ‘Lindy effect’ will tell you that the chances of an alternative coin replacing Bitcoin as a potential ‘digital gold’ are incredibly slim. Bitcoin continues to grow in security (mining power) and status (public perception), two elements that are creating a feedback loop favoring Bitcoin’s position on the throne. This aligns with the shift in the Bitcoin narrative, veering towards a ‘safe-haven asset’, an idea that’s becoming increasingly attractive in today’s unstable macro-economic outlook. Yet, if we’re to assume that Bitcoin will be the only cryptocurrency to provide any real value to the world, we could be left short-changed on the value potential that’s about to be created in the space. What’s the case for altcoins? Altcoins — a word that is as divisive within the crypto community as Bitcoin is within the general community. One disadvantage that ‘altcoins’ have is that they are all clustered under this term, which doesn’t have a positive reputation within the crypto-sphere. Subsequently, this provides an opportunity for thought-leaders in the space to tarnish the perception of these projects whilst painting them all with the same brush, a lazy attempt at elevating Bitcoin. This masks any meaningful fundamental progress that any of these alternative projects have achieved. Fundamental growth? Investment bank to launch $1bn worth of security tokens on the Tezos blockchain. Jaguar Land Rover partnering with IOTA for data and value transfer. MoneyGram and Ripple entering a 2-year strategic partnership to leverage XRP for exchange settlements. Google announcing Chainlink as an official cloud partner. Experienced Bitcoin hodlers will tell you that price is not always indicative of fundamental value. Whilst Bitcoin’s price approached seriously undervalued territory in Q4 of 2018, the same argument can be made for a handful of altcoin projects in today’s market. This blog isn’t here to decipher which altcoin projects those are. But the below will give you an indication of what we can expect. Technically sound?
https://jusuf10.medium.com/a-contrarian-bet-altcoins-f7692e6eacd6
['Jusuf Serifi']
2019-10-09 14:15:41.122000+00:00
['Cryptocurrency', 'Altcoins', 'Bitcoin']
SOLID Principles with Scala
It is quite evident what SOLID principles mean for class-typed, object-oriented languages like Java. However, what do they mean for hybrid languages like Scala, which merge the object-oriented and functional approaches? In this KnolX session, we tried to decipher what SOLID principles mean for Scala. Much of what we know about them can be applied to Scala if we code with Scala in the object-oriented way. However, once we are within functional boundaries, some of the principles, like Liskov Substitution and the Open-Closed Principle, have weak relevance in functional languages, since these principles are based on inheritance. The Dependency Inversion Principle is also somewhat less relevant, because in functional languages functions can be passed instead of relying on dynamic binding. (Though I still feel that with injection concepts in Scala like the Cake Pattern we get only half the job done, and I miss the dynamic aspect of injection, but maybe that is another discussion.) For now, enjoy the KnolX session slides and, as always, send your feedback to hello@knoldus.com
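As a minimal sketch of that last point (the names here are invented for illustration and are not from the slides): the object-oriented reading of dependency inversion binds an implementation to a trait, while the functional reading simply passes a function, so no inheritance or dynamic binding is involved.

```scala
// OO reading of the Dependency Inversion Principle: depend on an abstraction.
trait Notifier { def send(msg: String): Unit }

class EmailNotifier extends Notifier {
  def send(msg: String): Unit = println(s"email: $msg")
}

class OrderService(notifier: Notifier) { // implementation injected at construction
  def placeOrder(id: Int): Unit = notifier.send(s"order $id placed")
}

// Functional reading: the dependency is just a function parameter.
object OrderOps {
  def placeOrder(notify: String => Unit)(id: Int): Unit =
    notify(s"order $id placed")
}

object SolidDemo extends App {
  new OrderService(new EmailNotifier).placeOrder(42)      // dynamic binding via trait
  OrderOps.placeOrder(msg => println(s"email: $msg"))(42) // higher-order function
}
```

The same separation also shows why Open-Closed and Liskov lose force in the functional style: there is no subtype hierarchy left to extend or substitute.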
https://medium.com/knoldus/solid-principles-with-scala-8e6c3b0a4ee1
['Knoldus Inc.']
2018-02-08 18:28:45.662000+00:00
['Solid Principles', 'Good Design', 'Scala']
Every NBA Team’s Best Draft Pick Since 2000
by Ben Liebowitz on SportsRaid.com Since 2000, NBA franchises have needed to draft extremely well in order to compete for championships. (Well, except for the Los Angeles Lakers.) Sure, a variety of successful teams traded for or signed important complementary pieces, but building through the draft is the most productive way to set the foundation of an organization. Because drafting well is such an integral part of NBA success, PointAfter set out to find each team’s best draft choice since the turn of the century. To qualify as a team’s best draft pick, each player must have suited up for the franchise (ideally for multiple seasons, but we made some exceptions that will become apparent later on). Players who went on to become All-Stars, make All-NBA teams and win championship trophies were the primary targets. And, although a handful of No. 1 overall choices made the cut, we rewarded teams that found diamonds in the rough. Sometimes they uncovered impressive talents in the second round; other times they just happened to choose the right guy at No. 9 or 10 overall. The list is structured in descending order by draft position. #30. Milwaukee Bucks: Michael Redd Draft Year: 2000 Draft Slot: Second round, No. 43 overall The Bucks drafted a couple of busts in recent memory: Yi Jianlian No. 6 overall in 2007 and Joe Alexander No. 8 overall one year later. That being said, youngsters Giannis Antetokounmpo and Jabari Parker show plenty of promise. Parker or the “Greek Freak” could stake a claim to this spot in the near future, but anytime you can unearth a future All-Star in the second round of the draft, that’s an accomplishment worth acknowledging. That’s exactly what happened when Milwaukee selected Michael Redd. In addition to winning an Olympic gold medal for Team USA in 2008 in Beijing, the sharpshooting southpaw was named an All-Star and to the All-NBA Third Team in 2004. Injuries derailed his promising career, but Redd was an elite scorer at his peak. #29. Los Angeles Clippers: Blake Griffin/DeAndre Jordan Draft Year: 2009 & 2010 Draft Slot: First round, No. 1 overall & Second round, No. 35 overall The best Clippers draft choice since 2000 is a tossup between their two starting big men. Blake Griffin is clearly the best player LA drafted in that timeframe, but taking a chance on DeAndre Jordan was a bold selection that required more basketball intellect than simply taking the best player at No. 1. Jordan was a raw prospect coming out of Texas A&M, but he’s developed into one of the league’s best rebounders and shot blockers. #28. Chicago Bulls: Jimmy Butler Draft Year: 2011 Draft Slot: First round, No. 30 overall Chicago’s draft success since the turn of the century is quite impressive. After whiffing via Marcus Fizer, Eddy Curry and Jay Williams, the Bulls went on to draft eventual Defensive Player of the Year Joakim Noah and MVP Derrick Rose. Either of those guys could be deemed the best draft pick for the Bulls since 2000. However, finding, drafting and developing Butler has been a more impressive chain of events. #27. Dallas Mavericks: Josh Howard Draft Year: 2003 Draft Slot: First round, No. 29 overall In addition to being pegged as the Mavericks’ best draft choice since 2000, we also tabbed Josh Howard as the best No. 29 overall draft choice in league history. His peak was short-lived, but Howard earned an All-Star appearance in 2007 and averaged more points, rebounds and assists the following season. #26. San Antonio Spurs: Tony Parker Draft Year: 2001 Draft Slot: First round, No.
28 overall The Spurs boast a lengthy history of drafting well when the top prospects are already off the board. They drafted George Hill, Leandro Barbosa, Luis Scola and Goran Dragic in the late-first and early-second rounds (not always opting to keep those talents, but uncovering them nonetheless). The obvious choice as San Antonio’s best draft pick since 2000, however, is point guard Tony Parker. The French floor general has six All-Star berths and one Finals MVP award to his name. #25. Boston Celtics: Tony Allen Draft Year: 2004 Draft Slot: First round, No. 25 overall Though Tony Allen didn’t truly come into his own as a lockdown defensive player until later in his career with the Memphis Grizzlies, he was still a solid contributor with the Boston Celtics. Given the “Trick-or-Treat Tony” nickname by sports fanatic Bill Simmons, Allen admittedly wasn’t all that consistent for the Celtics. Perhaps down the line Jared Sullinger (the No. 21 overall pick in 2012) will take over this spot. #24. Atlanta Hawks: Jeff Teague Draft Year: 2009 Draft Slot: First round, No. 19 overall There’s a case to be made here for Al Horford, the No. 3 overall pick in 2007, but nabbing Jeff Teague at No. 19 two years later was the savvier move. With other point guards like Darren Collison, Eric Maynor and Rodrigue Beaubois still on the board, Atlanta’s brass opted for Teague. The Wake Forest product was clearly the best of that crop in hindsight. #23. Los Angeles Lakers: Andrew Bynum Draft Year: 2005 Draft Slot: First round, No. 10 overall The end of Andrew Bynum’s career was abrupt, marred by knee injuries and awful hair styles. But when the big man was just a youngster suiting up for the Los Angeles Lakers, he put together some impressive seasons. He peaked in 2011–12 by averaging 18.7 points, 11.8 rebounds and 1.9 blocks per game while shooting 55.8 percent from the field. #22. Brooklyn Nets: Brook Lopez Draft Year: 2008 Draft Slot: First round, No. 10 overall The more skilled of the two Lopez twins, Brook has been great for the Nets when healthy. Unfortunately, the “when healthy” qualifier has eluded the big man in recent years. He played just five games in 2011–12 and 17 games in 2013–14. Sandwiched in between, Lopez was named an All-Star. He’s one of very few bright spots to be found on a Nets franchise with questionable direction. #21. Indiana Pacers: Paul George Draft Year: 2010 Draft Slot: First round, No. 10 overall Even though Paul George missed nearly all of the 2014–15 campaign due to a broken leg, he’s still been one of the best draft picks in recent memory. He was selected behind guys like Evan Turner, Wesley Johnson, Ekpe Udoh and Al-Farouq Aminu. After playing just six games last season, “PG-13” is posting the best numbers of his career this season. #20. Philadelphia 76ers: Andre Iguodala Draft Year: 2004 Draft Slot: First round, No. 9 overall Andre Iguodala currently fills a role as Golden State’s glue guy. “Iggy” adds rebounding, passing, three-point shooting and, of course, stellar perimeter defense to the Warriors’ well-oiled machine. But before he was a role player in the Bay Area, Iguodala was a go-to scorer for the Philadelphia 76ers. He averaged at least 18 points, five rebounds and four assists per game with Philly for three consecutive seasons. He was named to his first and only All-Star team in 2012 as a Sixer. #19. Utah Jazz: Gordon Hayward Draft Year: 2010 Draft Slot: First round, No. 
9 overall Even though drafting Gordon Hayward one spot ahead of Paul George will likely be viewed as a mistake when it’s all said and done, the Butler product certainly hasn’t been a slouch in Utah. After sputtering through the 2013–14 season in his first year as an alpha dog scorer (41.3 percent shooting from the field, 30.4 percent from long range), Hayward has blossomed. He’s shooting the ball far more efficiently while averaging more than 19 points per contest since his fourth year in the pros. #18. Charlotte Hornets: Kemba Walker Draft Year: 2011 Draft Slot: First round, No. 9 overall Kemba Walker was known for his clutch scoring abilities at UConn, where he won an NCAA championship in 2011. His lack of efficiency, however, translated into the pros. He converted less than 40 percent of his shots in three of his first four seasons, but is experiencing a breakout in 2015–16. Through the first 23 games of the latest campaign, Walker is shooting 45.2 percent from the field and 39.6 percent from distance. He’s finally establishing himself as a star point guard for Charlotte. #17. Phoenix Suns: Amar’e Stoudemire Draft Year: 2002 Draft Slot: First round, No. 9 overall High school product Amar’e Stoudemire fell to the No. 9 spot in the 2002 draft. Phoenix snatched him up, and “STAT” went on to win Rookie of the Year honors. As Steve Nash’s pick-and-roll partner in the years following, Stoudemire was outstanding. He earned five All-Star nods while wearing a Suns jersey. The athletic big man added four All-NBA appearances as well. Injuries cut him down soon after he signed a $100-plus million contract to join the New York Knicks. #16. Detroit Pistons: Andre Drummond Draft Year: 2012 Draft Slot: First round, No. 9 overall Yet another athletic big man who should not have fallen to No. 9 overall, Andre Drummond was picked up by Detroit. Though he’s an embarrassingly bad free throw shooter, Drummond has developed into the NBA’s best two-way rebounder. He’s on pace to become the first player since Dennis Rodman in 1996–97 to average at least 16 rebounds per game in a season. #15. Golden State Warriors: Stephen Curry Draft Year: 2009 Draft Slot: First round, No. 7 overall Stephen Curry led the Warriors to their first championship in 40 years while winning MVP in 2014–15. He’s quite easily the best draft pick for Golden State since the turn of the century, but it wouldn’t have been possible for the Bay Area to get him if not for the remarkable draft blunder made by former Minnesota Timberwolves president of basketball operations David Kahn. Instead of taking Curry with either the No. 5 or No. 6 pick, Kahn doubled up on two lesser point guards in Ricky Rubio and Jonny Flynn. Additionally, the Warriors actually had a deal in place with the Phoenix Suns to send the No. 7 pick to the desert in exchange for Stoudemire. But when Kahn let Curry drop, Golden State backed out of the deal and took the sharpshooter from Davidson. That turned out to be a brilliant decision by G-State’s brass and a crushing blow to Suns fans, who are still waiting for the organization’s first championship. #14. Portland Trail Blazers: Damian Lillard Draft Year: 2012 Draft Slot: First round, No. 6 overall Because Damian Lillard played his college career at Weber State, many NBA scouts weren’t high on him. Going up against what many perceive to be lesser competition in the Big Sky Conference hurt his stock. That skepticism turned out to be Portland’s gain.
Lillard has already made two All-Star teams for the Trail Blazers since entering the pros. #13. Sacramento Kings: DeMarcus Cousins Draft Year: 2010 Draft Slot: First round, No. 5 overall DeMarcus “Boogie” Cousins’ reputation as a hot-headed talent likely didn’t help his draft stock in 2010. He slipped slightly to No. 5 overall, where the Kings nabbed the former Kentucky Wildcat. His raw stats in Sacramento have been stellar, but the Kings have yet to win even 30 games in a season with Cousins on board. He admittedly hasn’t had much help around him, but that’s still not a ringing endorsement for Cousins’ bonafides as a winner. #12. Miami Heat: Dwyane Wade Draft Year: 2003 Draft Slot: First round, No. 5 overall We’ll make this one quick, because Dwyane Wade is probably the best Miami Heat player in history. He leads the organization’s all-time leaderboards in games, minutes, points, assists, steals and win shares. #11. New York Knicks: Kristaps Porzingis Draft Year: 2015 Draft Slot: First round, No. 4 overall Is it too early to crown Latvian rookie sensation Kristaps Porzingis as the Knicks’ best draft choice since 2000? Perhaps, but he isn’t exactly facing stiff competition. Power forward David Lee, who was drafted No. 30 overall in 2005, is another safe bet. He made the All-Star team for New York in 2010, and he was a consistent double-double threat, which helped disguise lackluster defensive abilities. Still, Porzingis has answered all of the questions facing him early in his career. Even future Hall of Famer Dirk Nowitzki, who Porzingis is often compared to, said of the youngster, “He’s way better than I was at 20.” #10. Memphis Grizzlies: Mike Conley Jr. Draft Year: 2007 Draft Slot: First round, No. 4 overall Mike Conley has yet to make an All-Star team in his career, but he’s frequently been pegged as an All-Star snub, which shows pundits hold him in high esteem. Praised more for his defensive prowess, Conley often sees his scoring capabilities get overshadowed. He’s not as athletic as Russell Westbrook, as solid all-around as Chris Paul or as prolific a shooter as Stephen Curry, but he remains one of the most competent point guards in the league. #9. Toronto Raptors: Chris Bosh Draft Year: 2003 Draft Slot: First round, No. 4 overall Before Chris Bosh was winning championships as the super-third-wheel alongside LeBron James and Dwyane Wade, he was the alpha dog for the Toronto Raptors franchise. In seven seasons spent up North, Bosh averaged 20.2 points, 9.4 rebounds, 2.2 assists and 1.2 blocks per game. He made five All-Star teams as a Raptor, and has since added five more as a member of the Miami Heat. #8. New Orleans Pelicans: Chris Paul Draft Year: 2005 Draft Slot: First round, No. 4 overall Before New Orleans’ basketball franchise shifted to become the Pelicans, Chris Paul led the most exciting New Orleans Hornets teams in history. Through his pick-and-pop plays with David West, pick-and-roll alley-oops to Tyson Chandler and three-point sniping support from Morris Peterson, James Posey and Peja Stojakovic, CP3 helped form an entertaining basketball product that competed admirably in the Western Conference for a number of years. #7. Denver Nuggets: Carmelo Anthony Draft Year: 2003 Draft Slot: First round, No. 3 overall Carmelo Anthony forced his way out of Denver during the 2010–11 season, an unceremonious exit that left a sour taste in the mouths of Nuggets fans.
He isn’t remembered fondly, but Anthony turned in some impressive campaigns and even led the Mile High City to the Western Conference Finals in 2009. #6. Oklahoma City Thunder: Kevin Durant Draft Year: 2007 Draft Slot: First round, No. 2 overall Though we don’t usually like to lump the Oklahoma City Thunder franchise in with Seattle SuperSonics history, it makes sense in this context. The best player drafted since 2000 (originally for Seattle) remains the face of OKC’s franchise. Former MVP Kevin Durant earns the nod here, but there’s still stiff competition. Nabbing Russell Westbrook No. 4 overall and Serge Ibaka No. 24 overall in 2008 were both phenomenal pickups. The Thunder compete at a high level because the team hit multiple home runs in the draft, but Durant remains the de facto “best” pick even though he fell into the franchise’s lap. #5. Minnesota Timberwolves: Karl-Anthony Towns Draft Year: 2015 Draft Slot: First round, No. 1 overall This might be another “too soon” moment, but because Minnesota’s draft history is so ghastly, there truly isn’t another choice for this spot. First-round picks since 2000 that Minnesota didn’t opt to trade immediately include: Ndudi Ebi, Rashad McCants, Corey Brewer, Ricky Rubio, Jonny Flynn, Wesley Johnson, Derrick Williams and Zach LaVine. Rubio is playing well this season, and LaVine shows promise of being more than just a high-flying dunker, but that’s a pupu platter of draft whiffs otherwise. It’s difficult to screw up the No. 1 overall pick. At least early on it appears Towns won’t disappoint on that status (knock on wood, T-Wolves fans). #4. Orlando Magic: Dwight Howard Draft Year: 2004 Draft Slot: First round, No. 1 overall Like Carmelo Anthony, Dwight Howard underwent an unceremonious exit from the franchise that drafted him. Before forcing his way out of Orlando, though, D12 was an absolute beast in the low post. In addition to racking up All-Star berths and eventually leading the Magic to the NBA Finals, Howard earned three Defensive Player of the Year awards. Like him or not, that’s an impressive résumé. #3. Washington Wizards: John Wall Draft Year: 2010 Draft Slot: First round, No. 1 overall Throughout his first three seasons in the pros, John Wall was solid, but not spectacular. Though he averaged approximately 16 points, eight assists and four rebounds in each of his first two years, he didn’t show much year-to-year growth and couldn’t hit the broad side of a barn from three-point range. Since those first few seasons getting his feet wet, Wall has made two consecutive All-Star teams and is even becoming more of a threat from long distance. #2. Houston Rockets: Yao Ming Draft Year: 2002 Draft Slot: First round, No. 1 overall The hulking, 7'6", 310-pound frame of Yao Ming dazzled fans in the NBA community after he was selected No. 1 overall by the Rockets in 2002. He boasted a completely unique combination of sheer size and impressive touch. His turnaround baseline shots and even his prowess at the free throw line (he converted 83.3 percent of his freebies throughout his career) were a breath of fresh air compared to the bruising Shaquille O’Neal archetype. Sadly, recurring foot injuries cut Yao’s career short. He retired at age 30 after playing just five games for Houston in 2010–11, following a completely lost 2009–10 campaign. #1. Cleveland Cavaliers: LeBron James Draft Year: 2003 Draft Slot: First round, No. 1 overall LeBron James single-handedly changed Cleveland’s basketball reputation from doormat to championship contender.
He alienated the fans in his home state of Ohio by opting to join the Miami Heat via free agency (where he won two titles). But “King James” has since returned to his home state in an attempt to win the franchise its first-ever Larry O’Brien trophy. Even if his pursuit isn’t successful, he’ll retire as the best Cavaliers player of all time. Discover More NBA Player Stats and Visualizations on PointAfter
https://medium.com/sportsraid/every-nba-teams-best-draft-pick-since-2000-48777b95585f
['Paul Dughi']
2016-05-29 03:24:41.102000+00:00
['Basketball', 'NBA', 'Blake Griffin']
Action movies 2021 || BEST UPCOMING MOVIE TRAILERS 2021 || Lqp Action Movies
Presenting Lqp Action Movies 2021 — When you are really in trouble, you become vulnerable and insecurities come out of nowhere. Don’t ever let those insecurities ruin your relationship, no matter what you see. Action movies 2021 || BEST UPCOMING MOVIE TRAILERS 2021 || Lqp Action Movies. Official Social Media links: Fb Page : https://www.facebook.com/LQP-Action-M.... Instagram : https://www.instagram.com/lqpactionmo... Twitter : https://twitter.com/lQPactionmovies Pinterest : https://www.pinterest.com/homedecoraa... Quora : https://www.quora.com/profile/Lqp-Act... Reddit : https://www.reddit.com/user/lqpaction... Mix : https://mix.com/lqpactionmovies
https://medium.com/@lqpactionmovies/action-movies-2021-best-upcoming-movie-trailers-2021-lqp-action-movies-f4b3769c7967
['Lqp Action Movies']
2020-12-21 15:12:31.190000+00:00
['Movie Trailers', 'Movie Review', 'Movies', 'Movies To Watch', 'Movies Online Free']
Daily analysis of cryptocurrencies 20190910 (Market index 41 — Fear state)
[Digital currency mining pool ViaBTC launches cloud mining contracts] According to news on its official website, ViaBTC has launched cloud mining contracts; its cloud mining section officially went live on August 30, 2019. The first product online is a 360-day BTC cloud mining spot contract. At the same time, ViaBTC launched a hashpower giveaway for new users and opened a one-month 10%-off promotion for first purchases. [Thai Bond Market Association to issue its own cryptocurrency] On September 10th, the Thai Bond Market Association (TBMA) announced that it will issue its own cryptocurrency to increase the efficiency of the Thai bond market. [Economist Preston Pysh: Bitcoin is the solution to fiat debt] Economist Preston Pysh said that inflation is a phenomenon of money supply, not a collapse of demand. He cited Venezuela as an example and asserted that citizens of the country did not suddenly reduce their demand for their own currency, further emphasizing that supply is the root cause. He believes that Bitcoin is a “technical solution to the political disaster of fiat debt.” He said that the value of, and demand for, the dollar will not collapse soon because people still have to pay taxes and bills. However, he believes that once people start holding bitcoin and using it as a store of value, they won’t have to rely on debt to live, because they will no longer be short of money. Crypto project calendar (September 10, 2019) BTC/Bitcoin: The DeFi Summit (London) will be held at Imperial College London from September 10th to 11th. TNS/Transcodium: AT tokens will be available on Transcodium’s (TNS) WirePurse on September 10th, and $3,000 worth of AT tokens will be airdropped to all WirePurse users. KICK/KickCoin: The KickCoin (KICK) team extended the SWAP bonus event deadline to September 10 and added extra bonuses to encourage trading. Crypto project calendar (September 11, 2019) BTC/Bitcoin: The Invest: Asia 2019 Summit will be held in Singapore from September 11th to 12th. CLOAK/CloakCoin: The CloakCoin (CLOAK) ENIGMA trading competition ends on September 11th; a second round will follow, with a prize of US$10,000 in CLOAK. PHR/Phore: The Phore (PHR) community needs to vote on the September core development budget proposal and the Marketplace and Synapse proposals by September 11. Crypto project calendar (September 12, 2019) BNB/Binance Coin: Binance will stop providing services to US users on Binance.com on September 12th. BCN/Bytecoin: Bytecoin (BCN) will release Copper v3.6.0 on September 12th. HBT/Hubii Network: hubii’s (HBT) “Blockchain in Practice” event with Microsoft will be held on September 12th at the Microsoft office in Oslo. Crypto project calendar (September 13, 2019) ETC/Ethereum Classic: ETC may perform the Atlantis hard fork on September 13th. Crypto project calendar (September 14, 2019) BTC/Bitcoin: The European Union’s Payment Services Directive 2 (PSD2) will take effect on September 14. The new law requires banks to implement “strong customer authentication.” In addition, according to earlier reports, PSD2 opens up some banking functions to third parties, providing new payment solutions for crypto products.
Crypto project calendar (September 15, 2019) TRX/TRON: TRON’s sidechain plan Sun Network begins its three-phase release. WAN/Wanchain: Wanchain (WAN) will hold a Q3 community conference call in mid-September. Crypto project calendar (September 16, 2019) LINK/ChainLink: Oracle will host the Oracle Code One conference from September 16th to September 19th, at which it will announce the launch of 50 startups working with Chainlink (LINK). MANA/Decentraland: The Decentraland (MANA) community will host an SDK hackathon on September 16. Crypto project calendar (September 20, 2019) NULS/NULS: The NULS 2.0 Beta hackathon will be held from September 20th to September 21st, 2019. AE/Aeternity: Aeternity (AE) will hold the “Cosmos One” conference in Prague, Czech Republic on September 20th. Crypto project calendar (September 23, 2019) BTC/Bitcoin: Bakkt, the digital asset platform led by ICE, the parent company of the New York Stock Exchange and the world’s second-largest exchange group, will launch a physically delivered bitcoin futures contract on September 23. EOS/EOS: The EOS mainnet is expected to upgrade to version 1.8 on September 23. BTC consolidates to build momentum and may later rise above $14,000: At present, BTC is most likely in a stage of pricing in the expected gains ahead of the halving. Comparing this with the previous two halving cycles, it can be seen that after each bear market bottom, BTC opened the next stage of a major bull market, rising to the Fibonacci 0.618 retracement of the previous bull market before falling back. The main support at the weekly level is near the Fibonacci 0.382 retracement. In both previous cycles, BTC consolidated between these two Fibonacci levels, and once the post-halving washout was complete, it broke upward through the top of the range into the rapid rise of a major bull market. BTC is expected to continue for a while in the large range of $9,500 to $13,500; the current structure is likely the relay stage of a large triangle. Once the chips have fully changed hands, BTC will have a strong chance of breaking out of this box, and may directly attack $20,000. Review previous articles: https://medium.com/@to.liuwen Telegram: https://t.me/Lay126 Twitter: https://twitter.com/mianhuai8 Facebook: https://www.facebook.com/profile.php?id=100022246432745 Reddit: https://www.reddit.com/user/liuidaxmn LinkedIn: https://www.linkedin.com/in/liu-wei-294a12176/
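For reference, the Fibonacci retracement levels cited in the analysis come from the standard formula sketched below. The cycle high of roughly $20,000 and low of roughly $3,200 are my assumed inputs for illustration, not figures given in the article:

```latex
\text{level}(r) = L + r\,(H - L)
% With assumed L \approx 3200 and H \approx 20000 (USD):
\text{level}(0.382) \approx 3200 + 0.382 \times 16800 \approx 9618
\text{level}(0.618) \approx 3200 + 0.618 \times 16800 \approx 13582
```

Those two levels line up closely with the $9,500 to $13,500 range the analysis describes.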
https://medium.com/@kyle8/daily-analysis-of-cryptocurrencies-20190910-market-index-41-fear-state-38b5fa6a3ea
[]
2019-09-10 00:00:00
['Binance', 'Bitcoin', 'Cloak', 'Transcodium', 'Kickcoin']
Amplify is three. 🎂
Amplify Recruitment is three years old today. Amplithree? — hmm, not sure. I can barely believe it, but it’s true. Three years ago today, three of us opened the door to a shared office in Fenchurch Street, opened our laptops and set about doing top-quality recruitment for a handful of initial clients. As founding myths go, a shared office in insurance-ville isn’t likely to have Hollywood on the phone, but hey, what’s a myth if we can’t spice that up later — maybe it’ll have been a garage in Camden or a lock-up in Old Street in a few years’ time. Today, there are 11 of us, covering: product, UX, creative & design, tech, data & analytics, delivery, client services, strategy, and marketing. We work with some of the most innovative, creative and effective businesses in London and beyond — boutique agencies, networks, consultancies, products, VCs and brands. Three years is a long time in digital, but the dynamism of the digital landscape keeps us young as we continue to move with the changes and challenges that the future throws up, just as we have since day one. Since the beginning we have spent our energy developing our client and candidate base, helping fantastic people find great new gigs, helping incredible companies find unfairly talented new team members, and growing our own number, but we haven’t really stopped to tell our story. Despite our name, we’ve been relatively quiet whilst beavering away. Well, we are three now, and so, like any self-respecting three-year-old, it’s time to make a bit more noise — and this is just the place for it. Time for cake, Amp.
https://medium.com/@amplifyrecruitment/amplify-is-three-34f9543d43a6
['Amplify Recruitment']
2020-11-27 09:11:49.295000+00:00
['Digital Marketing', 'Recruiting', 'Startup', 'Product Design', 'Recruitment']
Aesop Rock — Spirit World Field Guide — Album Review
Hip-Hop | Acid Rap Listen on Spotify | Listen on Apple Music Despite the hip-hop alias of Northport, New York artist Ian Matthias Bavitz, known as Aesop Rock, being on the scene since 1996, my first real experience listening to the rapper would come with his 2019 collaborative project with TOBACCO, titled Malibu Ken. My decision to listen to that project was mainly down to the fact that it was still very early in the year of 2019, and thus when I saw an album that was getting solid praise, it convinced me to give the album a shot. Overall, I found that project to be incredibly solid myself, but the stand-out pro to pick out from it would have to be the quirky lyrics and unwavering flow from Aesop Rock himself. My interest in this artist seemed to follow through upon discovering the single “Rogue Wave”, which was released towards the beginning of this year. Having this song as an approximation of sorts of what Aesop Rock sounds like outside of a collaboration, I definitely liked what I heard. But despite all of that, I was unaware of Aesop Rock’s newest solo album Spirit World Field Guide until I stumbled across it (courtesy of TheNeedleDrop). The eighth studio album from Aesop Rock, Spirit World Field Guide had only been teased by two tracks, “The Gates” and “Pizza Alley” (“Rogue Wave” doesn’t make the billing on this new album), but obviously I would be venturing into this spirit world completely afresh, eagerly awaiting all of the trippy musical wonders that might ensue. And indeed, there is a lot to get through on this project, which I feel really reflects how Aesop Rock wanted to make this a real experience of an album. To be clear, Spirit World Field Guide has a total of 21 tracks and an overall runtime that clocks in at just over an hour. And yet again, we have another case of an extensive yet enjoyable project: Spirit World Field Guide is absolutely bursting with character, and the most praiseworthy thing about it would have to be just how fun it sounds at its many great points. It feels unmistakably like an album that could only come from the mind of Aesop Rock, which is a great quality for this album to have. Starting off with an intro track, it definitely feels like a decent attempt to tie the theme of this album together. But while I love the aesthetic behind the album’s sound, it feels more gritty than spiritual in my opinion. And thus, my focus moved away from the thematic nature of Spirit World Field Guide, and instead went towards the bare enjoyability of the album. And while it is still impressive that the majority of tracks hit the mark on an album as extensive as this one, I of course expected some of them to fall into dud territory. I felt that the majority of interludes slipped into this, but there were also a few full tracks that didn’t carry the same level of impact as the more enjoyable tracks on here, most likely due to being too repetitive or weirdly produced. But fortunately, this doesn’t apply to most of what’s on here, where both the production and wordplay work together awesomely. I would definitely recommend Spirit World Field Guide to anybody looking for a project that carries strong RTJ2 vibes, as the quirkiness and engagement of this album certainly deliver on that front. But furthermore, this is a brilliant album for listeners to purely and simply enjoy. Favourite Tracks: The Gates | Button Masher | Coveralls | Fixed and Dilated Least Favourite Track: Boot Soup Rhymesayers Entertainment LLC 8/10
https://medium.com/@joeboothby/aesop-rock-spirit-world-field-guide-album-review-98a0b572d43b
['Joe Boothby']
2020-11-27 10:28:07.656000+00:00
['Album Review', 'New Music', '2020 Music', 'Music', 'Music Review']
LIFE LONG INVESTMENTS
Are you an ambitious young person searching for ways to invest and secure your future? Or are you about to hit retirement and badly need to invest so you won’t be caught off guard after retiring? Well, I’ve got superb news for you: you can invest with DIY, a reputable online platform that helps its users secure their future with just a small amount of money. The Do-It-Yourself (DIY) platform exposes its users to numerous investing opportunities. At times, DIY users do not like the kinds of investments suggested, and when that happens, the DIY platform teaches such users, with the help of professional personnel, how to invest in anything of their choice. Unlike other platforms and companies, the DIY platform does not require a large sum of money to get started with an investment, but helps its users start investing at their own pace. DIY has got you covered, always. Secure your family and future, save more money, and live a happy life with DIY. Still in doubt? Just click on Build-it.io to learn everything you need to know about investment. See you!
https://medium.com/@buildit_DIY/life-long-investments-47f35349d418
[]
2020-01-14 16:58:36.548000+00:00
['Help', 'Opportunity', 'DIY', 'Social Media', 'Investing']
This one's for me
Hello everyone! My name is Reynaldo Martinez, Jr. I am a husband and father to a beautiful little girl, and after much consideration and years of being a relatively reserved person, I’ve decided to take a step into some uncharted waters, beyond my comfort zone, and start a blog. I decided to start blogging because expressing myself in the moment can sometimes be difficult. A lot of times I am torn between speaking up and minding my own business, and oftentimes I come across thoughts that I believe are worth sharing, but too many times I choose to keep those thoughts to myself. For the most part, I am able to justify this without too much cognitive dissonance, as I am already not the type to openly express myself. Rather, I tend to be an observer and keep my insights to myself and consider it a form of wisdom. Or so I like to believe. But what good are wisdom and insight if they are not shared? For me, the decision to share and express was a result of my daughter, Elouisa. I think of Elouisa and how she will choose to communicate once she is able to. Will she be more reserved like her father and share her thoughts only when she feels truly compelled, or will she be more outspoken like her mother, fearless and courageous? I pray that she can take the best of both of us, but I also think it is necessary to lead by example in this situation. For now, that example takes the form of this blog. I like that I can take my time and be selective to share what I feel is important. If nothing else, to actually write down the many thoughts and ideas I often have. I recognize that not many people (if any) will see these anyway, but I am excited to finally be able to express myself and share them in an environment that is appropriate, with my best efforts to not make this a rant. That said, this one's for me.
https://medium.com/@thisonesforrey/this-ones-for-me-b43af1390519
['Reynaldo Martinez']
2021-02-17 04:49:22.840000+00:00
['Mindfulness', 'Positivity', 'Positive Thinking']
Top 10 Powerful Websites Built with ReactJS
Looking for the top 10 ReactJS websites? This blog gives you an idea of which popular websites chose ReactJS, and why.

ReactJS is a flexible and efficient JavaScript library for building user interfaces, introduced by Facebook in 2011. It offers excellent rendering performance and is among the most popular of all JavaScript frameworks. ReactJS lets you break the UI down into simpler components so you can focus on each individual piece. The future of ReactJS looks bright because it is backed by Facebook and a robust community, and its attractive features and simplicity have made it a popular JavaScript framework across the world.

A diagram of the historical trend of websites using ReactJS (shown as a percentage of all websites) illustrates this growing adoption.

Market Position: Compared to other JavaScript frameworks, ReactJS is used in almost all business sectors. Developers love ReactJS for its simplicity and ease of use, and each advanced version brings new features that help teams build attractive websites that can compete and succeed quickly in the market.

Many more big companies use ReactJS; here is a list of some of the large companies using it for web development. After reading this blog, some of these ReactJS websites may surprise you.

Top 10 Powerful Websites Made with ReactJS:

1. Facebook: Facebook is the leading social media company, with a user base of over 2.2 billion people globally. Facebook created the React library in the first place. It uses ReactJS in parts of its main page, and the Facebook mobile application is built with React Native, which displays on both Android and iOS. In 2017, Facebook announced React Fiber, a rewrite of React that became the foundation for further improvements and feature development and makes React more responsive. Thanks to its component architecture, Facebook can display comments, post reactions, and notifications without the necessity of reloading the page.

2. Instagram: Instagram is a popular social media platform for sharing photos and videos. Its single-page web application is built completely with React, and designers contribute with JSX code. ReactJS responds to user events in milliseconds, which keeps the web app highly responsive, fast, and lightning quick. According to Statista, 18% of Instagram's traffic comes from desktop.

3. Asana: Asana is a work management platform that enables teams to focus on projects, daily tasks, and goals. Simplicity is a key objective to strive for when it comes to building a website: it should be readable, testable, performant, and maintainable over time. Asana rewrote their frontend with React and solved many of their UI issues regarding focus and animation, as well as client performance issues. For Asana, ReactJS provides some great benefits, as under:

Small code size
Simple to integrate with Luna
Virtual DOM implementation
Similar to Luna views
Simple to reason about reactivity

The component model all of these sites rely on is illustrated in the short sketch below.
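To make the component idea concrete, here is a minimal sketch in TypeScript with JSX. The ProfileCard and ProfileList names are invented for illustration only and are not taken from any of the codebases discussed in this article.

import React from "react";

// Each piece of UI is a small, self-contained component that
// receives its data through props.
type ProfileCardProps = { name: string; followers: number };

function ProfileCard({ name, followers }: ProfileCardProps) {
  return (
    <div className="profile-card">
      <h2>{name}</h2>
      <p>{followers.toLocaleString()} followers</p>
    </div>
  );
}

// The same component is reused for every entry in the list; React's
// virtual DOM re-renders only the cards whose props actually change.
export function ProfileList({ profiles }: { profiles: ProfileCardProps[] }) {
  return (
    <main>
      {profiles.map((p) => (
        <ProfileCard key={p.name} name={p.name} followers={p.followers} />
      ))}
    </main>
  );
}

Because ProfileCard is self-contained, it can be tested in isolation and reused anywhere in the UI, which is exactly the property the companies below keep pointing to.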
4. Netflix: Give special thanks to ReactJS, because Netflix, on which you can enjoy your series, is built with it and ranks among the best of the top ReactJS websites. Netflix's adoption of ReactJS for its application and website was influenced by several factors, with modularity and startup speed among the main reasons. React satisfies these requirements and offers many more benefits, such as simple handling of custom rendering code, the ability to opt out of user-interaction handling, and being simple to grasp. Runtime performance, overall scalability, and initial load times are the most attractive features Netflix can leverage.

5. Codecademy: Codecademy is the leading interactive platform providing free coding classes in different programming languages. They are pleased with using ReactJS for their web application and site, and they are very confident in its performance and reliability. Since React websites are component-based, you can test an individual portion of the site in isolation without disturbing the rest. Some of the aspects that attracted Codecademy to React:

SEO made easy
Short code to write
Component-based, therefore easy to conceptualize
Compatible with legacy code, therefore flexible for the future

6. Yahoo Mail: For mail, reliability and performance matter more than anything. The new Yahoo Mail is built using technologies including Node.js, Redux, React, and others. Features like server-side rendering, one-way reactive data flow (a rough sketch of this pattern follows after the list of sites below), and the virtual DOM were the different reasons behind rewriting the Yahoo Mail architecture with ReactJS and Flux. The benefits that made React Yahoo's choice:

Shorter learning curve
Growing and active community
Predictable flow
Easy debugging
Independent of large platform libraries
One-way reactive data flow

7. New York Times: With React, the New York Times designed a new project on the Oscar red carpet that presents different looks of the stars; the gallery fantastically enables users to filter photos spanning 19 years. React's most impressive feature, efficient re-rendering, deserves special thanks for this. The New York Times moved from PHP-loaded HTML and JavaScript to a combination of Node.js, ReactJS, and GraphQL, which offers a more stable front end across its whole online presence.

8. Atlassian: Atlassian is a popular collaboration software company behind products like Jira, Bitbucket, Confluence, and Stash. They employ ReactJS both internally and externally, so we can say this company is a total and true ReactJS company. From React, their developers get benefits such as reusable libraries and features like deploying to desktop, mobile, and web. “Over the last two years, almost all single-page applications built in the Atlassian Cloud use React and Atlaskit. As the library matures, Atlassian products and ecosystem vendors lean in more heavily into it.” — Trey Shugart, principal developer at Atlassian

9. Dropbox: Dropbox moved to ReactJS when React became more popular among developers for websites and apps. Dropbox is a web-based file hosting service that employs cloud computing; this technology enables you to store folders and share them with others using file synchronization. Dropbox benefits efficiently from the plethora of resources available for the React framework, and React contributes to the success of this cloud-based storage and online backup service.
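As promised above, here is a rough sketch of one-way reactive data flow, one of the benefits in Yahoo's list. The SearchBox, ResultCount, and SearchPage names are hypothetical, invented for this illustration rather than taken from Yahoo's code: state lives in a single parent component, flows down through props, and user events flow back up through callbacks.

import React, { useState } from "react";

// The child reports user input upward through a callback prop...
function SearchBox({ onSearch }: { onSearch: (query: string) => void }) {
  return <input placeholder="Search" onChange={(e) => onSearch(e.target.value)} />;
}

// ...and a sibling receives the current value back down as a prop.
function ResultCount({ query }: { query: string }) {
  return <p>Showing results for: {query || "(nothing yet)"}</p>;
}

// State lives in one place, so every value on screen can be traced
// back to a single source.
export default function SearchPage() {
  const [query, setQuery] = useState("");
  return (
    <section>
      <SearchBox onSearch={setQuery} />
      <ResultCount query={query} />
    </section>
  );
}

Because data only ever flows in one direction, it is easy to reason about where any value came from, which is what makes the “predictable flow” and “easy debugging” points above possible.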
10. Airbnb: Airbnb is a famous company that offers hospitality services online, serving as a common destination for tourists and property hosts and providing the opportunity to book exclusive accommodations across the globe. React components tend to make your code easy to iterate on as well as to refactor, and these components are highly reusable. The best and most important benefit of React, and the one that most attracts Airbnb, is this reusability and refactorability.

Final Words: ReactJS is a great choice for building websites and applications. It provides high and increasing performance for websites overall. ReactJS has strong potential for the future of website development, not just the present, and its popularity and adoptability are proven by the successful websites discussed above.
https://medium.com/devtechtoday/top-10-powerful-websites-built-with-reactjs-757cd38bef05
['Binal Prajapati']
2020-05-15 12:11:40.489000+00:00
['Reactjs', 'Website', 'Developer', 'Technology', 'Development']
Biopharmaceuticals Manufacturing Consumables Testing Market Worth $709.92 Million By 2025
The global biopharmaceuticals manufacturing consumables testing market size is expected to reach USD 709.92 million by 2025, expanding at a CAGR of 12.1%, according to a new report by Grand View Research, Inc. The market is driven by the growing need for in-depth assessment of raw materials to support the successful and rapid launch of final products. Rising awareness of internal standards as well as external regulations in the biopharmaceutical industry is also aiding market growth. In the past few years, there has been a major paradigm shift from small chemical molecules to large biological molecules. As a result, the biopharmaceutical industry is geared toward exploring every aspect of the manufacturing process. Regulatory and quality expectations for best-quality raw materials and intermediates continue to increase. This, in turn, has pressured vendors engaged in the manufacturing of excipients and raw materials to invest in standardizing functionality as well as establishing sophisticated processes to consistently address the needs of final-product developers. Click the link below: https://www.grandviewresearch.com/industry-analysis/biopharmaceuticals-manufacturing-consumables-testing-market Further key findings from the report suggest:
https://medium.com/@marketnewsreports/biopharmaceuticals-manufacturing-consumables-testing-market-4292a831b67f
['Gaurav Shah']
2020-02-20 11:46:34.192000+00:00
['Laboratory', 'Testing', 'Asia', 'Africa', 'Europe']
Our FAQs
Writers

What happens when I submit my article to TDS?

Thank you so much for taking the time to submit your article to our team! We will review it as soon as we can. If we believe that your article is excellent and ready to go, this is how you will be able to add your post to our publication. If “Towards Data Science” shows up after you click on “Add to publication” in the dropdown menu at the top of the page, that means we have added you as an author and are waiting for you to submit your article. Once you have submitted your article, it will be reviewed by an editor before a final decision is made.

If we think that your article is interesting but needs to be improved, someone from our team will provide you with feedback directly on your submitted Medium article.

Please note that we only respond to articles that were properly submitted using either our form or via an email that exactly follows the instructions listed here. We don't respond to pitches or questions already answered in our FAQs or on our Contribute page. We also ignore articles that don't comply with our rules.

If you haven't heard from us within the next five working days, please carefully check the article you submitted to our team. See if you can now submit it directly to TDS and look for any private notes from us that you may have missed. You should also make sure to check your spam folder. If you just can't reach us, the best thing for you to do is submit your article to another publication. Although we'd love to, we can't provide customized feedback to everyone because we simply receive too many submissions. You can learn more about our decision here and submit another post in a month.
https://medium.com/p/462571b65b35#1204
['Tds Editors']
2020-11-19 01:16:58.476000+00:00
['Writers’ Guide', 'Tds Team', 'Writers Guide']