Face Recognition Using OpenCV and Deep Learning
As we all know, computer vision is one of the areas that has been advancing rapidly, and that is largely due to deep learning. Face recognition is one of the most interesting applications in the field of computer vision. Having gone through many articles, I have noticed that terms like face detection, face verification, and face recognition are inter-related and often confuse beginners. So in this article I would like to explain all these terms in a very simple way, and later on we will see how to implement our own face recognition system, which will recognize faces in images as well as in a live video stream.
Face Detection In simple words, face detection means identifying human faces in digital images. Fig. Face Detection in image. As we can see in the picture above, faces are detected and a yellow box is drawn around each face.
Face verification In the face verification problem, you are given an input image as well as a name or ID of a person, and the job of the system is to verify whether or not the input image is that of the claimed person. This is sometimes called a one-to-one problem, where you just want to know if the person is who they claim to be. For example, a mobile phone that unlocks using your face is using face verification. This is a 1:1 matching problem, as we are verifying whether the input image is of the same person whose ID or name has been provided along with the image as input. Face verification: “is this the claimed person?”
Face recognition In the face recognition problem, you are given an input image only, and the job of the system is to identify which person this image belongs to. For example, an attendance system for employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem: we provide one input image, and the system tries to identify which of K persons this image belongs to, or whether it belongs to none of them. Face recognition: “who is this person?”
One-Shot Learning One of the challenges of face recognition is that we need to solve the one-shot learning problem. Let’s understand this through an example. Suppose we have a database of ten pictures of 10 different employees in an organization. One of them is Tom. What the system has to do is, despite having seen only one image of Tom, recognize that a new picture is actually the same person. And, in contrast, if it sees someone who is not in this database, it should recognize that this is not any of the ten persons in the database. Our system has to learn from just one example to recognize the person again. The major advantage of a one-shot learning algorithm is that we don’t need to retrain our convnet every time a new employee is added to or removed from the database. To carry out one-shot learning, we want a neural network to learn a similarity function d, which takes two images as input and outputs the degree of difference between them. If the two images are of the same person, we want it to output a small number, and if the two images are of two very different people, we want it to output a large number. At recognition time, if the degree of difference between two images is less than some threshold called tau, which is a hyperparameter, we predict that the two pictures are of the same person; if it is greater than tau, we predict that they are different persons.
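To make the similarity function concrete, here is a minimal sketch. The encodings, the distance metric, and the threshold value are assumptions for illustration; the embedding model actually used in this article (FaceNet, which outputs 128-dimensional vectors) is introduced further below.

```python
import numpy as np

def degree_of_difference(enc_a, enc_b):
    # d(img1, img2): Euclidean distance between two 128-d face encodings
    return np.linalg.norm(enc_a - enc_b)

# Hypothetical 128-d encodings produced by a face-embedding model
encoding_1 = np.random.rand(128)
encoding_2 = np.random.rand(128)

tau = 0.7  # assumed threshold; in practice a hyperparameter tuned per model and dataset

if degree_of_difference(encoding_1, encoding_2) < tau:
    print("Same person")
else:
    print("Different persons")
```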
For the face recognition task, when the system gets a new image it uses this function d to compare the image with every image stored in the database and returns the minimum degree of difference; on this basis we can tell which person the image belongs to, or that it doesn’t belong to any of them. If someone new joins our team, we can simply add that person to our database and it just works.
Triplet Loss To train a neural network to learn the function d, we will use the triplet loss function. Here d outputs the difference between the encodings of two images. We always look at one anchor image, and we want the distance between the anchor and a positive example (an image of the same person) to be small, whereas we want the distance between the anchor and a negative example (an image of a different person) to be much larger. This loss function operates on three inputs: the Anchor (A), an arbitrary data point; the Positive (P), which is of the same class as the anchor; and the Negative (N), which is of a different class from the anchor. Mathematically, it can be defined as: L = max(d(A,P) − d(A,N) + margin, 0). The neural network uses gradient descent to minimize this loss, which pushes d(A,P) towards 0 and pushes d(A,N) to be greater than d(A,P) + margin. This means the network learns to output encodings for the positive image that are very similar to the anchor image’s encodings, and encodings for the negative image that are very different from the anchor image’s encodings.
Implementation Now that we are familiar with the terminology used in face recognition and have seen how the neural network is trained, we can move on to implementing our own face recognition system. The whole code implementation, along with an explanation, can be found on my github repo.
For face detection, we are using an OpenCV Haar cascade. The cascades themselves are just a bunch of XML files that contain the data OpenCV uses to detect objects. OpenCV comes with a number of built-in cascades for detecting everything from faces to eyes to hands to legs. We will be using the frontal face cascade classifier in our case. The cascades are readily available in the official github repo of OpenCV.

import cv2
cascPath = "haarcascade_frontalface_default.xml"
# Create the haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)
# Load the input image (path shown here is illustrative) and convert it to grayscale
image = cv2.imread("image.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Detect faces in the image
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30),
    flags=cv2.CASCADE_SCALE_IMAGE
)

Note: since these OpenCV classifiers are based on machine learning, we need to experiment with different values for the window size, scale factor, and so on until we find the combination that works best for our use case.
For face recognition, we crop faces from the images using the face locations obtained through the Haar cascade detector, resize them to 96x96x3, and use the FaceNet deep learning model, which has already been trained on millions of images using the triplet loss function defined above. FaceNet takes a face image (or a batch of m face images) as a tensor of shape (m, nC, nH, nW) = (m, 3, 96, 96) and outputs a matrix of shape (m, 128) that encodes each input face image into a 128-dimensional vector. The project implementation is mainly done as follows:
1. Face encodings: pass images of each person through the face detector mentioned above, then use the FaceNet model to get an encoding of each face, and finally store all face encodings in the database.
2. Face recognition in images: pass an input image through the face detector to get the faces in the image, then pass those faces to the FaceNet model to get an encoding of each face.
Then we compute the degree of difference between each face encoding and the encodings stored in the database, and return the name of the person (as a label) with the minimum degree of difference for each encoding, provided this minimum distance is less than the predefined threshold tau; otherwise we return “unknown” (not found in the database). Finally, we put boxes around the faces in the image along with the labels obtained. Output of face recognition on an image: Fig. Trip to Lonavala with my friends. We can see that it recognizes the faces correctly in the image above.
3. Live face recognition: for live face recognition, everything is the same as recognizing faces in images; the only difference is that we take frames from a live video stream through OpenCV as input to the face detector, rather than images stored on disk (a minimal sketch of such a frame-capture loop follows below).
Note: for a better understanding of the code and implementation, kindly visit my github repo. I hope this helps!! Thanks for reading. As this is my first article here, any feedback/suggestions will be highly appreciated.
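As a complement to steps 2 and 3 above, here is a minimal, hedged sketch of the database matching step and the live-video loop. The `recognize` helper, the `database` dictionary, the threshold value, and the commented-out `encode_face` call are placeholders standing in for the stored FaceNet encodings and the FaceNet forward pass described in the article, not the author’s actual code.

```python
import cv2
import numpy as np

def recognize(encoding, database, tau=0.7):
    # Compare a query encoding with every stored encoding and keep the closest match
    best_name, best_dist = "unknown", float("inf")
    for name, stored in database.items():
        dist = np.linalg.norm(encoding - stored)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < tau else "unknown"

# Live recognition: read frames from the webcam instead of images stored on disk
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in faceCascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(frame[y:y + h, x:x + w], (96, 96))
        # encode_face would be the FaceNet forward pass returning a 128-d vector:
        # label = recognize(encode_face(face), database)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
    cv2.imshow("Face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```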
https://medium.com/@krsanu555/face-recognition-using-opencv-and-deep-learning-9155383b3779
['Kumar Sanu']
2020-10-19 08:30:04.033000+00:00
['Face Recognition', 'Face Detection', 'Opencv', 'Face Verification', 'Deep Learning']
Bayes’ Theorem Unbound
Bayes’ Theorem Unbound E.T. Jaynes’ reformulation of Bayes’ theorem is as beautiful as it is useful and gives deep insight into the impact of evidence on the probability of just about everything Graeme Keith Follow Jan 18 · 9 min read Bayes’ theorem tells us how to update probabilities in the light of new data. When we’re assessing the probability of an event with a binary outcome — something either happens or it doesn’t — there is a particularly elegant formulation due to the high priest of Bayesianism E.T. Jaynes that richly deserves a much wider audience than it has today. It has consequences for how we look at success and failure in all branches of life. Vanilla Bayes’ Theorem (The traditional practical example for introducing Bayes’ theorem is tests for medical conditions, but I figure we’re all pretty tired of talking about those kinds of test, so here’s another example drawn from the annals of lockdown) Photo by Nina PhotoLab on Unsplash What is the probability my son Theo is currently playing Minecraft on the internet? It’s 1600. He plays for an hour between 1500 when he finishes home schooling and 1900 when he comes down for dinner, but it’s pretty random exactly when. In the absence of further information, we’ll say the probability is 25%. Now I notice that our internet connection has ground to a halt. I know this because my other son Carl is trying to watch YouTube and he’s yelling at his brother to stop hogging the internet connection. But what is the probability Theo is actually playing Minecraft online, given that Carl’s connection has crashed? My hypothesis is that Theo is playing Minecraft. My datum is Carl is complaining that the internet connection has crashed. Bayes theorem is usually written like this Equation 1: Vanilla Bayes H is hypothesis, D datum, P probability, and the little vertical line means that the datum after the line is to be accounted for when we’re trying to calculate the probability distribution of the hypothesis before the line. So Bayes tells us The probability that the hypothesis is true (i.e. that Theo is playing Minecraft) given that we have observed the datum (the internet has crashed and Carl is yelling) in terms of The probability that the internet has crashed given that Theo is playing Minecraft (the probability the datum is observed if the hypothesis is true). Notice the switch in order here; this is why it’s sometimes called Bayesian inversion. and The probability the internet has crashed (the probability of observing the datum — this is usually a right pain, but we’re going to get rid of it shortly) and The probability that Theo is playing Minecraft in the blissful case when we are utterly ignorant as to whether or not the internet has crashed. This is sometimes called the prior; it’s the probability without the information we’re trying to account for. The genius of Bayes’ theorem is the inversion that gives the probability of hypothesis given datum, which we want to find, in terms of the probability of datum given hypothesis, which we can usually measure by doing experiments or find by processing historical data. Odds On We’re used to see probabilities as numbers between 0 and 1 (or 0% and 100%), but when there are only two outcomes, we can also use odds: the ratio of the probability of one outcome to the other. Before Carl started shouting, the probability Theo was playing Minecraft, P(H) was 25%. The probability he wasn’t playing Minecraft, which I’ll write P(~H), was 75%. 
The odds Theo was playing Minecraft were 1:3 (or three to one against), which we can also just write as odds = 1/3. He was three times more likely not to have been playing than to have been playing.
Odds Bayes For the negation of H, Bayes’ theorem reads P(~H|D) = P(D|~H) P(~H) / P(D) (Equation 2: Vanilla Bayes for the negation of H). This is Bayes’ theorem for Theo not playing Minecraft. We’ll use it to work out how to change the odds of the hypothesis that Theo is playing Minecraft in the light of the datum that the internet has crashed. The odds of Theo playing Minecraft given crashing is the probability of Theo playing Minecraft given crashing divided by the probability of Theo not playing Minecraft given crashing. We can find these odds just by dividing the two equations (equation 1 / equation 2). When we do this, jolly mathematical fortuities begin to emerge. The troublesome probability of the internet crashing disappears, and the blissful-state-of-ignorance prior probabilities also end up making an odds. We get the following: O(H|D) = O(H) × P(D|H) / P(D|~H) (Equation 3: Bayes’ theorem in terms of odds). This is rather fine. It says that the odds of Theo playing Minecraft given Carl shouting about the internet connection increase by a factor that is exactly the ratio of the probability the connection crashes when he is playing to the probability the connection crashes when he isn’t playing. If the datum is more likely when the hypothesis is true than when it isn’t, then the odds increase (the hypothesis becomes more likely), and vice versa.
Bayesian Insights I promised deep insight into the impact of evidence on probability in all branches of life; well, here it is. What this equation tells you is that to understand how data change the odds of the truth of a hypothesis, you have to know both how likely you are to see that datum when the hypothesis is true and how likely you are to see it when it’s false. Hypotheses can also be about things that are going to happen (or not); we’d usually call these outcomes. Photo by Dingzeyu Li on Unsplash So when successful people tell you that the secret to their success is, for example, meditating in the morning, this tells you nothing by itself. You have to look at unsuccessful people and see if they’ve also been meditating in the morning. The hypothesized outcome here is that you will be a success, and morning meditation is being offered as a datum that supports that outcome. But if the incidence of meditation among unsuccessful people is similar to that among successful people, then meditating doesn’t change your odds. (We’re not even getting started on the fact that even if the odds do change, that in no way supports the inference that the datum is causally related to the hypothesis.) If a management consultant tells you that you have to implement such and such a governance system or organization principle because many of the most successful companies in your industry have done just that, ask them about the unsuccessful companies. A much more serious example is the space shuttle Challenger, which exploded when O-ring seals on one of the solid rocket boosters failed due to the extreme cold on the morning of the launch. When the decision to launch was taken, it was noted that failures had occurred across a range of temperatures. The hypothesized outcome here is that the seals will fail, but only data relating to cases of that outcome — failures — were discussed. Had the launch team seen data for the converse of that outcome — successful missions — they would have seen that the O-rings only ever held when it was warm.
From Wikimedia Commons Without this information, looking only at failures, it looked as if temperature wasn’t a factor. The probability it was cold when a seal failed was similar to the probability it was warm when a seal failed. But looking at when the seals held, it was clear that the probability it was cold in a success case was very small, so the odds of a failure when it’s cold are very high.
Apotheosis: Jaynes’ Final Formulation So the odds form of Bayes’ equation is incredibly insightful (and easy to use), but Jaynes, eminent mathematician that he was, wasn’t entirely happy about the form of odds. Outcomes that are more likely than not spread themselves out between 1 and infinity, but outcomes that are less likely than not are crammed in between 0 and 1. Jaynes realized that the logarithm of the odds is much more elegant and symmetrical. The logarithm of odds stretches from negative infinity for something that’s never going to happen to positive infinity for a sure thing. Odds of 1, exactly as likely as not, have logarithm 0. No information. If you know about logarithms, you’ll know that the equation above quickly gives us the following. I’ve followed Jaynes in the choice of logarithm base and the factor 10, but they aren’t important; they just give handy numbers and the quiet nerdy satisfaction that the units of evidence are decibels. 10 log10 O(H|D) = 10 log10 O(H) + 10 log10 [P(D|H) / P(D|~H)] (Equation 4: Logarithm of Bayes equation for odds). If we define what Jaynes calls the evidence for an event H as J(H) = 10 log10 O(H), then we have J(H|D) = J(H) + 10 log10 [P(D|H) / P(D|~H)] (Equation 6: Jaynes’ form of Bayes equation for binary outcomes in terms of evidence). This is an absolute pearl of an equation. The impact of data on evidence is linear. All we’ve done is to take that big mess of 10 log odds etc. in equation 4 and call it J (in honour of Jaynes). But this evidence J is just another way of writing a probability. If you give me a probability P(H), I can give you an evidence J(H) and vice versa. The relationship is shown here. What Jaynes’ equation says is that to update an evidence in the light of data, you just add or subtract the middle term in equation 6 to your starting evidence. If your datum is more likely in the case that your hypothesis is true than in the case that it is not, the evidence for your hypothesis will go up (the fraction is greater than 1, so its logarithm is positive). If your datum is less likely in the case that your hypothesis is true, it will go down.
Background Rates Photo by National Cancer Institute on Unsplash If you start with a very low probability, then the evidence has fallen off the bottom of the very steep part of the curve to the left of the figure above. A strongly supportive datum (say 10 dB) will lift you up quite a bit, but because the curve is so steep there, it won’t move you very far in probability, i.e. horizontally. This is because if the probability of a hypothesis is very low, it’s much more likely that any supporting evidence is false-positive evidence. This is why, if the background rate of the incidence of an illness is low, a positive test has to be extremely faithful to move the probability of being sick, i.e. the probability of a true positive, P(D|H), must be very much higher than the probability of a false positive, P(D|~H). The evidence formulation automatically takes care of this.
Conclusion Unfortunately there is no equivalent to the evidence formulation for events with several outcomes. (Jaynes spectacularly leaves this as an exercise for the reader.)
But in the binary case, the evidence formulation is incredibly powerful — both for the clarity it provides with respect to the impact of data and the effects of background rates, and for the separation of the characterization of tests from the probability of the things being tested.
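To make the update concrete, here is a minimal sketch of the evidence calculation applied to the Minecraft example. The 25% prior comes from the article; the two conditional probabilities are invented for illustration, since the article doesn’t give numbers for them.

```python
import math

def evidence_db(p):
    # Jaynes' evidence in decibels: J(H) = 10 * log10( P(H) / P(~H) )
    return 10 * math.log10(p / (1 - p))

def probability(j_db):
    # Inverse mapping: from evidence in decibels back to a probability
    odds = 10 ** (j_db / 10)
    return odds / (1 + odds)

prior = 0.25                 # P(H): Theo is playing Minecraft (from the article)
p_crash_given_playing = 0.8  # P(D|H), assumed for illustration
p_crash_given_not = 0.1      # P(D|~H), assumed for illustration

# Equation 6: add the decibel weight of the datum to the prior evidence
update = 10 * math.log10(p_crash_given_playing / p_crash_given_not)
posterior_evidence = evidence_db(prior) + update

print(f"Prior evidence: {evidence_db(prior):.1f} dB")
print(f"Datum adds:     {update:.1f} dB")
print(f"Posterior P(H|D) = {probability(posterior_evidence):.2f}")
```

With these assumed likelihoods, a 25% prior (about -4.8 dB) picks up roughly 9 dB of evidence and comes out at about a 73% posterior, the same answer the odds form of equation 3 gives directly.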
https://www.cantorsparadise.com/bayes-unbound-f0b464683e7c
['Graeme Keith']
2021-01-20 10:52:45.416000+00:00
['Probability', 'Data', 'Evidence', 'Bayes Theorem']
Reflections over Resolutions
Reflections over Resolutions Originally posted at https://www.danielstillman.com/blog/reflections-over-resolutions New Year’s resolutions are declining in popularity — a Forbes survey showed that about 75% of people over 45 don’t bother with them anymore. And good riddance. Research shows that resolutions are not very effective at changing behaviors — after a month, nearly half of resolvers had failed at whatever they were resolving to do. Resolutions are driven by what we think we should do The top ten resolutions should look pretty familiar to you: Exercise more, Lose weight, Get organized, Learn a new skill or hobby, Live life to the fullest, Save more money / spend less money, Quit smoking, Spend more time with family and friends, Travel more, Read more. Most of these are driven by an idea of what we *should* look like, *should* have or *should* be like. Resolutions are driven from the outside. Resolutions are also problematic because they are focused on the goal, devoid of any plan or process to make them happen. So let’s peel these two issues back to the heart of the matter: How to Reflect so you develop deep insights about how you want to be, and How to Resolve so you accomplish what you want. Reflect before you Resolve Towards the end of 2020, I packed up my laptop, my sketchbooks from 2020, a mess of post-its and markers and loaded up my bicycle panniers. I took a 30 mile ride north up out of New York City to a quiet town in the woods. I booked a small Airbnb for myself and took a few days to think about the year. I was exhausted from the emotional roller coaster that was 2020 and couldn’t really think about the new year. So all I knew was that I wanted to do a deep dive on the year that had just passed. How to Reflect: Use a Process There’s nothing wrong with randomness or improvisation in your reflection process…but even improv has some fundamental rules with coherent internal logic. So I wanted to use a simple format to guide my process. Reflection is a conversation you have with yourself, and I believe that we can, do and should design our conversations — both with others and ourselves. I’ve found that the best processes have a deep, internal logic. That’s why I fell in love with Design Thinking more than a decade ago. Discover, Define, Develop and Deliver are four words that can move any creative conversation forward with clarity, regardless of the challenge. And Reflection is the perfect process to make the “Discover and Define” upfront part of the process effective. Some of my favorite reflection prompts come in groups: Plus/Delta is the simplest of all of the approaches I’ve used, and I learned it from Gamestorming, a powerful library of group process designs. Plus is positive and Delta is the symbol for change. This model of reflection asks us to ask ourselves “what worked?” and “what would we want to change?”. This simple process is a great reflection format because negativity is removed from the conversation — if there’s a “minus” we’re asked to turn it into a “delta”, which is a fundamental approach to reframing challenges. Rose/Thorn/Bud (RTB) is attributed to the Boy Scouts of America. I first learned about this format from a co-worker of mine who used it to facilitate a better dinner table conversation with his three daughters. He’d ask each of them for something nice that happened that day (a “rose”) and also ask them for something not-so-nice that happened that day (a “thorn”).
While Plus/Delta removes negativity on purpose, RTB includes it, on purpose. Knowing that negativity is included in the conversation can create clarity and safety. If you’ve ever seen the Pixar movie Inside Out, you know how damaging it can be to focus only on the positive side of things. “Buds” are like little roses…they’re not in full bloom, but they might develop into a rose with the right support. Buds can be something on the horizon, something emergent, something hopeful. A very simple way to put RTB is “Positive/Negative/Potential”. Facts/Feelings/Insights/Potential: This approach has many mothers. A foundational approach in Non-Violent Communication (NVC) is separating out Observations, Feelings, Needs/Values, and Requests (OFNR). Disagreements in groups of people usually happen when we dance around and between each of these elements or combine them haphazardly. Using the OFNR approach as the foundation for a reflective process can be powerful, and it’s the approach I used for my personal retreat. The OFNR approach smells a lot like What/So What/Now What, another favorite for group reflections attributed to Rolfe et al in 2001. Separating Facts and Feelings is powerful. The key reflection questions here are: “What happened?” and “How did it make me feel?” These two questions are extremely neutral, which is an advantage, but also very general. I use these two questions in my workshops often because I don’t want to put my thumb on the scale when teams are thinking. But looking with more specific detail can be helpful. Positive/Negative/Potential is one more detailed way to look back over the year: What happened that was awesome? What happened that was awful? What happened that has potential? These questions address the first phase of the Design Thinking process to help us Discover “What Happened” in a more balanced way. A friend recommended Alex Vermeer’s 8,760 Hours as a guide to my retreat. Vermeer’s approach is to do a mind map of 12 life areas: Values & Purpose, Contribution & Impact, Location & Tangibles, Money & Finances, Career & Work, Health & Fitness, Education & Skill Development, Social Life & Relationships, Emotions & Well-Being, Character & Identity, Productivity & Organization and (finally!) Adventure & Creativity. Doing an RTB on *each* of these areas will give you a much clearer picture of “what happened” over the last year and a much deeper sense of how you feel about these elements of your life. If you don’t like Alex’s 12 life areas, choose your own or synthesize some other approaches. Alex has his own suggested Reflection Questions for each element: What went well? What did not go well? Where did you try hard? Where did you not try hard enough? By the end of a half-day, I had flipped through my sketchbooks for the last year, captured a series of nuggets of inspiration and sketched a host of mind maps for each area of my life. I was also beginning to feel energized about possibilities for 2021 (which surprised me). Don’t Forget Gratitude I talked over my plans for a retreat with my wife and my therapist, and they both had the same advice: don’t forget gratitude. Looking over the year (as difficult and chaotic as it was) and finding moments of brightness was profound. Gratitude and joy, in this context, are data about how I felt. Many years back I diligently kept a gratitude journal: three things I was grateful for, each evening. At the end of the year, I copied all of my entries to sticky notes and made a huge wall-sized map of what triggered gratitude for me.
This map was a map to my happiness — it was pretty clear what elements I needed to keep cultivating in my life to keep me alive inside and out. How to Resolve: Run Some Experiments Using the “Discover” mindset from Design Thinking along with a series of thoughtful prompts can help you begin to “Define” which areas are most in need of support. Support to continue flourishing, and support to get on track. Resolving to achieve ALL of the top ten resolutions will leave you pretty spread out and exhausted. You can’t Exercise more, Lose weight, Get organized, Learn a new skill or hobby, Live life to the fullest, Save more money / spend less money, Quit smoking, Spend more time with family and friends, Travel more, Read more ALL at the same time. So pick 2–3 areas from your reflection to work on. When I suggest “run some experiments” I mean SOME, not all. Pick 2–3 areas from your reflection map and decide how you want to shift each of them. Leverage Lean Startup for your Life experiments: Build-Measure-Learn There are a lot of versions of this multi-circle diagram and they are all dizzying. The key idea is that there is a Design Thinking — Lean Startup handoff at the midpoint of the traditional double diamond, expanding and adding more detail to the “Develop/Deliver” phases. We’ve leveraged a self-empathy process with our reflection questions to address the upfront, problem-solving phase in the diagram below. In the Lean mindset, you build as little as possible, try something out and then measure the results. We pick something we want to learn about, build something to help us do that, and measure the results. Define your Approach Clearly “Get in Shape” is a pretty broad goal. Round is a shape, after all. Do you want to run a marathon? Lift your body weight? Swim the English Channel? Regardless of your goal, asking about your Why is critical. Why do you want to run a marathon? Asking 5 Whys is a minimum — 9 Whys will potentially get you to the heart of the matter. As Nietzsche said, he who has a why to live for can bear almost any how. When the going gets tough (and it will), knowing your why will help pull you through. In essence, instead of resolutions, frame a hypothesis: If I sign up to run a marathon I’ll get in better shape. (maybe) If I sign up for a running group I’ll train for the marathon. (much more likely) Behind this hypothesis is a tacit assumption and a deeper Why: If I’m in better shape, I’ll be happier. One of my goals is to read more so I can learn more and be smarter. I like being smart since it makes me better at my job. One of the ways I’m going to make that happen is having more authors on my podcast. I think it’s rather rude to invite someone for an interview and *not* read their work. In essence, my podcast is a system I’ve set up to enforce my goals. Systems over Events Getting in Shape and Reading More are not events…they’re processes that occur over time. Wishing to “get in shape” doesn’t make it so. Not smoking a single cigarette doesn’t a quitting make (although not smoking a single cigarette is much easier than never smoking a cigarette ever again. Check out the behavior grid for more.) Signing up for a running group is changing the system your fitness approach exists in. Telling my wife that I want to stop looking at my phone after 10PM is changing the system my sleep habits exist in. I see the events and the patterns…but unless and until we shift the system our lives exist in, the effort to stick to our goals will be more challenging than it has to be.
The Iceberg Model is a key mindset for reframing your goals in terms of increasing impact. A few years back my friend Rob struggled with quitting smoking. He noticed that he smoked the most when he was working from home during the week. So over the weekend, he’d throw out his cigarettes and give his wallet to one of our friends. He didn’t need money during the week — He could order nearly anything over the internet with his credit cards stored in his computer while his corner bodega, where he would buy cigarettes, would only take cash. Rob decided to take the events and the patterns out of his hands by shifting the system, a place of much higher leverage to foster change. How can you change the system, rather than just relying on your willpower to shift events one at a time? The Iceberg Model: Events, Patterns and Systems Inspect the Results Regularly In essence, the problem with resolutions is that they are the ultimate waterfall approach. It’s no surprise that making resolutions once a year with no check-ins along the way fails 81% of the time over two years. How often should you check in with yourself? (in essence, how often should you re-reflect?) In Scrum, there’s a daily standup. And that’s not a bad suggestion, but a little lightweight according to Ben Franklin, who suggested a morning standup and an evening check-in! He suggested that each morning one should ask “What good shall I do this day?” and at night to check back in with “What good have I done this day?” My wife and I actually have a RTB conversation at the dinner table most nights, so, I’m not against this approach. For me, I’m planning weekly check-ins with myself and a 90-day retreat. That’s just for my business. I have a weekly men’s group as well as therapy to check in on my emotional wellbeing.
https://medium.com/@daniel-stillman/reflections-over-resolutions-fded3c6d1211
['Daniel Stillman']
2021-02-08 01:40:31.802000+00:00
['Lean Startup', 'Reflections', 'Systems Thinking', 'New Years Resolutions', 'Design Thinking']
Trade wars and rare earths
IMAGE: Peggy Greb, US Department of Agriculture (Public Domain) The worst aspect of the US-China trade war, apart from the fact that there are never any winners in trade wars, is that nothing about it makes any sense. The conflict has been triggered by one of the most irresponsible politicians in history, with mounting instability: today I block you, tomorrow I postpone the measures for three months, the next day I say that Huawei is a threat to national security, and two days later I suggest it could be included in some kind of trade agreement. In all seriousness: if a company is a threat to national security, it cannot possibly be included in any trade agreement, and conversely, if a company can be included in a trade agreement, it’s not a threat to national security. But as I said, nothing about this makes sense. Block Huawei? The Chinese giant has imported enough components from the United States to continue manufacturing at its normal pace for the rest of this year, and has more than enough time to develop the vast majority of these components in China if necessary. If it were necessary, which I doubt, that would be bad for US industry, because China would be obliged to develop alternative components that would become its worst nightmare in international markets. If the restrictions are maintained over time, the biggest problem for Trump would not come from China or Huawei, which is under no pressure from investors as it’s an unlisted company, but instead from US industry. Apple’s potential losses are enough to strike fear into investors, but many more companies face serious problems if the trade war heats up. Look no further than Google: pushed by the pathetic Donald Trump and his clumsy and ill-calculated efforts to restrict its dealings with Huawei, the company has now been forced to show that its Android operating system is anything but open, that it rules it with an iron hand, and that, as well as prompting many misgivings among the public, it has potentially been exposed to further regulation. Might China retaliate by restricting exports of the rare earth elements used in the manufacture of electronic components? In the same way that US threats are largely empty, so are China’s. Rare earth elements, in fact, are not so rare, nor is China blessed with a particular abundance of them. The only reason China is the main supplier of rare earths for industry is its lax environmental laws and comparatively cheap labor; rare earths can be found in many places, including California, and once extracted from the deposits where they are usually found alongside other elements, the rest of the process is reasonably straightforward. Again: faced with a hypothetical restriction on exports of rare earths from China, all that would happen is that other countries such as Australia, Brazil, Canada, India and the United States would take up the slack, with China the biggest loser. Artificial constraints are always bad for everyone, and trade wars are, to a large extent, just that: clumsy attempts to generate artificial constraints. Donald Trump believes that geopolitics can be managed by bullying, making this trade war a grotesque, absurd and pointless episode, which of course need not concern smartphone owners (much less prompt appeals to the authorities to intervene): these are meaningless, short-term actions, not lasting restrictions that will force changes in the industry, and nobody needs to care about them. None of this makes sense.
In practice, the best thing that can be done about the erratic decisions and tantrums of Donald Trump is to ignore them, do nothing and wait for them to pass.
https://medium.com/enrique-dans/trade-wars-and-rare-earths-3b86c5a68aaf
['Enrique Dans']
2019-05-24 16:30:36.906000+00:00
['USA', 'China', 'Politics', 'Trade War', 'Trump']
StyleGAN2 Projection. A Reliable Method for Image Forensics?
https://medium.com/merzazine/stylegan2-projection-a-reliable-method-for-image-forensics-7850f351002b
['Vlad Alex', 'Merzmensch']
2020-11-13 10:21:38.645000+00:00
['Artificial Intelligence', 'AI', 'Published Tds', 'Ai In Discourse']
Migrating from a conventional CMS to a decoupled technology stack!
Migrating from a conventional CMS to a decoupled technology stack! How we migrated the main Comic Relief site over a 5 month period from Drupal 8 to a headless CMS and a more modern software stack (aka, the JAM stack). From “serverness” to serverless 🚀
Why move away from Drupal? 👋 Initially, we thought we could iterate on our Drupal platform to move to the headless model. From Drupal 7 to Drupal 8, we have always been big advocates of Drupal and avid contributors to its community. After a long and happy relationship, we decided to let it go and move on: despite having a very strong and capable platform, some of the constraints were becoming too much for us to justify keeping the old stack. “Regular core updates for security fixes; sometimes vital modules are no longer supported in the new version, or no longer maintained at all, requiring some bespoke patching.” “Tied into the supplied template engine (Twig): flexible, but often requires a lot of pre-processing and digging into render arrays to get exactly what is required for any one piece of work/design. Even simple fields provide a lot of stuff to the FE, which is customisable to some extent but still time consuming.” - Andy Phipps, Senior Eng.
What’s the JAM stack? “A modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup” - Mathias Biilmann (CEO & Co-founder of Netlify). To learn more about it, check out https://jamstack.org/.
Why did we decide to go JAM? Firstly, part of our core strategy is to use serverless technology wherever possible to reduce stress and the potential scaling issues that our night on TV creates. Read the article by Caroline (our Product Lead), “The real business value unlocked by going Serverless”, to understand more about our approach. Secondly, we wanted to focus on the core foundations of the web experience. We want to deliver a fast, reliable and accessible website to our users. Security, performance, best practice and a11y are the attributes we value the most in every application we build, and the JAM stack provides us with all the tools we need to achieve that. From very early on in the project we kept those reports at 100% as our benchmark. Cost was a big factor too. Our server cost was very high: to host our Drupal site, we were spending around £70k a year. Hosting static files in CloudFront, we are on track to reduce our costs by around 90% while improving reliability, scalability and performance. Engineering experience. The experience of developing on a JAM stack is quite delightful, and our new setup is very attractive. We believe that staying relevant and up to date with the latest technologies, such as React and GraphQL, is important for bringing in the best talent (we are hiring 😃). “Contentful is very user-friendly, without much overhead on development.” “User experience improved a lot through serving less demanding static content.” - Mohamed Labib, Senior Eng. Faster development time. Developing in a headless CMS setup makes things easier and faster. Engineers do their work very independently and were able to add more components into the CMS than we previously achieved on the Drupal platform. “Aligning the front end across our products and using Contentful has allowed us to give much greater flexibility for content management — tasks which previously had to be developed in code are now achievable for our content management team.
This frees up engineering team time to work on improving accessibility and iterating on components to continually improve the site.” - Caroline Rennie, Product Lead
New comicrelief.com 🎉 To manage our content we use Contentful as our headless CMS. To generate our frontend app we use Gatsby. To style our React components we use styled-components. To build our Component Library we use Styleguidist. To deploy our app we use Concourse CI/CD, and to host our files we use AWS CloudFront. Image inspired by Gatsbyjs.org
Contentful. What was once a set of different Drupal backends managing the content of our campaign websites is now one content hub, very user-friendly, delivering content to different applications across our organisation.
Gatsby. We wanted a framework to help us focus on what matters most to us, which is delivering content to our users in an interactive, clean and transparent way. And that’s what Gatsby provides. Gatsby gives you easy access to features like modern JavaScript syntax, code bundling and hot reloading, without having to maintain custom tooling. Build app-like experiences faster — with Gatsby.
Styled-components. Our CSS-in-JS library of choice made building our React components very fun. Once we made the transition from “modifier CSS classes” to props, we could be very creative and flexible with our components. It’s very interesting to use props instead of classes to customise the behaviour of a component. All your styles are contained within your component, so there is less risk of overwriting the styles of other components. I really like the Jest integration and features like CSS nesting. - Louis Tchamegni, Eng.
Styleguidist. To develop our React components in isolation we chose Styleguidist. It helped us keep our code documented and tested, and it is an easy way to share with the team and show our components’ variations and interactivity.
ConcourseCI. We have been using Concourse CI for quite some time. We love to see our pipeline executing all our jobs, tasks, tests and notifications, making sure our code is reliable and our site doesn’t degrade.
Tests. We use jest-styled-components to snapshot each component once it has been tested. And at the app level, we use Cypress for our end-to-end testing.
CloudFront. This is our CDN (it speaks for itself).
Final thoughts My experience on this project was great. I’d say that if we were not using this selection of tools, frameworks and libraries, we would not have achieved what we did in the timeframe we had. Building our React styled-components in isolation in Styleguidist was a great way to keep working on the frontend while we were still working out our backend integration. Using GraphQL in Gatsby was very effective for pulling the data we need, in the shape we need it. Contentful and Gatsby work really well together: Contentful’s richText field made it possible for us to use our “atoms” in our content models’ body fields, and Gatsby just pulls it all together very nicely. Our concerns were rapidly resolved. At first, we were worried about having to build our site for every content update, but in the end it means we test every piece of content that goes to production. How did we migrate our content from Drupal to Contentful? We managed to migrate the majority of our content using contentful-migration. With more than 3k pages, our initial build time was around 10 minutes, but with the latest Gatsby updates and some performance improvements we made to our build process, we got it down to 3 minutes. Previewing content? Gatsby Preview!
It’s quite a recent feature, and the Gatsby team is doing an awesome job improving it almost day by day. Gatsby Preview provides us with a temporary URL, allowing us to preview our content changes almost immediately. What’s even better now is how fast and smartly we can deliver value to our product and its end users. The team and our stakeholders are happy :)
https://medium.com/comic-relief/migrating-from-a-conventional-cms-to-a-decoupled-technology-stack-6114cb97aaf5
['Gustavo Liedke']
2020-01-28 17:28:48.621000+00:00
['Gatsbyjs', 'Serverless', 'Jamstack', 'Contentful', 'Engineering']
[DEX] People making Ionia DEX — Holder-
This is IONIA :) In the Ionia DEX, there are several types of participants in trading, which is the original function of an exchange: Maker, Taker, Holder. Together they form deals on the Ionia DEX and keep the exchange moving. You can read about the roles of makers and takers in previous posts. Today I’m going to talk about the holder. What role does a holder play in the Ionia DEX? Holder: a user who does not trade cryptocurrency on the Ionia DEX, but stores and manages cryptocurrency in the Ionia multi-wallet. For example, a holder may have acquired various crypto assets by purchasing cryptocurrency through various exchanges, and uses the Ionia multi-wallet to manage them as assets in one place, in anticipation of their future value. Holders don’t sell on the Ionia DEX, but they are the ones who can start participating in the Ionia DEX at any time in the future.
https://medium.com/ionia-io/dex-people-making-ionia-dex-holder-f8b9fd1f6737
[]
2018-09-27 08:29:41.455000+00:00
['Exchange', 'Blockchain']
A Comprehensive Guide to Lewin’s Change Model
Organizational change is a common thread that runs through all businesses regardless of size, industry and age. It is the process of changing an organization’s strategies, processes, procedures, technologies, and culture, as well as the effect of such changes on the organization. Our world is changing fast and, as such, organizations must change quickly too. Organizations that handle change well thrive, whilst those that do not may struggle to survive. Lewin’s change management model is a framework for helping with organizational change which is divided into three steps: Unfreezing — Create the perception that a change is needed; Changing — Move toward the new, desired level of behavior; Refreezing — Solidify the new behavior as the norm. 3 Step Lewin’s Change Template and Instruction Lewin’s model is fairly intuitive in that it’s the simplest model out there. This makes it easy to plan around, especially in organizations not accustomed to technical mathematical models. It is still widely used and serves as the basis for many modern change models. Present your Lewin’s Change Analysis with an Online Infographic
https://medium.com/@warren2lynch/a-comprehensive-guide-to-lewins-change-model-d1e023d9459d
['Warren Lynch']
2020-03-03 06:20:28.987000+00:00
['Startup', 'Agile Methodology', 'Strategic Planning', 'Change Management', 'Business Strategy']
Feeding the Needy
First of all, I want to discuss why there is a need to feed hungry and needy people. In a developing country like Pakistan, many people live in slums, there is a lot of child labor and poverty, and in some areas people cannot even get three meals a day to feed their children. My idea for the mega project is to establish a foundation where the needy can be fed three times a day, almost free of cost or at a minimal cost (the minimal cost is to avoid wastage of food). For this purpose, I have also worked for social causes personally, as much as possible, without any foundation (because of the politics in foundations and NGOs). I have also visited some such foundations and found out how they work and what strategy they adopt for this purpose. I always find it heart-rending to see such families, and I always try to help them in some way. I hope to establish this kind of foundation one day in the near future to reach my goal. Helping the needy gives me eternal satisfaction.
https://medium.com/@muhammad-salman168/feeding-the-needy-22f055b195b3
['Salman Shahid']
2020-12-27 08:09:43.001000+00:00
['Amal Academy', 'Amal Fellowship', 'Amalkindness', 'Megaprojects']
More Ways to Iterate Through JavaScript Arrays
Photo by Andy Chilton on Unsplash There are many ways to do things with JavaScript. For instance, there are lots of ways to iterate through the items of an array. In this article, we’ll look at several ways we can iterate through a JavaScript array.
The While Loop The while loop is a fast loop, and we only need the run condition to run it. For instance, we can use it to loop through an array as follows: const arr = [1, 2, 3]; let i = 0; while (i < arr.length) { console.log(arr[i]); i++; } In the code above, we have a while loop with the initial index i defined outside the loop. In the while loop, we define the run condition as i < arr.length, so it runs while i is less than arr.length. Inside the loop body, we log the items from arr by index, and then increment i by 1 at the end of each iteration.
The Do-while Loop The do...while loop runs the first iteration regardless of the condition. Then at the end of each iteration, it checks the run condition to see if it’s still satisfied. If it is, it continues with the next iteration; otherwise, it stops. For instance, we can loop through an array as follows: const arr = [1, 2, 3]; let i = 0; do { console.log(arr[i]); i++; } while (i < arr.length) In the code above, the do block iterates through the entries by accessing arr’s entry at index i and then incrementing i by 1. Next, it checks the condition in the while clause, and moves on to the next iteration until i < arr.length returns false.
Array.prototype.map The array instance’s map method is for mapping each array entry into a new value as specified by the callback function. The callback takes up to 3 parameters. The first is the current item, which is required. The 2nd and 3rd are the current array index and the original array respectively. The callback returns a value derived from the current item. map can also take an optional 2nd argument to set the value of this inside the callback. For instance, if we want to add 1 to each number in the array, we can write: const arr = [1, 2, 3].map(a => a + 1); In the code above, we have a => a + 1, which adds 1 to a, the current item being processed. Therefore, we get [2, 3, 4] as the value of arr. We can pass in a value for this in the callback and use it as follows: const arr = [1, 2, 3].map(function(a) { return a + this.num; }, { num: 1 }); In the code above, we have { num: 1 } set as the value of this. Then this.num is 1, as we specified in the object, so we get the same result as in the previous example. map doesn’t mutate the original array; it returns a new one with the mapped values.
Photo by Athena Lam on Unsplash Array.prototype.filter The array instance’s filter method returns a new array containing the entries that meet the condition returned by the callback. The callback takes up to 3 parameters. The first is the current item, which is required. The 2nd and 3rd are the current array index and the original array respectively. The callback returns the condition an entry must satisfy to be included in the result. filter can also take an optional 2nd argument to set the value of this inside the callback. For instance, we can get a new array with all the entries that are bigger than 1 as follows: const arr = [1, 2, 3].filter(a => a > 1); In the code above, we called filter with a => a > 1 to take only the entries bigger than 1 from the array it’s called on and put them in the returned array. Then we get that arr is [2, 3].
To pass in a value of this and use it, we can write: const arr = [1, 2, 3].filter(function(a) { return a > this.min; }, { min: 1 }); Since we assigned this inside the callback to: { min: 1 } this.min will be 1, so we get the same result as in the previous example. Conclusion We can use the while and do...while loop to loop through items until the given run condition is no longer true. The array instance’s map method is used to map each entry of an array to a different value. The mapped values are returned in a new array.
https://medium.com/swlh/more-ways-to-iterate-through-javascript-arrays-a1ff7cc46c3b
['John Au-Yeung']
2020-05-16 17:09:25.179000+00:00
['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
Better ga and Characterize.vim
betterga (GitHub: manicmaniac/betterga) by Ryosuke Ito is an extended version of the :ascii command. This command is typically invoked with ga, and shows the ASCII value of the character under the cursor in decimal, hexadecimal, and octal. Ryosuke’s version of ga adds some useful extra information, including Unicode details. For example: <a> [LATIN SMALL LETTER A] 97, Hex 0x61, Octal 0141 <®> [REGISTERED SIGN] 174, Hex 0xae, Octal 0256 <∆> [INCREMENT] 8710, Hex 0x2206, Octal 021006 The template is defined with g:betterga_template, so you can change which values get displayed. If you want even more character representations, then try tpope/vim-characterize by Tim Pope. This one returns emoji and HTML entities: <©> 169, \251, U+00A9 COPYRIGHT SIGN, ^KCo, ^KcO, :copyright:, ©
https://medium.com/usevim/better-ga-and-characterize-vim-66d60755661
['Alex R. Young']
2017-02-15 16:45:32.183000+00:00
['Plugins', 'Characters', 'Scripts', 'Ascii']
Importance of HR in Startups
INTRODUCTION Congratulations on successfully establishing your start-up! Now that you have taken the big leap, you have a bunch of things to do: arranging funds, planning your business, generating leads, advertising your products, installing the technology, building traction, and much more. In the hustle of pouring your passion into your business, you may neglect the need for a company culture, attending to your employees’ concerns, and recognizing and rewarding your employees. These are some important factors you need to consider to provide a good working environment, because your employees are the backbone of your company. Creating an employee-friendly environment is essential to retain them and keep them interested and motivated to work hard with passion. This is why hiring an efficient and suitable person as HR for your company is of great significance.
START-UP A start-up is a newly established organization founded by one or more enthusiastic entrepreneurs, usually young, to monetize their passion. They aim to bring their innovative product or service to the market. At the initial stage, a start-up has around 5 to 25 employees, which gradually increases as the company develops.
WHO IS HR? HR is the abbreviation for human resources. As the name suggests, the HR department deals with the workforce of the organization. They are the gatekeepers for diversity in hiring and handle conflict mediation between employees if something goes wrong. They are responsible for hiring efficient and appropriate candidates for their respective roles, firing employees, ensuring that business activities are carried out in line with legal policies, keeping track of records and employee policies, dealing with incentives and salary hikes, and ensuring that the company culture is respected by the employees. They decide on vacation time for employees based on their performance, resolve disputes among employees, and work to ensure that a safe and comfortable working environment is maintained. They also take care of employee benefits. Their main responsibility is generally helping employees grow, making sure the right tools are provided to them, and offering support for their development and growth in the company. In a nutshell, their job revolves around catalysing career growth and ensuring employee wellbeing.
WHY DON’T STARTUPS CONSIDER HIRING AN HR? Start-ups face quite a few challenges and hurdles that cause them to neglect hiring an HR. The requirement for funds is the major factor that discourages them from hiring a lot of employees, and they decide not to hire an HR to cut down costs. They don’t want a culture the same as a big company’s. They assume they can compensate for the role of HR by using software and online tools with dashboards that handle benefits and all the employees’ details and records; they consider this a wise alternative that cuts costs by cutting out the HR role, whereas it is not the correct decision. The most glaring issue is that, since only a few employees are working, they convince themselves there aren’t many people to deal with.
WHY SHOULD YOU HIRE HR? Employees’ relaxation Usually, once you have established your start-up, you are likely to concentrate more on customers, sales, keeping track of expenses, and so on. Your employees might not even cross your mind. It is significant to provide a healthy working environment with benefits and policies for them.
Providing them with health insurance policies and keeping track of their leave might be difficult to handle yourself, whereas hiring a person whose role is to concentrate only on your employees and your workforce makes the employees comfortable. Recognizing and appreciating your employees’ achievements and hard work is essential. If one of your employees has been working hard and achieves specific targets at an outstanding rate, or in an exceptionally short time period, you might want to offer them a paid vacation or simply grant them a few days of leave, which can motivate them and increase their efficiency. When you have a person dedicated to these tasks, it is convenient for you to concentrate on sales and the growth of your company.
Salary and incentives Employees always expect a raise in salary after a particular time period, probably once a year. This is another area of great importance. An HR would do a perfect job of analysing the projects and work done by employees to calculate the incentives each employee deserves.
Legal policies Another often neglected matter of concern is making sure the organization functions hand in hand with legal policies and stays on safe ground.
A good employee-friendly environment Employees spend most of their time at your organization working for you. The interactions and cooperation between workers might, unfortunately, be subject to conflicts and arguments, which may lead to unwanted chaos. It is helpful to have someone to deal with conflicts professionally and diplomatically. Creating policies that benefit your workers is best done by a professional in the role. HR plays a significant role in dealing with and helping out if any of your employees are being abused or harassed.
CONCLUSION There may be several alternatives that you think will replace the position of an efficient HR, but that is not the truth. Hiring an efficient HR who suits your company culture and style is essential for handling the human side of your start-up, so you can get busy with your business without having to worry about your employees’ well-being; an HR will take that burden off your shoulders. If you haven’t considered hiring an HR, this is the right time to do it. We hope you reap the benefits of hiring an HR, and we wish you success. https://insellers.com/blogs/business/importance-of-hr-in-a-startups/
https://medium.com/@shivankk1905/importance-if-hr-in-a-startups-98b66539a9de
['Shivank Shrivastava']
2021-07-02 16:21:27.573000+00:00
['Company', 'Growth', 'Business']
How Learning How to Prioritize Tasks Will Help Manage Your Time at Work
If there is a universal problem the majority of (project) managers struggle with today, it’s time. And as a result, the way that we use our available time to properly prioritize tasks. How often have you said one of the following: “I wish I had one more hour in the day.” “The day is not long enough for everything we need to accomplish.” “We’re putting in the work, but there’s never enough time.” When it comes to projects, all of these matter very little. No matter what, you have to complete that project in time. Even when time is hard to come by. In this article, we’re going to show you how to prioritize tasks with the best task prioritization methods. Not only will you complete your projects in time, but you’ll also deal with less stress than before. Ready? Let’s take a look! Task Prioritization When Everything Is Important Task prioritization and time management in project teams can get really messy without a reliable process. Project managers usually juggle their own tasks while juggling their entire team’s tasks. And to individual team members, their tasks are the priority. However, you as the project manager have to set your own — and general project priorities. It sure looks like tasks are streaming in from all over; from legal to IT departments. And in that case, the most important thing you can do to prioritize is capture all the tasks first. You can use a simple to-do list for this, and just add on to it. If you’re using task management software, you can create an automated process. Different team members add their tasks, and you just assign them and prioritize accordingly. Break down your tasks to the simplest activities. For example, if you have a task that says: “make an app,” that doesn’t really explain all the work that goes into making an app. Instead, break it down so it looks more like: And so on. Make sure you understand task dependencies. This is the part of the process where you can see the influence of task prioritization on time management. When you understand which tasks are dependent on others, you’ll be able to create a much more efficient schedule. Otherwise, you’re running the risk of trying to do everything at once, only to realize that the most important tasks haven’t been completed. When you know the order and the specifics of the tasks you need to complete, it’s time to turn to task prioritization methods. The Best Methods for Project Managers to Prioritize Tasks When you have all your tasks in one place, it’s time to prioritize and organize, AKA: get things done. And speaking of that… 1. GTD: “Getting Things Done” Task Prioritization Method Getting Things Done (GTD) is a productivity method developed by David Allen. It centers around the idea that your mind is for having ideas, not holding them. So you must have a method of capturing the ideas and tasks and storing them for review and prioritizing at a later time. After capturing all your tasks and turning them into actionable items, it’s time to organize them. With GTD, this takes place through the so-called decision tree. You’ll ask yourself questions necessary to prioritize one task over others: GTD first sorts non-actionable tasks into three categories: Trash — Scrap these tasks, they don’t matter. Reference — These tasks contain valuable information, but there is nothing to act on at the moment. Someday/maybe — These tasks don’t have to be completed right now, but they might have to be completed at some point. This is your backlog. 
Then, it’s time to sort actionable tasks: Right away — If you can complete tasks in one step, complete them right away and mark them as a priority. Waiting-For — If you’ve assigned tasks to other team members, add them to the Waiting-For list. Next-Action — If you don’t have to complete actions on multi-step tasks right away, add them to this list. Calendar — If your task has a deadline, add it to the calendar. 2. “Eat That Frog” Method of Prioritizing Tasks Eat That Frog is a method that requires strategic planning. The name of this prioritization method comes from Mark Twain’s saying: “If it’s your job to eat a frog, it’s best to do it first thing in the morning. And if it’s your job to eat two frogs, it’s best to eat the biggest one first.” What this really means is that you should complete the most important tasks first. At the beginning of every day (or period), you have to identify your objectives and assess your task list from that perspective: If you have to complete important but complex tasks, complete them first thing. After you’ve eaten the biggest frog, complete other tasks. Prioritize carefully according to your objectives. Your frog is the most important task on your list. 3. ABCDE Method The ABCDE method is one of the best ways to prioritize tasks. Essentially, your “task alphabet” starts with A: the highest priorities. Then, you sort other tasks according to the following method: A — high priority B — medium priority C — low priority tasks D — delegate to other team members E — eliminate Source: Teodesk If you want to supercharge your efficiency, you can also take a page out of Stephen Covey’s book, who additionally sorts the ABCDE tasks according to urgency and importance. 4. Chunking/Timeboxing Chunking is an excellent time management and task prioritization method. Instead of completing tasks as you go, and falling prey to the productivity killer that is multitasking, you should divide your day into “chunks” — blocks of time dedicated to completing each task. Source: Lucidchart After capturing all of your tasks, chunking dictates that you should sort them according to context. For example, if your team is building an app, you’d sort the tasks into front-end and back-end design activities. Those would be your two contexts. Then, dedicate portions of time to each task group. This allows you to focus on the work in front of you, instead of constantly breaking focus to concentrate on something else. Chunking is an incredibly efficient method for project teams that often work on multiple projects at once. 5. Use the Eisenhower Matrix to Prioritize Your Team’s Tasks The Eisenhower Matrix is one of the best, albeit most basic task prioritization methods. Source: M. Z. Ashraf With the Eisenhower Matrix, you group tasks according to importance and urgency: 1. Important and Urgent tasks: Complete immediately 2. Important and Not Urgent tasks: Schedule 3. Not Important and Urgent tasks: Delegate 4. Not Important and Not Urgent: Eliminate Sometimes, the best way to get things done really is the easiest way. Keeping Everyone on the Same Page As you make your team’s schedule and start completing tasks, it’s important to keep all the stakeholders in the loop about progress, milestones, and other aspects. A good task and project management tool can help a lot. With Office 365 and Project Central, you’ll be able to monitor project progress and keep everyone informed. Your team members will know exactly what they need to do next. 
You can even give special permissions to clients and top management. Say goodbye to emails, and step into the era of getting things done!
https://projectcentral.medium.com/how-learning-how-to-prioritize-tasks-will-help-manage-your-time-at-work-d51c61641077
[]
2020-02-04 10:28:05.532000+00:00
['Project Manager', 'Project Management', 'Task Management', 'Office 365', 'Tasks']
Moving Freetrade into Figma
What is Free…trade? So for those of you who aren’t familiar with Freetrade, we’re a challenger stockbroker. Our mission is opening the world of investing to everyone. We do this by cutting out the fees, rethinking the tech and, most importantly, using super-simple, intuitive design for our products. A little background Going back just a few months we only had one designer. However, come November last year we quickly grew the team from 1 to 3 with a 4th hire coming not long after. We knew that having a master file with no version control wasn’t going to work for us so we wanted to explore how we evolve our process. What mattered to us? Now, there’s nothing design teams love more than to try a load of shiny new tools but really, it came down to what mattered most to us. Open & Transparent At Freetrade we have a super transparent culture. We share work early and often both internally and externally with our community. We have weekly design critiques, where anyone from the team can join and give feedback. We even opened up our roadmap for you all to see. So we wanted our new process to reflect that transparency and enable us to share more seamlessly. Remote collaboration We have a partially distributed team — meaning there are often times where we’ll want to collaborate but aren’t able to sit beside one another. Previously, collaborating like this means entering the hell that is sharing your screen and trying to guide others around, pointing frantically only to realise they can’t see anything you’re doing. Future proof Because we’re a small team migrating everyone was relatively simple. But, we would be growing the team quickly and would prefer not to need to do this again anytime soon. So we wanted to make sure that the approach we took would grow with us. Our choice? Figma. Whilst we believed this would be best for our team there were some things we’d be sad to see go. Pulling live data into our designs. Various auto-layout plugins i.e. Anima. What do we need to do? Now that we knew where we wanted to get too, what did we need to do to get there? Component Library With such a new team we knew that moving to Figma without bringing our component library with us would create a lot of inconsistency. So we decided this would be the first thing we wanted to have. Core Flows To make things easier for everyone we also wanted to have our core screens and flows in Figma so that people could reuse them rather than have to start from scratch each time. This also gave us a good opportunity to iron out any issues with our component library Bring in the rest of the team! With both our component library and most core flows in Figma we could begin to bring in the rest of the team. Instead of just bring everyone in all at once we asked that people just use Figma for any new projects so as not to disrupt any on-going work. Timelines I’d love to say we did some fancy cost/benefit analysis. A Gantt chart or two and spreadsheets as far as the eye could see but in reality, it looked a little more like this: Switching Day So, we switched - from this point on Figma would be the tool we used for all new projects. Working together From day one being able to collaborate in the same file was great. We could jump on a call, and both work side by side, leaving comments or iterations of our own. Components Because of the way that overrides work in Figma we were able to go from 11 variations of a row down to just 5. And, it was a similar story for the rest of our component library. 
This made it easier to maintain and faster to find and use our components. Cost savings Here's your chance to make friends with your finance team. For us, switching to using just Figma instead of Sketch, Wake & Zeplin was ~65% cheaper. About £279/person each year. A not-so-obvious problem But, not everything went smoothly. People were familiar with our existing process, and because there was no official way of marking something as the leading idea, it became hard for people to focus their feedback or even know if work was ready to be built. Problem solved… right? I decided to start adding watermelons to designs that were my current favourite. Now, obviously, this was genius. However, it didn't catch on… Another approach After we put more than 30 seconds of thought into this problem, we decided to create a document template for our projects. This would also appear in the project browser. First, we created a page for a project thumbnail. This essentially maps to the epic in Jira and makes it easy for someone to go and learn more about the context of a project. Page per user story We then created pages for each user story. We did this to make it easy to understand specific flows, keep feedback focussed on the problem we're solving, and make it easier to hand over chunks of work to developers rather than crushing them by dropping the whole project on them at once. Status block We've also created a component that makes it easy to set the status of a piece of work. Currently, the states we have are: Work in progress, Open for feedback, and Ready for development. However, iteration is a naturally messy and somewhat unstructured process, so we wanted to have a place where we could explore quickly, make a mess and find the best solution. For that, we added a drafts page: Drafts Here, we have every single iteration we go through as we work on a project. It's messy, but having it on its own page allows us to explore freely without clogging up more polished flows.
https://blog.prototypr.io/moving-freetrade-into-figma-798fe289398d
['Mitchell Petrie']
2019-03-10 19:27:57.973000+00:00
['Figma', 'Product Design', 'Design Process', 'Design']
How to have a Merry X-mas?
Next week it is X-mas and I wanted to share some tips on how to make it a nice holiday despite the circumstances, as this year it is different, it is tougher for everyone due to COVID. Photo by Andreea Radu on Unsplash Personally for me it is going to be very different than any other X-mas. Last year I had my due date exactly on 24th of December, so I spent it very very pregnant and waiting for a new life to start. This year I spend it with my husband and son and my husband’s family in Barcelona so it is a wonderful thing, the first X-mas of my son, OMG! On the other hand, this year I won’t be able to see my family in Hungary, we can see each other through the screen and talk, however it is not the same. They can’t be here with me or with my son and that is definitely the hardest part of this X-mas. I can just hope that they are all right and they are not worried or lonely. So how can we all make it a wonderful X-mas despite COVID and being isolated? Check out the below list and build them into your daily practise for a happier holiday. Photo by Amadeo Valar on Unsplash Gratitude — I can really just recommend it to everyone to try to build gratitude into your life. You can start with writing a list every morning or evening of 5 things you are grateful for, and as an advanced level, try to make it an all-day practise. Whenever something bad happens or you have negative thoughts, just find the positive side of the situation what you are grateful for. There is always something good in everything and there is always something you can be grateful for. For example, I am sad as I can’t see my family this X-mas and I even don’t know when we will see each other next time. But I am grateful that this year we got to know each other better and better with my in-laws, they help me a lot with my son and as a fresh mum I really appreciate the support system around me. So I am happy that we have them here close, in the same city and that we can celebrate together. Socialise — even if you are far away, use the advantage of technology and make video calls whenever you need your loved ones close. We used to talk via phone with my family and friends, I actually have a huge practise as I am living in Barcelona for 10 years, far away from many friends and family. This year we changed to video calls and it is much better, I can see their face, their expressions, I see where they are… it just gives a lot to it. I do feel somewhat alone sometimes as our social life is very limited recently compared to my needs, I just love to meet friends, hand out with them, invite them over for lunch or go out and have dinner together. As we limited these events this year, I miss it a lot, so I often just reach out to people on my phone, sometimes to people I hadn’t talked for a while or I just give a call to my sister, and it makes it easier to handle this situation. Do good for others — no matter how miserable you feel, you will always find someone who can benefit from your help, you can always make some charity work, volunteering or just very simply help out some friends, your family or people around you. It is not only good for them, but believe me, it will make your day or week as well. The more good you make with others, the better you will feel about yourself and you will focus less on your own problems. 
Perspective — not only are there people who are in a worse situation than you (so you can again practise gratitude), but this year the whole world is experiencing difficulties and there are a lot of people alone, isolated, with financial difficulties, anxious about COVID, and so on. You are not alone in this situation, and if you understand and feel how we are all in this together, it will make it easier to handle. Control — whatever makes this X-mas difficult for you in particular, think about the following: what can you do about it, and what are the factors outside of your control? Can you reduce your loneliness by calling some people and reaching out to them, or by doing something good for random strangers? Yes. Can you end COVID and help millions of people around the world? Most probably not, so the sooner you accept it the better. The growth mind-set helps you to see the opportunities in situations and let go of the factors you can't control. Well, I hope this helps and I wish you all a very merry X-mas!
https://medium.com/@eszter-zsiray-coaching/how-to-have-a-merry-x-mas-ce4c6ef2e1b1
['Eszter Zsiray Coaching']
2020-12-17 10:16:59.167000+00:00
['Gratitude', 'Happy Christmas', 'Be Happy', 'Charity', 'Covid Christmas']
7 Things I Learned from Participating in Toxic Friendships
(Mean Girls starring Lindsay Lohan, Amanda Seyfried, Lacey Chabert, and Rachel McAdams) "Are you serious!" I blurted out, shattering the silence of twenty seventeen-year-olds diligently completing math equations. I hastily shoved my phone back into its designated pocket and scurried out the door before any more attention could be drawn towards my sweaty, hyperventilating, ready-to-run self. I accidentally tripped over someone's desk. "I-I-I-I-I'm sorry," I said. The teacher hurried after me, but disappeared after I speed-walked around the corner of the hallway, insisting that I was fine. The rest of the school would know I wasn't fine when I screamed at the top of my lungs and ran at the assistant principal, pointed my finger in his face and told him to "tell [friend] to leave me the f*** alone." This is only a highlight of several emotional outbreaks related to friends verbally attacking me. I once threw a burrito into the street. Another time I lay in the middle of the school hallway covered in ugly tears. In the past 10 years, I have been a victim of bullying, a friend to many, and a best friend to only a few; most of those friendships ended in one of the ugly situations I took the leisure of describing to you above. I am now 18, about to head to college, and as I spend my last month planning socially distanced meetups, I realize that I finally found a group of wonderful people who I deserve and that deserve me. There are no miraculous secrets. There is no jealousy, passive aggressiveness or intentional extreme aggressiveness. Just fun, love, and support. How I came about writing this article was through a game of truth or dare, or as my friends and I like to call it: truth & truth, since we only ask truth questions. Someone asked me: if I could go back and change one thing about high school, what would I change? Immediately, I jumped to my relationship with [Friend]. By the end of our friendship, we'd hurt each other so badly that I wished I never met her and had spent more time with better people instead. I quickly changed my answer. "Actually, I would change nothing." It's because of these toxic relationships, where I lied, fought every other day, and was extremely unhappy, that I finally understand exactly what I need while looking for friends in college. I realize that I'm only eighteen and still have plenty to learn, but these are the things I wish were drilled into my brain before I went to high school. So whether you're twelve, sixteen, twenty-five, or sixty, there's always something new to learn about the complicated concept of human connection.
https://emily-rosenberg168.medium.com/7-things-i-learned-from-participating-in-toxic-friendships-2daedbc4be66
['Emily Rosenberg']
2020-12-21 03:03:55.114000+00:00
['Life Lessons', 'Lists', 'Tips', 'Relationships', 'Friendship']
A Recap: Gitcoin at ETHDenver
A Recap: Gitcoin at ETHDenver The Burner Wallet, Kudos, and Grant Matching, oh my! Gitcoin was built in Colorado. It’s a big part of our culture and history, and we showed up ETHDenver in a big way to celebrate! 2019 has a lot of fantastic events lined up — and they all have to live up to high expectations set this weekend at ETHDenver. Set in the Sports Castle, 1,500 participants (including 750 hackers) shuttled into 6 floors including three hacking floors, a chill room, a Maker Space, and lots of interesting conversations about Ethereum and the future of the internet. We won’t be able to cover everything in this article, but did want to cover the fun experiments we had at the event! The BuffiDai Wallet: Buying Food With Crypto Austin Griffith, Director of Research at Gitcoin, unveiled the Burner Wallet earlier this year as a quick and easy onboarding to crypto payments, built using MakerDAO’s DAI and POA Network’s sidechain, xDai. At ETHDenver, the Burner Wallet rebranded as the BuffiDai Wallet and was used across ETHDenver to buy meals at the several great food trucks at the venue. Here’s a view of the wallet in use, from the lens of the wallet itself. 🤯 As it was used by each person at the event, it naturally was something folks thought about for hacks and happy hours. A particularly interesting project created a Burner Wallet for private transactions — while the MakerDAO Dappy Hour used the wallet to pay for beer at the event, while a monitor updated with the latest and greatest stats on purchases for the night. A highlight of the weekend and, arguably, the year, for crypto is the Burner Wallet. UX improvements are not coming — they are here. The ETHDenver Kudos Game!
https://medium.com/gitcoin/a-recap-gitcoin-at-ethdenver-1e48bfc93805
['Vivek Singh']
2019-02-22 23:09:40.409000+00:00
['Tech', 'Blockchain', 'Ethereum', 'Technology', 'Open Source']
How to Build a Node.js Application with Docker
How to Build a Node.js Application with Docker If you've found this article, that means that you have been tasked with setting up an application and running it inside of a Docker container. By the end of this article, you will be able to do exactly that. Before diving into the necessary code, it's important to understand the underlying problem that containers and Docker solve. Luckily, this problem is simple to understand, and will make our code easier to comprehend. For readers who are already familiar with Docker and its use case, scroll past the next section. What is the problem that Containers solve? Most developers can relate to the following scenario that illustrates the purpose of containers well. A co-worker asks you to clone their GitHub repository in order to help them debug an issue. After you've cloned it and installed the necessary dependencies, you find that the application will not run. It's throwing all sorts of errors, none of which are the problem that your co-worker requested help with. The problem that you two have just run into is that of running an application on a different machine with a different environment. No two developers have the same machine. Just because one developer's application works on their machine doesn't mean that it will work on your machine, since your machine may differ in many attributes of its environment, like: Operating system; Software version; Installed dependencies; The list of differences can go on and on! What is the solution to this problem? This is where containers are a solution. By packaging code and all of its dependencies into an isolated container, a developer can run that container on any machine and have predictable results, since everything the application needs to run is included in the container. Once an application is containerized, it can be shared and run on any machine. So, the aforementioned problem of incompatible development environments is solved. We've arrived at the definition of a container: a container is a software unit that packages up code and all of the code's dependencies so that it can run on any machine. Let's translate this definition into a Node.js application; what would the process of packaging up the code and its dependencies be like? First, the host that will run the application will need the Node.js runtime, so that should be included. Next, we'll need the actual code that comprises the application, like index.js , and all of the other files that make up the application. We will also need the package.json , as that file defines the necessary dependencies for the application. Additionally, we will need to define the command(s) necessary to actually run the application. Now that we've seen the problem that Docker containers solve, let's actually create one for a Node.js application. Defining the Dockerfile Go to a new directory and run npm init . This will create a new package.json file for defining an application and its dependencies. Let's add a dependency: run npm i express to install express as a dependency. We will use this package to create a server. Now, create the entry point file for your application. Its name will be whatever you defined it as during the init setup. On my machine, it's index.js , so I'll run touch index.js . Open index.js and paste the following code: index.js just specifies a host and port combination to listen on for requests. When we visit the root, / , we will receive the response defined in the code.
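The original index.js listing did not survive the export of this article, so here is a minimal sketch consistent with the description above: an Express server listening on port 3000 that answers at the root path. The HOST constant and log message are assumptions; only the 'Running on Docker!' response text is taken from the article.

// index.js: illustrative sketch, not the author's exact listing
const express = require('express');

const app = express();
const HOST = '0.0.0.0'; // assumption: listen on all interfaces so the container accepts outside traffic
const PORT = 3000;      // matches the port EXPOSEd in the Dockerfile later on

// respond at the root path
app.get('/', (req, res) => {
  res.send('Running on Docker!');
});

app.listen(PORT, HOST, () => {
  console.log(`Listening on http://${HOST}:${PORT}`);
});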
This is enough code to write and test out Docker containers. In order to spin up a Docker container, we first need to write a Dockerfile which defines the application, its dependencies, and any commands that need to be executed for the application to run. Run the command touch Dockerfile and open it in a text editor. First, we need to define the base image that will be used to build this container upon. An image is used to build a Docker container. The image provides the necessary environment for running a particular application or software inside of it as a container. In our case, we are running a Node.js application, so we will need to build from the Node.js image to provide our application with the Node.js runtime. To do that, mimic the following Dockerfile: We have selected a certain image provided by Node.js. There are many different options; in this case, we opted for a small image of it. Next, let’s define the working directory inside of the container. This directory will house our application’s code. Up next is copying the package.json file from the current directory on our machine into the image, and also running npm install to install the dependencies defined in package.json . Since we’ve used the base image of Node.js, NPM is already installed and can be run inside the container. For this, we’ll use the commands COPY and RUN. The COPY package.json . will copy the package.json in our current directory to the working directory, /usr/src/app/package.json . We’re almost done — now, we need to copy the rest of our application’s source code; in this case, it’s just index.js . Define another COPY command for the current directory. We’ll need to use a new command now, EXPOSE. This command tells the Docker container which port to listen to at runtime. In our case, our application listens on 3000, so expose the same port in the container. In addition to exposing the container’s 3000 port, run the final command to start the application: node index.js . Before actually building this image and running the container, create a .dockerignore file to define which files from the current directory should be ignored when building the image. This file should exist in the same directory as the Dockerfile. As it stands right now, the entire directory is getting copied over to the working directory in the image, including the node modules. This directory can be massive. We don’t want to include node_modules , so add it to .dockerignore . Building the Image The Dockerfile is defined, but we still haven’t built the image. In order to run this application in a container, we will need to run the image representing the application. So, let’s build our image from the Dockerfile, and then run the image as a container. In the current directory, run docker build -t <your_name>/node_app . . The . at the end specifies to check the current directory for a Dockerfile, while the -t <your_name>/node_app gives the image a tag that you can succinctly specify to run the image. After it has been successfully built, it’s time to run the image as a container. Run the following command: docker run -p 5000:3000 -d <your_name>/node_app . -p specifies a public port on your machine to map to an internal port in the Docker container. In this case, we map to the port 3000 so that requests to our machine at 5000 will be routed to the Docker container’s 3000 port. -d instructs Docker to run this container as a background process so you can keep operating in the terminal. 
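The Dockerfile snippets referenced step by step above were likewise lost in the export. A sketch that follows those steps might look like the following; the exact base image tag is an assumption, while the working directory, commands, and exposed port come from the article.

# Dockerfile: illustrative sketch assembled from the steps described above
FROM node:14-alpine        # small Node.js base image (the specific tag is an assumption)
WORKDIR /usr/src/app       # working directory inside the container
COPY package.json .        # copy the dependency manifest first
RUN npm install            # install the dependencies defined in package.json
COPY . .                   # copy the rest of the application source (index.js, etc.)
EXPOSE 3000                # the application listens on port 3000
CMD ["node", "index.js"]   # command that starts the application

The matching .dockerignore is a single line containing node_modules. With these two files in place, the docker build and docker run commands shown above should work as described.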
Visit localhost:5000 in a browser, and the message ‘Running on Docker!’ should be present. Conclusion As one can see, Docker containers give developers immense power by ensuring that we can run any application on any machine. In this article, we’ve only been exposed to a tiny sliver of Docker’s power. For any readers who want additional work, try the following exercise. Create an application that uses a Redis instance in some way. Then, define a docker-compose file that can seamlessly run both the application and Redis and allow them to communicate over a shared network. Good luck and happy coding!
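For readers attempting the suggested Redis exercise, a minimal docker-compose.yml sketch could look like the following. The service names, image tag, and environment variable are assumptions; the port mapping mirrors the one used earlier.

# docker-compose.yml: illustrative sketch for the Redis exercise
version: "3.8"
services:
  app:
    build: .                # build the Node.js image from the Dockerfile in this directory
    ports:
      - "5000:3000"         # host port 5000 mapped to the container's port 3000
    environment:
      - REDIS_HOST=redis    # assumption: the app reads the Redis hostname from this variable
    depends_on:
      - redis
  redis:
    image: redis:6-alpine   # official Redis image (tag is an assumption)

Both services share the default network Compose creates, so the app can reach Redis at the hostname redis, and docker-compose up starts both containers together.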
https://javascript.plainenglish.io/how-to-build-a-node-js-application-with-docker-4a0164fdc9ca
['Jordan Moore']
2020-12-11 14:53:22.208000+00:00
['Nodejs', 'Docker', 'Software Engineering', 'Expressjs', 'JavaScript']
Fall Robins
Photo by Ruth Solnit Today only their edges show. A feather, a foot, a flash of orange, stirring leaves left on the big madrona, playing hide-and-seek, suddenly exotic. Ruth Solnit November 2020
https://medium.com/@rpsolnit/fall-robins-8260e232c5eb
[]
2021-01-31 23:22:47.723000+00:00
['Birds', 'Poem', 'Seasons']
Strange Days in the Land of $147 Plane Tickets and Two-Dollar Gas
We’re living in strange days. But unlike The Doors’ sentiment in the song, there aren’t any new towns to find. There are no places that offer an escape. We can’t go eat in a restaurant, but we can go into grocery stores packed to the rafters with customers, unable to maintain the randomly arrived-at six feet of “social distance” between ourselves and others. We cannot go into a bar or craft beer brewery and sit, stay, or linger, but we can all walk into a tiny liquor store, shoulder to shoulder, and buy whatever we want. Masks? The vast majority of people aren’t wearing one. Beyond being told by an unrestricted, unmasked media to wear one, the people who do wear masks might not be aware of the operative facts. Doctors tell us that protecting the wearer is difficult: It requires medical-grade respirator masks, a proper fit, and proper care putting on the mask and removing it. But we are also told that masks can be worn to prevent transmission to others. None other than the Centers for Disease Control and Prevention tell us when and why we should wear a mask. “If you are sick,” the CDC says, “you should wear a mask when you are around other people and before you enter a healthcare provider’s office.” But “if you are NOT sick,” it adds, “you do not need to wear a mask unless you are caring for someone who is sick.” So if the folks wearing a mask are doing it for the benefit of others because they think they are or might be infected, then why are they outside or close to others in the first place? Not so altruistic if you think about it. But more on that strange, self-contradictory behavior to follow. We were told for weeks that air travel might be shut down domestically, but it has not been. Whether this is a good thing or not, it raises some bizarre situations. The airports resemble ghost towns, with the aforementioned restaurants and bars shut down, but the Starbucks and Dunkin kiosks open. Every departure/arrival gate is manned with multiple airline agents, smiling and chipper at the relative paucity of customers, and there are planes rolling in and out of every gate. TSA is at full-strength. I’ve had to fly a few times, and there have never been more than seven passengers on each flight. And the airlines are not cancelling the flights, when in the past, they would have cancelled without giving it a second thought. A one-way ticket from DFW to Washington, DC, is currently about $150 dollars. These are 1992 prices. How are the airlines able to fly seven people without going bankrupt? Have they been told from the inception that a massive bailout is on the horizon? A gallon of gas is about two bucks right now. Actually, the average price nationally is about $1.80. Interstate traffic is light. It’s actually a great time to travel on the road, at least along a route where you won’t get accosted regarding where you came from and where you’re headed. (Do your research beforehand.). On the other hand, you can’t go to a church, or mosque, or synagogue, or temple. You’re told it’s for your own good. It would be a gathering of too many people. But the same number of people in a Wal-Mart, Target, Pilot, or liquor store, for example, is not a gathering of too many people. Consider this for a moment. Putting aside the jokes, why are liquor stores “essential?” Essential to whom? We keep hearing about “reopening the economy;” indeed it’s a time of new buzz words and catch phrases we’ve hardly — if ever — used before. 
“Social distancing.” “Flattening the curve.” “Stay home.” “Essential activities.” “Shelter in place.” “Reopening the economy.” And so on. But the Wal-Marts, Targets, and liquor stores don’t need the economy reopened; they — and the governments that tax their sales — are making a killing right now. These responsible-sounding little phrases are constantly wielded to subtly coercive effect in an attempt to keep everyone in line with the current groupthink. It’s an interesting experiment in mass psychology. Who follows and obeys, and who resists? One wonders if someone, somewhere is keeping track. After all, most people want to be altruistic, or at least appear to be, especially on social media these days. (Except when they cleaned out the shelves at the aforementioned Wal-Marts of all remaining hand sanitizer and toilet paper in those early days.) This last point raises some ironic questions regarding which “rules” and admonitions people choose to obey and ignore when it’s about them personally as opposed to others. But that’s a subject for another day. And so, some people continue to sit inside like the good citizens out of 1984, never questioning, so quick to accept it and install themselves as self-appointed enforcers of it all. And yet others continue to live their lives. It’s the grand social experiment of our times. And ironically, leave to a commercial based on 1984 to capture the essence of these times: “Today, we celebrate the first glorious anniversary of the Information Purification Directives. We have created, for the first time in all history, a garden of pure ideology — where each worker may bloom, secure from the pests purveying contradictory truths. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death, and we will bury them with their own confusion. We shall prevail!” A push for “Information Purification” and “Unification of Thoughts,” and demonizing anyone “purveying contradictory truths” as “pests.” Sound familiar? Strange days indeed.
https://medium.com/vantage-points/strange-days-in-the-land-of-147-plane-tickets-and-two-dollar-gas-bfd6cd60d259
['Glen Hines']
2020-04-23 16:15:26.588000+00:00
['Perspective', 'Society', 'Psychology', 'History', 'Current Events']
I’m Not Like Other Girls
Ok let me start off by saying that my life has been stained by the advantages of male privilege. There I have said it and it is now out in the open. In addition, I have been medically diagnosed as transgender, clinically proven. That is not up for any debate. Finally, in the current, generally accepted gender vernacular, I am a trans woman. So off we go. Am I really a woman or am I just a guy with a few bits removed? Given that no one can give me a definitive answer to what a woman is other than the militantly classic and massively ignorant argument that “your chromosomes and your genitalia are the only things that decide your gender” argument, I am going to say with total confidence that I am a woman. Relax, I am not saying that I am like you if you’re a woman or the women in your life if you’re a guy. I am different and I am ok with that. Just like you can state your gender with absolute authority because you simply know, I can state with equal authority that I also just simply know my gender. Having gotten past the “I think therefore I am” argument, I now want to deal with the “you are just a man trying to be a woman” retort. I agree to a degree. Yes, I was raised as a male but it was always brutally enforced against my deep, internal sense of my female gender. It’s like forcing a left-handed person to write with their right. It can be done but it never, ever feels natural. The next challenge is the “you are acting like a woman based on how men see women which is a completely false sense of womanhood” old standby. Oh, come on, that just leads back to “what is a woman?”. Of course, my life experiences are going to impact my vision of who and what I am and of course I will overcompensate in expressing my femininity. I have been forced to express a false masculine image all my life. At my age I want to maximize my female experience, both good and bad, for whatever time I have left. So, I am not like the other girls. I am the product of the classic mixture of nurture and nature, with a humorous gender twist thrown in by nature. In the end, I am defined by my life experiences and my gender identification, just like a Kurdish woman soldier is different from a Manhattan mother, from a Syrian female refugee, from a French runway model, from a starving Ethiopian girl, from a Mid-western farmer woman, from a Dallas police woman, from a Catholic nun from a black, Indian American Vice President. On the female range of gender expression from too feminine to too masculine, who has declared that they have the right to pick it for me? Who claims to have that power over my life, the remnants of an obsolete and ignorant society that once pigeon-holed me for a lifetime in the wrong gender? Thankfully society has finally begun to accept the humanity of my gender and given me the hope to finally be who I truly am. The last thing I will allow is a flat-worlding, gender Nazi force me back into the binary cell I have just escaped from. So please forgive me if my gender expression fails your gender test. Give me a little time to adjust. I just got out of jail. Emma Holiday Please also read : I have tied all of my stories to the above thread. Writers note: If you have read any of my writings on Medium you will have noticed a definite theme: the incredible pain of gender dysphoria and all the difficult aspects of just being transgender. My writing has three specific goals: 1. Writing is my therapy. 
I have a very limited outlet for my thoughts so I write to find a way to process the most profound experience in my life. I need to understand and I need to accept myself to move forward. 2. Being transgender, for me, is a very lonely existence and if I can share some of the things that I feel and think as I go through the process of transitioning with others who are transgender and, in some way, lessen their pain and sense of loneliness, then all of this public exposure of my personal thoughts is not a waste. 3. I write to help cisgender people understand that all trans people want is to simply be understood, accepted and treated as a normal person. We are.
https://medium.com/prismnpen/im-not-like-other-girls-2d47e87a6709
['Emma Holiday']
2020-11-24 14:50:05.663000+00:00
['Gender', 'LGBTQ', 'Transgender', 'Creative Non Fiction', 'Society']
Quantify, Understand, Model and Predict Financial Time-Series
Hurst Exponent to determine momentum of time-series Quantify, Understand, Model and Predict Financial Time-Series Image by author FORECASTING stock prices accurately is a formidable task. Irregular temporal behavior is omnipresent in stock market prices due to their inherently complex behavior. Here, I will show a predictive modeling framework for forecasting the future returns of financial markets with a 3-stage approach: (1) fractal modeling and recurrence analysis, together with a test of the efficient market hypothesis, to comprehend the temporal behavior and investigate autoregressive properties; (2) Granger causality tests in a VAR environment to explore the causal interaction structures and identify the explanatory variables for predictive analytics; (3) ML and ANN algorithms to learn the inherent patterns and predict future movements. I will use precious metals (Gold & Silver) prices, energy (Crude Oil) prices and the S&P 500 index from 2000 to 2019. The objective is to estimate the future returns and the nexus of these products with the S&P index. Data pre-processing & Descriptive statistics: After doing all the necessary data purification (filling the missing values with previous ones, renaming columns, indexing by date, etc.), the following is the output on which we can do further analysis and investigation. Data purification takes a lot of time and effort, so I will skip the explanations of the data handling procedures and rather focus on the objective here.

import pandas as pd
import matplotlib.pyplot as plt

if __name__ == "__main__":
    df = pd.read_excel('brownian data.xlsx', index_col=0)
    # removing unnecessary columns
    df = df.drop(['Date', 'Year', 'NaturalGasPrice'], axis=1)
    # converting 'DatesAvailable' to datetime
    df['DatesAvailable'] = pd.to_datetime(df['DatesAvailable'])
    # setting 'DatesAvailable' as index column
    df.set_index('DatesAvailable', inplace=True)
    df1 = pd.read_excel('S&P Futures.xlsx', parse_dates=True, index_col=0)
    DF = df1.loc['20190729':'20000104']
    data = [DF, df]
    # combining dataframes
    result = pd.concat(data, axis=1)
    # renaming column
    result = result.rename(columns={"Price": "SPPrice"})
    DATA = result.fillna(method='pad')  # filling the missing values with previous ones
    DATA.index.name = 'Date'
    print(DATA.head())
    DATA.plot(subplots=True, layout=(2, 2), figsize=(15, 6), sharex=False, grid=True)
    plt.tight_layout()
    plt.show()

Daily closing prices are taken for all the series. The plots exhibit a clear trend for the S&P Index; the commodities are quite random in nature. Visually it can be assumed that none of the series are stationary and that the commodities display Brownian-motion-like movement. The results are in table format for ease of understanding. Skewness and kurtosis values confirm that none of the series follow a normal distribution. The percentiles along with the standard deviation suggest a large spread for the Index and the Gold price. The large spread for Gold in this series will likely make accurate predictions difficult if it is caused by random fluctuation. Mann-Whitney test The null hypothesis of the Mann-Whitney U test is that there is no difference between the distributions of the data samples. Unit root test Original series The test statistics confirm that none of the original series are stationary. Together with the non-normality found above, this indicates non-parametric, nonstationary series and justifies the deployment of advanced ML and DNN algorithms for the predictive modeling exercise. However, we won't get a stable result if the time series follows a Brownian motion. So, it is important to test the RWH (Random Walk Hypothesis).
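The article does not show the code behind the Mann-Whitney and unit root tests it reports, so the sketch below is only an illustration of how they are commonly run with scipy and statsmodels. The column names are assumptions carried over from the preprocessing step (the article itself switches between names such as SPPrice and SPIndex later on).

# Illustrative sketch (not the author's original code): Mann-Whitney U and ADF tests
from itertools import combinations
from scipy.stats import mannwhitneyu
from statsmodels.tsa.stattools import adfuller

cols = ['SPPrice', 'GoldPrice', 'CrudeOilPrice', 'SilverPrice']  # assumed column names

# Mann-Whitney U test: H0 = the two samples come from the same distribution
for a, b in combinations(cols, 2):
    stat, p = mannwhitneyu(DATA[a], DATA[b], alternative='two-sided')
    print('Mann-Whitney {} vs {}: U = {:.1f}, p = {:.4f}'.format(a, b, stat, p))

# ADF unit root test: H0 = the series has a unit root (i.e. is non-stationary)
for col in cols:
    adf_stat, p_value = adfuller(DATA[col].dropna())[:2]
    print('ADF {}: statistic = {:.3f}, p = {:.4f}'.format(col, adf_stat, p_value))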
Unit root test 1st order difference The outcome revealed that the series are first-order stationary. Hence, I will consider the return series for the econometric analysis to assess the direction of causation. Non-linear dynamics: Here, we will perform fractal modeling and recurrence quantification analysis (RQA) to check the RWH and gain deeper insights into temporal evolutionary patterns. Let's look into FD (fractal dimension), R/S (rescaled range), H (Hurst exponent) and RQA (recurrence quantification analysis). Let us understand the Hurst exponent. The Hurst exponent is used as a measure of the long-term memory of a time series. The goal of the Hurst exponent is to provide us with a scalar value that will help us to identify whether a series is mean reverting, a random walk, or trending. It is a statistical inference. In particular: H < 0.5 — anti-persistent time series, which roughly translates to mean reverting. H = 0.5 — the time series is a random walk and prediction of the future based on past data is not possible. H > 0.5 — persistent time series, which roughly translates to trending. FD = ln(N)/ln(1/d), N = number of circles, d = diameter. This equation shows how the number of circles relates to the diameter of the circle. The value of FD lies between 1 and 2 for a time series. The FD of Brownian motion is 1.5. If 1.5 < FD < 2, then the time series is an anti-persistent process, and if 1 < FD < 1.5, then the series is a long-memory (persistent) process. H is related to FD (FD = 2 - H) and is a characteristic parameter of long-range dependence. In the case of H, a value of 0.5 signifies no long-term memory, < 0.5 means anti-persistence, and > 0.5 means that the process is correlated across time and is persistent. R/S is the central tool of fractal analysis and is calculated by dividing the range of the mean-adjusted cumulative deviate series by the standard deviation of the time series itself (1). It is the measure characterizing the divergence of a time series, defined as the range of the mean-centered values for a given duration (T) divided by the standard deviation for that duration (2). Recurrence Quantification Analysis (RQA) can deal with non-stationarity in the series and contribute to the understanding of the complex dynamics hidden in financial markets. Now, let's use the quantification technique by computing measures like REC, DET, TT, and LAM. Compute H:

import numpy as np
from numpy import sqrt, std, subtract, polyfit, log

if __name__ == "__main__":
    lag1, lag2 = 2, 20
    lags = range(lag1, lag2)
    hurst_values = {}
    for col in ['SPIndex', 'GoldPrice', 'CrudeOilPrice', 'SilverPrice']:
        series = DATA[col].values
        # standard deviation of the lagged differences for each lag
        tau = [sqrt(std(subtract(series[lag:], series[:-lag]))) for lag in lags]
        # the slope of log(tau) versus log(lag) equals H/2, so multiply by 2
        m = polyfit(log(lags), log(tau), 1)
        hurst_values[col] = m[0] * 2
    print('*' * 60)
    for col, h in hurst_values.items():
        print('hurst ({}), 2-20 lags = '.format(col), h)

np.random.seed(42)
random_changes = 1. + np.random.randn(5019) / 1000.
# synthetic random-walk series generated for illustration (kept separate so DATA's date index is preserved)
random_walk = np.cumprod(random_changes)

from hurst import compute_Hc

H, c, result = compute_Hc(DATA.SPIndex, kind='price', simplified=True)
plt.rcParams['figure.figsize'] = 10, 5
f, ax = plt.subplots()
_ = ax.plot(result[0], c * result[0] ** H)
_ = ax.scatter(result[0], result[1])
_ = ax.set_xscale('log')
_ = ax.set_yscale('log')
_ = ax.set_xlabel('log(time interval)')
_ = ax.set_ylabel('log(R/S ratio)')
print("H={:.3f}, c={:.3f}".format(H, c))

The Hurst exponent 'H' is the slope of the plot of each range's log(R/S) versus each range's log(size). Here log(R/S) is the dependent (y) variable and log(size) is the independent (x) variable. This value indicates that our data is persistent. However, we are working on a small data set, so it cannot be concluded from the output that the H values are significantly high, especially for the commodities (0.585); still, the given time series has some degree of predictability. RQA will help us to understand the degree of predictability. Compute RQA:

from pyrqa.time_series import TimeSeries
from pyrqa.settings import Settings
from pyrqa.analysis_type import Classic
from pyrqa.neighbourhood import FixedRadius
from pyrqa.metric import EuclideanMetric
from pyrqa.computation import RQAComputation

time_series = TimeSeries(DATA.SPIndex, embedding_dimension=2, time_delay=2)
settings = Settings(time_series,
                    analysis_type=Classic,
                    neighbourhood=FixedRadius(0.65),
                    similarity_measure=EuclideanMetric,
                    theiler_corrector=1)
computation = RQAComputation.create(settings, verbose=True)
result = computation.run()
result.min_diagonal_line_length = 2
result.min_vertical_line_length = 2
result.min_white_vertical_line_length = 2
print(result)

Here, we see that the RR (recurrence rate) values of all the time series are not on the higher side, indicating a lower degree of periodicity. The DET values (S&P, Crude & Silver) and LAM values (Crude, Gold & Silver) are higher, supporting a deterministic structure. However, this also confirms the presence of higher-order deterministic chaos. Econometric approach: A series of test procedures has been employed here to explore the causal interaction structures among the variables and identify the explanatory variables for predictive analytics. Pearson correlations display significant correlations between S&P & Gold, Crude & Gold, Crude & Silver and Gold & Silver. Instantaneous phase synchrony (IPS) and Granger causality tests were performed. Empirical studies suggest that even a strong correlation between two variables is not a guarantor of causality. It does not provide information about directionality between the two signals, such as which signal leads and which follows. Now, I will use the Granger causality test to inspect the causal interrelationships and identify predictors. VAR was considered a means of conducting Granger causality tests. In the VAR model, each variable is modelled as a linear combination of past values of itself and past values of the other variables in the system. We have 4 time series that influence each other, so we will have a system of 4 equations:
Y1,t = α1 + β11,1 Y1,t-1 + β12,1 Y2,t-1 + β13,1 Y3,t-1 + β14,1 Y4,t-1 + ε1,t
Y2,t = α2 + β21,1 Y1,t-1 + β22,1 Y2,t-1 + β23,1 Y3,t-1 + β24,1 Y4,t-1 + ε2,t
Y3,t = α3 + β31,1 Y1,t-1 + β32,1 Y2,t-1 + β33,1 Y3,t-1 + β34,1 Y4,t-1 + ε3,t
Y4,t = α4 + β41,1 Y1,t-1 + β42,1 Y2,t-1 + β43,1 Y3,t-1 + β44,1 Y4,t-1 + ε4,t
Here, Y1,t-1, Y2,t-1, Y3,t-1 and Y4,t-1 are the first lags of the time series Y1, Y2, Y3 and Y4 respectively. The above equations are referred to as a VAR(1) model, because each equation is of order 1, that is, it contains up to one lag of each of the predictors (Y1, Y2, Y3 and Y4). Since the Y terms in the equations are interrelated, the Y's are considered endogenous variables, rather than exogenous predictors.
To address the issue of structural instability, I have used the VAR framework, choosing the lag length according to AIC.

from statsmodels.tsa.api import VAR

# make a VAR model
model = VAR(DATA_diff)
x = model.select_order(maxlags=12)
x.summary()

Lag selection (VAR) The lowest value of AIC is obtained at lag 4, so the causality analysis is carried out on this basis.

A = model.fit(maxlags=4, ic='aic')  # pass a maximum number of lags and the order criterion to use for order selection
R1 = A.test_causality('Index', ['CrudeOilPrice', 'GoldPrice', 'SilverPrice'], kind='f')
R2 = A.test_causality('CrudeOilPrice', ['Index', 'GoldPrice', 'SilverPrice'], kind='f')
R3 = A.test_causality('GoldPrice', ['Index', 'CrudeOilPrice', 'SilverPrice'], kind='f')
R4 = A.test_causality('SilverPrice', ['Index', 'GoldPrice', 'CrudeOilPrice'], kind='f')
R5 = A.test_causality('Index', ['CrudeOilPrice'], kind='f')

Granger causality test result We can see from the above table that the S&P Index does not Granger-cause the other prices in the series, but the other prices have a significant impact on the S&P Index. Therefore, the causal structure is unidirectional here.

from statsmodels.tsa.stattools import grangercausalitytests

granger_test_result = grangercausalitytests(DATA_diff[['Index', 'CrudeOilPrice']].values, maxlag=4)
granger_test_result = grangercausalitytests(DATA_diff[['Index', 'GoldPrice']].values, maxlag=4)
granger_test_result = grangercausalitytests(DATA_diff[['Index', 'SilverPrice']].values, maxlag=4)
granger_test_result = grangercausalitytests(DATA_diff[['CrudeOilPrice', 'SilverPrice']].values, maxlag=4)
granger_test_result = grangercausalitytests(DATA_diff[['CrudeOilPrice', 'GoldPrice']].values, maxlag=4)
granger_test_result = grangercausalitytests(DATA_diff[['GoldPrice', 'SilverPrice']].values, maxlag=4)

Next, the IR (impulse response) is estimated to assess the impact of a shock from one asset on another. The IR plot displays the expected level of the shock in a given period; the dotted lines represent the 95% confidence interval (a low estimate and a high estimate). Further, the percentage of variance in the returns of all four series is largely explained by the series themselves in the short run. In the short run, shocks from their own movements tend to play a significant role in volatility. Series that are found to impact the others by Granger causality have a marginal impact on variance as well. Hence, the overall findings of the causation analysis are validated through the outcomes of variance decomposition. The results of the causal interactions help in comprehending the structure of the interrelationships. To further justify this, forecast error variance decomposition (FEVD) is performed. The basic idea is to decompose the variance-covariance matrix so that Σ = PP′, where P is a lower triangular matrix with positive diagonal elements, obtained by a Cholesky decomposition. Forecast error variance decomposition FEVD indicates the amount of information each variable contributes to the other variables in the autoregression. It determines how much of the forecast error variance of each of the variables can be explained by exogenous shocks to the other variables. The percentages of variance in the returns of all the series are largely explained by themselves in the short run. However, Granger causality does not necessarily reflect true causality. Therefore, for rigor, I statistically verified the existence of synchronization between the assets using IPS (instantaneous phase synchrony). Here IPS is used for visual pattern recognition.
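The IR and FEVD figures referenced above can be reproduced directly from the fitted VAR result A in the code above; a hedged sketch using the standard statsmodels calls follows (the 10-period horizon is an assumption, not taken from the article).

# Illustrative sketch: impulse responses and FEVD from the fitted VAR result A
irf = A.irf(10)         # impulse response functions over a 10-period horizon
irf.plot(orth=True)     # orthogonalized IRFs with confidence bands (Cholesky ordering)

fevd = A.fevd(10)       # forecast error variance decomposition
fevd.summary()          # share of each variable's forecast variance explained by shocks to the others
fevd.plot()
plt.show()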
It is measured by incrementally shifting one time-series vector and repeatedly calculating the correlation between the two signals. The peak correlation at the center indicates the point at which the two time series are most synchronized. The transformation to the frequency domain is achieved by the Hilbert transform after bandpass filtering. The preprocessing stage, prior to performing any phase synchronization, requires de-trending the financial data and removing the high-frequency noise components. The filtering stage uses the SST as a bandpass filter (SST is a block algorithm, so the data was first windowed), applied only to the low-frequency oscillations of the de-trended signal. An important criterion for the band-pass filtering is to ensure that the resulting signal is as narrowband as possible. Pearson r is a measure of global synchrony which reduces the relationship between two signals to a single value. However, we have used the instantaneous phase synchrony measure here to compute moment-to-moment synchrony between two signals without arbitrarily deciding the window size, as is done in rolling-window correlations. (Plots: S&P Index & Crude Oil; S&P Index & Gold; S&P Index & Silver; Crude Oil & Gold; Crude Oil & Silver; Gold & Silver.) Each plot consists of the filtered time series (top), the angle of each signal at each moment in time (middle row), and the IPS measure (bottom). When two signals line up in phase, their angular difference becomes zero. The angles are calculated through the Hilbert transform of the signal. Here, I have simulated a signal with phase synchrony. Both signals y1 and y2 are at 30 Hz (30 days) for each of the plots. I have added random noise to the data to see how it changes the results. The fluctuating phase synchrony graph indicates that IPS is sensitive to noise and highlights the importance of filtering and choosing a frequency band for analysis. Simple moving-average and bandpass filters were employed to remove this noise from the signals before moving on to the actual processing of the waveforms. However, the IPS yields a global measure. Hence, when decomposing the signal into multiple components, a key criterion is to ensure the associated frequency is locally valid. The presence and duration of synchronization reflect the dynamics of the financial market. Predictive model Selection of explanatory constructs is pivotal for predictive modelling. Some widely used technical indicators were included: SMA, MACD and CCI for trend; STD and BB for volatility; RSI and WR for momentum. Therefore, the model was tested in a multivariate framework using Crude Oil, Gold and Silver, which significantly impact the S&P index, as input variables, along with the set of technical indicators listed above. The original dataset was randomly split into 85% training and 15% validation data. An algo search was performed on kNN, DT, RF, GBM, SVM and DNN to select a robust model to train on the given data set. The objective is to get the lowest error and, by that logic, it can be seen that GBM/XGB displays the lowest error rate on both training and testing data. This justifies the use of the GBM algo to develop a predictive model for the given time series. DNNs are known for their ability to handle continuous and high-volume data. Therefore, a DNN was also chosen to develop a hybrid predictive model which can deal with high-velocity big data. I performed heavy-duty performance tuning with a 5-fold CV technique to obtain the best of the two algos (XGB & DNN).
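Before moving to the ensemble results below, here is a minimal sketch of the instantaneous phase synchrony computation described earlier in this section, assuming y1 and y2 are the two filtered, equal-length signals; the formula maps an angular difference of zero to a synchrony of one. This is a common way to compute IPS, not necessarily the author's exact code.

# Illustrative sketch: instantaneous phase synchrony between two filtered signals y1 and y2
import numpy as np
from scipy.signal import hilbert

phase1 = np.angle(hilbert(y1))   # instantaneous phase of the first signal
phase2 = np.angle(hilbert(y2))   # instantaneous phase of the second signal
# synchrony is 1 when the phase difference is 0 and falls towards 0 as the phases diverge
synchrony = 1 - np.sin(np.abs(phase1 - phase2) / 2)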
Predictive model Selection of explanatory constructs is pivotal for predictive modelling. Among widely used technical indicators, SMA, MACD and CCI are used for trend, STD and BB for volatility, and RSI and WR for momentum. Therefore, the model is tested in a multivariate framework using Crude Oil, Gold & Silver (the input variables that significantly impact the S&P index) and the set of technical indicators listed above. The original dataset was randomly split into 85% training and 15% validation data. An algorithm search was performed on kNN, DT, RF, GBM, SVM and DNN to select a robust model to train on the given data set. The objective is to get the lowest error, and by that logic it can be seen that GBM/XGB displays the lowest error rate on both the training and testing data. This justifies the use of the GBM algorithm to develop the predictive model for the given time series. DNNs are known for their ability to handle continuous and high-volume data. Therefore, a DNN was also chosen to develop a hybrid predictive model which can deal with high-velocity big data. I have performed heavy-duty performance tuning with a 5-fold CV technique to obtain the best of the two algorithms (XGB & DNN). Finally, a stacked ensemble technique was used, where the combination of the two gives superior predictive performance. The average MSE, RMSE, and MAE values are found to be small for Crude Oil and Silver, indicating effective predictive power of the model for these series. The result also supports the fact that historically Gold has tended to be resilient during stock market crashes and that the two are negatively correlated; here too it is no exception, with a high MSE value. Let's see how our features are prioritized. Feature importance-XGB Feature importance-DNN The features are plotted as per their relative strength and according to their order of importance. This implies that almost all of the selected features have some degree of importance to the model building. Conclusion Here, I have explained the key characteristics of the selected stocks and the S&P index, and subsequently delved into their causal interrelationships and predictive analysis. Some key technical indicators are explained here. This forecasting structure is a combination of an econometric model and machine learning algorithms. Though I have tested with ML algorithms, technical indicators can also be used for a successful trading strategy. This has to be followed by a roll-forward validation on the validation set. Connect me here. Reference: (1) Mandelbrot, B. B., & Wallis, J. R. (1969). Computer experiments with fractional Gaussian noises: Part 2, rescaled ranges and spectra. Water Resources Research, 5(1), 242–259. (2) Hurst, H. E., Black, R. P., & Simaika, Y. M. (1965). Long-term storage: an experimental study. Constable, London, UK.
https://towardsdatascience.com/quantify-understand-model-and-predict-brownian-movements-of-financial-time-series-f8bc6f6191e
['Sarit Maitra']
2021-07-17 08:47:36.697000+00:00
['Time Series Analysis', 'Momentum Trading', 'Brownian Motion', 'Econometrics']
Change the Narrative: Parenting is a Strength for Lawyers
Elizabeth Lippy, Esq., Executive Director of Trial Advocacy Consulting & Training, Founder of Fairlie & Lippy “Are women lawyers paying enough attention to upward mobility?” With all due respect to the author of the ABA article published June 29, 2021, the title of the article alone illustrates a clear need to reassess the messaging of the body of the article. Perhaps the article was meant to grab the attention of women lawyers. Perhaps it was meant to garner interaction from the public. Perhaps the author’s intent was pure — to help women lawyers succeed. Whatever the intent, the American Bar Association failed in supporting women lawyers, but nonetheless succeeded immensely in uniting hundreds of women lawyers in outrage over the article. Change the narrative. Instead of a line-by-line rebuttal, I prefer to write this response instead to help change the narrative that was portrayed. Instead of discussing the vulnerabilities of lawyers who are also parents, the public needs to be more aware of all of the strengths that attorneys who are also parents bring to the table. (Note I intentionally do not refer to lady lawyers who are moms, but all lawyers). The narrative needs to change. Perception is NOT always reality. The perception of several lawyer-parent weaknesses identified in the article proves that perception is not always reality. Throughout my life, I’ve been told that I’m “too competitive,” “too assertive,” “too fake.” Just like the perceptions portrayed in the article, the way others have perceived my personality traits was not reality. Recently, I had the benefit of taking Gallup’s Clifton Strengths Assessment. After taking the 30-minute test, I received a 35-page report indicating my strengths. No surprise that my number one strength was “achiever.” But the remaining top ten strengths surprised me and made me realize that the negative narratives I have been told throughout my life about competition, assertiveness, etc. were simply untrue. They are in fact strengths. Certified by a global analytics and advice firm. For my entire life, I listened to the negative narrative of others, which limited me in a variety of ways. Once I saw the Strengths Assessment report and reviewed it with Strengths Coach Vanessa Kuljis, however, my mindset completely shifted. Which, in turn, shifted my approach to my work, my business, my clients, my children, and my life. Being a lawyer-parent is a strength. When you change the narrative from negative to positive, it is amazing what results. No one can dispute that being a parent has its challenges. Being a lawyer has its challenges too. So, naturally, being a mother, as well as a trial attorney, law professor, and business owner carries many responsibilities, duties, and stress. But instead of discussing the negative attributes of lawyers who are parents, insinuating that parenting is destructive to one’s law career, why not focus on the strengths that come from being a parent and systemic ways the legal field can support lawyer parents? There are so many examples of the strengths of being a parent but let me provide just a few. Empathy People hire litigators when they are in a dispute with someone else. They rely on us to relate to them and fight for them. 
Empathy is defined as “the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner.”[1] Although I have always identified as an empathetic person, I gained the truest understanding of empathy when I became a parent. It wasn’t until I cared for a sick baby, cleaned off cut knees after a fall outside, or secretly watched my kids interact with others did I truly grasp the ability to understand and share the feelings of another. That strength helps clients. It helps trial attorneys persuade juries and judges. It benefits lawyers. Organization & Time Management Parents perfect their organizational and time management skills by scheduling their time out of necessity. When you must taxi your children to multiple extracurricular activities or pick up children from the bus stop, you learn immediately how to schedule the rest of your day around those crucial events and to maximize the remaining available time for work. This is not a negative attribute that law firms should penalize — it is a strength from which law firms and clients benefit. Lawyer parents cannot afford to procrastinate, plus they value other people’s time. Someone commented on my LinkedIn post today that their sister, who is not a lawyer but is a proud mother, “schedules her days with the kind of precision that would shame a drill sergeant.[2]” This strength of lawyer parents is something to be respected, not punished. Law firms that respect a parent’s ability to multitask and respects the necessity of childcare retain their lawyers. If you treat people with respect and value them, they stay, and the firm benefits from all that they bring to the table. Innovation & Communication Skills Being a lawyer requires effective communication — both in and out of the courtroom. We must communicate with opposing counsel, the bench, internally at the office, with clients, and the list goes on and on. The most persuasive advocacy also requires innovation to forge new paths. Any parent has learned the skill of communication and innovation. I cannot begin to tell you how many times I have masked healthy food in a creative and innovative way just to get my super picky eaters to try something healthy. Or how I have learned to communicate manners and important skills to twins who are currently six years old. When I teach trial advocacy courses, I incorporate my children into some of the exercises. If my law students can give an opening statement that my 6-year-old twins can understand, then they have improved their communication skills. My little girl still sometimes randomly discusses one of my law students who gave an opening about a fatal bus accident negligence case. That is persuasive advocacy. These strengths I have gained from being a parent helped me create a litigation training that is groundbreaking. And clearly so badly needed to combat the outdated mentality contained in Blakely’s ABA article. Through my company Trial Advocacy Consulting & Training (TACT), I created a two-day trial advocacy training for women by women. A safe space for lady lawyers to feel comfortable enough to be empowered and grow their skills. Determination Lawyers need to persevere towards difficult goals to help their clients. Determination is a strength. I became even more determined after becoming a parent. 
Candidly, I never would have achieved such success had I not become a parent. Every parent understands the desire to make sure their children succeed. Every parent is determined to keep their children safe from harm, no matter what. That grit and determination applies equally at work. Courage Having children instills courage like nothing else. Just three weeks ago, my six-year-old daughter swam in her first ever swim meet. She is extremely shy and often insecure. We walked into the crowded pool club and Grace began to cry hysterically. She told me she was scared and could not do it. With tears streaming down her face, she still jumped into the pool for warmup. It took everything in me to walk away and give Grace a chance to fight her own insecurities. When it was her turn to swim freestyle, I watched her from the other end of the pool as she was still crying hysterically and visibly shaking. The other five girls dove into the pool while she remained on deck. Her coach said something to her, and she finally jumped in the pool and caught up with the other girls. When I greeted her at the end of the lap, she was still crying. But she got out and swam backstroke when it was her turn. And guess what? She won. She has been winning the races ever since and looks forward to each swim meet to try to beat her personal best swimming times. The courage to raise children and help them grow into independent people is real. That courage permeates work life just as much as it does at home. Your narrative is your choice. To anyone who read the article today and was disheartened, I empathize with you. But stay determined and have the courage to be a Grace. Find your strength and swim that lap. In the words of one of my colleagues, “anyone who tells you that you cannot do it all underestimates your capabilities.”[3] [1] https://www.merriam-webster.com/dictionary/empathy [2] Thank you, John (Jack) Zulkey for your phraseology. [3] Sara Jacobson, Executive Director of the Public Defender Association of Pennsylvania
https://medium.com/@lawyerparents/change-the-narrative-parenting-is-a-strength-for-lawyers-8a7e74852750
['Elizabeth Lippy']
2021-07-06 12:12:56.026000+00:00
['Aba', 'Lady Lawyers', 'Lawyers']
Raised US$89M, TNG’s Alex Kong Shares Tips of Building Success Team
Alex Kong is the founder and CEO of TNG FinTech Group, Inc – a Hong-Kong based company that provides next-generation financial services to 1.2 billion unbanked individuals throughout Asia. Bill is the Founder and CEO of StackTrek, a company specializing in using algorithms and data to build and scale programming teams for tech companies. Each week, Bill talks with top executives about startups, culture, and tech hiring. Bill: When did you found TNG? Alex: We incorporated the company in 2012, but we launched our services in November 2015. Bill: How many employees when you first launched your service, and how many you have now? Alex: When we first launched our service in 2015, we had less than 30 people, and today we have close to 400 employees across 14 countries. Bill: So your role in the company has changed from day one to now. How has it changed? Alex: I look at managing my company, it’s like we’re going through different phases of corporate life cycle, just like human beings. When we first started to launch our service, we are like a baby, so we are at the stage of baby. So, the way we manage a baby, a 20-people company, to 50 people, to 100, 200, 300, 400, are very different. Just like human beings, we go through different corporate life cycle. My role changes so fast because we grow so fast. We don’t even bother to print our position or title on the business card, because the role keep changing. It’s an on-going challenge. It’s also an on-going change management, because the way we manage the business and the people are very different compare to day one. From the freedom of working anytime, come to the office anytime, come to work at 11 a.m., go to lunch anytime to now in which we become very systematic. You better come on time, and go to lunch on time. A lot more professional and a lot more systematic. There are different phases of growth but to me, I’m excited about the unique DNA that we have created because of the rapid growth of the company, so we built a culture of obsoleting ourselves. Every week, our people, our different departments will discuss, sit down and discuss what happened, what we did last week, and what are the small changes we can do this week. And we work on their improvement week after week, and then the business keeps growing. Bill: You’ve mentioned company culture. Can you share with us what your company culture is like? Alex: It’s about survival. We have very little cash, and with that little cash, how do we survive? And we have to do anything to pay the rent, pay the salary. We didn’t talk about culture at all, we just work, work, work, day and night, and through that, we kind of built a culture of survival. Now, the company is profitable. We have a lot more resources that we can dedicate for benefits and rewards that help us cultivate certain culture. For example, recently we turned an entire office floor into a dedicated co-working-like space that promotes collaboration. It has an in-house cafe that serves coffees, sandwiches, soups, healthy drinks, etc. We now provide a lot more fringe benefits, and stop asking people to come on time. We don’t enforce the on-time policy. We don’t believe in that. But if they come on time, we reward them for something more. There’s a lot more freedom. Bill: So it’s a very rewarding culture. Alex: Yes. We want to make the culture rewarding because we believe people by nature want to work hard. 
We want to build a happy house — a happy environment that people look forward to come to, and collaborate with each other, and together create a solution and platform that serves as many as billions of the unbanked population around the world. Our mission is to bank the unbanked. Helping the unfortunate people who couldn’t open a bank account to gain access to banking or financial services. We need people to believe in our mission. We need people to understand that we are doing something great together. And while doing it, we are enjoying every moment of it. Bill: So as the company grows, you don’t see everybody as often as possible, especially everybody. So yesterday, I was talking to Tony Fernandes. He’s managing 21,000 people, but everybody has a direct line to him. They can text him directly. In your case, how can you ensure that as the company grows, they still feel like Alex is still part of the team? Since they don’t see your face every day. Alex: We are in a very virtual environment. I have 123,000 unread emails, and thousands of unread messages. People still send or copy me in messages, but I don’t really read every single one of them. Anything emergency, they will call me. And every time they come to me, it’s for a decision. So my job is, every moment in the office, one after another is making important decisions. And I delegate down to my second-liner and third-liner, to entrust them to make the relevant decisions and create a policy, and create a system with check and balance. So you have to entrust the people to carry out the job. They cannot depend everything on me anymore, only come to me for important decisions. Bill: Can you share any tips with leaders who want to build a successful team? Alex: There are a lot of things that you need to build a successful team, but one thing that has always stuck in my mind is: people that are part of the solution. I tell myself everyday: if I’m not part of the solution, then I’m part of the problem. When you first hear it, it sounds very harsh. I ask myself that question everyday, “Am I part of the solution?” If not, then I’ve became part of the problem. I share that to my people as well. When they come to me with a problem, I want them to think about solutions. You don’t come to me with just a problem. You have to come to me with proposed solutions. If you only come to me with your problem, then you yourself become part of the problem. Then what am I hiring you for? It’s harsh, but it’s a necessary evil. When we build teams, we need to make sure that we are building with people that can be part of the solution.
https://medium.com/stacktrek/raised-us-89m-tngs-alex-kong-shares-tips-of-building-success-team-cf9db9b643d3
[]
2020-11-26 02:13:44.679000+00:00
['Technology', 'Recruiting', 'Hiring', 'Startup', 'Tech']
Wood Floor Buying Guide | Floor Experts
Wood Floor Buying Guide | Floor Experts Over the past few years, a number of people have been moving towards wooden flooring in their homes. It is highly durable and adds a touch of class to any room decor. Following are some of the wooden floor types that you can choose from based upon your needs. 1. Laminate Floors Laminate floors are less expensive when compared to solid wood or engineered hardwood flooring. Laminate is made by fusing multiple layers of materials together, with the inner core prepared from HDF and a high-resolution image of stone, wood or any other material on the surface. It is finished off with a protective layer. Laminate flooring that is made of high-quality material is highly durable and easy to maintain and clean. It is the best choice of flooring if you have a house with children and pets. In addition to this, if you do not want to spend a large amount of money on flooring, a laminate floor is the best option. 2. Engineered Floors Engineered hardwood floors, also known as real wood flooring, have at least three to four layers, consisting of an HDF layer, a solid oak layer, and a softwood core along with a strengthening layer at the bottom. Make sure that you do not expose engineered floors to water, and do not lay the flooring where it can be damaged by water. This type of flooring is more expensive when compared to the laminate flooring discussed above. 3. Vinyl Floors Known for its durability, vinyl flooring is actually easy to clean and maintain. It is crafted from PVC and is available in the form of tiles or wood. The thicker the product is, the more durable it tends to be. You can find cheap vinyl flooring about 15mm thick, while more expensive options are available at 80mm; these are more durable. Before fitting the vinyl flooring in your house, make sure the subfloor is prepared properly. Make sure it is levelled completely to avoid undulations in the vinyl. 4. Solid Floors If you want stunning and stylish flooring inside your home, you can't get better than solid wood flooring. You can create the feel you want with a range of woods in light to mid shades that have a matt finish. These are costly to install, so make sure it is in your budget. Use the tips above to find the most suitable wood flooring for your home.
https://medium.com/@mybestarticle1/wood-floor-buying-guide-floor-experts-9bbc5551d786
['Your Medical Records - Are They Really Private']
2019-09-14 10:06:45.721000+00:00
['Interior Design']
How incremental backups work in PostgreSQL, and how to implement them in 10 minutes
Postgres as an RDBMS offers limited support for incremental backups, in that they are officially supported via WAL (Write Ahead Log) archiving, but there is no built-in method for actually implementing them. The DBA is expected to configure and create scripts and cron jobs to manage backups, using the built-in tools. Such scripts can be very simple or extremely complex, and often create overhead for a DBA, especially one who is migrating to Postgres from an enterprise RDBMS such as Oracle, which includes powerful backup and recovery options with inbuilt tools that allow a robust backup system to be put in place with minimal effort. Enter Barman. Barman is an open source tool maintained by 2ndQuadrant, and is essentially a wrapper for all of Postgres's inbuilt Backup And Recovery MANagement. With this tool, a DBA can specify backup policies in a manner similar to Oracle's RMAN, and not have to worry about maintaining their own backup scripts. Installation is very simple, and so is usage. In this post I will explain how to install and configure Barman, and explain some of how it works under the hood. For more information you can check out the project on the official GitHub repository. First off, what are WAL files? As in other RDBMSs, Postgres logs transactions in log files, using which the database can roll forward changes and achieve PITR (Point In Time Recovery). As long as the database has access to the log files, it can always use the data in them to roll the database to any point in time. WAL archiving is simply the process of storing the WAL files in a secure location for future use. Streaming replication is shipping WAL to a standby database in real time. Postgres servers are specified with a data directory, containing all cluster data. To restore a server, you create a new data directory with the datafiles relevant to the desired point in time, and then start the postgres process using the new data directory. You can also cold (offline) clone a postgres cluster by copying the data directory to a new server and starting the instance. To achieve non-incremental backups, Barman uses the replication slot feature in Postgres, which streams the WALs to any specified location, and uses pg_basebackup to hot backup an entire database cluster. For incremental backups, Barman uses a different approach, relying on rsync with ssh. You set an ssh command which connects the barman server to the database server with passwordless ssh, and rsync will handle the transfer of all necessary files. Something that confused me is that in many other RDBMSs, you specify a full backup interval, say once a week, and then at shorter intervals you have an incremental backup which stores WALs of some sort, building on the full backup. The incremental backups containing the transaction logs would be useless without the underlying full backup, because during a restore, the database restores the full backup and then "rolls forward" the logs, effectively applying the transactions that happened after the full backup state has been achieved in the recovery environment. But Barman does not do this, instead only using an incremental backup schedule without specifying a full backup. I couldn't understand how Barman never takes a full backup to use with the incremental ones. At first I thought that maybe it transparently creates a full one whenever it is needed, but that didn't make sense to me and so I did some digging. The results are what prompted me to make this explanation post.
The important thing to note here is that barman is not what creates the incremental backups; rather, rsync is what manages the deltas. This allows for a very simple scheduling system where you only specify an incremental backup schedule, which never requires a full database backup schedule. Here's how. The way rsync handles copying deltas is by making use of hardlinks. When you copy a file with rsync, it defaults to using incremental copy. What happens is that rsync will check the destination directory for the source files. If they do not exist, rsync will copy them. If they exist but are different, rsync will replace them. If they exist and are identical (not just the file names but the permissions and last-modified dates as well; from the man page: -c, --checksum This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option, rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size. Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the transfer (and this is prior to any reading that will be done to transfer changed files), so this can slow things down significantly.), rsync will not touch them. What does this mean? After all, we do need the unchanged files to be present in the new (or backup, in this case) directory as well, so how can it not copy them? The trick is hardlinking. rsync does not copy in files when they have not changed, but it does create a hardlink to them. So, both the original and new directories will now have an identical file. How is this different from simply copying the file, and how does this create an incremental copy? Let's take a look at link structure. A soft (symbolic) link is a completely separate file which points to an origin file. If the origin file is deleted, the link file will not point anywhere and will have no data. A hard link, however, works quite differently. Say we have a file named FILE1, with the line "I am a file with some data" in it. If we were to create a hardlink from it to a file named FILE2, the two files would be identical in every way; even the inode number would be the same. How is this achieved? We must remember that the files you see in the filesystem on your unix machine are not actually the location of the bits on the disk, but rather pointers, much like an index leaf in a database. So in our example, when you create a hardlink, the operating system will create another pointer named FILE2 that points to the location on disk where "I am a file with some data" is stored. Because the data is the same data and not a copy, the inode is the same. This leads to an interesting point: the hard link file FILE2 is not reliant on the original file FILE1 in any way. Even if FILE1 were to be deleted, FILE2 still points to the data on disk. So as long as there is even one hard link left, the data will not be lost.
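As a tiny Python illustration of this hard-link behaviour (the file names are made up for the example and it should be run in a scratch directory), the following shows that the link shares the original's inode and that the data survives deleting the original name:

import os

with open("FILE1", "w") as f:                      # create the original file
    f.write("I am a file with some data\n")

os.link("FILE1", "FILE2")                          # hard link: a second name for the same data

print(os.stat("FILE1").st_ino == os.stat("FILE2").st_ino)   # True - both names share one inode

os.remove("FILE1")                                 # delete the original name

with open("FILE2") as f:                           # the data is still there via the other name
    print(f.read())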
Therefore, when creating a backup for the first time, the entire postgres data directory is created as a clone. The next time the backup is run, rsync will only bring in new changes (in this usage it will also delete the files that have been deleted) and create hardlinks to the files created on the first run. Thus, the second run will be significantly shorter and take less space: the unchanged bits are not being written to the disk again. Then, at some point you reach the end of the storage policy you specify in Barman, and the original files are deleted. But you don't need to create a new full backup, because the next backup, which was once the incremental one, is now the full backup. As long as you have even one hardlink, you can safely delete the older ones and your data is the same. It's the same file! Just a different pointer. So you can have your incremental schedule and not have to worry about creating a full backup as well. Now, how do I set up and use Barman? Here is the basic installation and configuration for an incremental backup policy with Barman. The architecture in this example is as follows: One postgres server, called pghost. One Barman server, called bmhost. Each server is a separate linux machine. To install barman on bmhost, run the following: sudo apt-get install barman or sudo yum install barman You may also need to install the package barman-cli. Packages are also available to download from https://sourceforge.net/projects/pgbarman/files/. You can even build from source if you like. Set up passwordless ssh between pghost and bmhost; you can use the guide I linked earlier. The default location for the main barman configuration file is /etc/barman.conf Enter the following values: [barman] barman_home = /data/barman barman_user = barman log_file = /var/log/barman/barman.log compression = gzip reuse_backup = link backup_method = rsync archiver = on I'm assuming that /data/ is a separate drive which can be extended and is secure; this is where all backups will live, and you don't want to run out of space. To use pg_basebackup-based streaming backups (instead of rsync-based incremental ones), change the backup_method to 'postgres'. Each postgres cluster that you want barman to manage gets its own configuration file. A good practice is to use hostname.conf. These files are often stored in /etc/barman.d. Our server configuration file will contain the following lines: [pghost] description = "Postgres server" ssh_command = ssh postgres@pghost conninfo = host=pghost user=postgres port=5432 retention_policy_mode = auto retention_policy = RECOVERY WINDOW OF 7 days wal_retention_policy = main Instead of having barman store a time window, you can set it to hold a number of backups. Do this by changing retention_policy to: retention_policy = REDUNDANCY 4 Naturally, you can change the number to any integer. If using a time window, you can specify it in weeks or months as well. Next we allow barman to connect to the postgres server, by editing Postgres's Host Based Authentication file, pg_hba.conf. This file is always in the data directory we mentioned earlier. The path changes depending on the postgres version, but you can usually find it in $PGDATA. The file must contain a line permitting the user from conninfo to connect from the barman server, for example: host all postgres bmhost_ip_address/32 trust If you care about security, you should set the authentication method to something other than trust. md5 is for password authentication. You can also use peer or ident. More information can be found in the postgres documentation. The postgresql.conf, also in $PGDATA, must accept the incoming request from the barman server. Set: listen_addresses = '*' to listen on all interfaces.
Next we need to make sure that WAL shipping is set up correctly. In postgresql.conf, enable archiving to the barman server; with the rsync setup this is typically something like (illustrative, adjust to your environment): archive_mode = on archive_command = 'rsync -a %p barman@bmhost:/PATH/TO/WALS/%f' Replace /PATH/TO/WALS/ with the actual path barman expects them to be sent to, verifiable by running: barman show-server pghost | grep streaming_wals_directory %p expands to the path of the WAL segment to archive, %f to its file name. After any change in postgresql.conf, we need to reload the postgres server (a change to listen_addresses requires a full restart, which will bounce your db, so be careful!!!): pg_ctl reload or systemctl reload postgresql Within barman's server you can list all managed servers with barman list-server To check a particular server's status: barman check pghost Everything should return "OK" except for the minimum redundancy/retention policy, as we have not yet taken any backups. If the WAL archive check failed, make sure the server is rsyncing properly, to the right paths. To force the start of a new WAL file: barman switch-wal pghost There are many guides and tutorials for configuring barman. If checks still fail, try a Google search. There is a lot of good information. Once everything is "OK", back up the server for the first time: barman backup pghost You can then check the status of backups for a server with barman list-backup pghost Each backup will have an ID with a timestamp that you can check with: barman show-backup pghost BACKUPID At this point you can schedule a cronjob to run the backups. Besides the backup command itself, you have to schedule the command "barman cron". In later versions of barman installed from the package repositories, this cron entry is generated automatically. If not, set the following: * * * * * /usr/bin/barman cron To back up your server once a day at 8 pm: 0 20 * * * /usr/bin/barman backup pghost To restore your backup, you use the barman recover command to create a data directory: barman recover <server_name> <backup_id> /path/to/recover/dir DO NOT set this as the current data directory of the server that you are restoring. Instead, set up a new instance. If you MUST have it in the same location, first stop the original server if it is running and remove the $PGDATA contents. It is HIGHLY recommended to first restore to a third site to verify that the cluster is restored properly before replacing your production PGDATA. If you want to restore to a remote host, which you usually do, add an ssh command to barman recover: barman recover <server_name> <backup_id> --remote-ssh-command <COMMAND> /path/to/recover/dir You can add another flag to set a PITR recovery: --target-time TARGET_TIME Where TARGET_TIME is a standard timestamp '2019-11-03 13:00:00' Once your data directory has been created, use pg_ctl to start the server using the new directory. That's it. You now have incremental backups managed by barman. If you encounter any issues, go over the barman manual: docs.pgbarman.org/release/2.9/ Good luck, and happy Postgres-ing!!
https://medium.com/@kcaps/how-incremental-backups-work-in-postgresql-and-how-to-implement-them-in-10-minutes-d3689e8414d9
['Kobi Rosenstein']
2019-11-03 10:53:18.151000+00:00
['Postgresql', 'Backup', 'Dba', 'Incremental Backup', 'DevOps']
The Tragic Story of Skylar Neese, Brutally Murdered by her Best Friends
Skylar Neese was a sixteen-year-old girl who lived in Star City, a sleepy town in West Virginia. Skylar was the only child of parents David and Mary, so was often described as ‘spoilt rotten.’ In terms of academia, Skylar was a high achiever, excelling in almost every subject, including those she disliked. Since being a toddler, Skylar’s best friend was a girl named Morgan Lawrence. The two were inseparable for most of Skylar’s early life, and the two families were extremely close as a result. Though the two slowly drifted apart as Skylar made new friends, Morgan insists to this day that the two would have always been ‘sisters.’ Skylar’s other best friend, Sheila Eddy, was also an only child, and the two entered freshman year together. It was around this time that a new girl named Rachael Shoaf joined the school, and the three became joined at the hip, often referred to as ‘The Three Musketeers’. The trio gained a ‘rowdy’ reputation; they often snuck out after curfew, went to parties, drank, and did drugs. Their friendship was extensively documented on social media, mostly Twitter, on which the three girls were extremely active. On July 6th, 2012, Skylar arrived home from a late shift at Wendy’s and retired to her bedroom. Little did her parents know, this would be the last time they saw their daughter.
https://medium.com/crimebeat/the-tragic-story-of-skylar-neese-brutally-murdered-by-her-best-friends-8862d9302837
['Hannah Marland']
2020-12-12 13:02:25.182000+00:00
['True Crime', 'West Virginia', 'Crime', 'Murder']
Translation of the certificate of conformity
Translation of the certificate of conformity In our life, sometimes there are situations when a certificate of quality (conformity) may be required. This is an official document that confirms the full compliance of a product with the requirements of the state where it is issued. The certificate is issued upon request by a special certification body. Accordingly, to provide the document in another country, you must first translate it. The text of the certificate is filled with numerous terms and formulations, and the thematic focus can be very different: medical, technical, economic, etc. Based on this, the translation of the document should be entrusted to a narrow-profile specialist who is well versed in the topic and specifics of the field. It is important to correctly translate the terminology that is used to describe the characteristics of goods and products. Linguistic knowledge and skills alone are not enough. The translation agency legal service translation is ready to provide professional services for the translation of quality certificates at the proper level. When translating a certificate of conformity, a linguist will have to translate the following data:
https://medium.com/@seopro6592/translation-of-the-certificate-of-conformity-fbd34ecd2bc6
[]
2021-12-15 05:48:11.193000+00:00
['Translation', 'Certificate']
The truth about the new SpaceX ‘Mini-Bakery’
Yes, you heard that right! SpaceX has a new mini-bakery! But it's not feeding all the hungry SpaceX engineers working on the Starship at their Boca Chica site. Instead, they make their heat shield tiles here. SpaceX van outside of the mini-bakery in Florida As Starship prepares for its first orbital flight, the thermal protection system will play a crucial part in making the mission a success. The SpaceX factory which makes the tiles has been widely rumored about on the internet for many years, but in 2019 we started seeing some SpaceX vans outside something that looked more like a warehouse in Cape Canaveral, Florida. And oh boy, it was not just a normal warehouse: in May 2020 there was a site inspection that gave us a much closer look at the new SpaceX facility. It was said to have 20 employees at the time of inspection; they run 24 hours a day and work 7 days a week, in 3 shifts. The facility is said to be about 40,000 sq. ft. in size. The number of employees at the time of writing this blog must surely have gone up as the SpaceX Starship prepares for its first orbital test flight. Space Shuttle heat shield tiles So now that we know where they are made, let's talk about how they are made. The new SpaceX heat shield tiles are very similar to those of NASA's Space Shuttle's thermal protection system. They need to be able to withstand very high temperatures during re-entry. For this, they need to have low thermal conductivity and a high specific heat capacity and melting point. Elon Musk has mentioned that the tiles are made out of silicon and aluminum oxide. The tiles are 90% air and 10% silica, and they are a bit like hard foam. That's because air has a low thermal conductivity and a high specific heat capacity. This is very similar to the tiles of the Space Shuttle. Half of the new SN20 is covered with these tiles. SpaceX SN20 covered in the new heat shield tiles If we take a closer look at these tiles, we can see that they are labeled with red and green stickers. Red and green labels on SN20 Closer look at labeled tiles The tiles with red stickers were found to have been broken or damaged somehow during the inspection. The ones labeled in green were found to be misaligned during fitting. Now, this might be one of the very first and very important problems for SpaceX to solve. It's not as simple as baking a few foam-like tiles and sticking them on the Starship. For the Starship to be fully reusable, it needs to avoid such inspections. To understand this better, consider that SpaceX plans on reusing the Starship at least three times a day. NASA's Space Shuttle had a similar technology, and it took literal months of inspection and maintenance between launches. The main reason for this is that the Space Shuttle had a much more complex shape than the SpaceX Starship. NASA's Space Shuttle Thermal protection system (heat shield tiles) on the Space Shuttle The Space Shuttle had many different shapes of tiles, and during launch ice would fall from the main tank and hit these tiles, thus damaging them. SpaceX has about 15,000 tiles compared to the 20,000 of the Space Shuttle. The Space Shuttle's tiles were glued in place, but SpaceX uses a red robot to weld the mounting pins onto the body of the Starship, and a person just comes along and gives each tile a nice push into place. The reason they chose the hexagon shape is that if they were to go with, for example, a square shape, then the heat would go between the tiles and the body of the Starship would be exposed to the heat.
The thermal protection on Starship is much simpler and more efficient than that of the Space Shuttle. On June 7th, 2021, a Boca Chica watcher and Twitter user @StarshipGazer took some really good photos of a few shipments from the so-called "mini-bakery". One of them was a wooden crate labeled incoming mini-bakery. This could mean that they are moving the mini-bakery nearer to the production site. Again, this cannot be confirmed yet, since it could just be some more tiles from Florida as well. What do you think about the new SpaceX 'mini-bakery'? Let me know in the comment section below!
https://medium.com/@adityakm24/the-truth-about-the-new-spacex-mini-bakery-19b7dd55bc3b
['Aditya Krishnan Mohan']
2021-09-05 18:02:10.750000+00:00
['Spacex', 'Space', 'Mars', 'Space Exploration', 'Technology']
JavaScript Best Practices — Variables, Arrays, and Objects
Photo by Brenan Greene on Unsplash JavaScript is an easy to learn programming language. It's easy to write programs that run and do something. However, it's hard to account for all the use cases and write robust JavaScript code. In this article, we'll look at how to format file source code for readability. Also, we should make our lives easier by making the best use of recent JavaScript features. Array Literals We should be careful when we create array literals. The spacing and indentation should be consistent. Use Trailing Commas Trailing commas should be added so that we can rearrange entries more easily. For instance, we should write: const values = [ 'first', 'second', ]; Don't Use the Array Constructor We should never use the Array constructor. It acts differently when we pass in one argument than when we pass in more than one argument. If we pass one numeric argument, then it returns an array with that number of empty slots. If there's more than one argument, then it returns an array with the arguments in it. Instead, we should use array literals. For instance, we write: const arr = [1, 2, 3]; instead of: const arr = new Array(); or: const arr = new Array(1); or: const arr = new Array(1, 2, 3); Non-Numeric Properties We shouldn't have non-numeric properties in an array other than length. Instead, we should use a Map or an object. Arrays aren't meant for storing key-value pairs. Destructuring We can use arrays on the left-hand side to destructure the entries. Also, a final rest element may be included. For instance, we can write: const [a, b, c, ...rest] = getArray(); or: const [, , a, b] = getArray(); Spread Operator The spread operator can be used to make shallow copies of arrays or merge entries from multiple arrays into one. For instance, we can write: const copy = [...foo]; instead of: const copy = Array.prototype.slice.call(foo); And: const merged = [...foo, ...bar]; instead of: const merged = foo.concat(bar); Object Literals There are things to consider when we're defining and using object literals. Use Trailing Commas We should use trailing commas and a line break between the final property and the closing brace. For instance, we can write: const foo = { a: 1, b: 2, } This way, each key-value pair is consistent and we can rearrange them more easily. Don't Use the Object Constructor We shouldn't use the Object constructor. We just have to write extra code to use the Object constructor to create objects, without extra benefits. Instead, we should use object literals. For instance, we can write: const foo = { a: 1, b: 2, } Don't Mix Quoted and Unquoted Keys We shouldn't mix quoted and unquoted keys. For instance, we shouldn't write: { height: 2, 'maxHeight': 43, } Instead, we write: { 'height': 2, 'maxHeight': 43, } or: { height: 2, maxHeight: 43, } Method Shorthand We should use the method shorthand in classes or objects instead of the function keyword. They do the same thing. For instance, instead of writing: const foo = { bar: function() { return 'bar'; }, }; We write: const foo = { bar() { return 'bar'; }, }; Likewise, we can do the same with class methods; we write: class Foo { bar() { //... } } Shorthand Properties We can use shorthand properties in object literals. For instance, we write: const foo = 1; const bar = 2; const obj = { foo, bar, }; Object Destructuring Like arrays, we can destructure objects. For instance, we can write: function foo(bar, { num, str = 'some default' } = {}) { //... } This way, we separate the 2nd argument, which should be an object, into separate variables.
However, we shouldn't nest destructured variables too deeply. For instance, we may want to think twice if we write: function foo(bar, { num, baz: { str = 'some default' } } = {}) { //... } The more nesting in our code, the harder it is to read. Photo by Brooke Lark on Unsplash Enums We can create constant objects that act like enums. To do that, we use upper-case keys in an object. The object should be assigned to a const variable. Also, the variable name should be PascalCase. For instance, we can write: const LengthUnit = { METER: 'meter', FEET: 'feet', }; Conclusion We should use new features in JavaScript to make our lives easier. They include destructuring, let and const. Also, we should define arrays and objects with array and object literals respectively as much as possible.
https://medium.com/swlh/javascript-best-practices-variables-arrays-and-objects-1ec2b2ad1465
['John Au-Yeung']
2020-06-04 15:46:48.225000+00:00
['Technology', 'Software Development', 'JavaScript', 'Programming', 'Web Development']
A Little Stuff Can Make You Happy
A Little Stuff Can Make You Happy Rejecting materialism doesn’t guarantee anyone more joy. A friend of mine kept giving away all of her shit. Furniture. TV. Dishes. Sex toys. A few months later, she’d buy it all back. The cycle did nothing but drain her bank account. Every year, she announced her purge on Facebook. Finally, people stopped caring. Maybe she wanted attention. Or something else. Like some kind of spiritual enlightenment. But giving away all your shit doesn’t lead to eternal happiness. Going on a Buddhism retreat doesn’t turn you into Julia Roberts. It just means you’ll own less. My friend skipped a few steps. That book Eat, Pray, Love was still floating around. She talked about it all the time. Like she thought she’d fly off to Europe. Then Bali. Write a memoir about her trip. Become rich. There’s this ageless fad. Acquire a lot of shit. Realize shit doesn’t make you happy. Promptly dump said shit off at Good Will. Hop on the next jet to another continent. Make a big deal about it. Oh what fun. These people all have one thing in common. They had a decent amount of shit in the first place. Some of us never acquire that much. Look at me. Most of my furniture comes from Target. Donating it all wouldn’t make me feel much better or worse. Well, probably worse. Most of the time, I’m not sure I’m happy. Or unhappy. It’s a useless question for a lot of us. We’re just trying to stay employed. Pay our rent. Cover the bills. I’m happy when I’m out hiking. When I’m writing. When I’m fucking. And when I’m drinking. Everything else is just in between. Wait, I forgot Netflix. And thunderstorms. I like those. That’s a pretty simple life. Let’s say I gave up everything and moved into the woods. I’d spend most of my time hunting and gathering. Avoiding predators. And bacteria. Learning how to start a fire. Would my life really feel better? Only if I could survive for a year, then come back and write a best-selling book about my experience. That would assure me decades of hiking, running, fucking, and thunderstorms. The stuff you want One of my students asked me for twenty bucks last semester. No joke. Said she needed cash for gas. So I gave her an Andrew Jackson. Yeah, arguably the most famous crook in American history. Instead of gas, she bought nail polish and makeup. Kinda pissed me off. Some teens at my school do need money for essentials. Maybe makeup and nail polish makes you happy, if you typically can’t afford them. For all I know, she had a job interview and wanted to look nice. Or she was trying to launch her Instagram. Understandable, if misguided. When you’re poor, you’ll try anything. Imagine me giving this girl a lecture on how she doesn’t need makeup to find true happiness in this world. How judgmental. Me, a full grown adult who can buy makeup whenever I want, I’m going to convince a 19-year-old that she doesn’t need what I don’t even have to budget for. “You don’t need it.” In truth, that’s the line I’d use only if I were trying to excuse myself from giving someone money. That’s the line you feed a toddler who wants something from the toy aisle at the supermarket. Over the past couple of years, I’ve loaned about $300 to my students. When I say loaned, I mean gave. But to them I say loan to help everyone save a little face. They say they’ll pay me back. They don’t. Because they can’t. There’s no point in me guilt-tripping them. Sometimes they need bus fare. Rent. Or baby formula. My school’s a mixed bag. Some students come from middle class families. Others don’t. 
The fact that teens have to beg for money to buy a little lipstick speaks to the wealth gap in the U.S. Somewhere out there, someone’s wearing a bra made out of diamonds. It’s a real thing. They show up in Victoria’s Secret fashion shows sometimes. A diamond bra costs about three million dollars. The existence of this bra makes me want to puke. Such a flagrant display of wealth. Meanwhile, some college students have to beg their professors for gas money. Maybe a little makeup. Wearing a diamond bra probably doesn’t make you feel that amazing. And yet some people get off on flaunting their extreme wealth and privilege. It’s the closest they’ll ever come to joy. The stuff you like having They say money can’t buy happiness. Or love. Entrepreneurs like telling us we don’t need more dough to feel successful. But without it, we can’t function. Ask anyone with fifty grand in student debt if a little extra cash would make them happy. For one, it would make us less anxious. We would sleep better, not sending ten percent of our paycheck to Sallie Mae. Just to cover interest. It’s really my fault, though. Instead of going to grad school, I should’ve just started a marketing company and learned how to brand myself. I’ve noticed something odd. A handful of self-appointed experts are making a ton of income by telling everyone they don’t need money. You don’t need money to find self worth. But sign up for my course so you can find out how to make a million dollars. There’s nothing wrong with wanting more money. Maybe one day capitalism will sink in on itself. But that day’s not coming anytime soon. Meanwhile, you probably do need a car. Because most cities don’t invest in public transportation like they should. There’s no inherent evil in wanting a few things. My first few months as a professor, I could barely afford groceries. My soul pined for an espresso maker and a mug. So I set aside $50 for a Mr. Coffee. Making my own espresso made me happy. It took three more months to save up enough to frame my PhD diploma. What a day to remember. Hanging my degree on the wall. Taking a selfie with it. One that got seven likes. For a year, I lived in a studio apartment with no laundry machines and no dishwasher. Sure, I wasn’t miserable. But I hated having to spend two hours a week at the laundromat. Now, I finally have my own washer and drier. It fucking rocks. I can do laundry in my underwear. I can do laundry at 2 am. I don’t have shouting matches with token machines. Owning some furniture and key appliances has made my life easier and more comfortable. Don’t even get me started on couches. I love them. For most of my 20s, I never owned a couch. I’ve seriously been missing out. You see, couches make watching television a lot easier. Last year I bought a $150 office chair. My ass spends enough time behind a desk. Why not give it a little cushion? And I also bought a window unit for my home office. Not absolutely necessary. But nice to have in July. The stuff that doesn’t matter You could take away my smartphone. My laptop. My flat screen TV. Banish me to a cabin in the woods with no electricity. I’m sure I could cope. I’d find happiness despite my lack of my gadgets. That doesn’t mean their absence caused my happiness. Which means you could stick someone else in the same cabin, and they’d crack like dried mud. They were miserable beforehand. All you did was remove their distractions. Becoming one with nature might lead them to enlightenment, or not. 
Connecting with some kind of spiritual plane might be the only way for them to escape utter boredom. When they come back, they're still staring down all their old problems. No kind of newfound spirituality's going to help any of us pay down our student loans. The only life philosophy that's ever helped me is stoicism. In short, you can get through life much better if you control your emotions. The world might be burning down around you. All you can do is keep your own shit together and try to solve problems in a calm, reasonable manner. I've always tried to avoid isms. I'm all about the ics. Stoic. Laconic. Sarcastic. Sadistic. Bottom line, you don't need to chase happiness. Just do things you like. Avoid things you don't. If you plan to purge your belongings, let me know so I can go through your stuff.
https://jessicalexicus.medium.com/a-little-stuff-can-make-you-happy-f33da089f88a
['Jessica Wildfire']
2018-06-26 22:10:18.543000+00:00
['Life Lessons', 'Spirituality', 'Humor', 'Materialism', 'Self Improvement']
Starting again is always hard, very hard
It doesn't matter how many times you've done it; even if you mastered it and became used to doing it regularly, almost without thinking, if the time comes and life makes you stop, you will need to start all over again. Writing is something that has been significantly growing for me and in me over the last couple of years, but it was not until last year that it became something else, something important, something I was able to do and felt comfortable doing. I loved it and even started calling myself a 'writer'. Complex ideas were able to be translated faster and more clearly the more I wrote. I had a voice, a way of thinking and a way of writing that was constantly improving. I was also doing it EVERY-SINGLE-DAY. Let me tell you something, it made a massive difference, as that was the period when I experienced the most significant improvements. Fast forward exactly a year later and things look very different. This is the same period of time when in 2018 I decided to take part in the 100 days challenge, a project created by one of my favourite artists, elle luna; it simply consists of choosing something you like and/or enjoy and would like to do regularly, and doing it for 100 days in a row. For me it was writing. Now, after a hiatus and several life hiccups, I've been finding myself struggling to get back into it, questioning why it is not working and how to start again. The answer? You're reading it. Just start. Some might say that it is easier said than done, and it is true, but it is nothing less than deciding to take action and go with it. As I've been struggling to get around the words and sentences, to structure ideas, and at times even to find inspiration, it is not until we decide to act and make the effort to stick to it that things actually happen. It's a chain I once read about this advice a comedian received from Jerry Seinfeld when he asked him about writing jokes. The article said that Seinfeld's response to his question was simple: "You need to write jokes every day. Get a calendar and mark with an X the days you write jokes; after a while you will have a chain. You don't want to break the chain". That's exactly what it is. Practice. Consistency. Repetition. Whoever you admire, musician, athlete, writer, actor, business person, there's one truth that I observed: the secret is always the same. Beyond talent and the exuberance of the outcomes we all see and dream to one day replicate, or that inspire us, there's a lot of practice, a lot of consistency, a lot of repetition. Nothing else. No accidents, no miracles, no coincidences, no chance. Practice. Time to take action.
https://medium.com/thoughts-on-the-go-journal/starting-again-is-always-hard-very-hard-902f6466fd29
['Joseph Emmi']
2019-05-16 22:25:49.352000+00:00
['Writing', 'Journal', 'Life Lessons', 'Motivation', 'Commitment']
It Won’t Be The Last One
By Nana Dadzie Ghansah Sooner or later, the world will gain control over the COVID-19 outbreak. It will be through containment, effective treatment, a vaccine or a combination of the three. History teaches that. Even the Black Death ended. Even the incurable HIV/AIDS has been controlled. History also tells us that sooner or later, human behavior will lead to another epidemic or even pandemic. How is that? Disease outbreaks occur through uncleanliness, vectors, lack of prevention (anti-vaxxers etc.) and zoonotic spillovers. There are areas of the world that still lack clean drinking water, and these areas still have outbreaks of cholera and typhoid. The mosquito still transmits yellow fever and other viral diseases that are endemic in areas in the Tropics and flare up into epidemics every now and then. Even the bubonic plague broke out not too long ago in Madagascar! Then there is the fact that viruses are spilling over from other mammals to humans, causing events like the COVID-19 and SARS outbreaks. Lastly, refusal by some to get vaccinated means that occasionally, we are going to see diseases like measles and polio break out. Human behavior does not only lead to the direct breakout of diseases. What we do after these diseases break out will ensure that we will forever see epidemics or even pandemics. Since time immemorial, the attitude of those in power towards the outbreak of diseases has worsened these events. One can almost predict these reactions, and the Chinese authorities epitomized it wonderfully during the initial weeks of the COVID-19 outbreak in December 2019. Instead of appreciating the observations of Li Wenliang and his colleagues that there was a new cluster of patients presenting with SARS-like pneumonia, they censored them. When a disease breaks out, there are always those, often healthcare workers, who notice the initial cluster of cases and sound the alarm. Those in power will often deny these reports. Then, as the cases mount, they'll seek to suppress the scientific or observational findings of those who are seeing this cluster swell. When that does not work, they try to argue that things are not so bad. By the time leaders realize things are bad, the initial outbreak is beyond containment. We can forgive the lack of scientific knowledge behind the reasons leaders in Antiquity and the Middle Ages gave for the epidemics that afflicted them. The Antonine plague of 165 AD in the Roman Empire was blamed on an angry Jupiter. It was smallpox. The Church claimed the Black Death was due to bad miasma. Others said it was caused by the Jews and slaughtered them for that. It was bubonic plague. (Looking at how Copernicus and Galileo were treated, I doubt the Church would have listened.) However, to deny outbreaks, seek to suppress their reporting or make light of their severity has to be unforgivable in our present times. This is especially egregious since early action can contain disease outbreaks. And yet those in power do it. We saw President Woodrow Wilson and other Allied leaders do it during the Spanish Flu epidemic in 1918, leading to 50 million people dying worldwide. They suppressed information about the epidemic so as not to depress morale during the 1st World War. It happened during the outbreaks of bubonic plague in San Francisco in the early 1900s. 190 people ended up dying. We saw President Reagan avoid the issue of HIV/AIDS until 1985.
We saw several African heads of state, like Thabo Mbeki of South Africa, refuse to accept the fact that HIV/AIDS was killing their people in the 1990s and 2000s. We have seen the Chinese reactions to SARS and COVID-19. It is not only leaders who misbehave when diseases break out. Among the general population, denial abounds too. That is often compounded by crazy conspiracy theories. This is followed by a period of panic and hysteria. When these reactions do not work, fear sets in. Deep, crippling fear. Finally, people learn to accept the new reality and a rational response ensues. In that regard, we of the present day are no different from the Flagellants in Europe of the 14th century, who whipped themselves bloody to get God to stop the plague during the Black Death. Another factor that adds to the likelihood of future outbreaks is the unwillingness of governments to spend the money necessary to prevent these diseases from breaking out. Preventive programs in the hotspots of the world are often underfunded. Even developed nations are cutting back. The US recently axed its Pandemic Team as well as the PREDICT Program — a program made up of scientists working around the world to hunt down the viruses, like COVID-19, that could lead to the next epidemic or pandemic. So yes, human behavior being what it is, plus economic policy that often shortchanges public health, we will continue to see epidemics and pandemics. In spite of all the scientific advancements, yes, we will continue to see these events.
https://medium.com/@ndghansah/it-wont-be-the-last-one-33894a10a6de
['Nana Dadzie Ghansah']
2020-03-07 04:21:30.002000+00:00
['Pandemic', 'Epidemic', 'Virus', 'Covid 19', 'Public Health']
Mastering 3 Common Workplace Problems
As we make progress in our careers, we run into certain common workplace situations that cause us frustration and anxiety. Left unchecked, this leads to increased unhappiness, poor engagement and eventually we ask ourselves if the job is worth it. Let’s look at the three most common workplace problems that you are likely to encounter, and how to overcome them. Photo by Karla Hernandez on Unsplash 1. “Should I go along to get along?” Becca is a product manager on my team. She is great with customers, has a great work ethic and consistently performs well in her job. In a 1:1, she expressed reservations about working with her engineering partner, Ali. She added “I know this is a peer feedback driven organization. I like Ali as a person, but he is asking me to take notes every time, treating me as nothing more than a scribe. I don’t mind helping from time to time but this is becoming a pattern. Going against Ali would likely hurt how he writes about me during the performance review cycle” Becca viewed this as a zero sum situation, and was approaching it from a mindset of fear. (“Her performance review would be impacted if she disagreed”). Here is the path we charted for Becca. First we agreed that Becca had important viewpoints that warranted a team level discussion. Next, we identified other goals or objectives that Ali cares about. We uncovered that Ali cared about manager feedback and cultivating a people centric work culture. Tapping into this insight, we structured the following path forward for Becca. Becca would step back and instead of talking about the specific disagreement issues, she would talk to Ali about the culture he wanted to set up for the team. Set up a culture of inviting disagreement and respecting each other’s points of view. This breaks the fear cycle and instead moves to a spirit of discovery, tapping into the collective wisdom of the team. Look for opportunities to highlight the culture pioneered by Ali, with senior leads in the organization. Having set up the structure, Becca can now raise questions in a safe space and gets recognized for personifying the team’s culture. The open dialogue helped her uncover additional considerations that Ali was able to highlight, which she had previously missed. More importantly, Becca no longer gets worked up about negative peer feedback, instead she has helped improve the culture for the entire working group. 2. “The Steamroller” Another example that Becca needed help with went something like this. Adam continues to steam roll his way and gets what he wants. He is more senior than me, so disagreeing with him, even when his ideas have flaws, is becoming difficult. I am frustrated that management seems to be doing nothing to stop Adam. What Becca really needs is mastery over setting and managing boundaries with co-workers. Why boundaries? Boundaries are a naturally occurring fact in life. My father in law used to coach his kids to have boundaries using this amazing quote “Your freedom ends where my nose begins”. Our skin represents the boundary of our physical self. Fences mark the boundaries of a property. Without these metaphorical boundaries, you allow anyone or anything to breach your space and lose control of your life. You set boundaries by defining what’s okay and what’s not okay. Boundaries are about setting limits and being consistent in enforcing them, to regain control of things you value (e.g. your time). The person that cannot or will not set boundaries, feels controlled by others. 
Fear is a common reason why people hesitate to have strong boundaries, and this can be overcome by acting with empathy. Emotions vs. actions: It is okay for the steamroller to be angry, frustrated and generally express their emotions. Other people’s emotions are not your problem. You should, however, set boundaries on their actions. For example, being angry is okay but yelling at you is not okay. Be assertive, communicative and have a bias for action. Inaction — choosing to do nothing — is the worst form of action. Our approach should be to try and fail (and learn from it) vs. failing to try. For example, if this person is imposing an unrealistic deadline, then just let them know you need more advance notice to meet deadlines and won’t be able to make it happen. Authentic: set your boundaries based on your own expectations, on what is true to who you are and what you will and will not tolerate. Don’t design your boundaries based on other people’s boundaries. Clarity: You should be 100% clear on what’s okay and what’s not okay. Be direct and specific in your communication. Do unto others: Respect others’ boundaries, however different they might be from yours, just like you expect others to respect yours. Are you worried about hurt feelings? Taking into account others’ feelings is important. You must always act with empathy. It is not a reason to stop doing what you need to do. Your clear decision-making and boundary-setting might upset someone. That is understandable. You are not accountable for how someone feels. Their feelings should not stop you from setting up boundaries for yourself. When boundaries are violated, there has to be a clear signal that it is unacceptable (there must be consequences), and you have to reinforce your boundaries. There are no exceptions. Escalations, without making personal attacks, to that person’s manager or to your own manager are usually effective. 3. The “Poor Planner/everything is urgent” manager We have all seen examples of this. You are a meticulous planner and have time allotted to hit your own deadlines. Surprise! Your manager pops in and asks you to get something done that is time-consuming and urgent. There goes the time you had planned to spend on your own work, your hobbies/friends or something else. This is another example of boundary violation. It is not easy to say no to your boss, but here is what you can do: Highlight the impact of the most recent fire drill to your boss, and if possible highlight how other projects your manager cares about have now been affected. Don’t just complain, offer solutions. For example, if your manager cares about building a bench, then highlight how this can be a great career opportunity for someone on the team who is potentially more junior and might be excited to work on such strategic projects directly with your manager. This creates a path forward to handle urgent problems without impacting your time in the same way. Offer that when she assigns last-minute work to you in the future, in the spirit of teamwork, your manager should chip in and work together on the deadlines. You have to get the manager to hate the urgent rush as much as you do. If you can highlight that this is a pattern with multiple occurrences, raise the topic of hiring additional people to help with the workload. Using these techniques, you can regain control of your life from situations that were seemingly uncontrollable. Good luck! If you liked reading this, consider signing up for career coaching.
Schedule a complimentary consultation with me to discuss how I can help you accelerate your career.
https://medium.com/@karthikln/mastering-3-common-workplace-problems-826bdfe30013
['Karthik Lakshminarayanan']
2021-02-14 20:01:41.365000+00:00
['Career Advice', 'Workplace', 'Work']
How I hacked into India’s top matrimonial website and earned amazon gift card worth 10K INR.
Hey friends, Hope you all are safe and good. Don’t know why, but suddenly I was getting more requests on my matrimonial profile after I got married. I didn’t want to delete my account since browsing through the profiles of beautiful girls seems really interesting (disclaimer: don’t try this at home when your wife is nearby). One day, I was very tired after work (I work as a developer). As usual, our previous day’s bug fix release introduced some interesting new bugs in our application and I had to do some urgent data support work to fix it. I decided to learn some new topics before I slept, as our manager had asked me to gain knowledge in some of the new technologies. So I opened YouTube and started learning. Only at 2:30 am did I realize that I had ended up watching a video titled “Hey! Look, my red cat is peeing on my bald head”. Sh**t!! I started watching a technology video, so how the hell did I end up here ^_^ . Anyway, I checked my email and found some notifications from the matrimony website. I used to check profiles using their Android app, so I had forgotten the password. When I clicked on the forgot password option, the website asked me to enter the OTP received on my registered mobile number. Good! I am a big fan of OTP brute-forcing. I decided to give it a try. I entered a wrong OTP and resent the request more than 20 times. I noticed that there was no limit on trying invalid OTP numbers. So I opened Burp Suite and forwarded the request to Burp Intruder. Using Burp Intruder you can mark an input and provide a set of payloads. Once you start the attack, Burp will send continuous requests to the endpoint, replacing the marked input with a payload in each request. Since the OTP is 6 digits, I set the payload range as 100000 to 999999. I started the attack and after a few minutes I started getting responses with 500 (Internal Server Error) as the status code. I paused Burp and tried accessing the website in the browser. I felt like running away when I saw that the website was down. I started panicking. I suddenly turned off my laptop and went to sleep. The target doesn’t have a bug bounty program, so as you know it is illegal to do this kind of stuff. I opened their Android application but that also was not working. I started imagining myself getting arrested by the police while everyone laughed because I hacked a matrimonial website. Sh*t, I should have hacked some bank websites instead :(. This is why intelligent people say ‘never go behind beautiful girls’. I decided I would delete my matrimonial account once the server was up. I slept somehow. When I woke up in the morning I checked the website first. Wooh!! It is working now. Everything is back to normal. There was no news of an international cybercriminal being arrested for hacking the matrimonial website. All good, I am a brave person and today is Saturday. So I decided to check the attack again. This time I decided to reduce the rate of requests per second. I respect the server. But instead of starting the payload from ‘100000’ I accidentally gave the starting payload as just 0. So the payload range was from 0 to 999999. When the attack started, I found something interesting. The first request, with the OTP value 0, gave a different response. I was surprised when I checked the response. The response contained a link with my encrypted password!!! So if I clicked on that link I would be redirected to my account without any trouble. I knew a friend who was using the same application.
I entered his mobile number and 0 as the OTP in the forgot password request. Voila!! I received the link with his encrypted password. So I could hack into anyone’s account without knowing the password. But I would need to know their mobile number, right? After checking the request again I found that I could provide either the mobile number, matrimony ID or email in the request. The request was as follows So I could log into any account just by giving their mobile number/matrimonyID/email address in the ID field and 0 in the loginOTP field. I immediately reported the issue to their customer support. I know there is no bug bounty program for them, but as this is a high-impact bug I thought it should be reported. Their customer support team didn’t understand the impact at first. Later I got a chance to contact one of their technical managers. They fixed the bug immediately. Even though they don’t have an official bug bounty program, considering the impact, they rewarded me with an Amazon gift card worth 10,000 INR.
https://infosecwriteups.com/how-i-hacked-into-indias-top-matrimonial-website-and-earned-amazon-gift-card-worth-10k-inr-2a0b376219fa
['Vivek P S']
2021-04-25 12:34:54.490000+00:00
['Bugs', 'Hacking', 'Cybersecurity', 'Bug Bounty']
Best Security Cameras for 2021: Let’s Keep an Eye in Your Home with These Smart CCTV Cameras.
Today, smart security cameras designed to surveil both inside and outside the home are a must if you want to know what’s happening to your property when you are not around. Unlike basic CCTV cameras that store data on a small PC, or enterprise systems that charge you for a subscription package, now you can find smart cameras that allow you to use both cloud storage and a real-time video feed, so you get the flexibility to check on your property whenever you like and wherever you are. The most fascinating thing about smart surveillance cameras is that they can often be controlled by your other smart devices. Well, there are numerous security cameras available in the market that have several amazing features, so it can be quite daunting to select one. Therefore, we have researched and listed some of the amazing cameras that anyone would love to install. IGET SECURITY M3P15V2 – iGET SECURITY M3P15v2 is one of the solid mid-range security cameras. It has a large set of features with a wireless rotating Full HD design to rotate the camera in all directions. The camera also has automatic motion detection and auto-recording, so if there’s any movement in the room behind you, the camera will inform you via the Internet on your mobile phone and start auto-recording on the micro SD card. NETATMO WELCOME – Netatmo Welcome is definitely a solid pick for home security. The Netatmo Welcome is a compact surveillance camera that protects your home even better, all thanks to revolutionary face recognition technology. Because the Welcome camera recognizes faces, you’ll know who is at home — whether your loved ones or someone else. It will also send the names of the people it registers directly to your smart device, so it will notify you when your children or grandparents return home and also alert you when it detects a stranger. EZVIZ C3A-B – EZVIZ C3A is a standalone outdoor, 100% wireless security camera. This CCTV camera is powered by a battery, allowing users to easily and seamlessly place it anywhere without any hassle with cables. It comes with a rechargeable lithium battery of 5500 mAh, which once fully charged can work for up to 3 months and provide constant protection. This outdoor camera boasts brilliant image quality at 1920 × 1080 FHD resolution. You can connect this camera directly to your Wi-Fi, or you can connect it to the EZVIZ W2D or WLB base station. The EZVIZ C3A also supports Micro SD cards up to 128 GB, so you will get the facility of local storage too. IP DUAL THERMO-OPTICAL CAMERA – IP Dual Thermo Optical is a smart camera. It features powerful behavior analysis based on an advanced learning algorithm to detect line crossing, entry to and exit from the area, and more. Its thermal module features 160x120 thermal resolution imaging sensors at 25 fps. It also has a smart alarm function that buzzes once the camera detects line crossing, any intrusion or entry/exit to/from the marked area. However, this camera is a bit expensive but is really a great fit. EZVIZ LC1C WHITE – EZVIZ LC1C is a security camera with outdoor lighting that offers more security with the night light in the yard, driveway, and the like. The camera not only offers luminosity but also monitors your surroundings with active defense and real-time movement alerts.
Thanks to its night vision technology, it not only performs well in the daytime but also offers stunning video in dim light, and night images are sharp and clear despite the darkness of the environment. So, if you want to install home security cameras and are planning on buying a surveillance system online in Dubai, I recommend visiting gear-up.me. Gear-up.me is an online store where you may find the latest security systems at affordable prices.
https://medium.com/@cis-kimhill/best-security-cameras-for-2021-lets-keep-an-eye-in-your-home-with-these-smart-cctv-cameras-c1da20b4c3f5
['Cis Kimhill']
2021-01-15 09:25:56.665000+00:00
['Home Security Cameras', 'Surveillance Camera', 'Cctv System', 'Security Cameras']
ElasticSearch with Node.js
Introduction Working with documents in software is fun. It means that storage fits your code, not the other way around. This removes the object-relational impedance mismatch between how you model your application and how you store those models. Even if you do not have an immediate use for documents, learning how to use them will broaden your perspective on storage systems. Recently I wrote a post about document databases in PostgreSQL and built a small application using .NET Core. However, this time, I am going to use Elasticsearch as the document persistence mechanism and Node.js to work with it. The Elastic Stack (ELK) has other components, e.g. Kibana, Logstash etc., but I will only be using the Elasticsearch component in this discussion. Elastic Search and Index Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java. What is an Index The word index itself has different meanings in different contexts in Elasticsearch. Unlike in a conventional database, in ES an index is a place to store related documents. An index is a collection of documents that have somewhat similar characteristics. The act of storing documents in an index is known as indexing. An index is identified by a name, which is used to refer to the index when performing indexing, search, update and delete operations against the documents in it. We’ll see later how to create an index, delete an index, store documents in an index, etc. Installation You can download the installer for Elasticsearch from the official website and install it, or alternatively you can use a cloud service for the same purpose. For the demo, I installed it on my local machine and then browsed to the following URL to verify the installation. Document Before we start working with Elasticsearch, let’s talk a little bit about documents. You can also read my previous post if you need more details about documents. Documents are essentially sets of key-value pairs. A document is a basic unit of information that can be indexed, e.g. you can have a document for a single customer, another for a single product and yet another for a single order. A document is expressed in JSON, which is a ubiquitous internet data-interchange format. Within an index/type, you can store as many documents as you want. You can think of them, in relational database terms, as records. Elastic-Search Restful API Elasticsearch has quite a few APIs: “cluster” API: to manage clusters. “index” API: gives access to our indices, mappings, aliases etc. “search” API: to query, count and filter data across multiple indices and types. “document” API: to add data etc. Here are a few of the API calls using the browser: ElasticSearch with Node Now we will build a simple Node.js application to perform some operations on Elasticsearch. I initialized the source-code folder with the npm init command: Install elasticsearch package Next, I installed the elasticsearch npm package as follows: Connection to ElasticSearch The connection.js file encapsulates the connection to Elasticsearch. To verify the connection, I added the following code: Let’s run it and see if the connection works: OK, our connection is working, so let’s continue working with indices and documents. Build an Index API The elasticsearch npm package exposes many methods which we can use in our application. We can create indices, delete them, add documents to them, etc. Index names should be lowercase. A minimal sketch of what these connection and index calls might look like is shown below.
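Since the original snippets are not reproduced here, the following is only a minimal, hedged sketch of what the connection check, index creation and document indexing could look like with the elasticsearch npm client; the host, index name and helper names are illustrative assumptions, not the exact code from the post.

// connection.js: a minimal sketch, assuming a local single-node cluster on localhost:9200
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'error'
});

// Verify the connection with a ping
client.ping({ requestTimeout: 3000 })
  .then(() => console.log('Elasticsearch cluster is reachable'))
  .catch(err => console.error('Elasticsearch cluster is down!', err));

// Create an index if it does not already exist (index names must be lowercase)
async function createIndex(name) {
  const exists = await client.indices.exists({ index: name });
  if (exists) {
    console.log(`Index "${name}" already exists`);
    return;
  }
  await client.indices.create({ index: name });
  console.log(`Index "${name}" created`);
}

// Index (store) a JSON document in the given index
function addDocument(indexName, id, body) {
  return client.index({ index: indexName, type: '_doc', id, body });
}

module.exports = { client, createIndex, addDocument };

Used from the client code, createIndex('blog') followed by addDocument('blog', 1, { title: 'Hello', body: '...' }) would roughly correspond to the “Create Index ‘Blog’” and “Add document to Index” steps described below.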
If the index already exists, you get an ‘index_already_exists_exception’. You can have an index for customer data, another for a product catalog and yet another index for order data. For this demo I will create an IndexManager API to encapsulate these concerns, and then we can just use that API. I will keep the implementation simple; however, feel free to adjust it to your style. The API will have the following functionality: create an index, delete an index, check if an index exists, and add documents to an index. Here is the code for the API, which is self-explanatory; however, if something is not clear, ask in the comments: Here is the method implementation code: Client Code Now, as we have the functionality related to the connection and to working with indices and documents, let’s execute these commands: Create Index ‘Blog’: Add document to Index I created a class for Post, and this will be serialized to JSON and saved as a document in the blog index: Let’s update the code to save the document into the index: and execute the code: and we can see that the data is inserted into Elasticsearch: Import documents (blogs) into ElasticSearch Index ‘Blog’ The following code will read the data from a JSON file, parse it and then save it in the Elasticsearch index. You can use this method to populate documents. JSON data file JSON data loader The following code reads the JSON data file and returns the document objects: and also updates the client code as follows: here is the data of all documents after executing the code: Summary Elasticsearch is a very powerful and easy-to-integrate option for your persistence mechanism. The REST API provides very useful services for working with Elasticsearch. You can download the code for this application from this git repo. Till next time, Happy Coding. References
https://medium.com/@jawadhasan80/elasticsearch-with-node-js-8198bc7e8b79
[]
2020-09-26 13:46:51.312000+00:00
['Elasticsearch', 'Nodejs', 'NoSQL']
“JavaScriptmas” challenges with Scrimba
8. Rolling Dice This challenge is quite different because it also involves HTML and CSS. DESCRIPTION: In this challenge a casino has asked you to make an online dice that works just like it would in real life. Using the pre-made dice face that represents ‘one’, make the faces for ‘two’, ‘three’, ‘four’, ‘five’ and ‘six’. Now when the user clicks the dice on the screen, the dice is expected to show one of the faces randomly. DETAILED INSTRUCTIONS: 1. Pick out the necessary elements from the HTML 2. Create the other 5 dice faces in CSS 3. Use event listeners on the appropriate div 4. Display dice faces randomly on click STRETCH GOALS: - Can you show the number you rolled as an integer alongside the dice face? - Can you improve the overall design? My explanation There are plenty of possible solutions for this. I will focus on why I decided to do it this way. CSS: For the dice, I decided to use the grid layout. Each dot is in a 3x3 grid and has its own coordinates described in classes: dot-a, dot-b … dot-i. To center the dots inside the fields I used align-self: center and justify-self: center. HTML: To properly display the starting face, I added the class dot-e to the existing dot. I also added two div blocks: one with information about how to reroll and one as a placeholder for information about the number of drawn dots. JS: DICE_FACE_CONFIG represents information about which dots need to be displayed. At index 0 we have the information about the one-dot face, at index 4 about the five-dot face, etc. Our main function is diceRoll; inside it we will: * draw a number from 1 to 6, * get the dice-face config from DICE_FACE_CONFIG based on the drawn number, * clear the previous dice face by removing all of the dice’s child nodes from the DOM, * set the new face based on the config by adding dots with the proper “dot-char” classes, * set the information about the number of drawn dots. A minimal sketch of such a diceRoll function is shown below.
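The actual solution lives on Scrimba, so the following is only a minimal, hedged sketch of the approach described above; the element IDs, the exact DICE_FACE_CONFIG dot positions and the class names are illustrative assumptions rather than the original code.

// A minimal sketch, assuming a <div id="dice"> grid container and a <div id="result"> placeholder
const DICE_FACE_CONFIG = [
  ['dot-e'],                                              // face 1
  ['dot-a', 'dot-i'],                                     // face 2
  ['dot-a', 'dot-e', 'dot-i'],                            // face 3
  ['dot-a', 'dot-c', 'dot-g', 'dot-i'],                   // face 4
  ['dot-a', 'dot-c', 'dot-e', 'dot-g', 'dot-i'],          // face 5
  ['dot-a', 'dot-c', 'dot-d', 'dot-f', 'dot-g', 'dot-i']  // face 6
];

const dice = document.getElementById('dice');
const result = document.getElementById('result');

function diceRoll() {
  // draw a number from 1 to 6
  const drawn = Math.floor(Math.random() * 6) + 1;
  // get the dice-face config for the drawn number
  const face = DICE_FACE_CONFIG[drawn - 1];
  // clear the previous face by removing all child nodes of the dice
  while (dice.firstChild) dice.removeChild(dice.firstChild);
  // add a dot with the proper "dot-*" class for each configured position
  face.forEach(dotClass => {
    const dot = document.createElement('div');
    dot.classList.add('dot', dotClass);
    dice.appendChild(dot);
  });
  // show the drawn number as an integer alongside the dice face
  result.textContent = `You rolled: ${drawn}`;
}

dice.addEventListener('click', diceRoll);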
https://medium.com/@kospel/javascriptmas-challenges-with-scrimba-9ac2fd36317e
['Rafał Bogacz']
2020-12-18 22:36:23.185000+00:00
['Challenge', 'JavaScript', 'Scrimba', 'Programming', 'Javascriptmas']
Chet Hanks: Transracial Jamaican, Appropriation or Delusional Privilege
Culture vultures versus real artists Photo by Kyle Cleveland on Unsplash Chester Marlon Hanks (Chet) is the son of millionaire actor Tom Hanks. Chet is also an actor and aspiring rapper, Chet Haze, who has been occasionally transracial Jamaican since January 2020. If you’re not familiar with hip hop culture, he’s like the male Amethyst Amelia Kelly, a runaway white girl from Australia who magically morphed into the hoodlicious, Black-inspired rapper Iggy Azalea. As a woman of African descent with friends and family from Jamaica, I find Chet Hanks’ fascination with and use of Jamaican patois to be both annoying and sad, but I don’t totally blame him. I think the music industry and social media platforms should share the blame because they make millions of dollars featuring white people like Chet and countless others that steal and appropriate Black culture, music and intellectual property. I refuse to use my platform to get them more profitable clicks and views — so feel free to Google a popular Black artist and there’s probably a video of some random suburban teen or adult culture vulture calling it “their new song” (without permission, credit or compensation to the original artists). Chet isn’t known for having prolific acting or writing skills and he doesn’t have a distinctive rap flow. Therefore, he stays relevant and connected to Black music culture through social media and his brief role on Empire. Yet, Chet is a “struggling” artist, which is why I think Chet’s case is a little deeper. I don’t think he’s doing it for money — he doesn’t need it. I think he’s doing it for attention. Again, I put the bulk of the blame on the music and social media industries because they know what they’re doing — even if Chet should know better and doesn’t. They are marketing a tried and true formula: steal from Black culture and artists and make white-washed versions of their image and songs to market to white audiences so they get inspired; wash, rinse and repeat, and another white star (and cash cow) is born. Chet is a Jamaican culture vulture. It may be a little funny, but for real artists this is no laughing matter. The money spent to promote Chet and his antics could have gone to real reggae artists. This is where the phrase “blue-eyed soul” came from. The music industry literally stole the look, lyrics, compositions, music rights, even dance routines from Black artists’ songs, often leaving them destitute or struggling on the Chitlin Circuit — playing to segregated audiences. The white record companies would work with white-owned radio and television stations, clubs and venues so they could market and promote white artists to the masses. And the rest is literally history. Unfortunately, there isn’t a Black or African American culture board, so there really isn’t anyone to stop this highly profitable, parasitic behavior or hold any of the culture vultures or the companies responsible — unless the artist sues, as in the Marvin Gaye v. Robin Thicke case. The humanity As a mother, I feel sorry for Chet. Based on his sincere defense of his bizarre and embarrassing behavior, he is drowning in delusional privilege and doesn’t understand why it’s wrong to profit from pretending or acting like he’s from an oppressed country or culture. Something is clearly wrong. This situation raises the question: Why can’t Chet just be happy being Chester — son of “nice”, rich actor Tom Hanks? What happened to Chet that made him lose touch with reality in this current social and political climate?
Who is delusional and culturally incompetent enough to cosign Chet’s verbal Blackface with a reggae twist? When I first saw Chet use Jamaican patois to congratulate his father for his award, I cringed — it was hard to watch because he looked like a kid playing a character. This is the primary reason I think Chet’s case differs from other white performers who have consciously and deliberately made characters for profit. Chet seems to desperately want to be anyone other than himself, and contrary to popular belief, Black people are some of the most loving, welcoming and forgiving people regardless of country of origin, so some of us are letting him slide… This also explains why Chet isn’t facing a bigger scandal or backlash from the Jamaican community. We are way too nice, tolerant and trusting… How else could colonization have started… We gave one invitation and a gift — but predators are going to prey and colonizers are going to colonize… You give an inch and they literally take your whole country, steal our art and culture and then tell us we’re uncivilized… but I digress. True to form, this is where I feel Chet gets a little mercy. He is in recovery and a loving father of a beautiful, biracial daughter. That’s where he differs from other white “performers” who seamlessly go back to good, ol’ white privilege like Elvis and his hip hop versions: Marky Mark, who is now the mega-rich actor and family man Mark Wahlberg, part owner of Wahlburgers and UFC. Mr. Wahlberg admitted to and has since apologized for his vicious, race-related attack on an Asian man that left him partially blind. He tried to get the case expunged, but the incident went public, again. Picture courtesy of People.com Then there’s Vanilla Ice, who is now the real estate investor and house flipper Robert Matthew Van Winkle. Van Winkle has since apologized for his appropriation and caucastic coonery after selling 40 million copies of his single. Yet, he says he was “a puppet”: a puppet that made millions off a culture I don’t think he likes or respects, but hey, he’s since bought the rights to Under Pressure (the sample he originally used without permission), so at least the white artists he stole from got some compensation. Youtube Apology I would throw Pink in the mix, but her white-girl roughneck image was also created by her record label. She’s talked about how record execs wanted to push a false narrative, but she didn’t comply. She’s always been a bad-ass white girl and never faked being a “sista” with the fake, stereotypical Black girl accent — and it’s appreciated. She just happens to have a soulful, strong voice — that matches her equally strong, acrobatic body — get em Pink (you rock). Real and white But as Dr. Dre said, “back to the lecture at hand”… Chet could easily take a page out of the books of Eminem, Paul Wall or Bubba Sparxxx, who are white rap artists that “don’t front” or put on fake accents or gangster back stories. They love and respect hip hop culture and they’re accepted as real hip hop artists that bring their own authentic flavor — as white, American men. Eminem is undeniably a brilliant lyricist and battle rapper. Paul Wall and Bubba Sparxxx bring their southern swag without the messiness of appropriation. As a matter of fact, Paul Wall gets extra props for being a proud, loving husband and father of two children with his beautiful wife, Crystal, and if you question his love of sistas, check out his song with Jill Scott.
Bubba on the other hand is a proud country boy and can kinda be credited for ushering in the age of “new booty” with his 2005 hit song, Miss New Booty. That song is a club banger and still makes me laugh and dance at the same time. But there is always an enigma and his name is Collie Buddz. I’d be lying if I didn’t admit that I have Come Around on my playlist, but I don’t know where he fits in this discussion. His bio says, “Colin Patrick Harper hails from the tiny island of Bermuda. In the music industry, he is known as the reggae artist Collie Buddz. Buddz was born on August 21, 1984 in New Orleans, Louisiana.” I don’t know Colin’s connection to Bermuda when he was born in New Orleans and graduated top of his class from Full Sail University in Florida. Yet, even he doesn’t have a full-on Caribbean accent in his interviews, and he’s worked with some noteworthy Jamaican and Ghanaian artists. Honestly, I put his music in the “weed culture” genre more than reggae, so I don’t really see him exploiting Caribbean/West Indian culture and music, but that could just be my colonized mind finding sympathy for colonizers… Conclusion I really don’t know how or why white people like Chet aren’t happy with their privilege and love stealing Black and African swagger, fashion, music, food, dialects, accents, culture or lives in general. It’s like white America is Black people’s Single White Female… They copy what we do out of strange, obsessive flattery, admiration and a tinge of jealousy, but become defensive, aggressive and dangerous when we call them out or ask them to stop. I hope Chet has some real people from the Jamaican and Black communities around him so he can learn how hurtful and offensive his behavior is, but most importantly — I hope he finds his authentic self. It’s too late for him to be brave like Pink, fight the power to live and push a false narrative, but I pray he finds some clarity, integrity and real cultural competence. I hope he can find the real Chet, and become a beacon of hope for his fellow white brothers and sisters like Eminem, Paul and Bubba, who can love, like and be a part of our various cultures and communities without profiting from our mockery and their proximity to power and privilege. Or at the very least, I hope Chet sees the error of his ways and halfway apologizes like Van Winkle… Thank you for reading. Sources
https://medium.com/an-injustice/chet-hanks-a9e00356e89d
['Gfc', 'Grown Folk Conversations']
2020-12-08 23:38:55.338000+00:00
['Appropriation', 'Hip Hop', 'Race', 'Chet Hanks', 'White Privilege']
Deep Learning for House Number Detection
SVHN Dataset Photo by Jon Tyson on Unsplash This is a Stanford-collected dataset and is available for the public to experiment with and learn from. SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. [2] The images are, in no way, preprocessed or ready to be used yet. Hence, whoever wants to use it has to do a bit of the work! The Challenge Build an algorithm to classify the different house numbers from the dataset. The Problem The dataset that is available on the website is in the .mat format. And in case you don't know, Python notebooks and the usual algorithms can't process these kinds of files directly. Hence, it's necessary to convert the data to an acceptable format before getting into the cool stuff. And honestly, being a newbie, I couldn't write functions to convert the files myself and hence did a lot of digging. After tens of articles and YouTube videos, I finally stumbled upon a repository that was somewhat helpful. Now, let's move on to the model creation! Environment and Tools Keras Numpy Scipy The Code? I'll upload the complete code on GitHub and will link it here later. But here's some of it (two imports the snippets rely on, scipy.io and to_categorical, are added here so the code runs as shown).

import os
import sys
import keras
import tarfile
import numpy as np
import scipy.io as sio
import urllib.request as urllib
import matplotlib.pyplot as plt
from keras.regularizers import l2
from keras.models import Sequential
from keras.optimizers import Adam, SGD
from keras.engine.training import Model
from keras.utils import to_categorical
from keras import backend as K, regularizers
from keras.callbacks import LearningRateScheduler
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Add, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization, Activation

Next, let's preprocess the Data:

# Dataset location
train_location = 'Drive link to the Train.mat file'
test_location = 'Drive link to the Test.mat file'

def load_train_data():
    train_dict = sio.loadmat(train_location)
    X = np.asarray(train_dict['X'])
    X_train = []
    for i in range(X.shape[3]):
        X_train.append(X[:,:,:,i])
    X_train = np.asarray(X_train)
    Y_train = train_dict['y']
    for i in range(len(Y_train)):
        if Y_train[i]%10 == 0:
            Y_train[i] = 0
    Y_train = to_categorical(Y_train,10)
    return (X_train,Y_train)

def load_test_data():
    test_dict = sio.loadmat(test_location)
    X = np.asarray(test_dict['X'])
    X_test = []
    for i in range(X.shape[3]):
        X_test.append(X[:,:,:,i])
    X_test = np.asarray(X_test)
    Y_test = test_dict['y']
    for i in range(len(Y_test)):
        if Y_test[i]%10 == 0:
            Y_test[i] = 0
    Y_test = to_categorical(Y_test,10)
    return (X_test,Y_test)

And we use these functions as:

(x_train, y_train) = load_train_data()
(x_test, y_test) = load_test_data()
x_train.shape # (73257, 32, 32, 3)
x_test.shape # (26032, 32, 32, 3)

Now that we have the data in our hands, we can scale it. It's the norm these days to normalize the data before loading it into the architecture, because it's easier for the model to learn from scaled data than from randomly spread-out data.
Normalizing data:

# Converting the arrays to Float type
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Normalizing
x_train = x_train / 255.0
x_test = x_test / 255.0

Now, we do have 73,257 images but Training Set Size and Money are never enough! Hence, we create some more images here using the ImageDataGenerator

from keras.preprocessing.image import ImageDataGenerator

# applying transformation to image
train_gen = ImageDataGenerator(
    rotation_range=15,
    zoom_range=0.10,
    width_shift_range=0.3,
    height_shift_range=0.3,
    brightness_range=[0.2,1.0]
)
# test_gen = ImageDataGenerator()

train_gen.fit(x_train)
test_set = train_gen.flow(x_test, y_test, batch_size=256)

We're just generating the Testing Images for now and will hit the Training set later, because I like it that way.

Model Architecture Here is the fun part! The architecture I've designed here consists of 8 Convolutional Blocks! Now, I used Convolutional Layers with:

model = Sequential()

# Block 1
model.add(Conv2D(32, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same', input_shape=(32, 32, 3)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 2
model.add(Conv2D(64, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
# model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 3
model.add(Conv2D(128, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 4
model.add(Conv2D(180, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
# model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 5
model.add(Conv2D(256, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 6
model.add(Conv2D(280, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
# model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 7
model.add(Conv2D(300, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

# Block 8
model.add(Conv2D(320, kernel_size=5, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
model.add(Activation('elu'))
model.add(BatchNormalization())
# model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.3))

And then the three Dense layers with:

model.add(Flatten())

# Dense 1
model.add(Dense(2800, kernel_regularizer=l2(0.0005)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))

# Dense 2
model.add(Dense(1200, kernel_regularizer=l2(0.0005)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))

# Dense 3
model.add(Dense(10, kernel_regularizer=l2(0.0005), activation='softmax'))

Hyper parameter setting In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.
A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned. [3] I used a custom learning rate modifier function:

initial_lr = 1e-3

def lr_scheduler(epoch):
    if epoch < 20:
        return initial_lr
    elif epoch < 40:
        return initial_lr / 2
    elif epoch < 50:
        return initial_lr / 4
    elif epoch < 60:
        return initial_lr / 8
    elif epoch < 70:
        return initial_lr / 16
    elif epoch < 80:
        return initial_lr / 32
    elif epoch < 90:
        return initial_lr / 64
    else:
        return initial_lr / 128

Parameters: Training I trained this model with breaks in between and slight changes along the way.

history = model.fit(
    train_gen.flow(x_train, y_train, batch_size=YOUR_CHOICE),
    epochs=YOUR_CHOICE,
    verbose=YOUR_CHOICE,
    validation_data=test_set,
    callbacks=[LearningRateScheduler(lr_scheduler)],
    shuffle=True
)

Training 1: 100 epochs with a batch size of 128. Now, this is the graph of the Validation Accuracy over 100 epochs. It is evident that in the first 10–20 epochs, we had fairly high accuracy. This is the Validation loss. Same here, we got pretty close to the converging point in the initial epochs. Now, I know that 95.7% accuracy is okay, but nah. So, … Training 2: Alright, I gave the batch size 512 some tries, but got nowhere. So, here we are with 100 epochs with a batch size of 2048. Now, this is the graph of the Validation Accuracy over 75 epochs. We can see that we kept on fluctuating but eventually got to 96.0%. This is the Validation loss. Well, there is a very small variance here because the model had already been trained for 200+ epochs. Alright, 96.02% accuracy is okay now. :p Results & Conclusion After hours of training, we're at 96.0% and honestly, I'm pretty happy with it. I initially had a problem with the .mat file format, but once I found that repository, it was pretty straightforward. Thanks to him, whosoever he was. :) And this is it for this article. I'll link to the notebook when I'm done with it; I still have changes that I want to try. SVHN is a very large and extensive dataset that comes from a significantly more difficult problem where images contain a lot of clutter and noisy features. It seems to be underutilised in the literature compared to MNIST, CIFAR-10 and CIFAR-100. Unlike MNIST and other datasets, preprocessing is common practice and very important for fairly comparing results. A form of contrast normalisation, in particular local contrast normalisation, is a common technique for preprocessing the SVHN dataset images. Alright y'all, I hope this article helps you. Let's connect on LinkedIn! References/Further Reading I'm inspired by him: [1] MNIST Database, Wikipedia, https://en.wikipedia.org/wiki/MNIST_database [2] The Street View House Numbers (SVHN) Dataset, Stanford, http://ufldl.stanford.edu/housenumbers/ [3] Hyperparameter optimization, Wikipedia, https://en.wikipedia.org/wiki/Hyperparameter_optimization
https://medium.com/swlh/deep-learning-for-house-number-detection-25a45e62c8e5
['Danyal Jamil']
2020-09-22 02:49:02.598000+00:00
['Cnn', 'Deep Learning', 'Svm', 'Data Science', 'Machine Learning']
Object-Oriented Analysis And Design — Introduction (Part 1)
The Concept Of Object-Orientation Object-orientation is what's referred to as a programming paradigm. It's not a language itself but a set of concepts that is supported by many languages. If you aren't familiar with the concepts of object-orientation, you may take a look at The Story of Object-Oriented Programming. If everything we do in these languages is object-oriented, it means we are oriented or focused around objects. Now in an object-oriented language, one large program will instead be split apart into self-contained objects, almost like having several mini-programs, each object representing a different part of the application. And each object contains its own data and its own logic, and they communicate between themselves. These objects aren't random. They represent the way you talk and think about the problem you are trying to solve in your real life. They represent things like employees, images, bank accounts, spaceships, asteroids, video segments, audio files, or whatever exists in your program. Object-Oriented Analysis And Design (OOAD) It's a structured method for analyzing and designing a system by applying object-oriented concepts, and for developing a set of graphical system models during the development life cycle of the software. OOAD In The SDLC The software life cycle is typically divided up into stages going from abstract descriptions of the problem to designs, then to code and testing, and finally to deployment. The earliest stages of this process are analysis (requirements) and design. The distinction between analysis and design is often described as "what vs. how". In analysis, developers work with users and domain experts to define what the system is supposed to do. Implementation details are supposed to be mostly or totally ignored at this phase. The goal of the analysis phase is to create a model of the system regardless of constraints such as the appropriate technology. This is typically done via use cases and an abstract definition of the most important objects using a conceptual model. The design phase refines the analysis model and applies the needed technology and other implementation constraints. It focuses on describing the objects, their attributes, behavior, and interactions. The design model should have all the details required so that programmers can implement the design in code. These activities are best conducted in iterative and incremental software methodologies. So, the activities of OOAD and the developed models aren't done just once; we will revisit and refine these steps continually. Object-Oriented Analysis In the object-oriented analysis, we … Elicit requirements: Define what the software needs to do, and what problem the software is trying to solve. Specify requirements: Describe the requirements, usually using use cases (and scenarios) or user stories. Conceptual model: Identify the important objects, refine them, define their relationships and behavior, and draw them in a simple diagram. We're not going to cover the first two activities, just the last one. These are already explained in detail in Requirements Engineering. Object-Oriented Design The analysis phase identifies the objects, their relationships, and behavior using the conceptual model (an abstract definition for the objects). While in the design phase, we describe these objects (by creating a class diagram from the conceptual model — usually mapping the conceptual model to a class diagram), their attributes, behavior, and interactions.
In addition, we apply the software design principles and patterns, which will be covered in later tutorials. The input for object-oriented design is provided by the output of object-oriented analysis. But analysis and design may occur in parallel, and the results of one activity can be used by the other. In the object-oriented design, we … Describe the classes and their relationships using a class diagram. Describe the interaction between the objects using a sequence diagram. Apply software design principles and design patterns. A class diagram gives a visual representation of the classes you need. And here is where you get to be really specific about object-oriented principles like inheritance and polymorphism. Describing the interactions between those objects lets you better understand the responsibilities of the different objects and the behaviors they need to have. — Other diagrams There are many other diagrams we can use to model the system from different perspectives: interactions between objects, the structure of the system, or the behavior of the system and how it responds to events. It's always about selecting the right diagram for the right need. You should realize which diagrams will be useful when thinking about or discussing a situation that isn't clear. System modeling and the different models we can use will be discussed next. System Modeling System modeling is the process of developing models of the system, with each model representing a different perspective of that system. The most important aspect of a system model is that it leaves out detail; it's an abstract representation of the system. The models are usually based on graphical notation, which is almost always based on the notations in the Unified Modeling Language (UML). There are other models of the system, like a mathematical model, which is a detailed system description. Models are used during the analysis process to help elicit the requirements, during the design process to describe the system to engineers, and after implementation to document the system structure and operation. Different Perspectives We may develop a model to represent the system from different perspectives. External, where you model the context or the environment of the system. Interaction, where you model the interaction between components of a system, or between a system and other systems. Structural, where you model the organization of the system, or the structure of the data being processed by the system. Behavioral, where you model the dynamic behavior of the system and how it responds to events. Unified Modeling Language (UML) The Unified Modeling Language became the standard modeling language for object-oriented modeling. It has many diagrams; however, the diagrams that are most commonly used are: Use case diagram: It shows the interaction between a system and its environment (users or systems) within a particular situation. Class diagram: It shows the different objects, their relationships, their behaviors, and attributes. Sequence diagram: It shows the interactions between the different objects in the system, and between actors and the objects in a system.
State machine diagram: It shows how the system responds to external and internal events. Activity diagram: It shows the flow of data between the processes in the system. You can do diagramming work on paper or on a whiteboard, at least in the initial stages of a project. But there are some diagramming tools that will help you to draw these UML diagrams. To make the mapping from diagrams to code more concrete, a small illustrative sketch follows.
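As a small, hedged sketch (illustrative only, not from the original tutorial), here is how a conceptual-model entity such as a BankAccount, with its attributes, behavior and a simple generalization relationship from a class diagram, might look once the design is implemented in code; all names are assumptions chosen for the example.

// A class diagram entry like "BankAccount: owner, balance; deposit(), withdraw()"
// maps to a class with attributes (fields) and behavior (methods).
class BankAccount {
  constructor(owner, balance = 0) {
    this.owner = owner;     // attribute
    this.balance = balance; // attribute
  }

  deposit(amount) {         // behavior
    this.balance += amount;
  }

  withdraw(amount) {        // behavior
    if (amount > this.balance) {
      throw new Error('Insufficient funds');
    }
    this.balance -= amount;
  }
}

// A generalization (inheritance) arrow in the diagram maps to "extends".
class SavingsAccount extends BankAccount {
  constructor(owner, balance, interestRate) {
    super(owner, balance);
    this.interestRate = interestRate;
  }

  addInterest() {
    this.deposit(this.balance * this.interestRate);
  }
}

// Usage: objects collaborate by sending messages (method calls),
// which is what a sequence diagram would depict.
const account = new SavingsAccount('Alice', 100, 0.05);
account.addInterest();
console.log(account.balance); // 105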
https://medium.com/omarelgabrys-blog/object-oriented-analysis-and-design-introduction-part-1-a93b0ca69d36
['Omar Elgabry']
2017-10-23 10:56:28.039000+00:00
['Software Engineering', 'Analysis', 'Object Oriented', 'Software Development']
5 Reasons Why Night Weaning of a Sleep Trained Baby Isn’t Going Well
There comes a time in a Mom’s life when you are thinking… okay kiddo, I know you can put yourself to sleep without me, you are gaining weight steadily, and you’re getting up there in age… so why am I still feeding you in the middle of the night? When to eliminate night feeds is a very personal decision, and it takes into consideration baby’s weight and weight percentile as well as how Mom feels about night weaning. I’ve talked to Moms of 10-month-olds who cherish the night time feed as a time of beautiful togetherness. I’ve also talked to Moms of 6-month-olds in the 97th percentile for weight, who are tired from chasing after a 2 year old all day and need a solid night’s sleep. Everyone’s needs are different. Night feeding refers to feeding over the 11–12 hours of night time sleep. For example, 6 pm to 6 am would be a night time sleep stretch; therefore, that dream feed that you are doing at 10 pm counts as a night feed. Different sources quote 1–2 night feeds (or more) from 4–6 months of age, and then often 1 night feed until the age of 12 months. Other authors quote 5 months and 15 pounds as their threshold for eliminating night feeding. There is a lot of variety in the literature. In the end, does anyone really know? This must be where mother’s intuition comes into play. Many older babies still eat at night out of habit. They have become accustomed to consuming calories at a specific stretch of the evening. Imagine if one night you woke up at 3 am and had a bowl of cereal, and did the same thing the next night; quickly you would develop a habit causing your body to regularly wake up at 3 am grumbling for that bowl of cereal. If your baby wakes up at the same time every night for that night feeding, this represents a habit. The goal of weaning night feeds is not to eliminate those calories from the baby’s diet, but rather to shift them into daytime consumption. When a baby drops a night time feed, you will notice that they will nurse or eat more at their first morning feed. Many Moms find that when they are feeding in the middle of the night for a baby past 6 months, the baby isn’t really that hungry for the first morning feed. They are coming to the buffet and just having a salad. So you’ve decided to take the plunge and wean off night feeds, and you’ve implemented the strategies you used to sleep train your child. However, you are finding things just aren’t going as planned and your baby is staying awake and not going back to bed as scripted. Perhaps she’s not crying but just staying awake. What is up with that? 5 Reasons your night weaning may not be going well…. 1. Sleep Props Your child doesn’t know how to fall asleep independently and still needs you to soothe her back to sleep. You may think you are putting her down awake, but if you’ve nursed her in the 20 minutes prior to laying her down, chances are that she’s not really “awake”. The nursing or feeding provides a rhythmic comfort which relaxes her and helps her drift off into sleep. You’ll have to first teach her how to fall asleep independently in conjunction with the night feeding elimination or reduction. 2. Age Many experts recommend that babies under 6 months, who do not yet have solids well established, are not able to wean from night feeds because they can’t sustain themselves for long periods between feeds. You’re just ahead of the game trying to increase your nightly sleep hours. 3. Acute Teething From about 5–6 months and onwards, teething seems to pop up at the most inconvenient times.
You may have just started your 4 day plan for night weaning and all of a sudden she’s in the midst of acute teething. The signs of this are hands in the mouth, rubbing her ears, and increased fussiness during the day. A child who is teething requires comfort of some sort to reduce the pain, so night weaning may not be the right time for this. Medication is always an option, but even Advil only lasts 8 hours, so she may be susceptible to pain in the early morning hours. 4. Lack of a clear plan and consistency on your part One night you let her squawk for 20 minutes and she fell back asleep, but the next night you were really tired and couldn’t take it and you caved and fed her after 7 minutes. That’s a confusing message for a baby. Is the buffet open tonight or not? 5. Overtired If she’s not getting enough good naps in during the day or her bedtime is too late, this can lead to baby being overtired. The result of that is more frequent night waking, and difficulty falling back to sleep.
https://medium.com/@sleepcoachsarah/5-reasons-why-night-weaning-of-a-sleep-trained-baby-isnt-going-well-c58ae9898a0b
['Dr Sarah Mitchell']
2019-11-27 10:18:42.127000+00:00
['Baby Sleep Consultant', 'Baby Sleep', 'Sleep', 'Weaning', 'Breastfeeding']
Latest picks: In case you missed them:
https://towardsdatascience.com/latest-picks-modern-recommender-systems-a8c09fea7db0
['Tds Editors']
2021-01-26 14:27:38.879000+00:00
['Editors Pick', 'Towards Data Science', 'The Daily Pick', 'Data Science', 'Machine Learning']
Three Things to Consider Before You Begin a Workout Program
I used to do the same exact thing at the gym, every single week — for years. The same exercises, for the same number of sets. I would simply try to beat the amount of weight I did the week prior, which I would recall by memory. I absolutely didn’t bother to write anything down. Technically, this is a form of progressive overload, and when you’re still a beginner, you can get away with that for a while. Because when you go from doing nothing at all to just about anything, progress will likely be made. But you can only do that for so long before you become stagnant, or worse, get hurt. Once I began implementing proper workout programs, as opposed to just “winging it”, I was able to take my strength & conditioning to the next level. But if I had to do it all over again, I realize now that I could’ve made a lot more progress in a lot less time — if I had followed a properly structured program from the beginning. But no single program is “one size fits all”, and it’s important to know exactly what you’re getting into before you begin one. Consider the following three things before selecting the program that’s right for you. #1 — Assess your fitness level There are many different programs available with varying degrees of difficulty, and the program that you want to do may not be the program that you should do. At least, not at first. The core of every fundamentally sound workout program should be anchored around a few basic movement patterns: a push, a pull, a squat, a hinge, and a lunge. If you’re new to the gym and are unable to properly perform these movements with your bodyweight alone, you shouldn’t jump straight into a program that calls for heavy deadlifts, bench presses, and squats. Rather, opt for a program that teaches you to properly execute these movements in their most basic form — exercises like bodyweight squats and push ups. Once you’ve developed mastery of those movements, then you can gradually progress towards loading those exercises up with additional weight. A progression from a bodyweight push up to a barbell bench press may look something like this: Push ups on an incline bench (note: If you can’t do a pushup, try these instead of doing push ups on your knees. It’s a more natural movement.) (note: If you can’t do a pushup, try these instead of doing push ups on your knees. It’s a more natural movement.) Bodyweight push ups Dumbbell floor press Dumbbell bench press Barbell bench press This progression could be made over the course of several months or even years. It is entirely dependent upon the individual, and how quickly or slowly you progress isn’t important; what is important is that you’re able to “walk before you run”. #2 — Be sure your program aligns with your goals Before beginning a program, have a clear understanding of what and who the program was designed for versus what your specific goals are. If your main goal is to compete on a bodybuilding stage, then a program designed for Olympic weightlifters may not be ideal. I had a friend whose number one goal was to add 50 pounds to his one rep max on the barbell back squat. He purchased the program, ran it for 12 weeks, and was sorely disappointed to find that he was barely squatting 10 pounds more than his previous max by the end of the training cycle. 
What he didn’t read, which was clearly stated in the program’s description, was that this particular program was designed by somebody with chronic lower back issues, specifically for people who need a way to train their legs without incorporating any heavy barbell work — exactly what my friend was trying to improve. Always have a clear understanding of exactly what you’re doing before you invest the time and the money into doing it. #3 — Autoregulate your training Think of any program you’re on as “written in pencil, not in pen”. That means if the program says that Monday’s workout will include 3 sets of 5 reps at 80% of your one rep max (1RM), understand that those numbers can always be subject to change. For example, let’s assume your 1RM on the barbell bench press is 200 pounds. 200*80%=160, so according to the program, you’d be performing 3 sets of 5 reps with 160 pounds. You might feel great on week #1 and were easily able to crush that number. Awesome! But on week #2, maybe you’re overstressed from any of the countless variables that tend to sneak up in our lives at the wrong time — such as work or family issues. Now, maybe 160 pounds doesn’t feel so light. Maybe 160 pounds feels more like 90% than 80%. At this point, you’ve got two options: Option 1: You can “suck it up” and try to use the same weight anyway, which will cause a breakdown in technique and increases the risk of injury. (Hint: This is the wrong option.) Option 2: You can autoregulate. You acknowledge that just because 80% of your 1RM is 160 pounds under ideal circumstances, that doesn’t mean it will feel that easy every day. So you adjust as necessary. Maybe for that specific day, you subtract 10% of the weight and see how 145 pounds feels. The opposite is true for days that 80% feels abnormally light — because autoregulation is a two way street. If 160 pounds feels more like 70%, that’s great! Add 5–10 pounds and push yourself — as long as it can be done without compromising your technique or your safety. The takeaway is this: When you’re training with percentages of your 1RM, that percentage is meant to be used as a guideline; it is not set in stone. When the heavy weight is there for the taking, take it. When you need to back off, do so. Much like saving or investing money, consistent effort will be the key factor in your long term success — regardless of what the exact amount is on a week to week basis.
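If you like to plan your training numbers ahead of time, the percentage math above is easy to script. The following is a minimal sketch under my own assumptions; the function name, the rounding, and the example loads are illustrative only and not part of any published program:
# Hypothetical helper for the autoregulation arithmetic described above.
def working_weight(one_rep_max, target_pct, feel_adjustment=1.0):
    # feel_adjustment: 1.0 = as prescribed, 0.9 = back off roughly 10% on a heavy day,
    # 1.05 = add roughly 5% on a day the prescribed weight feels unusually light
    return round(one_rep_max * target_pct * feel_adjustment)

print(working_weight(200, 0.80))        # 160 -> the prescribed 3x5 working weight
print(working_weight(200, 0.80, 0.9))   # 144 -> roughly the 145 lb back-off described in the text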
https://medium.com/in-fitness-and-in-health/three-things-to-consider-before-you-begin-a-workout-program-cc2616b04135
['Zack Harris']
2020-11-03 19:59:11.934000+00:00
['Wellness', 'Health', 'Self Improvement', 'Life', 'Fitness']
Video: The Human in the Machine — Identifying Fraud Victims to Take On Bias in ML
The Human in the Machine: Identifying Fraud Victims to Take On Bias in ML — Reka Eszter Bodo This is the golden age for data: data points, models, anomalies, patterns and more. We can learn so much from the data, but so can hostile people and organizations, turning people into the weakest links in fraud detection. The elderly and other vulnerable communities are being scammed in phishing attacks, social engineering, and other fraud attacks. One of our biggest challenges in this age is protecting users from themselves. In this talk, Reka shares how attackers exploit those communities’ lack of knowledge, how we can fight it, and what our social responsibility is in protecting them. Originally from Hungary, Réka has been living in Israel for the last ten years. She made aliyah after years of work and volunteering in the Hungarian Jewish community. Studying Talmud at the Hebrew University and working as a vet technician in JPSCA were short detours before she started working for Riskified as a research analyst three years ago. Her job includes analysing patterns and anomalies, training models and optimizing performance by constantly monitoring fraud trends. This event was a collaboration with the Fraud Fighters IL community. *Talk is in English
https://medium.com/riskified-technology/video-the-human-in-the-machine-identifying-fraud-victims-to-take-on-bias-in-ml-bf9fcef6d706
['Riskified Technology']
2020-10-01 08:07:51.261000+00:00
['Machine Learning', 'Data Science', 'Vulnerable', 'Videos', 'Fraud Detection']
Evangelicals today no longer have a laser focus on evangelism and spiritual renewal. As a result, I believe they will fade away as will the very term.”
Evangelicalism in America is nearing extinction due to the movement’s devotion to politics at the expense of its original calling to share the gospel, according to Mark Galli, former editor-in-chief of Christianity Today. “The evangelicalism that transformed the world is, for all practical purposes, dying if not already dead,” Galli said during the “Conversations that Matter” webinar hosted by Baptist News Global Oct. 8. He spoke with BNG Executive Director and Publisher Mark Wingfield in an hour-long webinar that is available for viewing on BNG’s YouTube channel. Now semi-retired, Galli served 20 years at Christianity Today and is the author of a new book, When Did We Start Forgetting God: The Root of the Evangelical Crisis and Hope for the Future. While he has identified at times as Presbyterian, Episcopalian and Anglican, and recently became Roman Catholic, Galli said he has remained true to his evangelical upbringing that emphasized evangelism and spiritual renewal. “I am an evangelical Catholic,” he said. Galli spoke on an array of other topics including the culture war divisions between Americans, the polities that divide churches, and how dialogue may help pastors and others hurdle those barriers. That editorial But he hit on a very high-profile topic, too: his December 2019 Christianity Today editorial describing President Donald Trump as morally unfit to hold office and arguing for his removal. It was published during the Congressional impeachment hearings. “He himself has admitted to immoral actions in business and his relationship with women, about which he remains proud,” Galli wrote. “His Twitter feed alone — with its habitual string of mischaracterizations, lies and slanders — is a near perfect example of a human being who is morally lost and confused.” The piece generated severe backlash from the right, including from the president himself. The viciousness of responses often was hard to bear, Galli said. The one possible thing he would redo, he said, is the headline — “Trump Should Be Removed from Office” — which placed the emphasis on politics, when it was faith that motivated his position, Galli explained. “I was making moral arguments to fellow evangelicals. But it sounded like a political comment.” The editorial was not, as some claimed, an effort to back Trump’s opponent in the 2020 election. It’s just that Trump has “such deeply flawed moral character” that he needs to leave office, Galli said. 
He has no quarrel with conservative evangelicals who acknowledge Trump’s flaws but still vote for him because he lines up on issues important to them, Galli said. There were certainly plenty of those in 2016, according to a pre-election Pew survey that Christianity Today published titled, “Most Evangelicals Will Vote Trump, But Not for Trump.” Rather than citing issues like abortion, religious freedom and support for Israel as rationale for voting Trump, white evangelicals were much more concerned about the economy four years ago, Galli recalled. “I get it. I disagree with their choice, but I respect their wrestling.” On the other hand, he said he does not understand those evangelicals who refuse to criticize Trump on moral grounds, who believe liberals need some shaking up and describe the president in messianic terms. He recalled an anecdote about a pro-Trump Christian describing the president as sitting “at the right hand of the Father” and said of this ideology: “That’s idolatry, clearly and simply.” Demise of evangelicalism To explain the demise of evangelicalism, Galli cited the legacy of Billy Graham, who even in advanced age preached to invite men and women of all races and cultures to Christ. “He was the glue that held evangelicalism together for many years,” Galli said. “An unfortunate symbol of what evangelicalism has become is epitomized by his son, Franklin,” he continued. “Franklin stands for evangelicals on both the right and the left who believe that politics is an essential work of evangelical faith.” “Franklin (Graham) stands for evangelicals on both the right and the left who believe that politics is an essential work of evangelical faith.” One symbol of that politicization is an organization called Evangelicals for Trump. “In describing themselves in that way, they have become just another political interest group, taking the great name ‘evangelical,’ with all its theological and doctrinal and gospel history and meaning and putting it in the service of a political candidate,” Galli asserted. And from his vantage point, the news is no better from the evangelical left. “What’s really troubling to me is that instead of decrying this coopting of the term ‘evangelical’ for political gain, the evangelical left has only mirrored this tragic move when they recently formed a group called Evangelicals for Joe Biden.” Evangelical groups that focus almost solely on social justice and cultural change, instead of sharing the gospel, are contributing to the decline, too, he said. “As a result, we’ve started to let the agenda of the world determine the agenda of the church, and we’ve sidelined evangelism and church renewal as the result.” Galli said he noticed this trend during the hiring process at Christianity Today beginning in the 1990s. Candidates overwhelmingly were interested in cultural analysis, and perhaps one in 10 story ideas pitched was about evangelistic outreach. For the most part, he added, the lack of interest in that founding mission of faith sharing exists across the board. “I am going to go so far as to say that our fascination with social amelioration, and political activism, has watered down the evangelical faith to the point that it looks little different than mainline Christianity,” he said. “We’ve forgotten that the genius of evangelical faith was its singular focus: spiritual renewal. 
‘You must be born again’ was preached to individuals and to whole churches and denominations, from George Whitefield, John Wesley, to Charles Finney, to Dwight Moody to Billy Graham. It was preached in the First and Second Great Awakenings, it was preached by the circuit riders, and at local Baptist revivals every year or many times a year.” Yet, that message is not being preached much nowadays, and there will be consequences, he said. “Evangelicals today no longer have a laser focus on evangelism and spiritual renewal. As a result, I believe they will fade away as will the very term.” Who will the Lord raise up? But Galli predicted the mission of evangelism will continue, possibly under a different name. “In every generation, the Lord raises up some Christians to whom he gives the charism of evangelism and spiritual renewal. What they will be called in the future, I don’t know.” “In every generation, the Lord raises up some Christians to whom he gives the charism of evangelism and spiritual renewal. What they will be called in the future, I don’t know.” Citing the tradition of various orders within Roman Catholicism — Benedictines, Franciscans, Jesuits and so forth — he suggested one way to reclaim evangelicalism is for those called to evangelism to rise up as a holy order across the church universal. With some portion of the church focused on evangelism, then Christians can be involved in the public square, love their neighbors and work for social and political justice, he added. “Christians should not run away from culture but dash right into the middle of it and do whatever it takes to show forth the righteousness of God.” Friendship amid differences Galli explained that he’s developed these insights partly in becoming Catholic, which has provided a different vantage point from which to view evangelicalism and the wider church. Regarding Christian unity, he said: “I don’t know if there is a reason for us to be apart, but it’s hard to get together because no one is willing to give up anything. For example, talking to a Methodist and a Presbyterian reveals little difference, “but Methodists don’t want to give up their bishops and Presbyterians don’t want to submit to bishops.” Divisions within congregations, especially politically driven ones, must be addressed delicately, Galli said, suggesting pastors preach on the Bible from the pulpit and speak with parishioners aside from their sermons about politics. But he acknowledged that even the Bible has been politicized in the current climate. “Unfortunately, everything is perceived as political,” he said. “We just have to remind ourselves there are more important things than politics.” Former Supreme Court Justices Ruth Bader Ginsburg and Antonin Scalia lived that approach, he said. They did not let ideological differences prevent a friendship. “That is something American leaders ought to be promoting,” he concluded.
https://medium.com/@pjhanze/evangelicalism-in-america-is-nearing-extinction-due-to-the-movements-devotion-to-politics-at-the-cbbcdf3943b7
[]
2020-11-19 23:59:33.031000+00:00
['Covid 19', 'Social Media', 'Sports', 'Live Streaming']
Summary of KNN algorithm when used for classification
Introduction: The aim of writing this article is to summarise the KNN or K-nearest neighbors algorithm in such a way that the parameters discussed will be useful in deciding how the algorithm should be used for classification. Let us start with the first parameter, the problem definition. 1) Problem Definition: This is the first and most important parameter as it addresses whether we should use the KNN algorithm at all, since many other algorithms can be used for classification. A key advantage of KNN is that it handles multiclass classification naturally. Therefore, if the data consists of more than two labels, or in simple words if you are required to classify the data into more than two categories, then KNN can be a suitable algorithm. 2) Assumptions: There are no specific assumptions that need to be made concerning the data. For example, logistic regression assumes a linear decision boundary separates the classes; KNN requires no such assumption and works on all kinds of data on which classification is to be performed. The graphical representation of the two parameters discussed is shown below. As we can see, there are more than two classes and the data is also not linearly separable. The new data element will be classified into one of the three classes. 3) Preliminary concepts: The preliminary concepts that one needs to be aware of are nearest neighbors, distance metrics (Euclidean distance, Manhattan distance), and majority vote. These are the basic mathematical concepts on which classification in the KNN algorithm is done. 4) Optimization: As the classification is done by taking into consideration the K nearest neighbors, we need to decide the optimum value of ‘K’, the number of nearest neighbors. By default, the value of ‘K’ is set to 5. One more thing to consider is that, since the algorithm uses majority voting, it is better to choose an odd value for ‘K’. For example, in a two-class problem with ‘K’ set to an even number like 6, there can be a situation where there are equal votes for both classes (3 for each class), which raises the possibility of misclassification. Hence we should choose an odd value for ‘K’. A simple method for deciding the optimum value is to try different values of ‘K’. For this, we can simply make use of a ‘for’ loop. The syntax for the same is shown below.
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

accuracy_list = []
for i in range(1, 105, 2):
    knn = KNeighborsClassifier(n_neighbors=i)
    # Train the model using the training set
    knn.fit(X_train, y_train)
    # Predict the response for the test dataset
    y_pred = knn.predict(X_test)
    print("Accuracy:", metrics.accuracy_score(y_test, y_pred), "for", i)
    accuracy_list.append(metrics.accuracy_score(y_test, y_pred))
The accuracy for all the values of ‘K’ will be printed, based on which we will be able to decide the value that gives the maximum accuracy. Cross-validation can also be used to find the value of ‘K’ that gives the optimum accuracy. 5) Time and space complexity: The time and space complexity is decided by the algorithm we use for choosing the neighbors. There are a total of three algorithms and each one has a different time complexity. 1. Brute Force: Consider there are N samples and D dimensions. The time complexity will be O[D N²]. Thus if we have small dimensions and overall a small dataset, this would take an acceptable amount of time. An increase in the size of the dataset will correspond to an increase in time complexity. 2. K-D Tree: This algorithm improves the time complexity by reducing the number of distance calculations. The time complexity for this algorithm turns out to be O[D N log(N)]. This is better than brute force when the number of samples is large, but an increase in the dimensionality of the data will again cause the algorithm to take a longer time. 3. Ball Tree: This algorithm is used when the data has higher dimensions. The time complexity for this algorithm turns out to be O[D log(N)]. 6) Limitations of the KNN algorithm: Since the KNN algorithm internally calculates the distance between points, the time taken for classification will in certain cases be longer than for other algorithms. It is advised to use the KNN algorithm for multiclass classification if the number of samples in the data is less than 50,000. Another limitation is that feature importance is not available for the KNN algorithm: there is no straightforward way to compute which features are responsible for the classification. Conclusion: By taking into consideration all the parameters discussed, it will be easy to use the KNN algorithm for classification and get optimum accuracy. That’s all for today!
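To make the tuning advice above concrete, here is a small, self-contained sketch using scikit-learn. The iris dataset, the range of K values, and the choice of the ball tree algorithm are illustrative assumptions for the example, not requirements of KNN itself.
# Illustrative sketch: choose K by cross-validation and pick the neighbor-search
# algorithm ('brute', 'kd_tree', or 'ball_tree') explicitly.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # three classes, so a multiclass problem

best_k, best_score = None, 0.0
for k in range(1, 26, 2):  # odd values of K to avoid tied votes
    knn = KNeighborsClassifier(n_neighbors=k, algorithm="ball_tree")
    score = cross_val_score(knn, X, y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score

print("Best K:", best_k, "with cross-validated accuracy:", round(best_score, 3))
Cross-validation averages the accuracy over several train/test splits, which makes the chosen value of K less dependent on any single split.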
https://medium.com/analytics-vidhya/summary-of-knn-algorithm-when-used-for-classification-4934a1040983
['Rohan Kulkarni']
2020-05-23 16:09:27.179000+00:00
['Artificial Intelligence', 'Classification', 'Multiclass Classification', 'Machine Learning', 'Knn Algorithm']
A Journey from Silicon Valley to Death Valley
You chose a difficult path. Don’t expect it otherwise. The road ahead goes like this. It starts small but always seems a lot at the moment. The pleasure satisfies your needs for a little. The minute you have it, you are excited, calm, disappointed. Before it ends, you want a little more. You are looking for something else. More, better, never enough. Somewhere in this loop, you are drawn into the center. Your head spins a little faster, the day goes a little shorter, sleeping gets a little harder. The things closest to you start to drift apart. Space is stretched, time is bent. It is leaving you, so hold on to it. It is decaying, so fix it. Your dream just passed by, so chase it. You try to keep your feet on the ground while your arms are reaching for the stars. Just a little longer, you can make it. You realize you have to let something go to get what you want, so you don’t hold on to anything. It is beyond repair, so don’t bother fixing it. Another dream is approaching, so wait for it to come to you. Everything only matters at the moment. Everything deteriorates outside your orbit. Every dream has an ending. So you drove to the valley of death, climbed the mountain with Dante, played a round of golf with the devil, drank the badwater off the basin, and painted the hills with palettes. You asked the universe a million questions, but the air has no sound. The discomfort, the silence of truth, and nothing but heat
https://medium.com/@wilfredcho/a-journey-from-silicon-valley-to-death-valley-432a36d2c3d0
[]
2020-11-20 17:04:01.377000+00:00
['Growth', 'Travel Writing', 'Therapy', 'Silicon Valley', 'Thoughts']
Try Medium's new short-form story--A Tritriplicata
https://medium.com/illumination/a-tritriplicata-inviting-you-to-try-mediums-new-short-form-story-ed1f438dd21a
['Jim Mcaulay']
2020-12-14 02:00:58.038000+00:00
['Tritriplicata', 'Illumination', 'Jim Mcaulay', 'Humour', 'Humor']
Life Sciences Financings and Commentary #39
Get these analyses to your inbox — https://axialfc.substack.com/ This is a newsletter for new financing events with some simple analysis. More well thought out work can be found at — https://axial.substack.com/ Axial partners with great founders and inventors. We invest in early-stage life sciences companies often when they are no more than an idea. We are fanatical about helping the rare inventor who is compelled to build their own enduring business. If you or someone you know has a great idea or company in life sciences, Axial would be excited to get to know you and possibly invest in your vision and company . We are excited to be in business with you — email us at info@axialvc.com Happy holidays. Life Sciences Financings and Commentary #39 — December 19, 2020 — December 24, 2020 Financings Number of deals: 4 & Total capital invested: $134.3M - Coagulo Medical Technologies raised $6.5M from 20/20 HealthCare Partners, Sands Capital Ventures among others to develop products that detect and measure the effects of anticoagulants in whole blood. - ONL Therapeutics raised $46.9M led by BIos Equity Partners to develop new medicines as photoreceptor protective agents to treat patients with retinal disease. ONL is targeting the FAS apoptotic pathway. - Peptilogics raised $35.4M led by Presight Capital to use its computational drug development platform for antibiotics focusing on multidrug-resistant bacterial infections. - Provivi raised $45.5M led by Vivo Capital to develop biopesticides for agriculture, commercial, household products, and public health pest management. Based on technology from the Arnold Lab at Caltech, the company manufactures carbene/nitrene transferase enzymes that activate diazo and azide compounds to generate iron-carbenoids and iron-nitrenoids that are used to produce the pheromones Provivi is commercializing. Exits Number of exits: 2 & Total exit value: Over $1B - Inhibikase Therapeutics went public raising $18M to develop new medicines targeting kinases for Parkinson’s Disease and other neurodegenerative diseases — https://sec.report/Document/0001104659-20-085909/ - Jacobio Pharmaceuticals raised ~$174M in an IPO on the Hong Kong Stock Exchange to develop small molecule drugs that bind allosteric sites. 
The company is focused in China with their lead program targeting SHP-2 — https://endpts.com/months-after-striking-shp2-deal-with-abbvie-jacobio-pulls-in-174m-from-hong-kong-ipo/ Deals Number of deals: 7 & Total deal value: Over $6B - Agios Pharmaceuticals is selling its oncology business including its approved AML drug, Tibsovo, to Servier Pharmaceuticals for $1.8B in cash and $200M in milestone payments for the glioma drug candidate, vorasidenib — https://www.fiercebiotech.com/biotech/agios-offloads-oncology-unit-to-servier-1-8b-deal-zeroes-genetically-defined-disease - Merck entered into a collaboration and licensing agreement with Janux Therapeutics in a deal worth up to ~$500M per target (for 2 targets) in upfront/milestone payments to Janux along with royalties to develop new T-cell engager immunotherapies relying on Janux’s Tumor Activated T Cell Engager (TRACTr) platform — https://www.fiercebiotech.com/biotech/janux-pairs-up-merck-for-a-1b-plus-t-cell-engager-deal - Nektar Therapeutics sold their royalties on Adynovate (for Hemophilia A) and Movantik (for opioid-induced constipation) to Healthcare Royalty Management for $150M — https://www.prnewswire.com/news-releases/nektar-therapeutics-announces-agreement-with-healthcare-royalty-to-sell-adynovate-and-movantik-royalties-for-150-million-301197350.html - Peptidream announced a research collaboration with Amolyt Pharma to use the former’s peptide design platform to test a growth hormone receptor antagonist (GHRA) for acromegaly - https://www.globenewswire.com/news-release/2020/12/08/2141298/0/en/Amolyt-Pharma-Announces-Research-Collaboration-with-PeptiDream.html - Regenxbio sold a portion of its royalty rights on Zolgensma, an AAV gene therapy approved to treat spinal muscular atrophy (SMA), for $200M to Healthcare Royalty Management (HRM). Once 1.3x and 1.5x of the $200M is paid back to HRM by November 7, 2024 or afterwards, respectively, the royalties revert back to Regenxbio — https://endpts.com/riding-on-rosy-estimates-of-zolgensma-sales-regenxbio-sells-part-of-its-royalty-for-200m/ - Sosei Heptares announced a deal with GSK worth up to $481M ($44M upfront) to develop G protein-coupled receptor (GPCR) agonists for inflammatory bowel disease (IBD) and related conditions. The agreement gives GSK global rights to a portfolio of GPR35 agonists discovered by Sosei — https://www.bioworld.com/articles/501607-sosei-heptares-gsk-ink-481m-deal-tackling-gpcr-target-for-gi-disorders?v=preview - Skyhawk Therapeutics announced a deal worth up to $2.2B ($40M in an upfront payment) with Vertex Pharmaceuticals to develop small molecules to target splicing — https://www.businesswire.com/news/home/20201222005032/en/Skyhawk-and-Vertex-Establish-a-Strategic-Collaboration-to-Discover-and-Develop-Novel-Small-Molecules-that-Modulate-RNA-Splicing-for-Serious-Diseases I used to play football a long time ago — I had some decent success mainly because of my teammates, some of whom went on to play in the NFL. Every practice, every game was an existential fight. I’m not that athletic so every year some 6’4 monster would show up trying to take my spot. Similarly, all of these companies are fighting for their right to exist whether they raised $1M or $100M. Building a great business requires passion and focus. A really powerful and useful framework to lead groups of people and win was developed by Bill Walsh — https://www.youtube.com/watch?v=fd_CnRkKOqA
https://medium.com/@axialxyz/life-sciences-financings-and-commentary-39-d2ee04136556
[]
2020-12-24 20:28:06.154000+00:00
['Venture Capital', 'Biotechnology', 'Investing', 'Healthcare', 'Medicine']
Explicit Push For Equality Leaves Racists Angry
Explicit Push For Equality Leaves Racists Angry “What If I Said I Want A White Editor?” Photo by Steven Van on Unsplash It repeatedly comes up: find out someone is racist, they are removed/fired from their position. However, racism is more than just slurs and police brutality, it’s the constant underlying preferential treatment and deference given to white people. It’s the fact that proving discrimination is damn near impossible because this society needs the white hoods and the slurs to actually say, “that was racist” with their whole chest. Without that, there is ambiguity (at least on the part of white people) and they will focus on the “intention” of the white individual rather than the demonstrable impact that their actions create. The most recent case was the instance shared by Nicole French on Twitter showcases this perfectly. A Black editor, Rikarlo Handy, was looking for other Black editors and white editors decided to lose their collective white shit. The issue is that white people understand the game of racism. They know that implicit racism is rarely effectively challenged because they’ve been doing it, and thriving, for decades. Impact is rarely the focus of any argument regarding racism — it is always about intent because that leaves them protected. When a field is largely white, that is all the proof needed to demonstrate racism at play. Racism is multifaceted (bias, discrimination, prejudice) and this all snowballs into white people rationalizing hiring white people for a job over their Black and people of color counterparts while claiming there is no racism. These white editors cried anti-racism (a myth) and made it seem like they’re being hunted and driven out because someone was looking specifically for Black ediotrs who are not given the access granted to white people. They all said “imagine if we said we only wanted to hire white editors”. White people, we don’t have to imagine what a world of mostly white editors look like — we’re in it. Nathan Lee Bush, a white editor, went on a rant that by asking specifically to hire Black editors, who are underrepresented, is discriminatory against him and all white people, despite the inequalities. Nathan Lee Bush has worked with countless businesses (Nike, Capital One, Youtube, Old Navy, Morgan Stanley, to name a few) and his mindset is that of a racist scared that they will actually have to earn solely on their skill and character rather than implicit racism lining their pockets. There is a difference between equality and equity. Equality means henceforth all will be fair and ignores the generational racism that curtailed wealth and advancement for those who were not white. You can’t be 5 feet ahead in every race while putting ankle weights on the other runners behind you, then say “to make it fair we will take of your weights” while still leaving them 5 feet behind you. Everyone must start at the same line, with no biases holding them back. That is equity and it’s precisely why white people like Nathan Lee Bush, Russ Blaise, Jared Tarlow (worked with Shondaland, Greys Anatomy, Scandal, ABC), Jonathan Launer, Jon Budd, Simon Hutchins, David Beerman, Antonio CT, Jeff Samet, and Marc Fisher (MTV) are scared, because if everyone starts at the same line, the game is no longer fixed. So how will they win?
https://darkskylady.medium.com/explicit-push-for-equality-leaves-racists-angry-117c2f2894b7
[]
2020-06-20 19:17:16.255000+00:00
['Race', 'Equality', 'Culture', 'Media', 'Racism']
Post Alpha Release Updates!
After a tremendously positive response to the Alpha PINT App, Beta development has kept us on our toes. What have we been up to since the Alpha Release? · Our core developers worked on every piece of feedback received during the Alpha Launch campaign, and our users are happy with how things look now. · We are on the verge of the Beta Launch. · We worked to enhance PINT App security and got in touch with reputed organizations to put PINT security through certified security tests. Our team’s confidence is skyrocketing with a positive go-ahead on security standards from reputed organizations. · Leadership and developers brainstormed over the need for a PINT Peer to Peer Marketplace, and the PINT Peer to Peer Marketplace has reached the first stage of development. The successful PINT Alpha launch kept us busy with Venture Capitalists and Agile Investors. We have a huge announcement coming ahead. Stay tuned!
https://medium.com/bitfia/post-alpha-release-updates-745a0b945a25
['Bitfia Labs']
2018-04-13 09:24:15.172000+00:00
['Startup', 'Cryptocurrency', 'Bitcoin', 'Blockchain']
McConnell Blocks Vote on $2,000 Stimulus Checks
Senate Majority Leader Mitch McConnell blocked Democrats’ effort to quickly hold a vote on a stand-alone measure to increase stimulus payments from $600 to $2,000 on Tuesday. This move comes just one day after the House of Representatives passed the bill with bipartisan support. Despite requests from Senator Minority Leader Chuck Schumer and Senator Bernie Sanders to fast-track the bill, McConnell objected. While he did not explain why he objected, he signaled his support of President Donald Trump’s demands to increase stimulus checks, repeal Section 230, and investigate election fraud. “Those are the three important subjects the president has linked together. This week the Senate will begin a process to bring those three priorities into focus,” McConnell said on the Senate floor on Tuesday. It remains unclear exactly how he will proceed, but tying desperately needed stimulus checks to baseless claims of election fraud and limiting legal liability protections for tech companies will make the bill that much more difficult to pass. As it is, Trump’s delayed signing of the COVID-19 relief package cost many Americans a week of enhanced unemployment benefits. In response to McConnell’s objection, Sanders threatened to delay Wednesday’s vote to override Trump’s veto of the annual defense authorization bill unless McConnell allows the Senate to hold a vote on the stimulus bill this week. “The leaders of our country, President Trump, President-elect Biden, Minority Leader Chuck Schumer, the Speaker of the House Nancy Pelosi are all in agreement,” Sanders said on Tuesday. “We have got to raise the direct payment to $2,000. Do we turn our backs on struggling working families or do we respond to their pain?” Until McConnell allows the Senate to vote on the bill to increase stimulus payments to $2,000, Americans making under $75,000 a year will be sent stimulus checks at the $600 level. The payments are expected to start to going out at some point this week. If the Senate approves the stimulus increase, those who already received the initial $600 will be sent an additional $1,400. “Let me be clear: If Senator McConnell doesn’t agree to an up or down vote to provide the working people of our country a $2,000 direct payment, Congress will not be going home for New Year’s Eve,” Sanders said in a statement. “Let’s do our job.”
https://catherineann-caruso.medium.com/mcconnell-blocks-vote-on-2-000-stimulus-checks-5e47815ef276
['Catherine Caruso']
2020-12-29 20:28:02.572000+00:00
['Equality', 'Justice', 'Economy', 'Society', 'Politics']
An Introduction to Redis
Redis as an in-memory database We all know that in every computer we have two types of memory: non-volatile or secondary memory (e.g. a hard disk) and volatile or primary memory (e.g. RAM). Whenever we open an application, the data is fetched by the CPU from secondary memory and fed into RAM to make further access faster, because fetching from secondary memory takes more time than fetching from RAM. Normally our RDBMS (Relational Database Management System), or any other data which we want to persist permanently, is stored in secondary memory, but fetching from it incurs some delay. To avoid that, we can store the data in RAM; using primary memory as a database in this way is called an in-memory database. But, as everyone knows, it is transient because RAM itself is volatile. Access is much faster from RAM than from secondary memory. An in-memory database is nothing but data stored in primary memory, that is, RAM or cache (we will discuss this later). We can use Redis the same way as other databases, storing key-value pairs, sets, etc., except that the data is not persisted to secondary memory by default. There are ways to persist data to secondary memory with Redis, but that is not in our scope today.
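As a quick illustration of the key-value usage mentioned above, here is a minimal sketch with the redis-py client. It assumes a Redis server is running locally on the default port and that the redis package is installed; the key names are made up for the example.
# Minimal sketch: store and read back a key-value pair and a set in Redis.
# Assumes `pip install redis` and a local Redis server on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Simple key-value pair, held in memory by Redis
r.set("user:42:name", "Kumaran")
print(r.get("user:42:name"))          # b'Kumaran' (values come back as bytes)

# A set data structure
r.sadd("user:42:tags", "redis", "in-memory", "database")
print(r.smembers("user:42:tags"))

# Keys can be given a time-to-live, after which they disappear
r.set("session:abc", "token", ex=60)  # expires in 60 seconds
print(r.ttl("session:abc"))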
https://medium.com/@kumaran-kugesh/an-introduction-to-redis-b5f7ac2c9484
['Kumaran Kugesh']
2021-04-25 14:32:23.391000+00:00
['Message', 'Redis', 'Database', 'In Memory', 'Software']
Cryptography 101
The blockchain technology, which originated with Bitcoin, consists of a combination of existing technologies. Essential concepts are asymmetric encryption (see Merkle, 1980), digital signatures (see Haber & Stornetta, 1997), the proof-of-work algorithm (see Back, 2002) and the concept of decentralization, which Nakamoto (2008) combined for the first time into a pure peer-to-peer digital cash system. With the Bitcoin whitepaper, Nakamoto solved the double-spending problem of digital assets, which had never been solved before. This involves the use of cryptographic hash functions for incentive design, validation and transaction concatenation in blockchain systems. Furthermore, asymmetric encryption forms the basis for transparency and privacy within the blockchain. A German version is available at ledgerlabs.li Symmetric and asymmetric encryption Asymmetric cryptography, also known as public key cryptography, is one of the key components of blockchain technology. This form of cryptography allows anyone to verify the integrity of transactions or securely store digital assets. To understand the advantages of asymmetric encryption, a comparison with its counterpart, symmetric encryption, is helpful. As the name suggests, symmetric encryption uses the same key to encrypt and decrypt data. To enable private communication, all involved parties must have the same key. In practice, this means that the key must be shared with the other party in secret meetings, sealed envelopes or trusted channels. Symmetric encryption therefore scales poorly and is vulnerable to attacks when the key is transmitted. Asymmetric encryption was developed to counteract the disadvantages of symmetric encryption and is an important cryptographic building block in the blockchain context. Asymmetric cryptography solves the key-distribution problem that existed in symmetric encryption when the key was transmitted. Instead of passing on the “password” to decrypt a message, a combination of two keys is used — a public key and a private key. This is often referred to as private and public key encryption. The private key is only known by the owner and must be kept safe. The public key, on the other hand, can be passed on to anyone without hesitation. Both keys are cryptographically linked to each other. The following example shows how a secure message can be transmitted using the key pair, without having to send a shared password to the recipient for decryption. Alice shares her public key with Bob. Bob uses Alice’s public key to encrypt his message. The message sent by Bob can now only be decrypted by Alice, since only she holds the private key needed for decryption. One application of asymmetric cryptography is the storage and transmission of cryptocurrencies. The public key is the address where the coins are located and can be used to encrypt data. The private key, on the other hand, is equivalent to the password that is needed to decrypt the data again. The private key therefore allows access to your own assets on the blockchain and must be kept safe in any case. In addition to the safekeeping of assets, asymmetric encryption is also used to sign transactions concerning these assets. Hash functions Hash functions are cryptographic one-way functions that convert an input (text, documents, etc.) into a fixed-length alphanumeric code (hash) as an output. Nowadays, they are used to sign emails or digital documents. 
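To make the two building blocks described above tangible, here is a small Python sketch. The Alice/Bob key pair and message are invented for illustration, the asymmetric part assumes the third-party pyca/cryptography package is installed, and the hashing part uses only the standard library.
# Sketch of the two primitives discussed above (asymmetric encryption and hashing).
# The RSA part assumes `pip install cryptography`; hashlib is standard library.
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# --- Asymmetric encryption (the Alice/Bob example) ---
alice_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public_key = alice_private_key.public_key()  # this part is shared openly with Bob

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = alice_public_key.encrypt(b"Hi Alice, it's Bob", oaep)  # Bob encrypts
plaintext = alice_private_key.decrypt(ciphertext, oaep)             # only Alice can decrypt
print(plaintext)

# --- Hash functions: same input gives the same hash, a tiny change gives a new one ---
print(hashlib.sha256(b"blockchain").hexdigest())
print(hashlib.sha256(b"Blockchain").hexdigest())
Anything encrypted with Alice’s public key can only be read with her private key, which is exactly the property that lets a blockchain address be shared openly while the owner keeps control of the assets.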
In addition, they form the basis for the proof-of-work algorithm used to validate transactions on the blockchain (Voshmgir & Kalinov, 2018). As shown in the figure, even a slight change in the function input leads to a completely different output. Nevertheless, using the same input and hash function always generates the same output. Hence, it is easy to check whether the hash (output) belongs to a certain data set (input). However, the hash value (output) cannot be traced back to the data set (input). Therefore, it is called a one-way function. Graphic in German from ledgerlabs.li Hash functions like SHA-256 (Secure Hash Algorithm) are usually publicly specified, which means that anyone can try to break the algorithm. Hash algorithms that have been around for a long time and have not yet been successfully broken can be considered secure. TIP: An interactive tool for a better understanding of hash functions is provided by Anders Brownworth on the website https://anders.com/blockchain/hash Sources Back, A. (2002). Hashcash — A Denial of Service Counter-Measure. http://www.hashcash.org/papers/hashcash.pdf Haber, S. & Stornetta, W. S. (1997). Secure names for bit-strings. Proceedings of the ACM Conference on Computer and Communications Security, 28–35. Merkle, R. C. (1980). Protocols for public key cryptosystems. Proceedings of the IEEE Symposium on Security and Privacy. https://doi.org/10.1109/SP.1980.10006 Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System [Whitepaper]. Retrieved November 3, 2019 from https://bitcoin.org/bitcoin.pdf Voshmgir, S., & Kalinov, V. (2018, September 11). Cryptography & Blockchain Infographic — BlockchainHub. Retrieved February 3, 2019, from https://blockchainhub.net/blog/infographics/cryptography-blockchain-infographic/
https://medium.com/ledgerlabs-li/cryptography-101-a8b2c4210aba
['Severin Kranz']
2020-05-02 18:32:45.849000+00:00
['Bitcoin', 'Blockchain', 'Asymmetric Encryption', 'Hash', 'Cryptography']
How to use SCSS to improve your CSS
CSS is super important to making your website look better than an interactive PowerPoint presentation, but writing custom CSS can be a drag. That’s where SCSS comes in! Designing stuff will still be a drag, but it does make things a little cleaner and shorter! So here we go. To add a Sass stylesheet to an existing React.js app, perform the following steps:
npm install node-sass --save
Then rename App.css to App.scss and in App.js change the import from './App.css' to './App.scss'. SCSS improvements over normal CSS: 1) You can use variables, which are declared using $.
$main-color: red;
$second-color: blue;

#box {
  color: $main-color;
  background-color: $second-color;
}
This will result in a blue box with red text. Of course you could just have manually typed in red for the color and blue for the background color, but if you are making a site with a consistent theme and wish to change the color scheme, instead of going through and changing the color and background color of everything, you can just change the variables. 2) You can nest descendants using {}.
#div1 {
  background-color: red;
  #p1 {
    color: blue;
  }
}
This is the equivalent of
#div1 { background-color: red; }
#div1 #p1 { color: blue; }
It is a bit shorter because you don’t have to type the parent again in order to access its descendants, and it saves more time the further down you have to reach to get those descendants. Also it looks a little more intuitive, and you will have everything grouped together, so you don’t need to search through a list of different items to try to find what you wrote for a particular descendant. 3) You now have access to loops and arithmetic.
$repeat: 4;
@for $i from 1 through $repeat {
  #line-#{$i} {
    opacity: .1 * $i;
  }
}
This is the equivalent of writing
#line-1 { opacity: .1; }
#line-2 { opacity: .2; }
#line-3 { opacity: .3; }
#line-4 { opacity: .4; }
This gives the elements labeled line-1 through line-4 increasing opacity. The more things you have to loop through, the more this will save you time. If you wanted to have, say, a bunch of boxes of different colors or a bunch of objects differing in shape, a loop will save a lot of time compared to manually coding them all. 4) There are mixins.
@mixin main-item($color) {
  height: 500px;
  width: 500px;
  color: $color;
}

#item1 {
  @include main-item(red);
}
This allows a fixed set of properties to be defined (which can also include properties that vary based on input) and easily assigned to an item. This makes it so that if you have many items that you want to share the same property (such as size or color), or to have similar properties aside from specific variables, you can simply write a mixin and include it in those items (with the appropriate argument passed in). 5) Mixins can also include booleans, loops, if-else statements, and arithmetic.
@mixin color($width) {
  @if $width > 50px {
    background-color: red;
  } @else {
    background-color: blue;
  }
}

#button1 {
  @include color(70px);
}
This allows you to include the color mixin for an item, such as a button, which will automatically give it a background color based on its width. Since button1 passes in 70px, which is greater than 50px, its background color will be red. 6) Functions.
@function size-doubler($size) {
  @return $size * 2;
}

#box1 {
  height: size-doubler(50px);
  width: 100px;
}
A function is called in place of where a number would be, and its return value will act as the number; here #box1 gets a height of 100px. While almost anything you can do with SCSS you could have done with normal CSS anyway, SCSS allows some shortcuts to make life easier.
SCSS makes your code easier to read, more organized, less repetitive, and easier to change. Hopefully SCSS proves to be a convenient tool for you in the future when you are styling your web page!
https://medium.com/future-vision/how-to-use-scss-to-improve-your-css-c6aad6ddc6cc
['Nicky Liu']
2019-05-08 18:09:38.987000+00:00
['Scss', 'CSS']
Introducing The Divi Project
Crypto Made Easy We are excited to present the Divi Project to the world: a complete cryptocurrency solution for mass adoption and ease of use. We will be announcing the details of the Divi Project token sale in a few days. Stay tuned, you won’t want to miss out on this. Make sure to visit our website @ https://www.diviproject.org/ and read our White Paper here.
https://medium.com/diviproject/introducing-the-divi-project-fa912fae75d
['Divi Cryptocurrency']
2017-10-19 18:36:18.409000+00:00
['ICO', 'Blockchain Technology', 'Token Sale', 'Divi Project', 'Divi']
So, How Do You Really Feel About #loveinthetimeofcorona?
I saw a great post on social media the other day from a colleague with a prominent picture of him giving ‘us’ the middle finger. As this post was from a colleague I admire, I was intrigued. The post went on to say how important it is to allow all of our emotions during these strange times. I could not agree more. This is something I have certainly had to unlearn. That is, the need to be happy all the time. As we learn to embrace all of our emotions — from anger and resentment to sadness and hopelessness — we give ourselves permission to be fully ourselves. This is my definition of personal freedom. Today, I’m thrilled to share with you this excerpt below from Chapter 11 of my book Belonging: Overcome Your Inner Critic and Reclaim Your Joy. Please read to the end and take on the practice this week and let me know how it goes! Calls with clients are often like a game of chess, except that I play on my client’s team. We never know what we will get, and sometimes we have to move back to gain some perspective and move forward again. This was certainly the case the day I screamed at the top of my lungs while punching a pillow in front of my computer screen, to model for my client what I wanted him to practice. He looked at me as if I had gone crazy when I said, “Now, your turn.” I remember that day so vividly. Bob remained mute on our video call. We had worked together for nearly a year at this point and had developed lots of rapport, but he could not bring himself to express anger. This is a common experience for my clients, as it had been mine years before. Through our work together, we discovered that Bob’s unexpressed anger had blocked him and caused him to try healing his unmet needs through his relationships and an overdeveloped sense of responsibility. We discovered that it was hard for Bob to express any emotion, not simply anger. While his sense of responsibility led him to be incredibly reliable and a go-to person for many, it also left him susceptible to being taken advantage of by those in his life. I gave Bob an exercise before our next session to verbalize and release his anger as access to finding his voice, ability to say no, and ability to begin taking care of himself again. In The Dark Side of the Light Chasers, Debbie Ford explains that “screaming is a good way to let out pent-up emotions. Often our voices have been literally suppressed and we can’t use our entire vocal range. When you allow yourself to scream completely, with every ounce of your being, you can clear out repressed energies. If you don’t want to disturb anyone, scream into a pillow. If you have never screamed, or if you grew up in a home where there was a lot of screaming, you may have decided screaming was wrong. Now we’re back to ‘what you can’t be with won’t let you be.” So scream. It’s important to have access to your entire range of emotions. At first, Bob resisted, still horrified that I would recommend this, certain he could not express anger. I told him this is a common experience and that most individuals find great relief in leaning into the exercise anyway. He agreed and left, willing to attempt the exercise over the following week. The next week Bob called and shared that he attempted the exercise twice and couldn’t do it. I reminded him of our conversation from the past week and his commitment to discovering his voice and asked him if he was willing to practice with me on our call again. I asked him to go and collect a pillow and then practice punching a pillow as he screamed out loud. 
Often, the act alone of verbalizing our constrained emotions, with words or simply sounds, creates an energetic release. After some posturing, Bob let go and began punching the pillow, getting continually louder. When he finished, I asked him what the experience was like, and he said he felt lighter. We celebrated his efforts and I suggested he could still get louder and asked if he was willing to continue practicing. When I next connected with Bob, he happily reported that he had gotten so frustrated at one of his family members that he did some anger release work in the car. He shared that the act alone allowed him to put a boundary in place with his brother and say no to something he would have easily acquiesced to before our work together. The other day, Bob called me glowing and shared that he had met a woman who was nothing like any of the women he had dated before. She treated him wonderfully, and he could be vulnerable with her and ask for what he wanted and needed. Clients who are up for this deep work frequently share that they’re scared to uncover their deep well of emotions, afraid of what they might find at the bottom. I often tell them that the greater challenge is likely not that they can’t express their emotions, but rather that they are unable not to. I question clients about their experience with crying or exploding with anger. Their recollection of expressing these emotions occurs as outbursts to others even if they do not realize it for themselves. I am reminded of my client Eve, who cried regularly on our coaching calls. Often without due cause or reason, Eve would cry on a nearly weekly basis for the first six months we worked together. As Eve began practicing expressing her feelings and needs in the moment, her tears dried up. In Difficult Conversations, the authors explain that one common explanation for habitual anger or tears is quite the opposite of what we might expect. They write, “We don’t cry or lose our temper because we express our feelings too often, but because we express them too rarely. Like finally opening a carbonated drink that has been shaken, the results can be messy.” While it’s true that it may get messier and more confronting before it gets easier, taking on and releasing our unexpressed emotions is necessary to return to a normal emotional state. Our ability to get comfortable with and access our own anger allows us to be with others’ anger as well, without taking it personally. This has had a significant impact on my own life. Normalizing my own anger and giving it permission to express itself without making it bad or wrong allowed me to more fully show up self-expressed in the presence of others without fear of triggering them or causing them anger. Healthy anger has allowed me to be free in so many ways; this is my greatest desire for you. TAKING ACTION: GIVING PERMISSION TO ALL THE PARTS OF YOU One of the most frequent responses I receive from clients regarding anger release work is that “they’re not angry people.” If that is you, then I’m even more excited for you in leaning into this exercise than the readers who have access to their own anger. Now there is most certainly both healthy anger, which gives us access to new parts of ourselves and to our power, and also unhealthy anger, which has us smearing our own unexpressed emotions and needs all over other people. Most anger is about our own unresolved emotions and wounds, not so much about other people. Today’s practice is about giving yourself permission to express your healthy anger. 
You could practice this using multiple methods. To be super clear, this is about you expressing your anger to yourself versus someone else. Here are a few methods to consider: Practice punching a pillow for two minutes while yelling out loud. It doesn’t matter if you yell words, sounds, profanities, or people’s names, just as long as you use your voice. The louder you get, the better. Punch and yell constantly without stopping for two minutes. I recommend using a timer so you can fully lean into the exercise without needing to mind the clock. A simpler version of this is to yell in your car with the windows up and music on or not, just as long as you yell. I recommend keeping the volume low, so you can hear the sound of your own voice. Conversely, you can practice hitting a loose-leaf notebook on the corner of a table or the back of a chair while yelling. If you’re looking for an ongoing practice or structure to embody your anger, consider a boxing class, martial arts, or kickboxing class. Do not relate to this practice as optional today. Take a minute and consider what structures or logistics you need to put in place so you can take on this exercise today. Do you need to borrow the car from your partner this afternoon? Do you need to find an old notebook? Take a minute and jot down what logistics you need to handle to follow through on this exercise. After taking on this exercise, journal how it felt in your body. What were you present to? How did you experience that heightened energy or response in your body, your feelings, your thoughts, your relationship to possibility? In this article series, I share excerpts and stories from my book, Belonging: Overcome Your Inner Critic and Reclaim Your Joy. I hope you enjoyed this post — if you enjoyed what you read, let’s connect. You can reach me via email, my website, or connect with me on social: Instagram, LinkedIN, or Facebook. You can also find my book on Amazon — here is the link to buy it.
https://medium.com/belonging-overcome-your-inner-critic-and-reclaim/so-how-do-you-really-feel-about-loveinthetimeofcorona-3522abdb2f02
['Catherine A. Wood']
2020-04-17 12:18:00.853000+00:00
['Emotional Wellbeing', 'Feelings Become Words', 'Anger', 'Emotions', 'Feelings']
Friday the 13th wear a mask…
There’s more to laugh at here: AndyAndersonCartoons.com If this cartoon made you laugh, share the laughter with a friend, and follow AndyAndersonCartoons publication for new cartoons each week.
https://medium.com/andy-anderson-cartoons/friday-the-13th-wear-a-mask-cartoon-8ce5262b2721
['Andy Anderson']
2020-11-13 13:37:11.532000+00:00
['Humor', 'Face Mask', 'Comics', 'Friday The 13th', 'Funny']
Meet the Directors: Johanna Lee
Meet the Directors: Johanna Lee A closer look at Women Who Code DFW’s Directors Photo by Estée Janssens on Unsplash How did you get into the industry? I got into the tech industry by getting a job at a tech startup as a digital marketing analyst. I had the unique ability to work at a SaaS (software as a service) company where the platform I used on a daily basis to do my job was also something I actively speced out new features and worked with engineers to see the finished feature through to the end. Overall, I liked what I was doing, but I ultimately wanted more control and input in solving the problems I was finding. This led me to eventually shifting into a Junior Front End Engineer role where I was mentored on a part-time basis while I completed classes and worked on personal projects to establish my own version of developer legitimacy. What did you struggle with most when getting into the industry? I find this question a bit funny because my journey was uniquely difficult but also because I had a lot of special circumstances come together at once. Whether it was making the real decision to end my previous role to prioritize becoming a developer, all the way to a friend agreeing to mentor me. I struggled the most with considering myself to be a “real” developer. I would constantly google: “How to become a real dev?”, “What’s the best way to transition into an engineering role?”, and so on. My biggest struggle was — and let’s be honest, still is — my self-doubt on whether I could be viewed as a legitimate software engineer despite not getting a college degree in this field. Spoiler: I am good enough, and so are the other developers without a college education in computer science. How did you get involved in Women Who Code? I got involved in Women Who Code after looking for Tech Meetups and realizing there were not many in close proximity for me to attend. I saw a unique need for tech events that were friendly for all genders and experience levels in my city, and so, I set out to attend Women Who Code meetups some two hours away with the ambition of one-day hosting events that would give people the access to quality events that primarily legitimize women in technology. You can say I dove headfirst into the organization and I’ve never looked back since. Why Women Who Code? (Why do you do what you do for WWCode?) Wow, talk about a question that could end in me writing a whole novel! Women Who Code was not the only tech meetup I attended, but without a doubt, it has been the most accepting, well put together, and empowering group of women on a local, national, and international level. They have a mission of empowering women to be representative in tech and they do it in a way that not only encourages women to enter the field but gives them a community to belong and thrive in. Not to mention, encourages men to be great allies! I’ve been the only woman dev in an office, and it is entirely intimidating. Even when the office is not discriminatory or purposefully exclusive, there’s no way to evade the inevitable factor that you’ll feel like the odd woman out. It’s a community of women and other allies who actively advocate for women by giving advice for everything from job interviews to salary negotiations. We organize regular networking events to strengthen soft skills as well as give speaking and teaching opportunities that in turn empower our entire network with a new skillset. It’s energizing to see so many women grow in their careers as a direct result of being a part of Women Who Code. 
This org has given me some of my best friends. This org makes me constantly reassess how to be a better person and leader. I am hopelessly devoted and enthralled to be a part of this organization. Johanna Lee standing in a black dress in front of a floral backdrop How has Women Who Code helped you in your career? Women Who Code has taught me that being a developer is more than just a job — it’s a career. It’s given me the confidence to value myself as a competent programmer even when I personally face imposter syndrome. Technology is wild. Like come on, the device you’re reading this on wasn’t even real some decades ago. Programming can seem entirely magical at times and having a group like WWCode helps me embrace the uncertainties of new frameworks and celebrate the wins of making words render on web pages. What is something you wished you would have known when you started your first dev job? I wish I would have known I don’t have to submit the perfect code the first time around. I mean, I was told this, but I didn’t really let it sink in. Code review is a beautiful learning opportunity and questions on your code (as long as you can answer them ;) ) actually helps to make you a better developer. What is something that motivates you? Seeing people I’ve helped in some way succeed beyond their initial goals. I know that’s somewhat broad, but I’m entirely a sucker for being motivated by seeing the success of someone whose journey I’ve been a part of. Simply put, it is everything. Tell me about a time you made a mistake. What did you learn? I would say the biggest mistake I’ve made in the past is putting my work-life ahead of my personal hobbies. Hobbies are healthy and amazing ways to de-stress and overall, make you a better employee and person. I learned that it is essential to make time to pursue these passion projects and interests and not let them fall away just because work is in a constant state of deadlines. What’s next for you? What are some long-term goals of yours? Become a solid full stack developer and Vue SME. Well, maybe I’ll never be a Vue.JS expert, but I do want to become very strong in the framework and work on a few personal projects. While I can do full-stack in terms of JavaScript, I hope to sharpen my C# ( :) ) skills over the next year or so to become a more proficient developer as a whole. What’s something you’re really proud of? I’m really proud of seeing where the new Fort Worth branch has come since August 2018. I’ve had countless people help get it to where it is, but I’m very proud of my own perseverance that was necessary to survive the 2019 mark.
https://medium.com/women-who-code-dfw/meet-the-directors-johanna-lee-888f66f69520
['Caree Youngman']
2019-10-07 00:13:31.134000+00:00
['Women In Tech', 'Meetthedirectors']
My Friend, the Squirting Expert
If I was to get super-specific: Get her ready to burst Warm-up Start how you’d normally have sex, and do everything you do to get your partner wet: relax and arouse her with lots of kissing, touching and foreplay. Turn on the faucet You want to give her pussy your full attention for a while; rub her clit and use your fingers for manual stimulation. Don’t start with the squirting moves too soon, as the friction will make her dry up if she’s not ready. Make her as wet as you possibly can; turn her into a waterfall. Add lube, spit or whatever you need, and turn her on in every way you know—body and mind. Be hands-on Your hand-technique is important. Everyone should, in theory, be really good at fingering because it’s the first place where we’re allowed to begin when we start having sex. For many, using their hands is their main way of bringing their partners to orgasm. Without good manual skills, you won’t make anyone squirt. As a woman, if your partner can’t bring you to orgasm with their hands alone, squirting will be hard. You don’t need a squirt master, but you need someone who can do this. There are many ways to learn, and women can also teach their partners what motions they like, how many fingers, and so on. Climb to the summit If you’re dexterous enough to finger her, while playing with her clit, using either one or two hands, you’re on the right path; you know when to speed up and slow down, and you’re familiar enough with the vagina to know how it feels at its peak wetness. Locate the g-spot When you reach that point, you’re right between an orgasm and something else—that’s when you can go for the squirt. If you haven’t already been stimulating it, now is when you have to find the g-spot. Point your fingers up, towards her belly and find an area that has a slightly different texture. There’s a bump that you can almost ‘hook’ behind. While you’re moving vigorously, you also have to be sensitive to what you’re doing because you don’t want to hurt her. Doing this, she’ll get wetter and wetter. “Where’s all this liquid coming from? Who knows? Who cares?” Go! Then you just keep building. If she’s willing to go there and you’re sensitive enough to read her, you can adjust, and you’ll make her squirt.
https://medium.com/essensually-ena/my-friend-the-squirting-expert-725e9ef2ab4
['Ena Dahl']
2019-12-10 20:59:16.570000+00:00
['Sex Tips', 'Sex', 'How To Squirt', 'Female Ejaculation', 'Interview']
What is China’s Role in the Future of Blockchain & Crypto?
After listening to Unchained’s recent interview with Changpeng Zhao, I started thinking about the impact China has on the blockchain space. Well-known for its harsh regulations and a debatable approach towards privacy, China doesn’t exactly have the best reputation. However, it was interesting to hear what Changpeng, or CZ, thought about the current SEC regulations in the United States and how they compare to the bans imposed by the Chinese government earlier this year. CZ is the founder and CEO of Binance, the world’s largest cryptocurrency exchange by trading volume. In a rather heated discussion with Unchained host Laura Shin, CZ shared his thoughts on security tokens and why he’s in no rush to enter the US market. CZ accused Shin of holding a ‘US-centric view’ and reiterated his belief that what Binance does by avoiding countries with harsh regulations is not only logical but ethical. The interview was high-octane and highly relevant as we find ourselves on the edge of an industry-wide shift towards STOs. Inspired, I decided to do a deeper dive to explore the current status of blockchain and crypto in China. Mobile Payments in China WeChat Pay and Alipay are so popular in China that even street performers and taxi drivers are paid electronically. According to Business Insider, mobile payments in China have grown into a $16 trillion market with 92% of people in large cities stating they use WeChat Pay or Alipay as their primary payment method. In this landscape, it makes sense that consumers would transition to cryptocurrencies with ease. In September 2017, Renminbi-to-Bitcoin trades accounted for approximately 90% of global trading activity. Support For Blockchain Innovation Such staggering popularity unnerved the Chinese government, which immediately banned cryptocurrency trading and ICO investment. Despite this, Chinese authorities appear to be embracing blockchain technology, with the China Center for Information Industry Development releasing ratings for different cryptocurrencies based on ‘a comprehensive investigation and evaluation of the public chain from three aspects: basic technology, application, and innovation.’ The Chinese government stated the reason for such reports was to fulfill the need for unbiased and independent research. In 2016, the Chinese government added blockchain development to its Five-Year Plan. In early 2018, the president of China, Xi Jinping, stated that the government would commit $1.6 billion to the development of blockchain technologies in China. As China continues its cryptocurrency crackdown by banning anything related to crypto trading and events, it continues to accelerate blockchain development, begging the question: is blockchain innovation truly possible with such tight restrictions of cryptocurrencies? Chinese Crypto Mining China is well-known for cryptocurrency mining due to competitive electricity prices, hardware availability, and the size and stability of local mining pools. At its peak, Chinese miners were producing 95% of all Bitcoin and currently produce a still-impressive 70% of the Bitcoin network. This drop is due to the Chinese government targeting mining pools by reducing electricity supply in an attempt to make an ‘orderly exit’ from the crypto business. 
The Economic and Information Commision previously stated that mining operations ‘contribute nothing to the region’s economy besides consuming a spiking volume of electricity.’ Regardless, crypto mining giant Bitmain continues to flourish in China, valued at $12 billion in July 2018. Bitmain produces ASIC chips which are a popular choice for cryptocurrency mining. 96% of their revenue comes from selling mining rigs and only 3% from mining. Although controversial, Bitmain continues to grow, recently expanding to the US where they plan to invest $500 million in a blockchain mining farm in Texas. People’s Bank of China: CryptoYuan Alongside research into blockchain technology, the People’s Bank of China is allegedly working on their own government-backed cryptocurrency. IG Group, a UK-based company providing financial derivatives trading, assumes any national cryptocurrency would be ‘introduced alongside China’s primary currency, the yuan, with the intention of catering to the millions of citizens who lack access to standard banking services.” It’s logical for the government to get involved with cryptocurrencies as a reported 3 million Chinese people are still holding on to their crypto assets despite bans. Although nothing is concrete, there’s already talk that a Chinese government-backed cryptocurrency could be even bigger than Bitcoin. The Chinese government may want to cash-in on the popularity of crypto by overtaking existing coins as well as mining their own. ‘It is unlikely that any government-backed cryptocurrency would kill off bitcoin or other large cryptos completely, but some of the smaller alt-coins could have a tougher time of it…The PBoC have stated that only the digital currency issued by them will be recognised nationally, excluding other coins such as bitcoin or ether. As foreign cryptos are already banned in China, the government would essentially force mining operations to switch to the national crypto. This could impact global mining communities, and reduce the value of bitcoin as it becomes less popular. A government endorsement could see the crypto gain popularity worldwide, as it becomes seen as credible in the eyes of the public.’ One of the most notable features of crypto is its decentralized nature and independence from authorities. As we hear of governments who want to introduce their own cryptocurrencies, it is critical to ask why. For Venezuela, who will launch their national coin, Petro, in November, it is clear their primary motivation is economic, as they are currently suffering from severe hyperinflation under US-imposed sanctions. As the Chinese economy has boomed in recent decades, their motives are entirely different. Given the introduction of the Social Credit System, which will become mandatory for all citizens and business from 2020, the Chinese government could use their national cryptocurrency to survey their citizens further, scrutinizing every transaction under the guise of tracing money laundering or illegal activities. Regardless of motives, it is fascinating to see how the versatility of cryptocurrencies could be adopted in different ways, for a variety of reasons. As someone who believes in the decentralized, transparent nature of blockchain technology, I am excited to see how everything unfolds which will hopefully be to the benefit of people around the world who have been disregarded from our existing financial structures and let down by greedy governments.
https://medium.com/sea-foam-media/what-is-chinas-role-in-the-future-of-blockchain-crypto-200995e045ad
['Chloe Diamond']
2018-10-13 22:52:04.879000+00:00
['Cryptocurrency', 'Crypto', 'Bitcoin', 'China', 'Blockchain']
Gig workers around the globe emboldened to fight following Prop 22’s win
By Breanna Diaz While all eyes were on the results of the presidential election this past November, California's Proposition 22 was voted into law, denying employee rights for app-based workers. In 2019, Assembly Bill 5 established the ABC test, which simplified the process of designating workers as independent contractors or employees. Under AB 5, gig workers such as Uber and Lyft drivers are considered employees, yet these corporations refused to classify them correctly. Enter Proposition 22: funded primarily by Uber, Lyft, and DoorDash, the measure exempts drivers from AB 5 and introduces a slew of other consequences for workers. Saba Waheed, research director at the UCLA Labor Center, a unit of IRLE, who has worked on multiple studies about gig and rideshare workers, said one of the effects of the proposition is that it allows apps like Uber and Lyft to pay drivers less than minimum wage. Sources: KQED and Ballotpedia "If you worked an 8:00 a.m. to 5:00 p.m. day, which should be considered an eight-hour day, it doesn't count," Waheed said. "Only the time that you had an active passenger counts." Waheed said Uber and Lyft convinced drivers that if they wanted to keep their flexible work hours, they should vote for Proposition 22, even though that flexibility is still achievable under an employee model. Uber's and Lyft's threats to leave the state of California — and leave many drivers without income — also persuaded many to support the controversial ballot measure. "If someone really thinks that having employee benefits is going to mean they will lose their job altogether, which was some of the messaging that came from the companies, of course you're going to advocate for that, even if it means that you're going to lose a bunch of other things in the process," she said. While the tech giants managed to persuade many current gig workers, others were not convinced. Tyler Sandness, a former Lyft driver and union organizer with Rideshare Drivers United, said Proposition 22 was firmly opposed by grassroots activists. "Prop 22 essentially is a backsliding of employment rights," Sandness said. "That's dangerous not just for people in the gig economy — that's dangerous for everybody." While the proposition is law for now, Sandness believes there is hope that federal legislation will be passed to enforce the ABC test across the nation. Winning federal recognition of app-based workers as full employees would be tough, he said, so grassroots organizations need to work harder now than ever to advocate for workers. "If you give up now, they win by default. How can you compete with that system where money is power? But as hard as that is, if you let them win, the future really is gone," he said. While California is ground zero for the conflict between corporations and independent contractors, the issue of labor rights under tech-company domination has led to movements and strikes globally, including in Mexico, Argentina, Chile, and Brazil. On December 9, the UCLA Labor Studies Program hosted the webinar "The Future of Organizing under Technological Revolution," convening speakers and activists for their input on app-based workers' labor rights. 
Martin Manteca, organizing director for the Service Employees International Union Local 721 in Los Angeles, said Proposition 22 brought together drivers from around the world to discuss and strategize about how to respond to the proposition. "Prop 22 is a danger. It poses a real threat to the labor gains that workers across the world have gained across the century," he said. "We've mapped out what 2021 is going to look like, and we're going to be in battle with the DoorDashes of the world." The DoorDashes of the world are indeed gearing up to pass laws like Proposition 22 in as many places as possible. As Uber CEO Dara Khosrowshahi put it, they will "work with governments across the U.S. and the world to make this a reality." While the companies plan on enacting more laws similar to Proposition 22 across the globe, they will face fierce opposition. Luciana Kasai, food delivery worker and member of Brazil's Entregadores Antifascistas worker collective, said worker activists have been able to raise awareness about how app-based delivery companies treat workers, earning support from customers who regularly use delivery apps. "We are rising up and saying all the things that we go through. People are starting to look at us differently and actually thinking twice before ordering food," Kasai said. Entregadores Antifascistas has also begun offering delivery services with fair prices in Brazil, she said, allowing the workers to set the rules instead of delivery app companies. "We're going to hit them head on," Manteca said. "The power belongs to those who deliver for services. The power belongs to those who make a living making sure people have food to eat." The Institute for Research on Labor and Employment (IRLE) and Labor Center have released a series of reports focused on gig workers and the gig economy.
https://medium.com/@uclairle70/gig-workers-around-the-globe-emboldened-to-fight-following-prop-22-win-a07421aba2a2
['Ucla Irle']
2020-12-17 23:37:54.954000+00:00
['Workers', 'Gig Economy', 'Labor', 'Future Of Work']
Plutus Monthly Report — Audit Results (October 2019)
Welcome back to our monthly report, Plutus has received some exciting results and the team are looking forward to releasing our high-quality finance app. Pinned Announcements Access to features is available to team accounts and members of our Pilot Programme. The aim of the Pilot is to gather feedback from our community in order to provide an improved platform for the wider audience. Our site and a new app will be available to the wider public once the pilot ends, unveiling our new live features. Development Update Implementing Small Details The team has implemented a number of small changes including download functionality of our T&C and Privacy Policies in PDF form as required by law; we have also changed our email workflow so users can update their email addresses and subsequently authenticate it with minimal effort. The team has polished up a number of small detailed tasks that needed to be resolved before public release. Back Office The team has upgraded user management features so our back-office can allow the Compliance team to monitor and suspend users when they are suspected of foul play or suspicious activity. As a highly compliance-oriented company, we are ensuring the team say vigilant about any potential malicious activity. The developers have also strengthened our back-office dashboard which will greatly empower us to manage and monitor the business’s overall health and security. Marketing Update Notification Emails The team has created an informative design for our automated emails to reduce any potential pain points for the end-user; this includes all system notifications, product notifications, and KYC related emails. The development team is currently implementing these to our back-end systems. Website Design We have developed new sub-pages with an interactive design for our corporate website; this includes the About, Accounts, Card and PlutusDEX pages. These will be implemented this month so newcomers can rapidly educate themselves on our product offerings and fees. Video Progress We are creating how-to videos and developed several informative videos to guide you through the user journey. These will become available periodically alongside the upcoming expansion of our Early-Bird user group. General Update Audit The most important event of the month has been the final audit with our banking provider to take an in-depth analysis of our systems and processes; this ranged from the services we deliver to our customer support approach, and most importantly the vigorous testing of our platform's security. We have now successfully reached a historic milestone for the company which marks the beginning of our journey. Plutus can now release our web and mobile platforms to the public and issue slick Plutus Debit Cards. Clarifying Fees & Limits We have ensured our costs are more competitive than alternative crypto finance apps on the market in order to increase our appeal to the wider community. The team will be offering two different account types: A basic version for less committed customers — this comes with tighter limits and a very modest exchange fee. A premium version for higher usage customers — for a small subscription fee, users can gain access to unlimited and fee-less conversions whilst experiencing a wealth of other ‘members’ benefits. Plan of Action Rather than open the floodgates all at once, we have implemented hard limits on the number of users who can use our app which will increase incrementally. 
This is to help manage demand, guarantee our servers can handle the load, and help us learn from our Early-Bird users what the next courses of action should be. Anyone who downloads the app after we have reached our temporary user threshold will be placed in a queuing system and will receive full access very shortly after. Keep an eye on our social channels to secure your place.
https://medium.com/plutus/plutus-monthly-report-audit-results-october-2019-b46207bd7f0c
[]
2019-10-09 12:25:35.296000+00:00
['Fintech', 'Blockchain', 'Cryptocurrency', 'Report', 'Bitcoin']
Google Cloud Platform Security Operations Center Data Lake
Data lake components Google Stackdriver Stackdriver aggregates metrics, logs, and events from infrastructure, giving developers and operators a rich set of observable signals that speed root-cause analysis and reduce mean time to resolution (MTTR). It provides native integration with cloud data tools like BigQuery, Cloud Pub/Sub, Cloud Storage, Cloud Datalab, and out-of-the-box integration with tools like Splunk Enterprise. You can filter which logs to exclude by organization, folder, project, and billing id. You can enable Data Access logs at the organization, folder, or project level (other logs are enabled by default). You can specify the services whose audit logs you want to receive. For example, you might want audit logs from Compute Engine but not from Cloud SQL. Google Cloud Security Command Center Cloud Security Command Center gives enterprises consolidated visibility into their cloud assets across App Engine, Compute Engine, Kubernetes Engine, Cloud Storage, Datastore, Spanner, Cloud DNS, Service accounts and Google Container Registry. Cloud Security Command Center integrates with Google Cloud Platform security tools like Cloud Security Scanner, and the Cloud Data Loss Prevention (DLP) API. It also integrates with third-party security solutions such as Acqua, Cavirin, Cloudflare, CrowdStrike, Dome9, Palo Alto Networks RedLock, Qualys, and Twistlock, and provides an API and schema to integrate additional third party tools. Google Cloud Dataflow Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness — no more complex workarounds or compromises needed. Use Cloud Dataflow as a convenient integration point to bring predictive analytics to security event management by adding TensorFlow-based Cloud Machine Learning models and APIs to your data processing pipelines. Google BigQuery BigQuery allows organizations to capture and analyze security data in real time using its powerful streaming ingestion capability so that your insights are always current. It gives you full view of all your data by seamlessly querying data stored in BigQuery’s managed columnar storage, Cloud Storage, Cloud Bigtable, Sheets, and Drive. It enables you to analyze all your security operations data, build and operationalize machine learning solutions with simple SQL, and easily and securely share insights within your organization and beyond as datasets, queries, spreadsheets, and reports. It… Integrates with existing ETL tools like Informatica and Talend to enrich the data you already use. Supports popular BI tools like Tableau, MicroStrategy, Looker, and Data Studio out of the box, so anyone can easily create reports and dashboards. BigQuery ML (beta) enables users to create and execute machine learning models using standard SQL queries; it also increases development speed by eliminating the need to move data. It supports the following types of models: Linear regression — These models can be used for predicting a numerical value. Binary logistic regression — These models can be used for predicting one of two classes (such as identifying whether an event represents a security threat). Multiclass logistic regression for classification — These models can be used to predict more than two classes such as whether an input represents a low, medium, or high impact threat. Google Cloud Storage Google Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. 
Supported data sources include Cloud Pub/Sub, Stackdriver Logging, Dataflow, and BigQuery; BigQuery can also import from Google Cloud Storage. Object Lifecycle Management provides the ability to set the object storage class (e.g., Nearline, Coldline) to a lower-cost class for less frequently accessed objects, as well as delete objects, based on
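To make the BigQuery ML workflow described above concrete, here is a minimal sketch of training and scoring a binary logistic regression threat classifier from Python with the google-cloud-bigquery client. It is illustrative only: the project, dataset, table, and column names are hypothetical placeholders, and it assumes default credentials are already configured.

```python
# Illustrative sketch only: the project, dataset, table, and column names
# below are hypothetical placeholders, not taken from the article.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Train a binary logistic regression model that predicts whether an event
# represents a security threat, using plain SQL (BigQuery ML).
create_model_sql = """
CREATE OR REPLACE MODEL `my-project.security_ops.threat_classifier`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['is_threat']) AS
SELECT
  failed_login_count,
  bytes_transferred,
  distinct_destination_ips,
  is_threat
FROM `my-project.security_ops.labeled_events`
"""
client.query(create_model_sql).result()  # blocks until training completes

# Score newly ingested events with ML.PREDICT.
predict_sql = """
SELECT
  event_id,
  predicted_is_threat,
  predicted_is_threat_probs
FROM ML.PREDICT(
  MODEL `my-project.security_ops.threat_classifier`,
  (SELECT event_id, failed_login_count, bytes_transferred, distinct_destination_ips
   FROM `my-project.security_ops.new_events`))
"""
for row in client.query(predict_sql).result():
    print(row.event_id, row.predicted_is_threat)
```

Because both training and prediction are plain SQL statements, the same queries could equally be run from the BigQuery console, a scheduled query, or a Dataflow pipeline.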
https://medium.com/google-cloud/google-cloud-platform-security-operations-center-soc-data-lake-4b31e011f622
['Ferris Argyle']
2019-02-20 17:50:31.754000+00:00
['Google Cloud Platform', 'Cloud Computing', 'Security Operation Center', 'Data Lake']
Timely Historical Information for the Basis of Racism In America
Timely Historical Information for the Basis of Racism In America *Many of the historical excerpts are compliments of the book Uprooting Racism by Paul Kivel My recent dip into the world of social work has felt like ice fishing. I am out on the ice and fall into a hole. I am screaming, “Help, get me out of here! The water is freezing!” I feel like this because once I truly learned about our systems of oppression, I couldn’t unsee them. I had fallen into a social explanation for human suffering and very few people could save me because few people understand how bad it is out there for women, the LGBTQ community, the homeless, BIPOC, and class mobility. My biggest learning curve has been on how rooted racism is, so here are my thoughts. Those ‘Mexicans’ Pexels: Daniel Xavier Official Term: Latinx, while someone may be from Mexico, many of who we label as “Mexican” might not be from Mexico(Cuba or Puerto Rico are some examples). Latin is more encompassing, and the “X” is gender inclusive. Common statement: “They are illegal immigrants here to take our jobs.” Historically Accurate Reality: Americans were once illegal immigrants in Mexico and the Government-sponsored Mexican workers coming to America. Think about how many times you have boasted about drinking almond milk, how many oranges you have enjoyed in the summer, or how many times you have heard “Steve” talking about those “immigrants”. Well, here are some facts you can tell Steve: The 1823 Monroe Doctrine where the US tried to claim that all the Western Hemisphere was all ours! We wreaked havoc and violence on Central and South America. How about the 1898 war on Spain where we took over Cuba and other territories like Guam and Puerto Rico? How about when part of the war with Spain US troops wiped out 1/6 of the Philippine Population? What about the fact that our “Lone Star Texas” Was Mexico’s territory that American’s migrated to as illegal immigrants and then we eventually won the territory in bloodshed? How about the fact that US companies demand cheap labor and policies that support unfair wages and trade, and then convince us to blame Latinx instead of pointing fingers at the big man? Questions: Why do you think Latinx deserve to lose their land, children, and culture? Is your opinion of Mexico encompassed in your opinion of Mexicans? Do you think there is something inherently wrong/different/more primitive about people in Mexico that justifies this violent opinion? If so, can you find studies that indicate that skin color(or Latinx culture/skin tone) is correlated to intelligence or compassion? If you believe there is religious inferiority, what part of your religious doctrine justifies the killing of people just like you and I? Other Sources: Source: Youtube, Adam Ruins Everything: Border Patrol on Channel truTV Source: Youtube, Adam Ruins Everything: Why A Wall Won’t Stop immigration on Channel truTV The “Arab Muslims” Pexels: mentatdgt Common Statement: “Those Arab Muslims are so violent!” Historically Accurate Reality: To be very clear, Arabs are people who speak the language Arabic. Muslims are people who practice the religion of Islam. Arab Americans: The majority of Arab Americans are Christian. The media depicts Arabs as incredibly violent, deceitful, and in cruel and inaccurate stereotypes. Very, very few films show any positive depictions of Arab Americans. Arabs have faced so much violence/murders from white Americans following 9/11. 
White terrorists are rising and causing surging numbers of violence yet most of our money disproportionately goes to “protecting” ourselves against the “Arab Terrorist.” Think of the last decades of places the US has wanted to invade and the wars and violence that have ensued(Iraq, Pakistan, Afghanistan, Yemen, Sudan etc.) Questions: What are cultural practices/communities of Arab Americans in your area? How has violence, fear, and media depictions, harmed Arab Americans? How can we demand change in this narrative? Muslim Americans: Muslim communities are constantly experiencing violence/death from white threats due to Islamophobia. Almost all our media portrays them as dangerous. Islam has been hated all the way back to Pope Urban II in 1095. Islam was first brought to the US by enslaved Africans . Pew Research shows us that the reality is that Muslim Americans are well educated and middle class. Questions: What’s the difference between Muslim, Arab, and Islam? How many depictions do you see of Muslims on TV or in movies? How many are accurate or positive? Does your American value of religious freedom extend to all faiths? Or just your religion? Other Sources: Source: Youtube, Channel: Crash Course. Episode: Islam and Politics Source: Youtube. Channel: Soul Pancake. Episode: How You See Me. Those Blacks Pexels: RF._.Studios Common Statement: “The black community needs to empower themselves and be better role models for their youth.” Historically Accurate Reality: White America enslaved blacks for so long and never financially compensated them for it. Then we denied them home loans, including 1 million black veterans from the war that were denied. We redlined their communities so their schools received less funding. Tulsa Race Massacre is when whites killed hundreds of blacks and left 10,000 black people homeless, in a community that used to be known as, “Black Wallstreet.” Even when black people worked hard and accumulated wealth, white people still stole it from them. Questions: What are the genetic/physical differences between white and black people? (Hint: None) Why do we see black people in disproportionate levels of poverty, with our history of stealing/denying blacks wealth, what is our responsibility as white people to make amends for this? What are other ways we can fight for racial equity despite simply ignoring the issues or throwing money at them? What personal responsibility do you hold as a white person to leverage your power for racial equity?(Hint: A lot!) Other Sources: Source: Youtube. Channel: Netflix. Movie: 13th Directed by-Ava DuVernay The Asians Pexels: Philip Justin Mamelic Common Statement: Those Asians and their math/diseases/’cute’ traditions. Historically Accurate Reality We once considered Asians(people from over 20 countries, but in this case China) to be white and then ten years later they were “othered” again We forcefully opened up Japan and China to US trade in the 19th century when those countries refused to have anything to do with America at first Vietnam/Cambodia/Loas people did not invade us but tried to escape the invasion and disruption created by US military/economic forces We also had many exclusion acts, and when we did allow immigrants from places like China or Japan, we only allowed men over, warping our ideas of these family systems and cultural perceptions of gender about the Chinese. We created this idea of “Yellow Peril” (we were duped by government propaganda, depicting Asians as lazy and opium addicted). 
We later created positive propaganda during the Cold Wars as the government feared we would be favorable to communism. Questions: How do you view Asians? Do you think about Chinese Americans, Japanese Americans, and Korean Americans? What is different about Japan/China/Korea, what languages, customs or cultures are unique to these places? Why don’t Americans(328 Million Americans) know more about the over 4 billion people in Asia? What are stereotypes about Asian Americans that we hold today? Why are these stereotypes harmful and which ones are leftover from government propaganda campaigns? Other Sources: Source: Youtube: Channel TODAY. Activist: Olivia Munn. Episode: Olivia Munn on Violence Against Asian Americans. Source: Youtube. Channel: TruTV. Episode: Adam Ruins Everything- How American Created the “Model Minority” Jewish Pexels: Cottonbro Common Statement: The Holocaust was bad, but everything is okay now. Historically Accurate Reality: The hate against the Jewish faith dates far back to theological differences. Large scale attacks were committed by Christians against Jews during the Crusades. Jewish people are not just a race but come in every color we ascribe to race: black, brown, white etc. The Jews are falsely accused of killing Jesus in the New Testament. Questions: What holidays favor Christian beliefs? What days do Jewish holidays and traditions fall on? How do our school calendars reflect/respect these dates? How have we blamed Jewish people in the past for things that were not their fault? Source: Youtube. Channel:BBC Three. Episode: Things Not To Say To Jewish People Source: Youtube. Channel: Past To Future. Episode: History of Jews in 5 Minutes- Animation. One of the biggest things I have learned in this journey, a thing that sounds so simple: Is that we, as humans, are all the same at heart. We are products of our systems and each worthy of life, love, and joy. We are all capable of intelligence, strength, and change. To establish another is to allow violence towards people with families and souls just like ours. We lose our own humanity when we create a separation between ourselves and other human beings.
https://aninjusticemag.com/timely-historical-information-for-the-basis-of-racism-in-america-a9f8b6d660a0
['Sadie Lee']
2021-04-07 21:59:24.834000+00:00
['Muslim', 'Asian American', 'Black Lives Mater', 'Latinx', 'Jewish']
WAR
I was always a writer but lived in a bookkeeper’s body before I found Medium and broke free — well, almost. Working to work less and write more.
https://medium.com/resistance-poetry/war-e99a46f6c8e1
[]
2017-08-21 02:52:08.323000+00:00
['War', 'Crime', 'Trump Administration', 'Russia', 'Trump']
I Found Love
It feels good. The last time I was filled with such self-sufficiency, and happiness, I don’t remember, maybe never. But that’s perfectly fine, it’s a new reinvented chapter, and this time I get to decide my plot line, characterization and the entire theatrics. Gazing back into memory lane, and only visiting the streets that evoke unhappiness wasn’t the reason for my distress, rather it was me. I looked at the past in a sense which made me hate myself, because it made me feel helpless. What I needed to realize, is that even though circumstances cannot be under my control, I should’ve still been able to handle the consequences post-impact, because I have to be responsible towards the way my feelings unfolded, since they directly influence my head-space. I realized blaming doesn’t help, it evolves into hatred, for yourself, and everyone and everything around you. I went in the search of love, acceptance and belonging in the most inappropriate places, people out of all places. When really, I should’ve started growing it like a little seed inside my heart, nurtured it with my thoughts, only to be cultivated to heal my heart, when the world broke it for me the several thousand times. In fact, why even allow the world to see the vulnerable crevices of our spiritual caricature. Filter out the hate and only allow the love and positive in. Finding love wasn’t that tough, it was finding the acceptance and love inside me, that I had been searching outside in the physical world all this while. Instead of focusing on all the disfigurements in me and plot holes of my story, if I had just accepted my flaws as a part of my authenticity and had taken the editor’s position in my story, maybe the journey would’ve been easier. But as they say what comes easy, is not as valuable, now I won’t think twice before putting myself first, in any kind of circumstances. Now, I will be a priority and not the way people visualize or evaluate my existence. I took my own unique time span to make the changes required in the way I saw myself, by really seeing, feeling and understanding myself. I gave myself what I always gave people, time and space, to evolve into the human I’ve grown and learned to love. I’ve befriended myself. This ‘new-found’ love didn’t need to be discovered or spoon-fed by anyone, this was nurtured over time with understanding for individualism, a lot of patience and a hint of romance for the self. It had to be brewed with love and acceptance that could be only cultivated by me. I didn’t run or wait for anyone, to provide me with this. I waited for me at places when I got tired and ran after me when I got scared. “We accept the love we think we deserve” said by Stephen Chbosky. Great quote, when taken literally sometimes leads to a toxic relationship with yourself, but it isn’t supposed to be taken literally. Love yourself enough to think that the love people provide you with, the universe provides you with, is worth your precious and limited time. It’s not that people didn’t shower me with love, affection and care, it’s just the simple fact that I never accepted it, just like I never accepted myself. I’m healing, and I know it will take time and a constant need of observance from myself, but at least, I won’t go back to the thinking that death is the only way out of my living hell. I will know how to get back up, there will be times I’ll fall, but I won’t have to start back from square one, since the map to my mind labyrinth was created by me. 
Now, I’ve started to sleep without the need of music, to steal my attention form past depressive episodes. I must stay alive for myself. No reasons. No excuses. This might be the only time, after 6th grade where I can see a future for myself, being alive after the age of 30, and not want to die. You are worthy of this love, time and attention you are given by the universe and should never be taken for granted. The kindness you shower yourself with is the same, which you portray upon people. The world will beat you up thinking you are down. It’s like your own family member snitching about your vulnerabilities to your enemy. Not that world is the enemy, just that opportunists are everywhere. I created this happiness over time, developed it with a lot of observance and calm. And I’m not letting this go, I’m not giving this away, for anyone at any cost. I’ve created this and I would like to keep it. Healing is addicting. Happiness is amazing, especially the one you create, and growth is challenging but so damn worth it. I want to feel his forever. I want to feel this constant euphoria, without medication, if possible.
https://medium.com/@blcksheets/i-found-love-e772d786bfdc
[]
2020-02-23 09:23:58.278000+00:00
['Love Letters', 'Finding Yourself', 'Blcksheets', 'Love', 'Self Love']
Astrology: New Moon in Sagittarius 14th December 2020
Astrology: New Moon in Sagittarius 14th December 2020 Photo by Marek Piwnicki on Unsplash New moons are a great time for setting intentions for the next month and even longer. Intentions are often manifested six months later when the full moon presents itself in the same zodiac sign. It is a time of change, where the moon begins its journey to its fullest point, the full moon, fourteen days later. A new moon occurs when the sun and the moon are conjunct; meaning that they are together in the sky. The sun and the moon are, for a short time, merged; creating an energetic reset. This Moon is extra special, as it is also a solar eclipse. Solar eclipses always occur at new moons and lunar eclipses at full moons. You can see from the image below the position of the Moon, Earth and Sun at each stage. What Causes an Eclipse? A solar eclipse occurs when the Moon is in-between the Earth and the Sun. The New Moon blocks the light of the Sun for the period of the eclipse and casts a shadow on the Earth. The eclipse can be total or partial depending where on Earth you view it from. The most intense part of the shadow is called the umbra and if you are viewing the eclipse from this part of the Earth, then it is said to be a total solar eclipse. If it is only in the penumbral part of the shadow, then it is said to be a partial penumbral lunar eclipse, which this one is in Europe; it only appears as a total solar eclipse in South America. Solar Eclipse Explained Why Isn’t Every New Moon an Eclipse? As Goldilocks might say, the moon needs to be just right, neither too high nor too low in the sky. As this image demonstrates. The Moon’s pathway around the Earth is inclined at 5°. For an eclipse to occur the Moon’s journey needs to cross the pathway of the Earth around the Sun (known as the ecliptic). The points where these two pathways intersect are called the lunar nodes and this is where eclipses occur. At present, the nodes are in Gemini (North) and Sagittarius (South). How and Why Eclipses Occur What Does This Mean For Me? At the time of this new moon, both the sun and the moon are at 23°08' of Sagittarius. In astrology, there are 12 houses, each represents a part of life. This new moon will impact one of these houses or areas of life, depending upon where it falls in your natal chart. You can find out more general information about astrology by visiting the website below: To better understand the personal impact of the new moon, I recommend downloading your natal chart; just click on the link below, enter your details and away you go! Natal Chart Form Setting Intentions Throughout the year, the location of the new moon changes, and it falls in different zodiac signs; and therefore locations or houses in your natal chart. This gives you a wonderful opportunity to focus on different areas of your life throughout the year. Before life coaching there was astrology :) Symbolism of Sagittarius Photo by Félix Lam on Unsplash Sagittarius being a fire sign is all about philosophy, higher learning, broadening horizons and new adventures. 
Some of the buzz words associated with Sagittarius would be: Optimistic, Enthusiastic, Adventurous, Philosophical, Freedom-loving, Non-committal, Honest, Blunt, Outgoing, Indiscriminate, Dogmatic, Faith Exaggeration Confidence Arrogance, Courage, Risk-taking EnjoymentOver-indulgence, Abundance, Excess, Good fortune, Hubris, Generosity, Squandering, Power-tripping, Prey to illusions, Inflated Ideas, Greed, Wisdom, Egotism, Illusions of Grandeur, Recklessness, Risk-Taking Chart of the New Moon in Sagittarius Chart of the New Moon @ 23°08' of Sagittarius The chart above shows 360° divided into 12 houses, running in an anti-clockwise direction, starting from the ASC (or ascendant) on the left. Note: Although the house placements are the same; the corresponding zodiac signs would be different in your natal chart, that you have downloaded from the link above. Sun and Moon conjunct In this screenshot, you can see the sun and moon glyphs, forming a conjunction in Sagittarius. Each house has a meaning and is associated with an area of life. To be more personal with setting your intention, it is good to understand some of the words and symbolism associated with the 12 houses. House symbolism Houses 1–6 represent your individual world: 1st House ‘The self’, the spark, physical body, appearance, who you are, your temperament. Associated with Aries energy. 2nd House Money, values, possessions, materialism, security, resources, self-esteem. Associated with Taurus energy. 3rd House Siblings, short journeys, communication, transport, school, network and mind. Associated with Gemini energy. 4th House Home, nurturing, safety, emotions, family, privacy, roots, maternal influence. Associated with Cancer energy. 5th House Creativity, play, self-expression, romance, children, confidence and courage. Associated with Leo energy. 6th House Daily routines, analysis, health, wellbeing, work, organisation and planning. Associated with Virgo energy. Houses 7–12 represent the outside world, perspectives and relationships: 7th House Relationships, balance, aesthetics, justice, harmony, partnerships and ‘the other’. Associated with Libra energy. 8th House Transformation, taxes, inheritance, death, sex, secrets, occult and hidden life. Associated with Scorpio energy. 9th House Long-distance travel, belief systems, philosophy, higher education and religion. Associated with Sagittarius energy. 10th House Career, integrity, public image, success, aspirations, ambition and life purpose. Associated with Capricorn energy. 11th House Innovation, humanitarian, individual, community, technology and revolution. Associated with Aquarius energy. 12th House Dreams, unseen, intuition, psychic abilities, spirituality, interconnectedness. Associated with Pisces energy. What should I focus my intention on? Depending upon where Sagittarius falls in your natal chart, you would combine the qualities of Sagittarius with the house area. If 23°08' of Sagittarius falls in your 4th House; you would set an intention based on the symbolism of both Sagittarius and 4th house area of life, examples could be: I would like to make a new home in a foreign country I would like to learn more about my ancestral roots and family tree I would like to nurture my adventurous side Give your intention energy Once you have connected the symbolisms above (Sagittarius with your house placement) and formed an intention or two; it is good to write it down. This gives your intention more power. 
I like to find a quiet space and say the intentions out loud to reinforce what I want to manifest in my life for the new cycle. Setting the scene too, by lighting a candle or marking the occasion in some way. Doing this every new moon, you start to get a picture of what you want to attract in your life and also after one year you have set intentions in all areas of your life. Happy manifesting 💜🙏
https://medium.com/soulzine/astrology-new-moon-in-sagittarius-december-2020-793f063e7402
['Laura Dawn Blewitt']
2020-12-14 19:15:02.754000+00:00
['Life Lessons', 'Manifestation', 'Moon', 'Astrology', 'Intentions']
How to Write HTML, Part 2: Understanding Tags
Tags. As mentioned in How to Write HTML, Part 1: What is HTML?: “A tag is the basic bedrock principle of HTML. It is the building block tool which enables formatting of font, color, graphic, and hyperlinks. Tags are used universally across HTML to identify specific types of content and how a webpage is formatted… By knowing the rules of how tags work, which tags to use in a specific situation and more importantly, how tags relate to one another, it becomes stupid simple to create a HTML page.” So, let’s dive into tags to understand both their form and their function. HTML5 DOCTYPE is now the most common, used on 80% of pages — PowerMapper The Basics Think of a tag as a bookend to all content placed on a page. Tags are the directives telling the web browser how to display content in relation to the formatting you impose upon it, and they direct how content is related to additional content on a given page. This said, tags can help you format paragraphs, bolded copy, line breaks, hyperlinks, block quotes, headings, ordered lists, header data, body copy, etc. You can use this link to find a more comprehensive list of tags. For a basic list of tags, like the ones you will be using to create your first webpage later on in this series, see the chart below. A comprehensive list of basic HTML tags Page Formatting and Content Formatting As mentioned, HTML is meant to define the attributes on your page. While you might think of that only as the look and feel of content, HTML also defines the structure of the page, i.e. where the content is placed in relation to other bits of content. To understand what this means, let’s take a look at the head, title and body tags. The head tag is how you define for the web browser what content belongs to the head section – the topmost section – of your webpage. The tag defines the placement of webpage meta content, which serves as the website search name/title within search engine result pages (SERPs). As such, when viewing the source of any web page, it would be normal to see:
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Learning HTML</title>
  <link href="learnHTML.css" rel="stylesheet">
</head>
<body>
  <!-- alter in due time -->
  <p>When you take the time to cycle through the process of understanding tags, their use and their structure, learning HTML isn’t all that hard.</p>
</body>
</html>
The head tag contains the title content along with the link to the cascading style sheet (CSS). Likewise, the body tag is used to tell the web browser what content on your page is body copy for the reader. In this case, the sentence which is bookended by the p – paragraph – tag is the body copy. In both cases, the head and body tags direct the browser on how content should be used, where it should be placed and its overall relation to the additional content on the page. Formatting As you can see from both the inserted and linked tag list, the basic structure of a tag is: Angle bracket — specific directive — angle bracket Angle bracket — forward slash — repeating original directive — angle bracket This means there are two tags to keep in mind whenever using HTML — opening and closing. The opening tag uses both the left and right angle brackets with the directive p or html or br in the middle. The closing tag adds a symbol to the mix, the forward slash, to signal to the browser the closing of the formatting tag. 
Like the opening tag, the closing tag uses both the left and right angle brackets with the directive in the middle; however, it places a forward slash directly in front of the directive: /p or /html or /br (note that a handful of tags, such as br, are void elements and do not take a closing tag at all). The forward slash is paramount in closing the tag. If it is forgotten, the formatting directive opened by the first tag will be applied to all content below it. Thus, if you don’t close the tag, all remaining copy could be bolded or italicized or hyperlinked, etc.

Formatting Inconsistencies

Now, as you might notice, not all HTML tags contain the same formatting. The vast majority hold the same construct of:

Angle bracket — specific directive — angle bracket

…however, some HTML tags, like hyperlinks and images, have a modified structure. With both, two additional symbols are added to the formatting structure: the equals sign and quotation marks. These elements are added to direct the web browser to the link destination, or to the server file or URL an image is being pulled from. Thus proper hyperlink and image formatting is shown below:

Hyperlink

<a href="Insert_link_here">Cute Puppies</a>

As you can see, the angle bracket opens with the letter a (anchor) and continues with href (hypertext reference), followed by the equals sign and quotation marks containing the actual link. Naturally then, this tag closes with the closing of the anchor, /a.

Image

<img src="Insert_image_file_name_or_linking_url">

The image tag works in almost the same fashion as the hyperlink tag. As you can see, the angle bracket opens with the truncated img (image) and src (source), followed by the equals sign and a value enclosed in quotation marks. It is important to note that because an image can be pulled from a linking source or a server file, two distinct kinds of values can fall within the quotation marks.

<img src="https://i-msdn.sec.s-msft.com/dynimg/IC485706.png"> or <img src="/wp-content/uploads/flamingo.jpg">

The difference is that one image is called via a linked file location and the other via an image stored on the local server.

Summary

HTML tags are used to define how content is formatted on a web page and the relation of that content to other content held within the same webpage. They allow you to tell the web browser what belongs where, with what look and feel. Now that you know the basics of tags, how they work and what they are used for, let’s take a deeper dive into the core formatting functionality of tags.
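To make the closing-tag rule concrete, here is a small illustrative snippet (the page title, link and file names are made up for the example). The first paragraph closes its b tag; the second forgets to, so the bolding spills onto everything that follows, which is exactly the failure described above.

<!DOCTYPE html>
<html lang="en">
<head>
<title>Closing Tags Demo</title>
</head>
<body>
<!-- Correct: the bold tag is opened and then closed with a forward slash -->
<p>This sentence has <b>one bold word</b> and then returns to normal.</p>
<!-- Mistake: the bold tag is never closed, so the bolding carries on below -->
<p>This sentence starts <b>bold and never stops.</p>
<p>Visit <a href="https://example.com">Cute Puppies</a> and notice the link still renders in bold.</p>
</body>
</html>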
https://medium.com/healthwellnext/how-to-write-html-part-2-understanding-tags-fd8fc583a06a
['Brad Yale']
2017-10-31 12:07:09.612000+00:00
['HTML', 'Html5', 'Web Design', 'CSS', 'Web Development']
Tweak your Windows Terminal
Finally, Microsoft Windows has provided a decent terminal which can actually step out of the shadows of the Unix-like systems’ terminals. The new terminal is open-source, highly customizable, supports tabs and themes, and can be installed from the Microsoft Store or downloaded from GitHub. Please note that one of the requirements for this software is Windows 10 version 18362.0 or higher. Personally, I think it looks nice, but still, something is missing here. Let’s add a little bit of color.

PSColor, a color plugin for PowerShell, provides simple but very handy color highlighting. To install the plugin, run the following command:

Install-Module PSColor -Scope CurrentUser

After the installation we need to enable the plugin:

Import-Module PSColor

Now if you navigate to any folder and run the ls command, you’ll see that the list of files and directories is nicely highlighted. For more information about how to customize PSColor, click here.

Those who have used the terminal on a Mac are probably aware of zsh (the Z shell) and the popular extension ohmyzsh, which comes with various plugins and themes. Unfortunately ohmyzsh supports Unix-like systems only, but have no fear, there’s an alternative for Windows: oh-my-posh 😏 Next we will install oh-my-posh itself and the supporting plugin posh-git. Type the following commands in your terminal:

Install-Module posh-git -Scope CurrentUser
Install-Module oh-my-posh -Scope CurrentUser

As soon as installation is complete, let’s enable the plugins and set a nice-looking theme:

Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Agnoster

Now the terminal provides you with a lot of handy information. I personally like the git plugin, it’s just hot🔥. The only downside is that if you open a new tab you need to re-enable all plugins and themes again, which is not good. To avoid that, let’s add the Import-Module and Set-Theme statements to the $PROFILE. Write the following command in your terminal:

notepad $PROFILE

Copy and paste these statements into the opened Notepad window, then save it and close it.

Import-Module PSColor
Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Agnoster

oh-my-posh has a very vibrant community and provides lots of different themes and plugins; I suggest you visit oh-my-posh and give a couple of them a try. Below is how my window looks… … as you can see on the first line of the snapshot, some symbols weren’t rendered properly. To fix this, I suggest downloading a font that supports glyphs and other symbols and adding it to the terminal settings. My all-time favorite is Fira Font, which is free and can be downloaded here.

The next step after the font installation is the customization of the terminal window itself. Click on the arrow-down button by the tab in the terminal window and choose Settings. The settings file contains a list of profiles provided in JSON format; you can find detailed information about all supported fields and values in this link. Below you can see the part of my custom profile which includes a bunch of settings: font type, commandline, padding, etc. You can click on this link in order to get the whole JSON content, which you can copy into your profiles.json file and save. So, if you’ve done everything correctly, your terminal window should look like this:

That’s all! Big thanks to Tormod Fjeldskår for introducing me to all these cool features during his Upskill event. Cheers!
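As a rough illustration, a single profile entry of the kind described above might look something like the following. The GUID, font and color scheme here are placeholders rather than my actual values, and the field names follow the 2019-era profiles.json schema (newer Windows Terminal releases rename some of them), so treat it as a sketch to adapt:

{
    "guid": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
    "name": "PowerShell",
    "commandline": "powershell.exe",
    "fontFace": "Fira Code",
    "fontSize": 11,
    "padding": "10, 10, 10, 10",
    "useAcrylic": true,
    "acrylicOpacity": 0.8,
    "colorScheme": "One Half Dark"
}

The commandline decides which shell the profile launches, while fontFace, padding and the acrylic settings are purely cosmetic.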
https://medium.com/capgemini-norway/pimp-my-windows-terminal-6fe803a2c34c
[]
2019-12-06 11:38:29.510000+00:00
['Windows', 'Terminal', 'Windows 10', 'Zsh', 'Ohmyposh']
Making Love in 2020: the Year of Truth
This post is my final contribution to a year-long series: 2020: The Year of Truth. In January, I listened to my inner voice as it laid out twelve monthly topics for the year and I have posted a blog each month accordingly. And still, in retrospect, I much prefer this “Year of Truth” way of describing 2020 to the more popular and oversimplified phrase, “#WorstYearEver”. I received a holiday gift of a 2020 snowflake ornament made with various sizes of the word Fuck fanned out in a cursive, subtly disguised, font — fanned out like the banality that is so easy, crude, and both un-evolved and uninspired. The description that came with this ornament (pictured here) even speaks to the Karen phenomenon, which is the subject of my new book, Beyond Karen (email me at Beyond.Karen@outlook if you want details). I get the urges to seek affiliation by desecrating reality; I struggle with it too. But aren’t we humans capable of so much more? As a writer whose Medium profile (prior to its current advertisement for my book) was simply, “Woman pursuing harmony,” it rankled my precious and divergent sensitivities to invest myself in offering up many thousands of words of embodied truth in exchange for a meager ten cent compensation. Definitely not even a penny for my thoughts. Doing what (or whom!) one loves is the advice of so many who have “made it” in and through the world. But does one have to start at the bottom with every new expression of itself? Or is there a more elegant way of emerging — of composing and arranging the multidimensionality of a living potential — on purpose? The topic I chose for this blog post back in January is “making love” and the nature of embodiment as a way of life. When most of us adults think of making love, we think of fucking. Even if we may be trying to be evolved and manifesting some expression of elegance in our lives, equating love with intercourse is very much a culturally programmed opinion to contend with. I contend that the best sexual lovemaking follows the cultivation of love awareness in as many ways as possible, whether moment by moment, as a daily practice, or as a lifelong ambition. To love the breath, the water one drinks and bathes with, to feel devotion to the natural world as fiercely as we devote ourselves to our partners — — — that … feels … like the answer to our multiple complex societal crises. Making love is the unique dance of listening to oneself and another as another listens to their self and you. We know that a clear and succinct form of feedback one gets when making love is orgasm, hence the natural pursuits and attentions we give it. Imagine if we had that clear of a response with each conversation — there would be either no minor or catastrophic mistakes ever made, or an awful lot of faked “orgasms”! There is such richness in the unknowing, in the exploration and frustration with the act of trying; and this effort to keep refining our actions, our approaches to dialogue, to behavior, to sentimentality, is what gives life its depth and meaning. If life feels meaningless, it is a signal to go exploring — to trust the void made by the awareness of not knowing something important and to trust the self, the body and mind to endure whatever frustration necessary while in pursuit of the desired answers, the subtle confirmation of hearing and being heard, of sensing and being sensed. I’m not widely known as a writer. 
I have had many interesting and satisfying career paths (as others have many interesting and satisfying lovers) but the embodied truth of my inner writer has consistently given me the sharpest edge against which to explore the void and frustration. The challenge of building a platform to engage others in honest thought feels logical — follow the X Y and Z formula of marketing, etc.— and yet evasive and unsatisfying … until I return to love. When love guides my life, my conversations, my writing, the giving is deeply intertwined with the receiving. I care less about the ten cents or whatever, and more about the listening and follow through action. The feedback I seek, the confirmation of being heard and seen, is faint like a whisper, like the communications which nature makes, evokes, in and with us. It can really be frustrating, like sexual arousal and a great buildup with no climax. But then again, such un-evolved and uninspired attachment to outcome or orgasm is crude and inelegant. Would that we all could trust the magnificent universe to guide us in all ways. On this 25th day of December, the day of celebrating the birth of Christ, it may be a lovely endeavor, a making of love, to reflect on the questions embodied within. And then as a gift of love to the self, listen to what emerges. Even if the Truth is, “I don’t know … and it’s okay.”
https://medium.com/@innerfortune/making-love-in-2020-the-year-of-truth-97f29488431f
['Karen Willard Ribeiro']
2020-12-26 13:39:30.911000+00:00
['Societal Crises', 'Making Love', 'Truth', 'Karens']
How Women Finally Got the Vote (with My Uncle’s Help)
Roberts earned a master’s degree during his time at the Alpine Institute and studied law in the office of a Livingston attorney. He was appointed superintendent of public instruction for Overton County while he built his law practice. In 1918 he was elected governor as a Democrat in an uneventful campaign overshadowed by the war in Europe and the influenza epidemic at home. In his short two years in the governor’s office, Roberts focused on tax reform. He delighted in giving examples of large holdings previously assessed at a small fraction of their true value and oversaw the amendment of tax laws to remove assessment inequalities. A new tax helped counties fund schools; the average elementary school teacher salary increased by 40 percent while the average high school teacher’s salary increased by 19 percent. A workers’ compensation law was passed, and the State Police Bill was signed into law, including historic provisions that prohibited lynching and other forms of mob violence. He even oversaw creation of the Tennessee Historical Commission. The anti-suffrage women circulated pamphlets accusing all suffragists of being ‘atheistic feminists who rewrote the Bible,’ ‘destroyed the home,’ and ‘blackened the honor of Robert E. Lee.’ As the 1920 primary election approached, however, Roberts’ campaign advisers had good reason to fear he might lose. Farmers hated the new tax laws, and labor was organizing against him because of his emphasis on “law and order,” which included sending National Guardsmen to break up strikes against the Carter Shoe Co. and the Knoxville Railway & Light Co. Gov. Roberts had managed to alienate almost everyone. In her 1978 American Heritage article, “Countdown in Tennessee: 1920,” Carol Lynn Yellin chronicled the tense day-by-day battle for women’s suffrage in Tennessee, describing Gov. Roberts as a “mild-mannered, deliberate teacher-turned-lawyer.” She agreed that his main worry was renomination, but carefully and thoroughly explained that criticisms of Roberts were unfounded. For one thing, Gov. Roberts was on solid ground when he insisted he could not call a special session of the sitting Tennessee legislature to ratify the controversial amendment until after the 1920 election. The state’s constitution required that the legislature vote on federal amendments only when legislators were elected after an amendment was submitted. This wasn’t a stalling tactic. Gov. Roberts was pressured by President Woodrow Wilson, but there’s no reason to believe he bowed to the pressure unwillingly. Catherine Kenny, ratification chair for the Tennessee League of Women Voters, had conceived the idea of asking Wilson to wire Gov. Roberts with a “loving message” urging him “to deliver the 36th state for the Democrats.” On June 23, 1920, Wilson indeed telegraphed the Tennessee governor with just such a message, Yellin wrote. The governor still hesitated, wiring the President on June 24 saying he had to consult his state attorney general. The governor was bombarded by an overwhelming avalanche of messages from both friend and foe, plus an onslaught of newspaper editorials. He finally scheduled the special session for Aug. 9, after he won renomination in the primary but before the general election in November. Suffragette and anti-suffragette forces converged on Nashville from all over the country. In the midst of the maelstrom, Gov. Roberts was turning out to be the most committed ratificationist of them all. 
The anti-suffrage women circulated pamphlets accusing all suffragists of being “atheistic feminists who rewrote the Bible,” “destroyed the home,” and “blackened the honor of Robert E. Lee,” and they liked to label all suffragettes as “she-males.” Josephine Pearson, president of the state’s branch of the Southern Women’s Rejection League, wrote to supporters across Tennessee, appealing for “active moral backing” to fight three “deadly principles” somehow hiding in the 19th Amendment, Yellin wrote. Pearson claimed that passage of an amendment giving women the right to vote would surrender state sovereignty, give Negro women suffrage, and in some unspecified way further the dreaded prospect of “race equality.” Prohibition also passed in 1920, and women advocating for temperance usually were suffragists as well. The effort to ban demon rum made perfect sense as another “women’s rights” issue, because drunk men beat up their wives and spent their paychecks on liquor. As a result, the liquor interests backed the anti-suffragist cause. The railroads and textile manufacturers also backed the “anti’s.” “Southern mill owners believed an inevitable result of woman suffrage would be sociopolitical demands for higher wages for women or, more inconvenient, the enactment of child labor laws,” Yellin wrote. In the midst of this maelstrom, she noted, Gov. Roberts was turning out to be the most committed ratification supporter of them all. With his nomination secured, Roberts opened up an unofficial caucus room in the State Capitol for legislators favoring the amendment and actively campaigned for its passage.
https://jacquewhitekochak.medium.com/how-women-finally-got-the-vote-with-my-uncles-help-94e2c5304e33
['Jacque White Kochak']
2020-08-23 20:18:15.445000+00:00
['American Politics', 'Family', 'Voting Rights', 'Nineteenth Amendment', 'Tennessee History']
Did the Protagonist Need a Backstory in Tenet?
Christopher Nolan’s more recent films have, in one way or another, been polarizing, to say the least. Whether it was the narratively messy The Dark Knight Rises, or the heavy-handed dialogue found in Interstellar, or the lack of character development in Dunkirk, there is no shortage of criticism that can be found being levied against Nolan films. And yet, for how prevalent this is for Nolan’s work, the criticism and critiques never seem to stick, at least not in the same way that it has for the likes of M. Night Shyamalan; which effectively sunk his career and reputation in a big way. The why behind Nolan’s success is truly fascinating. For while we can criticize his storytelling style all day long, we always find ourselves coming back for more. Which brings us to Nolan’s latest polarizing project, Tenet. Tenet is a fascinating character study — not of the protagonist, the, uh, Protagonist — but of Nolan himself. Tenet, probably more so than any other of Nolan’s recent projects gives us a glimpse into how he approaches his minimalistic storytelling process. The protagonist, the Protagonist Ironically, the most fascinating criticism about Tenet isn’t the preposterously crazy take on time travel, but about how the film presents its lead character, the protagonist who is purposefully known literally as the Protagonist. Many have taken humorous jabs at Nolan for this seemingly on-the-nose creative self-indulgence. After all, on the surface, naming your protagonist the Protagonist seems like the sort of thing a film student would do in an attempt to be artistically edgy and unique, but is instead groan-inducing. And while I’m not saying that Nolan couldn’t have nor shouldn’t have come up with a more appropriate naming convention, it makes me wonder, how much focus did Nolan plan on putting into Tenet’s main character in the first place? After all, the Protagonist feels like a shell of a character. He seemingly doesn’t have a fleshed-out backstory, and his motivations are unclear at best. While defenders of Tenet have tried to explain away the Protagonist’s coldness and aloofness, you can’t deny that those elements definitely exist within the character. Which I guess is kind of the point. At the end of the day, all you really need to know about the Protagonist is that he’s cold, efficient, and incredibly competent at his job. Only in very subtle instances do we see cracks in his exterior that hint at an underlying softness in his stoic shell. So while the Protagonist doesn’t have genuine character development, he does have character. Seeing a character react to their situation is character, whereas challenging the belief systems of a character is character development. With the Protagonist, we see him react to plenty of unusual circumstances, but we never get a firm grasp of why he has chosen to face these challenges in the first place or how it makes him feel. After all, you can’t have character development if the character doesn’t grow or shift their mindset in a meaningful way. And the problem with the Protagonist is that we have no idea what he believes. But by naming the protagonist the Protagonist, Nolan effectively stripped the character down to its naked core. In a way, Nolan naming the main character the Protagonist is simply his way of saying, ‘This story isn’t about the character. It’s about the story. Oh, and by the way, he’s the good guy, and he knows he’s the good guy.’ In the tech development industry, many teams have adopted the Lean methodology. 
Simply put, Lean is meant to help product teams focus on small, doable tasks while cutting out the fat of digital products. In this way, you focus on expanding the elements of your product that are essential. In much the same way, Tenet seems like part of an extended experiment on Nolan’s part in a quest to find the most efficient way to tell overly complicated stories. No matter whether you think the use of the title protagonist is fitting or inherently silly, you have to admire Nolan for creating such a complicated story in such a lean, efficient way. Do Characters Need Backstory? But all this got me thinking. Is the Protagonist a cold and aloof character simply because he has no backstory, or is there more to the story itself? After all, the Protagonist is far from the first action hero that has no backstory. The first example that came to mind for me is one of my favorite heroes, Ethan Hunt, in the Mission Impossible franchise. For as iconic of a character that he is, what do we really know about Ethan, exactly? The first film alludes to his upbringing in a small rural town and mentions his mother and Uncle Donald, but besides that, we know nothing about Ethan’s past. Was he in the military or CIA before joining the IMF? Does he have siblings? What did he have to overcome personally and professionally to get to be an IMF agent? The fact is, we simply don’t know. Funnily enough, Tom Cruise’s spy character in the criminally under-appreciated Knight and Day has more backstory than Ethan Hunt does in the Mission Impossible films. And what about characters like Jason Bourne, which is a character who’s past is deliberately held back from the audience. (Except for Matt Damon’s last entry into the franchise, but we don’t talk about that.) How is it that a character can be successful like Jason Bourne when the only things we know about him are the same things that the character knows about himself? The answer is that in these cases, their past simply doesn’t matter. What matters is how the characters react and respond to obstacles in the moment. In The Bourne Identity, we see Bourne struggle with his amnesia, even going so far as to lash out verbally due to his frustration. We also are able to get into his mind to see how he solves problems, such as when he’s escaping from the embassy. With Ethan Hunt, we get emotionally invested with him as he deals with the turmoil of seeing his team murdered right in front of him in the original Mission Impossible film. We get to see the aftermath as he struggles with figuring out what to do next, while also dealing with emotional fatigue. These are only a couple of examples that would seem to suggest that characters don’t need backstories for us as the audience to identify and empathize with them. Which raises the question, are backstories even necessary at all? Depends On the Story It’s been posited by some online commentators that backstories are unnecessary. I’ve heard arguments be made that you can watch The Dark Knight without having seen Batman Begins and still be able to understand and become engaged in Bruce Wayne’s story. While this is true, it’s a fact that even though they’re in the same trilogy, The Dark Knight has a totally different story to tell than Batman Begins. You can’t just simply take the storytelling style of the Dark Knight and make Batman Begins. It just wouldn’t work, and vice versa. Including a backstory or not is completely predicated on the type of story you want to tell. 
Are you telling a tight, lean spy story that’s mostly focused on espionage and mind-games, or are you diving into a character study where the character’s depth is important to the story and the progression of the plot? While The Dark Knight is, at its core, a crime thriller, Batman Begins is a character study about Bruce Wayne’s childhood trauma. Both are great stories in their own right, but they’re not equal because they’re not the same. So no, backstories are not a tool to simply be thrown away. At the same time, not every story ever written needs one either. Ultimately, it just depends. Every type of story has pros and cons. With a character study like Batman Begins, you gain the ability for the audience to empathize and become emotionally invested in the hero’s journey, whereas with a crime thriller like The Dark Knight, you can place all your focus on the character’s actions and reactions. What about Tenet? I went to go see Tenet in theaters with a couple of my brothers, and afterward, while discussing the film, one of my brothers pointed out that in Tenet, it wasn’t the Protagonist’s lack of backstory that was the problem with his character, but that we didn’t get to see him respond in a human way to the obstacles he encounters. With every new obstacle or piece of information he learns, he accepts everything in stride without ever reacting in a relatable way for the audience to empathize with. For all intents and purposes, the Protagonist is effectively emotionless. David Washington does what he can with the character, and I quite liked him in the role, but his character almost felt more robotic than human, like an AI always trying to figure things out while not having any underlying emotions to connect with. Yes, we get to see him making difficult decisions, but we don’t really get to see the effect that those decisions have on him as a person. On top of that, the Protagonist only asks direct questions and doesn’t ask for elaboration. For someone who is experiencing a scientific anomaly, he seems numb for most of the runtime since nothing that happens in the course of the story seems to pique his curiosity in the slightest. In a way, the Protagonist feels more like he’s caught up in the current of the story and is just along for the ride as opposed to being an active participant in the plot. Which, once again, might be kind of the point, but I won’t go into spoilers here. Ultimately, with Christopher Nolan’s screenplay, the themes, concepts, and storytelling beats took precedence over the characterization of the characters. Which, in a way, is logical and totally warranted. Tenet has so many complicated twists and turns that it’s hard to just keep up with what’s happening in the story. If Nolan had inserted deep characterization into the plot, it potentially could have just become too bloated to be engaging. In essence, Nolan sacrificed characterization for the sake of the plot. Was that the right decision to make? Well, not only does a story depend on the type of story that the storyteller intends to tell, but it also depends on what the audience expects of certain stories as well. After Christopher Nolan’s Dunkirk, I was expecting Tenet to be more of a visual and audio spectacle more than a deep character study. In that sense, Tenet totally paid off for me, because while I didn’t become emotionally attached to the characters, I was fully engaged with the story. 
So while I sympathize with people who saw Tenet and were disappointed at the lack of characterization — and I’ll readily admit that they’re definitely not wrong for thinking so — I’m not convinced that Christopher Nolan made the wrong decision to forego characterization for the sake of the story. While I think a more nuanced director like Doug Liman could have turned Tenet’s protagonist into a more relatable character — which probably would have translated into a better movie overall — I’m also simultaneously amazed at the sheer scope and visceral energy of Tenet’s story and filmmaking. Tenet is one of those movies that keeps you thinking about it for days afterward. Conclusion While Tenet told its story in a lean and satisfactory way, it was missing a human element to ground the story on an emotional level. What this boils down to is that Tenet is one of Christopher Nolan’s lesser movies, but also one of his most fascinating. Tenet tells a story that doesn’t resonate with me emotionally, but the plot keeps the analytical side of my mind constantly engaged. Much like Ad Astra, that was so cold and emotionless as to render the audience numb, Tenet ultimately was a lesser film because it only hooked me intellectually, not emotionally. In general, the best films are able to do both, but that doesn’t mean that Tenet was a mistake. In short, Nolan knew the story he wanted to tell, and he did it in the most efficient way possible. If you enjoy movies and liked this story, give me some claps and follow me for more stories like this!
https://medium.com/oddbs/did-the-protagonist-need-a-backstory-in-tenet-bc7a80974fd0
['Brett Seegmiller']
2020-10-06 18:58:30.181000+00:00
['Storytelling', 'Cinema', 'Film', 'Writing', 'Movies']
Uber Data Scientist Interview Experience
Application

For the last couple of months, I have been actively applying for Machine Learning jobs. My main source of applications has been LinkedIn. I applied for the position of Data Scientist at Uber via LinkedIn. The job responsibilities included the implementation of ML algorithms for challenging real-life problems related to Uber Ride and Uber Eats.

Recruiter’s Email

I received an email from the recruiter within two weeks. The recruiter asked for my availability and also provided a detailed document mentioning the interview timeline and useful resources. The document was really helpful for me to get a good understanding of what the interview process would look like.

Phone Screen 1:

I had two phone screenings ten days apart. Each screening was a 45-minute video interview and followed the same format. In the first 5 minutes, the interviewer introduced him/herself, followed by my introduction and a walk-through of my resume. The first phone screen interview consisted of two parts.

Case Study: The case study consisted of a relevant open-ended problem similar to UberEats. The interviewer and I discussed various aspects of the problem, such as understanding the objective, collecting the data, carrying out exploratory data analysis, scaling the system, important KPIs for the problem, underlying Machine Learning solutions, deploying the solution, and integrating it into the existing system. The purpose of this case study was to evaluate my approach to large-scale Machine Learning problems.

Coding: The second part consisted of a medium coding question via CodeSignal. I was asked to choose the language of my liking. The interviewer was very clear when explaining the problem. I was able to solve the problem optimally in the given time. In the end, the interviewer asked me if I had any questions. We had a really good discussion on what his job responsibilities are at Uber.

Phone Screen 2:

I heard my first phone screen interview feedback in 3 days, and my next phone screen interview was scheduled within a week. Similar to the first interview, this interview had two parts.

Machine Learning basics: The interviewer went through some basic to intermediate level questions related to machine learning, such as backpropagation, graph ML, ensemble methods, vanishing gradients, precision vs recall, bias-variance trade-off, etc. A good understanding of these topics can be found on the useful link below.

ML case study: Similar to the first interview, an open-ended case study related to UberEats was discussed. Contrary to the first interview, this case study was much more complex and more machine learning and statistics oriented. I was asked to design an ML-based solution for a related product-based problem. The interviewer asked relevant, detailed questions at each step. The discussion moved from a simple logistic regression model to slightly more complex SVMs and decision trees, and finally to even more complex graph ML models.
We also discussed important features of the available data set and how to use them towards solving the problem. Again, the interviewer was very friendly and made sure the interview environment was comfortable for me.

On-site

The onsite interview consisted of 6 rounds spanning a total time of 5–6 hours with two 15-minute breaks. They gave me enough time to explain my answers and made me feel welcome. They eased me into the process by first introducing themselves and describing their job responsibilities. None of the interviewers were distracted; they were completely focused on what I had to say. I have grouped the six rounds into the following categories of my own, based on the nature of the questions asked.

1. Design Problem: This interview consisted of an open-ended case study related to designing an online grocery store. I walked the interviewer through the process of data collection, exploratory data analysis, feature selection, feature transformation, machine learning model selection, training the model, and selecting the right KPIs to analyze the performance. This round was much more focused on the bigger picture. The interviewer was interested in my high-level understanding of the problem and not in the technical details of the algorithms. We also discussed a few business-related KPIs to gauge the solution to the problem.

2. Coding: The coding interview was focused on my understanding of statistics. The interviewer asked me to implement a problem that involved hypothesis testing. Since I was used to LeetCode-type questions related to real-world problems, this coding question gave me a tough time. It took me some time to understand what the interviewer wanted me to implement. The interviewer gave me a few hints when I got stuck. This coding question had multiple follow-up coding questions. A good understanding of the section can be found below.

3. Research: This part of the onsite focused solely on my prior research experience. The interviewer asked me to explain my Ph.D. research, asking follow-up questions. This part was more of a discussion. The interviewer went through my CV in detail and asked relevant questions on projects that I had mentioned. He asked me to pick a research project and explain in detail the objectives, setbacks, and final results.

4. Behavioral: The interviewer asked me questions regarding my experience as a Ph.D. student, as an intern, and as a Machine Learning engineer. The questions revolved around my leadership qualities, time management, meeting deadlines, dealing with a difficult colleague, and conflict resolution. A good understanding of such questions can be found below.

5. Machine Learning/Statistical: This section was an open-ended discussion related to one of Uber’s products. The problem was presented as a case study and the interviewer was most interested in my ML approach to the problem. We started with a very basic ML solution, discussed how this simple approach would fail in real-life cases, and moved on to more complex solutions. Throughout the interview, we discussed various ML concepts and how changing a few things would address issues related to the ML pipeline. Topics such as graph ML models, embeddings, loss functions, etc. were discussed as well. A few of the related topics can be found on the link below.

6. Case Study: This was similar to the Design Problem and the Machine Learning/Statistical round, but the Uber product under consideration was different.
The interviewer was interested in my high-level understanding of designing a solution to a given problem and how I would design a solution when scaled to larger data sets. The case study was somewhat similar to the design problems asked at FAANG companies.

Summary:

Overall my interview experience with Uber was really good. The interviewers were focused on what I had to say and gave me ample time to explain my answers. The interviews mainly focused on design problems/case studies, machine learning concepts, coding problems, and my research background. I found the following resources useful while preparing for the interview.

Uber Engineering Blog

Things to focus on with case studies/design problems:

Scalable solution: With over 90 million active users, Uber is a company that gives significant importance to scalable solutions. Whenever answering a case study or design problem, make sure your solution scales.

Customer-oriented approach: Uber deals with people directly. The customers are the general public. Hence your solution should give more importance to making lives easy for customers. The design problem should be approached keeping the user experience in mind.

Business KPIs: Finally, the end goal for Uber is to generate revenue. Your solutions should focus on user retention, user churn, new users, number of riders, number of orders with Uber Eats, new restaurants being added to Uber Eats, etc. Ask yourself how these metrics eventually impact the revenue generated.

If you have an interview lined up with Uber, I hope this article helps you.
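To give a flavour of the statistics-oriented coding round mentioned above, here is the kind of exercise worth practising. It is not the actual interview question, and the numbers are made up; it simply sketches a two-sample permutation test for a difference in means in plain Python/NumPy.

import numpy as np

def permutation_test(sample_a, sample_b, num_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means."""
    rng = np.random.default_rng(seed)
    a = np.asarray(sample_a, dtype=float)
    b = np.asarray(sample_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(num_permutations):
        rng.shuffle(pooled)  # reshuffle the group labels
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(perm_a.mean() - perm_b.mean()) >= observed:
            count += 1
    # +1 correction keeps the p-value away from exactly zero
    return (count + 1) / (num_permutations + 1)

# Hypothetical data: ratings before and after some product change
before = [4.1, 3.9, 4.3, 4.0, 3.8, 4.2, 4.1, 3.7]
after = [4.4, 4.5, 4.2, 4.6, 4.3, 4.5, 4.1, 4.4]
print(f"p-value: {permutation_test(before, after):.4f}")  # small value suggests the difference is unlikely to be chance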
https://towardsdatascience.com/uber-data-scientist-interview-experience-78305114540c
['Aqeel Anwar']
2021-07-18 03:03:36.229000+00:00
['Uber', 'Data Science', 'Artificial Intelligence', 'Interview', 'Machine Learning']
Python Map Reduce Filter Tutorial Introduction
Map, Filter And Reduce In Pure Python

The concepts of map, filter and reduce are a game changer. The usage of these methods goes way beyond Python, and they are an essential skill for the future.

Map, Filter and Reduce (Image by Author)

The Basics

Map, filter and reduce are functions that help you handle all kinds of collections. They are at the heart of modern technologies such as Spark and various other data manipulation and storage frameworks. But they can also be very powerful helpers when working with vanilla Python.

Map

Map is a function that takes as an input a collection, e.g. a list ['bacon','toast','egg'], and a function, e.g. upper(). It will then pass every element of the collection through this function and produce a new collection with the same count of elements. Let’s look at an example:

map_obj = map(str.upper,['bacon','toast','egg'])
print(list(map_obj))
>>['BACON', 'TOAST', 'EGG']

What we did here is use the map(some_function, some_iterable) function combined with the upper function (this function uppercases each character of a string). As we can see, we produced for every element in the input list another element in the output list. We always receive the same number of elements in the output as we put in! Here we sent 3 in and received 3 out, which is why we call it an N to N function. Let’s look at how one can use it.

def count_letters(x): return len(list(x))

map_obj = map(count_letters,['bacon','toast','egg'])
print(list(map_obj))
>>[6, 5, 3]

In this example we defined our own function count_letters(). The collection was passed through the function and in the output, we have the number of letters of each string! Let’s make this a little bit sexier using a lambda expression.

map_obj = map(lambda x:len(list(x)),['bacon','toast','egg'])
print(list(map_obj))
>>[6, 5, 3]

A lambda expression is basically just a shorthand notation for defining a function. If you are not familiar with them you can check out how they work here. However, it should be fairly easy to understand how they work from the following examples.

Filter

In contrast to map, which is an N to N function, filter is an N to M function where N≥M. What this means is that it reduces the number of elements in the collection. In other words, it filters them! As with map, the notation goes filter(some_function, some_collection). Let’s check this out with an example.

def has_the_letter_a_in_it(x): return 'a' in x

# Let's first check out what happens with map
map_obj = map(has_the_letter_a_in_it,['bacon','toast','egg'])
print(list(map_obj))
>>[True,True,False]

# What happens with filter?
map_obj = filter(has_the_letter_a_in_it,['bacon','toast','egg'])
print(list(map_obj))
>>['bacon', 'toast']

As we can see, it reduces the number of elements in the list. It does so by calculating the return value of the function has_the_letter_a_in_it() and only keeping the values for which the expression returns True. Again this looks much sexier using our all-time favorite lambda!

map_obj = filter(lambda x: 'a' in x, ['bacon','toast','egg'])
print(list(map_obj))
>>['bacon', 'toast']

Reduce

Let’s meet the final enemy and probably the most complicated of the 3. But no worries, it is actually quite simple. It is an N to 1 relation, meaning no matter how much data we pour into it we will get one result out of it. The way it does this is by applying a chain of the function we are going to pass it. Out of the 3, it is the only one we have to import from the functools module.
In contrast to the other two, it is most often used with three arguments, reduce(some_function, some_collection, some_starting_value); the starting value is optional, but it is usually a good idea to provide one. Let’s have a look.

from functools import reduce

map_obj = reduce(lambda x,y: x+" loves "+y, ['bacon','toast','egg'],"Everyone")
print(map_obj)
>>'Everyone loves bacon loves toast loves egg'

As we can see, we had to use a lambda function which takes two arguments at a time, namely x and y. It then chains them through the list. Let’s visualize how it goes through the list:

x="Everyone", y="bacon": return "Everyone loves bacon"
x="Everyone loves bacon", y="toast": return "Everyone loves bacon loves toast"
x="Everyone loves bacon loves toast", y="egg": return "Everyone loves bacon loves toast loves egg"

So we have our final element "Everyone loves bacon loves toast loves egg". Those are the basic concepts to move with more ease through your processing pipeline. One honorable mention here is that you cannot assume in every programming language that the reduce function will handle the elements in order; in some languages the result could be "Everyone loves egg loves toast loves bacon".

Combine

To make sure we understood the concepts, let’s use them together and build a more complex example.

from functools import reduce

vals = [0,1,2,3,4,5,6,7,8,9]

# Let's add 1 to each element >> [1,2,3,4,5,6,7,8,9,10]
map_obj = map(lambda x: x+1,vals)

# Let's only take the uneven ones >> [1, 3, 5, 7, 9]
map_obj = filter(lambda x: x%2 == 1,map_obj)

# Let's reduce them by summing them up, ((((0+1)+3)+5)+7)+9=25
map_obj = reduce(lambda x,y: x+y,map_obj,0)

print(map_obj)
>> 25

As we can see, we can build pretty powerful things using the combination of the 3. Let’s move to one final example to illustrate what this might be used for in practice. To do so, we load up a small subset of a dataset and print the cities which are capitals and have more than 10 million inhabitants!

from functools import reduce

# Let's define some data
data=[['Tokyo', 35676000.0, 'primary'], ['New York', 19354922.0, 'nan'], ['Mexico City', 19028000.0, 'primary'], ['Mumbai', 18978000.0, 'admin'], ['São Paulo', 18845000.0, 'admin'], ['Delhi', 15926000.0, 'admin'], ['Shanghai', 14987000.0, 'admin'], ['Kolkata', 14787000.0, 'admin'], ['Los Angeles', 12815475.0, 'nan'], ['Dhaka', 12797394.0, 'primary'], ['Buenos Aires', 12795000.0, 'primary'], ['Karachi', 12130000.0, 'admin'], ['Cairo', 11893000.0, 'primary'], ['Rio de Janeiro', 11748000.0, 'admin'], ['Ōsaka', 11294000.0, 'admin'], ['Beijing', 11106000.0, 'primary'], ['Manila', 11100000.0, 'primary'], ['Moscow', 10452000.0, 'primary'], ['Istanbul', 10061000.0, 'admin'], ['Paris', 9904000.0, 'primary']]

map_obj = filter(lambda x: x[2]=='primary' and x[1]>10000000,data)
map_obj = map(lambda x: x[0], map_obj)
map_obj = reduce(lambda x,y: x+", "+y, map_obj, 'Cities:')
print(map_obj)
>> Cities:, Tokyo, Mexico City, Dhaka, Buenos Aires, Cairo, Beijing, Manila, Moscow

If you enjoyed this article, I would be excited to connect on Twitter or LinkedIn. Make sure to check out my YouTube channel, where I will be publishing new videos every week.
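One closing note on design: for simple pipelines like the number example above, the same result can be had with a comprehension and the built-in sum, which many Python programmers find easier to read; map, filter and reduce really shine once you are composing reusable functions or moving to frameworks such as Spark. A quick sketch of the two styles side by side:

vals = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# map/filter/reduce version, condensed into a single expression
from functools import reduce
total = reduce(lambda x, y: x + y, filter(lambda x: x % 2 == 1, map(lambda x: x + 1, vals)), 0)
print(total)  # 25

# Equivalent comprehension: add 1, keep the odd results, sum them up
total = sum(x + 1 for x in vals if (x + 1) % 2 == 1)
print(total)  # 25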
https://towardsdatascience.com/accelerate-your-python-list-handling-with-map-filter-and-reduce-d70941b19e52
[]
2020-12-01 14:56:19.868000+00:00
['Python', 'Mapreduce', 'Tutorial', 'Programming', 'Intro']
摘字集:Encore
ENCORE en·core | \ ˈän-ˌkȯr \ Definition of encore 1: a demand for repetition or reappearance made by an audience 2: a reappearance or additional performance demanded by an audience 3: a second achievement especially that surpasses the first (Edited from Merriam-Webster Dictionary)

The place I lived in Ottawa was a small little community. The main road the buses run along is called Baseline, and the stop is called Baseline/Farlane, meaning the intersection of Baseline and Farlane. Most bus stops here are named this way. Once you leave downtown, all you can see are small houses, one after another; there are hardly any shops, schools or markets that could serve as landmarks. Ottawa is a young city, its downtown streets wide and straight, and there was once a rule that no building could be built taller than the Parliament buildings. A bus often runs one road all the way to the end, and the small shops and restaurants along it cluster in only three to five blocks. I often took the number 6 bus back and forth from Billings Bridge, and I loved the street names on the stretch between Lansdowne and Parliament: Bank Street, downtown’s liveliest street, joined by pleasant-sounding syllables like Flora, Gladstone, Somerset and Gloucester that carried no particular meaning for me. Transferring at Billings Bridge to the number 88 bus would bring me back to Baseline, and along the way I slowly memorized the order of Prince of Wales, Lexington, Fisher, Marson, Zena.

Night trip on baseline

After getting off at Baseline/Farlane and walking in along Farlane, you first pass a retirement home with an Italian name; a small blackboard by the front gate occasionally announces the activities planned for the weekend. Diagonally behind the retirement home is a small lot fenced in with waist-high wire, holding brightly coloured plastic slides and ride-on cars; on weekday mornings many nannies bring toddlers there to play, stumbling about in the snow even in winter. Walk a little further and you see the small blue street sign for Encore Private; turn right and it is all little houses with similar layouts in reddish brown, pink and orange, and the one with flowers and plants growing freely in the front yard was where I lived. The window on the left was my room. It had wooden shutters, opened and closed by hand with the linked white slats, while the storm window had to be opened by fitting on the handle yourself and turning it counter-clockwise. In winter I would stand over the heating vent under the window and open it for a moment, letting wind and snow edged with ice blow in; I cannot think of any metaphor that fits a Taiwanese Chinese context to put in place of that sentence.

Living in someone else’s house while that someone still lives in it is, on the whole, not a wise decision, but for that stretch of time my room really was my room. Things I did not want others to see did not have to be seen; the books on the shelf could sit flat and neat against the wall; nothing but clothes occupied the high and low shelves of the closet; the wall could be covered with notes full of baffling ideas and jumbled sentences; if I wanted to keep snacks under the bed, I kept snacks under the bed. I strung the room key and the front-door key together on the card holder that held my student ID and U-Pass, and when going out I would take a photo after locking the door, so I would not spend the whole day uneasy, unable to remember whether I had locked it. Photos of my left hand, tanned dark, kept pale, frozen blue or flushed red, pressing down on the door handle: over the year I must have taken more than a hundred of them. Another thing that became a habit was the walk back after getting off at Baseline/Farlane. In winter the sun sets early and the sky is pitch black by six; walking down the empty street I would often sing 安九 at the top of my voice, a five-minute song matching a five-minute walk, and I would arrive just as the last chorus ended.

House on Encore

The days living on Encore were mostly happy ones. It has been four months since I left Ottawa, and the next chapter needs to move forward properly too. I hope that by the time I finish this series I will have set out on my next journey, and I hope this series keeps getting written; sometimes I trust the things I write down more, and I want to trust this.

"In those times when happiness and sadness were both written on our faces / we did not need to hide / our emotions"
https://medium.com/%E5%9C%A8%E5%9C%B0%E7%90%83%E8%B7%9F%E6%9C%88%E4%BA%AE%E4%B9%8B%E9%96%93%E5%A1%9E%E4%B8%80%E9%A1%86%E6%98%9F%E6%98%9F/%E6%91%98%E5%AD%97%E9%9B%86-encore-1de023210bc6
[]
2020-08-13 15:25:18.499000+00:00
['交換學生', '摘字集', '加拿大', '渥太華', 'Ottawa']
Implementing commitments beyond 2020
Mei Corbett

Our global appetite for commodities such as palm oil is driving tropical forest loss, photo by CIFOR, via flickr.com, creative commons licence

With the 2020 New York Declaration on Forests deadline to halve natural forest loss and eliminate commodity-driven deforestation in sight, not one of the companies and financial institutions assessed by Global Canopy for the Forest 500 2018 ranking, launched today on International Forests Day, is set to meet this goal. Forest 500 ranks the forest-risk commodity policies of the most influential companies and financial institutions acting in the global palm oil, soy, cattle and timber supply chains. Their rank depends on their commitments and actions towards ending forest loss in their supply chains or portfolios. Of the 350 companies assessed, 164 have committed to ambitious targets to reduce deforestation by 2020, in line with the goals set by the New York Declaration on Forests (Goal 2) and by the Consumer Goods Forum. Worryingly, many of these companies do not appear to be implementing them across their supply chains. Over 40% of the Forest 500 have still not made commitments to tackle the deforestation that they are linked to.

The good news

The number of companies with commitments to address deforestation has steadily increased since Forest 500 began tracking company policies in 2014. Now 57% of companies have a commitment to protect forests for at least one of the commodities they are exposed to on the final stretch towards 2020. The assessment focuses on ‘forest-related’ commitments to recognise the efforts of companies that look beyond broader sustainability commitments and focus on protecting priority forests and/or eliminating deforestation completely within their supply chains. Commitments continue to push further, with ambition moving beyond reducing deforestation to zero conversion of forests, such as Cargill’s commitment to sustainable soy, announced after the 2018 assessment was completed. This is an encouraging step in the right direction, but it depends on their promise of a time-bound action plan.

The pathway to deforestation-free supply chains

Updated methodology measures implementation actions

But the most influential companies must take action to ensure that their commitments are implemented. So in 2018, we introduced five implementation indicators to assess key actions that companies should be taking to implement their commitments. They reflect the urgent need for powerbrokers to take action towards reducing deforestation in their supply chains.

Implementation gap

Forest 500 has identified that in total only 50 of the companies assessed in 2018 reported on some implementation activities for all of the commodities they are exposed to. And almost a third of commodity-specific forest-related commitments made by companies did not include any implementation actions for any of the five indicators.
https://medium.com/global-canopy/implementing-commitments-beyond-2020-414f30dafc73
['Global Canopy']
2019-03-21 12:48:37.361000+00:00
['Palm Oil', 'Sustainability', 'Deforestation', 'Companies', 'Cattle']
My Unconventional Method for a Healthier Body Image
I’m someone who used to struggle a lot with body image. A lot of the progress I’ve made comes from eventually reaching a point where I was sick and tired of my body image being in the way. In 2014 I decided to quit counting calories and finding new workout regimes, and instead focus solely on finding ways to love myself. On my journey to self-acceptance, there was particularly one aspect that I could never figure out: Why do I struggle with body image when I’m otherwise independent and open-minded? For that reason, I felt like my body image issues and the voice that told me all the negative things about myself were almost a separate part of me — a devil on my shoulder. It never aligned with anything else that I thought, valued, and stood for. Naturally, I struggled to understand why I had this devil there. Was its agenda just to make me feel bad? And if so, how could I stop it? I tried to do the advised thing and think more positively. When I caught the little devil saying “You can’t wear shorts with those thighs”, I would answer back “My sole purpose for wearing shorts, or exist in general, is not to please other people’s eyes” It worked there and then, but it also became tiring after a while. I gradually discovered that my problem was this: My values and identity were part of my conscious mind, but my body image issues came from my subconscious mind. My psyche was storing something I wasn’t fully aware of: a lesson that I learned at some point, a memory, or emotions, that I hadn’t processed fully, didn’t like, didn’t accept, or didn’t want to have there. And telling it to just go away wasn’t helping. This led me onto a very different road to healing my body image, and it required a different approach than I was used to. The role of our subconscious I’ve never heard much about the role of the subconscious in body image work. In my experience, there’s a tendency, especially in the self-help world, to “go towards the light”; meaning that there’s a lot of advice out there to think positive, empowering thoughts. The logic behind it makes sense; we have negative thoughts, so we attempt to replace them with positive ones. Usually, it involves positive affirmations that we say to ourselves, or purposefully doing positive self-talk. In my own experience, it was never that easy. If we could control our own thoughts, we would never be unmotivated for anything, and we would never procrastinate, because we would always be able to “think positively” about anything we want. I agree that we always have a choice in how we view a situation, which in turn can change our thoughts, but to control our thoughts is very difficult and requires a lot of strength and perseverance. Secondly, if we always try to replace our negative thoughts or “put something on top of them” as a band-aid, we will have to do it over and over again, because we haven’t looked at the core reason for why they’re there in the first place. Our unwanted, seemingly irrational, thoughts that guide the behavior we want to get rid of, are often coming from emotions, whether we are consciously aware of them or not. In our subconscious lies the parts of your psyche that you haven’t recognized or haven’t processed. They are an equally important part of you as your thoughts are, and they sometimes make themselves heard through acting “irrational”, because we haven’t listened to them, or we haven’t learned to listen to them yet. 
For instance, when we procrastinate, or when we do things we later regret or we feel out of our control, it’s because our emotions are making themselves known. We might not be aware of them, but they are exercising their right to a say in our behavior. “Until you make the unconscious conscious, it will direct your life and you will call it fate” — C. G. Jung The subconscious can be reminiscent of monsters from old folklore or mythology: the monster might be scary, but expose it to sunlight and it will die. Its’ role in body image If you ask someone why they struggle with body image, a lot of people will mention social media, traditional media, advertising, our generation’s obsession with perfection, etc. This is the logical answer, the answer most of us have in common, and the answer we already know. Now it’s time to investigate our subconscious attitudes — the parts we’re not aware of. If you’ve ever been in therapy, you might be familiar with the 2-chair technique: the therapist points to a chair next to you, and asks you to imagine someone sitting there (your father, mother, sibling, your 6-year old self etc.) and asks you “What would you say to them?” Maybe the therapist asks you to sit in the other chair yourself and answer what you believe the other person would say. Then you go back to your own seat and answer back. You catch my drift. This technique can also be used to talk to aspects of ourselves. For instance, the parts of us that demand that holds us to perfectionist standards, or that thinks— no, demands! — we should be on a diet. Spoiler alert: This is what I did in therapy myself. And the results were, in one word (or two, with a hyphen): eye-opening. Instead of constantly pushing it away, I was finally having a real conversation with the devil on my shoulder — my “perfectionist self”. And not just an ordinary conversation, but a heart-to-heart. I’ve always heard in advice about loving myself that “absolutely every aspect of yourself deserves love, and if you have negative thoughts, show them love too”. Advice like that has never made sense to me, because why should I show something love when it doesn’t serve me, but makes my life miserable? How am I supposed to “love” the perfectionist voice in my head? This is where the true value of this technique lies. When I started having a heart-to-heart with “my perfectionist self”, I started to understand it. I understood why it was created, and why it has continued to live. In my own case, my perfectionist self exists to protect me. Filling me with fear about what others believe about me, is a protection mechanism. I’m absolutely not saying it has employed the right strategy of never being happy with anything, but now I can at least understand it. From there, I (together with my therapist) tried to figure out where it started, what experiences made it stronger and more convinced of its mission, and then see those again with my adult eyes, instead of my younger eyes who might have seen situations very differently than I would as an adult. This is also where the second phase comes in: bargaining. To ask my perfectionist self if there’s anything I can do to calm it down. Is there anything I can do to show it that I’m “safe” and don’t need protection? And that’s it. That’s what worked for me in therapy. Hopefully, this is of help for someone else out there too. I highly recommend therapy for anyone to become happier, and to get the full effect of this technique.
https://medium.com/beyond-the-body/my-unconventional-method-for-a-healthier-body-image-fe26f1addbe3
['Trine F.']
2021-05-20 15:57:39.789000+00:00
['Body Confidence', 'Beauty', 'Body Positive', 'Body Image', 'Self Confidence']
My Gay Shoes Scare Me
My Gay Shoes Scare Me Image from author Recently I bought these shoes. Despite being a writer in the early days of my career, and therefore generally unable to afford frivolous expenditure, this was a purchase that I simply could not deny myself. I saw them while engaging in some fairly stressful Christmas shopping and was immediately smitten by their beautiful and wonderfully queer design. Though I often dress and act in ways that may be construed as ‘queer’, I have never bought or worn an item bearing the rainbow flag before. To do so always felt just a little bit too ‘out there’, as if being so public was tempting fate. Basically, the prospect of showing my pride scared me, and it still does. Touched by the rainbow The symbol of the LGBTQ+ rainbow flag has a long and complicated history, one which many people have recorded far better than I ever could. It is a banner of hope and defiance, and one that continues to be held close to the hearts of many in this community, including my own. In fact, the ever-growing culture of using flags, colours, and symbols to identify various groups within the queer community has been a joy to watch unfold (flag pun intended). Icons such as these serve as talismans for our identities, shining out into the world like beacons of belonging. Some of us use these symbols to show our pride, others use them to make themselves visible to others in the community, and a few outside our community may even bear our icons to show their support for our cause. Needless to say, symbols like the rainbow flag have become the coat of arms for our community and, like any good insignia, they have become widely recognisable. While this is fantastic in terms of increasing visibility, it is also a problem… because it increases our visibility. Back to the shoes. I am proud of my identity, and of the ways in which I contribute to our community and our continuous battle against bigotry and oppression. I have no problem telling people who I am and what I believe. The problem is that by wearing an LGBTQ+ symbol, not least one as widely recognised as the rainbow flag, I’m not directly telling anyone anything. I don't have control over the interaction. I cannot decide how to broach the subject, what tone to use, or how much caution to exercise. Instead, I am just walking by and information about me, about my life, is freely available to all who see me, open to interpretation and devoid of the safeguards of person to person conversation. This can be dangerous. If someone who wishes ill upon our community sees me wearing my new favourite shoes, they will assume a number of things about me. In turn, based on those assumptions, they may choose to take action against me. My character has been assessed and my punishment decided upon without a single word being shared. There is no humanity in that, no opportunity to develop a perception of me that goes beyond my sexuality. Oh, the humanity… There is a reason why people suffer homophobic abuse despite not being queer. It’s because many abusers act on assumptions rather than evidence. They have no interest in conversation. They care only for the vicious application of their personal ideology. Not to mention the fact that it is much easier to beat the living hell out of a stranger than a person with whom you have had an actual conversation. Instant assumption and swift punishment allow abusers to think of their targets as mere animals rather than thinking, feeling human beings. This is not always the case, of course. 
Some bigoted attacks are premeditated well in advance, and many are perpetrated by people the victim knows, sometimes even members of their own family. But many attacks are also spontaneous. Toxic hearts beat faster when twisted minds see an opportunity to strike. Some people are just on the lookout for targets. The sad truth is that wearing your pride openly, in any form, makes you one of those targets. Your beacon of pride, emblazoned on your person for all to see, can also serve as a magnet for bigotry and abuse. Pride, as a queer person, is a dangerous thing to have. That's why my new shoes, unreservedly fabulous though they are, scare me every time I put them on. Stepping out of the house wearing my identity so openly fills me with pride, but the echo of apprehension is never far behind; a maelstrom of anxious ‘what ifs’ that are not nearly as far-fetched as they should be. Yet it seems that many people beyond the realms of our community are unaware of this interplay between pride and risk. Often I hear people bemoan that ‘queer culture’ is shoved down their throats by pin badges and flags, or they simply argue that such open announcement of sexuality and gender is not necessary. But it is. It really is. Long live our pride Visibility is a cornerstone of progress. The ignorant will never learn to tolerate that which they cannot see. For progress to be made, we must be there on their streets and in their shops, living side by side with them, working with them, befriending them if we can. Finding common ground is the best way to catalyse peaceful progress. The benefits of wearing our identities openly go beyond the personal excitement of being true to oneself. Walking the streets bearing symbols of defiance is the best way to normalise a cause for change. Over time, those symbols become commonplace in the public realm and slowly, inch by inch, the movements they represent become integrated. Demonstrations and marches have their place, no doubt, but just as much progress is made through quiet defiance and day-to-day pride. There is a risk involved. There’s no getting away from that fact. Every time we enter the public sphere wearing our identity for all to see, our pride may spell our downfall at the hands of the depraved and inhumane monsters of this world. But then, it might not. Instead, your pride might deliver inspiration and comfort to those around you, providing reassurance to those struggling with their identities and showing the next generation that living openly and with pride is okay. There is risk in pride, to be sure, but there is also the potential to change the world, one mind at a time. My gay shoes still scare me. They still represent the possibility of hatred and abuse. But they also remind me every day to be proud of who I am and that I have a part to play in building the future of our community by living as openly and truthfully as I can. We suffer the injustices of the present so that we might look to a better future. Our future is in progress, but there is still work to be done. So, if you are able and willing, wear your queer insignia with pride. Piece by piece, the world will change. And we will be the ones to change it.
https://aninjusticemag.com/my-gay-shoes-scare-me-9c4d500882c2
['Sean Bennett']
2020-12-24 02:36:55.493000+00:00
['LGBTQ', 'Pride', 'Homophobia', 'Identity', 'Civil Rights']
4 Foods That Are Rich in Energy
Due to a heavy workload, some people may feel tired. This may be due to a lack of energy. Lack of energy can cause health problems. It can also reduce productivity. Your energy levels depend on the diet that you eat and your physical activity. If you are not very physically active, you may have less energy. If your diet is poor, you may have lower energy levels. Consuming foods that are rich in nutrients, such as bone broth, can boost your energy levels. By consuming healthy, nutrient-rich foods, you can maintain your energy levels. In this article, we will discuss some foods that are rich in nutrients and can boost your energy levels. Coffee: Coffee is normally consumed as a morning beverage. It boosts your energy levels and can give your day a push start. Protein coffee sticks are gaining popularity in the United States of America and some other countries as well. Coffee contains caffeine, which quickly reaches your brain and boosts your productivity. Coffee is low in calories, but its stimulatory effects can keep you active and alert. Bananas: Bananas are a great food. They are soft to eat and easy to digest. They are a great source of energy. Bananas contain various nutrients and vitamins, including potassium and vitamin B6. They are also a great source of complex carbs to boost energy levels. Fatty Fish: Seafood is a great source of protein, fatty acids, iodine, and B vitamins. Fatty fish like tuna and salmon are the best choices to boost energy levels. Fatty fish can reduce the risk of inflammation and fatigue, as they contain omega-3 fatty acids. Brown Rice: Brown rice is very popular, especially in Asian countries. Brown rice is more nutritious and beneficial than white rice. It contains fiber, vitamins, minerals, and some other nutrients. Brown rice regulates blood sugar levels and keeps you active throughout the day by promoting steady energy levels. Conclusion: There are various nutrient-rich foods that can keep your energy levels up. Consume these foods for a healthy mind and body.
https://medium.com/@anna-smith/4-foods-that-are-rich-in-energy-d39784e92369
['Anna Smith']
2020-08-20 10:46:08.148000+00:00
['Energy', 'Energy Healing', 'Foodies', 'Energy Efficiency', 'Food']
What’s the Best Way to Support Youth Skills?
This World Youth Skills Day, CEGA Communications Intern Yevanit Reschechtko (MDP ’22, UC Berkeley) explores CEGA’s recent research projects that support youth skills and promote economic stability. Youth participate in an employment skill-building workshop in Rwanda. Credit: aphromutangana July 15 is recognized by the United Nations as World Youth Skills Day, a day to highlight the importance of equipping young people with the skills they need for employment and economic stability. Despite progress toward improving youth employment and skill-building in recent years, young people still face significant barriers to accessing gainful employment. Even before the COVID-19 pandemic, 22% of youth (aged 15–24) globally were not in school, employment, or training, with the rate as high as one in three for young women. Meanwhile, as a result of COVID-related lockdowns and economic recessions, the global youth employment rate dropped by 8.7% in 2020. Over a year and a half after the pandemic began, helping young people achieve economic stability is more urgent than ever. To address this challenge, CEGA supports several studies generating new evidence on how to promote youth skills development and economic stability in low- and middle-income countries. Our investment in this research aims to provide policymakers, NGOs, and government leaders with the evidence they need to identify the most effective economic, programmatic, and psychosocial interventions to support youth all over the world. Below is a selection of related studies we’re working on: Finding a Cost-Effective Approach: “Cash benchmarking” a Youth Unemployment Program One way to assess the cost-effectiveness of a program is through “cash benchmarking,” which compares the impacts of an intervention to unconditional cash transfers of equal value. An ongoing study — funded by USAID through CEGA’s Development Impact Lab, and in partnership with Innovations for Poverty Action (IPA) — is using this approach to assess a youth employment program in Rwanda. CEGA affiliated professor Craig McIntosh (UCSD) and Andrew Zeitlin (Georgetown University) compared the impacts of a USAID youth employment program in Rwanda to unconditional cash transfers implemented by GiveDirectly. They found that although participants in the employment program did gain business knowledge and productive hours, overall employment rates did not improve fifteen months after the program ended. On the other hand, recipients of a $332 cash transfer saw longer-term economic and psychological improvements, suggesting that providing young people with the cash they need to overcome initial economic barriers to employment may be a cost-effective strategy. See this interview with the investigators for more details. Addressing the Youth Employment Gender Gap in Uganda and Tanzania Young women face especially high barriers to employment in much of Sub-Saharan Africa. Only 36% complete secondary school (compared with 42% of boys) and they are more likely to drop out due to low investments in human capital, high rates of unintended pregnancy, and mental health disorders. Two studies funded through the BRAC-CEGA Learning Collaborative address this gender gap by providing new evidence on psychosocial interventions that build relevant skills and improve young women’s employment outcomes. 
In partnership with Strong Minds Uganda, Sarah Baird (George Washington University), Berk Ozler (World Bank), Chiara Dell’Aira (World Bank), and Danish Us Salam (BRAC Uganda) are evaluating whether a low-cost and scalable psychotherapeutic health intervention, coupled with an unconditional cash transfer, impacts mental health and other outcomes of wellbeing for adolescent girls participating in BRAC Uganda’s Empowerment and Livelihood for Adolescents (ELA) clubs. Preliminary findings suggest small immediate improvements in mental health outcomes at the end of therapy, as well as decreases in depression symptoms for participants who didn’t receive the therapy or cash, but did participate in ELA clubs. There were no significant short-term effects on school enrollment, incidence of pregnancy and child marriage, or condom use. Forthcoming two-year follow-up results will provide more information about the intervention’s impact on mental health and human capital outcomes, particularly in relation to COVID-19. CEGA affiliated professor Ketki Sheth (UC Merced) and James Khakshi (BRAC International) are assessing the impact of BRAC’s Education, Empowerment, and Life-skills for Adolescent Girls and Young Children (EELAY) program in Tanzania. EELAY is a two year alternative education program that targets girls who have dropped out or been excluded from the educational system. Preliminary results — shared at CEGA’s 2021 Africa Evidence Summit — suggest that EELAY is successful in increasing girls’ participation in the qualifying examination for secondary school equivalency. The follow-up evaluation will assess the program’s impact on labor market, welfare, and education outcomes, with the intention of scaling actionable findings across BRAC programming in East Africa. Identifying Effective Skills-Building Interventions in Uganda While many sub-Saharan national governments have integrated entrepreneurship training and career coaching into high school curricula, these programs are primarily based on hard skills (i.e. accounting, finance, and strategy) rather than soft skills (i.e. communication, negotiation, and decision-making). Two CEGA-funded studies in Uganda examine the effects of career coaching, mentorship, and experiential learning on soft skills such as self-efficacy, persuasion, and career aspirations, as well as on youth employment outcomes. With support from CEGA’s Behavioral Economics in Reproductive Health Initiative (BERI), CEGA affiliated professor Paul Gertler (UC Berkeley), Laura Chioda (UC Berkeley), David Contreras-Loya (UC Berkeley), and Dana Carney (UC Berkeley) partnered with Educate! to compare the effects of two employment interventions — one focused on soft skills and the other on hard skills — on Ugandan secondary school students. Three and a half years after the intervention, they found that earnings increased significantly for youth in both groups, as did their likelihood to engage in entrepreneurship and maintain successful enterprises. The cost of the program was eclipsed by two months of participants’ employment earnings alone, suggesting the cost-effectiveness of both hard- and soft-skill approaches. See this blog post for more details. 
CEGA affiliated professor Jeremy Magruder (UC Berkeley), Mary Namubiru (BRAC Uganda), Mahbubul Kabi, and Livia Alfonsi (UC Berkeley) are examining how career coaching and job search assistance from mentors — such as successful alumni — can influence career expectations and labor market trajectories for students attending five Vocational Institutes (VTIs) in Uganda. The study, also funded through the BRAC-CEGA Learning Collaborative, will build evidence on how mentorship from relatable role models can support youth’s soft skills by complementing more concrete interventions like job search assistance.
https://medium.com/center-for-effective-global-action/whats-the-best-way-to-support-youth-skills-60c2bfe01ffb
['The Center For Effective Global Action']
2021-07-15 17:00:14.774000+00:00
['Employment', 'Youth Mentorship', 'Youth', 'Labor', 'Skills Development']
5 Effective Ways To Pay Off Your Debt
In the summer of 2016, I was left with a staggering $70,000 debt in my hands. So, I did what every 20-something-year-old with that kind of debt would do: I crawled under my blanket and wept. I was stuck with this debt for almost two years. Every time I tried paying it back, the interest rate would eat up my payment until I was back to square one. This was a vicious cycle that left me emotionally paralyzed and depressed for months. Then one day, I decided that enough was enough. I figured that there had to be a better way to pay back my debt, even if I didn’t have cash in hand. So I decided to read up on personal finance books and learn how to better tackle my problems. In the process, I learned that there were other ways that I could pay back my loan and, at the same time, reduce my anxiety about my financial problems. Eventually, I made a system for myself that guaranteed that I could pay back x amount of money if only I did y and z. After all, money was out there; I just needed to find it. As Dave Chappelle once said: “You’re not poor. That’s a mentality. You are BROKE.” Those words made all the difference in my life. This system was a life-changer. It not only helped me pay back my loans but also changed my relationship with money and material things. Not to be overly dramatic, but this system really helped me, and it works. In this article, I’m going to teach you the same system that helped me pay back my debt. With that said, let’s get started. 1. Sell unnecessary stuff in your house According to a study by the LA Times, the average American household has about 300,000 items in it. If you live in America, then chances are you have stuff that you don’t need lying around. A LOT of stuff. You might be saying to yourself, “but yeah, some of this stuff I have fond memories of. Like this keychain that I bought on a snowboarding trip with my girlfriend.” Alright, if those items have sentimental value to you, then by all means keep them. But if you have stuff that you don’t really use, like old toys, unused tech gadgets, and books, then you should consider selling it online. I made a ton of side money just by selling all the useless things that were lying around my house. For example, I took the liberty of cleaning out my wardrobe, which was full of clothes that I had either outgrown or didn’t wear. After selling them online and to friends, I ended up making almost $800, all from clothes that were sitting in my closet. If you have things lying around, take stock and start selling them. You might just make enough to pay your next credit card bill. And speaking of credit card bills… 2. Pay more than the minimum payment on your credit card This one’s important. Always pay more than the minimum payment on your credit card or face the wrath of high interest rates. There’s a simple logic to this: Your bank, the institution that gave you your credit card, is simply looking out for itself. Whenever you make a payment each month, your bank will see whether you paid the minimum or more. If you paid more, your bank will think that you’re making good money, so it’ll give you a lower interest rate, assuming that you’ll continue making the same high payment the next month. If you pay the minimum, or worse, skip a payment, your bank will freak out and start charging you a high interest rate, fearing that you won’t be able to pay back the money that you owe. Even worse, if you stop paying your card for months, your bank will shut down your credit card so that you can’t use it, even if you start paying later on. 
In addition, your credit score will drop, making it harder for you to get a loan for anything in the future. With that said, always, always pay more than the minimum payment on your credit card. You’ll save months, possibly years, of headache. 3. Cut off subscriptions We all like to kick back and watch Netflix once in a while, but do we really need all those other subscriptions? According to a study, Americans watch an average of 3.4 streaming services today, including Netflix, Hulu, HBO Max, and Disney Plus. Altogether, these streaming subscriptions run Americans an average of $29 a month. That doesn’t even include subscriptions to services such as music, magazines, and other monthly charged services. From my personal experience, I saved around $50 just by cutting off subscriptions. Some of them included Apple Music, Time Magazine, Hulu, and HBO. After cutting my ties, I realized that I barely even paid attention to them to begin with. I saved so much money just by letting go of the things that I didn’t need and instead channeled the extra money into paying off my debt. Try it yourself. If you like to watch streaming services, then try to stick with just one until you have more financial flexibility to afford the others. You’ll find that you can save more money just by saying no to things. 4. Live below your means If you’re trying to pay back your debt, then you have to start living below your means. That means no more four-dollar Starbucks coffee, no more eating out every day, and no more making impulsive purchases. The key to paying back your debt is to rein in your spending and monitor where your money goes. You don’t want to wake up one day, see your credit card bill, and think, when did I make THIS purchase? We’ve all been there before, and the feeling sucks. The worst thing is we shrug it off and say we won’t make the same mistake, until it happens again. Learn to fix this habit by being more vigilant with how you spend your money. With that said, I’m a firm believer that living below your means doesn’t mean you can’t splurge once in a while. Life is short, and buying things feels good. However, you should only buy the things that you love. Not want, but love. There’s a difference. You might want that Starbucks coffee, but you might not necessarily love it (unless you’re a Starbucks aficionado, in which case more power to you). On the other hand, you might love video games so much that you don’t mind spending money to buy the latest games and systems. So, instead of buying the things you don’t truly love, why not save for the things that you do? To save up for something you love, take out a notebook and write down all the money that you have versus all the money that goes out to paying your debt (expenses). If you can find some extra money in between, save some of it towards something that you would love to buy. Later, once you hit a certain amount for, say, a PS5, make sure to buy it. You deserve it for all the hard work that you’ve done. 5. Earn money online Earning money today has never been easier. There are many people who have paid off their debt simply by driving for Uber or delivering food on the weekends. Thanks to our smartphones, we are one tap away from a temporary job that can make us some quick cash right away. In addition to Uber and food delivery, you can also find plenty of opportunities to make money online. If you’re from an English-speaking country and your primary language is English, you can teach English online and make $2,000 a month. 
If writing is more up your alley, you can post stories on Medium and get paid for them. Check out my article about the 11 easy ways to earn money online to learn more about how you can start making money online today. Overall, there are a lot of great opportunities out there to earn money. Ideally, you’ll want to pick the one that caters to your strengths so that you can enjoy doing the job and learn new things along the way.
https://medium.com/@johnlim-ys/5-effective-ways-to-pay-off-your-debt-9a688c34c6c
['John Lim']
2020-12-26 01:27:22.746000+00:00
['Personal Finance', 'Debt', 'Make Money', 'Credit Card Payment', 'Pay Off Debt']
Front End vs Backend Developers: How Do They Compare?
Building a website is a collaborative experience. It requires a partnership between the client or business and the developer — or in some cases, multiple developers: front end and backend developers. Websites are composed of both the client-side and the server-side. The client-side is made up of the elements that you see when you interact with a website. The backend, or server-side, includes all of the functionality and data that make the website work. While some web developers have experience and can perform both front end and backend development, the two roles are unique and require different skills. When it comes to building your website, you need both a front end and a backend developer. In some cases, you can work with a web designer who can perform both functions. Here’s how the two roles compare and how to determine your web development needs. Front End Web Development Front end developers focus on user experience and user interface. They are responsible for the website’s appearance and design. When you look at a website, the way it appears, flows, and draws your eye to specific elements is all part of front end development. In order to do their job effectively, front end developers need to understand the importance of human interaction and human behavior. What colors, design elements, and layouts are most attractive to website users? These are the questions front end developers ask in order to do their job effectively. Their skills allow them to build sites that provide a great experience for the user. When you hear discussions about whether or not a website is user-friendly, that refers to the aspects of the site that a front end developer handles. Front end developers also lean on different coding languages than backend web developers. A front end developer most often utilizes CSS, HTML, and JavaScript for design and development. These languages allow developers to create visually appealing designs and websites. Backend Web Development Backend web developers are responsible for the functionality of the website that takes place behind the scenes. They create the platform and code that allow the front end design to work. Every time you interact with a website, what you see is front end development. But what you experience when you click or scroll is backend development. The languages used by backend developers include Ruby and Python, along with query languages for working with databases. These coding languages are used for functionality and communication between applications and databases. Backend developers are less concerned with the appearance of a site and more concerned with security, database management, and how all of your website’s elements work together. They are critical thinkers and problem solvers who make sure web pages do what they are designed to do. Which One Do You Need? Building a website that stands out and helps your business grow requires both front end and backend development. In some cases, when you use certain website servers and platforms, backend development is already complete. Your web designer can build a beautiful site without involving a separate backend developer. When you require custom development, however, it is important to hire a web developer who either does both front end and backend work, or one who has a partnership with other developers. While I specialize in front end development and web design, I also have extensive experience with backend development support and services. 
Even if your website is beautifully designed and optimized for user experience, I offer maintenance and security services to make sure your website continues functioning the way it should. If you have any questions, please feel free to contact me and I’ll help you navigate whether you need a front end or backend developer, or a combination of the two.
https://medium.com/@hanna_84538/front-end-vs-backend-developers-how-do-they-compare-307efc4948fa
[]
2020-10-07 20:02:50.283000+00:00
['Front End Development', 'Web Design Agency', 'Backend Development', 'Back End Developer']
Easy to Understand Flexbox
What is flexbox? It’s a CSS tool that makes arranging elements on your page a little more, well, flexible. It stands for “flexible box” and it allows the designer to move items within a container to fill the available space without overflow. Most importantly, flexbox understands that the available space changes if pages are made smaller or larger, and adjusts accordingly. When you add the { display: flex } property to the parent container, it will place the items along a horizontal axis from left to right, or in a “row”. It’s important to remember that left is the “start” and right is the “end”, because you can specifically place items to “start” from the left or “end” at the right. When you add the { align-items: } property, it allows you to place the items along the opposite (cross) axis of the display. In the case of using flexbox, which aligns items along the row, align-items will place them along the vertical axis, or the column. With an understanding of this basic layout, one can dive further into just how powerful Flexbox can be. Use “justify-content” to align multiple items along a row. Use “order” to specifically order those items. The list goes on. The best way to understand is to open an index.html and a styles.css file and try it yourself; a small sketch follows below. Resources:
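To make the ideas above concrete, here is a minimal, hypothetical sketch of the index.html/styles.css experiment the post suggests; the class names (.container, .item) and the specific property values are placeholders of my own, not something from the original post.

<!-- index.html: three boxes laid out with flexbox -->
<style>
  .container {
    display: flex;                   /* children flow along the main axis (a row by default) */
    justify-content: space-between;  /* spread the items out along that row */
    align-items: center;             /* center the items on the cross (vertical) axis */
  }
  .item:first-child {
    order: 2;                        /* visually move the first item to the end of the row */
  }
</style>
<div class="container">
  <div class="item">One</div>
  <div class="item">Two</div>
  <div class="item">Three</div>
</div>

Resizing the browser window shows the flexible part: the container redistributes the available space instead of overflowing.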
https://medium.com/@kniskernjoseph/easy-to-understand-flexbox-c78a65031291
['Joseph Kniskern']
2020-11-19 00:56:19.949000+00:00
['Style', 'CSS']
Kristi Bulock’s Wildfire Career Blazes Bright
Kristi Bulock’s Wildfire Career Blazes Bright Kristi Bulock 📷 USFWS Kristi Bulock grew up near Prescott, Arizona, a town that played a major role in the history of the American Southwest. For thousands of years, the Yavapai people farmed, hunted and foraged in the region that surrounds the current city. Prescott is the former capital of the state of Arizona, and was the major civic and commercial center for the central Arizona Territory through the 19th Century. And like much of the Southwest, it knew some wild and wooly times; both Doc Holliday and Virgil Earp lived in the town prior to their participation in the notorious Gunfight at the OK Corral in Tombstone. Prescott is also notable for something else: wildfire. Major conflagrations ravaged the city several times in the early 20th Century, ultimately compelling municipal leaders to decree reconstruction with brick. Wildfire in the surrounding Granite Creek watershed remains a central fact of life for people living in and around Prescott, and the city has an illustrious fire-fighting tradition, one notable for both heroism and tragedy. In 2013, 19 members of the Granite Mountain Hotshots, a firefighting crew associated with the Prescott Fire Department, died while fighting the Yarnell Hill Fire in the west central part of the state. Kristi was aware of local wildfires growing up, but she seldom accorded them much thought. That changed when she attended Yavapai Community College in northeast Prescott. While pursuing a pre-Med course of study, she enrolled in an EMT class.
https://medium.com/@alaskausfws/kristi-bulocks-wildfire-career-blazes-bright-53b891dc46ef
['U.S.Fish Wildlife Alaska']
2020-04-29 21:10:45.080000+00:00
['Forest', 'Jobs', 'Environment', 'Careers', 'Fire']
Your definition of customer experience is wrong
Your definition of customer experience is wrong, and here’s why. Right, there are lots of different definitions of what customer experience is, and if we put them together, we might end up with something like this: the sum of the interactions, perceptions and feelings a customer has with your company. Frustrated woman having problem ….. You might think that’s a pretty decent definition, James, but you’d be wrong. Most of the definitions you find are going to run along the same theme. Okay, the problem is that theme is completely inside-out. These definitions take a company view of customer experience, not a customer view of customer experience. Go figure. Let’s say that we’re an airline….. Watch the video and get the full transcript from here ***************************************** I have just done a 3 minute explainer video for Outside-In — see it here: https://bit.ly/OIDifference Step #1 — Get The Book: Outside-In The Secret *FREE* | https://bit.ly/OI2021now Step #2 — Get The Training: Certified Outside-In Master® | https://bit.ly/COIM2021 Certified Process Professional Master® | https://bit.ly/CPPM21 Accredited Customer Experience Master® | https://bit.ly/ACXM2021 Step #3 — Get the Software: The Experience Manager | https://bit.ly/TEM2021 Step #4 — Connect With The Community: LinkedIn | https://bit.ly/Steve2021
https://medium.com/@stowers/your-definition-of-customer-experience-is-wrong-a0e8bbb303b
[]
2020-12-19 17:06:00.760000+00:00
['Cx', 'Customerexperience', 'Outsidein']
Dev Portfolios: How to Stand Out From the Crowd
Create Interesting Projects for Your Portfolio Photo by Ketut Subiyanto from Pexels I am a front end developer who owns a small software house. As an entrepreneur, I’ve seen many candidates’ portfolios. Many landing pages created by those developers were pixel-for-pixel identical! In Poland, we have an incredible challenge for front-end developers called “The Weekly Web Dev Challenge.” I love this challenge and I participated in it when I was a junior. Each week participants create landing pages and are then given feedback. So, after a few challenges, multiple developers had similar projects in their portfolios/CVs, which they then used to apply to the same employers. Put yourself in the employer’s shoes. You’re looking for a new developer, so you post a job offer. After a few hours, you get ten CVs, all with the same project! Not one stands out from the crowd. Those challenges are useful for practice, but they’re not suitable when you’re trying to distinguish yourself. No one tells you that when you’re looking for your first job. That’s why I’m telling you now!
https://medium.com/better-programming/dev-portfolios-how-to-stand-out-from-the-crowd-4a5d990b3400
['Albert Walicki']
2020-12-16 15:05:39.656000+00:00
['Programming', 'CSS', 'HTML', 'JavaScript', 'Learning To Code']
4 Writing Tools for Different Types of Writers
Image by the author Writers of the past didn’t have much choice when it came to the selection of writing tools. Of course, they could choose a typewriter brand, paper quality, or notebook type, but as far as the writing process was concerned, that was pretty much it. Computers were not there yet, so no word processors or fancy magic tools could be of any help. And even though the first word processors, like WordStar (1978) and Microsoft Word (1983), were nothing but powerful typewriters capable of editing and sharing the text without any paper waste, they changed the way we perceive the writing process. Today, forty years later, the list of available word processors, both commercial and free, has grown tenfold and is constantly growing. The offering is so wide that many writers tend to stick with familiar all-in-ones like Word, Pages, and Google Docs that suit most writers and purposes. However, every writer has a different personality, tastes, and writing strategies. If so, what tools could suit different writer types best? In this post, I talk about four popular writing tools on the market that can suit different types of writers. I hope it will help you pick a new tool for your writing arsenal. iA Writer iA Writer’s sentence focus mode. If you are one of the writers who need a distraction-free user interface, you will like iA Writer. The app offers a unique environment for writing down your story using nothing but plain text with some basic Markdown components (headings, lists, footnotes, etc.). The only thing you need to do is select the preferred level of focus (sentence, paragraph, or line) and start writing. If you select sentence-level focus, everything except the sentence you are writing is dimmed, including previous sentences, headers, and everything that comes after. It creates an immense sense of focus, sparing you the temptation to look back at what you’ve written, thus saving time and forcing you to care only about the current sentence. In addition to focus mode, iA Writer allows you to track how you use parts of speech, highlighting nouns, verbs, adverbs, adjectives, and conjunctions with different colors. I don’t find this feature useful during writing, but it is definitely helpful when reviewing your text, as it can help you spot parts of your text overloaded with certain parts of speech. What: a simple yet feature-rich Markdown editor with a superior typeface and several focus modes. Top features: grammar syntax highlighting, unparalleled focus mode, style check, and the export of your drafts directly to Medium. For whom: pantsers and writers who like to write fast without losing their focus. Ulysses A glimpse of Ulysses with its word count tracking feature. Another text editor equipped with basic Markdown features offering a unique writing experience. Unlike iA Writer, Ulysses is available exclusively on macOS. I have been using Ulysses regularly for more than a year and must admit: if I ever face a choice between going on with a MacBook or buying a PC, I will stick with my MacBook only because Ulysses is there. Seriously, Ulysses has replaced most of the writing software I had used in the past. I use it for writing blog posts, poems, taking notes, storing recipes, and anything writing-related. What makes Ulysses unique is an easy document management system and flexible export options (for PDFs, you can choose from dozens of beautiful styles you won’t find anywhere else). 
As for the document management system, I like it because it allows you to create groups of documents, give them any name (e.g., Medium, Recipes, Notes, etc.), and use custom icons to help you navigate your writing more easily. Ulysses has many more useful features that I use when I write. Some examples include visually appealing and non-distracting word count tracking, a preview of reading time (for the slow, average, and fast reader, and even reading aloud), and detailed text statistics. What: a powerful plain text editor capable of turning your drafts into the most beautiful documents, ready to be shown to almost anybody. Top features: tracking goal word counts, setting deadlines, custom icons for document groups, and powerful export options (including directly to Medium and WordPress). For whom: writers who know the exact number of words they want to write and those obsessed with perfect typesetting of their final drafts. Scrivener Scrivener user interface It is almost impossible to write an article about writing tools and not mention Scrivener, so I won’t say much, as a lot has been said before. All you need to know about Scrivener is that it is one of those tools you respect. You may also think you know it, but it has more to offer than you’re able to discover in a lifetime. Seriously, if you think of one piece of software for big writers (writing big things), nothing compares to Scrivener. Everything from synopsis to scene planner, from character sketches to story outliner, from index cards to distraction-free mode, and much more. Scrivener is a paragon of a tool for writers working in different genres, be it screenplays, novels, poetry, journalistic essays, or academic writing. Whatever you write, Scrivener has you covered. What: all-in-one, feature-rich writing software designed by someone who knows something about the writing process. Top features: incremental manuscript building, flexible drafting options, space for storing research findings, and industry-grade export features. For whom: plotters and serious writers who waste no time on trifles. Flowstate I barely managed to take a screenshot of the Flowstate editor before the text disappeared. This minimalistic writing tool takes distraction-free writing to the next level. The idea behind Flowstate is simple: it helps you beat the inner critic and finally start writing without looking back. Honestly, you wouldn’t be able to look back, because if you stop typing for seven seconds, everything you have written during the writing session will disappear. Yes, forever. There is no copy-paste command to save your writing, no other tricks that will help you cheat and feel just half of the challenge. The challenge is real. The only thing you can control in Flowstate is the timer setting (between 5 and 180 minutes). Once it ends, you are able to save your writing to the hard drive and even edit it. You should make sure that no one is going to disturb you during the writing session. But most importantly, you need a lot of courage and must not be afraid of accidentally killing your darlings. What: an extremely minimalistic writing tool that will make you experience a flow state every time you start writing. Top features: no mercy. For whom: writers who like to challenge themselves from time to time and those who experience writer’s block. Takeaway Depending on how you approach writing, some tools may suit you better than others. 
My favorite one is Ulysses because I really like its user experience and some features like word count tracking and beautiful export. However, I still use other tools from time to time, depending on my mood and needs. For example, Scrivener is instrumental when I need to track multiple storylines or tell a story from different viewpoints. iA Writer is perfect for writing blog posts, and Flowstate is more of a fun tool that I use only rarely to challenge myself or get rid of the fear of writing after long breaks. I hope that this post helped you decide if you want to try some of the tools I covered. The choice is totally yours, of course. Perhaps, you have already found your favorite one, and it is not even on my list. If so, I would be happy to hear your opinion. Originally published at https://hackernoon.com/4-top-writing-tools-to-suit-different-types-of-writers-ak1l3w48
https://medium.com/@evgeny-kim/4-writing-tools-for-different-types-of-writers-d8b1d89d3e21
['Evgeny Kim']
2020-12-09 20:33:10.804000+00:00
['Writer', 'Writing', 'Tools', 'Mac', 'Software']
Use-case example: TF-IDF used for insurance feedback analysis
Bag of words Because machine learning models cannot work with text data directly, we need to convert these responses into some numerical representation. We will start by transforming the text into tokens — individual words. [ ‘The’, ‘online’, ‘system’, ‘for’, ‘reporting’, ‘insurance’, …] [‘I’, ‘was’, ‘surprised’, ‘by’, ‘the’, ‘speed’, …] [‘I’, ‘paid’, ‘this’, ‘expensive’, …] Then we build the vocabulary — the set of all the words that appear across the responses. [‘but’, ‘by’, ‘claim’, ‘declined’, ‘doesn’t’, ‘expensive’, … , ’the’, ‘this’, ‘Unbelievable’, ‘was’, ‘work’, ‘years’, ‘you’, ‘10’] For each response, we mark the number of occurrences of each word. Term frequency table of our responses We have just created a popular text representation — bag-of-words. What’s more, we have also implemented something called TF — term frequency. Each word from the response is weighted by the number of times it occurs. Can we use this to highlight important words? Let’s try bolding the words that have high TF scores. ‘The online system for reporting insurance claim doesn’t work on the phone.’ ‘I was surprised by the speed of resolvement of my insurance claim.’ ‘I paid this expensive insurance for 10 years, but you declined my claim? Unbelievable!’ That doesn’t look very informative or helpful, does it? Some words in the responses are clearly more important than others, but by simply counting the term frequencies, we treat words like ‘insurance’, ‘for’, ‘expensive’ and ‘speed’ the same. Of course, the clients are mentioning insurance claims, but we want to highlight the specific problems… So what do we do about it? We could move all the common words into “stop words” and ignore them altogether. But this would eliminate too much information and might actually be harmful. We are smarter than that: we implement TF-IDF! (A short scikit-learn sketch of both steps follows below.)
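Here is a minimal sketch (not the article’s own code) of how the same term-frequency counts, and then TF-IDF weights, could be computed with scikit-learn; the three responses are the examples quoted above, and the variable names are mine.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

responses = [
    "The online system for reporting insurance claim doesn't work on the phone.",
    "I was surprised by the speed of resolvement of my insurance claim.",
    "I paid this expensive insurance for 10 years, but you declined my claim? Unbelievable!",
]

# Term frequency (bag-of-words): one row of word counts per response.
# Note: CountVectorizer lowercases and strips punctuation by default, so the
# vocabulary differs slightly from the hand-made token lists above.
tf = CountVectorizer()
counts = tf.fit_transform(responses)
print(tf.get_feature_names_out())
print(counts.toarray())

# TF-IDF down-weights words that occur in every response ("insurance", "claim"),
# so distinctive words such as "expensive" or "speed" stand out instead.
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(responses)
print(weights.toarray().round(2))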
https://medium.com/datasentics/use-case-example-tf-idf-used-for-insurance-feedback-analysis-e48de824f7f2
['Vojtech Poriz']
2020-12-08 07:16:34.220000+00:00
['Sklearn', 'Spark', 'Tf Idf', 'Information Retrieval', 'AI']
Inside the Dry Well
Photo by Keegan Houser on Unsplash Every time my thoughts run dry And my pen stays frighteningly still I wonder if that is all there is to it That what I had to share has stopped And the gift that all envied had ended While I stand on the side with a plaque For people to see what I had once been But the world will move on with indifference And a cold shiver goes up my naked spine I realize suddenly in that dark silent room I never did write for the world nor for its eyes But only for me to share what I had to say I murmur my gratitude for the given chance And stay still just to be alone with the room After how long I never will know But I hear that trickling sound again I jump up in joy and take up my pen And the words start to flow once more…
https://medium.com/literally-literary/inside-the-dry-well-4f42ce74334e
['Arvindh Shyam']
2020-06-06 11:19:46.599000+00:00
['Poetry', 'Poem', 'Writers Block', 'Writer', 'Thoughts']
The Crypto Culture: Then and Now
BY CRYPTO CALAVERA ON 3/26/18 AT 7:30 AM Bitcoin Genesis Block / Robi / Bitcoin.com Let’s explore the progression of culture in the cryptocurrency space. The purpose is to present a record for future generations that may be used in educational material when studying the nature and history of blockchain technology. It should serve as a piece that would be easy to read and understand and give a glimpse of the early beginnings of this technological revolution. It is important to preface this by saying that I, myself, have actively been in the space only since early 2017. Anything I report on prior to that is purely from material I have found and read online, giving me a second-hand interpretation of the cultural dynamics of the time. In addition, no single article can possibly cover the vast number of events that have transpired in these 10 years, so I try to present only the most notable cultural paradigms and shifts so far. With that in mind, let’s get started. The Pre-Fork Era Firstly, we look at the scene before 2017, which we’ll call the “Pre-Fork Era”. This is the time before the BTC/BCH fork in the summer of 2017 and the years leading to the massive altcoin spring in March-May 2017. The Dawn of Bitcoin In October 2008 the Bitcoin Whitepaper was distributed to a cryptography mailing list. Following that, the first Bitcoin block was mined on January 3rd, 2009 by Satoshi Nakamoto. It is called the genesis block and it contained 50 Bitcoins. Embedded in the block’s coinbase transaction was the following: “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks.” This was both an important timestamp for the date of inception of Bitcoin, and a strong message that would forever shape the emerging culture around the technology. Many people talk about how “if you had bought ‘X’ amount of Bitcoin in 2010” you’d be a multi-millionaire by now. What these people don’t understand is that there was no real market for Bitcoin back then. You wouldn’t just go and buy some Bitcoin like you would today. You’d have to know certain types of people, namely the developers working on the technology. That was the culture in the first couple of years. A group of developers and economists around the world, disenchanted with the banking system, working on a better solution. They shared ideas from across the globe, worked on the code tirelessly to improve the technology, told people who they thought would understand, and didn’t bother too much with non-believers. Slowly, as trading markets began to emerge, more and more people were drawn in. Some, like the founders and early developers, out of a sense of justice and a need for a better system. Others, for the opportunity to make an extraordinary amount of money through this massively volatile asset. Magic the Gathering: Online Exchange In March 2011, Magic the Gathering: Online Exchange, or Mt.Gox for short, was sold to French developer Mark Karpelès. Its previous owner, Jed McCaleb, had originally built the site in late 2007 as a place where people could trade ‘Magic the Gathering’ cards similarly to stocks. In July 2010, after McCaleb read about Bitcoin on Slashdot, he decided to build the exchange for Bitcoin instead. He then handed it over to Karpelès a year later because he did not have the time and perhaps the motivation to build a Bitcoin exchange to its full potential. Mt. GOX Exchange Logo / Wikipedia The years between 2011 and the end of 2013 would then forever change the underlying culture of Bitcoin and cryptocurrency in general. 
Those were two and a half glorious years for Bitcoin. In 2010 the cryptocurrency was going for mere cents in various online exchanges and between private buyers and sellers. By the end of 2011 it had reached a peak of double digits in US dollars! Much of this appreciation was due to developments in the technology and word of mouth, but also thanks to the Mt.Gox exchange advertising the technology like a tradable currency. This attracted many opportunistic individuals who further shaped the nature of cryptocurrency culture. You’d have to be of a certain type of character to join Bitcoin this early. Either you were drawn by the technology, were a nerdy individual, possibly with an IT background, and had a sense of disgust at the way banks operate; or you were drawn by the volatility, came from an economics and/or Forex background, and were most likely an opportunist. You liked the promise of big money, had some cash lying around, and wanted to try your hand at trading the markets. You most likely weren’t a professional trader, but rather a retail one. These two main characters bred a culture of anarchism and, paradoxically, a strong sense of community. The Bitcointalk forum founded by Satoshi Nakamoto was getting livelier; the price was volatile, and 60–200% swings were not uncommon in either direction, but the general trend was up. A small handful of alternative currencies (altcoins) appeared, promising to do what Bitcoin did, but better, or something different that Bitcoin couldn’t do. Many think Litecoin was the first one, but it was actually Namecoin, interestingly enough — with the purpose of decentralizing domain registrations. Anarchistic personalities started making YouTube videos about bringing down the world order, and how Bitcoin would overtake traditional currencies. Massive debates were had by a tiny community of people who either believed, wanted to believe, or were too blind to. A few articles started appearing, saying it was fundamentally flawed, that it was going to die, or that it was already dead. 4chan and Reddit threads — a lot now deleted — were springing up, alerting more individuals to the opportunity. Mainstream media was barely covering the technology, apart from a few curious/dismissive articles here and there. This was a very volatile and uncertain time, but the overall consensus was that it was worth a shot. The Age of the Great Bear In the final months of 2013 the price of Bitcoin shot up from a hundred dollars to over a thousand dollars, with Mt.Gox leading the charge, handling over 70% of the volume. However, after the all-time high of $1,155 recorded on Coinmarketcap at the time, the price would not return to these levels for over three years. And in fact, for the next year and a half it would go on to slowly and painfully lose around 85% of that value. Many things happened in these dark years for Bitcoin. Most notably, Mt.Gox filed for bankruptcy after almost three months of having withdrawals halted. It turned out that over 700,000 bitcoins had been stolen from their reserves years earlier and the company had become insolvent. This, along with the already declining price, plunged the community into despair. Many gave up, said this was the end. The few articles that had been saying Bitcoin would fail were now springing up by the dozens. China was banning Bitcoin basically every week. Countless men and women in suits were looking down upon these ‘Bitcoin cultists’, telling them “I told you so” without remorse. With prices reaching as low as $200–300, it all started to seem irreversible. 
A large portion of the people who had invested in the boom had left by this point. This was a culture torn asunder. This was a community destroyed and obliterated. Almost all hope was lost. Bitcoin was dead. However, Bitcoin was not dead. In fact, it was very much alive. Developers kept working on the code, new exchanges kept coming along, new people kept joining once they had learned about the potential of the technology. Between mid-2013 and mid-2014 a tremendous number of altcoins sprang up, further enriching the network and the exchanges. Most were worthless, here-one-day-gone-the-next schemes, but some stuck. DogeCoin Logo / Dogecoin.com Like DogeCoin! And a little-known Initial Coin Offering platform called Ethereum, which was in fact itself an ICO! It was selling on Bleutrade for less than 100 satoshi per coin! And Ripple appeared… That’s a little bit of a touchy story since at the time the culture was “We hate banks, they ruined the economy, Bitcoin will replace them!” and here comes Ripple working WITH the gangster banksters trying to merge blockchain technology with them. That shook things up! Bitcoin maximalists were born, shunning and hating on Ripple and any other altcoin that appeared on the market with the promise of overthrowing the one true cryptocurrency. During this period, the famous HODL abbreviation was born as well! Altcoins started having market cycles, pump-and-dumps were born, and manipulation of the markets was rampant and easy. Many scams started to appear, from coin creators, exchanges, and anyone in between! A lot of people lost their money during this time, desperately trying to win back what the falling Bitcoin price had taken from them. This was by no means a “dead space” as portrayed by the media during that period, no matter how negative things got at times. Personalities on Twitter like notsofast, crypto cobain, 22loops, and many others were being moulded into the OGs they are today. For many this was their first major bear market. Those who made it out alive, still holding onto their coins, came out the other end a bit roughed up, a bit more cynical, and a bit more cautious. But nonetheless they were stronger, fiercer, and more motivated than ever to push forward and take Bitcoin to new heights. The Post-Fork Era This section covers the currently short period of time between spring 2017 and the present day (26th of March 2018). In January-February of 2017, things were looking a lot better than before. Bitcoin had risen way above its low of around $200, almost back to $1,000. The overall market cap at the time was about $10–15 billion, with Bitcoin leading the charge and Ethereum close behind. No one was truly prepared for what happened next. Altcoin Spring Between March and June 2017, the market cap rose to over $100BN. Ripple went from half a cent to almost 50 cents. Another cryptocurrency, Stratis, went from half a cent to over $10. Antshares (now NEO) went from a few cents to over $10, all back to back to back! At the same time, the Ethereum network was developed to the point where many aspiring entrepreneurs and development teams started issuing ICOs for their own blockchain-related projects. This further filled the space with new money, which in turn pushed Ethereum to new heights, from $10–20 to over $400! At this point, the culture had completely shifted. The few who had remained from the dark days meshed with the newcomers who had just arrived and made insane gains practically overnight! 
If you had started with $1,000 in February 2017 and played your cards right, you could have had over $100K by the end of June! This marked the start of a new major bull run for Bitcoin and cryptocurrencies. The nature of communication shifted from forums, airdrops, and Proof of Work (mineable) coins to Slack and Telegram channels, ICOs, and pre-mined and Proof of Stake/Masternode coins. Bcash Summer A notable event was the mid-July Bcash fork, where Bitcoin essentially split into two. Everyone who held Bitcoin in their wallets received an equal amount of Bcash. Because this was an unprecedented event, there was much uncertainty in the space, and the market briefly lost over 35% in less than a month. But after the fork things appeared to be back to normal and prices continued to rise! The culture was one of constant pumps; everyone was making money! Gurus started springing up from the ground, and YouTube channels started reviewing coins and ICOs. Technical analysts with years of professional experience joined the space and started sharing their insight on price action and candlestick patterns. Many newbies started imitating them. Because it was such a massive bull run, even the most basic-looking chart was a sure hit; everything was pumping together! China Fall For a brief period in September China said it was banning Bitcoin again or something. Some banker also said Bitcoin was a fraud, like that was some new or original statement. Turned out his little daughter was actually investing in cryptocurrency. Smart girl. One of the Twitter OGs mistakenly told everyone to exit all the crypto markets; that was a fun ordeal. But by that point everyone knew China and the banks were practically irrelevant, so we quickly forgot about it, and by the end of October we were back to making new highs! The excitement was growing, the community channels were buzzing, and we were feeling like everyone and their grandma was in this, ready for the big pump! The Bcash Debacle Roger Ver / Linkedin.com Something funny happened for a few days in November. Roger Ver, the creator of the Bcash fork (which took place in the summer), was an early adopter of Bitcoin and disagreed with some of the plans that a few developers had for the technology. He decided to fork it and make his own. Instead of branding it something original, he called it Bitcoin Cash and tried to hijack the brand. This was done to such an extent that in early November, the Bitcoin mempool was mysteriously spammed with thousands of small transactions, making the system clog up. This temporarily brought the price down. At the same time, Bitcoin Cash’s price started rising exponentially, with the South Korean exchange Bithumb leading the charge. However, at the peak of Bcash’s rise to the top, the Bithumb exchange mysteriously crashed. This sent the price of the coin plunging back down, and it has not recovered since. Very coincidentally, Roger Ver was in South Korea at the time. And very coincidentally, right after this all transpired, the mempool for Bitcoin was cleared up and all the spam transactions stopped — the system was working fine again. That was fun to watch happening both on the charts and on social media, with thousands of people parading Bcash as the real Bitcoin. All those people fell silent after the event, almost as if they were some kind of sock puppets; who knows. You can draw your own conclusions. Winter Boom After that, the hype continued. At this point, Bitcoin had gone past the psychological $10,000 barrier and had quickly made its way to the high $15Ks. 
Songs about it were being sung online, popular news outlets started advertising it and telling people how to buy. Celebrities like Paris Hilton and Katy Perry were advertising the space in various ways. Futures trading launched, which meant that Wall Street could now trade a futures market for Bitcoin. That sure was a wholesome experience later on. All other cryptocurrencies were at the very top. People started wearing hand-knit sweaters with Bitcoin and Litecoin on them. Mainstream companies like Kodak and KFC joined the space in their own ways. Any rational investor could have seen that the top was near. And in fact, many of the OGs that had experienced that before did! One such legend, Crypto Cobain, tweeted "I'm close to officially calling the top on this motherfucker." Happy New Year! And so it was! At the peak of everyone's euphoria, altcoin and Bitcoin prices plummeted. In the following months of January and February, the market went from a high of $827BN to a low of $283BN. Bitcoin lost over 70% of its value, dropping from a high of $20K to a low of almost $5,800. Times are currently uncertain. We may be beginning a new altcoin spring, or it may be just a dead-cat bounce; we don't know. Articles are starting to spring up again about how dead Bitcoin is, how ICOs should be illegal, how everything is a scam, and how we're all just in it for the money. Good developments like the new Lightning Network and Atomic Swaps are not being reported on. Smaller, more unknown cryptocurrencies like NANO, XBY and RDN that are major scaling solutions are not being reported on either. Few people seem to care that governments, for the first time ever, are officially addressing the nature of cryptocurrencies and are taking an agnostic and thoughtful approach to their integration into society. But none of that matters. We know. We now have a stronger culture than ever before. Together We Are Strong Image By Corello Hosting The current climate is, in my humble opinion, the strongest it has ever been for crypto. Sure, scams and market manipulation are still prevalent. Of course, we know Bitfinex, the new Mt. Gox on the block, is doing illegal and shady stuff with Tether, pegging their stablecoin to the dollar. No doubt, if they shut down due to a government crackdown, it would probably send the price of Bitcoin plummeting to below $3K and altcoins with it. We realize that, and we're still here. Because we know that none of that matters. The only thing that matters is that we fix the broken system of corrupt bankers and politicians running our world. The trustless, decentralized nature of the blockchain provides that solution. Ours is a culture of knowledge. There are still new people only in it for the money who do not understand the bigger picture, and that's fine. But the point is that more people than ever before understand the bigger picture and are willing to stick through these hard times for the massive potential rewards this revolutionary technology can bring both to us as individuals and to society. That is where we are right now, and that, my dear readers, is where we're heading. Parting Words As I said, I've only lived through a small fraction of what I described in this article, so if you've spotted any factual errors, please do leave a comment and we will amend them ASAP! I hope you have enjoyed this trip down memory lane to revisit the past, hopefully learn from it, and continue forward with a bit more conviction than before.
If you’re someone from the distant future researching the phenomenon that it is crypto, and have stumbled upon this small piece, I hope it has given you some insight. A glimpse as to what things were like before all the major coins’ market caps were in the trillions. A time when price movements were in the hundreds of per cents instead of the fractions of one. A scary time. A glorious time. A time of revolution. If you’d like to see stuff like this in the future, give me a follow. — — — — — — — — — — — — More on me 👇 Crypto Calavera Full Time Cryptocurrency Trader Contributing Writer on Cryptoweek I’ve Got Socials 😎 Twitter — @cryptocalavera Discord — Crypto Calavera#5707
https://medium.com/cryptoweek/welcome-to-the-crypto-culture-then-and-now-1ba0d07dad57
[]
2018-04-18 08:38:17.028000+00:00
['Cryptocurrency', 'Bitcoin', 'Opinion', 'Blockchain', 'Crypto']
Nutrients to Keep an Eye on During Pregnancy
The most common nutrient of concern in pregnancy tends to be folate, and for good reason. Folate deficiency causes neural tube defects and can harm cognitive function throughout life. But there is another nutrient that is being linked to cognitive performance. This nutrient is choline. Studies are showing that women who had higher blood choline have infants with higher cognitive scores and children with better visual memory at age 7. Higher choline supplementation (930 mg) in the mother was correlated with infants having faster reaction times, and this enhanced reaction time continued to age 7. Not only is choline intake beneficial to children in their first few years from conception, but choline also appears important for the elderly. Choline intake is now being linked to lower risk of dementia and could be a key piece to slowing mental decline. For more information on these studies you can look up the Framingham Offspring Study and the Kuopio Ischemic Heart Disease Risk Factor Study. Although there are confounding variables in these studies, choline taken within reasonable amounts shows little risk (consult your doctor before supplementing). The best food sources include chicken liver, salmon and eggs. Similarly run studies have also shown lutein and zeaxanthin to be key to higher cognitive function in young and old populations, and they have even been associated with better academic achievement. These two compounds can be found in spinach and other leafy vegetables. Eating them with fat will increase absorption. Egg yolk is also a source of both and can be combined with vegetables to improve intake and absorption. These nutrients are in line with most recommended eating patterns. Including quality eggs, fatty fish and leafy greens is hard to argue against. The benefits of finding a healthy eating pattern that works for you (and your mom doing the same before that) can set you on the path for a long healthy life from the start.
https://medium.com/@fuelwel/nutrients-to-keep-an-eye-on-during-pregnancy-3ae9946f3b58
[]
2020-11-27 22:57:38.474000+00:00
['Diet', 'Nutrition', 'Pregnancy', 'What To Eat When Pregnant', 'Nutrients']
I Need You To Stop Right There: Calling Out Bias and Other Actions for Allies
1. Know when to call out bias, even in public Early in my career, I learned to "criticize in private, praise in public." That said, there are times when I need to let someone know in the moment that their words or actions are unacceptable, regardless of whether it's a one-on-one conversation or in a group setting. I have a feeling I'll be doing more of this as part of my personal goal to be anti-racist. Most recently, this happened when I was enjoying a socially-distanced outdoor dinner with some friends. At one point, someone said something racist, and I simply couldn't let the conversation continue. I called them out. In public. As I continue my journey to be anti-racist, I'll leverage Interrupting Bias: Calling Out vs. Calling In by Seed The Way, a firm focused on anti-bias curricula and equity literacy for educators. They recommend calling out bias when: We need to let someone know that their words or actions are unacceptable and will not be tolerated We need to interrupt in order to prevent further harm We need to hit the "pause" button and break the momentum Their handout also suggests phrases allies can use to call someone out. Here are just a few: Wow. Nope. Ouch. I need to stop you right there. I need to push back against that. I disagree. I don't see it that way. Okay, I am having a strong reaction to that and I need to let you know why. Let's all choose one of these phrases to use the next time we need to call out bias. (Many thanks to newsletter subscriber Liz Blickley for bringing this handout to my attention.) 2. Use "exempt" instead of "grandfathered in" In policy and law, situations that are exempt from a new rule are referred to as being "grandfathered in." This phrase is related to poll taxes and literacy tests some states used to prevent Black men from voting. While these states couldn't ban Black men from voting in the nineteenth century, they could make it difficult. They used a "grandfather clause" exempting white people from the taxes and tests if their ancestors had the right to vote before the Civil War. (Want to understand more about this history? Check out this ten-minute video from the Washington Post.) Unfortunately, this phrase is still popular today, even though it's referencing a terrible part of our history. Why are we not using "exempt" instead? (I learned about the Washington Post video from Danielle Coke's weekly Patreon newsletter. Thank you Dani.) 3. Encourage male colleagues to take parental leave This week, Reddit cofounder and venture capitalist Alexis Ohanian wrote an opinion piece for Fast Company, Why now is the time to destigmatize paternity leave, for good. He did so after reading about a business leader allegedly disparaging a male colleague who was contemplating taking leave. After sharing his personal experience taking four months of paternity leave, he went on to say, "The implication that paternity leave is unimportant sets a dangerous precedent, one that suggests fathers are not an integral part of the child care unit, and perpetuates the antiquated belief that mothers alone should be the primary caregivers." Folks, let's not disparage male colleagues and other non-birth parents for taking parental leave. Instead, let's encourage them. 4. Use stock images featuring people from underrepresented groups After the publication of my first book, Present! A Techie's Guide to Public Speaking, my coauthor Poornima Vijayashanker and I created a presentation to share its key messages and drive awareness (and hopefully sales).
After we outlined our talk, I dove into designing a slide deck, using stock photography and other images to reinforce our speaking points. I purposefully chose images of women, taking the opportunity to showcase diversity — or so I thought. I believed I’d done a good job with our slides, until my daughter Emma saw me deliver that talk. Afterwards, she pointed out that all of the stock photography was of white women, many of whom had blond ponytails. Jeepers. I hadn’t even noticed. (As you might have guessed, I immediately changed the slide deck to reflect more diversity.) Here’s a reminder to allies everywhere: representation matters. It may be understated, but the simple act of using stock images of people from underrepresented groups makes a difference. There are many resources (some of which are free) to make it easy for you. I’ve listed several of my favorites here. Make them your go-to websites for finding stock images whenever you need a photo or illustration. 5. Share mistakes you make on the journey to be a better ally As you’ve just read, I share the mistakes I’ve made trying to be an ally. If you’re a regular reader of my newsletter, you know I do this often. In fact, I count myself as a member of the imperfect ally club. Now here’s something I learned this week about why we all should share our mistakes. In an article for the Stanford Social Innovation Review, Leading Edge CEO Gali Cooks wrote about how surprised she was to discover a lack of psychological safety in her organization. She values inclusion, yet by focusing on promoting a positive culture, she unwittingly created a workplace where employees didn’t feel comfortable giving each other constructive feedback or sharing unpopular opinions. Cooks goes on to explain that sharing mistakes and what we learn from them is one way to increase psychological safety on a team. Doing so normalizes failure, which encourages more people to step outside their comfort zone and try new things, knowing they’ll be supported if they make a mistake. In her book Dare to Lead, Brené Brown wrote, “People are opting out of vital conversations about diversity and inclusivity because they fear looking wrong, saying something wrong, or being wrong. Choosing our own comfort over hard conversations is the epitome of privilege, and it corrodes trust and moves us away from meaningful and lasting change.” Let’s all put in the effort and be okay with making mistakes. Share any you might make along the way. By normalizing it, you may inspire others to join you on the journey to be a better ally. (Thanks to the Aleria team for sharing Gali Cooks’ article in their weekly newsletter.) That’s all for this week. I wish you strength and safety as we all move forward, — Karen Catlin, Founder and Author of Better Allies®
https://medium.com/@betterallies/i-need-you-to-stop-right-there-calling-out-bias-and-other-actions-for-allies-7c482ed845a2
['Better Allies']
2020-09-18 13:31:01.297000+00:00
['Inclusion', 'Allyship', 'Betterallies', 'Workplace Culture', 'Diversity']
Call Center Intelligence (CCI) through AI Solutions
Call Center Intelligence (CCI) through AI Solutions Over the years, many organizations have tried to build solutions that anticipate the needs of their customers and resolve their queries about products and services. Call centers attend to customers' inquiries over the telephone, which can be inbound (e.g. answering customer queries) or outbound (e.g. telemarketing). Everyone has faced the dreaded queue, pressing the keypad a whopping number of times and listening to softly played hold music until you talk to an actual live agent. At the end of all the fuss, we end up repeating all the information again. Hmm! Frustrating, ain't it? Pain points of a Call Center: These can be discussed in terms of customer experience and business expenditure. Customer Experience: deep and complex IVR (Interactive Voice Response) trees; customers repeating information more than once; agents searching for information, thereby increasing the wait time. Huge Business Expenditure: The number of requests to call centers has increased massively in the past decade. People's preference has always been voice rather than other channels such as chat or email. Most of them are just routine calls, such as troubleshooting network issues (for Internet Service Providers) or debit card blocking/unblocking requests (for the banking sector). Live agent resources could be cut down if these routine calls were avoided to an extent. "Focus on the Solution, Not the Problem". With the advent of Natural Language Processing (NLP), chatbots can easily decipher our intent, emotions, and sentiments based on the way we interact. AI-Powered Assistants: A Solution: There is ample evidence that Artificial Intelligence simplifies many routine things and daily tasks, changing our lives for the better. AI has been the buzzword in business circles, making it an unavoidable technology to account for. Creating computers that can understand natural language has long been a subject of human speculation, and the growth of Natural Language Understanding has started to quench that thirst. Natural Language Understanding (NLU) and Natural Language Generation (NLG) are very promising areas of Artificial Intelligence. According to the GlobalNewsWire forecast, the global NLP market is expected to hit a value of $28.6 billion by 2026. Chatbots exploit NLU, i.e., in simple words, the ability to understand what the user actually says. Call centers are an ideal market in which to implement NLU algorithms, where chatbots can perform routine work and also act as advisors to live agents. Let's check out different places where an AI agent becomes handy. Source: Virtual Agent (attends to routine topics) Virtual Agent: Automates the most common transactions and passes on complex transactions to live agents. It propagates all the context gathered during calls to the live agent. More interactive, informative, and quicker than IVR. Agent Assist: Pulls contextually similar content from the Knowledge Base and presents it to the agent, thereby reducing the waiting time. Conversational Topic Modelling: Discover the topics for which customers reach out to you and how they articulate them. This is essential for updating the Knowledge Base and producing better results in the future; thereby the system gets better and better. How to make AI agents better? One way is through AI-powered Knowledge Bases.
In customer service jobs, agents have to search through relevant documents to find a solution to a customer's problem, and this has to be pretty quick. An AI-powered Knowledge Base can quickly traverse the documentation using key phrases and deliver the answer straight to the agent, thereby reducing the time taken. This surely inspires confidence in the brand/product. So, how do you build a powerful Knowledge Base? The main factor here is understanding what the customer wants, which includes discovering the topic on which the customer has to be serviced. Each time a customer calls, call logs are collected in order to generate training data for different topics. Any ML algorithm can then be used to predict the needs of each user (a minimal sketch of this idea appears at the end of this article). Creating a Powerful Knowledge Base Once the topic is chosen, important keywords and top sentences used by the callers to articulate those topics are collected. ML algorithms, along with human-supervised validation, make the system more robust. Whenever an AI agent fails, the stored recording is fed back into the Knowledge Base, thus making the chatbots better over time and much more adaptive to specific business cases. AI: ALL THE WAY The world is now adopting more and more AI-driven solutions. It is pretty clear that AI cannot take the place of human beings, but it can certainly assist them, increasing their productivity, which is a boon for any business. For all the promotion around AI chatbots, few companies have embraced them in call center operations. But the rate of adoption is going to rise in the coming years, primarily because of the cost reduction and personalized experience they offer, and chatbots being drafted into other businesses is not too far away. Let's democratize AI! Let's make AI possible for everyone! Are you looking for AI products, AI services, AI research, or AI resourcing? You can reach us! Website: Federated AI Services Federated AI (FAI) Services FAI is envisioned as an enduring, structure-preserving map between business value and artificial intelligence research. FAI emphasizes upskilling and reskilling the Indian workforce to build personalized products from redesigned, high-impact AI research and engineering solutions, serving greater business value to its clients with greater efficiency.
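To make the topic-discovery step concrete, here is a minimal, hypothetical sketch in JavaScript of keyword-based topic detection over a single caller utterance. The topic names and phrases are invented for illustration and are not Federated AI's actual implementation; a production system would learn these associations from call logs with a trained ML model rather than hand-written lists.

// Minimal illustrative sketch: keyword-based topic detection for a call transcript.
// Topic names and phrases below are made up for illustration only.
const topics = {
  'card-blocking': ['block my card', 'lost my debit card', 'card stolen'],
  'network-issue': ['internet is down', 'no connection', 'internet is slow']
};

function detectTopic(utterance) {
  const text = utterance.toLowerCase();
  let best = { topic: null, hits: 0 };
  for (const [topic, phrases] of Object.entries(topics)) {
    // Count how many known phrases for this topic appear in the utterance.
    const hits = phrases.filter((phrase) => text.includes(phrase)).length;
    if (hits > best.hits) best = { topic, hits };
  }
  return best.topic; // null means "hand over to a live agent"
}

// Example: routes this caller to the card-blocking flow.
console.log(detectTopic('Hi, I lost my debit card, please block my card.'));

In practice the same routing decision would also carry the context gathered during the call (account details, prior turns) over to the live agent, as described above.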
https://medium.com/federatedai/ai-solutions-for-call-centers-4defeef98106
['Akaash B']
2020-12-15 15:12:48.599000+00:00
['Naturallanguageprocessing', 'Business', 'Chatbots', 'Call Center', 'Artificial Intelligence']
Vivint Doorbell Camera Pro review: Sophisticated front-door security — for a price
Vivint Doorbell Camera Pro review: Sophisticated front-door security — for a price The Vivint Doorbell Camera Pro isn't cheap at $249, but it's prettier and more sophisticated than the similarly priced Ring Video Doorbell Pro ($249) and the Nest Hello ($229). Vivint offers one of the best professionally installed and monitored smart home/home security solutions on the market, but you don't necessarily need to buy the entire system to deploy this doorbell. You will, however, want to sign up for the ostensibly optional cloud storage plan, which costs $4.99 per month. Without that plan, you only see a live view from the camera (Ring's doorbells and security cameras have the same limitation). But Vivint will install this doorbell for you at no additional cost, even if your home doesn't already have the low-voltage wiring in place that it depends on for power. Vivint's best doorbell is outfitted with infrared night vision and an image sensor that supports HDR—and the camera's image quality is superb. It has a wider field of view than either the Ring or Nest video doorbells—180 degrees vertical as well as horizontal—enabling you to see your entire porch (the camera has a 1:1 aspect ratio and a resolution of 1,664 x 1,664 pixels, but video is streamed in 1080p). This review is part of TechHive's coverage of the best video doorbells, where you'll find reviews of competing products, plus a buyer's guide to the features you should consider when shopping. Even though the doorbell is installed at a right angle to my front door, it still affords a view more than 50 feet down my flagstone entry walk. There is a small amount of fisheye distortion when objects are very close to the camera, but the wide-angle view lets me see visitors head to toe, as well as packages left anywhere on the porch. Michael Brown / IDG Notice the message at the top of the screen indicating that a package has been delivered, and note that the resolution is crisp enough that you can read the logo on the truck in the driveway that's nearly 100 feet away. Package and person detection Package detection is one of the Doorbell Camera Pro's best features. In addition to sending "person detected" alerts when someone comes within range of its motion detector, the camera will also alert you when a package has been delivered. If you can't get to your porch right away, and the doorbell is in "deter" mode, it will sound an alert tone on its speaker (I selected a "you-who!" whistle) whenever someone approaches your door, causing them to look to the source of the sound and therefore present their face to the camera. In addition to the alert tone, a bright red LED encircling the doorbell button lights up. You can turn this "deter" feature on or off, or you can set it to operate on a schedule. Michael Brown / IDG Push notifications alert you when the Vivint Doorbell Camera Pro detects a person or a package. Another exceptionally good feature—although it is optional and requires your having the larger Vivint Smart Home system installed—is the Doorbell Camera Pro's ability to continuously record to a local network-attached hard drive (a four-channel NAS box, essentially, although it can accommodate just a single 1TB drive and it can't be used for any other purpose). Vivint calls this product the Vivint Smart Drive and it operates with up to four of Vivint's security cameras (both indoor and outdoor models).
There’s no arguing that it’s expensive, but there’s also no arguing that it’s a fantastic option to have—and it’s unmatched by any other vendor. [ Further reading: The best home security cameras ]Rather than record a short clip only when motion is detected or someone rings the doorbell, up to four of Vivint’s cameras record continuously to the drive for up to 30 days (you get 14 days of clip storage in the cloud without the drive). When you look at the camera’s live view via the Vivint mobile app, you can press a Rewind button to bring up a timeline of recordings. Dragging your fingertip along this timeline scrubs band and forth through time of recorded video. At any point along the timeline, you can start watching in real time or at an accelerated speed: 2X, 4X, or 8X. Vivint The Vivint Smart Drive is a pricey option at $249, but it delivers continuous video recording for up to four Vivint security cameras without worry that you’ll bump into your ISP’s data cap. Markers on the timeline alert you to detection events, and a label appears at the top of the screen to let you know if it’s a person or a package alert. Rewind, fast-forward, and play/pause controls help you quickly locate these motions events, and there’s a helpful time stamp above the timeline and a date stamp above the primary video screen. Whether the camera has flagged an event or not, you can create a video clip lasting either 30 seconds, 90 seconds, or five minutes from any starting point on the timeline. You can then either download the file or share a link to the clip. If your camera captured a crime in process, you can share this forensic evidence with the police. Unfortunately, these clips are not timestamped, a factor that could diminish their value to investigators. Nest cameras are among the few other security cameras that can record continuously, but those devices record continuously to the cloud, which can cause problems for users with data caps and/or limited upload bandwidth. Vivint’s cameras record only event-triggered clips to the cloud. My ISP doesn’t currently impose a data cap, but I live in a rural area and do have to put up with very slow upload speeds. Vivint’s official recommendation is to have at least 1.5Mbps of upload speed for each camera. Unlike Nest, Vivint doesn’t allow you to tailor the quality of the uploaded video stream to reduce its bandwidth consumption. Michael Brown / IDG A message at the top of the screen labels this event as someone ringing the bell. Skip back and skip forward buttons let you jump to previous and next events (the bar to the left of the green line indicates previous events, and the dot indicates the current event). Motion detectionYou can draw a single but irregularly shaped detection zone with your fingertip across the camera’s field of view, to block things like branches from shrubs and trees from creating false alerts, and you can also fine-tune the motion detector’s sensitivity. But the camera proved so accurate at discerning the movement of people from other things—including animals, for the most part—that I didn’t find the custom detection zones to be all that necessary. Mentioned in this article Nest Hello Read TechHive's review$229.00MSRP $229.00See iton Nest Store There was just one occasion in the several weeks that I tested the doorbell when it misidentified my large (12-pound) cat as a person. But that happened around 2:15 a.m., and only after he jumped from the porch to a bench and then a 20-inch-high planter. 
I have two other smaller cats, and they've never set off a person alert. The Vivint Doorbell Camera Pro looks more like a conventional doorbell than most, measuring just 1.5 inches wide and 4.5 inches tall. Much of the reason for its svelte dimensions, however, is that it doesn't depend on a battery for power. I thought that would be a problem for me, because it never occurred to me to ensure my general contractor included a doorbell when we had our house built 12 years ago. Fortunately, Vivint's installer came up with a clever solution: My dining room is on the other side of the wall from my front porch, so he drilled a hole into the exterior wall, snaked the doorbell wiring through to the junction box hosting an AC outlet, and connected it to a plug-in adapter there. (The doorbell can operate on 12-24V DC or 16-24V AC adapters that deliver a minimum of 1.0 amp.) Michael Brown / IDG The Vivint Doorbell Camera Pro only once misidentified my (overweight) cat as a person, and that was at 2:00 a.m. (I added arrows to point out the cat and the alert). The doorbell has a dual-band 802.11ac (Wi-Fi 5) wireless adapter onboard. However, instead of connecting to your Wi-Fi router, the doorbell connects to Vivint's smart home control panel, and the control panel connects to your router. The doorbell supports two-way audio, and you can talk with visitors using either your mobile device or the control panel. By the same token, you can view thumbnails of every event and select any of those thumbnails to play a recording in a larger window on either your mobile app or Vivint's control panel. Deep smart home integration As with all of Vivint's products, the Doorbell Camera Pro integrates tightly with the rest of its smart home system, including its new Car Guard onboard diagnostic product. When you're viewing a live stream from the doorbell, you can push a button at the bottom of the screen to bring up the user interface for the security system, where you can arm or disarm the system, lock or unlock any of your smart locks, and open or close your connected overhead garage doors. Michael Brown / IDG When a person is detected, live video is streamed from the doorbell to Vivint's control panel. You can also create custom rules such as "record a clip with all cameras when an alarm is triggered," "turn on my porch light when my doorbell detects a person at night, and turn it off 30 minutes later," or even "record a clip with my doorbell when my vehicle is disturbed." But as I said early on: This level of sophistication, security, and automation isn't cheap. The Doorbell Camera Pro costs $249, plus $4.99 per month for the cloud storage service. The optional Smart Drive costs another $249, and a Vivint smart home system starts at $599 plus a $99 installation fee (which is often waived if the company is running a promotion). The kit includes the aforementioned smart home hub, two door/window sensors, a motion sensor, a water leak detector, and a $100 credit for additional sensors. Monthly service fees start at $39 and include professional monitoring. If you choose to finance the system purchase through Vivint, you'll need to sign a service contract; no contract is required if you pay for the hardware up front. If those costs aren't a barrier, the Vivint Doorbell Camera Pro is the best device in its category.
https://medium.com/@jeanett70772847/vivint-doorbell-camera-pro-review-sophisticated-front-door-security-for-a-price-91331d2d35c9
[]
2021-01-10 07:49:04.001000+00:00
['Audio', 'Cutting', 'Music', 'Deals']
Introducing JSON-Explorer, a free open-source web JSON visualizer
tl;dr: You can visit https://maxired.github.io/json-explorer/ and upload a JSON file to explore and filter it. This is mainly useful for tabular data. The landing screen of JSON-Explorer — import a file, see it While working on Retrolution, I recently faced a situation where I wanted to provide a new web interface on top of data that I don't own. For those interested in retrospective meetings, I am speaking about the very famous Retromat. While the code of Retromat is available on Github, there is currently no license attached. Moreover, even if the project were open source, the data is not available in the repository. I very recently contacted Corinna Baldauf, Retromat's founder, for permission to use the data. She basically refused me the right to freely use the Retromat data, which I totally respect. This could be the end of the story, but at the same time she reassured me that "For your own personal use you can do whatever you want." For your own personal use you can do whatever you want. Because my initial goal was to create a useful web interface and share it with people, I tried to find an alternative. One of them would be to manually recreate the data myself, asking permission from the different authors and so on. Quite possible actually, but it looks to me more like a tedious job than a quick win. For legal reasons, I don't want to host any data that I don't own. But it is easy for anyone, even with limited computer skills, to download a file, save it locally, and use it on another webpage. Before going further, I did some very quick research into currently available solutions that would do the job. Such tools probably exist, but I haven't been able to find them. So I decided to build one. You can find it here and the sources on Github. Because I want my tool to be useful for everyone, I made it so it works with any JSON, not just the Retromat one. For example, you can use the file https://data.senat.fr/data/senateurs/ODSEN_GENERAL.json found on the French government open data platform and discover that 'Michel' is the all-time most popular first name for French senators. Or you can use https://retromat.org/activities.json?locale=en and filter Retromat activities ;-) Everything is still pretty rough. I have only spent a few hours on it so far, but please let me know if it is useful or how it can be improved.
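For readers curious what "explore and filter" means under the hood, here is a minimal, hypothetical sketch of the core idea (not the actual JSON-Explorer source): read a JSON file selected by the user entirely in the browser and filter its rows, so the data never leaves the machine. The element id and field name below are invented for illustration.

// Assumes an <input type="file" id="json-file"> element on the page.
// The field name 'prenom' is purely illustrative, not part of JSON-Explorer.
document.getElementById('json-file').addEventListener('change', (event) => {
  const reader = new FileReader();
  reader.onload = () => {
    const rows = JSON.parse(reader.result);   // expects an array of objects (tabular data)
    const matches = rows.filter((row) => row.prenom === 'Michel');
    console.log(`${matches.length} of ${rows.length} rows match`, matches.slice(0, 5));
  };
  reader.readAsText(event.target.files[0]);   // read locally; nothing is uploaded anywhere
});

Because everything runs client-side, the same page can be pointed at any locally saved JSON file without the legal concerns of hosting someone else's data.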
https://medium.com/@maxired/introducing-json-explorer-a-free-open-source-web-json-visualizer-227849c0f4e9
['Maxence Dalmais']
2020-03-11 15:00:23.346000+00:00
['Open Source', 'Retrospectives', 'Agile', 'Github', 'Json']
Eight Principles of Conversational Design
Eight Principles of Conversational Design Using human conversation patterns to design more natural digital interactions Photo: Etienne Boulanger/Unsplash We increasingly rely on digital systems to either mediate or replace human communications. But often, these experiences feel clunky and impersonal, or even scammy and deceptive. Asking Alexa to add something to my shopping list is a breeze: "Hey Alexa, add bananas to my shopping list." "Okay, I've added bananas to your shopping list." But if I want to add 10 things to my list, I have to ask Alexa again to add each individual item. "Alexa, add peanut butter to my shopping list." "Okay, I've added peanut butter to your shopping list." "Alexa, add strawberry jam to my shopping list." "Okay, I've added strawberry jam to your shopping list." "Alexa, add whole wheat bread to my shopping list." "Okay,…" Alexa's limited recall means I have to repeatedly call her name and tell her the context again, which results in a very unnatural conversation and makes me wonder if it would have been quicker just to write down the list myself. While voice interactions and other digital interfaces often use cutting-edge technology, if the design is too constrained by the application logic, it can strain the experience of the humans who use them. Systems are ubiquitous, and we rely on them to do more and more. Having multiple interfaces and systems to interact with (voice, text, website, on location) makes it more complicated to exchange value. Regular context-switching means interfaces need to be as simple, intuitive, and as similar as possible, to avoid a disjointed experience for customers. The challenge for designers is to make interactions with digital systems feel less robotic and more personal, creating systems that succeed on human terms. The key to designing interactions that feel more human is to follow the core principles of human interactions and conversations. Read on for some key principles to keep in mind when designing interactions. What is conversational design? The concept of conversational design is about looking at human conversation as a model for all interactions with digital systems. Using the principles of what makes everyday human interactions productive, it's possible to create a better and more natural dialogue with systems. Conversation is how humans interact with one another — any two strangers who speak the same language can have a conversation using this familiar interface. Even if it's occasionally awkward, or you don't fully understand all the inner workings of another person, there is enough shared understanding about what's expected in a conversation to efficiently communicate. The goal of conversational design is to learn from human conversations to make digital systems easy and intuitive to use. Principles for conversational design go beyond voice assistants and chatbots — UI, web design, and even print design can all be more conversational. Intentional language choices can make digital interactions feel like they've been designed for humans, by humans. "We're fast moving past 'computer literacy.' It's on us to ensure all systems speak human fluently." — Erika Hall Eight principles of conversational design Basic principles of human conversation — such as providing enough information that's honest and relevant, brief and polite — can be carried over to designing interactions with systems.
Developing empathy for human experiences through user research and listening is essential to learning how to design interactions that serve people better. Hall's excellent book on the topic, Conversational Design, introduces the following set of conversational design principles to create more human-centered interactions in any type of interface. 1. Cooperative The core underlying principle of conversation is cooperation, the shared purpose that helps people understand each other across verbal gaps. In other words, for a conversation to work, everyone participating must do their part. For example, if someone asks you for directions, there's a mutual agreement suggested by common courtesy for you to provide useful and relevant information, and not to deceive them or tell them a long-winded story that has nothing to do with their question. When users feel like they have to put in a lot of work to carry a conversation, they can feel like the system is not on their side, and is even making things harder for them. Systems that require special knowledge or "computer literacy" place a burden on customers to figure out how they work. Cooperative systems actively support the user and require less effort to interact with, mirroring the natural give-and-take flow of human conversation to make the exchange easier and more intuitive. 2. Goal-oriented Having a clear goal in mind is a core principle of interaction design. People have goals when they interact with digital systems, services, and products — whether it's checking a bank account balance, asking for help, comparing vacation spots, or looking up an unfamiliar word. User goals and needs should be explored via user research as part of the holistic design process, and they're key to designing a successful interaction. If you don't know what your users are trying to do, how will you know when they've done it successfully? A successful interaction helps both parties — customers and organizations — meet their goals. 3. Context-aware The equivalent of "reading a room" to guide a conversation, the more context-aware a system is, the more conversational it can be. When you're searching for hotel rooms available tonight in Seattle on your phone, for example, you don't want to see rooms available in Boston, or deals on hotels+flight+rental car packages — it's clear that you are already in Seattle and need a place to sleep. Overly automated messages or recommendations that don't add value to your users can be damaging to the overall experience. The more a system can respond to contextual cues, the better it will be at having what seems like a natural conversation and not leaving users feeling stranded. While devices can automatically provide information such as a user's location and time zone, additional insight into what users need and expect throughout the different phases of their interactions helps inform solutions that feel like they've been designed for humans. 4. Quick and clear Get to the point. Time is precious. Speed is everything. When it's built on an understanding of your customers' goals and context, thoughtful conversation design can help people make decisions with less friction and get their interaction over with quickly and efficiently. Save your users time and mental exertion by being succinct and unambiguous. Use plain language and guide users in a logical sequence, considering their likely interactions. Highly technical language or ambiguous error messages will leave people confused and unsure of what to do next.
It’s lazy to default to application logic without considering the human interaction with the system and having enough empathy to design a better experience. 5. Turn-based In a conversation, each party takes turns listening and responding appropriately. Ideally, these are relatively brief and even turns in the exchange, though some topics require a longer explanation. To avoid feeling one-sided, functional conversations should avoid long monologues on the part of the system and make it clear whose turn it is at every moment. Just as a good storyteller can keep an audience engaged for a long time without being rude, responding to contextual cues and requesting or providing the right information at the right time helps system interactions feel more intuitive. Validating that input is correct before moving forward also helps keep the conversation moving along smoothly — especially if undoing an action will cause the user additional time or pain to correct. Google Search is a great example of a natural, turn-taking interaction — you enter your query and receive pages of relevant results almost instantly. The predictive search functionality recommends similar queries based on previous searches, so you often don’t even have to type the full query. Each new search gives you a batch of new results, and there’s little perceived downside to running multiple searches. This ease of interaction has helped Google Search become the standard for information exchange today. 6. Truthful Successful interactions feel truthful, offer clear and verifiable information, and prevent confusion. Being truthful in conversation design means ensuring there is a strong correlation between what the user expects and what the system offers. No unpleasant surprises or bait-and-switch tactics that cause distrust. Intentionally vague language, such as a “Get Started” link that forces users to create an account before even knowing what they’re signing up for, can feel deceptive and negatively impact the perception of how credible your system is. Using clear language and ensuring all the information necessary to take action is present in conversations with systems will help create a more trustworthy and satisfactory interaction. 7. Polite Politeness is the quality of being respectful and considerate of other people, and it helps make people feel more relaxed and comfortable with one another in a conversation. Being respectful of your customer’s limited time and attention means designing interactions that don’t impose on them. Don’t be rude, like ads that pop up and starting playing a video, forcing you to listen while you’re trying to figure out how to close them. Polite designs help organizations meet business goals while also making customers feel good. Giving the customer more or fewer options (depending on what they’re trying to accomplish) and anticipating additional needs can make digital interactions feel more considerate and pleasantly productive. Understanding your user’s journey through research and testing will help highlight areas where there are opportunities to make users’ time with your system more productive and efficient, making it clear that you respect their time. 8. Error-tolerant People make mistakes. We’re only human, after all. In a system interaction, how easy or difficult it is to recover from an error affects the entire rest of the experience. You’ll only try so many times before getting frustrated and giving up. 
Computers are programmed to follow instructions based on reason and logic and don’t always catch our more human errors. Understanding intent with imperfect information or an unexpected response is challenging for machines, but not impossible. Google Search again is leading the way with understanding search intent. When you spell a word wrong in your Google Search query, you still get relevant results, and it shows you the corrected spelling. Even if that’s still not quite what you meant, revising the query feels effortless. While the machines are learning about intent and natural language processing, we can learn from our human interactions to create thoughtful and intentional designs that help people make decisions easily and recover seamlessly.
https://modus.medium.com/eight-principles-of-conversational-design-a57e78cfbd61
['Maya Hampton']
2019-11-06 16:31:01.637000+00:00
['Interaction Design', 'Conversational Design', 'User Research', 'Craft', 'UX']
World Logos
https://medium.com/storymaker/world-logos-b01bddbb1016
['Sr Hardy']
2020-11-20 23:34:39.260000+00:00
['Sky', 'World', 'Birds', 'Flight', 'Poem']
d3.js, Linearicons and Font Awesome
Clients often want to embed icons in d3, which is not a built-in feature. Here's my solution for two great icon providers. Linear Icons a) choose your icon When you go to the website and click on an icon you get the properties page: You need to store the number in the Entity field (in this case 800). You will be using this later with the prefix '\ue' in front of it (so 800 becomes '\ue800'). b) add a link in your html page: <link rel="stylesheet" href="https://cdn.linearicons.com/free/1.0.0/icon-font.min.css"> c) the code (you can, of course, use CSS for styling if you wish) svg.append('text') .style('font-family', 'Linearicons-Free') .attr('font-size', '20px' ) .text('\ue800') .attr('x',40) .attr('y',40) .attr("fill","pink"); This adds a pink 20px home icon to your svg at position (40,40). Font Awesome a) choose your icon When you go to the website and click on an icon you get the properties page: You need to store the number in the Entity field, which is the number/letter combo starting with f (in this case f2b9). You will be using this later with '\uf' in place of the leading 'f' (so f2b9 becomes '\uf2b9'). b) add a link in your html page: <script src="https://kit.fontawesome.com/type-your-personal-kitcode-here.js"></script> You'll need to visit this link to get your personal kit code. c) the code d3.select("svg").append("text") .attr("class","fa") .attr("x",20) .attr("y",20) .attr("font-size","20px") .attr("fill","pink") .text('\uf2b9') NB: you need to give your text element a class of "fa", otherwise it won't render properly. Hope that's helpful.
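To tie the two snippets together, here is a complete, runnable sketch. It assumes d3 (v4 or later), the Linearicons stylesheet, and your Font Awesome kit script from the b) steps above are already loaded in the page; the entity codes are the same ones used in the article.

// Complete example combining both providers.
// Assumes d3 v4+, the Linearicons stylesheet, and your Font Awesome kit are loaded.
const svg = d3.select('body').append('svg')
    .attr('width', 200)
    .attr('height', 80);

// Linearicons home icon: entity 800 -> '\ue800'
svg.append('text')
    .style('font-family', 'Linearicons-Free')
    .attr('font-size', '20px')
    .attr('x', 40)
    .attr('y', 40)
    .attr('fill', 'pink')
    .text('\ue800');

// Font Awesome icon: entity f2b9 -> '\uf2b9'; the "fa" class is required
svg.append('text')
    .attr('class', 'fa')
    .attr('font-size', '20px')
    .attr('x', 80)
    .attr('y', 40)
    .attr('fill', 'pink')
    .text('\uf2b9');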
https://medium.com/@bryony_17728/d3-js-linearicons-and-font-awesome-887548662162
['Bryony Miles']
2019-06-10 13:10:08.404000+00:00
['Icons', 'JavaScript', 'D3js', 'Font Awesome']