{"text":"# Hypothesis Testing and Types of Errors\n\n## Summary\n\n\nSuppose we want to study income of a population. We study a sample from the population and draw conclusions. The sample should represent the population for our study to be a reliable one.\n\n**Null hypothesis** \\((H\\_0)\\) is that sample represents population. Hypothesis testing provides us with framework to conclude if we have sufficient evidence to either accept or reject null hypothesis. \n\nPopulation characteristics are either assumed or drawn from third-party sources or judgements by subject matter experts. Population data and sample data are characterised by moments of its distribution (mean, variance, skewness and kurtosis). We test null hypothesis for equality of moments where population characteristic is available and conclude if sample represents population.\n\nFor example, given only mean income of population, we validate if mean income of sample is close to population mean to conclude if sample represents the population.\n\n## Discussion\n\n### What are the math representations of population and sample parameters?\n\nPopulation mean and population variance are denoted in Greek alphabets \\(\\mu\\) and \\(\\sigma^2\\) respectively, while sample mean and sample variance are denoted in English alphabets \\(\\bar x\\) and \\(s^2\\) respectively. \n\n\n### What's the relevance of sampling error to hypothesis testing?\n\nSuppose we obtain a sample mean of \\(\\bar x\\) from a population of mean \\(\\mu\\). The two are defined by the relationship |\\(\\bar x\\) - \\(\\mu\\)|>=0: \n\n + If the difference is not significant, we conclude the difference is due to sampling. This is called **sampling error** and this happens due to chance.\n + If the difference is significant, we conclude the sample does not represent the population. The reason has to be more than chance for difference to be explained.Hypothesis testing helps us to conclude if the difference is due to sampling error or due to reasons beyond sampling error.\n\n\n### What are some assumptions behind hypothesis testing?\n\nA common assumption is that the observations are independent and come from a random sample. The population distribution must be Normal or the sample size is large enough. If the sample size is large enough, we can invoke the *Central Limit Theorem (CLT)* regardless of the underlying population distribution. Due to CLT, sampling distribution of the sample statistic (such as sample mean) will be approximately a Normal distribution. \n\nA rule of thumb is 30 observations but in some cases even 10 observations may be sufficient to invoke the CLT. Others require at least 50 observations. \n\n\n### What are one-tailed and two-tailed tests?\n\nWhen acceptance of \\(H\\_0\\) involves boundaries on both sides, we invoke the **two-tailed test**. For example, if we define \\(H\\_0\\) as sample drawn from population with age limits in the range of 25 to 35, then testing of \\(H\\_0\\) involves limits on both sides.\n\nSuppose we define the population as greater than age 50, we are interested in rejecting a sample if the age is less than or equal to 50; we are not concerned about any upper limit. Here we invoke the **one-tailed test**. A one-tailed test could be left-tailed or right-tailed.\n\nConsider average gas price in California compared to the national average of $2.62. If we believe that the price is higher in California, we consider right-tailed test. 
### What are the types of errors in hypothesis testing?\n\nIn concluding whether the sample represents the population, there is scope for committing errors on the following counts: \n\n + Not accepting that the sample represents the population when in reality it does. This is called **type-I** or **\\(\\alpha\\) error**.\n + Accepting that the sample represents the population when in reality it does not. This is called **type-II** or **\\(\\beta\\) error**.\n\nFor instance, granting a loan to an applicant with a low credit score is an \\(\\alpha\\) error. Not granting a loan to an applicant with a high credit score is a \\(\\beta\\) error.\n\nThe symbols \\(\\alpha\\) and \\(\\beta\\) are used to represent the probability of type-I and type-II errors respectively. \n\n\n### How do we measure type-I or \\(\\alpha\\) error?\n\nThe p-value can be interpreted as the probability of getting a result that's the same or more extreme when the null hypothesis is true. \n\nThe observed sample mean \\(\\bar x\\) is overlaid on the population distribution of values with mean \\(\\mu\\) and variance \\(\\sigma^2\\). The proportion of values beyond \\(\\bar x\\) and away from \\(\\mu\\) (either in the left tail or in the right tail or in both tails) is the **p-value**. If p-value <= \\(\\alpha\\), we reject the null hypothesis. The results are said to be **statistically significant** and not due to chance. \n\nAssuming \\(\\alpha\\)=0.05, if the p-value > 5%, we conclude the sample is highly likely to be drawn from a population with mean \\(\\mu\\) and variance \\(\\sigma^2\\). We accept \\(H\\_0\\). Otherwise, there's insufficient evidence that the sample is part of the population and we reject \\(H\\_0\\). \n\nWe preselect \\(\\alpha\\) based on how much type-I error we're willing to tolerate. \\(\\alpha\\) is called the **level of significance**. The standard for the level of significance is 0.05 but in some studies it may be 0.01 or 0.1. In the case of two-tailed tests, it's \\(\\alpha/2\\) on either side.\n\n\n### How do we determine sample size and confidence interval for sample estimate?\n\nThe **Law of Large Numbers** suggests that the larger the sample size, the more accurate the estimate. Accuracy means the variance of the estimate will tend towards zero as the sample size increases. Sample size can be determined to suit an accepted level of tolerance for deviation. \n\nThe confidence interval of the sample mean is determined by offsetting the sample mean by a multiple of the standard error on either side. If the population variance is known, then we conduct a z-test based on the Normal distribution. Otherwise, the variance has to be estimated and we use a t-test based on the t-distribution. \n\nThe formulae for determining sample size and confidence interval depend on what we want to estimate (mean/variance/others), the sampling distribution of the estimate and the standard deviation of the estimate's sampling distribution.\n\n\n### How do we measure type-II or \\(\\beta\\) error?\n\nWe overlay the sample mean's distribution on the population distribution. The proportion of overlap of the sampling estimate's distribution on the population distribution is the **\\(\\beta\\) error**. \n\nThe larger the overlap, the larger the chance the sample does belong to the population with mean \\(\\mu\\) and variance \\(\\sigma^2\\). Incidentally, despite the overlap, the p-value may be less than 5%. This happens when the sample mean is way off the population mean, but the variance of the sample mean is such that the overlap is significant.\n\n
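As a minimal sketch (assuming a right-tailed z-test with known \\(\\sigma\\) and a hypothetical true population mean), \\(\\beta\\) and power can be computed directly:\n\n```python\nfrom math import sqrt\nfrom scipy.stats import norm\n\nmu0 = 2.62      # mean under H0\nmu_true = 2.70  # hypothetical true mean under H1\nsigma = 0.18    # known population standard deviation\nn = 36\nalpha = 0.05\n\nse = sigma / sqrt(n)\n\n# Critical value: smallest sample mean that rejects H0 (right-tailed).\nx_crit = norm.ppf(1 - alpha, loc=mu0, scale=se)\n\n# Beta: probability of failing to reject H0 when H1 is true.\nbeta = norm.cdf(x_crit, loc=mu_true, scale=se)\npower = 1 - beta\n\nprint(x_crit, beta, power)  # about 2.669, 0.153, 0.847\n```\n\n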
### How do we control \\(\\alpha\\) and \\(\\beta\\) errors?\n\nErrors \\(\\alpha\\) and \\(\\beta\\) are dependent on each other. Increasing one decreases the other. Choosing suitable values for these depends on the cost of making these errors. Perhaps it's worse to convict an innocent person (type-I error) than to acquit a guilty person (type-II error), in which case we choose a lower \\(\\alpha\\). But it's possible to decrease both errors by collecting more data. \n\nJust as the p-value manifests \\(\\alpha\\), the **Power of Test** manifests \\(\\beta\\). Power of test is \\(1-\\beta\\). Among the various ways to interpret power are: \n\n + Probability of rejecting the null hypothesis when, in fact, it is false.\n + Probability that a test of significance will pick up on an effect that is present.\n + Probability of avoiding a Type II error.\n\nA low p-value and high power help us decisively conclude the sample doesn't belong to the population. When we cannot conclude decisively, it's advisable to go for larger samples and multiple samples.\n\nIn fact, power is increased by increasing sample size, effect sizes and significance levels. Variance also affects power. \n\n\n### What are some misconceptions in hypothesis testing?\n\nA common misconception is to consider \"p value as the probability that the null hypothesis is true\". In fact, the p-value is computed under the assumption that the null hypothesis is true. The p-value is the probability of observing the values, or more extreme values, if the null hypothesis is true. \n\nAnother misconception, sometimes called **base rate fallacy**, is that under controlled \\(\\alpha\\) and adequate power, statistically significant results correspond to true differences. This is not the case, as shown in the figure. Even with \\(\\alpha\\)=5% and power=80%, 36% of statistically significant p-values will not report the true difference. This is because only 10% of the null hypotheses are false (base rate): out of 1000 tests, the 100 false nulls yield 80 true positives at 80% power, while the 900 true nulls yield 45 false positives at \\(\\alpha\\)=5%, so 45 of the 125 significant results (36%) are false alarms. \n\nThe p-value doesn't measure the size of the effect, for which a **confidence interval** is a better approach. A drug that gives 25% improvement may not mean much if symptoms are innocuous, compared to another drug that gives a small improvement for a disease that leads to certain death. Context is therefore important. \n\n## Milestones\n\n1710\n\nThe field of **statistical testing** probably starts with John Arbuthnot, who applies it to test sex ratios at birth. Subsequently, others in the 18th and 19th centuries use it in other fields. However, modern terminology (null hypothesis, p-value, type-I or type-II errors) is formed only in the 20th century. \n\n1900\n\nPearson introduces the concept of **p-value** with the chi-squared test. He gives equations for calculating P and states that it's \"the measure of the probability of a complex system of n errors occurring with a frequency as great or greater than that of the observed system.\" \n\n1925\n\nRonald A. Fisher develops the concept of the p-value and shows how to calculate it in a wide variety of situations. He also notes that a value of 0.05 may be considered a conventional cut-off. \n\n1933\n\nNeyman and Pearson publish *On the problem of the most efficient tests of statistical hypotheses*. They introduce the notion of **alternative hypotheses**. 
They also describe both **type-I and type-II errors** (although they don't use these terms). They state, \"Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.\" \n\n1949\n\nJohnson's textbook titled *Statistical methods in research* is perhaps the first to introduce students to Neyman-Pearson hypothesis testing at a time when most textbooks follow Fisher's significance testing. Johnson uses the terms \"error of the first kind\" and \"error of the second kind\". In time, Fisher's approach is called the **P-value approach** and the Neyman-Pearson approach is called the **fixed-α approach**. \n\n1993\n\nCarver makes the following suggestions: use the term \"statistically significant\"; interpret results with respect to the data first and statistical significance second; and pay attention to the size of the effect.","meta":{"title":"Hypothesis Testing and Types of Errors","href":"hypothesis-testing-and-types-of-errors"}}
{"text":"# Polygonal Modelling\n\n## Summary\n\n\nPolygonal modelling is a 3D modelling approach that utilizes edges, vertices and faces to form models. Modellers start with simple shapes and add details to build on them. They alter the shapes by adjusting the coordinates of one or more vertices. A polygonal model is called faceted as polygonal faces determine its shape. \n\nPolygonal or polyhedral modelling fits best where visualization matters more than precision. It's extensively used by video game designers and animation studios. Assets in video games form whole worlds for gamers. Features of these assets are built using polygonal modelling. \n\nComputers take less time to render polygonal models. So, polygonal modelling software run well on browsers. For higher precision, advanced 3D models such as NURBS are suitable. However, NURBs can't be 3D printed unless they are converted to polygons. Many industrial applications easily handle polygonal model representations.\n\n## Discussion\n\n### Can you describe the basic elements of polygonal modelling?\n\nA **vertex** is the smallest component of a 3D model. Two or more edges of a polygon meet at a vertex. \n\n**Edges** define the shape of the polygons and the 3D model. They are straight lines connecting the vertices. \n\nTriangles and quadrilaterals are the polygons generally used. Some applications offer the use of polygons with any number of edges (N-gons) to work with. \n\nFaces of polygons combine to form polygonal **meshes**. One can **deform** meshes. That is, one may move, twist or turn meshes to create 3D objects using deformation tools in the software. The number of polygons in a mesh makes its **polycount**. \n\n**UV coordinates** are the horizontal (U) and vertical (V) axes of the 2D space. 3D meshes are converted into 2D information to wrap textures around them. \n\nPolygon density in the meshes is its **resolution**. Higher resolution indicates better detailing. Good 3D models contain high-resolution meshes where fine-detailing matters and low-resolution meshes where detailing isn't important. \n\n\n### How are polygonal meshes generated?\n\nPolygonal meshes are generated by converting a set of spatial points into vertices, faces and edges. These components meet at shared boundaries to form physical models. \n\nPolygonal mesh generation (aka meshing) is of two types: **Manual** and **Automatic**. In manual meshing, the positions of vertices are edited one by one. In automatic meshing, values are fed into the software. The software automatically constructs meshes based on the specified values. The automatic method enables the rapid creation of 3D objects in games, movies and VR. \n\nMeshing is performed at two levels. At the model's surface level, it's called **Surface meshing**. Surface meshes won't have free edges or a common edge shared by more than two polygons. \n\nMeshing in its volume dimension is called **Solid meshing**. The solid surfaces in solid meshing are either polyhedral or trimmed. \n\nThere are many ways to produce polygonal meshes. Forming primitives from standard shapes is one way. Meshes can also be drawn by interpolating edges or points of other objects. Converting existing solid models and stretching custom-made meshes into fresh meshes are two other options. \n\n\n### What are free edges, manifold edges and non-manifold edges?\n\nA **free edge** in a mesh is an edge that doesn't fully merge with the edge of its neighbouring element. The nodes of meshes with free edges won't be accurately connected. 
### How would you classify the polygonal meshing process based on grid structure?\n\nA grid structure works on the principle of Finite Element Analysis (FEA). An FEA node can be thought of as the vertex of a polygon in polygonal modelling. An FEA element can represent an edge, a shape or a solid, depending on the dimension. \n\nDividing the expanse of a polygonal model into small elements before computing forms a grid. Grid structure-wise, meshing is of two types: \n\n + **Structured meshing** displays a definite pattern in the arrangement of nodes or elements. The size of each element in it is nearly the same. It enables easy access to the coordinates of these elements. It's applicable to uniform grids made of rectangles, ellipses and spheres that make regular grids.\n + **Unstructured meshing** is arbitrary and forms irregular geometric shapes. The connectivity between elements is not uniform. So, unstructured meshes do not follow a definite pattern. It requires that the connectivity between elements is well-defined and properly stored. The axes of these elements are unaligned (non-orthogonal).\n\n### How are mesh generation algorithms written for polygonal modelling?\n\nMesh generation algorithms are written according to the principles of the chosen mesh generation method. There are many methods of generating meshes. The choice depends on the mesh type. \n\nA mesh generation method serves the purposes of generating nodes (geometry) and connecting nodes (topology). \n\nLet's take the Delaunay triangulation method for instance. According to it, the surface domain elements are discretized into non-overlapping triangles. The triangles are so formed that their minimum angles are maximized, avoiding thin slivers. The circumcircle drawn about each triangle cannot accommodate any other node within it. \n\nDelaunay triangulation is applied through several algorithms. The Bowyer-Watson algorithm is one of them. It's an incremental algorithm that adds one node at a time to a given triangulation. Any triangle whose circumcircle contains the new point is removed. Fresh triangles are then formed by connecting the new point to the edges of the resulting cavity. \n\n
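A minimal sketch of Delaunay meshing over random nodes, using SciPy (an assumption; any Delaunay library would do), with a check of the empty-circumcircle property:\n\n```python\nimport numpy as np\nfrom scipy.spatial import Delaunay\n\nrng = np.random.default_rng(seed=42)\npoints = rng.random((20, 2))  # 20 random nodes in the unit square\n\ntri = Delaunay(points)        # Delaunay triangulation (via Qhull)\nprint('triangles:', len(tri.simplices))\n\n# Delaunay property: no node lies strictly inside any triangle's circumcircle.\nfor simplex in tri.simplices:\n    a, b, c = points[simplex]\n    # Circumcentre from the perpendicular bisector equations.\n    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))\n    ux = ((a@a)*(b[1]-c[1]) + (b@b)*(c[1]-a[1]) + (c@c)*(a[1]-b[1])) / d\n    uy = ((a@a)*(c[0]-b[0]) + (b@b)*(a[0]-c[0]) + (c@c)*(b[0]-a[0])) / d\n    centre = np.array([ux, uy])\n    r = np.linalg.norm(a - centre)  # circumradius\n    dist = np.linalg.norm(points - centre, axis=1)\n    assert (dist >= r - 1e-9).all()\n```\n\n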
### How does one fix the polygon count for models?\n\nPolygon count or polycount gives a measure of visual quality. Detailing needs a high number of polygons. It gives a photorealistic effect. But a high polycount impacts efficiency. It may take more time to load and render. When a model takes more time to download, we may run out of patience. Real-time rendering delays cause a video or animation to stop and start. So, a good polygonal model is a combination of high visual quality and low polycount. \n\nThe threshold number to call a polygon count high is subjective. For mobile devices, anywhere between 300 and 1500 polygons is good. Desktops can comfortably accommodate 1500 to 4000 polygons without affecting performance. \n\nThese polycount numbers vary depending on the CPU configuration and other hardware capabilities. Advanced rendering capabilities smoothly handle anywhere between 10k and 40k polygons. Global mobile markets are vying to produce CPUs that can render 100k to 1 million polygons for an immersive 3D experience. \n\nA higher polycount increases the file sizes of 3D assets. Websites will have upload limits. So it's also important to keep file sizes in mind while fixing the polygon count. \n\n\n### What are some beginner pitfalls to polygonal modelling?\n\n**Irregular meshes**: As beginners, we may miss triangles and create self-intersecting surfaces. Or we may leave holes on mesh surfaces or fill them in with backward triangles. Irregular meshes will affect the model's overall appearance. Eyeball checks and use of mesh generation software will help us avoid mesh-related errors. \n\n**Incorrect measurements**: These may distort the model's proportionality and ruin the output. It's best to train our eyes to compare images and estimate the difference in depths. Comparing our model with the reference piece on the image viewer tool will tell us the difference. \n\n**Too many subdivisions early in the modelling**: This prevents us from making changes without tampering with the measurements. So, we may end up creating uneven surfaces. Instead, it's better to start with fewer polygons and add to them as we build the model. \n\n**Topology errors**: We may get the edge structure and mesh distributions wrong. We need to equip ourselves by learning how to use mesh tools. It's important to learn where to use triangles, quads and higher polygons. Duplicates are to be watched out for. Understanding the flow of edges is vital. \n\n## Milestones\n\n1952\n\nGeoffrey Colin Shephard furthers Thomas Bradwardine's 14th-century work on non-convex polygons. He extends polygon formation to the complex plane. It paves the way for the construction of complex polygons. In polygonal modelling, complex polygons have circuitous boundaries. A polygon with a hole inside is one example. \n\n1972\n\nBruce G. Baumgart introduces a paper on the **winged edge data structure** at Stanford University. The winged edge data structure is a way of representing polyhedrons on a computer. The paper states its exclusive use in AI for computer graphics and world modelling. \n\n1972\n\nNewell introduces the **painter's algorithm**. It's an algorithm for painting polygons that considers the distance of each polygon's plane from the viewer. The algorithm paints the farthest polygon from the viewer first and proceeds to the nearest. \n\n1972\n\nEdwin Catmull and Frederic Parke create the **world's first 3D rendered movie**. In the movie, the animation of Edwin's left hand has precisely drawn and measured polygons. \n\n1992\n\nFowler et al. present *Modelling Seashells* at ACM SIGGRAPH, Chicago. They use polygonal meshes among others to create comprehensive computer imagery of seashells. \n\n1998\n\nAndreas Raab suggests the **classification of edges** of a polygonal mesh. They are grouped as sharp, smooth, contour and triangulation edges. It solves the problem of choosing the right lines to draw. \n\n1999\n\nDeussen et al. successfully apply Andreas Raab's algorithm that constructs a skeleton from a 3D polygonal model. They use it in connection with the intersecting planes.","meta":{"title":"Polygonal Modelling","href":"polygonal-modelling"}}
{"text":"# Relation Extraction\n\n## Summary\n\n\nConsider the phrase \"President Clinton was in Washington today\". This describes a *Located* relation between Clinton and Washington. Another example is \"Steve Balmer, CEO of Microsoft, said…\", which describes a *Role* relation of Steve Balmer within Microsoft. \n\nThe task of extracting semantic relations between entities in text is called **Relation Extraction (RE)**. While Named Entity Recognition (NER) is about identifying entities in text, RE is about finding the relations among the entities. Given unstructured text, NER and RE helps us obtain useful structured representations. Both tasks are part of the discipline of Information Extraction (IE). \n\nSupervised, semi-supervised, and unsupervised approaches exist to do RE. In the 2010s, neural network architectures were applied to RE. Sometimes the term **Relation Classification** is used, particularly in approaches that treat it as a classification problem.\n\n## Discussion\n\n### What sort of relations are captured in relation extraction?\n\nHere are some relations with examples:\n\n + *located-in*: CMU is in Pittsburgh\n + *father-of*: Manuel Blum is the father of Avrim Blum\n + *person-affiliation*: Bill Gates works at Microsoft Inc.\n + *capital-of*: Beijing is the capital of China\n + *part-of*: American Airlines, a unit of AMR Corp., immediately matched the moveIn general, affiliations involve persons, organizations or artifacts. Geospatial relations involve locations. Part-of relations involve organizations or geo-political entities. \n\n**Entity tuple** is the common way to represent entities bound in a relation. Given n entities in a relation r, the notation is \\(r(e\\_{1},e\\_{2},...,e\\_{n})\\). An example use of this notation is *Located-In(CMU, Pittsburgh)*. \n\nRE mostly deals with binary relations where n=2. For n>2, the term used is **higher-order relations**. An example of 4-ary biomedical relation is *point\\_mutation(codon, 12, G, T)*, in the sentence \"At codons 12, the occurrence of point mutations from G to T were observed\". \n\n\n### What are some common applications of relation extraction?\n\nSince structured information is easier to use than unstructured text, relation extraction is useful in many NLP applications. RE enriches existing information. Once relations are obtained, they can be stored in databases for future queries. They can be visualized and correlated with other information in the system. \n\nIn question answering, one might ask \"When was Gandhi born?\" Such a factoid question can be answered if our relation database has stored the relation *Born-In(Gandhi, 1869)*. \n\nIn biomedical domain, protein binding relations can lead to drug discovery. When relations are extracted from a sentence such as \"Gene X with mutation Y leads to malignancy Z\", these relations can help us detect cancerous genes. Another example is to know the location of a protein in an organism. This ternary relation is split into two binary relations (Protein-Organism and Protein-Location). Once these are classified, the results are merged into a ternary relation. \n\n\n### Which are the main techniques for doing relation extraction?\n\nWith **supervised learning**, the model is trained on annotated text. Entities and their relations are annotated. Training involves a binary classifier that detects the presence of a relation, and a classifier to label the relation. For labelling, we could use SVMs, decision trees, Naive Bayes or MaxEnt. 
Since finding large annotated datasets is difficult, a **semi-supervised** approach is more practical. One approach is to do a phrasal search with wildcards. For example, `[ORG] has a hub at [LOC]` would return organizations and their hub locations. If we relax the pattern, we'll get more matches but also false positives. \n\nAn alternative is to use a set of specific patterns, induced from an initial set of seed patterns and seed tuples. This approach is called **bootstrapping**. For example, given the seed tuple *hub(Ryanair, Charleroi)* we can discover many phrasal patterns in unlabelled text. Using these patterns, we can discover more patterns and tuples. However, we have to be careful of **semantic drift**, in which one wrong tuple/pattern can lead to further errors. \n\n\n### What sort of features are useful for relation extraction?\n\nSupervised learning uses features. The named entities themselves are useful features. These include an entity's bag of words, head words and its entity type. It's also useful to look at words surrounding the entities, including words that are in between the two entities. Stems of these words can also be included. The distance between the entities could be useful. \n\nThe **syntactic structure** of the sentence can signal the relations. A syntax tree could be obtained via base-phrase chunking, dependency parsing or full constituent parsing. The paths in these trees can be used to train binary classifiers to detect specific syntactic constructions. The accompanying figure shows possible features in the sentence \"[ORG American Airlines], a unit of AMR Corp., immediately matched the move, spokesman [PERS Tim Wagner] said.\" \n\nWhen using syntax, expert knowledge of linguistics is needed to know which syntactic constructions correspond to which relations. However, this can be automated via machine learning. \n\n\n### Could you explain kernel-based methods for supervised relation classification?\n\nUnlike feature-based methods, kernel-based methods don't require explicit feature engineering. They can explore a large feature space in polynomial computation time. \n\nThe essence of a kernel is to compute the **similarity** between two sequences. A kernel could be designed to measure structural similarity of character sequences, word sequences, or parse trees involving the entities. In practice, a kernel is used as a similarity function in classifiers such as SVM or Voted Perceptron. \n\nWe note a few kernel designs: \n\n + **Subsequence**: Uses a sequence of words made of the entities and their surrounding words. Word representation includes POS tag and entity type.\n + **Syntactic Tree**: A constituent parse tree is used. Convolution Parse Tree Kernel is one way to compare similarity of two syntactic trees.\n + **Dependency Tree**: Similarity is computed between two dependency parse trees. This could be enhanced with shallow semantic parsers. A variation is to use dependency graph paths in which the shortest path between entities represents a relation.\n + **Composite**: Combines the above approaches. Subsequence kernels capture lexical information whereas tree kernels capture syntactic information.\n\n
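A minimal sketch of the kernel idea (a toy word-bigram overlap kernel plugged into scikit-learn's precomputed-kernel SVM; real subsequence and tree kernels are far more elaborate):\n\n```python\nimport numpy as np\nfrom sklearn.svm import SVC\n\ndef seq_kernel(s, t):\n    # Similarity: Jaccard overlap of the word bigrams of two sentences.\n    A = set(zip(s.split(), s.split()[1:]))\n    B = set(zip(t.split(), t.split()[1:]))\n    return len(A & B) / max(1, len(A | B))\n\ntrain = ['CMU is in Pittsburgh', 'Beijing is the capital of China',\n         'Microsoft is in Redmond', 'Paris is the capital of France']\ny = ['located-in', 'capital-of', 'located-in', 'capital-of']\n\ngram = np.array([[seq_kernel(a, b) for b in train] for a in train])\nclf = SVC(kernel='precomputed').fit(gram, y)\n\ntest = ['Sydney is in Australia']\ngram_test = np.array([[seq_kernel(a, b) for b in train] for a in test])\nprint(clf.predict(gram_test))  # expect: located-in\n```\n\n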
### Could you explain the distant supervision approach to relation extraction?\n\nDue to extensive work done for the Semantic Web, we already have many knowledge bases that contain `entity-relation-entity` triplets. Examples include DBpedia (3K relations), Freebase (38K relations), YAGO, and Google Knowledge Graph (35K relations). These can be used for relation extraction without requiring annotated text. \n\nDistant supervision is a combination of unsupervised and supervised approaches. It extracts relations without direct supervision. It also induces thousands of features using a probabilistic classifier. \n\nThe process starts by linking named entities to those in the knowledge bases. Using relations in the knowledge base, the patterns are picked up in the text. Patterns are applied to find more relations. Early work used DBpedia and Freebase, and Wikipedia as the text corpus. Later work utilized semi-structured data (HTML tables, Wikipedia list pages, etc.) or even a web search to fill gaps in knowledge graphs. \n\n\n### Could you compare some semi-supervised or unsupervised approaches of some relation extraction tools?\n\nDIPRE's algorithm (1998) starts with seed relations, applies them to text, induces patterns, and applies the patterns to obtain more tuples. These steps are iterated. When applied to the *(author, book)* relation, patterns take the form `(longest-common-suffix of prefix strings, author, middle, book, longest-common-prefix of suffix strings)`. DIPRE is an application of the Yarowsky algorithm (1995) invented for WSD. \n\nLike DIPRE, Snowball (2000) uses seed relations but doesn't look for exact pattern matches. Tuples are represented as vectors, grouped using similarity functions. Each term is also weighted. Weights are adjusted with each iteration. Snowball can handle variations in tokens or punctuation. \n\nKnowItAll (2005) starts with domain-independent extraction patterns. Relation-specific and domain-specific rules are derived from the generic patterns. The rules are applied on a large scale on online text. It uses the pointwise mutual information (PMI) measure to retain the most likely patterns and relations. \n\nUnlike earlier algorithms, TextRunner (2007) doesn't require a pre-defined set of rules. It learns relations, classes and entities on its own from a large corpus. \n\n\n### How are neural networks being used to do relation extraction?\n\nNeural networks were increasingly applied to relation extraction from the early 2010s. Early approaches used **Recursive Neural Networks** that were applied to syntactic parse trees. The use of **Convolutional Neural Networks (CNNs)** came next, to extract sentence-level features and the context surrounding words. A combination of these two networks has also been used. \n\nSince CNNs failed to learn long-distance dependencies, **Recurrent Neural Networks (RNNs)** were found to be more effective in this regard. By 2017, basic RNNs gave way to gated variants called GRU and LSTM. A comparative study showed that CNNs are good at capturing local and position-invariant features whereas RNNs are better at capturing order information and long-range context dependencies. \n\nThe next evolution was towards the **attention mechanism** and **pre-trained language models** such as BERT. For example, the attention mechanism can pick out the most relevant words and use CNNs or LSTMs to learn relations. Thus, we don't need explicit dependency trees. In January 2020, it was seen that BERT-based models represent the current state-of-the-art with an F1 score close to 90. \n\n\n### How do we evaluate algorithms for relation extraction?\n\nRecall, precision and F-measures are typically used to evaluate against a gold standard of human-annotated relations. These measures suit supervised methods. \n\nFor unsupervised methods, it may be sufficient to check if a relation has been captured correctly. There's no need to check if every mention of the relation has been detected. Precision here is simply the correct relations against all relations as judged by human experts. Recall is more difficult to compute. Gazetteers and web resources may be used for this purpose. \n\n
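A minimal sketch of these metrics over toy predicted and gold relation tuples (hypothetical data):\n\n```python\ndef precision_recall_f1(predicted, gold):\n    # Relations are sets of (entity1, relation, entity2) tuples.\n    tp = len(predicted & gold)\n    precision = tp / len(predicted) if predicted else 0.0\n    recall = tp / len(gold) if gold else 0.0\n    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f1\n\ngold = {('CMU', 'located-in', 'Pittsburgh'),\n        ('Beijing', 'capital-of', 'China'),\n        ('Gates', 'person-affiliation', 'Microsoft')}\npredicted = {('CMU', 'located-in', 'Pittsburgh'),\n             ('Beijing', 'capital-of', 'China'),\n             ('Gates', 'located-in', 'Microsoft')}\n\nprint(precision_recall_f1(predicted, gold))  # (0.667, 0.667, 0.667)\n```\n\n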
### Could you mention some resources for working with relation extraction?\n\nPapers With Code has useful links to recent publications on relation classification. GitHub has a topic page on relation classification. Another useful resource is a curated list of papers, tutorials and datasets.\n\nThe current state-of-the-art is captured on the NLP-progress page of relation extraction. \n\nAmong the useful datasets for training or evaluation are ACE-2005 (7 major relation types) and SemEval-2010 Task 8 (19 relation types). For distant supervision, the Riedel or NYT dataset was formed by aligning Freebase relations with the New York Times corpus. There's also the Google Distant Supervision (GIDS) dataset and FewRel. TACRED is a large dataset containing 41 relation types from newswire and web text. \n\n## Milestones\n\n1998\n\nAt the 7th Message Understanding Conference (MUC), the task of extracting relations between entities is considered. Since this is considered part of template filling, they call it **template relations**. Relations are limited to organizations: employee\\_of, product\\_of, and location\\_of. \n\nJun \n2000\n\nAgichtein and Gravano propose *Snowball*, a semi-supervised approach to generating patterns and extracting relations from a small set of seed relations. At each iteration, it evaluates for quality and keeps only the most reliable patterns and relations. \n\nFeb \n2003\n\nZelenko et al. obtain **shallow parse trees** from text for use in binary relation classification. They use contiguous and sparse subtree kernels to assess similarity of two parse trees. Subsequently, this **kernel-based** approach is followed by other researchers: kernels on dependency parse trees of Culotta and Sorensen (2004); subsequence and shortest dependency path kernels of Bunescu and Mooney (2005); convolutional parse kernels of Zhang et al. (2006); and composite kernels of Choi et al. (2009). \n\n2004\n\nKambhatla takes a **feature-based** supervised classifier approach to relation extraction. A MaxEnt model is used along with lexical, syntactic and semantic features. Since kernel methods are a generalization of feature-based algorithms, Zhao and Grishman (2005) extend Kambhatla's work by including more syntactic features using kernels, then use SVM to pick out the most suitable features. \n\nJun \n2005\n\nSince binary classifiers have been well studied, McDonald et al. cast the problem of extracting **higher-order relations** into many binary relations. This also makes the data less sparse and eases computation. Binary relations are represented as a graph, from which cliques are extracted. They find that probabilistic cliques perform better than maximal cliques. The figure corresponds to some binary relations extracted for the sentence \"John and Jane are CEOs at Inc. Corp. and Biz. Corp. respectively.\" \n\nJan \n2007\n\nBanko et al. propose **Open Information Extraction** along with an implementation that they call *TextRunner*. In an unsupervised manner, the system is able to extract relations without any human input. 
Each tuple is assigned a probability and indexed for efficient information retrieval. TextRunner has three components: self-supervised learner, single-pass extractor, and redundancy-based assessor. \n\nAug \n2009\n\nMintz et al. propose **distant supervision** to avoid the cost of producing a hand-annotated corpus. Using entity pairs that appear in Freebase, they find all sentences in which each pair occurs in unlabelled text, extract textual features and train a relation classifier. They include both lexical and syntactic features. They note that syntactic features are useful when patterns are nearby in the dependency tree but distant in terms of words. In the early 2010s, distant supervision becomes an active area of research. \n\nAug \n2014\n\nNeural networks and word embeddings were first explored by Collobert et al. (2011) for a number of NLP tasks. Zeng et al. apply **word embeddings** and a **Convolutional Neural Network (CNN)** to relation classification. They treat relation classification as a multi-class classification problem. Lexical features include the entities, their surrounding tokens, and WordNet hypernyms. A CNN is used to extract sentence-level features, for which each token is represented as *word features (WF)* and *position features (PF)*. \n\nJul \n2015\n\nDependency shortest paths and subtrees have been shown to be effective for relation classification. Liu et al. propose a recursive neural network to model the dependency subtrees, and a convolutional neural network to capture the most important features on the shortest path. \n\nOct \n2015\n\nSong et al. present *PKDE4J*, a framework for dictionary-based entity extraction and rule-based relation extraction. Primarily meant for the biomedical field, they report F-measures of 85% for entity extraction and 81% for relation extraction. The RE algorithm uses dependency parse trees, which are analyzed to extract heuristic rules. They come up with 17 rules that can be applied to discern relations. Examples of rules include verb in dependency path, nominalization, negation, active/passive voice, entity order, etc. \n\nAug \n2016\n\nMiwa and Bansal propose to **jointly model the tasks of NER and RE**. A BiLSTM is used on word sequences to obtain the named entities. Another BiLSTM is used on dependency tree structures to obtain the relations. They also find that shortest path dependency trees perform better than subtrees or full trees. \n\nMay \n2019\n\nWu and He apply the **BERT pre-trained language model** to relation extraction. They call their model *R-BERT*. Named entities are identified beforehand and are delimited with special tokens. Since an entity can span multiple tokens, their start/end hidden token representations are averaged. The output is a softmax layer with cross-entropy as the loss function. On SemEval-2010 Task 8, R-BERT achieves a state-of-the-art Macro-F1 score of 89.25. Other BERT-based models learn NER and RE jointly, or rely on topological features of an entity pair graph.","meta":{"title":"Relation Extraction","href":"relation-extraction"}}
{"text":"# React Native\n\n## Summary\n\n\nTraditionally, *native mobile apps* have been developed in specific languages that call platform-specific APIs. For example, Objective-C and Swift for iOS app development; Java and Kotlin for Android app development. This means that developers who wish to release their app on multiple platforms will have to implement it in different languages.\n\nTo avoid this duplication, *hybrid apps* came along. The app was implemented using web technologies but instead of running it inside a web browser, it was wrapped and distributed as an app. But it had performance limitations.\n\nReact Native enables web developers write code once, deploy on any mobile platform and also use the platform's native API. **React Native** is a platform to build native mobile apps using JavaScript and React.\n\n## Discussion\n\n### As a developer, why should I adopt React Native?\n\nSince React Native allows developers maintain a single codebase even when targeting multiple mobile platforms, development work is considerably reduced. Code can be reused across platforms. If you're a web developer new to mobile app development, there's no need to learn a new language. You can reuse your current web programming skills and apply them to the mobile app world. Your knowledge of HTML, CSS and JS will be useful, although you'll be applying them in a different form in React Native. \n\nReact Native uses ReactJS, which is a JS library invented and later open sourced by Facebook. ReactJS itself has been gaining adoption because it's easy to learn for a JS programmer. It's performant due to the use of *virtual DOM*. The recommended syntax is ES6 and JSX. ES6 brings simplicity and readability to JS code. JSX is a combination of XML and JS to build reusable component-based UI. \n\n\n### How is React Native different from ReactJS?\n\nReact Native is a framework whereas ReactJS is a library. In ReactJS projects, we typically use a bundler such as *Webpack* to bundle necessary JS files for use in a browser. In React Native, we need only a single command to start a new project. All basic modules required for the project will be installed. We also need to install Android Studio for Android development and Xcode for iOS development. \n\nIn ReactJS, we are allowed to use HTML tags. In React Native, we create UI components using React Native components that are specified using JSX syntax. These components are mapped to native UI components. Thus, we can't reuse any ReactJS libraries that render HTML, SVG or Canvas. \n\nIn ReactJS, styling is done using CSS, like in any web app. In React Native, styling is done using JS objects. For component layout, React Native's *Flexbox* can be used. CSS animations are also replaced with the *Animated* API. \n\n\n### How does React Native work under the hood?\n\nBetween native and JavaScript worlds is a bridge (implemented in C++) through which data flows. Native code can call JS code and vice versa. To pass data between the two, data is serialized. \n\nFor example, a UI event is captured as a native event but the processing for this is done in JavaScript. The result is serialized and sent over the bridge to the native world. The native world deserializes the response, does any necessary processing and updates the UI. \n\n\n### What are some useful developer features of React Native?\n\nReact Native offers the following:\n\n + **Hot Reloading**: Small changes to your app will be immediately visible during development. 
If business logic is changed, Live Reload can be used instead.\n + **Debugging**: Chrome Dev Tools can be used for debugging your app. In fact, your debugging skills from the web world can be applied here.\n + **Publishing**: Publishing your app is easy using CodePush, now part of Visual Studio App Center.\n + **Device Access**: React Native gets access to camera, sensors, contacts, geolocation, etc.\n + **Declarative**: UI components are written in a declarative manner. Component-based architecture also means that one developer need not worry about breaking another's work.\n + **Animations**: For performance, these are serialized and sent to the native driver. They run independent of the JS event loop.\n + **Native Code**: Native code and React Native code can coexist. This is important because React Native APIs may not support all native functionality.\n\n### How does React Native compare against platforms in terms of performance?\n\nSince React Native is regularly being improved with each release, we can expect better performance than what we state below.\n\nA comparison of React Native against iOS native programming using Swift showed comparable CPU usage for list views. When resizing maps, Swift was better by 10% but React Native used far less memory here. For GPU usage, Swift outperforms marginally except for list views. \n\nReact Native apps can leak memory. Therefore, `FlatList`, `SectionList`, or `VirtualizedList` could be used rather than `ListView`. The communication between native and JS runtimes over the bridge is via message queues. This is also a performance bottleneck. For better performance, React Navigation is recommended over the Navigator component. \n\nWhen compared against the Ionic platform, React Native outperforms Ionic across metrics such as CPU usage, memory usage, power consumption and list scrolling. \n\n\n### Are there real-world examples of who's using React Native?\n\nFacebook and Instagram use React Native. Other companies or products using it include Bloomberg, Pinterest, Skype, Tesla, Uber, Walmart, Wix, Discord, Gyroscope, SoundCloud Pulse, Tencent QQ, Vogue, and many more. \n\nWalmart moved to React Native because it was hard to find skilled developers for native development. They used an incremental approach by migrating parts of their code to React Native. They were able to reuse 95% of their code between iOS and Android. They could reuse business logic with their web apps as well. They could deliver quick updates from their server rather than an app store. \n\nBloomberg developed their app in half the time using React Native. They were also able to push updates, do A/B testing and iterate quickly. \n\nAirbnb engineers write code for the web, iOS and Android. With React Native, they stated, \n\n> It's now feasible for us to have the same engineer skilled in JavaScript and React write the feature for all three platforms.\n\nHowever, in June 2018, Airbnb decided to move away from React Native and back to native development due to technical and organizational challenges. \n\n\n### What backend should I use for my React Native app?\n\nReact Native provides UI components. However, the React Native ecosystem is vast. There are frameworks/libraries for AR/VR, various editors and IDEs that support React Native, local databases (client-side storage), performance monitoring tools, CI/CD tools, authentication libraries, deep linking libraries, UI frameworks, and more. \n\nSpecifically for backends, **Mobile Backend as a Service (MBaaS)** is now available. 
Some options include RN Firebase, Baqend, RN Back, Feather and Graph Cool. These services make it easy for developers to build their React Native apps. \n\nThe more traditional approach is to build and manage your own backend. Some developers choose Node.js or Express.js because these are based on JavaScript, which they're already using to build the React Native UI. This can be paired with a database such as Firebase, MySQL, or MongoDB. Another option is to use Django with GraphQL. Even WordPress can be used, especially if the app is content driven. These are merely some examples. Developers can use any backend that suits their expertise and app requirements.\n\n\n### Could you point me to some useful React Native developer resources?\n\nHere are some useful resources:\n\n + Expo is a free and open source toolchain for your React Native projects. Expo also has a collection of apps developed and shared by others. The easiest way to create a new app is to use the create-react-native-app codebase.\n + If you wish to learn by studying app code written by others, React Active News maintains a curated list of open source React Native apps.\n + React.parts is a place to find reusable components for React Native.\n + Visual Studio App Center is a useful tool to build and release your app.\n + Use React Navigation for routing and navigation in React Native apps.\n + React Native provides only the UI but here's a great selection of tools to complement React Native.\n\n\n## Milestones\n\n2011\n\nAt Facebook, Jordan Walke and his team release ReactJS, a JavaScript library that brings a new way of rendering pages with more responsive user interactions. A web page can be built from a hierarchy of UI components. \n\n2013\n\nReact Native starts as an internal hackathon project within Facebook. Meanwhile, ReactJS is open sourced. \n\nMar \n2015\n\nFacebook open sources React Native for iOS on GitHub. The release for Android comes in September. \n\n2016\n\nMicrosoft and Samsung commit to adopting React Native for Windows and Tizen. \n\n2017\n\nReact Native sees a number of improvements over the year: better navigation, smoother list rendering, more performant animations, and more.","meta":{"title":"React Native","href":"react-native"}}
{"text":"# Web of Things\n\n## Summary\n\n\nWeb of Things (WoT) is a set of building blocks that seeks to make the Internet of Things (IoT) more interoperable and usable. It simplifies application development (including cross-domain applications) by adopting the web paradigm. Web developers will have a low barrier to entry when programming for the IoT. \n\nThe key concepts of WoT include Thing Description, Thing Model, Interaction Model, Hypermedia Controls, Protocol Bindings, Profiles, Discovery and Binding Templates. IoT devices (aka Things) are treated as web resources, which makes WoT a Resource-Oriented Architecture (ROA). \n\nWoT is standardized by the W3C. There are developer tools and implementations. As of December 2023, widespread industry adoption of WoT is yet to happen. Highly resource-constrained devices that can't run a web stack will not be able to adopt WoT.\n\n## Discussion\n\n### Why do we need the Web of Things (WoT)?\n\nThe IoT ecosystem is fragmented. Applications or devices from different vendors don't talk to one another due to differing data models. Consumers need to use multiple mobile apps to interact with their IoT devices. While IoT has managed to network different devices via various connectivity protocols (Zigbee, IEEE 802.15.4, NB-IoT, Thread, etc.), there's a disconnect at the application layer. \n\nFor developers, this disconnect translates to more effort integrating new devices and services. Each application exposes its own APIs. This results in tight coupling between clients and service providers. It's more effort maintaining and evolving these services. \n\nWoT brings interoperability at the application layer with a unifying data model. It reuses the web paradigm. IoT devices can be treated as web resources. Just as documents on the web are interlinked and easily navigated, Things can be linked, discovered, queried and acted upon. Mature web standards such as REST, HTTP, JSON, AJAX and URI can be used to achieve this. This means that web developers can become IoT developers. They can create reusable IoT building blocks rather than custom proprietary implementations that work for limited use cases. \n\n\n### What integration patterns does WoT cover?\n\nAn IoT device can directly expose a WoT API. This is the simplest integration pattern. It's also challenging from a security perspective or if the device is behind a firewall. For more resource-constrained devices running LPWAN protocols, direct access is difficult. They would connect to the cloud via a gateway, which exposes the WoT API. When devices spread over a large area need to cooperate, they would connect to the cloud in different ways and the cloud exposes the WoT API. \n\nLet's consider specific use cases. A remote controller connects directly to an electrical appliance in a trusted environment. Similarly, a sensor acting as a control agent connects to an electrical appliance. A remote control outside a trusted environment connects to a gateway or a edge device which then connects to an electrical appliance. Connected devices are mapped to digital twins that can be accessed via a client device. A device can be controlled via its digital twin in the cloud. These various integration patterns can be combined through system integration. \n\n\n### What's the architecture of WoT?\n\nWoT standardizes a layered architecture of four layers (lower to higher): Access, Find, Share and Compose. The protocols or techniques used at each of these layers are already widely used on the web. 
These four layers can't be mapped to the OSI model, nor are they strictly defined at the interfaces. They're really a collection of services to ease the development of IoT solutions. \n\nAt the access layer, solution architects have to think about resource, representation and interface designs. They should also define how resources are interlinked. At the find layer, web clients can discover root URLs, the syntax and semantics of interacting with Things. At the compose layer, tools such as Node-RED and IFTTT can help create mashups. \n\n\n### What are Thing Description (TD) and Thing Model (TM) in WoT?\n\nA TD is something like the business card of the Thing. It reveals everything about the Thing. It informs the protocol, data encoding, data structure, and security mechanism used by the Thing. The TD itself is in JSON-LD format and is exposed by the Thing or can be discovered by consumers from a Thing Description Directory (TDD). \n\nIn object-oriented programming, objects are instantiated from classes. Likewise, a TD can be seen as an instantiation of a TM. A TM is a logical description of a Thing's interface and interactions. However, it doesn't contain instance-specific information such as an IP address, serial number or GPS location. A TM can include security details if those are applicable for all instances of that TM. \n\nBoth TD and TM are represented and serialized in JSON-LD format. Whereas a TD can be validated against its TM, a TM can't be validated. \n\n\n### What's the WoT interaction model?\n\nApart from links, a Thing may expose three types of interaction affordances: \n\n + **Properties**: A property is a state of the Thing. State may be read-only or read-write. Properties can be made observable. Sensor values, stateful actuators, configuration, status and computation results are examples.\n + **Actions**: An action invokes a function of the Thing. An action can be used to update one or more properties including read-only ones.\n + **Events**: An event is used to asynchronously send data from the Thing to a consumer. Focus is on state transitions rather than the state itself. Examples include alarms or samples of a time series.\n\nLike documents on the web, WoT also uses links and forms. These are called **hypermedia controls**. Links are used to discover and interlink Things. Forms enable more complex operations than what's possible by simply dereferencing a URI. \n\n\n### What are protocol bindings in WoT?\n\nWoT's abstractions make it protocol agnostic. It doesn't matter if a Thing uses MQTT, CoAP, Modbus or any other connectivity protocol. WoT's interaction model unifies all these so that applications talk in terms of properties, actions and events. But abstractions have to be translated into protocol actions. This is provided by **protocol bindings**. For a door handle, for example, the protocol binding tells how to open/close the door at the level of the knob or lever. \n\nW3C has published a non-normative document called **WoT Binding Templates**. This gives blueprints on how to write TDs for different IoT platforms or standards. This includes protocol-specific metadata, payload formats, and usage in specific IoT platforms. The consumer of a TD would implement the template, that is, the protocol stack, media type encoder/decoder and platform stack. \n\n
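As a minimal sketch, here's a hypothetical TD for a lamp, built as a Python dict and serialized to JSON-LD (the URLs, names and security scheme are placeholders, loosely following the W3C TD vocabulary):\n\n```python\nimport json\n\nthing_description = {\n    '@context': 'https://www.w3.org/2022/wot/td/v1.1',\n    'title': 'MyLamp',\n    'securityDefinitions': {'basic_sc': {'scheme': 'basic', 'in': 'header'}},\n    'security': 'basic_sc',\n    # Interaction affordances: a property, an action and an event,\n    # each with a form (hypermedia control) telling how to reach it.\n    'properties': {\n        'status': {'type': 'string',\n                   'forms': [{'href': 'https://lamp.example.com/status'}]},\n    },\n    'actions': {\n        'toggle': {'forms': [{'href': 'https://lamp.example.com/toggle'}]},\n    },\n    'events': {\n        'overheating': {'data': {'type': 'string'},\n                        'forms': [{'href': 'https://lamp.example.com/oh',\n                                   'subprotocol': 'longpoll'}]},\n    },\n}\n\nprint(json.dumps(thing_description, indent=2))\n```\n\n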
### Who has implemented WoT?\n\nW3C maintains a list of developer resources. This includes tools, implementations, TD directories and WoT middleware. For example, Eclipse Thingweb is a Node.js implementation to expose and consume TDs. From other sources, there are implementations in Python, Java, Rust and Dart. Among the TD directories are TinyIoT Thing Directory and WoTHive. Major WoT deployments during 2012-2021 have been documented. \n\nKrellian Ltd. offers WebThings Gateway and WebThings Framework. WebThings was initially developed at Mozilla. However, its API differs from W3C specifications in many ways. \n\nThe sayWoT! platform from evosoft (a Siemens subsidiary) gives web and cloud developers an easy way to develop IoT solutions. One study compared many WoT platforms including WoT-SDN, HomeWeb, KNX-WoT, EXIP, WTIF, SOCRADES, WoTKit, µWoTO, and more. \n\nWoT is being leveraged to create digital twins. WoTwins and Eclipse Ditto with WoT integration are examples of this. Ortiz et al. used WoT TDs effectively for real-time IoT data processing in a smart ports use case. WoTemu is an emulation framework for WoT edge architecture. \n\n\n### What standards cover WoT?\n\nThe W3C is standardizing WoT. The following are the main normative specifications: \n\n + WoT Architecture 1.1 (Recommendation)\n + WoT Thing Description 1.1 (Recommendation)\n + WoT Discovery (Recommendation)\n + WoT Profile (Working Draft)\n\nInformative specifications include WoT Scripting API, WoT Binding Templates, WoT Security and Privacy Guidelines, and WoT Use Cases and Requirements. \n\nBeginners can start at the W3C WoT webpage for the latest updates, community groups, documentation and tooling.\n\nAt the IETF, there's a draft titled *Guidance on RESTful Design for Internet of Things Systems*. This is relevant to WoT. \n\n\n### What are some limitations of WoT?\n\nWoT depends on the web stack. Hence, it's not suited for very low-power devices or mesh deployments. \n\nThe **Matter** protocol, known earlier as Project CHIP, is an alternative to WoT. This is promoted by the Connectivity Standards Alliance (CSA), formerly called the Zigbee Alliance. Matter is based on Thread, IPv6 and Dotdot. While Matter is not web friendly like WoT, it appears to have better industry traction. However, Matter devices that expose WoT TDs can talk to WoT devices. \n\nThere's a claim that WoT hasn't adequately addressed security, privacy and data sharing issues. This is especially important when IoT devices are directly exposed to the web. Devices are energy inefficient since they're always on. They're vulnerable to DoS attacks. \n\nWoT alone can't solve complex problems such as optimizing workflows across many IoT devices or applications. Hypermedea and EnvGuard are two approaches to solve this. Larian et al. compared many WoT platforms. They noted that current IoT middleware and WoT resource discovery need to be improved. Legacy systems would require custom code to interface to the WoT architecture. \n\n## Milestones\n\nNov \n2007\n\nWilde uses the term \"Web of Things\" in a paper titled *Putting Things to REST*. He makes the case for treating a Thing (such as a sensor) as a web resource. It could then be accessed via RESTful calls rather than the more restrictive SOAP/WSDL API calls. Web concepts of URI, HTTP, HTML, XML and loose coupling can be applied effectively towards Things. \n\n2011\n\nGuinard publishes his Doctor of Science dissertation in the field of Web of Things. In 2016, he co-authors (with Trifa) a book titled *Building the Web of Things*. 
Guinard sees WoT as\n\n> A refinement of the Internet of Things (IoT) by integrating smart things not only into the Internet (the network), but into the Web (the application layer).\n\nJul \n2013\n\n**Web of Things Community Group** is created. Subsequently in 2014, a workshop is held (June) and an Interest Group is formed (November). \n\nDec \n2016\n\nFollowing the first in-person meeting and a WoT Plugfest in 2015, the **W3C WoT Working Group** is formed. Its aim is to produce two normative specifications (Architecture, Thing Description) and two informative specifications (Scripting API, Binding Templates). \n\nJun \n2018\n\nFrom the Eclipse Foundation, the first commit on GitHub is made for the **Eclipse Thingweb** project. The project aims to provide Node.js components and tools for developers to build IoT systems that conform to W3C WoT standards. The project releases v0.5.0 in October. \n\nApr \n2020\n\nW3C publishes WoT Architecture and WoT Thing Description as separate **W3C Recommendation** documents. \n\nJul \n2022\n\nTzavaras et al. propose using **OpenAPI** descriptions and ontologies to bring Things closer to the world of the Semantic Web. Thing Descriptions can be created in OpenAPI while also conforming to the W3C WoT architecture. They argue that OpenAPI is already a mature standard. It provides a uniform way to interact with web services and Things. \n\nNov \n2022\n\nMarkus Reigl at Siemens comments that WoT will do for IoT what HTML did for the WWW in the 1990s. TD is not a mere concept. It leads to executable software code. He predicts IoT standardization will gain momentum. \n\nDec \n2023\n\nW3C publishes WoT Architecture 1.1 and WoT Thing Description 1.1 as W3C Recommendation documents. In addition, WoT Discovery is also published as a W3C Recommendation.","meta":{"title":"Web of Things","href":"web-of-things"}}
{"text":"# TensorFlow\n\n## Summary\n\n\nTensorFlow is an open source software library for numerical computation using **data flow graphs**. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This dataflow paradigm enables parallelism, distributed execution, optimal compilation and portability. \n\nThe typical use of TensorFlow is for Machine Learning (ML), particularly Deep Learning (DL) that uses large scale multi-layered neural networks. More specifically, it's best for classification, perception, understanding, discovery, prediction and creation. \n\nTensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization for ML/DL research. The system is general enough to be applicable in a wide variety of other domains as well.\n\n## Discussion\n\n### For which use cases is TensorFlow best suited?\n\nTensorFlow can be used in any domain where ML/DL can be employed. It can also be used in other forms of AI, including reinforcement learning and logistic regression. On mobile devices, applications include speech recognition, image recognition, object localization, gesture recognition, optical character recognition, translation, text classification, voice synthesis, and more. \n\nSome of the areas are: \n\n + **Voice/Speech Recognition**: For voice-based interfaces as popularized by Apple Siri, Amazon Alexa or Microsoft Cortana. For sentiment analysis in CRM. For flaw detection (noise analysis) in industrial systems.\n + **Text-Based Applications**: For sentimental analysis (CRM, Social Media), threat detection (Social Media, Government) and fraud detection (Insurance, Finance). For machine translation such as with Google Translate. For text summarization using sequence-to-sequence learning. For language detection. For automated email replies such as with Google SmartReply.\n + **Image Recognition**: For face recognition, image search, machine vision and photo clustering. For object classification and identification within larger images. For cancer detection in medical applications.\n + **Time-Series Analysis**: For forecasting. For customer recommendations. For risk detection, predictive analytics and resource planning.\n + **Video Detection**: For motion detection in gaming and security systems. For large-scale video understanding.\n\n### Could you name some applications where TensorFlow is being used?\n\nTensorFlow is being used by Google in following areas: \n\n + RankBrain: Google search engine.\n + SmartReply: Deep LSTM model to automatically generate email responses.\n + Massively Multitask Networks for Drug Discovery: A deep neural network model for identifying promising drug candidates.\n + On-Device Computer Vision for OCR - On-device computer vision model to do optical character recognition to enable real-time translation.\n + Retinal imaging - Early detection of diabetic retinopathy using deep neural network of 26 layers.\n + SyntaxNet - Built for Natural Language Understanding (NLU), this is based on TensorFlow and open sourced by Google in 2016.Outside Google, we mention some known real-world examples. Mozilla uses TensorFlow for speech recognition. UK supermarket Ocado uses it for route planning for its robots, demand forecasting, and product recommendations. A Japanese farmer has used it to classify cucumbers based on shape, length and level of distortion. 
As an experiment, Intel used TensorFlow on traffic videos for pedestrian detection. \n\nFurther examples were noted at the TensorFlow Developer Summit, 2018. \n\n\n### Which platforms and languages support TensorFlow?\n\nTensorFlow is available on 64-bit Linux, macOS, Windows and also on mobile platforms like Android and iOS. Google has announced a software stack specifically for Android development called TensorFlow Lite. \n\nTensorFlow has official APIs available in the following languages: Python, JavaScript, C++, Java, Go, Swift. The Python API is recommended. Bindings in other languages are available from the community: C#, Haskell, Julia, Ruby, Rust, Scala. There's also a C++ API reference for TensorFlow Serving. R's `tensorflow` package provides access to the complete TensorFlow API from within R. \n\nNvidia's **TensorRT**, a Programmable Inference Accelerator, allows you to optimize your models for inference by lowering precision and thereby reducing latency. \n\n\n### How is TensorFlow different from other ML/DL platforms?\n\nTensorFlow is relatively painless to set up. With its growing community adoption, it offers a healthy ecosystem of updates, tutorials and example code. It can run on a variety of hardware. It's cross platform. It has APIs or bindings in many popular programming languages. It supports GPU acceleration. Through TensorBoard, you get an intuitive view of your computation pipeline. Keras, a DL library, can run on TensorFlow. However, it's been criticized for being more complex and slower than alternative frameworks. \n\nCreated in 2007, **Theano** is one of the first DL frameworks but it's been perceived as too low-level. Support for Theano is also ending. Written in Lua, **Torch** is meant for GPUs. Its Python port released by Facebook, called **PyTorch**, is popular for analyzing unstructured data. It's developer friendly and memory efficient. **Caffe2** does well for modeling convolutional neural networks. **Apache MXNet**, along with its simplified DL interface called **Gluon**, is supported by Amazon and Microsoft. Microsoft also has **Microsoft Cognitive Toolkit (CNTK)** that can handle large datasets. For Java and Scala programmers, there's **Deeplearning4j**. \n\n\n### Which are the tools closely related to TensorFlow?\n\nThe following are closely associated with or variants of TensorFlow:\n\n + **TensorFlow Lite**: Enables low-latency inferences on mobile and embedded devices.\n + **TensorFlow Mobile**: To use TensorFlow from within iOS or Android mobile apps, where TensorFlow Lite cannot be used.\n + **TensorFlow Serving**: A high performance, open source serving system for machine learning models, designed for production environments and optimized for TensorFlow.\n + **TensorLayer**: Provides popular DL and RL modules that can be easily customized and assembled for tackling real-world machine learning problems.\n + **TensorFlow Hub**: A library for the publication, discovery, and consumption of reusable parts of machine learning models.\n + **TensorFlow Model Analysis**: A library for evaluating TensorFlow models.\n + **TensorFlow Debugger**: Allows us to view the internal structure and states of running TensorFlow graphs during training and inference.\n + **TensorFlow Playground**: A browser-based interface for beginners to tinker with neural networks. Written in TypeScript and D3.js. 
Doesn't actually use TensorFlow.\n + **TensorFlow.js**: Build and train models entirely in the browser or Node.js runtime.\n + **TensorBoard**: A suite of visualization tools that helps to understand, debug, and optimize TensorFlow programs.\n + **TensorFlow Transform**: A library for preprocessing data with TensorFlow.\n\n### What's the architecture of TensorFlow?\n\nTensorFlow can be deployed across platforms, details of which are abstracted away from higher layers. The core itself is implemented in C++ and exposes its features via APIs in many languages, with Python being the most recommended. \n\nAbove these language APIs is the **Layers** API that offers commonly used layers in deep learning models. To read data, the **Datasets** API is the recommended way and it creates input pipelines. With **Estimators**, we can create custom models or bring in models pre-made for common ML tasks. \n\n**XLA (Accelerated Linear Algebra)** is a domain-specific compiler for linear algebra that optimizes TensorFlow computations. It offers improvements in speed, memory usage, and portability on server and mobile platforms. \n\n\n### Could you explain how TensorFlow's data graph works?\n\nTensorFlow uses a **dataflow graph**, which is a common programming model for parallel computing. Graph nodes represent **operations** and edges represent data consumed or produced by the nodes. Edges are called **tensors** that carry data. In the example figure, we show five graph nodes: `a` and `b` are placeholders to accept inputs; `c`, `d` and `e` are simple arithmetic operations.\n\nIn TensorFlow 1.x, when a graph is created, tensors don't contain the results of operations. The graph is evaluated through **sessions**, which encapsulate the TensorFlow runtime. However, with **eager execution**, operations are evaluated immediately instead of building a graph for later execution. This is useful for debugging and iterating quickly on small models or data. \n\nFor ingesting data into the graph, **placeholders** can be used for the simplest cases but otherwise, **datasets** should be preferred. To train models, **layers** are used to modify values in the graph. \n\nTo simplify usage, a high-level API called **estimators** should be used. They encapsulate training, evaluation, prediction and export for serving. Estimators themselves are built on layers and build the graph for you. \n\n\n### How is TensorFlow 2.0 different from TensorFlow 1.x?\n\nIt makes sense to write any new code in TensorFlow 2.0. Existing 1.x code can be migrated to 2.0. The recommended path is to move to TensorFlow 1.14 and then to 2.0. The compatibility module `tf.compat` should help. \n\nHere are the key changes in TensorFlow 2.0: \n\n + **API Cleanup**: Many APIs are removed or moved. For example, the `absl-py` package replaces `tf.app`, `tf.flags`, and `tf.logging`. The main namespace `tf.*` is cleaned up by moving some items into subpackages such as `tf.math`. Examples of new modules are `tf.summary`, `tf.keras.metrics`, and `tf.keras.optimizers`.\n + **Eager Execution**: Like Python, eager execution is the default behaviour. Code executes in order, making `tf.control_dependencies()` redundant.\n + **No More Globals**: We need to keep track of variables. An untracked `tf.Variable` will get garbage collected.\n + **Functions, Not Sessions**: Functions are more familiar to developers. Although `session.run()` is gone, for efficiency and JIT compilation, the `tf.function()` decorator can be used. 
This automatically invokes *AutoGraph* to convert Python constructs into TensorFlow graph equivalents. Functions can be shared and reused.\n\n\n## Milestones\n\n2011\n\nGoogle Brain invents **DistBelief**, a framework to train large models for machine learning. DistBelief can make use of computing clusters of thousands of machines for accelerated training. The framework manages details of parallelism (multithreading, message passing), synchronization and communication. Compared to MapReduce, DistBelief is better at deep network training. Compared to GraphLab, DistBelief is better at structured graphs. \n\nNov \n2015\n\nUnder Apache 2.0 licensing, Google open sources TensorFlow, which is Google Brain's second-generation machine learning system. While other open source ML frameworks exist (Caffe, Theano, Torch), Google's competence in ML is supposedly 5-7 years ahead of the rest. However, Google doesn't open source the algorithms that run on TensorFlow, nor its advanced hardware infrastructure. \n\nApr \n2016\n\nVersion 0.8 of TensorFlow is released. It comes with distributed training support. Powered by gRPC, models can be trained on hundreds of machines in parallel. For example, the Inception image classification network was trained using 100 GPUs with an overall speedup of 56x compared to a single GPU. More generally, the system can map the dataflow graph onto heterogeneous devices (multi-core CPUs, general-purpose GPUs, mobile processors) in the available processes. \n\nMay \n2016\n\nGoogle announces that it's been using **Tensor Processing Unit (TPU)**, a custom ASIC built specifically for machine learning and tailored for TensorFlow. \n\nJun \n2016\n\nTensorFlow v0.9 is released with support for iOS and Raspberry Pi. Android support has been around from the beginning. \n\nFeb \n2017\n\nVersion 1.0 of TensorFlow is released. The API is in Python but there are also experimental APIs in Java and Go. \n\nNov \n2017\n\nGoogle releases a preview of **TensorFlow Lite** for mobile and embedded devices. This enables low-latency inferences for on-device ML models. In the future, this should be preferred over **TensorFlow Mobile**. With TensorFlow 1.4, we can build models using the high-level **Keras** API. Keras, which was previously in `tf.contrib.keras`, is now the core package `tf.keras`. \n\nSep \n2019\n\nTensorFlow 2.0 is released following an alpha release in June. It improves workflows for both production and experimentation. It promises better performance with GPU acceleration.","meta":{"title":"TensorFlow","href":"tensorflow"}}
{"text":"# Wi-Fi Calling\n\n## Summary\n\n\nWi-Fi Calling is a technology that allows users to make or receive voice calls via a local Wi-Fi hotspot rather than via their mobile network operator's cellular radio connection. Voice calls are thus carried over the Internet, implying that Wi-Fi Calling relies on VoIP. However, unlike other VoIP services such as Skype or Viber, Wi-Fi Calling gives operators more control.\n\nWi-Fi Calling is possible only if the operator supports it, user's phone has the feature and user has enabled it. Once enabled, whether a voice call uses the cellular radio link or Wi-Fi link is almost transparent to the user. With cellular networks going all IP and offering VoLTE, Wi-Fi Calling has become practical and necessary in a competitive market.\n\nWi-Fi Calling is also called *Voice over Wi-Fi (VoWi-Fi)*.\n\n## Discussion\n\n### In what scenarios can Wi-Fi Calling be useful to have?\n\nIn places where cellular coverage is poor, such as in rural residences, concrete indoors, basements, or underground train stations, users will not be able to make or receive voice calls. In these scenarios, the presence of a local Wi-Fi network can serve as the \"last-mile\" connectivity to the user. Wi-Fi can therefore complement the cellular network in places where the latter's coverage is poor.\n\nFor example, a user could be having an active voice call via the cellular network and suddenly enters a building with poor coverage. Without Wi-Fi Calling, the call might get dropped. With Wi-Fi Calling, the call can be seamlessly handed over to the Wi-Fi network without even the user noticing it. Astute users may notice that their call is on Wi-Fi since smartphones may indicate this via an icon. More importantly, user intervention is not required to switch between cellular and Wi-Fi. Such seamless handover has become possible because cellular network's IP and packet switching: VoWi-Fi can be handed off to VoLTE, and vice versa. \n\n\n### Isn't Wi-Fi Calling the same as Skype, Viber or WhatsApp voice calls?\n\nMany smartphone apps allow voice (and even video) calls over the Internet. They are based on VoIP technology. We normally call them over-the-top (OTT) services since they merely use the phone's data connection and operators bill for data usage and not for the service itself. However, many of these systems require both parties to have the same app installed. Even when this constraint is removed, the service is controlled by the app provider.\n\nWi-Fi Calling gives cellular operators greater control. Driven by competition from OTT services, Wi-Fi Calling gives operators an opportunity to regain market share for voice calls. Voice packets are carried securely over IP to the operator's core network, thus allowing the operator to reuse many resources and procedures already in place for VoIP calls. Likewise, messages and video–*Video over LTE (ViLTE)*–can also be carried over Wi-Fi. \n\nFrom an architectural perspective, Wi-Fi Calling is served by operator's IP Multimedia Subsystem (IMS), whereas Skype calls are routed out of the operator's network into the Internet.\n\n\n### Isn't Wi-Fi Calling the same as Wi-Fi Offload?\n\nNot exactly. Wi-Fi Calling can be seen as a form of offload but they have different motivations. Wi-Fi Offload came about to ease network congestion and improve QoS for users in high-density areas. The offload is transparent for users whose devices are authenticated via EAP-SIM/AKA. 
\n\nWi-Fi Calling is in response to OTT services stealing revenue from mobile operators. Even when VoLTE was deployed by operators, voice calls couldn't be made over Wi-Fi, and OTT services were what users used when they had access to Wi-Fi. Wi-Fi Calling aims to overcome this problem. \n\n\n### What are the possible benefits of Wi-Fi Calling?\n\nFor subscribers, benefits include seamless connectivity and mobility between cellular and Wi-Fi. The selection is automatic and transparent to users. Data is protected using IPSec from the mobile to the core network, along with traditional SIM-based authentication. Users can potentially lower their monthly bills through service bundles and reduced roaming charges. Sometimes calling home from another country could be free depending on the subscribed plan and operator. \n\nMoreover, the user's phone will have a single call log (likewise for messages). The default dialler can be used along with all saved contacts. Those receiving the call will see the caller's usual phone number. These are not possible with a third-party installed app.\n\nFor operators, Wi-Fi complements cellular coverage and capacity. T-Mobile was one of the early adopters because it had poor indoor coverage. Network performance is optimized by allowing bandwidth-intensive traffic to be offloaded to Wi-Fi when required. All their IMS-based services can now be extended to Wi-Fi access rather than losing out to OTT app/service providers. \n\n\n### How does the network architecture change for Wi-Fi Calling?\n\nTwo network functions are involved: \n\n + **Evolved Packet Data Gateway (ePDG)**: Serves an untrusted Wi-Fi network. An IPSec tunnel protects data between the mobile and the ePDG, from where it goes to the Packet Gateway (PGW). The mobile needs an update with an IPsec client. No changes are needed for the access point.\n + **Trusted Wireless Access Gateway (TWAG)**: Serves a trusted Wi-Fi network, which is typically under the operator's control. In this case, data between the mobile and the TWAG is encrypted at radio access and IPSec is not used. From the TWAG, data goes to the PGW. No changes are needed for the mobile but the Wi-Fi access point needs to be updated.\n\nIf the network is not an Evolved Packet Core (EPC), then the Tunnel Termination Gateway (TTG) is used instead of the ePDG; the Wireless Access Gateway (WAG) is used instead of the TWAG; the GGSN is used instead of the PGW.\n\nThe untrusted mode is often used for Wi-Fi Calling, since public hotspots can be used without updating the access point. It's the operator who decides if a non-3GPP access can be considered trusted. \n\n\n### How is an end-user device authenticated for Voice over Wi-Fi service?\n\nWithin the network, the *3GPP AAA Server* is used to authenticate end devices. Authentication is based on the SIM and the usual network functions located in the Home Subscriber Server (HSS). The 3GPP AAA Server does not maintain a separate database and relies on the HSS. \n\nVendors who sell AAA servers usually give the ability to authenticate devices that don't have a SIM. For legacy networks, they can interface with the HLR rather than the HSS. They support AAA protocols such as RADIUS and Diameter. They support various EAP methods including TLS, PEAP and CHAP. \n\n\n### What are the 3GPP standards covering Wi-Fi Calling?\n\nDocuments that specify \"non-3GPP access\" are applicable to Wi-Fi Calling. 
The following are some relevant documents (non-exhaustive list): \n\n + TR 22.814: Location services\n + TR 22.912: Study into network selection requirements\n + TS 23.402: Architectural enhancements\n + TS 24.234: 3GPP-WLAN interworking: WLAN UE to network protocols, Stage 3\n + TS 24.302: Access to EPC, Stage 3\n + TS 29.273: 3GPP EPS AAA interfaces\n + TS 33.402: System Architecture Evolution: security aspects\n + TR 33.822: Security aspects for inter-access mobility\n\nIn addition, GSMA has released a list of Permanent Reference Documents on VoWi-Fi. \n\nWi-Fi Calling is a technology that comes from the cellular world. From the Wi-Fi perspective, there's no special IEEE standard that talks about Wi-Fi Calling.\n\n\n### Are there commercial services offering Wi-Fi Calling?\n\nIn June 2016, it was reported that all four major operators in the US support Wi-Fi Calling, with T-Mobile supporting as many as 38 different handsets. In November 2016, there were 40+ operators offering Wi-Fi Calling in 25+ countries. Moreover, even affordable phones or devices without SIMs are supporting Wi-Fi Calling. An operator will normally publish a list of handsets that are supported, which usually includes both Android and iPhone models. In September 2017, it was reported that AT&T has 23 phones and Verizon has 17 phones that support Wi-Fi Calling. \n\nWi-Fi Calling may involve regulatory approval based on the country's licensing framework. For example, India's TRAI commented in October 2017 that Wi-Fi Calling can be introduced since licensing allows telephony service to be provided independent of the radio access. \n\n\n### Within enterprises, how can IT teams plan for Wi-Fi Calling?\n\nSome access points have the ability to prioritize voice traffic and this can be repurposed for Wi-Fi Calling. Examples include Aerohive, Aruba, Cisco Aironet and Ruckus. Enterprises can also work with operators to deploy femto/pico cells or distributed antenna systems. \n\nA minimum of 1 Mbps may be needed to support Wi-Fi Calling, although Republic Wireless in the US claims 80 kbps is enough to hold a call, though voice quality may suffer. In reality, voice needs just 12 kbps but can scale down to 4.75 kbps. \n\n\n### How will users be billed for Wi-Fi Calling?\n\nThis is completely operator dependent and based on the subscriber's current plan. For example, Canada's Rogers says that calls and messages are deducted from airtime and messaging limits. Roaming charges may apply only for international roaming. Verizon Wireless states that a voice call will use about 1 MB/minute of data; a video call will use 6-8 MB/minute. Billing is linked to the user's current plan. 
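\n\nTo put the bandwidth and billing figures above in perspective, here's a back-of-the-envelope sketch. The constants are just the numbers quoted above (about 1 MB/minute for voice, 6-8 MB/minute for video, a 12 kbps codec); this is illustrative arithmetic, not an operator's billing formula. \n\n```typescript\n// Rough Wi-Fi Calling data-usage estimates, from the figures quoted above.\nconst VOICE_MB_PER_MIN = 1; // ~1 MB per minute of voice (Verizon's figure)\nconst VIDEO_MB_PER_MIN = 7; // midpoint of the quoted 6-8 MB per minute\n\nfunction usageMB(minutes: number, mbPerMin: number): number {\n  return minutes * mbPerMin;\n}\n\n// Theoretical floor from a 12 kbps voice codec, ignoring packet overhead:\nconst codecMBPerMin = (12_000 / 8) * 60 / 1_000_000; // 0.09 MB/minute\n\nconsole.log(usageMB(30, VOICE_MB_PER_MIN)); // 30 MB for a 30-minute voice call\nconsole.log(usageMB(30, VIDEO_MB_PER_MIN)); // 210 MB for a 30-minute video call\nconsole.log(codecMBPerMin); // 0.09\n```\n\nThe gap between the 0.09 MB/minute codec payload and the ~1 MB/minute billed figure is plausibly explained by per-packet headers (RTP/UDP/IP/IPSec), signalling and counting both directions; exact accounting varies by operator.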
\n\n\n### What are some practical issues with Wi-Fi Calling?\n\nBack in 2014, T-Mobile had handoff problems but the service improved later. The service was also not offered by other operators and not supported by most handsets. Even when a handset supports it, operators may not offer the service if the handset has not been purchased from the operator. \n\nSince any Wi-Fi hotspot can be used, including public ones, security is a concern. For this reason, all data over Wi-Fi must be protected and the subscriber must be authenticated by the cellular operator. Seamless call continuity across cellular and Wi-Fi could be a problem, particularly when firewalls and VPNs are involved. Some users have reported problems when using Wi-Fi behind corporate firewalls. Likewise, IT teams in enterprises may have the additional task of ensuring Wi-Fi coverage and managing traffic. \n\nSince Wi-Fi Calling often uses public hotspots, there's no QoS control. However, it's argued that in places where cellular has poor coverage, QoS cannot be guaranteed anyway. In addition, QoS on Wi-Fi can often be achieved implicitly because of excess capacity. With the coming of 802.11ac and the ability to prioritize traffic via Wi-Fi Multimedia (WMM), QoS is unlikely to be a problem. \n\n## Milestones\n\n2007\n\nT-Mobile in the US launches something called \"HotSpot @ Home\". This is based on a technology named *Unlicensed Mobile Access*, which is a commercial name of a 3GPP feature named *Generic Access Network*. GAN operates in the IP layer, which means that access can be via any protocol, not just Wi-Fi. UMA does not take off because of a lack of handsets that support it. It also has other operational issues related to interference, handover and configuration setup. \n\nNov \n2011\n\nRepublic Wireless, a mobile virtual network operator (MVNO) in the US, rolls out \"Hybrid Calling\". Calls are primarily on Wi-Fi and cellular is used as a fallback option. Their General Manager, Brian Dally, states,\n\n> Every other mobile carrier talks about offloading to Wi-Fi, we talk about failing over to cellular.\n\nSep \n2014\n\nT-Mobile introduces Wi-Fi Calling in the US. This comes on the heels of the operator's rollout of VoLTE. Meanwhile, Apple iPhone starts supporting Wi-Fi Calling. \n\nApr \n2015\n\nSprint introduces Wi-Fi Calling in the US. EE does the same in the UK. Meanwhile, Google gets into telecom by launching *Project Fi*, which allows seamless switching between Wi-Fi and cellular. Google doesn't have its own cellular network but uses those of Sprint, T-Mobile, and US Cellular. \n\nOct \n2015\n\nIn the US, AT&T obtains regulatory approval to launch Wi-Fi Calling. By 2016, all four major US operators roll out Wi-Fi Calling nationwide. \n\nJun \n2017\n\nUMA, which may be called first generation Wi-Fi Calling, is decommissioned by T-Mobile in the US. \n\nNov \n2018\n\nResearchers discover several security vulnerabilities in Wi-Fi Calling. They propose possible solutions to overcome these.","meta":{"title":"Wi-Fi Calling","href":"wi-fi-calling"}}
{"text":"# Design Thinking\n\n## Summary\n\n\nDesign thinking is a problem-solving method used to create practical and creative solutions while addressing the needs of users. The process is extremely user centric as it focuses on understanding the needs of users and ensuring that the solutions created solve users' needs. \n\nIt's an iterative process that favours ongoing experimentation until the right solution is found.\n\n## Discussion\n\n### Why is the design thinking process important?\n\nDesign thinking helps us to innovate, focus on the user, and ultimately design products that solve real user problems. \n\nThe design thinking process can be used in companies to reduce the time it takes to bring a product to the market. Design thinking can significantly reduce the amount of time spent on design and development. \n\nThe design thinking process increases return of investment as the products are user-centric, which helps increase user engagement and user retention. It's been seen that a more efficient workflow due to design thinking gave 75% savings in design and development time, 50% reduction in defect rate, and a calculated ROI of more than 300%. \n\n\n### When and where should the design thinking process be used?\n\nThe design thinking process should especially be used when dealing with **human-centric challenges** and **complex challenges**. The design thinking process helps break down complex problems and experiment with multiple solutions. Design thinking can be applied in these contexts: human-centred innovation, problems affecting diverse groups, involving multiple systems, shifting markets and behaviours, complex societal challenges, problems that data can't solve, and more. \n\nA class of problems called **wicked problems** is where design thinking can help. Wicked problems are not easy to define and information about them is confusing. They have many stakeholders and complex interdependencies. \n\nOn the contrary, design thinking is perhaps an overkill for obvious problems, especially if they're not human centred. In such cases, traditional problem-solving methods may suffice. \n\n\n### What are the principles of the design thinking process?\n\nThere are some basic principles that guide us in applying design thinking: \n\n + **The human rule**: All design activity is social because all social innovation will bring us back to the \"human-centric point of view\".\n + **The ambiguity rule**: Ambiguity is inevitable, and it can't be removed or oversimplified. Experimenting at the limits of your knowledge and ability is crucial in being able to see things differently.\n + **The redesign rule**: While technology and social circumstances may change, basic human needs remain unchanged. So, every solution is essentially a redesign.\n + **The tangibility rule**: Making ideas tangible by creating prototypes allows designers to communicate them effectively.\n\n### What are the typical steps of a design thinking process?\n\nThe process involves five steps: \n\n + **Empathy**: Put yourself in the shoes of the user and look at the challenge from the point of view of the user. Refrain from making assumptions or suggesting answers. Suspend judgements throughout the process.\n + **Define**: Create a challenge statement based on the notes and thoughts you have gained from the empathizing step. Go back to the users and modify the challenge statement based on their inputs. 
Refer to the challenge statement multiple times throughout the design thinking process.\n + **Ideate**: Come up with ideas to solve the proposed challenge. Put down even the craziest ideas.\n + **Prototype**: Make physical representations of your ideas and solutions. Get an understanding of what the final product may look like, identify design flaws or constraints. Take feedback from users. Improve the prototype through iterations.\n + **Test**: Evaluate the prototype on well-defined criteria.\n\nNote that empathy and ideate are divergent steps whereas the others are convergent. Divergent means expanding information with alternatives and solutions. Convergent means reducing information or filtering to a suitable solution. \n\n\n### What are the specific tools to practice design thinking?\n\nDesign thinking offers tools for each step of its five-step process. These are summarized in the above figure. These tools offer individuals and teams something concrete to effectively practice design thinking.\n\nNew Metrics has enumerated 14 different tools: immersion, visualization, brainstorming, empathy mapping, journey mapping, affinity mapping, rapid iteration, assumption testing, prototyping, design sprints, design criteria, finding the value proposition, and learning launch. They describe each tool briefly and note the benefits. More tools include focus groups, shadowing, concept maps, personas, positioning matrix, minimum viable product, volume model, wireframing, and storyboards. \n\nFor specific software tools, we note the following: \n\n + **Empathize**: Typeform, Zoom, Creatlr\n + **Define**: Smaply, Userforge, MakeMyPersona\n + **Ideate**: SessionLab, Stormboard, IdeaFlip\n + **Prototype**: Boords, Mockingbird, POP\n + **Test**: UserTesting, HotJar, PingPong\n + **Complete Process**: Sprintbase, InVision, Mural, Miro\n\n### What should I keep in mind when applying the design thinking process?\n\nEvery designer can use a variation of the design thinking process that suits them and customize it for each challenge. Although distinct steps are defined, design thinking is not a linear process. Rather, it's very much **iterative**. For example, during prototyping we may go back to redefine the problem statement or look for alternative ideas. Every step gives us new information that might help us improve on previous steps.\n\nAdopt Agile methodology. Design thinking is strong on ideation while Scrum is strong on implementation. Combine the two to make a powerful hybrid Agile approach. \n\nWhile the steps are clear, applying them correctly is not easy. To identify what annoys your clients, ask questions. Empathy means that you should relate to their problems. Open-ended questions will stimulate answers and help identify the problems correctly. \n\nAt the end of the process, as a designer, reflect on the way you've gone through the process. Identify areas of improvement or how you could have done things differently. Gather insights on the way you went through the design thinking process.\n\n\n### What do I do once the prototype is proven to work?\n\nThe prototype itself can be said to \"work\" only after we have submitted it to the clients for feedback. Use this feedback to improve the prototype. Make the actual product after incorporating all the feedback from the prototype. \n\nGathering feedback itself is an important activity. Present your solution to the client by describing the thought process by which the challenge was solved. 
Take notes from users and ensure that they are satisfied with the final product. It's important not to defend your product. It's more important to listen to what users have to say and make changes to improve the solution. \n\nPresent several versions of the prototype so that users can compare and express what they like and dislike. Consider using the *I Like, I Wish, What If* method for gathering feedback. Get feedback from regular users as well as extreme users with highly opinionated views. Be flexible and improvise during testing sessions. Allow users to contribute ideas. \n\nRecognize that prototyping and testing is an iterative process. Be prepared to do this a few times. \n\n\n### How is design thinking different from user-centred design?\n\nOn the surface, both design thinking and user-centred design (UCD) are focused on the needs of users. They have similar processes and methods. They aim for creative or innovative solutions. To elicit greater empathy among designers, UCD has been more recently called human-centred design (HCD). \n\nHowever, design thinking goes beyond usability. It considers technical feasibility, economic viability, desirability, etc. without losing focus on user needs. While UCD is dominated by usability engineers and focuses on user interfaces, design thinking has a larger scope. Design thinking brings more multi-disciplinary perspectives that can suggest innovative solutions to complex problems. While it borrows from UCD methods, it goes beyond the design discipline. \n\nSome see UCD as a framework and design thinking as a methodology that can be applied within that framework. Others see these as complementary: a team can start with design thinking for initial exploration and later shift to UCD for prototyping and implementation. \n\n\n### What are some ways to get more ideas?\n\nDesign thinking is not about applying standard off-the-shelf solutions. It's about solving difficult problems that typically require creative approaches and innovation. The more ideas, the better. Use different techniques such as brainstorming, mind mapping, role plays, storyboarding, etc. \n\nInnovation is not automatic and needs to be fostered. We should create the right mindsets, an open and explorative culture. Designers should combine both logic and imagination. Teams should be cross-disciplinary and collaborative. Work environments must be conducive to innovation. \n\nWhen framing the problem, think about how the challenge can be solved in a certain place or scenario. For example, think about how one of your ideas would function differently in a setting such as a kitchen.\n\nWrite down even ideas that may not work. Further research and prototyping might help refine them. Moreover, during the prototyping and testing steps, current ideas can spark new ideas. \n\n## Milestones\n\nSep \n1962\n\n*The Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications* is held in London. It explores design processes and new design methods. Although the birth of design methodology can be traced to Zwicky's *Morphological Method* (1948), it's this conference that recognizes design methodology as a field of academic study. \n\n1966\n\nThe term **Design Science** is introduced. This shows that the predominant approach is to find \"a single rationalised method, based on formal languages and theories\". \n\n1969\n\nHerbert A. 
Simon, a Nobel Prize laureate and cognitive scientist, mentions the design thinking process in his book *The Sciences of the Artificial* and further contributes ideas that are now known as the principles of design thinking. \n\n1970\n\nThis decade sees some resistance to the adoption of design methodology. Even early pioneers begin to dislike \"the continual attempt to fix the whole of life into a logical framework\". \n\n1973\n\nRittel publishes *The State of the Art in Design Methods*. He argues that the early approaches of the 1960s were simplistic, and a new generation of methodologies is beginning to emerge in the 1970s. Rather than optimize through systematic methods, the **second generation** is about finding a satisfactory solution in which designers partner with clients, customers and users. This approach is probably more relevant to architecture and planning than engineering and industrial design. \n\n1980\n\nThis decade sees the development of **engineering design methodology**. An example is the series of *International Conferences on Engineering Design*. The American Society of Mechanical Engineers also launches a series of conferences on Design Theory and Methodology. \n\nOct \n1982\n\nNigel Cross discusses the problem-solving nature of designers in his seminal paper *Designerly Ways of Knowing*. \n\n1987\n\nPeter Rowe, Director of Urban Design Programs at Harvard, publishes his book *Design Thinking*. This explores the underlying structure and focus of inquiry in design thinking. \n\n1991\n\nIDEO, an international design and consulting firm, brings design thinking to the mainstream by developing its own customer-friendly technology.","meta":{"title":"Design Thinking","href":"design-thinking"}}
{"text":"# Single Page Application\n\n## Summary\n\n\nA web application broadly consists of two things: data (content) and control (structure, styling, behaviour). In traditional applications, these are spread across multiple pages of HTML, CSS and JS files. Each page is served in HTML with links to suitable CSS/JS files. A Single Page Application (SPA) brings a new programming paradigm for the web.\n\nWith SPA, we have a single HTML page for the entire application. This page along with necessary CSS and JS for the site are loaded when the page is first requested. Subsequently, as the user navigates the app, only relevant data is requested from the server. Other files are already available with the client. The page doesn't reload but the view and HTML DOM are updated. \n\nSPA (along with PWA) is the modern way to build web applications. SPA enhances user experience. There are frameworks that simplify building SPAs.\n\n## Discussion\n\n### Could you explain the single page application for a beginner?\n\nIn a typical multi-page application, each page is generated as HTML on the server and served to the client browser. Each page has its own URL that's used by the client to request that page.\n\nWhen a user navigates from one page to another, the entire page loads. However, it's common for all pages to share many UI components: sidebar, header, footer, navigation menu, login/logout UI, and more. It's therefore wasteful to download these common elements with every page request. In terms of user experience, moving from one page to another might be annoying. Current page might lose UI interaction as user waits for another page to load.\n\nIn SPA, there's a single URL. When a link is clicked, relevant content is downloaded and specific UI components are updated to render that content. User experience improves because user stays with and can interact with the current page while the new content is fetched from the server. When an update happens, there's no transition to another page. Parts of the current page are updated with new content. \n\n\n### How does the lifecycle of an SPA request/response compare against a traditional multi-page app?\n\nIn multi-page apps, each request is for a specific page or document. Server looks at the URL and serves the corresponding page or document. The entire app is really a collection of pages. \n\nIn SPA, the first client request loads the app and all its relevant assets. These could be HTML plus JS/CSS files. If the app is complex, this initial bundle of files could be large. Therefore, the first view of the app can take some time to appear. During this phase, a loader image may be shown to the user. \n\nSubsequently, when the user navigates within the SPA, an API is called to fetch new data. The server responds with only the data, typically in JSON format. The browser receives this data and updates the app view. User sees this new information without a page reload. The app stays in the same page. Only the view changes by updating some components of the page. \n\nSPAs are well-suited when we wish to build rich interactive UI with lots of client-side behaviour. \n\n\n### Which are the different SPA architectures?\n\nApplication content might be stored in files or databases. It can be dynamic (news sites) or contextual (user specific). Therefore, the application has to transform this content into HTML so that users can read them in a web browser. This transformation process is called *rendering*. 
From this perspective, we note the following SPA architectures: \n\n + **Client-Side Rendering**: When the browser requests the site, the server responds quickly with a basic HTML page. This is linked to CSS/JS files. While these files are loading, the user sees a loader image. Once data loads, JavaScript on the browser executes to complete the view and DOM. Slow client devices can spoil the user experience.\n + **Server-Side Rendering**: The HTML page is generated on the fly at the server. Users therefore see the content quickly without any loader image. At the browser, once events are attached to the DOM, the app is ready for user interaction.\n + **Static Site Generators**: HTML pages are pre-generated and stored at the server. This means that the server can respond immediately. Better still, the page can be served by a CDN. This is the fastest approach. This approach is not suitable for dynamic content.\n\n### What are the benefits of an SPA?\n\nWith SPA, applications load faster and use less bandwidth. User experience is seamless, similar to a native app. Users don't have to watch slow page reloads. Developers can build feature-rich applications such as content-editing apps. On mobile devices, the experience is richer: clicks can be replaced with scrolling and amazing transitions. With browsers providing many developer tools, SPAs are also easy to debug on the client side. \n\nSPA optimizes bandwidth usage. Main resources (HTML/CSS/JS) are downloaded only once and reused. Subsequently, only data is downloaded. In addition, SPAs can cache data, thereby saving bandwidth. Caching also enables the application to work offline. \n\n\n### What are some criticisms or disadvantages of an SPA?\n\nAmong the disadvantages of SPA is **SEO**. SPA has a single URL and all routing happens via JavaScript. More recently, Google is able to crawl and index JavaScript-rendered content. In general, use multi-page apps if SEO is important. Adopt SPA for SaaS platforms, social networks or closed communities where SEO doesn't matter. \n\nSPA breaks **browser navigation**. The browser's back button will go to the previous page rather than the previous app view. This can be overcome with the *HTML5 History API*. \n\nSPA could lead to **security issues**. Cross-site scripting attacks are possible. If developers are not careful, sensitive data could be part of the initial data download. Since all this data is not necessarily displayed on the UI, it can give developers a false sense of security. Developers could also unknowingly provide access to privileged functionality at the client side. \n\nSPA needs **client-side processing** and therefore may not work well on old browsers or slow devices. It won't work if users turn off JavaScript in their browsers. SPAs can be hard to maintain due to reliance on many third-party libraries. \n\nIt's worth reading Adam Silver's article on the many disadvantages of SPAs. \n\n\n### What are some best practices when converting a traditional app to SPA?\n\nAn SPA has to implement many things that come by default in traditional apps: browsing history, routing, deep linking to particular views. Therefore, **select a framework** that facilitates these. Select a framework with a good ecosystem and a modular structure. It must be flexible and performant for even complex UI designs. \n\nAfter the initial page loads, subsequent data is loaded by making API calls. Building an SPA implies a **well-defined API**. Involve both frontend and backend engineers while creating this API. In one approach, serve static files separately from the data that's handled by API endpoints. 
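\n\nAs an illustration of this data-only exchange, here's a minimal sketch in TypeScript. The endpoint URL, element IDs and payload shape are hypothetical, and a real app would add error handling and a routing layer. \n\n```typescript\n// Minimal SPA fetch-and-update sketch: swap view content without a page reload.\nasync function showArticle(id: string): Promise<void> {\n  const res = await fetch(`/api/articles/${id}`); // server returns data only (JSON)\n  const article: { title: string; body: string } = await res.json();\n  document.querySelector(\"#title\")!.textContent = article.title;\n  document.querySelector(\"#body\")!.textContent = article.body;\n  history.pushState({ id }, \"\", `/articles/${id}`); // give the view its own URL\n}\n\n// Restore a view when the user presses the browser's back/forward buttons.\nwindow.addEventListener(\"popstate\", (event: PopStateEvent) => {\n  if (event.state?.id) showArticle(event.state.id);\n});\n```\n\nNote how the History API calls address the browser-navigation problem described earlier: each view gets its own URL even though there's no page reload.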
\n\nDefine clearly which parts of the UI are dynamic. This helps to organize project modules. Structure the project to enable **reusable components**. \n\nDue to its high reliance on JavaScript, invest in **build tools** for better dependency management. Webpack is a good choice. A build process can do code compilation (via Babel), file bundling and minification. \n\nWhen converting to an SPA, don't take an all-out approach. **Migrate incrementally**, perhaps one page at a time. \n\n\n### How do I test and measure performance of an SPA?\n\nTesting tools such as Selenium, Cypress and Puppeteer can also be used to measure app performance. WebPageTest is an online tool that's easier to use. Compared to multi-page apps, there's more effort to fill forms or navigate across views. \n\nApplication performance on the client side can be monitored via the Navigation Timing API and the Resource Timing API. But these fail to capture JavaScript execution times. To address this, the User Timing API can be used. LinkedIn took this approach and improved the performance of their SPA by 20%. Among the techniques they used are lazy rendering (deferring rendering outside the viewport) and lazy data fetching. \n\nAt Holiday Extras, their app took 23 seconds to load on a good 3G connection. To reduce this, they adopted code splitting to defer loading of non-critical libraries. CSS was also split into three parts loaded at different stages: critical, body, onload. They moved from JS rendering to HTML rendering, and then started serving static HTML from the Cloudfront CDN. They did real user monitoring (RUM). Among the tools they used were React, EJS, Webpack, and Speed Curve. \n\n\n### Could you mention some popular websites or web apps that are SPAs?\n\nFacebook, Google Maps, Gmail, Twitter, Google Drive, and GitHub are some examples of websites built as SPAs. \n\nFor example, in Gmail we can read mails, delete mails, compose and send mails without leaving the page. It's the same with Google Maps in which new locations are loaded and displayed in a seamless manner. In Grammarly, writers get suggestions and corrections as they compose their content. All this is powered by HTML5 and AJAX to build responsive apps. \n\nTrello is another example of an SPA. The card layout, overlays, and user interactions are all done without any page reloads. \n\n\n### Which are some tools and frameworks to help me create an SPA?\n\nThe three main frameworks for building SPAs are React, Angular and Vue on the client side, and Node.js on the server side. All these are based on JavaScript. Other JavaScript frameworks include Meteor, Backbone, Ember, Polymer, Knockout and Aurelia. \n\nDevelopers can choose the right framework by comparing how each implements or supports UI, routing, components, data binding, usability, scalability, performance, and testability. For example, while Ember comes with routing, React doesn't; but many modules for React support routing. React supports reusable components. React supports one-way data binding whereas Angular supports two-way data binding. Ember and Meteor are opinionated whereas React and Angular are less so and more flexible. \n\n.NET/C# developers can consider using Blazor. Blazor can work both on the client side and the server side. It runs in a web browser thanks to WebAssembly. \n\nMost design tools support traditional multi-page sites. Adobe Experience Manager Sites is a tool that allows designers to create or edit SPAs. 
It supports drag-and-drop editing, out-of-the-box components and responsive web design. \n\n\n### How does an SPA differ from PWA?\n\nPWAs use standard web technologies to deliver a native app-like experience on mobile. They were meant to make responsive web apps feel more native on mobile platforms. A PWA enables the app to work offline, receive push notifications and access device hardware. Unlike SPAs, PWAs use service workers, a web app manifest and HTTPS. \n\nPWAs load almost instantly since service workers run in a separate thread from the UI. SPAs need to pre-fetch assets at the start and therefore there's always an initial loading screen. SPAs can also use service workers but PWAs do it better. In terms of accessibility, PWAs are better than SPAs. SPAs might be suited for data-intensive sites that are not necessarily visually stunning. \n\nBut PWAs are not so different from SPAs. Both offer an app-like user experience. Many PWAs are built with the same frameworks that are used to build SPAs. In fact, an app might initially be developed as an SPA. Later, additional features such as caching, manifest icons and loading screens could be added. These make an SPA more like a PWA. \n\n## Milestones\n\n1995\n\nIn the mid-1990s, rich interactions on web browsers become possible due to two different technologies: **Java Applets** and **Macromedia Flash**. Browsers are merely proxies for these technologies that have to be explicitly installed as browser plugins. With these technologies, all content is either loaded upfront or loaded on demand as the view changes. No page reloads are necessary. In this sense, these are ancestors of modern SPAs. \n\n2005\n\nJesse James Garrett publishes a paper titled *Ajax: A New Approach to Web Applications*. This describes a novel way to design web applications. AJAX, which expands to **Asynchronous JavaScript + XML**, makes asynchronous requests in the background while the user continues to interact with the UI in the foreground. Once the server responds with XML (or JSON or any other format) data, the browser updates the view. AJAX uses the `XMLHTTPRequest` API. While this had been around since the early 2000s, Garrett's paper popularizes the approach. \n\n2008\n\nWith the launch of GitHub, many JavaScript libraries and frameworks are invented and shared. These become the building blocks on which true SPAs would later be built. \n\nSep \n2010\n\nTwitter releases a new version of its app with client-side rendering using JavaScript. Initial page load becomes slow. Due to the diversity of client devices and browsers, user experience becomes inconsistent. In 2012, Twitter updates the app towards server-side rendering and defers all JS execution until the content is rendered on the browser. They also organize the code as CommonJS modules and do lazy loading. These changes reduce the initial page load to a fifth. \n\nMay \n2016\n\nGoogle builds an app for its Google I/O event. Google engineers call this both an SPA and a PWA. With an App Engine backend, the app uses web components, Web Animations API, material design, Polymer and Firebase. During the event the app brings more user engagement than the native app. We might say that the app started as an SPA to create a PWA. In general, it's better to plan for a PWA from the outset rather than re-engineer an SPA at a later point. \n\nFeb \n2019\n\nGoogle engineers compare different SPA architectures in terms of performance. One of these is called **rehydration**, which combines both server-side and client-side renderings. 
This has the drawback that content loads quickly but is not immediately interactive, which can frustrate the user. \n\nMay \n2019\n\nWith the rise of edge computing, Section describes in a blog post how a Nuxt.js app (based on Vue.js) can be deployed at the edge. The app is housed within a Node.js module deployed at the edge. This SPA uses server-side rendering.","meta":{"title":"Single Page Application","href":"single-page-application"}}
{"text":"# Document Object Model\n\n## Summary\n\n\nDocument Object Model (DOM) is the object-oriented representation of an HTML or XML document. It defines a platform-neutral programming interface for accessing various components of a webpage, so that JavaScript programs can change document structure, style, and content programmatically. \n\nIt generates a hierarchical model of the HTML or XML document in memory. Programmers can access/manipulate tags, IDs, classes, attributes and elements using commands or methods provided by the document object. It's a logical structure because DOM doesn't specify any relationship between objects. \n\nTypically you use DOM API when documents can fit into memory. For very large documents, streaming APIs such as Simple API for XML (SAX) may be used.\n\nThe W3C DOM and WHATWG DOM are standards implemented in most modern browsers. However, many browsers extend these standards. Web applications must keep in view the DOM standard used for maintaining interoperability across browsers.\n\n## Discussion\n\n### What are the different components of a DOM?\n\nPurpose of DOM is to mirror HTML/XML documents as an in-memory representation. It's composed of: \n\n + Set of objects/elements\n + Hierarchical structure to combine objects\n + An interface to access/modify objectsDOM lists the required interface objects, with supported methods and fields. DOM-compliant browsers are responsible to supply concrete implementation in a particular language (mostly JavaScript).\n\nSome HTML DOM objects, functions & attributes: \n\n + **Node** - Each tree node is a Node object. Different types of nodes inherit from the basic `Node` interface.\n + **Document** - Root of the DOM tree is the HTMLDocument node. Usually available directly from JavaScript as document or window. Gives access to properties associated with a webpage such as URL, stylesheets, title, or characterSet. The field `document.documentElement` represents the child node of type `HTMLElement` and corresponds to `` element.\n + **Attr** – An attribute in an `HTMLElement` object providing the ability to access and set an attribute. Has name and value fields.\n + **Text** — A leaf node containing text inside a markup element. If there is no markup inside, text is contained in a single `Text` object (only child of the element).\n\n### Can you show with an example how a web page gets converted into its DOM?\n\nThe simplest way to see the DOM generated for any webpage is using \"Inspect\" option within your browser menu. DOM element navigation window that opens allows you to scroll through the element tree on the page. You can also alter some element values and styles – text, font, colours. Event listeners associated with each elements are also listed. \n\nThe document is the root node of the DOM tree and offers many useful properties and methods. `document.getElementById(str)` gives you the element with `str` as id (or name). It returns a reference to the DOM tree node representing the desired element. Referring to the figure, `document.getElementById('div1')` will return the first \"div\" child node of the \"body\" node.\n\nWe can also see that \"html\" node has two direct children, \"head\" and \"body\". This example also shows three leaf nodes containing only text. These are one \"title\" and two \"p\" tags.\n\nCorresponding CSS and JavaScript files referenced from HTML code can also be accessed through DOM objects. 
\n\n\n### How is JavaScript used to manipulate the DOM of a web page?\n\nThe ability to manipulate webpages dynamically using client-side programming is the basic purpose behind defining a DOM. This is achieved using DHTML. DHTML is not a markup language but a technique to make dynamic web pages using client-side programming. For uniform cross-browser support of webpages, DHTML involves three aspects:\n\n + **JavaScript** - for scripting cross-browser compatible code\n + **CSS** - for controlling the style and presentation\n + **DOM** - for a uniform programming interface to access and manipulate the web page as a document\n\nGoogle Chrome, Microsoft Edge, Mozilla Firefox and other browsers support DOM through standard JavaScript. JavaScript programming can be used to manipulate the HTML page rendering, the underlying DOM and the supporting CSS. Here's a list of some important DOM-related JavaScript functionalities (a sketch at the end of this article pulls several of these together): \n\n + Select, create, update and delete DOM elements (referenced by ID/name)\n + Style setting of DOM elements - colour, font, size, etc.\n + Get/set attributes of elements\n + Navigating between DOM elements - child, parent, sibling nodes\n + Manipulating the BOM (Browser Object Model) to interact with the browser\n + Event listeners and propagation based on action triggers on DOM elements\n\n### Can DOM be applied to documents other than HTML or XML?\n\nBy definition, DOM is a language-neutral object interface. The W3C clearly defines it as an API for valid HTML and well-formed XML documents. Therefore, a DOM can be defined for any XML-compliant markup language. The WHATWG community manages the HTML DOM interface. Some Microsoft-specific XML extensions define their own DOM. \n\n**Scalable Vector Graphics (SVG)** is an XML-based markup language for describing two-dimensional vector graphics. It defines its own DOM API. \n\n**XAML** is a declarative markup language promoted by Microsoft, used in UI creation of .NET Core apps. When represented as text, XAML files are XML files with `.xaml` extension. By treating XAML as a XAML node stream, XAML readers communicate with XAML writers and enable a program to view/alter the contents of a XAML node stream, similar to the XML Document Object Model (DOM) and the `XmlReader` and `XmlWriter` classes. \n\n**Standard Generalized Markup Language (SGML)** is a standard for how to specify a document markup language or tag set. The DOM support for SGML documents is limited to parallel support for XML. While working with SGML documents, the DOM will ignore `IGNORE` marked sections and `RCDATA` sections. \n\n\n### What are the disadvantages of using DOM?\n\nThe biggest problem with DOM is that it is **memory intensive**. While using the DOM interface, the entire HTML/XML is parsed and a DOM tree (of all nodes) is generated and returned. Once parsed, the user can navigate the tree to access data in the document nodes. The DOM interface is easy and flexible to use but has the overhead of parsing the entire HTML/XML before you can start using it. So when the document size is large, the memory requirement is high and initial document loading time is also high. For small devices with limited on-board memory, DOM parsing might be an overhead. \n\nSAX (Simple API for XML) is another document parsing technique where the parser doesn't read in the entire document. Events are triggered when the XML is being parsed. When it encounters a tag start (e.g. `<p>`), an event is triggered and the application handles it through registered callbacks. Since the full document tree is never held in memory, SAX suits large documents and memory-constrained devices.
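Pulling together the DOM-manipulation operations listed earlier, here's a minimal browser-console sketch; the `div1` id and all values are illustrative, not from any particular page:

```javascript
// Sketch: create, style, attribute, navigation and event operations.
const container = document.getElementById('div1'); // select by id

const p = document.createElement('p');             // create a new element
p.textContent = 'Added via the DOM.';              // update its content
container.appendChild(p);                          // insert into the tree

p.style.color = 'green';                           // style setting
p.setAttribute('data-source', 'dom-sketch');       // set an attribute

console.log(p.parentNode === container);           // true -- parent link
console.log(p.previousElementSibling?.nodeName);   // sibling navigation

p.addEventListener('click', () => {                // event listener
  console.log('paragraph clicked');
});

// container.removeChild(p);                       // delete when done
```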
","meta":{"title":"Document Object Model","href":"document-object-model"}}
{"text":"# CSS Specificity\n\n## Discussion\n\n### How are conflicting CSS declarations resolved using specificity?\n\nConsider HTML content `<p id=\"bar\" class=\"foo\">Lorem ipsum.</p>`. Consider CSS rules `p { color: red; }`, `.foo { color: green; }` and `#bar { color: blue; }`. All three selectors target the paragraph but the last selector has highest specificity. Hence, we'll see blue-coloured text. This can be understood by calculating the specificity: \n\n + `p`: one element: 0-0-0-1\n + `.foo`: one class: 0-0-1-0\n + `#bar`: one ID: 0-1-0-0\n\nSince 0-1-0-0 > 0-0-1-0 > 0-0-0-1, `#bar` selector has the highest specificity. \n\nIf we have `p { color: red !important; }`, we'll have red-coloured text. Specificity is ignored. \n\nSuppose we introduce inline styling, `<p id=\"bar\" class=\"foo\" style=\"color: orange\">…</p>`. This will take precedence, unless there's an earlier declaration with `!important`.
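One way to verify which declaration wins is to query the computed style from the browser console. Here's a minimal sketch for the original example (the three stylesheet rules, without the inline style); the markup and rules are assumed as above:

```javascript
// Assumes the document contains <p id="bar" class="foo">Lorem ipsum.</p>
// with rules: p { color: red; }  .foo { color: green; }  #bar { color: blue; }
const para = document.getElementById('bar');
console.log(getComputedStyle(para).color); // "rgb(0, 0, 255)" -- #bar wins
```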
Suppose we have two classes, `<p class=\"foo hoo\">…</p>` styled with `.hoo { color: yellow; }`. Specificity is the same for both `.foo` and `.hoo`. If `.hoo` appears later in the stylesheet, we'll have yellow-coloured text. When specificity is the same, order matters. \n\n\n### How is specificity affected by cascading order?\n\nConsider HTML content `<p id=\"bar\" class=\"foo\">Lorem ipsum.</p>`.
Suppose the author defines `.foo { color: green; }` and the user defines `#bar { color: blue; }`. User-defined styles are typically applied for accessibility reasons. The latter has higher specificity but the former declaration is used; that is, text is rendered in green. To understand this, we need to understand the concept of **origin**. \n\nCSS styles can come from different origins: user (reader of the document), user agent (browser), or author (web developer). The standard defines the precedence of these origins. This is applied first, before specificity is considered. The order also considers declarations that include `!important`. \n\nPrecedence in descending order is: transition declarations, important user agent declarations, important user declarations, important author declarations, animation declarations, normal author declarations, normal user declarations, and normal user agent declarations. \n\n\n### What are some examples of CSS specificity calculation?\n\nHere we share a few examples (a small sketch to automate such counts follows this list): \n\n + `ul#nav li.active a`: `#nav` is the ID, `.active` is the class, and three elements `ul`, `li` and `a` are used. Specificity 0-1-1-3.\n + `body.ie7 .col_3 h2 ~ h2`: Two classes `.ie7` and `.col_3`, and three elements `body`, `h2` and `h2` are used. `~` is not counted. Specificity 0-0-2-3.
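Such counts can be approximated programmatically. The following JavaScript sketch handles only simple selectors like those above; it ignores attribute selectors, pseudo-classes and pseudo-elements, so treat it as illustrative rather than a full implementation:

```javascript
// Rough specificity counter for simple selectors: IDs, classes, elements.
// Combinators (spaces, >, +, ~) are skipped, matching the examples above.
function specificity(selector) {
  const ids = (selector.match(/#[\w-]+/g) || []).length;      // e.g. #nav
  const classes = (selector.match(/\.[\w-]+/g) || []).length; // e.g. .active
  const elements = (selector.match(/(^|[\s>+~])[a-zA-Z][\w-]*/g) || []).length;
  return `0-${ids}-${classes}-${elements}`; // inline styles not handled here
}

console.log(specificity('ul#nav li.active a'));      // "0-1-1-3"
console.log(specificity('body.ie7 .col_3 h2 ~ h2')); // "0-0-2-3"
```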