https://en.wikipedia.org/wiki/Sherman%20trap
The Sherman trap is a box-style animal trap designed for the live capture of small mammals. It was invented by Dr. H. B. Sherman in the 1920s and became commercially available in 1955. Since that time, the Sherman trap has been used extensively by researchers in the biological sciences for capturing animals such as mice, voles, shrews, and chipmunks. The Sherman trap consists of eight hinged pieces of sheet metal (either galvanized steel or aluminum) that allow the trap to be collapsed for storage or transport. Sherman traps are often set in grids and may be baited with grains and seed. Description The hinged design allows the trap to fold up flat into something only the width of one side panel. This makes it compact for storage and easy to transport to field locations (e.g. in a backpack). Both ends are hinged, but in normal operation the rear end is closed and the front folds inwards and latches the treadle (trigger plate) in place. When an animal enters far enough to be clear of the front door, its weight releases the latch and the door closes behind it. The lure or bait is placed at the far end and can be dropped in place through the rear hinged door. Variants Later, other variants that built upon the basic design appeared, such as the Elliott trap used in Europe and Australasia. The Elliott trap has simplified the design slightly and is made from just seven hinged panels.
https://en.wikipedia.org/wiki/Symbolic%20language%20%28mathematics%29
In mathematics, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations, expressions, and statements, and the entities or operands on which the operations are performed. See also Formal language Language of mathematics List of mathematical symbols Mathematical Alphanumeric Symbols Mathematical notation Notation (general) Symbolic language (other)
https://en.wikipedia.org/wiki/List%20of%20circle%20topics
This list of circle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or concretely in physical space. It does not include metaphors like "inner circle" or "circular reasoning" in which the word does not refer literally to the geometric shape. Geometry and other areas of mathematics Circle Circle anatomy Annulus (mathematics) Area of a disk Bipolar coordinates Central angle Circular sector Circular segment Circumference Concentric Concyclic Degree (angle) Diameter Disk (mathematics) Horn angle Measurement of a Circle List of topics related to Pole and polar Power of a point Radical axis Radius Radius of convergence Radius of curvature Sphere Tangent lines to circles Versor Specific circles Apollonian circles Circles of Apollonius Archimedean circle Archimedes' circles – the twin circles doubtfully attributed to Archimedes Archimedes' quadruplets Circle of antisimilitude Bankoff circle Brocard circle Carlyle circle Circumscribed circle (circumcircle) Midpoint-stretching polygon Coaxal circles Director circle Fermat–Apollonius circle Ford circle Fuhrmann circle Generalised circle GEOS circle Great circle Great-circle distance Circle of a sphere Horocycle Incircle and excircles of a triangle Inscribed circle Johnson circles Magic circle (mathematics) Malfatti circles Nine-point circle Orthocentroidal circle Osculating circle Riemannian circle Schinzel circle Schoch circles Spieker circle Tangent circles Twin circles Unit circle Van Lamoen circle Villarceau circles Woo circles Circle-derived entities Apollonian gasket Arbelos Bicentric polygon Bicentric quadrilateral Coxeter's loxodromic sequence of tangent circles Cyclic quadrilateral Cycloid Ex-tangential quadrilateral Hawaiian earring Inscribed angle Inscribed angle theorem Inversive distance Inversive geometry Irrational rotation Lens (geometry) Lune Lune of
https://en.wikipedia.org/wiki/Sagrada%20Fam%C3%ADlia
The Basílica i Temple Expiatori de la Sagrada Família, shortened as the Sagrada Família, is a church under construction in the Eixample district of Barcelona, Catalonia, Spain. It is the largest unfinished Catholic church in the world. It was designed by the architect Antoni Gaudí (1852–1926), whose work on the Sagrada Família is part of a UNESCO World Heritage Site. On 7 November 2010, Pope Benedict XVI consecrated the church and proclaimed it a minor basilica. On 19 March 1882, construction of the Sagrada Família began under architect Francisco de Paula del Villar. In 1883, when Villar resigned, Gaudí took over as chief architect, transforming the project with his architectural and engineering style, combining Gothic and curvilinear Art Nouveau forms. Gaudí devoted the remainder of his life to the project, and he is buried in the church's crypt. At the time of his death in 1926, less than a quarter of the project was complete. Relying solely on private donations, the Sagrada Família's construction progressed slowly and was interrupted by the Spanish Civil War. In July 1936, anarchists from the FAI set fire to the crypt and broke their way into the workshop, partially destroying Gaudí's original plans. In 1939, Francesc de Paula Quintana took over site management; work was able to continue using material that had been saved from Gaudí's workshop and reconstructed from published plans and photographs. Construction resumed, with intermittent progress, in the 1950s. Advancements in technologies such as computer-aided design and computerised numerical control (CNC) have since enabled faster progress, and construction passed the midpoint in 2010. However, some of the project's greatest challenges remain, including the construction of ten more spires, each symbolising an important Biblical figure in the New Testament. It was anticipated that the building would be completed by 2026, the centenary of Gaudí's death, but this has now been delayed due to the COVID-19 pandemic. Some aspec
https://en.wikipedia.org/wiki/Continuous%20availability
Continuous availability is an approach to computer system and application design that protects users against downtime, whatever the cause, and ensures that users remain connected to their documents, data files and business applications. Continuous availability describes the information technology methods used to ensure business continuity. In the early days of computing, availability was not considered business critical. With the increasing use of mobile computing, global access to online business transactions and business-to-business communication, continuous availability is increasingly important based on the need to support customer access to information systems. Solutions for continuous availability exist in different forms and implementations depending on the software and hardware manufacturer. The goal of the discipline is to reduce user or business application downtime, which can have a severe impact on business operations. Such downtime can lead to loss of productivity, loss of revenue, customer dissatisfaction and ultimately can damage a company's reputation. Degrees of availability The terms high availability, continuous operation, and continuous availability are generally used to express how available a system is. The following is a definition of each of these terms. High availability refers to the ability to avoid unplanned outages by eliminating single points of failure. This is a measure of the reliability of the hardware, operating system, middleware, and database manager software. Another measure of high availability is the ability to minimize the effect of an unplanned outage by masking the outage from the end users. This can be accomplished by providing redundancy or quickly restarting failed components. Availability is usually expressed as a percentage of uptime in a given year: When defining such a percentage, it needs to be specified whether it applies to the hardware, the IT infrastructure or the business application on top. Continuou
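The availability formula and the usual "nines" table referenced above did not survive extraction into this excerpt. As a rough, hedged illustration of what "expressed as a percentage of uptime in a given year" means, the following Python sketch (my own, not from the article) converts an availability percentage into the downtime it permits per year:

```python
# Illustrative sketch (not from the article): converting an availability
# percentage into the maximum downtime it allows over one year.

def downtime_per_year(availability_percent: float) -> float:
    """Return allowed downtime in minutes per year for a given availability %."""
    minutes_per_year = 365 * 24 * 60
    return minutes_per_year * (1 - availability_percent / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {downtime_per_year(pct):.1f} min of downtime/year")
# e.g. 99.999% ("five nines") allows roughly 5.3 minutes of downtime per year.
```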
https://en.wikipedia.org/wiki/PBASIC
PBASIC is a microcontroller-based version of BASIC created by Parallax, Inc. in 1992. PBASIC was created to bring ease of use to the microcontroller and embedded processor world. It is used for writing code for the BASIC Stamp microcontrollers. After the code is written, it is tokenized and loaded into an EEPROM on the microcontroller. These tokens are fetched by the microcontroller and used to generate instructions for the processor. Syntax When starting a PBASIC file, the programmer defines the version of the BASIC Stamp and the version of PBASIC that will be used. Variables and constants are usually declared first thing in a program. The DO LOOP, FOR NEXT loop, IF and ENDIF, and some standard BASIC commands are part of the language, but many commands like PULSOUT, HIGH, LOW, DEBUG, and FREQOUT are native to PBASIC and are used for special purposes that are not available in traditional BASIC (such as having the BASIC Stamp ring a piezoelectric speaker, for example). Programming In the Stamp Editor, the PBASIC integrated development environment (IDE) running on a (Windows) PC, the programmer has to select one of seven different BASIC Stamp models (BS1, BS2, BS2E, BS2SX, BS2P, BS2PE, and BS2PX), which is done by using one of these commands:

' {$STAMP BS1}
' {$STAMP BS2}
' {$STAMP BS2e}
' {$STAMP BS2sx}
' {$STAMP BS2p}
' {$STAMP BS2pe}
' {$STAMP BS2px}

The programmer must also select which PBASIC version to use, which he or she may express with commands such as these:

' {$PBASIC 1.0}  ' use version 1.0 syntax (BS1 only)
' {$PBASIC 2.0}  ' use version 2.0 syntax
' {$PBASIC 2.5}  ' use version 2.5 syntax

An example of a program using HIGH and LOW to make an LED blink, along with a DO...LOOP, would be:

DO
  HIGH 1      'turn LED on I/O pin 1 on
  PAUSE 1000  'keep it on for 1 second
  LOW 1       'turn it off
  PAUSE 500   'keep it off for 500 msec
LOOP          'repeat forever

An example of a pr
https://en.wikipedia.org/wiki/Monounsaturated%20fat
In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond. Molecular description Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats. Health Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic-acid diet than with a palmitic-acid diet. The same study also indicated that more monounsaturated fats lead to less anger and irritability. Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol. Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d). In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles. The Mediterranean diet is one heavily influenced by monounsaturated fats. People in Mediterranean countries consume more total fat than people in Northern European countries, but most of the fat is in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of satur
https://en.wikipedia.org/wiki/Metaproteomics
Metaproteomics (also Community Proteomics, Environmental Proteomics, or Community Proteogenomics) is an umbrella term for experimental approaches to study all proteins in microbial communities and microbiomes from environmental sources. Metaproteomics is used to classify experiments that deal with all proteins identified and quantified from complex microbial communities. Metaproteomics approaches are comparable to gene-centric environmental genomics, or metagenomics. Origin of the term The term "metaproteomics" was proposed by Francisco Rodríguez-Valera to describe the genes and/or proteins most abundantly expressed in environmental samples. The term was derived from "metagenome". Wilmes and Bond proposed the term "metaproteomics" for the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time. At the same time, the terms "microbial community proteomics" and "microbial community proteogenomics" are sometimes used interchangeably for different types of experiments and results. Questions Addressed by Metaproteomics Metaproteomics allows scientists to better understand organisms' gene functions, as genes in DNA are transcribed to mRNA, which is then translated to protein. Gene expression changes can therefore be monitored through this method. Furthermore, proteins represent cellular activity and structure, so using metaproteomics in research can lead to functional information at the molecular level. Metaproteomics can also be used as a tool to assess the composition of a microbial community in terms of the biomass contributions of individual member species in the community and can thus complement approaches that assess community composition based on gene copy counts, such as 16S rRNA gene amplicon or metagenome sequencing. Proteomics of microbial communities The first proteomics experiment was conducted with the invention of two-dimensional polyacrylamide gel electrophoresis (2D-PAGE). The 1980s and 1990
https://en.wikipedia.org/wiki/Latch-up
In electronics, a latch-up is a type of short circuit which can occur in an integrated circuit (IC). More specifically, it is the inadvertent creation of a low-impedance path between the power supply rails of a MOSFET circuit, triggering a parasitic structure which disrupts proper functioning of the part, possibly even leading to its destruction due to overcurrent. A power cycle is required to correct this situation. The parasitic structure is usually equivalent to a thyristor (or SCR), a PNPN structure which acts as a PNP and an NPN transistor stacked next to each other. During a latch-up when one of the transistors is conducting, the other one begins conducting too. They both keep each other in saturation for as long as the structure is forward-biased and some current flows through it - which usually means until a power-down. The SCR parasitic structure is formed as a part of the totem-pole PMOS and NMOS transistor pair on the output drivers of the gates. The latch-up does not have to happen between the power rails - it can happen at any place where the required parasitic structure exists. A common cause of latch-up is a positive or negative voltage spike on an input or output pin of a digital chip that exceeds the rail voltage by more than a diode drop. Another cause is the supply voltage exceeding the absolute maximum rating, often from a transient spike in the power supply. It leads to a breakdown of an internal junction. This frequently happens in circuits which use multiple supply voltages that do not come up in the required sequence on power-up, leading to voltages on data lines exceeding the input rating of parts that have not yet reached a nominal supply voltage. Latch-ups can also be caused by an electrostatic discharge event. Another common cause of latch-ups is ionizing radiation which makes this a significant issue in electronic products designed for space (or very high-altitude) applications. A single event latch-up is a latch-up caused by a si
https://en.wikipedia.org/wiki/Circuit%20design
The process of circuit design can cover systems ranging from complex electronic systems down to the individual transistors within an integrated circuit. For simple circuits, one person can often carry out the design without needing a planned or structured process. Still, teams of designers following a systematic approach with intelligently guided computer simulation are becoming increasingly common for more complex designs. In integrated circuit design automation, the term "circuit design" often refers to the step of the design cycle which outputs the schematics of the integrated circuit. Typically this is the step between logic design and physical design. Process Traditional circuit design usually involves several stages. Sometimes, a design specification is written after liaising with the customer. A technical proposal may be written to meet the requirements of the customer specification. The next stage involves synthesising on paper a schematic circuit diagram, an abstract electrical or electronic circuit that will meet the specifications. A calculation of the component values to meet the operating specifications under specified conditions should be made. Simulations may be performed to verify the correctness of the design. A breadboard or other prototype version of the design may be built for testing against the specification. This may involve making alterations to the circuit to achieve compliance. A choice as to a method of construction and all the parts and materials to be used must be made. There is a presentation of component and layout information to draughtspersons and layout and mechanical engineers for prototype production. This is followed by the testing or type-testing of several prototypes to ensure compliance with customer requirements. Usually, there is a signing and approval of the final manufacturing drawings, and there may be post-design services (obsolescence of components, etc.). Specification The process of circuit design begins
https://en.wikipedia.org/wiki/List%20of%20Runge%E2%80%93Kutta%20methods
Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation Explicit Runge–Kutta methods take the form Stages for implicit methods of s stages take the more general form, with the solution to be found over all s Each method listed on this page is defined by its Butcher tableau, which puts the coefficients of the method in a table as follows: For adaptive and implicit methods, the Butcher tableau is extended to give values of , and the estimated error is then . Explicit methods The explicit methods are those where the matrix is lower triangular. Forward Euler The Euler method is first order. The lack of stability and accuracy limits its popularity mainly to use as a simple introductory example of a numeric solution method. Explicit midpoint method The (explicit) midpoint method is a second-order method with two stages (see also the implicit midpoint method below): Heun's method Heun's method is a second-order method with two stages. It is also known as the explicit trapezoid rule, improved Euler's method, or modified Euler's method. (Note: The "eu" is pronounced the same way as in "Euler", so "Heun" rhymes with "coin"): Ralston's method Ralston's method is a second-order method with two stages and a minimum local error bound: Generic second-order method Kutta's third-order method Generic third-order method See Sanderse and Veldman (2019). for α ≠ 0, , 1: Heun's third-order method Van der Houwen's/Wray third-order method Ralston's third-order method Ralston's third-order method is used in the embedded Bogacki–Shampine method. Third-order Strong Stability Preserving Runge-Kutta (SSPRK3) Classic fourth-order method The "original" Runge–Kutta method. 3/8-rule fourth-order method This method doesn't have as much notoriety as the "classic" method, but is just as classic because it was proposed in the same paper (Kutta, 1901). Ralston's fourth-order method This fourth order method has minimum truncation er
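The general Runge–Kutta formulas and the Butcher tableaux referenced above did not survive extraction into this excerpt. As a hedged illustration only, here is a minimal Python sketch of the classic fourth-order method described above as the "original" Runge–Kutta method, applied to a made-up test problem y' = -y (the test problem and step size are my own choices, not the article's):

```python
# Minimal sketch of one step of the classic fourth-order Runge-Kutta (RK4)
# method. The test problem y' = -y with y(0) = 1 is a hypothetical example.

def rk4_step(f, t, y, h):
    """Advance y(t) by one step of size h for the ODE y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -y          # dy/dt = -y, exact solution exp(-t)
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):          # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(y)                     # ~0.3678794, close to exp(-1)
```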
https://en.wikipedia.org/wiki/Jeremy%20Burroughes
Jeremy Henley Burroughes (born August 1960) is a British physicist and engineer, known for his contributions to the development of organic electronics through his work on the science of semiconducting polymers and molecules and their application. He is the Chief Technology Officer of Cambridge Display Technology, a company specialising in the development of technologies based on polymer light-emitting diodes. Education Burroughes earned his PhD from the University of Cambridge in 1989. His thesis was entitled The physical processes in organic semiconducting polymer devices. Work Early in his career, Burroughes discovered that certain conjugated polymers were capable of emitting light when an electric current passed through them. The discovery of this previously unknown form of electroluminescence led to the foundation of Cambridge Display Technology where Burroughes has been responsible for a number of technology innovations, including the direct printing of full-colour OLED displays. Awards and honours Burroughes was elected a Fellow of the Royal Society (FRS) in 2012. His certificate of election reads:
https://en.wikipedia.org/wiki/Transcellular%20transport
Transcellular transport involves the transportation of solutes by a cell through the cell itself. Transcellular transport can occur in three different ways: active transport, passive transport, and transcytosis. Active Transport Main article: Active transport Active transport is the process of moving molecules from an area of low concentration to an area of high concentration. There are two types of active transport: primary active transport and secondary active transport. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against their concentration gradients. Examples of molecules that follow this process are potassium (K+), sodium (Na+), and calcium (Ca2+). A place in the human body where this occurs is in the intestines with the uptake of glucose. Secondary active transport is when one solute moves down the electrochemical gradient to produce enough energy to force the transport of another solute from low concentration to high concentration. An example of where this occurs is in the movement of glucose within the proximal convoluted tubule (PCT). Passive Transport Main article: Passive transport Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expending any energy. There are two types of passive transport: passive diffusion and facilitated diffusion. Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. One example of passive diffusion is the gas exchange that occurs between the oxygen in the blood and the carbon dioxide present in the lungs. Facilitated diffusion is the movement of polar molecules down the concentration gradient with the assistance of membrane proteins. Since the molecules associated with facilitated diffusion are polar, they are repelled by the hydrophobic sections of the permeable membrane; therefore, they need to be assisted by the membrane proteins. Both t
https://en.wikipedia.org/wiki/Structural%20induction
Structural induction is a proof method that is used in mathematical logic (e.g., in the proof of Łoś' theorem), computer science, graph theory, and some other mathematical fields. It is a generalization of mathematical induction over natural numbers and can be further generalized to arbitrary Noetherian induction. Structural recursion is a recursion method bearing the same relationship to structural induction as ordinary recursion bears to ordinary mathematical induction. Structural induction is used to prove that some proposition P(x) holds for all x of some sort of recursively defined structure, such as formulas, lists, or trees. A well-founded partial order is defined on the structures ("subformula" for formulas, "sublist" for lists, and "subtree" for trees). The structural induction proof is a proof that the proposition holds for all the minimal structures and that if it holds for the immediate substructures of a certain structure S, then it must hold for S also. (Formally speaking, this then satisfies the premises of an axiom of well-founded induction, which asserts that these two conditions are sufficient for the proposition to hold for all x.) A structurally recursive function uses the same idea to define a recursive function: "base cases" handle each minimal structure and a rule for recursion. Structural recursion is usually proved correct by structural induction; in particularly easy cases, the inductive step is often left out. The length and ++ functions in the example below are structurally recursive. For example, if the structures are lists, one usually introduces the partial order "<", in which L < M whenever list L is the tail of list M. Under this ordering, the empty list [] is the unique minimal element. A structural induction proof of some proposition P(L) then consists of two parts: a proof that P([]) is true and a proof that if P(L) is true for some list L, and if L is the tail of list M, then P(M) must also be true. Eventually, there may exist more than one base case
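The "example below" that the entry refers to (the length and ++ functions) is not included in this excerpt. The following Python sketch is a stand-in illustration of structurally recursive definitions on cons-style lists; the representation and names are mine, not the article's:

```python
# Stand-in for the missing example: structurally recursive functions on
# cons-style lists, represented here as None (empty list) or (head, tail).
# These definitions mirror the usual `length` and `++` (append) functions.

def length(xs):
    """Base case on the minimal structure (empty list); recurse on the tail."""
    if xs is None:
        return 0
    head, tail = xs
    return 1 + length(tail)

def append(xs, ys):
    """xs ++ ys: the recursion follows the structure of the first list."""
    if xs is None:
        return ys
    head, tail = xs
    return (head, append(tail, ys))

xs = (1, (2, (3, None)))       # the list [1, 2, 3]
ys = (4, (5, None))            # the list [4, 5]
print(length(append(xs, ys)))  # 5, consistent with length(xs) + length(ys)
```

A structural induction proof that length(append(xs, ys)) equals length(xs) + length(ys) would follow the structure of xs: the base case handles the empty list, and the inductive step assumes the property holds for the tail.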
https://en.wikipedia.org/wiki/Prony%27s%20method
Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer. Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or damped sinusoids. This allows the estimation of frequency, amplitude, phase and damping components of a signal. The method Let be a signal consisting of evenly spaced samples. Prony's method fits a function to the observed . After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms: where are the eigenvalues of the system, are the damping components, are the angular-frequency components, are the phase components, are the amplitude components of the series, is the imaginary unit (). Representations Prony's method is essentially a decomposition of a signal with complex exponentials via the following process: Regularly sample so that the -th of samples may be written as If happens to consist of damped sinusoids, then there will be pairs of complex exponentials such that where Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist: The key to Prony's Method is that the coefficients in the difference equation are related to the following polynomial: These facts lead to the following three steps within Prony's method: 1) Construct and solve the matrix equation for the values: Note that if , a generalized matrix inverse may be needed to find the values . 2) After finding the values, find the roots (numerically if necessary) of the polynomial The -th root of this polynomial will be equal to . 3) With the values, the values are part of a system of linear equations that may be used to solve for the values: where unique values are used. It is possible to
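The equations for the three steps did not survive extraction into this excerpt. As a hedged numerical sketch (my own minimal NumPy implementation with an illustrative test signal, not the article's notation), the procedure solves a linear prediction system for the difference-equation coefficients, takes polynomial roots to recover the exponents, and then solves a Vandermonde system for the amplitudes:

```python
# Hedged sketch of Prony's method: fits x[n] ~ sum_i h_i * z_i**n to
# uniformly spaced samples. Model order and test signal are hypothetical.
import numpy as np

def prony(x, p):
    """Return (z, h): poles z_i and complex amplitudes h_i for model order p."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Step 1: linear-prediction coefficients from x[n] = -sum_k a_k x[n-k]
    A = np.column_stack([x[p - k : N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # Step 2: roots of z**p + a_1 z**(p-1) + ... + a_p give the poles z_i
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: solve the Vandermonde system x[n] ~ sum_i h_i * z_i**n
    V = np.vander(z, N, increasing=True).T          # V[n, i] = z_i**n
    h, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, h

# Hypothetical test signal: one damped sinusoid sampled at unit spacing
n = np.arange(64)
x = np.exp(-0.05 * n) * np.cos(0.3 * n)
z, h = prony(x, 2)
print(np.log(z))   # ~ -0.05 +/- 0.3j (damping and angular-frequency components)
```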
https://en.wikipedia.org/wiki/Table%20of%20divisors
The tables below list all of the divisors of the numbers 1 to 1000. A divisor of an integer n is an integer m, for which n/m is again an integer (which is necessarily also a divisor of n). For example, 3 is a divisor of 21, since 21/3 = 7 (and therefore 7 is also a divisor of 21). If m is a divisor of n then so is −m. The tables below only list positive divisors. Key to the tables d(n) is the number of positive divisors of n, including 1 and n itself σ(n) is the sum of the positive divisors of n, including 1 and n itself s(n) is the sum of the proper divisors of n, including 1, but not n itself; that is, s(n) = σ(n) − n a deficient number is greater than the sum of its proper divisors; that is, s(n) < n a perfect number equals the sum of its proper divisors; that is, s(n) = n an abundant number is less than the sum of its proper divisors; that is, s(n) > n a highly abundant number has a sum of positive divisors greater than any lesser number's sum of positive divisors; that is, σ(n) > σ(m) for every positive integer m < n. Counterintuitively, the first seven highly abundant numbers are not abundant numbers. a prime number has only 1 and itself as divisors; that is, d(n) = 2. Prime numbers are always deficient as s(n) = 1 a composite number has more than just 1 and itself as divisors; that is, d(n) > 2 a highly composite number has more divisors than any lesser number; that is, d(n) > d(m) for every positive integer m < n. Counterintuitively, the first two highly composite numbers are not composite numbers. a superior highly composite number has more divisors than any other number scaled relative to some positive power of the number itself; that is, there exists some ε such that for every other positive integer m. Superior highly composite numbers are always highly composite numbers. a weird number is an abundant number that is not semiperfect; that is, no subset of the proper divisors of n sums to n 1 to 100 101 to 200 201 to 300 301 to 400 401 to 50
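The quantities defined in the key can be computed directly. The following Python sketch (my own illustration, not part of the tables) computes d(n), σ(n), s(n) and the deficient/perfect/abundant classification for a few example values:

```python
# Illustration of the key's definitions: d(n), sigma(n), s(n) and the
# deficient / perfect / abundant classification.

def divisors(n):
    """Positive divisors of n (simple trial division, fine for small n)."""
    return [m for m in range(1, n + 1) if n % m == 0]

def classify(n):
    divs = divisors(n)
    d = len(divs)                 # d(n): number of positive divisors
    sigma = sum(divs)             # sigma(n): sum of positive divisors
    s = sigma - n                 # s(n): sum of proper divisors
    kind = "perfect" if s == n else ("abundant" if s > n else "deficient")
    return d, sigma, s, kind

for n in (6, 12, 21, 28):
    print(n, classify(n))
# 6 -> (4, 12, 6, 'perfect'); 12 -> (6, 28, 16, 'abundant');
# 21 -> (4, 32, 11, 'deficient'); 28 -> (6, 56, 28, 'perfect')
```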
https://en.wikipedia.org/wiki/Comparison%20of%20streaming%20media%20software
This is a comparison of streaming media systems. A more complete list of streaming media systems is also available. General The following tables compare general and technical information for a number of streaming media systems both audio and video. Please see the individual systems' linked articles for further information. Operating system support Container format support Information about what digital container formats are supported. Protocol support Information about which internet protocols are supported for broadcasting streaming media content. Features See also Community radio Comparison of video services Content delivery network Digital television Electronic commerce Internet radio Internet radio device Internet television IPTV List of Internet radio stations List of music streaming services Multicast P2PTV Protection of Broadcasts and Broadcasting Organizations Treaty Push technology Streaming media Ustream Webcast Web television
https://en.wikipedia.org/wiki/Spiral%20plater
A spiral plater is an instrument used to dispense a liquid sample onto a Petri dish in a spiral pattern. It is commonly used as part of a CFU count procedure to determine the number of microbes in a sample. In this setting, after spiral plating, the Petri dish is incubated for several hours, after which the number of colony-forming units (CFU) is determined. Spiral platers are also used for research, clinical diagnostics and as a method for covering a Petri dish with bacteria before placing antibiotic discs for AST. Mode of action The spiral plater rotates the dish while simultaneously dispensing the liquid and linearly moving either the dish or the dispensing tip. This creates the common spiral pattern. If all movements are done at constant speed, the spiral created will have a lower concentration on the outside of the plate than on the inside. More advanced spiral platers provide different options for spiral patterns, such as constant concentration (by slowing down the spinning and/or the lateral movement) or exponential concentration (by speeding up the spinning and/or the lateral movement). In food and cosmetic testing Spiral plating is used extensively for microbiological testing of food, milk and milk products and cosmetics. It is a method approved by the FDA. The advantage of spiral plating is that fewer plates are needed than with manual plating, because different concentrations are present on each plate. This also makes the colonies harder to count and requires special techniques and equipment. Stand-alone vs. Add-on Spiral platers are available either as stand-alone instruments that are fed manually with plates and samples or fed automatically using dedicated stackers. Alternatively, spiral platers are available as integrated devices as part of larger automated platforms. In this case a larger workflow is often automated, e.g. plating, incubation and counting.
https://en.wikipedia.org/wiki/Chip%20art
Chip art, also known as silicon art, chip graffiti or silicon doodling, refers to microscopic artwork built into integrated circuits, also called chips or ICs. Since ICs are printed by photolithography, not constructed a component at a time, there is no additional cost to include features in otherwise unused space on the chip. Designers have used this freedom to put all sorts of artwork on the chips themselves, from designers' simple initials to rather complex drawings. Given the small size of chips, these figures cannot be seen without a microscope. Chip graffiti is sometimes called the hardware version of software easter eggs. Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived. A 1984 revision of the US copyright law (the Semiconductor Chip Protection Act of 1984) made all chip masks automatically copyrighted, with exclusive rights to the creator, and similar rules apply in most other countries that manufacture ICs. Since an exact copy is now automatically a copyright violation, the doodles serve no useful purpose. Creating chip art Integrated circuits are constructed from multiple layers of material, typically silicon, silicon dioxide (glass), and aluminum. The composition and thickness of these layers give them their distinctive color and appearance. These elements created an irresistible palette for IC design and layout engineers. The creative process involved in the design of these chips, a strong sense of pride in their work, and an artistic temperament combine to compel people to want to mark their work as their own. It is very common to find initials, or groups of initials, on chips. This is the design engineer's way of "signing" his or her work. Often this creative artist's instinct extends to the inclusion of small pictures or icons
https://en.wikipedia.org/wiki/Canonical%20form
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example: Jordan normal form is a canonical form for matrix similarity. The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix. In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting from independent computations. Therefore, in computer algebra, normal form is a weaker notion: a normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form.
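As a small, hedged illustration of testing equality via canonical forms (my own example, not from the article), consider fractions represented as numerator/denominator pairs: the canonical representative of each equivalence class is obtained by dividing out the greatest common divisor and fixing the sign of the denominator.

```python
# Illustration of canonicalization: fractions (num, den) are equivalent when
# they represent the same rational number; the canonical form divides out the
# gcd and makes the denominator positive, so equality becomes a plain comparison.
from math import gcd

def canonicalize(num: int, den: int):
    """Canonical form of the fraction num/den (den != 0)."""
    if den < 0:                      # fix the sign convention
        num, den = -num, -den
    g = gcd(num, den)
    return (num // g, den // g)

a = canonicalize(2, 4)
b = canonicalize(-3, -6)
print(a, b, a == b)                  # (1, 2) (1, 2) True
```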
https://en.wikipedia.org/wiki/Bioactive%20terrarium
A bioactive terrarium (or vivarium) is a terrarium for housing one or more terrestrial animal species that includes live plants and populations of small invertebrates and microorganisms to consume and break down the waste products of the primary species. In a functional bioactive terrarium, the waste products will be broken down by these detritivores, reducing or eliminating the need for cage cleaning. Bioactive vivariums are used by zoos and hobbyists to house reptiles and amphibians in an aesthetically pleasing and enriched environment. Enclosure Any terrarium can be made bioactive by addition of the appropriate substrate, plants, and detritivores. Bioactive enclosures are often maintained as display terraria constructed of PVC, wood, glass and/or acrylic. Bioactive enclosures in laboratory "rack" style caging are uncommon. Cleanup crew Waste products of the primary species are consumed by a variety of detritivores, referred to as the "cleanup crew" by hobbyists. These can include woodlice, springtails, earthworms, millipedes, and various beetles, with different species being preferred in different habitats - the cleanup crew for a tropical rainforest bioactive terrarium may rely primarily on springtails, isopods, and earthworms, while a desert habitat might use beetles. If the primary species is insectivorous, they may consume the cleanup crew, and thus the cleanup crew must have sufficient retreats to avoid being completely depopulated. Additionally, bioactive terraria typically have a flourishing population of bacteria and other microorganisms which break down the wastes of the cleanup crew and primary species. Fungi may occur as part of the terrarium cycle and will be consumed by the cleanup crew. Substrate Bioactive enclosures require some form of substrate to grow plants and to provide habitat for the cleanup crew. The choice of substrate is typically determined by the habitat of the primary species (e.g. jungle vs desert), and created by mixing a v
https://en.wikipedia.org/wiki/Undersampling
In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal. When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion. Description The Fourier transforms of real-valued functions are symmetrical around the 0 Hz axis. After sampling, only a periodic summation of the Fourier transform (called discrete-time Fourier transform) is still available. The individual frequency-shifted copies of the original transform are called aliases. The frequency offset between adjacent aliases is the sampling-rate, denoted by fs. When the aliases are mutually exclusive (spectrally), the original transform and the original continuous function, or a frequency-shifted version of it (if desired), can be recovered from the samples. The first and third graphs of Figure 1 depict a baseband spectrum before and after being sampled at a rate that completely separates the aliases. The second graph of Figure 1 depicts the frequency profile of a bandpass function occupying the band (A, A+B) (shaded blue) and its mirror image (shaded beige). The condition for a non-destructive sample rate is that the aliases of both bands do not overlap when shifted by all integer multiples of fs. The fourth graph depicts the spectral result of sampling at the same rate as the baseband function. The rate was chosen by finding the lowest rate that is an integer sub-multiple of A and also satisfies the baseband Nyquist criterion: fs > 2B.  Consequently, the bandpass function has effectively been converted to baseband. All the other rates that avoid overlap are given by these more general criteria, where A and A+B are replaced
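The general acceptance criteria referenced at the end of this excerpt are missing from the extraction. A commonly cited form of the bandpass-sampling condition, stated here as an assumption rather than a quotation from the article, is 2(A+B)/n <= fs <= 2A/(n-1) for some integer n >= 1 (n = 1 recovers the ordinary Nyquist condition fs >= 2(A+B)). The Python sketch below enumerates the resulting valid rate ranges for an illustrative band:

```python
# Hedged sketch of the standard bandpass-sampling condition for a band
# occupying (A, A+B): a sample rate fs avoids alias overlap when
#   2*(A+B)/n <= fs <= 2*A/(n-1)   for some integer n >= 1.
# The numeric values below are illustrative, not from the article.
import math

def valid_rate_ranges(A: float, B: float):
    """Return (fs_min, fs_max) intervals of non-destructive sample rates."""
    ranges = []
    n_max = math.floor((A + B) / B)          # largest usable n
    for n in range(1, n_max + 1):
        lo = 2 * (A + B) / n
        hi = math.inf if n == 1 else 2 * A / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

for lo, hi in valid_rate_ranges(A=100.0, B=20.0):   # band from 100 to 120 (arbitrary units)
    print(f"{lo:.1f} <= fs <= {hi:.1f}")
# For A=100, B=20 the minimum admissible rate is 240/6 = 40 = 2*B,
# since A is an integer multiple of B.
```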
https://en.wikipedia.org/wiki/Mean-field%20theory
In physics and probability theory, Mean-field theory (MFT) or Self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. Origins The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the fi
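As a hedged illustration of the mean-field idea (a textbook example of my choosing, not taken from the article), consider the Ising ferromagnet: replacing the interaction of each spin with its neighbours by an interaction with their average magnetization m yields the self-consistency equation m = tanh(beta*J*z*m), which can be solved by fixed-point iteration.

```python
# Illustrative mean-field example: each Ising spin is assumed to feel the
# *average* magnetization m of its z neighbours, giving the self-consistency
# equation m = tanh(beta * J * z * m), solved here by fixed-point iteration.
import math

def mean_field_magnetization(beta, J=1.0, z=4, m0=0.5, iters=200):
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * J * z * m)
    return m

for T in (2.0, 5.0, 8.0):                 # with J*z = 4, the mean-field Tc is 4
    beta = 1.0 / T
    print(T, round(mean_field_magnetization(beta), 4))
# Below the mean-field Tc = z*J the iteration settles at a nonzero m
# (spontaneous magnetization); above it, m decays to ~0.
```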
https://en.wikipedia.org/wiki/Counterexample
A counterexample is any exception to a generalization. In logic a counterexample disproves the generalization, and does so rigorously in the fields of mathematics and philosophy. For example, the fact that "student John Smith is not lazy" is a counterexample to the generalization "students are lazy", and both a counterexample to, and disproof of, the universal quantification "all students are lazy." In mathematics, the term "counterexample" is also used (by a slight abuse) to refer to examples which illustrate the necessity of the full hypothesis of a theorem. This is most often done by considering a case where a part of the hypothesis is not satisfied and the conclusion of the theorem does not hold. In mathematics In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples. Rectangle example Suppose that a mathematician is studying geometry and shapes, and she wishes to prove certain theorems about them. She conjectures that "All rectangles are squares", and she is interested in knowing whether this statement is true or false. In this case, she can either attempt to prove the truth of the statement using deductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture "All rectangles have four sides". This is logically weaker than her original conjecture, since every squa
https://en.wikipedia.org/wiki/Popularity
In sociology, popularity is how much a person, idea, place, item or other concept is either liked or accorded status by other people. Liking can be due to reciprocal liking, interpersonal attraction, and similar factors. Social status can be due to dominance, superiority, and similar factors. For example, a kind person may be considered likable and therefore more popular than another person, and a wealthy person may be considered superior and therefore more popular than another person. There are two primary types of interpersonal popularity: perceived and sociometric. Perceived popularity is measured by asking people who the most popular or socially important people in their social group are. Sociometric popularity is measured by objectively measuring the number of connections a person has to others in the group. A person can have high perceived popularity without having high sociometric popularity, and vice versa. According to psychologist Tessa Lansu at the Radboud University Nijmegen, "Popularity [has] to do with being the middle point of a group and having influence on it." Introduction The term popularity is borrowed from the Latin term popularis, which originally meant "common." The current definition of the word popular, the "fact or condition of being well liked by the people", was first seen in 1601. While popularity is a trait often ascribed to an individual, it is an inherently social phenomenon and thus can only be understood in the context of groups of people. Popularity is a collective perception, and individuals report the consensus of a group's feelings towards an individual or object when rating popularity. It takes a group of people to like something, so the more that people advocate for something or claim that someone is best liked, the more attention it will get, and the more popular it will be deemed. Notwithstanding the above, popularity as a concept can be applied, assigned, or directed towards objects such as songs, movies, websites, a
https://en.wikipedia.org/wiki/Mutualism%20Parasitism%20Continuum
The hypothesis or paradigm of the mutualism–parasitism continuum postulates that compatible host-symbiont associations can occupy a broad continuum of interactions with different fitness outcomes for each member. At one end of the continuum lies obligate mutualism, where both host and symbiont benefit from the interaction and are dependent on it for survival. At the other end of the continuum, highly parasitic interactions can occur, where one member gains a fitness benefit at the expense of the other's survival. Between these extremes many different types of interaction are possible. The degree of change between mutualism and parasitism varies depending on the availability of resources: where there is environmental stress generated by scarce resources, symbiotic relationships are formed, while in environments where there is an excess of resources, biological interactions turn to competition and parasitism. Classically, the transmission mode of the symbiont can also be important in predicting where on the mutualism-parasitism continuum an interaction will sit. Symbionts that are vertically transmitted (inherited symbionts) frequently occupy the mutualism region of the continuum; this is due to the aligned reproductive interests between host and symbiont that are generated under vertical transmission. In some systems, increases in the relative contribution of horizontal transmission can drive selection for parasitism. Studies of this hypothesis have focused on host-symbiont models of plants and fungi, and also of animals and microbes. See also Red King Hypothesis Red Queen Hypothesis Black Queen Hypothesis Biological interaction
https://en.wikipedia.org/wiki/List%20of%20inequalities
This article lists Wikipedia articles about named mathematical inequalities. Inequalities in pure mathematics Analysis Agmon's inequality Askey–Gasper inequality Babenko–Beckner inequality Bernoulli's inequality Bernstein's inequality (mathematical analysis) Bessel's inequality Bihari–LaSalle inequality Bohnenblust–Hille inequality Borell–Brascamp–Lieb inequality Brezis–Gallouet inequality Carleman's inequality Chebyshev–Markov–Stieltjes inequalities Chebyshev's sum inequality Clarkson's inequalities Eilenberg's inequality Fekete–Szegő inequality Fenchel's inequality Friedrichs's inequality Gagliardo–Nirenberg interpolation inequality Gårding's inequality Grothendieck inequality Grunsky's inequalities Hanner's inequalities Hardy's inequality Hardy–Littlewood inequality Hardy–Littlewood–Sobolev inequality Harnack's inequality Hausdorff–Young inequality Hermite–Hadamard inequality Hilbert's inequality Hölder's inequality Jackson's inequality Jensen's inequality Khabibullin's conjecture on integral inequalities Kantorovich inequality Karamata's inequality Korn's inequality Ladyzhenskaya's inequality Landau–Kolmogorov inequality Lebedev–Milin inequality Lieb–Thirring inequality Littlewood's 4/3 inequality Markov brothers' inequality Mashreghi–Ransford inequality Max–min inequality Minkowski's inequality Poincaré inequality Popoviciu's inequality Prékopa–Leindler inequality Rayleigh–Faber–Krahn inequality Remez inequality Riesz rearrangement inequality Schur test Shapiro inequality Sobolev inequality Steffensen's inequality Szegő inequality Three spheres inequality Trace inequalities Trudinger's theorem Turán's inequalities Von Neumann's inequality Wirtinger's inequality for functions Young's convolution inequality Young's inequality for products Inequalities relating to means Hardy–Littlewood maximal inequality Inequality of arithmetic and geometric means Ky Fan inequality Levinson's inequality Mac
https://en.wikipedia.org/wiki/Multimedia%20over%20Coax%20Alliance
The Multimedia over Coax Alliance (MoCA) is an international standards consortium that publishes specifications for networking over coaxial cable. The technology was originally developed to distribute IP television in homes using existing cabling, but is now used as a general-purpose Ethernet link where it is inconvenient or undesirable to replace existing coaxial cable with optical fiber or twisted pair cabling. MoCA 1.0 was approved in 2006, MoCA 1.1 in April 2010, MoCA 2.0 in June 2010, and MoCA 2.5 in April 2016. The most recently released version of the standard, MoCA 3.0, supports speeds of up to . Membership The Alliance currently has 45 members including pay TV operators, OEMs, CE manufacturers and IC vendors. MoCA's board of directors consists of Arris, Comcast, Cox Communications, DirecTV, Echostar, Intel, InCoax, MaxLinear and Verizon. Technology Within the scope of the Internet protocol suite, MoCA is a protocol that provides the link layer. In the 7-layer OSI model, it provides definitions within the data link layer (layer 2) and the physical layer (layer 1). DLNA approved of MoCA as a layer 2 protocol. A MoCA network can contain up to 16 nodes for MoCA 1.1 and higher, with a maximum of 8 for MoCA 1.0. The network provides a shared-medium, half-duplex link between all nodes using time-division multiplexing; within each timeslot, any pair of nodes communicates directly with each other using the highest mutually-supported version of the standard. Versions MoCA 1.0 The first version of the standard, MoCA 1.0, was ratified in 2006 and supports transmission speeds of up to 135 Mb/s. MoCA 1.1 MoCA 1.1 provides 175 Mbit/s net throughputs (275 Mbit/s PHY rate) and operates in the 500 to 1500 MHz frequency range. MoCA 2.0 MoCA 2.0 offers actual throughputs (MAC rate) up to 1 Gbit/s. Operating frequency range is 500 to 1650 MHz. Packet error rate is 1 packet error in 100 million. MoCA 2.0 also offers lower power modes of sleep and standby and is backw
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Borwein%20constant
The Erdős–Borwein constant is the sum of the reciprocals of the Mersenne numbers. It is named after Paul Erdős and Peter Borwein. By definition it is E = Σ_{n=1}^{∞} 1/(2^n − 1). Equivalent forms It can be proven that the following forms all sum to the same constant: where σ0(n) = d(n) is the divisor function, a multiplicative function that equals the number of positive divisors of the number n. To prove the equivalence of these sums, note that they all take the form of Lambert series and can thus be resummed as such. Irrationality Erdős in 1948 showed that the constant E is an irrational number. Later, Borwein provided an alternative proof. Despite its irrationality, the binary representation of the Erdős–Borwein constant may be calculated efficiently. Applications The Erdős–Borwein constant comes up in the average-case analysis of the heapsort algorithm, where it controls the constant factor in the running time for converting an unsorted array of items into a heap.
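The article notes that the binary representation of the constant can be computed efficiently; as a simpler, hedged illustration (not the article's algorithm), direct summation of the defining series already converges quickly because the terms shrink roughly geometrically:

```python
# Simple illustration: direct summation of the defining series
# E = sum over n >= 1 of 1/(2**n - 1). The terms shrink roughly like 2**-n,
# so a couple of hundred terms are ample for double precision.
from fractions import Fraction

E = sum(Fraction(1, 2**n - 1) for n in range(1, 200))
print(float(E))   # ~1.606695...
```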
https://en.wikipedia.org/wiki/Autonomous%20decentralized%20system
An autonomous decentralized system (or ADS) is a decentralized system composed of modules or components that are designed to operate independently but are capable of interacting with each other to meet the overall goal of the system. This design paradigm enables the system to continue to function in the event of component failures. It also enables maintenance and repair to be carried out while the system remains operational. Autonomous decentralized systems have a number of applications including industrial production lines, railway signalling and robotics. ADS has recently been expanded from control applications to service applications and embedded systems, giving rise to autonomous decentralized service systems and autonomous decentralized device systems. History Autonomous decentralized systems were first proposed in 1977. ADS received significant attention because such systems have been deployed safely in Japanese railway systems for many years, with over 7 billion trips, proving the value of the concept. The Japanese railway with ADS is considered a smart train, as it also learns. In recognition of this outstanding contribution, Dr. Kinji Mori has received numerous awards, including 2013 IEEE Life Fellow, 2012 Distinguished Service Award, Tokyo Metropolitan Government, 2012 Distinguished Specialist among 1000 in the world, Chinese Government, 2008 IEICE Fellow, 1995 IEEE Fellow, 1994 Research and Development Award of Excellence Achievers, Science and Technology Agency, 1994 Ichimura Industrial Prize, 1992 Technology Achievement Award, Society of Instrument and Control Engineers, 1988 National Patent Award, Science and Technology Agency, and 1988 Mainichi Technology Prize of Excellence. Dr. Mori donated the cash from the Ichimura Industrial Prize to the IEEE to fund the IEEE Kanai Award. Since 1977, ADS has been a subject of research by many researchers around the world, including in the US, Japan, the EU (particularly Germany), and China. ADS architecture An ADS is a decoupled architecture where each
https://en.wikipedia.org/wiki/Systems%20management
Systems management refers to enterprise-wide administration of distributed systems including (and commonly in practice) computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. The application performance management (APM) technologies are now a subset of Systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation and predictive analysis which is now all part of APM. Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used: For a small business startup with ten computers, automated centralized processes may take more time to learn how to use and implement than just doing the management work manually on each computer. A very large business with thousands of similar employee computers may clearly be able to save time and money, by having IT staff learn to do systems management automation. A small branch office of a large corporation may have access to a central IT staff, with the experience to set up automated management of the systems in the branch office, without need for local staff in the branch office to do the work. Systems management may involve one or more of the following tasks: Hardware inventories. Server availability monitoring and metrics. Software inventory and installation. Anti-virus and anti-malware. User's activities monitoring. Capacity monitoring. Security management. Storage management. Network capacity and utilization monitoring. Anti-manipulation management Functions Functional groups are provided according to International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common management information protocol (X.700) standard. This framework is also known as Fault, Configuration, Accounting, Performance, Security (FCAPS). Fault management Troubleshooting, error logging an
https://en.wikipedia.org/wiki/Turbulence
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow, which occurs when a fluid flows in parallel layers, with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason, turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Richard Feynman described turbulence as the most important unsolved problem in classical physics. The turbulence intensity affects many fields, for example fish ecology, air pollution, precipitation, and climate change. Examples of turbulence Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale. Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient s
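As a worked illustration of the Reynolds number criterion mentioned above, the following short Python sketch (an editorial example, not part of the article) computes Re = ρuL/μ for pipe flow; the fluid properties and the commonly quoted, approximate thresholds of about 2300 (laminar) and 4000 (turbulent) for circular pipes are assumptions made only for this example.

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * u * L / mu (dimensionless)."""
    return density * velocity * length / dynamic_viscosity

def pipe_flow_regime(re):
    # Approximate, commonly quoted thresholds for flow in a circular pipe;
    # the transition range is not sharp and depends on the setup.
    if re < 2300:
        return "laminar"
    elif re < 4000:
        return "transitional"
    return "turbulent"

if __name__ == "__main__":
    # Illustrative values: water (rho ~ 1000 kg/m^3, mu ~ 1.0e-3 Pa*s)
    # flowing at 0.5 m/s through a pipe of 0.05 m diameter.
    re = reynolds_number(density=1000.0, velocity=0.5, length=0.05,
                         dynamic_viscosity=1.0e-3)
    print(f"Re = {re:.0f} -> {pipe_flow_regime(re)}")  # Re = 25000 -> turbulent
```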
https://en.wikipedia.org/wiki/IEC%2061108
IEC 61108 is a collection of IEC standards for "Maritime navigation and radiocommunication equipment and systems - Global navigation satellite systems (GNSS)". The 61108 standards are developed in Working Group 4 (WG 4A) of Technical Committee 80 (TC80) of the IEC. Sections of IEC 61108 Standard IEC 61108 is divided into four parts: Part 1: Global positioning system (GPS) - Receiver equipment - Performance standards, methods of testing and required test results Part 2: Global navigation satellite system (GLONASS) - Receiver equipment - Performance standards, methods of testing and required test results Part 3: Galileo receiver equipment - Performance requirements, methods of testing and required test results Part 4: Shipborne DGPS and DGLONASS maritime radio beacon receiver equipment - Performance requirements, methods of testing and required test results History On 1 December 2000, the International Maritime Organization (IMO) adopted three resolutions regarding the characteristics of shipborne GNSS receivers. IMO Resolutions On 1 December 2000, the International Maritime Organization (IMO) adopted three resolutions regarding the performance standards for shipborne GNSS receivers: IMO RESOLUTION MSC.112(73) GLOBAL POSITIONING SYSTEM (GPS) RECEIVER EQUIPMENT IMO RESOLUTION MSC.113(73) GLONASS RECEIVER EQUIPMENT IMO RESOLUTION MSC.114(73) DGPS AND DGLONASS MARITIME RADIO BEACON RECEIVER EQUIPMENT IMO RESOLUTION MSC.233(82) GALILEO RECEIVER EQUIPMENT (adopted on 5 December 2006)
https://en.wikipedia.org/wiki/Tomahawk%20%28geometry%29
The tomahawk is a tool in geometry for angle trisection, the problem of splitting an angle into three equal parts. The boundaries of its shape include a semicircle and two line segments, arranged in a way that resembles a tomahawk, a Native American axe. The same tool has also been called the shoemaker's knife, but that name is more commonly used in geometry to refer to a different shape, the arbelos (a curvilinear triangle bounded by three mutually tangent semicircles). Description The basic shape of a tomahawk consists of a semicircle (the "blade" of the tomahawk), with a line segment the length of the radius extending along the same line as the diameter of the semicircle (the tip of which is the "spike" of the tomahawk), and with another line segment of arbitrary length (the "handle" of the tomahawk) perpendicular to the diameter. In order to make it into a physical tool, its handle and spike may be thickened, as long as the line segment along the handle continues to be part of the boundary of the shape. Unlike a related trisection using a carpenter's square, the other side of the thickened handle does not need to be made parallel to this line segment. In some sources a full circle rather than a semicircle is used, or the tomahawk is also thickened along the diameter of its semicircle, but these modifications make no difference to the action of the tomahawk as a trisector. Trisection To use the tomahawk to trisect an angle, it is placed with its handle line touching the apex of the angle, with the blade inside the angle, tangent to one of the two rays forming the angle, and with the spike touching the other ray of the angle. One of the two trisecting lines then lies on the handle segment, and the other passes through the center point of the semicircle. If the angle to be trisected is too sharp relative to the length of the tomahawk's handle, it may not be possible to fit the tomahawk into the angle in this way, but this difficulty may be worked around by re
https://en.wikipedia.org/wiki/Steered-response%20power
Steered-response power (SRP) is a family of acoustic source localization algorithms that can be interpreted as a beamforming-based approach that searches for the candidate position or direction that maximizes the output of a steered delay-and-sum beamformer. Steered-response power with phase transform (SRP-PHAT) is a variant using a "phase transform" to make it more robust in adverse acoustic environments. Algorithm Steered-response power Consider a system of M microphones, where each microphone is denoted by a subindex m ∈ {1, …, M}. The discrete-time output signal from a microphone is s_m(n). The (unweighted) steered-response power (SRP) at a spatial point x can be expressed as P(x) = Σ_{n∈ℤ} |Σ_{m=1}^{M} s_m(n − τ_m(x))|², where ℤ denotes the set of integer numbers and τ_m(x) would be the time-lag due to the propagation from a source located at x to the m-th microphone. The (weighted) SRP can be rewritten as P(x) = Σ_{m1=1}^{M} Σ_{m2=1}^{M} (1/2π) ∫_{−π}^{π} Φ_{m1,m2}(ω) S_{m1}(ω) S*_{m2}(ω) e^{jωτ_{m1,m2}(x)} dω, where * denotes complex conjugation, S_m(ω) represents the discrete-time Fourier transform of s_m(n) and Φ_{m1,m2}(ω) is a weighting function in the frequency domain (later discussed). The term τ_{m1,m2}(x) is the discrete time-difference of arrival (TDOA) of a signal emitted at position x to microphones m1 and m2, given by τ_{m1,m2}(x) = round(f_s (‖x − x_{m1}‖ − ‖x − x_{m2}‖)/c), where f_s is the sampling frequency of the system, c is the sound propagation speed, x_m is the position of the m-th microphone, ‖·‖ is the 2-norm and round(·) denotes the rounding operator. Generalized cross-correlation The above SRP objective function can be expressed as a sum of generalized cross-correlations (GCCs) for the different microphone pairs at the time-lag corresponding to their TDOA: P(x) = Σ_{m1=1}^{M} Σ_{m2=1}^{M} R_{m1,m2}(τ_{m1,m2}(x)), where the GCC for a microphone pair (m1, m2) is defined as R_{m1,m2}(τ) = (1/2π) ∫_{−π}^{π} Φ_{m1,m2}(ω) S_{m1}(ω) S*_{m2}(ω) e^{jωτ} dω. The phase transform (PHAT) is an effective GCC weighting for time delay estimation in reverberant environments, that forces the GCC to consider only the phase information of the involved signals: Φ_{m1,m2}(ω) = 1/|S_{m1}(ω) S*_{m2}(ω)|. Estimation of source location The SRP-PHAT algorithm consists in a grid-search procedure that evaluates the objective function on a grid of candidate source locations to estimate the spatial location of the sound source,
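The grid-search procedure described above can be sketched compactly in code. The following Python example is an editorial illustration under stated assumptions (a speed of sound of 343 m/s, synchronized microphone signals, and a user-supplied grid of candidate positions); it is not taken from the article and is not a reference implementation.

```python
import numpy as np
from itertools import combinations

C = 343.0  # assumed speed of sound in m/s

def gcc_phat(sig1, sig2, nfft):
    """GCC with phase transform; returns the correlation with lag 0 at index nfft // 2."""
    S1 = np.fft.rfft(sig1, n=nfft)
    S2 = np.fft.rfft(sig2, n=nfft)
    cross = S1 * np.conj(S2)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    return np.fft.fftshift(np.fft.irfft(cross, n=nfft))

def srp_phat(signals, mic_pos, grid, fs):
    """Evaluate the SRP-PHAT objective on a grid of candidate source positions.

    signals: (M, N) array of microphone signals
    mic_pos: (M, 3) microphone coordinates in metres
    grid:    (G, 3) candidate source positions
    Returns the grid point maximizing the sum of pairwise GCC-PHAT values.
    """
    M, N = signals.shape
    nfft = 2 * N
    center = nfft // 2
    pairs = list(combinations(range(M), 2))
    gccs = {p: gcc_phat(signals[p[0]], signals[p[1]], nfft) for p in pairs}

    scores = np.zeros(len(grid))
    for g, x in enumerate(grid):
        dists = np.linalg.norm(mic_pos - x, axis=1)
        for (m1, m2) in pairs:
            lag = int(round(fs * (dists[m1] - dists[m2]) / C))  # discrete TDOA
            if -center <= lag < center:
                scores[g] += gccs[(m1, m2)][center + lag]
    return grid[np.argmax(scores)], scores
```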
https://en.wikipedia.org/wiki/List%20of%20Euclidean%20uniform%20tilings
This table shows the 11 convex uniform tilings (regular and semiregular) of the Euclidean plane, and their dual tilings. There are three regular and eight semiregular tilings in the plane. The semiregular tilings form new tilings from their duals, each made from one type of irregular face. John Conway called these uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra. Uniform tilings are listed by their vertex configuration, the sequence of faces that exist on each vertex. For example 4.8.8 means one square and two octagons on a vertex. These 11 uniform tilings have 32 different uniform colorings. A uniform coloring allows identical sided polygons at a vertex to be colored differently, while still maintaining vertex-uniformity and transformational congruence between vertices. (Note: Some of the tiling images shown below are not color-uniform) In addition to the 11 convex uniform tilings, there are also 14 known nonconvex tilings, using star polygons, and reverse orientation vertex configurations. A further 28 uniform tilings are known using apeirogons. If zigzags are also allowed, there are 23 more known uniform tilings and 10 more known families depending on a parameter: in 8 cases the parameter is continuous, and in the other 2 it is discrete. The set is not known to be complete. Laves tilings In the 1987 book, Tilings and Patterns, Branko Grünbaum calls the vertex-uniform tilings Archimedean, in parallel to the Archimedean solids. Their dual tilings are called Laves tilings in honor of crystallographer Fritz Laves. They're also called Shubnikov–Laves tilings after Aleksei Shubnikov. John Conway called the uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra. The Laves tilings have vertices at the centers of the regular polygons, and edges connecting centers of regular polygons that share an edge. The tiles of the Laves tilings are called planigons. This includes the 3 regular tiles (triangle, square and hexagon) and
https://en.wikipedia.org/wiki/Thermoduric%20bacterium
Thermoduric bacteria are bacteria which can survive, to varying extents, the pasteurisation process. Species of bacteria which are thermoduric include Bacillus, Clostridium and Enterococci.
https://en.wikipedia.org/wiki/Virtual%20Interface%20Architecture
The Virtual Interface Architecture (VIA) is an abstract model of a user-level zero-copy network, and is the basis for InfiniBand, iWARP and RoCE. Created by Microsoft, Intel, and Compaq, the original VIA sought to standardize the interface for high-performance network technologies known as System Area Networks (SANs; not to be confused with Storage Area Networks). Networks are a shared resource. With traditional network APIs such as the Berkeley socket API, the kernel is involved in every network communication. This presents a tremendous performance bottleneck when latency is an issue. One of the classic developments in computing systems is virtual memory, a combination of hardware and software that creates the illusion of private memory for each process. In the same school of thought, a virtual network interface protected across process boundaries could be accessed at the user level. With this technology, the "consumer" manages its own buffers and communication schedule while the "provider" handles the protection. Thus, the network interface card (NIC) provides a "private network" for a process, and a process is usually allowed to have multiple such networks. The virtual interface (VI) of VIA refers to this network and is merely the destination of the user's communication requests. Communication takes place over a pair of VIs, one on each of the processing nodes involved in the transmission. In "kernel-bypass" communication, the user manages its own buffers. Another facet of traditional networks is that arriving data is placed in a pre-allocated buffer and then copied to the user-specified final destination. Copying large messages can take a long time, and so eliminating this step is beneficial. Another classic development in computing systems is direct memory access (DMA), in which a device can access main memory directly while the CPU is free to perform other tasks. In a network with "remote direct memory access" (RDMA), the sending NIC uses DMA to read data
https://en.wikipedia.org/wiki/Prehensility
Prehensility is the quality of an appendage or organ that has adapted for grasping or holding. The word is derived from the Latin term prehendere, meaning "to grasp". The ability to grasp is likely derived from a number of different origins. The most common are tree-climbing and the need to manipulate food. Examples Appendages that can become prehensile include: Uses Prehensility affords animals a great natural advantage in manipulating their environment for feeding, climbing, digging, and defense. It enables many animals, such as primates, to use tools to complete tasks that would otherwise be impossible without highly specialized anatomy. For example, chimpanzees have the ability to use sticks to obtain termites and grubs in a manner similar to human fishing. However, not all prehensile organs are applied to tool use; the giraffe tongue, for instance, is instead used in feeding and self-cleaning.
https://en.wikipedia.org/wiki/Vyatta
Vyatta is a software-based virtual router, virtual firewall and VPN product for Internet Protocol networks (IPv4 and IPv6). A free download of Vyatta has been available since March 2006. The system is a specialized Debian-based Linux distribution with networking applications such as Quagga, OpenVPN, and many others. A standardized management console, similar to Juniper JUNOS or Cisco IOS, in addition to a web-based GUI and traditional Linux system commands, provides configuration of the system and applications. In recent versions of Vyatta, the web-based management interface is supplied only in the subscription edition. However, all functionality is available through KVM, serial console or SSH/telnet protocols. The software runs on standard x86-64 servers. Vyatta is also delivered as a virtual machine file and can provide routing, firewall and VPN functionality for Xen, VMware, KVM, Rackspace, SoftLayer, and Amazon EC2 virtual and cloud computing environments. As of October 2012, Vyatta has also been available through Amazon Marketplace and can be purchased as a service to provide VPN, cloud bridging and other network functions to users of Amazon's AWS services. Vyatta sells a subscription edition that includes all the functionality of the open source version as well as a graphical user interface, access to Vyatta's RESTful APIs, Serial Support, TACACS+, Config Sync, System Image Cloning, software updates, 24x7 phone and email technical support, and training. Certification as a Vyatta Professional is now available. Vyatta also offers professional services and consulting engagements. The Vyatta system is intended as a replacement for Cisco IOS 1800 through ASR 1000 series Integrated Services Routers (ISR) and ASA 5500 security appliances, with a strong emphasis on the cost and flexibility inherent in an open source, Linux-based system running on commodity x86 hardware or in VMware ESXi, Microsoft Hyper-V, Citrix XenServer, Open Source Xen and KVM virtual environments. In 2012, Bro
https://en.wikipedia.org/wiki/Mathematics%20and%20God
Connections between mathematics and God include the use of mathematics in arguments about the existence of God and about whether belief in God is beneficial. Mathematical arguments for God's existence In the 1070s, Anselm of Canterbury, an Italian medieval philosopher and theologian, created an ontological argument which sought to use logic to prove the existence of God. A more elaborate version was given by Gottfried Leibniz in the early eighteenth century. Kurt Gödel created a formalization of Leibniz' version, known as Gödel's ontological proof. A more recent argument was made by Stephen D. Unwin in 2003, who suggested the use of Bayesian probability to estimate the probability of God's existence. Mathematical arguments for belief A common application of decision theory to the belief in God is Pascal's wager, published by Blaise Pascal in his 1669 work Pensées. The application was a defense of Christianity stating that "If God does not exist, the Atheist loses little by believing in him and gains little by not believing. If God does exist, the Atheist gains eternal life by believing and loses an infinite good by not believing". The atheist's wager has been proposed as a counterargument to Pascal's Wager. See also Existence of God Further reading Cohen, Daniel J., Equations from God: Pure Mathematics and Victorian Faith, Johns Hopkins University Press, 2007 . Livio, Mario, Is God a Mathematician?, Simon & Schuster, 2011 . Ransford, H. Chris, God and the Mathematics of Infinity: What Irreducible Mathematics Says about Godhood, Columbia University Press, 2017 .
https://en.wikipedia.org/wiki/Geometry%20From%20Africa
Geometry From Africa: Mathematical and Educational Explorations is a book in ethnomathematics by Paulus Gerdes. It analyzes the mathematics behind geometric designs and patterns from multiple African cultures, and suggests ways of connecting this analysis with the mathematics curriculum. It was published in 1999 by the Mathematical Association of America, in their Classroom Resource Materials book series. Background The book's author, Paulus Gerdes (1952–2014), was a mathematician from the Netherlands who became a professor of mathematics at the Eduardo Mondlane University in Mozambique, rector of Maputo University, and chair of the African Mathematical Union Commission on the History of Mathematics in Africa. He was a prolific author, especially of works on the ethnomathematics of Africa. However, as many of his publications were written in Portuguese, German, and French, or published only in Mozambique, this book makes his work in ethnomathematics more accessible to English-speaking mathematicians. Topics The book is heavily illustrated, and describes geometric patterns in the carvings, textiles, drawings and paintings of multiple African cultures. Although these are primarily decorative rather than mathematical, Gerdes adds his own mathematical analysis of the patterns, and suggests ways of incorporating this analysis into the mathematical curriculum. It is divided into four chapters. The first of these provides an overview of geometric patterns in many African cultures, including examples of textiles, knotwork, architecture, basketry, metalwork, ceramics, petroglyphs, facial tattoos, body painting, and hair styles. The second chapter presents examples of designs in which squares and right triangles can be formed from elements of the patterns, and suggests educational activities connecting these materials to the Pythagorean theorem and to the theory of Latin squares. For instance, basket-weavers in Mozambique form square knotted buttons out of folded ribbons, and the resul
https://en.wikipedia.org/wiki/Analog%20signal%20processing
Analog signal processing is a type of signal processing conducted on continuous analog signals by some analog means (as opposed to the discrete digital signal processing where the signal processing is carried out by a digital process). "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital" which uses a series of discrete quantities to represent the signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in the electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals represented by such physical quantities. Examples of analog signal processing include crossover filters in loudspeakers, "bass", "treble" and "volume" controls on stereos, and "tint" controls on TVs. Common analog processing elements include capacitors, resistors and inductors (as the passive elements) and transistors or opamps (as the active elements). Tools used in analog signal processing A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number in the form of s=a+ib, or s=a+jb in electrical engineering terms (electrical engineers use "j" instead of "i" because current is represented by the variable i). Input signals are usually called x(t) or X(s) and output signals are usually called y(t) or Y(s). Convolution Convolution is the basic concept in signal processing that states an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and shifted; the symbol for convolution is *. y(t) = x(t) * h(t) = ∫_a^b x(τ) h(t − τ) dτ. That is the convolution integral and is used to find the convolution of a signal and a system; typically a = -∞ and b = +∞. Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed functio
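A numerical illustration of the convolution integral (an editorial sketch, not from the article): sampling both waveforms and scaling the discrete convolution by the sampling step approximates y(t) = ∫ x(τ)h(t − τ)dτ. The rectangular input pulse, the RC time constant and the sampling rate below are arbitrary illustrative choices.

```python
import numpy as np

fs = 1000.0                     # sampling rate in Hz (illustrative)
dt = 1.0 / fs
t = np.arange(0.0, 0.1, dt)

x = (t < 0.02).astype(float)    # input: a 20 ms rectangular pulse
tau = 0.005                     # RC time constant, 5 ms (illustrative)
h = (1.0 / tau) * np.exp(-t / tau)   # impulse response of an RC low-pass system

# Discrete approximation of the convolution integral: scale by the step dt.
y = np.convolve(x, h)[: len(t)] * dt
print(y[:5])                    # output rises exponentially toward the input level
```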
https://en.wikipedia.org/wiki/Cepstrum
In Fourier analysis, the cepstrum (plural cepstra, adjective cepstral) is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech. The term cepstrum was derived by reversing the first four letters of spectrum. Operations on cepstra are labelled quefrency analysis (or quefrency alanysis), liftering, or cepstral analysis. It may be pronounced in the two ways given, the second having the advantage of avoiding confusion with kepstrum. Origin The concept of the cepstrum was introduced in 1963 by B. P. Bogert, M. J. Healy, and J. W. Tukey. It serves as a tool to investigate periodic structures in frequency spectra. Such effects are related to noticeable echoes or reflections in the signal, or to the occurrence of harmonic frequencies (partials, overtones). Mathematically it deals with the problem of deconvolution of signals in the frequency space.
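A minimal sketch of the computation described above (inverse transform of the logarithm of the magnitude spectrum), added for illustration and not part of the article; the signal length, the echo delay of 200 samples and the use of a circular shift to create the echo are hypothetical choices.

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the logarithm of the magnitude spectrum."""
    spectrum = np.fft.fft(signal)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)   # guard against log(0)
    return np.fft.ifft(log_magnitude).real

# Illustrative use: a waveform plus a delayed echo produces a cepstral peak at
# the "quefrency" (lag in samples) of the echo delay.
rng = np.random.default_rng(0)
base = rng.standard_normal(4096)
delay = 200                                   # hypothetical echo delay in samples
signal = base + 0.5 * np.roll(base, delay)    # circular echo, for simplicity
ceps = real_cepstrum(signal)
print(np.argmax(ceps[50:2048]) + 50)          # expected at (or very near) 200
```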
https://en.wikipedia.org/wiki/Institution%20of%20Electronics%20and%20Telecommunication%20Engineers
The Institution of Electronics and Telecommunication Engineers (IETE) is India's leading recognized professional society devoted to the advancement of science, technology, electronics, telecommunication and information technology. Founded in 1953, it serves more than 70,000 members through more than 60 centres and sub-centres, primarily located in India (3 abroad). The Institution provides leadership in scientific and technical areas of direct importance to national development and the economy. The Association of Indian Universities (AIU) and the Union Public Service Commission (UPSC) have recognized AMIETE and ALCCS (Advanced Level Course in Computer Science). The Government of India has recognized IETE as a Scientific and Industrial Research Organization (SIRO) and has also notified it as an educational institution of national eminence. The IETE focuses on the advancement of electronics and telecommunication technology. The IETE conducts and sponsors technical meetings, conferences, symposia, and exhibitions all over India, publishes technical and research journals and provides continuing education as well as career advancement opportunities to its members. IETE today is one of the prominent technical institutions providing education to working professionals in India and is fast expanding across the country through its more than 60 centres. Since 1953, IETE has expanded its educational activities in the areas of electronics, telecommunications, computer science and information technology. IETE conducts programs by examination, leading to DipIETE (equivalent to a Diploma in Engineering), AMIETE (equivalent to B Tech), and ALCCS (equivalent to M Tech). IETE started Dual Degree, Dual Diploma and Integrated programs in December 2011. DipIETE is a three-year, six-semester course, whereas AMIETE is a four-year, eight-semester course. IETE conducts examinations for these courses twice a year, once in June and once in December. Courses are divided into two sections, Section A and Section B. Courses of IETE are recognized
https://en.wikipedia.org/wiki/Ohm%27s%20law
Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance, one arrives at the three mathematical equations used to describe this relationship: I = V/R, V = IR, and R = V/I, where I is the current through the conductor, V is the voltage measured across the conductor and R is the resistance of the conductor. More specifically, Ohm's law states that the R in this relation is constant, independent of the current. If the resistance is not constant, the previous equation cannot be called Ohm's law, but it can still be used as a definition of static/DC resistance. Ohm's law is an empirical relation which accurately describes the conductivity of the vast majority of electrically conductive materials over many orders of magnitude of current. However some materials do not obey Ohm's law; these are called non-ohmic. The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. Ohm explained his experimental results by a slightly more complex equation than the modern form above (see below). In physics, the term Ohm's law is also used to refer to various generalizations of the law; for example the vector form of the law used in electromagnetics and material science: J = σE, where J is the current density at a given location in a resistive material, E is the electric field at that location, and σ (sigma) is a material-dependent parameter called the conductivity. This reformulation of Ohm's law is due to Gustav Kirchhoff. History In January 1781, before Georg Ohm's work, Henry Cavendish experimented with Leyden jars and glass tubes of varying diameter and length filled with salt solution. He measured the current by noting how strong a shock he felt as he completed the circuit with his body. Cavendish wrote that the
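A quick numerical illustration of the scalar forms and of the vector form (an editorial example; the resistor value, the field and the approximate conductivity of copper below are illustrative choices):

```python
import numpy as np

# Scalar forms: a 12 V source across a 6 ohm resistor (illustrative values).
V, R = 12.0, 6.0
I = V / R                                   # I = V / R  ->  2.0 A
assert abs(V - I * R) < 1e-12               # V = I * R
assert abs(R - V / I) < 1e-12               # R = V / I

# Vector form J = sigma * E, using an approximate conductivity for copper
# (about 5.96e7 S/m) and an arbitrary small electric field.
sigma = 5.96e7
E = np.array([0.01, 0.0, 0.0])              # V/m
J = sigma * E                               # A/m^2
print(I, J)
```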
https://en.wikipedia.org/wiki/Physical%20theories%20modified%20by%20general%20relativity
This article will use the Einstein summation convention. The theory of general relativity required the adaptation of existing theories of physical, electromagnetic, and quantum effects to account for non-Euclidean geometries. These physical theories modified by general relativity are described below. Classical mechanics and special relativity Classical mechanics and special relativity are lumped together here because special relativity is in many ways intermediate between general relativity and classical mechanics, and shares many attributes with classical mechanics. In the following discussion, the mathematics of general relativity is used heavily. Also, under the principle of minimal coupling, the physical equations of special relativity can be turned into their general relativity counterparts by replacing the Minkowski metric (ηab) with the relevant metric of spacetime (gab) and by replacing any partial derivatives with covariant derivatives. In the discussions that follow, the change of metrics is implied. Inertia Inertial motion is motion free of all forces. In Newtonian mechanics, the force F acting on a particle with mass m is given by Newton's second law, F = m d²r/dt², where the acceleration is given by the second derivative of position r with respect to time t. Zero force means that inertial motion is just motion with zero acceleration: d²r/dt² = 0. The idea is the same in special relativity. Using Cartesian coordinates, inertial motion is described mathematically as: d²x^a/dτ² = 0, where x^a is the position coordinate and τ is proper time. (In Newtonian mechanics, τ ≡ t, the coordinate time). In both Newtonian mechanics and special relativity, space and then spacetime are assumed to be flat, and we can construct a global Cartesian coordinate system. In general relativity, these restrictions on the shape of spacetime and on the coordinate system to be used are lost. Therefore, a different definition of inertial motion is required. In relativity, inertial motion occurs along timelike or null
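The minimal-coupling remark above (partial derivatives becoming covariant derivatives) can be checked symbolically in a simple special case: for the flat Minkowski metric in Cartesian coordinates all Christoffel symbols vanish, so covariant derivatives reduce to partial derivatives and the geodesic equation reduces to d²x^a/dτ² = 0. The following sympy sketch is an editorial illustration, not part of the article.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
g = sp.diag(-1, 1, 1, 1)          # Minkowski metric, signature (-,+,+,+)
g_inv = g.inv()

def christoffel(g, g_inv, coords):
    """Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})."""
    n = len(coords)
    Gamma = [[[sp.S.Zero] * n for _ in range(n)] for _ in range(n)]
    for a in range(n):
        for b in range(n):
            for c in range(n):
                Gamma[a][b][c] = sp.simplify(sum(
                    sp.Rational(1, 2) * g_inv[a, d] *
                    (sp.diff(g[d, b], coords[c]) +
                     sp.diff(g[d, c], coords[b]) -
                     sp.diff(g[b, c], coords[d]))
                    for d in range(n)))
    return Gamma

Gamma = christoffel(g, g_inv, coords)
# All symbols vanish for the flat metric, so geodesics satisfy d^2 x^a / dtau^2 = 0.
print(all(Gamma[a][b][c] == 0
          for a in range(4) for b in range(4) for c in range(4)))   # True
```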
https://en.wikipedia.org/wiki/Unification%20of%20theories%20in%20physics
Unification of theories about observable fundamental phenomena of nature is one of the primary goals of physics. The two great unifications to date are Isaac Newton’s unification of gravity and astronomy, and James Clerk Maxwell’s unification of electromagnetism; the latter has been further unified with the concept of electroweak interaction. This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything. Unification of gravity and astronomy The "first great unification" was Isaac Newton's 17th century unification of gravity, which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space. Unification of magnetism, electricity, light and related radiation The ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. The "second great unification" was James Clerk Maxwell's 19th century unification of electromagnetism. It brought together the understandings of the observable phenomena of magnetism, electricity and light (and more broadly, the spectrum of electromagnetic radiation). This was followed in the 20th century by Albert Einstein's unification of space and time, and of mass and energy. Later, quantum field theory unified quantum mechanics
https://en.wikipedia.org/wiki/Luca%20Turin
Luca Turin (born 20 November 1953) is a biophysicist and writer with a long-standing interest in bioelectronics, the sense of smell, perfumery, and the fragrance industry. Early life and education Turin was born in Beirut, Lebanon on 20 November 1953 into an Italian-Argentinian family, and raised in France, Italy and Switzerland. His father, Duccio Turin, was a UN diplomat and chief architect of the Palestinian refugee camps, and his mother, Adela Turin (born Mandelli), is an art historian, designer, and award-winning children's author. Turin studied Physiology and Biophysics at University College London and earned his PhD in 1978. He worked at the CNRS from 1982-1992, and served as lecturer in Biophysics at University College London from 1992-2000. Career After leaving the CNRS, Turin first held a visiting research position at the National Institutes of Health in North Carolina before moving back to London, where he became a lecturer in biophysics at University College London. In 2001 Turin was hired as CTO of start-up company Flexitral, based in Chantilly, Virginia, to pursue rational odorant design based on his theories. In April 2010 he described this role in the past tense, and the company's domain name appears to have been surrendered. In 2010, Turin was based at MIT, working on a project to develop an electronic nose using natural receptors, financed by DARPA. In 2014 he moved to the Institute of Theoretical Physics at the University of Ulm where he was a Visiting Professor. He is a Stavros Niarchos Researcher in the neurobiology division at the Biomedical Sciences Research Center Alexander Fleming in Greece. In 2021 he moved to the University of Buckingham, UK as Professor of Physiology in the Medical School. Vibration theory of olfaction A major prediction of Turin's vibration theory of olfaction is the isotope effect: that the normal and deuterated versions of a compound should smell different due to unique vibration frequencies, despite having the
https://en.wikipedia.org/wiki/Periodic%20summation
In mathematics, any integrable function s(t) can be made into a periodic function with period P by summing the translations of the function by integer multiples of P. This is called periodic summation: s_P(t) = Σ_{n=−∞}^{∞} s(t + nP). When s_P(t) is alternatively represented as a Fourier series, the Fourier coefficients are equal to the values of the continuous Fourier transform, S(f), at intervals of 1/P. That identity is a form of the Poisson summation formula. Similarly, a Fourier series whose coefficients are samples of s(t) at constant intervals (T) is equivalent to a periodic summation of S(f), which is known as a discrete-time Fourier transform. The periodic summation of a Dirac delta function is the Dirac comb. Likewise, the periodic summation of an integrable function is its convolution with the Dirac comb. Quotient space as domain If a periodic function is instead represented using the quotient space domain R/(PZ) then one can write: s_P(x) = Σ_{τ∈x} s(τ), x ∈ R/(PZ). The arguments of s_P are equivalence classes of real numbers that share the same fractional part when divided by P. See also Dirac comb Circular convolution Discrete-time Fourier transform Functions and mappings Signal processing
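A direct numerical illustration of the definition (an editorial sketch, not from the article): truncating the sum over n at a finite number of terms approximates s_P(t), and for a rapidly decaying function such as the Gaussian used below (an arbitrary choice) the truncation error is negligible.

```python
import numpy as np

def periodic_summation(s, t, period, n_terms=50):
    """Approximate s_P(t) = sum over n of s(t + n*P), truncated to |n| <= n_terms."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    n = np.arange(-n_terms, n_terms + 1)
    return np.sum(s(t[:, None] + n[None, :] * period), axis=1)

# Periodizing a narrow Gaussian (arbitrary choice) with period P = 1.
s = lambda t: np.exp(-t**2 / (2 * 0.1**2))
t = np.linspace(0.0, 1.0, 5)
s_p = periodic_summation(s, t, period=1.0)
print(np.allclose(s_p[0], s_p[-1]))   # True: s_P(0) equals s_P(1), i.e. period P
```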
https://en.wikipedia.org/wiki/Universal%20gateway
A universal gateway is a device that transacts data between two or more data sources using communication protocols specific to each. Sometimes called a universal protocol gateway, this class of product is designed as a computer appliance, and is used to connect data from one automation system to another. Typical applications Typical applications include: M2M Communications – machine to machine communications between machines from different vendors, typically using different communication protocols. This is often a requirement to optimize the performance of a production line, by effectively communicating machine states upstream and downstream of a piece of equipment. Machine idle times can trigger lower power operation. Inventory Levels can be more effectively managed on a per station basis, by knowing the upstream and downstream demands. M2E Communications – machine to enterprise communications is typically managed through database interactions. In this case, EATM technology is typically leveraged for data interoperability. However, many enterprise systems have real-time data interfaces. When real-time interfaces are involved, a universal gateway, with its ability to support many protocols simultaneously becomes the best choice. In all cases, communications can fall over many different transports, RS-232, RS-485, Ethernet, etc. Universal Gateways have the ability to communicate between protocols and over different transports simultaneously. Design Hardware platform – Industrial Computer, Embedded Computer, Computer Appliance Communications software – Software (Drivers) to support one or more Industrial Protocols. Communications is typically polled or change based. Great care is typically taken to leverage communication protocols for the most efficient transactions of data (Optimized message sizes, communications speeds, and data update rates). Typical protocols; Rockwell Automation CIP, Ethernet/IP, Siemens Industrial Ethernet, Modbus TCP. There
https://en.wikipedia.org/wiki/Differentiation%20rules
This is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus. Elementary rules of differentiation Unless otherwise stated, all functions are functions of real numbers (R) that return real values; although more generally, the formulae below apply wherever they are well defined — including the case of complex numbers (C). Constant term rule For any value of c, where c ∈ R, if f(x) is the constant function given by f(x) = c, then f′(x) = 0. Proof Let c ∈ R and f(x) = c. By the definition of the derivative, f′(x) = lim_{h→0} (f(x + h) − f(x))/h = lim_{h→0} (c − c)/h = lim_{h→0} 0 = 0. This shows that the derivative of any constant function is 0. Intuitive (geometric) explanation The derivative of the function at a point is the slope of the line tangent to the curve at the point. The slope of the constant function is zero, because the tangent line to the constant function is horizontal and its angle is zero. In other words, the value of the constant function, y, will not change as the value of x increases or decreases. Differentiation is linear For any functions f and g and any real numbers a and b, the derivative of the function h(x) = a f(x) + b g(x) with respect to x is: h′(x) = a f′(x) + b g′(x). In Leibniz's notation this is written as: d(af + bg)/dx = a df/dx + b dg/dx. Special cases include: The constant factor rule (af)′ = a f′ The sum rule (f + g)′ = f′ + g′ The difference rule (f − g)′ = f′ − g′ The product rule For the functions f and g, the derivative of the function h(x) = f(x) g(x) with respect to x is h′(x) = f′(x) g(x) + f(x) g′(x). In Leibniz's notation this is written d(fg)/dx = (df/dx) g + f (dg/dx). The chain rule The derivative of the function h(x) = f(g(x)) is h′(x) = f′(g(x)) · g′(x). In Leibniz's notation, with y = f(u) and u = g(x), this is written as: dy/dx = (dy/du)|_{u=g(x)} · (du/dx), often abridged to dy/dx = (dy/du) · (du/dx). Focusing on the notion of maps, and the differential being a map D, this is written in a more concise way as: D(f ∘ g) = (Df ∘ g) · Dg. The inverse function rule If the function f has an inverse function g, meaning that g(f(x)) = x and f(g(y)) = y, then g′ = 1/(f′ ∘ g). In Leibniz notation, this is written as dx/dy = 1/(dy/dx). Power laws, polynomials, quotients, and reciprocals The polynomial or elementary power rule If f(x) = x^r, for any real number r ≠ 0, then f′(x) = r x^(r−1). When r = 1 this becomes the special case that if f(x) = x, then f′(x) = 1. Combining the power rule with the sum and constant multiple rules permit
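The rules above can be checked symbolically; the following short sympy session (an editorial illustration, not part of the summary) verifies linearity, the product rule, the chain rule on a concrete composite function, and the power rule.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

# Linearity: d/dx (a*f + b*g) = a*f' + b*g'
a, b = sp.symbols('a b', real=True)
print(sp.diff(a * f + b * g, x))    # a*Derivative(f(x), x) + b*Derivative(g(x), x)

# Product rule: (f*g)' = f'*g + f*g'
print(sp.diff(f * g, x))            # f(x)*Derivative(g(x), x) + g(x)*Derivative(f(x), x)

# Chain rule on a concrete example: h(x) = sin(x**2) gives 2*x*cos(x**2)
print(sp.diff(sp.sin(x**2), x))

# Power rule: d/dx x**r = r*x**(r-1), up to equivalent rewriting by sympy
r = sp.symbols('r', real=True, nonzero=True)
print(sp.simplify(sp.diff(x**r, x)))
```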
https://en.wikipedia.org/wiki/ITIL%20security%20management
ITIL security management describes the structured fitting of security into an organization. ITIL security management is based on the ISO 27001 standard. "ISO/IEC 27001:2005 covers all types of organizations (e.g. commercial enterprises, government agencies, not-for profit organizations). ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization's overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof. ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to interested parties." A basic concept of security management is information security. The primary goal of information security is to control access to information. The value of the information is what must be protected. These values include confidentiality, integrity and availability. Inferred aspects are privacy, anonymity and verifiability. The goal of security management comes in two parts: Security requirements defined in service level agreements (SLA) and other external requirements that are specified in underpinning contracts, legislation and possible internal or external imposed policies. Basic security that guarantees management continuity. This is necessary to achieve simplified service-level management for information security. SLAs define security requirements, along with legislation (if applicable) and other contracts. These requirements can act as key performance indicators (KPIs) that can be used for process management and for interpreting the results of the security management process. The security management process relates to other ITIL-processes. However, in this particular section the most obvious relations are the
https://en.wikipedia.org/wiki/Index%20of%20logarithm%20articles
This is a list of logarithm topics, by Wikipedia page. See also the list of exponential topics. Acoustic power Antilogarithm Apparent magnitude Baker's theorem Bel Benford's law Binary logarithm Bode plot Henry Briggs Bygrave slide rule Cologarithm Common logarithm Complex logarithm Discrete logarithm Discrete logarithm records e Representations of e El Gamal discrete log cryptosystem Harmonic series History of logarithms Hyperbolic sector Iterated logarithm Otis King Law of the iterated logarithm Linear form in logarithms Linearithmic List of integrals of logarithmic functions Logarithmic growth Logarithmic timeline Log-likelihood ratio Log-log graph Log-normal distribution Log-periodic antenna Log-Weibull distribution Logarithmic algorithm Logarithmic convolution Logarithmic decrement Logarithmic derivative Logarithmic differential Logarithmic differentiation Logarithmic distribution Logarithmic form Logarithmic graph paper Logarithmic growth Logarithmic identities Logarithmic number system Logarithmic scale Logarithmic spiral Logarithmic timeline Logit LogSumExp Mantissa is a disambiguation page; see common logarithm for the traditional concept of mantissa; see significand for the modern concept used in computing. Matrix logarithm Mel scale Mercator projection Mercator series Moment magnitude scale John Napier Napierian logarithm Natural logarithm Natural logarithm of 2 Neper Offset logarithmic integral pH Pollard's kangaroo algorithm Pollard's rho algorithm for logarithms Polylogarithm Polylogarithmic function Prime number theorem Richter magnitude scale Grégoire de Saint-Vincent Alphonse Antonio de Sarasa Schnorr signature Semi-log graph Significand Slide rule Smearing retransformation Sound intensity level Super-logarithm Table of logarithms Weber-Fechner law Exponentials Logarithm topics
https://en.wikipedia.org/wiki/System%20analysis
System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing, communication systems and control systems. Characterization of systems A system is characterized by how it responds to input signals. In general, a system has one or more input signals and one or more output signals. Therefore, one natural characterization of systems is by how many inputs and outputs they have: SISO (Single Input, Single Output) SIMO (Single Input, Multiple Outputs) MISO (Multiple Inputs, Single Output) MIMO (Multiple Inputs, Multiple Outputs) It is often useful (or necessary) to break up a system into smaller pieces for analysis. Therefore, we can regard a SIMO system as multiple SISO systems (one for each output), and similarly for a MIMO system. By far, the greatest amount of work in system analysis has been with SISO systems, although many parts inside SISO systems have multiple inputs (such as adders). Signals can be continuous or discrete in time, as well as continuous or discrete in the values they take at any given time: Signals that are continuous in time and continuous in value are known as analog signals. Signals that are discrete in time and discrete in value are known as digital signals. Signals that are discrete in time and continuous in value are called discrete-time signals. Switched capacitor systems, for instance, are often used in integrated circuits. The methods developed for analyzing discrete time signals and systems are usually applied to digital and analog signals and systems. Signals that are continuous in time and discrete in value are sometimes seen in the timing analysis of logic circuits or PWM amplifiers, but have little to no use in system analysis. With this categ
https://en.wikipedia.org/wiki/LwIP
lwIP (lightweight IP) is a widely used open-source TCP/IP stack designed for embedded systems. lwIP was originally developed by Adam Dunkels at the Swedish Institute of Computer Science and is now developed and maintained by a worldwide network of developers. lwIP is used by many manufacturers of embedded systems, including Intel/Altera, Analog Devices, Xilinx, TI, ST and Freescale. lwIP network stack The focus of the lwIP network stack implementation is to reduce resource usage while still having a full-scale TCP stack. This makes lwIP suitable for use in embedded systems with tens of kilobytes of free RAM and room for around 40 kilobytes of code ROM. lwIP protocol implementations Aside from the TCP/IP stack, lwIP has several other important parts, such as a network interface, an operating system emulation layer, buffers and a memory management section. The operating system emulation layer and the network interface allow the network stack to be transplanted into an operating system, as they provide a common interface between lwIP code and the operating system kernel. The network stack of lwIP includes an IP (Internet Protocol) implementation at the Internet layer that can handle packet forwarding over multiple network interfaces. Both IPv4 and IPv6 have been supported in a dual-stack configuration since lwIP v2.0.0. For network maintenance and debugging, lwIP implements ICMP (Internet Control Message Protocol). IGMP (Internet Group Management Protocol) is supported for multicast traffic management, while ICMPv6 (including MLD) is implemented to support the use of IPv6. lwIP includes an implementation of IPv4 ARP (Address Resolution Protocol) and IPv6 Neighbor Discovery Protocol to support Ethernet at the data link layer. lwIP may also be operated on top of a PPP (Point-to-Point Protocol) implementation at the data link layer. At the transport layer, lwIP implements TCP (Transmission Control Protocol) with congestion control, RTT estimation and fast recovery/fast retransmit. UDP (U
https://en.wikipedia.org/wiki/Spark%20%28mathematics%29
In mathematics, more specifically in linear algebra, the spark of a matrix A is the smallest integer k such that there exists a set of k columns in A which are linearly dependent. If all the columns are linearly independent, spark(A) is usually defined to be 1 more than the number of rows. The concept of matrix spark finds applications in error-correction codes, compressive sensing, and matroid theory, and provides a simple criterion for maximal sparsity of solutions to a system of linear equations. The spark of a matrix is NP-hard to compute. Definition Formally, the spark of a matrix A is defined as follows: spark(A) = min { ‖d‖₀ : Ad = 0, d ≠ 0 }, where d is a nonzero vector and ‖d‖₀ denotes its number of nonzero coefficients (‖d‖₀ is also referred to as the size of the support of a vector). Equivalently, the spark of a matrix is the size of its smallest circuit (a subset of column indices whose columns admit a nontrivial vanishing linear combination, while every proper subset of it does not). If all the columns are linearly independent, spark(A) is usually defined to be m + 1 (if A has m rows). By contrast, the rank of a matrix is the largest number k such that some set of k columns of A is linearly independent. Example Consider the following matrix A. The spark of this matrix equals 3 because: There is no set of 1 column of A which is linearly dependent. There is no set of 2 columns of A which are linearly dependent. But there is a set of 3 columns of A which are linearly dependent. The first three columns are linearly dependent because . Properties The following simple properties hold for the spark of an m × n matrix A: spark(A) ≤ m + 1 (if the spark equals m + 1, then the matrix has full rank); spark(A) = 1 if and only if the matrix has a zero column; spark(A) ≤ rank(A) + 1. Criterion for uniqueness of sparse solutions The spark yields a simple criterion for uniqueness of sparse solutions of linear equation systems. Given a linear equation system Ax = b, if this system has a solution x that satisfies ‖x‖₀ < spark(A)/2, then this solution is the sparsest possible solution. Here ‖x‖₀ denotes the number of nonzero entries of the vector x. Lower bo
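Because the article's worked example matrix did not survive extraction, the following brute-force sketch (an editorial illustration with a hypothetical matrix, not the article's example) shows how the spark can be computed directly from the definition by testing column subsets of increasing size.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (m + 1 if none exist)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return m + 1   # no dependent subset of size <= m was found

# Hypothetical example: no zero column, no two columns proportional,
# but column 2 equals column 0 plus column 1, so the spark is 3.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(spark(A))   # 3
```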
https://en.wikipedia.org/wiki/Thermal%20design%20power
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload. Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating. Intel has introduced a new metric called scenario design power (SDP) for some Ivy Bridge Y-series processors. Calculation The average CPU power (ACP) is the power consumption of central processing units, especially server processors, under "average" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture (Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the energy consumption under high workload; it is numerically somewhat higher than the "average" ACP rating of the same processor. According to AMD the ACP rating includes the power consumption when running several benchmarks, including TPC-C, SPECcpu2006, SPECjbb2005 and STREAM Benchmark (memory bandwidth), which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures. The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system. In this case, CPUs either cause a system failure (a "therm-trip") or throttle their speed down. Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a no longer operational fan or an incorrectly mounted hea
https://en.wikipedia.org/wiki/Convergence%20research
Convergence research aims to solve complex problems by employing transdisciplinarity. While academic disciplines are useful for identifying and conveying coherent bodies of knowledge, some problems require collaboration among disciplines, both to enhance understanding of scientific phenomena and to resolve social issues. The two defining characteristics of convergence research are: 1) the nature of the problem, and 2) the collaboration among disciplines. Definition In 2016, convergence research was identified by the National Science Foundation as one of 10 Big Ideas for future investments. As defined by NSF, convergence research has two primary characteristics, namely: "Research driven by a specific and compelling problem. Convergence research is generally inspired by the need to address a specific challenge or opportunity, whether it arises from deep scientific questions or pressing societal needs. Deep integration across disciplines. As experts from different disciplines pursue common research challenges, their knowledge, theories, methods, data, research communities and languages become increasingly intermingled or integrated. New frameworks, paradigms or even disciplines can form sustained interactions across multiple communities." Examples of convergence research Biomedicine Advancing healthcare and promoting wellness to the point of providing personalized medicine will increase health and reduce costs for everyone. While recognizing the potential benefits of personalized medicine, critics cite the importance of maintaining investments in public health as highlighted by the approaches to combat the COVID-19 pandemic. Cyber-physical systems The internet of things allows all people, machines, and infrastructure to be monitored, maintained, and operated in real-time, everywhere. Because the United States Government is one of the largest users of "things", cybersecurity is critical to any effective system. STEMpathy Jobs that utilize skil
https://en.wikipedia.org/wiki/Directional%20symmetry%20%28time%20series%29
In statistical analysis of time series and in signal processing, directional symmetry is a statistical measure of a model's performance in predicting the direction of change, positive or negative, of a time series from one time period to the next. Definition Given a time series with values y_t at times t = 1, …, n and a model that makes predictions ŷ_t for those values, the directional symmetry (DS) statistic is defined as DS = (100/(n − 1)) Σ_{t=2}^{n} d_t, where d_t = 1 if (y_t − y_{t−1})(ŷ_t − ŷ_{t−1}) ≥ 0 and d_t = 0 otherwise. Interpretation The DS statistic gives the percentage of occurrences in which the sign of the change in value from one time period to the next is the same for both the actual and predicted time series. The DS statistic is a measure of the performance of a model in predicting the direction of value changes. The case DS = 100% would indicate that a model perfectly predicts the direction of change of a time series from one time period to the next. See also Statistical finance Notes and references Drossu, Radu, and Zoran Obradovic. "INFFC data analysis: lower bounds and testbed design recommendations." Computational Intelligence for Financial Engineering (CIFEr), 1997., Proceedings of the IEEE/IAFE 1997. IEEE, 1997. Lawrance, A. J., "Directionality and Reversibility in Time Series", International Statistical Review, 59 (1991), 67–79. Tay, Francis EH, and Lijuan Cao. "Application of support vector machines in financial time series forecasting." Omega 29.4 (2001): 309–317. Xiong, Tao, Yukun Bao, and Zhongyi Hu. "Beyond one-step-ahead forecasting: Evaluation of alternative multi-step-ahead forecasting models for crude oil prices." Energy Economics 40 (2013): 405–415. Symmetry Signal processing
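A direct implementation of the definition (an editorial sketch, not from the article; the ≥ 0 convention used here counts ties, i.e. zero changes, as correct, which is one of several reasonable conventions):

```python
import numpy as np

def directional_symmetry(actual, predicted):
    """DS = 100/(n-1) * number of steps where actual and predicted changes agree."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    agree = np.diff(actual) * np.diff(predicted) >= 0   # d_t indicator, >= 0 for ties
    return 100.0 * np.mean(agree)

actual    = [10.0, 11.0, 10.5, 10.8, 11.2]
predicted = [10.1, 10.9, 10.7, 10.6, 11.5]
print(directional_symmetry(actual, predicted))   # 75.0: three of four moves agree
```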
https://en.wikipedia.org/wiki/TNet
TNet is a secure top-secret-level intranet system in the White House, notably used to record information about telephone and video calls between the President of the United States and other world leaders. TNet is connected to Joint Worldwide Intelligence Communications System (JWICS), which is used more widely across different offices in the White House. Contained within TNet is an even more secure system known as NSC Intelligence Collaboration Environment (NICE). NSC Intelligence Collaboration Environment The NSC Intelligence Collaboration Environment (NICE) is a computer system operated by the United States National Security Council's Directorate for Intelligence Programs. A subdomain of TNet, it was created to enable staff to produce and store documents, such as presidential findings or decision memos, on top secret codeword activities. Due to the extreme sensitivity of the material held on it, only about 20 percent of NSC staff can reportedly access the system. The documents held on the system are tightly controlled and only specific named staff are able to access files. The system became the subject of controversy during the Trump–Ukraine scandal, when a whistleblower complaint to the Inspector General of the Intelligence Community revealed that NICE had been used to store transcripts of calls between President Donald Trump and foreign leaders, apparently to restrict access to them. The system was reportedly used for this purpose from 2017 after leaks of conversations with foreign leaders. It was said to have been upgraded in the spring of 2018 to log who had accessed particular files, as a deterrent against possible leaks. See also Classified website Intellipedia Joint Worldwide Intelligence Communications System (JWICS) NIPRNet RIPR SIPRNet
https://en.wikipedia.org/wiki/Invention%20of%20the%20integrated%20circuit
The first planar monolithic integrated circuit (IC) chip was demonstrated in 1960. The idea of integrating electronic circuits into a single device was born when the German physicist and engineer Werner Jacobi developed and patented the first known integrated transistor amplifier in 1949 and the British radio engineer Geoffrey Dummer proposed to integrate a variety of standard electronic components in a monolithic semiconductor crystal in 1952. A year later, Harwick Johnson filed a patent for a prototype IC. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other. These ideas could not be implemented by the industry until a breakthrough came in late 1958. Three people from three U.S. companies solved three fundamental problems that hindered the production of integrated circuits. Jack Kilby of Texas Instruments patented the principle of integration, created the first prototype ICs and commercialized them. Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (monolithic IC) chip. Between late 1958 and early 1959, Kurt Lehovec of Sprague Electric Company developed a way to electrically isolate components on a semiconductor crystal, using p–n junction isolation. The first monolithic IC chip was invented by Robert Noyce of Fairchild Semiconductor. He invented a way to connect the IC components (aluminium metallization) and proposed an improved version of insulation based on the planar process technology developed by Jean Hoerni. On September 27, 1960, using the ideas of Noyce and Hoerni, a group led by Jay Last at Fairchild Semiconductor created the first operational semiconductor IC. Texas Instruments, which held the patent for Kilby's invention, started a patent war, which was settled in 1966 by the agreement on cross-licensin
https://en.wikipedia.org/wiki/Biological%20tests%20of%20necessity%20and%20sufficiency
Biological tests of necessity and sufficiency refer to experimental methods and techniques that seek to test or provide evidence for specific kinds of causal relationships in biological systems. A necessary cause is one without which it would be impossible for an effect to occur, while a sufficient cause is one whose presence guarantees the occurrence of an effect. These concepts are largely based on but distinct from ideas of necessity and sufficiency in logic. Tests of necessity, among which are methods of lesioning or gene knockout, and tests of sufficiency, among which are methods of isolation or discrete stimulation of factors, have become important in current-day experimental designs, and application of these tests has led to a number of notable discoveries and findings in the biological sciences. Definitions In biological research, experiments or tests are often used to study predicted causal relationships between two phenomena. These causal relationships may be described in terms of the logical concepts of necessity and sufficiency. Consider the statement that a phenomenon x causes a phenomenon y. X would be a necessary cause of y when the occurrence of y implies that x needed to have occurred. However, the occurrence of the necessary condition x alone may not always result in y also occurring. In other words, when some factor is necessary to cause an effect, it is impossible to have the effect without the cause. X would instead be a sufficient cause of y when the occurrence of x implies that y must then occur. In other words, when some factor is sufficient to cause an effect, the presence of the cause guarantees the occurrence of the effect. However, a different cause z may also cause y, meaning that y may occur without x occurring. For a concrete example, consider the conditional statement "if an object is a square, then it has four sides". It is a necessary condition that an object has four sides if it is true that it is a square; conversely, the obj
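As a rough illustration of these two definitions, the short Python sketch below encodes the square/four-sides example as data and checks sufficiency and necessity over it; the shape records and predicate names are invented for this sketch and are not part of any standard tool.

# Hypothetical observations: which shapes are squares and which have four sides.
shapes = [
    {"name": "square",    "is_square": True,  "four_sides": True},
    {"name": "rectangle", "is_square": False, "four_sides": True},
    {"name": "triangle",  "is_square": False, "four_sides": False},
]

def sufficient(cause, effect, cases):
    # cause is sufficient for effect if every case showing the cause also shows the effect
    return all(c[effect] for c in cases if c[cause])

def necessary(cause, effect, cases):
    # cause is necessary for effect if every case showing the effect also shows the cause
    return all(c[cause] for c in cases if c[effect])

print(sufficient("is_square", "four_sides", shapes))  # True: being a square guarantees four sides
print(necessary("is_square", "four_sides", shapes))   # False: a rectangle has four sides without being a square
print(necessary("four_sides", "is_square", shapes))   # True: a square cannot occur without four sides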
https://en.wikipedia.org/wiki/Outline%20of%20probability
Probability is a measure of the likeliness that an event will occur. Probability is used to quantify an attitude of mind towards some proposition whose truth is not certain. The proposition of interest is usually of the form "A specific event will occur." The attitude of mind is of the form "How certain is it that the event will occur?" The certainty that is adopted can be described in terms of a numerical measure, and this number, between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty) is called the probability. Probability theory is used extensively in statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems. Introduction Probability and randomness. Basic probability (Related topics: set theory, simple theorems in the algebra of sets) Events Events in probability theory Elementary events, sample spaces, Venn diagrams Mutual exclusivity Elementary probability The axioms of probability Boole's inequality Meaning of probability Probability interpretations Bayesian probability Frequency probability Calculating with probabilities Conditional probability The law of total probability Bayes' theorem Independence Independence (probability theory) Probability theory (Related topics: measure theory) Measure-theoretic probability Sample spaces, σ-algebras and probability measures Probability space Sample space Standard probability space Random element Random compact set Dynkin system Probability axioms Event (probability theory) Complementary event Elementary event "Almost surely" Independence Independence (probability theory) The Borel–Cantelli lemmas and Kolmogorov's zero–one law Conditional probability Conditional probability Conditioning (probability) Conditional expectation Conditional probability distribution Regular conditional probability Disintegration theorem Bayes' theorem Rule of succession Condition
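As a small worked example of the conditional-probability and Bayes' theorem entries listed above, the Python sketch below applies the law of total probability and Bayes' theorem to an invented screening-test scenario; all of the numbers are hypothetical.

p_disease = 0.01            # P(D): prior probability of the condition
p_pos_given_disease = 0.95  # P(+ | D): test sensitivity
p_pos_given_healthy = 0.05  # P(+ | not D): false-positive rate

# Law of total probability: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(D|+) = P(+|D)P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161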
https://en.wikipedia.org/wiki/Formulario%20mathematico
Formulario Mathematico (Latino sine flexione: Formulary for Mathematics) is a book by Giuseppe Peano which expresses fundamental theorems of mathematics in a symbolic language developed by Peano. The author was assisted by Giovanni Vailati, Mario Pieri, Alessandro Padoa, Giovanni Vacca, Vincenzo Vivanti, Gino Fano and Cesare Burali-Forti. The Formulario was first published in 1894. The fifth and last edition was published in 1908. Hubert Kennedy wrote "the development and use of mathematical logic is the guiding motif of the project". He also explains the variety of Peano's publication under the title: the five editions of the Formulario [are not] editions in the usual sense of the word. Each is essentially a new elaboration, although much material is repeated. Moreover, the title and language varied: the first three, titled Formulaire de Mathématiques, and the fourth, titled, Formulaire Mathématique, were written in French, while Latino sine flexione, Peano's own invention, was used for the fifth edition, titled Formulario Mathematico. ... Ugo Cassina lists no less than twenty separately published items as being parts of the 'complete' Formulario! Peano believed that students needed only precise statement of their lessons. He wrote: Each professor will be able to adopt this Formulario as a textbook, for it ought to contain all theorems and all methods. His teaching will be reduced to showing how to read the formulas, and to indicating to the students the theorems that he wishes to explain in his course. Such a dismissal of the oral tradition in lectures at universities was the undoing of Peano's own teaching career. Notes
https://en.wikipedia.org/wiki/Signal%20chain
Signal chain, or signal-processing chain is a term used in signal processing and mixed-signal system design to describe a series of signal-conditioning electronic components that receive input (data acquired from sampling either real-time phenomena or from stored data) sequentially, with the output of one portion of the chain supplying input to the next. Signal chains are often used in signal processing applications to gather and process data or to apply system controls based on analysis of real-time phenomena. Definition This definition comes from common usage in the electronics industry and can be derived from definitions of its parts: Signal: "The event, phenomenon, or electrical quantity, that conveys information from one point to another". Chain: "1. Any series of items linked together. 2. Pertaining to a routine consisting of segments which are run through the computer in tandem, only one segment being within the computer at any one time and each segment using the output from the previous program as its input". The concept of a signal chain is familiar to electrical engineers, but the term has many synonyms such as circuit topology. The goal of any signal chain is to process a variety of signals to monitor or control an analog-, digital-, or analog-digital system. See also Audio signal flow Daisy chain (electrical engineering) Feedback
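A minimal Python sketch of the idea, with the output of each stage supplying the input of the next; the particular stages (offset removal, gain, quantization) are invented examples rather than components of any specific signal chain.

def remove_offset(samples, offset=0.5):
    # subtract a fixed DC offset from every sample
    return [s - offset for s in samples]

def amplify(samples, gain=10.0):
    # apply a constant gain
    return [s * gain for s in samples]

def quantize(samples, step=0.1):
    # round each sample to the nearest quantization step
    return [round(s / step) * step for s in samples]

def run_chain(samples, stages):
    for stage in stages:   # the output of one stage is the input of the next
        samples = stage(samples)
    return samples

acquired = [0.51, 0.53, 0.49]   # e.g. raw sampled sensor readings
print(run_chain(acquired, [remove_offset, amplify, quantize]))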
https://en.wikipedia.org/wiki/Phoenix%20network%20coordinates
Phoenix is a decentralized network coordinate system based on the matrix factorization model. Background Network coordinate (NC) systems are an efficient mechanism for internet distance (round-trip latency) prediction with scalable measurements. For a network with N hosts, by performing O(N) measurements, all N*N distances can be predicted. Use cases: Vuze BitTorrent, application layer multicast, PeerWise overlay, multi-player online gaming. Triangle inequality violations (TIVs) are widespread on the Internet due to the current sub-optimal internet routing. Model Most of the prior NC systems use the Euclidean distance model, i.e. embed N hosts into a d-dimensional Euclidean space Rd. Due to the wide existence of TIVs on the internet, the prediction accuracy of such systems is limited. Phoenix uses a matrix factorization (MF) model, which is not constrained by the triangle inequality. The linear dependence among the rows motivates the factorization of the internet distance matrix, i.e. for a system with N internet nodes, the N×N internet distance matrix D can be factorized into two smaller matrices, D ≈ XY, where X and Y are N×d and d×N matrices respectively (d << N). This matrix factorization is essentially a problem of linear dimensionality reduction and Phoenix tries to solve it in a distributed way. Design choices in Phoenix Different from the existing MF based NC systems such as IDES and DMF, Phoenix introduces a weight to each reference NC and trusts the NCs with higher weight values more than the others. The weight-based mechanism can substantially reduce the impact of error propagation. For node discovery, Phoenix uses a distributed scheme, so-called peer exchange (PEX), which is used in the BitTorrent protocol. The usage of PEX reduces the load of the tracker, while still ensuring the prediction accuracy under node churn. Similar to DMF, to avoid the potential drift of the NCs, regularization is introduced in the NC calculation. NCShield is a decentralized, gossip-based trust an
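As a rough, centralized illustration of the matrix-factorization model (not Phoenix's distributed, weighted algorithm), the NumPy sketch below factors a synthetic rank-d matrix D into N×d and d×N factors with a truncated SVD; the sizes and random data are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 2
D = rng.random((N, d)) @ rng.random((d, N))   # synthetic rank-d "distance" matrix

U, s, Vt = np.linalg.svd(D)
X = U[:, :d] * s[:d]          # N x d factor
Y = Vt[:d, :]                 # d x N factor
print(np.allclose(D, X @ Y))  # True: the rank-d matrix is recovered exactly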
https://en.wikipedia.org/wiki/Biological%20process
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples. Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes. Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule. Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature Organization: being structurally composed of one or more cells – the basic units of life Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life. Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter. Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis. Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms. Interaction between organisms. the processes
https://en.wikipedia.org/wiki/Optogenetics
Optogenetics is a biological technique to control the activity of neurons or other cell types with light. This is achieved by expression of light-sensitive ion channels, pumps or enzymes specifically in the target cells. On the level of individual cells, light-activated enzymes and transcription factors allow precise control of biochemical signaling pathways. In systems neuroscience, the ability to control the activity of a genetically defined set of neurons has been used to understand their contribution to decision making, learning, fear memory, mating, addiction, feeding, and locomotion. In a first medical application of optogenetic technology, vision was partially restored in a blind patient. Optogenetic techniques have also been introduced to map the functional connectivity of the brain. By altering the activity of genetically labelled neurons with light and using imaging and electrophysiology techniques to record the activity of other cells, researchers can identify the statistical dependencies between cells and brain regions. In a broader sense, optogenetics also includes methods to record cellular activity with genetically encoded indicators. In 2010, optogenetics was chosen as the "Method of the Year" across all fields of science and engineering by the interdisciplinary research journal Nature Methods. At the same time, optogenetics was highlighted in the article on "Breakthroughs of the Decade" in the academic research journal Science. History In 1979, Francis Crick suggested that controlling all cells of one type in the brain, while leaving the others more or less unaltered, is a real challenge for neuroscience. Francis Crick speculated that a technology using light might be useful to control neuronal activity with temporal and spatial precision but at the time there was no technique to make neurons responsive to light. By early 1990s LC Katz and E Callaway had shown that light could uncage glutamate. Heberle and Büldt in 1994 had already shown fun
https://en.wikipedia.org/wiki/Graduate%20Studies%20in%20Mathematics
Graduate Studies in Mathematics (GSM) is a series of graduate-level textbooks in mathematics published by the American Mathematical Society (AMS). The books in this series are published in hardcover and e-book formats. List of books 1 The General Topology of Dynamical Systems, Ethan Akin (1993, ) 2 Combinatorial Rigidity, Jack Graver, Brigitte Servatius, Herman Servatius (1993, ) 3 An Introduction to Gröbner Bases, William W. Adams, Philippe Loustaunau (1994, ) 4 The Integrals of Lebesgue, Denjoy, Perron, and Henstock, Russell A. Gordon (1994, ) 5 Algebraic Curves and Riemann Surfaces, Rick Miranda (1995, ) 6 Lectures on Quantum Groups, Jens Carsten Jantzen (1996, ) 7 Algebraic Number Fields, Gerald J. Janusz (1996, 2nd ed., ) 8 Discovering Modern Set Theory. I: The Basics, Winfried Just, Martin Weese (1996, ) 9 An Invitation to Arithmetic Geometry, Dino Lorenzini (1996, ) 10 Representations of Finite and Compact Groups, Barry Simon (1996, ) 11 Enveloping Algebras, Jacques Dixmier (1996, ) 12 Lectures on Elliptic and Parabolic Equations in Hölder Spaces, N. V. Krylov (1996, ) 13 The Ergodic Theory of Discrete Sample Paths, Paul C. Shields (1996, ) 14 Analysis, Elliott H. Lieb, Michael Loss (2001, 2nd ed., ) 15 Fundamentals of the Theory of Operator Algebras. Volume I: Elementary Theory, Richard V. Kadison, John R. Ringrose (1997, ) 16 Fundamentals of the Theory of Operator Algebras. Volume II: Advanced Theory, Richard V. Kadison, John R. Ringrose (1997, ) 17 Topics in Classical Automorphic Forms, Henryk Iwaniec (1997, ) 18 Discovering Modern Set Theory. II: Set-Theoretic Tools for Every Mathematician, Winfried Just, Martin Weese (1997, ) 19 Partial Differential Equations, Lawrence C. Evans (2010, 2nd ed., ) 20 4-Manifolds and Kirby Calculus, Robert E. Gompf, András I. Stipsicz (1999, ) 21 A Course in Operator Theory, John B. Conway (2000, ) 22 Growth of Algebras and Gelfand-Kirillov Dimension, Günter R. Krause, Thomas H. Lenagan (2000, Revised ed., ) 23 Foliation
https://en.wikipedia.org/wiki/Compliant%20bonding
Compliant bonding is used to connect gold wires to electrical components such as integrated circuit "chips". It was invented by Alexander Coucoulas in the 1960s. The bond is formed well below the melting point of the mating gold surfaces and is therefore referred to as a solid-state type bond. The compliant bond is formed by transmitting heat and pressure to the bond region through a relatively thick indentable or compliant medium, generally an aluminum tape (Figure 1). Comparison with other solid state bond methods Solid-state or pressure bonds form permanent bonds between a gold wire and a gold metal surface by bringing their mating surfaces in intimate contact at about 300 °C which is well below their respective melting points of 1064 °C, hence the term solid-state bonds. Two commonly used methods of forming this type of bond are thermocompression bonding and thermosonic bonding. Both of these processes form the bonds with a hard faced bonding tool that makes direct contact to deform the gold wires against the gold mating surfaces (Figure 2). Since gold is the only metal that does not form an oxide coating which can interfere with making a reliable metal to metal contact, gold wires are widely used to make these important wire connections in the field of microelectronic packaging. During the compliant bonding cycle the bond pressure is uniquely controlled by the inherent flow properties of the aluminum compliant tape (Figure 3). Therefore, if higher bond pressures are needed to increase the final deformation (flatness) of a compliant bonded gold wire, a higher yielding alloy of aluminum could be employed. The use of a compliant medium also overcomes the thickness variations when attempting to bond a multiple number of conductor wires simultaneously to a gold metalized substrate (Figure 4). It also prevents the leads from being excessively deformed since the compliant member deforms around the leads during the bonding cycle thus eliminating mechanical failur
https://en.wikipedia.org/wiki/In%20situ%20hybridization
In situ hybridization (ISH) is a type of hybridization that uses a labeled complementary DNA, RNA or modified nucleic acids strand (i.e., probe) to localize a specific DNA or RNA sequence in a portion or section of tissue (in situ) or if the tissue is small enough (e.g., plant seeds, Drosophila embryos), in the entire tissue (whole mount ISH), in cells, and in circulating tumor cells (CTCs). This is distinct from immunohistochemistry, which usually localizes proteins in tissue sections. In situ hybridization is used to reveal the location of specific nucleic acid sequences on chromosomes or in tissues, a crucial step for understanding the organization, regulation, and function of genes. The key techniques currently in use include in situ hybridization to mRNA with oligonucleotide and RNA probes (both radio-labeled and hapten-labeled), analysis with light and electron microscopes, whole mount in situ hybridization, double detection of RNAs and RNA plus protein, and fluorescent in situ hybridization to detect chromosomal sequences. DNA ISH can be used to determine the structure of chromosomes. Fluorescent DNA ISH (FISH) can, for example, be used in medical diagnostics to assess chromosomal integrity. RNA ISH (RNA in situ hybridization) is used to measure and localize RNAs (mRNAs, lncRNAs, and miRNAs) within tissue sections, cells, whole mounts, and circulating tumor cells (CTCs). In situ hybridization was invented by American biologists Mary-Lou Pardue and Joseph G. Gall. Challenges of in-situ hybridization In situ hybridization is a powerful technique for identifying specific mRNA species within individual cells in tissue sections, providing insights into physiological processes and disease pathogenesis. However, in situ hybridization requires that many steps be taken with precise optimization for each tissue examined and for each probe used. In order to preserve the target mRNA within tissues, it is often required that crosslinking fixatives (such as formaldehyde
https://en.wikipedia.org/wiki/RAID%20processing%20unit
A RAID processing unit (RPU) is an integrated circuit that performs specialized calculations in a RAID host adapter. XOR calculations, for example, are necessary for calculating parity data, and for maintaining data integrity when writing to a disk array that uses a parity drive or data striping. An RPU may perform these calculations more efficiently than the computer's central processing unit (CPU).
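A minimal Python sketch of the XOR parity calculation that such a unit offloads: the parity block is the bytewise XOR of the data blocks, so any single missing block can be rebuilt from the remaining blocks and the parity. The block contents are arbitrary examples.

def xor_blocks(blocks):
    # bytewise XOR of equally sized blocks
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_blocks(data)

# Rebuild the second block from the parity and the remaining blocks.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True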
https://en.wikipedia.org/wiki/Mathematical%20table
Mathematical tables are lists of numbers showing the results of a calculation with varying arguments. Trigonometric tables were used in ancient Greece and India for applications to astronomy and celestial navigation, and continued to be widely used until electronic calculators became cheap and plentiful, in order to simplify and drastically speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks, and specialized tables were published for numerous applications. History and use The first tables of trigonometric functions known to be made were by Hipparchus (c.190 – c.120 BCE) and Menelaus (c.70–140 CE), but both have been lost. Along with the surviving table of Ptolemy (c. 90 – c.168 CE), they were all tables of chords and not of half-chords, that is, the sine function. The table produced by the Indian mathematician Āryabhaṭa (476–550 CE) is considered the first sine table ever constructed. Āryabhaṭa's table remained the standard sine table of ancient India. There were continuous attempts to improve the accuracy of this table, culminating in the discovery of the power series expansions of the sine and cosine functions by Madhava of Sangamagrama (c.1350 – c.1425), and the tabulation of a sine table by Madhava with values accurate to seven or eight decimal places. Tables of common logarithms were used until the invention of computers and electronic calculators to do rapid multiplications, divisions, and exponentiations, including the extraction of nth roots. Mechanical special-purpose computers known as difference engines were proposed in the 19th century to tabulate polynomial approximations of logarithmic functions – that is, to compute large logarithmic tables. This was motivated mainly by errors in logarithmic tables made by the human computers of the time. Early digital computers were developed during World War II in part to produce specialized mathematical tables for aiming artillery. From 1972 onwards,
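For comparison with the hand-computed tables described above, a few lines of Python are enough to tabulate the sine function today; the 15-degree step and seven decimal places are arbitrary choices for illustration.

import math

for degrees in range(0, 91, 15):
    # print the angle and its sine, the kind of entry a printed table would list
    print(f"{degrees:3d}  {math.sin(math.radians(degrees)):.7f}")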
https://en.wikipedia.org/wiki/Qualitative%20property
Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted to quantitative properties which have numerical characteristics. Some engineering and scientific properties are qualitative. A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement. The data that all share a qualitative property form a nominal category. A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable. In businesses Some important qualitative properties that concern businesses are: Human factors, 'human work capital' is probably one of the most important issues that deals with qualitative properties. Some common aspects are work, motivation, general participation, etc. Although all of these aspects are not measurable in terms of quantitative criteria, the general overview of them could be summarized as a quantitative property. Environmental issues are in some cases quantitatively measurable, but other properties are qualitative e.g.: environmentally friendly manufacturing, responsibility for the entire life of a product (from the raw-material till scrap), attitudes towards safety, efficiency, and minimum waste production. Ethical issues are closely related to environmental and human issues, and may be covered in corporate governance. Child labour and illegal dumping of waste are examples of ethical issues. The way a company deals with its stockholders (the 'acting' of a company) is probably the most obvious qualitative aspect of a business. Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel treated. This indicates that qualitative properties are closely related to emotiona
https://en.wikipedia.org/wiki/Microcosm%20%28experimental%20ecosystem%29
Microcosms are artificial, simplified ecosystems that are used to simulate and predict the behaviour of natural ecosystems under controlled conditions. Open or closed microcosms provide an experimental area for ecologists to study natural ecological processes. Microcosm studies can be very useful to study the effects of disturbance or to determine the ecological role of key species. A Winogradsky column is an example of a microbial microcosm. See also Closed ecological system Ecologist Howard T. Odum was a pioneer in his use of small closed and open ecosystems in classroom teaching. Biosphere 2 - Controversial project with a 1.27 ha artificial closed ecological system in Oracle, Arizona (USA).
https://en.wikipedia.org/wiki/Autodyne
The autodyne circuit was an improvement to radio signal amplification using the De Forest Audion vacuum tube amplifier. By allowing the tube to oscillate at a frequency slightly different from the desired signal, the sensitivity over other receivers was greatly improved. The autodyne circuit was invented by Edwin Howard Armstrong of Columbia University, New York, NY. He inserted a tuned circuit in the output circuit of the Audion vacuum tube amplifier. By adjusting the tuning of this tuned circuit, Armstrong was able to dramatically increase the gain of the Audion amplifier. Further increase in tuning resulted in the Audion amplifier reaching self-oscillation. This oscillating receiver circuit meant that the then latest technology continuous wave (CW) transmissions could be demodulated. Previously only spark, interrupted continuous wave (ICW, signals which were produced by a motor chopping or turning the signal on and off at an audio rate), or modulated continuous wave (MCW), could produce intelligible output from a receiver. When the autodyne oscillator was advanced to self-oscillation, continuous wave Morse code dots and dashes would be clearly heard from the headphones as short or long periods of sound of a particular tone, instead of an all but impossible to decode series of thumps. Spark and chopped CW (ICW) were amplitude modulated signals which didn't require an oscillating detector. Such a regenerative circuit is capable of receiving weak signals, if carefully coupled to an antenna. Antenna coupling interacts with tuning, making optimum adjustments difficult. Heterodyne detection Damped wave transmission Early transmitters emitted damped waves, which were radio frequency sine wave bursts of a number of cycles duration, of decreasing amplitude with each cycle. These bursts recurred at an audio frequency rate, producing an amplitude modulated transmission. The damped waves were a result of the available technologies to generate radio frequencies. See
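A small numerical sketch of why offsetting the local oscillation makes continuous-wave signals audible: mixing the incoming carrier with an oscillation a few hundred hertz away produces an audio-frequency difference (beat) tone. The frequencies below are arbitrary illustrative values, not Armstrong's.

import math

f_signal, f_local = 500_000.0, 500_800.0   # Hz: carrier and offset local oscillation
print(abs(f_local - f_signal))             # 800.0 Hz: the tone heard in the headphones

# Product-to-sum identity behind the beat: cos(a)cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b);
# a low-pass filter (or the headphones themselves) keeps only the difference term.
t = 1.25e-6
a, b = 2 * math.pi * f_signal * t, 2 * math.pi * f_local * t
print(abs(math.cos(a) * math.cos(b) - (0.5 * math.cos(a - b) + 0.5 * math.cos(a + b))) < 1e-12)  # True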
https://en.wikipedia.org/wiki/UIP%20%28software%29
The uIP is an open-source implementation of the TCP/IP network protocol stack intended for use with tiny 8- and 16-bit microcontrollers. It was initially developed by Adam Dunkels of the Networked Embedded Systems group at the Swedish Institute of Computer Science, licensed under a BSD style license, and further developed by a wide group of developers. uIP can be very useful in embedded systems because it requires very small amounts of code and RAM. It has been ported to several platforms, including DSP platforms. In October 2008, Cisco, Atmel, and SICS announced a fully compliant IPv6 extension to uIP, called uIPv6. Implementation uIP makes many unusual design choices in order to reduce the resources it requires. uIP's native software interface is designed for small computer systems with no operating system. It can be called in a timed loop, and the call manages all the retries and other network behavior. The hardware driver is called after uIP is called. uIP builds the packet, and then the driver sends it, and optionally receives a response. It is normal for IP protocol stack software to keep many copies of different IP packets, for transmission, reception and to keep copies in case they need to be resent. uIP is economical in its use of memory because it uses only one packet buffer. First, it uses the packet buffer in a half-duplex way, using it in turn for transmission and reception. Also, when uIP needs to retransmit a packet, it calls the application code in a way that requests for the previous data to be reproduced. Another oddity is how uIP manages connections. Most IP implementations have one task per connection, and the task communicates with a task in a distant computer on the other end of the connection. In uIP, no multitasking operating system is assumed. Connections are held in an array. On each call, uIP tries to serve a connection, making a subroutine call to application code that responds to, or sends data. The size of the connection ar
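The following is a loose conceptual sketch, written in Python rather than uIP's actual C API, of the polled single-buffer design described above: one packet buffer shared by transmission and reception, connections held in an array, and an application callback that must reproduce previously sent data when a retransmission is requested. All names and structures here are invented for illustration.

packet_buffer = bytearray(1500)   # the single buffer shared by send and receive
connections = [{"id": 0, "unacked": b""}, {"id": 1, "unacked": b""}]

def application(conn, event):
    # On "poll" the application may queue new data; on "rexmit" it must reproduce
    # the previously sent data, because the stack keeps no copy of its own.
    if event == "rexmit":
        return conn["unacked"]
    data = f"hello from connection {conn['id']}".encode()
    conn["unacked"] = data
    return data

def periodic_call():
    # called from a timed loop; serves each connection in turn
    for conn in connections:
        outgoing = application(conn, "poll")
        packet_buffer[:len(outgoing)] = outgoing
        # ...a hardware driver would transmit packet_buffer here...

periodic_call()
print(bytes(packet_buffer[:23]))   # data of the last connection served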
https://en.wikipedia.org/wiki/Summation
In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined. Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article. The summation of an explicit sequence is denoted as a succession of additions. For example, summation of is denoted , and results in 9, that is, . Because addition is associative and commutative, there is no need of parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0. Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as . Otherwise, summation is denoted by using Σ notation, where is an enlarged capital Greek letter sigma. For example, the sum of the first natural numbers can be denoted as For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example, Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article. Notation Capital-sigma notation Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, , an enlarged form of the upright capital Greek l
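As a quick check of the kind of closed-form result mentioned above, the snippet below compares direct summation of the first n natural numbers with the formula n(n + 1)/2.

n = 100
direct = sum(range(1, n + 1))     # 1 + 2 + ... + 100
closed_form = n * (n + 1) // 2
print(direct, closed_form, direct == closed_form)   # 5050 5050 True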
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20gamma%20function
The gamma function is an important special function in mathematics. Its particular values can be expressed in closed form for integer and half-integer arguments, but no simple expressions are known for the values at rational points in general. Other fractional arguments can be approximated through efficient infinite products, infinite series, and recurrence relations. Integers and half-integers For positive integer arguments, the gamma function coincides with the factorial. That is, Γ(n) = (n − 1)! for every positive integer n, and hence Γ(1) = 1, Γ(2) = 1, Γ(3) = 2, Γ(4) = 6, and so on. For non-positive integers, the gamma function is not defined. For positive half-integers, the function values are given exactly by Γ(n + 1/2) = ((2n − 1)!!/2^n)√π for non-negative integer values of n, where (2n − 1)!! denotes the double factorial. In particular, Γ(1/2) = √π ≈ 1.7724539, Γ(3/2) = (1/2)√π ≈ 0.8862269, Γ(5/2) = (3/4)√π ≈ 1.3293404, Γ(7/2) = (15/8)√π ≈ 3.3233510, and by means of the reflection formula, Γ(−1/2) = −2√π ≈ −3.5449077, Γ(−3/2) = (4/3)√π ≈ 2.3632718, Γ(−5/2) = −(8/15)√π ≈ −0.9453087. General rational argument In analogy with the half-integer formula, where denotes the th multifactorial of . Numerically, . As tends to infinity, where is the Euler–Mascheroni constant and denotes asymptotic equivalence. It is unknown whether these constants are transcendental in general, but and were shown to be transcendental by G. V. Chudnovsky. has also long been known to be transcendental, and Yuri Nesterenko proved in 1996 that , , and are algebraically independent. The number is related to the lemniscate constant by and it has been conjectured by Gramain that where is the Masser–Gramain constant , although numerical work by Melquiond et al. indicates that this conjecture is false. Borwein and Zucker have found that can be expressed algebraically in terms of , , , , and where is a complete elliptic integral of the first kind. This permits efficiently approximating the gamma function of rational arguments to high precision using quadratically convergent arithmetic–geometric mean iterations. For example: No similar relations are known for or other denominator
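The integer and half-integer values quoted above are easy to check numerically with the standard-library gamma function; the lines below verify a few of them.

import math

print(math.isclose(math.gamma(5), math.factorial(4)))              # Γ(5) = 4! = 24
print(math.isclose(math.gamma(0.5), math.sqrt(math.pi)))           # Γ(1/2) = √π
print(math.isclose(math.gamma(1.5), math.sqrt(math.pi) / 2))       # Γ(3/2) = √π/2
print(math.isclose(math.gamma(2.5), 3 * math.sqrt(math.pi) / 4))   # Γ(5/2) = 3√π/4
print(math.isclose(math.gamma(-0.5), -2 * math.sqrt(math.pi)))     # Γ(−1/2) = −2√π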
https://en.wikipedia.org/wiki/Creation%20and%20evolution%20in%20public%20education
The status of creation and evolution in public education has been the subject of substantial debate and conflict in legal, political, and religious circles. Globally, there are a wide variety of views on the topic. Most western countries have legislation that mandates only evolutionary biology is to be taught in the appropriate scientific syllabuses. Overview While many Christian denominations do not raise theological objections to the modern evolutionary synthesis as an explanation for the present forms of life on planet Earth, various socially conservative, traditionalist, and fundamentalist religious sects and political groups within Christianity and Islam have objected vehemently to the study and teaching of biological evolution. Some adherents of these Christian and Islamic religious sects or political groups are passionately opposed to the consensus view of the scientific community. Literal interpretations of religious texts are the greatest cause of conflict with evolutionary and cosmological investigations and conclusions. Internationally, biological evolution is taught in science courses with limited controversy, with the exception of a few areas of the United States and several Muslim-majority countries, primarily Turkey. In the United States, the Supreme Court has ruled the teaching of creationism as science in public schools to be unconstitutional, irrespective of how it may be purveyed in theological or religious instruction. In the United States, intelligent design (ID) has been represented as an alternative explanation to evolution in recent decades, but its "demonstrably religious, cultural, and legal missions" have been ruled unconstitutional by a lower court. By country Australia Although creationist views are popular among religious education teachers and creationist teaching materials have been distributed by volunteers in some schools, many Australian scientists take an aggressive stance supporting the right of teachers to teach the theory
https://en.wikipedia.org/wiki/Live%20crown
The live crown is the top part of a tree, the part that has green leaves (as opposed to the bare trunk, bare branches, and dead leaves). The ratio of the size of a tree's live crown to its total height is used in estimating its health and its level of competition with neighboring trees. Trees Biology terminology Sustainable forest management
https://en.wikipedia.org/wiki/UniPro%20protocol%20stack
In mobile-telephone technology, the UniPro protocol stack follows the architecture of the classical OSI Reference Model. In UniPro, the OSI Physical Layer is split into two sublayers: Layer 1 (the actual physical layer) and Layer 1.5 (the PHY Adapter layer) which abstracts from differences between alternative Layer 1 technologies. The actual physical layer is a separate specification as the various PHY options are reused in other MIPI Alliance specifications. The UniPro specification itself covers Layers 1.5, 2, 3, 4 and the DME (Device Management Entity). The Application Layer (LA) is out of scope because different uses of UniPro will require different LA protocols. The Physical Layer (L1) is covered in separate MIPI specifications in order to allow the PHY to be reused by other (less generic) protocols if needed. OSI Layers 5 (Session) and 6 (Presentation) are, where applicable, counted as part of the Application Layer. Physical Layer (L1) D-PHY Versions 1.0 and 1.1 of UniPro use MIPI's D-PHY technology for the off-chip Physical Layer. This PHY allows inter-chip communication. Data rates of the D-PHY are variable, but are in the range of 500-1000 Mbit/s (lower speeds are supported, but at decreased power efficiency). The D-PHY was named after the Roman number for 500 ("D"). The D-PHY uses differential signaling to convey PHY symbols over micro-stripline wiring. A second differential signal pair is used to transmit the associated clock signal from the source to the destination. The D-PHY technology thus uses a total of 2 clock wires per direction plus 2 signal wires per lane and per direction. For example, a D-PHY might use 2 wires for the clock and 4 wires (2 lanes) for the data in the forward direction, but 2 wires for the clock and 6 wires (3 lanes) for the data in the reverse direction. Data traffic in the forward and reverse directions are totally independent at this level of the protocol stack. In UniPro, the D-PHY is used in a mode (called "8b9b" encod
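A small calculation that reproduces the wire counts quoted above (one differential clock pair per direction plus one differential pair per lane per direction); the 2-lane/3-lane split is just the example configuration from the text.

def dphy_wires(lanes_forward, lanes_reverse):
    # 2 clock wires per direction + 2 data wires per lane per direction
    forward = 2 + 2 * lanes_forward
    reverse = 2 + 2 * lanes_reverse
    return forward, reverse

print(dphy_wires(2, 3))   # (6, 8): 2+4 wires forward, 2+6 wires reverse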
https://en.wikipedia.org/wiki/Mariam%20Nabatanzi
Mariam Nabatanzi Babirye (born ) also known as Maama Uganda or Mother Uganda, is a Ugandan woman known for birthing 44 children. As of April 2023, her eldest children were twenty-eight years old, and the youngest were six years old. She is a single mother, who was abandoned by her husband in 2015. He reportedly feared the responsibility of supporting so many children. Born around 1980, Babirye first gave birth when she was 13 years old, having been forced into marriage the year prior. By the age of 36, she had given birth to a total of 44 children, including three sets of quadruplets, four sets of triplets, and six sets of twins, for a total of fifteen births. The number of multiple births was caused by a rare genetic condition causing hyperovulation as a result of enlarged ovaries. In 2019, when Babirye was aged 40, she underwent a medical procedure to prevent any further pregnancies. She lives in the village of Kasawo, located in the Mukono district of Central Uganda. Life and background In 1993, Babirye was sold into child marriage at the age of twelve to a violent 40-year-old man. A year later, she first became a mother in 1994 with a set of twins, followed by triplets in 1996. She then gave birth to a set of quadruplets 19 months later. She never found the rate at which she was procreating unusual due to her family history; she had been quoted as saying: "My father gave birth to forty-five children with different women, and these all came in quintuplets, quadruples, twins and triplets." In Uganda, there are some communities that practice early child marriages, where a young girl is given off to an older man in exchange for a dowry that most frequently consists of cows. Babirye's marriage was an example of this. At the age of twenty-three, she had given birth to twenty-five children, but was advised to continue giving birth, as it would help reduce further fertility. Those affected with Babirye's condition are often advised that abstinence from pregnancy ca
https://en.wikipedia.org/wiki/Tally%20stick
A tally stick (or simply tally) was an ancient memory aid device used to record and document numbers, quantities and messages. Tally sticks first appear as animal bones carved with notches during the Upper Palaeolithic; a notable example is the Ishango Bone. Historical reference is made by Pliny the Elder (AD 23–79) about the best wood to use for tallies, and by Marco Polo (1254–1324) who mentions the use of the tally in China. Tallies have been used for numerous purposes such as messaging and scheduling, and especially in financial and legal transactions, to the point of being currency. Kinds of tallies Principally, there are two different kinds of tally sticks: the single tally and the split tally. A common form of the same kind of primitive counting device is seen in various kinds of prayer beads. Possible palaeolithic tally sticks A number of anthropological artefacts have been conjectured to be tally sticks: The Lebombo bone, dated between 44,200 and 43,000 years old, is a baboon's fibula with 29 distinct notches, discovered within the Border Cave in the Lebombo Mountains of Eswatini. The so-called Wolf bone (cs) is a prehistoric artefact discovered in 1937 in Czechoslovakia during excavations at Dolní Věstonice, Moravia, led by Karl Absolon. Dated to the Aurignacian, approximately 30,000 years ago, the bone is marked with 55 marks which some believe to be tally marks. The head of an ivory Venus figurine was excavated close to the bone. The Ishango bone is a bone tool, dated to the Upper Palaeolithic era, around 18,000 to 20,000 BC. It is a dark brown length of bone. It has a series of possible tally marks carved in three columns running the length of the tool. It was found in 1950 in Ishango (east Belgian Congo). Single tally The single tally stick was an elongated piece of bone, ivory, wood, or stone which is marked with a system of notches (see: Tally marks). The single tally stick serves predominantly mnemonic purposes. Related to the single tally con
https://en.wikipedia.org/wiki/Sca-1
Sca-1 stands for "Stem cell antigen-1" (official gene symbol: Ly6a). It is an 18-kDa mouse glycosyl phosphatidylinositol-anchored cell surface protein (GPI-AP) of the LY6 gene family. It is a common biological marker used, along with other markers, to identify hematopoietic stem cells (HSCs). Application of Sca-1 Sca-1 has a regenerative role in cardiac repair: host cells with specific Sca-1+CD31− markers arise upon myocardial infarction, with evidence of expression of Sca-1 protein. Sca-1 also plays a role in hematopoietic progenitor/stem cell lineage fate and c-kit expression.
https://en.wikipedia.org/wiki/Exceptional%20object
Many branches of mathematics study objects of a given type and prove a classification theorem. A common theme is that the classification results in a number of series of objects and a finite number of exceptions — often with desirable properties — that do not fit into any series. These are known as exceptional objects. In many cases, these exceptional objects play a further and important role in the subject. Furthermore, the exceptional objects in one branch of mathematics often relate to the exceptional objects in others. A related phenomenon is exceptional isomorphism, when two series are in general different, but agree for some small values. For example, spin groups in low dimensions are isomorphic to other classical Lie groups. Regular polytopes The prototypical examples of exceptional objects arise in the classification of regular polytopes: in two dimensions, there is a series of regular n-gons for n ≥ 3. In every dimension above 2, one can find analogues of the cube, tetrahedron and octahedron. In three dimensions, one finds two more regular polyhedra — the dodecahedron (12-hedron) and the icosahedron (20-hedron) — making five Platonic solids. In four dimensions, a total of six regular polytopes exist, including the 120-cell, the 600-cell and the 24-cell. There are no other regular polytopes, as the only regular polytopes in higher dimensions are of the hypercube, simplex, orthoplex series. In all dimensions combined, there are therefore three series and five exceptional polytopes. Moreover, the pattern is similar if non-convex polytopes are included: in two dimensions, there is a regular star polygon for every rational number . In three dimensions, there are four Kepler–Poinsot polyhedra, and in four dimensions, ten Schläfli–Hess polychora; in higher dimensions, there are no non-convex regular figures. These can be generalized to tessellations of other spaces, especially uniform tessellations, notably tilings of Euclidean space (honeycombs), which hav
https://en.wikipedia.org/wiki/Fractional%20lambda%20switching
Fractional lambda switching (FλS) leverages time-driven switching (TDS) to realize sub-lambda switching in highly scalable dynamic optical networking, which requires minimum (possibly optical) buffers. Fractional lambda switching implies switching fractions of optical channels, as opposed to whole lambda switching, where whole optical channels are the switching unit. In this context, TDS has the same general objectives as optical burst switching and optical packet switching: realizing all-optical networks with high wavelength utilization. TDS operation is based on time frames (TFs) that can be viewed as virtual containers for multiple IP packets, which are switched at every TDS switch based on and coordinated by the UTC (coordinated universal time) signal implementing pipeline forwarding. In the context of optical networks, the synchronous virtual pipes (SVPs) typical of pipeline forwarding are called fractional lambda pipes (FλPs). In FλS, as in TDS, all packets in the same time frame are switched in the same way. Consequently, header processing is not required, which results in low complexity (hence high scalability) and enables optical implementation. The TF is the basic SVP capacity allocation unit; hence, the allocation granularity depends on the number of TFs per time cycle. For example, with a 10 Gbit/s optical channel and 1000 TFs in each time cycle, the minimum FλP capacity (obtained by allocating one TF in every time cycle) is 10 Mbit/s. Scheduling through a switching fabric is based on a pre-defined schedule, which enables the implementation of a simple controller. Moreover, low-complexity switching fabric architectures, such as Banyan, can be deployed notwithstanding their blocking features, thus further enhancing scalability. In fact, blocking can be avoided during schedule computation by avoiding conflicting input/output connections during the same TF. Several results show that (especially if multiple wavelength division multiplexing channels are dep
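The capacity-granularity example above is easy to reproduce: with a single TF allocated in every time cycle, the pipe capacity is the channel rate divided by the number of TFs per cycle. The 25-TF case below is an additional hypothetical allocation, not one from the text.

def pipe_capacity(channel_bps, tfs_per_cycle, tfs_allocated=1):
    # capacity of a fractional lambda pipe, in bits per second
    return channel_bps * tfs_allocated / tfs_per_cycle

print(pipe_capacity(10e9, 1000) / 1e6, "Mbit/s")       # 10.0 Mbit/s, as in the example above
print(pipe_capacity(10e9, 1000, 25) / 1e6, "Mbit/s")   # 250.0 Mbit/s with 25 TFs per cycle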
https://en.wikipedia.org/wiki/Wireless%20data%20center
A wireless data center is a type of data center that uses wireless communication technology instead of cables to store, process and retrieve data for enterprises. The development of wireless data centers arose as a solution to growing cabling complexity and hotspots. The wireless technology was introduced by Shin et al., who replaced all cables with 60 GHz wireless connections at the Cayley data center. Motivation Most DCs deployed today can be classified as wired DCs because they use copper and optical fiber cables to handle intra- and inter-rack connections in the network. This approach has two problems: cable complexity and hotspots. Hotspots, also known as hot servers, are servers that generate high traffic compared to others in the network, and they might become bottlenecks of the system. To address these problems, several researchers propose the use of wireless communication in data center networks, either to augment existing wired data centers or to realize a pure wireless data center. Although cable complexity at first seems like an aesthetic problem, it can affect a DC in different ways. First, a significant manual effort is necessary to install and manage these cables. Apart from that, cables can additionally affect data center cooling. Finally, cables take up space, which could be used to add more servers. The use of wireless technologies could reduce the cable complexity and avoid the problems cited before; moreover, it would allow for automatically configurable link establishment between nodes with minimum effort. Wireless links can be rearranged dynamically, which makes it possible to perform adaptive topology adjustment. This means that the network can be rearranged to fulfil the real-time traffic demands of hotspots, thus solving the hot servers problem. Additionally, wireless connections do not rely on switches and are therefore free of problems such as a single point of failure and limited bisection bandwidth. Requirements The Data Center Network (
https://en.wikipedia.org/wiki/List%20of%20mathematics-based%20methods
This is a list of mathematics-based methods. Adams' method (differential equations) Akra–Bazzi method (asymptotic analysis) Bisection method (root finding) Brent's method (root finding) Condorcet method (voting systems) Coombs' method (voting systems) Copeland's method (voting systems) Crank–Nicolson method (numerical analysis) D'Hondt method (voting systems) D21 – Janeček method (voting system) Discrete element method (numerical analysis) Domain decomposition method (numerical analysis) Epidemiological methods Euler's forward method Explicit and implicit methods (numerical analysis) Finite difference method (numerical analysis) Finite element method (numerical analysis) Finite volume method (numerical analysis) Highest averages method (voting systems) Method of exhaustion Method of infinite descent (number theory) Information bottleneck method Inverse chain rule method (calculus) Inverse transform sampling method (probability) Iterative method (numerical analysis) Jacobi method (linear algebra) Largest remainder method (voting systems) Level-set method Linear combination of atomic orbitals molecular orbital method (molecular orbitals) Method of characteristics Least squares method (optimization, statistics) Maximum likelihood method (statistics) Method of complements (arithmetic) Method of moving frames (differential geometry) Method of successive substitution (number theory) Monte Carlo method (computational physics, simulation) Newton's method (numerical analysis) Pemdas method (order of operation) Perturbation methods (functional analysis, quantum theory) Probabilistic method (combinatorics) Romberg's method (numerical analysis) Runge–Kutta method (numerical analysis) Sainte-Laguë method (voting systems) Schulze method (voting systems) Sequential Monte Carlo method Simplex method Spectral method (numerical analysis) Variational methods (mathematical analysis, differential equations) Welch's method See also Automatic basis function construction List of graphi
https://en.wikipedia.org/wiki/Examples%20of%20vector%20spaces
This page lists some examples of vector spaces. See vector space for the definitions of terms used on this page. See also: dimension, basis. Notation. Let F denote an arbitrary field such as the real numbers R or the complex numbers C. Trivial or zero vector space The simplest example of a vector space is the trivial one: {0}, which contains only the zero vector (see the third axiom in the Vector space article). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so that {0} is the 0-dimensional vector space over F. Every vector space over F contains a subspace isomorphic to this one. The zero vector space is conceptually different from the null space of a linear operator L, which is the kernel of L. (Incidentally, the null space of L is a zero space if and only if L is injective.) Field The next simplest example is the field F itself. Vector addition is just field addition, and scalar multiplication is just field multiplication. This property can be used to prove that a field is a vector space. Any non-zero element of F serves as a basis so F is a 1-dimensional vector space over itself. The field is a rather special vector space; in fact it is the simplest example of a commutative algebra over F. Also, F has just two subspaces: {0} and F itself. Coordinate space A basic example of a vector space is the following. For any positive integer n, the set of all n-tuples of elements of F forms an n-dimensional vector space over F sometimes called coordinate space and denoted Fn. An element of Fn is written where each xi is an element of F. The operations on Fn are defined by Commonly, F is the field of real numbers, in which case we obtain real coordinate space Rn. The field of complex numbers gives complex coordinate space Cn. The a + bi form of a complex number shows that C itself is a two-dimensional real vector space with coordinates (a,b). Similarly, the quaternions and the octonions are respectively
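A concrete illustration of the coordinate-space operations just described, componentwise addition and scalar multiplication on n-tuples over the reals; the particular vectors are arbitrary.

def add(u, v):
    # componentwise vector addition in R^n
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    # scalar multiplication in R^n
    return tuple(c * a for a in u)

u, v = (1.0, 2.0, 3.0), (0.5, -1.0, 4.0)
print(add(u, v))       # (1.5, 1.0, 7.0)
print(scale(2.0, u))   # (2.0, 4.0, 6.0)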
https://en.wikipedia.org/wiki/Sneakernet
Sneakernet, also called sneaker net, is an informal term for the transfer of electronic information by physically moving media such as magnetic tape, floppy disks, optical discs, USB flash drives or external hard drives between computers, rather than transmitting it over a computer network. The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to walking in sneakers as the transport mechanism. Alternative terms may be floppy net, train net, or pigeon net. Summary and background Sneakernets are in use throughout the computer universe. A sneakernet may be used when computer networks are prohibitively expensive for the owner to maintain; in high-security environments where manual inspection (for re-classification of information) is necessary; where information needs to be shared between networks with different levels of security clearance; when data transfer is impractical due to bandwidth limitations; when a particular system is simply incompatible with the local network, unable to be connected, or when two systems are not on the same network at the same time. Because sneakernets take advantage of physical media, security measures used for the transfer of sensitive information are respectively physical. This form of data transfer is also used for peer-to-peer (or friend-to-friend) file sharing and has grown in popularity in metropolitan areas and college communities. The ease of this system has been facilitated by the availability of USB external hard drives, USB flash drives and portable music players. The United States Postal Service offers a Media Mail service for compact discs, among other items. This provides a viable mode of transport for long distance sneakernet use. In fact, when mailing media with sufficiently high data density such as high capacity hard drives, the throughput (data transferred per unit of time) as well as the cost per unit of data transferred may compete favorably with networked methods of data transfer. Usage
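A back-of-the-envelope comparison of the kind alluded to above; the drive capacity, shipping time and link speed are invented round numbers, so the output only illustrates the shape of the trade-off.

drive_tb = 20          # hypothetical capacity of one shipped hard drive, in TB
transit_days = 2       # hypothetical door-to-door shipping time
bits = drive_tb * 1e12 * 8
seconds = transit_days * 24 * 3600

print(f"effective sneakernet rate: {bits / seconds / 1e6:.0f} Mbit/s")        # ~926 Mbit/s

network_bps = 100e6    # hypothetical 100 Mbit/s network link
print(f"same transfer over the link: {bits / network_bps / 3600:.0f} hours")  # ~444 hours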
https://en.wikipedia.org/wiki/Suctorial
Suctorial pertains to the adaptation for sucking or suction, as possessed by marine parasites such as the Cookiecutter shark, specifically in a specialised lip organ enabling attachment to the host. Suctorial organs of a different form are possessed by the Solifugae arachnids, enabling the climbing of smooth, vertical surfaces. Another variation on the suctorial organ can be found as part of the glossa proboscis of Masarinae (pollen wasps), enabling nectar feeding from the deep and narrow corolla of flowers.
https://en.wikipedia.org/wiki/Christoffel%20symbols
In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the "shape" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor. Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each "frame" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group . As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold. The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols. In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point. At each point of the underlying -dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted for . Each entry
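For reference at the point where the notation above breaks off: the Christoffel symbols are commonly written Γ^k_ij, and for the torsion-free Levi-Civita connection they are obtained from the metric tensor by the standard coordinate formula (given here in LaTeX; index conventions may differ slightly from the article's):

\Gamma^{k}{}_{ij} = \frac{1}{2}\, g^{kl} \left( \partial_i g_{lj} + \partial_j g_{li} - \partial_l g_{ij} \right)

with an implied sum over the repeated index l.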
https://en.wikipedia.org/wiki/Register%E2%80%93memory%20architecture
In computer engineering, a register–memory architecture is an instruction set architecture that allows operations to be performed on (or from) memory, as well as registers. If the architecture allows all operands to be in memory or in registers, or in combinations, it is called a "register plus memory" architecture. In a register–memory approach one of the operands for operations such as the ADD operation may be in memory, while the other is in a register. This differs from a load–store architecture (used by RISC designs such as MIPS) in which both operands for an ADD operation must be in registers before the ADD. An example of register-memory architecture is Intel x86. Examples of register plus memory architecture are: IBM System/360 and its successors, which support memory-to-memory fixed-point decimal arithmetic operations, but not binary integer or floating-point arithmetic operations; VAX, which supports memory or register source and destination operands for binary integer and floating-point arithmetic; the Motorola 68000 series, which supports integer arithmetic with a memory source or destination, but not with a memory source and destination. See also Load–store architecture Addressing mode
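The difference between the two styles can be made concrete with a toy model. The following sketch is not a real instruction set; it simply contrasts an ADD that may take a memory operand directly (register–memory style) with the load-then-add sequence a load–store machine requires. All register names, addresses, and values are invented for illustration.

# Toy illustration (not a real ISA): the same computation expressed in a
# register-memory style and in a load-store style.
registers = {"r1": 0, "r2": 0}
memory = {0x100: 7, 0x104: 35}

# Register-memory style: ADD may take a memory operand directly.
def add_reg_mem(dst_reg, mem_addr):
    registers[dst_reg] += memory[mem_addr]

registers["r1"] = memory[0x100]      # one operand already in a register
add_reg_mem("r1", 0x104)             # the other operand comes straight from memory
print(registers["r1"])               # 42

# Load-store style: both operands must be loaded into registers first.
def load(dst_reg, mem_addr):
    registers[dst_reg] = memory[mem_addr]

def add_reg_reg(dst_reg, src_reg):
    registers[dst_reg] += registers[src_reg]

load("r1", 0x100)
load("r2", 0x104)
add_reg_reg("r1", "r2")              # both operands now in registers
print(registers["r1"])               # 42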
https://en.wikipedia.org/wiki/Radical%20%28chemistry%29
In chemistry, a radical, also known as a free radical, is an atom, molecule, or ion that has at least one unpaired valence electron. With some exceptions, these unpaired electrons make radicals highly chemically reactive. Many radicals spontaneously dimerize. Most organic radicals have short lifetimes. A notable example of a radical is the hydroxyl radical (HO·), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene (꞉CH2), which have two unpaired electrons. Radicals may be generated in a number of ways, but typical methods involve redox reactions; ionizing radiation, heat, electrical discharges, and electrolysis are known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations. Radicals are important in combustion, atmospheric chemistry, polymerization, plasma chemistry, biochemistry, and many other chemical processes. A majority of natural products are generated by radical-generating enzymes. In living organisms, the radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling. A radical may be trapped within a solvent cage or be otherwise bound. Formation Radicals are either (1) formed from spin-paired molecules or (2) from other radicals. Radicals are formed from spin-paired molecules through homolysis of weak bonds or electron transfer, also known as reduction. Radicals are formed from other radicals through substitution, addition, and elimination reactions. Radical formation from spin-paired molecules Homolysis Homolysis makes two new radicals from a spin-paired molecule by breaking a covalent bond, leaving each of the fragments with one of the electrons in the bond. Bec
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20statistics
In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting identical particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose. Bose–Einstein statistics apply only to particles that do not follow the Pauli exclusion principle restrictions. Particles that follow Bose–Einstein statistics are called bosons, which have integer values of spin. In contrast, particles that follow Fermi–Dirac statistics are called fermions and have half-integer spins. Bose–Einstein distribution At low temperatures, bosons behave differently from fermions (which obey the Fermi–Dirac statistics) in a way that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to the special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies N/V ≥ n_q, where N is the number of particles, V is the volume, and n_q is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping. Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons. As the quantum concentration depends
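For reference, the mean occupation number of a single-particle state of energy ε given by the Bose–Einstein distribution is 1/(exp((ε − μ)/kT) − 1), compared with a +1 in the denominator for Fermi–Dirac and no ±1 term for Maxwell–Boltzmann. The short sketch below evaluates the three textbook formulas side by side; the sample energies and the choice μ = 0 are illustrative only.

# Mean occupation number of a single-particle state of energy eps at
# temperature T and chemical potential mu, for the three statistics.
# The formulas are the standard textbook ones; the numbers are illustrative.
import math

def bose_einstein(eps, mu, kT):
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def fermi_dirac(eps, mu, kT):
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def maxwell_boltzmann(eps, mu, kT):
    return math.exp(-(eps - mu) / kT)

kT = 1.0          # work in units where the energy scale is kT
mu = 0.0          # for bosons the chemical potential must lie below the lowest level
for eps in (0.1, 1.0, 3.0):
    print(f"eps = {eps:>4}: BE = {bose_einstein(eps, mu, kT):7.3f}, "
          f"FD = {fermi_dirac(eps, mu, kT):.3f}, "
          f"MB = {maxwell_boltzmann(eps, mu, kT):.3f}")
# As eps approaches mu, the BE occupancy diverges (particles pile into one
# state, i.e. condensation), while FD occupancy is capped at 1 by the Pauli
# exclusion principle.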
https://en.wikipedia.org/wiki/Near%E2%80%93far%20problem
The near–far problem or hearability problem is the effect of a strong signal from a near signal source in making it hard for a receiver to hear a weaker signal from a further source due to adjacent-channel interference, co-channel interference, distortion, capture effect, dynamic range limitation, or the like. Such a situation is common in wireless communication systems, in particular CDMA. In some signal jamming techniques, the near–far problem is exploited to disrupt ("jam") communications. Analogies Consider a receiver and two transmitters, one close to the receiver, the other far away. If both transmitters transmit simultaneously and at equal powers, then due to the inverse square law the receiver will receive more power from the nearer transmitter. Since one transmission's signal is the other's noise, the signal-to-noise ratio (SNR) for the further transmitter is much lower. This makes the farther transmitter more difficult, if not impossible, to understand. In short, the near–far problem is one of detecting or filtering out a weaker signal amongst stronger signals. To place this problem in more common terms, imagine you are talking to someone 6 meters away. If the two of you are in a quiet, empty room then a conversation is quite easy to hold at normal voice levels. In a loud, crowded bar, it would be impossible to hear the same voice level, and the only solution (for that distance) is for both you and your friend to speak louder. Of course, this increases the overall noise level in the bar, and every other patron has to talk louder too (this is equivalent to power control runaway). Eventually, everyone has to shout to make themselves heard by a person standing right beside them, and it is impossible to communicate with anyone more than half a meter away. In general, however, a human is very capable of filtering out loud sounds; similar techniques can be deployed in signal processing where suitable criteria for distinguishing between signals can be establis
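A quick numerical sketch makes the analogy concrete. Assuming a simple free-space inverse-square law and two transmitters of equal power (all numbers below are illustrative), the far user's signal arrives tens of decibels below the near user's, and power control on the near transmitter restores parity:

import math

# Received power from two equal-power transmitters at different distances,
# under a free-space inverse-square law (all numbers are illustrative).
def received_power(tx_power_w, distance_m):
    # Path loss ~ 1/d^2; antenna and frequency constants are omitted because
    # only the ratio between the two links matters here.
    return tx_power_w / distance_m ** 2

p_tx = 1.0                                   # both transmitters send 1 W
near = received_power(p_tx, 10)              # transmitter 10 m from the receiver
far = received_power(p_tx, 1000)             # transmitter 1 km away

sir_db = 10 * math.log10(far / near)         # signal-to-interference ratio of the far user
print(f"far user's SIR: {sir_db:.0f} dB")    # -40 dB: buried under the near signal

# With transmit power control, the near transmitter backs off so that both
# signals arrive at comparable levels:
p_near_controlled = p_tx * (10 / 1000) ** 2
print(received_power(p_near_controlled, 10) / far)   # 1.0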
https://en.wikipedia.org/wiki/Software%20bot
A software bot is a type of software agent in the service of software project management and software engineering. A software bot has an identity and potentially personified aspects in order to serve their stakeholders. Software bots often compose software services and provide an alternative user interface, which is sometimes, but not necessarily conversational. Software bots are typically used to execute tasks, suggest actions, engage in dialogue, and promote social and cultural aspects of a software project. The term bot is derived from robot. However, robots act in the physical world and software bots act only in digital spaces. Some software bots are designed and behave as chatbots, but not all chatbots are software bots. Erlenhov et al. discuss the past and future of software bots and show that software bots have been adopted for many years. Usage Software bots are used to support development activities, such as communication among software developers and automation of repetitive tasks. Software bots have been adopted by several communities related to software development, such as open-source communities on GitHub and Stack Overflow. GitHub bots have user accounts and can open, close, or comment on pull requests and issues. GitHub bots have been used to assign reviewers, ask contributors to sign the Contributor License Agreement, report continuous integration failures, review code and pull requests, welcome newcomers, run automated tests, merge pull requests, fix bugs and vulnerabilities, etc. The Slack tool includes an API for developing software bots. There are slack bots for keeping track of todo lists, coordinating standup meetings, and managing support tickets. The Chatbot company products further simplify the process of creating a custom Slack bot. On Wikipedia, Wikipedia bots automate a variety of tasks, such as creating stub articles, consistently updating the format of multiple articles, and so on. Bots like ClueBot NG are capable of recogniz
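As a minimal sketch of the kind of task a GitHub bot performs, the snippet below posts a welcome comment on an issue through GitHub's REST API (the issues/{number}/comments endpoint). The repository name, token variable, and welcome text are placeholders; a production bot would typically react to webhook events and authenticate as a GitHub App rather than run as a one-off script with a personal token.

# Minimal sketch of a GitHub bot action: post a welcome comment on an issue.
# Repository, issue number, and credential are placeholders.
import os
import requests

GITHUB_API = "https://api.github.com"
REPO = "example-org/example-repo"          # placeholder repository
TOKEN = os.environ["BOT_TOKEN"]            # placeholder credential, assumed to be set

def comment_on_issue(issue_number: int, body: str) -> None:
    url = f"{GITHUB_API}/repos/{REPO}/issues/{issue_number}/comments"
    resp = requests.post(
        url,
        json={"body": body},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    comment_on_issue(1, "Thanks for opening your first issue! A maintainer will follow up soon.")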
https://en.wikipedia.org/wiki/Overlap%E2%80%93save%20method
In signal processing, overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x[n] and a finite impulse response (FIR) filter h[n]: y[n] = x[n] ∗ h[n] = Σ_{m=1..M} h[m]·x[n − m], where h[m] = 0 for m outside the region [1, M]. This article uses common abstract notations, such as y(t) = x(t) ∗ h(t) or y(t) = H{x(t)}, in which it is understood that the functions should be thought of in their totality, rather than at specific instants (see Convolution#Notation). The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. Consider a segment that begins at n = kL + M, for any integer k, and define x_k[n] ≜ x[n + kL] and y_k[n] ≜ y[n + kL]. Then, for kL + M ≤ n ≤ kL + L + M − 1, and equivalently M ≤ n − kL ≤ L + M − 1, we can write y[n] = Σ_{m=1..M} h[m]·x_k[n − kL − m] ≜ y_k[n − kL]. With the substitution j = n − kL, the task is reduced to computing y_k[j] for M ≤ j ≤ L + M − 1. These steps are illustrated in the first 3 traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1 ≤ j ≤ L. If we periodically extend x_k[n] with period N ≥ L + M − 1, according to x_{k,N}[n] ≜ Σ_{ℓ} x_k[n − ℓN], the convolutions x_{k,N}[n] ∗ h[n] and x_k[n] ∗ h[n] are equivalent in the region [M + 1, L + M]. It is therefore sufficient to compute the N-point circular (or cyclic) convolution of x_k[n] with h[n] in the region [1, N]. The subregion [M + 1, L + M] is appended to the output stream, and the other values are discarded. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: y_k[n] = IDFT_N(DFT_N(x_{k,N}[n]) · DFT_N(h[n])), where DFT_N and IDFT_N refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and L is customarily chosen such that N = L + M − 1 is an integer power of 2, and the transforms are implemented with the FFT algorithm, for efficiency. The leading and trailing edge-effects of circular convolution are overlapped and added, and subsequently discarded. Pseudocode (Overlap-save algorithm for linear convolution) h = FIR_impulse_response M = length(h) overlap = M − 1 N = 8 × overlap (see next section for a better choice) step_size = N − overlap H = DFT(h, N) position = 0 while position + N ≤ length(x) yt = IDFT(DFT(x(position + (1 : N))) × H) y(position + (1 : step_size)) = yt(M : N) (discard the first M − 1 values of yt) position = position + step_size end
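A compact NumPy rendering of the same procedure is sketched below. It uses 0-based indexing (unlike the 1-based derivation above), a real-input FFT, and an FFT-size heuristic of roughly four times the filter length, all of which are implementation choices rather than part of the method; the function and variable names are mine.

import numpy as np

def overlap_save(x, h, N=None):
    """Linear convolution of a long signal x with an FIR filter h using the
    overlap-save method; returns the same result as np.convolve(x, h)."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    M = len(h)
    overlap = M - 1
    if N is None:
        N = 1 << (4 * M - 1).bit_length()    # smallest power of two >= 4*M (one reasonable choice)
    step = N - overlap                        # L: new output samples produced per block
    H = np.fft.rfft(h, N)

    out_len = len(x) + M - 1
    y = np.zeros(out_len)
    # Prepend M-1 zeros so the first block has the "history" it needs,
    # and append zeros so the last block is full length.
    x_padded = np.concatenate([np.zeros(overlap), x, np.zeros(N)])

    for start in range(0, out_len, step):
        segment = x_padded[start:start + N]
        yt = np.fft.irfft(np.fft.rfft(segment, N) * H, N)
        keep = yt[overlap:N]                  # first M-1 outputs are corrupted by wrap-around
        stop = min(start + step, out_len)
        y[start:stop] = keep[:stop - start]
    return y

# Quick check against direct convolution:
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
h = rng.standard_normal(33)
print(np.allclose(overlap_save(x, h), np.convolve(x, h)))   # True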