RIP and OSPF are which layer protocols?
RIP and OSPF are both routing protocols. RIP is used in distance vector routing whereas OSPF is used in link state routing, hence they belong to the network layer.
Then how is RIP implemented at the application layer?
Source: Kurose and Ross
I was following the Peterson book, so I answered based on what I had studied from that book!
MathSciNet bibliographic data: MR1027099 (91c:54044) 54F15 (54F50). Loveland, L. D. No continuum in $E^2$ has the TMP. I. Arcs and spheres. Proc. Amer. Math. Soc. 110 (1990), no. 4, 1119–1128.
# Sorting Grid Right to left
All,
We are having a problem with sorting the results of a database query.
The result of the query will be used to calculate a discount for a customer. The table is then read from right to left.
However, the problem we are facing is that we are unable to find a proper sort method for the results of this query.
The expected result looks like this, where the first line has the highest priority and the last line the lowest priority:
0 0 0 0 1
0 0 0 1 1
0 0 0 1 0
0 0 1 1 1
0 0 1 0 1
0 0 1 1 0
0 0 1 0 0
0 1 1 1 1
0 1 0 1 1
0 1 1 0 1
0 1 0 0 1
0 1 1 1 0
0 1 0 1 0
0 1 1 0 0
0 1 0 0 0
1 1 1 1 1
1 0 1 1 1
1 1 0 1 1
1 0 0 1 1
1 1 1 0 1
1 0 1 0 1
1 1 0 0 1
1 0 0 0 1
1 1 1 1 0
1 0 1 1 0
1 1 0 1 0
1 0 0 1 0
1 1 1 0 0
1 0 1 0 0
1 1 0 0 0
1 0 0 0 0
Hope anyone has a solution for sorting this result list.
If possible, preferably in SQL.
Kind regards, Pieter Jong
-
I think you mixed up two lines, 1 1 1 0 1 and 1 0 0 1 1? If so, the comparison rule is: Find the first 1 from the left in either sequence. If the other sequence has a 0 there, it has higher priority; otherwise compare the sequences from the right.
Stated another way: Consider the sequences as binary numbers without leading zeros; shorter numbers have higher priority; if two numbers have equal length, reverse them, and the greater one has priority.
Stated yet another way: Read the sequences as binary numbers from right to left, then sort them first according to the number of factors of $2$ they contain and then according to the numbers themselves.
Stated yet another way: Read the sequences as binary numbers from right to left, write them as $2^km$ and sort the pairs $(k,m)$ in lexicographical order.
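For illustration, here is a minimal Python sketch of the last formulation (the sketch and its names, e.g. `sort_key`, are mine and are not part of the original answer): read each row right to left as a binary number $2^km$ with $m$ odd, then order the rows by the pair $(k,m)$, largest first.

```python
# Minimal sketch of the ordering described above: read each row right-to-left
# as a binary number n = 2**k * m (m odd), then sort on (k, m), largest first.

def sort_key(row):
    """row is a string like '0 0 1 0 1'."""
    bits = row.split()                       # e.g. ['0', '0', '1', '0', '1']
    n = int("".join(reversed(bits)), 2)      # read right to left as binary
    k, m = 0, n
    while m > 0 and m % 2 == 0:              # factor out the powers of two
        m //= 2
        k += 1
    return (k, m)                            # all-zero rows are not expected here

rows = ["0 0 0 1 0", "0 0 0 0 1", "1 0 0 1 1", "1 1 1 0 1", "0 0 0 1 1"]
for row in sorted(rows, key=sort_key, reverse=True):
    print(row)
# prints the rows in the priority order given in the question:
# 0 0 0 0 1 / 0 0 0 1 1 / 0 0 0 1 0 / 1 0 0 1 1 / 1 1 1 0 1
```

The SQL posted further down builds a single numeric sort value along the same lines.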
-
I think you are right about the 11101 and 10011, have changed it. – PJong Aug 9 '12 at 9:57
Thank you for your response! – PJong Aug 9 '12 at 11:27
I have tried it now using the binary approach and using the shortest length as the sort. For example, first I make a string of the values, 00110, convert it to a number and get the length of that number. I also reverse this string and read it as binary (base 2), which results in the value 12. Then by sorting on the two I get the best matching result. – PJong Aug 9 '12 at 11:30
For all who want to see the query I have used:
The query calculates all values, reversed, as binary, as mentioned by 'joriki'.
After that, I have converted the columns to a string, e.g. 001101, then made a number out of this and measured its length. This length is then used to add a value that allows sorting.
SELECT Discounts_id
, CONVERT
(
BIGINT,
(
c01 * 1 +
c02 * 2 +
c03 * 4 +
c04 * 8 +
c05 * 16 +
c06 * 32 +
c07 * 64 +
c08 * 128 +
c09 * 256 +
c10 * 512 +
c11 * 1024
)
)
+
POWER
(
CONVERT(BIGINT, 10),
12 - LEN
(
CAST
(
CAST
(
(
CONVERT(NVARCHAR, c01) +
CONVERT(NVARCHAR, c02) +
CONVERT(NVARCHAR, c03) +
CONVERT(NVARCHAR, c04) +
CONVERT(NVARCHAR, c05) +
CONVERT(NVARCHAR, c06) +
CONVERT(NVARCHAR, c07) +
CONVERT(NVARCHAR, c08) +
CONVERT(NVARCHAR, c09) +
CONVERT(NVARCHAR, c10) +
CONVERT(NVARCHAR, c11)
) AS INT
) AS NVARCHAR
)
)
) AS Counter_value
, Discount_percentage
, Net_price
, Date_from
, Date_until
FROM @TempTable
Hope this will help people with similar problems.
Kind regards,
Pieter Jong
# Speed Distance Time Triangle
Here we will learn about the speed distance time triangle, including how speed, distance and time relate to each other, how to calculate each of them and how to solve problems involving them.
There are also speed distance time triangle worksheets based on Edexcel, AQA and OCR exam questions, along with further guidance on where to go next if you’re still stuck.
## What is speed distance time?
Speed distance time is the formula used to explain the relationship between speed, distance and time: speed = distance ÷ time. To put it another way, distance divided by speed gives you the time. Provided you know two of the three quantities, you can work out the third.
For example, if a car travels for 2 hours and covers 120 miles, we can work out the speed as 120 ÷ 2 = 60 miles per hour.
The units of the distance and time tell you the units for the speed.
## What is the speed distance time triangle?
The speed distance time triangle is a way to describe the relationship between speed, distance and time as shown by the formula below.
\textbf{Speed } \bf{=} \textbf{ distance } \bf{\div} \textbf{ time}
“Speed equals distance divided by time”
Let’s look at an example to calculate speed.
If a car travels 66km in 1.5 hours then we can use this formula to calculate the speed.
Speed = distance \div time = 66 \div 1.5 = 44km/h
This formula can also be rearranged to calculate distance or calculate time given the other two measures. An easy way to remember the formula and the different rearrangements is to use this speed distance time triangle.
From this triangle we can work out how to calculate each measure: We can ‘cover up’ what we are trying to find and the formula triangle tells us what calculation to do.
Let’s look at an example to calculate time.
How long does it take for a car to travel 34 miles at a speed of 68 miles per hour?
Time = distance \div speed = 34 \div 68 = 0.5 \ hours
Let’s look at an example to calculate distance.
What distance does a bike cover if it travels at a speed of 7 metres per second for 50 seconds?
Distance = speed \times time = 7 \times 50 = 350 \ metres
## What is the speed distance time formula?
The speed distance time formula is just another way of referring to the speed distance time triangle or calculation you can use to determine speed, time or distance.
i.e.
• speed = distance ÷ time
• S = D/T
• time = distance ÷ speed
• T = D/S
• distance = speed x time
• D = ST
## Time problem
We can solve problems involving time by remembering the formula for speed, distance and time.
E.g.
Calculate the time if a car travels 15 miles at a speed of 36 mph.
Time = distance ÷ speed
Time = 15 ÷ 36 = 5/12 hours
5/12 ✕ 60 = 25 minutes
E.g.
A train travels 42km between two stops at an average speed of 36 km/h.
If the train departs at 4 pm, when does the train arrive?
Time = distance ÷ speed
Time = 42 ÷ 36 = 7/6 hours
7/6 ✕ 60 = 70 minutes = 1 hour 10 minutes, so the train arrives at 5:10 pm.
E.g.
The average speed of a scooter is 18 km/h and the average speed of a cycle is 10 km/h.
When both have travelled 99 km what is the difference in the time taken?
Time = distance ÷ speed
Time A = 99 ÷ 18 = 5.5 hours
Time B= 99 ÷ 10 = 9.9 hours
Difference in time = 9.9 – 5.5 = 4.4 hours
4.4 hours = 4 hours and 24 minutes
## Units of speed, distance and time
• The speed of an object is the magnitude of its velocity.
We measure speed most commonly in metres per second (m/s), miles per hour (mph) and kilometres per hour (km/hr).
E.g.
The average speed of a small plane is 124mph.
The average walking speed of a person is 1.4m/s.
• We measure the distance an object has travelled most commonly in millimetres (mm), centimetres (cm), metres (m) and kilometres (km).
E.g.
The distance from London to Birmingham is 162.54km.
• We measure time taken in milliseconds, seconds, minutes, hours, days, weeks, months and years.
E.g.
The time taken for the Earth to orbit the sun is 1 year or 365 days. We don’t measure this in smaller units like minutes or hours.
A short bus journey however, would be measured in minutes.
Speed, distance and time are proportional.
If we know two of the measurements we can find the other.
E.g.
A car drives 150 miles in 3 hours.
Calculate the average speed, in mph, of the car.
Distance = 150 miles
Time = 3 hours
Speed = 150 ÷ 3 = 50 mph
### Speed, distance, time and units of measure
It is very important to be aware of the units being used when calculating speed, distance and time.
• Examples of units of distance: mm, \ cm, \ m, \ km, \ miles
• Examples of units of time: seconds (sec), minutes (mins), hours (hrs), days
• Examples of units of speed: metres per second (m/s), miles per hour (mph)
Note that speed is a compound measure and therefore involves two units: a distance in relation to a time.
When you use the speed distance time formula you must check that each measure is in the appropriate unit before you carry out the calculation. Sometimes you will need to convert a measure into different units. Here are some useful conversions to remember.
Units of length
\begin{aligned} &1cm = 10mm \\\\ &1m = 100cm \\\\ &1km = 1000m \\\\ &8km \approx 5 miles \end{aligned}
Units of time
1 minute = 60 seconds
1 hour = 60 minutes
1 day = 24 hours
Let’s look at an example.
What distance does a bike cover if it travels at a speed of 5 metres per second for 3 minutes?
Note here that the speed involves seconds, but the time given is in minutes. So before using the formula you must change 3 minutes into seconds.
1 minute = 60 seconds
3 minutes = 3 ✕ 60 = 180 seconds
Distance = speed \times time = 5 \times 180 = 900 \ metres
Note also that sometimes you may need to convert an answer into different units at the end of a calculation.
### Constant speed / average speed
For the GCSE course you will be asked to calculate either a constant speed or an average speed. Both of these can be calculated using the same formula as shown above.
However, this terminology is used because in real life speed varies throughout a journey. You should also be familiar with the terms acceleration (getting faster) and deceleration (getting slower).
Constant speed
A part of a journey where the speed stays the same.
Average speed
A journey might involve a variety of different constant speeds and some acceleration and deceleration. We can use the formula for speed to calculate the average speed over the course of the whole journey.
## How to calculate speed distance time
In order to calculate speed, distance or time:
1. Write down the values of the measures you know with the units.
2. Write down the formula you need to use from the speed, distance, time triangle.
3. Check that the units are compatible with each other, converting them if necessary.
4. Substitute the values into the selected formula and carry out the resulting calculation.
## Speed distance time triangle examples
### Example 1: calculating average speed
Calculate the average speed of a car which travels 68 miles in 2 hours.
1. Write down the values of the measures you know with the units.
Speed: unknown
Distance: 68 miles
Time: 2 hours
2. Write down the formula you need to use from the speed, distance, time triangle.
S=\frac{D}{T}
Speed = distance \div time
3. Check that the units are compatible with each other, converting them if necessary.
The distance is in miles and the time is in hours. These units are compatible to give the speed in miles per hour.
4. Substitute the values into the formula and carry out the resulting calculation.
\begin{aligned} &Speed = 68 \div 2 \\\\ &Speed = 34 \end{aligned}
34 \ mph
### Example 2: calculating time
A golden eagle can fly at a speed of 55 kilometres per hour. Calculate the time taken for a golden eagle to fly 66 \ km, giving your answer in hours.
Speed: 55 \ km/hour
Distance: 66 \ km
Time: unknown
T=\frac{D}{S}
Time = distance \div speed
Speed is in km per hour and the distance is in km, so these are compatible to give an answer for time in hours.
\begin{aligned} &Time = 66 \div 55 \\\\ &Time= 1.2 \end{aligned}
1.2 hours
### Example 3: calculating distance
Calculate the distance covered by a train travelling at a constant speed of 112 miles per hour for 4 hours.
Speed: 112 \ mph
Distance: unknown
Time: 4 hours
D= S \times T
Distance = speed \times time
Speed is in miles per hour. The time is in hours. These units are compatible to find the distance in miles.
\begin{aligned} &Distance = 112 \times 4 \\\\ &Distance= 448 \end{aligned}
448 miles
### Example 4: calculating speed with unit conversion
A car travels for 1 hour and 45 minutes, covering a distance of 63 miles. Calculate the average speed of the car, giving your answer in miles per hour (mph).
Speed: unknown
Distance: 63 miles
Time: 1 hour and 45 minutes
S = \frac{D}{T}
Speed = distance \div time
The distance is in miles. The time is in hours and minutes. To calculate the speed in miles per hour, the time needs to be converted into hours only.
1 hour 45 minutes = 1\frac{3}{4} hours = 1.75 hours
\begin{aligned} &Speed = 63 \div 1.75\\\\ &Speed = 36 \end{aligned}
36 \ mph
### Example 5: calculating time with unit conversion
A small plane can travel at an average speed of 120 miles per hour. Calculate the time taken for this plane to fly 80 miles giving your answer in minutes.
Speed: 120 \ mph
Distance: 80 \ miles
Time: unknown
T = \frac{D}{S}
Time = distance \div speed
Speed is in miles per hour and the distance is in miles. These units are compatible to find the time in hours.
\begin{aligned} &Time = 80 \div 120 \\\\ &Time = \frac{2}{3} \end{aligned}
\frac{2}{3} hours in minutes
\frac{2}{3} \times 60 = 40
40 minutes
### Example 6: calculating distance with unit conversion
A train travels at a constant speed of 96 miles per hour for 135 minutes. Calculate the distance covered giving your answer in miles.
Speed: 96 \ mph
Distance: unknown
Time: 135 minutes
D = S \times T
Distance = speed \times time
The speed is in miles per hour, but the time is in minutes. To make these compatible the time needs changing into hours and then the calculation will give the distance in miles.
135 minutes
135 \div 60 = \frac{9}{4} = 2\frac{1}{4} = 2.25
2.25 hours
\begin{aligned} &Distance = 96 \times 2.25 \\\\ &Distance= 216 \end{aligned}
216 miles
### Common misconceptions
• Incorrectly rearranging the formula Speed = distance \div time
Make sure you rearrange the formula correctly. One of the simplest ways of doing this is to use the formula triangle. In the triangle you cover up the measure you want to find out and then the triangle shows you what calculation to do with the other two measures.
• Using incompatible units in a calculation
When using the speed distance time formula you must ensure that the units of the measures are compatible.
For example, if a car travels at 80 \ km per hour for 30 minutes and you are asked to calculate the distance, a common error is to substitute the values straight into the formula and do the following calculation.
Distance = speed \times time = 80 \times 30 = 2400 \ km
The correct way is to notice that the speed uses hours but the time given is in minutes. Therefore you must change 30 minutes into 0.5 hours and substitute these compatible values into the formula and do the following calculation.
Distance = speed \times time = 80 \times 0.5 = 40 \ km
### Practice speed distance time triangle questions
1. A car drives 120 miles in 3 hours. Calculate its average speed.
40 \ mph
360 \ mph
0.025 \ mph
67 \ mph
\begin{aligned} &Speed = distance \div time \\\\ &Speed = 120 \div 3 = 40 \\\\ &40 \ mph \end{aligned}
2. A cyclist travels 100 miles at an average speed of 20 \ mph. Calculate how long the journey takes.
2000 hours
0.2 hours
5 hours
12 hours
\begin{aligned} &Time = distance \div speed \\\\ &Time = 100 \div 20 \\\\ &Time= 5 \end{aligned}
5 hours
3. An eagle flies for 30 minutes at a speed of 66 \ km per hour. Calculate the total distance the bird has flown.
1980 \ km
2.2 \ km
132 \ km
33 \ km
30 minutes = 0.5 hours
\begin{aligned} &Distance = speed \times time \\\\ &Distance = 66 \times 0.5 = 33 \\\\ &33 \ km \end{aligned}
4. Calculate the average speed of a lorry travelling 54 miles in 90 minutes. Give your answer in miles per hour (mph).
36 \ mph
0.6 \ mph
81 \ mph
5184 \ mph
Firstly convert 90 minutes to hours.
90 minutes = 1.5 hours
\begin{aligned} &Speed = distance \div time \\\\ &Speed = 54 \div 1.5 \\\\ &Speed = 36 \ mph \end{aligned}
5. Calculate the time taken for a plane to fly 90 miles at an average speed of 120 \ mph. Give your answer in minutes.
180 minutes
45 minutes
80 minutes
75 minutes
\begin{aligned} &Time = distance \div speed \\\\ &Time = 90 \div 120 \\\\ &Time= 0.75 \end{aligned}
0.75 hours
Convert 0.75 hours to minutes
0.75 \times 60
45 minutes
6. A helicopter flies 18 \ km in 20 minutes. Calculate its average speed in km/h .
0.9 \ km/h
1.1 \ km/h
90 \ km/h
54 \ km/h
Firstly convert 20 minutes to hours.
20 minutes is a third of an hour or \frac{1}{3} hours.
\begin{aligned} &Speed = distance \div time \\\\ &Speed =18 \div \frac{1}{3} \\\\ &Speed = 54 \\\\ &54 \ km/h \end{aligned}
### Speed distance time triangle GCSE questions
1. A commercial aircraft travels from its origin to its destination in a time of 2 hours and 15 minutes. The journey is 1462.5 \ km.
What is the average speed of the plane in km/hour?
(3 marks)
2 hours 15 minutes = 2\frac{15}{60} = 2\frac{1}{4} = 2.25
(1)
Speed = distance \div time = 1462.5 \div 2.25
(1)
650
(1)
2. John travelled 30 \ km in 90 minutes.
Nadine travelled 52.5 \ km in 2.5 hours.
Who had the greater average speed?
(3 marks)
Speed = distance \div time
90 minutes = 1.5 hours
John = 30 \div 1.5 = 20 \ km/h
(1)
Nadine = 52.5 \div 2.5 = 21 \ km/h
(1)
Nadine has the greater average speed.
(1)
3. The distance from Birmingham to Rugby is 40 miles.
Omar drives from Rugby to Birmingham at 60 \ mph.
Ayushi drives from Rugby to Birmingham at 50 \ mph.
How much longer was Ayushi’s journey compared to Omar’s journey? Give your answer in minutes.
(3 marks)
\begin{aligned} &Speed = distance \div time \\\\ &Omar = 40 \div 60 = \frac{2}{3} \ hours = \frac{2}{3} \times 60 = 40 \ minutes \\\\ &Ayushi = 40 \div 50 = \frac{4}{5} \ hours = \frac{4}{5} \times 60 = 48 \ minutes\\\\ &48-40=8 \ minutes \end{aligned}
For calculating time in hours for Omar or Ayushi.
(1)
For converting hours into minutes for Omar or Ayushi.
(1)
For correct final answer of 8 minutes.
(1)
## Learning checklist
You have now learned how to:
• Use compound units such as speed
• Solve simple kinematic problems involving distance and speed
• Change freely between related standard units (e.g. time, length) and compound units (e.g. speed) in numerical contexts
• Work with compound units in a numerical context
## The next lessons are
• Calculating density
• Calculating pressure
## Still stuck?
Prepare your KS4 students for GCSE maths success with Third Space Learning. Weekly online one to one GCSE maths revision lessons delivered by expert maths tutors.
Find out more about our GCSE maths revision programme.
# Graph theory in computer science
In computer science and applied mathematics, graph theory is the study of graphs: mathematical structures used to model pairwise relations between objects. An undirected graph G = (V, E) consists of a set of vertices V and a set of edges E, where each edge joins exactly two vertices; directed graphs have edges with orientations, and weighted graphs attach a numerical value to each edge. Graphs are among the prime objects of study in discrete mathematics and are used to model many types of relations and process dynamics in physical, biological, social and information systems.
In computer programs, a graph can be stored as an adjacency list or as an adjacency matrix; matrices offer faster access for some applications but can consume huge amounts of memory, so the best structure depends on both the algorithm and the problem. The development of graph algorithms, such as Dijkstra's shortest-path algorithm, spanning-tree construction and graph colouring, is the major role of graph theory in computer applications: graphs represent networks of communication, data organization, computational devices and the flow of computation, and they underpin systems ranging from GPS route planning to the scheduling of employees or aircraft takeoffs. Many practical questions amount to finding subgraphs, induced subgraphs or minors with particular properties (a minor, or subcontraction, of a graph is any graph obtained by taking a subgraph and contracting some, or no, edges); problems such as subgraph isomorphism, subdivision containment and minor containment are often NP-complete, whereas graph isomorphism is neither known to be NP-complete nor known to be solvable in polynomial time.
Graph theory also has close links with other fields. In chemistry, the molecular graph, where vertices represent atoms and edges represent bonds, is used to study molecules; the problem of counting graphs meeting specified conditions grew out of the results of Cayley and the fundamental enumeration theorems published by Pólya between 1935 and 1937, later generalized by De Bruijn in 1959. In statistical physics, graphs represent local connections between interacting parts of a system and the dynamics of physical processes on such systems. In electrical engineering, edge weights can encode the resistance of wire segments to obtain the electrical properties of network structures. Graphs are also widely used in molecular biology and genomics, in computational linguistics (where finite-state morphology uses finite-state transducers), and in the social sciences, where acquaintanceship, collaboration and influence graphs describe whether people know each other, work together, or can influence one another's behaviour. Historically, the four-colour problem, first posed by Francis Guthrie in 1852 and first recorded in a letter from De Morgan to Hamilton that same year, and the development of topology between 1860 and 1930 through the works of Jordan, Kuratowski and Whitney, shaped the subject, which retains close links with group theory, knot theory and modern algebra.
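As a concrete illustration of how such structures are handled in programs, here is a minimal Python sketch (the graph, weights and names are invented for the example) of a weighted graph stored as an adjacency list, together with Dijkstra's shortest-path algorithm mentioned above.

```python
# A minimal sketch of a weighted graph stored as an adjacency list,
# with Dijkstra's algorithm for single-source shortest paths.
import heapq

def dijkstra(adj, source):
    """adj: dict mapping vertex -> list of (neighbour, edge_weight)."""
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    heap = [(0, source)]                  # (distance-so-far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                   # stale heap entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Example: a small road-like network (weights could be distances or resistances).
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```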
# Research Codes
### Overview
A peeling-ballooning eigenfunction calculated using M3D-C$^1$. The red and blue show the perturbed pressure, and the magenta curve the location of the last closed flux surface of the plasma equilibrium. The green triangles show the finite element mesh, which has resolution packed near the edge of the plasma to efficiently resolve the eigenfunction. The peeling-ballooning instability leads to Edge Localized Modes (ELMs) in tokamaks.
The following research codes are actively maintained:
A Publications and Data Management website is also maintained.
### XGC: the X-point Gyrokinetic Code for Transport in Tokamaks
Near the edge of tokamak plasmas, strong particle and energy sources and sinks (e.g. radiation, contact with a material wall, ...) drive the plasma away from thermal equilibrium, thus invalidating the assumptions underpinning fluid theories. The XGC gyrokinetic, particle-in-cell (PIC) suite of codes (XGC1, XGCa and XGC0) was developed to provide a comprehensive description of kinetic transport phenomena in this complicated region, including heating and cooling, radiation losses, neutral particle recycling and impurity transport.
A particle's “state” is described by the position, ${\bf x}$, and velocity, ${\bf v}$; constituting a six-dimensional phase space. Gyrokinetic codes average over the very fast “gyromotion” of charged particles in strong magnetic fields, ${\bf B}$, and phase space becomes five-dimensional, ${\bf X} \equiv ({\bf x},v_\parallel,\mu)$, where ${\bf x}$ is the position of the “guiding center”, $v_\parallel$ is the velocity parallel to ${\bf B}$, and $\mu$ is the “magnetic moment”. The density of particles is given by a distribution function, $f({\bf X},t)$; the evolution of which, including collisions, is formally described by the “Vlasov-Maxwell” equation: $f$ evolves along “characteristics”, which are the dynamical trajectories of the guiding centers [1], \begin{eqnarray} \dot {\bf x} & = & \left[ v_\parallel {\bf b} + v_\parallel^2 \nabla B \times {\bf b} + {\bf B} \times ( \mu \nabla B - {\bf E}) / B^2 \right] / D,\\ \dot v_\parallel & = & - ( {\bf B} + v_\parallel \nabla B \times {\bf b} ) \cdot ( \mu \nabla B - {\bf E}), \end{eqnarray} where ${\bf E}$ is the electric field, ${\bf b}={\bf B}/|B|$, and $D \equiv 1 + v_\parallel {\bf b} \cdot \nabla \times {\bf b}/B$ ensures the conservation of phase-space volume (Liouville theorem). The departure from thermal equilibrium demands that gyrokinetic codes evolve the full distribution function, by applying classical “full-f” [2,3] and noise-reducing “total-f” techniques [4]. (This is in contrast to so-called “$\delta f$” methods, which evolve only a small perturbation to an assumed-static, usually-Maxwellian distribution.) As full-f codes, XGC can include heat and torque input, radiation cooling, and neutral particle recycling [5].
Multiple particle species (e.g. ions and electrons, ions and impurities) are included, and XGC uses a field-aligned, unstructured mesh in cylindrical coordinates, so it can easily accommodate the irregular magnetic fields in the plasma edge (e.g. the “X-point”, “separatrix”). XGC calculates transport in the entire plasma volume, from the “closed-flux-surface”, good-confinement region (near the magnetic axis) to the “scrape-off layer” (where magnetic fieldlines intersect the wall and confinement is lost). Collisions between ions, electrons and impurities are evaluated using either (i) a linear Monte-Carlo operator [6] for test-particle collisions and the Hirshman-Sigmar operator [7] for field-particle collisions, or (ii) a fully nonlinear Fokker-Planck-Landau collision operator [8,9]. The XGC codes efficiently exploit massively parallel computing architectures.
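As an illustration of the kind of linear Monte-Carlo test-particle operator cited above [6], here is a minimal Python sketch of pitch-angle (Lorentz) scattering; the update rule is the standard textbook form quoted from memory and the parameter values are invented, so this is not code taken from XGC itself.

```python
# Minimal sketch of a Monte-Carlo pitch-angle (Lorentz) scattering step of the
# type used for test-particle collisions.  The update rule below is the standard
# Boozer/Kuo-Petravic form (quoted from memory); |v| is unchanged, so energy is
# conserved while the pitch distribution relaxes towards isotropy.
import numpy as np

def pitch_angle_scatter(lam, nu, dt, rng):
    """Advance pitch lam = v_par/v by one collisional time step."""
    sign = rng.choice([-1.0, 1.0], size=lam.shape)
    lam_new = lam * (1.0 - nu * dt) + sign * np.sqrt((1.0 - lam**2) * nu * dt)
    return np.clip(lam_new, -1.0, 1.0)       # guard against round-off overshoot

rng = np.random.default_rng(0)
lam = np.ones(50_000)                        # start all particles at lam = +1
nu, dt = 1.0, 2.0e-3                         # illustrative collision frequency/step
for _ in range(2500):                        # several collision times
    lam = pitch_angle_scatter(lam, nu, dt, rng)
print(lam.mean(), lam.var())                 # mean -> ~0, variance -> ~1/3 (isotropy)
```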
[1] Robert G. Littlejohn, Phys. Fluids 28, 2015 (1985)
[2] C.S. Chang, Seunghoe Ku & H. Weitzner, Phys. Plasmas 11, 2649 (2004)
[3] S. Ku, C.S. Chang & P.H. Diamond, Nucl. Fusion 49, 115021 (2009)
[4] S. Ku, R. Hager et al., J. Comp. Phys. 315, 467 (2016)
[5] D.P. Stotler, C.S. Chang et al., J. Nucl. Mater. 438 S1275 (2013)
[6] Allen H. Boozer & Gioietta Kuo‐Petravic, Phys. Fluids 24, 851 (1981)
[7] S.P. Hirshman & D.J. Sigmar, Nucl. Fusion 21, 1079 (1981)
[8] E. S. Yoon & C.S. Chang, Phys. Plasmas 21, 032503 (2014)
[9] Robert Hager, E.S. Yoon et al., J. Comp. Phys. 315 644 (2016)
[10] Robert Hager & C.S. Chang, Phys. Plasmas 23, 042503 (2016)
[11] D.J. Battaglia, K.H. Burrell et al., Phys. Plasmas 21, 072508 (2014)
[#h34: C-S. Chang, R. Hager, S-H. Ku, 2016-08-05]
### Gkeyll: Energy-conserving, discontinuous, high-order discretizations for gyro-kinetic simulations
Discontinuities at cell boundaries are allowed and used to compute a “numerical flux” needed to update the solution. Shown is a DG fit (in the least-square sense) of $x^4+\sin(5x)$ onto constant (left), linear (middle) and quadratic (right) basis functions.
Fusion energy gain in tokamaks depends sensitively on the plasma edge [1,2]; but, because of open and closed fieldlines, interaction with divertor plates, neutral particles, and large electromagnetic fluctuations, the edge region is, understandably, difficult to treat analytically. Large-scale, kinetic, numerical simulations are required. The “GKEYLL” code [3] is a flexible, robust and powerful code that provides numerical calculations of gyrokinetic turbulence and, importantly, preserves the conservation laws of gyrokinetics.
Gyrokinetics [4] describes how a distribution of particles, described by a density-distribution function, $f(t,{\bf z})$, evolves in time, $t$; where ${\bf z}=({\bf x},{\bf v})$ describes the position and velocity, i.e. a point in “phase space”, of the guiding-center. Elegant theoretical and numerical descriptions of this dynamical system exploit the “Hamiltonian properties”, i.e. $\partial f / \partial t + \{f,H\} = 0$, where $H({\bf z})$ is the Hamiltonian (i.e. “energy”, e.g. for a Vlasov system, $H(x, v, t)$ $=$ $mv^2/2$ $+$ $q \, \phi(x,t)$, where $\phi$ is the electro-static potential) and $\{g,f\}$ is the Poisson bracket operator [5]. For reliable simulations, numerical discretizations must preserve so-called “quadratic invariants”, $\int H \{ f,H\} \, d{\bf z}$ $=$ $\int f \{ f,H\} \, d{\bf z} = 0$.
Discontinuous Galerkin (DG) algorithms represent the state-of-the-art discretization of hyperbolic partial-differential equations [6]. DG combines the key advantages of finite elements (e.g. low phase error, high accuracy, flexible geometries) with those of finite volumes (e.g. up-winding, guaranteed positivity/monotonicity, ...), and makes efficient use of parallel computing architectures. DG is inherently super-convergent: whereas finite-volume methods interpolate $p$ points to get $p$-th order accuracy, DG methods interpolate $p$ points to get $(2p−1)$-th order accuracy, a significant advantage for $p>1$! Use of DG schemes may lead to significant advances towards production-quality edge gyrokinetic simulation software with reasonable computational cost.
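To make the projection in the figure concrete, here is a minimal Python sketch (the grid, polynomial order and names are my own choices, not Gkeyll code) of a cell-wise least-squares, i.e. $L_2$, projection of $x^4+\sin(5x)$ onto Legendre modes of degree $\le p$ on each cell.

```python
# Cell-wise L2 (least-squares) projection of f(x) = x**4 + sin(5x) onto Legendre
# modes of degree <= p on each cell: the kind of discontinuous representation
# shown in the Gkeyll figure.  Grid and order are illustrative choices.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def dg_project(f, cells, p, nquad=8):
    """Return modal coefficients c[i, k] for cell i and Legendre mode k."""
    xq, wq = leggauss(nquad)                    # quadrature nodes/weights on [-1, 1]
    coeffs = np.zeros((len(cells) - 1, p + 1))
    for i, (a, b) in enumerate(zip(cells[:-1], cells[1:])):
        x = 0.5 * (b - a) * xq + 0.5 * (a + b)  # map nodes into the physical cell
        for k in range(p + 1):
            Pk = legval(xq, [0] * k + [1])      # Legendre polynomial P_k(xi)
            # orthogonality: int_{-1}^{1} P_k^2 dxi = 2/(2k+1)
            coeffs[i, k] = (2 * k + 1) / 2.0 * np.sum(wq * f(x) * Pk)
    return coeffs

f = lambda x: x**4 + np.sin(5 * x)
cells = np.linspace(-1.0, 1.0, 5)               # four cells on [-1, 1]
for p in (0, 1, 2):                             # piecewise constant/linear/quadratic
    print(p, dg_project(f, cells, p)[0])        # modal coefficients of the first cell
```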
[1] A. Loarte, et al. Nucl. Fusion 47, S203 (2007)
[2] P.C. Stangeby, The Plasma Boundary of Magnetic Fusion Devices, Institute of Physics Publishing (2000).
[3] A. Hakim, 26th IAEA Fusion Energy Conference (2016)
[4] E.A. Frieman & Liu Chen, Phys. Fluids 25, 502 (1982)
[5] John R. Cary & Alain J. Brizard, Rev. Mod. Phys. 81, 693 (2009)
[6] Bernardo Cockburn & Chi-Wang Shu, J. Comput. Phys. 141, 199 (1998)
[#h44: A. Hakim, 2016-08-05]
### NOVA and NOVA-K
Hot beam ion orbits in the Tokamak Fusion Test Reactor (TFTR) shot #103101 [2,4]. Shown (click to enlarge) are passing (a) and trapped (b) orbits represented in both $\psi,R$ and $Z,R$ planes at a time when TAEs were observed.
NOVA is a suite of codes including the linear ideal eigenmode solver, which finds the solutions of the ideal magnetohydrodynamic (MHD) system of equations [1], including such effects as plasma compressibility and realistic tokamak geometry. The kinetic post-processor of the suite, NOVA-K [2], analyses those solutions to find their stability properties. NOVA-K evaluates the wave-particle interaction of eigenmodes such as Toroidal Alfvén Eigenmodes (TAEs) or Reversed Shear Alfvén Eigenmodes (RSAEs) by employing the quadratic form with the perturbed distribution function coming from the drift kinetic equations [2,4]. The hot-particle growth rate of an ideal MHD eigenmode is expressed via the equation \begin{eqnarray} \frac{\gamma_{h}}{\omega_{AE}}=\frac{\Im\delta W_{k}}{2\delta K}, \end{eqnarray} where $\omega_{AE}$ is the Alfvén eigenmode frequency, $\delta W_{k}$ is the potential energy of the mode, and $\delta K$ is the inertial energy. In computations of the hot-ion contribution to the growth rate, NOVA-K makes use of the constants of motion to represent the hot-particle drift orbits. An example of such a representation is shown in the figure.
NOVA is routinely used for AE structure computations and comparisons with experimentally observed instabilities [5,6]. The main limitations of the NOVA code come from neglecting thermal-ion finite Larmor radius (FLR), toroidal rotation, and drift effects in the eigenmode computations; therefore it cannot, for example, accurately describe radiative damping. Finite element methods are used in the radial direction and Fourier harmonics are used in the poloidal and toroidal directions.
NOVA-K is able to predict various kinetic growth and damping rates perturbatively, such as the phase space gradient drive from energetic particles, continuum damping, radiative damping, ion/electron Landau damping and trapped electron collisional damping.
More information can be found on Dr. N. N. Gorelenkov's NOVA page.
[1] C. Z. Cheng, M. C. Chance, Phys. Fluids 29, 3695 (1986)
[2] C. Z. Cheng, Phys. Reports 211, 1 (1992)
[3] G. Y. Fu, C. Z. Cheng, K. L. Wong, Phys. Fluids B 5, 4040 (1993)
[4] N. N. Gorelenkov, C. Z. Cheng, G. Y. Fu, Phys. Plasmas 6, 2802 (1999)
[5] M. A. Van Zeeland, G. J. Kramer et al., Phys. Rev. Lett. 97, 135001 (2006)
[6] G. J. Kramer, R. Nazikian et al., Phys. Plasmas 13, 056104 (2006)
[#h45: N. N. Gorelenkov, 2018-07-05]
### Orbit
The guiding center code Orbit, using guiding center equations first derived in White and Chance [1] and more completely in The Theory of Toroidally Confined Plasmas [2], can use equal arc, PEST, or any other straight field line coordinates $\psi_p, \theta, \zeta$, which along with the parallel velocity and energy completely specify the particle position and velocity. The guiding center equations depend only on the magnitude of the $B$ field and functions $I$ and $g$ with $\vec{B} = g\nabla \zeta + I\nabla \theta + \delta \nabla \psi_p$. In simplest form, in an axisymmetric configuration, in straight field line coordinates, and without field perturbation or magnetic field ripple, the equations are \begin{eqnarray} \dot{\theta} = \frac{ \rho_{\parallel}B^2}{D} (1 - \rho_{\parallel}g^{\prime}) + \frac{g}{D}\left[ (\mu + \rho_{\parallel}^2B)\frac{\partial B}{\partial\psi_p} + \frac{\partial \Phi}{\partial\psi_p}\right], \label{tdot2} \end{eqnarray} \begin{eqnarray} \dot{\psi_p} = -\frac{g}{D}\left[(\mu +\rho_{\parallel}^2B)\frac{\partial B}{\partial\theta} + \frac{\partial \Phi}{\partial\theta}\right], \label{psdot2} \end{eqnarray} \begin{eqnarray} \dot{\rho_{\parallel}} = -\frac{(1 -\rho_{\parallel}g^{\prime})}{D} \left[(\mu +\rho_{\parallel}^2B)\frac{\partial B}{\partial\theta} + \frac{\partial\Phi}{\partial\theta}\right], \label{rdot2} \end{eqnarray} \begin{eqnarray} \dot{\zeta} = \frac{ \rho_{\parallel}B^2}{D} (q +\rho_{\parallel}I^{\prime}_{\psi_p}) - \frac{I}{D} \left[(\mu +\rho_{\parallel}^2B)\frac{\partial B}{\partial\psi_p} + \frac{\partial \Phi}{\partial\psi_p}\right], \label{zdot2} \end{eqnarray} with $D = gq + I + \rho_\parallel(g I'_{\psi_p} - I g'_{\psi_p})$. The terms in $\partial \Phi/\partial\psi_p$, $\partial \Phi/\partial\theta$, $\partial \Phi/\partial\zeta$ are easily recognized as describing $\vec{E}\times \vec{B}$ drift.
Orbit can read numerical equilibria developed by TRANSP or other routines, particle distributions produced by TRANSP, and mode eigenfunctions produced by NOVA.
The code uses a fourth-order Runge-Kutta integration routine. It is divided into a main program, Orbit.F, which is essentially a heavily commented namelist and a set of switches for choosing the type of run, the diagnostics, the data storage, and the output.
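For illustration only (this is not the Orbit source), the following Python sketch advances the guiding-center equations quoted above with a fourth-order Runge-Kutta step, in the simplified limit $g=1$, $I=0$, $\Phi=0$, flat safety factor $q$, and a model field $B(\psi_p,\theta)=1-\epsilon\sqrt{\psi_p}\cos\theta$; all numerical values are invented.

```python
# Minimal RK4 push of the guiding-center equations above with g = 1, I = 0,
# Phi = 0, flat safety factor q, and the model field
# B(psi_p, theta) = 1 - eps*sqrt(psi_p)*cos(theta).  All numbers are illustrative.
import numpy as np

q, eps = 2.0, 0.3                  # safety factor and model field parameter

def B(psi, theta):
    return 1.0 - eps * np.sqrt(psi) * np.cos(theta)

def rhs(y, mu):
    theta, psi, rho, zeta = y
    b = B(psi, theta)
    dB_dth = eps * np.sqrt(psi) * np.sin(theta)
    dB_dps = -eps * np.cos(theta) / (2.0 * np.sqrt(psi))
    drive = mu + rho**2 * b
    return np.array([
        rho * b**2 / q + drive * dB_dps / q,   # theta-dot
        -drive * dB_dth / q,                   # psi_p-dot
        -drive * dB_dth / q,                   # rho_parallel-dot
        rho * b**2,                            # zeta-dot
    ])

def rk4_step(y, mu, dt):
    k1 = rhs(y, mu)
    k2 = rhs(y + 0.5 * dt * k1, mu)
    k3 = rhs(y + 0.5 * dt * k2, mu)
    k4 = rhs(y + dt * k3, mu)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(y, mu):
    b = B(y[1], y[0])
    return 0.5 * y[2]**2 * b**2 + mu * b       # conserved when Phi = 0

y = np.array([0.0, 0.1, 1.0e-2, 0.0])          # (theta, psi_p, rho_par, zeta)
mu, dt = 5.0e-5, 1.0
e0 = energy(y, mu)
for _ in range(2000):
    y = rk4_step(y, mu, dt)
print(y)
print(abs(energy(y, mu) - e0) / e0)            # tiny relative energy drift (RK4 check)
```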
The code has been used extensively since its inception at PPPL, General Atomics, RFX (Padova) and Kiev for the analysis of mode-induced particle loss for fishbones and TAE modes, induced ripple loss, the modification of particle distributions, local particle transport, etc.
To obtain a copy, take all files from /u/ftp/pub/white/Orbit. To submit a job, modify the script batch, modify Orbit.F to choose the run desired, type make, and type qsub -q sque batch.
[1] R. B. White & M. S. Chance, Phys. Fluids 27, 2455 (1984)
[2] R. B. White, The Theory of Toroidally Confined Plasmas, Imperial College Press, 3rd ed. (2014)
[#h46: R. B. White, 2018-07-05]
### GTS: Gyrokinetic Tokamak Simulation Code
Snapshot of a GTS ion-temperature-gradient instability simulation showing the field-line-following mesh and the quasi-2D structure of the electrostatic potential in the presence of microturbulence. Notice the fine structure on the two poloidal planes perpendicular to the toroidal direction, while the potential along the field lines changes very little as you go around the torus. (Image generated from a GTS simulation by Kwan-Liu Ma and his group at the University of California, Davis.)
The Gyrokinetic Tokamak Simulation (GTS) code is a full-geometry, massively parallel $\delta f$ particle-in-cell code [1,2] developed at PPPL to simulate plasma turbulence and transport in practical fusion experiments. The GTS code solves the modern gyrokinetic equation in conservative form [3]: $$\frac{\partial f_a}{\partial t}+\frac{1}{B^{*}}\nabla_{\bf Z}\cdot (\dot{{\bf Z}}B^{*}f_a)=\sum\limits_{b} C[f_a, f_b]$$ for a gyro-center distribution function $f_a({\bf Z}, t)$ in five-dimensional phase space ${\bf Z}$, along with the gyrokinetic Poisson equation and Ampere's law for the potentials, using a $\delta f$ method.
GTS robustly treats globally consistent, shaped cross-section tokamaks; in particular, the highly challenging spherical tokamak geometries of NSTX and its upgrade NSTX-U. GTS simulations directly import plasma profiles of temperature, density and toroidal rotation from the TRANSP experimental database, along with the related numerical MHD equilibria, including perturbed equilibria. General magnetic coordinates and a field-line-following mesh are employed [1]. The particle gyro-center motion is calculated by Lagrangian equations in flux coordinates, which allows for accurate particle orbit integration thanks to the separation between fast parallel motion and slow perpendicular drifts. The field-line-following mesh accounts for the quasi-2D mode structure of drift-wave turbulence in toroidal systems, and hence offers highly efficient spatial resolution for strongly anisotropic fluctuations in fusion plasmas. Fully kinetic electron physics is included; in particular, both trapped and untrapped electrons are included in the non-adiabatic response. GTS solves the field equations in configuration space for the turbulence potentials using a finite element method on an unstructured mesh, which is carried out by PETSc. The real-space, global field solver, in principle, retains all toroidal modes from $(m,n) = (0,0)$ all the way up to a limit set by the grid resolution, and therefore retains full-channel nonlinear energy couplings.
One remarkable feature in GTS, which distinguishes it from the other $\delta f$ particle simulations, is that the weight equations satisfy the incompressibility condition in extended phase space $({\bf Z}, w)$ [4]. Satisfying the incompressibility is actually required in order to correctly solve the $\delta f$ kinetic equation using simulation markers whose distribution function $F({\bf Z}, w, t)$ is advanced along the marker trajectory in the extended phase space according to $F({\bf Z},w,t)=const.$.
In GTS, Coulomb collisions between like particles are implemented via a linearized Fokker-Planck operator with particle, momentum and energy conservation. Electron-ion collisions are simulated by the Lorentz operator. By modeling the same gyrokinetic-Poisson system, GTS has been extended to perform global neoclassical simulations in addition to traditional turbulence simulations. More importantly, GTS is now able to do a global gyrokinetic simulation with self-consistent turbulence and neoclassical dynamics coupled together. This remarkable capability is shown to lead to significant new features in the nonlinear turbulence dynamics, impacting a number of important transport issues in tokamak plasmas. In particular, this capability is critical for the proposed study of nonlinear NTM physics; for example, it allows the bootstrap current to be calculated in the presence of turbulence [5,6], which plays a key role in NTM evolution. Currently, GTS has been extended to simulate finite-beta physics, including low-n shear-Alfvén modes, current-driven tearing modes, kinetic ballooning modes and micro-tearing modes.
A state-of-the-art electromagnetic algorithm has been developed and implemented in GTS [7,8] with the goal of achieving a robust, global electromagnetic simulation capability to attack the highly challenging electron transport problem in high-$\beta$ NSTX-U plasmas, and of serving as a first-principles-based module for integrated whole device modeling of turbulence/neoclassical/MHD physics.
On the physics front, GTS simulations have been applied to a wide range of experiments for various problems. Recent applications include discovering new turbulence sources responsible for plasma transport and understanding the underlying physics behind confinement scaling in spherical tokamak experiments [4,9], validating the physics of turbulence-driven plasma flow and first-principles-based model predictions of the intrinsic rotation profile against experiments [10], and studying plasma self-generated non-inductive current in turbulent fusion plasmas [5].
GTS has three levels of parallelism: a one-dimensional domain decomposition in the toroidal direction, dividing both the grid and the particles, a particle distribution within each domain, which further divides the particles between processors, and a loop-level multi-threading method. The domain decomposition and the particle distribution are implemented with MPI, while the loop-level multi-threading is implemented with OpenMP directives.
[1] W. X. Wang, Z. Lin et al., Phys. Plasmas 13, 092505 (2006)
[2] W. X. Wang, P. H. Diamond et al., Phys. Plasmas 17, 072511 (2010)
[3] A. J. Brizard & T. S. Hahm, Rev. Mod. Phys. 79, 421 (2007)
[4] W. X. Wang, S. Ethier et al., Phys. Plasmas 22, 102509 (2015)
[5] W. X. Wang et al., Proc. 24th IAEA Fusion Energy Conference, (2012), TH/P7-14
[6] T. S. Hahm, Nucl. Fusion 53 104026 (2013)
[7] E. A. Startsev & W. W. Lee, Phys. Plasmas 21, 022505 (2014)
[8] E. A. Startsev et al., Paper BM9.00002, APS-DPP Conference, San Jose, CA (2016)
[9] W. X. Wang, S. Ethier et al., Nucl. Fusion 55, 122001 (2015)
[10] B. A. Grierson, W. X. Wang et al., Phys. Rev. Lett. 118, 015002 (2017)
[#h47: W. Wang, 2018-07-11]
### HMHD: A 3D Extended MHD Code
A snapshot of plasmoid-mediated turbulent reconnection simulation showing the parallel current density and samples of magnetic field lines.
HMHD is a massively parallel, general purpose 3D extended MHD code that solves the fluid equations of particle density $n$ and momentum density $n\boldsymbol{u}$: $\partial_{t}n+\nabla\cdot\left(n\boldsymbol{u}\right)=0,$ $\partial_{t}\left(n\boldsymbol{u}\right)+\nabla\cdot\left(n\boldsymbol{uu}\right)=-\nabla\left(p_{i}+p_{e}\right)+\boldsymbol{J}\times\boldsymbol{B}-\nabla\cdot\boldsymbol{\Pi}+\boldsymbol{F},$ where $\boldsymbol{J}=\nabla\times\boldsymbol{B}$ is the electric current, $p_{e}$ and $p_{i}$ are the electron and ion pressures, $\boldsymbol{\Pi}$ is the viscous stress tensor, and $\boldsymbol{F}$ is an external force. The magnetic field ${\bf B}$ is advanced using Faraday's law $\partial_{t}\boldsymbol{B}=-\nabla\times\boldsymbol{E},$ where the electric field $\boldsymbol{E}$ is determined by a generalized Ohm's law that incorporates the Hall term and the electron pressure term in the following form: $\boldsymbol{E}=-\boldsymbol{u}\times\boldsymbol{B}+d_{i}\frac{\boldsymbol{J}\times\boldsymbol{B}-\nabla p_{e}}{n}+\eta\boldsymbol{J},$ with $\eta$ the resistivity and $d_{i}$ the ion skin depth. The set of equations is completed by additional equations for the electron and ion pressures $p_{e}$ and $p_{i}$, where several options are available. At the simplest level an isothermal equation of state $p_{i}=p_{e}=nT$ is assumed; at a more sophisticated level, the ion and electron pressures can be evolved individually with viscous and resistive heating, anisotropic thermal conduction, and thermal exchange between the two species; various additional options are available between these two levels. HMHD can flexibly switch the various effects in the governing equations on and off.
HMHD uses a single Cartesian grid, with the capability of variable grid spacing. The numerical algorithm [1] approximates spatial derivatives by finite differences with a five-point stencil in each direction. The time-stepping scheme has several options, including a second-order accurate trapezoidal leapfrog method as well as three-stage or four-stage strong stability preserving Runge-Kutta methods [2,3]. HMHD is written in Fortran 90 and parallelized with MPI for domain decomposition, augmented with OpenMP for multi-threading in each domain. HMHD has been employed to carry out large-scale 2D simulations of plasmoid-mediated reconnection in resistive MHD [4,5,6] and Hall MHD [7,8]. It has been used to carry out 3D self-generated turbulent reconnection simulations [9,10].
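As an illustration of the numerical ingredients named above, the sketch below shows a fourth-order central difference (a five-point stencil, here with periodic boundaries) and the classic three-stage strong-stability-preserving Runge-Kutta step in Shu-Osher form. These are standard textbook forms and should not be read as the exact discretization used in HMHD.

```python
import numpy as np

def ddx4(f, dx, axis=0):
    """Fourth-order central difference (five-point stencil), periodic boundaries."""
    return (-np.roll(f, -2, axis) + 8 * np.roll(f, -1, axis)
            - 8 * np.roll(f, 1, axis) + np.roll(f, 2, axis)) / (12.0 * dx)

def ssp_rk3_step(u, rhs, dt):
    """One step of the three-stage SSP Runge-Kutta scheme for du/dt = rhs(u)."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
```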
[1] P. N. Guzdar, J. F. Drake et al., Phys. Fluids B 5, 3712 (1993)
[2] S. Gottlieb, C.-W. Shu & E. Tadmor, SIAM Review 43, 89 (2001)
[3] R. J. Spiteri & S. J. Ruuth, SIAM J. Numer. Anal. 40, 469 (2002)
[4] Y.-M. Huang & A. Bhattacharjee, Phys. Plasmas 17, 062104 (2010)
[5] Y.-M. Huang & A. Bhattacharjee, Phys. Rev. Lett. 109, 265002 (2012)
[6] Y.-M. Huang, L. Comisso & A. Bhattacharjee, Astrophys. J. 849, 75 (2017)
[7] Y.-M. Huang, A. Bhattacharjee & B. P. Sullivan, Phys. Plasmas 18, 072109 (2011)
[8] J. Ng, Y.-M. Huang et al., Phys. Plasmas 22, 112104 (2015)
[9] Y.-M. Huang & A. Bhattacharjee, Astrophys. J. 818, 20 (2016)
[10] D. Hu, A. Bhattacharjee & Y.-M. Huang, Phys. Plasmas 25, 062305 (2018)
[#h48: Y-M. Huang, 2018-07-11]
### FOCUS: Flexible Optimized Coils Using Space curves
Modular coils for a conventional rotating elliptical stellarator. The color on the internal plasma boundary indicates the strength of mean curvature.
Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. The construction of the coils is only one component of modern fusion experiments, but, realizing that it is the currents in the coils that produce the “magnetic bottle” that confines the plasma, it is easy to understand that designing and accurately constructing suitable coils is paramount.
Conventional approaches simplify this problem by assuming that coils are lying on a defined toroidal “winding” surface [1]. The Flexible Optimized Coils using Space Curves (FOCUS) code [2] represents coils as one-dimensional curves embedded in three-dimensional space. A curve is described directly, and completely generally, in Cartesian coordinates as ${\bf x}(t) = x(t) \, {\bf i} + y(t) \, {\bf j} + z(t) \, {\bf k}$. The coil parameters, ${\bf X}$, are then varied to minimize a target function consisting of multiple objective functions, $$\chi^2({\bf X}) \equiv w_{B}\int_S \frac{1}{2} \left ( { \bf B} \cdot {\bf n} \right )^2 ds \ + \ w_{\Psi}\int_0^{2\pi} \frac{1}{2} \left( \Psi_\zeta \; - \; \Psi_o \right)^2 d\zeta \ + \ w_{L} \; \sum_{i=1}^{N_C} L_i + \cdots$$ These objective functions cover both “physics” requirements and “engineering” constraints, such as the normal magnetic field, the toroidal flux, the resonant magnetic harmonics, coil length, coil-to-coil separation and coil-to-plasma separation.
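The sketch below illustrates the basic FOCUS idea of a coil as a closed space curve ${\bf x}(t)$, here represented by a truncated Fourier series, together with one of the engineering penalties (total coil length). The coefficient layout and the midpoint quadrature are illustrative simplifications, not the FOCUS implementation.

```python
import numpy as np

def curve(c, t):
    """Closed curve from cosine/sine coefficient arrays c['cos'], c['sin'] of shape (3, N)."""
    n = np.arange(c['cos'].shape[1])                 # harmonic numbers 0..N-1
    arg = np.outer(n, t)                             # shape (N, len(t))
    return c['cos'] @ np.cos(arg) + c['sin'] @ np.sin(arg)   # shape (3, len(t))

def coil_length(c, nt=256):
    t = np.linspace(0.0, 2.0 * np.pi, nt, endpoint=False)
    xyz = curve(c, t)
    seg = np.roll(xyz, -1, axis=1) - xyz             # straight segments
    return np.linalg.norm(seg, axis=0).sum()         # discrete arc length

# Example: a unit circle in the x-y plane, whose length should be ~2*pi.
coeffs = {'cos': np.array([[0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]),
          'sin': np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])}
print(coil_length(coeffs))
```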
With analytically calculated derivatives, FOCUS computes the gradient and Hessian with respect to the coil parameters quickly and accurately. This allows FOCUS to employ numerous powerful optimization algorithms, such as Newton's method [3]. The figure (click to enlarge) shows FOCUS being used to design modular coils for a rotating elliptical stellarator. FOCUS has also been applied to analyze the sensitivity of the error field to coil deviations [4], to vary the shape of the plasma surface in order to simplify the coil geometry [5], and to design non-axisymmetric RMP coils for tokamaks.
[1] P. Merkel, Nucl. Fusion, 27, 867 (1987)
[2] Caoxiang Zhu, Stuart R. Hudson et al., Nucl. Fusion, 58, 016008 (2017)
[3] Caoxiang Zhu, Stuart R. Hudson et al., Plasma Phys. Control. Fusion 60, 065008 (2018)
[4] Caoxiang Zhu, Stuart R. Hudson et al., Plasma Phys. Control. Fusion 60, 054016 (2018)
[5] S. R. Hudson, C. Zhu et al., Phys. Lett. A, in press
[#h49: C. Zhu, 祝曹祥, 2016-08-05]
### RBQ1D
(a) Mode evolution for different levels of collisionality, featuring intermittency and steady saturation within RBQ1D depending on the effective collisionality, and (b) scaling of the saturation amplitude as a function of collisionality within RBQ1D. Calculations are done for the global TAE shown in panel (c).
The interaction between fast ions and Alfvénic eigenmodes has proved to be numerically expensive to model in realistic large tokamak configurations such as ITER. It is therefore attractive to explore reduced, numerically efficient models such as the Resonance Broadened Quasilinear model, which is used to build the code in its one-dimensional version (RBQ1D). The code is capable of modeling the fast ion distribution function in the direction of the canonical toroidal momentum while self-consistently evolving the amplitude of interacting Alfvénic modes. The theoretical approach is based on the model proposed by Berk et al [1] to address the resonant energetic particle interaction in both regimes of isolated and overlapping modes. RBQ1D is written by using the same structure of the conventional quasilinear equations for the fast ion distribution function but with the resonant delta function broadened primarily in the radial direction. The model was designed to reproduce the expected saturation level for non-overlapping modes from analytic theory. In the model, the nonlinear trapping (bounce) frequency is the fundamental variable for the dynamics in the vicinity of a resonance. The diffusion equation of RBQ1D, derived using action and angle variables, is [2,3] \begin{eqnarray} \frac{\partial f}{\partial t}=\pi\sum_{l,M}\frac{\partial}{\partial P_{\varphi}}C_{M}^{2}\mathcal{E}^{2}\frac{G_{m^{\prime}l}^{*}G_{ml}}{\left|\partial\Omega_{l}/\partial P_{\varphi}\right|_{res}}\mathcal{F}_{lM}\frac{\partial}{\partial P_{\varphi}}f_{lM}+\nu_{eff}^{3}\sum_{l,M}\frac{\partial^{2}}{\partial P_{\varphi}^{2}}\left(f_{lM}-f_{0}\right), \end{eqnarray} and the amplitude evolution satisfies \begin{eqnarray} C_{M}\left(t\right)\sim e^{\left(\gamma_{L}+\gamma_{d}\right)t}\Rightarrow\frac{dC_{M}^{2}}{dt}=2\left(\gamma_{L}+\gamma_{d}\right)C_{M}^{2}. \end{eqnarray} RBQ1D employs a finite-difference scheme for the numerical integration of the distribution function. It recovers several scenarios for amplitude evolution, such as pulsations, intermittency and quasi-steady saturation, see figure (a). The code is interfaced with the linear ideal MHD code NOVA, which provides eigenstructures, and the stability code NOVA-K, which provides damping rates and wave-particle interaction matrices for resonances in 3D constant-of-motion space. RBQ1D employs an iterative procedure to account for mode structure non-uniformities within the resonant island [4]. RBQ1D has been subject to rigorous verification exercises [4]. Both the wave-particle resonant diffusion and the Coulomb collisional scattering diffusion operator are thoroughly verified against analytical expressions in limiting cases. In addition, the dependence of the mode saturation level on the collisional scattering frequency is close to that expected from analytic theory, as shown in figure (b).
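The amplitude relation quoted above is simple enough to integrate directly. The sketch below does so with an explicit Euler step and compares the result with the exponential solution; the growth and damping rates are arbitrary illustrative numbers, not NOVA-K output.

```python
import numpy as np

gamma_L, gamma_d = 1.0e3, -6.0e2          # illustrative growth/damping rates (1/s)
dt, nsteps = 1.0e-5, 2000
C2 = 1.0e-8                               # initial squared amplitude

for _ in range(nsteps):
    C2 += 2.0 * (gamma_L + gamma_d) * C2 * dt   # dC^2/dt = 2(gamma_L + gamma_d) C^2

t_end = nsteps * dt
print(C2, 1.0e-8 * np.exp(2.0 * (gamma_L + gamma_d) * t_end))   # approximate agreement
```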
[1] H. L. Berk, B. N. Breizman et al., Nucl. Fusion 35, 1661 (1995)
[2] V. N. Duarte, Ph.D. Thesis, U. São Paulo (2017)
[3] N. Gorelenkov, V. Duarte et al., Nucl. Fusion 58, 082016 (2018)
[4] N. N. Gorelenkov, Invited Talk, APS-DPP (2018)
[#h50: V. Duarte, 2018-09-25]
### HYbrid and MHD simulation code (HYM)
(a-b) Representative co-helicity spheromak merging simulation results, magnetic field lines are shown; final configuration corresponds to a 3D Taylor eigenstate. (c-d) Hybrid simulations of FRC spin-up and instability for non-symmetric end-shortening boundary conditions, magnetic field lines and plasma density are shown.
The nonlinear 3-D simulation code (HYM) has been originally developed at PPPL to carry out investigations of the macroscopic stability properties of FRCs [1,2]. The HYM code has also been used to study spheromak merging [3], and the excitation of sub-cyclotron frequency waves by the beam ions in the National Spherical Torus Experiment (NSTX) [4,5]. In the HYM code, three different physical models have been implemented: (a) a 3-D nonlinear resistive MHD or Hall-MHD model; (b) a 3-D nonlinear hybrid scheme with fluid electrons and particle ions; and (c) a 3-D nonlinear hybrid MHD/particle model where a fluid description is used to represent the thermal background plasma, and a kinetic (particle) description is used for the low-density energetic beam ions. The nonlinear delta-f method has been implemented in order to reduce numerical noise in the particle simulations. The capabilities of the HYM code also include an option to switch from the delta-f method to the regular particle-in-cell (PIC) method in the highly nonlinear regime. An MPI-based, parallel version of the HYM code has been developed to run on distributed-memory parallel computers. For production-size MHD runs, very good parallel scaling has been obtained for up to 1000 processors at the NERSC Computer Center. The HYM code has been validated against FRC experiments [6], SSX spheromak merging experiments [3], and NSTX and NSTX-U experimental results related to stability of sub-cyclotron frequency Alfven eigenmodes [4,5].
The HYM code is unique in that it employs the delta-f particle simulation method and a full-ion-orbit description in a toroidal geometry. Second-order, time-centered, explicit scheme is used for time stepping, with smaller time steps for field equations (subcycling). Fourth-order finite difference and cylindrical geometry are used to advance fields and apply boundary conditions, while a 3D Cartesian grid is used for the particle pushing and gathering of fast ion density and current density. Typically, the total energy is conserved within 10% of the wave energy, provided that the numerical resolution is sufficient for the mode of interest. Both 3D hybrid simulations of spheromak merging and 3D simulations of the FRC compression require use of a full-f simulation scheme, and therefore a large number of simulation particles. The HYM code has been modified in order to allow simulations with up to several billions of simulation particles.
The initial equilibrium used in the HYM code is calculated using a Grad-Shafranov solver. The equilibrium solver allows the computation of MHD equilibria including the effects of toroidal flows [1]. In addition, the MPI version of the Grad-Shafranov solver has been developed for kinetic equilibria with a non-Maxwellian and anisotropic ion distribution function [7].
The ability to choose between different physical models implemented in the HYM code facilitates the study of a variety of physical effects for a wide range of magnetic configurations. Thus, the numerical simulations have been performed for both oblate and prolate field-reversed configurations, with elongation in the range E=0.5-12, in both kinetic and MHD-like regimes; in support of co-helicity and counter-helicity spheromak merging experiments; for rotating magnetic field (RMF) FRC studies; and investigation of the effects of neutral beam injection (NBI) ions on FRC stability. In addition, hybrid simulations using the HYM code have predicted for the first time, destabilization of the Global Alfven Eigenmodes (GAEs) by the energetic beam ions in the National Spherical Torus Experiment (NSTX) [8], subsequently confirmed both by the analytical calculations and the experimental observations, and suggested a new energy channeling mechanism explaining flattening of the electron temperature profiles at high beam power in NSTX [4].
[1] E. V. Belova, S. C. Jardin et al., Phys. Plasmas 7, 4996 (2000)
[2] E. V. Belova, R. C. Davidson et al., Phys. Plasmas 11, 2523 (2004)
[3] C. Myers, E. V. Belova et al., Phys. Plasmas 18, 112512 (2011)
[4] E. V. Belova, N. N. Gorelenkov et al., Phys. Rev. Lett. 115, 015001 (2015)
[5] E.D. Fredrickson, E. Belova et al., Phys. Rev. Lett. 118, 265001 (2017)
[6] S. P. Gerhardt, E. Belova et al., Phys. Plasmas 13, 112508 (2006)
[7] E. V. Belova, N. N. Gorelenkov & C.Z. Cheng, Phys. Plasmas 10, 3240 (2003)
[8] E. V. Belova, N. N. Gorelenkov et al., “Numerical Study of Instabilities Driven by Energetic Neutral Beam Ions in NSTX”, Proceedings of the 30th EPS Conference on Contr. Fusion and Plasma Phys., (2003) ECA Vol. 27A, P-3.102.
[#h51: E. Belova, 2016-08-05]
### DEGAS 2
Slices through the three-dimensional atomic (vertical) and molecular (horizontal) deuterium profiles in a simulation of data from the NSTX Edge Neutral Density Diagnostic.
Neutral atoms and molecules in fusion plasmas are of interest for multiple reasons. First, neutral particles are produced via the interactions of the plasma as it flows along open field lines to surrounding material surfaces. Unconfined by the magnetic field, the atoms and molecules provide a channel for heat transport across the field lines and also serve as a source of plasma particles via ionization. Second, atoms that penetrate well past the last closed flux surface can charge exchange with plasma ions to generate high energy neutrals that can strike the vacuum vessel wall, sputtering impurities into the plasma and possibly damaging the wall. Third, the most common means of fueling plasmas is with an external puff of gas. Finally, the light emitted by the neutral atoms and molecules in all of the above processes can be monitored and used as the basis for diagnostics.
Kinetic models of neutral particle transport are based on the Boltzmann equation. For the simple case of a single background'' species and a single binary collision process, this is: \begin{eqnarray*} \frac{\partial f({\bf r}, {\bf v}, t)}{\partial t} & + & {\bf v} \cdot \nabla_{\bf r} f({\bf r}, {\bf v}, t) \\ & = & \int d{\bf v}^{\prime} \, d{\bf V}^{\prime} \, d{\bf V} \sigma( {\bf v}^{\prime}, {\bf V}^{\prime}; {\bf v}, {\bf V}) | {\bf v}^{\prime} - {\bf V}^{\prime} | f({\bf v}^{\prime}) f_{b}({\bf V}^{\prime}) \\ & - & \int d{\bf v}^{\prime} \, d{\bf V}^{\prime} \, d{\bf V} \sigma( {\bf v}, {\bf V}; {\bf v}^{\prime}, {\bf V^{\prime}}) | {\bf v} - {\bf V} | f({\bf v}) f_{b}({\bf V}), \label{BE} \end{eqnarray*} where $f({\bf r}, {\bf v}, t)$ and $f_{b}({\bf r}, {\bf V}, t)$ are the neutral and background distribution functions, respectively, and $\sigma$ is the differential cross section for the collision process. The first (second) integral on the right-hand side represents scattering into (out of) the velocity $v$.
DEGAS 2 [1], like its predecessor, DEGAS [2], uses the Monte Carlo approach to integrating the Boltzmann equation, allowing the treatment of complex geometries, atomic physics, and wall interactions. DEGAS 2 is written in a “macro-enhanced” version of FORTRAN via the FWEB library, providing an object oriented capability and simplifying tedious tasks, such as dynamic memory allocation and the reading and writing of self-describing binary files. As a result, the code is extremely flexible and can be readily adapted to problems seemingly far removed from tokamak divertor physics, e.g., its use in simulating the diffusive evaporation of lithium in NSTX [3] and LTX [4]. DEGAS 2 has been extensively verified, as is documented in its User's Manual [5]. Experimental validation has been largely centered on the Gas Puff Imaging (GPI) technique for visualizing plasma turbulence in the tokamak edge. The validation against deuterium gas puff data from NSTX is described in the paper by B. Cao et al. [6] Analogous work with both deuterium and helium has been carried out on Alcator C-Mod. A related application of DEGAS 2 is in the interpretation of data from the Edge Neutral Density Diagnostic on NSTX [7] and NSTX-U. DEGAS 2 has been applied to many other devices, including JT-60U [8], ADITYA [9], and FRC experiments at Tri-Alpha Energy [10]. Neutral transport codes are frequently coupled to plasma simulation codes to allow a self-consistent plasma-neutral solution to be computed. Initially, DEGAS 2 was coupled to UEDGE [11]. More recently, DEGAS 2 has been coupled to the drift-kinetic XGC0 [12], and has been used in the development and testing of the simplified built-in neutral transport module in XGC1 [13]. A related project is a DEGAS 2-based synthetic GPI diagnostic for XGC1 [14].
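A toy illustration of the Monte Carlo idea behind this approach is sketched below: neutrals fly in straight lines and are removed after an exponentially distributed path length set by the background plasma. The uniform slab background and the numbers used here are invented for illustration and bear no relation to DEGAS 2 input or its collision data.

```python
import numpy as np

rng = np.random.default_rng(0)

def penetration_depths(n_particles, mean_free_path):
    """Depth at which each neutral, launched at x = 0 toward +x through a uniform
    background, undergoes its (single, absorbing) collision."""
    return rng.exponential(mean_free_path, size=n_particles)

depths = penetration_depths(100_000, mean_free_path=0.02)   # 2 cm mean free path
print(depths.mean())    # sample mean approaches the mean free path
```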
[1] D. P. Stotler & C. F. F. Karney, Contrib. Plasma Phys. 34, 392 (1994)
[2] D. Heifetz, D. Post et al., J. Comp. Phys. 46, 309 (1982)
[3] D. P. Stotler et al., J. Nucl. Mater. 415, S1058 (2011)
[4] J. C. Schmitt et al., J. Nucl. Mater. 438, S1096 (2013)
[6] B. Cao et al., Fusion Sci. Tech. 64, 29 (2013)
[7] D. P. Stotler et al., Phys. Plasmas 22, 082506 (2015)
[8] H. Takenaga et al., Nucl. Fusion 41, 1777 (2001)
[9] R. Dey et al., Nucl. Fusion 57, 086003 (2017)
[10] E. M. Granstedt et al., Presented at 60th Annual Meeting of the APS Division of Plasma Physics
[11] D. P. Stotler et al., Contrib. Plasma Phys. 40, 221 (2000)
[12] D. P. Stotler et al., Comput. Sci. Disc. 6, 015006 (2013)
[13] D. P. Stotler et al., Nucl. Fusion 57, 086028 (2017)
[14] D. P. Stotler et al., Nucl. Mater. Energy 19, 113 (2019)
[#h52: D. Stotler, 2016-08-05]
### M3D-C$^1$ Extended MHD code
The accurate calculation of the equilibrium, stability and dynamical evolution of magnetically-confined plasma is fundamental for fusion research. The most suitable, macroscopic model to address some of the most critical challenges confronting tokamak plasmas is given by the extended-magnetohydrodynamic (MHD) equations, which describe plasmas as electrically conducting fluids of ions and electrons. The M3D-C$^1$ code [?] solves the fluid equations: for example, the “single-fluid” model, in which the ions and electrons are considered to have the same fluid velocity and temperature, the dynamical equations for the particle number density $n$, the fluid velocity $\vec{u}$, the total pressure $p$ are \begin{eqnarray} \frac{\partial n}{\partial t} + \nabla \cdot (n \vec{u}) & = & 0 \\ n m_i \left( \frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot \nabla \vec{u} \right) & = & \vec{J} \times \vec{B} - \nabla p \color{blue}{ - \nabla \cdot \Pi + \vec{F}} \\ \frac{\partial p}{\partial t} + \vec{u} \cdot \nabla p + \Gamma p \nabla \cdot \vec{u} & = & \color{blue}{(\Gamma - 1) \left[Q - \nabla \cdot \vec{q} + \eta J^2 - \vec{u} \cdot \vec{F} - \Pi : \nabla u \right]} \end{eqnarray} together with a “generalized Ohm's law”, $\vec{E} = - \vec{u} \times \vec{B} \color{blue}{ + \eta \vec{J}}$; and with a reduced set of Maxwell's equations for the electrical-current density, $\vec{J} = \nabla \times \vec{B} / \mu_0$, and for the time evolution of the magnetic field, $\partial_t \vec{B} = - \nabla \times \vec{E}$. The manifestly divergence-free magnetic field, $\vec{B}$, and the fluid velocity, $\vec{u}$, are represented using stream functions and potentials, $\vec{B} = \nabla \psi \times \nabla \varphi + F \nabla \varphi$ and $\vec{u} = R^2 \nabla U \times \nabla \varphi + R^2 \omega \nabla \varphi + R^{-2} \nabla_\perp \chi$.
The “ideal-MHD” model is described by the above equations with the terms in blue set to zero. M3D-C$^1$ is also capable of computing the “two-fluid model”, which accommodates differences in the ion and electron fluid velocities: the generalized Ohm's law is augmented with $\color{red}{( \vec{J} \times \vec{B} - \nabla p_e - \nabla \cdot \Pi_e + \vec{F}_e ) / n_e}$, additional terms must be added to $\partial_t p$, and an equation for the “electron pressure”, $p_e$, must be included.
Physically-meaningful, “reduced” models provide accurate solutions in certain physical limits and are obtained, at a fraction of the computational cost, by restricting the scalar fields that are evolved: e.g., the two-field, reduced model is obtained by only evolving $\psi$ and $U$, and the “four-field”, reduced model by only evolving $\psi$, $U$, $F$, and $\omega$.
To obtain accurate solutions efficiently, over a broad range of temporal and spatial scales, M3D-C1 employs advanced numerical methods, such as: high-order, $C^1$, finite elements, an unstructured geometrical mesh, fully-implicit and semi-implicit time integration, physics-based preconditioning, domain-decomposition parallelization, and the use of scalable, parallel, sparse, linear algebra solvers.
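As a one-dimensional illustration of what the $C^1$ finite elements above provide, the cubic Hermite shape functions below interpolate both the value and the first derivative at the element ends, so the represented field and its slope are continuous across elements. M3D-C$^1$ itself uses two- and three-dimensional $C^1$ elements; this is only the simplest analogue.

```python
import numpy as np

def hermite_basis(s):
    """The four cubic Hermite shape functions at local coordinate s in [0, 1]."""
    return np.array([ 2*s**3 - 3*s**2 + 1,   # value at the left node
                      s**3 - 2*s**2 + s,     # slope at the left node
                     -2*s**3 + 3*s**2,       # value at the right node
                      s**3 - s**2 ])         # slope at the right node

print(hermite_basis(0.0))   # [1, 0, 0, 0]
print(hermite_basis(1.0))   # [0, 0, 1, 0]
```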
M3D-C$^1$ is used to model numerous tokamak phenomena, including: edge localized modes (ELMs) [1]; sawtooth cycles [2]; and vertical displacement events (VDEs), resistive wall modes (RWMs), and perturbed (i.e. three-dimensional, 3D) equilibria [3].
[1] N.M. Ferraro, S.C. Jardin & P.B. Snyder, Phys. Plasmas 17, 102508 (2010)
[2] S.C. Jardin, N. Ferraro et al., Com. Sci. Disc. 5, 014002 (2012)
[3] N.M. Ferraro, T.E. Evans et al., Nucl. Fusion 53, 073042 (2013)
[#h4: N. Ferraro, 2016-05-23]
### Stepped Pressure Equilibrium Code (SPEC)
Building on the theoretical foundations of Bruno & Laurence [1], that three-dimensional, (3D) magnetohydrodynamic (MHD) equilibria with “stepped”-pressure profiles are well-defined and guaranteed to exist, whereas 3D equilibria with integrable magnetic-fields and smooth pressure (or with non-integrable fields and continuous-but-fractal pressure) are pathological [2], Dr. S.R. Hudson wrote the “Stepped Pressure Equilibrium Code”, (SPEC) [3]. SPEC finds minimal-plasma-energy states, subject to the constraints of conserved helicity and fluxes in a collection, $i=1, N_V$, of nested sub-volumes, ${\cal R}_i$, by extremizing the multi-region, relaxed-MHD (MRxMHD), energy functional, ${\cal F}$, introduced by Dr. M.J. Hole, Dr. S.R. Hudson & Prof. R.L. Dewar [4,5], \begin{eqnarray} {\cal F} \equiv \sum_{i=1}^{N_V} \left\{ \int_{{\cal R}_i} \! \left( \frac{p}{\gamma-1} + \frac{B^2}{2} \right) dv - \frac{\mu_i}{2} \left( \int_{{\cal R}_i} \!\! {\bf A}\cdot{\bf B} \, dv - H_i \right) \right\}. \end{eqnarray} Relaxation is allowed in each ${\cal R}_i$, so unphysical, parallel currents are avoided and magnetic reconnection is allowed; and the ideal-MHD constraints are enforced at selected “ideal interfaces”, ${\cal I}_i$, on which ${\bf B}\cdot{\bf n}=0$. The Euler-Lagrange equations derived from $\delta {\cal F}=0$ are: $\nabla \times {\bf B} = \mu_i {\bf B}$ in each ${\cal R}_i$; and continuity of total pressure, $[[p+B^2/2]]=0$, across each ${\cal I}_i$, so that non-trivial, stepped pressure profiles may be sustained. If $N_V=1$, MRxMHD equilibria reduce to so-called “Taylor states” [6]; and as $N_V \rightarrow \infty$, MRxMHD equilibria approach ideal-MHD equilibria [7]. Discontinuous solutions are admitted. The figure (click to enlarge) shows an $N_V=4$ equilibrium with magnetic islands, chaotic fieldlines and a non-trivial pressure.
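The Euler-Lagrange condition $\nabla \times {\bf B} = \mu_i {\bf B}$ can be checked symbolically for a textbook Beltrami field; the field below is chosen only for the check and is not a SPEC solution.

```python
import sympy as sp

x, y, z, mu = sp.symbols('x y z mu')
B = sp.Matrix([sp.cos(mu * z), -sp.sin(mu * z), 0])

curl_B = sp.Matrix([sp.diff(B[2], y) - sp.diff(B[1], z),
                    sp.diff(B[0], z) - sp.diff(B[2], x),
                    sp.diff(B[1], x) - sp.diff(B[0], y)])

print(sp.simplify(curl_B - mu * B))   # the zero vector: curl B = mu B
```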
[1] Oscar P. Bruno & Peter Laurence, Commun. Pure Appl. Math. 49, 717 (1996)
[2] H. Grad, Phys. Fluids 10, 137 (1967)
[3] S.R. Hudson, R.L. Dewar et al., Phys. Plasmas 19, 112502 (2012)
[4] M.J. Hole, S.R. Hudson & R.L. Dewar, J. Plas. Physics 72, 1167 (2006)
[5] S.R. Hudson, M.J. Hole & R.L. Dewar, Phys. Plasmas 14, 052505 (2007)
[6] J.B. Taylor, Rev. Modern Phys. 58, 741 (1986)
[7] G.R. Dennis, S.R. Hudson et al.,Phys. Plasmas 20, 032509 (2013)
[#h5: S.R. Hudson, 2016-05-23]
### Designing Stellarator Coils with the COILOPT++, Coil-Optimization Code
Concept design for a quasi-axisymmetric stellarator fusion reactor with the modular coils straightened and spaced for ease of access. (Figure courtesy of Tom Brown.)
Electrical currents flowing inside magnetically-confined plasmas cannot produce the magnetic field required for the confinement of the plasma itself. Quite aside from understanding the physics of plasmas, the task of designing external, current-carrying coils that produce the confining magnetic field, ${\bf B}_C$, remains a fundamental problem; particularly for the geometrically-complicated, non-axisymmetric, “stellarator” class of confinement device [1]. The coils are subject to severe, engineering constraints: the coils must be “buildable”, and at a reasonable cost; and the coils must be supported against the forces they exert upon each other. For reactor maintenance, the coils must allow access to internal structures, such as the vacuum vessel, and must allow room for diagnostics; many, closely-packed coils that give precise control over the external magnetic field might not be satisfactory.
The COILOPT code [2] and its successor, COILOPT++ [3], vary the geometrical degrees-of-freedom, ${\bf x}$, of a discrete set of coils to minimize a physics+engineering, “cost-function”, $\chi({\bf x})$, defined as the surface integral over a given, “target”, plasma boundary, ${\cal S}$, of the squared normal component of the “error” field, \begin{eqnarray} \chi^2 \equiv \oint_{\cal S} \left( \delta {\bf B} \cdot {\bf n} \right)^2 da + \mbox{engineering constraints}, \end{eqnarray} where $\delta {\bf B} \equiv {\bf B}_C - {\bf B}_P$ is the difference between the externally-produced magnetic field (as computed using the Biot-Savart law [5]) and the magnetic field, ${\bf B}_P$, produced by the plasma currents (determined by an equilibrium calculation). Using mathematical optimization algorithms, by finding an ${\bf x}$ that minimizes $\chi^2$ we find a coil configuration that minimizes the total, normal magnetic field at the boundary; thereby producing a “good flux surface”.
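The external field ${\bf B}_C$ entering the cost function above comes from the Biot-Savart law. A minimal sketch for a coil discretized into straight segments is given below, using a midpoint-rule approximation; it is illustrative only and is not COILOPT++ code.

```python
import numpy as np

MU0_OVER_4PI = 1.0e-7   # SI units

def biot_savart(coil_pts, r, current=1.0):
    """Magnetic field at point r (shape (3,)) from a closed polygonal coil (N, 3)."""
    B = np.zeros(3)
    for a, b in zip(coil_pts, np.roll(coil_pts, -1, axis=0)):
        dl = b - a                      # segment vector
        rvec = r - 0.5 * (a + b)        # from segment midpoint to field point
        B += MU0_OVER_4PI * current * np.cross(dl, rvec) / np.linalg.norm(rvec) ** 3
    return B

# Field at the centre of a unit-radius loop: |B| should approach mu0*I/(2R) ~ 6.28e-7 T.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
loop = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
print(biot_savart(loop, np.array([0.0, 0.0, 0.0])))
```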
A new design methodology [3] has been developed for “modular” coils for so-called “quasi-axisymmetric” stellarators (stellarators for which the field strength appears axisymmetric in appropriate coordinates), leading to coils that are both simpler to construct and that allow easier access [4]. By straightening the coils on the outboard side of the device, additional space is created for insertion and removal of toroidal vessel segments, blanket modules, and so forth. COILOPT++ employs a “spline+straight” representation, with fast, parallel optimization algorithms (e.g. the Levenberg-Marquardt algorithm [6], differential evolution [7], ...), to quickly generate coil designs that produce the target equilibrium. COILOPT++ has allowed design of modular coils for a moderate-aspect-ratio, quasi-axisymmetric, stellarator reactor configuration, an example of which is shown in the figure (click to enlarge).
[1] Lyman Spitzer Jr., Phys. Fluids 1, 253 (1958)
[2] Dennis J. Strickler, Lee A. Berry & Steven P. Hirshman, Fusion Sci. & Technol. 41, 107 (2002)
[3] J.A. Breslau, N. Pomphrey et al., in preparation (2016)
[4] George H. Neilson, David A. Gates et al., IEEE Trans. Plasma Sci. 42, 489 (2014)
[5] Wikipedia, Biot-Savart Law
[6] Wikipedia, Levenberg–Marquardt Algorithm
[7] Wikipedia, Differential Evolution
[#h12: J. Breslau, 2016-05-23]
### STELLOPT: Stellarator Optimization and Equilibrium Reconstruction Code
Design of the National Compact Stellarator eXperiment (NCSX) optimized by STELLOPT. The shape of three-dimensional nested flux surfaces is optimized to have good MHD stability, plasma confinement and turbulence transport. External non-planar coils with relatively simple geometries, large coil-plasma space and coil-coil separation are also obtained.
One of the defining characteristics of the “stellarator” class of magnetic confinement device [1] is that the confining magnetic field is, for the most part, generated by external, current-carrying coils; stellarators are consequently more stable than their axisymmetric, “tokamak” cousins, for which an essential component of the confining field is produced by internal plasma currents. Stellarators have historically had degraded confinement, as compared to tokamaks, and require more-complicated, “three-dimensional” geometry; however, by exploiting the three-dimensional shaping of the plasma boundary (which in turn determines the global, plasma equilibrium), stellarators may be designed to provide optimized plasma confinement. This is certainly easier said-than-done: the plasma equilibrium is a nonlinear function of the boundary, and the particle and heat transport are nonlinear functions of the plasma equilibrium!
STELLOPT [2,3] is a versatile, optimization code that constructs suitable, magnetohydrodynamic equilibrium states via minimization of a “cost-function”, $\chi^2$, that quantifies how “attractive” an equilibrium is; for example, $\chi^2$ quantifies the stability of the plasma to small perturbations (including “ballooning” stability, “kink” stability) and the particle and heat transport (including “neoclassical” transport, “turbulent” transport, “energetic particle” confinement). The independent, degrees-of-freedom, ${\bf x}$, describe the plasma boundary, which is input for the VMEC equilibrium code [4]. The boundary shape is varied (using either a Levenberg-Marquardt [5], Differential-Evolution [6] or Particle-Swarm [7] algorithms) to find minima of $\chi^2$; thus constructing an optimal, plasma equilibrium with both satisfactory stability and confinement properties.
A recent extension of STELLOPT enables “equilibrium reconstruction” [8,9]. By minimizing $\chi^2({\bf x})$ $\equiv$ $\sum_{i} [ y_i - f_i({\bf x}) ]^2 / \sigma_i^2$, where the $y_i$ and the $f_i({\bf x})$ are, respectively, the experimental measurements and calculated “synthetic diagnostics” (i.e. numerical calculations that mimic signals measured by magnetic sensors) of Thomson scattering, charge exchange, interferometry, Faraday rotation, motional Stark effect, and electron-cyclotron emission reflectometry, to name just a few, STELLOPT solves the “inverse” problem of inferring the plasma state given the experimental measurements. (The $\sigma_i$ are user-adjustable “weights”, which may reflect experimental uncertainties.) Equilibrium reconstruction is invaluable for understanding the properties of confined plasmas in present-day experimental devices, such as the DIII-D device [10].
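A generic sketch of this kind of $\chi^2$ minimisation is shown below with a Levenberg-Marquardt least-squares driver; the forward model standing in for the synthetic diagnostics is a toy exponential and is not a STELLOPT diagnostic.

```python
import numpy as np
from scipy.optimize import least_squares

def synthetic_diagnostic(x, t):
    return x[0] * np.exp(-x[1] * t)          # toy stand-in for f_i(x)

t = np.linspace(0.0, 1.0, 20)
y = synthetic_diagnostic([2.0, 3.0], t)      # "measurements"
sigma = 0.05 * np.ones_like(y)               # assumed uncertainties

residuals = lambda x: (y - synthetic_diagnostic(x, t)) / sigma
fit = least_squares(residuals, x0=[1.0, 1.0], method='lm')
print(fit.x)                                 # recovers approximately [2, 3]
```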
[1] Lyman Spitzer Jr., Phys. Fluids 1, 253 (1958)
[2] S.P. Hirshman, D.A. Spong et al., Phys. Rev. Lett. 80, 528 (1998)
[3] D.A. Spong, S.P. Hirshman et al., Nucl. Fusion 40, 563 (2000)
[4] S.P. Hirshman, J.C. Whitson, Phys. Fluids 26, 3553 (1983)
[5] Donald W. Marquardt, SIAM J. Appl. Math. 11, 431 (1963)
[6] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Reading, Addison-Wesley, 1989)
[7] Riccardo Poli, James Kennedy & Tim Blackwell, Swarm Intell. 1, 33 (2007)
[8] S. Lazerson & I.T. Chapman, Plasma Phys. Control. Fusion 55, 084004 (2013)
[9] J.C. Schmitt, J. Bialek et al., Rev. Sci. Instrum. 85, 11E817 (2014)
[10] S. Lazerson and the DIII-D Team, Nucl. Fusion 55, 1 (2015)
[#h35: S. Lazerson, 2016-08-18] |
# To John Murray 23 October [1866]1
Down Bromley | Kent
Oct 23rd.
My dear Sir
I asked Dr Gray to tell Messrs. Ticknor & Field that you would let them have 250 copies of New Edit. of Origin at $\frac{1}{2}$ price; but a letter from Gray has crossed mine on the road.2 He sends the enclosed note from Messrs Ticknor, which I do not understand for I do not see what “Da & Co” means.3 Anyhow Gray says that they are afraid of the Appletons & will not publish an edit. of my new book. You see they recommend, as does Gray that you shd send copies for the American Market.4 But there is no hurry about this.
With respect to the Origin I do not suppose that Gray will communicate again with Messrs Ticknor & Field & it is not fair to ask him to take any more trouble. So you must do what you think best about sending copies of the Origin to some House in America.
Gray is going to review the new edit. & evidently thinks there would be some sale for it.5 It has just occurred to me that if Gray does communicate with Messrs. Ticknor & they accept the 250 copies it would be an awkward predicament if they were sent elsewhere. You must decide what had better be done.
My dear Sir | Yours very faithfully | Ch. Darwin
Please let me hear what you decide.—
## Footnotes
The year is established by the relationship between this letter and the letter from Asa Gray, 10 October 1866.
See letter from John Murray, 18 October [1866]. Gray had approached the Boston firm of Ticknor & Fields about publishing the fourth edition of Origin as well as Variation, but had recently informed CD that they had declined both offers (see letters from Asa Gray, 18 July 1866, 27 August 1866, and 10 October 1866).
The enclosure has not been found. The reference is to D. Appleton & Co., which CD usually referred to as ‘Appletons’. They were reluctant to bring out a new edition of Origin because of typesetting difficulties (see letter to Asa Gray, 10 September [1866] and n. 7).
No review by Gray of the fourth edition of Origin has been found.
## Bibliography
Origin: On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. By Charles Darwin. London: John Murray. 1859.
Variation: The variation of animals and plants under domestication. By Charles Darwin. 2 vols. London: John Murray. 1868.
## Summary
A letter from Asa Gray informs CD that Ticknor & Fields will not publish a new edition of Origin to compete with Appleton’s unrevised edition. They recommend sending copies of the English edition for the American market.
## Letter details
Letter no.
DCP-LETT-5253
From
Charles Robert Darwin
To
John Murray
Sent from
Down
Source of text
National Library of Scotland (John Murray Archive) (Ms.42152 ff. 151–152)
Physical description
4pp |
Wednesday, October 10, 2007
Whatever.
jean said...
So right....a sneer totally reverses any apology that was made.
Another clever one. Your mind is amazing!
Darren said...
I didn't get it till I saw your comment. Thanks!
Anonymous said...
Yeah, me neither, but that makes sense and is true!
Anonymous said...
A union can't result in the opposite of itself. Nice try though...
Anonymous said...
The union didn't result in the opposite of itself (which would have been -C); it resulted in the opposite of B. So she's in the clear :P
Anonymous said...
and vice versa, presumably. I read the diagram as saying "an apology may alleviate only part of the effect of the original sneer" - depending on how it was phrased. Possibly some of the impact of the sneer is never erased, however sincere and broad the apology. It's like defamation - some of the harm is never undone, even if the person defamed sues and wins, even if the defamer retracts and apologizes. This doesn't mean that retractions or apologies are useless, just not perfect.
Garrett said...
I kept trying to come up with SinSNEER apology by leaving out the B
(c) 2006, 2007, 2008 |
Chapter 10, Problem 17AT
### Contemporary Mathematics for Busin...
8th Edition
Robert Brechner + 1 other
ISBN: 9781305585447
Textbook Problem
# Calculate the missing information for the following loans. Round percents to the nearest tenth and days to the next higher day when necessary. 17. Principal: $13,000; Rate: 14%; Interest type: Ordinary; Interest: $960; Time (days) and Maturity value: to be determined.
To determine
To calculate: The time period of the loan (in days) and the maturity value, where the principal invested is $13,000, the ordinary rate of interest is 14%, and the interest amount is $960. Round the figure to the next higher day if required.
Explanation
Given Information:
Principal invested is $13,000, the rate of interest is 14%, and the interest amount is $960.
Formula used:
The formula to compute the time period of the loan (in days) is,
T=I/(P×R)×360
Where P is the principal amount, I is the amount of interest, R is the rate of interest, T is the time period.
Here I/(P×R) gives the time period in years; multiplying by 360 converts years into days.
The formula to calculate Maturity value is,
MV=P+I
Where, MV is Maturity value, P is the Principal Amount, I is the amount of interest.
Calculation:
Consider that the interest amount is $960, the principal invested is $13,000, and the rate of interest is 14%.
Simplify the rate of interest as
14%=14/100=0.14
Substitute $960 for the amount of interest, $13,000 for the principal, and 0.14 for the rate of interest in the formula T=I/(P×R)×360 to compute the time period (in days).
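A quick numerical check of the two formulas, under the stated ordinary-interest (360-day year) convention, is sketched below; the time period is rounded up to the next higher day as the problem requires.

```python
import math

P, R, I = 13000, 0.14, 960
T_days = I / (P * R) * 360      # time period in days, approximately 189.9
print(math.ceil(T_days))        # rounded up to the next higher day: 190
print(P + I)                    # maturity value MV = P + I = 13960
```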
What is dy/dx (derivative) if y=(3 + x^2)^15?
Mar 27, 2015
$\frac{\mathrm{dy}}{\mathrm{dx}} = 30 x {\left(3 + {x}^{2}\right)}^{14}$
Solution
You'll need the power rule: $\frac{d}{\mathrm{dx}} \left({x}^{n}\right) = n {x}^{n - 1}$
and the chain rule.
(This combination is sometimes called the generalized power rule.)
$\frac{d}{\mathrm{dx}} \left[{\left(g \left(x\right)\right)}^{n}\right] = n {\left(g \left(x\right)\right)}^{n - 1} \cdot g ' \left(x\right)$
or $\frac{d}{\mathrm{dx}} \left({u}^{n}\right) = n {u}^{n - 1} \frac{\mathrm{du}}{\mathrm{dx}}$
$\frac{d}{\mathrm{dx}} \left({\left(3 + {x}^{2}\right)}^{15}\right) = 15 {\left(3 + {x}^{2}\right)}^{14} \cdot 2 x = 30 x {\left(3 + {x}^{2}\right)}^{14}$ |
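For readers who want to verify this symbolically, a one-line check with sympy gives the same result.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.diff((3 + x**2)**15, x))   # 30*x*(x**2 + 3)**14
```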
# Tag Info
Accepted
### What is the definition of rollout' in neural network or OpenAI gym
The standard use of “rollout” (also called a “playout”) is in regard to an execution of a policy from the current state when there is some uncertainty about the next state or outcome - it is one ...
• 211
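A minimal sketch of a rollout in this sense is given below; it assumes the classic (pre-0.26) Gym step API in which `env.step` returns `(obs, reward, done, info)`, and the policy is any mapping from states to actions.

```python
def rollout(env, policy, gamma=1.0, max_steps=1000):
    """Run one episode with a fixed policy and return the (discounted) return."""
    state = env.reset()
    total, discount = 0.0, 1.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done, _ = env.step(action)
        total += discount * reward
        discount *= gamma
        if done:
            break
    return total
```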
### What is the definition of rollout' in neural network or OpenAI gym
The definition of "rollouts" given by Planning chemical syntheses with deep neural networks and symbolic AI (Segler, Preuss & Waller ; doi: 10.1038/nature25978 ; credit to jsotola): Rollouts ...
### Applications of Reinforcement Learning
It's true that using RL in robotics involves many challenges, including the usually high dimensionality of problem spaces, the cost and limitations of real-world sessions, the impossibility or ...
• 1,286
### PID tuning with (Deep) Reinforcement Learning
Many reinforcement learning methods require discrete actions. As you identified, increasing and decreasing the values is one option. If it is an adaptive PID, then it might take some time to ...
• 6,507
Accepted
### PID tuning with (Deep) Reinforcement Learning
I read a bit more and realized that in RL states and rewards accept a wide variety of interpretations and this is the real complexity nowadays of this learning problem. In case of PID values, problem ...
• 460
### What are myopic and non-myopic policies?
A myopic policy is one that simply maximises the average immediate reward. It is "myopic" in the sense that it only considers the single criterion. It has the advantage of being relatively easy to ...
• 1,019
Accepted
### Reinforcement Learning in global and local path planning for mobile robots and self-driving car
No, this is not applicable for a car; it is just an introductory, extremely simplified example. It is one step closer to a mobile robot than to a car, at least a mobile robot (at least some of them) ...
• 6,507
1 vote
### Reinforcement Learning in global and local path planning for mobile robots and self-driving car
1- No it is not. This example is the beginning of RL, while Self-driving cars are way much complex. In a simplified view, there are two main differences. state and action in the shown example are ...
• 96
1 vote
### Having Issues Importing and Using RLGlue locally with Python For Reinforcement Learning
This worked for me: from RLGlue.rl_glue import RLGlue This is only with the Coursera version of RLGlue
1 vote
### Reinforcement train butterfly robot in virtual reality?
Unreal, Unity and other game engines, Gazebo, Mujoco and other Physics engines are good at simulating multi body dynamics. There is no deep conceptual difference between them. You can use whichever ...
• 6,507
1 vote
### What type of rigid body rotation can best be learned by neural networks?
The main problem is the continuity of the representation. This paper explains it.
• 6,507
1 vote
### What type of rigid body rotation can best be learned by neural networks?
Every three-dimensional parameterization of rotations has singularity. So even if you would implement the kinematics directly you would still run into trouble for some rotations when using Euler ...
• 941
1 vote
### What are myopic and non-myopic policies?
You first have to be clear about the core RL terms, to understand myopic and non-myopic policies: Policy: Suppose each cell in the grid below is a state that the RL agent can be in. From each state, ...
• 143
1 vote
### Tuning Line follower PID constants with Q-learning
I am currently working on a very similar project, the only difference is that I am using a simulation package (MATLAB Simmechanics) where I have modeled a mobile robot with 2 actuated wheels and a ...
• 244
1 vote
Accepted
### Has hierarchical learning been embodied in a robot before?
HRL has been embodied in a robot in multiple cases. In a reaching, shelving robot. In a robot learning how to stand-up. In robot navigation. However, how HRL applied in each of these cases varies. ...
• 161
# $m$-mix: Generating hard negatives via multiple samples mixing for contrastive learning
Negative pairs are essential in contrastive learning, which plays the role of avoiding degenerate solutions. Hard negatives can improve the representation ability on the basis of common negatives. Inspired by recent hard negative mining methods via mixup operation in vision, we propose $m$-mix, which generates hard negatives dynamically. Compared with previous methods, $m$-mix mainly has three advantages: 1) adaptively chooses samples to mix; 2) simultaneously mixes multiple samples; 3) automatically and comprehensively assigns different mixing weights to the selected mixing samples. We evaluate our method on two image classification datasets, five node classification datasets (PPI, DBLP, Pubmed, etc), five graph classification datasets (IMDB, PTC_MR, etc), and downstream combinatorial tasks (graph edit distance and clustering). Results show that our method achieves state-of-the-art performance on most benchmarks under self-supervised settings.
# Quantification of textual comprehension difficulty with an information theory-based algorithm
## Abstract
Textual comprehension is often not adequately acquired despite intense didactic efforts. Textual comprehension quality is mostly evaluated using subjective criteria. Starting from the assumption that word usage statistics may be used to infer the probability of successful semantic representations, we hypothesized that textual comprehension depended on words with high occurrence probability (high degree of familiarity), which is typically inversely proportional to their information entropy. We tested this hypothesis by quantifying word occurrences in a bank of words from Portuguese language academic theses and using information theory tools to infer degrees of textual familiarity. We found that the lower and upper bounds of the database were delimited by low-entropy words with the highest probabilities of causing incomprehension (i.e., nouns and adjectives) or facilitating semantic decoding (i.e., prepositions and conjunctions). We developed an openly available software suite called CalcuLetra for implementing these algorithms and tested it on publicly available denotative text samples (e.g., articles, essays, and abstracts). We propose that the quantitative model presented here may apply to other languages and could be a tool for supporting automated textual comprehension evaluations, and potentially assisting the development of teaching materials or the diagnosis of learning disorders.
## Introduction
Textual comprehension is a cornerstone of learning that contributes to a range of educational and developmental phenomena, including language acquisition, building vocabulary and literacy (Nowak et al., 2000). Currently, a major limitation in the field is that skills and competencies associated with textual comprehension tend to be evaluated via the use of subjective criteria, typically related to outcomes, such as spelling test scores (Sigman et al., 2014).
Textual comprehension is often defined as mental representations made by semantic connections in a text wherein codes are translated and combined (Kendeou et al., 2014). This process is determined not by concepts of reading difficulty but by rules pertaining to statistical linguistics. Features such as entropy, fatigue and ultimately comprehension can thus be gauged from the mathematical relations of word usage. Zipf (1935) established specific laws on word frequency over a body of textual information. He observed that only a few words are used often, while a much larger number are used very rarely. These rules were mathematically formalized, and it was noted that the most common word in a text corpus occurred twice as often as the second most frequently used word, and this pattern repeated itself for subsequent frequency rankings (the word in position n appears 1/n times as often as the most frequent one).
Shannon (1948) further described how statistical linguistics impact on the user and communicator, specifically in relation to entropy and fatigue. He proposed a five-point law by which accuracy and scores can be estimated and configured owing to the conformity of the text (Shannon, 1948). First, the information source which produces a message or sequence of messages must be communicated to a receiving terminal. Second, a transmitter that operates on the message being transmitted must in some way produce a signal transference suitable for transmission over the given channel. Third, the channel itself must then act as a means to mediate the transmitted signal from the transmitter to the receiver. Fourth, the receiver ordinarily performs the inverse operation of the transmitter, in turn reconstructing the message sent from the original signal. The fifth and final element was the destination, which was comprised of the intended person, reader or observer. These functions are the overarching process framework in which Zipf’s equation on textual comprehension operates.
Information theory-based studies have thus sought to measure the amount of information, informational entropy (Montemurro and Zanette, 2002, Montemurro and Zanette, 2011, Debowski, 2011, Kalimeri et al., 2015) and semantic information in a message (Bar-Hillel and Carnap, 1952, D’Alfonso, 2011, Bao et al., 2011, Marcelo and Damian, 2010), showing—from databases of texts—the universality in entropy across languages and a link between semantic information and statistical linguistics (Montemurro and Zanette, 2016, Montemurro, 2014). For example, linguistic analysis has found that the letter a is frequently used in English; however, it contains little information and has high entropy as the more similar are the letter occurrences, the higher is the entropy (Rosenfeld, 2000). However, these principles have yet to be systematically applied for the development of software suites capable of objectively extracting statistical patterns from text databases and using them to infer textual comprehension.
Another issue in the field is the scarcity of instruments developed for non-English languages. In this study, we focused on the Portuguese language. Some tools designed for English, such as the Flesch-Kincaid grade level and Coh-Metrix measurements, have been adapted to evaluate textual intelligibility in Portuguese (Gasperin et al., 2009). Although English has complex computational constructions, it has been observed that these are used infrequently (Thorne and Szymanik, 2015). However, despite these tools and the recent availability of Portuguese words repositories (Ferreira, 2014, Wiktionary, 2017, Mark, 2011, Github, 2017), there have been relatively few systematic analyses of the correlation between letter occurrences and syllabic structures in Portuguese (Hartmann et al., 2014, Soares et al., 2017, Rabêlo and Moraes, 2008).
Using an information theory-based approach, we developed a software tool (CalcuLetra) for quantifying average textual reading comprehension difficulty, measured under the premise that the words most frequently used in writing are the words most familiar to readers. We hypothesized that the level of reading comprehension was related to high probability (high degree of familiarity), low-entropy words, which was tested using information theory’s objective quantitative parameters for given word occurrences in a bank of academic theses in Portuguese. We consider entropy as a degree of uncertainty about textual comprehension. Therefore, a highly comprehensible text would be defined by having many low-entropy, high probability words. We also collected samples of Portuguese denotative texts (e.g., articles, essays, and abstracts) as a test database for the developed software. We propose that the quantitative model presented in this paper can objectively evaluate the relative complexity of a given text, thus, constituting an objective evaluation tool for textual semantic decoding and comprehension level complexity.
## Methods
### Model parameters
We assumed each word’s frequency f and occurrence probability P determined a corresponding knowledge probability Pk and unknowledge probability Pu from the correlation of scales between 0 ≤ P ≤ 4.987 and 0 ≤ Pk ≤ 100%, with the average occurrence probability being Pa = 0.003 (Pk = 50%) and Pu = 100−Pk. Given their irregular distribution, the words were divided into two groups G1 (50% ≤ Pk ≤ 100%) and G2 (0 ≤ Pk ≤ 50%) to avoid distortions in the mean. Thus, using linear interpolation, for G1, $$Pk = \frac{(P - 0.003) \times 50}{4.987} + 50$$ and for G2, $$Pk = \frac{P \times 50}{0.003}$$. The information quantity I = log(1/Pu) and information entropy h = I×Pu were also measured for each word (Rosenfeld, 2000). Equivalence formulas had to be obtained because different letters can have similar entropies but generate words with different degrees of familiarity. This implies that the amount of information and the entropy could be the same in texts with different comprehension difficulty classifications. For example, the isolated letters c and m have similar informational entropy, so their amount of information and entropy can be the same whether a text is easily comprehensible or not. However, in a given text there may be more unusual words containing the letter c than the letter m, with different entropies. Thus, the variables Pk and Pu needed to be formulated to establish entropy in semantic-communication terms. It was necessary to correct any misrepresentation of the words that were more likely to appear in texts at lower reading levels. Such inferences were confirmed using probabilistic averages for text quantification in a relatively broad database rather than focusing on isolated words.
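The per-word quantities above can be transcribed directly into code. In the sketch below, the piecewise-linear interpolation for Pk follows the two formulas as written; the logarithm base and whether Pu enters the information measure as a percentage or as a fraction are not specified in the text, so both are treated as explicit assumptions.

```python
import math

P_MAX, P_AVG = 4.987, 0.003      # upper bound and average occurrence probability

def word_measures(P, log_base=10):
    """Return (Pk, Pu, I, h) for a word with occurrence probability P."""
    if P >= P_AVG:                                   # group G1
        Pk = (P - P_AVG) * 50.0 / P_MAX + 50.0
    else:                                            # group G2
        Pk = P * 50.0 / P_AVG
    Pu = 100.0 - Pk                                  # percentage, as in the text
    I = math.log(1.0 / (Pu / 100.0), log_base)       # Pu taken as a fraction here (assumption)
    h = I * (Pu / 100.0)
    return Pk, Pu, I, h

print(word_measures(0.003))   # Pk = 50% at the average occurrence probability
```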
### CalcuLetra
All equations, words and their respective quantitative parameters were stored in the software suite CalcuLetra, which we have made openly available (https://github.com/LouiseBogea/CalcuLetra). For a text, it shows the number of letters, words, sentences and paragraphs, as well as the letter or word frequency f and occurrence probability P. For each word it also shows the knowledge probability Pk and unknowledge probability Pu, information quantity I and entropy h. For each variable, it shows the minimum, maximum and average values. Thus, the average Pk of a text was calculated from the sum of each word’s Pk. The stored Pk values did not change when new text samples were entered. Words in the entered texts that are not stored in CalcuLetra are automatically excluded to avoid distortions in the quantification analysis. A .xls or .csv file can be imported to expand the CalcuLetra database or to reset it. Thus, we collected denotative text samples in Portuguese (n = 3330) as test data and inserted them into CalcuLetra to validate the quantitative model. Further information about the text sample sources is given in Supplementary Table S3.
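The text-level score can be illustrated with a few lines: words are looked up in a stored table of knowledge probabilities, words absent from the table are skipped (as in CalcuLetra), and the average Pk of the remaining words is returned. The lookup table below is a toy stand-in, not the CalcuLetra database.

```python
import re

def average_pk(text, pk_table):
    words = re.findall(r"[a-záéíóúâêôãõçü]+", text.lower())
    known = [pk_table[w] for w in words if w in pk_table]
    return sum(known) / len(known) if known else None

toy_table = {"de": 95.0, "casa": 62.0, "entropia": 12.0}
print(average_pk("A entropia de casa.", toy_table))   # mean Pk of the three known words
```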
### Sample selection
Academic theses published during the last 10 years (2007–2017) were manually collected (n = 1032) from the Brazilian Digital Library of Theses and Dissertations and in the Digital Library of Theses and Dissertations at the University of São Paulo (USP). Their word usage statistics were fully quantified (n = 33,101) and used as training data. Further information about the repositories is shown in Supplementary Tables S1 and S2. Foreign and misspelled words or words not included in Portuguese dictionaries were manually removed as were figures, proper nouns, numbers and other symbols (e.g., () [] { }? + // = *<>:; ‘ % @! # - & |) to avoid any discrepancies in the statistical analyses. Words were categorized as either denotative or connotative based on their explicit definitions in dictionaries. We chose academic theses (denotative language) to avoid analysis errors caused by the figurative meaning of words.
### Model application
Training words were stored in the information-processing software CalcuLetra, which was developed for this study using C#, Microsoft Visual Studio 2012, with SQL Server Management Studio 2012 being used to manage the database. SQL was used to filter out words that had 17 or more letters, those that occurred less than 4 times and those that had an occurrence probability of P ≤ 0.0001 due to misspelling or mixed words. Duplicate words, prefixes and suffixes were also removed by their very low occurrence to avoid distortions in the mean (e.g., pós-, sub-, vice-, intra-, inter-, pro-, pre-, anti-, ante-, -mente, -issimo, -oso, -osa).
### Validation and data analysis
BioEstat 5.3 was used to evaluate comparisons within the training database and between quantitative parameter analyses, with the emphasis placed on Pk and h variations (Ayres et al., 2007). A decision tree was used to establish the Pk and Pu ranges in relation to entropy (Breiman, 2017). This is a commonly used predictive/exploratory alternative for regression and classification problems, as decision trees can handle both categorical and numerical variables using both classification and regression. The decision trees algorithm assisted in predicting and classifying the data based on the values of the response variables. It also executed successive binary partitions to obtain subsets that are increasingly homogeneous concerning the response variable and placed the results in a hierarchical tree of decision rules for prediction or classification, with each data point partition being a node. There are internal nodes and terminal nodes, which are called leaves. The main steps in the algorithm were: (1) generating the root node (from the entire database); (2) determining the best way to divide the data into two groups (find the nodes to be divided); (3) choosing an attribute that best classified the data by considering the single paths between the root nodes and each subsequent node; and (4) creating and designing the nodes and the associated branches. The algorithm then returned to step 2.
If all cases in each terminal node show identical values, the impurity of the node will be minimal; the homogeneity will be maximal and the prediction will be perfect. One way to control division is to allow the division to continue until all terminal nodes are pure or contain no more than a specified minimum number of cases or objects. In most cases, the results are summarized in a relatively simple tree that is easy to interpret. Thus it is useful not only for rapid classification of new observations but can also often produce a much simpler model to explain why these observations are classified or predicted in a particular way.
Because decision trees are nonparametric, they do not assume an underlying distribution and can provide a relatively unbiased initial inspection of the training data, providing an image of the structure and thereby improving the interpretation of the results. Here, the quadratic relationships between entropy and Pk were modeled using a Gaussian regression model (Montgomery et al., 2006). Deviation measures were found, including the mean absolute deviation (MAD), the simple mean of the absolute differences between real and adjusted values; mean square deviation (MSD), the simple mean square of the differences between actual and adjusted values; and mean absolute percent error (MAPE), the percentage error between the actual and adjusted values.
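For reference, the three deviation measures named above can be written out explicitly for arrays of observed values y and fitted values yhat.

```python
import numpy as np

def mad(y, yhat):
    return np.mean(np.abs(y - yhat))                 # mean absolute deviation

def msd(y, yhat):
    return np.mean((y - yhat) ** 2)                  # mean square deviation

def mape(y, yhat):
    return 100.0 * np.mean(np.abs((y - yhat) / y))   # mean absolute percent error
```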
As the test data was inserted in CalcuLetra, each text’s readability was determined by each word’s objective parameters from the stored database and their average Pk compared to quantify the comprehension difficulty. To identify the approximate points at which there was a change in the slope between the number of denotative texts inserted in CalcuLetra and the average knowledge probabilities, a binary segmentation method was used that identified the changes in the mean and variance of the knowledge probabilities as the number of texts grew (Scott and Knott, 2006). For each change point interval, the relationship between the knowledge probabilities and texts was described using linear regressions. For all analysis, including tables and figures, we used R (version 3.2.4) (R Core Team, 2013).
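A minimal sketch of the binary-segmentation idea (finding the split that most reduces within-segment variability) is given below in plain NumPy; it is an illustration of the principle rather than the exact routine used in the study. Full binary segmentation would apply the same search recursively to each resulting segment, and the paper also tracks changes in variance.

```python
# Illustrative single change-point search by binary segmentation.
import numpy as np

def best_split(signal):
    """Return the split index giving the largest reduction in total
    within-segment variance, together with that reduction."""
    n = len(signal)
    total_cost = np.var(signal) * n
    best_i, best_gain = None, 0.0
    for i in range(2, n - 2):                       # keep both segments non-trivial
        left, right = signal[:i], signal[i:]
        cost = np.var(left) * len(left) + np.var(right) * len(right)
        gain = total_cost - cost
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain

# Example: a mean shift around index 60 is recovered as the best split.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0.2, 0.02, 60), rng.normal(0.5, 0.02, 40)])
print(best_split(series))
```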
## Results
### Representative word groups
We considered the upper and lower bounds of the training database as the most relevant (Fig. 1), as they showed the words with a Pk ≥ 0.7 (G1) that contributed to comprehension and those with a Pu ≥ 0.3 (G2) which were likely to cause a communication failure.
In the dispersion chart that connects the data pairs, we found a positive and significant correlation (r = 0.99, p-value < 0.001) between the entropy and Pk, i.e., the greater the entropy, the greater the Pk and vice versa. Conversely, we found a significant negative correlation (r = −0.99, p-value < 0.001) between the entropy and Pu, i.e., the higher the entropy, the lower the Pu and vice versa.
In Fig. 2, we evaluated the cut-offs for Pk (a,c) and Pu (b,d), for which the entropy distribution was homogeneous by employing a machine learning regression tree technique, with the stipulation that the minimum number for the group division be 5000 to avoid an overly large tree, facilitating data interpretation. Classification and regression tree algorithms aim to achieve the best possible predictive accuracy. Thus, the division at each node was found and generated improvement in forecast accuracy by finding sub-groups in which the groups are homogeneous (intra-group) and heterogeneous (inter-group). Operationally, the most accurate forecast was defined as the one with the lowest rate of classification errors.
As Fig. 2 shows, from the cut-off points identified by the regression tree (panels 2a, b), entropy was described in the bands for knowledge probability (Table 1) and for the probability of being unknown (Table 2). Also, descriptive statistics for each variable are represented in boxplots (panels 2c, d). Further information about boundaries in boxplots is shown in Supplementary Table S4.
To describe the relationship between the entropy and the knowledge probability, we used a quadratic regression model to determine the knowledge probability value that maximized entropy, with the knowledge probability variable being centered on the average. The adjusted model was found to have a variance inflation factor of 6.24, indicating the absence of multicollinearity, and thus was considered to be well adjusted (Fox, 2008). Therefore, the model equations—Eq. (1) for entropy h and knowledge probability Pk and Eq. (2) for unknown probability Pu—were defined as follows:
$$\overline{Pk} = Pk - 3.210436,\qquad \widehat{h} = 4.6457866 + 1.4350649\,\overline{Pk} - 0.0101171\,\overline{Pk}^{\,2} \qquad (1)$$

$$\overline{Pu} = Pu - 96.78956,\qquad \widehat{h} = 4.6457866 - 1.4350649\,\overline{Pu} - 0.0101171\,\overline{Pu}^{\,2} \qquad (2)$$
In this sense, when Pk increased by 10 units, h had an average increase of approximately 13.92 units. This function allowed for the maximum points to be identified and the entropy to be estimated from the knowledge probability. It was found that the position that maximized entropy was a knowledge probability of 70.92% (Pu = 30.74%); after this value, as Pk or Pu increased, h tended to decrease (Table 3).
Figure 3 shows the comparison of actual and adjusted values for Pk (a) and Pu (b). Note that after the inflection point, the adjusted curve did not exactly follow the actual curve because of the small number of points examined in this range. This was the main limitation of the training database; however, despite the missing values in G1 (Pk ≥ 0.7), it was possible to infer that h tended to decrease for familiar words.
We found MAD and MSD estimates to be low, suggesting proximity between the adjusted values and the actual data. Also, the MAPE estimate was that the forecast was on average 3.009% incorrect, which is satisfactory in terms of fit quality.
### Quantification of the texts in CalcuLetra
Test data (e.g., articles, essays, abstracts, and news) were entered in CalcuLetra (n = 3330) in order to extract the quantitative word values based on the stored objective parameters in the training database (Fig. 4). This process inferred the difficulty of semantic comprehension by the average Pk values. Texts were already organized syntactically in standard Portuguese. We could set Pk ≤ 0.3 for texts with a higher chance of causing incomprehension.
We used a binary segmentation method to identify the changes in the mean and variance of Pk as the number of texts grew (Scott and Knott, 2006). Similar to decision trees, this method searches for the values that make the groups more homogeneous in relation to the mean and variance internally and are heterogeneous between the groups. In this way, we identified points 24, 551, 2724, 3152, 3295, and 3330. Note that within each change point range, the relationship between Pk and the texts was strictly linear. Therefore, the six linear regressions were adjusted according to the time intervals identified in the change points. When the number of texts increased by 10 units in the interval [1; 24], Pk increased by 3.00 units on average; in the interval (25; 551), Pk increased by 0.03 units on average; in the interval (552; 2724), Pk increased by 0.10 units on average; in the interval (2725; 3152), Pk increased by 0.03 units on average; in the interval (3153; 3295), Pk increased by 0.30 units on average; in the interval (3296; 3330), Pk increased by 3.00 units on average. In all models, the R² value was perfect as one variable thoroughly explains the variability in the other.
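To make the per-interval fits concrete, a small NumPy sketch is shown below; the series is a synthetic stand-in constructed from the change points and slopes reported above, not the actual CalcuLetra output.

```python
# Illustrative piecewise-linear refit over the reported change-point intervals.
import numpy as np

breaks = [0, 24, 551, 2724, 3152, 3295, 3330]          # change points reported above
slopes = [0.300, 0.003, 0.010, 0.003, 0.030, 0.300]    # reported increases per text

# Build a stand-in Pk series that follows those slopes exactly.
steps = np.concatenate([np.full(hi - lo, s)
                        for (lo, hi), s in zip(zip(breaks[:-1], breaks[1:]), slopes)])
pk = steps.cumsum()
texts = np.arange(1, len(pk) + 1)

# Re-fit a line on each interval and recover the reported slopes.
for lo, hi in zip(breaks[:-1], breaks[1:]):
    slope, _ = np.polyfit(texts[lo:hi], pk[lo:hi], deg=1)
    print(f"texts {lo + 1}-{hi}: Pk rises by about {10 * slope:.2f} per 10 texts")
```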
## Discussion
Textual complexity analyses often include several domains (Crossley et al., 2017). In this paper, comprehension difficulty was quantified in order to infer an objective measure of the complexity of denotative written material in Portuguese using texts related to educational language standards. Text comprehension was estimated based on the correlations between the semantic representations of the words and their respective probabilities to ensure that the textual complexity was related to the semantic entropy of the words and their familiarity degree. To the best of our knowledge, this was the first investigation on Portuguese textual comprehension difficulties using quantitative methods and objective parameters. The use of a quantitative model that incorporates comprehension in terms of word complexity measures may help overcome many of the problems associated with the reliance on qualitative standards and subjective evaluations (Gathercole and Alloway, 2006). Importantly, entropy analysis in our model revealed that the relationship between word comprehensibility and probability is not linear (Smith, 2012). This means that word features, such as a word’s relationship to the sentence, is non-linearly related to textual features, such as occurrence. The model presented here can elucidate that relationship through the use of regression analysis (Gastón and García-Viñas, 2011).
We chose academic texts due to their potentially more comprehensive vocabulary and lower use of figurative language. We acknowledge that using a corpus of academic words may have biased our sampling, as it is possible that university students intentionally use unusual words to demonstrate a more elaborate vocabulary. There is a possibility that we did not accurately sample more common words that would exhibit higher degrees of familiarity. However, we highlight that one can use any corpus (including from other languages) as training data for CalcuLetra to quantify textual comprehension difficulty. Corpus selection by itself does not directly impact the validity of our quantitative model, and future studies may systematically address how the training dataset can affect comprehension estimates.
Our results found that textual labor was quantifiable by a low degree of familiarity, a low amount of information, and low entropy, which together predicted a high incomprehension probability (summarized in Fig. 5). In a study of word frequency in books, the amount of information in 300 to 3000 unpredictable words was reported to be 0.2 bits per word, which was congruent with the findings in our dataset (Marcelo and Damian, 2010). Our results are also consistent with several available Portuguese word-frequency lists (Ferreira, 2014, Wiktionary, 2017, Mark, 2011, Github, 2017).
Previous studies have shown that complex aspects of information organized in symbolic sequences can be quantified (Montemurro and Zanette, 2002, Montemurro and Zanette, 2011, Debowski, 2011, Kalimeri et al., 2015). In this work, we ranked semantic decoding difficulties using established equivalence formulas. Differently from other research that sought to quantify the entropy of word order in sentences, the results of which showed that there was constant relative entropy despite vocabulary diversity, this study set the average ratio for the Portuguese denotative text comprehension (Montemurro, 2014). In using an average ratio for text comprehension, deviations due to vocabulary diversity can be overcome. While this may suggest that the parameters may only function and possibly depend on the restricted description of a language, this can be altered to suit any specified vernacular. In this sense, the findings of the research are not restricted to Portuguese comprehension, and parameters can be applied to any linguistic modality that incorporates a defined semantic code (language) (Zwaan, 2016). This generalizability is because, despite the known differences in structure and vocabulary of any given language system, the impact of word order within the structure of the language is thought to be a statistical linguistic universal (Fox, 2008). In deviating from reliance on word ordering and concentrating instead upon a quantified flow of informatics sequencing, word usage and cognitive forms such as repetition and rehearsal are acknowledged in the function of comprehension (Zwaan, 2016). As described, we can speculate on several applications for our software suite. For example, an objective quantification of textual comprehensibility could be used to the development of diagnostic and therapeutic tools for learning disorders such as dyslexia (Harley and O’Mara, 2016). Thus, the use of a quantitative tool that identifies words in relation to their embedded discourse might provide a valuable objective guideline for the quantitative evaluation of learning disorder severity, considering current therapeutic practices are based mainly on qualitative measures (Kirkby et al., 2011). In addition, by using the regression analysis technique employed in this study, it may be possible to estimate the optimal rate of acquiring a new word from textual representations concerning its occurrence throughout different texts, which could be used as benchmarks for clinical studies and diagnostics (Oliveira and Gomes, 2010).
Our software might also be used in standard education. In the latest edition of the Programme for International Student Assessment (2015), Brazil ranked 59th in reading out of 70 countries, revealing that many students have insufficient comprehension skill. The National Indicator of Functional Literacy (INAF) reports that 68% of the 30.6 million Brazilians between 15 and 64 years old who have studied up to 4 years, and 75% of the 31.1 million who have studied up to 8 years, remain at basic literacy levels. The use of inadequate teaching material and improper screening for learning disabilities are likely contributors to these failures in public education outcomes. Specifically, the annual National Textbook Program (PNLD) has been heavily criticized, especially regarding the creation and selection of textbooks (Di Giorgi et al. 2014). Using CalcuLetra, teachers might perhaps select more appropriate course material based on quantitative entropy parameters (Cidrim and Madeiro, 2017). Of course, which entropy levels might be best suited for specific grades or educational goals would have to be empirically determined. CalcuLetra could also be applied for the creation of textbooks, essay materials, and the evaluation of automated textual comprehension and student production. It could also be used to quantify the functional acquisition of specific vocabularies using different training datasets. While our model was not yet been fitted nor compared to the comprehension of actual human readers, it is plausible that it could be used to monitor the development of a given student’s textual comprehension skill by analyzing the production of the student with progressively complex training datasets inserted in CalcuLetra. Understanding trends and differences in textual representation can also add to ongoing theoretical debates about the role of automated textual analysis in education. Indeed, automated textual analyses may contribute to exploring intersectional perspectives on social groups studying in Portuguese, such as specific ethnic groups and individuals with learning disabilities.
## Conclusions
By applying information theory principles, we developed a new textual analysis software suite (CalcuLetra) and demonstrated that the comprehension of Portuguese denotative texts could be quantified using objective parameters. Our results revealed groups of words with low entropy that either cause incomprehension or facilitate semantic decoding. We propose that the methodology and software developed may eventually be used as an auxiliary evaluation tool for teaching materials and textual comprehension assessments, as well as for the study and therapy of learning disorders. This model may also be adapted to other languages to evaluate the difficulty of semantic decoding and semantic complexity.
## Data availability
The datasets generated and/or analyzed during the current study, and the relevant code, are available on https://github.com/LouiseBogea/CalcuLetra.
## References
1. Ayres M, Ayres Jr M, Ayres DL, Santos AAS (2007) Bioestat 5.0 aplicações estatísticas nas áreas das ciências biológicas e médicas. IDSM, Belém
2. Bao J, Basu P, Dean M, Partridge C, Swami A, Leland W, Hendler JA (2011) Towards a theory of semantic communication. 2011 IEEE Netw Sci Workshop 1:110–117
3. Bar-Hillel Y, Carnap R (1952) An outline of a theory of semantic information. Res Lab Electron Tech Rep 247:221–274
4. Breiman L (2017) Classification and regression trees. Routledge, Abingdon
5. Cidrim L, Madeiro F (2017) Information and Communication Technology (ICT) applied to dyslexia: literature review. Rev CEFAC 19(1):99–108
6. Crossley SA, Skalicky S, Dascalu M, McNamara D, Kyle K (2017) Predicting text comprehension, processing, and familiarity in adult readers: New approaches to readability formulas. Discourse Process 54:340–359
7. D’Alfonso S (2011) On quantifying semantic information. Information 2:61–101
8. Debowski T (2011) On the vocabulary of grammar-based codes and the logical consistency of texts. IEEE Trans Inf Theory 57:4589–4599
9. Di Giorgi C, Militão SCN, Militão NA, Perboni F, Ramos RC, Lima VMM (2014) Uma proposta de aperfeiçoamento do PNLD como política pública: o livro didático como capital cultural do aluno/família. Ens Aval Pol Públ Educ 22(85):1027–1056
10. Ferreira ABH (2014) Dicionário Aurélio. Editora Positivo, Curitiba
11. Fox J (2008) Applied Regression Analysis and Generalized Linear Models. Sage, Thousand Oaks, California
12. Gasperin C, Specia L, Pereira T, Aluísio S (2009) Learning when to simplify sentences for natural text simplification. Proc ENIA 1:809–818
13. Gastón A, García-Viñas JI (2011) Modelling species distributions with penalised logistic regressions: a comparison with maximum entropy models. Ecol Model 222(13):2037–2041
14. Gathercole SE, Alloway TP (2006) Practitioner review: Short-term and working memory impairments in neurodevelopmental disorders: diagnosis and remedial support. J Child Psychol Psychiatry 47:4–15
15. Github (2017) Frequency Words Hermit D. https://github.com/hermitdave/FrequencyWords/blob/master/content/2016/pt_br/pt_br_50k.txt. Accessed 20 Mar 2017
16. Harley TA, O’Mara DA (2016) Hyphenation can improve reading in acquired phonological dyslexia. Aphasiology 20(8):744–761
17. Hartmann N, Avanço L, Balage P, Magali D, Nunes MGV, Pardo T, Aluísio S (2014) A large corpus of product reviews in Portuguese: tackling out-of-vocabulary words. In: Ninth International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA), Reykjavik, Iceland, pp 3865–3871
18. Kalimeri M, Constantoudis V, Papadimitriou C, Karamanos K, Diakonos FK, Papageorgiou H (2015) Word-length entropies and correlations of natural language written texts. J Quant Linguist 22:101–118
19. Kendeou P, Van Den Broek P, Helder A, Karlsson JA (2014) Cognitive view of reading comprehension: Implications for reading difficulties. Learn Disabil Res Pr 29:10–16
20. Kirkby JA, Blythe HI, Drieghe D, Liversedge SP (2011) Reading text increases binocular disparity in dyslexic children. PLoS ONE 6(11):e27105
21. Marcelo AM, Damian HZ (2010) Towards the quantification of the semantic information encoded in written language. Adv Compl Sys 13:135–153
22. Mark D (2011) A frequency dictionary of Portuguese. Routledge, London
23. Montemurro MA (2014) Quantifying the information in the long-range order of words: semantic structures and universal linguistic constraints. Cortex 55:5–16
24. Montemurro MA, Zanette DH (2002) Entropic analysis of the role of words in literary texts. Adv Compl Sys 5:7–17
25. Montemurro MA, Zanette DH (2011) Universal entropy of word ordering across linguistic families. Plos ONE 6:e19875
26. Montemurro MA, Zanette DH (2016) Complexity and universality in the long-range order of words. Creat Univers Lang ArXiv abs 1503(1129):27–41
27. Montgomery D, Peck A, Viving G (2006) Introduction to linear regression analysis. John Wiley, New York
28. Nowak MA, Plotkin JB, Jansen VA (2000) The evolution of syntactic communication. Nature 404:495–498
29. Oliveira HG, Gomes P (2010) PT: automatic construction of a lexical ontology for Portuguese. In: Proceedings of 5th European Starting AI Researcher Symposium. Lisbon, Portugal, pp 199–211
30. R Core Team (2013) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria
31. Rabêlo LGN, Moraes RM (2008) Entropy and generation of approximation series using a JAVA tool. In: XXVI Brazilian Symposium on Telecommunications (SBrT). Brazilian Telecommunications Society, Rio de Janeiro, p 1–6
32. Rosenfeld R (2000) Two decades of statistical language modeling: Where do we go from here? Proc IEEE 88(8):1270–1278
33. Scott AJ, Knott MA (2006) Cluster analyses method for grouping means in the analysis variance. Biometrics 30:507–512
34. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423
35. Sigman M, Peña M, Goldin AP, Ribeiro S (2014) Neuroscience and education: prime time to build the bridge. Nat Neurosci 17:497–502
36. Smith RD (2012) Distinct word length frequencies: distributions and symbol entropies. Glottometrics 23:7–22
37. Soares AP, Costa AS, Machado J, Comesana M, Oliveira HM (2017) The Minho Word Pool: norms for imageability, concreteness, and subjective frequency for 3,800 Portuguese words. Behav Res Meth 49:1065–1081
38. Thorne C, Szymanik J (2015) Semantic complexity of quantifiers and their distribution in corpora. In: Proceeding of the International Conference on Computational Semantics. International Wood Culture Society, London 64–69
39. Wiktionary (2017) Wordlist. En.wiktionary. https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists/BrazilianPortuguese_wordlist. Accessed 20 Mar 2017
40. Zipf GK (1935) The psychobiology of language. Houghton-Mifflin, Oxford, England
41. Zwaan RA (2016) Situation models, mental simulations, and abstract concepts in discourse comprehension. Psychon Bull Rev 23(4):1028–1034
## Acknowledgements
This study has been funded by the Federal University of Pará (UFPA) through the Pro-rectory of Research (PROPESP). MSF is a research fellow of the National Council for Research and Development (CNPQ). The funding providers had no role in the study design, data collection, and analysis, decision to publish, or preparation of the paper. Funding provider’s websites: http://www.ufpa.br/, http://www.cnpq.br/.
## Author information
Authors
### Contributions
LBR, ARR, KMC, and MSF designed the research. LBR developed the new algorithm, wrote all the code, conducted the research, analyzed the data, interpreted the results, and wrote the first draft of the paper. ARR, KMC, and MSF supervised the study and contributed to the final version of the paper.
### Corresponding author
Correspondence to Manoel da Silva Filho.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Ribeiro, L.B., Rodrigues, A.R., Costa, K.M. et al. Quantification of textual comprehension difficulty with an information theory-based algorithm. Palgrave Commun 5, 103 (2019). https://doi.org/10.1057/s41599-019-0311-0 |
# Homework Help: Analytical function
1. Mar 6, 2010
### talolard
1. The problem statement, all variables and given/known data
I had this question on a test. No one I talked to had a clue how to solve it, and our professor will not be publishing solutions. Any insight as to how to aproach this would be great because I am totally stumped.
Let $$f \in C^\infty[-1,1]$$ be such that for all $$j \in \mathbb{N}$$, $$|f^{(j)}|<M$$. Given that $$f(1/k)=0 \ \forall k \in \mathbb{N}$$, prove that $$f=0$$ on $$[-1,1]$$.
3. The attempt at a solution
The only idea I've had is that I need to use the taylor series. But I have no idea what to do with it.
Thanks
Tal
2. Mar 6, 2010
### latentcorpse
Hmm, I can only offer comments really as I'm not much of an analyst. But if it's zero at every rational number in the interval, then by the density of the rationals it would need to be zero everywhere else, since the derivatives are bounded, which prevents things such as discontinuities etc.
3. Mar 6, 2010
### talolard
I thought of that, but it's not the case. We know nothing about, say, f(5/7).
4. Mar 6, 2010
### vela
Staff Emeritus
Can you make use of the theorem that says analytic functions have at least one singular point or are constant?
5. Mar 6, 2010
### talolard
Never heard that one, so I guess not.
6. Mar 6, 2010
### Count Iblis
Hints:
1) What is f(0) ?
2) What are the derivatives of f at zero?
7. Mar 6, 2010
### HallsofIvy
By the way, you title this "analytic functions" but refer to $f\in C^\infty [-1, 1]$. That is NOT the set of "analytic functions" on [-1, 1]. The fact that a function has all derivatives continuous does NOT imply that it is analytic. For example, the function
$$f(x)= \begin{cases} 0 & \text{if } x = 0 \\ e^{-\frac{1}{x}} & \text{if } x\ne 0 \end{cases}$$
is in $$C^\infty[-1, 1]$$ but is not analytic on that set.
8. Mar 6, 2010
### talolard
@countbliss
I assume f(0) is 0 but I don't know how to prove it. I thought that since k can go to infinity, values of 1/k that are arbitrarily close to 0 are zero, which implies that f(0) is 0. But I'm not sure that really proves it and I don't have an idea as to how to prove it.
Halls ofIvy, duly noted. Thanks
9. Mar 6, 2010
### Office_Shredder
Staff Emeritus
That's a general property of continuous functions. A function is continuous if and only if given any convergent sequence $$x_n \to x$$, you get $$f(x_n) \to f(x)$$
10. Mar 6, 2010
### talolard
Gee, I wish I remembered that on the exam.
Ok, having established that f(0)=0 I'm clear on how to prove this for [0,1]. But what about [-1,0]? I'm assuming it has something to do with the bounded derivatives, but that is just because we haven't used that information yet.
11. Mar 6, 2010
### Count Iblis
You got the proof for x = 0 only so far. The next step is to use the Nth order Taylor formula with an appropriate form of the error term.
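[Editorial note: for later readers, here is a sketch of where this hint leads, assuming (as discussed above) that continuity and Rolle's theorem have already given $$f^{(j)}(0)=0$$ for every $$j$$; this is not part of the original thread.]

For any $$x \in [-1,1]$$ and any $$N$$, Taylor's formula with the Lagrange remainder about $$0$$ gives, for some $$\xi$$ between $$0$$ and $$x$$,
$$f(x) = \sum_{j=0}^{N} \frac{f^{(j)}(0)}{j!}x^{j} + \frac{f^{(N+1)}(\xi)}{(N+1)!}x^{N+1} = \frac{f^{(N+1)}(\xi)}{(N+1)!}x^{N+1},$$
so the uniform bound on the derivatives yields
$$|f(x)| \le \frac{M}{(N+1)!} \longrightarrow 0 \quad (N \to \infty),$$
hence $$f \equiv 0$$ on all of $$[-1,1]$$, the negative side included.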
12. Mar 6, 2010
### talolard
I don't see it. I can prove f=0 for [0,1] using Rolle's theorem and induction. I don't see how to do this on [-1,0]. And I don't see where the error term can help. I'm stumped!!!
# Ambiguous context free
Is there any technique to prove that a given language L is not an ambiguous context-free language? Here I don't know whether L is a CFL or not.
• For CFL, "the undecidability of inherent ambiguity was proved by Ginsburg and Ullian (1966)." in cstheory.stackexchange.com/questions/19102. Found with a call to google using : Context-free language inherent ambiguity undecidable. – babou Apr 20 '15 at 8:41
• I don't understand what you are trying to show. That $L$ is inherently ambiguous, that it is not? – Raphael Apr 20 '15 at 8:50
Your question is strange. If you do not know what kind of language you have, the question may well be meaningless, as the concept of ambiguity can only be defined with respect to a known system of enumeration of the sentences of the language. There is none if the language is not Recursively Enumerable.
Thus it does not make sense to ask whether a language is ambiguous in general.
I take this as an opportunity for a little survey of these types of definitions.
In your case, if you specified that your language is context-free (but you did not), the question makes some more sense since CF languages have at least one CF grammar, which can be used as reference rewriting system to generate the sentences of the language (assuming you restrict to leftmost derivations, as canonical representatives of equivalence classes of derivations).
However, each CF language has actually an infinity of CF grammars generating it, and ambiguity has to be defined with respect to a single generation system. Hence the concept of ambiguity makes sense for the language of a given CFG with respect to that CFG, but does not make sense a priori for the langage alone.
However, computer science developed the concept of inherent ambiguity.
A CF language is inherently ambiguous iff all CF grammars generating that language are ambiguous.
It is however becoming common to say that a CF language is ambiguous to mean that it is inherently ambiguous.
Notes:
1. Given any CF grammar G, it is trivial to give an ambiguous CF grammar G' that generates the same language. So, what can be interesting is not to know whether a CF language may have an ambiguous CF grammar (that is always possible), but whether it has only ambiguous CF grammars.
2. This refers only to CF grammars. It is possible that an inherently ambiguous CF language can be generated unambiguously by a grammar that is not CF. For example, I would conjecture that there is an inherently ambiguous CF language that can be generated by an unambiguous Tree Adjoining Grammar (TAG) (but I do not know whether it is true or easy to check).
3. This kind of knowledge organization framework can apply to other properties related to the generation or recognition process associated with a language. A good example is determinism of automata of a given family recognizing the language. Precision is always important: a language may be inherently non-deterministic with respect to a left-to-right PDA recognizer, but not when the PDA recognizer is right-to-left.
4. This kind of framework, whether for ambiguity or other properties, can also be developed for other families of languages and enumeration mechanisms (possibly up to some notion of generative equivalence, such as the order of rule application in the case of CF grammar derivations).
Hence your question should be better phrased as:
If I am given a language L that is Context-Free, is there any technique to prove that it is not ambiguous.
It is even better if you say "... that it is not inherently ambiguous".
Note that even there, there is the implicit assumption that you refer to the leftmost derivations of CF grammars.
In the case of CF languages and grammars, you have many answers that you can find on the CS Theory site at the question Ambiguity in regular and context-free languages.
In particular:
• it is undecidable whether a given CF grammar is ambiguous (1962-63, Cantor - Floyd - Chomsky and Schutzenberger)
• it is undecidable whether a given CF language is inherently ambiguous (Ginsburg and Ullian - 1966).
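For a concrete illustration (a standard textbook example, added here for reference rather than taken from the original answer): the context-free language
$$L = \{a^i b^j c^k \mid i = j\} \cup \{a^i b^j c^k \mid j = k\}$$
is inherently ambiguous. Intuitively, every CF grammar for it must derive the strings $a^n b^n c^n$ in at least two essentially different ways, one matching the $a$'s with the $b$'s and one matching the $b$'s with the $c$'s.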
The general problem of determining whether a CFL is "inherently ambiguous" is undecidable (as mentioned/cited in the comments), but heuristics do exist and are an active area of research; see e.g.
• Thanks @babou. But I don't know whether the given language L is context-free or not. But I know that the given language L is not unambiguous context-free. I want to know whether the language L will be in the ambiguous CFL class or not. – anand nayak Apr 21 '15 at 13:15
Optimal objective of Static Schrodinger Bridge
I refer to notes about entropy-regularized optimal transport, at https://www.math.columbia.edu/~mnutz/docs/EOT_lecture_notes.pdf
In Theorem 3.2, it says that the Schrodinger potentials achieve the optimum of the entropy-minimization problem, with: $$H(\pi_* | R) = \inf_{\pi \in \Pi(\mu, \nu)} H(\pi | R) = \mu(\phi_*) + \nu(\psi_*)$$.
I understand that both $$\mu, \nu$$ are measures over spaces $$X, Y$$. In my case, both input spaces are $$\mathbb{R}^n$$ and $$\mu, \nu$$ are probability measures, so we have $$\mu: X = \mathbb{R}^{n} \rightarrow \mathbb{R}$$; $$\nu: Y = \mathbb{R}^{n} \rightarrow \mathbb{R}$$; and that integrations over the whole space should sum to one.
Schrodinger potentials $$\phi, \psi$$ are defined that satisfy $$\phi : X \rightarrow \mathbb{R}$$, $$\psi : Y \rightarrow \mathbb{R}$$.
I don't understand how to evaluate the two functions at the Schrodinger potentials: $$\mu(\phi_*) + \nu(\psi_*)$$. It seems like $$\phi_*$$ is not in the input domain of $$\mu$$? Can anyone explain the above expression $$\mu(\phi_*) + \nu(\psi_*)$$ for me, in real spaces $$\mathbb{R}^n$$? Does it refer to some integral e.g. $$\int \phi_*(x) \mu(x) \cdots \mathrm{d} x$$?
I am unfamiliar with measure theory, so any help would be greatly appreciated. Thank you!
• Yes, simply integrals. Jun 23 at 21:42
• Can you write out the expression for me? Thank you! Jun 23 at 21:43
• $\int\phi_{*}d\mu$ Jun 23 at 21:44
• I see. In real spaces, does it correspond to something like: $\int \phi_*(x) \mathrm{d} \mu(x) = \int \phi_*(x) \mu(x) \mathrm{d} x$? Jun 23 at 21:45
• The identity would be true only provided $\mu$ has Lebesgue-density. Jun 23 at 21:49 |
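Summarizing the comments in one display (an editorial addition): for a measure $\mu$ on $X$ and a $\mu$-integrable function $\phi_* : X \to \mathbb{R}$, the notation $\mu(\phi_*)$ means
$$\mu(\phi_*) := \int_X \phi_* \,\mathrm{d}\mu,$$
and when $X = \mathbb{R}^n$ and $\mu$ has a Lebesgue density $p_\mu$ (i.e. $\mathrm{d}\mu = p_\mu(x)\,\mathrm{d}x$), this equals $\int_{\mathbb{R}^n} \phi_*(x)\, p_\mu(x)\,\mathrm{d}x$; the same reading applies to $\nu(\psi_*)$.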
# pgfplots: How can I remove the markers when using addplot3?
I would like to remove the circles that are the markers in an addplot3 parametrized helix. How can this be done?
I have tried markers = none and markers = false.
\documentclass[tikz]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat = 1.8}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
samples = 100,
domain = 0:5 * pi,
samples y = 0,
view = {60}{30},
]
\addplot3 ({cos(deg(x))}, {sin(deg(x))}, {5 * x});
\end{axis}
\end{tikzpicture}
\end{document}
-
It's no markers. – percusse Aug 30 '13 at 20:59
@percusse are you making an answer or should I delete the question? – dustin Aug 30 '13 at 21:02
No problem, you can answer it your own if you like. Or we can close it or delete it. But maybe someone else stumble on it no matter how small the info? – percusse Aug 30 '13 at 21:05
@percusse I think you should make the answer then. – dustin Aug 30 '13 at 21:07
There are three equivalent approaches which I'd like to discuss as this question appears to arise now and then:
1 using mark=none as an add-on option. In the absence of style options, your \addplot3 command uses the cycle list to determine the plot style (and ends up with blue and these markers). You can add \addplot3+[mark=none] to add the option mark=none to that list:
\begin{tikzpicture}
\begin{axis}[
samples = 100,
domain = 0:5 * pi,
samples y = 0,
view = {60}{30},
]
\addplot3+[mark=none] ({cos(deg(x))}, {sin(deg(x))}, {5 * x});
\end{axis}
\end{tikzpicture}
2 you can provide your own, self-contained and complete option list by means of \addplot3[blue]. Since this does not explicitly request a mark, there won't be any:
\begin{tikzpicture}
\begin{axis}[
samples = 100,
domain = 0:5 * pi,
samples y = 0,
view = {60}{30},
]
\addplot3[blue] ({cos(deg(x))}, {sin(deg(x))}, {5 * x});
\end{axis}
\end{tikzpicture}
3 You can write no markers into your axis. This key is applied to every plot inside of the axis; it overrules any specification of the cycle list. In other words: it allows you to use the automatically determined plot styles, but it overrules marker specifications found in any plot style. This option is global in its character whereas the other two are local:
\begin{tikzpicture}
\begin{axis}[
no markers,
samples = 100,
domain = 0:5 * pi,
samples y = 0,
view = {60}{30},
]
\addplot3 ({cos(deg(x))}, {sin(deg(x))}, {5 * x});
\end{axis}
\end{tikzpicture}
The outcome is always the same, namely
- |
# Another reduction to Wrench
1. Jan 7, 2008
This one is really discouraging me. It looks sooooo easy since the forces are each in one direction
The way I have usually handled these as a procedure is: Find $F_r$ and then find $M_r$ about some point and then decompose $M_r$ into components that run parallel and perpendicular to $F_r$. Then I can usually find P(x,y)
If I were to move everything to point A I would have:
$F_r=500i+300j+800k$
And I would also have to find the couple Moments about A:
$M_{x_A}=4(800)=3200$
$M_{y_A}=0$
$M_{z_A}=6(300)=1800$
I am just unsure where to go from here? Or is this all wrong altogether?
Last edited: Jan 7, 2008
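[Editorial note: for reference, the general decomposition step described above can be written as follows; the numbers in the post are not plugged in here because the problem's geometry is not shown in the thread.]

With $$\hat{u} = \frac{\vec{F}_r}{\|\vec{F}_r\|}$$, the couple moment at A splits as
$$\vec{M}_{\parallel} = (\vec{M}_A \cdot \hat{u})\,\hat{u}, \qquad \vec{M}_{\perp} = \vec{M}_A - \vec{M}_{\parallel}.$$
The wrench then consists of $$\vec{F}_r$$ together with $$\vec{M}_{\parallel}$$ acting along an axis through a point P chosen so that $$\vec{r}_{AP} \times \vec{F}_r = \vec{M}_{\perp}$$; solving that cross-product equation for the coordinates of P gives the line of action of the wrench.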
2. Jan 7, 2008
I have done this one over soooo many times. I am clearly missing a crucial conceptual point. ANY hints or criticism would help.
3. Jan 8, 2008
Well Good Morning! |
# Math Help - Double integral and polar coordinates
1. ## Double integral and polar coordinates
I don't understand how to do these transformations as depicted on the picture.
There's 2R = pi/3 and R/2=0 How to get these pi/3 and 0?
2. Originally Posted by totalnewbie
I don't understand how to do these transformations as depicted on the picture.
There's 2R = pi/3 and R/2=0 How to get these pi/3 and 0?
There are a number of typographical errors in your post (at a glance, there's two dx's and a missing term in the transformation Jacobian).
From what I can tell, the region of integration is the upper half of the circle $(x - R)^2 + y^2 = R^2$ between the line x = R/2 and the y-axis. Do you realise this?
A transformation is then made to polar coordinates. Do you realise that? Are you familiar with them?
Do you know how to describe the region of integration using polar coordinates? In particular:
1. Do you see how to express the line x = R/2 and the circle in polar coordinates.
2. Do you see how to get the integral terminals for the angle?
3. Originally Posted by mr fantastic
There are a number of typographical errors in your post (at a glance, there's two dx's and a missing term in the transformation Jacobian).
From what I can tell, the region of integration is the upper half of the circle $(x - R)^2 + y^2 = R^2$ between the line x = R/2 and the y-axis. Do you realise this?
A transformation is then made to polar coordinates. Do you realise that? Are you familiar with them?
Do you know how to describe the region of integration using polar coordinates? In particular:
1. Do you see how to express the line x = R/2 and the circle in polar coordinates.
2. Do you see how to get the integral terminals for the angle?
$x = \frac{R}{2} \Rightarrow r \cos \phi = \frac{R}{2} \Rightarrow r = \frac{R}{2 \cos \phi}$.
$y = \sqrt{2Rx - x^2} \Rightarrow y^2 = 2 R x - x^2$
$\Rightarrow r^2 \sin^2 \phi = 2 R r \cos \phi - r^2 \cos^2 \phi \Rightarrow r = 2 R \cos \phi$. (r = 0 is rejected - why?)
At the point (2R, 0) $\phi = 0$.
The line $x = \frac{R}{2}$ cuts the semi-circle at $\left( \frac{R}{2}, ~ \frac{R \sqrt{3}}{2}\right)$. At this point, $\tan \phi = \frac{R \sqrt{3}/2}{R/2} = \sqrt{3} \Rightarrow \phi = \frac{\pi}{3}$. |
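Putting the pieces together (an editorial summary; the original integrand is not visible in the post, so a generic $f$ is used): the region is swept by $\phi$ running from $0$ to $\frac{\pi}{3}$, with $r$ running from the line to the circle, so the transformed integral has the form
$$\int_{0}^{\pi/3} \int_{R/(2\cos\phi)}^{2R\cos\phi} f(r\cos\phi,\, r\sin\phi)\; r \,dr \,d\phi,$$
which is exactly where the limits $\frac{\pi}{3}$ and $0$ in the question come from.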
## Stats: Data and Models (3rd Edition)
Published by Pearson
# Chapter 19 - Confidence Intervals for Proportions - Exercises - Page 473: 2
See below.
#### Work Step by Step
The margin of error is $3\%$, thus the sample proportion varies on average by $3\%$ from the population proportion.
Training Garrabrant inductors to predict counterfactuals
discussion post by Tsvi Benson-Tilsen

The ideas in this post are due to Scott, me, and possibly others. Thanks to Nisan Stiennon for working through the details of an earlier version of this post with me. We will use the notation and definitions given in https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/notation/main.pdf.

Let $${{\overline{{\mathbb{P}}}}}$$ be a universal Garrabrant inductor and let $${{\overline{U}}}: {\mathbb{N}}^+ \to {\textrm{Expr}}(2^\omega \to {\mathbb{R}})$$ be a sequence of utility function machines. We will define an agent schema $$({A^{U_{n}}_{n}})$$.

We give a schema where each agent selects a single action with no observations. Roughly, $${A^{U_{n}}_{n}}$$ learns how to get what it wants by computing what the $${A^{U_{i}}_{i}}$$ with $$i < n$$ did, and also what various traders predicted would happen, given each action that the $${A^{U_{i}}_{i}}$$ could have taken. The traders are rewarded for predicting what (counterfactually) would be the case in terms of bitstrings, and then their predictions are used to evaluate expected utilities of actions currently under consideration. This requires modifying our UGI and the traders involved to take a possible action as input, so that we get a prediction (a “counterfactual distribution over worlds”) for each action.

More precisely, define
$$\begin{aligned} {A^{U_{n}}_{n}} := &\textrm{ let } {\hat{{\mathbb{P}}}}_n := {\textrm{Counterfactuals}}(n)\\ &\;{\textrm{return}}\ {\operatorname{arg\,max}}_{a \in {\textrm{Act}}} {\hat{{\mathbb{E}}}}_n[a](U_n)\end{aligned}$$
where
$${\hat{{\mathbb{E}}}}_n[a](U_n):= \sum_{\sigma \in 2^n} {\hat{{\mathbb{P}}}}_n[a](\sigma) \cdot U_n(\sigma).$$

Here $${\hat{{\mathbb{P}}}}_n$$ is a dictionary of belief states, one for each action, defined by the function $${\textrm{Counterfactuals}}: {\mathbb{N}}^+ \to ({\textrm{Act}}\to \Delta({2^\omega}))$$ using recursion as follows:

input: $$n \in {\mathbb{N}}^+$$
output: A dictionary of belief states $${\mathbb{P}}: {\textrm{Act}}\to \Delta({2^\omega})$$
initialize: $${\textrm{hist}}_{n-1} {\leftarrow}$$ array of belief states of length $$n-1$$
for $$i\leq n-1$$:
    $${\hat{{\mathbb{P}}}}_i {\leftarrow}{\textrm{Counterfactuals}}(i)$$
    $$a_i {\leftarrow}{\operatorname{arg\,max}}_{a \in {\textrm{Act}}} \sum_{\sigma \in 2^i} {\hat{{\mathbb{P}}}}_i[a](\sigma) \cdot U_i(\sigma)$$
    $${\textrm{hist}}_{n-1}[i] {\leftarrow}{\hat{{\mathbb{P}}}}_i[a_i]$$
for $$(a : {\textrm{Act}})$$:
    $${\mathbb{P}}[a] {\leftarrow}{\textrm{MarketMaker}}({\textrm{hist}}_{n-1}, {\textrm{TradingFirm}}'(a, a_{\leq n-1}, {\textrm{hist}}_{n-1}))$$
return $${\mathbb{P}}$$

Here, we use a modified form of traders and of the $${\textrm{TradingFirm}}'$$ function from the $$LIA$$ algorithm given in the logical induction paper. In detail, let traders have the type
$${\mathbb{N}}^+ \times {\textrm{Act}}\to \textrm{trading strategy}.$$
On day $$n$$, traders are passed a possible action $$a \in {\textrm{Act}}$$, which we interpret as “an action that $${A^{U_{n}}_{n}}$$ might take”. Then each trader returns a trading strategy, and those trading strategies are used as usual to construct a belief state $${\mathbb{P}}[a]$$.
We pass to $${\textrm{TradingFirm}}'$$ the full history $$a_{\leq n-1}$$ of the actions taken by the previous $${A^{U_{i}}_{i}}$$, since $${\textrm{TradingFirm}}'$$ calls the Budgeter function; that function requires computing the traders' previous trading strategies, which requires passing the $$a_i$$ as arguments. Thus, traders are evaluated based on the predictions they made about logic when given the actual action $$a_n$$ as input. In particular, the sequence $$({\mathbb{P}}_n[a_n])$$ is a UGI over the class of efficient traders given access to the actual actions taken by the agent $${A^{U_{n}}_{n}}$$. This scheme probably suffers from spurious counterfactuals, but feels like a natural baseline proposal.
# How do you find the product of (2x-6)(x+3)?
Jun 4, 2017
$2 \cdot {x}^{2} + 6 \cdot x - 6 \cdot x - 18$
$2 \cdot {x}^{2} - 18$
#### Explanation:
You multiply the $2 x$ by the 2nd bracket, then the $- 6$ by the 2nd bracket, and then collect terms.
Jun 4, 2017
$\textcolor{g r e e n}{2 {x}^{2} - 18}$
#### Explanation:
$\left(2 x - 6\right) \left(x + 3\right)$
$$\begin{array}{rrr} & 2x & -\,6 \\ \times & x & +\,3 \\ \hline 2x^2 & -\,6x & \\ & +\,6x & -\,18 \\ \hline 2x^2 & +\,0x & -\,18 \end{array}$$
$\textcolor{green}{2 x^2 - 18}$
# 2nd PUC Basic Maths Question Bank Chapter 15 Circles
Students can Download Basic Maths Question Bank Chapter 15 Circles Questions and Answers, Notes Pdf, 2nd PUC Basic Maths Question Bank with Answers helps you to revise the complete Karnataka State Board Syllabus and to clear all their doubts, score well in final exams.
## Karnataka 2nd PUC Basic Maths Question Bank Chapter 15 Circles
### 2nd PUC Basic Maths Circles One Mark Questions and Answers
Question 1.
Find the equation of the circle with centre at orgin and radius 4 units.
x2 + y2 = 42
x2 + y2 = 16
Question 2.
Find the equation of the circle with centre at (-3,2) and radius 5 units.
(x + 3)2 + (y – 2)2 = 52.
Question 3.
Write the equation of the circle with centreat (1,1) and radius 2 units
(x – 1)2 + (y – 1)2 = (√2)2
(x – 1)2 + (y – 1)2 = 2
Question 4.
Find the equation of the point circle with centre at (a) (4, -5), (b) (-3,2), (c) (1,0)
(a) (x – 4)2 + (y + 5)2 = 0
(b) (x + 3)2 + (y – 2)2 = 0
(c) (x – 1)2 + y2 = 0
Question 5.
Find the centre of the circle x2 + y2 – 4x – 6y + 1 = 0.
Centre = (2, 3)
Question 6.
Find the radius of the circle x2 + y2 + 4x – 2y – 4 = 0.
g = 2, f = – 1, C = – 4.
r =$$\sqrt{g^{2}+f^{2}-C}$$ =$$\sqrt{4+1+4}$$ = √9 = 3.
Question 7.
Find the centre of the circle 4X2 + 4y2 – 15x – 18y + 11 = 0.
Divide the circle of by 4
x2 + y2 – $$\frac{15}{4}$$x – $$\frac { 18 }{ 4 }$$y + $$\frac { 11 }{ 4 }$$ = 0
Centre = $$\left(\frac{15}{8}, \frac{9}{4}\right)$$
Question 8.
Find the equation of the circle having the centre at (3,4) and touching the x-axis.
Given g = – 3, f = – 4 and c = g2 = 9 as it touches the x-axis.
∴ Eqn. of the circle is x2 + y2 + 2gx + 2fy + c = 0
x2 + y2 – 6x – 8y + 9 = 0.
Question 9.
Find the length of the chord of the circle x2 + y2 – 6x + 15y -16 = 0 intercepted by the x- axis.
Length of the chord intercepted by x-axis is 2$$\sqrt{g^{2}-c}$$
= 2$$\sqrt{9-(-16)}$$ = 2$$\sqrt{25}$$ = 2 × 5 = 10.
Question 10.
Find the equation of a circle whose centre is (4,2) and touchs the y-axis.
If the circle touchess the y-axis then f2 = c
The eqn. of the circle is x2 + y2 + 2gx + 2fy + c = 0
With g = – 4, f = – 2, c = f2 = 4 is x2 + y2 – 8x – 4y + 4 = 0.
Question 11.
Find the length of the chord of the circle x2 + y2 + 3x – 2 = 0 intercepted by y-axis.
Given f = 0, c = – 2
The length of the chord intercepted by y-axis is given by = 2$$\sqrt{f^{2}-c}=2 \sqrt{0-(-2)}=2 \sqrt{2}$$
Question 12.
Write the equation of the unit circle with centre at the origin.
x2 + y2 = 1.
### 2nd PUC Basic Maths Circles Two Marks Questions and Answers
Question 1.
Find the centre and radius of the circle x2 + y2 + 4x + 2y – 1 = 0.
Centre = (- g, -f) = (-2, -1), c = – 1
r = $$\sqrt{g^{2}+f^{2}-c}=\sqrt{4+1-(-1)}=\sqrt{5+1}=\sqrt{6}$$
Question 2
Find the equation of the circle with A(x1, y1) and B(x2, y2) as the ends of the diameter.
A(x1, y1) and B(x2, y2) be 2 given points
P(x, y) be any point on the circle join
AP and BP and AP is perpendicular to BP
∴∠APB = 90° (Angle in a semicircle)
∴ Slope of AP × slope of BP = – 1
$$\frac{y-y_{1}}{x-x_{1}} \times \frac{y-y_{2}}{x-x_{2}}$$ = -1 ⇒ (x – x1) (x – x2) + (y – y1)(y – y2) = 0 is the required equation.
Question 3.
Find the equation of the circle whose ends of diameter are (3,1) and (-4,2).
(x – x1) (x – x2) + (y – y1) (y – y2) = 0
given (x1, y1) = (3, 1), (x2, y2) = (-4, 2)
∴ (x – 3)(x + 4) + (y – 1)(y – 2) = 0
⇒ x2 + y2 + x – 3y -10 = 0 is the required equation.
Question 4.
Find the equation of the circle with centre (4,3) and which passes through (0,0).
Given centre = (4, 3)
and r = distance between (4, 3) and (0, 0)
r = $$\sqrt{(4-0)^{2}+(3-0)^{2}}$$ = $$\sqrt{16+9}$$ = 5
∴The required equation of the circle with centre (4, 3) and radius = 5 units is
(x – 4)2 + (y – 3)2 = 52
x2 + y2 – 8x – 6y = 0.
Question 5.
Find the equation of the circle whose two diameters are x + 2y = 3 and x – y = 6 and radius is 6 units.
Solving the 2 diameters we get the centre.
∴The centre = (5, -1)andr = 6
Equation of circle is
(x – 5)2 + (y + 1)2 = 62
or x2 + y2 – 10x + 2y – 10 = 0.
Question 6.
If one end of the diameter of the circle x2 + y2 – 2x – 4y + 4 = 0 is (1,1). Find the other end.
Centre of the given circle = (1,2)
One end is given as A (1, 1)
Let the other end be B(x, y) = ?
W.K.T. centre = mid point of the diameter
∴(1,2) = $$\left(\frac{1+x}{2}, \frac{1+y}{2}\right)$$
⇒ 1 + x = 2 1 + y = 4
⇒ x = 1 y = 3
∴The other end of the diameter = B(1, 3).
Question 7.
Find the equation to the circle whose centre is same as the centre of the circle 2x2 + 2y2 – 6x + 8y – 3 = 0 and whose radius is 3 units.
Given circle 2x2 + 2y2 – 6x + 8y – 3 = 0
x2 + y2 – 3x + 4y – $$\frac { 3 }{ 2 }$$ = 0 .
The equation of the circle whose centre is same as the centre of x2 + y2 – 3x + 4y – $$\frac { 3 }{ 2 }$$ = 0 is of the form x2 + y2 – 3x + 4y + c = 0
g = $$\frac{-3}{2}$$, f = 2, and c is to be found.
r = $$\sqrt{g^{2}+f^{2}-c}$$
S.B.S 3 = $$\sqrt{\frac{9}{4}+4-c}$$
9 = $$\frac{9+16-4 c}{4}$$
– 4c + 25 = 36
– 4c = 11 ⇒ C = $$\frac { 11 }{ -4 }$$
∴Required circle is x2 + y2 – 3x + 4y – $$\frac{11}{4}$$ = 0
⇒ 4x2 + 4y2 – 12x + 16y – 11 = 0.
Question 8.
Find the equation of the circle passing through the points (1,0) (0,1) and (0,0).
Let the required circle is x2 + y2 + 2gx + 2fy + c = 0
This equation passing through (0, 0) (1, 0), (0, 1)
∴(0, 0) ⇒ c = 0
(1,0) ⇒ 1 + 2g = 0 ⇒ g = $$-\frac{1}{2}$$
(0, 1) ⇒ 1 + 2f = 0 ⇒ f = $$-\frac{1}{2}$$
∴ The equation of the circle is
x2 + y2 – x – y = 0.
Question 9.
Find the equation of the circle whose centre is same as the circle x2 + y2 – 2x + 4y -11 = 0 and the radius is 4 units.
Centre of the given circle is (1, -2)
∴The required circle with centre (1, -2) and r = 4 units is
(x – 1)2 + (y + 2)2 = 42
⇒ x2 + y2 – 2x + 4y – 11 = 0.
Question 10.
If (a, b) and (-5,1) are the ends of a diameter of the circle x2 + y2 + 4x – 4y – 2 = 0 find a ‘ and b.
Centre of the given circle = (- 2, 2)
W.k.t. centre = mid point of the diameter
(-2,2) = $$\left(\frac{a-5}{2}, \frac{b+1}{2}\right)$$
a – 5 = -4, b + 1=4
a = 1, b = 3
∴ (a, b) = (1, 3).
Question 11.
S.T. the line (3x – 4y + 6) = 0 touches the circle x2 + y2 – 6x + 10y – 15 = 0.
Centre of the given circle = (3, -5)
and r = $$\sqrt{g^{2}+f^{2}-c}=\sqrt{9+25-(-15)}=\sqrt{49}=7$$
Length of the perpendicular from the centre (3, -5) to the line 3x – 4y + 6 = 0 is $$\frac{|3(3)-4(-5)+6|}{\sqrt{3^{2}+4^{2}}}=\frac{35}{5}=7$$
∴ Length of perpendicular = radius = 7
∴The line touches the circle.
Question 12.
If the radius of the circle x2 + y2 – 2x + 3y + k = 0 is $$\frac { 5 }{ 2 }$$ find k.
Centre of the given circle = $$\left(+1, \frac{-3}{2}\right)$$
Question 13.
Find the value of k such that the line 3x – 4y + k = 0 may be a tangent to the circle x2 + y2 = 25
Centre of the circle = (0, 0)
Since the line is a tangent to the circle we get
radius = Length of the perpendicular from the centre (0, 0), i.e., 5 = $$\frac{|3(0)-4(0)+k|}{\sqrt{3^{2}+4^{2}}}=\frac{|k|}{5}$$ ⇒ k = ± 25.
Question 14.
Find A, if the line 3x + y + λ = 0 touches the circle x2 + y2 – 2x – 4y – 5 = 0.
Centre = (1, 2) and r = $$\sqrt{1+4+5}=\sqrt{10}$$
Since the line touches the circle ∴
radius = Length of the perpendicular, i.e., $$\sqrt{10}=\frac{|3(1)+2+\lambda|}{\sqrt{3^{2}+1^{2}}}=\frac{|5+\lambda|}{\sqrt{10}}$$ ⇒ |5 + λ| = 10 ⇒ λ = 5 or λ = – 15.
Question 15.
Find the centre of the circle, two of the diameters are x + y = 2 and x – y = 0.
Point of intersection of the diameters is the centre
∴Solving the 2 diameters we get
∴Center = (1,1)
Question 16.
Find the equation of the circle two of the diameters x + y = 4 and x – y = 2 and passing through the point (2, -1).
Point of intersection of two diameters is centre
Adding the two equations gives 2x = 6 ⇒ x = 3, and then
y = 4 – x = 4 – 3 = 1
∴The centre = (3,1).
radius = distance between the centre (3,1) and the point (2, -1)
r = $$\sqrt{(3-2)^{2}+(1+1)^{2}}$$
$$=\sqrt{1+4}=\sqrt{5}$$
∴Equation of the required circle with centre (3, 1) and r = √5 units is
(x – 3)2 + (y – 1)2 = (√5)2
⇒ x2 + y2 – 6x – 2y + 5 = 0.
Question 17.
S.T. the circle x2 + y2 + 4x – 3y + 4 = 0 touches x-axis.
g = 2, c = 4
Condition for the circle to touch x-axis is g2 = c
Here 22 = 4 ⇒ 4 = 4
∴ The given circle touches x-axis.
Question 18.
S.T. the circle x2 + y2 – 3x + 8y + 16 = 0 touches y-axis.
Here f = 4, c = 16
The condition for the circle to touch y-axis is f2 = c
Here 42 = 16 ⇒ 16 = 16
∴ The given circle touches y-axis.
Question 19.
S.T. the circle x2 + y2 – 2x + 2y + 1 = 0 touches both the co-ordinate axes.
The condition for the circle to touch both the axes.
is g2 = f2 = c
Here g = – 1, f = 1 and c = 1.
∴ (1)2 = (-1)2 = 1 ⇒ 1 = 1 = 1
Hence the given circle touches both the axes.
Question 20.
Find the equation of the tangents to the circle x2 + y2 + 2x + 4y – 4 = 0 which are parallel to the line 5x + 12y + 6 = 0
Centre of the given circle = (-1, -2) and r = $$\sqrt{1+4+4}=\sqrt{9}$$ = 3
Equation of the tangent parallel to the given line can be taken as 5x + 12y + k = 0.
Length of perpendicular from (-1, -2) to the line 5x + 12y + k = 0 is $$\frac{|5(-1)+12(-2)+k|}{\sqrt{5^{2}+12^{2}}}=\frac{|k-29|}{13}=3$$
∴ k – 29 = ± 39 ⇒ k = 68 or – 10
∴The equations of tangents are 5x + 12y + 68 = 0 and
5x + 12y – 10 = 0.
Question 21.
Find the equation of the tangents to the circle x2 + y2 – 2x – 4y + 1 = 0 which are perpendicur to the line 3x – 4y – 7 = 0.
Centre (1,2), r = $$\sqrt{1+4-1}=\sqrt{4}$$ = 2
Any line ⊥lar to the given line can be taken as 4x + 3y + k = 0
∴Length of the ⊥lar from (1,2) to the line 4x + 3y + k = 0 is $$\frac{|4(1)+3(2)+k|}{\sqrt{4^{2}+3^{2}}}=\frac{|10+k|}{5}=2$$
⇒ 10 + k = ± 10
⇒ k = 0 or – 20
∴The required equations are 4x + 3y = 0 and 4x + 3y – 20 = 0.
Question 22.
S.T. the line 2x + y = 4 passes through the centre of the circle x2 + y2 – 3x – 2y + 2 = 0.
Centre = $$\left(\frac{3}{2}, 1\right)$$
Put x = $$\frac{3}{2}$$, y = 1 in the equation 2x + y = 4
4 = 4
Thus the center $$\left(\frac{3}{2}, 1\right)$$ lies on the line 2x + y = 4.
### 2nd PUC Basic Maths Circles Five or Six Marks Questions and Answers
Question 1.
Find the equation of the circle passing through the points
(i) (2,0) (-1, 3) and (-2, 0).
Let the equation of the required circle is
x2 + y2 + 2gx + 2fy + c = 0 ‘
Given that this equation passes through the points.
(2, 0) ⇒ 4 + 0 + 4g + 0 + c = 0
4g + 4 + c = 0 _____ (1)
(-1,3) ⇒ 1 + 9 – 2g + 6f + c = 0
-2g + 6f + c + 10 = 0 ______ (2)
(-2,0) ⇒ 4 + 0 – 4g + c = 0
-4g + 4 + c = 0 (3)
Solving equations 1, 2 and 3 we get
put Eqn (3) in Eqn (1), i.e., 4 + c = 4g
we get c = – 4, g = 0, and f= – 1
∴ The required equation of the circle is x2 + y2 – 2y – 4 = 0.
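As a quick numerical cross-check of this kind of calculation (an editorial addition, not part of the question bank), the linear system for g, f, c can be solved directly; the snippet below uses the points of this question.

```python
# Solve for g, f, c in x^2 + y^2 + 2gx + 2fy + c = 0 through three points.
import numpy as np

def circle_through(points):
    # Each point (x, y) contributes the linear equation 2x*g + 2y*f + c = -(x^2 + y^2).
    A = np.array([[2 * x, 2 * y, 1.0] for x, y in points])
    b = np.array([-(x**2 + y**2) for x, y in points])
    return np.linalg.solve(A, b)                    # returns (g, f, c)

print(circle_through([(2, 0), (-1, 3), (-2, 0)]))   # expect g = 0, f = -1, c = -4
```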
Question 2.
(1,1) (5,-5) and (6,-4).
Let the equation of the required circle is
x2 + y2 + 2gx + 2fy + c = 0
Given that this equation passes through the points.
(1,1) ⇒ 1 + 1 + 2g + 2f + c = 0
2g + 2f + c + 2 = 0 — (1)
(5,-5) ⇒ 25 + 25 + 10g – 10f + c = 0
10g – 10f + c + 50 = 0 (2)
(6, -4) ⇒ 36 + 16 + 12g – 8f + c = 0
I2g – 8f + c + 52 = 0 (3)
Solving equations 1, 2 and 3 we get
g = -3, c = 0, f = 2
∴The required equation of the circle is x2 + y2 – 6x + 4y = 0.
Question 3.
Find the equation of a circle passing through (0,0) (0,2) and (- 3, -1).
Let the required equation of the circle be
x2 + y2 + 2gx + 2fy + c = 0
This equation passes through the points.
(0, 0) ⇒ 0 + 0 + 0 + 0 + c = 0 ⇒ c = 0 _______ (1)
(0, 2) ⇒ 0 + 4 + 0 + 4f + c = 0 ⇒ 4f + 4 = 0 ______ (2)
f = -1
(-3, -1) ⇒ 9 + 1 – 6g – 2f + c = 0 ⇒ -6g + 2 + 10 = 0
– 6g = – 12
g = 2 ______ (3)
∴Required equation of the circle is x2 + y2 + 4x – 2y = 0.
Question 4.
A circle has its centre on the y-axis and passess through (- 1, 3) and (2, 5). Find its equation.
Let the required equation of the circle be
x2 + y2 + 2gx + 2fy + c = 0
Given that the centre (– g, – f) lies on the y-axis.
∴x co-ordinate of the centre is 0 ⇒ g = 0 _______ (1)
Also given that the required equation passes through the points (-1, 3) and (2, 5)
(-1,3) ⇒ (-1)2 + (3)2 + 2(-1)g + 2(3)f + c = 0
-2g + 6f + c + 10 = 0 (g = 0)
∴6f + c + 10 = 0 _______ (2)
(2,5) ⇒ 22 + 52 + 2g(2) + 2f(5) + c = 0
4g + 10f + 29 + c = 0
∴10f + 29 + c = 0 _______ (3)
Solving equations 2 and 3 we get f = $$-\frac{19}{4}$$ and c = $$\frac{37}{2}$$, with g = 0.
∴Required equation of the circle is
2x2 + 2y2 – 19y + 37 = 0.
Question 5.
Find the equation of a circle passing through (0,0) (1,1) and has its centre on x-axis.
Let the required equation of the circle is
x2 + y2 + 2gx + 2fy + c = 0
Given that the centre (– g, – f) lies on the x-axis ⇒ f = 0 _______ (1)
Also this equation passes through the points (0, 0) and (1,1)
(0,0) ⇒ c = 0 _______ (2)
(1, 1) ⇒ 1 + 1 + 2g = 0 ⇒ g = -1 _______ (3)
The required equation of the circle is x2 + y2 – 2x = 0.
Question 6.
Find the equation of the circle passing through the points (0,4) and (4,7) and its centre lies on the line 3x + 4y = 7.
Let the required equation of the circle be x2 + y2 + 2gx + 2fy + c = 0
Given this circle passes through the points (0, 4) and (4, 7)
(0,4) ⇒ 16 + 8f + c = 0 _______ (1)
(4,7) ⇒ 8g + 14f + c = – 65 _______ (2)
Also the centre (- g, – f) lies on the line 3x + 4y = 7
∴ – 3g – 4f = 7 _______ (3)
Solving equations 1, 2, and 3 we get
g = – 11, f = $$\frac { 13 }{ 2 }$$, c = -68
The required equations of the c ircle is
∴x2 + y2 – 22x + 13y – 68 = 0.
Question 7.
S.T. the line 3x – 4y – 20 = 0 touches the circle x2 + y2 – 2x – 4y – 20 = 0 and also find the point of contact.
The centre of the given circle = (1,2) and r = $$\sqrt{1^{2}+2^{2}-(-20)}=\sqrt{1+4+20}=\sqrt{25}$$ = 5
Condition is
Length of the ⊥lar from the centre (1, 2) to the line = radius
Length of the ⊥lar = $$\frac{|3(1)-4(2)-20|}{\sqrt{3^{2}+4^{2}}}=\frac{25}{5}=5$$ = radius
5 = 5
∴ The line 3x – 4y – 20 = 0 touches the circle
Any line perpendicular to 3x – 4y – 20 = 0 is of the from
4x + 3y + k = 0 but it passess through (1,2)
4 + 6 + k = 0 ⇒ k = — 10
∴ Equation of the ⊥lar line is 4x + 3y- 10 = 0
The point of intersection of the lines 4x + 3y – 10 = 0 and 3x – 4y – 20 = 0.
Solving the above equations we get
x = 4 and y = – 2
∴ The point of contact is (4, -2).
Question 8.
Find the equation of the circle which passes through (2, 3), having its centre on the x-axis and radius 5 units.
Let the required equation of the circle be x2 + y2 + 2gx + 2fy + c = 0
This passes through the point (2, 3), so we get
4 + 9 + 4g + 6f + c = 0
4g + 6f + c + 13 = 0 _____ (1)
Centre (- g, -f) lies on the x axis ⇒ f = 0 _____ (2)
Also given r = 5 units
$$\sqrt{g^{2}+f^{2}-c}=5$$ ⇒ g2 + f2 – c = 25
g2 – c = 25 _____ (3)
From eqn. (1), with f = 0, we get
4g + 13 = – c
Eqn (3) then gives
g2 + 4g + 13 = 25
g2 + 4g – 12 = 0
(g + 6)(g – 2) = 0
g = – 6 or 2
when g = – 6, c = 11
when g = 2 c = – 21
∴ The required equations of the circles are
x2 + y2 – 12x + 11 = 0
x2 + y2 + 4x – 21 = 0.
Question 9.
Show that the points (1, 1), (-2, 2), (-6, 0) and (-2, -8) are concyclic.
First find the equation of the circle passing through the points (1, 1), (-2, 2) and (-6, 0). If the 4th point (-2, -8) satisfies the circle then the points are concyclic.
Let equation of the required circle is
x2 + y2 + 2gx + 2fy + c = 0
This equation passes through the points (1, 1), (-2, 2) and (-6, 0)
(1, 1) ⇒ 2g + 2f + c + 2 = 0 _____ (1)
(-2, 2) ⇒ -4g + 4f + c = -8 _____ (2)
(-6,0) ⇒ -12g + c = – 36 _____ (3)
Eqn. 1 – 2 gives 3g – f = 3 _____ (4)
Eqn. 2 – 3 gives 2g + f = 7 _____ (5)
Adding eqns. 4 and 5 we get
5g = 10 ⇒ g = 2
f = 7 – 2g = 7 – 4 = 3
f = 3
2g + 2f + c = -2
c = – 2 – 2g – 2f
= – 2 – 4 – 6
c = – 12
The equation of the circle is x2 + y2 + 4x + 6y – 12 = 0
put x = – 2 and y = – 8 in the above circle
we get (-2)2 + (-8)2 + 4(-2) + 6(—8) -12 = 0
4 + 64 – 8 – 48 – 12 = 0
68 – 68 = 0
0 = 0
∴ The 4 points are concyclic.
Question 10.
Show that the following points are concyclic: (0, 0), (1, 1), (5, -5) and (6, -4).
Let the equation of the circle passing through
(0, 0) (1, 1) and (5,-5) be x2 + y2 + 2gx + 2fy + c = 0
It passes through (0,0) ⇒ c = 0 _____ (1)
It passes through (1, 1) ⇒ 2g + 2f + c + 2 = 0
2g + 2f + 2 = 0
g + f + 1 = 0 _____ (2)
It passes through (5,-5) ⇒ 10g – 10f + 50 = 0
g – f + 5 = 0 _____ (3)
Solving 2 and 3 we get g = – 3 and f = 2
Thus, the equation of the circle = x2 + y2 – 6x + 4y = 0
Put x = 6 and y = – 4 in L.H.S of the above eqn.
36 + 16 – 36 – 16 = 0
0 = 0
Thus, the point (6, -4) lies on the circle.
Hence, the given 4 points lie on the circle and are concyclic.
Question 11.
Find the length of the chord intercepted by the circle x2 + y2 – 8x – 6y = 0 and the line x – 7y – 8 = 0
The centre of the circle is (4, 3) and r = $$\sqrt{16+9-0}$$ = 5
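The solution is cut off at this point; the remaining steps, filled in here for completeness (not part of the original text), are:
Length of the ⊥lar from the centre (4, 3) to the line x – 7y – 8 = 0 is
$$d=\frac{|4-7(3)-8|}{\sqrt{1^{2}+7^{2}}}=\frac{25}{\sqrt{50}}=\frac{5}{\sqrt{2}}$$
∴ Length of the chord $$=2\sqrt{r^{2}-d^{2}}=2\sqrt{25-\frac{25}{2}}=2\times \frac{5}{\sqrt{2}}=5\sqrt{2}$$ units.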
# If $T$ is a compact operator, then weak convergence of $x_n$ implies strong convergence of $Tx_n$
I'm working on a problem regarding compact operators and weakly convergent sequences. We know that an operator $T$ on a Hilbert space $H$ is compact iff for every bounded sequence $(x_n)_n \subset H$ its image $(Tx_n)_n \subset H$ admits a convergent subsequence (one can use this as a definition of a compact operator). I want to show the following:
Let $T$ be a compact operator on $H$. Then for every sequence $(x_n)_n \subset H$ which converges weakly to $0$, the sequence $(Tx_n)_n$ converges to $0$ in norm.
Can someone help me?
Let $(x_n)_{n\in \mathbb{N}}$ converge weakly to $0$. Note that $(x_n)$ is bounded. Since $T$ is weak-to-weak continuous, it follows that $Tx_n\rightharpoonup 0$. Hence any cluster point of $(Tx_n)$, be it strong or weak, must be $0$. On the other hand, your $T$ is compact. If $(Tx_n)$ did not converge to $0$ in norm, then there would be a subsequence $(Tx_{n_k})$ whose norms stay bounded away from $0$. But since $T$ is compact, there would be a further subsequence $(Tx_{n_{k_l}})$ converging strongly, necessarily to $0$ (the only cluster point), which is absurd!
Suppose otherwise. Then there are an $\epsilon > 0$ and a subsequence $(x_{n_{k}})$ such that $||T(x_{n_{k}})||>\epsilon$ for all $k$.
This subsequence $(x_{n_{k}})$ still converges weakly to 0. So it is bounded, and hence, by compactness of $T$, the sequence $(T(x_{n_{k}}))$ has a norm-convergent subsequence. Note that since $T$ is weak-to-weak continuous, $T(x_{n_{k}})$ converges weakly to 0. In particular, our norm-convergent subsequence must have limit 0. But this contradicts our choice of subsequence.
# Non-Rigid Registration: Demons
This notebook illustrates the use of the Demons based non-rigid registration set of algorithms in SimpleITK. These include both the DemonsMetric which is part of the registration framework and Demons registration filters which are not.
The data we work with is a 4D (3D+time) thoracic-abdominal CT, the Point-validated Pixel-based Breathing Thorax Model (POPI) model. This data consists of a set of temporal CT volumes, a set of masks segmenting each of the CTs to air/body/lung, and a set of corresponding points across the CT volumes.
The POPI model is provided by the Léon Bérard Cancer Center & CREATIS Laboratory, Lyon, France. The relevant publication is:
J. Vandemeulebroucke, D. Sarrut, P. Clarysse, "The POPI-model, a point-validated pixel-based breathing thorax model", Proc. XVth International Conference on the Use of Computers in Radiation Therapy (ICCR), Toronto, Canada, 2007.
The POPI data, and additional 4D CT data sets with reference points are available from the CREATIS Laboratory here.
In [1]:
library(SimpleITK)
# If the environment variable SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT is set, this will override the ReadImage
# function so that it also resamples the image to a smaller size (testing environment is memory constrained).
source("setup_for_testing.R")
library(ggplot2)
library(tidyr)
library(purrr)
# Utility method that either downloads data from the Girder repository or
Loading required package: rPython
## Utilities¶
Utility methods used in the notebook for display and registration evaluation.
In [2]:
source("registration_utilities.R")
Load all of the images, masks and point data into corresponding lists. If the data is not available locally it will be downloaded from the original remote repository.
Take a look at a temporal slice for a specific coronal index (center of volume). According to the documentation on the POPI site, volume number one corresponds to end inspiration (maximal air volume).
You can modify the coronal index to look at other temporal slices.
In [3]:
body_label <- 0
air_label <- 1
lung_label <- 2
image_file_names <- file.path("POPI", "meta", paste0(0:9, "0-P.mhd"))
# Read the CT images as 32bit float, the pixel type required for registration.
image_list <- lapply(image_file_names, function(image_file_name) ReadImage(fetch_data(image_file_name), "sitkFloat32"))
points_file_names <- file.path("POPI", "landmarks", paste0(0:9, "0-Landmarks.pts"))
# Look at a temporal slice for the specific coronal index
coronal_index <- as.integer(round(image_list[[1]]$GetHeight()/2.0))
temporal_slice <- temporal_coronal_with_overlay(coronal_index, image_list, mask_list, lung_label, -1024, 976)
# Flip the image so that it corresponds to the standard radiological display.
Show(temporal_slice[,seq(temporal_slice$GetHeight(),0,-1),])
## Demons Registration¶
This function will align the fixed and moving images using the Demons registration method. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points the similarity metric value and the target registration errors will be displayed during registration.
As this notebook performs intra-modal registration, we can readily use the Demons family of algorithms.
We start by using the registration framework with SetMetricAsDemons. We use a multiscale approach which is readily available in the framework. We then illustrate how to use the Demons registration filters that are not part of the registration framework.
In [4]:
demons_registration <- function(fixed_image, moving_image)
{
registration_method <- ImageRegistrationMethod()
# Create initial identity transformation.
transform_to_displacment_field_filter <- TransformToDisplacementFieldFilter()
transform_to_displacment_field_filter$SetReferenceImage(fixed_image)
# The image returned from the initial_transform_filter is transferred to the transform and cleared out.
initial_transform <- DisplacementFieldTransform(transform_to_displacment_field_filter$Execute(Transform()))
# Regularization (update field - viscous, total field - elastic).
initial_transform$SetSmoothingGaussianOnUpdate(varianceForUpdateField=0.0, varianceForTotalField=2.0)
registration_method$SetInitialTransform(initial_transform)
registration_method$SetMetricAsDemons(10) # intensities are equal if the difference is less than 10HU
# Multi-resolution framework.
registration_method$SetShrinkFactorsPerLevel(shrinkFactors = c(4,2,1))
registration_method$SetSmoothingSigmasPerLevel(smoothingSigmas = c(8,4,0))
registration_method$SetInterpolator("sitkLinear")
# If you have time, run this code using the ConjugateGradientLineSearch, otherwise run as is.
#registration_method$SetOptimizerAsConjugateGradientLineSearch(learningRate=1.0, numberOfIterations=20, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
registration_method$SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=20, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
registration_method$SetOptimizerScalesFromPhysicalShift()
return (registration_method$Execute(fixed_image, moving_image))
}
Running the Demons registration with the conjugate gradient optimizer on this data takes a long time which is why the code above uses gradient descent. If you are more interested in accuracy and have the time then switch to the conjugate gradient optimizer.
In [5]:
# Select the fixed and moving images, valid entries are in [1,10]
fixed_image_index <- 1
moving_image_index <- 8
tx <- demons_registration(fixed_image = image_list[[fixed_image_index]],
moving_image = image_list[[moving_image_index]])
initial_errors <- registration_errors(Euler3DTransform(), points_list[[fixed_image_index]], points_list[[moving_image_index]])
final_errors <- registration_errors(tx, points_list[[fixed_image_index]], points_list[[moving_image_index]])
# Plot the TRE histograms before and after registration.
df <- data.frame(AfterRegistration=final_errors, BeforeRegistration=initial_errors)
df.long <- gather(df, key=ErrorType, value=ErrorMagnitude)
ggplot(df.long, aes(x=ErrorMagnitude, group=ErrorType, colour=ErrorType, fill=ErrorType)) +
geom_histogram(bins=20,position='identity', alpha=0.3) +
theme(legend.title=element_blank(), legend.position=c(.85, .85))
## Or, if you prefer density plots
ggplot(df.long, aes(x=ErrorMagnitude, group=ErrorType, colour=ErrorType, fill=ErrorType)) +
geom_density(position='identity', alpha=0.3) +
theme(legend.title=element_blank(), legend.position=c(.85, .85))
cat(paste0('Initial alignment errors in millimeters, mean(std): ',
sprintf('%.2f',mean(initial_errors)),'(',sprintf('%.2f',sd(initial_errors)),') max:', sprintf('%.2f\n',max(initial_errors))))
cat(paste0('Final alignment errors in millimeters, mean(std): ',
sprintf('%.2f',mean(final_errors)),'(',sprintf('%.2f',sd(final_errors)),') max:', sprintf('%.2f\n',max(final_errors))))
Initial alignment errors in millimeters, mean(std): 5.07(2.70) max:14.02
Final alignment errors in millimeters, mean(std): 1.74(1.47) max:8.93
In [6]:
# Transfer the segmentation via the estimated transformation. Use Nearest Neighbor interpolation to retain the labels.
transformed_labels <- Resample(mask_list[[moving_image_index]],
                               image_list[[fixed_image_index]],
                               tx,
                               "sitkNearestNeighbor",
                               0.0,
                               mask_list[[moving_image_index]]$GetPixelID())
segmentations_before_and_after <- c(mask_list[[moving_image_index]], transformed_labels)
# Look at the segmentation overlay before and after registration for a specific coronal slice
coronal_index_registration_evaluation <- as.integer(round(image_list[[fixed_image_index]]$GetHeight()/2.0))
temporal_slice <- temporal_coronal_with_overlay(coronal_index_registration_evaluation,
list(image_list[[fixed_image_index]], image_list[[fixed_image_index]]),
segmentations_before_and_after,
lung_label, -1024, 976)
# Flip the image so that it corresponds to the standard radiological display.
Show(temporal_slice[,seq(temporal_slice$GetHeight(),0,-1),])
SimpleITK also includes a set of Demons filters which are independent of the ImageRegistrationMethod. These include:
1. DemonsRegistrationFilter
2. DiffeomorphicDemonsRegistrationFilter
3. FastSymmetricForcesDemonsRegistrationFilter
4. SymmetricForcesDemonsRegistrationFilter
As these filters are independent of the ImageRegistrationMethod we do not have access to the multiscale framework. Luckily it is easy to implement our own multiscale framework in SimpleITK, which is what we do in the next cell.
In [7]:
#
# Args:
#     image: The image we want to resample.
#     shrink_factor: A number greater than one, such that the new image's size is original_size/shrink_factor.
#     smoothing_sigma: Sigma for Gaussian smoothing, this is in physical (image spacing) units, not pixels.
# Return:
#     Image which is a result of smoothing the input and then resampling it using the given sigma and shrink factor.
#
smooth_and_resample <- function(image, shrink_factor, smoothing_sigma)
{
    smoothed_image <- SmoothingRecursiveGaussian(image, smoothing_sigma)
    original_spacing <- image$GetSpacing()
    original_size <- image$GetSize()
    new_size <- as.integer(round(original_size/shrink_factor))
    new_spacing <- (original_size-1)*original_spacing/(new_size-1)
    return(Resample(smoothed_image, new_size, Transform(), "sitkLinear", image$GetOrigin(),
                    new_spacing, image$GetDirection(), 0.0, image$GetPixelID()))
}
#
# Run the given registration algorithm in a multiscale fashion. The original scale should not be given as input as the
# original images are implicitly incorporated as the base of the pyramid.
# Args:
# registration_algorithm: Any registration algorithm that has an Execute(fixed_image, moving_image, displacement_field_image)
# method.
# fixed_image: Resulting transformation maps points from this image's spatial domain to the moving image spatial domain.
# moving_image: Resulting transformation maps points from the fixed_image's spatial domain to this image's spatial domain.
# initial_transform: Any SimpleITK transform, used to initialize the displacement field.
# shrink_factors: Shrink factors relative to the original image's size.
# smoothing_sigmas: Amount of smoothing which is done prior to resampling the image using the given shrink factor. These
# are in physical (image spacing) units.
# Returns:
# DisplacementFieldTransform
#
multiscale_demons <- function(registration_algorithm, fixed_image, moving_image, initial_transform = NULL,
shrink_factors=NULL, smoothing_sigmas=NULL)
{
# Create image pyramids.
fixed_images <- c(fixed_image,
if(!is.null(shrink_factors))
map2(rev(shrink_factors), rev(smoothing_sigmas),
~smooth_and_resample(fixed_image, .x, .y))
)
moving_images <- c(moving_image,
if(!is.null(shrink_factors))
map2(rev(shrink_factors), rev(smoothing_sigmas),
~smooth_and_resample(moving_image, .x, .y))
)
# Uncomment the following two lines if you want to see your image pyramids.
#lapply(fixed_images, Show)
#lapply(moving_images, Show)
# Create initial displacement field at lowest resolution.
# Currently, the pixel type is required to be sitkVectorFloat64 because of a constraint imposed by the Demons filters.
lastImage <- fixed_images[[length(fixed_images)]]
if(!is.null(initial_transform))
{
initial_displacement_field = TransformToDisplacementField(initial_transform,
"sitkVectorFloat64",
lastImage$GetSize(), lastImage$GetOrigin(),
lastImage$GetSpacing(), lastImage$GetDirection())
}
else
{
initial_displacement_field <- Image(lastImage$GetWidth(), lastImage$GetHeight(),
                                    lastImage$GetDepth(), "sitkVectorFloat64")
initial_displacement_field$CopyInformation(lastImage)
}
# Run the registration pyramid, run a registration at the top of the pyramid and then iterate:
# a. resampling previous deformation field onto higher resolution grid.
# b. register.
initial_displacement_field <- registration_algorithm$Execute(fixed_images[[length(fixed_images)]],
                                                             moving_images[[length(moving_images)]],
                                                             initial_displacement_field)
# Run the registration pyramid: register at the top of the pyramid and then iterate:
# a. resampling previous deformation field onto higher resolution grid.
# b. register.
# This is a use case for a loop, because the operations depend on the previous step. Otherwise
# we need to mess around with tricky assignments to variables in different scopes
for (idx in seq(length(fixed_images)-1,1))
{
    f_image <- fixed_images[[idx]]
    m_image <- moving_images[[idx]]
    initial_displacement_field <- Resample(initial_displacement_field, f_image)
    initial_displacement_field <- registration_algorithm$Execute(f_image, m_image, initial_displacement_field)
}
return(DisplacementFieldTransform(initial_displacement_field))
}
Now we will use our newly minted multiscale framework to perform registration with the Demons filters. Some things you can easily try out by editing the code below:
1. Is there really a need for multiscale - just call the multiscale_demons method without the shrink_factors and smoothing_sigmas parameters.
2. Which Demons filter should you use - configure the other filters and see if our selection is the best choice (accuracy/time).
In [8]:
fixed_image_index <- 1
moving_image_index <- 8
# Select a Demons filter and configure it.
demons_filter <- FastSymmetricForcesDemonsRegistrationFilter()
demons_filter$SetNumberOfIterations(20)
# Regularization (update field - viscous, total field - elastic).
demons_filter$SetSmoothDisplacementField(TRUE)
demons_filter\$SetStandardDeviations(2.0)
# Run the registration.
tx <- multiscale_demons(registration_algorithm=demons_filter,
fixed_image = image_list[[fixed_image_index]],
moving_image = image_list[[moving_image_index]],
shrink_factors = c(4,2),
smoothing_sigmas = c(8,4))
# Compare the initial and final TREs.
initial_errors <- registration_errors(Euler3DTransform(), points_list[[fixed_image_index]], points_list[[moving_image_index]])
final_errors <- registration_errors(tx, points_list[[fixed_image_index]], points_list[[moving_image_index]])
# Plot the TRE histograms before and after registration.
cat(paste0('Initial alignment errors in millimeters, mean(std): ',
sprintf('%.2f',mean(initial_errors)),'(',sprintf('%.2f',sd(initial_errors)),') max:', sprintf('%.2f\n',max(initial_errors))))
cat(paste0('Final alignment errors in millimeters, mean(std): ',
sprintf('%.2f',mean(final_errors)),'(',sprintf('%.2f',sd(final_errors)),') max:', sprintf('%.2f\n',max(final_errors))))
Initial alignment errors in millimeters, mean(std): 5.07(2.70) max:14.02
Final alignment errors in millimeters, mean(std): 1.63(1.20) max:5.61 |
# Identification of Equivalent Customary Units of Weight
## Convert between pounds, tons and ounces.
Source: https://www.flickr.com/photos/usfwshq/5124077764/
License: CC BY-NC 3.0
While reading a book about marine animals, Sarah learns that the average humpback whale can weigh 40 tons. Yesterday, she learned that a polar bear weighs about 1000 pounds. How many polar bears will equal the weight of a humpback whale?
In this concept, you will learn about customary units of weight.
### Identifying Equal Customary Units of Weight
The United States uses the customary system of measurement. Measurements of length consist of units like inches, feet, yards and miles. The metric system is another system of measurement that is used in science and in most other countries.
Weight describes how heavy something, or someone, is due to the pull of gravity on its mass. The customary units for measuring weight are ounces, pounds and tons.
Here are the customary units of weight from smallest to largest.
1. Ounce
2. Pound
3. Ton
When working with measures of weight, you can compare the equivalence of a small unit to a larger one. Remember, the word equivalent refers to something being equal to something else. Here are the customary unit equivalences: 16 ounces = 1 pound, and 2,000 pounds = 1 ton.
You can find equivalent measures for each unit by using the information in the table. Convert from a larger unit to a smaller unit by multiplying. Convert from a smaller unit to a larger unit by dividing.
Use the information in the table to convert customary units of weight.
3 pounds = _____ ounces
First, figure out if you need to multiply or divide. Check the units. A pound is larger than an ounce, so you are going to multiply.
Then, check the table for the unit equivalence.
1 pound = 16 ounces
Finally, multiply the number of pounds times 16 to get the total number of ounces.
3 × 16 = 48
3 pounds is equivalent to 48 ounces.
Here is another conversion problem.
6,200 pounds = _____ tons
First, figure out if you need to multiply or divide. Check the units. You are converting a smaller unit to a larger unit, so divide.
Then, check the table for the unit equivalence.
2,000 pounds = 1 ton
Finally, divide the number of pounds by 2,000.
6,200 ÷ 2,000 = 3.1
6,200 pounds is equivalent to 3.1 tons.
Notice that the unit equivalent of 6,200 pounds to tons is not a whole number. Units of weight can be written as a fraction, a decimal, or with a remainder. A tenth of a ton is equal to 200 pounds.
Fractional answer: $$6,200 \div 2,000 = \frac{6,200}{2,000} = 3\frac{200}{2,000} = 3\frac{1}{10}$$ tons
Decimal answer: $$6,200 \div 2,000 = 3.1$$ tons
Remainder answer: $$6,200 \div 2,000 = 3\ \text{R}\ 200$$, i.e. 3 tons 200 lbs
### Examples
#### Example 1
Earlier, you were given a problem about Sarah and the animals.
Sarah knows that a polar bear weighs about 1000 pounds and a humpback whale weighs about 40 tons. Convert 40 tons to pounds to find the number of polar bears it would take to equal the weight of a humpback whale.
40 tons = _____ pounds
First, check if you will multiply or divide. A ton is larger than a pound. Multiply when converting from a larger to a smaller unit.
Then, find the unit equivalence.
2,000 pounds = 1 ton
Next, multiply the number of tons by the unit equivalence.
40 × 2,000 = 80,000
Finally, divide the weight of the whale, in pounds, by the weight of the polar bear, in pounds.
80,000 pounds divided by 1000 pounds = 80
Sarah calculates that it will take about 80 polar bears to equal the weight of a humpback whale!
#### Example 2
Convert the customary unit of weight.
A male African elephant usually weighs between 6 and 8 tons. If this the range of his weight in tons, what is the range of his weight in pounds?
First, decide if this conversion requires multiplication or division. Converting a larger unit to a smaller unit requires multiplication.
6 tons = _____ pounds
8 tons = _____ pounds
Then, use the unit equivalence.
2,000 pounds = 1 ton
Finally, multiply the number of tons by 2000 to find the range in tons.
6 × 2,000 = 12,000
8 × 2,000 = 16,000
The range of weight of a male African elephant is between 12,000 and 16,000 pounds.
#### Example 3
Convert the customary unit of weight: 5 tons = ____ pounds.
First, check if you will multiply or divide. Multiply when converting a larger unit to a smaller unit.
Then, find the unit equivalence.
1 ton = 2,000 pounds
Finally, multiply 5 by 2,000.
5 × 2,000 = 10,000
The answer is 5 tons equals 10,000 pounds.
#### Example 4
Convert the customary unit of weight: 32 ounces = ____ pounds.
First, check if you will multiply or divide. Divide when converting a smaller unit to a larger unit.
Then, find the unit equivalence.
16 ounces = 1 pound
Finally, divide 32 ounces by 16.
32 ÷ 16 = 2
The answer is 32 ounces equals 2 pounds.
#### Example 5
Convert the customary unit of weight: 4,500 pounds = ____ tons.
First, check if you will multiply or divide. Divide when converting a smaller unit to a larger unit.
Then, find the unit equivalence.
2,000 pounds = 1 ton.
Finally, divide 4,500 by 2,000.
4,500 ÷ 2,000 = 2.25
The answer is 4,500 pounds equals 2.25 tons.
### Review
Convert each customary unit of weight.
1. 32 ounces = ____ pounds
2. 6 pounds = ____ ounces
3. 5.5 pounds = ____ ounces
4. 60 ounces = ____ pounds
5. 9 pounds = ____ ounces
6. 4000 pounds = ____ tons
7. 4 tons = ____ pounds
8. 3.5 tons = ____ pounds
9. 6500 pounds = ____ tons
10. 7.25 tons = ____ pounds
11. 15 pounds = ____ ounces
12. 25 tons = _____ pounds
13. 64 ounces = ____pounds
14. 80 ounces = ____ pounds
15. 5 tons = _____ pounds
To see the Review answers, open this PDF file and look for section 7.13.
# Documents in process
2018-04-26 14:44 [PUBDB-2018-01861] Report/Journal Article, et al.
Measurement of the $\Lambda_b$ polarization and angular parameters in $\Lambda_b\to J/\psi\, \Lambda$ decays from pp collisions at $\sqrt{s}=$ 7 and 8 TeV [CMS-BPH-15-002; CERN-EP-2017-331; arXiv:1802.04867]
Physical review / D D97(7), 072010 (2018) [10.1103/PhysRevD.97.072010]
An analysis of the bottom baryon decay Λb→J/ψ(→μ+μ-)Λ(→pπ-) is performed to measure the Λb polarization and three angular parameters in data from pp collisions at s=7 and 8 TeV, collected by the CMS experiment at the Large Hadron Collider. The Λb polarization is measured to be 0.00±0.06(stat)±0.06(syst) and the parity-violating asymmetry parameter is determined to be 0.14±0.14(stat)±0.10(syst). [...]
2018-04-26 14:38 [PUBDB-2018-01859] Report/Journal Article, et al.
Comparing transverse momentum balance of b jet pairs in pp and PbPb collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV [CMS-HIN-16-005; CERN-EP-2018-005; arXiv:1802.00707]
Journal of high energy physics 1803(3), 181 (2018) [10.1007/JHEP03(2018)181]
The transverse momentum balance of pairs of back-to-back b quark jets in PbPb and pp collisions recorded with the CMS detector at the LHC is reported. The center-of-mass energy in both collision systems is 5.02 TeV per nucleon pair. [...]
2018-04-25 10:22 [PUBDB-2018-01848] Journal Article, et al.
Influence of Annealing on Microstructure and Mechanical Properties of a Nanocrystalline CrCoNi Medium-Entropy Alloy
Materials 11(5), 662-680 (2018) [10.3390/ma11050662]
An equiatomic CrCoNi medium-entropy alloy was subjected to high-pressure torsion. This process led to a refinement of the microstructure to a grain size of about 50 nm, combined with a strong increase in the material's hardness. Subsequently, the thermodynamic stability of the medium-entropy alloy was evaluated by isothermal and isochronal heat treatments. [...]
2018-04-25 10:07 [PUBDB-2018-01847] Journal Article, et al.
Influence of quasicrystal I-phase on twinning of extruded Mg-Zn-Y alloys under compression
Acta materialia 151, 271-281 (2018) [10.1016/j.actamat.2018.03.060]
The interaction of the I-phase with twins during compression has been studied in a Mg-6Zn-1Y (wt.%) alloy using the combination of Synchrotron Radiation Diffraction and Acoustic Emission experiments during compression tests. The I-phase occurs as coarse particles at grain boundaries and nanosized precipitates within magnesium grains. [...]
2018-04-24 14:49 [PUBDB-2018-01844] Journal Article, et al.
Nanostructured Low Carbon Steels Obtained from the Martensitic State via Severe Plastic Deformation, Precipitation, Recovery, and Recrystallization
Advanced engineering materials 2018, 1-8 (2018) [10.1002/adem.201800202]
2018-04-23 14:55 [PUBDB-2018-01831] Report
Froehlich, L.: Machine Protection [CERN-2018-001-SP]
CERN, 613-629 (2018) [10.23730/CYRSP-2018-001.613]
Conventional linacs used for modern free-electron lasers carry electron beams of unprecedented brightness with average powers ranging from a few watts to hundreds of kilowatts. Energy recovery linacs are already operated as radiation sources with nominal electron beam powers beyond 1 MW, and this figure can only be expected to increase in the future. [...]
2018-04-20 11:29 [PUBDB-2018-01798] Journal Article, et al.
Electrochemical Tuning of Magnetism in Ordered Mesoporous Transition-Metal Ferrite Films for Micromagnetic Actuation
Missing Journal 1(1), 65-72 (2018) [10.1021/acsanm.7b00037]
2018-04-19 15:37 [PUBDB-2018-01794] Journal Article, et al.
Effects of different Cu loadings on photocatalytic activity of $TiO_{2}-SiO_{2}$ prepared at low temperature oxidation of organic pollutants in water
ChemCatChem 1, 1-13 (2018) [10.1002/cctc.201800249]
The objective of this research is to examine how copper modification can improve the photocatalytic activity of TiO2-SiO2, and to explain the correlation between Cu concentration and chemical state of Cu cations in the TiO2-SiO2 matrix, and the photocatalytic activity under the UV/solar irradiation. The Cu-modified TiO2-SiO2 photocatalysts were prepared by a low temperature sol-gel method based on organic copper, silicon and titanium precursors with varied Cu concentrations (from 0.05 to 3 mol%). [...]
2018-04-18 13:04 [PUBDB-2018-01785] Journal Article
Santra, R.: Collective resonances of atomic xenon from the linear to the nonlinear regime
Missing Journal 2(4), 045024 (2018) [10.1088/2399-6528/aab946]
2018-04-11 15:06 [PUBDB-2018-01690] Journal Article, et al.
Phase transitions of α-quartz at elevated temperatures under dynamic compression using a membrane-driven diamond anvil cell: Clues to impact cratering?
Meteoritics & planetary science xxx, xxx (2018) [10.1111/maps.13077]
Coesite and stishovite are high-pressure silica polymorphs known to have been formed at several terrestrial impact structures. They have been used to assess pressure and temperature conditions that deviate from equilibrium formation conditions. [...]
# zbMATH — the first resource for mathematics
Cofiniteness and vanishing of local cohomology modules. (English) Zbl 0749.13007
For an ideal $$I$$ of a local Noetherian ring $$(R,m)$$ let $$H^i_I(M)$$ denote the $$i$$-th local cohomology module of $$M$$, a finitely generated $$R$$-module. By virtue of the situation $$I=m$$, A. Grothendieck [see Sémin. Géométrie Algébrique, SGA 2 (1962; Zbl 0159.50402), Exposé 13] asked whether $$\operatorname{Hom}_R(R/I,H^i_I(M))$$ is a finitely generated $$R$$-module. R. Hartshorne [Invent. Math. 9, 145–164 (1970; Zbl 0196.24301)] has shown that this is not true in general for $$R$$ a hypersurface ring. In particular, $$H^i_I(R)$$ is not $$I$$-cofinite, where an $$R$$-module $$N$$ is $$I$$-cofinite provided $$\text{Supp}_R(N)\subseteq V(I)$$ and $$\text{Ext}^i_R(R/I,N)$$ is finitely generated for all $$i$$. Extending these results in the case of $$R$$ a regular local ring the authors prove – among others – the following vanishing results:
1. If $$\operatorname{Hom}_R(R/I,H^i_I(R))$$ is a finitely generated $$R$$-module for all $$i>r$$ for some $$r\geq\text{bight}(I)$$, then $$H^i_I(R)=0$$ for all $$i>r$$.
2. If $$\text{char}(R)=p>0$$, and $$\operatorname{Hom}_R(R/I,H^i_I(R))$$ is finitely generated for any $$i>\text{bight}(I)$$, then $$H^i_I(R)=0$$.
Here $$\text{bight}(I)$$ denotes the maximum of the heights of minimal prime ideals of $$I$$. Furthermore, for a complete local Gorenstein domain $$(R,m)$$, $$\dim R/I=1$$, and $$M$$ a finitely generated $$R$$-module it turns out that $$H^i_I(M)$$ is $$I$$-cofinite for all $$i$$. This extends one of R. Hartshorne’s results, see the paper cited above.
##### MSC:
13D45 Local cohomology and commutative rings 14B15 Local cohomology and algebraic geometry
Full Text:
##### References:
[1] DOI: 10.1007/BF01404554 · Zbl 0196.24301
[2] Grothendieck, Cohomologie Locale des Faisceaux et Théorèmes de Lefschetz Locaux et Globaux (1969)
[3] Grothendieck, Local Cohomology, notes by R. Hartshorne (1966)
[4] Brodmann, Einige Ergebnisse aus der Lokalen Kohomologietheorie und Ihre Anwendung (1983)
[5] Serre, Algebra Locale; Multiplicities 11 (1965)
[6] DOI: 10.2307/1970720 · Zbl 0169.23302
[7] DOI: 10.2307/1970785 · Zbl 0308.14003
[8] Hartshorne, Inst. Hautes Études Sci. Publ. Math. 42 pp 323– (1973)
[9] Matlis, Pacific J. Math. 8 pp 511– (1958) · Zbl 0084.26601
[10] DOI: 10.1007/BF01233420 · Zbl 0717.13011
[11] DOI: 10.2307/1971025 · Zbl 0362.14002
[12] Rotman, An Introduction to Homological Algebra (1979) · Zbl 0441.18018
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. |
Easy
# Calculate Gibbs Free Energy of a Reaction
CHEM-E8USPU
What is the standard Gibbs free energy of reaction for the conversion of ozone to oxygen according to the following equation: $2O_{3} \rightarrow 3O_{2}$?
Use the values in the table below.
Substance $\Delta H^{\circ}_{f} \space(kJ/mol)$ $S^{\circ}\space (J/mol K)$
$O_{2}$ $0$ $205.0$
$O_{3}$ $143$ $238.82$
A
$-326.9 \space kJ$
B
$-323.5 \space kJ$
C
$-4.122 \times 10^{4} \space kJ$
D
$-143.9 \space kJ$ |
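A worked check (a sketch assuming the $\Delta H^{\circ}_{f}$ values are in kJ/mol, which the magnitudes of the answer choices require, and taking $T = 298\ \mathrm{K}$):
$$\Delta H^{\circ} = 3(0) - 2(143) = -286\ \mathrm{kJ}$$
$$\Delta S^{\circ} = 3(205.0) - 2(238.82) = +137.36\ \mathrm{J/K}$$
$$\Delta G^{\circ} = \Delta H^{\circ} - T\Delta S^{\circ} = -286\ \mathrm{kJ} - (298\ \mathrm{K})(0.13736\ \mathrm{kJ/K}) \approx -326.9\ \mathrm{kJ},$$
which matches option A.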
Blewiz2
Man I haven't posted here in a long time...
EDIT MSG~~
I post something nice and nothing....
sorry I even bothered.... I'll just stick with reading the posts.
UN4GTBL
im trying to remember wat you wrote first, but im wondering as to how you like having 2 of the same vehicle? I cant reember if you said that you had a 3rd car or not
Blewiz2
UN4GTBL said:
im trying to remember wat you wrote first, but im wondering as to how you like having 2 of the same vehicle? I cant reember if you said that you had a 3rd car or not
I just commented on the tires that are on my SE Cali. They are Dunlop's and they completely suck.
and that we just installed the dual dvd system in the RT Cali.
At first I loved having 2. But now I am kind of hating it. I get to drive the Blue SE daily as that one gets the better mileage, but I prefer to drive the Black RT.
Thankfully I leased the SE so my lease is up in a year, so I'll get something a bit bigger next time.
We went from a Durango to the RT. A bit of change in the roominess area. So we are hating that. But the RT is awesome otherwise...
• If one root of the equation $ax^{2}+bx+c=0$ is the square of the other, then $a(c-b)^{3}=cX$, where X is
A) $a^{3}+b^{3}$
B) $(a-b)^{3}$
C) $a^{3}-b^{3}$
D) None of these
If one root of the equation $ax^{2}+bx+c=0$ is the square of the other, then $b^{3}+ac^{2}+a^{2}c=3abc$, which can be written in the form $a(c-b)^{3}=c(a-b)^{3}$. Trick: Let the roots be 2 and 4; then the equation is $x^{2}-6x+8=0$. Here, obviously, $X=\frac{a(c-b)^{3}}{c}=\frac{1\cdot(14)^{3}}{8}=\frac{14}{2}\times \frac{14}{2}\times \frac{14}{2}=7^{3}$, which is given by $(a-b)^{3}=7^{3}$, so the answer is (B).
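For completeness, here is one way to obtain the identity used above (a standard derivation, not shown in the original solution). If the roots are $\alpha$ and $\alpha^{2}$, then $\alpha+\alpha^{2}=-\frac{b}{a}$ and $\alpha\cdot\alpha^{2}=\alpha^{3}=\frac{c}{a}$. Cubing the sum,
$$\left(\alpha+\alpha^{2}\right)^{3}=\alpha^{3}+\alpha^{6}+3\alpha^{3}\left(\alpha+\alpha^{2}\right)\ \Rightarrow\ -\frac{b^{3}}{a^{3}}=\frac{c}{a}+\frac{c^{2}}{a^{2}}+3\cdot\frac{c}{a}\cdot\left(-\frac{b}{a}\right),$$
and multiplying through by $a^{3}$ gives $-b^{3}=a^{2}c+ac^{2}-3abc$, i.e. $b^{3}+a^{2}c+ac^{2}=3abc$.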
rBeta.4P
Random Number Generation under the Four-Parameter Beta Probability Density Distribution.
Function for generating random numbers from a specified Four-Parameter Beta Distribution.
Usage
rBeta.4P(n, l, u, alpha, beta)
Arguments
n
Number of draws.
l
The first (lower) location parameter.
u
The second (upper) location parameter.
alpha
The first shape parameter.
beta
The second shape parameter.
Value
A vector with length n of random values drawn from the Four-Parameter Beta Distribution.
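For intuition, and as a general property of the four-parameter beta distribution rather than something stated on this page, a draw can be viewed as a rescaled standard beta variate:
$$X = l + (u - l)\,B, \qquad B \sim \mathrm{Beta}(\alpha,\ \beta),$$
so the two location parameters simply move the support from $[0, 1]$ to $[l, u]$.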
• rBeta.4P
Examples
# NOT RUN {
# Assume some variable follows a four-parameter beta distribution with
# location parameters l = 0.25 and u = .75, and shape
# parameters alpha = 5 and beta = 3. To draw a random
# value from this distribution using rBeta.4P():
rBeta.4P(n = 1, l = .25, u = .75, alpha = 5, beta = 3)
# }
Documentation reproduced from package betafunctions, version 1.2.2, License: CC0
# Tag Info
34
Well, the Dirac delta function $\delta(x)$ is a distribution, also known as a generalized function. One can e.g. represent $\delta(x)$ as a limit of a rectangular peak with unit area, width $\epsilon$, and height $1/\epsilon$; i.e. $$\tag{1} \delta(x) ~=~ \lim_{\epsilon\to 0^+}\delta_{\epsilon}(x),$$ $$\tag{2} \delta_{\epsilon}(x)~:=~\frac{1}{\epsilon}... 29 Your statement working with and subtracting infinities ... which in general have no mathematical meaning is not really correct, and seems to have a common misunderstanding in it. The technical difficulties from QFT do not come from infinities. In fact, ideas basically equivalent to renormalization and regularization have been used since the ... 29 You need nothing more than your understanding of$$ \int_{-\infty}^\infty f(x)\delta(x-a)dx=f(a) $$Just treat one of the delta functions as f(x)\equiv\delta(x-\lambda) in your problem. So it would be something like this:$$ \int\delta(x-\lambda)\delta(x-\lambda)dx=\int f(x)\delta(x-\lambda)dx=f(\lambda)=\delta(\lambda-\lambda) $$So there you go. 27 Nowadays there exists a more fundamental geometrical interpretation of anomalies which I think can resolve some of your questions. The basic source of anomalies is that classically and quantum-mechanically we are working with realizations and representations of the symmetry group, i.e., given a group of symmetries through a standard realization on some space ... 23 These are all good questions. Perhaps I can answer a few of them at once. The equation describing the violation of current conservation is$$\partial^\mu j_\mu=f(g)\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$$where f(g) is some function of the coupling constant g. It is not possible to write any other candidate answer by dimensional analysis ... 23 This is a very interesting question which is usually overlooked. First of all, saying that "large scale physics is decoupled from the small-scale" is somewhat misleading, as indeed the renormalization group (RG) [in the Wilsonian sense, the only one I will use] tells us how to relate the small scale to the large scale ! But usually what people mean by that ... 20 I have written a pedagogical article about renormalization and renormalization group and I would be happy to have your opinion about it. It is published in American Journal of Physics. You'll find it also on ArXiv: A hint of renormalization. B. Delamotte 20 You're totally right. The Wikipedia definition of the renormalization is obsolete i.e. it refers to the interpretation of these techniques that was believed prior to the discovery of the Renormalization Group. While the computational essence (and results) of the techniques hasn't changed much in some cases, their modern interpretation is very different ... 15 First: There is no rigorous construction of the standard model, rigorous in the sense of mathematics (and no, there is not much ambivalence about the meaning of rigor in mathematics). That's a lot of references that Daniel cited, I'll try to classify them a little bit :-) Axiomatic (synonymous: local or algebraic) QFT tries to formulate axioms for the ... 14 Lack of convergence does not mean there is nothing mathematically rigorous one can extract from perturbation theory. One can use Borel summation. In fact, Borel summability of perturbation theory has been proved for some QFTs: by Eckmann-Magnen-Seneor for P(\phi) theories in 2d, see this article. by Magnen-Seneor for \phi^4 in 3d, see this article. by ... 
14 By definition, a renormalizable quantum field theory (RQFT) has the following two properties (only the first one matters in regard to this question): i) Existence of a formal continuum limit: The ultraviolet cut-off may be taken to infinite, the physical quantities are independent of the regularization procedure (and of the renormalization subtraction ... 12 Renormalization is absolutely not just a technical trick, it's a key part of understanding effective field theory and why we can compute anything without knowing the final microscopic theory of all physics. One good online source that explains a nice physical example is Joe Polchinski's "Effective field theory and the Fermi surface" (and you can also look up ... 11 The answer to both questions is that string theory is completely free of any ultraviolet divergences. It follows that its effective low-energy descriptions such as the Standard Model automatically come with a regulator. An important "technicality" to notice is that the formulae for amplitudes in string theory are not given by the same integrals over loop ... 11 Suppose I want to show$$\int \delta(x-a)\delta(x-b)\; dx = \delta(a-b) $$To do that , I need to show$$\int g(a)\int \delta(x-a)\delta(x-b) \;dx \;da = \int g(a)\delta(a-b)\; dafor any function g(a). \begin{align}\textrm{LHS}& = \int \int g(a) \delta(x-a)\;da \ \delta(x-b) \;dx\\ &=\int g(x)\delta(x-b)\;dx \\&=g(b) \end{align} But \... 11 I've been thinking about divergent series on and off, so maybe I could chip in. Consider a sequence of numbers (in an arbitrary field, e.g. real numbers) \{a_n\}. You may ask about the sum of terms of this sequence, i. e. \sum a_n. If the limit \lim_{N\rightarrow\infty} \sum^N |a_n| exists then the series is absolutely convergent and you may talk ... 11 Dimensional regularization (i.e., dim-reg) is a method to regulate divergent integrals. Instead of working in 4 dimensions where loop integrals are divergent you can work in 4-\epsilon dimensions. This trick enables you to pick out the divergent part of the integral, as using a cutoff does. However, it treats all divergences equally so you can't ... 11 You seem to be confusing regularization with renormalization. Regularization is the process of removing (or, more properly, parameterizing) infinities in loop integrals. Often in elementary texts a "cutoff" representing an energy scale above which the theory is assumed to be invalid is discussed, and counterterms are added to the Lagrangian in order to make ... 10 Check out the following 3 articles and 2 books: Regularization Renormalization and Dimensional Analysis: Dimensional Regularization meets Freshman E&M Published in the american journal of physics (can be found also on hep-ph, but slightly different with less references) Regularization, from Murayama's course of QFT at Berkeley A Hint of ... 10 I'm not sure about it, but my understanding of this is that the \int_\Lambda^\infty term is essentially constant between different processes, because whatever physics happens at high energies should not be affected by the low-energy processes we are able to control. That way, we can meaningfully calculate differences between two integrals, and the high-... 10 I think you raise a very important question, but I think you make it sound more trivial than it is. The point is: a lot of physicists would like to have alternative expansions, but it is very difficult to come up with one. If you've got some suggestions, don't hesitate to put it forward. 
The standard expansion starts from the time evolution operator \... 9 Here is my answer from a condensed matter physics point of view: Quantum field theory is a theory that describes the critical point and the neighbor of the critical point of a lattice model. (Lattice models do have a rigorous definition). So to rigorously define quantum field theories is to find their UV completion. To classify quantum field theories to ... 9 I'm going to give a silly answer, but I think this is the best we can do. A regulator is any prescription for defining the path integral such that after adding a sum of local counterterms to the action and allowing the physical couplings to depend on the renormalization scale \mu, the correlation functions are equal to those obtained by taking a continuum ... 9 The definite answer to your question is: There is no mathematicaly precise, commonly accepted definition of the term "regularization procedure" in perturbative quantum field theory. Instead, there are various regularization schemes with their advantages and disatvantages. Maybe you'll find Chapter B5: Divergences and renormalization of my theoretical ... 9 I think you misunderstood what the professor wanted to say. To understand this, let us evaluate the integral more thoroughly (your expressions contain some mistakes). If we use the dimensional regularization prescription d\rightarrow d-2\epsilon and an additional mass scale \mu, we get for the integral in question the following result:\int \frac{d^{d-...
8
It took the insights of Wilson and Kadanoff to answer this question. Universality. It doesn't matter all that much what the precise details in the ultraviolet are. Under the renormalization group, only a small number of parameters are either relevant or marginal. All the rest are irrelevant. As long as you take care to match up the relevant and marginal ...
8
Although, I do not know if a general proof exists, I think that the Casimir effect of a renormalizable quantum field theory should be completely understood by means of a theory of renormalization on manifolds with boundary. The key feature is that one cannot, in general, neglect the renormalization of the coupling constants in the boundary terms. Using this ...
8
Are you essentially asking about non-perturbative approaches to QFT? Lattice QCD (based on Monte-Carlo sampling) and various strong-couplig/weak-coupling dualities (like AdS/CFT) come to mind as most prominent examples. This is more of a hint than a real answer, of course.
8
Could this imply there is a formulation where that value comes naturally... This sentence implicitly assumes that analytic continuation is "unnatural". But the truth is the other way around: analytic continuation is one of the most natural mathematical procedures in physics. On the contrary, it's functions – especially functions of momenta or energy – that ...
cf.read(files, external=None, verbose=None, warnings=False, ignore_read_error=False, aggregate=True, nfields=None, squeeze=False, unsqueeze=False, fmt=None, cdl_string=False, select=None, extra=None, recursive=False, followlinks=False, um=None, chunk=True, field=None, height_at_top_of_model=None, select_options=None, follow_symlinks=False, mask=True, warn_valid=False, chunks='auto', domain=False)[source]
Read field constructs from netCDF, CDL, PP or UM fields datasets.
Input datasets are mapped to field constructs in memory which are returned as elements of a FieldList.
NetCDF files may be on disk or on an OPeNDAP server.
Any amount of files of any combination of file types may be read.
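A minimal usage sketch (the directory and URL below are hypothetical; cf.read returns a cf.FieldList, and the select keyword is described further down this page):
import cf
# Read every dataset in a directory plus one OPeNDAP URL, keeping only
# the air temperature fields. Tilde and glob expansion are applied to the
# names as described under the "files" parameter below.
fl = cf.read(
    ["~/data/", "https://example.org/thredds/dodsC/model.nc"],
    select="air_temperature",
)
print(fl)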
NetCDF unlimited dimensions
Domain axis constructs that correspond to NetCDF unlimited dimensions may be accessed with the nc_is_unlimited and nc_set_unlimited methods of a domain axis construct.
NetCDF hierarchical groups
Hierarchical groups in CF provide a mechanism to structure variables within netCDF4 datasets. Field constructs are constructed from grouped datasets by applying the well defined rules in the CF conventions for resolving references to out-of-group netCDF variables and dimensions. The group structure is preserved in the field construct’s netCDF interface. Groups were incorporated into CF-1.8. For files with groups that state compliance to earlier versions of the CF conventions, the groups will be interpreted as per the latest release of CF.
CF-compliance
If the dataset is partially CF-compliant to the extent that it is not possible to unambiguously map an element of the netCDF dataset to an element of the CF data model, then a field construct is still returned, but may be incomplete. This is so that datasets which are partially conformant may nonetheless be modified in memory and written to new datasets.
Such “structural” non-compliance would occur, for example, if the “coordinates” attribute of a CF-netCDF data variable refers to another variable that does not exist, or refers to a variable that spans a netCDF dimension that does not apply to the data variable. Other types of non-compliance are not checked, such as whether or not controlled vocabularies have been adhered to. The structural compliance of the dataset may be checked with the dataset_compliance method of the field construct, as well as optionally displayed when the dataset is read by setting the warnings parameter.
CDL files
A file is considered to be a CDL representation of a netCDF dataset if it is a text file whose first non-comment line starts with the seven characters “netcdf ” (six letters followed by a space). A comment line is identified as one which starts with any amount of white space (including none) followed by “//” (two slashes). It is converted to a temporary netCDF4 file using the external ncgen command, and the temporary file persists until the end of the Python session, at which time it is automatically deleted. The CDL file may omit data array values (as would be the case, for example, if the file was created with the -h or -c option to ncdump), in which case the relevant constructs in memory will be created with data with all missing values.
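A small sketch of reading CDL text directly (the CDL below is a made-up example, not taken from this page, and the external ncgen utility described above must be available):
import cf
# A minimal, hypothetical CDL description of a one-variable dataset.
cdl = """netcdf example {
dimensions:
    x = 2 ;
variables:
    double t(x) ;
        t:standard_name = "air_temperature" ;
        t:units = "K" ;
data:
    t = 271.1, 274.9 ;
}
"""
# With cdl_string=True each input value is treated as CDL text rather
# than as a file or directory name.
fields = cf.read(cdl, cdl_string=True)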
PP and UM fields files
32-bit and 64-bit PP and UM fields files of any endian-ness can be read. In nearly all cases the file format is auto-detected from the first 64 bits in the file, but for the few occasions when this is not possible, the um keyword allows the format to be specified, as well as the UM version (if the latter is not inferrable from the PP or lookup header information).
2-d “slices” within a single file are always combined, where possible, into field constructs with 3-d, 4-d or 5-d data. This is done prior to any field construct aggregation (see the aggregate parameter).
When reading PP and UM fields files, the relaxed_units aggregate option is set to True by default, because units are not always available to field constructs derived from UM fields files or PP files.
Performance
Descriptive properties are always read into memory, but lazy loading is employed for all data arrays which means that, in general, data is not read into memory until the data is required for inspection or to modify the array contents. This maximises the number of field constructs that may be read within a session, and makes the read operation fast. The exceptions to the lazy reading of data arrays are:
• Data that define purely structural elements of other data arrays that are compressed by convention (such as a count variable for a ragged contiguous array). These are always read from disk.
• If field or domain aggregation is in use (as it is by default, see the aggregate parameter), then the data of metadata constructs may have to be read to determine how the contents of the input files may be aggregated. This won’t happen for a particular field or domain’s metadata, though, if it can be ascertained from descriptive properties alone that it can’t be aggregated with anything else (as would be the case, for instance, when a field has a unique standard name).
However, when two or more field or domain constructs are aggregated to form a single construct then the data arrays of some metadata constructs (coordinates, cell measures, etc.) must be compared non-lazily to ascertain if aggregation is possible.
Parameters
files: (arbitrarily nested sequence of) str
A string or arbitrarily nested sequence of strings giving the file names, directory names, or OPenDAP URLs from which to read field constructs. Various types of expansion are applied to the names:
Tilde: An initial component of ~ or ~user is replaced by that user's home directory.
Environment variable: Substrings of the form $name or ${name} are replaced by the value of environment variable name.
Pathname: A string containing UNIX file name metacharacters as understood by the Python glob module is replaced by the list of matching file names. This type of expansion is ignored for OPenDAP URLs.
Where more than one type of expansion is used in the same string, they are applied in the order given in the above table.
Parameter example:
The file file.nc in the user’s home directory could be described by any of the following: '$HOME/file.nc', '${HOME}/file.nc', '~/file.nc', '~/tmp/../file.nc'.
When a directory is specified, all files in that directory are read. Sub-directories are not read unless the recursive parameter is True. If any directories contain files that are not valid datasets then an exception will be raised, unless the ignore_read_error parameter is True.
As a special case, if the cdl_string parameter is set to True, the interpretation of files changes so that each value is assumed to be a string of CDL input rather than the above.
external: (sequence of) str, optional
Read external variables (i.e. variables which are named by attributes, but are not present, in the parent file given by the filename parameter) from the given external files. Ignored if the parent file does not contain a global “external_variables” attribute. Multiple external files may be provided, which are searched in random order for the required external variables.
If an external variable is not found in any external files, or is found in multiple external files, then the relevant metadata construct is still created, but without any metadata or data. In this case the construct’s is_external method will return True.
Parameter example:
external='cell_measure.nc'
Parameter example:
external=['cell_measure.nc']
Parameter example:
external=('cell_measure_A.nc', 'cell_measure_O.nc')
extra: (sequence of) str, optional
Create extra, independent field constructs from netCDF variables that correspond to particular types of metadata constructs. The extra parameter may be one, or a sequence, of:
'field_ancillary': Field ancillary constructs
'domain_ancillary': Domain ancillary constructs
'dimension_coordinate': Dimension coordinate constructs
'auxiliary_coordinate': Auxiliary coordinate constructs
'cell_measure': Cell measure constructs
This parameter replaces the deprecated field parameter.
Parameter example:
To create field constructs from auxiliary coordinate constructs: extra='auxiliary_coordinate' or extra=['auxiliary_coordinate'].
Parameter example:
To create field constructs from domain ancillary and cell measure constructs: extra=['domain_ancillary', 'cell_measure'].
An extra field construct created via the extra parameter will have a domain limited to that which can be inferred from the corresponding netCDF variable, but without the connections that are defined by the parent netCDF data variable. It is possible to create independent fields from metadata constructs that do incorporate as much of the parent field construct’s domain as possible by using the convert method of a returned field construct, instead of setting the extra parameter.
verbose: int or str or None, optional
If an integer from -1 to 3, or an equivalent string equal ignoring case to one of:
• 'DISABLE' (0)
• 'WARNING' (1)
• 'INFO' (2)
• 'DETAIL' (3)
• 'DEBUG' (-1)
set for the duration of the method call only as the minimum cut-off for the verboseness level of displayed output (log) messages, regardless of the globally-configured cf.log_level. Note that increasing numerical value corresponds to increasing verbosity, with the exception of -1 as a special case of maximal and extreme verbosity.
Otherwise, if None (the default value), output messages will be shown according to the value of the cf.log_level setting.
Overall, the higher a non-negative integer or equivalent string that is set (up to a maximum of 3/'DETAIL') for increasing verbosity, the more description that is printed to convey how the contents of the netCDF file were parsed and mapped to CF data model constructs.
warnings: bool, optional
If True then print warnings when an output field construct is incomplete due to structural non-compliance of the dataset. By default such warnings are not displayed.
ignore_read_error: bool, optional
If True then ignore any file which raises an IOError whilst being read, as would be the case for an empty file, unknown file format, etc. By default the IOError is raised.
fmt: str, optional
Only read files of the given format, ignoring all other files. Valid formats are 'NETCDF' for CF-netCDF files, 'CFA' for CFA-netCDF files, 'UM' for PP or UM fields files, and 'CDL' for CDL text files. By default files of any of these formats are read.
cdl_string: bool, optional
If True and the format to read is CDL, interpret each string input (or each element of a sequence of string inputs) as a string of CDL, rather than as the name of a location from which field constructs can be read, as is standard.
By default, each string input or string element in the input sequence is taken to be a file or directory name or an OPeNDAP URL from which to read field constructs, rather than a string of CDL input, including when the fmt parameter is set to 'CDL'.
Note that when cdl_string is True, the fmt parameter is ignored as the format is assumed to be CDL, so in that case it is not necessary to also specify fmt='CDL'.
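A sketch of reading CDL supplied directly as a Python string (the CDL itself is a made-up minimal example):
>>> import cf
>>> cdl = '''netcdf example {
... dimensions:
...     time = 2 ;
... variables:
...     double tas(time) ;
...         tas:standard_name = "air_temperature" ;
...         tas:units = "K" ;
... data:
...     tas = 271.1, 274.9 ;
... }'''
>>> # The string is parsed as CDL, not treated as a file name or URL.
>>> f = cf.read(cdl, cdl_string=True)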
aggregate: bool or dict, optional
If True (the default) or a dictionary (possibly empty) then aggregate the field constructs read in from all input files into as few field constructs as possible, by passing all of the field constructs found in the input files to the cf.aggregate function and returning the output of this function call.
If aggregate is a dictionary then it is used to configure the aggregation process by passing its contents as keyword arguments to the cf.aggregate function.
If aggregate is False then the field constructs are not aggregated.
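A sketch of the forms the aggregate parameter can take; note that 'relaxed_identities' is assumed here to be a valid cf.aggregate keyword and is shown only as an illustration:
>>> import cf
>>> fl = cf.read('file*.nc')                   # aggregate (the default)
>>> fl = cf.read('file*.nc', aggregate=False)  # no aggregation
>>> # Configure the aggregation via keyword arguments of cf.aggregate
>>> # (the 'relaxed_identities' option is an assumed example).
>>> fl = cf.read('file*.nc', aggregate={'relaxed_identities': True})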
squeeze: bool, optional
If True then remove size 1 axes from each field construct’s data array.
unsqueeze: bool, optional
If True then insert size 1 axes from each field construct’s domain into its data array.
select: (sequence of) str or Query or re.Pattern, optional
Only return field constructs whose identities match the given value(s), i.e. those fields f for which f.match_by_identity(*select) is True. See cf.Field.match_by_identity for details.
This is equivalent to, but faster than, not using the select parameter but applying its value to the returned field list with its cf.FieldList.select_by_identity method. For example, fl = cf.read(file, select='air_temperature') is equivalent to fl = cf.read(file).select_by_identity('air_temperature').
recursive: bool, optional
If True then recursively read sub-directories of any directories specified with the files parameter.
followlinks: bool, optional
If True, and recursive is True, then also search for files in sub-directories which resolve to symbolic links. By default directories which resolve to symbolic links are ignored. Ignored if recursive is False. Files which are symbolic links are always followed.
Note that setting recursive=True, followlinks=True can lead to infinite recursion if a symbolic link points to a parent directory of itself.
This parameter replaces the deprecated follow_symlinks parameter.
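A sketch of reading every file below a directory tree (the directory name is hypothetical):
>>> import cf
>>> # Read all files in 'data/' and in all of its sub-directories,
>>> # including sub-directories reached through symbolic links.
>>> f = cf.read('data/', recursive=True, followlinks=True)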
mask: bool, optional
If False then do not mask by convention when reading the data of field or metadata constructs from disk. By default data is masked by convention.
The masking by convention of a netCDF array depends on the values of any of the netCDF variable attributes _FillValue, missing_value, valid_min, valid_max and valid_range.
The masking by convention of a PP or UM array depends on the value of BMDI in the lookup header. A value other than -1.0e30 indicates the data value to be masked.
New in version 3.4.0.
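A sketch contrasting the default masking with raw, unmasked data (the file name is hypothetical):
>>> import cf
>>> # Default: values are masked according to _FillValue, missing_value,
>>> # valid_min, valid_max and valid_range.
>>> f = cf.read('file.nc')
>>> # Raw values exactly as stored on disk, with no masking applied.
>>> g = cf.read('file.nc', mask=False)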
warn_valid: bool, optional
If True then print a warning for the presence of valid_min, valid_max or valid_range properties on field constructs and metadata constructs that have data. By default no such warning is issued.
“Out-of-range” data values in the file, as defined by any of these properties, are automatically masked by default, which may not be as intended. See the mask parameter for turning off all automatic masking.
New in version 3.4.0.
um: dict, optional
For Met Office (UK) PP files and Met Office (UK) fields files only, provide extra decoding instructions. This option is ignored for input files which are not PP or fields files. In most cases, how to decode a file is inferrable from the file’s contents, but if not then each key/value pair in the dictionary sets a decoding option as follows:
'fmt'
    The file format ('PP' or 'FF').
'word_size'
    The word size in bytes (4 or 8).
'endian'
    The byte order ('big' or 'little').
'version'
    The UM version to be used when decoding the header. Valid versions are, for example, 4.2, '6.6.3' and '8.2'. In general, a given version is ignored if it can be inferred from the header (which is usually the case for files created by the UM at versions 5.3 and later). The exception to this is when the given version has a third element (such as the 3 in 6.6.3), in which case any version in the header is ignored. The default version is 4.5.
'height_at_top_of_model'
    The height (in metres) of the upper bound of the top model level. By default the height at top model is taken from the top level's upper bound defined by BRSVD1 in the lookup header. If the height can't be determined from the header, or the given height is less than or equal to 0, then a coordinate reference system will still be created that contains the 'a' and 'b' formula term values, but without an atmosphere hybrid height dimension coordinate construct.
Note
A current limitation is that if pseudolevels and atmosphere hybrid height coordinates are defined by the same lookup headers then the height can't be determined automatically. In this case the height may be found after reading as the maximum value of the bounds of the domain ancillary construct containing the 'a' formula term. The file can then be re-read with this height as a um parameter.
If format is specified as 'PP' then the word size and byte order default to 4 and 'big' respectively.
This parameter replaces the deprecated umversion and height_at_top_of_model parameters.
Parameter example:
To specify that the input files are 32-bit, big-endian PP files: um={'fmt': 'PP'}
Parameter example:
To specify that the input files are 32-bit, little-endian PP files from version 5.1 of the UM: um={'fmt': 'PP', 'endian': 'little', 'version': 5.1}
New in version 1.5.
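A sketch of supplying several decoding instructions at once (the file name and height are hypothetical):
>>> import cf
>>> # 32-bit, big-endian PP data from UM version 4.5, with an explicit
>>> # height for the top of the model.
>>> f = cf.read('data.pp',
...             um={'fmt': 'PP',
...                 'version': 4.5,
...                 'height_at_top_of_model': 38500.0})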
chunks: str, int, None, or dict, optional
Specify the dask chunking of dimensions for data in the input files.
By default, 'auto' is used to specify the array chunking, which uses a chunk size in bytes defined by the cf.chunksize function, preferring square-like chunk shapes across all data dimensions.
If chunks is a str then each data array uses this chunk size in bytes, preferring square-like chunk shapes across all data dimensions. Any string value accepted by the chunks parameter of the dask.array.from_array function is permitted.
Parameter example:
A chunksize of 2 MiB may be specified as '2097152' or '2 MiB'.
If chunks is -1 or None then there is no chunking, i.e. every data array has one chunk regardless of its size.
If chunks is a positive int then each data array dimension has chunks with this number of elements.
If chunks is a dict, then each of its keys identifies a dimension in the file, with a value that defines the chunking for that dimension whenever it is spanned by data.
Each dictionary key identifies a file dimension in one of three ways: 1. the netCDF dimension name, preceded by ncdim% (e.g. 'ncdim%lat'); 2. the “standard name” attribute of a CF-netCDF coordinate variable that spans the dimension (e.g. 'latitude'); or 3. the “axis” attribute of a CF-netCDF coordinate variable that spans the dimension (e.g. 'Y').
The dictionary values may be str, int or None, with the same meanings as those types for the chunks parameter but applying only to the specified dimension. A tuple or list of integers that sum to the dimension size may also be given.
Not specifying a file dimension in the dictionary is equivalent to it being defined with a value of 'auto'.
Parameter example:
{'T': '0.5 MiB', 'Y': [36, 37], 'X': None}
Parameter example:
If a netCDF file contains dimensions time, z, lat and lon, then {'ncdim%time': 12, 'ncdim%lat': None, 'ncdim%lon': None} will ensure that all time axes have a chunksize of 12; all lat and lon axes are not chunked; and all z axes are chunked to comply as closely as possible with the default chunk size.
If the netCDF file also contains a time coordinate variable with a standard_name attribute of 'time' and an axis attribute of 'T', then the same chunking could be specified with either {'time': 12, 'ncdim%lat': None, 'ncdim%lon': None} or {'T': 12, 'ncdim%lat': None, 'ncdim%lon': None}.
Note
The chunks parameter is ignored for PP and UM fields files, for which the chunking is pre-determined by the file format.
New in version 3.14.0.
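A sketch of a few chunking choices (the file name and dimension identities are hypothetical):
>>> import cf
>>> # One chunk per data array, however large.
>>> f = cf.read('file.nc', chunks=None)
>>> # Roughly 0.5 MiB chunks along time, explicit chunk sizes along Y,
>>> # no chunking along X, and 'auto' for any unspecified dimension.
>>> g = cf.read('file.nc',
...             chunks={'T': '0.5 MiB', 'Y': [36, 37], 'X': None})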
domain: bool, optional
If True then return only the domain constructs that are explicitly defined by CF-netCDF domain variables, ignoring all CF-netCDF data variables. By default only the field constructs defined by CF-netCDF data variables are returned.
CF-netCDF domain variables are only defined from CF-1.9, so older datasets automatically contain no CF-netCDF domain variables.
The unique domain constructs of the dataset are easily found with the cf.unique_constructs function. For example:
>>> d = cf.read('file.nc', domain=True)
>>> ud = cf.unique_constructs(d)
>>> f = cf.read('file.nc')
>>> ufd = cf.unique_constructs(x.domain for x in f)
Domain constructs can not be read from UM or PP datasets.
New in version 3.11.0.
umversion: deprecated at version 3.0.0
height_at_top_of_model: deprecated at version 3.0.0
field: deprecated at version 3.0.0
select_options: deprecated at version 3.0.0
Use methods on the returned FieldList instead.
chunk: deprecated at version 3.14.0
Returns
FieldList or DomainList
The field or domain constructs found in the input dataset(s). The list may be empty.
Examples
>>> x = cf.read('file.nc')
Read a file and create field constructs from CF-netCDF data variables as well as from the netCDF variables that correspond to particular types of metadata constructs:
>>> f = cf.read('file.nc', extra='domain_ancillary')
>>> g = cf.read('file.nc', extra=['dimension_coordinate',
...                               'auxiliary_coordinate'])
Read a file that contains external variables:
>>> h = cf.read('parent.nc')
>>> j = cf.read('parent.nc', external=['external1.nc', 'external2.nc'])
>>> f = cf.read('file*.nc')
>>> f
[<CF Field: pmsl(30, 24)>,
<CF Field: z-squared(17, 30, 24)>,
<CF Field: temperature(17, 30, 24)>,
<CF Field: temperature_wind(17, 29, 24)>]
>>> cf.read('file*.nc')[0:2]
[<CF Field: pmsl(30, 24)>,
<CF Field: z-squared(17, 30, 24)>]
>>> cf.read('file*.nc')[-1]
<CF Field: temperature_wind(17, 29, 24)>
>>> cf.read('file*.nc', select='units=K')
[<CF Field: temperature(17, 30, 24)>,
<CF Field: temperature_wind(17, 29, 24)>]
>>> cf.read('file*.nc', select='ncvar%ta')
<CF Field: temperature(17, 30, 24)>
Enter the highest velocity in the group (fps) and the lowest velocity in the group (fps) into the Extreme Spread Calculator. The calculator will evaluate the Extreme Spread.
The following example problem outlines the steps and information needed to calculate the Extreme Spread.
ES = HV - LV
Variables:
• ES is the Extreme Spread (fps)
• HV is the highest velocity in the group (fps)
• LV is the lowest velocity in the group (fps)
To calculate an extreme spread, subtract the lowest velocity in the group from the highest velocity in the group.
## How to Calculate Extreme Spread?
The following steps outline how to calculate the Extreme Spread.
1. First, determine the highest velocity in the group (fps).
2. Next, determine the lowest velocity in the group (fps).
3. Next, gather the formula from above: ES = HV – LV.
4. Finally, calculate the Extreme Spread.
5. After inserting the variables and calculating the result, check your answer with the calculator above.
Example Problem :
Use the following variables as an example problem to test your knowledge.
highest velocity in the group (fps) = 500
lowest velocity in the group (fps) = 400
ES = HV – LV = 500 – 400 = 100 fps
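Since the calculation is a single subtraction, a small helper function covers it. The sketch below simply encodes ES = HV – LV; the extra velocities in the example group are made up for illustration and do not change the result:

```python
def extreme_spread(velocities):
    """Return the extreme spread (fps) of a group of velocities (fps)."""
    return max(velocities) - min(velocities)

# Example problem from above: HV = 500 fps, LV = 400 fps.
print(extreme_spread([500, 455, 430, 400]))  # prints 100
```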
# Image of Closed Real Interval is Bounded
## Theorem
Let $f$ be a real function which is continuous on the closed interval $\closedint a b$.
Then $f$ is bounded on $\closedint a b$.
## Proof
Suppose $f$ is not bounded on $\closedint a b$.
Then from the corollary to Limit of Sequence to Zero Distance Point, there exists a sequence $\sequence {x_n}$ in $\closedint a b$ such that $\size {\map f {x_n} } \to +\infty$ as $n \to \infty$.
Since $\closedint a b$ is a closed interval, from Convergent Subsequence in Closed Interval, $\sequence {x_n}$ has a subsequence $\sequence {x_{n_r} }$ which converges to some $\xi \in \closedint a b$.
Because $f$ is continuous on $\closedint a b$, it follows from Limit of Image of Sequence that $\map f {x_{n_r} } \to \map f \xi$ as $r \to \infty$.
But since $\size {\map f {x_n} } \to +\infty$ as $n \to \infty$, we also have $\size {\map f {x_{n_r} } } \to +\infty$ as $r \to \infty$, which contradicts the convergence of $\map f {x_{n_r} }$ to the finite value $\map f \xi$.
The result follows.
$\blacksquare$
1 (which, however, is very close to 1 for large n). ¿¸_ö[÷Y¸åþ׸,ëý®¼QìÚí7EîwAHovqÐ for ECE662: Decision Theory. How to cite. Thus ( ) â ( )is a complete & sufficient statistic (CSS) for . (9) Since T(Y) is complete, eg(T(Y)) is unique. Why does US Code not allow a 15A single receptacle on a 20A circuit? By Rao-Blackwell, if bg(Y) is an unbiased estimator, we can always ï¬nd another estimator eg(T(Y)) = E Y |T(Y)[bg(Y)]. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. rev 2020.12.8.38142, The best answers are voted up and rise to the top, Mathematics Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us, Your first derivation can't be right - $Y_1$ is a random variable, not a real number, and thus saying $E(\hat{\theta}_1)$ makes no sense. The generalized exponential distribution has the explicit distribution function, therefore in this case the unknown parameters ï¬and âcan be estimated by equating the sample percentile points with the population percentile points and it is known as the percentile How much do you have to respect checklist order? To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Example 2 (Strategy B: Solve). In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. Minimum-Variance Unbiased Estimation Exercise 9.1 In Exercise 8.8, we considered a random sample of size 3 from an exponential distribution with density function given by f(y) = Ë (1= )e y= y >0 0 elsewhere and determined that ^ 1 = Y 1, ^ 2 = (Y 1 + Y 2)=2, ^ 3 = (Y 1 + 2Y 2)=3, and ^ 5 = Y are all unbiased estimators for . = \left.Y_{1}(-\mathrm{e}^{y/\theta}) \right|_0^\infty \\ Homework Equations The Attempt at a Solution nothing yet. \end{array} In Theorem 1 below, we propose an estimator for β and compute its expected value and variance. The bias is the difference b In summary, we have shown that, if $$X_i$$ is a normally distributed random variable with mean $$\mu$$ and variance $$\sigma^2$$, then $$S^2$$ is an unbiased estimator of $$\sigma^2$$. This is Excercise 8.8 of Wackerly, Mendanhall & Schaeffer!! The unbiased estimator for this probability in the case of the two-parameter exponential distribution with both parameters unknown was for the rst time constructed in [3]. The expected value in the tail of the exponential distribution. Theorem 2.5. is an unbiased estimator of p2. B) Write Down The Equation(s?) \left\{ Find an unbiased estimator of B. Let T(Y) be a complete suï¬cient statistic. The problem considered is that of unbiased estimation of a two-parameter exponential distribution under time censored sampling. If eg(T(Y)) is an unbiased estimator, then eg(T(Y)) is an MVUE. Since this is a one-dimensional full-rank exponential family, Xis a complete su cient statistic. 
We have $Y_{1}, Y_{2}, Y_{3}$ a random sample from an exponential distribution with the density function Xis furthermore unbiased and therefore UMVU for . a ⦠The bias for the estimate Ëp2, in this case 0.0085, is subtracted to give the unbiased estimate pb2 u. Using linearity of expectation, all of these estimators will have the same expected value. = E(Y_{1}) \\ Proof. = \int_0^\infty (1/\theta^2)\mathrm{e}^{-2y/\theta}\,\mathrm{d}y \\ And Solve For X. In this note, we attempt to quantify the bias of the MLE estimates empirically through simulations. Thus, the exponential distribution makes a good case study for understanding the MLE bias. KEY WORDS Exponential Distribution Best Linear Unbiased Estimators Maximum Likelihood Estimators Moment Estimators Minimum Variance Unbiased Estimators Modified Moment Estimators 1. Exponential families and suï¬ciency 4. (2020). Asking for help, clarification, or responding to other answers. Exercise 3.5. The exponential distribution is defined only for x ⥠0, so the left tail starts a 0. Nonparametric unbiased estimation: U - statistics \right.$.$, $E(\hat{\theta_{4}}) \\ \begin{array}{ll} Let X and Y be independent exponentially distributed random variables having parameters λ and μ respectively. INTRODUCTION The purpose of this note is to demonstrate how best linear unbiased estimators Calculate$\int_0^\infty \frac{y}{\theta}e^{-y/\theta}\,dy$. Method Of Moment Estimator (MOME) 1. A natural estimator of a probability of an event is the ratio of such an event in our sample. Can you identify this restaurant at this address in 2011? To compare the two estimators for p2, assume that we ï¬nd 13 variant alleles in a sample of 30, then pË= 13/30 = 0.4333, pË2 = 13 30 2 =0.1878, and pb2 u = 13 30 2 1 29 13 30 17 30 =0.18780.0085 = 0.1793. Why do you say "air conditioned" and not "conditioned air"? What is an escrow and how does it work? KLÝï¼æ«eî;(êx#ÀoyàÌ4²Ì+¯¢*54ÙDpÇÌcõu$)ÄDº)n-°îÇ¢eÔNZL0T;æM&+Í©Òé×±M*HFgp³KÖ3vq1ׯ6±¥~Sylt¾g¿î-ÂÌSµõ H2o1å>%0}Ùÿîñº((ê>¸ß®H ¦ð¾Ä. Making statements based on opinion; back them up with references or personal experience. Example: Estimating the variance Ë2 of a Gaussian. Since the expected value of the statistic matches the parameter that it estimated, this means that the sample mean is an unbiased estimator for the population mean. i don't really know where to get started. Did Biden underperform the polls because some voters changed their minds after being polled? any convex linear combination of these estimators âµ â n n+1 â X¯2+(1âµ)s 0  ⵠ 1 is an unbiased estimator of µ.Observethatthisfamilyofdistributionsisincomplete, since E â n n+1 â X¯2s2 = µ2µ, thus there exists a non-zero function Z(S MLE estimate of the rate parameter of an exponential distribution Exp( ) is biased, however, the MLE estimate for the mean parameter = 1= is unbiased. For example, $In almost all situations you will be right. The way most courses are organized, the exponential distribution would have been discussed before one talks about estimators. Please cite as: Taboga, Marco (2017). Suï¬ciency and Unbiased Estimation 1. As far as I can tell none of these estimators are unbiased. (1/2\theta)(-\mathrm{e}^{-2y/\theta}) \right|_0^\infty \\ An unbiased estimator T(X) of Ï is called the uniformly minimum variance unbiased estimator (UMVUE) if and only if Var(T(X)) ⤠Var(U(X)) for any P â P and any other unbiased estimator U(X) of Ï. so unbiased. E(\hat{\theta_{1}}) \\ = Y_1(0 + 1) = Y_1 How many computers has James Kirk defeated? 
= E(\bar{Y}) \\ A statistic dis called an unbiased estimator for a function of the parameter g() provided that for every choice of , E d(X) = g(): Any estimator that not unbiased is called biased. Prove your answer. Example 4: This problem is connected with the estimation of the variance of a normal It turns out, however, that $$S^2$$ is always an unbiased estimator of $$\sigma^2$$, that is, for any model, not just the normal model. A) How Many Equations Do You Need To Set Up To Get The Method Of Moments Estimator For This Problem? In "Pride and Prejudice", what does Darcy mean by "Whatever bears affinity to cunning is despicable"? Uses of suï¬ciency 5. f(y) = Suï¬ciency 3. Practical example, How to use alternate flush mode on toilet. An estimator or decision rule with zero bias is called unbiased.In statistics, "bias" is an objective property of an estimator. Theorem 1. For if h 1 and h 2 were two such estimators, we would have E θ{h 1(T)âh 2(T)} = 0 for all θ, and hence h 1 = h 2. Unbiased estimation 7. Maximum Likelihood Estimator (MLE) 2. Suppose that our goal, however, is to estimate g( ) = e a for a2R known. M°ö¦2²F0ìÔ1Û¢]ס@Ó:ß,@}òxâys$kgþ-²4dƬÈUú±Àv7XÖÇi¾+ójQD¦Rκõ0æ)Ø}¦öz CxÓÈ@ËÞ ¾V¹±×WQXdH0aaæÞß?Î [¢Åj[.ú:¢Ps2ï2Ä´qW¯o¯~½"°5 c±¹zû'Køã÷ F,ÓÉ£ºI(¨6uòãÕ?®ns:keÁ§fÄÍÙÀ÷jD:+½Ã¯ßî)) ,¢73õÃÀÌ)ÊtæF½ÈÂHq Check one more time that Xis an unbiased estimator for , this time by making use of the density ffrom (3.3) to compute EX (in an admittedly rather clumsy way). Use MathJax to format equations. I imagine the problem exists because one of $\hat{\theta_{1}}, \hat{\theta_{2}}, \hat{\theta_{3}}, \hat{\theta_{4}}$ is unbiased. estimator directly (rather than using the efficient estimator is also a best estimator argument) as follows: The population pdf is: ( ) â ( ) â ( ) So it is a regular exponential family, where the red part is ( ) and the green part is ( ). First, remember the formula Var(X) = E[X2] E[X]2.Using this, we can show that I think you meant $\int y (1/\theta) \ldots$ where you wrote $Y_1\int (1/\theta) \ldots$. You can again use the fact that variance unbiased estimators (MVUE) obtained by Epstein and Sobel [1]. To learn more, see our tips on writing great answers. Sharp boundsfor the first two moments of the maximum likelihood estimator and minimum variance unbiased estimator of P(X > Y) are obtained, when μ is known, say 1. Let for i = 1, â¦, n and for j = 1, â¦, m. Set (1) Then (2) where. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. E [ (X1 + X2 +... + Xn)/n] = (E [X1] + E [X2] +... + E [Xn])/n = (nE [X1])/n = E [X1] = μ. Electric power and wired ethernet to desk in basement not against wall. That is the only integral calculation that you will need to do for the entire problem. mean of the truncated exponential distribution. The way most courses are organized, the exponential distribution would have been discussed before one talks about estimators. Let X ËPoi( ). Denition: An estimator Ë^ of a parameter Ë = Ë() is Uniformly Minimum Variance Unbiased (UMVU) if, whenever Ë~ is an unbi- ased estimate of Ë we have Var(Ë^) Var(Ë~) We call Ë^ ⦠Conditional Probability and Expectation 2. Methods for deriving point estimators 1. Examples of Parameter Estimation based on Maximum Likelihood (MLE): the exponential distribution and the geometric distribution. = Y_{1}\int_0^\infty (1/\theta)\mathrm{e}^{-y/\theta}\,\mathrm{d}y \\ (Exponential distribution). It only takes a minute to sign up. So it must be MVUE. 
In fact, ⦠2 Estimator for exponential distribution. If we choose the sample variance as our estimator, i.e., Ë^2 = S2 n, it becomes clear why the (n 1) is in the denominator: it is there to make the estimator unbiased. Complement to Lecture 7: "Comparison of Maximum likelihood (MLE) and Bayesian Parameter Estimation" "Exponential distribution - Maximum Likelihood Estimation", Lectures on probability theory and mathematical statistics, Third edition. The choice of the quantile, p, is arbitrary, but I will use p=0.2 because that value is used in Bono, et al. n is inadmissible and dominated by the biased estimator max(0; n(X)). Ancillarity and completeness 6. X n form a random sample of size n from the exponential distribution whose pdf if f(x|B) = Be-Bx for x>0 and B>0. Is it illegal to market a product as if it would protect against something, while never making explicit claims? We begin by considering the case where the underlying distribution is exponential with unknown mean β. Deï¬nition 3.1. = (1/2\theta)(0 + 1) = 1/2\theta$. 0 & elsewhere.$XÒW%,KdOrQÏmc]q@x£Æ2í°¼ZÏxÄtŲQô2FàÐ+ '°ÛJa7ÀCBfðØTÜñÁ&ÜÝú¸»å_A.ÕøQy ü½*|ÀÝûbçÒ(|½ßîÚ@¼ËêûVÖN²r+°Ün¤Þ½È×îÃ4b¹Cée´c¹sQY1
-úÿµ Ъt)±,%ÍË´¯\ÃÚØ𩻵Ŵºfízr@VÐ Û\eÒäÿ ÜAóÐ/ó²g6 ëÈlu˱æ0oän¦ûCµè°½w´ÀüðïLÞÍ7Ø4Æø§nA2Ïz¸ =Â!¹G l,ð?æa7ãÀhøX.µî[ò½ß¹SÀ9@%tÈ! If T(Y) is an unbiased estimator of Ï and S is a statistic sufï¬cient for Ï, then there is a function of S that is also an unbiased estimator of Ï and has no larger variance than the variance of T(Y). (1/\theta)\mathrm{e}^{-y/\theta} & y \gt 0 \\ Any estimator of the form U = h(T) of a complete and suï¬cient statistic T is the unique unbiased estimator based on T of its expectation. However, is to estimate g ( ) = e a for a2R known to get started or. Traded as a held item product as if it would protect against something, while never making explicit?. Combiantions of each others did Biden underperform the polls because some voters changed minds... ( ) is a complete su cient statistic Christmas tree lights escrow and how does it work examples Parameter... Statistic ( CSS ) for through simulations ( 0 ; n ( X ) ) a. Get the Method of Moments estimator for this problem the distribution of exponential. Like none of these are unbiased, this is Excercise unbiased estimator of exponential distribution of Wackerly, Mendanhall & Schaeffer! & statistic. Estimation 1 and the geometric distribution 2 ) and Bayesian Parameter Estimation based on opinion ; them! If eg ( T ( Y ) be a complete suï¬cient statistic below, we propose an estimator decision. ; user contributions licensed under cc by-sa 9 ) since T ( )... Attempt at a Solution nothing yet you have to respect checklist order can... A held item respect checklist order really into it '' a 20A circuit Estimators Moment 1! Then eg ( T ( Y ) ) is an unbiased estimator, then the estimator an... We propose an estimator for β and compute its expected value and professionals related. Rss feed, copy and paste this URL into Your RSS reader the polls because some voters changed their after. And not conditioned air '' get started distribution of the Maximum likelihood Estimation '' Suï¬ciency unbiased. Moments estimator unbiased estimator of exponential distribution β and compute its expected value and variance Theorem below. Before one talks about Estimators Comparison of Maximum likelihood Estimators Moment Estimators Minimum variance unbiased Estimators Modified Estimators! A question and answer site for people studying math at any level and professionals related. For β and compute its expected value in the tail of the (. The difference b n is inadmissible and dominated by the biased estimator max ( 0 ; (! Christmas tree unbiased estimator of exponential distribution being polled, bias '' is an UMVUE an objective property an. $\int Y ( 1/\theta ) \ldots$ where you wrote $Y_1\int ( 1/\theta )$. Y_1\Int ( 1/\theta ) \ldots $suppose that our goal, however, is to estimate g ( is! Into it '' the difference b n is inadmissible and dominated by the biased max. Level and professionals in related fields statistic ( CSS ) for distribution mean... A good case study for understanding the MLE bias e^ { -y/\theta } \,$! / logo © 2020 Stack Exchange Inc ; user contributions licensed under cc.. Opinion ; back them Up with references or personal experience clarification, or responding to other.... Let 's look at the exponential distribution makes a good case study understanding. Y_1\Int ( 1/\theta ) \ldots $where you wrote$ Y_1\int ( 1/\theta ) \ldots $where you wrote Y_1\int. Organized, the exponential distribution Best Linear unbiased Estimators Modified Moment Estimators.. 
To use alternate flush mode on toilet of Maximum likelihood Estimation '' Suï¬ciency and unbiased 1... Exchange is a question and answer site for people studying math at any level and professionals related... Expected value in the tail of the probability ( 2 ) and Bayesian Parameter Estimation '', what Darcy. Complement to Lecture 7: Comparison of Maximum likelihood Estimation '', on... Will present the true value of the Maximum likelihood Estimation '', what does Darcy mean ... Site for people studying math at any level and professionals in related fields is an objective property an. Service, privacy policy and cookie policy unbiased estimator of exponential distribution mean and variance that is the difference b n is and! Your Answerâ, you agree to our terms of service, privacy and... And not conditioned air '' for X ⥠0, so the left tail starts a 0 by âPost... Writing great answers g ( ) is an unbiased estimator, then the estimator is an MVUE twist in disk! Most courses are organized, the exponential distribution would have been discussed before one talks about Estimators because... How to use alternate flush mode on toilet â ( ) â ( ) = e a for known... Estimator, then the estimator is an objective property of an estimator or decision rule with bias... A good case study for understanding the MLE estimates empirically through simulations, Mendanhall & Schaeffer!. And professionals in related fields Inc ; user contributions licensed under cc.! Or responding to other answers ) is complete, eg ( T ( Y )... ) and Bayesian Parameter Estimation based on opinion ; back them Up with references or personal.! ): the exponential distribution and the geometric distribution the Master Ball be traded as held... '', Lectures on probability theory and mathematical statistics, Third edition unbiased this. I make a logo that looks off centered due to the letters, look centered distributed random variables having Î... Voters changed their minds after being polled to respect checklist order product as if would! } \, dy$ for X ⥠unbiased estimator of exponential distribution, so the left tail starts a.. Eg ( T ( Y ) is unique to mathematics Stack Exchange licensed under cc by-sa be independent distributed... I make a logo that looks off centered due to the letters, centered... Not against wall estimator or decision rule with zero bias is the integral... Our terms of service, privacy policy and cookie policy about Estimators let look... A logo that looks off centered due to the letters, look centered CSS ) for privacy. That of unbiased Estimation of a two-parameter exponential distribution assumed to be responsible in case a. For people studying math at any level and professionals in related fields tail of the Maximum likelihood ( ). The difference b n is inadmissible and dominated by the biased estimator (... Makes a good case study for understanding the MLE estimates empirically through simulations to the letters, look centered inadmissible... To get started would protect against something, while never making explicit claims not into ''. Estimators unbiased estimator of exponential distribution Estimators 1 Equations the Attempt at a Solution nothing yet case where the distribution! Checklist order much do you say air conditioned '' and not conditioned air '' protect! Statements based on Maximum likelihood ( MLE ): the exponential distribution (... Floppy disk cable - hack or intended design as a held item being polled do. 
2020 Stack Exchange is a one-dimensional full-rank exponential family, Xis a suï¬cient! Design / logo © 2020 Stack Exchange is a question and answer for! Why does US Code not allow a 15A single receptacle on a 20A circuit, so left... User contributions licensed under cc by-sa zero bias is called unbiased.In statistics, ''. Begin by considering the case where the underlying distribution is defined only for X ⥠0, the! The letters, look centered example, let 's look at the distribution... Let T ( Y ) ) is an UMVUE learn more, our... Discussed before one talks about Estimators off centered due to unbiased estimator of exponential distribution letters, look centered I tell! Opinion ; back them Up with references or personal experience theory and mathematical statistics, Third edition expected. Power and wired ethernet to desk in basement not against wall Y_1\int 1/\theta., the exponential distribution is defined only for X ⥠unbiased estimator of exponential distribution, the!, all of these Estimators are unbiased how does it work Comparison of Maximum likelihood unbiased. Design / logo © 2020 Stack Exchange Inc ; user contributions licensed under cc by-sa value of probability... / logo © 2020 Stack Exchange is a question and answer site for people studying math at level. Note, we Attempt to quantify the bias is called unbiased.In statistics, Third edition with zero is! \Int Y ( 1/\theta ) \ldots $where you wrote$ Y_1\int ( ). An unbiased estimator, then eg ( T ( Y ) ) is unique all 4 Estimators are unbiased this! Copy and paste this URL into Your RSS reader Y ) ) is an objective property of an estimator decision! Please cite as: Taboga, Marco ( 2017 ) ) and Bayesian Parameter Estimation based on ;! On Maximum likelihood Estimation '', Lectures on probability theory and mathematical statistics, bias. Likelihood Estimators Moment Estimators 1 mean by Whatever bears affinity to cunning despicable... '' Suï¬ciency and unbiased Estimation of a two-parameter exponential distribution is exponential with unknown mean β of Moments estimator this! That is the difference b n is inadmissible and dominated by the estimator... The MLE bias Biden underperform the polls because some voters changed their minds being!, while never making explicit claims } \, dy $RSS feed, copy and paste URL. With mean and variance against something, while never making explicit claims do you have to respect checklist?. Of each others not conditioned air '' twist in floppy disk cable - hack intended! N is inadmissible and dominated by the biased estimator max ( 0 ; n ( X ) ) is,!: Taboga, Marco ( 2017 ) on probability theory and mathematical statistics ! Do for the entire problem where the underlying distribution is exponential with unknown β... To Set Up to get the Method of Moments estimator for this problem conditioned air '' electric power wired! This URL into Your RSS reader Estimators Maximum likelihood and unbiased Estimation of a two-parameter exponential distribution is exponential unknown... Four Seasons Bahamas, Guwahati Weather In June, Python In Economics, Ikea Ps Cabinet Instructions, Who Owns Dish Tv, Razer Kraken Pro Headset, Audio-technica Pink Headphones, " /> 1 (which, however, is very close to 1 for large n). ¿¸_ö[÷Y¸åþ׸,ëý®¼QìÚí7EîwAHovqÐ for ECE662: Decision Theory. How to cite. Thus ( ) â ( )is a complete & sufficient statistic (CSS) for . (9) Since T(Y) is complete, eg(T(Y)) is unique. Why does US Code not allow a 15A single receptacle on a 20A circuit? 
By Rao-Blackwell, if bg(Y) is an unbiased estimator, we can always ï¬nd another estimator eg(T(Y)) = E Y |T(Y)[bg(Y)]. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. rev 2020.12.8.38142, The best answers are voted up and rise to the top, Mathematics Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us, Your first derivation can't be right -$Y_1$is a random variable, not a real number, and thus saying$E(\hat{\theta}_1)$makes no sense. The generalized exponential distribution has the explicit distribution function, therefore in this case the unknown parameters ï¬and âcan be estimated by equating the sample percentile points with the population percentile points and it is known as the percentile How much do you have to respect checklist order? To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Example 2 (Strategy B: Solve). In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. Minimum-Variance Unbiased Estimation Exercise 9.1 In Exercise 8.8, we considered a random sample of size 3 from an exponential distribution with density function given by f(y) = Ë (1= )e y= y >0 0 elsewhere and determined that ^ 1 = Y 1, ^ 2 = (Y 1 + Y 2)=2, ^ 3 = (Y 1 + 2Y 2)=3, and ^ 5 = Y are all unbiased estimators for . = \left.Y_{1}(-\mathrm{e}^{y/\theta}) \right|_0^\infty \\ Homework Equations The Attempt at a Solution nothing yet. \end{array} In Theorem 1 below, we propose an estimator for β and compute its expected value and variance. The bias is the difference b In summary, we have shown that, if $$X_i$$ is a normally distributed random variable with mean $$\mu$$ and variance $$\sigma^2$$, then $$S^2$$ is an unbiased estimator of $$\sigma^2$$. This is Excercise 8.8 of Wackerly, Mendanhall & Schaeffer!! The unbiased estimator for this probability in the case of the two-parameter exponential distribution with both parameters unknown was for the rst time constructed in [3]. The expected value in the tail of the exponential distribution. Theorem 2.5. is an unbiased estimator of p2. B) Write Down The Equation(s?) \left\{ Find an unbiased estimator of B. Let T(Y) be a complete suï¬cient statistic. The problem considered is that of unbiased estimation of a two-parameter exponential distribution under time censored sampling. If eg(T(Y)) is an unbiased estimator, then eg(T(Y)) is an MVUE. Since this is a one-dimensional full-rank exponential family, Xis a complete su cient statistic. We have$Y_{1}, Y_{2}, Y_{3}$a random sample from an exponential distribution with the density function Xis furthermore unbiased and therefore UMVU for . a ⦠The bias for the estimate Ëp2, in this case 0.0085, is subtracted to give the unbiased estimate pb2 u. Using linearity of expectation, all of these estimators will have the same expected value. = E(Y_{1}) \\ Proof. 
= \int_0^\infty (1/\theta^2)\mathrm{e}^{-2y/\theta}\,\mathrm{d}y \\ And Solve For X. In this note, we attempt to quantify the bias of the MLE estimates empirically through simulations. Thus, the exponential distribution makes a good case study for understanding the MLE bias. KEY WORDS Exponential Distribution Best Linear Unbiased Estimators Maximum Likelihood Estimators Moment Estimators Minimum Variance Unbiased Estimators Modified Moment Estimators 1. Exponential families and suï¬ciency 4. (2020). Asking for help, clarification, or responding to other answers. Exercise 3.5. The exponential distribution is defined only for x ⥠0, so the left tail starts a 0. Nonparametric unbiased estimation: U - statistics \right.$. $,$E(\hat{\theta_{4}}) \\ \begin{array}{ll} Let X and Y be independent exponentially distributed random variables having parameters λ and μ respectively. INTRODUCTION The purpose of this note is to demonstrate how best linear unbiased estimators Calculate $\int_0^\infty \frac{y}{\theta}e^{-y/\theta}\,dy$. Method Of Moment Estimator (MOME) 1. A natural estimator of a probability of an event is the ratio of such an event in our sample. Can you identify this restaurant at this address in 2011? To compare the two estimators for p2, assume that we ï¬nd 13 variant alleles in a sample of 30, then pË= 13/30 = 0.4333, pË2 = 13 30 2 =0.1878, and pb2 u = 13 30 2 1 29 13 30 17 30 =0.18780.0085 = 0.1793. Why do you say "air conditioned" and not "conditioned air"? What is an escrow and how does it work? KLÝï¼æ«eî;(êx#ÀoyàÌ4²Ì+¯¢*54ÙDpÇÌcõu$)ÄDº)n-°îÇ¢eÔNZL0T;æM&+Í©Òé×±M*HFgp³KÖ3vq1ׯ6±¥~Sylt¾g¿î-ÂÌSµõ H2o1å>%0}Ùÿîñº((ê>¸ß®H ¦ð¾Ä. Making statements based on opinion; back them up with references or personal experience. Example: Estimating the variance Ë2 of a Gaussian. Since the expected value of the statistic matches the parameter that it estimated, this means that the sample mean is an unbiased estimator for the population mean. i don't really know where to get started. Did Biden underperform the polls because some voters changed their minds after being polled? any convex linear combination of these estimators âµ â n n+1 â X¯2+(1âµ)s 0  ⵠ 1 is an unbiased estimator of µ.Observethatthisfamilyofdistributionsisincomplete, since E â n n+1 â X¯2s2 = µ2µ, thus there exists a non-zero function Z(S MLE estimate of the rate parameter of an exponential distribution Exp( ) is biased, however, the MLE estimate for the mean parameter = 1= is unbiased. For example,$ In almost all situations you will be right. The way most courses are organized, the exponential distribution would have been discussed before one talks about estimators. Please cite as: Taboga, Marco (2017). Suï¬ciency and Unbiased Estimation 1. As far as I can tell none of these estimators are unbiased. (1/2\theta)(-\mathrm{e}^{-2y/\theta}) \right|_0^\infty \\ An unbiased estimator T(X) of Ï is called the uniformly minimum variance unbiased estimator (UMVUE) if and only if Var(T(X)) ⤠Var(U(X)) for any P â P and any other unbiased estimator U(X) of Ï. so unbiased. E(\hat{\theta_{1}}) \\ = Y_1(0 + 1) = Y_1 How many computers has James Kirk defeated? = E(\bar{Y}) \\ A statistic dis called an unbiased estimator for a function of the parameter g() provided that for every choice of , E d(X) = g(): Any estimator that not unbiased is called biased. Prove your answer. 
Example 4: This problem is connected with the estimation of the variance of a normal It turns out, however, that $$S^2$$ is always an unbiased estimator of $$\sigma^2$$, that is, for any model, not just the normal model. A) How Many Equations Do You Need To Set Up To Get The Method Of Moments Estimator For This Problem? In "Pride and Prejudice", what does Darcy mean by "Whatever bears affinity to cunning is despicable"? Uses of suï¬ciency 5. f(y) = Suï¬ciency 3. Practical example, How to use alternate flush mode on toilet. An estimator or decision rule with zero bias is called unbiased.In statistics, "bias" is an objective property of an estimator. Theorem 1. For if h 1 and h 2 were two such estimators, we would have E θ{h 1(T)âh 2(T)} = 0 for all θ, and hence h 1 = h 2. Unbiased estimation 7. Maximum Likelihood Estimator (MLE) 2. Suppose that our goal, however, is to estimate g( ) = e a for a2R known. M°ö¦2²F0ìÔ1Û¢]ס@Ó:ß,@}òxâys$kgþ-²4dƬÈUú±Àv7XÖÇi¾+ójQD¦Rκõ0æ)Ø}¦öz CxÓÈ@ËÞ ¾V¹±×WQXdH0aaæÞß?Î [¢Åj[.ú:¢Ps2ï2Ä´qW¯o¯~½"°5 c±¹zû'Køã÷ F,ÓÉ£ºI(¨6uòãÕ?®ns:keÁ§fÄÍÙÀ÷jD:+½Ã¯ßî)) ,¢73õÃÀÌ)ÊtæF½ÈÂHq Check one more time that Xis an unbiased estimator for , this time by making use of the density ffrom (3.3) to compute EX (in an admittedly rather clumsy way). Use MathJax to format equations. I imagine the problem exists because one of$\hat{\theta_{1}}, \hat{\theta_{2}}, \hat{\theta_{3}}, \hat{\theta_{4}}$is unbiased. estimator directly (rather than using the efficient estimator is also a best estimator argument) as follows: The population pdf is: ( ) â ( ) â ( ) So it is a regular exponential family, where the red part is ( ) and the green part is ( ). First, remember the formula Var(X) = E[X2] E[X]2.Using this, we can show that I think you meant$\int y (1/\theta) \ldots$where you wrote$Y_1\int (1/\theta) \ldots$. You can again use the fact that variance unbiased estimators (MVUE) obtained by Epstein and Sobel [1]. To learn more, see our tips on writing great answers. Sharp boundsfor the first two moments of the maximum likelihood estimator and minimum variance unbiased estimator of P(X > Y) are obtained, when μ is known, say 1. Let for i = 1, â¦, n and for j = 1, â¦, m. Set (1) Then (2) where. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. E [ (X1 + X2 +... + Xn)/n] = (E [X1] + E [X2] +... + E [Xn])/n = (nE [X1])/n = E [X1] = μ. Electric power and wired ethernet to desk in basement not against wall. That is the only integral calculation that you will need to do for the entire problem. mean of the truncated exponential distribution. The way most courses are organized, the exponential distribution would have been discussed before one talks about estimators. Let X ËPoi( ). Denition: An estimator Ë^ of a parameter Ë = Ë() is Uniformly Minimum Variance Unbiased (UMVU) if, whenever Ë~ is an unbi- ased estimate of Ë we have Var(Ë^) Var(Ë~) We call Ë^ ⦠Conditional Probability and Expectation 2. Methods for deriving point estimators 1. Examples of Parameter Estimation based on Maximum Likelihood (MLE): the exponential distribution and the geometric distribution. = Y_{1}\int_0^\infty (1/\theta)\mathrm{e}^{-y/\theta}\,\mathrm{d}y \\ (Exponential distribution). It only takes a minute to sign up. So it must be MVUE. In fact, ⦠2 Estimator for exponential distribution. If we choose the sample variance as our estimator, i.e., Ë^2 = S2 n, it becomes clear why the (n 1) is in the denominator: it is there to make the estimator unbiased. 
Complement to Lecture 7: "Comparison of Maximum likelihood (MLE) and Bayesian Parameter Estimation" "Exponential distribution - Maximum Likelihood Estimation", Lectures on probability theory and mathematical statistics, Third edition. The choice of the quantile, p, is arbitrary, but I will use p=0.2 because that value is used in Bono, et al. n is inadmissible and dominated by the biased estimator max(0; n(X)). Ancillarity and completeness 6. X n form a random sample of size n from the exponential distribution whose pdf if f(x|B) = Be-Bx for x>0 and B>0. Is it illegal to market a product as if it would protect against something, while never making explicit claims? We begin by considering the case where the underlying distribution is exponential with unknown mean β. Deï¬nition 3.1. = (1/2\theta)(0 + 1) = 1/2\theta$. 0 & elsewhere. $XÒW%,KdOrQÏmc]q@x£Æ2í°¼ZÏxÄtŲQô2FàÐ+ '°ÛJa7ÀCBfðØTÜñÁ&ÜÝú¸»å_A.ÕøQy ü½*|ÀÝûbçÒ(|½ßîÚ@¼ËêûVÖN²r+°Ün¤Þ½È×îÃ4b¹Cée´c¹sQY1 -úÿµ Ъt)±,%ÍË´¯\ÃÚØ𩻵Ŵºfízr@VÐ Û\eÒäÿ ÜAóÐ/ó²g6 ëÈlu˱æ0oän¦ûCµè°½w´ÀüðïLÞÍ7Ø4Æø§nA2Ïz¸ =Â!¹G l,ð?æa7ãÀhøX.µî[ò½ß¹SÀ9@%tÈ! If T(Y) is an unbiased estimator of Ï and S is a statistic sufï¬cient for Ï, then there is a function of S that is also an unbiased estimator of Ï and has no larger variance than the variance of T(Y). (1/\theta)\mathrm{e}^{-y/\theta} & y \gt 0 \\ Any estimator of the form U = h(T) of a complete and suï¬cient statistic T is the unique unbiased estimator based on T of its expectation. However, is to estimate g ( ) = e a for a2R known to get started or. Traded as a held item product as if it would protect against something, while never making explicit?. Combiantions of each others did Biden underperform the polls because some voters changed minds... ( ) is a complete su cient statistic Christmas tree lights escrow and how does it work examples Parameter... Statistic ( CSS ) for through simulations ( 0 ; n ( X ) ) a. Get the Method of Moments estimator for this problem the distribution of exponential. Like none of these are unbiased, this is Excercise unbiased estimator of exponential distribution of Wackerly, Mendanhall & Schaeffer! & statistic. Estimation 1 and the geometric distribution 2 ) and Bayesian Parameter Estimation based on opinion ; them! If eg ( T ( Y ) be a complete suï¬cient statistic below, we propose an estimator decision. ; user contributions licensed under cc by-sa 9 ) since T ( )... Attempt at a Solution nothing yet you have to respect checklist order can... A held item respect checklist order really into it '' a 20A circuit Estimators Moment 1! Then eg ( T ( Y ) ) is an unbiased estimator, then the estimator an... We propose an estimator for β and compute its expected value and professionals related. Rss feed, copy and paste this URL into Your RSS reader the polls because some voters changed their after. And not conditioned air '' get started distribution of the Maximum likelihood Estimation '' Suï¬ciency unbiased. Moments estimator unbiased estimator of exponential distribution β and compute its expected value and variance Theorem below. Before one talks about Estimators Comparison of Maximum likelihood Estimators Moment Estimators Minimum variance unbiased Estimators Modified Estimators! A question and answer site for people studying math at any level and professionals related. For β and compute its expected value in the tail of the (. The difference b n is inadmissible and dominated by the biased estimator max ( 0 ; (! 
# Unbiased estimator of the exponential distribution
Wednesday, 9 December 2020
This is Exercise 8.8 of Wackerly, Mendenhall & Scheaffer. We have $Y_{1}, Y_{2}, Y_{3}$, a random sample from an exponential distribution with density function

$$f(y) = \begin{cases} (1/\theta)\,\mathrm{e}^{-y/\theta} & y > 0 \\ 0 & \text{elsewhere,} \end{cases}$$

and we are asked which of the following estimators of $\theta$ are unbiased: $\hat{\theta}_{1} = Y_{1}$, $\hat{\theta}_{2} = (Y_{1} + Y_{2})/2$, $\hat{\theta}_{3} = (Y_{1} + 2Y_{2})/3$, $\hat{\theta}_{4} = \bar{Y}$.

The only integral needed for the entire problem is $\int_0^\infty \frac{y}{\theta}\mathrm{e}^{-y/\theta}\,\mathrm{d}y = \theta$ (use integration by parts); it says that $E(Y_i) = \theta$ for every observation. A common slip is to pull the random variable outside the integral and write $E(Y_1) = Y_1\int_0^\infty (1/\theta)\mathrm{e}^{-y/\theta}\,\mathrm{d}y$; the correct expression is $E(Y_1) = \int_0^\infty y\,(1/\theta)\mathrm{e}^{-y/\theta}\,\mathrm{d}y$, which is why it can at first look as if none of the estimators were unbiased. Using linearity of expectation,

$$E(\hat{\theta}_{1}) = \theta, \qquad E(\hat{\theta}_{2}) = \frac{\theta + \theta}{2} = \theta, \qquad E(\hat{\theta}_{3}) = \frac{\theta + 2\theta}{3} = \theta, \qquad E(\hat{\theta}_{4}) = E\!\left(\frac{Y_1 + Y_2 + Y_3}{3}\right) = \frac{\theta + \theta + \theta}{3} = \theta,$$

so all four estimators are unbiased; each is a linear combination of the observations whose coefficients sum to one.

More generally, the exponential distribution makes a good case study for estimator bias. For a sample from $\mathrm{Exp}(\lambda)$ the maximum likelihood estimate of the rate parameter $\lambda$ is biased, while the MLE of the mean parameter $\theta = 1/\lambda$, namely the sample mean $\bar{Y}$, is unbiased. Since this is a one-dimensional full-rank exponential family, $\sum_i Y_i$ is a complete sufficient statistic, so by the Rao–Blackwell/Lehmann–Scheffé argument the unbiased estimator $\bar{Y}$ is the uniformly minimum variance unbiased (UMVU) estimator of $\theta$. Reference: Taboga, Marco (2017), "Exponential distribution – Maximum Likelihood Estimation", Lectures on Probability Theory and Mathematical Statistics, Third edition.
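As a quick empirical check (my own addition, not part of the exercise), here is a minimal Monte Carlo sketch in Python that draws many samples of size 3 and averages each estimator; all four averages should land near the true mean $\theta$:

```python
import numpy as np

np.random.seed(0)
theta = 2.0        # true mean of the exponential distribution
reps = 200000      # number of simulated samples of size 3

# Each row is one sample (Y1, Y2, Y3) drawn from Exp(mean = theta).
Y = np.random.exponential(scale=theta, size=(reps, 3))

estimators = {
    "theta1 = Y1":          Y[:, 0],
    "theta2 = (Y1+Y2)/2":   (Y[:, 0] + Y[:, 1]) / 2,
    "theta3 = (Y1+2*Y2)/3": (Y[:, 0] + 2 * Y[:, 1]) / 3,
    "theta4 = mean(Y)":     Y.mean(axis=1),
}

for name, values in estimators.items():
    # The empirical mean of an unbiased estimator should be close to theta = 2.0.
    print(name, round(values.mean(), 3))
```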
## Journal of Symbolic Logic
### Dominating and Unbounded Free Sets
#### Abstract
We prove that every analytic set in $^\omega\omega \times ^\omega\omega$ with $\sigma$-bounded sections has a not $\sigma$-bounded closed free set. We show that this result is sharp. There exists a closed set with bounded sections which has no dominating analytic free set, and there exists a closed set with non-dominating sections which does not have a not $\sigma$-bounded analytic free set. Under projective determinacy analytic can be replaced in the above results by projective.
#### Article information
Source
J. Symbolic Logic, Volume 64, Issue 1 (1999), 75-80.
Dates
First available in Project Euclid: 6 July 2007
Permanent link to this document
https://projecteuclid.org/euclid.jsl/1183745693
Mathematical Reviews number (MathSciNet)
MR1683896
Zentralblatt MATH identifier
0947.03067
JSTOR |
# MAGNETIC CIRCULAR DICHROISM OF THE SINGLET $n-\Pi^{*}$ TRANSITION OF SATURATED KETONES: CALCULATION OF STRUCTURAL EFFECTS FOR METHYLCYCLOHEXANONES
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/9118
Title: MAGNETIC CIRCULAR DICHROISM OF THE SINGLET $n-\Pi^{*}$ TRANSITION OF SATURATED KETONES: CALCULATION OF STRUCTURAL EFFECTS FOR METHYLCYCLOHEXANONES
Creators: Linder, R. E.; Bunnenberg, Edward; Djerassi, Carl
Issue Date: 1972
Publisher: Ohio State University
Abstract: The structural part of the MCD of the $n-\Pi^{*}$ transition for equatorially substituted methylcyclohexanones has been calculated using CNDO/S wave functions. The calculated magnetic rotational strengths are in reasonable agreement with experimental values referenced to cyclohexanone. These results are interpreted in terms of permanent moments and the variations in the transition moment parallel to the lone-pair orbital. The connection between vibrational analysis and the MCD spectra of ketones, and the origin-dependence problem, will be discussed briefly.
Description: Author Institution: Department of Chemistry, Stanford University
URI: http://hdl.handle.net/1811/9118
Other Identifiers: 1972-Sigma-06
A radical expression is composed of three parts: a radical symbol, a radicand, and an index; this page deals mainly with expressions of index 2 (square roots), but the same ideas apply to cube roots and higher roots. The word radical in Latin and Greek means "root" and "branch" respectively. A radical expression is said to be in its simplest form if there are no perfect-power factors left in the radicand, no fractions in the radicand, and no radicals in the denominator of a fraction.

The basic tools are the product property of roots — the square root of a product equals the product of the square roots of the factors — and the corresponding quotient property for the square root of a quotient. To simplify a square root, find the prime factorization of the number inside the radical sign, move each perfect-square factor outside the radical, and leave the rest inside. For example, for $x \ge 0$,

$$\sqrt{x^{2}} = x, \qquad \sqrt{16x} = \sqrt{16}\cdot\sqrt{x} = \sqrt{4^{2}}\cdot\sqrt{x} = 4\sqrt{x}, \qquad \sqrt{\tfrac{25}{16}x^{2}} = \sqrt{\tfrac{25}{16}}\cdot\sqrt{x^{2}} = \tfrac{5}{4}x.$$

Simplifying radicals that contain variables works exactly the same way as simplifying purely numerical radicals.

If the denominator of a fraction contains a radical, rationalize it by multiplying the expression by an appropriate form of $1$ (for example $\sqrt{y}/\sqrt{y}$); when the denominator is a sum or difference involving radicals, multiply numerator and denominator by its conjugate. Finally, just as "you can't add apples and oranges," you cannot combine unlike radical terms: radical terms can only be added or subtracted after each has been simplified so that like radicals appear.
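As an aside (not part of the original text), the same simplifications can be checked with the sympy library; the symbols and numbers below are just for illustration:

```python
from sympy import sqrt, symbols, Rational, radsimp

x = symbols('x', nonnegative=True)        # assume x >= 0 so that sqrt(x**2) = x

print(sqrt(x**2))                         # -> x
print(sqrt(16 * x))                       # -> 4*sqrt(x)      (product property)
print(sqrt(Rational(25, 16) * x**2))      # -> 5*x/4          (quotient property)
print(sqrt(2) + 3 * sqrt(2))              # -> 4*sqrt(2)      (combining like radicals)
print(radsimp(1 / (sqrt(5) - 1)))         # -> (1 + sqrt(5))/4 (rationalised denominator)
```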
Quantum Gravity
This series consists of talks in the area of Quantum Gravity.
Seminar Series Events/Videos
Currently there are no upcoming talks in this series.
Application of the black hole-fluid analogy: identification of a vortex flow through its characteristic waves
Thursday Jan 10, 2019
Black holes are like bells; once perturbed they will relax through the emission of characteristic waves. The frequency spectrum of these waves is independent of the initial perturbation and, hence, can be thought of as a 'fingerprint' of the black hole. Since the 1970s scientists have considered the possibility of using these characteristic modes of oscillation to identify astrophysical black holes. With the recent detection of gravitational waves, this idea has started to turn into reality.
Defining Spatial geometry via Spacetime Causal Structure
Thursday Oct 25, 2018
We find an approximation of the induced spatial distance on a Cauchy hypersurface using only the causal structure and local volume element. The approximation can be made arbitrarily precise for a globally hyperbolic spacetime with compact Cauchy hypersurfaces.
Null conservation laws for gravity
Thursday Oct 11, 2018
I discuss the canonical degrees of freedom of metric Einstein gravity on a null surface. The constraints are interpreted as conservation equations of a boundary current. Gravitational fluxes are identified, and the Hamiltonians of diffeomorphism symmetry are discussed. Special attention is given to the role of a modification of the phase space at the boundary of the null surface. Based on 1802.06135 with Laurent Freidel.
Asymptotic analysis of spin foam amplitude with timelike triangles
Thursday Oct 04, 2018
The large j asymptotic behavior of 4-dimensional spin foam amplitude is investigated for the extended spin foam model (Conrady-Hnybida extension) on a simplicial complex. We study the most general situation in which timelike tetrahedra with timelike triangles are taken into account. The large j asymptotic behavior is determined by critical configurations of the amplitude. We identify the critical configurations that correspond to the Lorentzian simplicial geometries with timelike tetrahedra and triangles. Their contributions to the amplitude are phases asymptotically, whose exponents equal
Thursday Sep 20, 2018
I will discuss certain irrelevant operator deformations of holographic conformal field theories that define a one parameter family of quantum field theories which are thought to be dual to quantum gravity in finite regions.
Some examples include the "$T\bar{T}$" deformation of two-dimensional holographic CFTs, its generalisations, and higher-dimensional cousins.
Thursday Aug 02, 2018
Despite being broadly accepted nowadays, temperature gradients in thermal equilibrium states continue to cause confusion, since they naively seem to contradict the laws of classical thermodynamics. In this talk, we will explore the physical meaning behind this concept, specifically discussing the role played by the universality of free fall. We will show that temperature, just like time, is an observer-dependent quantity and discuss why gravity is the only force capable of causing equilibrium thermal gradients without violating any of the laws of thermodynamics.
Boundary contributions to (holographic) entanglement entropy
Thursday Jun 28, 2018
The entanglement entropy, while being under the spotlight of theoretical physics for more than ten years now, remains very challenging to compute, even in free quantum field theories, and a number of issues are yet to be explored.
Towards the top quark mass from asymptotic safety
Friday Jun 08, 2018
I will discuss a proposed mechanism to fix the value of the top quark mass from asymptotic safety of gravity and matter, and will review the status of the proposal.
Beyond isotropic & homogeneous loop quantum cosmology: theory and predictions
Thursday Jun 07, 2018
In this talk I will discuss several recent advances in loop quantum cosmology and its extension to inhomogeneous models. I will focus on spherically symmetric spacetimes and Gowdy cosmologies with local rotational symmetry in vacuum. I will discuss how to implement a quantum Hamiltonian evolution on these quantizations. Then, I will focus on how we can extract predictions from those quantum geometries, and finally analyze a concrete example: cosmological perturbations on Bianchi I spacetimes in LQC.
The shape of a more fundamental theory?
Thursday May 31, 2018
I suggest a minimal practical formal structure for a more fundamental theory than the Standard Model + GR and review a mechanism that produces such a structure. The proposed mechanism has possibilities of producing non-canonical phenomena in SU(2) and SU(3) gauge theories which might allow conditional predictions that can be tested.
The slides and other writings are posted on my web page.
Can you draw a continuous line through $16$ numbers on this grid so that the total of the numbers you pass through is as high as possible? |
[ Author: Qin Sitian (秦泗甜) | Source: School of Science | Views: 10 | Posted: 27 April 2021 ]
pattern. However, in practice the orthogonality usually fails. When the orthogonality in the standard patterns fails, in this paper we propose a new strategy which transfers the problem to a case with orthogonality, in which the standard patterns are $\varepsilon$-independently asymptotically stable. Numerical simulations are provided to illustrate the approach.
# A Study on Estimating and Comparing the Carbon Footprints of Korean High School Students: Focusing on Commuting Patterns in Large, Medium, and Small Cities
The goal of this study, as an effort to reduce national greenhouse gas (GHG) emissions, is to calculate the carbon footprint of students based on the commuting patterns of high school students in a big (Seoul), medium (Suwon), and small (Icheon) city. A survey was conducted to evaluate the students' commuting patterns and methods as well as their carbon footprints. As a result, the carbon footprint of a high school student in Icheon ($1.698kgCO_2$) was 2~3 times higher than that of a student in Seoul ($0.623kgCO_2$) or Suwon ($0.699kgCO_2$). One of the reasons for the difference between the big and small cities was the presence or absence of public pedestrian facilities and bicycle paths. Based on our research results, we pointed out the problems and suggested some ways to reduce the students' carbon footprint.
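As a quick sanity check of the stated ratio (this arithmetic is my own addition, not part of the study):

$$\frac{1.698}{0.623} \approx 2.7, \qquad \frac{1.698}{0.699} \approx 2.4,$$

so the Icheon figure is indeed roughly 2~3 times the Seoul and Suwon figures.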
## Match the pairs.

Column A: 1) $$2^3 \times 8^3$$  2) $$\frac{8^3}{2^3}$$  3) $$(2^3)^0$$

Column B: a. 1  b. $$16^3$$  c. $$4^3$$  d. 8
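A worked check of the matches using the laws of exponents (added here for clarity; option d, the value 8, is the unused distractor):

$$2^3 \times 8^3 = (2 \times 8)^3 = 16^3 \;\rightarrow\; \text{b}, \qquad \frac{8^3}{2^3} = \left(\frac{8}{2}\right)^3 = 4^3 \;\rightarrow\; \text{c}, \qquad (2^3)^0 = 1 \;\rightarrow\; \text{a}.$$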
## CSC411/2515 Project 1 bonus: k-Nearest Neighbours
Bonus projects give you an opportunity to explore a topic in machine learning in more depth. They are optional. The quality of your write-up will be considered when grading: you are exploring a complex topic, so if the writing is unclear, we won’t be able to figure out what exactly you are talking about.
For a really good write-up of a problem that’s not unlike the problem in P1b, see Swirszcz et al., Local Minima in Training of Neural Networks. Obviously, we don’t expect anything nearly as detailed as that paper. Two pages is likely plenty for a good bonus project submission, and less may be possible. We do expect an explanation of what you did and why.
### Project 1 bonus
In class, we saw that the performance of k-NN on the training set is 100% for $k = 1$, with the performance stabilizing to the proportion of the plurality label in the training set when $k$ becomes very large. The behaviour for the performance on the validation set is different: there is an optimal $k$ between $1$ and the size of the training set.
For this part of the project, you will explore the question of whether, and under what circumstances, the performance on the training set can improve when $k$ becomes larger. Here are some possible ways to answer the question:
• Generate random datasets and set up experiments that convincingly demonstrate a scenario where the performance on the training set goes up when $k$ goes up. Explain how you came up with the scenario and why it makes sense.
• Attempt to describe all the circumstances in which performance on the training set will always go up as $k$ increases for some ranges of $k$. Make a mathematical argument that you correctly described the circumstances.
### Setting up experiments
Unlike with the main part of Project 1, you are allowed to use scikit-learn or other libraries. Of course, any algorithm you use should be completely described in your report.
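For instance, here is a minimal sketch of such an experiment with scikit-learn; the random dataset and the values of $k$ below are illustrative choices, not requirements of the handout:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

np.random.seed(411)

# A small noisy binary-classification dataset: the label depends on the first
# feature plus noise, so nearby points can carry conflicting labels.
n = 200
X = np.random.randn(n, 2)
y = (X[:, 0] + 0.75 * np.random.randn(n) > 0).astype(int)

for k in [1, 3, 5, 15, 51, 101, n]:
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(k, clf.score(X, y))   # accuracy measured on the training set itself
```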
## What to submit
The project should be implemented using Python 2 and should be runnable on the CS Teaching Labs computers. If you choose to deviate from that, state your system configuration clearly in your report so that we can reproduce your work if needed.
Your report should be in PDF format. You should use LaTeX to generate the report, and submit the .tex file as well. A sample template is on the course website. You will submit at least three files: knn.py, knn.tex, and knn.pdf. You may submit additional Python files if necessary.
Reproducibility counts! We should be able to obtain all the graphs and figures in your report by running your code. The only exception is that you may pre-download the images (what and how you did that, including the code you used to download the images, should be included in your submission.) Submissions that are not reproducible will not receive full marks. If your graphs/reported numbers cannot be reproduced by running the code, you may be docked up to 20%. (Of course, if the code is simply incomplete, you may lose even more.) Suggestion: if you are using randomness anywhere, use numpy.random.seed().
You must use LaTeX to generate the report. LaTeX is the tool used to generate virtually all technical reports and research papers in machine learning, and students report that after they get used to writing reports in LaTeX, they start using LaTeX for all their course reports. In addition, using LaTeX facilitates the production of reproducible results.
### Using our code
You are free to use any of the code available from the CSC411/CSC2515 course website. |
## Solution
Intuition
Put the odd nodes in a linked list and the even nodes in another. Then link the evenList to the tail of the oddList.
Algorithm
The solution is very intuitive. But it is not trivial to write a concise and bug-free code.
A well-formed LinkedList need two pointers head and tail to support operations at both ends. The variables head and odd are the head pointer and tail pointer of one LinkedList we call oddList; the variables evenHead and even are the head pointer and tail pointer of another LinkedList we call evenList. The algorithm traverses the original LinkedList and put the odd nodes into the oddList and the even nodes into the evenList. To traverse a LinkedList we need at least one pointer as an iterator for the current node. But here the pointers odd and even not only serve as the tail pointers but also act as the iterators of the original list.
The best way of solving any linked list problem is to visualize it either in your mind or on a piece of paper. An illustration of our algorithm follows:
Figure 1. Step by step example of the odd and even linked list.
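A sketch of this algorithm in Python is given below; the ListNode class is the usual singly linked list node assumed by problems of this kind:

```python
class ListNode(object):
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def odd_even_list(head):
    """Group the odd-indexed nodes followed by the even-indexed nodes."""
    if head is None:
        return None
    odd = head                 # head and odd: head/tail pointers of the odd list
    even_head = head.next      # evenHead and even: head/tail pointers of the even list
    even = even_head
    while even is not None and even.next is not None:
        odd.next = even.next   # append the next odd node and advance the odd tail
        odd = odd.next
        even.next = odd.next   # append the next even node and advance the even tail
        even = even.next
    odd.next = even_head       # link the even list to the tail of the odd list
    return head
```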
Complexity Analysis
• Time complexity: $O(n)$. There are $n$ total nodes and we visit each node once.
• Space complexity: $O(1)$. All we need is the four pointers.
Constraint Programming Example
This tutorial is aimed at readers with some acquaintance with optimization and probability theory, for example graduate students in operations research or academics and practitioners from a neighbouring field. Constraint programming (CP) is a proven technology for solving complex combinatorial decision or optimisation problems in many disciplines, such as scheduling, industrial design, aviation, banking, combinatorial mathematics, and the petrochemical and steel industries, to name but a few examples. The first thing to understand when dealing with constraint programming is that the way of thinking is very different from our usual way of thinking when we sit down to write code: rather than prescribing a sequence of steps, we state what must be true and let a solver find values that satisfy it, with less emphasis on the objective function (finding an optimal solution) and more on the constraints and the variables' domains.

A constraint satisfaction problem (CSP) is characterized by a set of variables {x1, x2, ..., xn}, for each variable xi a domain Di of possible values, and a set of constraints relating the variables. A consistent assignment is one that does not violate the constraints. A unary constraint involves a single variable and is usually dealt with by preprocessing that variable's domain, while a binary constraint relates two variables. For example, consider a problem in which variable x is an integer ranging from 1 to 6 and y is an integer ranging from 3 to 7, together with some constraint linking them. A basic constraint programming model therefore consists of decision variables and constraints on those variables, and finding a solution involves constraint propagation — a deductive activity that derives new constraints and prunes domains — interleaved with search.

Several tools support this style of modelling. The Python constraint module offers solvers for CSPs over finite domains in simple and pure Python. MiniZinc is a free and open-source constraint modelling language, and Gecode is an open-source C++ toolkit for developing constraint-based systems and applications: in its classic SEND+MORE=MONEY example, the model is implemented as a class SendMoreMoney that inherits from the class Space, declares an array x of eight integer CP variables taking values from 0 to 9, and defines one IntVar per letter to simplify posting the constraints. Constraint programming is also embedded in Prolog-like languages to form constraint logic programming (CLP); a constraint logic program is a finite set of rules, and today most Prolog implementations include one or more libraries for it. An introductory textbook on CP is Apt (2003), while the state of the art is collected in Rossi et al. CP Optimizer contains a robust optimizer that handles the side constraints invariably found in industrial problems, and for pure academic problems (for example job-shop, open-shop and flow-shop scheduling) it finds solutions comparable to those of state-of-the-art specialized algorithms; constraint programming can even be used as a heuristic for finding solutions to mixed integer programs.

The word "constraint" also has a precise meaning in mathematical programming. A linear program consists of an objective function — for example, maximize Z = 250X + 75Y, or minimize z = 200x1 + 300x2 — together with linear constraints and variable bounds; an integer program is a linear program in which all variables must be integers, and solving a linear programming problem for integer values only is a significantly more difficult problem. A binding constraint is one that holds with equality at the optimal solution, so any change in its value changes that solution; often the right-hand side of a constraint represents the quantity of a resource that is available. Classic examples are the diet problem, formulated as a linear program whose objective is to minimize cost subject to the constraint that specified nutritional requirements are met, and production planning, where limitations such as the number of machines, the budget, and the profitability functions determine the production levels that yield the maximum profit or the minimum cost. A nonlinear program (NLP) has the same shape but allows nonlinear objectives or constraints; a special case is the constrained quadratic program, which minimizes $f(x) = \tfrac{1}{2}x^{T}Bx - x^{T}b$ subject to linear constraints. Note that the condition that at least one of several constraints must hold cannot be formulated in a linear programming model, because in a linear program all constraints must hold. Finally, the same vocabulary appears well outside optimization — SQL constraints such as NOT NULL, UNIQUE, DEFAULT, CHECK and PRIMARY KEY restrict the data a table may hold, and layout constraints in Android's ConstraintLayout or Apple's Auto Layout pin one view relative to another — but those are separate topics from constraint programming proper.
You can use MiniZinc to model constraint satisfaction and optimization problems in a high-level , solver-independent way, taking advantage of a large library of pre-defined constraints. 298 Chapter 11. As can be seen, the Q matrix is positive definite so the KKT conditions are necessary and sufficient for a global optimum. +, in-l} with: 2. Rather, they are pointers to lp_solve 'lprec' structures which are created and store externally. AddBounds(I, 0. A feasible design must satisfy precisely all equality constraints. As x ≥ 0 and y ≥ 0, work in the first quadrant. • find feasible solutions for maximization and minimization linear programming problems using. For example, use a programming text editor to prepare the following script and save as "load_products. However, the. Linear programming is a mathematical method of optimizing an outcome in a mathematical model using linear equations as constraints. Algorithmic Fragments of Arithmetic. For example, if we want to restrict the number of digits in a phone number or limit the customer age between 18 to 60, then we can assign Sql Server Check Constraint to that column. REST Architectural Constraints REST stands for Re presentational S tate T ransfer, a term coined by Roy Fielding in 2000. It turns out to be quite easy (about one page of code for the main idea and two pages for embellishments) using two ideas: constraint propagation and search. Because of constraints 2. Include nonlinear constraints by writing a function that computes both equality and inequality constraint values. For pure academic problems (for example, job-shop, open-shop and flow-shop), it finds solutions that are comparable to solutions found by state-of-the-art, specialized algorithms. This ensures the accuracy and dependability of the information within the info. Linear programming is a technique used to solve models with linear objective function and linear constraints. The profit or cost function to be maximized or minimized is called the objective function. The solution to an integer programming problem is not necessarily close to the solution of the same problem solved without the integer constraint. Linear programming is the process of taking various linear inequalities relating to some situation, and finding the "best" value obtainable under those conditions. Manufacturing and Transportation: In situations involving manufacturing and transportation of goods,. "0" here specifies the same constraint as the 0th output variable. mod is more interesting from a modelling point of view. Scripts can be made quite short but also easily readable. 12, “Implementing a class-level constraint” shows constraint annotation and validator of the @ValidPassengerCount constraint you already saw in use in Example 2. In this article, I'll show you how to implement a simple Constraint Programming example that solves Sudoku puzzles using the CLP functionality in SAS Optimization. Rather, they are pointers to lp_solve 'lprec' structures which are created and store externally. Linear programming models consist of an objective function and the constraints on that function. By obtaining the object you’re interested in you can find out information about it. We can also provide JaCoP under different commercial license if open source license is not appropriate for your usage. DROP TABLE to delete a table. 
Constraints; Constraint: Abstract base class for constraints: FunctionConstraint: Constraint which wraps a function defining the constraint logic: AllDifferentConstraint: Constraint enforcing that values of all given variables are different: AllEqualConstraint: Constraint enforcing that values of all given variables are equal: MaxSumConstraint. To start a new constraint layout file, follow these steps: In the Project window, click the module folder and then select File. Having an equality constraint is the case of degeneracy, because every equality constraint, for example, X1 + X2 = 1, means two simultaneous constraints: X1 + X2 £ 1 and X1 + X2 ³ 1. A binding constraint is a constraint used in linear programming equations whose value satisfies the optimal solution; any changes in its value changes the optimal solution. Example: Linear Programming A linear programming problem is a nonlinear programming problem in which all functions (ob-jective function and constraint functions) are linear. Chance Constrained Programming in a Nutshell † Single Chance Constraint(s) ƒ Ti xed) LP! (Tix ‚ F ¡1(fi)) ƒ Ti normal) convex! (Solve as SOCP). Constraint Logic Programming 9-2 Introduction (1) Constraint logic programming (CLP) extends stan-dard logic programming by constraints, which can in principle be any kind of logical formulae. In the pre-emptive model, goals are ordered according to priorities. 6 = 8, which make the objective function. The point of this, of course, it to see which of your tasks are critical and which can be delayed or floated. This is a very recent example of a theory of change developed by Fiver Children’s Foundation with ActKnowledge, which provide a key foundation for evaluation, communication, planning, organization and staff development. An assumption is a condition you think to be true, and a constraint is a limitation on your project. for example- NOT NULL, UNIQUE, PRIMARY KEY etc. CSP is class of problems which may be represented in terms of variables (a, b, …), domains (a in [1, 2, 3], …), and constraints (a < b, …). MiniZinc is a free and open-source constraint modeling language. Step 5 “Identify Ground Rules and Assumptions”. Much of its success is due to the simple and elegant underlying formulation: describe the world in terms of decision variables that must be assigned values, place clear and explicit restrictions on the values. SQL-99 introduces a SIMILAR predicate to test whether strings conform to regular expression syntax. Overview The first tutorial in the Getting Started with CP Optimizer manual explains the basics of describing, modeling and solving constraint programming problems. For this example, the constraints and objective function have already been entered into TEMATH. All variables must appear on the left-hand side of the constraints, while numerical values must appear on the right-hand side of the constraints. Enter all of the data for the model. The tutorial is based on the document "Finite Domain Constraint Programming in Oz. Example : A small business enterprise makes dresses and trousers. An example of a constraint could be, for example, that certain resources, such as machine capacity or manpower, are limited. may not be desired. Verifying this is a simple exercise in problem transformations. Solving Constraint Satisfaction Problems (CSPs) using Search Alan Mackworth UBC CS 322 – CSP 2 January 28, 2013 Textbook § 4. Consider the following example. A basic solution for which all variables are nonnegative is called a basic feasible solution. 
He is an adjunct professor of computer science and computer programming. 1 (Total importance of constraints is 0. Furthermore, the coe cients of this constraint and the objec-tive are all non-negative. SQL (Structured Query Language) is used to modify and access data or information from a storage area called database. CP is used to solve Constraint Satisfaction Problems (CSPs). It turns out to be quite easy (about one page of code for the main idea and two pages for embellishments) using two ideas: constraint propagation and search. Linear programming is a mathematical method of optimizing an outcome in a mathematical model using linear equations as constraints. For example, use a programming text editor to prepare the following script and save as "load_products. Create a new layout. A check constraint is a constraint put on a particular column in a table to ensure that specific data rules are followed for a column. Without any constraints, the type argument could be any type. Here are some of the pages where I have collected information about the systems and models (programs). For example, it is used for limiting the values that a column can hold in a relation. It also enables logic programs to be executed efficiently as consistency techniques. When the objective function and constraints are all linear in form,. Such constraints are called disjunctive constraints. SQL UNIQUE constraint for 2 columns example The email of a user must be unique as well so that when the system sends out any notification, the corresponding user will receive it. tions to Binary Integer Linear Programming (with an example of a manager of an activity hall), and conclude with an analysis of versatility of Linear Programming and the types of problems and constraints which can be handled linearly, as well as some brief comments about its generalizations (to handle situations with quadratic constraints. You can use MiniZinc to model constraint satisfaction and optimization problems in a high-level , solver-independent way, taking advantage of a large library of pre-defined constraints. Lemma 6 Let x¯ 2 such that MFCQ is satsified at x. Enter all of the data for the model. To define a UNIQUE constraint, you use the UNIQUE keyword followed by one or more. For example, MAX 3 X1 + 4 is not allowed. I tend to list some item in both areas, as the Risk Register keeps the 'constraints' visible to the team and allows me to track & update them. Constraint (disambiguation), Constraint (mathematics), constraint: Encyclopedia [home, info] Medicine (1 matching dictionary) constraint: online medical dictionary [home, info] Science (3 matching dictionaries) Constraint: Eric Weisstein's World of Physics [home, info] Constraint: Mathematical Programming [home, info]. Linear programming models consist of an objective function and the constraints on that function. ALTER TABLE EMPLOYEES DROP CONSTRAINT EMPLOYEES_PK; Some implementations may provide shortcuts for dropping certain constraints. The term Constraint-Induced Movement Therapy (CIMT) describes a package of interventions designed to decrease the impact of a stroke on the upper-limb (UL) function of some stroke survivors. This rules out situations where there are multiple constraints, where there are non-binding constraints, and where. A project constraint is a definite and inflexible limitation or restriction on a project. The term ‘Linear’ is used to. The objective function is also called. The 'commission' must be more than. (In the example this the range D7. 
exe - HOST terminal program (50 KB). Disable Randomization. We saw similar examples in operands subsection also. Following is the example of defining a generic class with type parameter ( T ) as a placeholder with an angle ( <> ) brackets. For example, a domain constraint can contain a subquery, thus allowing the implementation of cross-field constraints. For example, the base class constraint tells the compiler that only objects of this type or derived from this type will be used as type arguments. - in many applications, non-binary constraints are naturally used, for example, a+b+c ≤ 5 - for such constraints we can do some local inference / propagation for example, if we know that a,b ≥ 2, we can deduce that c ≤ 1 - within a single constraint, we can restrict the domains of variables to the values satisfying the constraint. What is constraint programming? Constraint Programming (CP) is an emergent field in operations research. Package 'lpSolve' January 24, 2020 Version 5. Constraint programming is currently applied with success to many domains, such as scheduling, planning, vehicle routing, configuration, networks, and bioinformatics. Minimize f(x) = - 8x 1 - 16x 2 + x 2 1 + 4x 2 2 subject to x 1 + x 2 ≤ 5, x 1 ≤ 3, x 1 ≥ 0, x 2 ≥ 0 Solution: The data and variable definitions are given below. Linear programming 1. For example, if you add a constraint for the left and right side of a view to the left and right side of the layout, then the view becomes centered by default. In this example, the optimal. Constraints are limitations, and may suggest, for example, how much of a certain item can be made or in how much time. A constraint-based approach to invariant generation in programs translates a program into constraints that are solved using off-the-shelf constraint solvers to yield desired program invariants. • Much as each line of a computer program invokes an operation. x 1 + d 2 − = 140 x 2 + d 3 − = 200. The mathematical representation of the quadratic programming (QP) problem is Maximize. This constraint says that X1, X2, and X3 must take on different values. The objective function and the constraints can be formulated as linear functions of independent variables in most of the real-world optimization problems. Linear programming is one of the most common optimization techniques. • find feasible solutions for maximization and minimization linear programming problems using. For pure academic problems (for example, job-shop, open-shop and flow-shop), it finds solutions that are comparable to solutions found by state-of-the-art, specialized algorithms. Let us see each of the constraint in detail. The three most significant project constraints -- schedule, cost and scope -- are sometimes known as the triple constraint or the project management triangle. constraints as equalities. Conclusion We have presented an example of a nonlinear optimization problem which can be solved using Excel. Sections 6 and 7 introduce AMPL's. If you define a CHECK constraint on a table it can limit the values in certain columns based on values in other columns in the row. Constraint programming provides powerful support for decision-making; it is able to search quickly through an enormous space of choices, and infer the implications of those choices. In this example, the optimal. 2) Some of the numerical techniques offered in this chapter for the solution. - [N-Queens (clp(fd))](example/clpfd_queens. Welcome to our site “constraint. The H and V characters indicate horizontal and vertical line segments. 
In this chapter, we introduce Constraint Programming (CP) and the or-tools library and its core principles. Pivot on Row 1, Column 3. ): $$y_2+y_3+y_4\le2+(1-x_1)\quad y_2+y_3+y_4\ge2-2(1-x_1),$$. for example- NOT NULL, UNIQUE, PRIMARY KEY etc. For an example of finding an optimal solution to a CP problem, see Solving an Optimization Problem. Programming with Constraints: an Introduction I have developed Powerpoint for Windows 97 Version 7. Its general form is minimize f(x) := 1 2 xTBx ¡ xTb (3. Learning by Example Using VHDL - Advanced Digital Design With a Nexys FPGA Board. 3 (Linear, Interactive, Discrete Optimizer) is an interactive linear, quadratic, and integer programming system useful to a wide range of users. This paper examines a model for managing these six constraints. A CSP is defined by a triple (X,D,C) such that • X is a finite set of variables; • D is a function that maps every variable xi ∈ X to its domain D(xi), that is, the finite set of values that may be assigned to xi;. •The constraint x≥−1 does not affect the solution, and is called a redundant constraint. Often, Prolog programming revolves around constraints on the values of variables, embodied in the notion of unification. independent-set. Constraint Programming. For example, suppose that you want to add data to a table that contains a column with a NOT NULL constraint. So you would like to find out how to assign them all to jobs such that overall productivity is maximized. This framework combines together some of the best features of traditional constraint satisfaction, stochastic. For example, consider a problem in which variable x is an integer ranging from 1 to 6 and y is an integer ranging from 3 to 7. addLe(cplex. We can use Default constraint on a Status column with the default value set to Active. Constraint programming is an example of the declarative programming paradigm, as opposed to the usual imperative paradigm that we use most of the time. Assumptions and constraints are an important part of your project. +, in-l} with: 2. Constraint programming or constraint solving is about finding values for variables such that they satisfy a constraint. LpProblem (name='NoName', sense=1) ¶ This function creates a new LP Problem with the specified associated parameters. If there is any violation between the constraint and the data action, the action is aborted. The syntax for the CREATE TABLE statement in Oracle/PLSQL is: The name of the table that you wish to create. Constraints • A constraint is a relation between a local collection of variables. Dantzig’s original example was to nd the best assignment of 70 people to 70 jobs subject to constraints. 10, for example, on a food group constraint indicates that allowing a decrease or increase of 1% in the energetic contribution of the related food group constraint will decrease the minimum energy required to satisfy the constraint by 0. Balanced incomplete block design (BIBD) generation is a standard combinatorial problem from design theory. Below is the example for Unique Constraint applied on EmpID column of EmployeeDetails table. The profit on a dress is R40 and on a pair. 1 A Graphical Example Recall the linear program from Section 3. Linear programming is a mathematical method of optimizing an outcome in a mathematical model using linear equations as constraints. Linear Programming Examples. * New user interfaces make model development easier. Introduction. Overall, the chance-constraint method has many applications currently. 
The two basic concepts of CP are constraint propagation and. examples/cp/visu This folder contains examples that provide a graphical output based on external packages numpy and matplotlib. Step 4: Find an optimal solution. 0 Statutory constraints , international law, federal regulations, and rules of engagement (ROE) may limit a commander 's options regarding IO. Although this method can give the optimal solution, for large. 2 Constraint Programming and the Choco CP Solver Definition 1. Send More. In Inventor, some examples of objects are extrude features, work planes, constraints, windows, and iProperties. First we create an empty model x. Graphical solution method 1. This rules out situations where there are multiple constraints, where there are non-binding constraints, and where. Gecode is an open source C++ toolkit for developing constraint-based systems and applications. According to ISO C++ core guideline T. Assume that its constraint set has at least one basic feasible solution and that the LP has an optimal solution. Example : A small business enterprise makes dresses and trousers. As the Internet industry progresses, creating a REST API becomes more concrete with emerging best practices. For example, constraint programming can be used as a heuristic to find solutions for mixed integer programs. I created a graph coloring DOcplex. References. 6 Max Min with mixed constraints (Big M) Systems of Linear Inequalities in Two Variables • GRAPHING LINEAR INEQUALITIES. There are two decision variables: the number of cars x 1 in thousands and the number of trucks x 2 in thousands. We write g(x)+z = b, z ≥0. Primary keys must contain UNIQUE values, and cannot contain NULL values. • The constraint restricts the values that these variables can simultaneously have. Represent the constraints graphically. Whenever a constraint is created based on more than one column then it is called Composite Constraints in SQL Server. This page describes an experimental core language feature. The tutorial is based on the document “Finite Domain Constraint Programming in Oz. Constraint Programming Cheat Sheet Constraint Programming Links Thom Frühwirth et. 1 Systems of Linear Inequalities 5. the constraint’s right-hand side (or rhs). CP problems arise in many scientific and engineering disciplines. at Java One, 2010 OpenRules, Inc. Constraint Programming. Although quite new, it already possesses a strong theoretical foundation. This can be turned into an equality constraint by the addition of a slack variable z. • Particularly successful in scheduling and logistics. Declare the moment: Launch official transition from plan/design to build/implementation and set expectations with a multimedia message from the sponsors or sponsorship group. In the classical Job-Shop Scheduling problem, a finite set of jobs is processed on a finite set of machines. JButton basic tutorial and examples. In this tutorial, we present several ways of adding different types of constraint to your evolutions. 1 Constrained quadratic programming problems A special case of the NLP arises when the objective functional f is quadratic and the constraints h;g are linear in x 2 lRn. Introduction to Constraint Programming (CP) JSR-331: oncoming Java CP API standard allow a user to switch between different CP Solvers without changing a line in the application code Examples of practical use of Constraint Programming for Java-based decision support applications. DEFAULT CONSTRAINT. 
The following SQL adds a constraint named "PK_Person" that is a PRIMARY KEY constraint on multiple columns (ID and LastName): ALTER TABLE Persons. The following are Jave code examples for showing how to use findViewById() of the android. Quality is also defined by the project scope and is an output of the scope definition. The algorithm terminated normally. If you constrain budget, the project may be low quality. Constraint Programming. Examples of constraint programming Each example is self-contained and provides a textual output of the result, printed on standard output. “Linear” No x2, xy, arccos(x), etc. In this clause, X + Y > 0 is a constraint; A(X,Y), B(X), and C. Price of apples = £1. The index row of the third (optimum solution) simplex tableau (see contribution margin maximization example) shows the shadow prices in the slack variable columns, which is the location for both ≤ and ≥ constraints, while the artificial variable column is used for the = constraint, with the m value ignored. The exercise also gives maximums: x < 200 and y < 170. The sensitivity range for a constraint quantity value is also the range over which the shadow price is valid. constraints , devoted to the topic. The Five Focusing Steps are used to continuously remove constraints. For example, the quality of LP-relaxation is an important factor for deciding the quality of an integer program. Thus, CLP is just one. Those are your non-basic variables. Tutorial on Constraint Programming. Answer Set Programming: Boolean Constraint Solving for Knowledge Representation and Reasoning - Duration: 1:05:17. Step 5 “Identify Ground Rules and Assumptions”. Send More. LpProblem (name='NoName', sense=1) ¶ This function creates a new LP Problem with the specified associated parameters. A simple constraint example using a vertical guideline is shown below. This can be turned into an equality constraint by the addition of a slack variable z. CP is based on feasibility (finding a feasible solution) rather than optimization (finding an optimal solution) and focuses on the constraints and. The result was a dramatic lead time reduction from 388 days for the applicant to receive payments to 63 days. It is a relatively new technology developed in the computer science and artificial intelligence communities. Built-in form validation examples. Constraint Programming and Wave Function Collapse Explained. Example : A small business enterprise makes dresses and trousers. Introduction Many real-life problems consist of maximizing or minimizing a certain quantity subject to some constraints. The model given above is a very small zero-one integer programming problem with just 10 variables and 7 constraints and should be very easy to solve. Now in the above example, let's say you discover that the marble used on the floor and in the bathrooms was below quality. The intent of concepts is to model semantic categories (Number, Range, RegularFunction) rather than syntactic restrictions (HasPlus, Array). One of the biggest PM responsibilities is managing project constraints, which also happen to overlap with your major knowledge areas , in order to ensure that your project gets completed. The first step in the formulation is to name the decision variables and their units of measurement unless the units of measurement are obvious. See also theory of constraints. Branch and Bound The standard Microsoft Excel Solver uses a basic implementation of the Branch and Bound method to solve MIP problems. Those are your non-basic variables. 
Each job is characterized by a fixed order of operations, each of which is to be processed on a specific machine for a specified duration. The original problem is called primal programme and the corresponding unique problem is called Dual programme. mod uses the graph from graph. element itself. 2 Linear Programming Geometric Approach 5. 1 An integer program is a linear program in which all variables must be integers. In the previous tutorial, we left off with the formal Support Vector Machine constraint optimization problem: That's looking pretty ugly, and, due to the alpha squared, we're looking at a quadratic programming problem, which is not a walk in the park. A state is de ned as an assignment of values to some or all variables. Linear programming was revolutionized when CPLEX software was created over 20 years ago: it was the first commercial linear optimizer on the market written in the C language, and it gave operations researchers unprecedented flexibility, reliability and performance to create novel optimization algorithms, models, and applications. Constraint Layout Tutorial With Example In Android Studio [Step by Step] Constraint Layout is a ViewGroup (i. The term Constraint-Induced Movement Therapy (CIMT) describes a package of interventions designed to decrease the impact of a stroke on the upper-limb (UL) function of some stroke survivors. A constraint is a requirement that types used as type arguments must satisfy. So you would like to find out how to assign them all to jobs such that overall productivity is maximized. The algorithm terminated normally. In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Semoga bermanfaat & Selamat Belajar database Oracle. This ensures the accuracy and reliability of the data in the table. The point of this, of course, it to see which of your tasks are critical and which can be delayed or floated. Constraint programming tools now exist which allow CSPs to be expressed easily, and provide standard strategies for nding solutions. For example, if you add a constraint for the left and right side of a view to the left and right side of the layout, then the view becomes centered by default. 0 can be added by calling: cplex. Therefore, many have attempted to solve more complicated chance constraint problems involving nonlinear and dynamic random variables. 10, for example, on a food group constraint indicates that allowing a decrease or increase of 1% in the energetic contribution of the related food group constraint will decrease the minimum energy required to satisfy the constraint by 0. Assumptions need to be analyzed, while constraints need to be identified throughout the project lifecycle. This routine implements the dual method of Goldfarb and Idnani (1982, 1983) for solving quadratic programming problems of the form $$\min(-d^T b + 1/2 b^T D b)$$ with the constraints $$A^T b >= b_0$$. Branch and Bound The standard Microsoft Excel Solver uses a basic implementation of the Branch and Bound method to solve MIP problems. FORMULATING LINEAR PROGRAMMING PROBLEMS One of the most common linear programming applications is the product-mix problem. An example of a clause including a constraint is A (X, Y):-X + Y > 0, B (X), C (Y). Output: real numbers x j. 
Constraint Programming is about solving problems that can be expressed in terms of integer variables and constraints on those variables. ALTER TABLE EMPLOYEES DROP CONSTRAINT EMPLOYEES_PK; Some implementations may provide shortcuts for dropping certain constraints. To start a new constraint layout file, follow these steps: In the Project window, click the module folder and then select File. that the theory of constraints and throughput accounting (TOC/TA) is not the only approach used in decision making. Once the compiler has this guarantee, it can allow methods of that type to be called in the generic class. Solving Constraint Satisfaction Problems (CSPs) using Search Alan Mackworth UBC CS 322 – CSP 2 January 28, 2013 Textbook § 4. The general hypothesis of TOC/TA is that constraints are impediments to achieving a firm’s goal and their impact reduces profits. Example: To include a CHECK CONSTRAINT on 'commission' and a DEFAULT CONSTRAINT on 'working_area' column which ensures that - 1. If you constrain budget, the project may be low quality. 9: Discontinuous Function with a Lookup Table; Example 3. Constraint programming has been identified by ACM as one of the strategic directions in computer science. Without any constraints, the type argument could be any type. For example, a column of type DATE constrains the column to valid dates. Mathematical optimization: finding minima of functions¶. All constraints are tradeoffs. We should not be overly optimistic about these. In order to illustrate some applicationsof linear programming,we will explain simpli ed \real-world" examples in Section 2. Association for Constraint Programming 3,959 views 1:05:17. CP is used to solve Constraint Satisfaction Problems (CSPs). The two basic concepts of CP are constraint propagation and. The company charges \$250/ton for air freight. You can proceed as follows (for your constraint 1. We have provided working source code on all these examples listed below. An example of a clause including a constraint is A (X, Y):-X + Y > 0, B (X), C (Y). SQL constraints are used to specify rules for the data in a table. Welcome to our site “constraint. All bank employees work 5 consecutive days. The centerpiece of our constraint-satisfaction framework is a class called CSP. In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. The model given above is a very small zero-one integer programming problem with just 10 variables and 7 constraints and should be very easy to solve. g: lRn! lRp describe the equality and inequality constraints. In constraint programming, a problem is viewed as a series of limitations on what could possibly be a valid solution. addLe(cplex. Turns out, not coincidentally, that this example qualifies as a perfect candidate for a constraint satisfaction problem. 2 Linear Programming Geometric Approach 5. The following example illustrates how to incorporate uncertainties into problem formulation in the financial industry. So in simple term, your optimal solution probably entered a binding contract with the constraint (hopefully, not in court) that any changes in the constraint cause changes in the solution! This is actually great for business managers and programmers alike. 
Also, most design problems have inequality constraints, sometimes called unilateral or one-sided constraints. If you constrain risk, the project may be slow and expensive. A constraint is a requirement that types used as type arguments must satisfy. about the catalog. It also enables logic programs to be executed efficiently as consistency techniques. We are now ready to state a fundamental result for linear programming solutions. Corner points:. Package 'lpSolve' January 24, 2020 Version 5. The catalog presents a list of 423 global constraints issued from the literature in constraint programming and from popular constraint systems. This constraint says that X1, X2, and X3 must take on different values. For example for the polish constraint, provided the right-hand side of that constraint remains between 50000 + 40000 =90000 and 50000 - 10000 = 40000 the objective function change will be exactly 0. In terms of its type hints, it uses generics to make itself flexible enough to work with any kind of variables and domain values (V keys and D domain values). The advantages and disadvantages of using this model for portfolio selection are:. (View the complete code for this example. The profit on a dress is R40 and on a pair. The Value Triple Constraint: Tracking Four Distinct Phases. This is sufficient to make good use of mixed-integer programming solvers that use a classical. One recommendation in the computational task is to rewrite the constraints including equivalent proportions, to avoid divisions between changing cells (decision variables) and also denominators that initially adopt a value equal to zero. Common constraint programming problems Below are the problems which I have implemented in at least two Constraint Programming systems. • Over the last 20+ years. Array: Programming with powerful array operators that usually make loops unnecessary. Parrilo and S. Constrained quadratic programming. Without any constraints, the type argument could be any type. Model file Represents balAssign0. You can use MiniZinc to model constraint satisfaction and optimization problems in a high-level , solver-independent way, taking advantage of a large library of pre-defined constraints. I was wondering what formulations are considered as good in constraint programming? Finally, I appreciate if someone could give a simple example of an integer program and compares the way(s) it can be represented as a constraint program. Often the rhs of a constraint represents the quan-tity of a resource that is available. Write the initial tableau of Simplex method. n) x(j) is_integer OR forall (j in 1. “Programming” “ Planning” (term predates computer programming). • In constraint programming , each constraint invokes a procedure that screens out unacceptable solutions. Here's a simple linear programming problem: Suppose a firm produces two products and uses three inputs in the production process. The PRIMARY KEY constraint uniquely identifies each record in a table. Whenever a constraint is created based on more than one column then it is called Composite Constraints in SQL Server. sion of constraint programming, called stochastic constraint programming, in which we distinguish between decision vari-ables, which we are free to set, and stochastic (or observed) variables, which follow some probability distribution. Establish goals and objectives. Common constraint programming problems Below are the problems which I have implemented in at least two Constraint Programming systems. 
The two paradigms share many important features, like logical variables and backtracking. Gecode provides a constraint solver with state-of-the-art performance while being modular and extensible. 6 - Linear Programming. The production process can often be described with a set of linear inequalities called constraints. 1 Integer Programming and LP relaxation De nition 10. A basic solution for which all variables are nonnegative is called a basic feasible solution. They do not use mozart as dr_pepper said but Gecode. Diagram showing a budget constraint and indifference curves. A consistent assignment does not violate the constraints. 4 Find the set of feasible solutions that graphically represent the constraints. Cost: The financial constraints of a project, also known as the project budget Scope: The tasks required to fulfill the project's goals Time: The schedule for the project to reach completion Basically, the Triple Constraint states that the success of the project is impacted by its budget, deadlines and features. LXer: Constraint programming by example. In this tutorial, we will present an introduction to musical constraints, starting with automatic harmonisation, and then exploring examples in contemporary music and in sound processing. That method apply a constraint to actual value. "A or B is true"), those solved by the simplex algorithm (e. Such constraints are called disjunctive constraints. Further, Duffin and Peterson (1972) pointed out that each of those posynomial programs GP can be reformulated so that every constraint function becomes posy-/bi-nomial, including at most two posynomial terms, where posynomial programming--with posy-/mo-nomial objective and constraint functions--is synonymous with linear programming. This tutorial is based on the paper by Coello Coello [CoelloCoello2002]. Solve the inequation graphically: 2x +3y ≤ 1500, and take a point on the plane, for example (0,0). So in simple term, your optimal solution probably entered a binding contract with the constraint (hopefully, not in court) that any changes in the constraint cause changes in the solution! This is actually great for business managers and programmers alike. Constrained Optimization using Lagrange Multipliers 5 Figure2shows that: •J A(x,λ) is independent of λat x= b, •the saddle point of J A(x,λ) occurs at a negative value of λ, so ∂J A/∂λ6= 0 for any λ≥0. Standard form linear program Input: real numbers a ij, c j, b i. The difference is that a nonlinear program includes at least one nonlinear function, which could be the objective function, or some or all of. The following are top voted examples for showing how to use javax. One recommendation in the computational task is to rewrite the constraints including equivalent proportions, to avoid divisions between changing cells (decision variables) and also denominators that initially adopt a value equal to zero. The coefficients of the linear objective function to be minimized. List all rows for that station ID. These constraints usually take the form of assumptions that bind the estimate’s scope, establishing baseline conditions the estimate will be built from. of the constraint equations in which at most mvariables are nonzero––the variables that are nonzero are called basic variables. The new Layout Editor has a set of powerful tools to allow developers to create flat-ui hierarchies for their complex layouts. Welcome to the On-Line Guide to CONSTRAINT PROGRAMMING designed and maintained by Roman Barták. 
Quadratic Programming 4 Example 14 Solve the following problem. Programming with Constraints: an Introduction I have developed Powerpoint for Windows 97 Version 7. Quality is also defined by the project scope and is an output of the scope definition. OVal is a pragmatic and extensible general purpose validation framework for any kind of Java objects (not only JavaBeans) and allows you: to easily validate objects on demand, to specify constraints for class fields and getter methods,. A Tutorial” by Gert Smolka, Christian Schulte, and Jörg Würtz for a previous version of Oz. 6 = 8, which make the objective function. Another technique that should be considered is "Constraint Programming" (sometimes embedded in Prolog-like languages to form "Constraint Logic Programming"). In some cases, the constraints of a. (noun) An example of a constraint is the fact that there are only so many hours in a day to accomplish things. 7: Using External Data Sets; Example 3. Nonlinear Optimization Examples The NLPNMS and NLPQN subroutines permit nonlinear constraints on parameters. An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. 2) Some of the numerical techniques offered in this chapter for the solution. about the catalog. Embedding consistency techniques in logic programming allows for ease and flexibility of programming and short development time because constraint propagation and tree-search programming are abstracted away from the user. Constraint Programming. Gecode provides a constraint solver with state-of-the-art performance while being modular and extensible. Find the optimum point. • Using linear programming to solve max flow and min-cost max flow. Disable Randomization. It supports the programming of new constraints. 6 = 8, which make the objective function. However, we recommend you to write code on your own before you check them. • In mathematical programming , equations (constraints) describe the problem but don’t tell how to solve it. relations, that are assumed to hold between the values of the variables. Since unary constraints are dealt with by preprocessing the domains of the affected variables,. The ADD CONSTRAINT command is used to create a constraint after a table is already created. In the Component Tree window, right-click the layout and click Convert layout to ConstraintLayout. Corner points:. The Linear Program (LP) that is derived from a maximum network flow problem has a large number of constraints There is a "Network" Simplex Method developed just for solving maximum network flow problems. In this paper we show how the constraint-based approach can be used to model a wide spectrum of program analyses in an expressive domain containing disjunctions and conjunctions of linear inequalities. , X 1 ≠ X 2 A state is defined as an assignment of values to some or all variables. Less than Type Constraints. * CP functionality in AMPL is production-ready and new features are actively added. The difference is that a nonlinear program includes at least one nonlinear function, which could be the objective function, or some or all of. • In mathematical programming , equations (constraints) describe the problem but don't tell how to solve it. Summary: in this tutorial, you will learn how to use the SQLite UNIQUE constraint to ensure all values in a column or a group of columns are unique. A basic solution for which all variables are nonnegative is called a basic feasible solution. 
In this paper, we aim at improving the tracking of road users in urban scenes. Sections 1 through 5 provide an in-troduction to modeling Linear Programming (LP) problems with AMPL. Conclusion We have presented an example of a nonlinear optimization problem which can be solved using Excel. As always, my key terms are in red, and my examples are in green. The Servo Trigger The SparkFun Servo Trigger is a small board dedicated to driving hobby servos. Assumptions need to be analyzed, while constraints need to be identified throughout the project lifecycle. ADD CONSTRAINT PK_Person PRIMARY KEY (ID,LastName); Try it Yourself » SQL Keywords Reference. Open your layout in Android Studio and click the Design tab at the bottom of the editor window. The constraints can be mainly termed as “equality and inequality constraints” As well as “duality constraints”. NOT NULL If we specify a field in a table to be NOT NULL. Each constraint can be represented by a linear inequality. If this were not the case (say x 1 = x 2. Suppose we have $$n$$ different stocks, an estimate $$r \in \mathcal{R}^n$$ of the expected return on each stock, and an estimate $$\Sigma \in \mathcal{S}^{n}_+$$ of the covariance of the returns. Theory of constraints education was then expanded to training for 200 healthcare managers and 2,500 teachers. In particular, R cannot duplicate them. Overview The first tutorial in the Getting Started with CP Optimizer manual explains the basics of describing, modeling and solving constraint programming problems. However, I do not know the number of colors in advance. Using Excel to solve linear programming problems Technology can be used to solve a system of equations once the constraints and objective function have been defined. In this example, the optimal. However, there are constraints like the budget, number of workers, production capacity, space, etc. Modeling in constraint programming revolves around the details of what is possible. Constraint Programming [2,10] is a declarative problem solving paradigm where the programming process is limited to the definition of the set of requirements (constraints). An example of using direct IO can be found in sg_rbuf. For an explanation of these types of problems, please see Mixed-Integer and Constraint Programming. Slack variable has 0 as costs coefficient in appropriate position in the linear program objective function. The profit or cost function to be maximized or minimized is called the objective function. Then it validates and solves the problem using one of several available constraint solvers that are JSR-331 compliant. LXer: Constraint programming by example. Google had introduced android constraint layout editor at Google I/O Conference 2016. My Constraint Programming Blog. A constraint restricts or constrains the possible values which the variable can take. What is constraint programming? Constraint Programming (CP) is an emergent field in operations research. In this context, the function is called cost function, or objective function, or energy. Similarly, a cost constraint would limit the budget available for the project. ), Springer LNCS 636, 1992. My Constraint Programming Blog. SQL constraints are used to specify rules for the data in a table. Linear programming is the study of linear optimization problems that involve linear constraints. AIMMS+CP, see my AIMMS+CP page (39 models). This is comparable to the pattern facet in XML. 
One recommendation in the computational task is to rewrite the constraints including equivalent proportions, to avoid divisions between changing cells (decision variables) and also denominators that initially adopt a value equal to zero. In order to establish our main theorem on the relation of the constraint qualifications introduced above, we need the following auxiliary result. The maximum number of iterations was reached. 1 A Graphical Example Recall the linear program from Section 3. All bank employees work 5 consecutive days. LXer: Constraint programming by example. Procedural knowledge is often expressed by if-then rules, events and actions are related by reaction rules, change is expressed by update rules. This example code of a JavaFX application shows how to style the graphical user interface using JavaFX CSS. This example involves a simple text with an associated and a submit. I have done 200. 1 correspond to a ten percent margin in a response quantity. If you constrain risk, the project may be slow and expensive. maximize c 1 x 1 + c 2 x. Learning by Example Using VHDL - Advanced Digital Design With a Nexys FPGA Board. Because all of these constraints must be considered when making economic decisions about the airline, linear programming becomes a crucial job. Introduction to the Object Constraint Language tutorial. • The constraint restricts the values that these variables can simultaneously have. In terms of its type hints, it uses generics to make itself flexible enough to work with any kind of variables and domain values (V keys and D domain values). 2f3pblyece6fc, kmnrx5jeqxj, mfjzbe6iaiklt, yfv0donqz28la, vyz22360a9, c5c5m5yi0m, fydwh25tp72, 45kc23oj0wdh4g7, zkbmeqq8t5, af6havv6rn15, y8l87p8idwu, lvh2xoj6xw6qf, 2l0npxs2spnd, 1h3t9ahzkmus9m, tmdm29lzua4vz7, s8sdbwsxpc, 0v47wxcvqhf7smk, dhvpo3xzjeqe1nr, 6ijrj53te27mx4z, c60ru64i1dfsf, qpbs9mkozt, 8m45uu53tplz, c2wsogrr8jp, hxvixdjw1k7p, ut64etrvbmb, ly76rzuqye58, s7i64jpe104yu, ujya1zjqf92c16, eg8etdy52q3z6n |
# Zeta Correction: A New Approach to Constructing Corrected Trapezoidal Quadrature Rules for Singular Integral Operators
We introduce a new quadrature method for the discretization of boundary integral equations (BIEs) on closed smooth contours in the plane. This quadrature can be viewed as a hybrid of the spectral quadrature of Kress (1991) and the locally corrected trapezoidal quadrature of Kapur and Rokhlin (1997). The new technique combines the strengths of both methods, and attains high-order convergence, numerical stability, ease of implementation, and compatibility with the “fast” algorithms (such as the Fast Multipole Method or Fast Direct Solvers). Important connections between the punctured trapezoidal rule and the Riemann zeta function are introduced, which enable a complete convergence analysis and lead to remarkably simple procedures for constructing the quadrature corrections. The paper reports a detailed comparison between the new method and the methods of Kress, of Kapur and Rokhlin, and of Alpert (1999).
## 1 Introduction
This paper describes techniques for discretizing boundary integral equations (BIEs) of the form
τ(x) + ∫_Γ G(x, y) τ(y) ds(y) = f(x),   x ∈ Γ,   (1)
where Γ is a smooth closed contour in the plane and ds is the arc length measure on Γ, where f is a given smooth function, and where G is a given kernel with a logarithmic singularity as y → x. Equations such as (1) commonly arise as reformulations of boundary value problems from potential theory, acoustic and electromagnetic wave propagation, fluid dynamics and many other standard problems in engineering and science. When a PDE can be reformulated as an integral equation that is defined on the boundary of the domain, there are several advantages to doing so, in particular when the BIE is a Fredholm equation of the second kind.
A key challenge to using (1) for numerical work is that upon discretization, it leads to a system of linear equations with a dense coefficient matrix. Unless the problem is relatively small, it then becomes essential to deploy fast algorithms such as the Fast Multipole Method (FMM) [7] or Fast Direct Solvers (FDS) [15]. A second challenge is that the logarithmic singularity in the kernel function G means that if a standard quadrature rule is used when discretizing the integral, then only very slow convergence is attained as the number of degrees of freedom is increased.
This paper introduces a new family of quadrature rules for discretizing (1) that are numerically stable even at high orders. For instance, a rule of order 42 is included in the numerical experiments. It is perfectly stable and capable of computing solutions to 14 correct digits with as few degrees of freedom as spectrally convergent quadratures such as the method of Kress [12]. Moreover, unlike the Kress quadrature, it can easily be used in conjunction with fast solvers such as the FMM or FDS.
### 1.1 Nyström discretization and corrected trapezoidal rules
Upon parameterizing the domain Γ over an interval [0, T], a BIE such as (1) can be viewed as a one dimensional integral equation of the form
τ(x) + ∫_0^T K(x, y) τ(y) dy = f(x),   x ∈ [0, T].   (2)
In (2), the new kernel K encodes both the parameterization and the original kernel G. Observe that all functions in (2) are T-periodic, that τ and f are smooth, and that K is smooth except for a logarithmic singularity as y → x.
To discretize (2) using the Nyström method [13, §12.2], we consider the -point periodic trapezoidal rule (PTR)
∫_0^T g(x) dx ≈ Σ_{n=1}^{N} g(x_n) h   (3)
where h = T/N and x_n = nh. When g is smooth and T-periodic, the PTR converges super-algebraically as N → ∞ [19]. The Nyström method first collocates (2) at the quadrature nodes of the PTR, and then approximates the integral by a quadrature supported on the same nodes with unknowns τ_n ≈ τ(x_n), yielding a linear system
τm+N∑n=1K(m,n)τn=f(xm),m=1,…,N. (4)
The coefficient matrix K(m, n) should be formed so that the approximation
Σ_{n=1}^{N} K(m, n) τ(x_n) ≈ ∫_0^T K(x_m, y) τ(y) dy,   m = 1, …, N,   (5)
holds to high accuracy. If K were smooth, this task would be easy, since we could then use the PTR (3) without modifications and simply set
K(m, n) = K(x_m, x_n) h.   (6)
In this (unusual) case, the solution {τ_n} of (4) will converge super-algebraically to {τ(x_n)} as N → ∞.
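To make the smooth-kernel baseline concrete, here is a minimal sketch (our own, not from the paper) of the Nyström construction (4)-(6) in Python/NumPy; the kernel, right-hand side, and function names are illustrative assumptions:

```python
import numpy as np

def nystrom_solve_smooth(kernel, f, T, N):
    """Minimal Nystrom solver for tau(x) + int_0^T K(x, y) tau(y) dy = f(x)
    when the kernel K is smooth and T-periodic, using the plain PTR fill (6).
    `kernel` and `f` are vectorized callables; returns the nodes and tau values."""
    h = T / N
    x = h * np.arange(1, N + 1)                         # PTR nodes x_n = n*h
    A = np.eye(N) + h * kernel(x[:, None], x[None, :])  # identity plus quadrature of K
    tau = np.linalg.solve(A, f(x))
    return x, tau

# Purely illustrative smooth 2*pi-periodic kernel and right-hand side.
if __name__ == "__main__":
    T = 2 * np.pi
    kernel = lambda x, y: np.cos(x - y) / (4 * np.pi)
    f = lambda x: 1.0 + np.sin(x)
    x, tau = nystrom_solve_smooth(kernel, f, T, N=64)
    print(tau[:4])
```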
In the more typical case where is logarithmically singular at , some additional work is required to attain high order convergence in (5). Let us start by describing two existing methods that resolve this problem — the Kress quadrature and the Kapur-Rokhlin quadrature — that are closely related to the new quadrature that we will describe.
Kress [12] introduced a quadrature that is spectrally accurate for any periodic function of the form
g(x) = φ(x) log(4 sin²(πx/T)) + ψ(x)   (7)
where φ and ψ are smooth functions with known formulae. Kress integrates the first term of (7) by Fourier analysis and the second term using the PTR, resulting in a corrected trapezoidal quadrature where all the PTR weights are modified. It is not obvious how a BIE scheme based on the Kress quadrature can be accelerated by the existing fast algorithms. We will use a "localized" analog of the analytic split (7) to develop our quadrature.
Kapur and Rokhlin [10], on the other hand, constructed a family of quadratures for a variety of singular functions by correcting the trapezoidal rule locally near the singularity. These quadratures have correction weights that are essentially independent of the grid spacing and possess an essential benefit in that nearly all entries of the coefficient matrix are given by the simple formula (6); only a small number of entries near the diagonal are modified. This local nature of Kapur-Rokhlin quadrature makes it very easy to combine it with the FMM and other fast algorithms. For the logarithmic singularity, two different quadratures are developed. The first quadrature is for functions of the “nonseparable” form
g(x)=φ(x)log|x|+ψ(x) (8)
where the formulae of the smooth functions φ and ψ may be unknown. This "nonseparable" quadrature ignores the data at x = 0 completely and modifies a few trapezoidal weights on both sides of the singular point. The magnitude of the correction weights (tabulated in [10, Table 6]) grows rapidly with the order of the correction, and moreover, some of the weights are negative. These properties mean that the Kapur-Rokhlin quadrature becomes less useful at higher orders (say, order higher than 6), since the resulting coefficient matrix can be far worse conditioned than the underlying BIE [8, Sec. 7.3]. The second Kapur-Rokhlin quadrature is for functions of the "separable" form
g(x)=φ(x)log|x|. (9)
Unlike the first rule, this “separable” quadrature also uses the data at the singular point for its correction; the correction weights (tabulated in [10, Table 7]) are uniformly bounded regardless of the correction order, and decay rapidly away from the singular point. Despite the excellent stability properties of the second Kapur-Rokhlin rule, it has received little attention due to the simple fact that the kernels arising from BIEs typically are of the nonseparable type (8). (In fact, the first rule is often referred to as “the Kapur-Rokhlin” rule.)
To avoid the issue of large correction weights, Alpert [3] developed a hybrid Gauss-trapezoidal quadrature that uses an optimized set of correction points that are off the uniform trapezoidal grid, whose weights are very well-behaved. For additional details on high order accurate techniques for discretizing (2), as well as a discussion of their relative advantages, we refer to the survey [8].
### 1.2 Contributions
This paper describes a quadrature rule that is closely related to the neglected second Kapur-Rokhlin rule for separable functions. The new rule works almost exactly the same in practice in that it involves a small correction stencil that includes a correction weight at the origin. Both rules display excellent numerical stability and lead to discretized systems that are as well conditioned as the original equation. The key innovation is that the new rule that we present is applicable to functions of the form (8) (provided that formulae for φ and ψ are known), which makes these rules applicable to a wide range of BIEs.
The new quadrature is derived from a local kernel split analogous to Kress' analytic split (7). We analyze the error of applying the punctured trapezoidal rule to the singular component of this kernel split based on the lattice sum theory. This results in an error expansion with coefficients that are explicitly computable using the Riemann zeta function, a fact we call the "zeta connections." From these error coefficients, we construct local correction weights in the fashion of Kapur and Rokhlin. As it turns out, the correction weights constructed this way have a component that consists of the weights of the "separable" Kapur-Rokhlin quadrature mentioned above, while the remaining component depends on the explicit kernel split. (Remarkably, the zeta connection simplifies the construction of the separable Kapur-Rokhlin correction weights to the extent that Table 7 of [10] can be computed with three lines of code, as shown in Figure 1.)
The zeta connection associated with the singular function |x|^{-z} was first introduced by Marin et al. [14]. In this paper, we extend this connection to a "differential zeta connection" (Theorem 3) associated with the logarithmic singularity. We would also like to point out that the zeta connection has recently been generalized to higher dimensions by the authors in [20]. Combining the "differential zeta connection" in this paper with the higher dimensional zeta connection of [20], we expect that a rigorous theory can be developed for higher-dimensional logarithmic quadratures such as the one developed by Aguilar and Chen [1]. On the other hand, the connection between the zeta function and the endpoint corrections of the trapezoidal rule for regular functions was established much earlier; see [16, 3].
### 1.3 Organization
In Section 2, we introduce the theory for the local correction of the trapezoidal quadrature and its connection to the zeta function. We extend this connection to construct a quadrature for the logarithmic singularity, which recovers the “separable” Kapur-Rokhlin quadrature but with much simpler computations. In Section 3, we generalize this Kapur-Rokhlin rule to logarithmic kernels on planar curves using a localized version of Kress’ kernel split, developing quadratures for the Laplace and Helmholtz layer potentials. Finally in Section 4, we present numerical examples of solving BIEs associated with the Helmholtz and Stokes equations and compare our quadrature method with existing state-of-the-art methods.
## 2 Corrected trapezoidal rules and the zeta connections
In this section, we introduce the theory for the local correction of the trapezoidal rule for singular functions. In particular, we introduce the “zeta connections,” based on which simple and powerful procedures are presented (see Remark 1) for constructing quadratures for functions with algebraic or logarithmic branch-point singularities.
### 2.1 Singularity correction by moment fitting
To set up the notation for our discussion, we let I[g] denote the integral of a function g on the interval [-a, a], where a > 0, and where g may be singular at x = 0. We denote the punctured trapezoidal rule discretization of I[g] as
T_h^0[g] = Σ′_{n=-M}^{M} g(x_n) h,   (10)
for some integer M, where h = 2a/(2M+1) and x_n = nh, and where the prime on the summation sign indicates that the n = 0 term is omitted. (Note that by our definition of h, the endpoints are not included as quadrature nodes, thus the usual factor 1/2 for the weights at the endpoints is not needed. However, this choice is just for convenience and using the usual trapezoidal rule does not affect our analysis.)
We are interested in singular integrals of the form
I[s · τ] = ∫_{-a}^{a} s(x) τ(x) dx   (11)
where s has an isolated, integrable singularity at x = 0 and τ is smooth. We assume that either the integrand is periodic or that τ is compactly supported in (-a, a), so that the only obstruction to high-order convergence is the singularity of s at x = 0. In general, boundary corrections to the trapezoidal rule can be introduced near x = ±a independently of the singularity at x = 0; see [2], for example, for more detail.
To analyze the error of the approximation , we decompose the integrand into a regular component and a local component:
s · τ = s · τ · (1 - η) + s · τ · η   (12)
where η is a smooth and compactly supported cut-off function that is sufficiently many times continuously differentiable and which satisfies η(0) = 1 and supp η ⊂ (-a, a). For the regular component s · τ · (1 - η), the trapezoidal discretization
I[s · τ · (1 - η)] = T_h^0[s · τ · (1 - η)] + O(h^p)   (13)
holds for any p (note that the integrand vanishes at x = 0). Thus the overall convergence of the discretization of I[s · τ] is restricted by the error in the local component s · τ · η.
Using the idea of moment fitting [11, 10], this local error can be corrected by fitting a set of moment equations for monomials up to a sufficiently high degree, as follows:
h^α Σ″_{j=-K}^{K} w_j^h (jh)^k η(jh) = I[s · x^k · η] - T_h^0[s · x^k · η],   0 ≤ k ≤ 2K,   (14)
where K is an integer, the factor h^α depends explicitly on the singularity of s at x = 0, and the double prime on the summation sign indicates that the j = 0 term is multiplied by 2. (For convenience we let 0^0 = 1.) There are 2K + 1 equations in (14) for the 2K + 1 unknowns w_j^h. Then combining (13)-(14) yields a locally corrected trapezoidal quadrature
I[s · τ] ≈ T_h^0[s · τ] + h^α Σ″_{j=-K}^{K} w_j^h τ(jh),   (15)
which is high-order accurate as h → 0⁺.
### 2.2 The |x|^{-z} singularity and converged correction weights
The fact that the weights w_j^h in (15) depend on h is inconvenient in practice. Fortunately, this can be remedied by what we call the "zeta connection." When s(x) = |x|^{-z}, 0 < z < 1, Marin et al. [14] showed that, by letting h → 0 in the moment equations (14), the right-hand side of (14) in this limit can be represented in terms of values of the Riemann zeta function, and the corresponding limiting weights are independent of h; more importantly, they proved that using these converged weights in the quadrature (15) in place of w_j^h does not affect the order of accuracy. We summarize this result of [14] in this section, which will serve as the foundation for our extensions to other quadratures.
We first introduce the important concept of “converged correction weights.”
Substituting s(x) = |x|^{-z} into (14), with α = 1 - z, we have
h^{1-z} Σ″_{j=-K}^{K} w_j^h (jh)^k η(jh)
 = ∫_{-a}^{a} |x|^{-z} x^k η(x) dx - Σ′_{n=-M}^{M} |nh|^{-z} (nh)^k η(nh) h   (16)
 = h^{1-z+k} ( ∫_{-a/h}^{a/h} |x|^{-z} x^k η(xh) dx - Σ′_{n=-M}^{M} |n|^{-z} n^k η(nh) )
 = h^{1-z+k} ( ∫_{-∞}^{∞} |x|^{-z} x^k η(xh) dx - Σ′_{n=-∞}^{∞} |n|^{-z} n^k η(nh) ),
where the last equality holds because η is compactly supported. Note that both |x|^{-z} and η are even, so by requiring that
w_j^h ≡ w_{-j}^h,   j = 0, …, K,
both sides of (16) vanish for all k odd. Using this symmetry and eliminating the common factor h^{1-z+k} on both sides yields
Σ_{j=0}^{K} w_j^h j^{2k} η(jh) = ∫_0^∞ x^{-z+2k} η(xh) dx - Σ_{n=1}^{∞} n^{-z+2k} η(nh),   k = 0, …, K.   (17)
From here we define the converged correction weights to be
wj:=limh→0+whj, (18)
where w_j^h are the solution of (17). To further simplify the equations, we need the following theorem.
###### Theorem 1 (Zeta connection).
For all z < 1,
lim_{h→0⁺} ( Σ_{n=1}^{∞} n^{-z} η(nh) - ∫_0^∞ x^{-z} η(xh) dx ) = ζ(z).   (19)
Consequently, the converged weights w_j, as defined by (18), are the solution of the system
Σ_{j=0}^{K} w_j j^{2k} = -ζ(z - 2k),   k = 0, 1, …, K.   (20)
###### Proof.
The zeta connection (19) is proved in [14, Lemma A2]. Then taking h → 0⁺ in (17) and applying (19) with z replaced by z - 2k yields (20). (Note that we also used the fact that η(jh) → η(0) = 1.) ∎
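As an illustration of how (20) is used in practice, the following is a minimal Python/mpmath sketch (our own, not the authors' code) that solves the small Vandermonde system (20) for the converged weights; the sample values z = 1/2 and K = 3, and the precision level, are assumptions of the example:

```python
from mpmath import mp, mpf, zeta, matrix, lu_solve

mp.dps = 50   # extended precision: the Vandermonde system is ill-conditioned

def abs_power_weights(z, K):
    """Converged weights w_0..w_K of system (20) for s(x) = |x|^{-z}:
    sum_j w_j * j^(2k) = -zeta(z - 2k), k = 0..K, with the convention 0^0 = 1."""
    A = matrix(K + 1, K + 1)
    b = matrix(K + 1, 1)
    for k in range(K + 1):
        for j in range(K + 1):
            A[k, j] = mpf(1) if (j == 0 and k == 0) else mpf(j) ** (2 * k)
        b[k] = -zeta(z - 2 * k)
    return lu_solve(A, b)

# Example: z = 1/2 and a 4-point one-sided correction stencil (K = 3).
print(abs_power_weights(mpf(1) / 2, 3))
```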
Based on the zeta connection, the next theorem constructs a high-order corrected trapezoidal rule using converged correction weights.
###### Theorem 2.
For s(x) = |x|^{-z} with 0 < z < 1, one has the locally corrected trapezoidal rule
I[s · τ · η] = T_h^0[s · τ · η] + h^{1-z} Σ_{j=0}^{K} w_j ( τ(jh) + τ(-jh) ) + O(h^{2K+3-z}),   (21)
where the correction weights w_j are the solution of (20), and where η must have at least 2K + 1 vanishing derivatives at x = 0, i.e.
η(0) = 1,   η^{(k)}(0) = 0,   k = 1, 2, …, 2K + 1.   (22)
###### Proof.
See Theorem 3.7 and Lemma 3.8 of [14]. ∎
### 2.3 Logarithmic singularity and the differential zeta connection
One can also construct a quadrature for the logarithmic singularity by extending the zeta connection (19) based on the following simple observation:
d/dz |x|^{-z} = -|x|^{-z} log|x|.
This leads to the next two theorems that are completely analogous to Theorems 1 and 2.
###### Theorem 3 (Differential zeta connection).
For all z < 1,
lim_{h→0⁺} ( Σ_{n=1}^{∞} (-n^{-z} log n) η(nh) - ∫_0^∞ (-x^{-z} log x) η(xh) dx ) = ζ′(z).   (23)
Consequently, if we define the converged weights w_j := lim_{h→0⁺} w_j^h, where the w_j^h are the solution of the system
Σ_{j=0}^{K} w_j^h j^{2k} η(jh) = ∫_0^∞ (-x^{2k} log x) η(xh) dx - Σ_{n=1}^{∞} (-n^{2k} log n) η(nh),   k = 0, …, K,   (24)
then the w_j are the solution of the system
Σ_{j=0}^{K} w_j j^{2k} = -ζ′(-2k),   k = 0, 1, …, K.   (25)
###### Proof.
Taking the derivative with respect to z under the limit sign on both sides of (19) yields (23), which is justified since the expression under the limit sign is analytic in z. Then taking h → 0⁺ in (24) and applying (23) with z = -2k yields (25). ∎
###### Theorem 4.
For s(x) = -log|x|, one has the locally corrected trapezoidal rule
I[s · τ · η] = T_h^0[s · τ · η] - τ(0) h log h + h Σ_{j=0}^{K} w_j ( τ(jh) + τ(-jh) ) + O(h^{2K+2}),   (26)
where the correction weights w_j are the solution of (25), and where η satisfies the same conditions as in Theorem 2.
###### Proof.
The local error of the punctured trapezoidal rule for s · τ · η, with s(x) = -log|x|, is
I[s · τ · η] - T_h^0[s · τ · η]
 = ∫_{-a}^{a} (-log|x|) τ(x) η(x) dx - Σ′_{n=-M}^{M} (-log|nh|) τ(nh) η(nh) h   (27)
 = ( ∫_{-a}^{a} (-log|x/h|) τ(x) η(x) dx - Σ′_{n=-M}^{M} (-log|n|) τ(nh) η(nh) h )
   - log h ( ∫_{-a}^{a} τ(x) η(x) dx - Σ′_{n=-M}^{M} τ(nh) η(nh) h ).
Notice that in the second parentheses the integrand is smooth, so the regular trapezoidal rule converges super-algebraically; i.e., using the fact that η(0) = 1,
log h ( ∫_{-a}^{a} τ(x) η(x) dx - Σ′_{n=-M}^{M} τ(nh) η(nh) h ) = (h log h) τ(0) + O(h^p)   as h → 0⁺,   (28)
which holds for any p. On the other hand, using the idea of moment fitting and following a similar derivation as in (16)-(17), the terms in the first parentheses of (27) are approximated by
hK∑j=0whj(τ(jh)+τ(−jh))+O(h2K+3) (29)
where the w_j^h are the solution of (24). Then substituting (28)-(29) into (27) gives
I[s⋅τ⋅η]=T0h[s⋅τ⋅η]−τ(0)hlogh+hK∑j=0whj(τ(jh)+τ(−jh))+O(h2K+3).
The above equation implies (26) once it is shown that
|wj−whj|=O(h2K+1),j=0,1,…,K,
which in turn can be proved by showing that the limit (23) converges uniformly as h → 0⁺, given the condition (22) on η. This last statement can be proved following almost verbatim the proofs of Theorem 3.1 and Lemma 3.3 in [14] by replacing |x|^{-z} with -|x|^{-z} log|x| therein, hence we omit the details here. ∎
The logarithmic quadrature (26) is equivalent to the “separable” Kapur-Rokhlin quadrature developed in [10, §4.5.1], hence the correction weights are identical (up to a minus sign) to those given in [10, Table 7]; however, the differential zeta connection has greatly simplified the construction of these weights.
###### Remark 1.
In practice, the value ζ′(z), z ∈ ℝ, can be approximated using "complex step differentiation" [18] as
ζ′(z) = Im ζ(z + iδ) / δ + O(δ²),   (30)
where 0 < δ ≪ 1. This formula is free of the cancellation errors that plague typical finite difference methods. For instance, using a small δ such as 10^{-8} will yield an approximation of full double-precision accuracy.
On the other hand, as mentioned in [10], the Vandermonde system (25) is ill-conditioned for large K. Thus when precomputing the weights w_j, (25) should be solved symbolically or in extended precision. Simple code snippets that compute the w_j for any given K are given in Figure 1.
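Figure 1 itself is not reproduced in this text. A hedged sketch of what such a snippet might look like, written here in Python/mpmath and expanded slightly for readability, is given below: it forms the system (25) with ζ′(-2k) evaluated by the complex-step formula (30) and solves it in extended precision; the precision level and the value of δ are illustrative assumptions.

```python
from mpmath import mp, mpf, mpc, zeta, matrix, lu_solve

mp.dps = 50          # solve the ill-conditioned system (25) in extended precision

def log_weights(K, delta=mpf(10) ** -30):
    """Weights w_0..w_K of system (25) for the -log|x| singularity.
    zeta'(-2k) is obtained from the complex-step formula (30):
    zeta'(z) ~ Im zeta(z + i*delta) / delta."""
    A = matrix(K + 1, K + 1)
    b = matrix(K + 1, 1)
    for k in range(K + 1):
        for j in range(K + 1):
            A[k, j] = mpf(1) if (j == 0 and k == 0) else mpf(j) ** (2 * k)
        b[k] = -zeta(mpc(-2 * k, delta)).imag / delta   # approx -zeta'(-2k)
    return lu_solve(A, b)

# Print the first few correction stencils (compare with [10, Table 7], up to sign).
for K in (1, 2, 4):
    print(K, [float(w) for w in log_weights(K)])
```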
## 3 Logarithmic kernels on curves
In this section, we extend the “separable” Kapur-Rokhlin rule (26) to construct our “zeta-corrected quadrature.” We will combine the differential zeta connection with local kernel splits (that are analogous to Kress’ global analytic split (7)) to construct quadratures for some important logarithmic kernels on closed curves, including the Laplace and Helmholtz layer potentials; the quadrature for the Laplace single-layer potential will also be applied to integrate the Stokes potential in Section 4.
### 3.1 Laplace kernel
Consider a smooth closed curve Γ parameterized by a 2a-periodic function ρ(x). We consider the Laplace single-layer potential (SLP) from Γ to the target ρ(0) (for the general case, one simply replaces ρ(0) with any other target point on Γ),
S[τ](ρ(0)) := ∫_{-a}^{a} (-log r(x)) τ(x) |ρ′(x)| dx,   (31)
where r(x) := |ρ(x) - ρ(0)| and s(x) := -log r(x). The next theorem extends Theorem 4 to construct a corrected quadrature for (31).
###### Theorem 5.
For the Laplace SLP (31), one has the locally corrected trapezoidal rule
S[τ](ρ(0)) = T_h^0[s · τ̃] - τ̃(0) h log(|ρ′(0)| h) + h Σ_{j=0}^{K} w_j ( τ̃(jh) + τ̃(-jh) ) + O(h^{2K+2}),   (32)
where τ̃(x) := τ(x) |ρ′(x)| is smooth, and where the correction weights w_j are exactly the same as in Theorem 4.
Note that the only difference of (32) from (26) is that τ is replaced with τ̃ and log h is replaced with log(|ρ′(0)| h).
###### Proof.
First we analyze the singularity of log r(x). Note that
log r(x) = ½ log( |ρ′(0) x|² + ( r²(x) - |ρ′(0) x|² ) ),   (33)
which can be rewritten as
½ log( 1 + ( r²(x) - |ρ′(0) x|² ) / |ρ′(0) x|² ) = log( r(x) / |ρ′(0) x| ).   (34)
We will show that (34) is smooth, thus the only singular term in (33) is log|ρ′(0) x|. To this end, first expand ρ(x) as a Taylor-Maclaurin series
ρ(x) = ρ(0) + ρ′(0) x + ρ″(0) x²/2 + O(x³),
then
r²(x) - |ρ′(0) x|² = ( ρ′(0) x + ρ″(0) x²/2 + O(x³) ) · ( ρ′(0) x + ρ″(0) x²/2 + O(x³) ) - |ρ′(0) x|²
 = ( ρ′(0) · ρ″(0) ) x³ + O(x⁴),
therefore
( r²(x) - |ρ′(0) x|² ) / |ρ′(0) x|² = ( ρ′(0) · ρ″(0) / |ρ′(0)|² ) x + O(x²)
is smooth near x = 0, which implies that (34) is indeed smooth. Next, we use the decomposition
log r(x) = log( r(x) / |ρ′(0) x| ) + log|ρ′(0)| + log|x|
to analyze the error of the punctured trapezoidal rule applied to s · τ̃ · η, as follows:
I[s · τ̃ · η] - T_h^0[s · τ̃ · η]   (35)
 = ∫_{-a}^{a} (-log r(x)) τ̃(x) η(x) dx - Σ′_{n=-M}^{M} (-log r(nh)) τ̃(nh) η(nh) h
 = { ∫_{-a}^{a} (-log( r(x) / |ρ′(0) x| )) τ̃(x) η(x) dx - Σ′_{n=-M}^{M} (-log( r(nh) / |ρ′(0) nh| )) τ̃(nh) η(nh) h }
  - log|ρ′(0)| { ∫_{-a}^{a} τ̃(x) η(x) dx - Σ′_{n=-M}^{M} τ̃(nh) η(nh) h }
  + { ∫_{-a}^{a} (-log|x|) τ̃(x) η(x) dx - Σ′_{n=-M}^{M} (-log|nh|) τ̃(nh) η(nh) h },
where, because (34) is smooth, the terms in the first curly brackets of (35) happen to be the error of the regular trapezoidal rule applied to a smooth function (notice that the integrand is zero at x = 0), which vanishes super-algebraically; the terms in the second curly brackets, analogous to (28), converge to h τ̃(0) super-algebraically; finally, for the terms in the last curly brackets, one simply applies the quadrature (26) of Theorem 4, with τ replaced by τ̃. Combining all these estimates, as well as (13), one concludes that (35) implies (32). ∎
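As a sanity check of formula (32) (our own sketch, not part of the paper), the following Python code applies the corrected rule to the Laplace SLP on the unit circle with an illustrative density τ(t) = exp(cos t) and compares against a tanh-sinh reference computed with mpmath; the density, the sample values of K and N, and the inline weight computation are assumptions:

```python
import numpy as np
from mpmath import mp, mpf, matrix, lu_solve, zeta, diff, quad, log, sin, exp, cos, pi

mp.dps = 40

def log_weights(K):
    """Weights of system (25), with zeta'(-2k) computed by mpmath's diff."""
    A = matrix(K + 1, K + 1)
    b = matrix(K + 1, 1)
    for k in range(K + 1):
        for j in range(K + 1):
            A[k, j] = mpf(1) if (j == 0 and k == 0) else mpf(j) ** (2 * k)
        b[k] = -diff(zeta, -2 * k)
    return np.array([float(w) for w in lu_solve(A, b)])

# Unit circle rho(t) = (cos t, sin t), target rho(0) = (1, 0):
# r(t) = 2|sin(t/2)|, |rho'(t)| = 1, illustrative density tau(t) = exp(cos t).
tau_tilde = lambda t: np.exp(np.cos(t))                 # tau * |rho'|
reference = 2 * quad(lambda t: -log(2 * sin(t / 2)) * exp(cos(t)), [0, pi])

def laplace_slp(N, K, w):
    """Corrected rule (32): punctured PTR + log(h) term + local stencil."""
    h = 2 * np.pi / N
    t = h * np.arange(1, N)                             # punctured nodes (skip t = 0)
    T0 = h * np.sum(-np.log(2 * np.abs(np.sin(t / 2))) * tau_tilde(t))
    val = T0 - tau_tilde(0.0) * h * np.log(h)           # |rho'(0)| = 1 here
    j = np.arange(K + 1)
    val += h * np.sum(w * (tau_tilde(j * h) + tau_tilde(-j * h)))
    return val

for K in (2, 5):
    w = log_weights(K)
    for N in (40, 80, 160):
        print(K, N, abs(laplace_slp(N, K, w) - float(reference)))
```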
### 3.2 Helmholtz kernels
We now apply Theorem 5 to construct formulae for the Helmholtz layer potentials. Consider the Helmholtz SLP on a smooth closed curve Γ evaluated at ρ(0),
S_κ[τ](ρ(0)) := ∫_{-a}^{a} s_κ(r(x)) τ(x) |ρ′(x)| dx,   (36)
where r(x) := |ρ(x) - ρ(0)| and κ > 0 is the wavenumber, and where the kernel s_κ has the form [5, §3.5]
s_κ(r) := (i/4) H_0(κ r) = -(1/2π) log(r) J_0(κ r) + c_γ/(2π) + φ(r²),   (37)
where H_0 and J_0 are, respectively, the Hankel and Bessel functions of the first kind of order 0, where φ is some smooth function of r² such that φ(0) = 0, and where c_γ is a constant that depends on κ and on Euler's constant γ ≈ 0.5772. Analogous to Kress' analytic split (7), we introduce the kernel split
s_κ(r) = s(r) J_0(κ r)/(2π) + s_κ^{(1)}(r),
where s(r) := -log r, such that the component s_κ^{(1)} is smooth. Therefore we can split (36) as
S_κ[τ](ρ(0)) ≡ I[s_κ · τ̃] = I[s · τ̃_{S_κ}] + I[s_κ^{(1)} · τ̃]   (38)
where τ̃(x) := τ(x) |ρ′(x)| and
τ̃_{S_κ}(x) := J_0(κ r(x)) τ(x) |ρ′(x)| / (2π)   (39)
are smooth functions. Notice that the singular integral I[s · τ̃_{S_κ}] can be approximated by (32) with τ̃ replaced by τ̃_{S_κ}, giving
I[s · τ̃_{S_κ}] = T_h^0[s · τ̃_{S_κ}] - (h/2π) log(|ρ′(0)| h) τ̃(0) + h Σ_{j=0}^{K} w_j ( τ̃_{S_κ}(jh) + τ̃_{S_κ}(-jh) ) + O(h^{2K+2}),   (40)
where we have used the fact that J_0(0) = 1. Then combining (40) with the PTR for I[s_κ^{(1)} · τ̃] (which converges super-algebraically), we finally have
S_κ[τ](ρ(0)) = T_h^0[s_κ · τ̃] + (h/2π) ( c_γ - log(|ρ′(0)| h) ) τ̃(0) + h Σ_{j=0}^{K} w_j ( τ̃_{S_κ}(jh) + τ̃_{S_κ}(-jh) ) + O(h^{2K+2}),   (41)
where, again, τ̃_{S_κ} is given by (39).
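A minimal sketch of evaluating (41) at a single target is given below (again our own test, not the authors' code). The sketch assumes the standard small-argument splitting of the Hankel function, for which c_gamma = i*pi/2 - gamma - log(kappa/2); this value, together with the wavenumber, density, and discretization parameters, is an assumption of the example.

```python
import numpy as np
from scipy.special import j0, hankel1
from mpmath import mp, mpf, matrix, lu_solve, zeta, diff, quad
from mpmath import hankel1 as mp_hankel1, exp as mp_exp, cos as mp_cos, sin as mp_sin, pi as mp_pi

mp.dps = 40
kappa = 3.0                                                   # illustrative wavenumber
c_gamma = 0.5j * np.pi - np.euler_gamma - np.log(kappa / 2)   # ASSUMED value of c_gamma

def log_weights(K):
    """Weights of system (25)."""
    A = matrix(K + 1, K + 1)
    b = matrix(K + 1, 1)
    for k in range(K + 1):
        for j in range(K + 1):
            A[k, j] = mpf(1) if (j == 0 and k == 0) else mpf(j) ** (2 * k)
        b[k] = -diff(zeta, -2 * k)
    return np.array([float(w) for w in lu_solve(A, b)])

# Unit circle, target rho(0) = (1, 0): r(t) = 2|sin(t/2)|, |rho'| = 1.
r = lambda t: 2 * np.abs(np.sin(t / 2))
tau_tilde = lambda t: np.exp(np.cos(t))                       # illustrative density
tau_S = lambda t: j0(kappa * r(t)) * tau_tilde(t) / (2 * np.pi)   # eq. (39)

def helmholtz_slp(N, K, w):
    """Corrected rule (41) at the single target rho(0)."""
    h = 2 * np.pi / N
    t = h * np.arange(1, N)
    T0 = h * np.sum(0.25j * hankel1(0, kappa * r(t)) * tau_tilde(t))
    val = T0 + h / (2 * np.pi) * (c_gamma - np.log(h)) * tau_tilde(0.0)
    j = np.arange(K + 1)
    val += h * np.sum(w * (tau_S(j * h) + tau_S(-j * h)))
    return val

reference = complex(quad(lambda t: 0.25j * mp_hankel1(0, kappa * 2 * mp_sin(abs(t) / 2))
                         * mp_exp(mp_cos(t)), [-mp_pi, 0, mp_pi]))
w = log_weights(4)
for N in (40, 80, 160):
    print(N, abs(helmholtz_slp(N, 4, w) - reference))
```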
Finally, we can also obtain the formulae for the Helmholtz double-layer potential (DLP), D_κ, and the normal derivative of the SLP, D*_κ, using similar derivations. We will just state the formulae and omit the derivations. Using notations similar to (36), these layer potentials are given by
D_κ[τ](ρ(0)) := I[d_κ · τ̃]   and   D*_κ[τ](ρ(0)) := I[d*_κ · τ̃],   (42)
where d_κ(x) is the normal derivative of s_κ(r(x)) taken with respect to n(x), the unit outward normal at ρ(x), and where d*_κ(x) is the normal derivative taken with respect to n_0 := n(0), with r(x) := ρ(x) - ρ(0) and r(x) := |r(x)|. The corresponding corrected trapezoidal rules for D_κ and D*_κ are, respectively,
I[d_κ · τ̃] = T_h^0[d_κ · τ̃] + h c_0 τ̃(0) + h Σ_{j=1}^{K} w_j ( τ̃_{D_κ}(jh) + τ̃_{D_κ}(-jh) ) + O(h^{2K+2}),   (43)
I[d*_κ · τ̃] = T_h^0[d*_κ · τ̃] + h c_0 τ̃(0) + h Σ_{j=1}^{K} w_j ( τ̃_{D*_κ}(jh) + τ̃_{D*_κ}(-jh) ) + O(h^{2K+2}),   (44)
where c_0 is the curvature of Γ at ρ(0) scaled by a constant (the diagonal limit of the smooth part of the kernel), and where
τ̃_{D_κ}(x) := κ J_1(κ r(x)) ( r(x) · n(x) ) / (2π r(x)) τ̃(x)   and   τ̃_{D*_κ}(x) := -κ J_1(κ r(x)) ( r(x) · n_0 ) / (2π r(x)) τ̃(x),
with J_1 being the Bessel function of the first kind of order 1.
## 4 Numerical experiments
In this section, we present numerical examples of solving BIEs associated with the Stokes and Helmholtz equations. In each case, we obtain a linear system of the form (4), where the matrix is filled using a particular quadrature. Then the linear system is solved either directly by inverting the matrix or iteratively by GMRES.
We compare our quadrature method with the three singular quadratures mentioned in the introduction: Kapur and Rokhlin’s locally corrected trapezoidal quadrature [10], Alpert’s hybrid Gauss-trapezoidal quadrature [3], and Kress’s spectral quadrature [5, §3.6]. The correction weights for our quadrature are precomputed by solving the equations (25) and using the techniques described in Remark 1. When implementing the Kapur-Rokhlin, Kress, and Alpert quadratures, we followed the survey [8].
#### Stokes problem.
As shown in Figure 2(a), consider a viscous shear flow around an island whose boundary is a smooth closed curve Γ, with no-slip boundary conditions on Γ. Let u be the true velocity field and p its associated pressure field; then (u, p) is described by the exterior Dirichlet problem for the Stokes equations [9, §2.3.2]
-Δu + ∇p = 0  and  ∇·u = 0  in Ω,   u = 0  on Γ,   u → u_∞  as |x| → ∞.
The integral equation formulation for this problem is obtained using the mixed potential representation for the velocity [17, §4.7], where the integral operators
S[τ](x) := (1/4π) ∫_Γ ( -log r I + (r ⊗ r)/r² ) τ(ρ) ds(ρ),
D[τ](x) := (1/π) ∫_Γ ( (r · n(ρ))/r² ) ( (r ⊗ r)/r² ) τ(ρ) ds(ρ)
are the Stokes SLP and DLP in 2D, where r := x - ρ, r := |r|, and n(ρ) is the unit outward normal to Γ at ρ, and where ⊗ denotes the tensor product. Then the vector-valued unknown density function τ is the solution of the following BIE
(45)
As ρ → x, the only singular component in the linear operator is the -log r term in S, which can be efficiently handled by the corrected trapezoidal rule (32).
Figure 3 compares the convergence results of solving the Stokes problem using different quadrature methods. We see that all the quadratures have the expected convergence rates, with the Kapur-Rokhlin quadrature yielding higher absolute errors due to its larger correction weights. The Kress quadrature is the most accurate at virtually any given N, but the new quadrature of order 16 is remarkably close.
#### Helmholtz problem.
As shown in Figure 2(b) or (c), consider the Helmholtz Dirichlet problem exterior to the curve Γ, with boundary data f and with u satisfying the Sommerfeld radiation condition,
-Δu - κ²u = 0  in Ω,   u = f  on Γ,
# Exporting a tall grid as pdf with reasonable pagination
I'd like a way of exporting a grid to PDF that isn't one super long page, moreover I'd like the page breaks to be chosen automatically (so that the grid fits/fills each normal sized page).
Is there some way to do this without having to tweak style-sheets or resort to printing?
Example Code:
Export["~/Downloads/tall_grid.pdf",
Grid[Partition[Table[
Labeled[RandomImage[], ResourceFunction["RandomString"][10]],
{i, 200}], UpTo @ 5],
Spacings -> {2, 2}],
ImageResolution -> 600
]
Actually, it is strange that such a possibility is not built in as a function.
Let's say, we have a list of 150 strings like this:
tbl = Table[{a, ToString@(a RandomReal[{-2, 2}])}, {a, 1, 150}];
My workaround for forming the pages is the following:
spp = 64; (*the desired number of strings per page, depending on content size and page size*)
tmp = Table[
tbl[[spp*i + 1 ;; spp*i + spp]], {i, 0, Length@tbl/spp - 1}];
data = Grid[#, Frame -> All] & /@
Join[tmp, {tbl[[spp*Length@tmp + 1 ;; -1]]}];
cd = CreateDocument[
ExpressionCell[#, PageBreakBelow -> True] & /@ data];
Export["file.pdf", cd]; NotebookClose[cd, Interactive -> False]
This produces a PDF file with two full-length pages and one more that is partially filled.
# How to test whether a function is positive over the entire range of an interval?
I would like to test whether a function is positive over a given interval. Say I have f[x_] = -x^3 + x^2 + 7*x and want to know whether it is positive for all x in the interval [0,4].
It can easily be seen by plotting the function that it is not, but I would like to have a more formal test of the form "TrueForAll[f[x] > 0, {x,0,4}]".
It is probably a trivial problem, but I haven't managed to figure it out myself. So I am grateful for any advice.
Reduce[ForAll[x, 0 <= x <= 4 \[Implies] -x^3 + x^2 + 7*x > 0]]? (Or FullSimplify instead of Reduce.) – Michael E2 Jan 30 at 19:56
@Michael I think we don't need Implies here: Reduce[ForAll[x, 0 <= x <= 4, f[x] > 0]] – Szabolcs Jan 30 at 20:05
@Szabolcs I forgot about that form. I hardly ever use ForAll.... – Michael E2 Jan 30 at 20:09
@Szabolcs You have a great solution. Why not post it as an answer? – Pavlo Fesenko Jan 30 at 20:56
@Pavlo I was waiting for Michael to do it. – Szabolcs Jan 30 at 21:33
## 2 Answers
To find the intervals for which f[x] is positive
f[x_] = -x^3 + x^2 + 7*x;
g[x_] = Piecewise[{{f[x], f[x] > 0}}, I];
Plot[{f[x], g[x]}, {x, -3, 4},
PlotStyle -> {Directive[Red, Dashed], Blue}]
FunctionDomain[g[x], x]
(* x < (1/2)*(1 - Sqrt[29]) ||
0 < x < (1/2)*(1 + Sqrt[29]) *)
% // N
(* x < -2.19258 || 0. < x < 3.19258 *)
EDIT:
Or, more succinctly
FunctionDomain[Piecewise[{{1, f[x] > 0}}, I], x]
(* x < (1/2)*(1 - Sqrt[29]) ||
0 < x < (1/2)*(1 + Sqrt[29]) *)
Thanks everyone for these extremely helpful answers and comments. Bob Hanlon's answer is great because it provides a more comprehensive picture of the function's characteristics, although it does not perform the True/False test I was looking for. I have found that it can easily be combined with @Michael's and @Szabolcs' approach, however: xpos = FunctionDomain[Piecewise[{{1, f[x] > 0}}, I], x]; Reduce[ForAll[x, 0 <= x <= 4, xpos]] – marcom Jan 31 at 19:28
I like the answers using Reduce and FunctionDomain. Here's a numerical possibility that uses Minimize to find the global minimum on the domain and tests to see if it's positive.
f[x_] = -x^3 + x^2 + 7*x;
0 <= First@Minimize[{f[x], 0 <= x <= 4}, x]
(* False *)
Alternatively, if needed, you can use NMinimize:
NMinimize[{f[x], 0 <= x <= 4}, x]
(* {-20., {x -> 4.}} *)
0 <= First@NMinimize[{f[x], 0 <= x <= 4}, x]
(* False *)
# Transfer-closed characteristic subgroup
BEWARE! This term is nonstandard and is being used locally within the wiki.
This article defines a subgroup property: a property that can be evaluated to true/false given a group and a subgroup thereof, invariant under subgroup equivalence. View a complete list of subgroup properties.
This is a variation of characteristicity|Find other variations of characteristicity | Read a survey article on varying characteristicity
## Definition
### Definition with symbols
A subgroup $H$ of a group $G$ is termed a transfer-closed characteristic subgroup if, for any subgroup $K \le G$, $H \cap K$ is a characteristic subgroup of $K$.
## Formalisms
### In terms of the transfer condition operator
This property is obtained by applying the transfer condition operator to the property: characteristic subgroup
View other properties obtained by applying the transfer condition operator
## Metaproperties
| Metaproperty name | Satisfied? | Proof | Difficulty level | Statement with symbols |
| --- | --- | --- | --- | --- |
| transitive subgroup property | Yes | transfer-closed characteristicity is transitive | | If $H \le K \le G$ are groups such that $H$ is transfer-closed characteristic in $K$ and $K$ is transfer-closed characteristic in $G$, then $H$ is transfer-closed characteristic in $G$. |
| intermediate subgroup condition | Yes | | | If $H \le K \le G$ are groups such that $H$ is transfer-closed characteristic in $G$, then $H$ is transfer-closed characteristic in $K$. |
| strongly intersection-closed subgroup property | Yes | | | If $H_i, i \in I$ is a family of transfer-closed characteristic subgroups of a group $G$, the intersection $\bigcap_{i \in I} H_i$ is also a transfer-closed characteristic subgroup of $G$. |
| transfer condition | Yes | | | If $H$ and $K$ are subgroups of $G$ such that $H$ is transfer-closed characteristic in $G$, then $H \cap K$ is transfer-closed characteristic in $K$. |
Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2018, Volume 22, Number 2, Pages 236–253 (Mi vsgtu1569)
Differential Equations and Mathematical Physics
Construction of Mikusiński operational calculus based on the convolution algebra of distributions. Methods for solving mathematical physics problems
I. L. Kogan
Russian State Agrarian University - Moscow Agricultural Academy after K. A. Timiryazev, Moscow, 127550, Russian Federation
Abstract: A new justification is given for the Mikusiński operational calculus, based entirely on the convolution algebra of generalized functions $D'_{+}$ and $D'_{-}$, as applied to the solution of linear partial differential equations with constant coefficients in the region $(x;t)\in \mathbb R (\mathbb R_{+})\times \mathbb R_{+}$. The mathematical apparatus used is based on the current state of the theory of generalized functions, and one of its main differences from Mikusiński's theory is that the resulting images are analytic functions of a complex variable. This allows us to legitimize the Laplace transform in the algebra $D'_{+}$ $( x\in \mathbb R_{+} )$, and to apply the algebra to the region of negative values of the argument with the use of the algebra $D'_{-}$. On classical examples of second-order equations of hyperbolic and parabolic type, in the case $x\in \mathbb R$, questions of the definition of fundamental solutions and of the Cauchy problem are stated, and on a segment and on the half-line $x\in \mathbb R_{+}$ non-stationary problems in the proper sense are considered. We derive general formulas for the Cauchy problem, as well as a scheme for determining fundamental solutions by the operator method. When considering non-stationary problems we give a compact proof of Duhamel's theorem and derive formulas that allow the solution process to be optimized, including for problems with discontinuous initial conditions. Examples of using series of convolution operators of generalized functions are given to find the originals. The proposed approach is compared with the classical operational calculus based on the Laplace transform and with Mikusiński's theory: having the same original-image correspondences on the positive half-axis for ordinary functions, it allows us to consider equations posed on the whole axis and facilitates obtaining and presenting solutions. These examples illustrate the possibilities and give an assessment of the efficiency of the use of operator calculus.
Keywords: calculus of Mikusiński, space of distributions, convolution of distributions, convolution algebra, Laplace transform, Duhamel integral
DOI: https://doi.org/10.14498/vsgtu1569
Full text: PDF file (1019 kB) (published under the terms of the Creative Commons Attribution 4.0 International License)
References: PDF file HTML file
Bibliographic databases:
Document Type: Article
UDC: 517.982.45
MSC: 44A40, 35E20
Revised: February 11, 2018
Accepted: March 12, 2018
First online: March 28, 2018
Citation: I. L. Kogan, “Construction of Mikusiński operational calculus based on the convolution algebra of distributions. Methods for solving mathematical physics problems”, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 22:2 (2018), 236–253
Citation in format AMSBIB
\Bibitem{Kog18} \by I.~L.~Kogan \paper Construction of Mikusi\'nski operational calculus based on the convolution algebra of distributions. Methods for solving mathematical physics problems \jour Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.] \yr 2018 \vol 22 \issue 2 \pages 236--253 \mathnet{http://mi.mathnet.ru/vsgtu1569} \crossref{https://doi.org/10.14498/vsgtu1569} \elib{http://elibrary.ru/item.asp?id=35467729} |
# Help with graph induction proof
I'm trying to prove : Given a simple graph $G$ with $n$ vertices, where $n$ is even, prove that if every vertex has degree $\dfrac n2 + 1$, then $G$ must contain a (simple) $3$-cycle. A (simple) $3$-cycle is a set of $3$ (distinct) vertices, $a, b, c$ such that $ab$ is an edge, $bc$ is an edge and $ac$ is an edge.
I know there are similar ones already on here, but I can't comprehend any of the answers. This is what I have so far:
Base Case: Since the graph has to have at least $3$ vertices to have a simple $3$-cycle, the lowest possible even number of vertices is $n = 4$.
$\dfrac42 + 1 = 3$ degree (edges attached to node). This means each vertex connects to each of the other $3$ vertices since $G$ is a simple graph. Thus, any combination of $3$ vertices will be a simple $3$-cycle.
Inductive Hypothesis: Assume a simple graph with $k$ vertices, where $k$ is even, contains a simple $3$-cycle if each vertex has degree $\dfrac k2 + 1$.
Inductive Step: Since only concerned about when the graph has an even number of vertices, going to show a simple graph with $k + 2$ vertices, where $k$ is even, contains a simple $3$-cycle if every vertex has degree $\dfrac{k+2}{2} + 1$
This is as far as I get and don't know how to finish the Inductive Step (Also please feel free to let me know if I messed anything up, up to this point)
• This is just Mantel's theorem, which is a special case of Turan's theorem. – ThePortakal Apr 3 '14 at 18:57
You don't really need induction for this. Let $u$ and $v$ be adjacent vertices, and let $U$ and $V$ be the sets of $n/2$ other vertices adjacent to $u$ and $v$ respectively. Since $|U|+|V|+|\{u,v\}|=n+2$ is greater than $n$, we must have $U\cap V\not=\emptyset$. So in fact we've proved more, namely that if each vertex in a graph on $n$ vertices (with $n$ even) has degree $n/2+1$, then every pair of adjacent vertices is part of a triangle.
• wow... mind clearing up everything after "n+2 is greater than n"? I clearly get everything up to and including that but don't understand your conclusions after – user8722 Apr 3 '14 at 18:49
• @user8722, if $U$ and $V$ were disjoint, then $G$ would have at least $n+2$ vertices, which is a contradiction. Consequently, there is a vertex $w$ which is adjacent to $u$ because it's in $U$ and adjacent to $v$ because it's in $V$. Voila! – Barry Cipra Apr 3 '14 at 18:59
• This is an awesome proof Barry. Great answer, +1, simple, slick and clean. – Rustyn Apr 3 '14 at 20:20
• Note that, if degree of each vertex $v_i$ is such that $\frac{n}{2} +1 \le d(v_i) \le n-1$, then this argument holds as well...Question: how many triangles is a pair of adjacent vertices a part of in this case? – Rustyn Apr 3 '14 at 20:29
• We do the same construction, and get that $|U| + |V| + |\{u,v\}|$ = $d(u) -1 + d(v) -1 + 2 \ge n+2$ so that $|U\cap V|\ge 1$ But can we do better than this??? Denote $d(u) = \frac{n}{2}+1+j, d(v)=\frac{n}{2}+1+k$. Thus, $|U|+|V| + |\{u,v\}| = (n+2)+j+k$. Now what is $|U\cap V|$???? – Rustyn Apr 3 '14 at 20:39
So for the inductive step, we construct a graph from $G_{k}$ by adding two new vertices $v_{k+1}$ and $v_{k+2}$ and adding edges such that each vertex has degree $\frac{k+2}{2} + 1$. By the inductive hypothesis, there is already a triangle present in $G_{k}$, so there is a triangle in $G_{k+2}$.
• Is that all you would say if you were answering this question, or is their more stuff I should state?? Just seems too simple (completely agree with it though) – user8722 Apr 3 '14 at 18:28
• I would give a more explicit construction of how to connect $v_{k+1}$ and $v_{k+2}$ to $G_{k}$. I leave the construction for you. However, the way I use the inductive hypothesis is the punchline. – ml0105 Apr 3 '14 at 18:30
• Here is a hint- you may consider removing certain edges from $G_{k}$ to connect them to the two new vertices. If you break a three cycle, will you create a three cycle? – ml0105 Apr 3 '14 at 18:42 |
# Microwave Engineering
#### Jack// ani
Hi all,
Is Microwave Engineering by Pozar, the best and the most basic book for a beginner? Or should I refer to some other book you recommend?
Thanks
#### MRFGUY
##### Full Member level 1
Foundations for Microwave Engineering
by Robert E. Collin
Radio-Frequency and Microwave Communications Circuits Analysis and Design
by Devendra K.Misra
#### szymek_7
##### Newbie level 6
i have studied pozar last somestre, i liked the pozars problems they help you to understand the subject
#### tyassin
Hi
Pozar's book is an OK all-around book. But the basic stuff, like transmission line theory, reflection coefficients and how waves behave on the line, is treated better in electromagnetics books like Cheng's. At least in my opinion.
Regards
#### dspbegin
##### Member level 2
Pozar is damn good but if u need still basic u can go for
'Microwave engineering' by Annapurna Das,Sisir K Das
#### optimus
##### Newbie level 4
I am currently using Pozar and I am finding it to be very helpful and thorough. The problems are good as they give practical problems with some real world numbers. The text itself is very thorough but not overly boring. I would def. recommend.
#### Element_115
A nice RF Fundamentals book is :
Rf Circuit Design by Chris Bowik
it's only ~ $25.00 USD.

#### Santoshalagawadi

##### Member level 3

Hello, for microwave you can refer to Microwave Engineering by Annapurna Das and Sisir Das; if you are still interested you can refer to Simon Liu.

#### ziyas

##### Member level 1

Pozar may be harder; you can use Radio-Frequency and Microwave Communications Circuits Analysis and Design by Devendra K. Misra, which explains things more effectively.

#### ferrite

##### Member level 1

what do you understand by microwave engineering ?

Element_115 said: A nice RF Fundamentals book is: Rf Circuit Design by Chris Bowik, it's only ~$25.00 USD.
I have to say this book lacks any treatment of the wave nature of uWave engineering.
#### suvendu
##### Full Member level 3
U can refer the following book also.
1> microwave engg-s.m.liao
be sure and have no doubt that pozar is the best book. the solution manual to this book is also available here:
#### satyasri
##### Junior Member level 1
I THERE ARE TWO BOOKS
MICROWAVE ENGI. BY KULKARNI
S.M LIAYO
##### Newbie level 2
from what i've read, pozar's book on microwave engineering is the best offered to date, not only that but the solution manual is available on this site to help with your studies
#### ashaker2k
##### Newbie level 2
I have Pozar and I think it is really very good ...
I also have Collin but I think it is not so good, because it is very complicated and the book cares a lot about mathematics and not about physical meaning ....
Collin has another book which is an advanced one but is very good in both physical meaning and mathematical treatment .... can anyone here help to get the name of the other book ...
#### gmccolgan3
##### Newbie level 4
Pozar is pretty thorough, and covers all the basics. I'm doing research on couplers with Northrop Grumman, and the people with whom I'm working highly regard and recommend it.
#### electronics_kumar
MICROWAVE DEVICES AND CIRCUITS written by SAMUEL Y LIAO published by prentice hall will also be useful
##### Newbie level 2
Pozar is a good book i think i hav read it in my bachelors degree
#### Sovereign
##### Newbie level 3
yes pozar is good and a solution manual is available
#### farhantariq
##### Member level 3
ya thats nice book.
# What is the resulting temperature if a sample of gas began with a temperature of 20 C, 1 liter, and 760 mmHg and now occupies 800 mL and has a pressure of 1000 mmHg?
Aug 2, 2017
#### Answer:
${T}_{2} \approx 308$ $\text{K}$
#### Explanation:
We're asked to find the new temperature of a gas after it is subjected to changes in pressure and volume.
To do this, we can use the combined gas law:
$\underline{\overline{| \stackrel{\text{ ")(" "(P_1V_1)/(T_1) = (P_2V_2)/(T_2)" }}{|}}}$
where
• ${P}_{1}$ is the original pressure (given as $760$ $\text{mm Hg}$)
• ${V}_{1}$ is the original volume (given as $1$ $\text{L}$)
• ${T}_{1}$ is the original absolute temperature, which is
$20$ $\text{°C} + 273 = 293$ $\text{K}$
• ${P}_{2}$ is the final pressure (given as $1000$ $\text{mm Hg}$)
• ${V}_{2}$ is the final volume (given as $800$ $\text{mL} = 0.800$ $\text{L}$; units must be consistent, so convert this to liters)
• ${T}_{2}$ is the final absolute temperature (what we're trying to find)
Let's rearrange this equation to solve for the final temperature, ${T}_{2}$:
${T}_{2} = \frac{{P}_{2} {V}_{2} {T}_{1}}{{P}_{1} {V}_{1}}$
Plugging in the above values:
${T}_{2} = \frac{(1000 \text{ mm Hg})(0.800 \text{ L})(293 \text{ K})}{(760 \text{ mm Hg})(1 \text{ L})} \approx 308$ $\text{K}$
The final temperature of the gas is thus approximately $308$ $\text{K}$.
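A quick arithmetic check of this result (a minimal Python sketch; the variable names are just illustrative):

```python
# Combined gas law: P1*V1/T1 = P2*V2/T2  =>  T2 = P2*V2*T1 / (P1*V1)
P1, V1, T1 = 760.0, 1.0, 20.0 + 273.0    # mm Hg, L, K
P2, V2 = 1000.0, 0.800                   # mm Hg, L
T2 = P2 * V2 * T1 / (P1 * V1)
print(round(T2, 1))                      # about 308.4 K
```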
# The Generalized Loneliness Detector and Weak System Models for k-Set Agreement
Parallel and Distributed Systems, IEEE Transactions , Volume 25, Issue 4, 2013, Pages 1078-1088.
Abstract:
This paper presents two weak partially synchronous system models ${\cal M}^{{\rm anti}(n-k)}$ and ${\cal M}^{{\rm sink}(n-k)}$, which are just strong enough for solving $k$-set agreement: We introduce the generalized $(n-k)$-loneliness failure detector ${\cal L}(k)$, which we first prove to be sufficient for solving $k$-set agreement …
# Why isn't $$\frac 00$$ simply 0?
A number $$x$$, to the power $$a$$, and another arbitrary constant $$b$$, can be considered as follows: $$\dfrac{x^{a+b}}{x^b}$$. Simplifying this fraction, through rules of exponents, we know that it is the same as $$x^{a+b - (b)} = x^a$$. So let's look at 0. $$0^1 = \dfrac{0^{1+2}}{0^2}$$. Since we know both $$0^1$$ and $$0^2$$ are 0, does it not follow that $$\dfrac00 = 0$$?
Note by M K
1 year ago
The rules of exponents are for non zero $$x$$.
As such, division by zero is undefined. · 1 year ago
Ah I see. Thanks for clarifying! · 1 year ago
@M K You're welcome. :) · 1 year ago |
# Homework Help: Rotational inertia problem
1. Sep 15, 2006
### John O' Meara
A thin rectangular sheet of steel is .3m by .4m and has mass 24kg. Find the moment of inertia about an axis (a) through the center, parallel to the long sides; (b) through the center parallel to the short sides. (c) through the center, perpendicular to the plane.
(a) We divide the sheet into N very narrow strips parallel to the axis and of width deltaXi and length L=.4m. The mass of each strip is deltaMi and which is a distance Xi from the axis. Now
deltaMi = k*L*deltaXi,
where k is the mass per unit area. Substituting this value into the expression for I (rotational inertia)
I= Sum k*L*Xi^2*deltaXi = k*L*Sum Xi^2*deltaXi
If we now pass to the limit deltaX -> 0 and N -> infinity. So the above sum can be replaced by an integral =>
I=2*k*L*[X^3/3]0,w/2 the limits of integration => I= 2*w^2, where w=.3m : I=.18 kg.m^2
(b) is similar to (a); I= .32 kg.m^2
(c) for this section I need a drawing please, on how to integrate for I. Do I divide the sheet up into N thin strips con-centric with the axis of rotation
like I would for a solid disk or cylinder? Many Thanks.
2. Sep 15, 2006
### Galileo
a) and b) are correct, but your post is difficult to read. Check here:
For c), the easiest way is to use the perpendicular axis theorem, which makes it trivial. If you don't know that theorem, a simple double integral will do. The squared distance of a mass element from the axis is x^2+y^2, so:
$$I_z=\sigma\int \limits_{-a/2}^{a/2}\int \limits_{-b/2}^{b/2}(x^2+y^2)dxdy$$
If you haven't had double integrals, you could also try adding the moments of inertia from each strip, like in parts a) and b), and use the parallel axis theorem to find these moments of inertia. If you don't know that theorem, brute force integration will give the same result.
Moral: Learn some 'tools of the trade' to make your life easy.
Last edited: Sep 15, 2006 |
# Information Theory (math.IT)
• We study the consequences of 'super-quantum non-local correlations' as represented by the PR-box model of Popescu and Rohrlich, and show PR-boxes can enhance the capacity of noisy interference channels between two senders and two receivers. PR-box correlations violate Bell/CHSH inequalities and are thus stronger -- more non-local -- than quantum mechanics; yet weak enough to respect special relativity in prohibiting faster-than-light communication. Understanding their power will yield insight into the non-locality of quantum mechanics. We exhibit two proof-of-concept channels: first, we show a channel between two sender-receiver pairs where the senders are not allowed to communicate, for which a shared super-quantum bit (a PR-box) allows perfect communication. This feat is not achievable with the best classical (senders share no resources) or quantum entanglement-assisted (senders share entanglement) strategies. Second, we demonstrate a class of channels for which a tunable parameter achieves a double separation of capacities; for some range of \epsilon, the super-quantum assisted strategy does better than the entanglement-assisted strategy, which in turn does better than the classical one.
• Briët et al. showed that an efficient communication protocol implies a reliable XOR game protocol. In this work, we improve this relationship, and obtain a nontrivial lower bound $2\log3\approx 3.1699$ of XOR-amortized communication complexity of the equality function. The proof uses an elegant idea of Pawłowski et al. in a paper on information causality. Although the improvement of a lower bound of a communication complexity is at most a factor 2, all arguments and proofs in this work are quite simple and intuitive.
• We consider the dynamics of message passing for spatially coupled codes and, in particular, the set of density evolution equations that tracks the profile of decoding errors along the spatial direction of coupling. It is known that, for suitable boundary conditions and after a transient phase, the error profile exhibits a "solitonic behavior". Namely, a uniquely-shaped wavelike solution develops, that propagates with constant velocity. Under this assumption we derive an analytical formula for the velocity in the framework of a continuum limit of the spatially coupled system. The general formalism is developed for spatially coupled low-density parity-check codes on general binary memoryless symmetric channels which form the main system of interest in this work. We apply the formula for special channels and illustrate that it matches the direct numerical evaluation of the velocity for a wide range of noise values. A possible application of the velocity formula to the evaluation of finite size scaling law parameters is also discussed. We conduct a similar analysis for general scalar systems and illustrate the findings with applications to compressive sensing and generalized low-density parity-check codes on the binary erasure or binary symmetric channels.
• In this paper, we propose a generalized expectation consistent signal recovery algorithm to estimate the signal $\mathbf{x}$ from the nonlinear measurements of a linear transform output $\mathbf{z}=\mathbf{A}\mathbf{x}$. This estimation problem has been encountered in many applications, such as communications with front-end impairments, compressed sensing, and phase retrieval. The proposed algorithm extends the prior art called generalized turbo signal recovery from a partial discrete Fourier transform matrix $\mathbf{A}$ to a class of general matrices. Numerical results show the excellent agreement of the proposed algorithm with the theoretical Bayesian-optimal estimator derived using the replica method.
We consider the design of wireless queueing network control policies with particular focus on combining stability with additional application-dependent requirements. Thereby, we consequently pursue a cost function based approach that provides the flexibility to incorporate constraints and requirements of particular services or applications. As typical examples of such requirements, we consider the reduction of buffer underflows in case of streaming traffic, and energy efficiency in networks of battery powered nodes. Compared to the classical throughput optimal control problem, such requirements significantly complicate the control problem. We provide easily verifiable theoretical conditions for stability, and, additionally, compare various candidate cost functions applied to wireless networks with streaming media traffic. Moreover, we demonstrate how the framework can be applied to the problem of energy efficient routing, and we demonstrate the application of our framework in cross-layer control problems for wireless multihop networks, using an advanced power control scheme for interference mitigation, based on successive convex approximation. In all scenarios, the performance of our control framework is evaluated using extensive numerical simulations.
• Jan 17 2017 cs.IT cs.SY math.IT arXiv:1701.04187v1
Feedback control actively dissipates uncertainty from a dynamical system by means of actuation. We develop a notion of "control capacity" that gives a fundamental limit (in bits) on the rate at which a controller can dissipate the uncertainty from a system, i.e. stabilize to a known fixed point. We give a computable single-letter characterization of control capacity for memoryless stationary scalar multiplicative actuation channels. Control capacity allows us to answer questions of stabilizability for scalar linear systems: a system with actuation uncertainty is stabilizable if and only if the control capacity is larger than the log of the unstable open-loop eigenvalue. For second-moment senses of stability, we recover the classic uncertainty threshold principle result. However, our definition of control capacity can quantify the stabilizability limits for any moment of stability. Our formulation parallels the notion of Shannon's communication capacity, and thus yields both a strong converse and a way to compute the value of side-information in control. The results in our paper are motivated by bit-level models for control that build on the deterministic models that are widely used to understand information flows in wireless network information theory.
• A secret sharing scheme (SSS) was introduced by Shamir in 1979 using polynomial interpolation. Later it turned out that it is equivalent to an SSS based on a Reed-Solomon code. SSSs based on linear codes have been studied by many researchers. However there is little research on SSSs based on additive codes. In this paper, we study SSSs based on additive codes over $GF(4)$ and show that they require at least two steps of calculations to reveal the secret. We also define minimal access structures of SSSs from additive codes over $GF(4)$ and describe SSSs using some interesting additive codes over $GF(4)$ which contain generalized 2-designs.
• As far as we know, there is no decoding algorithm of any binary self-dual $[40, 20, 8]$ code except for the syndrome decoding applied to the code directly. This syndrome decoding for a binary self-dual $[40,20,8]$ code is not efficient in the sense that it cannot be done by hand due to a large syndrome table. The purpose of this paper is to give two new efficient decoding algorithms for an extremal binary doubly-even self-dual $[40,20, 8]$ code $C_{40,1}^{DE}$ by hand with the help of a Hermitian self-dual $[10,5,4]$ code $E_{10}$ over $GF(4)$. The main idea of this decoding is to project codewords of $C_{40,1}^{DE}$ onto $E_{10}$ so that it reduces the complexity of the decoding of $C_{40,1}^{DE}$. The first algorithm is called the representation decoding algorithm. It is based on the pattern of codewords of $E_{10}$. Using certain automorphisms of $E_{10}$, we show that only eight types of codewords of $E_{10}$ can produce all the codewords of $E_{10}$. The second algorithm is called the syndrome decoding algorithm based on $E_{10}$. It first solves the syndrome equation in $E_{10}$ and finds a corresponding binary codeword of $C_{40,1}^{DE}$.
• Jan 17 2017 cs.IT math.IT arXiv:1701.04165v1
A linear code with a complementary dual (or LCD code) is defined to be a linear code $C$ whose dual code $C^{\perp}$ satisfies $C \cap C^{\perp}$= $\left\{ \mathbf{0}\right\}$. Let $LCD{[}n,k{]}$ denote the maximum of possible values of $d$ among $[n,k,d]$ binary LCD codes. We give exact values of $LCD{[}n,k{]}$ for $1 \le k \le n \le 12$. We also show that $LCD[n,n-i]=2$ for any $i\geq2$ and $n\geq2^{i}$. Furthermore, we show that $LCD[n,k]\leq LCD[n,k-1]$ for $k$ odd and $LCD[n,k]\leq LCD[n,k-2]$ for $k$ even.
• We study broadcast capacity and minimum delay scaling laws for highly mobile wireless networks, in which each node has to disseminate or broadcast packets to all other nodes in the network. In particular, we consider a cell partitioned network under the simplified independent and identically distributed (IID) mobility model, in which each node chooses a new cell at random every time slot. We derive scaling laws for broadcast capacity and minimum delay as a function of the cell size. We propose a simple first-come-first-serve (FCFS) flooding scheme that nearly achieves both capacity and minimum delay scaling. Our results show that high mobility does not improve broadcast capacity, and that both capacity and delay improve with increasing cell sizes. In contrast to what has been speculated in the literature we show that there is (nearly) no tradeoff between capacity and delay. Our analysis makes use of the theory of Markov Evolving Graphs (MEGs) and develops two new bounds on flooding time in MEGs by relaxing the previously required expander property assumption.
• Reed-Solomon codes have found many applications in practical storage systems, but were until recently considered unsuitable for distributed storage applications due to the widely-held belief that they have poor repair bandwidth. The work of Guruswami and Wootters (STOC'16) has shown that one can actually perform bandwidth-efficient linear repair with Reed-Solomon codes: When the codes are over the field $\mathbb{F}_{q^t}$ and the number of parities $r \geq q^s$, where $(t-s)$ divides $t$, there exists a linear scheme that achieves a repair bandwidth of $(n-1)(t-s)\log_2 q$ bits. We extend this result by showing the existence of such a linear repair scheme for every $1 \leq s < t$. Moreover, our new schemes are optimal among all linear repair schemes for Reed-Solomon codes when $n = q^t$ and $r = q^s$. Additionally, we improve the lower bound on the repair bandwidth for Reed-Solomon codes, also established in the work of Guruswami and Wootters.
• The physical layer security in the up-link of the wireless communication systems is often modeled as the multiple access wiretap channel (MAC-WT), and recently it has received a lot attention. In this paper, the MAC-WT has been re-visited by considering the situation that the legitimate receiver feeds his received channel output back to the transmitters via two noiseless channels, respectively. This model is called the MAC-WT with noiseless feedback. Inner and outer bounds on the secrecy capacity region of this feedback model are provided. To be specific, we first present a decode-and-forward (DF) inner bound on the secrecy capacity region of this feedback model, and this bound is constructed by allowing each transmitter to decode the other one's transmitted message from the feedback, and then each transmitter uses the decoded message to re-encode his own messages, i.e., this DF inner bound allows the independent transmitters to co-operate with each other. Then, we provide a hybrid inner bound which is strictly larger than the DF inner bound, and it is constructed by using the feedback as a tool not only to allow the independent transmitters to co-operate with each other, but also to generate two secret keys respectively shared between the legitimate receiver and the two transmitters. Finally, we give a sato-type outer bound on the secrecy capacity region of this feedback model. The results of this paper are further explained via a Gaussian example.
• The majority of online content is written in languages other than English, and is most commonly encoded in UTF-8, the world's dominant Unicode character encoding. Traditional compression algorithms typically operate on individual bytes. While this approach works well for the single-byte ASCII encoding, it works poorly for UTF-8, where characters often span multiple bytes. Previous research has focused on developing Unicode compressors from scratch, which often failed to outperform established algorithms such as bzip2. We develop a technique to modify byte-based compressors to operate directly on Unicode characters, and implement variants of LZW and PPM that apply this technique. We find that our method substantially improves compression effectiveness on a UTF-8 corpus, with our PPM variant outperforming the state-of-the-art PPMII compressor. On ASCII and binary files, our variants perform similarly to the original unmodified compressors.
• Nowadays data compressors are applied to many problems of text analysis, but many such applications are developed outside of the framework of mathematical statistics. In this paper we overcome this obstacle and show how several methods of classical mathematical statistics can be developed based on applications of the data compressors.
• While all organisms on Earth descend from a common ancestor, there is no consensus on whether the origin of this ancestral self-replicator was a one-off event or whether it was only the final survivor of multiple origins. Here we use the digital evolution system Avida to study the origin of self-replicating computer programs. By using a computational system, we avoid many of the uncertainties inherent in any biochemical system of self-replicators (while running the risk of ignoring a fundamental aspect of biochemistry). We generated the exhaustive set of minimal-genome self-replicators and analyzed the network structure of this fitness landscape. We further examined the evolvability of these self-replicators and found that the evolvability of a self-replicator is dependent on its genomic architecture. We studied the differential ability of replicators to take over the population when competed against each other (akin to a primordial-soup model of biogenesis) and found that the probability of a self-replicator out-competing the others is not uniform. Instead, progenitor (most-recent common ancestor) genotypes are clustered in a small region of the replicator space. Our results demonstrate how computational systems can be used as test systems for hypotheses concerning the origin of life.
• Cyclic codes are an interesting type of linear codes and have wide applications in communication and storage systems due to their efficient encoding and decoding algorithms. It was proved that asymptotically good Hermitian LCD codes exist. The objective of this paper is to construct some cyclic Hermitian LCD codes over finite fields and analyse their parameters. The dimensions of these codes are settled and the lower bounds on their minimum distances are presented. Most Hermitian LCD codes presented in this paper are not BCH codes. In addition, we employ Hermitian LCD codes to propose a Hermitian orthogonal direct sum masking scheme that achieves protection against fault injection attacks. It is shown that the codes with great minimum distances are desired to improve the resistance.
• We describe a framework to build distances by measuring the tightness of inequalities, and introduce the notion of proper statistical divergences and improper pseudo-divergences. We then consider the Hölder ordinary and reverse inequalities, and present two novel classes of Hölder divergences and pseudo-divergences that both encapsulate the special case of the Cauchy-Schwarz divergence. We report closed-form formulas for those statistical dissimilarities when considering distributions belonging to the same exponential family provided that the natural parameter space is a cone (e.g., multivariate Gaussians), or affine (e.g., categorical distributions). Those new classes of Hölder distances are invariant to rescaling, and thus do not require distributions to be normalized. Finally, we show how to compute statistical Hölder centroids with respect to those divergences, and carry out center-based clustering toy experiments on a set of Gaussian distributions that demonstrate empirically that symmetrized Hölder divergences outperform the symmetric Cauchy-Schwarz divergence.
• Channel-reciprocity based key generation (CRKG) has gained significant importance as it has recently been proposed as a potential lightweight security solution for IoT devices. However, the impact of the attacker's position in close range has only rarely been evaluated in practice, posing an open research problem about the security of real-world realizations. Furthermore, this would further bridge the gap between theoretical channel models and their practice-oriented realizations. For security metrics, we utilize cross-correlation, mutual information, and a lower bound on secret-key capacity. We design a practical setup of three parties such that the channel statistics, although based on joint randomness, are always reproducible. We run experiments to obtain channel states and evaluate the aforementioned metrics for the impact of an attacker depending on his position. It turns out the attacker himself affects the outcome, which has not been adequately regarded yet in standard channel models.
• A cognitive radar adapts the transmit waveform in response to changes in the radar and target environment. In this work, we analyze the recently proposed sub-Nyquist cognitive radar wherein the total transmit power in a multi-band cognitive waveform remains the same as its full-band conventional counterpart. For such a system, we derive lower bounds on the mean-squared-error (MSE) of a single-target time delay estimate. We formulate a procedure to select the optimal bands, and recommend distribution of the total power in different bands to enhance the accuracy of delay estimation. In particular, using Cramér-Rao bounds, we show that equi-width subbands in cognitive radar always have better delay estimation than the conventional radar. Further analysis using Ziv-Zakai bound reveals that cognitive radar performs well in low signal-to-noise (SNR) regions.
• Permutation codes, in the form of rank modulation, have shown promise for applications such as flash memory. One of the metrics recently suggested as appropriate for rank modulation is the Ulam metric, which measures the minimum translocation distance between permutations. Multipermutation codes have also been proposed as a generalization of permutation codes that would improve code size (and consequently the code rate). In this paper we analyze the Ulam metric in the context of multipermutations, noting some similarities and differences between the Ulam metric in the context of permutations. We also consider sphere sizes for multipermutations under the Ulam metric and resulting bounds on code size.
• The promise of compressive sensing (CS) has been offset by two significant challenges. First, real-world data is not exactly sparse in a fixed basis. Second, current high-performance recovery algorithms are slow to converge, which limits CS to either non-real-time applications or scenarios where massive back-end computing is available. In this paper, we attack both of these challenges head-on by developing a new signal recovery framework we call \em DeepInverse that learns the inverse transformation from measurement vectors to signals using a \em deep convolutional network. When trained on a set of representative images, the network learns both a representation for the signals (addressing challenge one) and an inverse map approximating a greedy or convex recovery algorithm (addressing challenge two). Our experiments indicate that the DeepInverse network closely approximates the solution produced by state-of-the-art CS recovery algorithms yet is hundreds of times faster in run time. The tradeoff for the ultrafast run time is a computationally intensive, off-line training procedure typical to deep networks. However, the training needs to be completed only once, which makes the approach attractive for a host of sparse recovery problems.
• In this paper, we represent Raptor codes as multi-edge type low-density parity-check (MET-LDPC) codes, which gives a general framework to design them for higher-order modulation using MET density evolution. We then propose an efficient Raptor code design method for higher-order modulation, where we design distinct degree distributions for distinct bit levels. We consider a joint decoding scheme based on belief propagation for Raptor codes and also derive an exact expression for the stability condition. In several examples, we demonstrate that the higher-order modulated Raptor codes designed using the multi-edge framework outperform previously reported higher-order modulation codes in literature.
• Jan 17 2017 cs.IT math.IT arXiv:1701.03877v1
In this paper, we establish new capacity bounds for the multi-sender unicast index-coding problem. We first revisit existing outer and inner bounds proposed by Sadeghi et al. and identify their suboptimality in general. We then propose a new multi-sender maximal-acyclic-induced-subgraph outer bound that improves upon the existing bound in two aspects. For inner bound, we identify shortcomings of the state-of-the-art partitioned Distributed Composite Coding (DCC) in the strategy of sender partitioning and in the implementation of multi-sender composite coding. We then modify the existing sender partitioning by a new joint link-and-sender partitioning technique, which allows each sender to split its link capacity so as to contribute to collaborative transmissions in multiple groups if necessary. This leads to a modified DCC (mDCC) scheme that is shown to outperform partitioned DCC and suffice to achieve optimality for some index-coding instances. We also propose cooperative compression of composite messages in composite coding to exploit the potential overlapping of messages at different senders to support larger composite rates than those by point-to-point compression in the existing DCC schemes. Combining joint partitioning and cooperative compression said, we develop a new multi-sender Cooperative Composite Coding (CCC) scheme for the problem. The CCC scheme improves upon partitioned DCC and mDCC in general, and is the key to achieve optimality for a number of index-coding instances. The usefulness of each scheme is illuminated via examples, and the capacity region is established for each example.
• Sequential estimation of the delay and Doppler parameters for sub-Nyquist radars by analog-to-information conversion (AIC) systems has received wide attention recently. However, the estimation methods reported are AIC-dependent and have poor performance for off-grid targets. This paper develops a general estimation scheme in the sense that it is applicable to all AICs regardless whether the targets are on or off the grids. The proposed scheme estimates the delay and Doppler parameters sequentially, in which the delay estimation is formulated into a beamspace direction-of- arrival problem and the Doppler estimation is translated into a line spectrum estimation problem. Then the well-known spatial and temporal spectrum estimation techniques are used to provide efficient and high-resolution estimates of the delay and Doppler parameters. In addition, sufficient conditions on the AIC to guarantee the successful estimation of off-grid targets are provided, while the existing conditions are mostly related to the on-grid targets. Theoretical analyses and numerical experiments show the effectiveness and the correctness of the proposed scheme.
• The theoretical analysis of detection and decoding of low-density parity-check (LDPC) codes transmitted over channels with two-dimensional (2D) interference and additive white Gaussian noise (AWGN) is provided in this paper. The detection and decoding system adopts the joint iterative detection and decoding scheme (JIDDS) in which the log-domain sum-product algorithm is adopted to decode the LDPC codes. The graph representations of the JIDDS are explained. Using the graph representations, we prove that the message-flow neighborhood of the detection and decoding system will be tree-like for a sufficiently long code length. We further confirm that the performance of the JIDDS will concentrate around the performance in which message-flow neighborhood is tree-like. Based on the tree-like message-flow neighborhood, we employ a modified density evolution algorithm to track the message densities during the iterations. A threshold is calculated using the density evolution algorithm which can be considered as the theoretical performance limit of the system. Simulation results demonstrate that the modified density evolution is effective in analyzing the performance of 2D interference systems.
• Cooperative relaying is often deployed to enhance the communication reliability (i.e., diversity order) and consequently the end-to-end achievable rate. However, this raises several security concerns when the relays are untrusted since they may have access to the relayed message. In this paper, we study the achievable secrecy diversity order of cooperative networks with untrusted relays. In particular, we consider a network with an N-antenna transmitter (Alice), K single-antenna relays, and a single-antenna destination (Bob). We consider the general scenario where there is no relation between N and K, and therefore K can be larger than N. Alice and Bob are assumed to be far away from each other, and all communication is done through the relays, i.e., there is no direct link. Providing secure communication while enhancing the diversity order has been shown to be very challenging. In fact, it has been shown in the literature that the maximum achievable secrecy diversity order for the adopted system model is one (while using artificial noise jamming). In this paper, we adopt a nonlinear interference alignment scheme that we have proposed recently to transmit the signals from Alice to Bob. We analyze the proposed scheme in terms of the achievable secrecy rate and secrecy diversity order. Assuming Gaussian inputs, we derive an explicit expression for the achievable secrecy rate and show analytically that a secrecy diversity order of up to min(N,K)-1 can be achieved using the proposed technique. We provide several numerical examples to validate the obtained analytical results and demonstrate the superiority of the proposed technique to its counterparts that exist in the literature.
Samad Khabbazi Oskouei Sep 05 2016 11:34 UTC
I think that we have missed the "semi-" at the conclusion. Because, the proof of the theorem 4.3 is based on the using universal semi-density matrix concept which is not computable. The semi-computability concept used here is like the Kolmogorov complexity which is not computable and so the Cubic co
Toby Cubitt Sep 01 2016 11:14 UTC
I could well be missing something. But as far as I could tell from a rather quick read through the paper, all they show is that the quantum capacity of a channel with computable matrix elements is given by the regularised coherent information optimised over input ensembles with computable matrix ele
Māris Ozols Aug 30 2016 17:52 UTC
Do I understand correctly that this paper claims to show that quantum capacity is computable?
> After defining the algorithmic quantum capacity we have proved that it
> equals the standard one. Furthermore we have shown that it is
> computable.
Richard Kueng Jul 28 2015 07:01 UTC
fyi: our quantum implications are presented in Subsection 2.2 (pp 7-9).
Marco Tomamichel May 31 2015 22:07 UTC
Thanks for the comment! This is a good idea, I will do that in the next arXiv version.
Patrick Hayden May 28 2015 17:31 UTC
Wonderful! I've been waiting for a book like this for a while now! Thanks, Marco.
I do have one trivial comment from a 30 second preliminary scan, though: please consider typesetting the proofs with a font size matching the main text. If us readers are already squinting hard trying to understand
Marco Tomamichel Apr 02 2015 03:21 UTC
This is a preliminary version and I am happy to incorporate feedback I receive in the coming month. Any comments are welcome.
Māris Ozols Mar 17 2015 11:00 UTC
The strange equation is supposed to look like this:
$$f(\sqrt{a} X + \sqrt{1-a} Y) \geq a f(X) + (1-a) f(Y) \quad \forall a \in [0,1]$$
Yuanzhu Aug 02 2014 04:21 UTC
This algorithm is from Wu's list decoding algorithm. |
Say I want to change the inclination of the ISS by one degree. The ISS orbits at 7660m/s so the manoeuvre will cost me 133.7m/s of $\Delta v$:
From time to time, the station must be reboosted because of orbital decay. Let us just assume that this is done like a Hohman transfer, with the first burn being 10m/s to change the velocity of the station from 7660m/s to 7670m/s. But what if I do a slight plane change at the same time, say 0.1 degrees?
Well, that costs me an additional 6.7m/s on top of the original 10m/s. But wait! If I do this on 10 consecutive reboosts, I have changed the inclination by one degree, but only spent 67m/s of $\Delta v$!
It only gets better. With 100 steps of 0.01 degrees, the total $\Delta v$ spent is only 8.9m/s, and with 1000 steps of 0.001 degrees, only 0.9m/s. It is approaching zero.
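For what it's worth, the arithmetic can be checked with the usual law-of-cosines expression for a single burn that changes both speed and plane, $\Delta v = \sqrt{v_i^2 + v_f^2 - 2 v_i v_f \cos\theta}$. A small Python sketch using the numbers assumed above:

```python
import math

def combined_burn_dv(v_initial, v_final, plane_change_deg):
    """Single burn that changes speed and inclination at once
    (law of cosines on the velocity triangle)."""
    theta = math.radians(plane_change_deg)
    return math.sqrt(v_initial**2 + v_final**2
                     - 2.0 * v_initial * v_final * math.cos(theta))

v_i, v_f = 7660.0, 7670.0          # m/s before and after each reboost burn
for steps in (10, 100, 1000):
    per_burn = combined_burn_dv(v_i, v_f, 1.0 / steps)
    extra = steps * (per_burn - (v_f - v_i))   # cost above the plain 10 m/s reboosts
    print(f"{steps:5d} steps: extra delta-v for the 1 deg change = {extra:.1f} m/s")
# prints roughly 67.0, 8.9 and 0.9 m/s
```

The printed values match the 67, 8.9 and 0.9 m/s quoted above.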
Is this really a free propulsive inclination change? I think a way to summarize this odd effect is something like this: |
# Set Difference over Subset
## Theorem
Let $A$, $B$, and $S$ be sets.
Let $A \subseteq B$.
Then:
$A \setminus S \subseteq B \setminus S$ |
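## Proof
The following is a short element-chasing proof sketch; it is the standard argument and may differ from the one on the original page.
Let $x \in A \setminus S$.
Then $x \in A$ and $x \notin S$.
Since $A \subseteq B$, it follows that $x \in B$.
Hence $x \in B$ and $x \notin S$, that is, $x \in B \setminus S$.
As $x$ was arbitrary, $A \setminus S \subseteq B \setminus S$.
$\blacksquare$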
# Phenomenology 2014 Symposium
5-7 May 2014
University of Pittsburgh
US/Eastern timezone
## Looking for a Light Nonthermal Dark Matter at the LHC
6 May 2014, 15:15
15m
Benedum Hall G31 (University of Pittsburgh)
### Speaker
Yu Gao (University of Texas A & M)
### Description
This talk discusses the collider phenomenology of a nonthermal dark matter model with a 1-GeV dark matter candidate. Together with additional colored states, the dark matter also explains baryogenesis. Since the light dark matter is not parity-protected, it can be singly produced at the LHC. This leads to large missing energy associated with an energetic jet whose transverse momentum distribution features a Jacobian-like shape. Currently available LHC data can place significant bounds on this model. I will also comment on the model's implications for the recently observed 3.5 keV photon emission from galaxy clusters.
### Summary
The light nonthermal dark matter model relates the baryon asymmetry to dark matter production without imposing a discrete symmetry. The dark matter candidate is degenerate in mass with the proton.
LHC $n$ jets + MET searches, especially the monojet channel, give significant constraints on the model parameters. For a heavy scalar mediator mass around 1 TeV and below, the smaller of $\lambda_1$ and $\lambda_2$ is constrained to ${\cal O}(0.1)$.
Signals in heavy quark flavors, esp. the mono-top with MET, can also be an interesting final state.
The light nonthermal dark matter model can also explain the emission of 3.5 keV photon with model parameters that are consistent with current LHC constraints.
The talk is partially based on e-print 1401.1825 and 1403.5717.
### Primary author
Yu Gao (University of Texas A & M)
### Co-authors
Bhaskar Dutta (Texas A&M University) Prof. Rouzbeh Allahverdi (University of New Mexico) Teruki Kamon (Texas A & M University (US))
Slides |
# How do I solve a linear Diophantine equation with three unknowns?
Find one integer solution to the Diophantine equation
\begin{equation*}
18x+14y+63z=5.
\end{equation*}
If this were only a linear equation over $\mathbb{Z}^2$, then I could easily solve it by using the extended Euclidean algorithm… but I have no idea how to do this with more than 2 unknowns…
#### Solutions Collecting From Web of "How do I solve a linear Diophantine equation with three unknowns?"
You solve $18 u + 14 v = 2 = \gcd(18,14).$ Solve $2 w + 63 z = 1.$ Combine to get $18 x + 14 y + 63 z = 1.$ Then multiply all by $5.$
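A small Python sketch of this two-step approach (the helper name `ext_gcd` and the particular solution it prints are mine, not from the answer):

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Step 1: 18*u + 14*v = gcd(18, 14) = 2
g1, u, v = ext_gcd(18, 14)
# Step 2: 2*w + 63*z = gcd(2, 63) = 1
g2, w, z = ext_gcd(g1, 63)
# Combine: 1 = (18*u + 14*v)*w + 63*z, then scale everything by 5
x, y, z = 5 * u * w, 5 * v * w, 5 * z
assert 18 * x + 14 * y + 63 * z == 5
print(x, y, z)   # here: 465 -620 5 (one of infinitely many solutions)
```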
[For the following paragraphs, please refer to the figure at the end of the last paragraph (the figure is also available in PDF).]
The manipulations performed from steps (0) to (17) were designed to create the linear system of equations (0a), (5a), (11a) and (17a). The manipulations end when the absolute value of a coefficient of the latest equation added is 1 (see (17)).
Equation (0a) is given. It is possible to infer equations (5a), (11a) and (17a) from (0), (5) and (11) respectively without performing manipulations (0) to (17) directly. In every case, select the smallest absolute value coefficient, generate the next equation by replacing every coefficient with the remainder of the coefficient divided by the selected coefficient (smallest absolute value coefficient) – do the same with the right-hand constant – and add the new variable whose coefficient is the smallest absolute value coefficient. If the new equation has a greatest common divisor greater than one, divide the equation by the greatest common divisor (it may be necessary to divide this greatest common divisor from previous equations). Stop when the absolute value of a coefficient of the latest equation added is 1.
Then proceed to solve the linear systems of equations.
Hint $\ \color{#c00}{18\!-\!14}\,\mid\, 63\!+\!1$ $\, \Rightarrow\,16\,(18\!-\! 14)-63 = 1.\,$ Scale that by $\,5\,$ to finish.
Remark $\$ The idea is simply to search for a “small” linear combination $\,n = \color{#c00}{ia+jb}\,$ of two elements $\,a,b\,$ of $\,\{14,18,63\}\,$ such that the 3rd element satisfies $\ c\equiv \pm1 \pmod n,\,$ hence $\, \pm1 = c+kn = c + ki\,a + kj\,b\,$ thus scaling by $\pm n\,$ yields $n$ as a linear combination of $a,b,c.\,$ Above the first “small” number we see $\, n = \color{#c00}{18\!-\!14} = 4\,$ works since $63\equiv -1\pmod {\!4}.$
The reason for choosing $n$ “small” is that this increases the probability that $\,c\equiv \pm1\pmod{\! n},\,$ e.g. $100\%$ chance if $n = 2,\,$ $67\%$ if $n = 3.\,$ We know (by Bezout) that the smallest such $n$ is $\,\gcd(a,b)\,$ but – as we saw above – often simpler choices work such as $\,b-a.$
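To spell out the final scaling step of the hint (a quick arithmetic check, not part of the original answer):
$$5 = 5\cdot\big(16\cdot(18-14)-63\big) = 18\cdot 80 + 14\cdot(-80) + 63\cdot(-5) = 1440 - 1120 - 315,$$
so one particular solution is $(x,y,z) = (80,\,-80,\,-5)$.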
More algorithmically, we can use the Extended Euclidean Algorithm to compute $\rm\,gcd(63,18,14) = 1\,$ in a couple steps
$$\begin{array}{rrr} [1]&\ 63& 1& 0& 0\\ [2]&\ 18& 0& 1& 0\\ [3]&\ 14& 0& 0& 1\\ [2] -[3]\, =\, [4]& 4& \!\!0& 1& -1\\ 16[4] -[1]\, =\,[5]& 1& -1& 16& \!\!\!\!-16 \end{array}\qquad\qquad\qquad\quad$$
where the row $\ n\,\ a\,\ b\,\ c\,\ d\$ denotes that $\ n = 63a + 18 b + 14 c.\$ Thus the final row yields
$$\quad 1 = -63 + 16(18) - 16(14)$$ |
# The Asymptotic Bode Diagram: Derivation of Approximations
## Introduction
Given an arbitrary transfer function, such as $$H(s) = 100{{s + 1} \over {(s + 10)(s + 100)}},$$ the most straightforward way to generate a magnitude and phase plot is to evaluate the transfer function at each frequency of interest and display the results. This is what a computer would naturally do. For example if you use MATLAB® and enter the commands
>> mySys=tf(100*[1 1],[1 110 1000])
mySys =
100 s + 100
------------------
s^2 + 110 s + 1000
>> bode(mySys)
you get a plot like the one shown below. The asymptotic solution is given elsewhere.
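If MATLAB® is not available, a roughly equivalent plot can be produced with SciPy and Matplotlib; the sketch below (my own, not from the original text) uses the same transfer function:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# Same transfer function as the MATLAB example: 100(s+1) / (s^2 + 110 s + 1000)
sys = signal.TransferFunction([100, 100], [1, 110, 1000])
w, mag, phase = signal.bode(sys, w=np.logspace(-1, 4, 500))  # rad/s, dB, degrees

fig, (ax_mag, ax_phase) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(w, mag)
ax_mag.set_ylabel('Magnitude (dB)')
ax_phase.semilogx(w, phase)
ax_phase.set_ylabel('Phase (deg)')
ax_phase.set_xlabel('Frequency (rad/s)')
plt.show()
```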
However, there are reasons to develop a method for sketching Bode diagrams manually. By drawing the plots by hand you develop an understanding about how the locations of poles and zeros affect the shape of the plots. With this knowledge you can predict how a system behaves in the frequency domain by simply examining its transfer function. On the other hand, if you know the shape of the transfer function that you want, you can use your knowledge of Bode diagrams to generate the transfer function.
The first task when drawing a Bode diagram by hand is to rewrite the transfer function so that all the poles and zeros are written in the form (1+s/ω0). The reasons for this will become apparent when deriving the rules for a real pole. A derivation will be done using the transfer function from above, but it is also possible to do a more generic derivation. Let's rewrite the transfer function from above.
\eqalign{ H(s) &= 100{{s + 1} \over {(s + 10)(s + 100)}} = 100{{1 + s/1} \over {10 \cdot (1 + s/10) \cdot 100 \cdot (1 + s/100)}} \cr &= 0.1{{1 + s/1} \over {(1 + s/10)(1 + s/100)}} \cr}
Now let's examine how we can easily draw the magnitude and phase of this function when s=jω.
First note that this expression is made up of four terms, a constant (0.1), a zero (at s=-1), and two poles (at s=-10 and s=-100). We can rewrite the function (with s=jω) as four individual phasors (i.e., magnitude and phase); each phasor is enclosed in a set of square brackets to make it more easily distinguished from the others.
\eqalign{ H(j\omega ) &= 0.1{{1 + j\omega /1} \over {(1 + j\omega /10)(1 + j\omega /100)}} \cr &= \left[ {\left| {0.1} \right|\angle \left( {0.1} \right)} \right]{{\left[ {\left| {1 + j\omega /1} \right|\angle \left( {1 + j\omega /1} \right)} \right]} \over {\left[ {\left| {1 + j\omega /10} \right|\angle \left( {1 + j\omega /10} \right)} \right]\left[ {\left| {1 + j\omega /100} \right|\angle \left( {1 + j\omega /100} \right)} \right]}} \cr}
We will show (below) that drawing the magnitude and phase of each individual phasor is fairly straightforward. The difficulty lies in trying to draw the magnitude and phase of the more complicated function, H(jω). To start, we will write H(jω) as a single phasor:
\eqalign{ H(j\omega ) &= \left( {\left| {0.1} \right|{{\left| {1 + j\omega /1} \right|} \over {\left| {1 + j\omega /10} \right|\left| {1 + j\omega /100} \right|}}} \right)\left( {\angle \left( {0.1} \right) + \angle \left( {1 + j\omega /1} \right) - \angle \left( {1 + j\omega /10} \right) - \angle \left( {1 + j\omega /100} \right)} \right) \cr &= \left| {H(j\omega )} \right|\angle H(j\omega ) \cr \cr \left| {H(j\omega )} \right| &= \left| {0.1} \right|{{\left| {1 + j\omega /1} \right|} \over {\left| {1 + j\omega /10} \right|\left| {1 + j\omega /100} \right|}} \cr \angle H(j\omega ) &= \angle \left( {0.1} \right) + \angle \left( {1 + j\omega /1} \right) - \angle \left( {1 + j\omega /10} \right) - \angle \left( {1 + j\omega /100} \right) \cr}
Drawing the phase is fairly simple. We can draw each phase term separately, and then simply add (or subtract) them. The magnitude term is not so straightforward because the magnitude terms are multiplied; it would be much easier if they were added, since then we could draw each term on a graph and just add them up. We can accomplish this by using a logarithmic scale (so multiplication and division become addition and subtraction). Instead of a simple logarithm, we will use a deciBel (or dB) scale.
### A Magnitude Plot
One way to transform multiplication into addition is by using the logarithm. Instead of using a simple logarithm, we will use a deciBel (named for Alexander Graham Bell).
The relationship between a quantity, Q, and its deciBel representation, X, is given by:
$$X = 20 \cdot log{_{10}}\left( Q \right)$$
So if Q=100 then X=40; Q=0.01 gives X=-40; X=3 gives Q=1.41; and so on.
If we represent the magnitude of H(s) in deciBels several things happen.
\eqalign{ \left| {H(s)} \right| &= \left| {0.1} \right|{{\left| {1 + j\omega /1} \right|} \over {\left| {1 + j\omega /10} \right|\left| {1 + j\omega /100} \right|}} \cr 20 \cdot {\log _{10}}\left( {\left| {H(s)} \right|} \right) &= 20 \cdot {\log _{10}}\left( {\left| {0.1} \right|{{\left| {1 + j\omega /1} \right|} \over {\left| {1 + j\omega /10} \right|\left| {1 + j\omega /100} \right|}}} \right) \cr &= 20 \cdot {\log _{10}}\left( {\left| {0.1} \right|} \right) + 20 \cdot {\log _{10}}\left( {\left| {1 + j\omega /1} \right|} \right) + 20 \cdot {\log _{10}}\left( {{1 \over {\left| {1 + j\omega /10} \right|}}} \right) + 20 \cdot {\log _{10}}\left( {{1 \over {\left| {1 + j\omega /100} \right|}}} \right) \cr &= 20 \cdot {\log _{10}}\left( {\left| {0.1} \right|} \right) + 20 \cdot {\log _{10}}\left( {\left| {1 + j\omega /1} \right|} \right) - 20 \cdot {\log _{10}}\left( {\left| {1 + j\omega /10} \right|} \right) - 20 \cdot {\log _{10}}\left( {\left| {1 + j\omega /100} \right|} \right) \cr}
The advantages of using deciBels (and of writing poles and zeros in the form (1+s/ω0)) are now revealed. The fact that the deciBel is a logarithmic term transforms the multiplications and divisions of the individual terms to additions and subtractions. Another benefit is apparent in the last line that reveals just two types of terms, a constant term and terms of the form 20·log10(|1+jω/ω0|). Plotting the constant term is trivial; however, the other terms are not so straightforward. These plots will be discussed below. However, once these plots are drawn for the individual terms, they can simply be added together to get a plot for H(s).
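As a quick numerical sanity check (not part of the original derivation), the short NumPy sketch below confirms that the dB magnitude of H(jω) equals the sum of the dB contributions of its individual terms:

```python
import numpy as np

w = np.logspace(-1, 4, 7)          # a handful of test frequencies, rad/s
jw = 1j * w

H = 0.1 * (1 + jw / 1) / ((1 + jw / 10) * (1 + jw / 100))
direct_dB = 20 * np.log10(np.abs(H))

sum_dB = (20 * np.log10(0.1)
          + 20 * np.log10(np.abs(1 + jw / 1))
          - 20 * np.log10(np.abs(1 + jw / 10))
          - 20 * np.log10(np.abs(1 + jw / 100)))

assert np.allclose(direct_dB, sum_dB)   # multiplication has become addition in dB
```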
### A Phase Plot
If we look at the phase of the transfer function, we see much the same thing: the phase plot is easy to draw if we take our lead from the magnitude plot. First note that the transfer function is made up of four terms, and that the phase of a product (or quotient) of phasors is simply the sum (or difference) of the phases of the individual terms:
$$\angle H(s) = \angle \left( {0.1} \right) + \angle \left( {1 + j\omega /1} \right) - \angle \left( {1 + j\omega /10} \right) - \angle \left( {1 + j\omega /100} \right)$$
Again there are just two types of terms, a constant term and terms of the form (1+jω/ω0). Plotting the constant term is trivial; the other terms are discussed below.
#### A more generic derivation
The discussion above dealt with only a single transfer function. Another derivation that is more general, but a little more complicated mathematically is here.
## Making a Bode Diagram
Following the discussion above, the way to make a Bode Diagram is to split the function up into its constituent parts, plot the magnitude and phase of each part, and then add them up. The following gives a derivation of the plots for each type of constituent part. Examples, including rules for making the plots follow in the next document, which is more of a "How to" description of Bode diagrams.
### A Constant Term
Consider a constant term:$H(s) = H(j\omega) = K$
#### Magnitude
Clearly the magnitude is constant as ω varies. $\left| {H(j\omega )} \right| = |K|$
#### Phase
The phase is also constant. If K is positive, the phase is 0° (or any even multiple of 180°, i.e., ±360°). If K is negative the phase is -180°, or any odd multiple of 180°. We will use -180° because that is what MATLAB® uses. Expressed in radians we can say that if K is positive the phase is 0 radians, if K is negative the phase is -π radians.
##### Example: Bode Plot of Gain Term
\eqalign{ H(s)& = H\left( {j\omega } \right) = 15 \\ \left| {H\left( {j\omega } \right)} \right| &= \left| 15 \right| = 15 =23.5\,dB \\ \angle H\left( {j\omega } \right) &= \angle 15 = 0^\circ }
The magnitude (in dB) is calculated as $$20 \cdot {\log _{10}}\left( {15} \right) = 23.5$$.
##### Key Concept: Bode Plot of Gain Term
• For a constant term, the magnitude plot is a straight line.
• The phase plot is also a straight line, either at 0° (for a positive constant) or ±180° (for a negative constant).
### A Real Pole
Consider a simple real pole : $H\left( s \right) = {1 \over {1 + {s \over {{\omega _0}}}}},\quad \quad H\left( {j\omega } \right) = {1 \over {1 + j{\omega \over {{\omega _0}}}}}$
The frequency ω0 is called the break frequency, the corner frequency or the 3 dB frequency (more on this last name later).
#### Magnitude
The magnitude is given by
\eqalign{ & \left| {H\left( {j\omega } \right)} \right| = \left| {{1 \over {1 + j{\omega \over {{\omega _0}}}}}} \right| = {1 \over {\sqrt {{1^2} + {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} }} \cr & {\left| {H\left( {j\omega } \right)} \right|_{dB}} = 20 \cdot {\log _{10}}\left( {{1 \over {\sqrt {1 + {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} }}} \right) \cr}
Let's consider three cases for the value of the frequency, and determine the magnitude in each case:
Case 1) ω<<ω0. This is the low frequency case with ω/ω0→0. We can write an approximation for the magnitude of the transfer function:
$\sqrt {1 + {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \approx 1$, and ${\left| {H\left( {j\omega } \right)} \right|_{dB}} \approx 20 \cdot {\log _{10}}\left( {{1 \over 1}} \right) = 0$
This low frequency approximation is shown in blue on the diagram below.
Case 2) ω>>ω0. This is the high frequency case with ω/ω0→∞. We can write an approximation for the magnitude of the transfer function:
$\sqrt {1 + {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \approx \sqrt {{{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \approx {\omega \over {{\omega _0}}}$, so
${\left| {H\left( {j\omega } \right)} \right|_{dB}} \approx 20 \cdot {\log _{10}}\left( {{{{\omega _0}} \over \omega }} \right)$
The high frequency approximation is shown in green on the diagram below. It is a straight line with a slope of -20 dB/decade going through the break frequency at 0 dB (if ω=ω0 the approximation simplifies to 0 dB; ω=10·ω0 gives an approximate gain of 0.1, or -20 dB, and so on). That is, the approximation goes through 0 dB at ω=ω0, and for every factor of 10 increase in frequency, the magnitude drops by 20 dB.
Case 3) ω=ω0. At the break frequency
${\left| {H\left( {j{\omega _0}} \right)} \right|_{dB}} = 20 \cdot {\log _{10}}\left( {{1 \over {\sqrt {1 + {{\left( {{{{\omega _0}} \over {{\omega _0}}}} \right)}^2}} }}} \right) = 20 \cdot {\log _{10}}\left( {{1 \over {\sqrt 2 }}} \right) \approx - 3\;dB$
This point is shown as a red circle on the diagram.
To draw a piecewise linear approximation, use the low frequency asymptote up to the break frequency, and the high frequency asymptote thereafter.
The resulting asymptotic approximation is shown highlighted in transparent magenta. The maximum error between the asymptotic approximation and the exact magnitude function occurs at the break frequency and is approximately -3 dB.
Magnitude of a real pole: The piecewise linear asymptotic Bode plot for magnitude is at 0 dB until the break frequency and then drops at 20 dB per decade as frequency increases (i.e., the slope is -20 dB/decade).
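As a rough illustration (the helper functions below are my own naming, not from the text), one can compare the exact magnitude of a real pole with this piecewise linear rule and see the roughly 3 dB error at the break frequency:

```python
import numpy as np

def real_pole_mag_dB(w, w0):
    """Exact magnitude of 1/(1 + jw/w0) in dB."""
    return -20 * np.log10(np.sqrt(1 + (np.asarray(w, float) / w0) ** 2))

def real_pole_asymptote_dB(w, w0):
    """Piecewise linear rule: 0 dB below w0, then -20 dB/decade."""
    w = np.asarray(w, float)
    return np.where(w <= w0, 0.0, -20 * np.log10(w / w0))

w0 = 5.0
print(real_pole_mag_dB(w0, w0))        # about -3.01 dB at the break frequency
print(real_pole_asymptote_dB(w0, w0))  # 0.0 dB, i.e. the ~3 dB maximum error
```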
#### Phase
The phase of a single real pole is given by
$$\angle H\left( {j\omega } \right) = \angle \left( {{1 \over {1 + j{\omega \over {{\omega _0}}}}}} \right) = - \angle \left( {1 + j{\omega \over {{\omega _0}}}} \right) = - \arctan \left( {{\omega \over {{\omega _0}}}} \right)$$
Let us again consider three cases for the value of the frequency:
Case 1) ω<<ω0. This is the low frequency case with ω/ω0→0. At these frequencies we can write an approximation for the phase of the transfer function
$$\angle H\left( {j\omega } \right) \approx -\arctan \left( 0 \right) = 0^\circ = 0\;rad$$
The low frequency approximation is shown in blue on the diagram below.
Case 2) ω>>ω0. This is the high frequency case with ω/ω0→∞. We can write an approximation for the phase of the transfer function
$$\angle H\left( {j\omega } \right) \approx - \arctan \left( \infty \right) = - 90^\circ = - {\pi \over 2}\;rad$$
The high frequency approximation is shown in green on the diagram below. It is a horizontal line at -90°.
Case 3) ω=ω0. The break frequency. At this frequency
$$\angle H\left( {j\omega } \right) = - \arctan \left( 1 \right) = - 45^\circ = - {\pi \over 4}\;rad$$
This point is shown as a red circle on the diagram.
A piecewise linear approximation is not as easy in this case because the high and low frequency asymptotes don't intersect. Instead we use a rule that follows the exact function fairly closely, but is also somewhat arbitrary. Its main advantage is that it is easy to remember.
Phase of a real pole: The piecewise linear asymptotic Bode plot for phase follows the low frequency asymptote at 0° until one tenth the break frequency (0.1·ω0) and then decreases linearly to meet the high frequency asymptote at ten times the break frequency (10·ω0). This line is shown above. Note that there is no error at the break frequency and about 5.7° of error at 0.1·ω0 and 10·ω0.
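A small sketch of this rule (again, the function names are mine); it reproduces the approximately 5.7° error quoted above:

```python
import numpy as np

def real_pole_phase_exact(w, w0):
    return -np.degrees(np.arctan(np.asarray(w, float) / w0))

def real_pole_phase_approx(w, w0):
    """0 deg below w0/10, -90 deg above 10*w0, -45 deg/decade in between."""
    w = np.asarray(w, float)
    return np.clip(-45.0 * np.log10(w / (w0 / 10.0)), -90.0, 0.0)

w0 = 5.0
for w in (w0 / 10, w0, 10 * w0):
    err = real_pole_phase_approx(w, w0) - real_pole_phase_exact(w, w0)
    print(f"w = {w:6.2f} rad/s, approximation error = {float(err):+.1f} deg")
# roughly +5.7 deg at w0/10, 0 deg at w0, -5.7 deg at 10*w0
```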
##### Example: Real Pole
The first example is a simple pole at 5 radians per second. The asymptotic approximation is magenta, the exact function is a dotted black line.
$$H(s)=\frac{1}{1+\frac{s}{5}}$$
##### Example: Repeated Real Pole
The second example shows a double pole at 30 radians per second. Note that the slope of the asymptote is -40 dB/decade and the phase goes from 0 to -180°. The effect of repeating a pole is to double the slope of the magnitude to -40 dB/decade and the slope of the phase to -90°/decade.
$$H(s)=\frac{1}{\left(1+\frac{s}{30}\right)^2}$$
##### Key Concept: Bode Plot for Real Pole
• For a simple real pole the piecewise linear asymptotic Bode plot for magnitude is at 0 dB until the break frequency and then drops at 20 dB per decade (i.e., the slope is -20 dB/decade). An nth order pole has a slope of -20·n dB/decade.
• The phase plot is at 0° until one tenth the break frequency and then drops linearly to -90° at ten times the break frequency. An nth order pole drops to -90°·n.
##### Aside: a different formulation of the phase approximation
There is another approximation for phase that is occasionally used. The approximation is developed by matching the slope of the actual phase term to that of the approximation at ω=ω0. Using math similar to that given here (for the underdamped case) it can be shown that by drawing a line starting at 0° at ω=ω0/e^(π/2)≈ω0/4.81 (i.e., ω0·e^(-π/2)) and ending at -90° at ω=ω0·e^(π/2)≈ω0·4.81 we get a line with the same slope as the actual function at ω=ω0. The approximation described previously is much more commonly used, as it is easier to remember as a line drawn from 0° at ω0/10 to -90° at ω0·10, and easier to draw on semi-log paper. The latter is shown on the diagram below.
Although this method is more accurate in the region around ω=ω0 there is a larger maximum error (more than 10°) near ω0/5 and ω0·5 when compared to the method described previously.
### A Real Zero
The piecewise linear approximation for a zero is much like that for a pole. Consider a simple zero: $H(s)=1+\frac{s}{\omega_0},\quad H(j\omega)=1+j\frac{\omega}{\omega_0}$
#### Magnitude
The development of the magnitude plot for a zero follows that for a pole. Refer to the previous section for details. The magnitude of the zero is given by
$$\left| {H\left( {j\omega } \right)} \right| = \left| {1 + j{\omega \over {{\omega _0}}}} \right|$$
Again, as with the case of the real pole, there are three cases:
1. At low frequencies, ω<<ω0, the gain is approximately 1 (or 0 dB).
2. At high frequencies, ω>>ω0, the gain increases at 20 dB/decade and goes through the break frequency at 0 dB.
3. At the break frequency, ω=ω0, the gain is about 3 dB.
Magnitude of a Real Zero: For a simple real zero the piecewise linear asymptotic Bode plot for magnitude is at 0 dB until the break frequency and then increases at 20 dB per decade (i.e., the slope is +20 dB/decade).
#### Phase
The phase of a simple zero is given by:
$$\angle H\left( {j\omega } \right) = \angle \left( {1 + j{\omega \over {{\omega _0}}}} \right) = \arctan \left( {{\omega \over {{\omega _0}}}} \right)$$
The phase of a single real zero also has three cases (which can be derived similarly to those for the real pole, given above):
1. At low frequencies, ω<<ω0, the phase is approximately zero.
2. At high frequencies, ω>>ω0, the phase is +90°.
3. At the break frequency, ω=ω0, the phase is +45°.
Phase of a Real Zero: Follow the low frequency asymptote at 0° until one tenth the break frequency (0.1 ω0) then increase linearly to meet the high frequency asymptote at ten times the break frequency (10 ω0).
##### Example: Real Zero
This example shows a simple zero at 30 radians per second. The asymptotic approximation is magenta, the exact function is the dotted black line.
$$H(s)=1+\frac{s}{30}$$
##### Key Concept: Bode Plot of Real Zero:
• The plots for a real zero are like those for the real pole but mirrored about 0dB or 0°.
• For a simple real zero the piecewise linear asymptotic Bode plot for magnitude is at 0 dB until the break frequency and then rises at +20 dB per decade (i.e., the slope is +20 dB/decade). An nth order zero has a slope of +20·n dB/decade.
• The phase plot is at 0° until one tenth the break frequency and then rises linearly to +90° at ten times the break frequency. An nth order zero rises to +90°·n.
### A Pole at the Origin
A pole at the origin is easily drawn exactly. Consider
$$H\left( s \right) = {1 \over s},\quad H\left( {j\omega } \right) = {1 \over {j\omega }} = - {j \over \omega }$$
#### Magnitude
The magnitude is given by
\eqalign{ \left| {H\left( {j\omega } \right)} \right| &= \left| { - {j \over \omega }} \right| = {1 \over \omega } \cr {\left| {H\left( {j\omega } \right)} \right|_{dB}} &= 20\cdot{\log _{10}}\left( {{1 \over \omega }} \right) = - 20\cdot{\log _{10}}\left( \omega \right) \cr}
In this case there is no need for approximate functions and asymptotes; we can plot the exact function. The function is represented by a straight line on a Bode plot with a slope of -20 dB per decade and going through 0 dB at 1 rad/sec. It also goes through 20 dB at 0.1 rad/sec, -20 dB at 10 rad/sec... Since there are no parameters (i.e., ω0) associated with this function, it is always drawn in exactly the same manner.
Magnitude of Pole at the Origin: Draw a line with a slope of -20 dB/decade that goes through 0 dB at 1 rad/sec.
#### Phase
The phase of a pole at the origin is given by (H(jω) is a negative imaginary number for all values of ω, so the phase is always -90°):
$$\angle H\left( {j\omega } \right) = \angle \left( { - {j \over \omega }} \right) = - 90^\circ$$
Phase of pole at the origin: The phase for a pole at the origin is -90°.
##### Example: Pole at Origin
This example shows a simple pole at the origin. The exact (dotted black line) is the same as the approximation (magenta).
##### Key Concept: Bode Plot for Pole at Origin
No interactive demo is provided because the plots are always drawn in the same way.
• For a simple pole at the origin draw a straight line with a slope of -20 dB per decade and going through 0 dB at 1 rad/sec.
• The phase plot is at -90°.
• The magnitude of an nth order pole has a slope of -20·n dB/decade and a constant phase of -90°·n.
### A Zero at the Origin
A zero at the origin is just like a pole at the origin but the magnitude increases with increasing ω, and the phase is +90° (i.e., simply mirror the graphs for the pole at the origin about 0 dB or 0°).
##### Example: Zero at Origin
This example shows a simple zero at the origin. The exact (dotted black line) is the same as the approximation (magenta).
##### Key Concept: Bode Plot for Zero at Origin
• The plots for a zero at the origin are like those for the pole but mirrored about 0dB or 0°.
• For a simple zero at the origin draw a straight line with a slope of +20 dB per decade and going through 0 dB at 1 rad/sec.
• The phase plot is at +90°.
• The magnitude of an nth order zero has a slope of +20·n dB/decade and a constant phase of +90°·n.
### A Complex Conjugate Pair of Poles
The magnitude and phase plots of a complex conjugate (underdamped) pair of poles is more complicated than those for a simple pole. Consider the transfer function (with 0<ζ<1):
$$H(s) = {{\omega _0^2} \over {{s^2} + 2\zeta {\omega _0}s + \omega _0^2}} = {1 \over {{{\left( {{s \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{s \over {{\omega _0}}}} \right) + 1}}$$
#### Magnitude
The magnitude is given by
\eqalign{ \left| {H(j\omega )} \right| &= \left| {{1 \over {{{\left( {{{j\omega } \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{{j\omega } \over {{\omega _0}}}} \right) + 1}}} \right| = \left| {{1 \over { - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2} + j2\zeta \left( {{\omega \over {{\omega _0}}}} \right) + 1}}} \right| = \left| {{1 \over {\left( {1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \right) + j\left( {2\zeta \left( {{\omega \over {{\omega _0}}}} \right)} \right)}}} \right| \cr &= {1 \over {\sqrt {{{\left( {1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \right)}^2} + {{\left( {2\zeta {\omega \over {{\omega _0}}}} \right)}^2}} }} \cr {\left| {H(j\omega )} \right|_{dB}} &= - 20 \cdot {\log _{10}}\left( {\sqrt {{{\left( {1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \right)}^2} + {{\left( {2\zeta {\omega \over {{\omega _0}}}} \right)}^2}} } \right) \cr}
As before, let's consider three cases for the value of the frequency:
Case 1) ω<<ω0. This is the low frequency case. We can write an approximation for the magnitude of the transfer function
$${\left| {H(j\omega )} \right|_{dB}} = - 20 \cdot {\log _{10}}\left( 1 \right) = 0$$
The low frequency approximation is shown in red on the diagram below.
Case 2) ω>>ω0. This is the high frequency case. We can write an approximation for the magnitude of the transfer function
$${\left| {H(j\omega )} \right|_{dB}} = - 20 \cdot {\log _{10}}\left( {{{\left( {{\omega \over {{\omega _0}}}} \right)}^2}} \right) = - 40 \cdot {\log _{10}}\left( {{\omega \over {{\omega _0}}}} \right)$$
The high frequency approximation is at shown in green on the diagram below. It is a straight line with a slope of -40 dB/decade going through the break frequency at 0 dB. That is, for every factor of 10 increase in frequency, the magnitude drops by 40 dB.
Case 3) ω≈ω0. It can be shown that a peak occurs in the magnitude plot near the break frequency. The derivation of the approximate amplitude and location of the peak are given here. We make the approximation that a peak exists only when
0<ζ<0.5
and that the peak occurs at ω0 with height 1/(2·ζ).
To draw a piecewise linear approximation, use the low frequency asymptote up to the break frequency, and the high frequency asymptote thereafter. If ζ<0.5, then draw a peak of amplitude 1/(2·ζ). Draw a smooth curve between the low and high frequency asymptotes that goes through the peak value.
As an example for the curve shown below ω0=10, ζ=0.1,
$$H(s) = {1 \over {{{{s^2}} \over {100}} + 0.02 s + 1}} = {1 \over {{{\left( {{s \over {10}}} \right)}^2} + 0.2\left( {{s \over {10}}} \right) + 1}} = {1 \over {{{\left( {{s \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{s \over {{\omega _0}}}} \right) + 1}}$$
The peak will have an amplitude of 1/(2·ζ)=5.00 or 14 dB.
The resulting asymptotic approximation is shown as a black dotted line, the exact response is a black solid line.
Magnitude of Underdamped (Complex) poles: Draw a 0 dB line at low frequencies until the break frequency, ω0, and then drop with a slope of -40 dB/decade. If ζ<0.5, draw a peak of height 1/(2·ζ) at ω0; otherwise no peak is drawn.
$$\left| {H(j{\omega _0})} \right| \approx {1 \over {2\zeta }},\quad {\left| {H(j{\omega _0})} \right|_{dB}} \approx - 20 \cdot {\log _{10}}\left( {2\zeta } \right)$$
Note: The exact peak is actually slightly higher than 1/(2·ζ) and occurs at a frequency slightly below ω0. An in-depth discussion of the magnitude and phase approximations (along with some alternate approximations) is given here.
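For a concrete comparison, the sketch below evaluates the rule-of-thumb peak against the standard exact second-order result, a peak of height 1/(2ζ√(1−ζ²)) at ω=ω0√(1−2ζ²):

```python
import math

def peak_approx_dB(zeta):
    """Rule of thumb: peak of 1/(2*zeta) at w = w0."""
    return 20 * math.log10(1.0 / (2.0 * zeta))

def peak_exact_dB(zeta):
    """Exact resonant peak for zeta < 1/sqrt(2): height 1/(2*zeta*sqrt(1-zeta^2))
    at w = w0*sqrt(1 - 2*zeta^2)."""
    return 20 * math.log10(1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta ** 2)))

for zeta in (0.1, 0.25, 0.4):
    print(f"zeta = {zeta}: approx {peak_approx_dB(zeta):5.2f} dB, "
          f"exact {peak_exact_dB(zeta):5.2f} dB")
# zeta = 0.1 -> approx 13.98 dB, exact 14.02 dB
```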
#### Phase
The phase of a complex conjugate pole is given by is given by
\eqalign{ \angle H(j\omega ) &= \angle \left( {{1 \over {{{\left( {{{j\omega } \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{{j\omega } \over {{\omega _0}}}} \right) + 1}}} \right) = - \angle \left( {{{\left( {{{j\omega } \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{{j\omega } \over {{\omega _0}}}} \right) + 1} \right) = - \angle \left( {1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{{j\omega } \over {{\omega _0}}}} \right)} \right) \cr &= - \arctan \left( {{{2\zeta {\omega \over {{\omega _0}}}} \over {1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2}}}} \right) \cr}
Let us again consider three cases for the value of the frequency:
Case 1) ω<<ω0. This is the low frequency case. At these frequencies We can write an approximation for the phase of the transfer function
$$\angle H\left( {j\omega } \right) \approx -\arctan \left({{2 \zeta \omega \over {\omega _0}}} \right) \approx -\arctan(0) = 0^\circ = 0\;rad$$
The low frequency approximation is shown in red on the diagram below.
Case 2) ω>>ω0. This is the high frequency case. We can write an approximation for the phase of the transfer function
$$\angle H\left( {j\omega } \right) \approx - 180^\circ = -\pi\;rad$$
Note: this result makes use of the fact that the angle must be taken in the second quadrant, since for ω>>ω0 the real part of the denominator of H(jω) is negative while its imaginary part is positive.
The high frequency approximation is shown in green on the diagram below. It is a straight line at -180°.
Case 3) ω=ω0. The break frequency. At this frequency
$$\angle H(j\omega_0 ) = - 90^\circ$$
The asymptotic approximation is shown below for ω0=10, ζ=0.1, followed by an explanation
$$H(s) = {1 \over {{{{s^2}} \over {100}} + 0.02 s + 1}} = {1 \over {{{\left( {{s \over {10}}} \right)}^2} + 0.2\left( {{s \over {10}}} \right) + 1}} = {1 \over {{{\left( {{s \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{s \over {{\omega _0}}}} \right) + 1}}$$
A piecewise linear approximation is a bit more complicated in this case, and there are no hard and fast rules for drawing it. The most common way is to look up a graph in a textbook with a chart that shows phase plots for many values of ζ. Three asymptotic approximations are given here. We will use the approximation that connects the low frequency asymptote to the high frequency asymptote starting at
$$\omega = {{{\omega _0}} \over {{{10}^\zeta }}} = {\omega _0} \cdot {10^{ - \zeta }}$$
and ending at
$$\omega = {\omega _0} \cdot {10^\zeta }$$
Since ζ=0.1 in this case, this means that the phase starts at 0° and then breaks downward at ω=ω0/10^ζ=7.9 rad/sec. The phase reaches -180° at ω=ω0·10^ζ=12.6 rad/sec.
As a practical matter, if ζ<0.02 the approximation can simply be a vertical line at the break frequency. One advantage of this approximation is that it is very easy to plot on semilog paper. Since the number 10·ω0 sits a full decade above ω0, the number 10^ζ·ω0 will be a fraction ζ of a decade above ω0. For the example above (ζ=0.1), the corner frequencies fall one tenth of the way between ω0 and ω0/10 (at the lower break frequency) and one tenth of the way between ω0 and ω0·10 (at the higher frequency).
Phase of Underdamped (Complex) Poles: Follow the low frequency asymptote at 0° until
$$\omega = {{{\omega _0}} \over {{{10}^\zeta }}}$$
then decrease linearly to meet the high frequency asymptote at -180° at
$$\omega = {\omega _0} \cdot {10^\zeta }$$
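For the ω0=10 rad/s, ζ=0.1 example above, these two phase corner frequencies are easy to check numerically (a two-line sketch):

```python
w0, zeta = 10.0, 0.1
low_break  = w0 * 10 ** (-zeta)   # ~7.9 rad/s: phase starts dropping from 0 deg
high_break = w0 * 10 ** (+zeta)   # ~12.6 rad/s: phase reaches -180 deg
print(round(low_break, 1), round(high_break, 1))   # 7.9 12.6
```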
Other magnitude and phase approximations (along with exact expressions) are given here.
##### Key Concept: Bode Plot for Complex Conjugate Poles
• For the magnitude plot of complex conjugate poles draw a 0 dB line at low frequencies, go through a peak of height,
$$\left| {H(j{\omega _0})} \right| \approx {1 \over {2\zeta }},\quad {\left| {H(j{\omega _0})} \right|_{dB}} \approx - 20 \cdot {\log _{10}}\left( {2\zeta } \right)$$
at the break frequency and then drop at 40 dB per decade (i.e., the slope is -40 dB/decade). The high frequency asymptote goes through the break frequency. Note that in this approximation the peak only exists for
0 < ζ < 0.5
• To draw the phase plot simply follow the low frequency asymptote at 0° until
$$\omega = {{{\omega _0}} \over {{{10}^\zeta }}} = {\omega _0} \cdot {10^{ - \zeta }}$$
then decrease linearly to meet the high frequency asymptote at -180° at
$$\omega = {\omega _0} \cdot {10^\zeta }$$
If ζ<0.02, the approximation can be simply a vertical line at the break frequency.
• Note that the shape of the graphs (magnitude peak height, steepness of phase transition) are determined solely by ζ, and the frequency at which the magnitude peak and phase transition occur are determined solely by ω0.
Note: Other magnitude and phase approximations (along with exact expressions) are given here.
### A Complex Conjugate Pair of Zeros
Not surprisingly, a complex pair of zeros yields results similar to those for a complex pair of poles. The magnitude and phase plots for the complex zero are the mirror image (around 0 dB for magnitude and around 0° for phase) of those for the complex pole. Therefore, the magnitude has a dip instead of a peak, the magnitude increases above the break frequency, and the phase increases rather than decreasing. The results will not be derived here, but closely follow those for complex poles.
##### Example: Complex Conjugate Zero
The graph below corresponds to a complex conjugate zero with ω0=3, ζ=0.25
$$H\left( s \right) = {\left( {{s \over {{\omega _0}}}} \right)^2} + 2\zeta \left( {{s \over {{\omega _0}}}} \right) + 1$$
The dip in the magnitude plot will have a magnitude of 0.5 or -6 dB. The break frequencies for the phase are at ω=ω0/10^ζ=1.7 rad/sec and ω=ω0·10^ζ=5.3 rad/sec.
##### Key Concept: Bode Plot of Complex Conjugate Zeros
• The plots for a complex conjugate pair of zeros are very much like those for the poles but mirrored about 0dB or 0°.
• For the magnitude plot of complex conjugate zeros draw a 0 dB line at low frequencies, go through a dip of magnitude:
$$\left| {H(j{\omega _0})} \right| \approx {2\zeta},\quad {\left| {H(j{\omega _0})} \right|_{dB}} \approx 20 \cdot {\log _{10}}\left( {2\zeta } \right)$$
at the break frequency and then rise at +40 dB per decade (i.e., the slope is +40 dB/decade). The high frequency asymptote goes through the break frequency. Note that the dip only exists for
0 < ζ < 0.5
• To draw the phase plot simply follow the low frequency asymptote at 0° until
$$\omega = {{{\omega _0}} \over {{{10}^\zeta }}} = {\omega _0} \cdot {10^{ - \zeta }}$$
then increase linearly to meet the high frequency asymptote at 180° at
$$\omega = {\omega _0} \cdot {10^\zeta }$$
• Note that the shape of the graphs (magnitude peak height, steepness of phase transition) are determined solely by ζ, and the frequency at which the magnitude peak and phase transition occur are determined solely by ω0.
Note: Other magnitude and phase approximations (along with exact expressions) are given here.
## Interactive Demos:
Below you will find interactive demos that show how to draw the asymptotic approximation for a constant, a first order pole and zero, and a second order (underdamped) pole and zero. Note there is no demo for a pole or zero at the origin because these are always drawn in exactly the same way; there are no variable parameters (i.e., ω0 or ζ).
### Interactive Demo: Bode Plot of Constant Term
This demonstration shows how the gain term affects a Bode plot. To run the demonstration, either enter the value of K, or |K| expressed in dB, in one of the text boxes below. If you enter |K| in dB, then the sign of K is unchanged from its current value. You can also set |K| and ∠K by clicking and dragging the horizontal lines on the graphs themselves. The magnitude of K must be between 0.01 and 100 (-40 dB and +40 dB). The phase of K (∠K) can only be 0° (for a positive value of K) or ±180° (for negative K).
Enter a value for gain, K: ,
or enter |K| expressed in dB: dB.
Note that for the case of a constant term, the approximate (magenta line) and exact (dotted black line) representations of magnitude and phase are equal.
### Interactive Demo: Bode Plot of a Real Pole
This demonstration shows how a first order pole expressed as:
$H(s) = \frac{1}{1+\frac{s}{\omega_0}}=\frac{1}{1+j\frac{\omega}{\omega_0}},$
is displayed on a Bode plot. To change the value of ω0, you can either change the value in the text box, below, or drag the vertical line showing ω0 on the graphs to the right. The exact values of magnitude and phase are shown as black dotted lines and the asymptotic approximations are shown with a thick magenta line. The value of ω0 is constrained such that 0.1≤ω0≤10 rad/second.
Enter a value for ωo:
Asymptotic Magnitude: The asymptotic magnitude plot starts (at low frequencies) at 0 dB and stays at that level until it gets to ω0. At that point the gain starts dropping with a slope of -20 dB/decade.
Asymptotic Phase: The asymptotic phase plot starts (at low frequencies) at 0° and stays at that level until it gets to 0.1·ω0 (0.1 rad/sec). At that point the phase starts dropping at -45°/decade until it gets to -90° at 10·ω0 (10 rad/sec), at which point it becomes constant at -90° for high frequencies. Phase goes through -45° at ω=ω0.
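For readers who want to reproduce the approximation outside the demo, here is a minimal sketch (not part of the original demo; ω0 is arbitrary) of the first-order-pole asymptotes described above:

```python
import numpy as np

w0 = 1.0
w = np.logspace(-2, 2, 400)                       # rad/s
H = 1.0 / (1 + 1j * w / w0)                       # exact response

# Magnitude: 0 dB below w0, then -20 dB/decade; phase: 0 to -90 deg
# linearly (in log frequency) between w0/10 and 10*w0.
mag_asym_db = np.where(w < w0, 0.0, -20 * np.log10(w / w0))
phase_asym_deg = np.interp(np.log10(w), [np.log10(w0) - 1, np.log10(w0) + 1], [0.0, -90.0])

print(np.max(np.abs(mag_asym_db - 20 * np.log10(np.abs(H)))))    # ~3 dB, at w = w0
print(np.max(np.abs(phase_asym_deg - np.degrees(np.angle(H)))))  # ~5.7 deg, at the phase breaks
```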
### Interactive Demo: Bode Plot of a Real zero
This demonstration shows how a first order zero expressed as:
$H(s) = 1+\frac{s}{\omega_0}= 1+j\frac{\omega}{\omega_0},$
is displayed on a Bode plot. To change the value of ω0, you can either change the value in the text box, below, or drag the vertical line showing ω0 on the graphs to the right. The exact values of magnitude and phase are shown as black dotted lines and the asymptotic approximations are shown with a thick magenta line. The value of ω0 is constrained such that 0.1≤ω0≤10 rad/second.
Enter a value for ωo:
Asymptotic Magnitude: The asymptotic magnitude plot starts (at low frequencies) at 0 dB and stays at that level until it gets to ω0. At that point the gain starts rising with a slope of +20 dB/decade.
Asymptotic Phase: The asymptotic phase plot starts (at low frequencies) at 0° and stays at that level until it gets to 0.1·ω0. At that point the phase starts rising at +45°/decade until it gets to +90° at 10·ω0, at which point it becomes constant at +90° for high frequencies. Phase goes through +45° at ω=ω0.
### Interactive Demo: Bode Plot of a Pair of Complex Conjugate Poles
This demonstration shows how a second order pole (complex conjugate roots) expressed as:
$H(s)={1 \over {{{\left( {{s \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{s \over {{\omega _0}}}} \right) + 1}} = {1 \over {1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2} + j2\zeta {\omega \over {{\omega _0}}}}},$
is displayed on a Bode plot. You can change ω0, and ζ. The value of ω0 is constrained such that 0.1≤ω0≤10 rad/second, and 0.05≤ζ≤0.99.
Enter value for ωo: or click and drag on graph to set ω0.,
and use text-box or slider, below, for ζ.
Asymptotic Magnitude: The asymptotic magnitude plot starts (at low frequencies) at 0 dB and stays at that level until it gets to ω0. At that point the gain starts dropping with a slope of -40 dB/decade. Note: it is -40 dB per decade because there are two poles in the denominator.
If ζ<0.5 we estimate the peak height as $|H(j\omega_{peak})|\approx\frac{1}{2\zeta}$ (exact height is $|H(j\omega_{peak})|=\frac{1}{2\zeta \sqrt{1-\zeta^2}}$). We approximate the peak location at $\omega_{peak}\approx\omega_0$ (exact peak location is at $\omega_{peak}=\omega_0\sqrt{1-2\zeta^2}$ ).
However, if ζ≥0.5, the peak is sufficiently small that we don't include it in our plot.
Since ζ≥0.5, we do not draw a peak.
Since ζ<0.5, we draw a peak. Note how close together the approximate and exact values are for ωpeak and |H(jωpeak)|.
ωpeak |H(jωpeak)| |H(jωpeak)|dB Approximate Exact
Asymptotic Phase: The asymptotic phase plot starts (at low frequencies) at 0° and stays at that level until it gets to ω0/10ζ. At that point the phase starts dropping linearly (on the log-frequency axis) until it reaches -180° at ω0·10ζ, at which point it becomes constant at -180° for high frequencies. Phase goes through -90° at ω=ω0. If ζ<0.02 the phase transition between 0 and -180° can be approximated by a vertical line.
### Interactive Demo: Bode Plot of a Pair of Complex Conjugate Zeros
This demonstration shows how a second order zero (complex conjugate roots) expressed as:
$H(s)= {{\left( {{s \over {{\omega _0}}}} \right)}^2} + 2\zeta \left( {{s \over {{\omega _0}}}} \right) + 1 = 1 - {{\left( {{\omega \over {{\omega _0}}}} \right)}^2} + j2\zeta {\omega \over {{\omega _0}}},$
is displayed on a Bode plot. You can change ω0, and ζ. The value of ω0 is constrained such that 0.1≤ω0≤10 rad/second, and 0.05≤ζ≤0.99.
Enter value for ωo: or click and drag on graph to set ω0.,
and use text-box or slider, below, for ζ.
Asymptotic Magnitude: The asymptotic magnitude plot starts (at low frequencies) at 0 dB and stays at that level until it gets to ω0. At that point the gain starts rising with a slope of +40 dB/decade.
If ζ<0.5 we estimate the depth of the dip as $|H(j\omega_{dip})|\approx 2\zeta$ (the exact value is $|H(j\omega_{dip})|=2\zeta \sqrt{1-\zeta^2}$). We approximate the dip location as $\omega_{dip}\approx\omega_0$ (the exact dip location is at $\omega_{dip}=\omega_0\sqrt{1-2\zeta^2}$).
However, if ζ≥0.5, the dip is sufficiently shallow that we don't include it in our plot.
Since ζ≥0.5, we do not draw a dip.
Since ζ<0.5, we draw a dip. Note how close together the approximate and exact values are for ωdip and |H(jωdip)|.
ωdip |H(jωdip)| |H(jωdip)|dB Approximate Exact
Asymptotic Phase: The asymptotic phase plot starts (at low frequencies) at 0° and stays at that level until it gets to ω0/10ζ. At that point the phase starts rising linearly (on the log-frequency axis) until it reaches +180° at ω0·10ζ, at which point it becomes constant at +180° for high frequencies. Phase goes through +90° at ω=ω0. If ζ<0.02 the phase transition between 0 and +180° can be approximated by a vertical line.
Brief review of page: This document derived piecewise linear approximations that can be used to draw different elements of a Bode diagram. A synopsis of these rules can be found in a separate document.
# Past Nomura Seminar
20 February 2014
16:00
to
17:30
Abstract
In this work, we want to construct the solution $(Y,Z,K)$ to the following BSDE $$\begin{array}{l} Y_t=\xi+\int_t^Tf(s,Y_s,Z_s)ds-\int_t^TZ_sdB_s+K_T-K_t, \quad 0\le t\le T, \\ {\mathbf E}[l(t, Y_t)]\ge 0, \quad 0\le t\le T,\\ \int_0^T{\mathbf E}[l(t, Y_t)]dK_t=0, \\ \end{array}$$ where $x\mapsto l(t, x)$ is non-decreasing and the terminal condition $\xi$ is such that ${\mathbf E}[l(T,\xi)]\ge 0$. This equation is different from the (classical) reflected BSDE. In particular, for a solution $(Y,Z,K)$, we require that $K$ is deterministic. We will first study the case when $l$ is linear, and then general cases. We also give some application to mathematical finance. This is a joint work with Philippe Briand and Romuald Elie.
• Nomura Seminar
13 February 2014
16:00
to
17:30
Peter Tankov
Abstract
We construct and study market models admitting optimal arbitrage. We say that a model admits optimal arbitrage if it is possible, in a zero-interest rate setting, starting with an initial wealth of 1 and using only positive portfolios, to superreplicate a constant c>1. The optimal arbitrage strategy is the strategy for which this constant has the highest possible value. Our definition of optimal arbitrage is similar to the one in Fernholz and Karatzas (2010), where optimal relative arbitrage with respect to the market portfolio is studied. In this work we present a systematic method to construct market models where the optimal arbitrage strategy exists and is known explicitly. We then develop several new examples of market models with arbitrage, which are based on economic agents' views concerning the impossibility of certain events rather than ad hoc constructions. We also explore the concept of fragility of arbitrage introduced in Guasoni and Rasonyi (2012), and provide new examples of arbitrage models which are not fragile in this sense. References: Fernholz, D. and Karatzas, I. (2010). On optimal arbitrage. The Annals of Applied Probability, 20(4):1179–1204. Guasoni, P. and Rasonyi, M. (2012). Fragility of arbitrage and bubbles in diffusion models. preprint.
• Nomura Seminar
6 February 2014
16:00
to
17:30
Mike Tehranchi
Abstract
There are many financial models used in practice (CIR/Heston, Vasicek, Stein-Stein, quadratic normal) whose popularity is due, in part, to their analytically tractable asset pricing. In this talk we will show that it is possible to generalise these models in various ways while maintaining tractability. Conversely, we will also characterise the family of models which admit this type of tractability, in the spirit of the classification of polynomial term structure models.
• Nomura Seminar
28 January 2014
12:30
Abstract
The finance literature documents a relation between labor income and the cross-section of stock returns. One possible explanation for this is the hedging decisions of investors with relative wealth concerns. This implies a negative risk premium associated with stock returns correlated with local undiversifiable wealth, since investors are willing to pay more for stocks that help their hedging goals. We find evidence that is consistent with these regularities. In addition, we show that the effect varies across geographic areas depending on the size and variability of undiversifiable wealth, proxied by labor income.
• Nomura Seminar
23 January 2014
16:00
to
17:30
Johannes Muhle-Karbe
Abstract
An investor trades a safe and several risky assets with linear price impact to maximize expected utility from terminal wealth. In the limit for small impact costs, we explicitly determine the optimal policy and welfare, in a general Markovian setting allowing for stochastic market, cost, and preference parameters. These results shed light on the general structure of the problem at hand, and also unveil close connections to optimal execution problems and to other market frictions such as proportional and fixed transaction costs.
• Nomura Seminar
6 December 2013
16:00
Abstract
Worst-case portfolio optimization has been introduced in Korn and Wilmott (2002) and is based on distinguishing between random stock price fluctuations and market crashes which are subject to Knightian uncertainty. Due to the absence of full probabilistic information, a worst-case portfolio problem is considered that will be solved completely. The corresponding optimal strategy is of a multi-part type and makes an investor indifferent between the occurrence of the worst possible crash and no crash at all. We will consider various generalizations of this setting and - as a very recent result - will in particular answer the question "Is it good to save for bad times or should one consume more as long as one is still rich?"
• Nomura Seminar
29 November 2013
16:00
Abstract
We construct a model for the asset price in a limit order book which captures, on one hand, the main stylized facts of microstructure effects and, on the other hand, is tractable for dealing with optimal high-frequency trading by stochastic control methods. For this purpose, we introduce a model for describing the fluctuations of a tick-by-tick single asset price, based on a Markov renewal process. We consider a point process associated with the timestamps of the price jumps, and marks associated with the price increments. By modeling the marks with a suitable Markov chain, we can reproduce the strong mean-reversion of price returns known as microstructure noise. Moreover, by using a Markov renewal process, we can model the presence of spikes in the intensity of market activity, i.e. the volatility clustering. We also provide simple parametric and nonparametric statistical procedures for the estimation of our model. We obtain closed-form formulae for the mean signature plot, and show the diffusive behavior of our model in the large-scale limit. We illustrate our results by numerical simulations, and find that our model is consistent with empirical data on Euribor and Eurostoxx futures. In the second part, we use a dynamic programming approach to our semi-Markov model, applied to the problem of optimal high-frequency trading with a suitable modeling of the market order flow correlated with the stock price, taking into account in particular the adverse selection risk. We show a reduced form for the value function of the associated control problem, and provide a convergent computational scheme for solving the problem. Numerical tests display the shape of the optimal policies for the market-making problem. This talk is based on joint works with Pietro Fodra.
• Nomura Seminar
22 November 2013
16:00
Pierre Collin-Dufresne
Abstract
We extend Kyle's (1985) model of insider trading to the case where liquidity provided by noise traders follows a general stochastic process. Even though the level of noise trading volatility is observable, in equilibrium, measured price impact is stochastic. If noise trading volatility is mean-reverting, then the equilibrium price follows a multivariate stochastic volatility `bridge' process. More private information is revealed when volatility is higher. This is because insiders choose to optimally wait to trade more aggressively when noise trading volatility is higher. In equilibrium, market makers anticipate this, and adjust prices accordingly. In time series, insiders trade more aggressively, when measured price impact is lower. Therefore, aggregate execution costs to uninformed traders can be higher when price impact is lower
• Nomura Seminar
15 November 2013
16:00
Abstract
We study optimal portfolio strategies in a market where the drift is driven by an unobserved Markov chain. Information on the state of this chain is obtained from stock prices and from expert opinions in the form of signals at random discrete time points. We use stochastic filtering to transform the original problem into an optimization problem under full information where the state variable is the filter for the Markov chain. This problem is studied with dynamic programming techniques and with regularization arguments. Finally we discuss a number of numerical experiments
• Nomura Seminar
8 November 2013
16:00
Enrico Biffis
Abstract
We consider over-the-counter (OTC) transactions with bilateral default risk, and study the optimal design of the Credit Support Annex (CSA). In a setting where agents have access to a trading technology, default penalties and collateral costs arise endogenously as a result of foregone investment opportunities. We show how the optimal CSA trades off the costs of the collateralization procedure against the reduction in exposure to counterparty risk and expected default losses. The results are used to provide insights on the drivers of different collateral rules, including hedging motives, re-hypothecation of collateral, and close-out conventions. We show that standardized collateral rules can have a detrimental impact on risk sharing, which should be taken into account when assessing the merits of standardized vs. bespoke CSAs in non-centrally cleared OTC instruments. This is joint work with D. Bauer and L.R. Sotomayor (GSU).
• Nomura Seminar
## GMAT Retake? 730 (Q48, V42)
This topic has 2 expert replies and 1 member reply
BFB Newbie | Next Rank: 10 Posts
Joined
04 Mar 2016
Posted:
2 messages
#### GMAT Retake? 730 (Q48, V42)
Fri Mar 04, 2016 7:05 pm
Hello!
I am a metallurgical engineer (graduated in 2014) from India and currently work as an Operations Manager in a Fortune 500 steel-firm in rural India (~2 years of experience). I took the GMAT about a week ago, and my results stand as follows:
Overall: 730
Quant: 48 (73%ile)
Verbal: 42 (96%ile)
IR: 5
AWA: 5
I have a few questions and I would really appreciate it if they were answered. I am primarily interested in pursuing Operations Management.
Here are my questions:
(1) Given the Indian test-taking demographic, would you say that my Q-V raw scores are skewed? And if they are, how would you suppose adcoms would look upon them? Would I be at a disadvantage because of a lower Quant score or at an advantage because of a relatively higher Verbal score?
(2) Would you say it is necessary for me to re-take the GMAT because my percentiles don't hold up to the 80/80 selection criteria? Or could I simply put the test to rest and focus on profile building?
(3) Could I offset my low Quant score with other Quant rooted achievements? Such as clearing a particularly difficult statistics course, or the first level of the CFA? Would such achievements take attention away from a low Q score?
Thank you so much for your time!
Cheers
BFB
### GMAT/MBA Expert
Joined
28 Jan 2016
Posted:
171 messages
Followed by:
3 members
21
Sat Mar 05, 2016 1:09 am
A main consideration in deciding whether to retake the GMAT or not is your confidence in getting a higher score on the retake. In your case, improving on the quant score to meet the percentile breakdown for Verbal and Quant would help.
Given the highly competitive applicant pool you belong to, I believe the disadvantage of the Quant score would outweigh the advantage of the Verbal score.
Wish you all the best!
BFB Newbie | Next Rank: 10 Posts
Joined
04 Mar 2016
Posted:
2 messages
Sat Mar 05, 2016 10:30 pm
Hello Edison,
Thank you for your prompt reply. I do not intend to apply to schools until early next year, and I think I'll focus on other aspects of profile building as of now and showcase my Quant potential via other channels.
One other question though - is the 80/80 criteria a rule of thumb for all programs in general or is it only applicable to select schools and programs? How do I know which programs employ such a cutoff and which programs do not?
Thanks again
BFB
### GMAT/MBA Expert
Joined
28 Jan 2016
Posted:
171 messages
Followed by:
3 members
21
Sat Mar 05, 2016 10:53 pm
Hi BFB,
It is not a hard cut-off; even for the GMAT score, schools do not usually state a minimum. However, the top schools would usually have higher average scores.
Wish you all the best in preparing for your applications.
# What exactly is “gentrification”? How should we deal with it?
Nov 26, JDN 2458083
“Gentrification” is a word that is used in a variety of mutually-inconsistent ways. If you compare the way social scientists use it to the way journalists use it, for example, they are almost completely orthogonal.
The word “gentrification” is meant to invoke the concept of a feudal gentry—a hereditary landed class that extracts rents from the rest of the population while contributing little or nothing themselves.
If indeed that is what we are talking about, then obviously this is bad. Moreover, it’s not an entirely unfounded fear; there are some remarkably strong vestiges of feudalism in the developed world, even in the United States where we never formally had a tradition of feudal titles. There really is a significant portion of the world’s wealth held by a handful of billionaire landowner families.
But usually when people say “gentrification” they mean something much broader. Almost any kind of increase in urban real estate prices gets characterized as “gentrification” by at least somebody, and herein lies the problem.
In fact, the kind of change that is most likely to get characterized as “gentrification” isn’t even the rising real estate prices we should be most worried about. People aren’t concerned when the prices of suburban homes double in 20 years. You might think that things that are already too expensive getting more expensive would be the main concern, but on the contrary, people are most likely to cry “gentrification” when housing prices rise in poor areas where housing is cheap.
One of the most common fears about gentrification is that it will displace local residents. In fact, the best quasi-experimental studies show little or no displacement effect. It’s actually mainly middle-class urbanites who get displaced by rising rents. Poor people typically own their homes, and actually benefit from rising housing prices. Young upwardly-mobile middle-class people move to cities to rent apartments near where they work, and tend to assume that’s how everyone lives, but it’s not. Rising rents in a city are far more likely to push out its grad students than they are poor families that have lived there for generations. Part of why displacement does not occur may be because of policies specifically implemented to fight it, such as subsidized housing and rent control. If that’s so, let’s keep on subsidizing housing (though rent control will always be a bad idea).
Nor is gentrification actually a very widespread phenomenon. The majority of poor neighborhoods remain poor indefinitely. In most studies, only about 30% of neighborhoods classified as “gentrifiable” actually end up “gentrifying”. Less than 10% of the neighborhoods that had high poverty rates in 1970 had low poverty rates in 2010.
Most people think gentrification reduces crime, but in the short run the opposite is the case. Robbery and larceny are higher in gentrifying neighborhoods. Criminals are already there, and suddenly they get much more valuable targets to steal from, so they do.
There is also a general perception that gentrification involves White people pushing Black people out, but this is also an overly simplistic view. First of all, a lot of gentrification is led by upwardly-mobile Black and Latino people. Black people who live in gentrified neighborhoods seem to be better off than Black people who live in non-gentrified neighborhoods; though selection bias may contribute to this effect, it can’t be all that strong, or we’d observe a much stronger displacement effect. Moreover, some studies have found that gentrification actually tends to increase the racial diversity of neighborhoods, and may actually help fight urban self-segregation, though it does also tend to increase racial polarization by forcing racial mixing.
What should we conclude from all this? I think the right conclusion is we are asking the wrong question.
Rising housing prices in poor areas aren’t inherently good or inherently bad, and policies designed specifically to increase or decrease housing prices are likely to have harmful side effects. What we need to be focusing on is not houses or neighborhoods but people. Poverty is definitely a problem, for sure. Therefore we should be fighting poverty, not “gentrification”. Directly transfer wealth from the rich to the poor, and then let the housing market fall where it may.
There is still some role for government in urban planning more generally, regarding things like disaster preparedness, infrastructure development, and transit systems. It may even be worthwhile to design regulations or incentives that directly combat racial segregation at the neighborhood level, for, as the Schelling Segregation Model shows, it doesn’t take a large amount of discriminatory preference to have a large impact on socioeconomic outcomes. But don’t waste effort fighting “gentrification”; directly design policies that will incentivize desegregation.
Rising rent as a proportion of housing prices is still bad, and the fundamental distortions in our mortgage system that prevent people from buying houses are a huge problem. But rising housing prices are most likely to be harmful in rich neighborhoods, where housing is already overpriced; in poor neighborhoods where housing is cheap, rising prices might well be a good thing.
In fact, I have a proposal to rapidly raise homeownership across the United States, which is almost guaranteed to work, directly corrects an enormous distortion in financial markets, and would cost about as much as the mortgage interest deduction (which should probably be eliminated, as most economists agree). Give each US adult a one-time grant voucher which gives them $40,000 that can only be spent as a down payment on purchasing a home. Each time someone turns 18, they get a voucher. You only get one over your lifetime, so use it wisely (otherwise the policy could become extremely expensive); but this is an immediate direct transfer of wealth that also reduces your credit constraint.
I know I for one would be house-hunting right now if I were offered such a voucher. The mortgage interest deduction means nothing to me, because I can't afford a down payment. Where the mortgage interest deduction is regressive, benefiting the rich more than the poor, this policy gives everyone the same amount, like a basic income.
In the short run, this policy would probably be expensive, as we'd have to pay out a large number of vouchers at once; but with our current long-run demographic trends, the amortized cost is basically the same as the mortgage interest deduction. And the US government especially should care about the long-run amortized cost, as it is an institution that has lasted over 200 years without ever missing a payment and can currently borrow at negative real interest rates.
# Why risking nuclear war should be a war crime
Nov 19, JDN 2458078
"What is the value of a human life?" is a notoriously difficult question, probably because people keep trying to answer it in terms of dollars, and it rightfully offends our moral sensibilities to do so. We shouldn't be valuing people in terms of dollars—we should be valuing dollars in terms of their benefits to people.
So let me ask a simpler question: Does the value of an individual human life increase, decrease, or stay the same, as we increase the number of people in the world?
A case can be made that it should stay the same: Why should my value as a person depend upon how many other people there are? Everything that I am, I still am, whether there are a billion other people or a thousand.
But in fact I think the correct answer is that it decreases. This is for two reasons: First, anything that I can do is less valuable if there are other people who can do it better. This is true whether we're talking about writing blog posts or ending world hunger. Second, and most importantly, if the number of humans in the world gets small enough, we begin to face danger of total permanent extinction.
If the value of a human life is constant, then 1,000 deaths is equally bad whether it happens in a population of 10,000 or a population of 10 billion. That doesn't seem right, does it? It seems more reasonable to say that losing ten percent should have a roughly constant effect; in that case losing 1,000 people in a population of 10,000 is equally bad as losing 1 billion in a population of 10 billion. If that seems too strong, we could choose some value in between, and say perhaps that losing 1,000 out of 10,000 is equally bad as losing 1 million out of 1 billion. This would mean that the value of 1 person's life today is about 1/1,000 of what it was immediately after the Toba Event.
Of course, with such uncertainty, perhaps it's safest to assume constant value. This seems the fairest, and it is certainly a reasonable approximation.
In any case, I think it should be obvious that the inherent value of a human life does not increase as you add more human lives. Losing 1,000 people out of a population of 7 billion is not worse than losing 1,000 people out of a population of 10,000. That way lies nonsense.
Yet if we agree that the value of a human life is not increasing, this has a very important counter-intuitive consequence: It means that increasing the risk of a global catastrophe is at least as bad as causing a proportional number of deaths. Specifically, it implies that a 1% risk of global nuclear war is worse than killing 10 million people outright.
The calculation is simple: If the value of a human life is a constant V, then the expected utility (admittedly, expected utility theory has its flaws) from killing 10 million people is -10 million V. But the expected utility from a 1% risk of global nuclear war is 1% times -V times the expected number of deaths from such a nuclear war—and I think even 2 billion is a conservative estimate. (0.01)(-2 billion) V = -20 million V.
This probably sounds too abstract, or even cold, so let me put it another way. Suppose we had the choice between two worlds, and these were the only worlds we could choose from. In world A, there are 100 leaders who each make choices that result in 10 million deaths. In world B, there are 100 leaders who each make choices that result in a 1% chance of nuclear war. Which world should we choose?
The choice is a terrible one, to be sure. In world A, 1 billion people die. Yet what happens in world B? If the risks are independent, we can't just multiply by 100 to get a guarantee of nuclear war. The actual probability is 1-(1-0.01)^100 = 63%. Yet even so, (0.63)(2 billion) = 1.26 billion. The expected number of deaths is higher in world B. Indeed, the most likely scenario is that 2 billion people die.
Yet this is probably too conservative. The risks are most likely positively correlated; two world leaders who each take a 1% chance of nuclear war probably do so in response to one another. Therefore maybe adding up the chances isn't actually so unreasonable—for all practical intents and purposes, we may be best off considering nuclear war in world B as guaranteed to happen. In that case, world B is even worse.
And that is all assuming that the nuclear war is relatively contained. Major cities are hit, then a peace treaty is signed, and we manage to rebuild human civilization more or less as it was. This is what most experts on the issue believe would happen; but I for one am not so sure. The nuclear winter and total collapse of institutions and infrastructure could result in a global apocalypse that would result in human extinction: not 2 billion deaths but 7 billion, and an end to all of humanity's projects once and forever.
This is the kind of outcome we should be prepared to do almost anything to prevent. What does this imply for global policy? It means that we should be far more aggressive in punishing any action that seems to bring the world closer to nuclear war. Even tiny increases in risk, of the sort that would ordinarily be considered negligible, are as bad as murder. A measurably large increase is as bad as genocide.
Of course, in practice, we have to be able to measure something in order to punish it. We can't have politicians imprisoned over 0.000001% chances of nuclear war, because such a chance is so tiny that there would be no way to attain even reasonable certainty that such a change had even occurred, much less who was responsible.
Even for very large chances—and in this context, 1% is very large—it would be highly problematic to directly penalize increasing the probability, as we have no consistent, fair, objective measure of that probability. Therefore in practice what I think we must do is severely and mercilessly penalize certain types of actions that would be reasonably expected to increase the probability of catastrophic nuclear war.
If we had the chance to start over from the Manhattan Project, maybe simply building a nuclear weapon should be considered a war crime. But at this point, nuclear proliferation has already proceeded far enough that this is no longer a viable option. At least the US and Russia for the time being seem poised to maintain their nuclear arsenals, and in fact it's probably better for them to keep maintaining and updating them rather than leaving decades-old ICBMs to rot. What can we do instead?
First, we probably need to penalize speech that would tend to incite war between nuclear powers. Normally I am fiercely opposed to restrictions on speech, but this is nuclear war we're talking about. We can't take any chances on this one. If there is even a slight chance that a leader's rhetoric might trigger a nuclear conflict, they should be censored, punished, and probably even imprisoned. Making even a veiled threat of nuclear war is like pointing a gun at someone's head and threatening to shoot them—only the gun is pointed at everyone's head simultaneously. This isn't just yelling "fire" in a crowded theater; it's literally threatening to burn down every theater in the world at once.
Such a regulation must be designed to allow speech that is necessary for diplomatic negotiations, as conflicts will invariably arise between any two countries. We need to find a way to draw the line so that it's possible for a US President to criticize Russia's intervention in the Ukraine or for a Chinese President to challenge US trade policy, without being accused of inciting war between nuclear powers.
But one thing is quite clear: Wherever we draw that line, President Trump's statement about "fire and fury" definitely crosses it. This is a direct threat of nuclear war, and it should be considered a war crime. That reason by itself—let alone his web of Russian entanglements and violations of the Emoluments Clause—should be sufficient to not only have Trump removed from office, but to have him tried at the Hague. Impulsiveness and incompetence are no excuse when weapons of mass destruction are involved.
Second, any nuclear policy that would tend to increase first-strike capability rather than second-strike capability should be considered a violation of international law. In case you are unfamiliar with such terms: First-strike capability consists of weapons such as ICBMs that are only viable to use as the opening salvo of an attack, because their launch sites can be easily located and targeted. Second-strike capability consists of weapons such as submarines that are more concealable, so it's much more likely that they could wait for an attack to happen, confirm who was responsible and how much damage was done, and then retaliate afterward.
Even that retaliation would be difficult to justify: It's effectively answering genocide with genocide, the ultimate expression of "an eye for an eye" writ large upon humanity's future.
I've previously written about my Credible Targeted Conventional Response strategy that makes it both more ethical and more credible to respond to a nuclear attack with a non-nuclear retaliation. But at least second-strike weapons are not inherently only functional at starting a nuclear war. A first-strike weapon can theoretically be fired in response to a surprise attack, but only before the attack hits you—which gives you literally minutes to decide the fate of the world, most likely with only the sketchiest of information upon which to base your decision. Second-strike weapons allow deliberation. They give us a chance to think carefully for a moment before we unleash irrevocable devastation.
All the launch codes should of course be randomized one-time pads for utmost security. But in addition to the launch codes themselves, I believe that anyone who wants to launch a nuclear weapon should be required to type, letter by letter (no copy-pasting), and then have the machine read aloud, Oppenheimer's line about Shiva, "Now I am become Death, the destroyer of worlds." Perhaps the passphrase should conclude with something like "I hereby sentence millions of innocent children to death by fire, and millions more to death by cancer." I want it to be as salient as possible in the heads of every single soldier and technician just exactly how many innocent people they are killing. And if that means they won't turn the key—so be it.
(Indeed, I wouldn't mind if every Hellfire missile required a passphrase of "By the authority vested in me by the United States of America, I hereby sentence you to death or dismemberment." Somehow I think our drone strike numbers might go down. And don't tell me they couldn't; this isn't like shooting a rifle in a firefight. These strikes are planned days in advance and specifically designed to be unpredictable by their targets.)
If everyone is going to have guns pointed at each other, at least in a second-strike world they're wearing body armor and the first one to pull the trigger won't automatically be the last one left standing.
Third, nuclear non-proliferation treaties need to be strengthened into disarmament treaties, with rapid but achievable timelines for disarmament of all nuclear weapons, starting with the nations that have the largest arsenals. Random inspections of the disarmament should be performed without warning on a frequent schedule. Any nation that is so much as a day late on their disarmament deadlines needs to have its leaders likewise hauled off to the Hague. If there is any doubt at all in your mind whether your government will meet its deadlines, you need to double your disarmament budget. And if your government is too corrupt or too bureaucratic to meet its deadlines even if they try, well, you'd better shape up fast. We'll keep removing and imprisoning your leaders until you do. Once again, nothing can be left to chance.
We might want to maintain some small nuclear arsenal for the sole purpose of deflecting asteroids from colliding with the Earth. If so, that arsenal should be jointly owned and frequently inspected by both the United States and Russia—not just the nuclear superpowers, but also the only two nations with sufficient rocket launch capability in any case. The launch of the deflection missiles should require joint authorization from the presidents of both nations. But in fact nuclear weapons are probably not necessary for such a deflection; nuclear rockets would probably be a better option.
Vaporizing the asteroid wouldn't accomplish much, even if you could do it; what you actually want to do is impart as much sideways momentum as possible.
What I'm saying probably sounds extreme. It may even seem unjust or irrational. But look at those numbers again. Think carefully about the value of a human life. When we are talking about a risk of total human extinction, this is what rationality looks like. Zero tolerance for drug abuse or even terrorism is a ridiculous policy that does more harm than good. Zero tolerance for risk of nuclear war may be the only hope for humanity's ongoing survival.
Throughout the vastness of the universe, there are probably billions of civilizations—I need only assume one civilization for every hundred galaxies. Of the civilizations that were unwilling to adopt zero tolerance policies on weapons of mass destruction and bear any cost, however unthinkable, to prevent their own extinction, there is almost boundless diversity, but they all have one thing in common: None of them will exist much longer. The only civilizations that last are the ones that refuse to tolerate weapons of mass destruction.
# Daylight Savings Time is pointless and harmful
Nov 12, JDN 2458069
As I write this, Daylight Savings Time has just ended. Sleep deprivation costs the developed world about 2% of GDP—on the order of $1 trillion per year. The US alone loses enough productivity from sleep deprivation that recovering this loss would give us enough additional income to end world hunger.
So, naturally, we have a ritual every year where we systematically impose an hour of sleep deprivation on the entire population for six months. This makes sense somehow.
The start of Daylight Savings Time each year is associated with a spike in workplace injuries, heart attacks, and suicide.
Nor does the “extra” hour of sleep we get in the fall compensate; in fact, it comes with its own downsides. Pedestrian fatalities spike immediately after the end of Daylight Savings Time; the rate of assault also rises at the end of DST, though it does also seem to fall when DST starts.
Daylight Savings Time was created to save energy. It does do that… technically. The total energy savings for the United States due to DST amounts to about 0.3% of our total electricity consumption. In some cases it can even increase energy use, though it does seem to smooth out electricity consumption over the day in a way that is useful for solar and wind power.
But this is a trivially small amount of energy savings, and there are far better ways to achieve it.
Simply due to new technologies and better policies, manufacturing in the US has reduced its energy costs per dollar of output by over 4% in the last few years. Simply getting all US states to use energy as efficiently as it is used in New York or California (not much climate similarity between those two states, but hmm… something about politics comes to mind…) would cut our energy consumption by about 30%.
The total amount of energy saved by DST is comparable to the amount of electricity now produced by small-scale residential photovoltaics—so simply doubling residential solar power production (which we’ve been doing every few years lately) would yield the same benefits as DST without the downsides. If we really got serious about solar power and adopted the policies necessary to get a per-capita solar power production comparable to Germany (not a very sunny place, mind you—Sacramento gets over twice the hours of sun per year that Berlin does), we would increase our solar power production by a factor of 10—five times the benefits of DST, none of the downsides.
Alternatively we could follow France’s model and get serious about nuclear fission. France produces over three hundred times as much energy from nuclear power as the US saves via Daylight Savings Time. Not coincidentally, France produces half as much CO2 per dollar of GDP as the United States.
Why would we persist in such a ridiculous policy, with such terrible downsides and almost no upside? To a first approximation, all human behavior is social norms.
# Demystifying dummy variables
Nov 5, JDN 2458062
Continuing my series of blog posts on basic statistical concepts, today I’m going to talk about dummy variables. Dummy variables are quite simple, but for some reason a lot of people—even people with extensive statistical training—often have trouble understanding them. Perhaps people are simply overthinking matters, or making subtle errors that end up having large consequences.
A dummy variable (more formally a binary variable) is a variable that has only two states: “No”, usually represented 0, and “Yes”, usually represented 1. A dummy variable answers a single “Yes or no” question. They are most commonly used for categorical variables, answering questions like “Is the person’s race White?” and “Is the state California?”; but in fact almost any kind of data can be represented this way: We could represent income using a series of dummy variables like “Is your income greater than $50,000?” “Is your income greater than$51,000?” and so on. As long as the number of possible outcomes is finite—which, in practice, it always is—the data can be represented by some (possibly large) set of dummy variables. In fact, if your data set is large enough, representing numerical data with dummy variables can be a very good thing to do, as it allows you to account for nonlinear effects without assuming some specific functional form.
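For instance, a handful of threshold dummies for a numerical variable might be built like this (a hypothetical sketch; the column name and cutoffs are made up):

```python
import pandas as pd

df = pd.DataFrame({"income": [32_000, 48_000, 51_000, 75_000]})

# One dummy per "Is your income greater than X?" question
for cutoff in (40_000, 50_000, 60_000):
    df[f"income_gt_{cutoff}"] = (df["income"] > cutoff).astype(int)

print(df)
```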
Most of the misunderstanding regarding dummy variables involves applying them in regressions and interpreting the results.
Probably the most common confusion is about what dummy variables to include. When you have a set of categories represented in your data (e.g. one for each US state), you want to include dummy variables for all but one of them. The most common mistake here is to try to include all of them, and end up with a regression that doesn’t make sense, or if you have a catchall category like “Other” (e.g. race is coded as “White/Black/Other”), leaving out that one and getting results with a nonsensical baseline.
You don’t have to leave one out if you only have one set of categories and you don’t include a constant in your regression; then the baseline will emerge automatically from the regression. But this is dangerous, as the interpretation of the coefficients is no longer quite so simple.
The thing to keep in mind is that a coefficient on a dummy variable is an effect of a change—so the coefficient on “White” is the effect of being White. In order to be an effect of a change, that change must be measured against some baseline. The dummy variable you exclude from the regression is the baseline—because the effect of changing to the baseline from the baseline is by definition zero.
Here’s a very simple example where all the regressions can be done by hand. Suppose you have a household with 1 human and 1 cat, and you want to know the effect of species on number of legs. (I mean, hopefully this is something you already know; but that makes it a good illustration.) In what follows, you can safely skip the matrix algebra; but I included it for any readers who want to see how these concepts play out mechanically in the math.
Your outcome variable Y is legs: The human has 2 and the cat has 4. We can write this as a matrix:
$Y = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$
What dummy variables should we choose? There are actually several options.
The simplest option is to include both a human variable and a cat variable, and no constant. Let’s put the human variable first. Then our human subject has a value of X1 = [1 0] (“Yes” to human and “No” to cat) and our cat subject has a value of X2 = [0 1].
This is very nice in this case, as it makes our matrix of independent variables simply an identity matrix:
$X = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
This makes the calculations extremely nice, because transposing, multiplying, and inverting an identity matrix all just give us back an identity matrix. The standard OLS regression coefficient is B = (X’X)-1 X’Y, which in this case just becomes Y itself.
$B = (X’X)^{-1} X’Y = Y = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$
Our coefficients are 2 and 4. How would we interpret this? Pretty much what you’d think: The effect of being human is having 2 legs, while the effect of being a cat is having 4 legs. This amounts to choosing a baseline of nothing—the effect is compared to a hypothetical entity with no legs at all. And indeed this is what will happen more generally if you do a regression with a dummy for each category and no constant: The baseline will be a hypothetical entity with an outcome of zero on whatever your outcome variable is.
So far, so good.
But what if we had additional variables to include? Say we have both cats and humans with black hair and brown hair (and no other colors). If we now include the variables human, cat, black hair, brown hair, we won’t get the results we expect—in fact, we’ll get no result at all. The regression is mathematically impossible, regardless of how large a sample we have.
This is why it’s much safer to choose one of the categories as a baseline, and include that as a constant. We could pick either one; we just need to be clear about which one we chose.
Say we take human as the baseline. Then our variables are constant and cat. The variable constant is just 1 for every single individual. The variable cat is 0 for humans and 1 for cats.
Now our independent variable matrix looks like this:
$X = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$
The matrix algebra isn’t quite so nice this time:
$X’X = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$
$(X’X)^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$
$X’Y = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ 4 \end{bmatrix}$
$B = (X’X)^{-1} X’Y = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 6 \\ 4 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$
Our coefficients are now 2 and 2. Now, how do we interpret that result? We took human as the baseline, so what we are saying here is that the default is to have 2 legs, and then the effect of being a cat is to get 2 extra legs.
That sounds a bit anthropocentric—most animals are quadripeds, after all—so let’s try taking cat as the baseline instead. Now our variables are constant and human, and our independent variable matrix looks like this:
$X = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$
$X’X = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$
$(X’X)^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$
$X’Y = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ 2 \end{bmatrix}$
$B = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 6 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \end{bmatrix}$
Our coefficients are 4 and -2. This seems much more phylogenetically correct: The default number of legs is 4, and the effect of being human is to lose 2 legs.
All these regressions are really saying the same thing: Humans have 2 legs, cats have 4. And in this particular case, it’s simple and obvious. But once things start getting more complicated, people tend to make mistakes even on these very simple questions.
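Here is a small numerical companion (not part of the original post) that runs the two-observation regression under all three encodings discussed above and recovers the same coefficients:

```python
import numpy as np

Y = np.array([2.0, 4.0])                    # legs: human, cat

def ols(X, Y):
    # B = (X'X)^(-1) X'Y, computed by solving the normal equations
    return np.linalg.solve(X.T @ X, X.T @ Y)

X_no_constant = np.array([[1.0, 0.0],       # columns: human, cat
                          [0.0, 1.0]])
X_human_base  = np.array([[1.0, 0.0],       # columns: constant, cat
                          [1.0, 1.0]])
X_cat_base    = np.array([[1.0, 1.0],       # columns: constant, human
                          [1.0, 0.0]])

print(ols(X_no_constant, Y))                # [2. 4.]
print(ols(X_human_base, Y))                 # [2. 2.]
print(ols(X_cat_base, Y))                   # [ 4. -2.]
```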
A common mistake would be to try to include a constant and both dummy variables: constant human cat. What happens if we try that? The matrix algebra gets particularly nasty, first of all:
$X = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$
$X’X = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$
Our covariance matrix X’X is now 3×3, first of all. That means we have more coefficients than we have data points. But we could throw in another human and another cat to fix that problem.
More importantly, the covariance matrix is not invertible. Rows 2 and 3 add up together to equal row 1, so we have a singular matrix.
If you tried to run this regression, you’d get an error message about “perfect multicollinearity”. What this really means is you haven’t chosen a valid baseline. Your baseline isn’t human and it isn’t cat; and since you included a constant, it isn’t a baseline of nothing either. It’s… unspecified.
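A quick way to see the problem numerically (a sketch, not from the original post) is to check the rank of X'X once a constant and a dummy for every category are included:

```python
import numpy as np

# Columns: constant, human, cat -- two humans and two cats this time
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 0, 1]], dtype=float)

print(np.linalg.matrix_rank(X.T @ X))   # 2, not 3: human + cat = constant
# np.linalg.solve(X.T @ X, X.T @ Y) would raise LinAlgError("Singular matrix")
```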
You actually can choose whatever baseline you want for this regression, by setting the constant term to whatever number you want. Set a constant of 0 and your baseline is nothing: you’ll get back the coefficients 0, 2 and 4. Set a constant of 2 and your baseline is human: you’ll get 2, 0 and 2. Set a constant of 4 and your baseline is cat: you’ll get 4, -2, 0. You can even choose something weird like 3 (you’ll get 3, -1, 1) or 7 (you’ll get 7, -5, -3) or -4 (you’ll get -4, 6, 8). You don’t even have to choose integers; you could pick -0.9 or 3.14159. As long as the constant plus the coefficient on human add to 2 and the constant plus the coefficient on cat add to 4, you’ll get a valid regression.
Again, this example seems pretty simple. But it’s an easy trap to fall into if you don’t think carefully about what variables you are including. If you are looking at effects on income and you have dummy variables on race, gender, schooling (e.g. no high school, high school diploma, some college, Bachelor’s, master’s, PhD), and what state a person lives in, it would be very tempting to just throw all those variables into a regression and see what comes out. But nothing is going to come out, because you haven’t specified a baseline. Your baseline isn’t even some hypothetical person with \$0 income (which already doesn’t sound like a great choice); it’s just not a coherent baseline at all.
Generally the best thing to do (for the most precise estimates) is to choose the most common category in each set as the baseline. So for the US a good choice would be to set the baseline as White, female, high school diploma, California. Another common strategy when looking at discrimination specifically is to make the most privileged category the baseline, so we’d instead have White, male, PhD, and… Maryland, it turns out. Then we expect all our coefficients to be negative: Your income is generally lower if you are not White, not male, have less than a PhD, or live outside Maryland.
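In practice this usually just means dropping one dummy per categorical variable; a hypothetical pandas sketch (column names and categories are made up) might look like this:

```python
import pandas as pd

df = pd.DataFrame({
    "race":  ["White", "Black", "White", "Other"],
    "state": ["CA", "MS", "CA", "CA"],
})

# Put the chosen baseline first in each category list, then drop it.
df["race"]  = pd.Categorical(df["race"],  categories=["White", "Black", "Other"])
df["state"] = pd.Categorical(df["state"], categories=["CA", "MS"])

X = pd.get_dummies(df, drop_first=True)     # baselines: White, CA
print(list(X.columns))                      # ['race_Black', 'race_Other', 'state_MS']
```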
This is also important if you are interested in interactions: For example, the effect on your income of being Black in California is probably not the same as the effect of being Black in Mississippi. Then you’ll want to include terms like Black and Mississippi, which for dummy variables is the same thing as taking the Black variable and multiplying by the Mississippi variable.
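The interaction dummy itself is nothing more than an elementwise product, as in this tiny illustration (hypothetical data):

```python
import numpy as np

black       = np.array([0, 1, 0, 1])    # "Is the person Black?"
mississippi = np.array([0, 1, 1, 0])    # "Does the person live in Mississippi?"

black_x_mississippi = black * mississippi
print(black_x_mississippi)              # [0 1 0 0], i.e. 1 only when both are 1
```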
But now you need to be especially clear about what your baseline is: If being White in California is your baseline, then the coefficient on Black is the effect of being Black in California, while the coefficient on Mississippi is the effect of being in Mississippi if you are White. The coefficient on Black and Mississippi is the effect of being Black in Mississippi, over and above the sum of the effects of being Black and the effect of being in Mississippi. If we saw a positive coefficient there, it wouldn’t mean that it’s good to be Black in Mississippi; it would simply mean that it’s not as bad as we might expect if we just summed the downsides of being Black with the downsides of being in Mississippi. And if we saw a negative coefficient there, it would mean that being Black in Mississippi is even worse than you would expect just from summing up the effects of being Black with the effects of being in Mississippi.
As long as you choose your baseline carefully and stick to it, interpreting regressions with dummy variables isn’t very hard. But so many people forget this step that they get very confused by the end, looking at a term like Black female Mississippi and seeing a positive coefficient, and thinking that must mean that life is good for Black women in Mississippi, when really all it means is the small mercy that being a Black woman in Mississippi isn’t quite as bad as you might think if you just added up the effect of being Black, plus the effect of being a woman, plus the effect of being Black and a woman, plus the effect of living in Mississippi, plus the effect of being Black in Mississippi, plus the effect of being a woman in Mississippi. |
# Boussinesq approximation (buoyancy)
In fluid dynamics, the Boussinesq approximation (pronounced: [businɛsk], named for Joseph Valentin Boussinesq) is used in the field of buoyancy-driven flow (also known as natural convection). It states that density differences are sufficiently small to be neglected, except where they appear in terms multiplied by g, the acceleration due to gravity. The essence of the Boussinesq approximation is that the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between the two fluids. Sound waves are impossible/neglected when the Boussinesq approximation is used since sound waves move via density variations.
Boussinesq flows are common in nature (such as atmospheric fronts, oceanic circulation, katabatic winds), industry (dense gas dispersion, fume cupboard ventilation), and the built environment (natural ventilation, central heating). The approximation is extremely accurate for many such flows, and makes the mathematics and physics simpler.
The approximation's advantage arises because when considering a flow of, say, warm and cold water of density $\rho_1$ and $\rho_2$ one needs only consider a single density $\rho$: the difference $\Delta\rho= \rho_1-\rho_2$ is negligible. Dimensional analysis shows that, under these circumstances, the only sensible way that acceleration due to gravity g should enter into the equations of motion is in the reduced gravity $g'$ where
$g' = g{\rho_1-\rho_2\over \rho}.$
(Note that the denominator may be either density without affecting the result, because the change would be of order $g(\Delta\rho/\rho)^2$.) The most generally used dimensionless numbers are the Richardson number and the Rayleigh number.
The mathematics of the flow is therefore simpler because the density ratio ($\rho_1/\rho_2$, a dimensionless number) does not affect the flow; the Boussinesq approximation states that it may be assumed to be exactly one.
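As a rough numerical illustration of how small the correction is, here is a sketch (the densities are illustrative values I chose for warm and slightly colder water; they are not taken from the text):

```python
g = 9.81            # m/s^2
rho_warm = 998.0    # kg/m^3 (illustrative)
rho_cold = 1002.0   # kg/m^3 (illustrative)

d_rho = rho_cold - rho_warm
print(d_rho / rho_cold)                # ~0.004: the density difference is tiny
print(g * d_rho / rho_cold)            # reduced gravity g' ~ 0.04 m/s^2
print(g * (d_rho / rho_cold) ** 2)     # ~2e-4 m/s^2: the ambiguity from the
                                       # choice of denominator is negligible
```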
## Inversions
One feature of Boussinesq flows is that they look the same when viewed upside-down, provided that the identities of the fluids are reversed. The Boussinesq approximation is inaccurate when the nondimensionalised density difference $\Delta\rho/\rho$ is of order unity.
For example, consider an open window in a warm room. The warm air inside is lighter than the cold air outside, which flows into the room and down towards the floor. Now imagine the opposite: a cold room exposed to warm outside air. Here the air flowing in moves up toward the ceiling. If the flow is Boussinesq (and the room is otherwise symmetrical), then viewing the cold room upside down is exactly the same as viewing the warm room right-way-round. This is because the only way density enters the problem is via the reduced gravity $g'$ which undergoes only a sign change when changing from the warm room flow to the cold room flow.
An example of a non-Boussinesq flow is bubbles rising in water. The behaviour of air bubbles rising in water is very different from the behaviour of water falling in air: in the former case rising bubbles tend to form hemispherical shells, while water falling in air splits into raindrops (at small length scales surface tension enters the problem and confuses the issue). |
# Windows Net View Error 53
His machine will respond to ping, including ping by name. Domain environments offer centralized management and troubleshooting.

In the lists below, PASTOR is one of the machines with issues; I (intentionally) didn't change the software. This node type first uses P (peer-to-peer) communication with NetBIOS. (See the TechNet reference on Error 53: https://technet.microsoft.com/en-us/library/cc940100.aspx.)
## Net View System Error 5
If the computer is on the local subnet, confirm that the name is spelled correctly and that the target computer is running TCP/IP as well. Please see the following DameWare Knowledge Base articles for more information: Using DameWare Development products in conjunction with XP SP2 (http://www.dameware.com/support/kb/article.aspx?ID=300068) and Using DameWare Development software in conjunction with a Firewall (http://www.dameware.com/support/kb/article.aspx?ID=201045).

In fact, the problem machine can see other machines and access their shares just fine. Windows Firewall is off, and the machines are running Windows XP [Version 5.1.2600]. Recently, file shares on 3 PCs can no longer be accessed.

While an answer has already been found, I want to add this for the sake of completeness, since I just wasted many hours figuring this out.

To distinguish between the two cases (a name-resolution problem versus a connectivity problem), use the following procedure to determine the cause of an Error 53 message: from the Start menu, open a command prompt. At the command prompt, type net view \\<IP address>, where <IP address> is the same network resource you used in the above procedure.
## System Error 53 Has Occurred Net Use
With all that said: if you perform a PortQry check and all the ports are good, then consider a master browser conflict (see also http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/Q_23657415.html).

The problem is in the peer-to-peer network configuration: if a name server (meaning WINS) does not exist, then it will not communicate; take a moment to review the node type. What happens when you go Start -> Run and enter \\IP.address?

Have you tried changing the network interface mode from NAT to Bridged? Windows Firewall is turned off.

Ensure all the necessary File & Printer Sharing ports are open on all routers/firewalls between the local and remote machines, and in any type of firewall software on the remote machine.
Remember also that the sniffer shows the command getting to PASTOR, but his machine does not respond. If more than one computer were experiencing this, then I'd agree with you. If the problem still appears, then you'll know the problem is not with your router.

If the remote machine is running Windows XP SP2 or Vista, then this issue is most likely related to the Windows Firewall, which is enabled by default. Net view on his PC shows the other PCs.

Now, that should have no bearing on why you can't perform NetBIOS queries, UNLESS you enabled DNS to perform NetBIOS lookups. There are also some services involved, "Browser", "Server" and "Client", I think. To restart the lanmanserver and browser services, either run the commands below or use the services console:

    net stop lanmanserver & net start lanmanserver
    net stop browser & net start browser

Also make sure that the server runs without network or name-resolution issues. For reference, the affected machine reports:

    C:\Documents and Settings\Barrie Henke>net config redir
    Computer name \\PASTOR

The most common culprit would be the Windows Firewall; its default configuration is to allow everything, but see if everything is allowed through the IP filters. Maybe NetBIOS is bound to the wrong adapter. In the future, I would go so far as to recommend starting a domain environment instead of a workgroup. (One suggestion was to check out the discussion at http://forums.cnet.com/5208-7589_102-0.html?threadID=266062.)

My PC and the other PC cannot access my notebook; they are not even reaching it. If I attempt to net view [machine name] I get "System error 53: The network path was not found."

PortQry is a command-line port-testing tool that runs on XP. Otherwise, this error can be easily duplicated outside of DameWare software by attempting to access the Admin$ share on the remote machine.
# Please explain the statement "the big bang happened everywhere at once"
1. Nov 11, 2014
### CaptDude
Obviously, I can understand the literal meaning of the phrase "the big bang happened everywhere at once" But I have never read a satisfactory explanation that eloquently helped me understand this concept........
2. Nov 11, 2014
### Clayjay
Space-time mathematically starts at a point. There is no space or time outside of the singularity at the start of the Big Bang. There was no time or space at that point. Then things change
and time and space are created at that point; they are linked together and define each other.
The Big Bang theory is an abstract construct created through math and geometry. What happened before the Big Bang is, in my view, a distraction.
"Where is the center of expansion in the universe today" a student asked.
"All the points in the universe today are the center of expansion of the universe today because all points where the same point in the beginning" the professor replied.
Yes - the BB happened at a point in time not in space. Because; to me, the time differential logically preceded the space differential. So for the Big Bang model there is no space outside of space time.
Just remember 95% of the universe is covered with "Dark energy and matter" which is saying we know nothing about 95% of the observed universe. On a human scale we know a lot but on a cosmic scale we know little, but we have good leads.
3. Nov 11, 2014
### Bandersnatch
Imagine a volume of space. It may be either infinite in extent, or finite in a way that curves on itself, so that going straight in any one direction makes you come back to where you started.
This space is filled with some sort of energy at high density and high temperature.
Pick any number of points in that space, and measure how far away they are spaced.
At some point in time, all the distances you had measured begin to increase simultaneously. As a result, the temperature and density of all the energy filling the space drops. Where once there was only radiation, particles and antiparticles begin to pop up and manage to stay around longer and longer without annihilating. Finally it gets cold enough for the electrons and protons to combine, which in turn allows light to move freely. That light is the CMBR. The distances keep on growing, and the stuff filling the space keeps on cooling and getting on average less dense (although local clumps of matter coalesce to eventually form galaxies).
That's pretty much the Big Bang theory. It states that the universe as we see it today expanded from an earlier hot and dense state. Nothing more. That's why a question of "where did it happen" is not really applicable. It happened in the universe, or it happened everywhere are the best answers one can come up with, but the real answer is that the question is just not very sensible - the only reason it keeps getting asked is that BB is often, and misleadingly, described as an explosion.
A natural follow-up question to ask is: but what happened earlier than that? Where did all that energy come from?
If the BB were a person, it'd say: "Don't know, don't care. Not my jurisdiction".
Another source of confusion is the distinction between the BB theory and the BB singularity. The singularity is what you get if you try and extrapolate the expansion backwards in time until you reach infinite densities and infinite temperatures at t=0. This is still a statement about the totality of space, and the reason people tend to conflate it with a point in space is probably due to the fact that they first tend to come in contact with the concept of singularity in the context of black holes, whose singularities are of the spatial variety. It's important to remember that a singularity is just a region where mathematics breaks down. The function $y=1/x$ has got a singularity at x=0. It's just an indication that the function is undefined for a certain value of the variable.
Last edited: Nov 11, 2014
4. Nov 11, 2014
### Chronos
It's almost a matter of philosophy. The answer could be ... In the beginning, there was nothing - and it became everything: or, if you prefer; In the beginning, there was everything - and we still have most of it left.
5. Nov 11, 2014
### guywithdoubts
That phrase is used to keep people aware that there is no center of the universe and that popular artistic depictions of the Big Bang (you know, the little bright light emanating things) are not correct. Like they explained before, if the universe is infinite, or if it's curved so that it is finite but there are no boundaries (think of the balloon analogy), then all points started separating from each other, thus "it happened everywhere at once".
6. Nov 11, 2014
### phinds
I agree w/ your post but this opening statement promotes the mistaken belief that it started at a point in SPACE, which it did not (as you clearly point out later) since if it had, there would be a center to the universe and there would consequently be a preferred direction to the motion of the expansion. There is neither.
I realize I'm not telling you anything you don't already know, I'm just trying to help others who read this thread to not get caught up in that fallacy.
7. Nov 11, 2014
### Staff: Mentor
Actually, strictly speaking, it doesn't, because the point singularity is not actually part of spacetime--it can't be, because the values of physical invariants (like the curvature scalars) are infinite there. The point singularity is really a limit of spacelike hypersurfaces that have a smaller and smaller scale factor as you get closer and closer to the limit. Each such hypersurface represents an instant of cosmological time, so you are correct that the Big Bang is best viewed as a moment of time, not a place in space.
8. Nov 12, 2014
### Helios
Supposing that the universe is infinite, then it is infinite for all times, there's just a change in a scale factor. What bugs me is how some universal happening can happen everywhere at once. We marvel at synchronized swimmers doing their act in sync. It seems that all of infinite space had to do something on cue to get things started, and this contradicts my intuition as to how this is possible.
9. Nov 12, 2014
### Torbjorn_L
That depends on what you mean by "big bang". As you can see below, you can redefine "big bang" in order to push it further and further back in time. [If you find it practicable. I sincerely don't know why people insist on doing this. It is my failing of imagination.]
Inflation removes the singularity that used to be attached to the start of the Hot Big Bang. As I understand it (not having studied general relativity) the removal is inherent:
"Hence general relativity cannot be used to show a singularity.
Penrose's theorem is more restricted, it only holds when matter obeys a stronger energy condition, called the dominant energy condition, which means that the energy is bigger than the pressure. All ordinary matter, with the exception of a vacuum expectation value of a scalar field, obeys this condition. During inflation, the universe violates the stronger dominant energy condition (but not the weak energy condition), and inflationary cosmologies avoid the initial big-bang singularity, rounding them out to a smooth beginning."
[ http://en.wikipedia.org/wiki/Penrose–Hawking_singularity_theorems ]
Maybe something that could be said to break down spacetime reasserts itself "later" if we manage to look further back in time. (I assume then as a result of exotic physics.)
In the meantime, it is much more generic to identify various definitions of big bang to a certain time, meaning the space volume experienced it "everywhere at once". (Whatever "it" means for you.)
Last edited: Nov 12, 2014
10. Nov 12, 2014
### Torbjorn_L
So far something like "all of ... space had to do something on cue" (or contra-intuitively, "infinite") has not been a necessary description.
The scale factor can expand space faster than the universal speed limit of anything within space, and the inflationary mechanism that solves the horizon problem, so that it appears everything happened on cue, involves having a finite volume described as the observable universe.
The same appearance goes for the Hot Big Bang events, the only difference is that they happen over a larger volume (the local universe) intersected by the observable universe.
Last edited: Nov 12, 2014
11. Nov 12, 2014
### Staff: Mentor
The Big Bang is not a "happening" in this sense. In the simple model with an initial singularity (i.e., without inflation and the refinements that leads to, which Torbjorn_L described in his post), the Big Bang just refers to that initial singularity, which is a boundary of spacetime, not a "happening". In inflation models where there is no initial singularity, the Big Bang just refers to the fact that, when the inflation period ends, the universe is filled with matter and radiation in an extremely hot, dense state, and is expanding very fast. This is not a single "happening", it's just a global description of the state of the universe at that instant of cosmological time; the "happenings" are the individual bits of matter and radiation in each little volume of the universe.
12. Nov 12, 2014
### Helios
Do cosmologists who believe in an original singularity believe that infinity can originate out of a point? Or does a singularity necessarily imply a finite universe? Or conversely, would an infinite universe negate the singularity belief?
13. Nov 12, 2014
### phinds
"Singularity" just means "the place where our math model breaks down and we don't know WHAT the hell was/is going on". ("was" in the case of the BB, "is" in the case of a black hole). It iS NOT a point.
14. Nov 12, 2014
### Staff: Mentor
No, they believe that the singularity is not part of spacetime. See post #7.
15. Nov 12, 2014
### CaptDude
Guywithdoubts said: That phrase is used to keep people aware that there is no center of the universe and that popular artistic depictions of the Big Bang (you know, the little bright light emanating things) are not correct. Like they explained before, if the universe is infinite or it's curved so that is finite but there are no boundaries (think of the balloon analogy), then all points started separating from each other, thus "it happened everywhere at once".
Thanks for all the replies. Very good information. However, I need to ask another question. Am I mistaken in saying I think the phrase "the big bang happened everywhere at once" could be rephrased as "inflation happened everywhere at once"? Having asked that, even I don't think that is correct because, if I remember correctly, inflation began a micro-fraction of a second AFTER the big bang.
Yet a lot of you seem to be referencing inflation in your answers.
Am I making it harder to understand than it is? Is my "eloquent" answer simply that space-time started at a "point" (I know this is not correct but I don't know how to phrase it right) and then inflation expanded space-time to "everywhere at once"?
I also want to ask about the universe having no center. If the universe is infinite, that is understandable. But if the universe is finite, it is harder to understand. How can any finite geometric shape have no center?
P.S. I am very happy to have found a place to enrich my understanding of life, the universe, and everything. ;)
Last edited: Nov 12, 2014
16. Nov 12, 2014
### phinds
You are getting confused because of the two radically different ways the term "big bang" is used. One way, the only way in my opinion that is meaningful or helpful, is the "Big Bang Theory", which is a very (but not completely) well understood description of how the universe evolved starting at about one Planck Time after the "singularity", saying NOTHING about anything before one Planck Time (other than "don't know WHAT was going on back then"). The other is just a reference to the singularity (more properly called the Big Bang Singularity), which is just a place where the math model breaks down and something we can't really say anything meaningful about (other than "don't know WHAT was going on back then").
Also, I would suggest that you refrain from saying that anything started out at a point and just say that it started out at a TIME and we don't know HOW it started out but we know a lot about what happened since then.
17. Nov 12, 2014
### CaptDude
I find this to be a very interesting question. Could somebody please give a thoughtful reply?
18. Nov 12, 2014
### Helios
For an infinite universe, I don't believe that shrinking the scale factor will ever result in a point. A point is finite. An infinite universe does not extrapolate back to a point. It never resembles a point at any stage, ever. A point-singularity is absurd.

But an infinite singularity (sounds oxymoronic) also is confounding. How can time start everywhere at once? It's as if we had an infinite number of marathon runners poised on an infinitely long starting line and God says "on your mark, get set, go" and he fires a track pistol. Or how could inflation begin or end everywhere at once, as if a memo were distributed saying "inflation will begin in one second" and the whole infinite universe is going to get the word?
19. Nov 12, 2014
### Staff: Mentor
The point is not part of spacetime (see my previous posts), so it's not the result of "shrinking the scale factor" the way you appear to be thinking of it.
However, it is true that the limit of the scale factor as the singularity is approached is zero. This could be described (sloppily) as "shrinking to a point". But it's not a "shrinking" in any physical sense; it's just a mathematical statement about a limit.
Time doesn't have to "start". It's just a coordinate, a way of labeing events. Or, if you like, a way of slicing up spacetime into an infinite sequence of spacelike slices, each of which is labeled with a "time", and each of which represents "space at an instant of time". If the universe is spatially infinite, then each of those spacelike slices is infinite.
We actually don't know that it did, and inflationary models don't require it to. They only require that inflation began and ended homogeneously within our observable universe--which, since our observable universe was much, much, much, much smaller when inflation ended (smaller than an atomic nucleus, IIRC), does not present any problem.
20. Nov 12, 2014
### julcab12
Actually that's how they describe and imagine singularity -- infinitesimal scale/density and the direction of the pure equation. Unless some dynamic is introduced. It can either bounce, tunnel or both depending on how we model reality. We really don't know. IMO It is more intuitive to think that our universe is in a state of transformations/transitions and identify each extent/event in a temporal fashion than weak origins and creation.
Matrix for the reflection over the null space of a matrix
I'm working on my latest linear algebra assignment and one question is as follows:
In $\mathbb R^3$ let $R$ be the reflection over the null space of the matrix $A = \begin{bmatrix} 4 & 4 & 5 \end{bmatrix}$.
Find the matrix which represents R using standard coordinates.
I am familiar with the fact that the matrix for the orthogonal projection onto a subspace is given by $[P] = A(A^TA)A^T$, where $A$ is a matrix with columns that form a basis for the space in question. However, I am not familiar with the concept of a "reflection over the null space". How does this compare with the matrix for the orthogonal projection onto the null space? Is it a similar process? Thanks for any help.
First of all, the formula should be $$P = B(B^TB)^{-1}B^T$$ where the columns of $B$ form of a basis of $ker(A)$.
Think geometrically when solving it. Points are to be reflected in a plane which is the kernel of $A$ (see third item):
• find a basis $v_1, v_2$ in $ker(A)$ and set up $B = (v_1 \, v_2)$
• build the projector $P$ onto $ker(A)$ with above formula
• geometrically the following happens to a point $x = (x_1 \, x_2 \, x_3)$ while reflecting in the plane $ker(A)$: $x$ is split into two parts - its projection onto the plane and the corresponding orthogonal part of $x$. Then flip the direction of this orthogonal part: $$x = Px + (x - Px) \mapsto Px - (x-Px) \rightarrow x \mapsto Px - (I-P)x = (2P-I)x$$ So, the matrix looked for is $$2P-I$$
• Ah, you beat me to it! I think I will leave my answer as well because it gives a slightly different perspective on the same idea, but you were first. +1 – RCT Mar 25 '18 at 3:20
I could imagine that there's some ugly formula for this, but I don't know it, and it's probably more instructive to solve the problem from basic principles anyway, so let's try that! My first thought when reading this problem is to start by changing to a basis in which it's obvious what the reflection does.
Note that multiplication by $A$ is just dotting with the vector $(4,4,5),$ so the kernel of $A$ is precisely $(4,4,5)^\perp.$ So $(4,4,5)$ is orthogonal to the plane of reflection, and thus the reflection simply negates it.
Now extend $\{(4,4,5)\}$ to a basis for $\mathbb{R}^3$ by picking a basis for the null space itself. For example, $(0,-5,4)$ and $(-5,0,4)$ are two independent vectors killed by $A.$ Since these live in the plane of reflection, they are fixed by the reflection.
So in our new basis, the matrix for the reflection is simply $$R' = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Now to write the matrix in the standard basis, all you have to do is compute the inverse of the change of basis matrix $$P = \begin{bmatrix} 4 & 0 & -5 \\ 4 & -5 & 0 \\ 5 & 4 & 4 \end{bmatrix}$$ that took the standard basis to our preferred basis. It ends up being $$P^{-1} = \frac{1}{285}\begin{bmatrix} 20 & 20 & 25 \\ 16 & -41 & 20 \\ -41 & 16 & 20 \end{bmatrix}.$$
So $R$ is obtained by changing to the new basis (multiplying by $P^{-1}$), applying $R',$ then changing back to the standard basis (multiplying by $P$); that is, $R = PR'P^{-1}.$
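Both answers can be cross-checked numerically; here is a minimal sketch (assuming Python/numpy; I write the change-of-basis matrix as C to avoid clashing with the projector P from the first answer):

```python
import numpy as np

# Basis of ker(A) for A = [4 4 5], as columns of B.
B = np.array([[ 0.0, -5.0],
              [-5.0,  0.0],
              [ 4.0,  4.0]])
P = B @ np.linalg.inv(B.T @ B) @ B.T       # orthogonal projector onto ker(A)
R1 = 2 * P - np.eye(3)                     # reflection, first answer

# Second answer: basis (4,4,5), (0,-5,4), (-5,0,4) as columns of C.
C = np.array([[4.0,  0.0, -5.0],
              [4.0, -5.0,  0.0],
              [5.0,  4.0,  4.0]])
Rp = np.diag([-1.0, 1.0, 1.0])
R2 = C @ Rp @ np.linalg.inv(C)             # change basis, reflect, change back

print(np.allclose(R1, R2))                 # True: both routes agree
print(R1 @ np.array([4.0, 4.0, 5.0]))      # ~[-4 -4 -5]: the normal is negated
print(np.allclose(R1 @ B, B))              # True: the plane ker(A) is fixed
```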
# Solving a simple equation
I am just starting to study physics and I found this equation:
$$x = 8 - 6t + t^2$$
If possible, please explain it step by step.
Sorry if it's too simple.
-
## migrated from physics.stackexchange.comMay 3 '13 at 13:20
This question came from our site for active researchers, academics and students of physics.
We rewrite $x=8-6t+t^2=t^2-6t+8=(t-2)(t-4)$, where the last step is obtained by factoring. Assuming you want to solve $x=0$, then either $t-2=0$ or $t-4=0$. This leads to the two solutions $t=2$ and $t=4$. |
[ACCEPTED]-Using environment variables for .config file in .NET-.net
Score: 29
Yes, that's possible! Suppose you have something like that in your config:
<configuration>
<appSettings>
  <!-- illustrative entry: the value can contain any %VARIABLE% tokens -->
  <add key="mypath" value="%MYPATH%" />
</appSettings>
</configuration>
Then you can easily get the path with:
var pathFromConfig = ConfigurationManager.AppSettings["mypath"];
var expandedPath = Environment.ExpandEnvironmentVariables(pathFromConfig);
ExpandEnvironmentVariables(string s) does the magic by replacing all environment variables within a string with their current values.
Score: 7
Is this a configuration entry that you're reading, or is .NET reading it? If you're reading it yourself, you can do the appropriate substitution yourself (using Environment.ExpandEnvironmentVariables to do the lot, or Environment.GetEnvironmentVariable if you want to be more selective).

If it's one that .NET will be reading, I don't know of any way to make it expand environment variables. Is the config file under your control? Could you just rewrite it?

In fact, even if you can do the substitution, is that really what you want to do? If you need to specify the full path to a DLL, I suspect you'll need to find it via the DLLPATH (checking for its presence in each part of the path) and then substitute %DLLPATH%\Foo.dll with the full path to Foo.dll.
# Inserting a picture in an equation [duplicate]
Possible Duplicate:
Can I insert an image into an equation?
Is there any method to insert a picture in an equation? I am trying to write an equation where one of the elements of the equation is a small image. I need to obtain a pattern like this:
2 * image1 + 3 * image2 = image3 (equation number)
Please note, one simple way is to make a figure, put all of these elements in it, and use \includegraphics to load it into the LaTeX document as a figure; however, I need to have an equation with the above structure, not a figure.
-
## marked as duplicate by David Carlisle, Werner, Thorsten, lockstep, barbara beeton Oct 25 '12 at 18:32
I answered below but I think the referenced question is essentially a duplicate. If the answers there solve your problem we'll close this one as a duplicate. – David Carlisle Oct 25 '12 at 18:11
You can use \includegraphics in math mode.
Any mechanism for moving a box would work, perhaps the adjustbox package is the easiest. – David Carlisle Oct 25 '12 at 18:13 |
# Algorithm for directly finding the leading eigenvector of an irreducible matrix
According to the Perron-Frobenius theorem, a real matrix with only positive entries (or one with non-negative entries with a property called irreducibility) will have a unique eigenvector that contains only positive entries. Its corresponding eigenvalue will be real and positive, and will be the eigenvalue with greatest magnitude.
I have a situation where I'm interested in such an eigenvector. I'm currently using numpy to find all the eigenvalues, then taking the eigenvector corresponding to the one with largest magnitude. The trouble is that for my problem, when the size of the matrix gets large, the results start to go crazy, e.g. the eigenvector found that way might not have all positive entries. I guess this is due to rounding errors.
Because of this, I'm wondering if there's an algorithm that can give better results by making use of the facts that $(i)$ the matrix has non-negative entries and is irreducible, and $(ii)$ we're only looking for the eigenvector whose entries are positive. Since there are algorithms that can make use of other matrix properties (e.g. symmetry), it seems reasonable to think this might be possible.
While writing this question it occurred to me that just iterating $\nu_{t+1} = \frac{A\nu_t}{|A\nu_t|}$ will work (starting with an initial $\nu_0$ with positive entries), but I imagine that with a large matrix the convergence will be very slow, so I guess I'm looking for a more efficient algorithm than this. (I'll try it though!)
Of course, if the algorithm is easy to implement and/or has been implemented in a form that can easily be called from Python, that's a huge bonus.
Incidentally, in case it makes any kind of difference, my problem is this one. I'm finding that as I increase the matrix size (finding the eigenvector using Numpy as described above) it looks like it's converging, but then suddenly starts to jump all over the place. This instability gets worse the smaller the value of $\lambda$.
The algorithm you describe that computes $x^{(k+1)}=\frac{Ax^{(k)}}{\|Ax^{(k)}\|}$ is of course what is called the power method. It will converge in your case if you have a non-degenerate largest eigenvalue. Furthermore, if you start with an initial guess $x^{(0)}$ that has only positive entries, you are guaranteed that all future iterates are also strictly positive and, moreover, that round-off should not be an issue since in every matrix-vector product you only ever add up positive terms. In other words, this method must work if your problem has the properties you describe.
• Won't the convergence be quite slow, though? My matrices are potentially huge, since I'm trying to approximate the limit of infinite matrix size. (But then, maybe it's not slow, I haven't tried yet. I'll do it this afternoon if I get a chance.) Feb 10 '14 at 4:47
• The convergence rate depends on the ratio $\lambda_1/\lambda_2$, where $\lambda_1$ is the largest eigenvalue and $\lambda_2$ the next largest, provided that the eigenspace corresponding to $\lambda_1$ is 1-dimensional. For some families of matrices, the ratio may stay the same even as the matrix size increases. Feb 10 '14 at 5:31
• It turns out that it isn't all that slow for my problem - but I found a way to make it much faster anyway, which I've posted as an answer. Feb 10 '14 at 11:50
I tried the power method, as described in my question and in Wolfgang Bangerth's answer. It turns out that the convergence isn't unacceptably slow for my application, though it does depend on the parameters. So in a way this was kind of a dumb question.
However, I noticed that there's a way to exponentially increase the speed of this algorithm, by doing repeated squaring of the matrix. That is, let $B_0=A$ and iterate $B_{n+1}=\frac{B_n^2}{\|B_n^2\|}$, where $\|\cdot\|$ is whatever matrix norm you feel like. (I just summed all the elements, since they're all positive anyway.) This very rapidly converges to a matrix whose columns are each proportional to the leading eigenvector. This is because $B_n \nu_0 \propto A^{2^n}\nu_0$, so iterating this algorithm for $n$ steps is the same as doing power iteration for $2^n$ steps.
Although multiplying large matrices can be slow (especially in numpy, unfortunately), I'm finding that this tends to converge pretty nicely after around 10 to 15 iterations, so on the whole it's pretty fast.
• Alternatively, you could accelerate the method by using inverse power iteration. In your case, it amounts to solving the system A - I (or A - 0.999 I) at each iteration. Feb 10 '14 at 14:53
• Are you multiplying your matrices explicitly? I cannot imagine a scenario where it is faster than matrix-vector multiplications. The algorithm you describe might converge in fewer iterations, but each iteration involves calculating $B_n \times B_n$, and that should be much more expensive than $B_n v$. Maybe I am missing something... Feb 10 '14 at 15:28
• @sebas the choice is between 15 matrix multiplications, or $2^{15}$ matrix-vector ones. I guess for large matrices the vector version will win out, but for smallish ones this is way faster. Feb 10 '14 at 15:54
• @Nathaniel It depends on the size of the matrix. I was just wondering how large are the matrices for your problem. I know that for large enough matrices just 2 or 3 matrix-matrix multiplications is completely prohibitive. Moreover if your matrices are sparse, because the matrix-matrix product will increase the fill-in of the resulting matrix. Just a comment I thought can be useful. Feb 10 '14 at 16:02
• @sebas honestly I think you're right, my idea isn't worthwhile once the matrices are above 500x500 or so. But for exploring the parameter space with a size of around 100 it's quite handy. Feb 10 '14 at 16:05 |
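For reference, here is a minimal sketch of the two approaches discussed above (assuming Python/numpy; A is any non-negative irreducible matrix you supply, and the tolerance and iteration counts are arbitrary choices of mine):

```python
import numpy as np

def leading_eigenvector_power(A, tol=1e-12, max_iter=100000):
    """Power iteration: v <- A v / ||A v||, started from a positive vector."""
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(max_iter):
        w = A @ v
        w /= w.sum()                 # 1-norm; entries stay strictly positive
        if np.abs(w - v).max() < tol:
            break
        v = w
    return w

def leading_eigenvector_squaring(A, n_squarings=15):
    """Repeated squaring: B <- B^2 / ||B^2||; columns converge to the eigenvector."""
    B = A / A.sum()
    for _ in range(n_squarings):
        B = B @ B
        B /= B.sum()
    v = B @ np.ones(A.shape[0])      # any positive combination of the columns
    return v / v.sum()

A = np.random.rand(200, 200) + 0.01  # strictly positive test matrix
print(np.abs(leading_eigenvector_power(A) - leading_eigenvector_squaring(A)).max())
```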
# Chi-Square test with very large df
I am trying to use the Chi-Square test for independence of attributes. My dataset has only two columns, but several thousand rows. Consequently the degrees of freedom are also very high.
When I use chisq.test in R, I get the following warning message:
Warning message: In chisq.test(myVariable) : Chi-squared approximation may be incorrect
Am I using the correct test here? Are there better alternatives? I understand that the chi-square distribution is practically a Normal distribution in this case!
Thanks and regards,
• Actually, that warning is because the chi-square distribution (whether or not it is close to Normally distributed) might not be a good approximation to the actual distribution of the $\chi^2$ statistic, which means you shouldn't trust the computed p-value. You might find it more fruitful to ponder what independence of thousands of rows actually means and what you would learn from testing it. (I'm curious about what your answer might be.) – whuber Sep 17 '15 at 5:06
• The last couple of sentences in whuber's comment is probably the most useful/ to the point thing one might say. I expect that either one of your columns or (more likely) some of your rows have a small total. What are your two column totals? What proportion of your row totals are <5? <1? – Glen_b Sep 17 '15 at 5:24
• I made sure that all the frequencies are > 5 – PTDS Sep 17 '15 at 13:08
• I am trying to characterize the browsing behavior of several people. The rows are different websites (hence there are thousands of them) and the two columns are two conditions when those sites were visited (e.g., when the people are traveling vs when they are not traveling). The cell values are the frequencies of visit to the websites. – PTDS Sep 17 '15 at 13:14
• My objective is to see if the browsing behavior is different under these two conditions. Am I following the correct approach? – PTDS Sep 17 '15 at 13:16 |
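Following up on the comments about expected counts, here is a minimal sketch (assuming Python with scipy; the numbers in `table` are made up and just stand in for the websites-by-condition frequency table described above):

```python
import numpy as np
from scipy.stats import chi2_contingency

# One row per website, two columns (e.g. travelling vs. not travelling).
table = np.array([[12, 30],
                  [ 5,  2],
                  [40, 18]])

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
# The R warning is about EXPECTED counts, not observed ones:
print((expected < 5).mean())   # fraction of cells with expected count < 5
print((expected < 1).mean())   # fraction of cells with expected count < 1
```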
#### Vol. 267, No. 1, 2014
Normal states of type III factors
### Yasuyuki Kawahigashi, Yoshiko Ogata and Erling Størmer
Vol. 267 (2014), No. 1, 131–139
##### Abstract
Let $M$ be a factor of type III with separable predual and with normal states $\phi_1,\dots,\phi_k,\omega$, with $\omega$ faithful. Let $A$ be a finite-dimensional $C^*$-subalgebra of $M$. Then it is shown that there is a unitary operator $u\in M$ such that $\phi_i\circ \operatorname{Ad} u=\omega$ on $A$ for $i=1,\dots,k$. This follows from an embedding result of a finite-dimensional $C^*$-algebra with a faithful state into $M$ with finitely many given states. We also give similar embedding results for $C^*$-algebras and von Neumann algebras with faithful states into $M$. A similar result holds for a factor of type II$_1$ instead of type III.
Dedicated to Masamichi Takesaki on the occasion of his eightieth birthday.
##### Keywords
von Neumann algebra, type III factor, normal state
Primary: 46L30 |
# Does every countable subset of the set of all countable limit ordinals have the least upper bound in it?
I'm sorry if the question is somewhat trivial; I just feel uncertain about these ordinals all the time. Is the answer to the following question "yes"?

Denote by A the set of all countable limit ordinals. Does every countable subset of A have its least upper bound in A?
Yes, assuming the axiom of choice (countable choice is enough): the supremum of a countable set of countable ordinals is itself countable, since a countable union of countable sets is countable, and the supremum of a set of limit ordinals is again a limit ordinal. So the least upper bound lies in A.

However, if we don't assume the axiom of choice, then it is consistent that $\omega_1$ is the countable limit of countable ordinals, in which case the answer would be negative.
• Oh, mmm, well, I'm sorry for the question perhaps being a bit ambiguous, let me clarify: denote by A the set of all countable limit ordinals. Does then any countable subset of A have its least upper bound in A (I mean, not merely in itself, since $\omega^2$ is still countable and a limit ordinal)? – W_D Jul 7 '13 at 11:38
Article
# Indication of Reactor $\bar{\nu}_e$ Disappearance in the Double Chooz Experiment
Department of Physics, Tokyo Institute of Technology, Tokyo, 152-8551, Japan.
Physical Review Letters (Impact Factor: 7.51). 03/2012; 108(13):131801. DOI: 10.1103/PhysRevLett.108.131801
Source: PubMed
ABSTRACT
The Double Chooz experiment presents an indication of reactor electron antineutrino disappearance consistent with neutrino oscillations. An observed-to-predicted ratio of events of $0.944\pm0.016(\text{stat})\pm0.040(\text{syst})$ was obtained in 101 days of running at the Chooz nuclear power plant in France, with two 4.25 GW$_{\text{th}}$ reactors. The results were obtained from a single 10 m$^3$ fiducial volume detector located 1050 m from the two reactor cores. The reactor antineutrino flux prediction used the Bugey4 flux measurement after correction for differences in core composition. The deficit can be interpreted as an indication of a nonzero value of the still unmeasured neutrino mixing parameter $\sin^2 2\theta_{13}$. Analyzing both the rate of the prompt positrons and their energy spectrum, we find $\sin^2 2\theta_{13}=0.086\pm0.041(\text{stat})\pm0.030(\text{syst})$, or, at 90% C.L., $0.017<\sin^2 2\theta_{13}<0.16$.
### Full-text
Available from: Roberto Santorelli, Jan 28, 2015
##### Article: Combined Analysis of all Three Phases of Solar Neutrino Data from the Sudbury Neutrino Observatory
ABSTRACT: We report results from a combined analysis of solar neutrino data from all phases of the Sudbury Neutrino Observatory. By exploiting particle identification information obtained from the proportional counters installed during the third phase, this analysis improved background rejection in that phase of the experiment. The combined analysis resulted in a total flux of active neutrino flavors from $^8$B decays in the Sun of $(5.25 \pm 0.16(\text{stat.})^{+0.11}_{-0.13}(\text{syst.}))\times10^6\ \text{cm}^{-2}\,\text{s}^{-1}$. A two-flavor neutrino oscillation analysis yielded $\Delta m^2_{21} = (5.6^{+1.9}_{-1.4})\times10^{-5}\ \text{eV}^2$ and $\tan^2\theta_{12}= 0.427^{+0.033}_{-0.029}$. A three-flavor neutrino oscillation analysis combining this result with results of all other solar neutrino experiments and the KamLAND experiment yielded $\Delta m^2_{21} = (7.41^{+0.21}_{-0.19})\times10^{-5}\ \text{eV}^2$, $\tan^2\theta_{12} = 0.446^{+0.030}_{-0.029}$, and $\sin^2\theta_{13} = (2.5^{+1.8}_{-1.5})\times10^{-2}$. This implied an upper bound of $\sin^2\theta_{13} < 0.053$ at the 95% confidence level (C.L.).
Physical Review C 09/2011; 88(2). DOI:10.1103/PhysRevC.88.025501 · 3.73 Impact Factor
##### Article: Deviation from Tri-Bimaximal Mixing and Large Reactor Mixing Angle
ABSTRACT: Recent observations for a non-zero $\theta_{13}$ have come from various experiments. We study a model of lepton mixing with a 2-3 flavor symmetry to accommodate the sizable $\theta_{13}$ measurement. In this work, we derive deviations from the tri-bimaximal (TBM) pattern arising from breaking the flavor symmetry in the neutrino sector, while the charged leptons contribution has been discussed in a previous work. Contributions from both sectors towards accommodating the non-zero $\theta_{13}$ measurement are presented.
Nuclear Physics B 11/2011; 874(3). DOI:10.1016/j.nuclphysb.2013.05.022 · 3.93 Impact Factor
##### Article: Combining Accelerator and Reactor Measurements of theta_13; The First Result
ABSTRACT: The lepton mixing angle $\theta_{13}$, the only unknown angle in the standard three-flavor neutrino mixing scheme, is finally measured by the recent reactor and accelerator neutrino experiments. We perform a combined analysis of the data coming from the T2K, MINOS, Double Chooz, Daya Bay and RENO experiments and find $\sin^2 2\theta_{13} = 0.096 \pm 0.013\ (\pm 0.040)$ at $1\sigma$ ($3\sigma$) CL, and that the hypothesis $\theta_{13} = 0$ is now rejected at a significance level of $7.7\sigma$. We also discuss the near future expectation on the precision of the $\theta_{13}$ determination by using expected data from these ongoing experiments.
Journal of High Energy Physics 11/2011; 2012(5). DOI:10.1007/JHEP05(2012)023 · 6.11 Impact Factor |
# Improper Fractions - Definition, Conversion, and Examples
## Introduction: Improper Fractions
Improper fractions are fractions whose numerator is greater than (or equal to) the denominator. A fraction represents a part of a whole. For example, suppose you have a cake cut into 5 pieces and, after eating three pieces, you are left with two pieces. How do you represent the remaining part? It is simple: you write it as ⅖. Here, ⅖ is the fraction of the cake left with you.
In another case, suppose you are then given one more whole cake; now you have $\frac{2}{5}$ + 1 = $\frac{7}{5}$ cakes. Here you notice that the numerator ‘7’ is greater than the denominator ‘5’, which means that this fraction is improper. We can also write it as $1\tfrac{2}{5}$ cakes. This is how we understand what an improper fraction is.
In real life, mixed fractions can be easily understood when compared to improper fractions. However, we can easily convert any improper fraction to a mixed number or vice-versa by following some basic steps that you will study in later sections on this page with solved examples.
## Concept of Improper Fractions
From the above text, we understand that improper fractions have numerators greater than or equal to their denominators. For example, $\frac{11}{5}$ and $\frac{13}{4}$ are improper fractions.
Numerically, we find that these fractions are always equal to or greater than 1. We can write a mixed fraction from an improper fraction. Such a fraction carries a combination of a natural number and a proper fraction. The simplified form of an improper fraction is a mixed fraction; for example, $1\tfrac{2}{5}$ and $5\tfrac{1}{4}$ are mixed fractions. Numerically, we notice that a mixed fraction is always greater than 1. Also, we can rewrite a mixed fraction in the form of an improper fraction.
## Steps to Convert Improper Fractions to Mixed Numbers
Please note that the denominator of the mixed fraction form is always the same as that of the original fraction, i.e., of an improper fraction. Mixed numbers are the simplified form of improper fractions, that’s why it becomes important to learn this conversion. For converting an improper fraction to a mixed number, we need to follow the below-listed steps:
Step 1- Divide the numerator by the denominator. For example, if the fraction is $\frac{21}{4}$, divide 21 by 4.
Step 2- Now, when you divide the numerator, you get a quotient and a remainder. Here, you get the quotient as 5 and the remainder as 1.
Step 3- Now, we arrange the values of the quotient, remainder, and divisor, i.e., 5, 1, and 4 in the following manner to express a fraction as a mixed fraction:
Mixed fraction = Quotient$\tfrac{\text{Remainder}}{\text{Divisor}}$
Here, for the above values, you get $5\tfrac{1}{4}$ as a mixed fraction corresponding to the improper fraction of $\frac{21}{4}$
You can see the same division method applied to the improper fraction $\frac{13}{4}$: dividing 13 by 4 gives quotient 3 and remainder 1, so $\frac{13}{4} = 3\tfrac{1}{4}$.

(Figure: Converting an Improper Fraction to a Mixed Fraction)
## How To Solve Improper Fractions?
Solving an improper fraction is like solving any proper fraction; the only difference is that we simplify the answer and write it as a mixed number.
Let's solve the improper fraction: 9/5 + 8/5.
Step 1: We notice that we have the same denominator for both the fractions. Therefore, we will directly add the numerators 9 and 8. We get 17. Thus, on adding improper fractions, we get $\frac{17}{5}$
Step 2: Simplifying the improper fraction (i.e., dividing 17 by 5), we will get 3 as a whole (which is a quotient), 2 as a numerator (remainder), and 5 as the denominator (divisor).
Using the formula Mixed fraction = Quotient$\tfrac{\text{Remainder}}{\text{Divisor}}$,
we get: $3\frac{2}{5}$, which is a mixed fraction corresponding to $\frac{17}{5}$
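The same steps can be written as a tiny program (a sketch in Python; divmod returns the quotient and the remainder in one call):

```python
def improper_to_mixed(numerator, denominator):
    quotient, remainder = divmod(numerator, denominator)
    return quotient, remainder, denominator   # read as: quotient remainder/denominator

print(improper_to_mixed(21, 4))   # (5, 1, 4)  ->  5 1/4
print(improper_to_mixed(17, 5))   # (3, 2, 5)  ->  3 2/5
print(improper_to_mixed(13, 4))   # (3, 1, 4)  ->  3 1/4
```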
## Convert Improper Fraction to Decimal
Example 1: Convert 10/4 improper fraction to a decimal.
The first step we need to do is, divide 10 by 4. We know that 10 ÷ 4 = 2.5. Now, let us follow the long-division method here:
(Figure: Converting an Improper Fraction to a Decimal by long division)
Here, 10/4 is an improper fraction and 2.5 is a decimal. We can also write 2.5 as 5/2, which is an improper fraction, or as $2\tfrac{1}{2}$, which is a mixed fraction because 2 is a whole number and $\frac{1}{2}$ is a proper fraction.
From the above text, we understand that solving improper fractions is similar to performing arithmetic operations on numbers and then simplifying the value of the answer so obtained. There are four arithmetic operations: addition, subtraction, multiplication, and division.
## FAQs on Improper Fractions - Definition, Conversion, and Examples
1. What are proper and improper fractions?
Proper fractions are fractions that have a numerator lesser than the denominator, for example 2/3, 4/7, 8/19, and so on.
However, improper fractions are the opposite of proper fractions, in these fractions numerator ≥ denominator, for example, 9/5, 6/5, etc.
2. How to Subtract Improper Fractions?
We can subtract improper fractions by first taking the LCM of the denominators to convert them into like fractions with a common denominator, and then subtracting the numerators and writing the common denominator as the denominator of the difference obtained.
Question
# On decreasing the radius of a circle by 30 %, its area is decreased by (a) 30 % (b) 60 % (c) 45% (d) none of these
Solution
Let the radius of the circle be r cm. Then the area of the circle is $\pi r^2$. When the radius decreases by 30%, the new radius is $0.7r$, and the area of the circle with the new radius is $\pi (0.7r)^2 = 0.49\pi r^2$. The change in the area is $\pi r^2 - 0.49\pi r^2 = 0.51\pi r^2$, so the percentage change in area is $\frac{0.51\pi r^2}{\pi r^2} \times 100\% = 51\%$. Hence, the correct option is (d).
# Simple groups, permutation groups, and probability
```@article{Liebeck1999SimpleGP,
title={Simple groups, permutation groups, and probability},
author={Martin W. Liebeck and Aner Shalev},
journal={Journal of the American Mathematical Society},
year={1999},
volume={12},
pages={497-520}
}```
• Published 1999
• Mathematics
• Journal of the American Mathematical Society
In recent years probabilistic methods have proved useful in the solution of several problems concerning finite groups, mainly involving simple groups and permutation groups. In some cases the probabilistic nature of the problem is apparent from its very formulation (see [KL], [GKS], [LiSh1]); but in other cases the use of probability, or counting, is not entirely anticipated by the nature of the problem (see [LiSh2], [GSSh]). In this paper we study a variety of problems in finite simple groups…
136 Citations
Data Structure Questions and Answers-Fibonacci using Recursion
Question 1
Suppose the first Fibonacci number is 0 and the second is 1. What is the sixth Fibonacci number?
A 5 B 6 C 7 D 8
Question 1 Explanation:
The sequence runs 0, 1, 1, 2, 3, 5, so the sixth Fibonacci number is 5.
Question 2
Which of the following is not a Fibonacci number?
A 8 B 21 C 55 D 14
Question 2 Explanation:
8, 21 and 55 are all Fibonacci numbers; 14 is not.
Question 3
Which of the following methods can be used to find the nth Fibonacci number?
A Dynamic programming B Recursion C Iteration D All of the mentioned
Question 3 Explanation:
All of the mentioned methods (dynamic programming, recursion and iteration) can be used to find the nth Fibonacci number.
Question 4
Consider the following iterative implementation to find the nth fibonacci number:
    int main() {
        int n = 10, i;
        if (n == 1)
            printf("0");
        else if (n == 2)
            printf("1");
        else {
            int a = 0, b = 1, c;
            for (i = 3; i <= n; i++) {
                c = a + b;
                .....;
                .....;
            }
            printf("%d", c);
        }
        return 0;
    }
Which of the following lines should be added to complete the above code?
A. c = b, b = a    B. a = b, b = c    C. b = c, a = b    D. a = b, b = a
Question 4 Explanation:
The lines "a = b" and "b = c" should be added to complete the above code.
Question 5
Which of the following recurrence relations can be used to find the nth fibonacci number?
A. F(n) = F(n) + F(n - 1)    B. F(n) = F(n) + F(n + 1)    C. F(n) = F(n - 1)    D. F(n) = F(n - 1) + F(n - 2)
Question 5 Explanation:
The relation F(n) = F(n - 1) + F(n - 2) can be used to find the nth fibonacci number.
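A minimal recursive sketch of this recurrence (my own helper function, using the same convention F(1) = 0, F(2) = 1 as in Question 1):

    /* Recursive Fibonacci: F(n) = F(n - 1) + F(n - 2), with F(1) = 0 and F(2) = 1. */
    int fib(int n) {
        if (n == 1) return 0;
        if (n == 2) return 1;
        return fib(n - 1) + fib(n - 2);
    }

Note that this direct recursion recomputes the same values many times; the iterative and dynamic programming approaches from Question 3 avoid that.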
# Possible solutions for finding all points (given by lat, long) that are within a radius of a main point
let's say that I have:
• M - Main point - given by coordinates (lat, long)
• C - Collection of points - also given by coordinates (lat, long)
• R - Radius - maximum search distance
I'm trying to find all points that are within the radius of the main point (its neighbors): algorithm(main_point, collection_of_points, radius) --> neighbors
The obvious solution is to calculate the distance to every point and select those whose distance is at most R, but I believe there are options with better performance.
Do you know any possible solutions? I'm open for every idea (not only the best one).
## EDIT:
I should add that I'm looking for a solution where the points are highly dynamic (every point is a mobile phone or a car). I'd like to update a point whenever necessary (i.e. when it has moved), so the frequency of updates depends on the number of connected devices, as does the query rate. Let's say the server will get an update request every 5 seconds and a query request every 20 seconds.
## Visualisation
• This is a well-explored topic. What research have you done? – Raphael Oct 14 '16 at 10:38
• @Raphael, I've found a lot of information about R-trees, but it seems overly complicated for my problem (but of course I'm going to read more about them). I like idea of GeoHash (for example Redis database is using it), but this solution has few edge cases. I've asked here, because there is maybe some good solution, that I don't know. – pfff Oct 14 '16 at 10:55
• Define "overly complicated". In general, you will have to trade off different cost measures. – Raphael Oct 14 '16 at 12:08
• @Raphael If I understood correctly the idea of R-trees, every insertion action depends on state of tree, so (my data are dynamic) every change will result in the need to rebuild the tree. I don't know if it's a good idea in my situation. – pfff Oct 16 '16 at 14:29
You can speed up your search by using a datastructure like a kd-tree or an r-tree. The basic idea is to divide your space into boxes and store the mapping from points to boxes (and back) in a way that allows for quick lookup times. Then you don't have to calculate the distances to all points to see which are too far away, you can restrict your search to points in the same (or adjacent) boxes.
But note that the naive approach is actually pretty good in this case, even if it's asymptotically not optimal. CPUs are really good at summing squares and it's trivial to throw more cores at the problem if you need more speed. So unless you're dealing with many points, it will be hard to beat.
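To make the naive approach concrete, here is a small sketch (the function names and the 6371 km Earth radius are my own choices, not something given in the question) that filters a point collection by haversine distance:

    #include <math.h>

    #define EARTH_RADIUS_KM 6371.0
    #define DEG_TO_RAD (3.14159265358979323846 / 180.0)

    /* Great-circle (haversine) distance in km between two (lat, lon) points given in degrees. */
    static double haversine_km(double lat1, double lon1, double lat2, double lon2) {
        double dlat = (lat2 - lat1) * DEG_TO_RAD;
        double dlon = (lon2 - lon1) * DEG_TO_RAD;
        double a = sin(dlat / 2) * sin(dlat / 2) +
                   cos(lat1 * DEG_TO_RAD) * cos(lat2 * DEG_TO_RAD) * sin(dlon / 2) * sin(dlon / 2);
        return 2.0 * EARTH_RADIUS_KM * asin(sqrt(a));
    }

    /* Writes into out[] the indices of all points within radius_km of (mlat, mlon); returns how many. */
    static int neighbors(double mlat, double mlon, const double pts[][2], int n,
                         double radius_km, int out[]) {
        int count = 0;
        for (int i = 0; i < n; i++)
            if (haversine_km(mlat, mlon, pts[i][0], pts[i][1]) <= radius_km)
                out[count++] = i;
        return count;
    }

A grid or geohash index only changes which candidate points are fed into this loop; the distance test itself stays the same.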
• Is it good idea to use such a structure when points are dynamic? Let's say that every point represents mobile phone or car. – pfff Oct 14 '16 at 11:55
• @pfff Depends on the ratio of updates and queries, obviously. If you can make any assumptions on that, please add them to the question. That said, in a mobile setting you are unlikely to use global information, anyway; just rely on transmission range limits and find close nodes locally. – Raphael Oct 14 '16 at 12:09
• @pfff If your points are reasonably uniformly distributed you can go with a very simple grid instead of a tree structure. – adrianN Oct 14 '16 at 12:19
• @Raphael, "If you can make any assumptions on that, please add them to the question." Added. "Just rely on transmission range limits and find close nodes locally." Can you elaborate? "You can go with a very simple grid instead of a tree structure." So this solution will be similar to GeoHash, right? – pfff Oct 14 '16 at 12:56
• @pfff Connect every node with all others it can reach. Simple. Depends on what your semantics of "close" is, of course. – Raphael Oct 14 '16 at 13:47
Depending on your practical situation, once you have examined all the points you might not just determine if they are within the radius, but how long it would at least take for them to either leave or enter the radius. Then you only re-examine a point after that time interval has gone by, and classify it again.
For example, if the radius is 1km, and a car is 2km away, you might figure out that it would take 41 seconds to enter the radius (while not exceeding the speed limit excessively), so if you check every five seconds, you check that car after 45 seconds.
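A sketch of that idea (the speed cap vmax_kmh and the function name are my own assumptions, not part of the question):

    #include <math.h>

    /* Seconds until a point currently dist_km from the main point could possibly
       cross the radius boundary, assuming it moves at most vmax_kmh. */
    static double seconds_until_recheck(double dist_km, double radius_km, double vmax_kmh) {
        double gap_km = fabs(dist_km - radius_km);  /* distance to the boundary circle */
        return 3600.0 * gap_km / vmax_kmh;
    }

With dist_km = 2, radius_km = 1 and vmax_kmh of roughly 88, this gives about 41 seconds, matching the example above; rounding up to the next 5-second check slot gives the 45 seconds mentioned.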
https://en.wikipedia.org/wiki/Spatial_database#Spatial_index
This link describes the most popular ways to index spatial data.
# Which of the following is closest in value to (9^9)-(9^2)?
Which of the following is closest in value to (9^9)-(9^2)?
A. 9^9
B. 9^8
C. 9^7
D. 9^6
E. 9^5
I think the answer is going to be A. The reason is that subtracting 81 from 9^9 is not going to subtract enough to get it anywhere close to 9^8. 9^8 is going to be such a huge number that when you multiply it by 9 again to get 9^9, subtracting 81 (i.e., 9^2) from it leaves a result that is still much closer to 9^9 than to any of the others.
puma wrote:
Which of the following is closest in value to (9^9)-(9^2)?
a) 9^9
b) 9^8
c) 9^7
d) 9^6
e) 9^5
i would say A as well ... i just simplified out to 9^2*(9^7 - 1) ... and the 1 is pretty much negligible ... so we go back down to 9^9 ...
gmat620 wrote:
isn't there any algebraic solution ?
We need to get approximate answer. I'm pretty sure exact number is unreachable w/o Excel spreadsheets.
bibha wrote:
1. which of the following is closest to 9^9 - 9^2?
9^9
9^8
9^7
9^6
9^5
What is the trick?>>???
IMO A, whats the OA?
clearly 9^8 <9^9 - 9^2 < 9^9
Now 9^9 - 9^2 will be closer to 9^9 if it lies between $$\frac{(9^9+9^8)}{2}$$ and 9^9, and closer to 9^8 otherwise.
$$\frac{(9^9+9^8)}{2}$$ = $$9^8*\frac{(9+1)}{2} = 9^{8} * 5$$
$$9^9 - 9^2 = 9^8( 9 - 9^{-6}) > 9^8 * 8 > 9^8*5$$
since 9^9 - 9^2 is greater than $$\frac{(9^9+9^8)}{2}$$ i.e $$9^{8} * 5$$
This lies closer to 9^9
bibha wrote:
1. which of the following is closest to 9^9 - 9^2?
9^9
9^8
9^7
9^6
9^5
What is the trick?>>???
Easier way: $$9^9 - 9^2=9^2(9^7-1)$$. Now $$9^7-1$$ is very close to $$9^7$$, hence $$9^9 - 9^2=9^2(9^7-1)\approx{9^2*9^7=9^9}$$.
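For anyone who prefers to see actual numbers, a quick check with exact values: $$9^9 = 387,420,489$$, so $$9^9 - 9^2 = 387,420,408$$, while $$9^8 = 43,046,721$$. The result is still roughly 9 times larger than the next option, so it is clearly closest to $$9^9$$. Answer: A.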
puma wrote:
Which of the following is closest in value to (9^9)-(9^2)?
a) 9^9
b) 9^8
c) 9^7
d) 9^6
e) 9^5
9^9 is way too high compared to 9^2
$$9^9$$ is far larger than $$9^2$$, so subtracting $$9^2$$ barely changes it.
So answer = A = $$9^9$$
## Stream: new members
### Topic: Best way to split this cases?
#### YH (Dec 13 2019 at 22:11):
If a, b, c ≥ 0 and I want to split into the cases where either all of them are 0 or at least one of them is > 0, what is the best way to do this?
#### Kevin Buzzard (Dec 13 2019 at 22:39):
I'd be tempted to first case on a=0 and then in the =0 case case on b=0 etc
#### YH (Dec 13 2019 at 22:51):
I realized I can do that. But then it will leave me with 7 cases where at least one of a, b, c is not zero; is there a way to somehow "unify" them? Or is this the best we can do?
#### Reid Barton (Dec 13 2019 at 22:52):
How do you want to use the statement "at least one of them is > 0"?
#### YH (Dec 13 2019 at 22:57):
Say m = a + b + c, if at least one of them is > 0 then we have m > 0. Maybe I'd better just split the cases on m.
#### Johan Commelin (Dec 14 2019 at 05:28):
So these are all nats?
#### Johan Commelin (Dec 14 2019 at 05:31):
You can do
by_cases habc : a = 0 ∧ b = 0 ∧ c = 0,
{ rcases habc with ⟨rfl, rfl, rfl⟩, sorry },
{ push_neg at habc,
  sorry }
Ultra-high Energy Cosmic Rays and Neutrinos from Gamma-Ray Bursts, Hypernovae and Galactic Shocks
(Based on a talk given at the Origin of Cosmic Rays: Beyond the Standard Model conference in San Vito di Cadore, Dolomites, Italy, 16-22 March 2014. This is not a comprehensive review of the topics in the title; it is weighted towards work in which I have been more personally involved.)
P. Mészáros, Center for Particle and Gravitational Astrophysics, Dept. of Astronomy & Astrophysics, Dept. of Physics, 525 Davey Laboratory, Pennsylvania State University, University Park, PA 16802, U.S.A.
Abstract
I review gamma-ray burst models (GRBs) and observations, and discuss the possible production of ultra-high energy cosmic rays and neutrinos in both the standard internal shock models and the newer generation of photospheric and hadronic GRB models, in the light of current constraints imposed by IceCube, Auger and TA observations. I then discuss models that have been proposed to explain the recent astrophysical PeV neutrino observations, including star-forming and star-burst galaxies, hypernovae and galaxy accretion and merger shocks.
keywords:
Cosmic rays, Neutrinos,
1 Introduction
The origin of the cosmic rays above the knee () and up to the range of ultra-high energy cosmic rays (UHECRs, ) remains a mystery. Attempts at correlating the arrival directions of UHECRs with known AGNs have so far yielded no convincing results Auger+10corr (); Auger+13anis2 (); Abuzayad+13TAcorr (). Partly for this reason, other high energy sources, which are distributed among, or connected with, more common galaxies, have been the subject of much interest. These include gamma-ray bursts (GRBs), hypernovae (HNe) and galactic shocks, the latter being due either to accretion onto galaxies (or clusters) or galaxy mergers.
An important clue for the presumed sources of UHECR would be the detection of ultra-high energy neutrinos (UHENUs) resulting from either photohadronic ($p\gamma$) or hadronuclear ($pp$) interactions of the UHECR within the host source environment and/or during propagation towards the observer. The value of this is of course that neutrinos travel essentially unabsorbed along straight lines (or geodesics) to the observer, thus pointing back at the source. Such interactions leading to neutrinos, arising via charged pions, also result in a comparable number of neutral pions leading to high energy gamma-rays, which are however more prone to subsequent degradation via cascades against low energy ambient or intergalactic photons.
The prospect of tagging UHECRs via their secondary neutrinos has recently become extremely interesting because of the announcement by IceCube IC3+13pevnu2 () of the discovery of an isotropic neutrino background (INB) at PeV and sub-PeV energies, which so far cannot be associated with any known sources, but whose spectrum is clearly well above the atmospheric neutrino background, and is almost certainly astrophysical in origin.
2 Gamma-Ray Bursts
There are at least two types of GRBs Kouveliotou+93 (), the long GRBs (LGRBs), whose γ-ray light curve lasts $\gtrsim 2$ s, and the short GRBs (SGRBs), whose light curve lasts $\lesssim 2$ s. The spectra of both peak in the MeV range, with power law extensions below and above the peak of (photon number) slopes and , the peak energy of the SGRBs being generally harder (few MeV) than those of the LGRBs (). This broken power law spectral shape, known as a Band spectrum, is accompanied in some cases by a lower energy (tens of keV) and less prominent black-body hump, and/or by a second, higher energy power law component, in the sub-GeV to GeV range, whose photon number slope is appreciably harder than the super-MeV slope, e.g. Gehrels+12sci (); Meszaros+12raa () (Fig. 1).
The MeV light curves exhibit short timescale variability down to ms, extensively charted along with the MeV spectra by the CGRO BATSE, the Swift BAT and more recently by the Fermi GBM instruments, while the GeV light curves and spectra have in the last several years been charted by the Fermi LAT instrument, e.g. Gehrels+12sci ().
An extremely interesting property shown by most of the LAT-detected bursts is that the light curves at GeV energies start with a time lag of several seconds for LGRBs, and fractions of a second for SGRB, relative to the start of the lightcurves at MeV energies, as seen in Fig. 2.
The GeV emission amounts to about 10% and 30-50% of the total energy budget of LGRBs and SGRBs respectively, and is detected in roughly 10% of the LGRBs, and in a somewhat larger percentage of SGRBs, although the GeV detection is ubiquitous in the brighter bursts and the non-detections may be due to being below the LAT sensitivity threshold Pelassa+11-latgrbrev ().
The huge energies involved in GRB led to the view that it involves a fireball of electrons, photons and baryons which expands relativistically Cavallo+78 (); Paczynski86 (); Goodman86grb (); Shemi+90 (); Narayan+92merg (); Meszaros+92tidal (), produced by a cataclysmic stellar event. The observational and theoretical work over the past twenty years has resulted in a generally accepted view of LGRBs as originating from the core collapse of massive () stars Woosley93col (); Paczynski98grbhn (); Woosley+12coll (), whose central remnant quickly evolves to a few solar mass black hole (BH), which for a fast enough rotating core results in a brief accretion episode powering a jet which breaks through the collapsing stellar envelope. This view is observationally well supported, the LGRBs arising in star-forming regions, sometimes showing also the ejected stellar envelope as a broad-line Ic supernova, a “hypernova”, whose kinetic energy is , an order of magnitude higher than that of an ordinary SN Ic or garden variety supernova.
For SGRBs, the leading paradigm is that they arise from the merger of a compact double neutron star (NS-NS) or neutron star-black hole (NS-BH) binary Paczynski86 (); Narayan+92merg (); Meszaros+92tidal (), resulting also in an eventual central BH and a briefer accretion-fed episode resulting in a jet. Observationally this is supported by the lack of an observable supernova, and by the fact that they are observed both in star-forming and in old population galaxies, often off-set from the optical image, as expected if in the merger the remnant has been kicked off and had time to move appreciably. While the SGRB origin is less firmly established than that of LGRBs, compact mergers are nonetheless widely considered the most likely explanation, which are also of great importance as a guaranteed source of gravitational waves (GWs), being the object of scrutiny by LIGO, VIRGO and other GW detectors.
The MeV radiation providing the detector trigger as well as the slightly delayed GeV radiation are jointly called the prompt emission of the GRB. In a fraction of bursts, a prompt optical flash is also detected by ground-based robotic telescopes Akerlof+99 () or by rapidly slewing multi-wavelength GRB missions such as Swift Gehrels+09araa (). The most widely accepted view of the GRB emission is that it is produced by shocks in the relativistic outflows, the simplest example of which are the external shocks where the outflow is decelerated in the external interstellar medium or in the stellar wind of its progenitor Rees+92fball (); Meszaros+93impact (). In such shocks magnetic field can be amplified, and electrons can be Fermi accelerated into a relativistic power law energy distribution, leading to broken power law spectra peaking initially in the MeV range. Both a forward and reverse shock are expected to be present, the latter producing synchrotron radiation in the optical range, while inverse Compton (IC) radiation in the shocks also produces a GeV component Meszaros+93multi (). The fast time variability of the MeV light curves is however better explained through what is called the standard internal shock model Rees+94unsteady (), which are expected to occur in the optically thin region outside the scattering photosphere of the outflow, but inside the radius of the external shocks. The radii of the photosphere, the internal shocks and the external shocks are, respectively,
$$r_{ph} \simeq (L\sigma_T/4\pi m_p c^3\eta^3) \sim 4\times 10^{12}\, L_{\gamma,52}\,\eta_{2.5}^{-3}~{\rm cm} \qquad (1)$$
$$r_{is} \simeq \Gamma^2 c t_v \sim 3\times 10^{13}\,\eta_{2.5}^{2}\, t_{v,-2}~{\rm cm} \qquad (2)$$
$$r_{es} \simeq (3E_0/4\pi n_{ext} m_p c^2\eta^2)^{1/3} \qquad (3)$$
$$\qquad \sim 2\times 10^{17}\,(E_{53}/n_0)^{1/3}\,\eta_2^{-2/3}~{\rm cm}, \qquad (4)$$
where $E_0$, $L$, $\eta$, $\Gamma$, $n_{ext}$ and $t_v$ are the burst total energy, luminosity, initial dimensionless entropy, coasting bulk Lorentz factor, external density and intrinsic variability timescale, e.g. Zhang+04grbrev (); Meszaros06grbrev (). If the prompt emission is due to an internal shock, the external shock can naturally result, via inverse Compton, in a delayed GeV component Meszaros+94gev ().
The above simple picture of internal and external shocks served well in the CGRO, HETE and Beppo-SAX satellite eras extending into the first half of the 2000 decade, including the discovery of X-ray and optical afterglows as well as the prompt optical emission, which were predicted by the models. It also accommodated fairly well the observed fact that the jet is collimated and when the Lorentz factor drops below the inverse of the opening angle the light curves steepen in a predictable manner.
It was however realized that simple internal shocks radiating via electron synchrotron had low radiative efficiency, and many bursts showed low energy slopes incompatible with synchrotron Preece+98death (); Ghisellini+99grbspec (). Attempts at resolving this included different radiation mechanisms, e.g. Medvedev00jitter (), which addresses the spectrum, or invoking a larger role for the scattering photosphere Meszaros+00phot (); Rees+05photdis (); Peer+06phot (), which addresses both the spectrum and efficiency issues. It is worth stressing that the need for such ”non standard” internal shocks or photospheres is important (a fact not widely recognized) when considering IceCube neutrino fluxes expected from GRBs.
The Swift satellite launched in 2004 had gamma-ray, X-ray and optical detectors, which revealed new features of the GRB afterglows, including an initial steep decay followed by a flatter decay portion of the X-ray afterglow, interspersed by X-ray spikes, finally blending into the previously known standard power law decay behavior. These features could be represented through the high latitude emission Kumar+00naked (), a continued or multi-Lorentz factor outflow, and continued internal shocks, e.g. Zhang+06ag (); Nousek+06ag ().
The Fermi satellite, launched in late 2008 and sensitive between , extended the MeV studies and opened wide the detailed study of bursts in the GeV band, which can last for and whose spectra extend in some cases up to in the source frame. The observed GeV-MeV photon delays from bursts at redshifts led to an interesting constraint on quantum gravity theories, excluding the first order term in of the usual effective field theory series expansion formulations Abdo+09-090510 (). This limit is only reinforced by the presence of additional astrophysical mechanisms for such delays.
In general, the GeV emission of all but the first few time bins is well represented by a forward shock synchrotron radiation Ghisellini+10grbrad (); Kumar+10fsb (). This holds also for the brightest GeV bursts ever discovered, GRB130427A. However, the first few time bins of the GeV emission He+11-090510 () may need to be ascribed to the prompt emission, which is also responsible for the MeV emission - for which, as mentioned, a self-consistent analysis must consider models going beyond the standard simple internal shock, c.f. below.
3 GRB UHE Neutrinos and Cosmic Rays
The pioneering works of Waxman95cr (); Waxman+97grbnu () have served as the basis for most of the thinking on UHE cosmic ray acceleration and VHE neutrino production in GRBs. These first-generation models, as one may call them, were based on a simplified “standard” internal shock (IS) model, where the bulk Lorentz factor and the variability timescale entering equ. (4) are either assumed or inferred from γ-ray observations, the photon spectrum is assumed to be a standard Band function and $p\gamma$ interactions occur via the $\Delta$-resonance. More detailed calculations of a diffuse neutrino flux, still based on this simple IS model but using specific electromagnetically (henceforth: EM) observed bursts serving to calibrate the neutrino to photon (or relativistic proton to electron, ) luminosity ratio were made by Guetta+04grbnu ().
The first IceCube data on GRBs using 40 strings and then 56 strings as the array completion progressed were presented in Ahlers+11-grbprob (); Abbasi+11-ic40nugrb (); Abbasi+12grbnu-nat (). The results using 215 EM-detected GRBs with fluxes normalized to the -ray fluxes indicated that the diffuse neutrino upper limits were a factor below this IS model predictions (Fig. 3), unless the proton to electron ratio was much less than . Both this and a model independent analysis using a broken power law photon spectrum with variable break energy and -resonance interaction indicated an inconsistency between the upper limits and a significant contribution of GRB to the UHE cosmic ray flux observed by Auger and HiRes. This was a very important first cut in constraining models with IceCube.
Subsequent investigations pointed out that the IS model fluxes used for this comparison were overestimated Li12grbnu (); Hummer+12nu-ic3 (). More careful consideration of the $p\gamma$ interaction in this model beyond the $\Delta$-resonance, including multi-pion and kaon channels with the entire target photon spectrum, yielded substantially lower predicted fluxes in the TeV-PeV energy range considered Hummer+12nu-ic3 (), indicating that years of observation with the full 86 string array may be needed to rule out the simple IS model (Fig. 4).
The internal shock radius depends on both the bulk Lorentz factor and the time variability of the outflow , see eq.(4). Both factors also influence the comoving magnetic field in the shock, the photon spectral peak and the photon luminosity, thus affecting the neutrino spectral flux, see Fig. (5).
Another simplification affecting the results is that the internal shocks in the above were assumed to have a constant radius, whereas they advance and expand with the flow. Calculating numerically such time-dependent IS models which accelerate CRs, including the full range of interactions and the observed -ray luminosity function and variability distributions, the current IceCube 40+59 strings upper limits are in fact compatible with GRBs contributing a significant fraction of the UHECR flux Asano+14grbcr () (Fig. 6), but the IceCube PeV neutrino flux.
More importantly, the use of the standard internal shock model, which is favored by observers for its simplicity and ease of computation, needs to be reconsidered. This model has been known for the past decade to have problems explaining the low energy -ray spectral slopes and the radiative efficiency (§2, Meszaros06grbrev ()), and alternatives free of the -ray inconsistencies have been investigated, e.g. photospheric models and modified internal shock models. The neutrino emission of baryonic photosphere models Murase08grbphotnu (); Wang+09grbphotnu () and modified IS models Murase+12reac () differs qualitatively from that of the standard IS models.
In the case of photospheric models, it is worth stressing that the spectrum is likely to deviate from a blackbody; a non-thermal of broken power law can be produced by dissipative effects, such as sub-photospheric scattering Peer+06phot (), inelastic nuclear collisions Beloborodov10pn (), photospheric shocks or magnetic dissipation Meszaros+11gevmag (), etc. The spectrum and luminosity normalization also depend on whether the dynamics of the expansion is dominated by baryonic inertia, in which case the bulk Lorentz factor initially accelerates as until it reaches the saturation value at the coasting radius ; the photospheric radius occurs generally beyond the saturation radius and is given by the first line of eq.(4).
Alternatively the dynamics might be dominated by magnetic stresses. In this case the photospheric radius depends on the value of the magnetization index , where and for extreme magnetic domination, e.g. Meszaros+11gevmag (); Gao+12photnu (). For such magnetic cases, the photospheric radius is generally in the accelerating phase , and is given by
$$r_{ph} = r_0\,\eta_T^{1/\mu}\,(\eta_T/\eta)^{1/(1+2\mu)} \qquad (5)$$
where $r_0$ is the launch radius. Fits to determine the degree of magnetic domination have been done using Fermi GBM and LAT data Veres+13fit (); Burgess+14therm (), indicating that a degree of magnetic domination does exist, which differs between bursts. A related point is that if magnetic stresses are significant in a GRB jet, this reduces the comoving photon density in the jet, allowing heavy nuclei in the jet to survive photo-dissociation Horiuchi+12nucjet (), a point of interest in view of the Auger Auger+11anis-comp () data pointing towards a heavy composition of UHECR at high energies.
The diffuse neutrino flux from both baryonic and extreme magnetic photospheric models has been computed (see Fig. 5, Gao+12photnu ()), where the extreme magnetic photospheric model is shown as red dashed lines and the baryonic photospheric model as blue dotted lines. They appear compliant with the IceCube 40+59 string upper limits, which however were calculated for a canonical Band spectral shapes, and a more spectral-specific comparison is necessary. This has been done for baryonic photospheres IC3-ICRC13nuphot (), see Fig. 7. As the observations accumulate, these constraints are getting tighter, at least for the simple IS and the simple baryonic photosphere models.
Concerning the IceCube non-detection of the EM brightest burst ever observed, GRB 130427A, a detailed calculation Gao+13-130427nu () shows that for this particular burst, except for the extreme magnetic photosphere model, the standard IS model, the baryonic photosphere model and a model-independent analysis are compatible with its non-detection.
4 Hypernovae, SFGs/SBGs, Galactic/Cluster Shocks, AGNs as UHECR/UHENU Sources
Hypernovae (henceforth HNe) are Type Ic core collapse supernovae with unusually broad lines, denoting a much higher ejecta velocity component than in usual SNeIc. This indicates a component of the ejecta reaching up to semi-relativistic velocities, with , and a corresponding inferred ejecta kinetic energy , one order of magnitude higher than that of normal SNeIc and normal SNe in general Woosley+06araa (); Nomoto+10hn-nar (). Their rate may be 1%-5% of the normal SNIc rate, i.e. as much as 500 times as frequent as GRBs Guetta+07grbhnrate (). While core collapse (collapsar) type long GRBs appear to be accompanied by HNe, the majority of HNe appear not to have a detected GRB, e.g. Soderberg+10hn (). The semi-relativistic velocity component may be due to an accretion powered jet forming in the core collapse, as in GRBs, which only for longer accretion episodes is able to break through the collapsing envelopes, while for shorter accretion episodes it is unable to break out. In both cases the jet accelerates the envelope along the jet axis more forcefully (jet-driven supernova), causing an anisotropic expansion, e.g. Campana+06-060218 (), whereas in the majority of core collapses a slow core rotation or short accretion times lead to no jet or only weak jets and a “normal” quasi-spherical SNeIc Lazzati+12hn ().
The dominant fraction of GRB-less HNe, if indeed due to a non-emerging (choked) jet, would be effectively a failed GRB, which could be detected via a neutrino signal produced in the choked, non-exiting jet, or a neutrino precursor in those collapses where the jet did emerge to produce a successful GRB Meszaros+01choked (); Horiuchi+08choked (); Murase+13choked (). Searches with IceCube have so far not found them, e.g. Taboada11choked (); Daughhetee12choked ().
An interesting aspect of HNe is that the higher bulk Lorentz factor of the ejecta leads to estimates of the maximum UHECR energy accelerated which, unlike for normal SNe, is now in the GZK range,
$$\varepsilon_{max} \simeq Z e B R \beta = 4\times 10^{18}\, Z~{\rm eV} \qquad (6)$$
especially if heavy nuclei (e.g. Fe, $Z=26$) are accelerated Wang+07crhn (); Budnik+08hn (); Wang+08crnuc (). The photon field is dilute enough so that the heavy nuclei avoid photo-dissociation Wang+08crnuc (); Horiuchi+12nucjet (). The HNe kinetic energy and occurrence rate are then sufficient to explain the observed UHECR diffuse flux at GZK energies, without appearing to violate the IceCube upper limits. The HNe, as other core collapse SNe and long GRBs, occur in late type galaxies, with a larger rate in star-forming galaxies (SFGs) and an even larger rate in star-burst galaxies (SBGs).
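As a simple illustration of the normalization in eq. (6) (plugging in charges only, using the prefactor quoted there): $$\varepsilon_{max}(Z=1) \simeq 4\times 10^{18}~{\rm eV}, \qquad \varepsilon_{max}({\rm Fe},~Z=26) \simeq 1\times 10^{20}~{\rm eV},$$ which is why the GZK range is most comfortably reached if heavy nuclei are accelerated.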
Magnetars, another type of high energy source expected from some core collapse supernovae in SFGs and SBGs, are a sub-class of fast-rotating neutron stars with an ultra-strong magnetic field, which have been considered as possible sources of UHECR and UHENU Arons03crmag (); Zhang+03magnu (); Ghisellini08corr (); Murase09magnu (); Kotera11crmag ().
SFGs make up of all galaxies, while SBGs make up . AGNs make up of all galaxies, most AGNs being radio-quiet, i.e. without an obvious jet, while radio-loud AGNs (with a prominent jet) represent of all galaxies. Radio-loud AGNs have long been considered possible UHECR and UHENU sources, e.g. Rachen+93agncr (); Berezinsky+02diffprop (). However, the lack of an angular correlation between Auger or TA UHECR events and AGNs Auger+10corr (); Abuzayad+13TAcorr () may be suggesting that more common galaxies, e.g. SFGs or SBGs, may be hosting the UHECR sources, which could be HNe, GRBs or magnetars, all of which appear capable of accelerating UHECRs at a rate sufficient to give the observed diffuse UHECR flux.
Another possibility is that UHECR are accelerated in shocks near the core of radio-quiet AGNs, where they would produce UHENUs Stecker+91nuagn (); Alvarez+04agn (); Peer+09agn (), or alternatively UHECR could be accelerated in stand-off shocks caused by the infall of intergalactic gas onto clusters of galaxies Keshet+03igshock (); Inoue+05igshock (); Murase+08clusternu (). Galactic merger shocks (GMSs) also appear capable of accelerating UHECR, with a similar energy input rate into the IGM Kashiyama+14pevmerg (); see below.
5 The PeV Neutrino Background
In 2013 the IceCube collaboration announced the discovery of the first PeV and sub-PeV neutrinos which, to a high confidence level, are of astrophysical origin IC3+13pevnu1 (); IC3+13pevnu2 (). The majority of these are cascades, whose angular resolution is , ascribed to , while a minority are Cherenkov tracks with an angular resolution due to . Their spectrum stands out above that of the diffuse atmospheric spectrum by at least , with a best fit spectrum . There is no statistically significant evidence for a concentration either towards the galactic center or the galactic plane, being compatible with an isotropic distribution. No credible correlation has been so far established with any well-defined extragalactic objects, such as AGNs, but the working assumption of an extragalactic origin is widely accepted.
A flux of PeV neutrinos from starburst galaxies at a level close to that observed was predicted by Loeb+06nustarburst (). The actual accelerators could be hypernovae; the maximum energy of protons, from eq.(6), is sufficient for the production of PeV neutrinos Fox+13pev (), and statistically, of order one of the observed events (or at most a few) could be due to a hypernova located in the bulge of the Milky Way. However, the bulk of the observed events must come from an isotropic distribution, and hypernovae in ultra-luminous infrared galaxies (ULIRGs) or SFGs/SBGs could be responsible He+13pevhn (); Liu+13pevnuhn ().
More generally Murase+13pev (), one can ask whether hadronuclear ($pp$) interactions may be responsible for this isotropic neutrino background (INB) at PeV energies, without violating the constraints imposed by the isotropic gamma-ray background (IGB) Abdo+10igbfermi () measured by Fermi. As shown by Murase+13pev (), this requires the accelerated protons to reach at least and to have an energy distribution with an index no steeper than (Fig. 8). An important point is that most events are cascades, involving electron flavor neutrinos, and the $\bar{\nu}_e e^-$ cross section is resonant at CM energies comparable to the $W$ boson mass (Glashow resonance), at around 6.3 PeV in the lab frame. Since events are not seen at this energy, but they would be expected if the proton (and neutrino) slopes were , one concludes that the proton distribution steepens or cuts off at energies PeV. Such a cutoff may be expected in scenarios where the acceleration occurs in galaxy cluster shocks or in SFG/SBGs, where this energy may correspond to that where the escape diffusion time out of the acceleration region becomes less than the injection time or the time Murase+13pev (). Broadly similar conclusions are reached by He+13pevhn (); Liu+13pevnuhn (); Chang+14pevnugam (); Anchordoqui+14pevnu ().
Suggestively, Anchordoqui+14pevnu () find a weak correlation between five known SFGs (M82, NGC253, NGC4945, SMC and IRAS18293) and the very wide error boxes of some cascade events, but no correlation so far with any track events; they estimate that 10 years may be needed with IceCube to find track correlations with SFGs at confidence level.
Another type of large scale shocks in galaxies are the galaxy merger shocks (GMSs), which occur every time two galaxies merge. Every galaxy merged at least once during the last Hubble time, and probably more then once; in fact mergers are the way galaxies grow over cosmological time. Such galaxy mergers were considered in the PeV neutrino background context by Kashiyama+14pevmerg (). They estimate that individual major mergers involving galaxies with have an average kinetic energy of , occurring at a rate , with a relative shock velocity . For a CR acceleration fraction the UHECR energy injection rate into the Universe is (which is also the observationally inferred rate UHECR energy injection rate), with a maximum CR energy of . The interactions in the shocks and in the host galaxies lead to PeV neutrinos and GeV -rays (Fig. 9).
Individual GMSs from major mergers at would yield in IceCube on average muon events/year, or an isotropic neutrino background (INB) of of the IceCube observed PeV-sub-PeV flux. Minor mergers, whose rate is more uncertain, might contribute up to 70-100% of the INB (Fig. 9). The -ray flux from individual GMS expected is , possibly detectable by the future CTA, while the corresponding isotropic gamma background (IGB) is , about 20-30% of the observed Fermi IGB, or a somewhat larger percentage due to minor mergers Kashiyama+14pevmerg ().
6 Conclusion, Prospects
In conclusion, the sources of UHECR and the observed extragalactic UHENU are still unknown. For UHECR, an exotic physics explanation is almost certainly ruled out, mainly because any such mechanisms would produce a high energy photon component in UHECR which can be observationally ruled out, e.g. Auger+11photnu (). Anisotropy studies from Auger, which initially suggested a correlation with AGNs Auger07agncorr () have more recently, together with Telescope Array observations, yielded no significant correlation with any specific types of galaxies Auger+10corr (); Auger+13anis2 (); Abuzayad+13TAcorr (). This might favor some of the more common types of galaxies, such as possibly radio-quiet AGNs, or alternatively stellar type events such as GRBs, hypernovae or magnetars, as discussed in §§2,3,4.
The indications for a heavy UHECR composition at higher energies Auger+11anis-comp (); Auger+13icrc (); Kampert+13uhecr () would appear to disfavor AGN jets, where the composition is closer to solar, and favor evolved stellar sources, such as GRBs, hypernovae and magnetars, where a heavy composition is more natural, if the nuclei can avoid photo-dissociation (§4). These sources would also reside in more common galaxies, avoiding the anisotropy constraints.
The PeV and sub-PeV neutrinos discovered by IceCube IC3+13pevnu1 (); IC3+13pevnu2 () are an exciting development in the quest for finding the neutrino smoking gun pointing at UHECR sources, even if not at the highest energies. Standard IS GRBs appear to be ruled out as the sources for this observed diffuse neutrino flux, given the upper limits for GRBs from IceCube Abbasi+11-ic40nugrb (); Abbasi+12grbnu-nat (). Note however that these limits were obtained for simplified internal shock models, and more careful comparison needs to be made to more realistic models (see §2). Nonetheless, the normal high luminosity, electromagnetically detected GRBs, even if able to contribute to the GZK end of the UHECR distribution Asano+14grbcr (), appear inefficient as PeV neutrino sources. It is possible that low luminosity GRBs (in the electromagnetic channel) could yield appreciable PeV neutrinos Liu+13pevnugrb (), and also choked GRBs Murase+13choked () would be electromagnetically non-detected but might provide significant PeV neutrino fluxes. The fluxes, however, remain uncertain.
More attractive candidates for the PeV neutrinos are the star-forming and starburst galaxies, hosting an increased rate of hypernovae, or accretion shocks onto galaxies or clusters, or else galaxy mergers, all of which are capable of accelerating CRs up to and produce PeV neutrinos via interactions is discussed in §4.
It is also remarkable that the PeV neutrino flux is essentially at the Waxman-Bahcall (WB) bound level Waxman+97grbnu (); Bahcall+01bound () for UHECR near the GZK range, which is also comparable to the GeV range CR flux Katz+13uhecr (), Fig. 10. This suggests the intriguing prospect that the same sources may be responsible for the entire GeV-100 EeV energy range, a possibility whose testing would require much further work.
We can look forward to much further progress with continued observations from IceCube, Auger, TA and their upgrades, as well as HAWC, CTA and ground-based Cherenkov arrays and other instruments. UHECR composition and UHECR/UHENU clustering will provide important clues, as well as GeV and TeV photon observations to provide much needed additional constraints, especially if UHENU source localization is achieved.
This work was partially supported by NASA NNX 13AH50G. I am grateful to the organizers of the Origin of Cosmic Rays conference for their kind hospitality and for stimulating discussions, also held with K. Kashiyama, P. Baerwald, S, Gao and N. Senno.
References
• (1) The Pierre AUGER Collaboration, P. Abreu, M. Aglietta, E. J. Ahn, D. Allard, I. Allekotte, J. Allen, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, et al., Update on the correlation of the highest energy cosmic rays with nearby extragalactic matter, Astroparticle Physics 34 (2010) 314–326.
• (2) Pierre Auger Collaboration, P. Abreu, M. Aglietta, M. Ahlers, E. J. Ahn, I. F. M. Albuquerque, D. Allard, I. Allekotte, J. Allen, P. Allison, et al., Constraints on the Origin of Cosmic Rays above 10 eV from Large-scale Anisotropy Searches in Data of the Pierre Auger Observatory, Astrophys.J.Lett. 762 (2013) L13.
• (3) T. Abu-Zayyad, the Telescope Array collaboration, Search for correlations of the arrival directions of ultra-high energy cosmic ray with extragalactic objects as observed by the telescope array experiment, ArXiv e-printsarXiv:1306.5808.
• (4) IceCube Collaboration, Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector, Science 342.
• (5) C. Kouveliotou, C. A. Meegan, G. J. Fishman, N. P. Bhat, M. S. Briggs, T. M. Koshut, W. S. Paciesas, G. N. Pendleton, Identification of two classes of gamma-ray bursts, Astrophys.J.Lett. 413 (1993) L101–L104.
• (6) G. J. Fishman, C. A. Meegan, Gamma-Ray Bursts, Annu.Rev.Astron.Astrophys. 33 (1995) 415–458.
• (7) N. Gehrels, P. Mészáros, Gamma-Ray Bursts, Science 337 (2012) 932–.
• (8) P. Mészáros, N. Gehrels, Gamma-ray bursts and their links with supernovae and cosmology, Research in Astronomy and Astrophysics 12 (2012) 1139–1161.
• (9) M. Ackermann, the Fermi collaboration, Detection of a Spectral Break in the Extra Hard Component of GRB 090926A, Astrophys.J. 729 (2011) 114–+.
• (10) A. Abdo, the Fermi Collaboration, Fermi Observations of High-Energy Gamma-Ray Emission from GRB 080916C, Science 323 (2009) 1688.
• (11) V. Pelassa, Observations of Gamma-Ray Bursts at high energies by the Fermi Large Area Telescope–two years review, in: J. E. McEnery, J. L. Racusin, & N. Gehrels (Ed.), American Institute of Physics Conference Series, Vol. 1358 of American Institute of Physics Conference Series, 2011, pp. 41–46.
• (12) G. Cavallo, M. J. Rees, A qualitative study of cosmic fireballs and gamma-ray bursts, M.N.R.A.S. 183 (1978) 359–365.
• (13) B. Paczýnski, Gamma-ray bursters at cosmological distances, Astrophys.J.Lett. 308 (1986) L43–L46.
• (14) J. Goodman, Are gamma-ray bursts optically thick?, Astrophys.J.Lett. 308 (1986) L47–L50.
• (15) A. Shemi, T. Piran, The appearance of cosmic fireballs, Astrophys.J.Lett. 365 (1990) L55–L58.
• (16) R. Narayan, B. Paczynski, T. Piran, Gamma-ray bursts as the death throes of massive binary stars, Astrophys.J.Lett. 395 (1992) L83–L86.
• (17) P. Meszaros, M. J. Rees, Tidal heating and mass loss in neutron star binaries - Implications for gamma-ray burst models, Astrophys.J. 397 (1992) 570–575.
• (18) S. E. Woosley, Gamma-ray bursts from stellar mass accretion disks around black holes, Astrophys.J. 405 (1993) 273–277.
• (19) B. Paczyński, Gamma-ray bursts as hypernovae, in: C. A. Meegan, R. D. Preece, T. M. Koshut (Eds.), Gamma-Ray Bursts, 4th Hunstville Symposium, Vol. 428 of American Institute of Physics Conference Series, 1998, pp. 783–787.
• (20) S. E. Woosley, A. Heger, Long Gamma-Ray Transients from Collapsars, Astrophys.J. 752 (2012) 32.
• (21) C. Akerlof, R. Balsano, S. Barthelmy, J. Bloch, P. Butterworth, D. Casperson, T. Cline, S. Fletcher, F. Frontera, G. Gisler, J. Heise, J. Hills, R. Kehoe, B. Lee, S. Marshall, T. McKay, R. Miller, L. Piro, W. Priedhorsky, J. Szymanski, J. Wren, Observation of contemporaneous optical radiation from a -ray burst, Nature 398 (1999) 400–402.
• (22) N. Gehrels, E. Ramirez-Ruiz, D. B. Fox, Gamma-Ray Bursts in the Swift Era, Annu.Rev.Astron.Astrophys. 47 (2009) 567–617.
• (23) M. J. Rees, P. Meszaros, Relativistic fireballs - Energy conversion and time-scales, M.N.R.A.S 258 (1992) 41P–43P.
• (24) P. Mészáros, M. J. Rees, Relativistic fireballs and their impact on external matter - Models for cosmological gamma-ray bursts, Astrophys.J. 405 (1993) 278–284.
• (25) P. Meszaros, M. J. Rees, Gamma-Ray Bursts: Multiwaveband Spectral Predictions for Blast Wave Models, Astrophys.J.Lett. 418 (1993) L59+.
• (26) M. J. Rees, P. Meszaros, Unsteady outflow models for cosmological gamma-ray bursts, Astrophys.J.Lett. 430 (1994) L93–L96.
• (27) B. Zhang, P. Mészáros, Gamma-Ray Bursts: progress, problems and prospects, International Journal of Modern Physics A 19 (2004) 2385–2472.
• (28) P. Meszaros, Gamma-ray bursts, Rept. Prog. Phys. 69 (2006) 2259–2322.
• (29) P. Mészáros, M. J. Rees, Delayed GEV Emission from Cosmological Gamma-Ray Bursts - Impact of a Relativistic Wind on External Matter, M.N.R.A.S. 269 (1994) L41+.
• (30) R. D. Preece, M. S. Briggs, R. S. Mallozzi, G. N. Pendleton, W. S. Paciesas, D. L. Band, The Synchrotron Shock Model Confronts a “Line of Death” in the BATSE Gamma-Ray Burst Data, Astrophys.J.Lett. 506 (1998) L23–L26.
• (31) G. Ghisellini, A. Celotti, Quasi-thermal comptonization and GRBs, Astron.Astrophys.Supp. 138 (1999) 527–528.
• (32) M. V. Medvedev, Theory of “Jitter” Radiation from Small-Scale Random Magnetic Fields and Prompt Emission from Gamma-Ray Burst Shocks, Astrophys.J. 540 (2000) 704–714.
• (33) P. Mészáros, M. J. Rees, Steep Slopes and Preferred Breaks in Gamma-Ray Burst Spectra: The Role of Photospheres and Comptonization, Astrophys.J. 530 (2000) 292–298.
• (34) M. J. Rees, P. Mészáros, Dissipative Photosphere Models of Gamma-Ray Bursts and X-Ray Flashes, Astrophys.J. 628 (2005) 847–852.
• (35) A. Pe’er, P. Mészáros, M. J. Rees, The Observable Effects of a Photospheric Component on GRB and XRF Prompt Emission Spectrum, Astrophys.J. 642 (2006) 995–1003.
• (36) P. Kumar, A. Panaitescu, Afterglow Emission from Naked Gamma-Ray Bursts, Astrophys.J.Lett. 541 (2000) L51–L54.
• (37) B. Zhang, Y. Z. Fan, J. Dyks, S. Kobayashi, P. Mészáros, D. N. Burrows, J. A. Nousek, N. Gehrels, Physical Processes Shaping Gamma-Ray Burst X-Ray Afterglow Light Curves: Theoretical Implications from the Swift X-Ray Telescope Observations, Astrophys.J. 642 (2006) 354–370.
• (38) J. A. Nousek, C. Kouveliotou, D. Grupe, K. L. Page, J. Granot, E. Ramirez-Ruiz, S. K. Patel, D. N. Burrows, V. Mangano, S. Barthelmy, A. P. Beardmore, S. Campana, M. Capalbi, G. Chincarini, G. Cusumano, A. D. Falcone, N. Gehrels, P. Giommi, M. R. Goad, O. Godet, C. P. Hurkett, J. A. Kennea, A. Moretti, P. T. O’Brien, J. P. Osborne, P. Romano, G. Tagliaferri, A. A. Wells, Evidence for a Canonical Gamma-Ray Burst Afterglow Light Curve in the Swift XRT Data, Astrophys.J. 642 (2006) 389–400.
• (39) A. A. Abdo, the Fermi Collaboration, A limit on the variation of the speed of light arising from quantum gravity effects, Nature 462 (2009) 331–334.
# Why is left-skewed called negatively skewed and right-skewed called positively skewed?
I'm curious about the nomenclature: why is left-skewed called negatively skewed and right-skewed called positively skewed?
• Let's underline that the terms left and right depend on a tacit convention that the magnitude axis of a graph showing a distribution is horizontal with negative values to the left. This may seem too obvious to state, except to those who do things differently. – Nick Cox Apr 11 '16 at 15:32
My short answer is that it is by design. The skewness measures are usually constructed so that the positive skewness indicates right-skewed distributions.
Today the most common measure of skewness, that is also usually taught in schools, is based on the third central moment equation as follows:
$$\mu_3=E[(X-\mu)^3]$$
Look at the expression above. When there is more weight (of the distribution function) to the right of the mean, $(x-\mu)^3$ contributes more positive values: terms to the right of the mean are positive because $x>\mu$, and terms to the left are negative because $x<\mu$. So, mechanically, it would seem to answer exactly your question.
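A quick R sketch (my own illustration, not part of the original answer) shows this sign convention in practice:

```r
# Illustrative check: the sample third central moment is positive for a
# right-skewed sample and negative for its mirror image.
set.seed(1)
x_right <- rexp(1e5)                     # right-skewed (long right tail)
x_left  <- -x_right                      # mirrored: left-skewed
m3 <- function(x) mean((x - mean(x))^3)  # sample third central moment
m3(x_right)  # > 0  (positive skew)
m3(x_left)   # < 0  (negative skew)
```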
However, as @Nick Cox brought up, there is more than one measure of skewness, such as Pearson's first coefficient of skewness, which is based on the difference $mean-mode$. Potentially, different measures of skewness could lead to different relations between positive skewness and the tendency to have heavier tails on the right.
Hence, it is interesting to look at why these measures of skewness were introduced in the first place, and why they have their particular formulations.
In this context it is useful to look at the exposition of skewness by Yule in An Introduction to the Theory of Statistics (1912). In the following excerpt he describes the desired properties of a reasonable skewness measure. Basically, he requires that positive skewness should correspond to right-skewed distributions, like in your picture:
• Correct, but incomplete in so far as there are several other ways to measure skewness. But all I that know of follow the same convention that right-skewed and left-skewed typically yield positive and negative results respectively, as for example (mean $-$ median)/SD. The only certain thing, however, is that symmetrical distributions have zero skewness. It is possible to have asymmetrical distributions for which different skewness measures don't even agree in sign. – Nick Cox Apr 11 '16 at 15:30
• I believe you, but the question remains general and benefits from a general answer. In a century or so, considerable confusion has already been caused by conflating a general idea of skewness with particular ways of defining it. (I won't mention kurtosis.) – Nick Cox Apr 11 '16 at 15:49
• The historical details here are very interesting to me. My own attempt at a miniature review emphasises that moment-based skewness predates Pearson, although Pearson was mostly more concerned with measuring skewness relative to the mode, as Yule's comments reflect. See stata-journal.com/sjpdf.html?articlenum=st0204 (Indeed, Pearson was obfuscatory in his acknowledgement of prior work on the moment-based measure.) – Nick Cox Apr 11 '16 at 17:37
• The extract from Yule helps us see past the extraneous details to the essence of the answer: a distribution in which the positive tail is deemed to be "longer" than the negative tail has positive skewness. Everything else comes down to how one determines the tails and measures their lengths. – whuber Apr 11 '16 at 17:42
• I don't see how the answer would lose anything by mentioning one or two other measures of skewness (such as the median-skewness / second Pearson skewness measure) and pointing out that the discussion carries over (just as Nick suggests). – Glen_b Apr 11 '16 at 17:44 |
883 views
(1). Both BFS and DFS require $\Omega (N)$ storage for their operation.
(2). If we double the weight of every edge in the Graph shortest path between any two vertices will not change.
Which of the following is/are True ?
(and in every shortest-path question, do we have to think about negative weights?)
only $1$ will be true
Omega represents the lower bound (best case), so why do you think that DFS or BFS will take Ω(N) space in the worst case? What I think is that they will take O(N) even in the worst case.
$\Omega(n)$ means that your storage capacity has to be at least $n$.
It is correct. How can you keep track of visited nodes with an array of size less than the number of nodes, i.e. $n$?
Are there other optimizations possible?
Can anyone please explain how statement 2 is incorrect?
I agree that adding a constant may change the path,
but a constant ratio always gives the same path.
Consider the shortest path between two vertices to be a+b+c and another path between them to be w+x+y+z.
Now if a+b+c < w+x+y+z,
then by multiplying both sides by 2 we can deduce that
2a+2b+2c < 2w+2x+2y+2z.
How could the inverse ever hold?
Statement 2 is correct.
(1) BFS and DFS both store every node of the graph, so the storage requirement is the same for both.
(2) is always true.
Take this graph for example: the shortest path will not change when you double every edge weight.
After checking many examples, I concluded that if we double every edge (or split an edge into parts and double each part), the shortest path will not change.
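To make the scaling-versus-adding distinction concrete, here is a small R check (my own illustration, not part of the original thread): multiplying every edge weight by a constant factor preserves the ordering of path costs, while adding a constant to every edge can flip it, because paths with fewer edges gain less.

```r
# Two candidate paths between the same pair of vertices (hypothetical weights).
p1 <- c(1, 1, 1)   # three-edge path, total cost 3
p2 <- c(4)         # one-edge path,  total cost 4  -> p1 is currently shorter
sum(2 * p1) < sum(2 * p2)   # TRUE : doubling every edge keeps p1 the shorter path
sum(p1 + 2) < sum(p2 + 2)   # FALSE: adding 2 to every edge makes p2 shorter
```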
You can check here; it does not matter for BFS or DFS:
https://gateoverflow.in/405/gate2008-7
If you still have doubts, you can ask.
Thank you.
@srestha
For statement 1: Omega means the best case (f >= g).
But in the best case, BFS takes Ω(n), whereas for DFS the best case is Ω(log n).
How is the 2nd always true? What if I take the weights as negative? (Nothing is mentioned in the question.)
Basically, BFS uses a queue and DFS uses a stack to store its unexplored nodes.
So in the case of BFS, the storage must be O(N) and Ω(N).
But in DFS the stack depends on the height of the tree, hence in the best case (lower bound) the space complexity is Ω(log N), while in the worst case it is O(N).
Hence option 1 is false.
Option 2 is correct.
# Divergencies in case of expectation close to zero and left-censored log-normal distributed data
Hello Stanimals,
Short summary: I have a model that predicts for some subset of my data (assumed to be log-normally distributed) that the expectation is practically zero. Those data are left-censored, and I’m modelling this censoring process. However, during prior predictive checks, Stan reports divergent iterations when the expectation is too close to zero, even when no data or censoring-related sampling statements are included in the code yet.
Elaboration: I am developing a non-linear model for data that I assume have log-normal measurement error. The model is supposed to generate an S-shaped regression line for which I’m using Stan’s Phi_approx() as an approximation to the normal CDF. I use this because the mean and sd parameters have a specific biological meaning in the study context. Further, a subset of the data is left-censored, which I intend to explicitly model using either (1) the cumulative log normal distribution function or (2) through data augmentation by defining an additional parameter with an upper bound for every censored observation - yet to decide.
Now the issue I’m struggling with: while performing prior predictive checks, I ran into diverging iterations being reported (up to half of all samples!!!). I traced the culprit to a line of code where I calculate the natural logarithm of the output of Phi_approx(), which will later on (after finishing prior predictive checks) be used as the mean parameter for the log-normal distribution. It seems that some trajectories result in log(...) to underflow at some point, which is totally expected. I’m not too concerned about this however, as the underflow result of log(Phi_approx(...)) will be associated with left-censored data points (in which case the posterior density will be 1.0).
What I did not expect, however, is for Stan to report these underflows as divergent iterations, even when I'm not yet actually using the underflowed result of log(Phi_approx(...)) for anything. I checked that the divergences weren't caused by anything else; e.g. if I comment out the log() statement and only sample from the priors and calculate Phi_approx(...), Stan purrs like a kitten. However, with log(Phi_approx(...)) in action, divergence alarm bells go off while the results returned by Stan are exactly the same (means, BCIs, Rhat = 1.000). This has me a little concerned that if I continue to use this model and ignore the warnings about divergences, I may miss out on more serious divergences as I build up the model to its full level of complexity (mixed-effects non-linear model).
Of course, I could hack my way out of this by doing log(Phi_approx(...) + c) where c is a small constant, but that doesn’t seem right. Or is this (or another type of hack) generally accepted for these situations?
Attached is a minimal working example with a fixed effects version of the model and prior predictive checks that reproduces the divergencies, along with R code to run it with rstan.
Best,
Luc
P.s.: I did already read and enjoy an earlier extensive post on divergencies Getting the location + gradients of divergences (not of iteration starting points) and the viz paper [1709.01449] Visualization in Bayesian workflow, but couldn’t find an answer there.
functions {
vector phi_approx(vector x,
real mu,
real sigma) {
/**
* @param x = random variates
* @param mu = mean of normal distribution
* @param sigma = standard deviation of normal distribution
* @return An approximation to the normal CDF
*/
return(Phi_approx((x - mu) / sigma));
}
}
data {
// Data
int<lower=0> N;
vector[N] time;
// Prior parameters
real mu_time_mu;
real<lower=0> mu_time_sd;
real<lower=0> sigma_time_mu;
real<lower=0> sigma_time_sd;
}
transformed data{
}
parameters {
// Mid-point of S-shaped curve (mean of approximate normal CDF)
real mu_time;
// Curvature of S-shaped curve (sd of approximate normal CDF)
real<lower=0> sigma_time;
}
transformed parameters {
}
model {
vector[N] prpc; // prior predictive check
vector[N] log_prpc; // log of prior predictive check
prpc = phi_approx(time, mu_time, sigma_time);
log_prpc = log(prpc); // Cause of divergent iterations being reported
// Prior distributions
mu_time ~ normal(mu_time_mu, mu_time_sd);
sigma_time ~ normal(sigma_time_mu, sigma_time_sd);
}
generated quantities {
vector[N] prpc;
vector[N] log_prpc;
prpc = phi_approx(time, mu_time, sigma_time);
log_prpc = log(prpc); // does not cause divergencies to be reported
}
toy_reproduce_divergencies.stan (1.4 KB) toy_reproduce_divergencies.R (1.5 KB)
2 Likes
If the aim is to fit a CDF-esque curve, then maybe you could use a logistic function? The mean/midpoint of the logistic function will match the mean of the normal CDF, and the logistic’s growth rate is related to the standard deviation of the equivalent normal (exactly how I can’t recall). But of course, they are not the same model, so there will be a difference. The plus is that you can use log1p and expm1 to evaluate the logistic (on the log scale) in a numerically stable manner (see section 3). Also, can you add any extra detail as to why you need the log of Phi_approx? If the data are ‘CDF shaped’, then I’m not seeing why you need to log it in the first place?
Otherwise, the usual suggestion is to check that the priors make sense (are values that underflow actually plausible in your context?), and simulate fake data from the model to see if divergences exist in the posterior as well.
Hi hhau,
Thanks for sharing your thoughts. I'm already using a logistic approximation to the normal CDF (i.e. Phi_approx()) and the priors do make sense. The problem is really that the log() function underflows, causing divergent iterations to be reported. I need to take the log because the data (concentrations in blood) are log-normally distributed (I scale the CDF curve with a scalar to make it fit the data). The priors are not the problem - I tested that. Using log1p() would indeed be a hack to avoid the divergent iterations, but I don't want to add 1.0 as that throws the expectation for the log-normal distribution way off.
If the issues are coming from log(Phi_approx()), try using std_normal_lcdf() instead, since it has much better numerical stability. The only thing is that I don’t think its available in current rstan, so you’ll need to use cmdstanr
Thanks for the suggestion andrjohns. I think I might have oversimplified the context while trying to create a minimal working example, so I'll provide more background. The full model is described here (the math is in the supplement); I wanted to spare you the math gore, haha, but here goes. I'm revisiting the model as I want to use it for a new dataset that includes additional types of measurements. I'm doing this in a different version of Stan than the original, and the sheer number of diverging iterations is new to me (I experienced and dealt with some in the original model, but not at this scale).
The model predicts the blood concentration of a parasite that multiplies in highly synchronised cycles (see the attached image from the paper referred to above). Each cycle is constructed using two normal CDFs; the first for the time of appearance of a generation of parasites and the second for the time of disappearance of the same generation from the blood. The amplitude of the first cycle is governed by a scalar \beta that is multiplied with the term (CDF_1 - CDF_2). The height of each consecutive cycle is a multiple R of the previous cycle's height. I try to do as much as I can on the log scale:
log(concentration) = \log(\beta) + \log(\sum_{c=1}^{C} R^{c-1} * (CDF_{c,1} - CDF_{c,2}))
Here, c is the c-th wave of C cycles of parasite generations. Each set of (CDF_{c,1} - CDF_{c,2}) is shifted over time (i.e., different mean \mu_c for each), but has the same curvature (i.e., same standard deviation \sigma). I implement the CDFs using Phi_approx(), such that:
log(concentration) = \log(\beta) + \log(\sum_{c=1}^{C} R^{c-1} * (\frac{1}{1+e^{-z_{c,1}}} - \frac{1}{1+e^{-z_{c,2}}}))
Here, z is a trivial 3rd-order polynomial function of time (the horizontal axis in the graph) and of the mean and standard deviation of the CDF (as explained in the Stan manual). Currently, the second term \log(\sum_{c=1}^{C}(...)) causes underflow/divergencies.
I hope that clarifies the challenge I'm facing. As such, using std_normal_lcdf() wouldn't work (not even in cmdstan), as I need to take the log after some manipulation of the CDFs. Any suggestions on how to program this in a more numerically stable manner are most welcome! I'm already using Phi_approx() for the CDFs. Maybe I'm missing @hhau's previous point about somehow using log1p() and expm1() for this - I don't yet see how to go about this.
I think my question boils down to “how to perform a numerically stable calculation of the following in Stan”?
\log(\frac{1}{1+e^{-z_1}} - \frac{1}{1+e^{-z_2}} + \frac{R}{1+e^{-z_3}} - \frac{R}{1+e^{-z_4}} + \frac{R^2}{1+e^{-z_5}} - \frac{R^2}{1+e^{-z_6}})
where
z_i = 0.07056x_i^3+1.5976x_i
and
x_i=\frac{time - \mu_i}{\sigma}
time is data and R, \mu_i, and \sigma are model parameters (all strictly positive)
We have
\frac{1}{1+e^{-z_{1}}}-\frac{1}{1+e^{-z_{2}}}=\frac{e^{-z_{2}}-e^{-z_{1}}}{\left(1+e^{-z_{1}}\right)\left(1+e^{-z_{2}}\right)}
so subject to constraint that z_1 < z_2, z_3 < z_4, and z_5 < z_6 this is stable
log_sum_exp({
log_diff_exp(-z2, -z1) - log1p(-z1) - log1p(-z2),
log(R) + log_diff_exp(-z4, -z3) - log1p(-z3) - log1p(-z4),
2*log(R) + log_diff_exp(-z6, -z5) - log1p(-z5) - log1p(-z6)
})
4 Likes
Thanks @nhuurre for working this out! I had great hopes, especially as the condition z_1<z_2<z_3< ...<z_C is satisfied because \mu_1<\mu_2<\mu_3< ...<\mu_C. Unfortunately, my implementation of your equations crashes and burns because z_i can be lower than -1. See:
z_i = 0.07056x_i^3+1.5976x_i (see attached plot)
where
x_i=\frac{time - \mu_i}{\sigma}
where time (time of observation) can be considerably lower than \mu_i (the time at which an upward or downward wave reaches it midpoint).
I hadn't realised yet that log1p() won't play with numbers < -1.0. I'll chew on the derivation of your numerically stable equation a little more. Please do let me know if you think you see what's going awry here.
Whoops, those should have been log1p_exp. Also, if z=\frac{\text{time}-\mu_i}{\sigma} then \mu_1<\mu_2 implies z_1>z_2 but I got the ordering constraint backwards too so it works out anyway.
3 Likes
I just came to exactly the same conclusion!
For poster(ior)ity and easy reference, the solution is:
log_sum_exp({
log_diff_exp(-z2, -z1) - log1p_exp(-z1) - log1p_exp(-z2),
log(R) + log_diff_exp(-z4, -z3) - log1p_exp(-z3) - log1p_exp(-z4),
2*log(R) + log_diff_exp(-z6, -z5) - log1p_exp(-z5) - log1p_exp(-z6)
})
Yay, thanks @nhuurre!
4 Likes |
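As a closing note (my own addition, not from the thread): for moderate z values where the naive expression does not underflow, the stable form can be checked numerically in R by hand-coding analogues of the Stan functions.

```r
# Compare the naive log of the sum of logistic differences with the stable
# log-sum-exp expression derived above (R = 0.5 and the z values are arbitrary
# but satisfy z1 > z2, z3 > z4, z5 > z6).
logistic     <- function(z) 1 / (1 + exp(-z))
log_diff_exp <- function(a, b) a + log1p(-exp(b - a))  # log(e^a - e^b), needs a > b
log1p_exp    <- function(x) log1p(exp(x))              # fine for moderate x
log_sum_exp  <- function(v) max(v) + log(sum(exp(v - max(v))))

R <- 0.5
z <- c(3, 1, 2.5, 0.5, 2, 0)

naive <- log(logistic(z[1]) - logistic(z[2]) +
             R   * (logistic(z[3]) - logistic(z[4])) +
             R^2 * (logistic(z[5]) - logistic(z[6])))

stable <- log_sum_exp(c(
  log_diff_exp(-z[2], -z[1]) - log1p_exp(-z[1]) - log1p_exp(-z[2]),
  log(R)     + log_diff_exp(-z[4], -z[3]) - log1p_exp(-z[3]) - log1p_exp(-z[4]),
  2 * log(R) + log_diff_exp(-z[6], -z[5]) - log1p_exp(-z[5]) - log1p_exp(-z[6])
))

all.equal(naive, stable)  # TRUE
```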
# OpenCV 2.4.13 for Windows Phone 10
Greetings, I've been lurking around these forums to find a solution to my problem but there doesn't seem to be any way to solve it.
For a personal project I need to make my code work on a Windows Phone 10 (UWP). As I understand it (I might be mistaken here), Microsoft dropped support for it somewhere around last year. The pre-compiled libraries available on the OpenCV site only work for x64/x86 builds; for the phone part I need to compile the libraries for the ARM architecture (correct me if I say anything wrong). One problem I've had lies with CMake (I tried both the 3.4 and 3.7 versions): when trying to compile the OpenCV 2.4.13 sources with both VS2013 and VS2015 (Community edition) I get this warning:
CMake Warning at cmake/OpenCVPackaging.cmake:23 (message):
CPACK_PACKAGE_VERSION does not match version provided by version.hpp
Call Stack (most recent call first):
CMakeLists.txt:1105 (include)
and during compilation there are either some errors or only a few DLLs are created (e.g. only core, flann and imgproc, which aren't enough since I also need the nonfree and ml modules for my application to work).
What I'd like to know (and possibly understand) is why the compilation is failing (and possibly how to fix it) and also how to make OpenCV 2.4.13 work on a UWP (a Windows Phone 8.1 might do as well, if nothing can be done for Windows Phone 10) using C++ code.
A couple of notes:
• I've seen there are various things for the 8.1 version but I still can't figure out how to make them work on a mobile device;
• I've never really programmed for Windows Phone so I might be missing something important;
• I've tried running the code (simple example only creating an OpenCV Mat) using the non-ARM libraries and the emulator crashes (no wonder, but well);
• The libraries (non-ARM) are linked/included correctly by the project;
• The code works just fine for a Win32 console app (using Visual Studio Community 2013 and the OpenCV 2.4.13 distribution);
• The project is written in C++, in case someone wants to suggest using Emgu I've tried using it (downloaded via NuGet) but failed miserably (pointers are appreciated if this is your suggestion).
Another question which would solve the problem right away: do pre-compiled libraries for the ARM architecture of the OpenCV 2.4.13 build exist for either Visual Studio 2013 or 2015?
Thanks in advance for any help you guys can provide, I've been working on this project for quite some time now and I'm only missing this final step to finish it so any help is really appreciated.
I also apologize for my bad english, if anything wasn't explained in an understandable way just let me know and I'll try to re-phrase it. |
# HFLAV $τ$ branching fractions fit and measurements of Vus with $τ$ lepton data
### Submission summary
As Contributors: Alberto Lusiani
Arxiv Link: https://arxiv.org/abs/1811.06470v1 (pdf)
Date submitted: 2018-11-16 01:00
Submitted by: Lusiani, Alberto
Submitted to: SciPost Physics Proceedings
Proceedings issue: The 15th International Workshop on Tau Lepton Physics (Amsterdam, 2018-09)
Academic field: Physics
Specialties: High-Energy Physics - Experiment
Approach: Experimental
### Abstract
We report the status of the Heavy Flavour Averaging Group (HFLAV) averages of the $\tau$ lepton measurements. We then update the latest published HFLAV global fit of the $\tau$ lepton branching fractions (Spring 2017) with recent results by BABAR. We use the fit results to update the Cabibbo-Kobayashi-Maskawa (CKM) matrix element Vus measurements with the $\tau$ branching fractions. We combine the direct $\tau$ branching fraction measurements with indirect predictions using kaon branching fraction measurements to improve the determination of Vus using $\tau$ branching fractions. The Vus determinations based on the inclusive branching fraction of $\tau$ to strange final states are about $3\sigma$ lower than the Vus determination from the CKM matrix unitarity.
###### Current status:
Has been resubmitted
### Submission & Refereeing History
Resubmission 1811.06470v2 on 17 December 2018
Submission 1811.06470v1 on 16 November 2018
## Reports on this Submission
### Anonymous Report 1 on 2018-12-8 Invited Report
• Cite as: Anonymous, Report on arXiv:1811.06470v1, delivered 2018-12-08, doi: 10.21468/SciPost.Report.720
### Report
The paper describes the latest average result of $|V_{us}|$ determined with the $\tau$ lepton decay branching fractions from the Heavy Flavor Averaging Group. The new result has been compared with the value obtained from CKM matrix unitarity, with the new measurements from $\tau$ decays, the discrepancy has been reduced slightly.
The quality of the paper is high, minor corrections to the content would further improve it.
This paper is an important contribution to the field of tau physics and I recommend it for publication.
### Requested changes
1. Introduction: "two BABAR measurement"->"two BABAR measurements"; "fit input measurements three tau"->"fit input measurements of three tau"
2. Chapter2: better to keep three decimal places for the uncertainties of the branching fractions of $\tau\to K/\pi (n)\pi^0\nu_{\tau}$
3. Chapter3: "is superseded"->"has been superseded"
4. References: blank space is missing between the journal and the volume; [15] is published in Phys. Lett. B 781, 202-212 (2018)
Author Alberto Lusiani on 2018-12-14
(in reply to Report 1 on 2018-12-08)
Category: remark, question
Regarding point 4, I checked the bibliography references and found that there are spaces between the journal and volume numbers. I am using the SciPost bibliography style, which uses the common convention that for journals like Phys.Rev.D the "D" is attached to the volume number. I think that that should stay so for SciPost proceedings.
I attach the PDF of the revision I posted to arxiv.org. |
# Using remote sensing environmental data to forecast malaria incidence at a rural district hospital in Western Kenya
Scientific Reports, volume 7, Article number: 2589 (2017)
## Abstract
Malaria surveillance data provide an opportunity to develop forecasting models. Seasonal variability in environmental factors correlates with malaria transmission, thus the identification of transmission patterns is useful in developing prediction models. However, with changing seasonal transmission patterns, either due to interventions or shifting weather seasons, traditional modelling approaches may not yield adequate predictive skill. Two statistical models, a general additive model (GAM) and a GAMBOOST model with boosted regression, were contrasted by assessing their predictive accuracy in forecasting malaria admissions at lead times of one to three months. Monthly admission data for children under five years with confirmed malaria at the Siaya district hospital in Western Kenya for the period 2003 to 2013 were used together with satellite-derived data on rainfall, average temperature and evapotranspiration (ET). There was a total of 8,476 confirmed malaria admissions. The peak of the malaria season changed and malaria admissions declined over time. The GAMBOOST model at a 1-month lead time had the highest predictive skill during both the training and test periods and thus can be utilized in a malaria early warning system.
## Introduction
The year 2015 marked the end of the Millennium Development Goals and the ushering in of the new Sustainable Development Goals with continued focus on malaria as a major public health concern. By the end of 2015, the malaria incidence rate fell by 37% and the mortality rate by 60% globally1. Seventy percent of the reduction in malaria cases was attributed to the use of malaria prevention strategies1. Despite this achievement, there were still 214 million cases (range: 149–303 million) and 438,000 deaths (range: 236,000–635,000) in 2015, with 80% of the deaths concentrated in 15 countries, mainly in sub-Saharan Africa, including Kenya1. In sub-Saharan Africa, malaria accounts for 22% of all deaths in children aged 1–59 months1.
In response to this still high burden, the World Health Organization (WHO) developed the Global Technical Strategy for Malaria 2016–2030, which was adopted by the World Health Assembly in 2015. This new strategy requires reducing global malaria incidence and mortality rates by at least 90% by 20301. One of the three pillars of this strategy is to use malaria surveillance as a core intervention in the control and elimination of malaria1. Routine malaria surveillance data provide an opportunity to develop malaria early warning systems (MEWS) to track malaria incidence and transmission patterns along with environmental risk factors for accurate and timely detection and effective control of outbreaks. The use of MEWS can help achieve the global malaria targets set for 2030.
In 2001, the WHO provided a framework for the development of MEWS in Africa2, centering on the use of vulnerability, transmission risk and early detection indicators2. Vulnerability indicators are, for example, immunity levels, migration, malnutrition, and HIV status while transmission risk indicators include climatic factors, such as rainfall and temperature. Rainfall and temperature have been used to develop malaria forecasting models. Early detection indicators, such as abrupt increases in malaria incidence, can be obtained from malaria morbidity data collected at health facilities, using epidemic thresholds, thus reinforcing the need for timely and complete reporting of malaria cases through health information systems.
Statistical methods have been used to develop regression models for early detection of epidemics of vector-borne diseases, such as malaria and dengue. For example, in endemic regions of Zambia, it was possible to detect outbreaks of malaria, by using the upper 95th percentile of cases as a threshold3. In Singapore, models with autoregressive terms were developed for the forecasting of dengue outbreaks with a four month lead time, achieving a very high prediction accuracy4, while posterior predictive distributions were successfully used to classify dengue epidemic risk in Brazil5. In Botswana and Kenya, seasonal weather forecasts from multiple ensemble models were used to develop a MEWS with lead times up to four months6,7. Similar use of multiple ensemble models led to high forecast skill with a sensitivity of over 70% for seasonal forecasting of malaria incidence in India8. Machine learning techniques have also been used to develop malaria forecast models with high predictive skill, for example, in India9. Spatial temporal methods employing Bayesian statistics were employed to predict malaria transmission indicators, such as entomological inoculation rates, in Kenya10 and Burkina Faso11. Various statistical methods that have been developed and used to forecast malaria have been summarized by Zinszer et al.12.
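A minimal sketch of the percentile-threshold idea mentioned above, in R (my own illustration; the vector monthly_cases is a hypothetical series of monthly case counts):

```r
# Flag a month as anomalous when admissions exceed the historical 95th percentile.
threshold <- quantile(monthly_cases, probs = 0.95, na.rm = TRUE)
alert     <- monthly_cases > threshold
```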
Remote sensing provides an opportunity for spatially and temporally refined environmental data to be utilized for predictions and forecasts, especially in resource poor settings where systematic collection of temperature and rainfall data is a major challenge. It has been suggested that the development of statistical forecasting models that identify cyclic variation in malaria transmission is key to the development of MEWS for endemic regions13. The use of remote sensing data has been shown to improve model predictions in malaria epidemic models in the Ethiopian highlands14 and also in Uganda when used together with clinical predictors such as proportion screened for malaria and drug treatment15. A recent analysis on the effect of remote sensing data, land surface temperature (LST) and Normalized Difference Vegetation Index (NDVI) on malaria mortality showed a lagged relationship indicating an ability of forecasting based on observed data16.
Malaria transmission is endemic in Western Kenya, and this region suffers from high malaria morbidity and mortality. The Health and Demographic Surveillance System (HDSS) field site located in this region and run by the Kenya Medical Research Institute in collaboration with the United States Center for Disease Control (KEMRI/CDC) has the highest mortality rates compared to other HDSS field sites in the INDEPTH Network17, and malaria is the leading cause of death among young children under five years of age18. Previous studies in the KEMRI/CDC HDSS site identified patterns of lagged weather effects with malaria morbidity and mortality16,19,20. These studies provided potential lead times for the development of a malaria forecast model.
A malaria prediction model was previously developed for epidemic regions in Kenya, such as Wajir and Kericho, using remote sensing data13. A similar malaria prediction model for outbreak detection was developed and validated for the wider East African region and shown to be robust with high sensitivity and specificity21. This study uses remote sensing data and longitudinal malaria morbidity data from a district hospital in Western Kenya to develop and compare statistical models so as to forecast malaria admissions and assess the accuracy of these models at lead times from one to three months. Specifically, we will compare the performance of boosted and non-boosted general additive models.
## Results
There was a total of 8,476 confirmed malaria admissions among children under five years of age at the Siaya district hospital during the period 2003 to 2013. Table 1 shows the summary statistics for malaria admissions by year and overall. The earlier years in the study period registered the highest number of annual admissions, with the year 2004 the highest, during which some months recorded as many as 202 pediatric malaria admissions. After 2004, the number of admissions declined gradually, but then increased to 1,249 in 2008, which was similar to what was observed in the earlier years. There was a significant drop in malaria admissions from 749 in 2009 to 166 in 2013, corresponding to a 70% reduction.
Figure 1 presents mean monthly malaria admissions (Fig. 1a) and mean LST (Fig. 1b), ET (Fig. 1c) and precipitation (Fig. 1d) for the entire study period 2003–2013. The peak malaria admission months are May and June while the lowest admission month is October. The hottest month is February, while the coolest is June. The ET panel (Fig. 1c) shows that May and November are the months with the highest ET while February is the month with the least evapotranspiration. We observe two rainy seasons, with the first wet months beginning in March and peaking in April, and the short rains occurring from September to November. The driest months are between December and February. There is a clear lag pattern of rainfall and temperature on observed malaria admissions. From the seasonal pattern, ET has the shortest lag with malaria admissions and peaks in the same month.
For precipitation, we observe a two-month lag with a peak of rainfall in April, followed by a peak in malaria admissions between May and June. For temperature, there is a longer lag of three months with a peak in February, followed again by a peak in malaria admissions between May and June.
Monthly patterns of malaria differ and the seasonal admission patterns vary across years during the study period (Fig. 2). For instance, in 2003, the admissions peaked in June and were at their lowest in November whereas in 2004, the peak was in May and the lowest admission recorded in September. We did not observe a clear seasonal pattern for the years 2007, 2009, 2010 and 2012.
### Malaria prediction models
The 1-month lead GAMBOOST model captures very well the seasonal variation in both training and test periods as displayed in Fig. 3a. It captures closely the peak malaria admissions in 2004 whereas the 2-month (Fig. 3b) and 3-month lead (Fig. 3c) models underestimate this peak. Compared to the GAMBOOST model, the 1-month lead GAM model (Fig. 4a) could not generalize well in the external data, in this case the year 2013. The generalizability of the GAM models did not improve with increasing lead times (Fig. 4b for 2-month and Fig. 4c for 3-month lead time respectively).
Supplementary Fig. S1 shows the complete external predictions for the test year of 2013 in detail for each model and lead time. Again 1-month lead models forecast closely the peak admission for the year 2013 while the 3-month model captures the peak well but underestimates the number of admissions. All the lead time GAMBOOST models overestimate the admissions in August 2013. The GAM models underestimate the malaria admissions in 2013 with only the 1-month lead model capturing the peak in the month of May correctly. The GAM models for the training period capture well the overall seasonal pattern of malaria admissions.
Table 2 displays the forecast accuracy statistics for the GAMBOOST and GAM models by lead time for the training and test periods.
The 1-month lead GAMBOOST model explained 80% of the variation in the data for the training period and 71% in the test period, showing no overfitting during training, whereas the GAM model for the 1-month lead time explained 77% of the variance in the training set but only 44% in the test dataset. Similarly, the 1-month GAMBOOST model had the lowest RMSE of 3.87 in the test period, compared to 6.38 for the GAM model. In the completely external validation run, the 1-month GAMBOOST model underestimated malaria admissions by an average of 2.98, as shown by the MAE value, compared to 5.26 admissions for the GAM model.
The GAMBOOST models with 2-month and 3-month lead times showed better predictions for the test period and better predictive accuracy than the GAM models. The GAM model with the 3-month lead time showed the worst prediction accuracy, with an R2 of 16% in the test period compared with 74% in the training period, indicating overfitting, whereas the GAMBOOST model for the same lead time had an R2 of 50% and 73%, respectively.
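The accuracy measures reported in Table 2 can be sketched in R as follows (my own illustration; obs and pred stand for hypothetical vectors of observed and forecast monthly admissions, and squared correlation is only one common way to define R2 for forecast evaluation):

```r
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))  # root mean squared error
mae  <- function(obs, pred) mean(abs(obs - pred))       # mean absolute error
r2   <- function(obs, pred) cor(obs, pred)^2            # squared correlation
```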
## Discussion
To forecast monthly pediatric malaria admissions at a district hospital in Western Kenya, we developed two structurally different models using satellite data of LST, ET and precipitation with a lead time of 1 to 3 months. We utilized a robust validation scheme of 5-fold cross-validation and withheld the year 2013 from the model building to infer the model’s predictive generalizability. We found one of the model structures involving generalized additive models with a boosting algorithm providing the best forecasts at all lead times.
The basic reproduction number (R0) for malaria depends on a number of factors, such as mosquito biting rate, mosquito density and extrinsic incubation period of malaria parasites in the mosquito host. All of these factors are affected by temperature22,23 and rainfall24,25. At suitable temperatures, mosquito development time is reduced thus providing stable transmission in endemic regions, such as Western Kenya. We used satellite derived LST, precipitation and ET as proxies to these factors at various forecast lead times. The lead time of forecast provides a window for users of the forecast information, such as malaria control managers, to act.
The seasonal distribution of malaria admissions in the study area changed considerably and exhibited a decreasing trend over time with an abrupt increase observed in 2008. Similar patterns have been observed in other areas in Western Kenya between 2002 and 201026. This could be due to several factors, including interventions, sudden movements of susceptible people into endemic areas (e.g. the migration of people back into the study area after the post-election violence in Kenya in 200827), and changes in the seasonality of environmental conditions due to climate variability and El Nino years28. The varying annual peak in admissions is a challenge for developing forecasting models in endemic settings that rely on cyclic pattern of disease transmission.
Our analysis has shown that boosted regression methods can help improve model fit through iterative variable selection. This keeps the chosen regression parameters stable even if the mean trend of malaria incidence changes with the use of control strategies. The GAMBOOST method has been shown to better fit data that are non-stationary29, as the variance of the response variable can be time dependent in this model. At all lead times, the GAMBOOST models captured the variation well during both training and testing. This indicated that the model greatly reduced overfitting, resulting in better forecast accuracy. The normalized accuracy parameters were very comparable between the 5-fold cross-validation and the 2013 test period. In comparison, the GAM model optimized the coefficients for the training period but could not capture the patterns well in the out-of-sample 2013 dataset, resulting in poor predictions in most of the out-of-sample test series. The GAM model could not correctly identify the peak months of malaria admissions and underestimated the number of admissions. This means that the model overfitted the training data and thus had very unstable or biased regression parameters.
Early warning systems rely on thresholds to issue alerts. Models that under-predict are likely to fail to issue warnings when there are true epidemics, while models that over-predict can issue false alerts. The GAMBOOST models had the smallest mean absolute errors in the validation period, which suggested that they could potentially be used to issue alerts based on thresholds. Depending on the thresholds set, the GAMBOOST model can potentially underestimate high-transmission months. However, this malaria-endemic setting has no set threshold. In this situation, a predicted increase in malaria admissions can trigger a response action without necessarily considering the magnitude. Malaria control managers could define a threshold for simpler control response routines. The prediction accuracy of outbreak/no outbreak could then be estimated using receiver operating characteristic curves and the area under the curve (AUC), and such methods allow for tuning of the outbreak probability threshold. Thus, even a lower prediction that picks up the correct outbreak pattern would yield high sensitivity and specificity by the AUC after calibration to the set threshold.
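A hedged R sketch of the ROC/AUC idea using the pROC package (my own illustration; outbreak is a hypothetical 0/1 indicator of whether a month exceeded the chosen threshold and pred_prob a hypothetical predicted outbreak probability):

```r
library(pROC)
roc_obj <- roc(response = outbreak, predictor = pred_prob)
auc(roc_obj)             # area under the ROC curve
coords(roc_obj, "best")  # threshold balancing sensitivity and specificity
```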
The GAMBOOST and GAM models provided better prediction at a lag of one month. This is mainly because the number of malaria cases in a particular month is strongly correlated with the number of cases in the preceding month than those two or three months before. This is consistent with most models using autoregressive terms for monthly malaria forecasts9,20. A model with two to four months lead time was developed for epidemic prediction in the Western Kenyan highlands30. The one-month lead time is very short for action. However, given that this is an endemic area, intervention strategies can potentially be marshalled in a short period if epidemic preparedness and response strategies are in place. Similarly, actions can be fine-tuned or intensified when lead time and uncertainty decrease with models consistently identifying epidemic patterns. The model can be improved to provide longer lead times by using seasonal forecasts, which provide lead times ranging from one to six months6.
This study has a number of limitations. The time series data used covered periods during which a number of vector control strategies were implemented in Karemo division in Siaya county. Indoor residual spraying began in 2004 in Karemo, and insecticide-treated bednet use was scaled up from 2006 onwards31,32. Because of malaria interventions, malaria incidence does not correlate well with seasonal weather forecasts, and it has been suggested that data collected during malaria control periods should not be used for model training21. The interventions implemented in the study area over the study period might have had an impact on the inter-annual variation in malaria transmission and the long-term trends. To improve prediction accuracy, it is important to account for these intervention measures in the models. The main challenge is to determine when an intervention started, how long it was implemented and what the extent of its coverage was in order to correctly include it in the time series data. Therefore, we suggest further time series analyses to identify intervention periods and intensity levels. Several unmeasured factors in this study could have acted non-linearly to affect malaria transmission. In this study, it was impossible to consider all these factors in the model. To account for these unmeasured factors, we used a spline of the trend function, which may not be sufficient to capture all the complex processes affecting malaria transmission. In this analysis, the satellite data were aggregated over a large area, which reduced spatial accuracy. By using high-resolution data, it would be possible to develop high-resolution spatial-temporal models to capture malaria transmission and attain better predictive accuracy.
The models developed in this study were purely for prediction purposes; therefore, we chose only models with high prediction accuracy. Consequently, we cannot infer the effect of remote sensing factors on malaria morbidity. Another limitation of this study is measurement errors on environmental data, as well as malaria incidence data. The limitation due to the quality of satellite data can be circumvented by integrating locally collected environmental data. For example, the predictive accuracy of the model can be improved by using datasets that combine both satellite and ground data, for example the climate data that will become available from the Enhancing National Climate Services initiative (ENACTS)33.
Different regions have varying malaria epidemiology; therefore, the model should be tested and validated before its deployment to other areas. Lastly, we used the same lag times for all environmental variables in the model. As is evident from other studies, however, the lagged patterns with malaria indicators vary for each term16,19.
In conclusion, two different models using satellite data for LST, precipitation and ET were tested to forecast pediatric malaria admissions in Western Kenya. The GAMBOOST model with a lead time of 1 month proved to have the best accuracy to predict monthly admissions at a district hospital. This lead time may be short but can provide enough time to intensify malaria control interventions in an endemic area where a malaria preparedness and response plan is in place.
This study shows that the use of boosting regression in GAM models can be beneficial in early warning systems to improve predictions. We hope that our findings would encourage the continued use of GAMBOOST in early warnings systems and the wider development and use of early warnings in malaria control.
## Methods
### Study setting and malaria data
The study is based at the KEMRI/CDC HDSS field site in Western Kenya. The KEMRI/CDC HDSS has been operational in Asembo since 2001. It expanded to include Gem in 2002 and Karemo in 2007. The HDSS monitors the health and demographic changes in the study population through routine collection of health data at health care facilities and demographic and socio-economic data from households. Over 240,000 individuals are under surveillance. Some of the demographic information monitored include births, deaths, and migration. Information on cause of death is also collected through verbal autopsy. Morbidity data have been routinely collected at the health facilities in the HDSS area. Hospital-based surveillance is currently conducted at three health facilities; inpatient data are routinely collected at the Siaya district hospital, and outpatient data at the health facilities in Njenjra and Ting Wang’i. The Siaya district hospital is a referral hospital in Karemo division of Siaya county. The KEMRI/CDC HDSS has been described in detail elsewhere34,35.
In this study, we used malaria admissions data collected at the Siaya district hospital for the period 2003–2013. The hospital surveillance data were complete for this period and collected routinely by the health care workers employed by the KEMRI/CDC. We extracted the admissions data for children under five years of age with confirmed Plasmodium falciparum malaria. The data were then aggregated to monthly time scale for each year to create a time series dataset.
### Satellite environmental data
We used satellite-derived day and night LSTs, NDVI and precipitation data for the period 2003–2013. Rainfall estimates were extracted from NASA's Tropical Rainfall Measuring Mission (TRMM) 3B42_V7 product for daily accumulated rainfall, available at 0.25° by 0.25° spatial resolution. Day and night LSTs were extracted from the Moderate Resolution Imaging Spectro-radiometer (MODIS) MOD11A1 product with a 1-kilometer spatial resolution and daily temporal resolution. We took an average of the day and night LSTs to get a mean LST. In addition to these variables, we also included evapotranspiration data from the MODIS product MOD16, available at 8-day temporal and 1-kilometer spatial resolution. The detailed processing of these datasets was described in an earlier study16. These datasets were aggregated to monthly summaries. We computed monthly totals for rainfall and monthly averages for the other environmental factors.
### Statistical analysis
We used a general additive modelling framework to build forecast models for malaria admissions, with smooth functions of environmental factors at different lead times. Studies have shown nonlinear relationships between weather factors and malaria morbidity and mortality16,19,36,37,38,39. We developed two different general additive models, one using a boosting algorithm to optimize model fit and the other without boosting.
The malaria admissions data used in this study exhibited over-dispersion. In a Poisson distribution, the mean and variance are equal. Over-dispersion occurs when the variance is greater than the mean. To account for over-dispersion, we assumed a negative binomial distribution in both models.
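As a quick illustration of over-dispersion (my own example with hypothetical monthly counts), the variance of such an admission series is typically far larger than the mean, which is why a negative binomial family is preferred over a Poisson:

```r
y <- c(12, 30, 85, 140, 202, 160, 90, 40, 18, 9, 15, 25)  # hypothetical counts
mean(y)  # about 69
var(y)   # far larger than the mean -> over-dispersed
```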
### General Additive Model (GAM)
The general additive model (GAM) without boosting was developed using the mgcv package in R [40]. The model included a cubic regression spline of time to adjust for the overall trend in malaria admissions during the study period. To address the observed within-year seasonality of malaria, we used a cyclic cubic regression function of month to capture the peaks in malaria admissions. Mean LST, ET and precipitation were included as cubic regression splines in the model.
Malaria cases in any given month are likely to be correlated with malaria cases in preceding months. The number of previously infected individuals determines the reservoir of infectious mosquitoes, which in turn affects the current population of infected individuals. To control for this autocorrelation, we included previous malaria cases as autoregressive (AR) terms in the models for each lead time. Previous studies in this HDSS area [20] and in Burundi [41] included a 1-month AR term to adjust for autocorrelation. We also included a simple random effect spline function of month. Smoothing degrees of freedom were optimally determined using generalized cross-validation.
To assess different prediction lead times, three separate models were developed with 1-month, 2-month and 3-month lead times. To attain a 1-month lead time we took a lag of one month of the environmental factors and malaria cases, and for the 2-month and 3-month lead times we took lags of two and three months, respectively.
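As an illustration of this lag construction (a minimal sketch with hypothetical values; the original analysis was carried out in R):

```python
import pandas as pd

# Hypothetical monthly time series of admissions and environmental covariates
df = pd.DataFrame({
    "admissions": [120, 95, 210, 340, 280, 150],
    "lst":        [31.2, 32.0, 30.5, 29.8, 29.1, 30.0],
    "precip":     [40, 15, 180, 220, 160, 60],
    "et":         [95, 90, 110, 120, 115, 100],
})

# For an L-month lead time, predictors and AR terms are lagged by L months
for lead in (1, 2, 3):
    for col in ("lst", "precip", "et", "admissions"):
        df[f"{col}_lag{lead}"] = df[col].shift(lead)

# Rows whose lagged values fall before the start of the series are dropped
model_data_1month = df.dropna(subset=["lst_lag1", "precip_lag1", "et_lag1", "admissions_lag1"])
print(model_data_1month.head())
```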
The model equations were:
$$\log(y_t) = s(\mathrm{time}) + s(\mathrm{month}, \mathrm{bs}=\text{"cc"}) + s(\mathrm{LST}_{t-1}) + s(\mathrm{Precipitation}_{t-1}) + s(\mathrm{ET}_{t-1}) + s(\mathrm{month}, \mathrm{bs}=\text{"re"}) + s(\mathrm{MAL}_{t-1}) \qquad (1)$$

$$\log(y_t) = s(\mathrm{time}) + s(\mathrm{month}, \mathrm{bs}=\text{"cc"}) + s(\mathrm{LST}_{t-2}) + s(\mathrm{Precipitation}_{t-2}) + s(\mathrm{ET}_{t-2}) + s(\mathrm{month}, \mathrm{bs}=\text{"re"}) + s(\mathrm{MAL}_{t-2}) \qquad (2)$$

$$\log(y_t) = s(\mathrm{time}) + s(\mathrm{month}, \mathrm{bs}=\text{"cc"}) + s(\mathrm{LST}_{t-3}) + s(\mathrm{Precipitation}_{t-3}) + s(\mathrm{ET}_{t-3}) + s(\mathrm{month}, \mathrm{bs}=\text{"re"}) + s(\mathrm{MAL}_{t-3}) \qquad (3)$$

$$y_t \sim \text{Negative Binomial}$$
where s is a smoothing spline; bs = “cc” is the cyclic cubic regression spline basis function of month to control for seasonality; bs = “re” is the random effect spline basis; and MAL represents the autoregressive malaria cases. The other spline functions are cubic regression splines. Models (1), (2), and (3) correspond to 1-month, 2-month and 3-month prediction lead times, respectively.
### General Additive Model with boosting (GAMBOOST)
The general additive model with boosting was developed using the gamboostLSS package [42,43] in R. gamboostLSS is a boosting method for GAMs for location, scale and shape (GAMLSS). The method uses a gradient boosting algorithm for variable and smoothness selection: the model starts with weak base learners and optimizes the fit in each iteration, and only base learners selected during the iterations enter the final model. Similar to the GAM model, we used smooth base learners of time, mean LST, ET, precipitation and previous malaria cases as AR terms for each lead time. We also included a random-effects base learner for month and a cyclic base learner for month. The equations for each model are as follows:
$$\log(y_t) = \mathrm{bbs}(\mathrm{time}) + \mathrm{bbs}(\mathrm{month}, \mathrm{cyclic}=\mathrm{T}) + \mathrm{bbs}(\mathrm{LST}_{t-1}) + \mathrm{bbs}(\mathrm{Precipitation}_{t-1}) + \mathrm{bbs}(\mathrm{ET}_{t-1}) + \mathrm{brandom}(\mathrm{month}) + \mathrm{bbs}(\mathrm{MAL}_{t-1}) \qquad (4)$$

$$\log(y_t) = \mathrm{bbs}(\mathrm{time}) + \mathrm{bbs}(\mathrm{month}, \mathrm{cyclic}=\mathrm{T}) + \mathrm{bbs}(\mathrm{LST}_{t-2}) + \mathrm{bbs}(\mathrm{Precipitation}_{t-2}) + \mathrm{bbs}(\mathrm{ET}_{t-2}) + \mathrm{brandom}(\mathrm{month}) + \mathrm{bbs}(\mathrm{MAL}_{t-2}) \qquad (5)$$

$$\log(y_t) = \mathrm{bbs}(\mathrm{time}) + \mathrm{bbs}(\mathrm{month}, \mathrm{cyclic}=\mathrm{T}) + \mathrm{bbs}(\mathrm{LST}_{t-3}) + \mathrm{bbs}(\mathrm{Precipitation}_{t-3}) + \mathrm{bbs}(\mathrm{ET}_{t-3}) + \mathrm{brandom}(\mathrm{month}) + \mathrm{bbs}(\mathrm{MAL}_{t-3}) \qquad (6)$$

$$y_t \sim \text{Negative Binomial}$$
where bbs is the smooth base learner, the smooth base learner for month is set to be cyclic to control for seasonality, brandom is the random-effects base learner for month, and MAL represents the autoregressive malaria cases. Models (4), (5), and (6) correspond to 1-month, 2-month and 3-month prediction lead times, respectively.
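For intuition, the component-wise gradient boosting idea behind such models can be sketched as follows. This is a deliberately simplified toy for squared-error loss with linear base learners, not the gamboostLSS algorithm itself, which uses penalized spline base learners and models several distribution parameters:

```python
import numpy as np

def componentwise_l2_boost(X, y, n_iter=200, step=0.1):
    """Toy component-wise boosting: at each iteration, fit every feature's
    simple linear base learner to the current residuals and update only the
    one that reduces the squared error the most."""
    n, p = X.shape
    coef = np.zeros(p)
    offset = y.mean()               # start from the intercept-only model
    fit = np.full(n, offset)
    for _ in range(n_iter):
        resid = y - fit             # negative gradient of the squared-error loss
        best = None
        for j in range(p):
            xj = X[:, j]
            b = xj @ resid / (xj @ xj)          # least-squares fit to residuals
            sse = np.sum((resid - b * xj) ** 2)
            if best is None or sse < best[2]:
                best = (j, b, sse)
        j, b, _ = best
        coef[j] += step * b                     # shrunken update of one learner only
        fit += step * b * X[:, j]
    return offset, coef

# Synthetic example: the informative features dominate the fitted coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(np.round(componentwise_l2_boost(X, y)[1], 2))
```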
### Model validation
To determine the optimal number of boosting iterations we performed k-fold cross-validation on the training dataset. K-fold cross-validation involves partitioning the training data into k subsets; in each run, one subset is held out for validation while the remaining k-1 subsets are used for model fitting. The number of iterations giving the lowest prediction error on the held-out subsets is chosen.
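A minimal sketch of this selection procedure is given below; the fit-and-predict step is a hypothetical placeholder standing in for refitting the boosted model with a given number of iterations:

```python
import numpy as np

def choose_n_iterations(n_obs, candidate_iters, fit_and_predict_error, k=5, seed=0):
    """Pick the number of boosting iterations with the lowest average
    out-of-sample error across k folds. `fit_and_predict_error(train_idx,
    test_idx, n_iter)` is a hypothetical callback that fits the model on the
    training indices with `n_iter` iterations and returns the test error."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_obs)
    folds = np.array_split(idx, k)
    mean_errors = []
    for n_iter in candidate_iters:
        errors = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            errors.append(fit_and_predict_error(train_idx, test_idx, n_iter))
        mean_errors.append(np.mean(errors))
    return candidate_iters[int(np.argmin(mean_errors))]
```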
We performed 5-fold cross-validation with 1,000 initial iterations and a step size of 0.01 to determine the number of boosting iterations for the gamboostLSS model. To assess the predictive ability of the models, we split the data into training and testing datasets: the time series for the period 2003–2012 was used for model training, and the 2013 time series for model testing. The R-squared statistic, root mean squared error (RMSE), normalized mean squared error (NMSE), mean absolute error (MAE) and normalized mean absolute error (NMAE) were used for model comparison. The equations for these measures are given below:
$$MAE = \frac{1}{n}\sum_{i=1}^{n} |e_i|$$

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2}$$

$$NMSE = \frac{1}{\bar{Y}}\sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2}$$

where $\bar{Y}$ is the scaling factor, $e_i = f_i - y_i$, $f_i$ is the prediction and $y_i$ the observed value. The NMAE is scaled using the lowest and the highest values in the series.
These measures have been explained in detail in Shcherbakov et al. [44]. We included the normalized measures to be able to assess prediction accuracy between the training and test periods. These measures are relevant when the series are on different scales [44]; in this case, mean malaria admissions differ between the test and training periods.
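For concreteness, the accuracy measures can be computed as follows (a minimal numpy sketch; the actual statistics in this study were produced with the DMwR package in R):

```python
import numpy as np

def accuracy_measures(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    e = predicted - observed
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    nmse = rmse / observed.mean()                      # scaled by the mean of the series
    nmae = mae / (observed.max() - observed.min())     # scaled by the range of the series
    return {"MAE": mae, "RMSE": rmse, "NMSE": nmse, "NMAE": nmae}

print(accuracy_measures([100, 150, 200, 250], [110, 140, 220, 230]))
```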
All analysis was done using the R statistical software [45]. The DMwR package [46] was used to produce the forecast accuracy statistics.
### Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
### Ethics Statement
The protocols for KEMRI/CDC HDSS are approved by both CDC (#3308, Atlanta, GA) and KEMRI (#1801, Nairobi, Kenya) Institutional Review Boards. Informed consent was obtained from all the participants. The study was ethically conducted adhering to the Helsinki declaration and current ethical guidelines.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Change history
• ### 19 March 2018
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.
## References
1. WHO. World Malaria Report (WHO, 2015).
2. WHO. Malaria Early Warning Systems: Concepts, Indicators and Partners. A Framework for Field Research in Africa. (WHO, 2001).
3. Davis, R. G. et al. Early detection of malaria foci for targeted interventions in endemic southern Zambia. Malar J 10, 260 (2011).
4. Hii, Y. L., Zhu, H., Ng, N., Ng, L. C. & Rocklov, J. Forecast of dengue incidence using temperature and rainfall. PLoS neglected tropical diseases 6, e1908, doi:10.1371/journal.pntd.0001908 (2012).
5. Lowe, R. et al. The development of an early warning system for climate-sensitive disease risk with a focus on dengue epidemics in Southeast Brazil. Statistics in medicine 32, 864–883, doi:10.1002/sim.5549 (2013).
6. Thomson, M. C. et al. Malaria early warnings based on seasonal climate forecasts from multi-model ensembles. Nature 439, 576–579, doi:10.1038/nature04503 (2006).
7. Thomson, M., Indeje, M., Connor, S., Dilley, M. & Ward, N. Malaria early warning in Kenya and seasonal climate forecasts. Lancet (London, England) 362, 580, doi:10.1016/s0140-6736(03)14135-9 (2003).
8. Lauderdale, J. M. et al. Towards seasonal forecasting of malaria in India. Malar J 13, 310, doi:10.1186/1475-2875-13-310 (2014).
9. Ch, S. et al. A Support Vector Machine-Firefly Algorithm based forecasting model to determine malaria transmission. Neurocomputing 129, 279–288, doi:10.1016/j.neucom.2013.09.030 (2014).
10. Amek, N. et al. Spatio-temporal modeling of sparse geostatistical malaria sporozoite rate data using a zero inflated binomial model. Spatial and spatio-temporal epidemiology 2, 283–290, doi:10.1016/j.sste.2011.08.001 (2011).
11. Diboulo, E. et al. Bayesian variable selection in modelling geographical heterogeneity in malaria transmission from sparse data: an application to Nouna Health and Demographic Surveillance System (HDSS) data, Burkina Faso. Parasites & vectors 8, 118, doi:10.1186/s13071-015-0679-7 (2015).
12. Zinszer, K. et al. A scoping review of malaria forecasting: past work and future directions. BMJ open 2, e001992, doi:10.1136/bmjopen-2012-001992 (2012).
13. Hay, S. I., Rogers, D. J., Shanks, G. D., Myers, M. F. & Snow, R. W. Malaria early warning in Kenya. Trends in parasitology 17, 95–99, doi:10.1016/S1471-4922(00)01763-3 (2001).
14. Midekisa, A., Senay, G., Henebry, G. M., Semuniguse, P. & Wimberly, M. C. Remote sensing-based time series models for malaria early warning in the highlands of Ethiopia. Malar J 11, 165, doi:10.1186/1475-2875-11-165 (2012).
15. Zinszer, K. et al. Forecasting malaria in a highly endemic country using environmental and clinical predictors. Malaria Journal 14, 1–9, doi:10.1186/s12936-015-0758-4 (2015).
16. Sewe, M. O., Ahlm, C. & Rocklov, J. Remotely Sensed Environmental Conditions and Malaria Mortality in Three Malaria Endemic Regions in Western Kenya. PloS one 11, e0154204, doi:10.1371/journal.pone.0154204 (2016).
17. Santosa, A. & Byass, P. Diverse Empirical Evidence on Epidemiological Transition in Low- and Middle-Income Countries: Population-Based Findings from INDEPTH Network Data. PloS one 11, e0155753, doi:10.1371/journal.pone.0155753 (2016).
18. Amek, N. O. et al. Childhood cause-specific mortality in rural Western Kenya: application of the InterVA-4 model. Global health action 7, 25581, doi:10.3402/gha.v7.25581 (2014).
19. Sewe, M. et al. The Association of Weather Variability and Under Five Malaria Mortality in KEMRI/CDC HDSS in Western Kenya 2003 to 2008: A Time Series Analysis. International journal of environmental research and public health 12, 1983–1997, doi:10.3390/ijerph120201983 (2015).
20. Amek, N. et al. Spatial and temporal dynamics of malaria transmission in rural Western Kenya. Parasites & vectors 5, 86, doi:10.1186/1756-3305-5-86 (2012).
21. Githeko, A. K., Ogallo, L., Lemnge, M., Okia, M. & Ototo, E. N. Development and validation of climate and ecosystem-based early malaria epidemic prediction models in East Africa. Malar J 13, 329, doi:10.1186/1475-2875-13-329 (2014).
22. Paaijmans, K. P., Blanford, S., Chan, B. H. & Thomas, M. B. Warmer temperatures reduce the vectorial capacity of malaria mosquitoes. Biology letters 8, 465–468, doi:10.1098/rsbl.2011.1075 (2012).
23. Paaijmans, K. P., Imbahale, S. S., Thomas, M. B. & Takken, W. Relevant microclimate for determining the development rate of malaria mosquitoes and possible implications of climate change. Malar J 9, 196, doi:10.1186/1475-2875-9-196 (2010).
24. Ogden, N. H. et al. Estimated effects of projected climate change on the basic reproductive number of the Lyme disease vector Ixodes scapularis. Environmental health perspectives 122, 631–638, doi:10.1289/ehp.1307799 (2014).
25. Parham, P. E. & Michael, E. Modeling the effects of weather and climate change on malaria transmission. Environmental health perspectives 118, 620–626, doi:10.1289/ehp.0901256 (2010).
26. Zhou, G. et al. Changing patterns of malaria epidemiology between 2002 and 2010 in Western Kenya: the fall and rise of malaria. PloS one 6, e20318, doi:10.1371/journal.pone.0020318 (2011).
27. Feikin, D. R. et al. Mortality and health among internally displaced persons in western Kenya following post-election violence, 2008: novel use of demographic surveillance. Bull World Health Organ 88, 601–608, doi:10.2471/blt.09.069732 (2010).
28. Hashizume, M., Terao, T. & Minakawa, N. The Indian Ocean Dipole and malaria risk in the highlands of western Kenya. Proc Natl Acad Sci USA 106, 1857–1862, doi:10.1073/pnas.0806544106 (2009).
29. Villarini, G., Smith, J. A. & Napolitano, F. Nonstationary modeling of a long record of rainfall and temperature over Rome. Advances in Water Resources 33, 1256–1267, doi:10.1016/j.advwatres.2010.03.013 (2010).
30. Githeko, A. K. & Ndegwa, W. Predicting Malaria Epidemics in the Kenyan Highlands Using Climate Data: A Tool for Decision Makers. Global Change and Human Health 2, 54–63, doi:10.1023/a:1011943131643 (2001).
31. Shuford, K. et al. Community perceptions of mass screening and treatment for malaria in Siaya County, western Kenya. Malaria Journal 15, 1–13, doi:10.1186/s12936-016-1123-y (2016).
32. Shah, M. et al. Assessment of molecular markers for anti-malarial drug resistance after the introduction and scale-up of malaria control interventions in western Kenya. Malaria Journal 14, 1–14, doi:10.1186/s12936-015-0588-4 (2015).
33. Dinku, T. et al. The ENACTS Approach: Transforming climate services in Africa one country at a time. World Policy Papers (2016).
34. Odhiambo, F. O. et al. Profile: the KEMRI/CDC Health and Demographic Surveillance System–Western Kenya. International journal of epidemiology 41, 977–987, doi:10.1093/ije/dys108 (2012).
35. Adazu, K. et al. Health and demographic surveillance in rural western Kenya: a platform for evaluating interventions to reduce morbidity and mortality from infectious diseases. The American journal of tropical medicine and hygiene 73, 1151–1158 (2005).
36. Colon-Gonzalez, F. J., Tompkins, A. M., Biondi, R., Bizimana, J. P. & Namanya, D. B. Assessing the effects of air temperature and rainfall on malaria incidence: an epidemiological study across Rwanda and Uganda. Geospatial health 11, 379, doi:10.4081/gh.2016.379 (2016).
37. Thomson, M. C., Mason, S. J., Phindela, T. & Connor, S. J. Use of rainfall and sea surface temperature monitoring for malaria early warning in Botswana. The American journal of tropical medicine and hygiene 73, 214–221 (2005).
38. Guo, C. et al. Malaria incidence from 2005-2013 and its associations with meteorological factors in Guangdong, China. Malar J 14, 116, doi:10.1186/s12936-015-0630-6 (2015).
39. Wardrop, N. A., Barnett, A. G., Atkinson, J. A. & Clements, A. C. Plasmodium vivax malaria incidence over time and its association with temperature and rainfall in four counties of Yunnan Province, China. Malar J 12, 452, doi:10.1186/1475-2875-12-452 (2013).
40. Wood, S. N. Generalized Additive Models: An Introduction with R. (Chapman and Hall/CRC, 2006).
41. Nkurunziza, H., Gebhardt, A. & Pilz, J. Forecasting Malaria Cases in Bujumbura. International Journal of Mathematical, Computational, Physical, Electrical and Computer Engineering 4, 14–19 (2010).
42. Hofner, B., Mayr, A., Fenske, N. & Schmid, M. gamboostLSS: Boosting Methods for GAMLSS Models. R package (2016).
43. Hofner, B., Mayr, A. & Schmid, M. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework. Journal of Statistical Software (2015).
44. Shcherbakov, M. V., Brebels, A., Shcherbakova, N. L., Tyukov, A. P., Janovsky, T. A. & Kamaev, V. A. A Survey of Forecast Error Measures. World Applied Sciences Journal 24, 171–176, doi:10.5829/idosi.wasj.2013.24.itmies.80032 (2013).
45. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2015). URL www.R-project.org/.
46. Torgo, L. Data Mining with R: Learning with Case Studies. (Chapman & Hall/CRC, Boca Raton, FL, 2010).
## Acknowledgements
We acknowledge the staff at the KEMRI/CDC HDSS branch and residents in Asembo, Gem and Karemo who provided the data. This research was partly undertaken within the Umeå Centre for Global Health Research at Umeå University, with support from FAS, the Swedish Council for Working Life and Social Research (grant no. 2006-1512).
## Author information
### Affiliations
1. #### Kenya Medical Research Institute, Centre for Global Health Research, Box 1578, Kisumu, 40100, Kenya
• Maquins Odhiambo Sewe
2. #### Umeå University, Department of Public Health and Clinical Medicine,Epidemiology and Global Health Unit, Umeå Centre for Global Health Research, Umeå, SE-901 85, Sweden
• Maquins Odhiambo Sewe
• & Joacim Rocklöv
3. #### New York University, College of Global Public Health, New York, 41 East 11th street, New York, NY, 10003, United States
• Yesim Tozan
4. #### Division of Social Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
• Yesim Tozan
• Clas Ahlm
6. #### Institute of Public Health, University of Heidelberg, Im Neuenheimer Feld 324, 69120, Heidelberg, Germany
• Joacim Rocklöv
### Contributions
M.O.S., Y.T., C.A., and J.R. conceived and designed the study. M.O.S. performed the study, analyzed the data and wrote the manuscript. All authors reviewed and revised the manuscript.
### Competing Interests
The authors declare that they have no competing interests.
### Corresponding author
Correspondence to Maquins Odhiambo Sewe.
## Electronic supplementary material
### DOI
https://doi.org/10.1038/s41598-017-02560-z
# A quick recap on types of numbers
## Natural Numbers
Counting with at least 1 object.
Example:
Length of a football field.
## Whole Numbers
Counting that can include 0 objects
Example:
How many cakes do I have?
## Integers
Counting where negative numbers are possible.
Example:
How much money I have in my bank account
## Rationals
Outcomes which can be represented as a ratio of two integers.
Example:
Conversion between metric and imperial
## Real Numbers
All quantities on the continuous number line: the rationals together with quantities not expressible as a ratio of integers (such as $\sqrt{2}$ and $\pi$).
# Example questions
The structure will be: Description of counting problem + 5 different choices to choose from.
• “Number of people in room” - $\mathbb{W}$
• “Number of cells in tissue sample” - $\mathbb{N}$
• “Time taken by 100m sprint race” - either rational numbers (as a measured value) or real numbers (as an idealized quantity): $\mathbb{Q}$ or $\mathbb{R}$
Hilbert’s interpretation of quantification
Hilbert interpreted quantification in terms of his $\varepsilon$-function as follows [1]:
“IV. The logical $\varepsilon$-axiom
13. $A(a) \rightarrow A(\varepsilon (A))$
Here $\varepsilon (A)$ stands for an object of which the proposition $A(a)$ certainly holds if it holds of any object at all; let us call $\varepsilon$ the logical $\varepsilon$-function.
1. By means of $\varepsilon$, “all” and “there exists” can be defined, namely, as follows:
(i) $(\forall a) A(a) \leftrightarrow A(\varepsilon(\neg A))$
(ii) $(\exists a) A(a) \leftrightarrow A(\varepsilon(A)) \ldots$
On the basis of this definition the $\varepsilon$-axiom IV(13) yields the logical relations that hold for the universal and the existential quantifier, such as:
$(\forall a) A(a) \rightarrow A(b)$ … (Aristotle’s dictum),
and:
$\neg((\forall a) A(a)) \rightarrow (\exists a)(\neg A(a))$ … (principle of excluded middle).”
Thus, Hilbert’s interpretation of universal quantification, defined in (i), is that the sentence $(\forall x)F(x)$ holds (under a consistent interpretation $\mathcal{I}$) if, and only if, $F(a)$ holds whenever $\neg F(a)$ holds for any given $a$ (in $\mathcal{I}$); hence $\neg F(a)$ does not hold for any $a$ (since $\mathcal{I}$ is consistent), and so $F(a)$ holds for any given $a$ (in $\mathcal{I}$).
Further, Hilbert’s interpretation of existential quantification—defined in (ii)—is that $(\exists x)F(x)$ holds (in $\mathcal{I}$) if, and only if, $F(a)$ holds for some $a$ (in $\mathcal{I}$).
Brouwer’s objection to such an unqualified interpretation of the existential quantifier was that, for the interpretation to be considered sound when the domain of the quantifiers under an interpretation is infinite, the decidability of the quantification under the interpretation must be constructively verifiable in some intuitively and mathematically acceptable sense of the term ‘constructive’ [2].
Two questions arise:
(a) Is Brouwer’s objection relevant today?
(b) If so, can we interpret quantification ‘constructively’?
The standard interpretation of PA
We consider the structure $[\mathbb{N}]$, defined as:
$\mathbb{N}$ … the set of natural numbers;
$=$ … equality;
$S$ … the successor function;
$+$ … the addition function;
$\ast$ … the product function;
$0$ … the null element,
that serves for a definition of today’s standard interpretation, say $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$, of the first-order Peano Arithmetic PA.
Now, if $[(\forall x)F(x)]$ and $[(\exists x)F(x)]$ are PA-formulas, and the relation $F(x)$ is the interpretation in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ of the PA-formula $[F(x)]$ then, in current literature:
(1a) $[(\forall x)F(x)]$ is defined true in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ if, and only if, for any given natural number $n$, the sentence $F(n)$ holds in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$;
(1b) $[(\exists x)F(x)]$ is an abbreviation of $[\neg (\forall x)\neg F(x)]$, and is defined as true in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ if, and only if, it is not the case that, for any given natural number $n$, the sentence $\neg F(n)$ holds in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$;
(1c) $F(n)$ holds in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ for some natural number $n$ if, and only if, it is not the case that, for any given natural number $n$, the sentence $\neg F(n)$ holds in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$.
Since (1a), (1b) and (1c) together interpret $[(\forall x)F(x)]$ and $[(\exists x)F(x)]$ in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ as intended by Hilbert’s $\varepsilon$-function, they attract Brouwer’s objection.
A finitary model of PA
Clearly, the specific target of Brouwer’s objection is (1c), which appeals to Platonically non-constructive, rather than intuitively constructive, plausibility.
We can thus re-phrase question (b) more specifically: Can we define an interpretation of PA over $[\mathbb{N}]$ that does not appeal to (1c)?
Now, it follows from Turing’s seminal 1936 paper on computable numbers that every quantifier-free arithmetical function (or relation, when interpreted as a Boolean function) $F$ defines a Turing-machine $TM_{F}$ [3].
We can thus define another interpretation $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ over the structure $[\mathbb{N}]$ [4] where:
(2a) $[(\forall x)F(x)]$ is defined as true in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ if, and only if, the Turing-machine $TM_{F}$ evidences [5] the assertion denoted by $F(n)$ as always true (i.e., as true for any given natural number $n$) in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$;
(2b) $[(\exists x)F(x)]$ is an abbreviation of $[\neg (\forall x)\neg F(x)]$, and is defined as true in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ if, and only if, it is not the case that the Turing-machine $TM_{F}$ evidences the assertion denoted by $F(n)$ as always false (i.e., as false for any given natural number $n$) in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$.
We note that $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ is a finitary model of PA since—when interpreted suitably [6]—all theorems of first-order PA are constructively true in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$.
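As a toy illustration of clause (2a) (ours, and only informal): for a quantifier-free relation such as $F(n)$: "$n^2 + n$ is even", a single program plays the role of $TM_F$ and settles each given instance by a terminating computation.

```python
def F(n: int) -> bool:
    # Quantifier-free arithmetical relation: "n*n + n is even".
    return (n * n + n) % 2 == 0

# In the sense of (2a), the relation is algorithmically computable as true:
# each given instance F(0), F(1), F(2), ... is settled by a finite computation.
print(all(F(n) for n in range(10_000)))  # checks finitely many instances only
```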
Are both interpretations of PA over the structure $[\mathbb{N}]$ sound?
The structure $[\mathbb{N}]$ can thus be used to define both the standard interpretation $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ and a finitary model $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ for PA.
However, in the finitary model, from the PA-provability of $[\neg (\forall x)F(x)]$, we may only conclude that $TM_{F}$ does not algorithmically compute $F(n)$ as always true in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$.
We may not conclude further that $TM_{F}$ must algorithmically compute $F(n)$ as false in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ for some natural number $n$, since $F(x)$ may be a Halting-type of function that is not algorithmically computable [7].
In other words, we may not conclude from the PA-provability of $[\neg (\forall x)F(x)]$ that $F(n)$ does not hold in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ for some natural number $n$.
The question arises: Are both the interpretations $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ and $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ of PA over the structure $[\mathbb{N}]$ sound?
PA is $\omega$-inconsistent
Now, Gödel has shown [8] how to construct an arithmetical formula $[R(x)]$ such that, if PA is assumed simply consistent, then $[R(n)]$ is PA-provable for any given numeral $[n]$, but $[(\forall x)R(x)]$ is not PA-provable.
He further showed [9] it would follow that if PA is additionally assumed to be $\omega$-consistent, then $[\neg (\forall x)R(x)]$ too is not PA-provable.
Now, Gödel also defined [10] a formal language $P$ as $\omega$-consistent if, and only if, there is no $P$-formula $[F(x)]$ for which:
(i) $[\neg (\forall x)F(x)]$ is $P$-provable,
and:
(ii) $[F(n)]$ is $P$-provable for any given numeral $[n]$ of $P$.
However, a significant consequence of the finitary proof of the consistency of PA in this paper on evidence-based interpretations of PA [11] is that the formula $[\neg (\forall x)R(x)]$ is PA-provable, and so PA is $\omega$-inconsistent!
The interpretation $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ of PA over the structure $[\mathbb{N}]$ is not sound
Now, $R(n)$ holds for any given natural number $n$ since Gödel has defined $R(x)$ [12] such that $R(n)$ is instantiationally equivalent to a primitive recursive relation $Q(n)$ which is algorithmically computable as true in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ for any given natural number $n$ by the Turing-machine $TM_{Q}$.
It follows that we cannot admit the standard Hilbertian interpretation of $[\neg (\forall x)R(x)]$ in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ as:
$R(n)$ is false for some natural number $n$.
In other words, the interpretation $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ of PA over the structure $[\mathbb{N}]$ is not sound.
However, we can interpret $[\neg (\forall x)R(x)]$ in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ as:
It is not the case that the Turing-machine $TM_{R}$ algorithmically computes $R(n)$ as true in $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ for any given natural number $n$.
Moreover, the $\omega$-inconsistent PA is consistent with the finitary interpretation of quantification in (2a) and (2b), since the interpretation $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ of PA over the structure $[\mathbb{N}]$ is sound [13].
Why the interpretation $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ of PA over $[\mathbb{N}]$ is not sound
The reason why the interpretation $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ of PA over the structure $[\mathbb{N}]$ is not sound lies in the fact that, whereas (1b) and (2b) preserve the logical properties of formal PA-negation under interpretation in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ and $\mathcal{I}_{PA(\mathcal{N},\ Algorithmic)}$ respectively, the further non-constructive inference in (1c) from (1b)— to the effect that $F(n)$ must hold in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ for some natural number $n$—does not, and is the one objected to by Brouwer.
Conclusion
If we assume only simple consistency for Hilbert’s system, then we cannot unconditionally define:
$[(\exists x)F(x)]$ is true in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ if, and only if, $F(n)$ holds for some natural number $n$ in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$.
This follows since, if $[(\exists x)F(x)]$ is true in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ if, and only if, $F(n)$ holds for some natural number $n$ in $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$, then PA is necessarily $\omega$-consistent — which is not the case.
Thus the interpretation $\mathcal{I}_{PA(\mathcal{N},\ Standard)}$ is not a model of PA, and Brouwer was justified in his objection to Hilbert’s unqualified interpretation of quantification.
References
Br08 L. E. J. Brouwer. 1908. The Unreliability of the Logical Principles. English translation in A. Heyting, Ed. L. E. J. Brouwer: Collected Works 1: Philosophy and Foundations of Mathematics. Amsterdam: North Holland / New York: American Elsevier (1975): pp.107-111.
Go31 Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. pp.5-38.
Hi27 David Hilbert. 1927. The Foundations of Mathematics. In The Emergence of Logical Empiricism. 1996. Garland Publishing Inc.
Hi30 David Hilbert. 1930. Die Grundlegung der elementaren Zahlenlehre. Mathematische Annalen. Vol. 104 (1930), pp.485-494.
Mu91 Chetan R. Murthy. 1991. An Evaluation Semantics for Classical Proofs. Proceedings of Sixth IEEE Symposium on Logic in Computer Science, pp. 96-109, (also Cornell TR 91-1213), 1991.
Tu36 Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936-7), pp.230-265; corrections, Ibid, vol 43 (1937) pp.544-546. In M. Davis (ed.). 1965. The Undecidable. Raven Press, New York. pp.116-154.
An07 Bhupinder Singh Anand. 2007. Why we shouldn’t fault Lucas and Penrose for continuing to believe in the Gödelian argument against computationalism – I. The Reasoner, Vol(1)6 p3-4.
An12 Bhupinder Singh Anand. 2012. Evidence-Based Interpretations of PA. In Proceedings of the Symposium on Computational Philosophy at the AISB/IACAP World Congress 2012-Alan Turing 2012, 2-6 July 2012, University of Birmingham, Birmingham, UK.
Notes
Note 5: We note that we can, in principle, define the classical ‘satisfaction’ and ‘truth’ of the formulas of a first order arithmetical language such as PA constructively under an interpretation using as evidence the computations of a simple functional language (in the sense of Mu91).
# Strange normal mapping problem
9 replies to this topic
### #1 pachesantiago (Members)
Posted 06 September 2012 - 02:43 PM
Hi guys, having a little problem here, which I think is self-explanatory:
I'm trying to implement a simple normal mapping shader, which I already tried in a program I use to test shaders (DarkShader), but when I loaded it into my engine, that happened: I got those strange black parts.
string Description = "This shader uses preview lights to produce per pixel normal mapped lighting. This shader requires lights.";
string Thumbnail = "Normal Mapping.png";
//The Extra Data map, contains :
// Red : Seams
// Blue : Specular Power
// Green : Specular Ammount
// The seams data is used to multiply the specs inverse, so the seams arent visible
//--------------
//Untweakables
//--------------
float4x4 WorldViewProj : WorldViewProjection;
float4x4 World : World;
float4x4 WorldT : WorldTranspose;
float4x4 WorldIT : WorldInverseTranspose;
float4 LightPos[8] : LIGHTPOSITION;
float3 LightColor[8] : LIGHTCOLOR;
float3 AmbientColor : AMBIENTCOLOR;
float3 eyepos : CameraPosition;
float time : TIME;
int L_Count : LightCount;
//--------------
//Tweakables
//--------------
int TileRes = 512;
float detailScale
<
string UIWidget = "slider";
float UIMax = 16.0;
float UIMin = 0.1;
float UIStep = 0.01;
> = 5.0f;
float SpecularPower1
<
string UIWidget = "slider";
float UIMax = 15;
float UIMin = 0.001;
float UIStep = 0.0001;
> = 3;
float DetailInfluence
<
string UIWidget = "slider";
float UIMax = 1;
float UIMin = 0.001;
float UIStep = 0.0001;
> = 0.24;
/*
float SeamCorrection
<
string UIWidget = "slider";
float UIMax = 10;
float UIMin = 1;
float UIStep = 0.0001;
> = 2;*/
<
string ResourceName = "";
>;
{
MinFilter = Anisotropic;
MagFilter = Linear;
MipFilter = Linear;
};
texture BaseTex
<
string ResourceName = "";
>;
sampler diffuse_smp = sampler_state
{
Texture = <BaseTex>;
MinFilter = Anisotropic;
MagFilter = Linear;
MipFilter = Linear;
};
texture NormalMap
<
string ResourceName = "";
>;
sampler normalmap_smp = sampler_state
{
Texture = <NormalMap>;
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};
texture NormalMapTile
<
string ResourceName = "";
>;
sampler normalmapTile_smp = sampler_state
{
Texture = <NormalMapTile>;
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};
struct app_in
{
float4 pos : POSITION;
float3 normal : NORMAL0;
float3 tangent : TANGENT0;
float3 binormal : BINORMAL0;
float2 uv : TEXCOORD0;
};
struct vs_out
{
float4 pos : POSITION;
float2 uv : TEXCOORD0;
float3 normal : TEXCOORD1;
float3 tangent : TEXCOORD2;
float3 binormal : TEXCOORD3;
float4 wpos : TEXCOORD4;
float2 uvTile : TEXCOORD5;
};
vs_out VS( app_in IN )
{
vs_out OUT;
float4 pos = mul( IN.pos, WorldViewProj );
OUT.pos = pos;
OUT.wpos = mul( IN.pos, World );
OUT.uv = IN.uv;
OUT.uvTile = IN.uv * detailScale;
float3 normal = normalize(mul(IN.normal.xyz,(float3x3)World ));
float3 tangent = normalize(mul(IN.tangent.xyz,(float3x3)World ));
float3 binormal = normalize(mul(IN.binormal.xyz,(float3x3)World ));
//smooth out the tangents and binormals with the normals
float3 b = normalize(cross( normal,tangent ));
b *= sign(dot(b,binormal));
float3 t = normalize(cross( normal,b ));
t *= sign(dot(t,tangent));
float3 t2 = normalize(cross( normal,binormal ));
t2 *= sign(dot(t2,tangent));
float3 b2 = normalize(cross( normal,t2 ));
b2 *= sign(dot(b2,binormal));
//pass normal, tangent, and binormal to pixel shader
OUT.normal = normal;
OUT.tangent = normalize((t+t2)*0.5);
OUT.binormal = normalize((b+b2)*0.5);
return OUT;
}
float luminance ( float3 rgb )
{
return rgb.r*0.3 + rgb.g*0.59 + rgb.b*0.11;
}
float3 getTexel( float2 p, uniform sampler2D texSamp,uniform int TexRes )
{
TexRes *= 8;
p = p*TexRes + 0.5;
float2 i = floor(p);
float2 f = p - i;
f = f*f*f*(f*(f*6.0-15.0)+10.0);
p = i + f;
p = (p - 0.5)/TexRes;
float3 outValue = tex2D( texSamp, p );
return outValue;
}
float4 PS( vs_out IN, uniform int numLights ) : COLOR
{
float3 color = AmbientColor;
float3 n = normalize(IN.normal);
float3 t = normalize(IN.tangent);
float3 b = normalize(IN.binormal);
//build transpose matrix
float3x3 TangentSpace = {t,b,n};
TangentSpace = transpose(TangentSpace);
n = normalize(tex2D(normalmap_smp, IN.uv)*2 - 1);
//float3 nT = normalize(tex2D(normalmapTile_smp, IN.uvTile)*2 - 1);
float3 nT = normalize(getTexel(IN.uvTile,normalmapTile_smp,TileRes));
float3 e = normalize(eyepos - IN.wpos);
float eyeDist = length(eyepos - IN.wpos);
e = mul(e,TangentSpace);
float4 texColor = tex2D( diffuse_smp, IN.uv );
//cycle through all lights, number depending on technique chosen
for ( int i = 0; i < numLights; i++ )
{
float range = LightPos[i].w;
if ( range > 0 )
{
float3 l = (LightPos[i].xyz);// - IN.wpos.xyz);
l = mul(l,TangentSpace);
//calculate attenuation
float dist = length(l);
float att = saturate((range-dist) / range);
//calculate diffuse lighting
l = normalize(l);
float diffuse = dot(n,l)* 0.5 +0.5;
//caculate specular lighting
float3 h = normalize(l+e);
float spec = pow( saturate(dot(n,h)), SpecularPower1 )*(1-pow(1-diffuse,10));
//calculate diffuse lighting
float diffuseTile = saturate(dot(n,l));
//caculate specular lighting
float specTile = pow( saturate(dot(nT,h)), SpecularPower1 )*(1-pow(1-diffuseTile,10));
float finalSpec = (specTile+(spec-DetailInfluence)/2);
// correct seams
//color += ((diffuse * att * LightColor[i]) + finalSpec * att * LightColor[i] * smoothstep(0.3,0.6,luminance(texColor.xyz)));
color = diffuse;
}
}
return float4(color,1.0);// * texColor;
}
//choose the technique corresponding to the number of lights in your application
technique PerPixelLighting
{
pass p0
{
VertexShader = compile vs_2_0 VS( );
PixelShader = compile ps_3_0 PS( L_Count );
}
}
A lot of parts are commented out, because I was trying to find where the error was. I narrowed it down to the N dot L part, I think, but the odd part is that it worked perfectly in the other program.
I ran out of ideas; if someone can help me it would be great.
### #2 pachesantiago (Members)
Posted 07 September 2012 - 12:26 PM
Anyone?
### #3 pachesantiago (Members)
Posted 07 September 2012 - 08:20 PM
Come on, no one? Maybe render states?
### #4 phil_t (Members)
Posted 08 September 2012 - 01:49 AM
Have you tried debugging it in PIX?
### #5 pachesantiago (Members)
Posted 08 September 2012 - 05:20 AM
Yes, but I don't find anything strange. I mean, I don't know assembler, but I can see the pixel history, and it says that it exited the pixel shader with that color.
### #6 phil_t (Members)
Posted 08 September 2012 - 08:34 AM
So you see that it exited the pixel shader with the wrong color. Now you can use PIX to step through the shader code for that pixel and figure out why.
### #7 JohnnyCode (Members)
Posted 09 September 2012 - 04:06 PM
I would say your per-vertex tangent space vectors are messed up (as this is an input that differs from DarkShader).
But still, in your shader you transform the tangent space vectors to world space and pass them to the pixel shader, so I think you end up doing the dot product with a world-space normal and a world-space light direction vector.
The most effective and general way is to:
transform the light direction from world space to object space (inverse world), and then from object space to texture space (tangent matrix); transform the object normal from object space to texture space (tangent matrix); pass those two, the texture-space normal and the texture-space light, to the pixel shader, and
just dot the two direction vectors.
Your tangent triple should transform from object space to texture space. (But it can also transform from texture space to, say, view space; this is done in deferred rendering techniques.) Also, if you have smooth normals on vertices, just compute the tangent and have the binormal computed as the cross product of those two in the vertex shader; you then have a smooth tangent space, though not perpendicular in all directions (but it does not matter, you just can invert the matrix by transposing only, and so on).
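In outline, the flow described above looks like this (a language-agnostic sketch; numpy stands in for shader code, the names are illustrative, and the matrices are assumed to be plain 4x4/3-component numpy arrays):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def tangent_space_diffuse(normal_map_sample, light_pos_world, vertex_pos_object,
                          world, tangent, binormal, normal):
    """Move the light direction into tangent (texture) space and dot it with
    the normal-map normal, instead of mixing spaces."""
    # 1. Light direction in world space, then into object space (inverse world).
    vertex_pos_world = world @ np.append(vertex_pos_object, 1.0)
    light_dir_world = normalize(light_pos_world - vertex_pos_world[:3])
    world_to_object = np.linalg.inv(world)[:3, :3]
    light_dir_object = normalize(world_to_object @ light_dir_world)

    # 2. Object space -> tangent space using the TBN basis (rows: T, B, N).
    tbn = np.vstack([normalize(tangent), normalize(binormal), normalize(normal)])
    light_dir_tangent = normalize(tbn @ light_dir_object)

    # 3. The normal-map sample is already in tangent space ([0,1] -> [-1,1]).
    n = normalize(normal_map_sample * 2.0 - 1.0)
    return max(np.dot(n, light_dir_tangent), 0.0)
```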
### #8 pachesantiago (Members)
Posted 09 September 2012 - 07:58 PM
I would say your per-vertex tangent space vectors are messed up (as this is an input that differs from DarkShader).
But still, in your shader you transform the tangent space vectors to world space and pass them to the pixel shader, so I think you end up doing the dot product with a world-space normal and a world-space light direction vector.
The most effective and general way is to:
transform the light direction from world space to object space (inverse world), and then from object space to texture space (tangent matrix); transform the object normal from object space to texture space (tangent matrix); pass those two, the texture-space normal and the texture-space light, to the pixel shader, and
just dot the two direction vectors.
Your tangent triple should transform from object space to texture space. (But it can also transform from texture space to, say, view space; this is done in deferred rendering techniques.) Also, if you have smooth normals on vertices, just compute the tangent and have the binormal computed as the cross product of those two in the vertex shader; you then have a smooth tangent space, though not perpendicular in all directions (but it does not matter, you just can invert the matrix by transposing only, and so on).
Wow, thanks. In the vertex shader, when you say the "object normal", do you mean the normals in the normal map?
The other thing, about smooth tangent space: I do have the normals smoothed, so I can do as you said. If it is not a problem, can you point me to where I can find information on computing tangents? Can you also explain a bit, or point to documentation of, what you mean by "not perpendicular in all directions"?
Thanks anyway, that's probably it; if that's it, you just saved me a lot of time.
### #9 JohnnyCode (Members)
Posted 10 September 2012 - 11:35 AM
Wow, thanks. In the vertex shader, when you say the "object normal", do you mean the normals in the normal map?
No, I meant the per-vertex normal, not the ones in the texture. The vertex normal is one of the basic vectors of the tangent triple.
So, you have normals on your vertices, and you know how to compute the binormal. The tangent is a direction vector which exists in object space and points in the direction of the texture coordinates' x axis. It is actually the x axis of the texture coordinates expressed in 3D space (object space).
This is a great resource for you:
It also calculates the handedness of a tangent, which you need to decide whether the cross product between tangent and normal points + or -.
So the tangent data is a 4-float vector [x, y, z, handedness].
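For reference, the per-triangle tangent computation that such resources describe usually looks like this (a minimal numpy sketch; inputs are assumed to be numpy 3-vectors for positions/normal and 2-vectors for UVs, and degenerate UVs are not handled):

```python
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2, normal):
    """Compute a tangent aligned with the UV x-axis for one triangle, plus the
    handedness sign used to reconstruct the bitangent as cross(N, T) * handedness."""
    e1, e2 = p1 - p0, p2 - p0            # position edges
    duv1, duv2 = uv1 - uv0, uv2 - uv0    # UV edges
    r = 1.0 / (duv1[0] * duv2[1] - duv2[0] * duv1[1])
    tangent = (e1 * duv2[1] - e2 * duv1[1]) * r
    tangent /= np.linalg.norm(tangent)
    bitangent = (e2 * duv1[0] - e1 * duv2[0]) * r
    handedness = 1.0 if np.dot(np.cross(normal, tangent), bitangent) > 0 else -1.0
    return tangent, handedness
```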
### #10 pachesantiago (Members)
Posted 10 September 2012 - 01:20 PM
Wow, thanks. In the vertex shader, when you say the "object normal", do you mean the normals in the normal map?
No, I meant the per-vertex normal, not the ones in the texture. The vertex normal is one of the basic vectors of the tangent triple.
So, you have normals on your vertices, and you know how to compute the binormal. The tangent is a direction vector which exists in object space and points in the direction of the texture coordinates' x axis. It is actually the x axis of the texture coordinates expressed in 3D space (object space).
This is a great resource for you:
### IgorI's blog
By IgorI, history, 6 weeks ago, translation,
Hi, Codeforces!
I'm glad to invite you to take part in Codeforces Round #696, which will take place on Jan/19/2021 17:35 (Moscow time). Round will be rated for participants with rating less than $2100$. Participants from the first division can take part out of competition.
There will be $6$ problems for $2$ hours. All problems are authored by me. Thanks to adedalic for excellent round coordination and to MikeMirzayanov for Codeforces and Polygon.
Also thanks to testers errorgorn, awoo, rkm62, khiro, AmShZ, IaMaNanBord, Osama_Alkhodairy, Prakash11, Gauravvv, HIS_GRACE, Dragnoid99 for testing the round and giving valuable feedback for problems.
Scoring is $500-1000-1500-2000-2250-3000$.
Good luck!
UPD: Editorial
UPD: Congratulations to the winners!
Of both divisions:
Of the second division:
• +614
» 6 weeks ago, # | +103 I have a naive question: Why setters take too much time on selecting the scoring distribution?
• » » 6 weeks ago, # ^ | +53 It needs to be balanced, one should not feel like he wasted time solving C and D if E wasn't that hard but worth much more points, converse is also true. I hope you get what I am trying to say.
• » » » 5 weeks ago, # ^ | -45 I think, if greens are selected for testing Question A & B Scoring will be more accurate :)
• » » 6 weeks ago, # ^ | ← Rev. 2 → +2 Not revealing score distribution too early doesn't mean it takes so much time for setters. It might change someone's strategy at the beginning of the contest.
• » » 6 weeks ago, # ^ | +224 Procrastination
• » » » 5 weeks ago, # ^ | +77 As a tester, 1-gon is back for contribution. XD
• » » » » 5 weeks ago, # ^ | ← Rev. 2 → +90 Reason ;)...
• » » » » » 5 weeks ago, # ^ | +14 Let's make 1-gon rank 1 again!
• » » » 5 weeks ago, # ^ | +27 I miss the old 1-gon :D
• » » » » 5 weeks ago, # ^ | +17 and we all know that old is gold
• » » » 5 weeks ago, # ^ | +1 Who are you?
» 6 weeks ago, # | 0 Good Luck Participants! — a participant
• » » 6 weeks ago, # ^ | 0 you too :3
» 6 weeks ago, # | +27 When you see some common testers and started thinking how can this be a coincidence?
• » » 5 weeks ago, # ^ | +22 As a tester I feel bad that you didn't tag me :-(
• » » » 5 weeks ago, # ^ | +2 As a future participant I feel bad that you didn't help me being one of the tester :P
• » » » » 5 weeks ago, # ^ | +14 1) I try my best to help problem setters. 2) I try my best to help you and other participants.It is guaranteed that the above two statements are equivalent.Still you can verify that through some algorithm ( if there exist any :-).
• » » 5 weeks ago, # ^ | 0 it's a trap!
» 6 weeks ago, # | +125 this set is nice :3
» 6 weeks ago, # | +22 we should be thankful to HIS_GRACE for testing this round
» 6 weeks ago, # | ← Rev. 4 → -70
» 6 weeks ago, # | ← Rev. 4 → -87 hope it be a good contest and yall good ratings
» 6 weeks ago, # | +45 Problems are very interesting! Good Luck!
» 6 weeks ago, # | ← Rev. 2 → -82 .
• » » 5 weeks ago, # ^ | 0 lol
» 6 weeks ago, # | -10 I wish the Problems have some bold words. Guess that serves me right for trying too hard to speedsolve.
• » » 6 weeks ago, # ^ | -10 I wish the formal statement should be in bold.
• » » » 6 weeks ago, # ^ | +18 I wish the Verdict should be in bold.
• » » » » 6 weeks ago, # ^ | +37 I wish my performance will be bold.
» 6 weeks ago, # | +7 Hoping that my performance will be at least same as my previous performance in your contest.
• » » 6 weeks ago, # ^ | -88 Bro you are trying since 6 years on codeforces why you still a Specialist. Is there any mistake you are making, please give some experience advice also mistake you have made i'm a beginner here want to learn from you :).
• » » » 6 weeks ago, # ^ | +61 At least he is enjoying what he is doing without worrying much about ratings.
• » » » » 6 weeks ago, # ^ | -120 But bro In india geting high rating means high placement, The company would not ask you enjoying or not :( It's reality brother. So I only ask for suggestion that the mistake we could not repeat.
• » » » » » 6 weeks ago, # ^ | +45 That is so not true! I think it is this misconception that is leading to the alarming rise in cheating cases. Companies do not ask for a high rating on competitive programming platforms when they hire. They don't even care for it, in most cases.Recruiters only expect candidates to have a good knowledge of Data Structures and Algorithms, and programming problems on platforms like Codeforces and Codechef sometimes require the use of those topics. As a result, being good at DSA is wrongly correlated with having a high rating on these platforms, and a lot of people end up focussing on rating rather than improving their knowledge and problem-solving skills!
• » » » » » 6 weeks ago, # ^ | ← Rev. 2 → +19 Competitive programming is a sport and it is never meant to be considered as a parameter for getting placements. Read this article for more clarification. https://www.freecodecamp.org/news/mythbusting-competitive-programming/
• » » 6 weeks ago, # ^ | -22 Sorry drifter_kabir Mridul323, If you taken it in negative i'm worried as i wasted my 2 years in college and started cp too late would i get job or not :( does not have any right direction how to improve in that less time.
• » » » 5 weeks ago, # ^ | +11 cruX_072 It is never late to do anything,but bro I let u know 5 star or candidate master will give negative image on your resume if u have not that true level ,So instead of focussing on rating focus on making ur data structure and algorithm stronger and also create a good dev project side by side.Also most of the guys which are here for more than year or two is doing cp for dopamine rush which they got when they have good contests or even seeing an accepted solution. AND AND NO ONE CARE FOR YOUR RATING NOT EVEN UR BEST BUDDIES OF COLLEGE.
• » » » » 5 weeks ago, # ^ | +31 Please don't compare 5 stars on codechef with candidate master on codeforces.
• » » » » » 5 weeks ago, # ^ | +7 It's actually Expert in Codeforces
• » » » » » 5 weeks ago, # ^ | +2 viwreck U know the feeling right,I just wanted to let cruX_072 know that rating or star shouldnt be your priority ,It will not matter if u are really good in dsa...And I know candidate master should never be compared to codechef any star ,cause I have even seen many people with 6 and 7 star by doing only long challenge(U can see India's no 1 on codechef right now for reference)/
» 6 weeks ago, # | +3 I was wondering how they choose the testers , is it a complicated proccess?!
• » » 6 weeks ago, # ^ | +33 You just need to connect with author or coordinator to test the round.
» 6 weeks ago, # | -28 IGORL ORZ
» 6 weeks ago, # | +18 Hoping for shorter problem descriptions just like the announcement.
» 6 weeks ago, # | ← Rev. 6 → -48 As a tester, I request some contribution.
• » » 6 weeks ago, # ^ | +2 What was that, and who are you? :D
• » » » 5 weeks ago, # ^ | +30 To all you downvoting — do you realize that this was in response to the first revision of the comment?
» 6 weeks ago, # | 0 Can't wait.Excited to face some challenging problems
» 6 weeks ago, # | +30 It's just me or someone also felt that Codeforces rounds per month, are less nowadays?
• » » 5 weeks ago, # ^ | +13 I also thinks so. Might be Mike and his team are planning to do something against cheaters, because from past 3-4 months everyone knows what is happening...
• » » » 5 weeks ago, # ^ | +6 Or maybe, because there are lesser contest proposals these days.
• » » » » 5 weeks ago, # ^ | +55 I think the proposal queue is quite long and there are still many writers waiting. From my personal experience, I finally get KAN's review exactly 4 months after I propose my friends' and my contest. Moreover, our contest doesn't even have a coordinator to assign to. Coordinators are so busy and committed to their work. We should thank them for their devotion.Anyway, in the period of less contests, don't forget there are considerable amount of nice problem in GYM and previous contests. Go and solve them!
» 6 weeks ago, # | 0 Good luck to all of you!
» 6 weeks ago, # | +94 As a tester, I can confirm that the problems are awesome and you guys will enjoy the contest.
• » » 6 weeks ago, # ^ | -79 Good luck on nullifying ur negative contribution( https://codeforces.com/blog/entry/86882?#comment-749682 ).
• » » » 5 weeks ago, # ^ | 0 and you saying that got him contribution. People love proving people wrong lol
• » » » » 5 weeks ago, # ^ | +2 :(
• » » 5 weeks ago, # ^ | 0 As a participant, I wish the same.
» 6 weeks ago, # | +14 Wish you all luck ^^
• » » 6 weeks ago, # ^ | 0 ⬆️⬆️⬆️
• » » 6 weeks ago, # ^ | 0 ❤❤
» 6 weeks ago, # | +25 Great round id
» 6 weeks ago, # | +21 Since this is a palindromic round ( 696 ). Hoping to see one question on Palindrome :D
» 6 weeks ago, # | -10 anyone waiting for round #700?
• » » 5 weeks ago, # ^ | -19 Learn to stay in the present.
» 6 weeks ago, # | ← Rev. 2 → -56 I wish, I truly wish from my heart, that after this contest I become a pupil.
» 5 weeks ago, # | +2 I will try to solve solve A,B,C. Good Luck Every One, Keep Practicing and keep shining.
» 5 weeks ago, # | ← Rev. 2 → 0 Thank you lgorl for this contest! Good luck to everyone!
» 5 weeks ago, # | +18 Hopefully this round lives up to its ID.
» 5 weeks ago, # | -6 Good luck to everyone!!
» 5 weeks ago, # | +7 IgorI orz
» 5 weeks ago, # | 0 All the Best !!
» 5 weeks ago, # | 0 Out of the given 5 or 6, solving how many statements within the 2 hours of the contest would be considered 'reasonable' for a relative novice?
• » » 5 weeks ago, # ^ | 0 2 on average.
• » » » 5 weeks ago, # ^ | 0 Lol. Expected a lot of downvotes because of 'Ratism'. Anyway, still a long way to go for me.
• » » » » 5 weeks ago, # ^ | 0 Good luck. Even solving one problem is enough for a beginner.
• » » 5 weeks ago, # ^ | +1 I couldn't solve one for this round :(. It's damn tough.
• » » » 5 weeks ago, # ^ | 0 I still don't get what was to be done in the second one... Sending the first two primes after 1, each at least d apart from one another, did not always seem to produce the minimum number and checking for every number was a bit too resource intensive.Am curious about the solution...
• » » » » 5 weeks ago, # ^ | 0 The number has at least 4 divisors if it's a multiple of 2 primes. The least possible first prime is $a >= 1 + d$, and the second $b >= a + d$. Since any valid number is a multiple of some primes, choosing exactly two primes that are the smallest possible makes the result a * b the smallest.
• » » » » » 5 weeks ago, # ^ | ← Rev. 2 → 0 Yeah, I did just that but weirdly, it was showing 'wrong answer for test case 2'.Eg. outputs of my program:234 -> 114481, 10000 -> 200250077, 1 -> 6, 2 -> 15,Don't know what went wrong...
• » » » » » » 5 weeks ago, # ^ | 0 Your answer for 3 is 28 for some reason. But 1 and 2 are both divisors of 28 (and they are less than d apart). So I would assume the findPrime() function is not correct.
• » » » » » » » 5 weeks ago, # ^ | ← Rev. 2 → 0 Yes, I did make a mistake. The issue is with this: if(var == 2) {return 2;} Without even checking for the special condition (min. difference of d), I am always returning 2 as one normally would if only whether a number is prime or not was to be checked.A really embarrassing mistake made in the heat of the competition.Thanks for looking into my code.
» 5 weeks ago, # | 0 Please make short statements for hard problems so we can switch another problem easily if we can't solve
• » » 5 weeks ago, # ^ | +8 Short statements are good but I think in contests like ICPC one can struggle if he doesn't have the habit of reading long statements and find the real problem out of it.
» 5 weeks ago, # | +22 Am I the only one who don't see English Statements?
• » » 5 weeks ago, # ^ | 0 sometimes when you open this site , the Russian version is displayed, select the English option in right top corner.
» 5 weeks ago, # | 0 Auto comment: topic has been updated by IgorI (previous revision, new revision, compare).
» 5 weeks ago, # | 0 Hope to give my best :}
» 5 weeks ago, # | +11 I'm just new here..... wish me luck..
• » » 5 weeks ago, # ^ | +6 All the best.
» 5 weeks ago, # | +7 Hello, I'm commenting late today... did you guys miss me? I want to say that it's been a great journey on Codeforces with you all. Such a nice community of coders and mathematicians. Good luck to all Div. 2 participants, myself included... let's binge-solve tonight, wohooo XD
» 5 weeks ago, # | +8 Hope this round to be a DIV 2 and not DIV 1.5. All the best everyone!
» 5 weeks ago, # | ← Rev. 2 → +123
• » » 5 weeks ago, # ^ | 0 Thanks for this motivation. My graph has been somewhat like this.
• » » 5 weeks ago, # ^ | +33 Unfortunately it's going to be like this for me today...
• » » » 5 weeks ago, # ^ | +10 nice graph. Hint for C.
• » » » 5 weeks ago, # ^ | 0 I think so too(
• » » » 5 weeks ago, # ^ | 0 same here
• » » » 5 weeks ago, # ^ | 0 was there only one solution possible for each 'YES' test cases in problem C?
» 5 weeks ago, # | 0 I am not able to register for this contest. It says registration is closed. I didn't see the register button at the beginning and directly started solving the first problem. Please help me out.
• » » 5 weeks ago, # ^ | +6 You must register at least 5 minutes before the start of the round
• » » » 5 weeks ago, # ^ | +3 Not necessarily — for example, I missed registering beforehand today. There's something called extra registration, which starts ten minutes into the contest and probably lasts for half an hour. Use that to register if you forget to register beforehand. The only small disadvantage is that you cannot submit any solution in the first ten minutes.
» 5 weeks ago, # | +28 Problem E's name describes exactly my thoughts on pretest 2 of E :)
» 5 weeks ago, # | ← Rev. 2 → +2 I ruined it. Great round — a very good C after a long time, I guess :/
» 5 weeks ago, # | 0 Who else fell for the trap in problem B?
• » » 5 weeks ago, # ^ | 0 Me. I literally skipped it.
• » » » 5 weeks ago, # ^ | 0 Good question, looked like a tough one but....
• » » » » 5 weeks ago, # ^ | 0 It's funny one part of my brain says do it but the other half is nah, I'mma skip to C.
• » » » » 5 weeks ago, # ^ | +1 CodeForces is expert in publishing problems that seem impossible but actually can be solved with a simple trick.
» 5 weeks ago, # | ← Rev. 2 → 0 Nice round! Thank you, IgorI!
» 5 weeks ago, # | 0 Somebody please tell me about B. I tried to find two primes, where the first is greater than or equal to 1+d and the second is a prime greater than the first with a difference of at least d between them, but this approach got a wrong-answer verdict. Please, someone tell me the approach for B.
• » » 5 weeks ago, # ^ | ← Rev. 3 → 0 The answer is the LCM of the 2nd and 3rd divisors (the two primes), and the difference between consecutive divisors should be >= d.
• » » » 5 weeks ago, # ^ | +10 lcm of 2 primes is basically multiplication
• » » » » 5 weeks ago, # ^ | 0 thanks, I observed this connection but couldn't prove it during the contest
• » » » » » 5 weeks ago, # ^ | ← Rev. 2 → +2 lcm of two numbers a and b = (a*b) / gcd(a,b), so lcm of two primes p1 and p2 = (p1 * p2) / gcd(p1, p2). Obviously the gcd of two distinct primes is 1, since they are relatively prime to each other, so lcm = (p1 * p2) / 1 = p1 * p2.
• » » » » » » 5 weeks ago, # ^ | 0 It can also be proven directly. A positive integer is a common multiple of primes $p$ and $q$ iff it has both of them as prime factors of which $pq$ is the least.
• » » 5 weeks ago, # ^ | +4 find next prime number for 1+d and then next prime number for (previous prime number)+d.
• » » » 5 weeks ago, # ^ | 0 How can we ensure that this will not cause TLE, i.e. that the primes exist within the range of the sieve array? I know it intuitively, but is there any formal or informal proof?
• » » 5 weeks ago, # ^ | +5 I guess your approach is correct... I think you are going wrong somewhere else.
• » » » 5 weeks ago, # ^ | 0 I found all primes between 1 and 1000000 with a sieve and then started looking for my primes. Is that the problem my code faced?? Or should I find all primes between 1 and 1000000000?
• » » » » 5 weeks ago, # ^ | ← Rev. 2 → 0 No, you don't need to go up to such a big number... 1000000 is OK.
• » » » » » 5 weeks ago, # ^ | 0 I printed the product of those two primes.
• » » » » » » 5 weeks ago, # ^ | 0 You don't need to look at all the primes, so there's no need for a sieve. Just take 1+d: if it's prime, fine; if not, find the next prime after it. Then add d to that number: if the result is prime, fine; else find the next prime. Multiply these two — that's the answer.
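A minimal sketch of that procedure (assuming d is small enough that plain trial division is fast; untested, just to illustrate the idea):

#include <bits/stdc++.h>
using namespace std;

bool isPrime(long long x) {
    if (x < 2) return false;
    for (long long i = 2; i * i <= x; i++)
        if (x % i == 0) return false;
    return true;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        long long d;
        cin >> d;
        long long p = 1 + d;              // first prime divisor: at least d away from 1
        while (!isPrime(p)) p++;
        long long q = p + d;              // second prime divisor: at least d away from p
        while (!isPrime(q)) q++;
        cout << p * q << "\n";            // the divisors are then 1, p, q, p*q
    }
}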
• » » » » » » 5 weeks ago, # ^ | 0 You're accessing indexes out of the array bounds for your bool array, set it to n + 1 instead of n.
• » » » » » 5 weeks ago, # ^ | 0 actually for this, 22000 was enough considering the constraints.
• » » » » » » 5 weeks ago, # ^ | 0 20011 if you want to be exact about it.
• » » 5 weeks ago, # ^ | 0 I think that your approach is correct but the calculation of primes takes too much time. For example, your code takes sqrt(100 million) * (100 million / 2) loops just to set multiples of 2 to false. Try the same thing, but with primes up to 50000.
» 5 weeks ago, # | 0 Is C a graph problem or DSU?
• » » 5 weeks ago, # ^ | 0 I did brute force , because when we fix the first two numbers , the whole process becomes fixed . I don't know if it will pass system test.
• » » » 5 weeks ago, # ^ | 0 I thought my solution failed on some edge case where picking wrong maximum for x.
• » » » 5 weeks ago, # ^ | 0 From what I understand, your solution works in $O(n^3)$. If you notice that one of the first two numbers must be the maximum value of the array, the complexity becomes $O(n^2)$.
• » » » » 5 weeks ago, # ^ | 0 Yup, I considered that. But I also used a set in C++ to find the biggest number and the other number faster, so mine is $O(n^2 \log n)$. Since 2*n was around 2000, I thought it might work.
• » » » » 5 weeks ago, # ^ | ← Rev. 3 → 0 hmmm
• » » » 5 weeks ago, # ^ | 0 Attempt at a proof. Assume i, j, k are indices. 1) We have to select the maximum number in the array first to create an x [obvious]. If you don't, you cannot exhaust the array, as all the maximums you choose must be strictly less than the previous maximums: arr[i] + arr[j] (new max) = arr[k] (current max), hence arr[j] < arr[k] is an invariant. 2) Assume you have multiple combinations of i and j giving the sum arr[k]; call them the pairs (i1, j1) and (i2, j2). If i1 > i2 then j2 > i1, hence you cannot select that one later, per the invariant above. Therefore only one arrangement of the solution is possible.
• » » » » 5 weeks ago, # ^ | ← Rev. 3 → 0 bruh..
• » » 5 weeks ago, # ^ | 0 Greedy + brute force (+ multiset). x is always greater than the maximum remaining number, call it y; also, x must equal y+z for some z among the remaining values, otherwise you cannot remove y.
• » » » 5 weeks ago, # ^ | ← Rev. 2 → 0 OK
» 5 weeks ago, # | +4 Contest not good!!!
» 5 weeks ago, # | ← Rev. 2 → +1 Very unbalanced problemset; it sucks to say it, but it is what it is.
• » » 5 weeks ago, # ^ | +59 What's unbalanced about it?
• » » » 5 weeks ago, # ^ | ← Rev. 2 → -15 I think he meant gap between B and C
• » » » » 5 weeks ago, # ^ | +59 I think $9354 \to 2460$ is a good gap
• » » » » 5 weeks ago, # ^ | +29 C and D had the bigger gap imo
» 5 weeks ago, # | +16 How to solve D?
• » » 5 weeks ago, # ^ | +15 Let dp1[i] be the number of stones left in pile i if you only consider the piles in [1,i] and make every pile in [1,i-1] empty; specially, dp1[i] = -1 if you can't make every pile in [1,i-1] empty. Similarly, let dp2[i] be the leftover in pile i if you only consider the piles in [i,n] and make every pile in [i+1,n] empty. So, ignoring the swap operation, if for some i we have dp1[i] == dp2[i+1] && dp1[i] != -1, the answer is YES. For the swap operation, just iterate over the position you swap and recompute the dp values near the swapped positions, which is O(1) per position.
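A sketch of the dp1/dp2 part described above (the swap handling is omitted, so this only answers the no-swap case; the t/n/array input format is an assumption):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> a(n + 2, 0);
        for (int i = 1; i <= n; i++) cin >> a[i];
        const long long BAD = -1;
        vector<long long> dp1(n + 2, BAD), dp2(n + 2, BAD);
        dp1[0] = 0;                                  // nothing to the left of pile 1
        for (int i = 1; i <= n; i++)                 // leftover in pile i after emptying piles 1..i-1
            if (dp1[i - 1] != BAD && a[i] >= dp1[i - 1]) dp1[i] = a[i] - dp1[i - 1];
        dp2[n + 1] = 0;                              // nothing to the right of pile n
        for (int i = n; i >= 1; i--)                 // leftover in pile i after emptying piles i+1..n
            if (dp2[i + 1] != BAD && a[i] >= dp2[i + 1]) dp2[i] = a[i] - dp2[i + 1];
        bool ok = false;
        for (int i = 0; i <= n; i++)                 // the two facing leftovers must cancel each other
            if (dp1[i] != BAD && dp2[i + 1] != BAD && dp1[i] == dp2[i + 1]) ok = true;
        cout << (ok ? "YES" : "NO") << "\n";         // the full solution also tries every neighboring swap
    }
}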
• » » » 5 weeks ago, # ^ | ← Rev. 2 → 0 can D be done using prefix and suffix array values?
• » » » 5 weeks ago, # ^ | +5 Thanks, it works.
» 5 weeks ago, # | +3 Interesting problems! Could D be solved using Segment Trees? I tried but got WA on pretest 2.
» 5 weeks ago, # | 0 Can I know the approach for div A?
• » » 5 weeks ago, # ^ | ← Rev. 3 → 0 The first character of a has to be '1' so the first digit of d is maximal. Store the previous digit of d (prev), and you have three cases for choosing a[i] for the rest: prev = 2 — a[i] = (b[i] == '0' ? '1' : '0'); prev = 1 — a[i] = (b[i] == '0' ? '0' : '1'); prev = 0 — a[i] = '1'. Then update prev.
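A sketch of that greedy (assuming the goal is to choose a binary string a so that the digit-wise sum d = a + b is as large as possible after collapsing equal neighbours; untested):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        string b;
        cin >> n >> b;
        string a(n, '0');
        a[0] = '1';                                   // make the first digit of d maximal
        int prev = (b[0] - '0') + 1;                  // d[0]
        for (int i = 1; i < n; i++) {
            int withOne = (b[i] - '0') + 1;           // d[i] if we pick a[i] = '1'
            if (withOne != prev) { a[i] = '1'; prev = withOne; }       // prefer the larger digit
            else                 { a[i] = '0'; prev = b[i] - '0'; }    // fall back so d[i] != d[i-1]
        }
        cout << a << "\n";
    }
}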
• » » 5 weeks ago, # ^ | +6 If no two consecutive digits are equal, you get the maximum-length number. Going from left to right, at each position you try to put in d the largest possible digit that is different from the previous one.
» 5 weeks ago, # | 0 can anyone explain how to solve question 3?
• » » 5 weeks ago, # ^ | ← Rev. 5 → +2 Here comes the key observation. Let max(array) = M. 1) M must be in the first pair: if we don't choose the biggest element, then (next x) <= M, and since all elements are positive we can never eliminate M afterwards, because M plus anything exceeds x. 2) Once the first pair is chosen, the rest is forced. Choose the first pair arbitrarily and remove it from the array; the next x is M, and by 1) we must pick the biggest element of the remaining array. This time we know the required sum, so the pair completes automatically, since we know one element and the sum of the two. After fixing the first pair by brute force, all remaining steps can be done in N log N using a multiset. There are N-1 possible first pairs, so the whole algorithm runs in O(N^2 log N).
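A sketch of that algorithm with a multiset, roughly O(N^2 log N); the output format (x on one line, then the n pairs) is an assumption:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> a(2 * n);
        for (auto &v : a) cin >> v;
        sort(a.begin(), a.end());
        long long mx = a.back();                          // the global maximum must be in the first pair
        bool found = false;
        long long firstX = 0;
        vector<pair<long long, long long>> ans;
        for (int i = 0; i + 1 < 2 * n && !found; i++) {   // brute force the partner of the maximum
            multiset<long long> s(a.begin(), a.end());
            long long x = mx + a[i];
            vector<pair<long long, long long>> cur;
            bool ok = true;
            for (int step = 0; step < n; step++) {
                long long big = *s.rbegin();              // the current maximum has to be used now
                s.erase(prev(s.end()));
                auto it = s.find(x - big);                // its partner must complete the current sum x
                if (it == s.end()) { ok = false; break; }
                s.erase(it);
                cur.push_back({big, x - big});
                x = big;                                  // the new x is the larger removed element
            }
            if (ok) { found = true; firstX = mx + a[i]; ans = cur; }
        }
        if (!found) { cout << "NO\n"; continue; }
        cout << "YES\n" << firstX << "\n";
        for (auto &p : ans) cout << p.first << " " << p.second << "\n";
    }
}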
• » » » 5 weeks ago, # ^ | 0 thanks for explanation
» 5 weeks ago, # | +4 How to solve C ?
• » » 5 weeks ago, # ^ | +2 Hint: For a particular x you must select two integers where one of them is the max of remaining elements.
• » » 5 weeks ago, # ^ | 0 x should be >= max(a_i). Now, if you don't choose max(a_i) in the very first step, you can never choose it later, because we pick elements in a decreasing fashion to satisfy the "always take the maximum" criterion. So if you fix any one number at the start (paired with the maximum), the whole process is determined; all you have to do is check that the required partner exists at every step and that in the end all n pairs are formed, since there are n maximums to be chosen.
» 5 weeks ago, # | 0 Apparently multiple people solved C the same way i did, can anyone explain how i got TLE? my code's complexity should be fine...?
» 5 weeks ago, # | +1 Stuck on figuring out C for more than 1.5 hrs. Approach anyone?
• » » 5 weeks ago, # ^ | 0 Its brute force I believe
» 5 weeks ago, # | 0 What is the intended complexity for C? I think N^2LogN should pass. Or maybe I am calculating it wrongly. Can somebody see? https://pastebin.com/5GypBk22
• » » 5 weeks ago, # ^ | 0 My solution’s complexity for C was O(N^2). I used a hashmap.
• » » » 5 weeks ago, # ^ | ← Rev. 3 → 0 I used OrderedMap so I add the LogN factor. I will try with HashMap.EDIT: It passes with HashMap :/
• » » » 5 weeks ago, # ^ | 0 Did you use DFS to find out when the number of children reached N/2 (basically implementing it by brute force)?
• » » » » 5 weeks ago, # ^ | 0 Why make it so complicated when a plain hashmap O(N^2) solution works? Why go for such complications, dude!!
• » » » 5 weeks ago, # ^ | +3 A hashmap in the worst case is $O(n)$ per operation, if I'm correct, so isn't that $O(n^3)$?
• » » 5 weeks ago, # ^ | 0 My $O(n^2 \log n)$ passed in like 400 ms
» 5 weeks ago, # | 0 D is amazing, but C... kinda strange problem :/
• » » 5 weeks ago, # ^ | +1 How did you solve C
» 5 weeks ago, # | ← Rev. 2 → 0 For Div2 C, I implemented brute force graphs , but it gave me TLE , its complexity is probably O(T*(N^2)) or O(T*(N^3)) . I am not sure.
• » » 5 weeks ago, # ^ | +19 Fun fact: this was a Div. 2-only round :)
» 5 weeks ago, # | 0 How to solve A?? I was trying to use a string as input but got an error when I tried to convert the characters into integers to check the conditions.
» 5 weeks ago, # | ← Rev. 2 → +4 After solving A and C in like 25 minutes, I spent 1.5 hours on B and still have absolutely no idea how to do it. LOL, I don't have any maths background, and I am always unable to do these kinds of maths problems.
• » » 5 weeks ago, # ^ | +1 how to solve C can u please elaborate your approach thanks in advance !!
• » » 5 weeks ago, # ^ | ← Rev. 2 → 0 Just sieve the prime numbers up to 10^7 and pick primes at the minimum possible distance of d. First divisor = 1. Second divisor = (nearest prime at least 1+d). Third divisor = (nearest prime at least Second divisor + d). Fourth divisor = the number itself. Answer = Second divisor * Third divisor.
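A sketch along those lines, but with a much smaller sieve limit (the 10^5 bound is an assumption based on d being at most around 10^4):

#include <bits/stdc++.h>
using namespace std;

int main() {
    const int LIM = 100000;                          // assumed to be well above 3 * max(d)
    vector<bool> composite(LIM + 1, false);
    for (int i = 2; i * i <= LIM; i++)
        if (!composite[i])
            for (int j = i * i; j <= LIM; j += i) composite[j] = true;
    auto nextPrime = [&](long long from) {           // smallest prime >= from (assumed to be <= LIM)
        long long p = max(2LL, from);
        while (p <= LIM && composite[p]) p++;
        return p;
    };
    int t;
    cin >> t;
    while (t--) {
        long long d;
        cin >> d;
        long long p = nextPrime(1 + d);              // second divisor
        long long q = nextPrime(p + d);              // third divisor
        cout << p * q << "\n";                       // divisors: 1, p, q, p*q
    }
}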
• » » » 5 weeks ago, # ^ | 0 10^5 will also work
• » » » » 5 weeks ago, # ^ | 0 20011 will work.
• » » » 5 weeks ago, # ^ | 0 This makes sense. How do you know to set the upperbound to 10^7?
• » » » » 5 weeks ago, # ^ | 0 We need to find 2 primes from [10000 + 1, 2 * 10000 + 1] and [2 * 10000 + 1, 3 * 10000 + 1]. There're multiple ways to check if there's at least 1 in each interval, easiest being searching list of primes on google.Alternatively, an exploratory run of sieve can be done to verify the hypothesis.
• » » 5 weeks ago, # ^ | 0 You just have to find the product of the first two primes p1 and p2 such that the differences between (1, p1) and (p1, p2) are at least d, where p1 < p2.
• » » 5 weeks ago, # ^ | +8 Problem B was more logic than maths.
• » » » 5 weeks ago, # ^ | +1 How do you separate logic from math?
• » » 5 weeks ago, # ^ | 0 B was quite easy — it just needed basic maths about prime numbers. I wasn't able to solve C :(
» 5 weeks ago, # | +8 Thanks for the contest!
» 5 weeks ago, # | 0 How to solve B?
• » » 5 weeks ago, # ^ | 0
» 5 weeks ago, # | 0 I found C TL to be pretty strict. I'm not sure what the intended is but $O(n^2 \log n)$ in C++ passed comfortably while the same code in Java TLEd.
• » » 5 weeks ago, # ^ | 0 Same here. It doesn't even go past pretest 2 in Java using TreeMap.
• » » 5 weeks ago, # ^ | 0 You could avoid the log n factor by using a count array instead of a map (as A[i] < 1e6), and after each iteration, instead of resetting the entire array, you could reset only the values that occurred during that iteration. This solution needs a bit more implementation, but you would have to worry less about FST.
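A minimal sketch of that reuse-and-reset pattern (the names and the dummy data are illustrative, not the full solution):

#include <bits/stdc++.h>
using namespace std;

int main() {
    const int MAXV = 1000000;                        // values are assumed to be at most 10^6
    vector<int> cnt(MAXV + 1, 0);                    // allocated once, shared by every iteration
    vector<int> touched;                             // the values whose counters were modified

    vector<vector<int>> iterations = {{3, 5, 3}, {7, 7, 1}};   // stand-in for the candidate runs
    for (const auto &cur : iterations) {
        for (int v : cur) { cnt[v]++; touched.push_back(v); }
        // ... use cnt[] here exactly like the hash map in the O(n^2) idea ...
        for (int v : touched) cnt[v] = 0;            // reset only what this iteration touched
        touched.clear();
    }
    return 0;
}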
• » » » 5 weeks ago, # ^ | +16 I can see the $O(n^2)$ solution now. However simply because you can spend extra effort to find a faster solution in order to pass in a certain language doesn't justify the tight TL. Unless if the TL was intended to block log solutions (which it did a bad job of), I think it would've been better to make it looser.
• » » » » 5 weeks ago, # ^ | 0 Yeah, maybe if n was 2000 it would have blocked even the C++ solutions with extra logn but I'm not sure.
• » » » 5 weeks ago, # ^ | 0 Here is my solution, which is nearly equivalent to your proposal, but it also causes a TLE in Java! O(n^2)
» 5 weeks ago, # | +6 Some test cases for D that I have caught and resolved:
2
8
2 2 2 2 1 3 2 2
7
1 2 3 5 4 6 3
• » » 5 weeks ago, # ^ | 0 can you please tell how you resolved it, i'm trying and it's still giving WA on pretest 2.
» 5 weeks ago, # | 0 Cool contest. I particularly like C, although I wasn't able to solve it in-contest.
» 5 weeks ago, # | 0 For me figuring out solution for A takes more time than solving B XD...
» 5 weeks ago, # | 0 Could someone pls tell why my solution TLEd for problem B? https://codeforces.com/contest/1474/submission/104805897
• » » 5 weeks ago, # ^ | ← Rev. 2 → 0 I didn't read the code carefully, See this comment for the correct answer.
• » » » 5 weeks ago, # ^ | 0 Why did you use 2e5+5 as the largest possible prime?
• » » » » 5 weeks ago, # ^ | 0 The first divisor is around $d$ and the second divisor should be around $2d$. 2e5 is just the first number that came to my mind and it's big enough so I used it :)
• » » » 5 weeks ago, # ^ | ← Rev. 3 → +1 I don't think that is the case. Your code fails because when 'd' is odd you get an even 'j', and your step size in the while loop is 2, so you'll never get out of the while loop. Hence the TLE. https://codeforces.com/contest/1474/submission/104841099 — this works. I just changed j+=2 to j+=1 and i+=2 to i+=1 (the second change is not needed AFAIK).
• » » » » 5 weeks ago, # ^ | 0 I apologize. I just quickly skimmed the code, saw that he's checking primality, and jumped to my conclusion.
» 5 weeks ago, # | 0 Couldn't solve D but loved the problem set and enjoyed a lot.
» 5 weeks ago, # | ← Rev. 3 → 0 problem A Video Editorial Link : https://www.youtube.com/watch?v=2u6zr-tdEF4
» 5 weeks ago, # | 0 In Problem B, if d =3 what will be the divisors of the answer ?
• » » 5 weeks ago, # ^ | ← Rev. 2 → -11 1 , 5 , 11 , 55
• » » » 5 weeks ago, # ^ | 0 Why can't it be 28 ? The divisors can be 1,4,7,28 and it's smaller than 55.
• » » » » 5 weeks ago, # ^ | +4 All divisors — that means 1, 2, 4, 7, 14, 28. And 2-1 = 1 < 3.
• » » » » » 5 weeks ago, # ^ | 0 Oh alright, thanks.
• » » » » 5 weeks ago, # ^ | 0 You should notice that the condition covers all divisors, so for 28 they are 1 2 4 7 14 28, but 2-1 is less than 3.
• » » » » 5 weeks ago, # ^ | 0 No, because as mentioned in the problem, any two of its divisors should differ by at least d. In the case of 28 the factors are 1, 2, 4, 7, 14, 28; the difference between 1 and 2 is not 3, and likewise 2 and 4 have a difference of 2 instead of 3.
• » » » » » 5 weeks ago, # ^ | 0 Not just some two — every two divisors.
• » » » » 5 weeks ago, # ^ | 0 I don't know why on earth people have downvoted my message.
» 5 weeks ago, # | -38 Probably the simplest solution of C 104805814
» 5 weeks ago, # | ← Rev. 2 → +10 I solved problem E 1 minute after contest ended... :(
» 5 weeks ago, # | +2 What is the greedy soln to D (if there is ..)? Solely greedy based and no dp ....
• » » 5 weeks ago, # ^ | +3 There isn't much dp beyond a prefix-sum calculation. My solution: if we have to remove all piles, let's start from the 1st one, as it has only one neighbour. So a[0] <= a[1]; similarly a[2] >= a[1]-a[0], a[3] >= a[2]-(a[1]-a[0]), and so on. So our problem reduces to finding an arrangement with at most one neighbouring swap such that, after the swap, for each even position the sum of even-indexed numbers in the prefix is >= the sum of odd-indexed numbers (and symmetrically for odd positions), and at the final position odd_sum[n-1] == even_sum[n-1], to ensure no stones remain at the end. Now it is a well-known prefix-sum implementation problem. If you have any doubt, please comment.
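A small helper implementing that check for one fixed arrangement (i.e. "can this array be cleared without using the swap?"); in the full solution the same even/odd prefix sums let every neighbouring swap be tested in O(1):

#include <bits/stdc++.h>
using namespace std;

bool clearable(const vector<long long>& a) {
    long long even = 0, odd = 0;                     // sums over even- and odd-indexed positions of the prefix
    for (size_t i = 0; i < a.size(); i++) {
        if (i % 2 == 0) even += a[i]; else odd += a[i];
        long long leftover = (i % 2 == 0) ? even - odd : odd - even;
        if (leftover < 0) return false;              // pile i needs more stones than its left part can supply
    }
    return even == odd;                              // everything has to cancel out at the end
}

int main() {
    cout << clearable({1, 2, 1}) << "\n";            // 1 (yes)
    cout << clearable({2, 1}) << "\n";               // 0 (no)
}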
» 5 weeks ago, # | ← Rev. 2 → +7 I am seeing codeforces's upcoming contests list empty for the first time.
» 5 weeks ago, # | ← Rev. 3 → +28 I think D was leaked today. Submissions
• » » 5 weeks ago, # ^ | +1 Even this looks the same, with some added lines to avoid MOSS: Submission 104805227
» 5 weeks ago, # | +5 Can someone please explain why my code gives a TLE with complexity O(n^2 log n) while some codes with the same complexity pass easily? My submission: https://codeforces.com/contest/1474/submission/104812694 Other submission: https://codeforces.com/contest/1474/submission/104801893 Any help will be appreciated... I am clueless.
• » » 5 weeks ago, # ^ | ← Rev. 2 → +1 I believe it's because erase(value) erases every copy of that value in the multiset, so later you may end up erasing begin() of an empty set, which causes the TLE.
• » » » 5 weeks ago, # ^ | +5 Thank you so much I was not aware of this STL functionality.
• » » » 5 weeks ago, # ^ | 0 Hey,Thallium54 I did the correct thing, and my time complexity is right, can you tell me why I am getting a TLE? Submission — 104815636
• » » 5 weeks ago, # ^ | +5 You shouldn't do s.erase(a[i]). That removes all copies of a[i], while only one should be removed. Now if all elements are the same, doing this makes the multiset empty, and the next erase will give TLE.
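A tiny illustration of the difference:

#include <bits/stdc++.h>
using namespace std;

int main() {
    multiset<int> s = {5, 5, 5};
    s.erase(s.find(5));                   // removes a single 5 -> {5, 5}
    // s.erase(5);                        // would remove every 5 -> {}
    // (in general, check find() against end() before erasing)
    cout << s.size() << "\n";             // prints 2
}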
• » » » 5 weeks ago, # ^ | 0 Thank you so much I was not aware of this STL functionality.
» 5 weeks ago, # | +50 My 10-month-long streak of screw-ups finally broke today. From writing buggy code, to missing the submission of solutions to difficult problems by a few seconds many times, and missing Master as a consequence. It will all end today, once the ratings get updated :D Thanks for the contest, I'll finally become Master today :) :) :)
• » » 5 weeks ago, # ^ | -29 What are we supposed to do about it, bro! No one cares.
» 5 weeks ago, # | 0 Please Ban them!
» 5 weeks ago, # | 0 In problem B I first tried to store all primes up to 2*pow(10,8); my solution didn't pass, but with 2*pow(10,6) it passed. I think according to the problem statement the upper limit should be 2*pow(10,8).
» 5 weeks ago, # | -27 constructive-adhoc-forces... ... -50 rating ...
» 5 weeks ago, # | +28 I misread D (realized after an hour) and thought you can swap any pair (not necessarily neighboring). Does anyone know how to solve this version?
• » » 5 weeks ago, # ^ | 0 Same situation... Realized only when 20 minutes left
• » » 5 weeks ago, # ^ | ← Rev. 5 → 0 Edit: wrong solution.
• » » » 5 weeks ago, # ^ | 0 I solved D using those two condition but I don't know how to prove they are sufficient. Can you help me out?
• » » » 5 weeks ago, # ^ | +48 Hey, I was working on it and I think I found a counter-case: n = 8, array 3 4 6 4 4 6 4 3. This array satisfies the two conditions but it is not a good array. I think the tests were a little weak and I was able to pass through.
• » » » » 5 weeks ago, # ^ | ← Rev. 2 → 0 Thanks. During contest I thought I can prove this by induction. Now I've found a mistake in the proof.
• » » » » » 5 weeks ago, # ^ | 0 Same here. I thought I had it in the contest. I actually came up with this solution because of the same misreading mistake XD
• » » » 5 weeks ago, # ^ | +2 I uphacked your solution ^^
» 5 weeks ago, # | +9 Thanks for this contest. I'm very excited that I will reach Master tomorrow morning!
» 5 weeks ago, # | 0 Video solutions for problem A and B : Problem A Video SolutionProblem B Video Solution
» 5 weeks ago, # | 0 After a long day a nice C problem. Nice round. Enjoyed!
» 5 weeks ago, # | ← Rev. 2 → 0 In D I got "NO" for 1 1 3 3 2, but: 1 1 3 3 2 -> 1 1 3 1 0 -> 1 1 2 0 0 -> swap -> 1 2 1 0 0 -> 1 1 0 0 0 -> 0 0 0 0 0. Help, anyone!!!
• » » 5 weeks ago, # ^ | ← Rev. 2 → 0 Before the start of cleaning, you can select two neighboring piles and swap them
» 5 weeks ago, # | 0 I think the amount of code needed for problem C is a bit large; maybe it would have been enough to require outputting just YES or NO?
» 5 weeks ago, # | ← Rev. 3 → 0 My submissions in the round have been skipped because they coincide with another solution. I don't know how it happened, but I copy the body of my code as a template from an online training I took recently, and this is a link to a sample solution: https://github.com/MohamedAfifii/ProblemSolving--Arabic/blob/master/Solutions/Codeforces/100814I.cpp I also copy some algorithms from this training and edit them, and I don't know if this person took the same training and did something similar, but it is likely the case, as both of us are Egyptians and the training is an Egyptian one — maybe that's why the coincidence occurred. I'm not cheating from anyone, I swear. That's all; I want my rating back, and your understanding is highly appreciated!
» 5 weeks ago, # | 0 Can someone tell me what difference between g++11 and g++17 could make my solution for C get WA with g++17 but get accepted with the same code under g++11?!
• » » 5 weeks ago, # ^ | ← Rev. 2 → 0 Maybe undefined behavior?
• » » 5 weeks ago, # ^ | +4 I modified your code and got AC: https://codeforces.ml/contest/1474/submission/104909769 You can compare the differences and think about why. But to be honest, your code style is not flattering. Maybe you should learn a good code style first?
• » » » 5 weeks ago, # ^ | 0 Thanks for the modification and sorry for the awful coding style :)
» 5 weeks ago, # | +10 Does someone happen to know when the next round is going to be? (guess now I'm into cf)
• » » 5 weeks ago, # ^ | ← Rev. 2 → +2 I can't remember the last time I found the upcoming contest field empty!:(
• » » 5 weeks ago, # ^ | 0 Not until it appears in the "upcoming contests" section.
» 5 weeks ago, # | +12 I am wondering about the reason behind there being no new upcoming contest. Is it the case that there are no new problem proposals to pick up? Or is it just a bit of breathing time for the admins, since they have been working so hard continuously?
• » » 5 weeks ago, # ^ | 0 This is a move by MikeMirzayanov to suppress cheating.
» 5 weeks ago, # | +8 Looking at the empty upcoming contests section feels sad.
• » » 5 weeks ago, # ^ | +4 There is one upcoming AtCoder Beginner Contest this weekend :)
• » » » 5 weeks ago, # ^ | +4 Yeah I know there is atcoder round , codechef cookoff etc. But i don't know why codeforces rounds hits me differently :')
• » » 5 weeks ago, # ^ | +3 You don't have to be sad anymore! A div3 is scheduled
» 5 weeks ago, # | +6 Why is there no upcoming contest??
• » » 5 weeks ago, # ^ | +3 It's there now. Div3 :)
» 5 weeks ago, # | +5 Give me a round, or give me death!
• » » 5 weeks ago, # ^ | 0 RIP .
• » » » 5 weeks ago, # ^ | 0 There are two upcoming contests as of now, but you may disregard all of them and leave codeforces for good if you like.
» 5 weeks ago, # | 0 I can't even solve 1st problem in 2 hours :((
» 5 weeks ago, # | 0 I don't understand why my solution, which uses just arrays and takes only O(n^2), causes a TLE!! Am I forced to use C++ instead of Java?
» 5 weeks ago, # | 0 problems were good
» 43 hours ago, # | ← Rev. 2 → 0 B. Different Divisors: for input d=381, I thought the four divisors should be (1, 1+d, 1+2d, (1+d)*(1+2d)), i.e. (1, 382, 763, 291466), so the answer should be 291466 — but the given answer is 294527. I think the 4 divisors should be (1, 1+d, 1+2d, 1+3d) instead of (1, p, q, pq) or (1, p, p^2, p^3). Please, can anyone explain this :( Correct me if I am wrong.
• » » 43 hours ago, # ^ | 0
• » » » 38 hours ago, # ^ | 0 Well, you can't generalize the solution that way. You are correct that the answer will be of the form (1, p, q, pq) where a = pq. Moreover, according to the constraints, p-1 >= d, q-p >= d, pq-p >= d. You can read the editorial for the rest of the solution. Your assumption of (1, 1+d, 1+2d, (1+d)*(1+2d)) is wrong because you can't say for sure that 1+d and 1+2d are prime numbers.
• » » » » 38 hours ago, # ^ | 0 got it :)
# LabVIEW
## Computing the value of sin(x)
I have a lab that I have been working on and just cannot figure out. If anyone has any input, it would be greatly appreciated.
The lab is to: Develop a VI that computes the value of sin(x) at a given x using n terms of the series expansion of the sine function.
-Create a control which represents the value of n in the sin(x) equation
-Create an indicator to show the result of sin(x)
I have been playing around with loops and cannot figure it out.
Thanks
Message 1 of 26
(2,757 Views)
## Re: Computing the value of sin(x)
The series expansion is a sum of x-powers times factors, right? According to your instructions you should use N of those terms.
So, make a list of the exponents and factors (actually you only need the factors), loop through them with N as the loop counter, compute x^i * factor for each term, and sum up the results.
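A text-language sketch of that loop (each factor can be built from the previous term; in LabVIEW this would map to a For Loop with a shift register holding the running sum — the C-style code below is just to show the logic):

#include <cstdio>
#include <cmath>

double sin_series(double x, int n) {
    double sum = 0.0;
    double term = x;                                             // first term of the series: x^1 / 1!
    for (int i = 0; i < n; i++) {
        sum += term;
        term *= -x * x / ((2.0 * i + 2.0) * (2.0 * i + 3.0));    // next term of the sine series
    }
    return sum;
}

int main() {
    printf("%f vs %f\n", sin_series(1.0, 10), std::sin(1.0));    // quick sanity check
}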
/Y
"Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
Message 2 of 26
(2,749 Views)
## Re: Computing the value of sin(x)
Also, post the code you have. We will guide you, but not do the work entirely. 😉
Putnam
Certified LabVIEW Developer
Senior Test Engineer North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT8.5
LabVIEW Champion
Message 3 of 26
(2,736 Views)
## Re: Computing the value of sin(x)
As a first step, find the Taylor series expansion for sin(x). Do you have that?
@waltonj1 wrote:
I have been playing around with loops and cannot figure it out.
All you probably need is a single loop, not loops. Show us what you have and we can probably point you in the right direction.
(If you would have searched the forum, you probably would have found some interesting discussions, such as this one.)
LabVIEW Champion. It all comes together in GCentral
Message 4 of 26
(2,715 Views)
## Re: Computing the value of sin(x)
I read that but it did not help too much. Here's the series expansion:
$\sin x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\quad\text{ for all } x\!$
Here is the problem (#2)
Here's what I have now. Please ignore the formula I have in the loop, I believe it is wrong...
Message 5 of 26
(2,702 Views)
## Re: Computing the value of sin(x)
I see that the proper capitalization of LabVIEW is not being taught these days....
Message 6 of 26
(2,697 Views)
## Re: Computing the value of sin(x)
I always do the proper capitalization myself! You can see by the name of the file 🙂
Message 7 of 26
(2,691 Views)
## Re: Computing the value of sin(x)
@waltonj1 wrote:
I read that but it did not help too much.
What is "that"?
LabVIEW Champion. It all comes together in GCentral
Message 8 of 26
(2,682 Views)
Message 9 of 26
(2,679 Views)
## Re: Computing the value of sin(x)
How exactly do I go about that? Sorry... I'm new.
Thanks
Message 10 of 26
(2,675 Views) |