# Excess vertical space in \vdots
How do I get a tight \fbox around \vdots? There is excess vertical space in the box, as the output of the code below shows.
## Code:
\documentclass{article}
%% Defined in https://tex.stackexchange.com/a/412418/4301
\newcommand{\myvdots}{\raisebox{.006\baselineskip}{\ensuremath{\vdots}}}
\begin{document}
p
\fboxsep=0pt\fbox{\vdots}
\fboxsep=0pt\fbox{\myvdots}
y
\end{document}
The definition of \vdots in fontmath.ltx reads
\DeclareRobustCommand
\vdots{\vbox{\baselineskip4\p@ \lineskiplimit\z@
\kern6\p@\hbox{.}\hbox{.}\hbox{.}}}
(This is inherited from plain.tex with the addition of robustness.) I don't quite know the idea behind the \kern6\p@ bit, but removing it removes the excess box height:
\documentclass{article}
\makeatletter
\DeclareRobustCommand
\myvdots{\vbox{\baselineskip4\p@ \lineskiplimit\z@
\hbox{.}\hbox{.}\hbox{.}}}
\makeatother
\begin{document}
\fboxsep=0pt
p \fbox{\vdots} \fbox{\myvdots} y
\end{document}
• Since \baselineskip is set to 4pt, the total height will be 14pt plus the height of the period. It is an example of an ad hoc macro by Knuth that was blindly copied to LaTeX. – egreg Feb 17 at 9:40
• @egreg A-ha. I was just adding that the macro is in fact inherited from plain :-) – campa Feb 17 at 9:42
## Friday, July 15, 2011
### Review of El's Drive In, Morehead City
Every year, when we travel to Atlantic Beach, I read reviews of the restaurants in the area. And, every year, I consistently read people raving about El’s Drive-In in Morehead City. I’ve seen it called “classic” and “a definite must” but honestly, I thought it was just a classic tourist trap.
Yes, El's has been exactly as it is since 1959, which I suppose makes it an institution? I think it's just a classic, old-school drive-in. (And by drive-in, it's more of a drive up and park.) Their parking lot is unmarked, making for chaotic parking all around. Cars are parked in all directions with no discernible pattern. After about ten minutes, one of the two waitresses came over to the car and took our order.
Since they have a reputation for having an amazingly fresh Shrimpburger, we ordered two Super Shrimpburgers and a side of Chili-Cheese Fries. The reason it is a Super Shrimpburger is its size: the normal Shrimpburger is on a typical hamburger bun, while the Super is on a larger sandwich bun.
About fifteen minutes later, our food was delivered to our car. With the addition of two sodas, the bill came to about \$17. Now we were ready to try the famous Super Shrimpburger. The shrimp were fried but rather small in size; I'd call them popcorn shrimp. They claim to be locally caught shrimp, and they tasted very fresh. On top were ketchup and coleslaw. (Megan's was plain, as she doesn't eat either.) That was the Super Shrimpburger. The end.
Basically, it was good, but nothing spectacular. I definitely built up this burger (in my head) to a far greater extent than it deserved. Perhaps their hamburgers or hot dogs would be better, but given that the raving reviews focus only on the Shrimpburger, I doubt it.
Our Chili-Cheese Fries were undercooked and sloppy. They were in no way golden brown; in fact, they were quite mushy. The cheese was definitely a mass-produced, cheese-whiz-like product, used sparingly. The chili was very subpar and flavorless. I tend to think of chili as a thick, hearty food, but this was merely beans and meat drowned in a thin, brown, watery liquid. After about five fries total, Megan and I were both done with them and tossed some to the numerous seagulls that always seem to be present at El's. (I guess it beats sandwich crusts on the beach?)
While I know El's is a local, almost historic, establishment in Morehead City, we won't be visiting it again. The food and experience were very uninspiring. Part of me hopes we were just there on a bad day, but on a Saturday the week before the Fourth of July, I sincerely doubt it. Regardless of its tourist trap characteristics, though, I get the feeling people will keep going to El's for the novelty of eating in your car.
| Category | Scale (1-5 stars) |
|---|---|
| Food Quality | $\bigstar\bigstar$ |
| Food Creativity | $\bigstar$ |
| Service | $\bigstar\bigstar$ |
| Atmosphere | $\bigstar\bigstar$ |
| Value for the price | $\bigstar\bigstar$ |
# Is there any way to collect only variables with a specific power?
Suppose I've got this:
In[13]:= Expand[(a + b) (b + c) (c + a)]
Out[13]= a^2 b + a b^2 + a^2 c + 2 a b c + b^2 c + a c^2 + b c^2
And I want to collect only terms involving a^2. In other words, I want the following output:
a^2(b + c) + a b^2 + 2 a b c + b^2 c + a c^2 + b c^2
How can I do this? If I use the following:
Collect[%, a^2]
Then it simply groups terms by the highest power of a, even when that power is less than 2. So it results in this:
In[14]:= Collect[%, a^2]
Out[14]= b^2 c + b c^2 + a^2 (b + c) + a (b^2 + 2 b c + c^2)
Ideally, I would like to extend this further to collect all a^2, b^2, and c^2 in one expression. So that running my command would transform the original fully expanded expression into the following:
a^2(b + c) + b^2(a+c) + c^2(a+b) + 2 a b c
All in a single command. Is this possible?
FYI (it's not the answer to your question but...) another symmetric form would be given by SymmetricReduction[Expand[(a + b) (b + c) (c + a)] , {a, b, c}][[1]] – chris Nov 15 '12 at 9:32
Welcome to Mathematica.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Read the FAQs! 3) When you see good Q&A, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. ALSO, remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign – Vitaliy Kaurov Nov 16 '12 at 2:14
Introducing dummy variables will do the job:
Collect[Expand[(a + b) (b + c) (c + a)]
/. {a^2 -> x, b^2 -> y, c^2 -> z}, {x, y, z}]/. {x -> a^2, y -> b^2, z -> c^2}
2 a b c + (a + b) c^2 + b^2 (a + c) + a^2 (b + c)
If you will have more variables in the future, it probably makes sense to rewrite this:
P = {a, b, c}; Q = {x, y, z};
Collect[Expand[(a + b) (b + c) (c + a)]
/. MapThread[#1^2 -> #2 &, {P, Q}], Q] /. MapThread[#2 -> #1^2 &, {P, Q}]
2 a b c + (a + b) c^2 + b^2 (a + c) + a^2 (b + c)
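For readers who want the same dummy-variable trick outside Mathematica, a rough sympy (Python) translation might look like this; it is an editorial illustration, not part of the original answer:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
x, y, z = sp.symbols('x y z')   # dummy variables standing in for a^2, b^2, c^2

expr = sp.expand((a + b) * (b + c) * (c + a))

# Substitute each square by a dummy, collect on the dummies, then substitute back.
collected = sp.collect(expr.subs({a**2: x, b**2: y, c**2: z}), [x, y, z])
result = collected.subs({x: a**2, y: b**2, z: c**2})

print(result)   # e.g. 2*a*b*c + a**2*(b + c) + b**2*(a + c) + c**2*(a + b)
```

The printed term ordering may differ, but the grouping matches the symmetric form asked for in the question.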
How about Coefficient?
expr = Expand[(a + b) (b + c) (c + a)]
term = Coefficient[expr, a^2]*a^2 (* get the coefficient and multiply it with the variable *)
rest = (expr - term) // Expand (* expand the rest *)
term + rest
a^2 b + a b^2 + a^2 c + 2 a b c + b^2 c + a c^2 + b c^2
a^2 (b + c)
a b^2 + 2 a b c + b^2 c + a c^2 + b c^2
a b^2 + 2 a b c + b^2 c + a c^2 + b c^2 + a^2 (b + c)
The final line contains the a^2(b + c) term at the end.
Your first question is quite different from the second one. To address the first one you might do as follows:
pos = Position[Collect[expr, a], a^2*b_][[1, 1]]
this yields 3. Then
Collect[expr, a][[pos]]
yielding a^2 (b + c)
## AIEEE 2011
Exam held on Sunday, May 1, 2011, at 09:30 UTC.
## Chemistry
A gas absorbs a photon of 355 nm and emits at two wavelengths. If one of the emi...
Which one of the following order represents the correct sequence of the increasi...
Among the following the maximum covalent character is shown by the compound :
The hybridization of orbitals of N atom in $$NO_3^-$$, $$NO_2^+$$ and $$NH_4^+$$...
The structure of $$IF_7$$ is :
'a' and 'b' are van der Waals' constants for gases. Chlorine is more easily liqu...
The entropy change involved in the isothermal reversible expansion of 2 moles of...
A vessel at 1000 K contains $$CO_2$$ with a pressure of 0.5 atm. Some of t...
Identify the compound that exhibits tautomerism.
Ozonolysis of an organic compound gives formaldehyde as one of the products. Thi...
In a face centred cubic lattice, atom A occupies the corner positions and atom B...
A 5.2 molal aqueous solution of methyl alcohol, $$CH_3OH$$, is supplied. W...
Ethylene glycol is used as an antifreeze in a cold climate. Mass of ethylene gly...
The degree of dissociation ($$\alpha$$) of a weak electrolyte, $$A_xB$$...
The reduction potential of hydrogen half cell will be negative if :
The rate of a chemical reaction doubles for every 10°C rise of temperature...
Which of the following statement is wrong ?
Which of the following statements regarding sulphur is incorrect ?
Boron cannot form which one of the following anions ?
In context of the lanthanoids, which of the following statements is not correct ...
Which of the following facts about the complex $$[Cr(NH_3)_6$$ ...
The magnetic moment (spin only) of $$[NiCl_4]^{2-}$$ is
Phenol is heated with a solution of mixture of KBr and $$KBrO_3$$. The maj...
Which of the following reagents may be used to distinguish between phenol and be...
Trichloroacetaldehyde was subjected to Cannizzaro’s reaction by using NaOH. The ...
The strongest acid amongst the following compounds is :
Sodium ethoxide has reacted with ethanoyl chloride. The compound that is produce...
Silver Mirror test is given by which one of the following compounds ?
The presence or absence of hydroxyl group on which carbon atom of sugar differen...
The outer electron configuration of $$Gd$$ (Atomic No. $$64$$) is :
## Mathematics
If $$A = {\sin ^2}x + {\cos ^4}x,$$ then for all real $$x$$:
Let $$\alpha \,,\beta$$ be real and z be a complex number. If $${z^2} + \alpha ...
If $$\omega ( \ne 1)$$ is a cube root of unity, and $${(1 + \omega )^7} = A + B\...
There are 10 points in a plane, out of these 6 are collinear, if N is the number...
Statement - 1: The number of ways of distributing 10 identical bal...
The coefficient of $${x^7}$$ in the expansion of $${\left( {1 - x - {x^2} + {x^3...
A man saves ₹ 200 in each of the first three months of his service. In each of t...
The lines $${L_1}:y - x = 0$$ and $${L_2}:2x + y = 0$$ intersect the line $${L_3...
The two circles $$x^2 + y^2 = ax$$, and $$x^2 + y^2$$...
Equation of the ellipse whose axes are the axes of coordinates and which passes through the p...
$${{{d^2}x} \over {d{y^2}}}$$ equals:
For $$x \in \left( {0,{{5\pi } \over 2}} \right),$$ define $$f\left( x \right) =...
The shortest distance between line $$y-x=1$$ and curve $$x = {y^2}$$ is
Let $$A$$ and $$B$$ be two symmetric matrices of order $$3$$. Statement...
The number of values of $$k$$ for which the linear equations $$4x + ky + 2z...
The value of $$\int\limits_0^1 {{{8\log \left( {1 + x} \right)} \over {1 + {x^2}...
The area of the region enclosed by the curves $$y = x, x = e, y = {1 \over x}$$ an...
Let $$I$$ be the purchase value of an equipment and $$V(t)$$ be the value after ...
If $${{dy} \over {dx}} = y + 3 > 0\,\,$$ and $$y(0)=2,$$ then $$y\left( {\ln ...
Consider $$5$$ independent Bernoulli's trials each with probability of success $...
If $$C$$ and $$D$$ are two events such that $$C \subset D$$ and $$P\left( D \rig...
If the angle between the line $$x = {{y - 1} \over 2} = {{z - 3} \over \lambda }...
The vectors $$\overrightarrow a$$ and $$\overrightarrow b$$ are not perpendicu...
Statement - 1: The point $$A(1,0,7)$$ is the mirror image of the point ...
If $$\overrightarrow a = {1 \over {\sqrt {10} }}\left( {3\widehat i + \widehat ...
The domain of the function f(x) = $${1 \over {\sqrt {\left| x \right| - x} }}$$ ...
$$\mathop {\lim }\limits_{x \to 2} \left( {{{\sqrt {1 - \cos \left\{ {2(x - 2)} ...
The value of $$p$$ and $$q$$ for which the function $$f\left( x \right) ...
Consider the following statements: P : Suman is brilliant; Q : Suman is ...
If the mean deviation about the median of the numbers a, 2a, ..., 50a is 50...
## Physics
A screw gauge gives the following reading when used to measure the diameter of a...
A water fountain on the ground sprinkles water all around it. If the speed of wa...
An object, moving with a speed of 6.25 m/s, is decelerated at a rate given by : ...
A thin horizontal circular disc is rotating about a vertical axis passing throug...
A mass $$m$$ hangs with the help of a string wrapped around a pulley on a fricti...
A pulley of radius $$2m$$ is rotated about its axis by a force $$F = \left(...
Two bodies of masses $$m$$ and $$4$$ $$m$$ are placed at a distance $$r.$$ The g...
Work done in increasing the size of a soap bubble from a radius of $$3$$ $$cm$$ ...
Water is flowing continuously from a tap having an internal diameter $$8 \times ...
A Carnot engine operating between temperatures $${{T_1}}$$ and $${{T_2}}$$ has e...
A thermally insulated vessel contains an ideal gas of molecular mass $$M$$ and r...
Three perfect gases at absolute temperatures $${T_1},\,{T_2}$$ and $${T_3}$$ are...
$$100g$$ of water is heated from $${30^ \circ }C$$ to $${50^ \circ }C$$. Ignorin...
Two particles are executing simple harmonic motion of the same amplitude $$A$$ a...
A mass $$M,$$ attached to a horizontal spring, executes $$S.H.M.$$ with amplitud...
The transverse displacement $$y(x, t)$$ of a wave on a string is given by $$y\le...
The electrostatic potential inside a charged spherical ball is given by $$\phi ...
Two identical charged spheres suspended from a common point by two massless stri...
If a wire is stretched to make it $$0.1\%$$ longer, its resistance will:
A current $$I$$ flows in an infinitely long wire with cross section in the form ...
A fully charged capacitor $$C$$ with initial charge $${q_0}$$ is connected to a ...
A boat is moving due east in a region where the earth's magnetic field is $$5.0...
A resistor $$'R'$$ and $$2\mu F$$ capacitor in series is connected through a swi...
Let $$x$$-$$z$$ plane be the boundary between two transparent media. Medium $$1...
This question has a paragraph followed by two statements, Statement $$-1$$ and ...
A car is fitted with a convex side-view mirror of focal length $$20cm$$. A ...
Energy required for the electron excitation in $$L{i^{ + + }}$$ from the first ...
This question has Statement - $$1$$ and Statement - $$2$$. Of the four choices g...
This question has Statement - $$1$$ and Statement - $$2$$. Of the four choices g...
The half life of a radioactive substance is $$20$$ minutes. The approximate time...
# wigner semi-circle distribution random numbers generation
I am trying to generate random numbers from the Wigner semicircle distribution. Since this distribution does not have an analytic inverse for its CDF, I wonder if anyone is familiar with standard ways to generate random numbers (RNs) following this distribution, and what the pros and cons of each method are.
My initial guess, from what I have researched, is that I can use the rejection method or a sampling transformation of uniformly distributed random numbers to get the Wigner semicircle distribution.
Previously, I have generated normally distributed RNs from uniformly distributed RNs using the Box-Muller transformation. Even though I am not a statistics major, it was a pretty straightforward process. However, I am having a hard time wrapping my head around other distributions, specifically this Wigner semicircle.
Any instructions or source recommendations will be highly appreciated. Thank you.
• This is a shifted, scaled Beta$(3/2,3/2)$ distribution. Its CDF does have an explicit analytic inverse called the inverse regularized incomplete beta function, but you scarcely need that to generate random values: just look at the graph of the PDF--what kind of geometric figure is it? Hint: the answer lies in the name of the distribution. – whuber Apr 15 at 20:24
• @whuber thank you for your answer. The graph of the Wigner semi circle pdf is a semicircle or semi ellipse. Are you saying that I just need to use this pdf and sample the uniform random variables to get this Wigner dist? – Lac Apr 15 at 23:50
• Given the shape of the subgraph of the density, there is no need for rejection. – Xi'an Apr 26 at 11:50
[Following whuber's comments:] Since $$f(x)=\frac{2}{\pi R^2}\sqrt{R^2-x^2}\,,$$ the sub-graph of $$f$$, $$\mathcal S_R=\{(x,y);\,0\le y\le f(x)\}\,,$$ is, up to a rescaling of the vertical axis (which does not change the distribution of $$X$$), the half-disk of radius $$R$$. Thus, by the fundamental lemma of simulation, simulating $$X\sim f$$ is equivalent to simulating $$(X,Y)$$ uniformly on $$\mathcal S_R$$, which corresponds in polar coordinates to simulating $$(\rho,\theta)\sim \frac{2}{\pi R^2}\,\rho\, \Bbb I_{(0,R)}(\rho)\, \Bbb I_{(0,\pi)}(\theta)\,,$$ which is straightforward:
1. simulate $$U_r,U_a\sim\mathcal U(0,1)$$
2. compute $$X=R\, \sqrt{U_r} \cos (\pi U_a)$$ [and do not compute $$Y=R\, \sqrt{U_r} \sin (\pi U_a)$$!]
3. return $$X$$
[and shows proximity with the Box-Muller algorithm, although for the latter $$\rho^2$$ is distributed as an Exponential $$\mathcal E(1/2)$$].
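A minimal Python (NumPy) sketch of this sampler, for illustration only:

```python
import numpy as np

def wigner_semicircle(n, R=1.0, rng=None):
    """Draw n samples from the Wigner semicircle distribution on [-R, R].

    Simulates a point uniformly on the upper half-disk of radius R (in polar
    coordinates) and keeps only its x-coordinate, as described above.
    """
    rng = np.random.default_rng() if rng is None else rng
    u_r = rng.random(n)            # U_r ~ Uniform(0, 1)
    u_a = rng.random(n)            # U_a ~ Uniform(0, 1)
    rho = R * np.sqrt(u_r)         # radius with density 2*rho/R^2 on (0, R)
    theta = np.pi * u_a            # angle ~ Uniform(0, pi)
    return rho * np.cos(theta)     # X follows the semicircle law; Y is never needed

# Quick sanity check: the mean should be near 0 and the variance near R^2/4.
samples = wigner_semicircle(100_000)
print(samples.mean(), samples.var())
```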
• It may be worth noting that $Y$ need not be calculated at all. This suggests rejection actually might be more efficient because it requires generating only $4/\pi$ URNs per realization rather than $2$ per realization. – whuber Apr 30 at 12:26
PLoS ONE
The Gender Gap Tracker: Using Natural Language Processing to measure gender bias in media
DOI 10.1371/journal.pone.0245533, Volume: 16, Issue: 1
Article Type: research-article
## Abstract
We examine gender bias in media by tallying the number of men and women quoted in news text, using the Gender Gap Tracker, a software system we developed specifically for this purpose. The Gender Gap Tracker downloads and analyzes the online daily publication of seven English-language Canadian news outlets and enhances the data with multiple layers of linguistic information. We describe the Natural Language Processing technology behind this system, the curation of off-the-shelf tools and resources that we used to build it, and the parts that we developed. We evaluate the system in each language processing task and report errors using real-world examples. Finally, by applying the Tracker to the data, we provide valuable insights about the proportion of people mentioned and quoted, by gender, news organization, and author gender. Data collected between October 1, 2018 and September 30, 2020 shows that, in general, men are quoted about three times as frequently as women. While this proportion varies across news outlets and time intervals, the general pattern is consistent. We believe that, in a world with about 50% women, this should not be the case. Although journalists naturally need to quote newsmakers who are men, they also have a certain amount of control over who they approach as sources. The Gender Gap Tracker relies on the same principles as fitness or goal-setting trackers: By quantifying and measuring regular progress, we hope to motivate news organizations to provide a more diverse set of voices in their reporting.
Authors: Asr, Mazraeh, Lopes, Gautam, Gonzales, Rao, Taboada, and Kehler
## Introduction: The Gender Gap in media and in society
Women’s voices are disproportionately underrepresented in media stories. The Global Media Monitoring Project has been tracking the percentage of women represented in mainstream media since 1995, when it was 17%. Twenty years later, in 2015, it had increased to only 24%, with a worrisome stalling in the previous decade [1]. At this rate, it would take more than 70 years to see 50% women in the media, a true reflection of their representation in society.
The underrepresentation of women is pervasive in most areas of society, from elected representatives [2–5] and executives [3, 6, 7] to presidents and faculty in universities [5, 8, 9]. Women are also underrepresented in political discussion groups [10]. It is, therefore, not entirely surprising that news stories mostly discuss and quote men: Many news stories discuss politicians and business executives, drawing on expert opinion from university professors to do so. Perversely, in many stories where women are overrepresented, it is because they are portrayed as having little or no agency, as in the case of victims of violence [11–14] or politicians’ spouses [15]. During international gatherings like G7/G8 or G20 summits, a set of stories often discusses the parallel meetings of spouses, focusing on their attire and humorously commenting on cases when the lone man joins activities clearly planned for wives only [16, 17]. Countless studies have pointed out how the representation of women in media is different; e.g., [18–24]. In our project, we first tackle the question of how much of a difference there is in the representation of women; cf. [25].
Not a great deal of progress seems to have been made since Susan Miller found, in 1975, that photos of men outnumbered photos of women by three to one in the pages of the Washington Post, and by two to one in the Los Angeles Times. Among the more than 3,600 photos that Miller studied, women outnumbered men only in the lifestyle section of the two papers [26].
Most previous studies of gender representation in media have performed manual analyses to investigate the gap. Informed Opinions, our partner organization in this project, carried out a study in 2016, analyzing 1,467 stories and broadcast segments in Canadian media between October and December 2015, to find that women were quoted only 29% of the time [27]. The work was laborious and intensive. Similarly, the enormous effort of the Global Media Monitoring Project is only possible thanks to countless volunteers in 110 countries and many professional associations and unions around the globe. Thus, it only takes place every five years. Shor et al.’s [24] study of a historical sample of names in 13 US newspapers from 1983 to 2008 found that the ratio went only from 5:1 in 1983 to 3:1 by the end of the period. It seems to be stubbornly stuck at that level. A recent analysis of news coverage of the COVID-19 pandemic [28] used a mix of manual and automatic methods and found that men were quoted between three and five times more often than women in the news media of six different countries.
The causes and solutions to the underrepresentation of women in society in general and in news articles in particular are too complex to discuss in this paper (but see [29–33]). We focus here on the first step in any attempt at change: an accurate characterization of the current situation. Just like a step tracker can motivate users to increase their physical activity, we believe that the Gender Gap Tracker can motivate news organizations to bring about change in areas they have control over. It is obvious that, if a news story requires a quote from the Prime Minister or a company’s president, the journalist does not have a choice about the gender of those quoted. Journalists, however, do have control over other types of sources, such as experts, witnesses, or individuals with contrasting viewpoints.
Indeed, when journalists keep track of their sources and strive to be more inclusive, both anecdotal and large-scale evidence show that parity is, in fact, possible. Ed Yong, staff writer for The Atlantic who covers science news, reported that keeping track of his sources was the simple solution to ensure gender parity in his articles [34]. Ben Bartenstein, who covers financial news for Bloomberg, improved the gender ratio in his sources by keeping lists of qualified women and tracking the sources in his stories [35]. The BBC’s 50:50 project (https://www.bbc.co.uk/5050) also uses strategic data collection and measurement to achieve 50% women contributing to BBC programs and content.
It is with this goal in mind—of motivating news organizations to improve the ratio of people they quote—that the Gender Gap Tracker was born. The Gender Gap Tracker is a collaboration between Informed Opinions (https://informedopinions.org), a non-profit organization dedicated to amplifying women’s voices in media, and Simon Fraser University, through the Discourse Processing Lab (http://www.sfu.ca/discourse-lab) and the Big Data Initiative (https://www.sfu.ca/big-data).
We harness the power of large-scale text processing and big data storage to collect news stories daily, perform Natural Language Processing (NLP) to identify who is mentioned and who is quoted by gender, and show the results on a public dashboard that is updated every 24 hours (https://gendergaptracker.informedopinions.org). The Tracker monitors mainstream Canadian media, seven English-language news sites (a French Tracker is in development), motivating them to improve the current disparity. By openly displaying ratios and raw numbers for each outlet, we can monitor the progress of each news organization towards gender parity in their sources. Fig 1 shows a screenshot of the live page. In addition to the bar charts for each organization and the doughnut chart for aggregate values, the web page also displays a line graph, charting change over time (see Fig 2 below).
Fig 1. Screenshot of the live Gender Gap Tracker dashboard (bar charts per outlet and the aggregate doughnut chart).
Fig 2. Counts and percentages of male vs. female sources of opinion across seven news outlets. Dates: October 1, 2018 to September 30, 2020. Female sources constitute less than 30% of the sources overall. CBC News (blue line) and HuffPost Canada (green line) show a better gender balance compared to other outlets; The Globe and Mail (light blue) and The National Post (orange) are at the bottom, quoting women less than 25% of the time. Reprinted from https://gendergaptracker.informedopinions.org/ under a CC BY license, with permission from Informed Opinions, original copyright 2018.
In the two years since data collection started, from October 1, 2018 to September 30, 2020, the average across the seven news outlets is 29% women quoted, versus 71% men, with a negligible number of unknown or other sources. We have, however, observed an increase in the number of women quoted between the first and the last month in that period, from 27% in October 2018 to 31% in September 2020. Some of that increase can be directly attributed to an increase in the quotes by public health officers during the COVID-19 crisis. It just so happens that a large number of those public health officers across Canada are women [36]. We report some of the analyses and insights we are gathering from the data in the section Analysis and observations.
In this paper, we describe the data collection and analysis process, provide evaluation results and a summary of our analysis and observations from the data. We also outline other potential uses of the tool, from quantifying gender representation by news topic to uncovering emerging news topics and their protagonists. We start, in Related work, with a review of existing literature on quotation patterns, extracting information from parsed text, and potential biases in assigning gender to named entities. We then provide, in Data acquisition and NLP processing pipeline, a high-level description of the data acquisition process and how we deploy NLP to extract quotes, identify people, and predict their gender. More detail for each of those steps is provided in the S1 Appendix. Throughout the development of the Gender Gap Tracker, we were mindful of the need for accuracy, in both precision and recall of quotes, but also in terms of any potential bias towards one of the genders (e.g., disproportionately attributing names or quotes to one gender). In order to ensure that the Gender Gap Tracker provides as accurate a picture as possible, we have performed continuous evaluations. We describe that process in the section on Evaluation. The section Analysis and observations answers the most important questions that we posed at the beginning of the project: Who is quoted, in what proportions? We add more nuanced analyses about the relationship between author gender and the gender breakdown of the people those authors quote. Finally, Conclusion offers some reflections on the use of the Gender Gap Tracker as a tool for change, also discussing future improvements and feature additions.
Before delving into the technical aspects of the Gender Gap Tracker and the insights it provides about the gender gap in media, we would like to acknowledge that the language we choose to describe people matters and that the terms we use are simplifications of a complex reality. We use ‘women’ and ‘men’ and ‘female sources’ and ‘male sources’, implying a binary opposition that we know is far from simple. Gender is more nuanced than that. We know, at the same time, that lack of gender representation in many aspects of society is a reality. Our goal is to quantify that lack of representation by using language and the traditional associations of names and pronouns with men and women. We discuss this issue in more detail in the section on Gender prediction and gender bias in Natural Language Processing.
## Related work
The Gender Gap Tracker involves the application of different insights and research findings in linguistics and Natural Language Processing. To our knowledge, there is no comparable project extracting both direct and indirect quotes on a continuous basis. Because so many different research fields are involved, it is challenging to provide a succinct summary of related existing work. We have focused our survey in this section on three aspects that have informed our work the most: descriptions of direct and indirect speech in linguistics, prediction of gender based on names in text, and extraction of dependency structures and quotes in Natural Language Processing to make a connection between entities and quotes.
### Reported speech
Reported speech is a recreation, or a reconstruction, of what somebody said in a certain context [37]. Note that even though we refer to it as reported speech, the concept applies equally to quotes from written text, such as a press release [38]. The structure is also used to recreate thought (I thought “Okay. What am I gonna do?”) or even action (I was like “[choking/gagging sound]”), especially in colloquial language [39, p. 44]. Vološinov [40] characterized reported speech as both ‘speech within speech’ and ‘speech about speech’.
It is this nature of reported speech as the recreation of an event, whether involving speech or not, that makes it so important in interaction and in narrative. Goddard and Wierzbicka [41] propose that, regardless of typological differences in how it is expressed across the world’s languages, reported speech is fundamental to human society: Our environment is largely made up of other people’s utterances in our stories, dreams, memories, and thoughts. Talk is certainly fundamental to cognition; more specifically, however, it is often talk about talk that “binds groups and communities together” [41, p. 173]. See also Vološinov [40] and Goddard and Wierzbicka [42].
In linguistics, a distinction is drawn between two types of reported speech: direct and indirect speech. Direct speech involves direct quotation of somebody’s exact words, typically enclosed in quotation marks in writing. With indirect speech, we report on those words, perhaps altering the exact original formulation, and involving deictic shifts, i.e., changes in tense and point of view [38]. Thus, the direct speech “I’ve chosen to start wearing a mask,” Mr. Trudeau said becomes Mr. Trudeau said that he had chosen to start wearing a mask in indirect form, with a change from I to he and from ’ve chosen to had chosen. The distinction may be labelled as direct vs. indirect quotation, or direct vs. indirect report [43]. In English and many other languages, it is generally understood that direct speech is used when the intention is to reproduce the speaker’s words verbatim, that is, to be faithful not only to the content of the message, but also to the form in which it was uttered [44–46]. We will be using the term ‘reported speech’ for any recreation of what somebody said, as a broad term including both direct and indirect speech.
Reported speech has played an important role in our common cultural stock, including oral narrative and written fiction. We have progressively used it more and more as a form of evidence. Consider the so-called Miranda warning in the United States: Upon arrest, a suspect is told that anything they say may be used as evidence against them. Citations in scientific articles are also a form of reported speech as evidence. We cite or paraphrase other scientists’ words as part of a scientific argument, and as part of the dialogue we engage in as researchers [47]. Reported speech, especially in its direct version, features in news discourse as a direct reproduction of somebody’s exact words, as a safeguard against interpretation by the reporter in the form of indirect speech. (Note also that we refer to journalists as reporters, signalling their role in telling us the news.) The use of reported speech as evidence in news articles is what makes it such an interesting object of study. By identifying who is quoted in news articles, we capture whose words are considered important and worthy of repetition.
As we will see in Quotation extraction below, our analysis focuses on patterns of quotation typically found in news articles: quotes with a matrix clause, whether as direct or indirect speech, and direct quotes that appear in their own sentence (floating quotes). A lively debate in linguistics tries to elucidate whether reported speech is a syntactic, a semantic, a pragmatic, or a paralinguistic phenomenon [38, 39, 42, 48–51]. While reported speech probably requires a syntactic, semantic, and pragmatic analysis for a full account, here we use a structural approach and rely on NLP tools rooted in syntactic patterns to identify and extract quoted material, the reporting verb (the verb introducing the reported speech), and the speaker (or source) of the quote.
### Extracting quotes with Natural Language Processing
Reported speech, both direct and indirect, features specific syntactic structures that can be identified through automatic parsing. In direct speech, the presence of quotation marks, together with the presence of a reporting verb, signals a quote. For indirect speech, it is the reporting verb plus a specific syntactic structure, the dependent clause, that points to the presence of reported speech. The most reliable way to find that information, and to find the beginning and end of quotes, is to first create a parse tree or a dependency tree of the structure of the text.
The slightly different flavours of automatic parsing in NLP all result in a reading of the structure of sentences in constituents, with dependency structures identified either implicitly (through tree structure) or explicitly [52]. The focus of our attention in those structures are the complement clauses of reporting verbs.
Much of the research on dependency structures, especially for reported speech, involves the Penn Treebank, the Penn Discourse Treebank [53], and related collections of annotated news articles that are widely used in computational linguistics research. Early in the development of the Penn Discourse Treebank, it was clear that the discourse relations involved in reported speech needed to be addressed, as they featured prominently in the news articles present in the corpus. Consequently, a great deal of attention was paid to annotating attribution and its features in the original corpus, including source, level of factuality, and scope [54]. An extension of the annotations, the PARC corpus of attribution relations [55], contains a more fine-grained annotation with more relations, which can be used to train machine learning systems to detect quotations [56, 57]. We do not follow a machine learning approach here, as we believe not enough annotations are available for the wide range of reported speech types that we have encountered.
An approach that also relies on parse trees of sentences is that of van Atteveldt et al. [58], who extract the text of quotes from news reports. They compare this to a baseline and find that, although there are errors in the parse tree, a syntactic parsing method outperforms a baseline which relies on word order. Likewise, a method using a mix of parse trees and regular expressions is deployed by Krestel et al. [59] to identify both direct and indirect speech in news text.
In a large-scale approach similar to ours, but using rules, Pouliquen et al. [60] identify direct quotes (those surrounded by quotation marks) in news reports from 11 languages. This research led to the pioneering Europe Media Monitor (http://emm.newsexplorer.eu/), which tracks news events, top stories, and emerging themes in the news of over 70 countries (but with a focus on Europe). The quotation extraction, however, seems to have been discontinued in recent versions of the tool.
Our approach is medium-scale, in that it concentrates on Canadian English-language news sources, but is comprehensive enough in the sphere of the Canadian media landscape that trends in gender representation can be gleaned. By using reliable parsing information, we are confident that we detect the majority of quotes in different formats, covering both direct and indirect speech. To our knowledge, this is the most extensive quote analysis performed on a continuous basis.
### Gender prediction and gender bias in Natural Language Processing
The statistics that we are interested in the most, i.e., the gender breakdown of people quoted in the news, rely on accurate prediction of gender based on people’s names. Although gender prediction based on this approach is straightforward and accuracy can be quite high, it is, like many other aspects of NLP, a site for potential bias.
Automatic gender prediction typically relies on the predictable gender associations of people’s first and sometimes last names. For English-speaking countries, a common source of these associations is the US census and the Social Security Administration, where names are mapped to their most frequent sex association at birth (https://www.ssa.gov/oact/babynames/). Clearly, this is a problematic practice, as it assumes that gender is binary, that sex and gender have perfect correlation, and that people’s names are accurate predictors of their sex or gender. We acknowledge and respect the complex nature of this matter, and we are open to further refinements of our approach, as discussions are underway at many levels. For instance, the US census is considering how to best capture sexual orientation and gender identity [61].
The other main method for automatic gender prediction is the entity-based approach where a label is given based on the individual, i.e., an association of a first-last name combination with a specific person and their public gender identity. This is feasible with public figures, as their gender can be extracted from online resources such as Wikidata or HathiTrust [62]. As we will describe in the section on Identifying people and predicting their gender, we apply both first name and first-last name methods by extracting information from online services.
We acknowledge that gender is non-binary, and that there are different social, cultural, and linguistic conceptualizations of gender. For this project, we rely on self- and other-identification of gender through names in order to classify people mentioned and quoted as female, male, or other. In English, the third person singular pronoun he encodes male gender and she encodes female gender. First names tend to be used distinctively by persons of different genders. We recognize that some people prefer a gender neutral pronoun (they) and that some people adopt or have been given names that are not strongly associated with one gender (e.g., Alex). We are aware that, because our technical approach is based on a simplified view of gender prediction, it glosses over the many possible gender identities, does not quantify the bias of our tools towards traditional white Western names (which tend to be overrepresented in training data), or intersectionality. This is just a start in the conversation about representation in the media, and we tackle this first attempt through the encoding of gender in language, which in English is mostly binary. All our statistics and analyses include a categorization of gender in three parts: female, male, and other. The latter includes cases where the gender of a person, based on their name, is unknown (because the name is used for both genders), or non-binary (because the person identifies as non-binary).
One issue that we would like to point out here is the inherent bias in many standard NLP tools, datasets, and methods. While we have not fully measured how such biases affect our results, we do bear them in mind when making generalizations about the data. For instance, Garimella et al. [63] show that different syntactic patterns displayed by men and women can lead to different levels of accuracy in part-of-speech tagging and parsing. Therefore, if the parsing method we rely on has been trained on data primarily written by men and quoting men, it is quite possible then that its accuracy is lower when parsing and extracting quotes from women. Caliskan et al. [64] make a compelling case that implicit human biases are learned when using standard machine learning methods to extract associations from data. Among the biases Caliskan and colleagues found are associations of gender from names and careers (e.g., female names more associated with family than career words; more associated with the arts than with mathematics). Gender biases have also been found in coreference resolution [65, 66], visual semantic role labelling [67], and machine translation [68, 69].
In general terms, the type of bias that we are concerned about is what Blodgett et al. [70] term representational harm , specifically two types of representational harm: i) a difference in system performance for different social groups (different parsing accuracy for male and female voices); and ii) system performance that leads to misrepresentation of the distribution of different groups in the population (incorrect gender prediction that misrepresents the true proportion of men and women quoted). Ultimately, we are aware that these biases exist in text because they reflect inherent biases in society, and that attempts at minimizing them are not always successful [71]. We report error rates for our gender prediction process in the Evaluation section, and also make some observations in the Most frequent sources by category section about error rates for categories of people quoted. In general, we can say that our error rate is very low and that it does not seem to show bias.
## Data acquisition and NLP processing pipeline
This section provides a summary of the steps in acquiring data and processing it so that we can extract quotes, the people who said them, and the gender of those speakers (or sources). This is an overview of the process, which is described in much more detail in the S1 Appendix.
### Data scraping
Scraping public data from the web appears to be a simple task. We have found, however, that daily data scraping from heterogeneous sources is actually quite complex and requires customization of existing libraries. We had to deal with a variety of challenges arising from the different technologies, standards, and layouts used by the news outlet websites. This made it difficult to find a common pattern and write a script that could collect the data from different news outlets efficiently and in real time. The S1 Appendix contains further information on the technical aspects of this process.
The final pipeline is a 24/7 online service composed of a set of scrapers in the background of the Gender Gap Tracker website. Each scraper is an individual process for a specific news outlet, scheduled to run twice a day, collect the daily publication of the target website, and store them in our central database. Each process takes between 5 and 30 minutes to execute each day, depending on the target outlet and the number of daily articles published, which tends to range from 800 to 1,500.
Once we have the article and all its metadata in the database, we move on to the Natural Language Processing piece of the pipeline, which involves extracting quotes, identifying people, and predicting their gender.
### Quotation extraction
To measure the gender representation gap in news media, we identify the number of men and women who are quoted in news articles; in other words, people who have not only been mentioned but have also seen their voices reflected in news. We consider both direct speech (surrounded by quotation marks) and indirect speech (She stated that…) to be quotations. We refer to the speaker of such quotes as a source in the news article. In order to identify sources, we first need to extract quotes from the news article text, to then align quoted speakers with the unified named entities that are gender-labelled through the procedure described in the next section. While reported speech in general may be described as a semantic—rather than a syntactic—phenomenon [48], from an NLP point of view, the most reliable mechanism to identify it is the syntactic structure of sentences. Based on study of the literature on reported speech and our initial study of the data, we separate quotes into two different types and apply different procedures to each: syntactic quotes and floating quotes.
What we refer to as syntactic quotes follow a structure whereby a framing or matrix clause, containing the identity of the speaker (the subject) and a reporting verb, introduces a reporting clause, containing the material being quoted [38]. They may be direct or indirect quotes, but they share a common syntactic structure. Such quotes can be identified by finding patterns in a syntactic or dependency parse of the text, as in Example (1), where the structure Janni Aragon… says introduces the content of what the speaker said.
(1) Janni Aragon, a political science instructor at the University of Victoria, says research shows different adjectives are used to describe female leaders compared to male counterparts.
When multiple quotes by the same speaker are present in a news article, it is often the case that only one syntactic quotative structure is used, with subsequent quotes receiving their own sentence or sentences, all in quotes, as in Example (2). The first quote contains a quotative verb and speaker (Kim told). The second quote, The fact that… is a separate sentence without a quotative verb. We label these cases ‘floating quotes’.
(2) “Honestly, it feels like we’re living our worst nightmare right now,” Kim told CTV News Friday. “The fact that we are being accused right now of an unethical adoption is crazy.”
In the above example, the second sentence is a continuation of Kim’s quotation. However, Kim’s name is not mentioned as the speaker of the quote anymore. Readers understand implicitly that this second quotation is from the same person mentioned in the previous sentence. These are also referred to as open quotations [50]. Spronck and Nikitina [38] characterize them as ‘defenestrated’, because the framing or matrix clause that typically introduces reported speech is absent. We identify floating quotes by following the structure of the text and matching their speaker to the most recently mentioned speaker.
Using the above two procedures, we capture a variety of syntactic and floating quotes with their verb and speaker. We also introduced a heuristic system for detecting quotes that were initially missed by the syntactic process. Further details on how we extract each type of quote are provided in the S1 Appendix. The next step connects each of these quotations to an entity identified as a source, labelled by gender.
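As a rough illustration of how such syntactic quotes can be pulled from a dependency parse (the project's full rule set is in the S1 Appendix; the reporting-verb list below is a placeholder), a spaCy-based sketch might look like this:

```python
import spacy

# Placeholder reporting-verb list; the real system uses a much larger inventory.
REPORTING_VERBS = {"say", "tell", "state", "report", "add", "claim"}

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def syntactic_quotes(text):
    """Return (speaker, verb, content) triples for quotes introduced by a matrix clause."""
    doc = nlp(text)
    quotes = []
    for token in doc:
        # A reporting verb whose clausal complement (ccomp) holds the quoted material
        if token.pos_ == "VERB" and token.lemma_ in REPORTING_VERBS:
            contents = [c for c in token.children if c.dep_ == "ccomp"]
            subjects = [c for c in token.children if c.dep_ == "nsubj"]
            if contents and subjects:
                span = doc[contents[0].left_edge.i : contents[0].right_edge.i + 1]
                quotes.append((subjects[0].text, token.text, span.text))
    return quotes

# A shortened version of example (1) above; the real pipeline also resolves the
# full speaker name via named entity recognition and coreference.
print(syntactic_quotes("Janni Aragon says research shows different adjectives are used."))
```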
### Identifying people and predicting their gender
We apply gender prediction techniques not only to sources (i.e., the people quoted), but also to all the people mentioned in the text as well as the authors of the articles. Since the main goal of the study is tracking the gender gap, it is very important that the identification of people and gender predictions are performed as accurately as possible.
As a first step towards extracting mentions of people in text, we use Named Entity Recognition (NER), a commonly used procedure in NLP. Current NER techniques work fairly well on English data. These methods are statistical in nature, relying on large amounts of annotated data and supervised or semi-supervised machine learning models, with neural network models being the most commonly used models nowadays [72, 73].
We first extract only entity types tagged with the label PERSON. This excludes organizations and locations that may look like names of people (e.g., Kinder Morgan or Don Mills). We then proceed to entity clustering. The same person may be referred to in the same article with slightly different names or pronouns (e.g., Justin Trudeau, the Prime Minister, Mr. Trudeau, he, his ). To unify these mentions into clusters, and ensure that we attribute quotes to the right person, we apply a coreference resolution algorithm, described in the S1 Appendix.
The coreference process results in a unique cluster for each person containing all mentions in the text that refer to that person. Thus, we can count the number of people mentioned in the text and move to the next step, i.e., predicting their genders.
For gender prediction of each unified named entity (cluster of mentions), we rely on gender prediction web services that use large databases to look up a name by its gender. Initially, we experimented with using pronouns to predict gender (he or she ), but found that this method was not reliable, because not all clusters of reference to an individual include a pronoun (see S1 Appendix for details).
The gender prediction web services that we use perform lookups by first names only, based on databases of names and sex as assigned at birth, or lookups by first and last name, using information for that specific individual and how they are identified publicly. We also keep an internal cache of names that we have previously looked up. In addition, the cache contains manual entries for names that we know are not available in public databases, or are incorrectly tagged by a gender service.
We apply the gender prediction algorithm to three different lists of names: people mentioned in the article, people quoted (who we refer to as sources ), and authors of articles. This process, especially for authors of articles, involves extensive data cleaning (see S1 Appendix).
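As a rough sketch of this cached lookup (illustrative only; `lookup_gender_service` stands in for whichever external name-to-gender service is queried and is not the project's actual client):

```python
# Manual entries and results of previous lookups live in the cache.
GENDER_CACHE = {"justin trudeau": "male"}

def predict_gender(full_name, lookup_gender_service):
    """Return 'female', 'male', or 'unknown' for a name, consulting the cache first."""
    key = " ".join(full_name.lower().split())
    if key in GENDER_CACHE:
        return GENDER_CACHE[key]
    # Try the first-last name (entity-based) lookup first, then fall back to the first name.
    result = lookup_gender_service(key) or lookup_gender_service(key.split()[0])
    GENDER_CACHE[key] = result or "unknown"
    return GENDER_CACHE[key]
```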
To match the name and gender of the speaker to quotes, we find the corresponding named entity for each extracted quote. In order to do so, we compare the character indices of a quote’s speaker against the indices of each named entity mention in our unified clusters. If a mention span and a speaker span have two or more characters of overlap, we assume that the mention is the speaker and attribute the quote to the unified named entity (coreference cluster) of the mention. After trying to align all quotation speakers with potential named entities, there may still remain some quotes with speakers that could not be matched with any of the named entities. There are several categories of these cases, such as quotes with a pronoun speaker (e.g., she said) where the pronoun is still a singleton after all named entity and coreference cluster merging. Our current version of the software ignores these cases. We provide statistics on these and other missed cases in the evaluation section below.
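The character-overlap matching could be sketched as follows (spans are (start, end) character indices; this is an illustration, not the project's code):

```python
def char_overlap(a, b):
    """Number of overlapping characters between two (start, end) spans."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def attribute_quote(speaker_span, clusters):
    """Attribute a quote to the coreference cluster that has a mention overlapping
    the speaker span by two or more characters; return None if no cluster matches."""
    for person, mention_spans in clusters.items():
        if any(char_overlap(speaker_span, m) >= 2 for m in mention_spans):
            return person
    return None

# Hypothetical spans: the speaker span overlaps one of the mentions of "Justin Trudeau".
print(attribute_quote((120, 127), {"Justin Trudeau": [(10, 24), (118, 125)]}))
```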
## Evaluation
Evaluation of the system was continuous in the development phase, with each new addition and improvement being tested against the previous version of the system, and against manual annotations. Evaluation was carried out separately for each component (quote extraction, people and source extraction, and gender prediction) several times over the course of the project to test out new ideas and to enhance the system. In this section, we discuss the main annotation and results of our evaluation for the most recent release of the system, V5.3. Further details on a pilot annotation and the format of the manually-annotated dataset can be found in the S1 Appendix.
For evaluation, we selected 14 articles from each of the seven news outlets, for a total of 98 articles, chosen from months of recently scraped data at the time (December 2018–February 2019). We chose articles that were representative of the overall statistics according to our system, i.e., contained less than 30% female and more than 70% male sources (calculated based on the latest system release at the time of annotation). The articles were picked so that they were distributed across different days of the week, and each had at least 3,000 characters.
We draw articles from our database, rather than using unseen data, for two reasons. First of all, since none of the processes involve supervised learning on this data, there is no risk that the system will have learned anything from the test data. The NLP methodology uses a combination of pre-trained language models (from spaCy), linguistic rules, and custom phrase matching. Thus, we can safely assume that any true positives captured during evaluation will generalize to the rest of our data as well. Second, we are primarily interested in how the system performs specifically on the data we are processing. While evaluation on news articles by other organizations may be useful, we are most of all interested in our performance on the data the Gender Gap Tracker collects daily.
An experienced annotator, who had participated in our pilot annotation, completed the data labelling. The annotations were then also validated and corrected when necessary by a second annotator.
For each of the 98 articles, we have a JSON file which contains an array of extracted quotes, verbs, and speakers, together with their character span indices in the text. We evaluate the output of our system by comparing it to these human annotations. To do so, first we need to align the annotations with the extracted quotes. Let $q_a$ be the span of an annotated quote and $q_e$ the span of an extracted quote. The match between these two quotes is defined as:
$$score = \frac{len(q_a \cap q_e)}{len(q_a)}$$
For each annotated quote $q_a$, the best matching quote from among all extracted quotes is the one with the highest matching score, assuming the score is above a certain threshold. We experimented with 0.3 and 0.8 as easy and hard thresholds, respectively. We found that 0.3 captured a relatively large portion of each quote, and 0.8 captured the majority of the content. In the following example, the human annotated and automatically extracted quote spans are highlighted using italic and underlined text, respectively. The alignment score is 0.45, which is the ratio of the length of the overlapping portion (69 characters) to the overall length of the annotated span (153 characters).
(3) “It’s premature for us to make any sort of pronouncement about that right now, but I can tell you this thing looks and smells like a death penalty case.
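A small sketch of the alignment score on (start, end) character spans (illustrative only; the offsets below are made up so as to reproduce the 69/153 ratio from the example):

```python
def alignment_score(annotated, extracted):
    """Length of the overlap between the two spans, divided by the annotated span's length."""
    overlap = max(0, min(annotated[1], extracted[1]) - max(annotated[0], extracted[0]))
    return overlap / (annotated[1] - annotated[0])

# 69 overlapping characters out of a 153-character annotated span, as in example (3).
print(round(alignment_score((0, 153), (84, 153)), 2))  # 0.45
```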
After alignment, we examine how many of the quotes were correctly detected (true positives), how many were not detected (false negatives), and whether we have some non-quote sentences detected as quotes (false positives). With these numbers, we report the precision, recall, and F1-score of the system in Tables 1 and 2.
Table 1
Quote extraction evaluation on manually annotated data.

| Threshold | Content precision | Content recall | Content F1-score | Verb accuracy | Speaker accuracy |
|---|---|---|---|---|---|
| Easy match (0.3) | 84.6% | 82.7% | 83.7% | 91.8% | 86.0% |
| Hard match (0.8) | 77.0% | 75.2% | 76.1% | 93.1% | 86.9% |
Table 2
Entity extraction evaluation based on manually annotated data.

| Category | Human annotation (n) | System annotation (n) | Precision | Recall | F1-score |
|---|---|---|---|---|---|
| Female people | 2,906 | 3,387 | 72.4% | 77.6% | 75.0% |
| Male people | 8,381 | 10,034 | 77.4% | 92.1% | 84.2% |
| Female sources | 1,442 | 1,104 | 94.6% | 64.6% | 76.8% |
| Male sources | 3,809 | 3,346 | 87.7% | 76.5% | 81.8% |
Table 1 shows the result of evaluating the quotation extraction code on the manually-annotated dataset. The first three columns of numbers reflect how well the system captures the quotation content span (according to the overlap thresholds of 0.3 and 0.8), and the last two columns show system accuracy on verb and speaker detection. We consider the verb to have been correctly detected if the verb extracted by the system has exactly the same span as the expert-annotated span for the verb of that quotation. In order to evaluate the speaker detection quality at the surface textual level, we apply a simple overlap threshold: If the system-annotated span for the speaker has at least one character overlap with the expert-annotated text span for the speaker, it will be accepted as a correct annotation. For example, if the system-annotated span was [12:17], corresponding to the string Obama, while the human-annotated span was [8:17], corresponding to the string Mr. Obama, the span overlap of five characters would mean they were considered the same speaker. Verb and speaker evaluations are applied only to the matched quotes (the quotations that were already aligned between system and expert based on the content span overlap). That is why the accuracy scores for Verb and Speaker in the table were higher when we used a stricter quote matching technique (hard match threshold).
### People and sources
The most important data point with respect to the goal of our project is the ratio of female and male sources. Therefore, we compare the raw number of people and sources of each gender extracted by our system against the corresponding numbers in the human expert annotation.
Furthermore, we would like to know how many of the people mentioned in the text were correctly detected and how many were missed. According to the annotation instructions, the most complete name of each person in the text needs to be provided by the annotators in the annotation files. We have the following arrays of names for each article: female people, male people, other/unknown-gender people, female sources, male sources, and other/unknown-gender sources. Using these manually annotated lists, we can calculate the number of entities our system detects and misses. We first convert all system- and expert-annotated entities in these lists to lowercase and trim the start/end space characters. Then we perform exact string matching on the elements of the arrays to calculate the precision, recall, and F1-score of each identification task. Note that this is a strict evaluation of the system performance and it is directly motivated by our goal to reveal the proportion of female and male sources in news publications.
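A minimal sketch of this strict evaluation, assuming the system and expert name lists are plain lists of strings; the names and resulting numbers below are purely illustrative and are not taken from our data.

```python
def normalize(names):
    """Lowercase and trim each name, as described above; duplicates collapse into a set."""
    return {name.strip().lower() for name in names}


def precision_recall_f1(system_names, expert_names):
    """Strict evaluation by exact string matching on normalized names."""
    sys_set, exp_set = normalize(system_names), normalize(expert_names)
    true_positives = len(sys_set & exp_set)
    precision = true_positives / len(sys_set) if sys_set else 0.0
    recall = true_positives / len(exp_set) if exp_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Purely illustrative names and numbers
system = ["Bonnie Henry ", "theresa tam", "Raymond James"]
expert = ["Bonnie Henry", "Theresa Tam", "Chrystia Freeland"]
print(precision_recall_f1(system, expert))  # approximately (0.67, 0.67, 0.67)
```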
Table 2 shows the results of entity matching between the system- and expert-annotated people and sources. We see better precision scores in detection of sources in comparison with people. The reason is that the quote extraction step narrows down the people list by filtering out the captured entities that were not quoted at all (so some errors such as location names tagged as people names would automatically be excluded). The recall measure shows the opposite trend: Recall is better for people than for sources. This is because the same narrowing down that improves the precision for sources results in an increase in the number of missed sources, thus negatively affecting recall.
One more interesting gender-related pattern is that, in general, recall was better for male people and male sources than for female people and female sources. This motivated us to take a closer look at the data, through a manual analysis, to see whether there was any systematic bias in our entity recognition and/or gender recognition procedures.
### Manual analysis of top sources
In addition to the comparison to a full set of articles described above, we also checked the gender accuracy for the top sources in each of the 24 months between October 2018 and September 2020. The results are gratifyingly accurate: The overall error rate is 0.1%. Table 3 provides a breakdown of the error rate per gender (false positives). Note that we examined the top 100 male and female sources per month, but each of those people is quoted multiple times. As a consequence, the number of quotes examined is quite large (over 195,000). There are three aspects to highlight in Table 3:
• Considering that we are examining a constant number of sources per month (top 100 men and top 100 women quoted), it is clear that men are overrepresented in the dataset. That is, the top 100 men each month are quoted much more frequently than the top 100 women. We discuss this further in Analysis and observations.
• The error rate for quotes by women is higher. In our list of quotes by women, we see a higher proportion of names that were actually men (0.2%). That is, the system is more accurate in recognizing the gender of male names. This means that there probably is, in fact, a slightly lower number of quotes from women than our official statistics on the dashboard show, as more quotes are incorrectly attributed to women.
• Most of the errors in the female name list are names that are actually male names or ambiguous (Ashley, Robin). Most of the errors in the male name list are names that are actually not people’s names. They include Raymond James, an investment firm, and Thomas Cook, a travel agency. We correct both types of errors on a regular basis, by adding information to our internal caches.
Table 3. Gender prediction accuracy for the top sources of opinion.

| | Number of quotes | Error rate (false positives) |
|---|---|---|
| Total quotes by men | 140,156 | |
| Quotes by men incorrectly identified | 147 | Error rate for men: 0.1% |
| Total quotes by women | 55,149 | |
| Quotes by women incorrectly identified | 117 | Error rate for women: 0.2% |
| Overall error rate | | 0.1% |
## Analysis and observations
In this section, we provide statistics on the data extracted from the seven news outlets, processed and tagged by the Gender Gap Tracker in the time frame of October 1, 2018 to September 30, 2020, 24 months of data and about 613,000 news articles. All numbers are based on the calculations of the Gender Gap Tracker version 5.3 (the most recently released version at the time of publication of this paper).
### Male vs. female sources
Fig 2 shows the statistics available on the Gender Gap Tracker dashboard online. The aggregated counts and ratios of female vs. male sources across different news outlets within the time interval of October 2018 to September 2020 are presented in the bar and the doughnut charts at the top. The bottom line graph shows the percentage of women quoted in the publications of each outlet week by week. Most numbers are in the range of 20 to 30 percent, meaning that women are consistently quoted far less often than men. While some outlets such as Huffington Post and CBC News are more gender-balanced than others, such as The National Post and The Globe and Mail, the numbers suggest that, overall, media outlets disproportionately feature male voices. This may be the result of unconscious bias on the part of the reporters (e.g., reaching out to men more often than to women, when a choice exists). We, of course, also know it is a result of societal bias. In a context where 71% of the Members of Parliament are male [74], it is natural to expect that we hear more often from male politicians. The fact that the current (in 2020) federal cabinet is gender-balanced probably helps. It does not, however, make up for the fact that the person at the top is a man. As shown in Table 4, Justin Trudeau, the Prime Minister, is quoted 8.3 times more often than Chrystia Freeland, arguably the most prominent woman politician in the country. At the top of the list of women is Bonnie Henry, the Public Health Officer for the province of British Columbia, a reflection of how important public health officers have become in the COVID-19 pandemic. And, clearly, Donald Trump is the most quoted person by far in that time period. Perhaps the style of a person’s statements, in addition to their content, makes the press more likely to find them quotable.
Table 4. Top 15 quoted men and women in Canadian media between October 1, 2018 and September 30, 2020.

| Identified as men: Name | # of quotes | Sector | Identified as women: Name | # of quotes | Sector |
|---|---|---|---|---|---|
| Donald Trump | 15,746 | Politics | Bonnie Henry | 2,239 | Public health |
| Justin Trudeau | 13,422 | Politics | Christine Elliott | 1,918 | Politics |
| Doug Ford | 6,760 | Politics | Chrystia Freeland | 1,890 | Politics |
| Jason Kenney | 4,190 | Politics | Nancy Pelosi | 1,718 | Politics |
| Andrew Scheer | 3,679 | Politics | Theresa Tam | 1,627 | Public health |
| François Legault | 2,754 | Politics | Jody Wilson Raybould | 1,493 | Politics |
| John Tory | 2,401 | Politics | Rachel Notley | 1,365 | Politics |
| Jagmeet Singh | 2,039 | Politics | Deena Hinshaw | 1,106 | Public health |
| John Horgan | 1,910 | Politics | Andrea Horwath | 1,053 | Politics |
| Joe Biden | 1,667 | Politics | Valérie Plante | 979 | Politics |
| Mike Pompeo | 1,661 | Politics | Patty Hajdu | 950 | Politics |
| Blaine Higgs | 1,659 | Politics | Catherine McKenna | 861 | Politics |
| Boris Johnson | 1,553 | Politics | Elizabeth May | 671 | Politics |
| Scott Moe | 1,528 | Politics | Theresa May | 622 | Politics |
| Total | 62,564 | | Total | 19,173 | |
It is important at this point to emphasize that we do not distinguish between Justin Trudeau as a source of information (the way source is typically used by reporters) and Justin Trudeau saying something that reporters felt the need to quote, even if it is not new or privileged information of the type that sources typically provide. To us, both are instances of a ‘quote’, and Justin Trudeau is equally the source in both cases. Either case is significant in that it points to reporters giving more of a voice to people who are already quoted frequently.
The occupation of people quoted, as shown in Table 4, is also quite illuminating. Politics dominates, including international figures (Nancy Pelosi, Boris Johnson), and Canadian politicians at the federal (Elizabeth May, Jagmeet Singh), provincial (Rachel Notley, Doug Ford), and municipal (Valérie Plante, John Tory) levels. The diversity in the list of quoted women is perhaps more interesting. It includes Meng Wanzhou, Chief Financial Officer of Huawei, who was arrested in Vancouver in December 2018 and is in the middle of a legal extradition process as of 2020.
The three other female names that are not politicians are public health officers (Bonnie Henry, British Columbia’s Public Health Officer; Theresa Tam, Chief Public Health Officer of Canada; and Deena Hinshaw, Alberta’s Public Health Officer). One could also include Patty Hajdu (federal Minister of Health) and Christine Elliott (Ontario’s Minister of Health) in this group. All these women have been frequently quoted as a consequence of the COVID-19 pandemic, and started appearing in monthly top lists only in January 2020. By comparison, the top 15 women quoted in December 2019 included Bonnie Lysyk, Auditor General of Ontario, who released her annual report that month, and environmental activist Greta Thunberg.
From these top-15 lists, it does seem that lack of equal representation in sources is partly due to lack of equal representation in society in general and in politics in particular. Indeed, political empowerment is the area where women are most underrepresented across the world [4]. We do not believe, however, that news organizations and journalists are powerless to change the overall numbers we see on the Gender Gap Tracker dashboard. We know that the bias is pervasive and extends to expert sources and other areas where a choice does exist. Franks and Howell [75] discuss how the gender gap in broadcast media applies to prominent public figures and expert sources alike. They find that the source of the gap may be in who is hired and promoted within news organizations, with more men being hired, despite the fact that a larger number of women graduate from TV and broadcast university programs.
### Most frequent sources by category
In order to obtain a more extensive snapshot of who is being quoted by occupation, we conducted an annotation experiment. We extracted the top 100 men and women quoted for each of the 24 months between October 1, 2018 and September 30, 2020. We then manually annotated each of those sources and labelled them according to their occupation or the reason they were being quoted. The categories in Table 5 were based on previous work on manual source classification [27]. Most of the categories are self-explanatory. We assign ‘Unelected government official’ to cases such as attorneys general, government auditors, and (Canadian) governors, that is, cases where the person fills a political or representative role, but they were appointed, not elected. Health professionals can be considered unelected government officials (e.g., the Public Health Officer). However, given their prominence during COVID-19, we chose to assign them to a separate category, ‘Health profession’. ‘Perpetrators’ may be accused (i.e., alleged perpetrators) or convicted. In ‘Creative industries’ we include artists, actors, and celebrities. Journalists and anchors are assigned to ‘Media’. The category ‘Person on the street interviews’ is used for random interviews, or cases where the person is affected by an event (e.g., a flood), but cannot be considered a victim. ‘Error’ refers to cases where a name was wrongly identified as that of a person (e.g., Thomas Cook as a person). Errors in gender prediction are reported in Table 3.
Table 5. Top 100 male/female sources, by category, in each of the 24 months between October 1, 2018 and September 30, 2020.

| Category | Quotes (identified as men) | Unique persons (men) | Quotes (identified as women) | Unique persons (women) |
|---|---|---|---|---|
| Politician | 103,378 (73.8%) | 295 (40.4%) | 29,007 (52.6%) | 270 (24.7%) |
| Sports | 10,723 (7.7%) | 113 (15.5%) | 1,415 (2.6%) | 60 (5.5%) |
| Unelected government official | 9,175 (6.5%) | 75 (10.3%) | 4,583 (8.3%) | 153 (14.0%) |
| Health profession | 5,327 (3.8%) | 21 (2.9%) | 9,217 (16.7%) | 58 (5.3%) |
| Police | 1,763 (1.3%) | 33 (4.5%) | 1,471 (2.7%) | 57 (5.2%) |
| Legal profession | 1,319 (0.9%) | 33 (4.5%) | 1,171 (2.1%) | 69 (6.3%) |
| Creative industries | 1,278 (0.9%) | 19 (2.6%) | 1,011 (1.8%) | 49 (4.5%) |
| Perpetrator | 948 (0.7%) | 18 (2.5%) | 264 (0.5%) | 17 (1.6%) |
| Victim/witness | 634 (0.5%) | 10 (1.4%) | 1,424 (2.6%) | 94 (8.6%) |
| Media | 530 (0.4%) | 11 (1.5%) | 391 (0.7%) | 21 (1.9%) |
| Non-governmental organization | 245 (0.2%) | 7 (1.0%) | 959 (1.7%) | 53 (4.9%) |
| Error | 91 (0.1%) | 4 (0.5%) | 8 (0.0%) | 1 (0.1%) |
| Person on the street interviews | 24 (0.0%) | 1 (0.1%) | 262 (0.5%) | 22 (2.0%) |
| Total | 140,156 | 731 | 55,149 | 1,091 |
There are some interesting observations with regard to Table 5. First of all, we notice that there are more women than men being quoted overall for this period (1,091 women vs. 731 men). That is, we see more variety in women in terms of the number of people being quoted. The difference in the number of quotes, however, is astounding: Men are quoted almost three times as often as women. That is, even though we hear from more women, we hear from men more often. This fact probably accounts, in large part, for the gap that we see on the Gender Gap Tracker dashboard, which counts unique quotes (not unique persons). Additionally, we see that there is a large difference between the most frequently quoted category and the second most frequently quoted. For men, number 1 is politicians (103,378) and number 2 is sports figures (10,723). For women, politicians are also at the top (29,007), with health professionals second to politicians (9,217). (Note that the rows are sorted by frequency of quotes for people identified as men).
These two findings, that much more space is given to men (in terms of number of quotes) and that much more space is given to the top category or occupation, point to a possible Pareto distribution [76], the principle that a large proportion of the resources is held by a small percentage of the population. Originally applied to wealth inequality, Pareto distributions have been found for the size of cities, internet traffic, scientific citations [77], and for the reward systems in science [78]. The related Pareto principle, also known as the 80-20 rule, preferential attachment, or the Matthew effect (‘the rich get richer’), quantifies the difference in distribution (80% of the wealth held by 20% of the population). It seems that the main obstacle to hearing more from women in the media is a form of preferential treatment to those who already have a voice. This effect has been described as a winner-take-all distribution [24], in society in general and in news media in particular. We do have to bear in mind, however, that the numbers in Table 5 are based on the top 100 citations for men and women in each month. That is, they inherently capture the top of the Pareto distribution. Table 4 captures an even smaller fragment of that distribution, because it considers only the top 15 across the two-year period.
Finally, we would like to make some observations about the relative distribution of men and women by category. It is interesting to observe that, by number of unique persons, politicians seem to be close to parity (295 men and 270 women). There is a stark difference, again, in the number of quotes, that is, the number of times they were quoted: over 103,000 quotes by the 295 male politicians compared to just over 29,000 by the 270 female politicians. In other words, when we hear a quote by a politician, that politician is a man 78% of the time. The difference is even higher in sports. An interesting asymmetry is found between perpetrators (78% of the quotes by perpetrators are by men) and victims or witnesses (31% of the quotes by victims are men). Categories where women outnumber men both in terms of quotes and unique persons quoted in the category include health professionals, non-governmental organizations, and academics or researchers.
### The role of author gender
Now that we have established that the majority of quotes in news articles are from men, it would be interesting to check whether this bias has any correlation with the gender of the authors. Our hypothesis is that authors may prefer to feature and interview people of their gender, that is, articles written by female authors may contain a higher ratio of female sources compared to articles written by male authors.
In order to test this hypothesis, we first tagged the gender of the author or authors of each article, using the same name-gender services that we utilized for gender recognition on people and sources mentioned in texts. The process for cleaning up the author fields is described in the S1 Appendix, Section A.3.3. We then extracted statistics for female and male sources within the publications of each news outlet, broken down into several categories: articles written by female authors only (155,197 articles), by male authors only (213,487 articles), by several authors of different genders (21,825 articles), and articles without a byline (222,041 articles). The last category encompasses different types of situations. It contains articles that had no byline or named author, such as editorials or newswire content. It also includes articles written by specific authors for which our system did not find a gender due to different limitations (e.g., the name does not exist in the gender databases). We know that our gender recognition services work quite well, because the rate of ‘other’ for sources mentioned (as opposed to authors) is quite low, at less than 1% for the entire period. Note also that this category is quite variable across news organizations. For instance, in the case of CBC News, where ‘no byline’ makes up the majority of the articles, this is because many articles do not have an author, but are posted as ‘CBC News’ or ‘CBC Radio’, or come from newswire sources.
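As an illustration of how an article might be bucketed by author gender once each author name has been assigned a gender label, consider the sketch below. The label values and category names are assumptions made for this example; they are not the exact labels used in our pipeline.

```python
def byline_category(author_genders):
    """Bucket an article by the genders assigned to its named authors.

    author_genders: a list with one label per author, each 'female', 'male',
    or 'unknown'; an empty list means no byline was found.
    """
    known = {g for g in author_genders if g in ("female", "male")}
    if not known:
        return "no byline / unknown"
    if known == {"female"}:
        return "female authors only"
    if known == {"male"}:
        return "male authors only"
    return "multiple genders"


print(byline_category(["female"]))          # female authors only
print(byline_category(["male", "female"]))  # multiple genders
print(byline_category([]))                  # no byline / unknown
```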
From Fig 3, we see that, overall, the number of male authors exceeds the number of female authors in all outlets, except for The Huffington Post (49% women vs. 36% men, and 13% with no byline), which is also consistently the best performer in terms of female sources (see the line chart at the bottom of Fig 2).
Fig 3. Percentages of authors by gender, by outlet. Dates: October 1, 2018 to September 30, 2020.
Fig 4 shows, for each of the categories of authors described above, the percentage of times that they quoted female voices. The group at the bottom shows the aggregated percentage across all outlets, which speaks in favour of our hypothesis: Female authors are on average more likely than male authors to quote women in their articles. The chart shows that 34% of the sources are women in the articles authored by women, whereas this number is 25% in those authored by men.
Fig 4. Percentages of female sources across seven news outlets, by author gender. Dates: October 1, 2018 to September 30, 2020.
Now let us examine the performance of male and female authors working for each of the news outlets. In all cases, without exception, articles written by women quote far more women than articles written by the other three groups. This suggests that part of the solution to the gender gap in media includes having more women reporters. This is true not only because women quote more women, but also because they seem to have a positive influence when part of a group. In most cases, articles written by a group that includes both men and women have more women quoted than articles written by men only. The two exceptions are CTV News (by a small margin, 26.3% women quoted with male-only authors vs. 24.4% with multiple genders) and HuffPost (by a slightly larger margin, 27.0% vs. 24.6%).
It is difficult to comment on the ‘no byline’ author category, as it includes many different types of authors, from editorials and newswire content to authors whose name we could not assign to a gender. In most cases, however, the trend is also that those articles tend to quote women more than articles under a male-only byline (with the exception of Global News, which also had the lowest percentage of articles without a byline).
In summary, the analyses in this section indicate that the bias towards quoting men seems to be strongest in articles written by men, a trend that has been observed in academic citations [79–83], Twitter mentions [84], including the Twitter circles of political journalists [85], and certainly in news articles [86]. Articles co-authored by a mix that includes male and female writers seem to contain a better balance of male and female sources of opinion; this observation points to collaboration between genders as a path towards closing the gender quote gap. This is, however, by no means a silver bullet. Recent analyses of the relationship between leadership in news organizations and balanced gender representation have found no correlation between the proportion of women producing the news and the proportion of women featured in the news [87]. This may have to do with a male-dominated culture in newsrooms, where professional identity overrides gender identity [87–89].
### The role of out-of-house content
One objection that news organizations may have is that, in some cases, they have no control over the breakdown of sources in an article, because they republish content, either from newswire or from other news publishers. Thus, it would be interesting to know whether there is a difference in the ratios depending on the source of the article.
As it turns out, classifying articles by source is a rather difficult task. We rely on the data we obtain by scraping, and specifically the author field. Unfortunately, the author field in an article does not always clearly indicate the author’s affiliation or the source. We restricted our analyses to The Toronto Star, because that organization had expressed an interest in a more fine-grained analysis. Note that this analysis is for slightly different dates, the 18 months between October 2018 and March 2020.
Using a combination of patterns and regular expression searches, we classified all the articles of The Star into three categories: in-house, out-of-house, or newswire. Out-of-house articles were labelled using an extensive list of external publishers that The Star re-publishes (e.g., LA Times, Washington Post, and Wall Street Journal). Newswire articles were determined to originate from a handful of news agencies: Canadian Press, Associated Press, Bloomberg, and Reuters. We were careful to restrict our pattern matching to author fields, as articles written in-house sometimes contain photos from newswire organizations.
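The sketch below illustrates the kind of author-field pattern matching described here. The specific regular expressions are invented for illustration; the actual lists of external publishers and newswire agencies used for The Star are considerably longer.

```python
import re

# Illustrative patterns only; the real lists of publishers and agencies are longer.
NEWSWIRE = re.compile(r"\b(the canadian press|associated press|bloomberg|reuters)\b", re.I)
OUT_OF_HOUSE = re.compile(r"\b(los angeles times|washington post|wall street journal)\b", re.I)


def classify_article(author_field):
    """Label an article as newswire, out-of-house, or in-house from its author field."""
    if NEWSWIRE.search(author_field):
        return "newswire"
    if OUT_OF_HOUSE.search(author_field):
        return "out-of-house"
    return "in-house"


print(classify_article("The Canadian Press"))         # newswire
print(classify_article("Jane Doe, Washington Post"))  # out-of-house
print(classify_article("Jane Doe, Staff Reporter"))   # in-house
```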
Using this method, we obtained the results in Table 6. (Note that our method has a margin of error: In a manually labelled sample of 10,000 articles, we found an error rate of 4.41%, almost always in the in-house articles. That is, articles that are out-of-house or newswire may be incorrectly identified as in-house.) We find that, regardless of the origin of the article, men are the dominant source, and that the proportions are quite similar for out-of-house and newswire content. It is encouraging, however, to see that articles written by The Star reporters are more inclusive than those originating outside the organization. The Star has publicly stated that they want to improve the proportion of female sources that they quote [90], and it seems to be the case that their reporters do better, even if the proportion is still far from parity.
Table 6. Gender ratio in sources by article type. Articles from The Star only. Dates: October 1, 2018 to March 31, 2020.

| Article type | Articles, n | Male sources | Female sources | Other sources | Male sources % | Female sources % | Other sources % |
|---|---|---|---|---|---|---|---|
| In-house | 22,528 | 29,766 | 11,281 | 285 | 72.0% | 27.3% | 0.7% |
| Out-of-house | 13,400 | 17,359 | 6,246 | 182 | 73.0% | 26.3% | 0.8% |
| Newswire | 32,341 | 49,856 | 14,193 | 620 | 77.1% | 21.9% | 1.0% |
## Conclusion
The main goal of the Gender Gap Tracker database and dashboard is to motivate news outlets to diversify their sources. This applies to all forms of diversity. While the Gender Gap Tracker can only capture one kind of diversity, because it relies on names to assign gender to sources, we believe that other forms of diversity should be considered, as we know that many other groups are underrepresented in the news [91–96].
Gender equality is one of the United Nations’ 17 Sustainable Development Goals [30]. We are, sadly, far from achieving gender equality in many areas of our societies. Gender representation in the media is, however, within our reach, if enough effort is devoted to this goal and if we incorporate accountability into the effort. We hope that the Gender Gap Tracker provides the type of accountability tool that will encourage and facilitate gender parity in sources. Two results from our analyses that we would like to highlight here suggest a path towards equality.
First of all, we saw in Fig 4 that articles by authors of multiple genders tend to quote women more often. That is, when the author list is diverse, so are the sources quoted. Other research suggests that diversity at the top, in editors and publishers, also has a positive effect on the proportion of women mentioned in the news [24], although it is not sufficient to have parity in the newsroom or increased female leadership in news organizations [87, 88, 97]. The relationship between female leadership and improved representation for women in the news is indeed quite complex [98].
Second, results (Tables 4 and 5) point to a lack of equality in how many times men and women are quoted overall, not just in how many men and women are quoted. Thus, although we see a certain tokenism in having female voices present in the news, their voices are drowned out by the overwhelming number of times that we hear from men, often from just a handful of men. It looks like women are given a presence, but then men get the majority of the space. This also points to a concentration of power at the top, which can be balanced by diversifying sources in general.
Journalists report that it takes more time and effort to reach diverse sources. There are many barriers for women to participate in civil society, and in particular for engaging with the media. One particularly harrowing issue that needs to be addressed is the abuse and harassment that women experience when they speak publicly, especially when they speak to controversial topics [99–103]. Women who engage in online discussions experience trolling, abusive comments, death and rape threats, and also threatening offline encounters, such as name-calling and public abuse [104, 105]. Jane [106] argues that the extent of the harassment online has offline consequences for women, which are manifested socially, psychologically, financially, and politically. Many women, understandably, self-censor to avoid such consequences. True equal representation in public discourse will be much more difficult to achieve if the rewards and consequences of participating are unequal across genders.
The size and richness of the data in the Gender Gap Tracker database lends itself to many interesting further analyses. One area that we are investigating is the relationship between the topic of the article and the gender of those quoted. The research question, simply put, is whether men are quoted more in financial news and women in arts and lifestyle articles. Our preliminary answer is that, indeed, this is the case, with a bright spot in the prominence of female voices in healthcare during the COVID-19 pandemic [107]. Topic-based analyses can also help identify emerging topics, such as one-time events (terrorist attacks, sports events) or new developments that stay in the news (Brexit, COVID-19).
We have also informally explored the relative prominence of political candidates in several elections [108]. We found that eventual winners of elections were more likely to be quoted in the period leading up to the election in most elections we studied (but not all). We, of course, do not propose a causal relation between presence in the media and likelihood of being elected. Even if there is a causal relation, the cause and effect direction is unclear. It could be that the more well-known the candidate, the higher their chances of being elected. It could also be the case that when a candidate seems to be leading in the polls, they are more likely to be quoted in the news media. Further analyses as new elections take place would shed more light onto those questions.
Other research avenues that could be pursued relate to questions of salience and space, i.e., whether quotes by men are presented more prominently in the article, and whether men are given more space (perhaps counted in number of words). Finally, more nuanced questions that involve language analysis include whether the quotes are presented differently in terms of endorsement or distance from the content of the quote (stated vs. claimed). We plan to pursue some of those questions, but also invite researchers to join in this effort. The data collected for this project can be made available, upon request, for non-commercial research purposes.
## Data and code
The data was downloaded from public and subscription websites of newspapers, under the ‘fair dealing’ provision in Canada’s Copyright Act. This means that the data can be made available only for private study and/or research purposes, and not for commercial purposes. As such, the data will be made available upon request and after signing a license agreement. Contact for data access: Maite Taboada (mtaboada@sfu.ca) or Research Computing Group at Simon Fraser University (research-support@sfu.ca).
The code is available on GitHub under a GNU General Public License (v3.0). The authors of this paper are the creators of the code and own the copyright to it: https://github.com/sfu-discourse-lab/GenderGapTracker.
A light-weight version of the NLP module is also made available for processing one article at a time: https://gendergaptracker.research.sfu.ca/apps/textanalyzer.
## Acknowledgements
The Gender Gap Tracker is a collaboration between the Discourse Processing Lab, the Big Data Initiative at Simon Fraser University, and Informed Opinions. Our thanks and admiration to Shari Graydon of Informed Opinions for initiating this project and for being a tireless advocate for gender equality. We would like to thank Kelly Nolan, Dugan O’Neil, and John Simpson for bringing us together, and John especially for the initial design of the database. Yanlin An, Danyi Huang, and Nilan Saha contributed to the heuristic quote extraction process, as part of a capstone project in the Master of Data Science program at the University of British Columbia. Thank you to members of the Discourse Processing Lab at SFU for feedback, insight, and help with evaluation: Laurens Bosman, Lucas Chambers, Katharina Ehret, Rohan Ben Joseph, and Varada Kolhatkar. Special thanks to Lucas Chambers for tracking down references and for editing assistance.
## References
Macharia S. Who Makes the News? Global Media Monitoring Project; 2015.
Chesser SG. Women in National Governments Around the Globe: Fact Sheet. Washington, DC: Congressional Research Service; 2019.
World Economic Forum. The Global Gender Gap Report 2018. Geneva, Switzerland: World Economic Forum; 2018.
World Economic Forum. The Global Gender Gap Report 2020. Geneva, Switzerland: World Economic Forum; 2019.
Pew Research Center. The Data on Women Leaders. Pew Research Center; 2019.
Jalalzai F. Shattered, Cracked, or Firmly Intact?: Women and the executive glass ceiling worldwide. Oxford: Oxford University Press; 2013.
Zillman C. The Fortune 500 has more female CEOs than ever before. Fortune. 2019; May 16, 2019. Available from: https://fortune.com/2019/05/16/fortune-500-female-ceos/.
Johnson GF, Howsam R. Whiteness, power and the politics of demographics in the governance of the Canadian academy. Canadian Journal of Political Science. 2020;53(3): 676–694.
10. Beauvais E. The gender gap in political discussion group attendance. Politics & Gender. 2019;16: 1–24.
11. Jiwani Y, Young ML. Missing and murdered women: Reproducing marginality in news discourse. Canadian Journal of Communication. 2006;31(4): 895–917.
12. Strega S, Janzen C, Morgan J, Brown L, Thomas R, Carriére J. Never innocent victims: Street sex workers in Canadian print media. Violence Against Women. 2014;20(1): 6–25.
13. Greer C. News Media, Victims and Crime. London: Sage; 2007.
14. Hollander JA, Rodgers K. Constructing victims: The erasure of women’s resistance to sexual assault. Sociological Forum. 2014;29(2): 342–364.
15. Higgins M, Smith A. ‘My husband, my hero’: Selling the political spouses in the 2010 general election. Journal of Political Marketing. 2013;12(2-3): 197–210.
16. Morgan C. Give us a wave! World leaders’ wives—and a slightly awkward-looking Philip May—pose for photos at G20 summit in Japan (but Melania Trump stays at home). The Daily Mail. 2019; June 28, 2019. Available from: https://www.dailymail.co.uk/femail/article-7191709/Wives-world-leaders-joined-Philip-pose-photos-G20-summit-Japan.html
17. Dobson H. Where are the women in global governance? Leaders, wives and hegemonic masculinity in the G8 and G20 summits. Global Society. 2012;26(4): 429–449.
18. Trimble L. Ms. Prime Minister: Gender, media, and leadership. Toronto: University of Toronto Press; 2018.
19. García-Blanco I, Wahl-Jorgensen K. The discursive construction of women politicians in the European press. Feminist Media Studies. 2012;12(3): 422–441.
20. Power K, Rak L, Kim M. Women in business media: A critical discourse analysis of representations of women in Forbes, Fortune and Bloomberg BusinessWeek, 2015-2017. Critical Approaches to Discourse Analysis Across Disciplines. 2019;11(2): 1–26.
21. Van der Pas DJ, Aaldering L. Gender differences in political media coverage: A meta-analysis. Journal of Communication. 2020;70(1): 114–143.
22. Carlin DB, Winfrey KL. Have you come a long way, baby? Hillary Clinton, Sarah Palin, and sexism in 2008 campaign coverage. Communication Studies. 2009;60(4): 326–343.
23. Trimble L, Curtin J, Wagner A, Auer M, Woodman VKG, Owens B. Gender novelty and personalized news coverage in Australia and Canada. International Political Science Review. Forthcoming.
24. Shor E, van de Rijt A, Miltsov A, Kulkarni V, Skiena S. A paper ceiling: Explaining the persistent underrepresentation of women in printed news. American Sociological Review. 2015;80(5): 960–984.
25. O’Neill D, Savigny H, Cann V. Women politicians in the UK press: Not seen and not heard? Feminist Media Studies. 2016;16(2): 293–307.
26. Miller SH. The content of news photos: Women’s and men’s roles. Journalism Quarterly. 1975;52(1): 70–75.
27. Morris M. Gender of sources used in major Canadian media. Informed Opinions; 2016. Available from https://informedopinions.org/journalists/informed-opinions-research/
28. Kassova L. The Missing Perspectives of Women in COVID-19 News: A special report on women’s under-representation in news media. AKAS Consulting; 2020. Available from https://www.iwmf.org/women-in-covid19-news/
29. Parker K, Horowitz J, Igielnik R. Women and Leadership 2018. Pew Research Center; 2018. Available from https://www.pewsocialtrends.org/2018/09/20/women-and-leadership-2018/
30. United Nations. The Sustainable Development Goals Report 2020. United Nations; 2020. Available from https://unstats.un.org/sdgs/report/2020/
31. CCPA. Unfinished business: A parallel report on Canada’s implementation of the Beijing Declaration and Platform for Action. Canadian Centre for Policy Alternatives; 2019. Available from https://www.policyalternatives.ca/publications/reports/unfinished-business
32. McInturff K. Closing Canada’s gender gap: Year 2240 here we come! Canadian Centre for Policy Alternatives; 2013. Available from https://www.policyalternatives.ca/publications/reports/closing-canadas-gender-gap
33. Bardall G, Bjarnegård E, Piscopo JM. How is political violence gendered? Disentangling motives, forms, and impacts. Political Studies. 2020;68(4): 916–935.
34. Yong E. I spent two years trying to fix the gender imbalance in my stories. The Atlantic. 2018; February 6, 2018. Available from https://www.theatlantic.com/science/archive/2018/02/i-spent-two-years-trying-to-fix-the-gender-imbalance-in-my-stories/552404/
35. Hawkins-Gaar K. Journalism has a gender representation problem. Bloomberg is looking for a solution. Poynter. 2019; January 30, 2019. Available from https://www.poynter.org/business-work/2019/journalism-has-a-gender-representation-problem-bloomberg-is-looking-for-a-solution/
36. Fitzpatrick M. Chief medical officers are leading Canada through COVID-19 crisis—and many are women. CBC News. 2020; April 2, 2020. Available from https://www.cbc.ca/news/health/women--chief--medical--officers--canada--1.5518974.
37. Tannen D. Talking voices: Repetition, dialogue, and imagery in conversational discourse. New York: Cambridge University Press; 2007.
38. Spronck S, Nikitina T. Reported speech forms a dedicated syntactic domain. Linguistic Typology. 2019;23(1): 119–159.
39. D’Arcy A. Quotation and advances in understanding syntactic systems. Annual Review of Linguistics. 2015;1(1): 43–61.
40. Vološinov VN. Marxism and the Philosophy of Language. Cambridge: Harvard University Press; 1973 [1929].
41. Goddard C, Wierzbicka A. Reported speech as a pivotal human phenomenon: Commentary on Spronck and Nikitina. Linguistic Typology. 2019;23(1): 167–175.
42. Goddard C, Wierzbicka A. Direct and indirect speech revisited: Semantic universals and semantic diversity. In: Capone A, García-Carpintero M, Falzone A, editors. Indirect Reports and Pragmatics in the World Languages. Cham: Springer; 2019. pp. 173–199.
43. Arús-Hita J, Teruya K, Bardi MA, Kashyap AK, Mwinlaaru IN. Quoting and reporting across languages: A system-based and text-based typology. WORD. 2018;64(2): 69–102.
44. Coulmas F. Reported speech: Some general issues. In: Coulmas F, editor. Direct and Indirect Speech. Berlin: Mouton de Gruyter; 1986. pp. 1–28.
45. Davidson D. On saying that. Synthese. 1968;19(1-2): 130–146.
46. Wierzbicka A. The semantics of direct and indirect discourse. Papers in Linguistics. 1974;7(3-4): 267–307.
47. Siddharthan A, Teufel S. Whose idea was this, and why does it matter? Attributing scientific work to citations. In: Proceedings of Human Language Technologies/Conference of the North American Chapter of the Association for Computational Linguistics. Rochester, NY; 2007. pp. 316–323.
48. Dancygier B. Reported speech and viewpoint hierarchy. Linguistic Typology. 2019;23(1): 161–165.
49. Vandelanotte L. Dependency, framing, scope? The syntagmatic structure of sentences of speech or thought representation. Word. 2008;59(1-2): 55–82.
50. Recanati F. Open quotation. Mind. 2001;110(439): 637–687.
51. Capone A. The Pragmatics of Indirect Reports: Socio-philosophical considerations. Cham: Springer; 2016.
52. Jurafsky D, Martin JH. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. 2nd ed. Upper Saddle River, NJ: Prentice Hall; 2009.
53. Prasad R, Lee A, Dinesh N, Miltsakaki E, Campion G, Joshi AK, et al. Penn Discourse Treebank version 2.0. In: Proceedings of the Sixth International Conference on Language Resources and Evaluation. Marrakesh, Morocco; 2008. pp. 2961–2968.
54. Prasad R, Dinesh N, Lee A, Joshi AK, Webber BL. Attribution and its annotation in the Penn Discourse TreeBank. TAL. 2006;47(2): 43–64.
55. Pareti S. PARC 3.0: A corpus of attribution relations. In: Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC). Portorož, Slovenia; 2016. pp. 3914–3920.
56. Newell C, Cowlishaw T, Man D. Quote extraction and analysis for news. In: Proceedings of the Workshop on Data Science, Journalism and Media, KDD 2018. London, UK; 2018. pp. 1–6.
57. Muzny G, Fang M, Chang A, Jurafsky D. A two-stage sieve approach for quote attribution. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Valencia, Spain; 2017. pp. 460–470.
58. van Atteveldt W, Sheafer T, Shenhav SR, Fogel-Dror Y. Clause analysis: Using syntactic information to automatically extract source, subject, and predicate from texts with an application to the 2008–2009 Gaza War. Political Analysis. 2017;25(2): 207–222.
59. Krestel R, Bergler S, Witte R. Minding the source: Automatic tagging of reported speech in newspaper articles. In: Sixth International Conference on Language Resources and Evaluation (LREC 2008). Marrakech, Morocco; 2008. pp. 2823–2828.
60. Pouliquen B, Steinberger R, Best C. Automatic detection of quotations in multilingual news. In: Proceedings of Recent Advances in Natural Language Processing. Borovets, Bulgaria; 2007. pp. 487–492.
61. Edgar J, Phipps P, Kaplan R, Holzberg JL, Ellis R, Virgile M, et al. Assessing the feasibility of asking about sexual orientation and gender identity in the Current Population Survey: Executive summary. United States Census Bureau; 2018. RSM2018-02. Available from https://www.bls.gov/osmr/research-papers/2017/html/st170220.htm
62. Peng Z, Chen M, Kowalczyk S, Plale B. Author gender metadata augmentation of HathiTrust digital library. Proceedings of the American Society for Information Science and Technology. 2014;51(1): 1–4.
63. Garimella A, Banea C, Hovy D, Mihalcea R. Women’s syntactic resilience and men’s grammatical luck: Gender-Bias in Part-of-Speech Tagging and Dependency Parsing. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy; 2019. pp. 3493–3498.
64. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science. 2017;356(6334): 183–186.
65. Rudinger R, Naradowsky J, Leonard B, Van Durme B. Gender bias in coreference resolution. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans, LA; 2018. pp. 8–14.
66. Trista Cao Y, Daumé III H. Toward gender-inclusive coreference resolution. In: Proceedings of the Conference of the Association for Computational Linguistics (ACL). Seattle, WA; 2020. pp. 4568–4595.
67. Zhao J, Wang T, Yatskar M, Ordonez V, Chang KW. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen; 2017. pp. 2979–2989.
68. Font JE, Costa-jussà MR. Equalizing gender bias in neural machine translation with word embeddings techniques. In: Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Florence, Italy; 2019. pp. 147–154.
69. Bolukbasi T, Chang KW, Zou JY, Saligrama V, Kalai AT. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In: Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS). Barcelona; 2016. pp. 4349–4357.
70. Blodgett SL, Barocas S, Daumé III H, Wallach H. Language (technology) is power: A critical survey of ‘bias’ in NLP. In: Proceedings of the Conference of the Association for Computational Linguistics (ACL). Seattle, WA; 2020. pp. 5454–5476.
71. Gonen H, Goldberg Y. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Minneapolis, MN; 2019. pp. 609–614.
72. Derczynski L, Nichols E, van Erp M, Limsopatham N. Results of the WNUT2017 shared task on novel and emerging entity recognition. In: Proceedings of the 3rd Workshop on Noisy User-generated Text. Copenhagen, Denmark; 2017. pp. 140–147.
73. Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural architectures for Named Entity Recognition. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, CA; 2016. pp. 260–270.
74. Parliament of Canada. Women Candidates in General Elections; 2020. Available from: https://lop.parl.ca/sites/ParlInfo/default/en_CA/ElectionsRidings/womenCandidates.
75. Franks S, Howell L. Seeking women’s expertise in the UK broadcast news media. In: Carter C, Steiner L, Allan S, editors. Journalism, Gender and Power. New York: Routledge; 2019. pp. 49–61.
76. Arnold BC. Pareto distribution. In: Balakrishnan N, Colton T, Everitt B, Piegorsch W, Ruggeri F, Teugels JL, editors. Wiley StatsRef: Statistics Reference Online. Wiley; 2015. pp. 1–10.
77. Price DDS. A general theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information Science. 1976;27(5): 292–306.
78. Merton RK. The Matthew effect in science: The reward and communication systems of science are considered. Science. 1968;159(3810): 56–63.
79. Dion ML, Sumner JL, Mitchell SM. Gendered citation patterns across political science and social science methodology fields. Political Analysis. 2018;26(3): 312–327.
80. King MM, Bergstrom CT, Correll SJ, Jacquet J, West JD. Men set their own cites high: Gender and self-citation across fields and over time. Socius. 2017;3: 2378023117738903.
81. Mitchell SM, Lange S, Brus H. Gendered citation patterns in international relations journals. International Studies Perspectives. 2013;14(4): 485–492.
82. Geraci L, Balsis S, Busch AJB. Gender and the h-index in psychology. Scientometrics. 2015;105(3): 2023–2034.
83. Ferber MA, Brün M. The gender gap in citations: Does it persist? Feminist Economics. 2011;17(1): 151–158.
84. Zhu JM, Pelullo AP, Hassan S, Siderowf L, Merchant RM, Werner RM. Gender differences in Twitter use and influence among health policy and health services researchers. JAMA Internal Medicine. 2019;179(12): 1726–1729.
85. Usher N, Holcomb J, Littman J. Twitter makes it worse: Political journalists, gendered echo chambers, and the amplification of gender bias. The International Journal of Press/Politics. 2018;23(3): 324–344.
86. Armstrong CL. The influence of reporter gender on source selection in newspaper stories. Journalism & Mass Communication Quarterly. 2004;81(1): 139–154.
87. Kassova L. The Missing Perspectives of Women in News: A report on women’s under-representation in news media; on their continual marginalization in news coverage and on the under-reported issue of gender inequality. AKAS Consulting; 2020. Available from https://www.iwmf.org/missing-perspectives/
88. Hanitzsch T, Hanusch F. Does gender determine journalists’ professional views? A reassessment based on cross-national evidence. European Journal of Communication. 2012;27(3): 257–277.
89. Rodgers S, Thorson E. A socialization perspective on male and female reporting. Journal of Communication. 2003;53(4): 658–675.
90. English K. ‘Mirrored in media’ project aims to boost voices of women. The Toronto Star. 2019; June 29, 2019. Available from https://www.thestar.com/opinion/public_editor/2019/06/29/mirrored-in-media-project-aims-to-boost-voices-of-women.html
91.
92. Eberl JM, Meltzer CE, Heidenreich T, Herrero B, Theorin N, Lind F, et al. The European media discourse on immigration and its effects: A literature review. Annals of the International Communication Association. 2018;42(3): 207–223.
93. Min SJ, Feaster JC. Missing children in national news coverage: Racial and gender representations of missing children cases. Communication Research Reports. 2010;27(3): 207–216.
94. Ungerleider CS. Media, minorities and misconceptions: The portrayal by and representation of minorities in Canadian news media. Canadian Ethnic Studies/Etudes Ethniques au Canada. 1991;23(3): 158–163.
95. Tolley E. Framed: Media and the coverage of race in Canadian politics. Vancouver: UBC Press; 2015.
96. Corbett E. When disinformation becomes deadly: The case of missing and murdered Indigenous women and girls in Canadian media. In: Disinformation and Digital Democracies in the 21st Century. Toronto: NATO Association of Canada; 2019. Available from https://natoassociation.ca/disinformation-and-digital-democracy-in-the-21st-century/
97. Steiner L. Failed theories: Explaining gender difference in journalism. Review of Communication. 2012;12(3): 201–223.
98. Shor E, van de Rijt A, Miltsov A. Do women in the newsroom make a difference? Coverage sentiment toward women and men as a function of newsroom composition. Sex Roles. 2019;81(1): 44–58.
99. Veletsianos G, Houlden S, Hodson J, Gosse C. Women scholars’ experiences with online harassment and abuse: Self-protection, resistance, acceptance, and self-blame. New Media & Society. 2018;20(12): 4689–4708.
100. Poland B. Haters: Harassment, abuse, and violence online. Lincoln: University of Nebraska Press; 2016.
101. Turner C. Women fighting climate change are targets for misogynists. Chatelaine. 2020; March 5, 2020. Available from https://www.chatelaine.com/news/women-climate-change-attacks/
102. Dubois E, Owen T. Understanding the digital ecosystem: Findings from the 2019 federal election. McGill University and University of Ottawa; 2020. Available from https://www.digitalecosystem.ca/report
103. Ross A. Death threats aimed at Dr. Bonnie Henry mirror contempt faced by female leaders, experts say. CBC News. 2020; September 23, 2020. Available from https://www.cbc.ca/news/canada/british-columbia/dr-bonnie-henry-women-leaders-death-threats-1.5736198
104. Eckert S. Fighting for recognition: Online abuse of women bloggers in Germany, Switzerland, the United Kingdom, and the United States. New Media & Society. 2018;20(4): 1282–1302.
105. Posetti J, Harrison J, Waisbord S. Online attacks on women journalists leading to ‘real world’ violence, new research shows. International Center for Journalists; 2020.
106. Jane EA. Misogyny Online: A short (and brutish) history. Thousand Oaks, CA: Sage; 2016.
107. Taboada M. The coronavirus pandemic increased the visibility of women in the media, but it’s not all good news. The Conversation. 2020; November 25, 2020. Available from https://theconversation.com/the-coronavirus-pandemic-increased-the-visibility-of-women-in-the-media-but-its-not-all-good-news-146389
108. Taboada M, Chambers L. Who is quoted and who is elected? Media coverage of political candidates. Canadian Science Policy Centre. 2020; November 3, 2020. Available from https://sciencepolicy.ca/posts/who-is-quoted-and-who-is-elected-media-coverage-of-political-candidates/
109. De Marneffe MC, Manning CD. Stanford typed dependencies manual. Stanford University; 2008.
110. Bergler S. Conveying attitude with reported speech. In: Shanahan JG, Qu Y, Wiebe J, editors. Computing attitude and affect in text: Theory and applications. Berlin: Springer; 2006. pp. 11–22.
111. Levin B. English Verb Classes and Alternations: A preliminary investigation. Chicago: University of Chicago Press; 1993.
112. Martin JR, White PRR. The Language of Evaluation. New York: Palgrave; 2005.
113. D’Arcy A. Discourse-Pragmatic Variation in Context: Eight hundred years of LIKE. Amsterdam: John Benjamins; 2017.
## Decision letter

9 Oct 2020

PONE-D-20-22487

The Gender Gap Tracker: Using Natural Language Processing to measure gender bias in media

PLOS ONE
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Two expert reviewers have weighed in on the manuscript, and I have read it carefully myself. I found the manuscript to be very clearly written and easy to read, and I believe we all would concur that this represents a substantial piece of research that addresses an important societal problem. That having been said, the reviewers and I also have a number of comments, criticisms, and questions, some of which are summarized below. I am following the recommendation of both reviewers in adopting a Major Revision decision.
Reviewer 1 remarks that the paper is unnecessarily lengthy and at times loses its focus. I agree. In general, I would discourage you from overly elaborating on purely implementational details (e.g., what python libraries were used; Section 3), approaches you considered but didn’t pursue (e.g., the section on pronoun-based gender prediction), narrated walkthroughs of either your research process (e.g., Section 4.3) or particular algorithms (e.g., Sections 3, 4.2), and data structures (e.g., Figure 4) unless there is a compelling reason why readers need to be apprised of such details (in which case one could still strive for brevity). The reviewer cites several other places that in their view were overly detailed. I’ll also add that whereas I think the discussion of issues regarding binary notions of gender is important, I also think it suffices to have this discussion occur just once, presumably early in the paper (discussions currently occur in Sections 1, 2.3, and 4.2.)
Reviewer 2 points out that the paper would benefit from a more rigorous analysis to evaluate the question under scrutiny while controlling for other factors. I concur here as well. This reviewer provides the popularity of an individual as an example. Adding to this, it seems apparent that there are a number of reasons a quote might appear in a news outlet, some of which seem orthogonal to the question of gender bias in news sources. For instance, the submission notes that Donald Trump is the most quoted person by far, but it seems doubtful that he’s being used as a source in many of these cases; it is news itself when the US President speaks. If Hillary Clinton was the current US President instead, the numbers would presumably be quite different, but that fact seems incidental to the question. Further, as the submission acknowledges, journalists often don’t have a choice when citing a source -- for example, if one seeks a quote from the Police Chief of a certain area (say, where riots are occurring), one has no control over that person’s gender. Section 7 includes interesting discussions of such issues that in my view add considerable value to the paper, but given that “the main goal of the database and the dashboard is to motivate news outlets to diversify their sources” (first line of the conclusion), it remains unclear to me how the various confounds can be sorted through so as to yield a more fair and actionable measurement of media bias. I’m not sure of the precise remedy here, but would nonetheless encourage any additional analyses the authors might offer.
Finally, a small comment, regarding the evaluation of quote extraction on page 20 – to be a valid evaluation, I would think that the corpus would have to be distinct from the larger corpus used in the development process outlined at the top of Section 6, but it’s not obvious to me that it was. So this could use clarification. (An even smaller point – I didn’t understand the “at least” part in the first line of page 19, if the corpus totaled 98 documents).
Although we cannot accept your submission in its current form, the reviewers and I are agreed that there is considerable value in your submission, and hence I would encourage you to submit a revised version of your manuscript if you're inclined to do so after taking on a thorough consideration of the aforementioned feedback and other comments made by the reviewers. Upon resubmission, I would ask the same two reviewers to evaluate a resubmission, only recruiting new ones if one or both were to decline.
Please submit your revised manuscript by Nov 23 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
• A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
• A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
• An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols
We look forward to receiving your revised manuscript.
Kind regards,
Andrew Kehler, Ph.D
PLOS ONE
Journal Requirements:
1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at
https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf
2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.
We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:
1. You may seek permission from the original copyright holder of Figure(s) [2 and 6] to publish the content specifically under the CC BY 4.0 license.
We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:
“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”
Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.
2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.
4. We note that Figure [2] includes an image of a participant in the study.
As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”.
If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.
[Note: HTML markup is below. Please do not edit.]
Reviewer's Responses to Questions
1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Partly
**********
2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes
**********
3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes
**********
4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
**********
5. Review Comments to the Author
Reviewer #1: This paper investigates gender bias in media by analyzing the number of men and women quoted in Canadian news texts. Additionally, the authors have developed and made publicly available a tool Gender Gap Tracker that enables tracking of daily publications from a number of Canadian news websites. Prior work was manual to high extent and has not addressed both direct and indirect quotes on a continuous basis.
- Well written and researched introduction
- Because of some lengthy sections the paper loses its focus. A lot of NLP and CS concepts are explained in too much detail. Evaluation takes as much space as the actual results and their analysis. Suggestion: make use of an Appendix.
- Some results are presented already in the methodological chapters, whereas some concepts that should be mentioned in methodology are described in the middle of evaluation.
- I would wish for a stronger conclusion chapter to point out the most important takeaways.
- The paper in general is unnecessarily lengthy. Some concepts described in the section related work are repeated again in the methods section. For instance, in chapter 4 a quite long description of NeuralCoref, in chapter 4.2 a very long description of the gender prediction process, in section 4.3 the explanation why regular expressions are inadequate, or description of Figure 8.
- (Data Scraping, Lines 299-300): Have you tried parallelization? Even modest parallelization might considerably speed up the process without overloading the website.
- (Identifying people and predicting their gender/Quotation extraction): Swapping these two chapters feels more natural. Quotation extraction is the first step before people can be identified.
- (Identifying people and predicting their gender): In section 4.2 about Name-based gender prediction it is mentioned that web services are used and the errors are corrected when encountered. When is it tested? Link to the results from chapter 6?
- (Identifying people and predicting their gender, Lines 439-441): Link to the results from chapter 6?
- (Identifying people and predicting their gender): Section 4.3 mentions a list of author names containing major organisation names; was there any attempt to create it automatically? Could the manually created list be added to the Appendix?
- (Quotation extraction, 5.1): What are the attempts to include “according to” quotes? What was the reason/challenge that they are not included now?
- (Quotation extraction, 5.2): Manually investigated 10 articles - with how many quotes? More details? Shouldn’t it be covered in the evaluation section and not methods?
- (Evaluation, 6.2) Annotations only on 8 articles and 3 annotators with 1 final? Is it enough?
- (Evaluation, 6.3) The description of the JSON format and the annotators' challenges should be shortened.
- (Evaluation, 6.4) Concepts like scare quotes and what counts as a quote should be explained earlier in Section 5
- (Evaluation, 6.5, Lines 810-812) How were the threshold values chosen? Any previous testing in this regard or arbitrary?
- (Analysis and observation, 7.2, Lines 945-947): The professions could also be extracted from Wikipedia to save manual labor.
Reviewer #2: The paper attempts to characterize gender bias in media reporting by a large-scale quantitative analysis of the quoting patterns of seven Canadian outlets. To achieve this, the authors present a system that applies various natural language processing (NLP) techniques to drive the above analysis, which suggests that there is still a significant gender gap biased against women in media reporting. The research questions investigated are of immense importance to society and public policy and will play an important role in efforts undertaken to improve diversity in organizations and society. Yet another strength of the paper is the attempt to characterize the accuracy of the various NLP methods before applying them to answer the research question. Broadly, the methods used appear reasonable and the conclusions are in line with observations made in prior work. While the NLP system built to computationally extract named entities and quoting patterns looks pretty solid, several concerns arise in the subsequent analyses. In particular, the paper could significantly benefit from adopting a more rigorous analysis approach to ascertain the bias while controlling for various factors (using linear fixed effects models). For example, what is the effect of the popularity of an individual (number of mentions of the name) on the gender gap? A very related work by Shor et al. (2015) conducted a very similar analysis and observed that, when covering famous individuals, the gender gap in printed media coverage is much larger than for coverage of not-so-popular individuals (see Figure 2 of Shor et al. (2015)). More broadly, the paper also does not justify its focus on only looking at reported speech (when male or female entities are quoted) as opposed to looking at all mentions of people (as was done by Shor et al. (2015)). It is possible that people are covered but not quoted (either syntactically or directly). For example, "Serena Williams won the championship" mentions (gives media coverage to) Serena Williams but does not quote her. In general, the paper would benefit from placing its results and analysis in context and relating them to the findings presented in Shor et al. (2015). Similarly, when presenting the analysis of the role of gender by outlet, it would be useful to control for other aspects (newspaper section, etc.).
Line 288: Why is 1-b not considered a valid article/URL? It seems that the cut-off of 5 directories is arbitrary.
Also, does the index page of the media outlet contain all the articles that were published that day or just the top X? If just the top X, then there might be a coverage issue in the data (since the articles crawled would be biased towards those stories/articles that are perceived by the editor to boost viewership).
References:
Shor, Eran, et al. "A Paper ceiling: Explaining the persistent underrepresentation of women in printed news." American Sociological Review 80.5 (2015): 960-984.
**********
6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous but your review may still be made public.
Reviewer #1: No
Reviewer #2: No
[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
30 Oct 2020
3 Dec 2020
PONE-D-20-22487R1
The Gender Gap Tracker: Using Natural Language Processing to measure gender bias in media
PLOS ONE
Thank you for submitting your revised manuscript to PLOS ONE. Upon receipt of the submission, I requested reviews from the original two reviewers; Reviewer 2 accepted but Reviewer 1 declined. Based on my own reading of the revision, I felt comfortable basing my decision on the judgment of Reviewer 2 and my own, and hence opted not to bring a new reviewer into the process.
Reviewer 2 and I agree that the original comments from both reviewers have been acted on in good faith, and that the paper is publishable. Because submissions that receive Accept decisions at the journal proceed straight to production, I'm taking the action of issuing a Minor Revision decision, to give you the opportunity to address the minor comments that Reviewer 2 makes in their new review, as well as any other minor modifications that you deem appropriate before the article sees print. Barring any new substantial changes, I intend to accept the revised manuscript upon a spot check. Hence it will not go out for further external review.
Please submit your revised manuscript by Jan 17 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
• A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
• A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
• An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols
We look forward to receiving your revised manuscript.
Kind regards,
Andrew Kehler, Ph.D
PLOS ONE
[Note: HTML markup is below. Please do not edit.]
Reviewer's Responses to Questions
1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #2: (No Response)
**********
2. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #2: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #2: Yes
**********
4. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #2: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #2: Yes
**********
6. Review Comments to the Author
Reviewer #2: The revised version of the paper addresses most of the comments I had on the previous version satisfactorily. Since the authors defer the more rigorous/deeper analysis to future work (including topical analysis, etc.) and the primary contribution here is the Gender Gap Tracker software, I would encourage the authors to emphasize that they are primarily interested in demonstrating the rich analyses that the Gender Gap Tracker will enable in the future (and some of the analyses may be strengthened further as noted in my previous comments). Finally, one point that the authors mention in the response letter is that, to prevent overfitting due to iterations over their rules, they ensured they always evaluated on an extension of the test set. This detail is not mentioned in the revised paper's main content -- an important point which should be added.
**********
7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous but your review may still be made public.
Reviewer #2: No
[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
23 Dec 2020
Please see response to reviewers letter.
4 Jan 2021
The Gender Gap Tracker: Using Natural Language Processing to measure gender bias in media
PONE-D-20-22487R2
We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.
Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.
An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.
Kind regards,
Andrew Kehler, Ph.D
PLOS ONE
8 Jan 2021
PONE-D-20-22487R2
The Gender Gap Tracker: Using Natural Language Processing to measure gender bias in media
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If we can help with anything else, please email us at plosone@plos.org.
Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Andrew Kehler
Solution for problem P11A.10 Chapter 11A
Atkins' Physical Chemistry | 11th Edition
Problem P11A.10
The Gaussian shape of a Doppler-broadened spectral line reflects the Maxwell distribution of speeds (see Topic 1B) in the sample at the temperature of the experiment. In a spectrometer that makes use of phase-sensitive detection the output signal is proportional to the first derivative of the signal intensity, dI/dν. Plot the resulting lineshape for various temperatures. How is the separation of the peaks related to the temperature?
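As a rough illustration of what the requested plot looks like (this sketch is not part of the textbook solution; the molecular mass, line-centre frequency, and temperatures are arbitrary illustrative values), note that the phase-sensitive output dI/dν of a Gaussian line has two extrema at ν − ν₀ = ±σ, so their separation is 2σ, which grows as the square root of the temperature:

```python
import numpy as np
import matplotlib.pyplot as plt

k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s
m = 2.33e-26         # molecular mass in kg (illustrative)
nu0 = 5.0e14         # line-centre frequency in Hz (illustrative)

nu = np.linspace(nu0 - 5e9, nu0 + 5e9, 2000)

for T in (100, 300, 1000):
    sigma = nu0 * np.sqrt(k_B * T / (m * c**2))   # Doppler (Gaussian) width
    I = np.exp(-((nu - nu0) ** 2) / (2 * sigma**2))
    dI = np.gradient(I, nu)                        # phase-sensitive detection output
    plt.plot(nu - nu0, dI / np.abs(dI).max(), label=f"T = {T} K")
    # the extrema of dI/dnu lie at nu - nu0 = +/- sigma, so the peak
    # separation is 2*sigma, proportional to sqrt(T)
    print(f"T = {T:4d} K: peak separation 2*sigma = {2 * sigma:.3e} Hz")

plt.xlabel("nu - nu0 (Hz)")
plt.ylabel("dI/dnu (normalised)")
plt.legend()
plt.show()
```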
Step-by-Step Solution:
Step 1 of 3
Chapter 3: Atomic Structure Keyterms
● Electrolysis: chemical reactions caused by electricity
● Electrolyte: Compound that conducts electricity when melted or dissolved in water
● Electrodes: Carbon rods or metal strips inserted into a molten compound or a solution to carry the electric current
● Anode: The electrode that bears a positive charge
● Cathode: Negatively charged electrode
● Ion: an atom or a group of atoms bonded together that has an electric charge
● Anion: an ion with a negative charge
● Cation: A positively charged ion
● Cathode Ray: a beam of current that produces a green fluorescence
● Electrons: negatively charged units in atoms
● Electromagnetic radiation: energy with electric and magnetic components
● Radioactivity: spontaneous emission of radiation from an atomic mass
● Alpha particle: mass four times that of a hydrogen atom and a charge twice the magnitude of, but opposite in sign to, that of an electron
● Beta particle: an electron, although it has much more energy than an electron in an atom
● Gamma rays: A form of electromagnetic radiation, much like the x-rays used in medical work but even more energetic and more penetrating
● Nucleus: all the positive charge and nearly all the mass of an atom are concentrated at the center of the atom in a tiny core
● Proton: has a charge equal in magnitude to that of th
ISBN: 9780198769866
# Distributed Rate Allocation for Wireless Networks
Jubin Jose and Sriram Vishwanath J. Jose and S. Vishwanath are with the Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712 USA (email(s): jubin@austin.utexas.edu; sriram@austin.utexas.edu).
###### Abstract
This paper develops a distributed algorithm for rate allocation in wireless networks that achieves the same throughput region as optimal centralized algorithms. This cross-layer algorithm jointly performs medium access control (MAC) and physical-layer rate adaptation. The paper establishes that this algorithm is throughput-optimal for general rate regions. In contrast to on-off scheduling, rate allocation enables optimal utilization of physical-layer schemes by scheduling multiple rate levels. The algorithm is based on local queue-length information and is thus of significant practical value.
The algorithm requires that each link can determine the global feasibility of increasing its current data-rate. In many classes of networks, any one link’s data-rate primarily impacts its neighbors and this impact decays with distance. Hence, local exchanges can provide the information needed to determine feasibility. Along these lines, the paper discusses the potential use of existing physical-layer control messages to determine feasibility. This can be considered as a technique analogous to carrier sensing in CSMA (Carrier Sense Multiple Access) networks. An important application of this algorithm is in multiple-band multiple-radio throughput-optimal distributed scheduling for white-space networks.
Wireless networks, Throughput-optimal rate allocation, Distributed algorithms
## I Introduction
The throughput of wireless networks is traditionally studied separately at the physical and medium access layers, and thus independently optimized at each of these two layers. As a result, conventionally, data-rate adaptation is performed at the physical layer for each link, and link scheduling is performed at the medium access layer. There are significant throughput gains in studying these two in a cross-layer framework [27, 8, 11, 19, 4]. This cross-layer optimization results in a joint rate allocation for all the links in the network.
Maximum Weighted (Max-Weight) scheduling, introduced in the seminal paper [27], performs joint rate allocation and guarantees throughput-optimality. (For cooperative networks, throughput-optimal rate allocation does not follow from classical Max-Weight scheduling; in [17], modified algorithms are developed for certain cooperative networks that guarantee throughput-optimality.) However, the Max-Weight algorithm and its variants have the following disadvantages. (a) It requires periodically solving a possibly hard optimization problem. (b) The optimization problem is centralized, and thus introduces significant overhead due to queue-length information exchanges. Thus, in order to overcome these disadvantages, we need efficient distributed algorithms for general physical-layer interference models [19].
The goal of this paper is to perform joint rate allocation in a decentralized manner. A related problem is distributed resource allocation in networks, and this problem has received considerable attention in diverse communities over the years. In data and/or stochastic processing networks, resource-sharing is typically described in terms of independent set constraints. With such independent set constraints, the resource allocation problem translates to medium access control (or link scheduling) in wireless networks. For such on-off scheduling, efficient algorithms have recently been proposed for both random access networks [12, 26] and CSMA networks [21, 2]. More recently, with instantaneous carrier sensing, a throughput-optimal algorithm with local exchange of control messages that approximates Max-Weight has been proposed in [25], and a fully decentralized algorithm has been proposed in [15]. The decentralized queue-length based scheduling algorithm in [15] and its variants have been shown to be throughput-optimal in [14, 20, 13]. This body of literature on completely distributed on-off scheduling has been extended to a framework that incorporates collisions in [16, 24]. Further, this decentralized framework has been validated through experiments in [18].
However, independent set constraints can only model orthogonal channel access which, in general, is known to be sub-optimal [5]. For wireless networks, the interaction among nodes requires a much more fine-grained characterization than independent set constraints. This can be fully captured in terms of the network’s rate region, i.e., the set of link-rates that are simultaneously sustainable in the network. As long as the data-rates of links are within the rate region, simultaneous transmission is possible even by neighboring links in the network. Therefore, it is crucial to perform efficient distributed joint rate allocation (and not just distributed link scheduling) in wireless networks. Although distributed rate allocation is a very difficult problem in general, in this work, we show that this problem can be solved by taking advantage of physical-layer information.
In this work, we consider single-hop wireless networks. (For networks that do not employ cooperative schemes, the results in this paper are likely to generalize to multi-hop by combining “back-pressure” with the algorithmic framework of this paper.) We develop a simple, completely distributed algorithm for rate allocation in wireless networks that is throughput-optimal. In particular, given any rate region for a wireless network, we develop a decentralized (local queue-length based) algorithm that stabilizes all the queues for all arrival rates within the throughput region. Thus, we can utilize the entire physical-layer throughput region of the system with distributed rate allocation. To the best of our knowledge, this is the first paper to obtain such a result. This is a very exciting result as our decentralized algorithm achieves the same throughput region as optimal centralized cross-layer algorithms. The algorithm requires that each link can determine the global feasibility of increasing its data-rate from the current data-rate. In Section VIII-A, we provide details on techniques to determine rate feasibility, and explain reasons for using this approach in practice.
The framework developed in this paper generalizes the distributed link scheduling framework. As discussed before, the current distributed link scheduling algorithms primarily deal with binary (on-off) decisions, whereas our algorithm performs scheduling over multiple data-rates. Similar to these existing distributed link scheduling algorithms, our algorithm is mathematically modeled by a Markov process on the discrete set of data-rates. However, with multiple data-rates for each link, the appropriate choice of the large number of transition rates is very complicated. Thus, a key challenge is to design a Markov chain with fewer parameters that can be analyzed and appropriately chosen for throughput-optimality. We overcome this challenge by showing that transition rates with the following structure have this property. For link $i$, the transition rate to a data-rate $r_{i,j}$ from any other data-rate is $\exp(r_{i,j} v_i)$, where $v_i$ is a single parameter associated with link $i$ that is updated based on its queue-length. The transition takes place only if the new data-rate is feasible. As expected, this reduces to the existing algorithmic framework in the special case of binary (on-off) decisions.
For the general framework mentioned above, at an intuitive level, the techniques required for proving throughput-optimality remain similar to existing techniques. However, there are a few additional technical issues that arise while analyzing the general framework. First, we need to account for more general constraints that arise from the set of possible rate allocation vectors. Next, the choice of update rules for the parameters over time, based on local queue-lengths, that guarantee throughput-optimality does not follow directly. The mixing time of the rate allocation Markov chain plays an important role in choosing the update rules. For arbitrary throughput regions, any rate allocation algorithm that approaches $\epsilon$-close (for arbitrarily small $\epsilon$) to the boundary possibly requires an increasing number of data-rates per link. This leads to a potential increase in the mixing time due to the increase in the size of the state-space. Thus, the analysis performed in this paper is more general and essential to establish the throughput-optimality of the algorithms considered.
An important application of this algorithmic framework is for networks of white-space radios [7], where multiple non-adjacent frequency bands are available for operation and multiple radios are available at the wireless nodes. A scheduler needs to allocate different radios to different bands in a distributed manner. This problem introduces multiple data-rates for every link even in the CSMA framework, and hence, existing distributed algorithms cannot be directly applied. We demonstrate that our framework provides a throughput-optimal distributed algorithm in this setting.
Our main contributions are the following:
• We design a class of distributed cross-layer rate allocation algorithms for wireless networks that utilize local queue-length information and physical-layer measurements.
• We show that there are algorithms in this class that are (a) throughput-optimal, and (b) completely decentralized.
• We demonstrate that an adaptation of these algorithms are throughput-optimal for multiple-band multiple-radio distributed scheduling.
### I-A Notation
Vectors are considered to be column vectors and denoted by bold letters. For a vector $\mathbf{x}$ and a matrix $A$, $\mathbf{x}^T$ and $A^T$ denote the corresponding transposes. For vectors, the relations $\le$, $<$, $\ge$, and $>$ are defined component-wise. $\mathbf{0}$ denotes the all-zeros vector and $\mathbf{1}$ denotes the all-ones vector. Other basic notation used in the paper is given in Table I. Notation specific to proofs is introduced later as needed.
### I-B Organization
The next section describes the system model. Section III explains the distributed rate allocation algorithm. Section IV introduces relevant definitions and known results. Section V describes the rate allocation Markov chain and the optimization framework. Section VI establishes the throughput-optimality of the algorithm. The algorithm for multiple-band multiple-radio scheduling is given in Section VII. Further discussions and simulation results are given in Section VIII. We conclude with our remarks in Section IX. For readability, the proofs of the technical lemmas in Section V and Section VI are moved to the Appendix.
## II System Model
Consider a wireless network consisting of a set of nodes. In this network, we are interested in single-hop flows that correspond to $n$ wireless links, labeled $1, \ldots, n$. Since we have a shared wireless medium, these links interact (or interfere) in a potentially complex way. For single-hop flows, this interaction among links can be captured through an $n$-dimensional rate region for the network, which is formally defined next.
###### Definition 1 (Rate Region)
The rate region of a network is defined as the set of instantaneous rate vectors at which queues (introduced later) of all links can be drained simultaneously.
In this paper, we assume that the rate region is fixed (i.e., not time-varying); we consider fixed or slow-fading channels. By definition, this rate region is compact. We assume that the rate region has the following simple property: if a rate vector is in the rate region, then any component-wise smaller non-negative rate vector is also in the rate region. The above property states that rates can be decreased component-wise. Such an assumption is fairly mild, and is satisfied by rate regions resulting from most physical-layer schemes. Next, we define the throughput region of the network.
###### Definition 2 (Throughput Region)
The throughput region of a network is defined as the convex hull of the rate region of the network.
We use a continuous-time model to describe the system dynamics, with time denoted by $t$. Every (transmitter of) link $i$ is associated with a queue $Q_i(t)$, which quantifies the information (packets) remaining at time $t$ waiting to be transmitted on link $i$. Let the cumulative arrival of information at the $i$-th link up to time $t$ be $A_i(t)$, with $A_i(0) = 0$. The rate allocation at time $t$ is defined as the rate vector in the rate region at which the system is being operated at time $t$; let the rate allocation corresponding to the $i$-th link at time $t$ be $r_i(t)$. Then, for every link $i$, the queue dynamics is given by

$$Q_i(t) = Q_i(s) - \int_s^t r_i(z)\, \mathbb{I}(Q_i(z) > 0)\, dz + A_i(t) - A_i(s), \qquad (1)$$

where $s \le t$ and $\mathbb{I}(\cdot)$ denotes the indicator function. The vector of queues in the system is denoted by $\mathbf{Q}(t)$. The queues are initially empty.
We consider arrival processes at the queues in the network with the following properties.
• We assume every arrival process is such that its increments over integral times are independent and identically distributed.
• We assume that all these increments belong to a bounded support, i.e., the increments are bounded by a constant $K$ for all links.
Based on these properties, the (mean) arrival rate corresponding to the $i$-th link is $\lambda_i$. We denote the vector of arrival rates by $\boldsymbol{\lambda}$. Without loss of generality, we assume $\lambda_i > 0$ for all $i$ (if $\lambda_i = 0$, the link can be removed from the system). It follows from the strong law of large numbers that, with probability 1,

$$\lim_{t \to \infty} \frac{A_i(t)}{t} = \lambda_i. \qquad (2)$$
In summary, our system model incorporates general interference constraints through an arbitrary rate region and focuses on single-hop flows. We proceed to describe the rate allocation algorithm and the main results of this paper.
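As a quick illustration of the queue dynamics in (1), the following discrete-time Python sketch (all rates and the time step are illustrative assumptions, not values from the paper) updates the queues with i.i.d. bounded arrivals and a fixed rate allocation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_links = 3
K = 2.0                                  # bound on per-slot arrival increments
lam = np.array([0.4, 0.7, 1.0])          # mean arrival rates (illustrative)
r = np.array([0.5, 0.8, 1.2])            # fixed rate allocation (illustrative)

Q = np.zeros(n_links)
dt = 1.0
for _ in range(10_000):
    arrivals = np.minimum(rng.uniform(0.0, 2.0 * lam), K)   # i.i.d., mean lam, bounded by K
    # discrete-time analogue of Eq. (1): a link is served at rate r_i only while its queue is non-empty
    Q = np.maximum(Q - r * dt, 0.0) + arrivals

print("final queue lengths:", Q)          # remain bounded because lam < r component-wise
```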
## III Rate Allocation Algorithm & Main Results
The goal of this paper is to design a completely decentralized algorithm for rate allocation that stabilizes all the queues as long as the arrival rate vector is within the throughput region. By assumption, every link can determine rate feasibility, i.e., every link can determine whether increasing its data-rate from the current rate allocation results in a net feasible rate vector. More formally, every link $i$ at time $t$, if required, can determine whether the rate vector obtained by increasing its own rate is feasible. More details on determining rate feasibility are given in Section VIII.
The rate allocation vector at time $t$ is denoted by $\mathbf{r}(t)$. For decentralized rate allocation, we develop an algorithm that uses only local queue information for choosing $\mathbf{r}(t)$ over time. Further, we perform rate allocation over a chosen limited (finite) set of rate vectors that are feasible. We choose a finite set of rate levels corresponding to every link, and form vectors that are feasible. The details are as follows:
1. For each link $i$, a set of rate levels $\{r_{i,0}, r_{i,1}, \ldots, r_{i,k_i}\}$ is chosen from the achievable rates of that link, with $r_{i,0} = 0$. Here, $r_{i,k_i}$ is the maximum possible transmission rate for the $i$-th link, and $k_i$ is the number of levels other than zero. Since the rate region is compact, without loss of generality, we assume this maximum rate is positive (otherwise the link can be removed from the system).
2. The set of rate allocation vectors, denoted by $\mathcal{R}$, consists of the feasible rate vectors that can be formed from these per-link rate levels.
The convex hull of the set of rate allocation vectors is the set of average rates the algorithm can support; we are interested in covering the set of strictly feasible rates. For rate regions that are polytopes, the partitions can be chosen such that this convex hull coincides with the throughput region. For any compact rate region, it is fairly straightforward to choose partitions fine enough that every rate vector at distance at least $\epsilon$ from the boundary lies in this convex hull. The trivial partition with $\epsilon/2$ as the step size in all dimensions satisfies the above property. Thus, for any given $\epsilon > 0$, we can obtain a set of rate allocation vectors such that
$$|\mathcal{R}| \le \lceil 2\bar{K}/\epsilon \rceil^{\,n} \qquad (3)$$
and any strictly feasible rate vector that is at least $\epsilon$ away from the boundary of the throughput region lies in the convex hull of $\mathcal{R}$; here $\bar{K}$ denotes the maximum rate level over all links.
Before describing the algorithm, we define two notions of throughput performance of a rate allocation algorithm.
###### Definition 3 (Rate stable)
We say that a rate allocation algorithm is rate-stable if, for any strictly feasible arrival rate vector, the departure rate corresponding to every queue is equal to its arrival rate, i.e., for all $i$, with probability 1,

$$\lim_{t \to \infty} \frac{1}{t} \int_0^t r_i(z)\, \mathbb{I}(Q_i(z) > 0)\, dz = \lambda_i.$$

From (1) and (2), this is the same as, for all $i$, with probability 1,

$$\lim_{t \to \infty} \frac{Q_i(t)}{t} = 0.$$
###### Definition 4 (Throughput optimal)
We say that a rate allocation algorithm is throughput-optimal if, for any given $\epsilon > 0$, the algorithm makes the underlying network Markov chain positive Harris recurrent (defined in Section IV) for all arrival rate vectors that are at least $\epsilon$ inside the throughput region. By definition, the algorithm can depend on the value of $\epsilon$.
Next, we describe a class of algorithms to determine the rate allocation as a function of time, based on a continuous-time Markov chain. Recall that $\{r_{i,0}, r_{i,1}, \ldots, r_{i,k_i}\}$ is the set of possible rates/states for allocation associated with the $i$-th link. In these algorithms, the $i$-th link uses independent exponential clocks, one per rate level; their rates/parameters should not be confused with the rates for allocation (equivalently, one may think of exponential clocks with the corresponding mean times). The clock associated with the state $r_{i,j}$ has a (time-varying) parameter determined by $u_{i,j}$. Based on these clocks, the $i$-th link obtains its rate allocation as follows:
1. If the clock associated with a state (say $r_{i,j}$) ticks, and further if transitioning to that state is feasible, then $r_i(t)$ is changed to $r_{i,j}$;
2. Otherwise, $r_i(t)$ remains the same.
The above procedure continues, i.e., all the clocks run continuously. Define the collection of clock parameters $\{u_{i,j}\}$. It turns out that the appropriate structure to introduce is as follows:
$$u_{i,j} = r_{i,j}\, v_i, \qquad \forall\, i \in \mathcal{L},\ j \in \{0, 1, \ldots, k_i\},$$
where $v_i \in \mathbb{R}$. We denote the vector consisting of this new set of parameters by $\mathbf{v}$.
###### Example 1
Consider a Gaussian multiple access channel with two links, as shown in Figure 1, with an average power constraint at the transmitters and a given noise variance at the receiver. The capacity region of this channel is the pentagon shown in Figure 2. In this case, orthogonal access schemes limit the throughput region to the triangle (strictly within the pentagon) shown using a dashed line. In this example, if we allow for capacity-achieving physical-layer schemes, the rate region (and hence the throughput region) is identical to the pentagon shown in Figure 2. The natural choice for the set of rate levels at link 1 consists of zero, the rate at the corner point of the pentagon, and the single-user capacity; the levels for link 2 are chosen similarly. This leads to the set of feasible rate allocation vectors, and it is clear that the convex hull of this set is the throughput region itself. For this example, the state-space of the Markov chain and the transitions to and from a state are shown in Figure 3.
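To make the clock-driven dynamics concrete, here is a small Python simulation of the rate allocation Markov chain for this two-link example. The power and noise values, the parameter vector v, and the simulation horizon are illustrative assumptions, and the feasibility check simply encodes the MAC pentagon; this is a sanity-check sketch rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-link Gaussian MAC (Example 1); P and N are illustrative values.
P, N = 1.0, 1.0
C = np.log2(1 + P / N)              # single-user capacity
c = np.log2(1 + P / (P + N))        # rate at a corner point of the pentagon
C_sum = np.log2(1 + 2 * P / N)      # sum-rate constraint
levels = [np.array([0.0, c, C]), np.array([0.0, c, C])]   # rate levels per link

def feasible(r):
    """Pentagon capacity region of the two-user Gaussian MAC."""
    return r[0] <= C + 1e-9 and r[1] <= C + 1e-9 and r[0] + r[1] <= C_sum + 1e-9

v = np.array([1.5, 1.5])            # per-link parameters (updated from queue lengths in the adaptive version)

# one exponential clock per (link, level) pair; clock (i, j) ticks at rate exp(levels[i][j] * v[i]),
# matching the structure u_{i,j} = r_{i,j} v_i introduced above
clocks = [(i, j) for i in range(2) for j in range(len(levels[i]))]
tick_rates = np.array([np.exp(levels[i][j] * v[i]) for i, j in clocks])
total = tick_rates.sum()

state = np.array([0.0, 0.0])        # current rate allocation r(t)
t, horizon = 0.0, 20_000.0
time_avg = np.zeros(2)
while t < horizon:
    dt = rng.exponential(1.0 / total)          # time until the next tick of any clock
    time_avg += state * min(dt, horizon - t)
    t += dt
    i, j = clocks[rng.choice(len(clocks), p=tick_rates / total)]
    proposal = state.copy()
    proposal[i] = levels[i][j]
    if feasible(proposal):                     # switch only if the new rate vector is feasible
        state = proposal

print("time-averaged offered rates:", time_avg / horizon)
```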
A distributed algorithm needs to choose the parameters $\mathbf{v}$ in a decentralized manner. To provide the intuition behind the algorithm, we perform this in two steps. In the first step, we develop the non-adaptive version of the algorithm, which has knowledge of the arrival rate vector $\boldsymbol{\lambda}$; this algorithm is called non-adaptive as it requires this explicit knowledge. The rate allocation at time $t$ is set to be the state of the Markov chain described above at time $t$. This algorithm uses $\mathbf{v}^*$ at all times, which is a function of $\boldsymbol{\lambda}$ and is given by
$$\mathbf{v}^* = \arg\max_{\mathbf{v} \in \mathbb{R}^n}\ \boldsymbol{\lambda} \cdot \mathbf{v} - \log\Big( \sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v}) \Big).$$
We show in Section V that, given a strictly feasible $\boldsymbol{\lambda}$, the above optimization problem has a unique solution that is finite, and therefore $\mathbf{v}^*$ is well defined. An important result regarding this non-adaptive algorithm is the following theorem.
###### Theorem 1
The above non-adaptive algorithm is rate-stable for any given strictly feasible arrival rate vector $\boldsymbol{\lambda}$.
###### Proof:
For any strictly feasible $\boldsymbol{\lambda}$, there is at least one distribution on $\mathcal{R}$ that has expectation $\boldsymbol{\lambda}$. For the Markov chain specified by any $\mathbf{v}$, there is a stationary distribution on the state-space $\mathcal{R}$. The value $\mathbf{v}^*$ is chosen such that it minimizes the Kullback-Leibler divergence of the induced stationary distribution from the distribution corresponding to $\boldsymbol{\lambda}$. For the Markov chain specified by $\mathbf{v}^*$, the expected value of the stationary distribution turns out to be $\boldsymbol{\lambda}$. This leads to the rate-stable performance of the algorithm. The proof details are given in Section V. \qed
In the second step, we develop the adaptive algorithm, where the parameter vector is obtained as a function of time, denoted by $\mathbf{v}(t)$. This algorithm is called adaptive as it does not require the knowledge of $\boldsymbol{\lambda}$. The values of $\mathbf{v}$ are updated at fixed (not random) time instances $\tau_l$ for $l \in \{0, 1, 2, \ldots\}$, with $\tau_0 = 0$. During interval $[\tau_l, \tau_{l+1})$ the algorithm uses $\mathbf{v}(\tau_l)$; the lengths of the intervals are $T_l = \tau_{l+1} - \tau_l$. During interval $l$, let the empirical arrival rate be
$$\hat{\lambda}_i(l) = \frac{A_i(\tau_{l+1}) - A_i(\tau_l)}{T_l} \qquad (4)$$
and the empirical offered service rate be
$$\hat{s}_i(l) = \frac{1}{T_l} \int_{\tau_l}^{\tau_{l+1}} r_i(z)\, dz. \qquad (5)$$
The update equation corresponding to the algorithm for the $i$-th link is given by

$$v_i(\tau_{l+1}) = \Big[ v_i(\tau_l) + \alpha_l \Big( \hat{\lambda}_i(l) + \frac{\epsilon}{4} - \hat{s}_i(l) \Big) \Big]_D, \qquad (6)$$

where $[\cdot]_D$ denotes the projection of its argument to the closest point in a bounded interval determined by $D$, and $\alpha_l$ are the step sizes. Thus, the algorithm parameters are the interval lengths $T_l$, the step sizes $\alpha_l$, and $D$.
###### Remark 1
Clearly, both the empirical arrival rate and the empirical offered service rate used in the above algorithm can be computed by the $i$-th link without any external information. In fact, their difference is simply the change in its queue-length over the previous interval, appropriately scaled by the inverse of the length of that interval.
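A minimal sketch of one application of the update rule (6) for a single link, using the queue-difference shortcut of Remark 1; the projection set [-D, D] is an assumption (the exact projection interval is not recoverable from the text), and all numbers are illustrative:

```python
import numpy as np

def update_v(v_i, q_start, q_end, T_l, eps, alpha, D):
    """One step of update (6) for link i, using only locally available quantities.

    (q_end - q_start) / T_l equals the empirical arrival rate minus the empirical
    offered service rate over the interval, provided the queue never emptied
    (cf. Remark 1); eps/4 biases the offered rate above the arrival rate.
    """
    drift = (q_end - q_start) / T_l                 # \hat{lambda}_i(l) - \hat{s}_i(l)
    v_new = v_i + alpha * (drift + eps / 4.0)
    return float(np.clip(v_new, -D, D))             # projection [.]_D (assumed to be onto [-D, D])

# illustrative numbers
print(update_v(v_i=0.8, q_start=12.0, q_end=15.5, T_l=100.0, eps=0.1, alpha=0.05, D=5.0))
```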
The following theorem provides an $\epsilon$-optimal performance guarantee for the adaptive algorithm.
###### Theorem 2
Consider any given $\epsilon > 0$. Then, there exists some choice of algorithm parameters $T_l$, $\alpha_l$ and $D$ such that the appropriate network Markov chain under the adaptive algorithm is positive Harris recurrent whenever the arrival rate vector is at least $\epsilon$ inside the throughput region, i.e., the algorithm is throughput-optimal.
###### Proof:
The update in (6) can be intuitively thought of as a gradient descent technique to solve an optimization problem that will lead to a parameter vector whose induced stationary distribution on $\mathcal{R}$ has expected value strictly greater than the arrival rate vector. However, the arrival rate and offered service rate are replaced with their empirical values for decentralized operation. We consider the two time scales involved in the algorithm: within an update interval and across update intervals. The main steps involved in establishing the throughput-optimality are the following. First, we show that sufficiently long update intervals can be chosen such that the empirical values used in the algorithm are arbitrarily close to the true values. Using this, we next show that the average empirical offered service rate over the update intervals is strictly higher than the arrival rate. Finally, we show that this results in a drift that is sufficient to guarantee positive Harris recurrence. The proof details are given in Section VI. \qed
## IV Definitions & Known Results
We provide definitions and known results that are key in establishing the main results of this paper. We begin with definitions on two measures of difference between two probability distributions.
###### Definition 5 (Kullback-Leibler (KL) divergence)
Consider two probability mass functions $\mu$ and $\pi$ on a finite set $\Omega$. Then, the KL divergence of $\mu$ from $\pi$ is defined as $D(\mu \,\|\, \pi) = \sum_{x \in \Omega} \mu(x) \log \frac{\mu(x)}{\pi(x)}$.
###### Definition 6 (Total Variation)
Consider two probability mass functions $\mu$ and $\pi$ on a finite set $\Omega$. Then, the total variation distance between $\mu$ and $\pi$ is defined as $\|\mu - \pi\|_{TV} = \frac{1}{2} \sum_{x \in \Omega} |\mu(x) - \pi(x)|$.
Next, we provide two known results that are used later. Result 1 follows directly from a theorem in [3], and Result 2 is also from [3].
###### Result 1 (Mixing Time)
Consider any finite state-space, aperiodic, irreducible, discrete-time Markov chain with transition probability matrix $P$ and stationary distribution $\alpha$. Let $\alpha_{\min}$ be the minimum value in $\alpha$, and let the second largest eigenvalue modulus (SLEM) be $\sigma_{\max}$. Then, for any $\rho > 0$, starting from any initial distribution (at time 0), the distribution at time $\tau$ associated with the Markov chain is within total variation distance $\rho$ of $\alpha$ if

$$\tau \ge \frac{\frac{1}{2}\log(1/\alpha_{\min}) + \log(1/\rho)}{\log(1/\sigma_{\max})}. \qquad (7)$$
###### Result 2 (Conductance Bounds)
Consider the setting as above. The ergodic flow out of a set of states $S$ is defined as $F(S) = \sum_{x \in S,\, y \notin S} \alpha(x) P(x, y)$, and the conductance is defined as

$$\Phi = \min_{S:\ \alpha(S) \le 1/2} \frac{F(S)}{\alpha(S)}. \qquad (8)$$
Then, the SLEM is bounded by conductance as follows:
$$1 - 2\Phi \le \sigma_{\max} \le 1 - \Phi^2/2. \qquad (9)$$
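As a numeric illustration of Result 2 (and of the standard conductance definition assumed in (8) above), the following sketch computes the SLEM and the conductance of a small reversible birth-death chain and checks the two-sided bound (9); the chain is arbitrary and chosen only for illustration:

```python
import numpy as np
from itertools import combinations

# A small lazy, reversible birth-death chain on 5 states (illustrative only).
n = 5
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = 0.3
    P[i + 1, i] = 0.2
P += np.diag(1.0 - P.sum(axis=1))

# stationary distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# second largest eigenvalue modulus (SLEM)
eigs = np.sort(np.abs(np.real(np.linalg.eigvals(P))))[::-1]
slem = eigs[1]

# conductance: minimise ergodic flow out of S over pi(S), over sets with pi(S) <= 1/2
phi = np.inf
for k in range(1, n):
    for S in combinations(range(n), k):
        S = set(S)
        piS = sum(pi[i] for i in S)
        if piS <= 0.5:
            flow = sum(pi[i] * P[i, j] for i in S for j in range(n) if j not in S)
            phi = min(phi, flow / piS)

print(f"SLEM = {slem:.4f}, conductance = {phi:.4f}")
print(f"bound (9): {1 - 2 * phi:.4f} <= {slem:.4f} <= {1 - phi**2 / 2:.4f}")
```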
Lastly, we provide the definition of positive Harris recurrence. For details on properties associated with positive Harris recurrence, see [22, 6].
###### Definition 7 (Positive Harris recurrence)
Consider a discrete-time, time-homogeneous Markov chain $X(t)$ on a complete, separable metric space $\mathcal{X}$. Let $\mathcal{B}(\mathcal{X})$ denote the Borel $\sigma$-algebra on $\mathcal{X}$. For any set $B \in \mathcal{B}(\mathcal{X})$, define the stopping time $T_B = \inf\{t \ge 0 : X(t) \in B\}$. The set $B$ is called Harris recurrent if the chain reaches $B$ with probability 1 from any initial state. A Markov chain is called Harris recurrent if there exists a $\sigma$-finite measure $\varphi$ on $\mathcal{B}(\mathcal{X})$ such that every set $B$ with $\varphi(B) > 0$ is Harris recurrent. It is known that if the chain is Harris recurrent, an essentially unique invariant measure exists. If the invariant measure is finite, then it may be normalized to a probability measure; in this case, the chain is called positive Harris recurrent.
## V Rate Allocation Markov Chain & Rate Stability
Rate allocation Markov chain: The main challenge is to design a Markov chain with fewer parameters that can be analyzed and appropriately chosen for throughput-optimality. First, we identify a class of Markov chains that are relatively easy to analyze. Consider the class of algorithms introduced in Section III. The core of this class of algorithms is a continuous-time Markov chain with state-space $\mathcal{R}$, which is the (finite) set of rate allocation vectors. Define
$$f(\hat{\mathbf{r}}, \mathbf{r}) := \exp\Big( \sum_{i=1}^{n} \sum_{j=0}^{k_i} u_{i,j}\, \mathbb{I}(r_i = r_{i,j})\, \mathbb{I}(r_i \ne \hat{r}_i) \Big), \qquad (10)$$
where $\hat{\mathbf{r}}, \mathbf{r} \in \mathcal{R}$ and $u_{i,j}$ are the parameters introduced in Section III. Now, the transition rate from state $\hat{\mathbf{r}}$ to state $\mathbf{r}$ can be expressed as
$$q(\hat{\mathbf{r}}, \mathbf{r}) = \begin{cases} f(\hat{\mathbf{r}}, \mathbf{r}), & \text{if } \|\hat{\mathbf{r}} - \mathbf{r}\|_0 = 1, \\ 0, & \text{if } \|\hat{\mathbf{r}} - \mathbf{r}\|_0 > 1. \end{cases}$$
The diagonal elements of the rate matrix are given by $q(\hat{\mathbf{r}}, \hat{\mathbf{r}}) = -\sum_{\mathbf{r} \ne \hat{\mathbf{r}}} q(\hat{\mathbf{r}}, \mathbf{r})$ for all $\hat{\mathbf{r}} \in \mathcal{R}$. This follows directly from the description of the algorithm. This class of algorithms is carefully designed so that it is tractable for analysis. In particular, the following lemma shows that this Markov chain is reversible and that the stationary distribution has an exponential form.
###### Lemma 3
The rate allocation Markov chain is reversible and has the stationary distribution
$$\pi(\mathbf{r}) = \frac{\exp\big( \sum_{i=1}^{n} \sum_{j=0}^{k_i} u_{i,j}\, \mathbb{I}(r_i = r_{i,j}) \big)}{\sum_{\tilde{\mathbf{r}} \in \mathcal{R}} \exp\big( \sum_{i=1}^{n} \sum_{j=0}^{k_i} u_{i,j}\, \mathbb{I}(\tilde{r}_i = r_{i,j}) \big)}. \qquad (11)$$
Furthermore, this Markov chain converges to this stationary distribution starting from any initial distribution.
###### Proof:
The proof follows from the detailed balance equations $\pi(\hat{\mathbf{r}})\, q(\hat{\mathbf{r}}, \mathbf{r}) = \pi(\mathbf{r})\, q(\mathbf{r}, \hat{\mathbf{r}})$ for all $\hat{\mathbf{r}}, \mathbf{r} \in \mathcal{R}$, and known results on convergence to the stationary distribution for irreducible finite state-space continuous-time Markov chains [1]. \qed
The offered service rate vector under the stationary distribution is its expectation over $\mathcal{R}$. In general, for a strictly feasible arrival rate vector $\boldsymbol{\lambda}$, we expect to find values for the parameters $u_{i,j}$, as a function of $\boldsymbol{\lambda}$, such that the offered service rate vector equals $\boldsymbol{\lambda}$. Due to the exponential form in (11), it turns out that the right structure to introduce is
$$u_{i,j} = r_{i,j}\, v_i, \qquad \forall\, i \in \mathcal{L},\ j \in \{0, 1, \ldots, k_i\}, \qquad (12)$$
where $v_i \in \mathbb{R}$, and to obtain suitable values for $\mathbf{v}$ as a function of $\boldsymbol{\lambda}$ such that the offered service rate vector equals $\boldsymbol{\lambda}$. To emphasize the dependency on $\mathbf{v}$, from now onwards, we denote the stationary distribution by $\pi_{\mathbf{v}}$ and the offered service rate vector by
$$\mathbf{s}_{\mathbf{v}} = \sum_{\mathbf{r} \in \mathcal{R}} \pi_{\mathbf{v}}(\mathbf{r})\, \mathbf{r}. \qquad (13)$$
Substituting (12), we can simplify (11) to obtain
$$\pi_{\mathbf{v}}(\mathbf{r}) = \frac{\exp(\mathbf{r} \cdot \mathbf{v})}{\sum_{\tilde{\mathbf{r}} \in \mathcal{R}} \exp(\tilde{\mathbf{r}} \cdot \mathbf{v})}. \qquad (14)$$
Optimization framework: We utilize the optimization framework in [15] to show that values for $\mathbf{v}$ exist such that $\mathbf{s}_{\mathbf{v}} = \boldsymbol{\lambda}$. In particular, we show that the unique solution $\mathbf{v}^*$ to an optimization problem given below has the property $\mathbf{s}_{\mathbf{v}^*} = \boldsymbol{\lambda}$. Next, we describe the intuitive steps to arrive at the optimization problem. If $\boldsymbol{\lambda}$ is strictly feasible, then it can be expressed as a convex combination of the rate allocation vectors, i.e., there exists a valid probability distribution $\mu$ on $\mathcal{R}$ such that $\sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r})\, \mathbf{r} = \boldsymbol{\lambda}$. For a given distribution $\mu$, we are interested in choosing $\mathbf{v}$ such that $\pi_{\mathbf{v}}$ is close to $\mu$. We consider the KL divergence of $\pi_{\mathbf{v}}$ from $\mu$, given by $D(\mu \,\|\, \pi_{\mathbf{v}})$. Minimizing $D(\mu \,\|\, \pi_{\mathbf{v}})$ over the parameter $\mathbf{v}$ is equivalent, in terms of the optimal solution(s), to maximizing $\sum_{\mathbf{r}} \mu(\mathbf{r}) \log \pi_{\mathbf{v}}(\mathbf{r})$ over $\mathbf{v}$, as $\sum_{\mathbf{r}} \mu(\mathbf{r}) \log \mu(\mathbf{r})$ is a constant. Simplifying leads to the optimization problem as follows:
$$F(\mu, \pi_{\mathbf{v}}) = \sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r}) \log \pi_{\mathbf{v}}(\mathbf{r}) \overset{(a)}{=} \sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r})\, \mathbf{r} \cdot \mathbf{v} - \log\Big( \sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v}) \Big) \overset{(b)}{=} \boldsymbol{\lambda} \cdot \mathbf{v} - \log\Big( \sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v}) \Big).$$
Here, $(a)$ follows from (14) and $(b)$ follows from the assumption $\sum_{\mathbf{r}} \mu(\mathbf{r})\, \mathbf{r} = \boldsymbol{\lambda}$. From now onwards, we denote the objective function by $F(\mathbf{v}, \boldsymbol{\lambda})$. To summarize, the optimization problem of interest is, given $\boldsymbol{\lambda}$,
$$\begin{aligned} \text{maximize} \quad & F(\mathbf{v}, \boldsymbol{\lambda}) = \boldsymbol{\lambda} \cdot \mathbf{v} - \log\Big( \sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v}) \Big) \\ \text{subject to} \quad & \mathbf{v} \in \mathbb{R}^n. \end{aligned} \qquad (15)$$
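For later reference, the gradient of the objective in (15) -- used in the discussion following Lemma 4 and in Lemma 5 -- follows from a direct computation (this intermediate step is not spelled out in the text):

$$\frac{\partial F(\mathbf{v}, \boldsymbol{\lambda})}{\partial v_i} = \lambda_i - \frac{\sum_{\mathbf{r} \in \mathcal{R}} r_i \exp(\mathbf{r} \cdot \mathbf{v})}{\sum_{\tilde{\mathbf{r}} \in \mathcal{R}} \exp(\tilde{\mathbf{r}} \cdot \mathbf{v})} = \lambda_i - \mathbb{E}_{\pi_{\mathbf{v}}}[r_i] = \lambda_i - s_{\mathbf{v}, i},$$

so the gradient vanishes exactly when the offered service rate $\mathbf{s}_{\mathbf{v}}$ matches the arrival rate vector $\boldsymbol{\lambda}$.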
The following lemma regarding the optimization problem in (15) is a key ingredient to the main results.
###### Lemma 4
Let $\boldsymbol{\lambda}$ be strictly feasible. The optimization problem in (15) has a unique solution $\mathbf{v}^*$, which is finite. In addition, the offered service rate vector under $\mathbf{v}^*$ is equal to the arrival rate vector, i.e., $\mathbf{s}_{\mathbf{v}^*} = \boldsymbol{\lambda}$.
###### Proof:
See Appendix. \qed
The important observations are that the objective function is concave in $\mathbf{v}$ and that the gradient with respect to $\mathbf{v}$ is $\boldsymbol{\lambda} - \mathbf{s}_{\mathbf{v}}$. With the offered service rate equal to the arrival rate, the next step is to show that the queues drain at a rate equal to the arrival rate.
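The following Python sketch solves (15) by plain gradient ascent using the gradient expression above; the finite rate set and the arrival rate vector are illustrative, and the step size and iteration count are ad hoc choices rather than values from the paper:

```python
import numpy as np

# Illustrative finite set of rate allocation vectors (rows) for n = 2 links.
R = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [0.5, 0.5], [1.0, 0.5], [0.5, 1.0]])
lam = np.array([0.55, 0.65])      # arrival rates, strictly inside conv(R)

def offered_rate(v):
    """Offered service rate s_v = E_{pi_v}[r], with pi_v given by (14)."""
    logits = R @ v
    w = np.exp(logits - logits.max())      # numerically stable exponential weights
    pi = w / w.sum()
    return pi @ R

v = np.zeros(2)
for _ in range(20_000):
    v += 0.5 * (lam - offered_rate(v))     # gradient ascent on F(v, lam)

print("v* ~", v)
print("s_v* ~", offered_rate(v), "  target lam =", lam)
```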
### V-A Proof of Theorem 1
Rate stability of the non-adaptive algorithm: We establish the rate stability of the non-adaptive algorithm with the result given in Lemma 4 as follows.
Consider time instances $\nu_k$ for $k \in \{0, 1, 2, \ldots\}$ with $\nu_0 = 0$, and interval lengths $\Gamma_k = \nu_{k+1} - \nu_k$. The queue at the $i$-th link can be upper bounded as follows. The offered service during the time interval $[\nu_{k+1}, \nu_{k+2}]$ is used to serve the arrivals during the time interval $[\nu_k, \nu_{k+1}]$ alone. Consider a time $t$, and choose $l$ such that $\nu_l \le t \le \nu_{l+1}$. Using (1) and the above upper bounding technique, we obtain
$$Q_i(t) = A_i(t) - \int_0^t r_i(z)\, \mathbb{I}(Q_i(z) > 0)\, dz \le \sum_{k=0}^{l-2} \Big[ A_i(\nu_{k+1}) - A_i(\nu_k) - \int_{\nu_{k+1}}^{\nu_{k+2}} r_i(z)\, dz \Big]_+ + A_i(t) - A_i(\nu_{l-1}), \qquad (16)$$
where $[x]_+ := \max(x, 0)$.
For each interval $k$, define the following two random variables:
$$\alpha_i(k) := \frac{A_i(\nu_{k+1}) - A_i(\nu_k)}{\Gamma_k}, \quad \text{and}$$
$$\beta_i(k) := \frac{1}{\Gamma_k} \int_{\nu_k}^{\nu_{k+1}} r_i(z)\, dz.$$
It follows from the strong law of large numbers that, with probability 1, $\alpha_i(k) \to \lambda_i$. From Lemma 4 and the ergodic theorem for Markov chains, it follows that, with probability 1, $\beta_i(k) \to \lambda_i$. Since the arrival process is non-decreasing and the increments are bounded by $K$, we have
$$A_i(t) - A_i(\nu_{l-1}) \le A_i(\nu_{l+1}) - A_i(\nu_{l-1}) \le K(\nu_{l+1} - \nu_{l-1}) = K(\Gamma_{l-1} + \Gamma_l). \qquad (17)$$
Rewriting (16) with the above defined random variables and applying (17), we obtain
$$\frac{Q_i(t)}{t} \le \frac{1}{\nu_l} \sum_{k=0}^{l-2} \Gamma_k \big[ \alpha_i(k) - \beta_i(k+1) \big]_+ + \frac{K(\Gamma_{l-1} + \Gamma_l)}{\nu_l}. \qquad (18)$$
In (18), the second term on the right-hand side (RHS) goes to zero as $t \to \infty$. The first term on the RHS of (18) goes to zero with probability 1 as $t \to \infty$, since $\alpha_i(k) - \beta_i(k+1) \to 0$. Thus, for any given strictly feasible $\boldsymbol{\lambda}$, with probability 1,
$$\lim_{t \to \infty} \frac{Q_i(t)}{t} = 0, \qquad \forall\, i \in \mathcal{L},$$
which completes the proof.
This result is important due to the following two reasons.
1. The result shows that this algorithm has good performance, and an algorithm that approaches the operating point of this algorithm has the potential to perform “well.” Essentially, this aspect is utilized to obtain the adaptive algorithm.
2. The non-adaptive algorithm does not require knowledge of the number of nodes or of $\epsilon$, as required by the adaptive algorithm. This suggests the existence of similar gradient-like algorithms that perform “well” with different algorithm parameters that may not depend on the number of nodes or on $\epsilon$. We do not address this question in the paper, but the non-adaptive algorithm will serve as the starting point to address such issues.
## VI Throughput Optimality of Algorithm
In this section, we establish the throughput-optimality of the adaptive algorithm for a particular choice of parameters. The algorithm parameters used in this section depend on the number of links $n$ and on $\epsilon$. It is evident from the theorem that $\epsilon$ determines how close the algorithm is to optimal performance. Define
$$C(n) := 35\,(2\bar{K} + K)^2 \Big( \frac{\bar{K}^2 n^2}{2} + n \Big).$$
We set all the step sizes (irrespective of interval) to
$$\alpha_l = \alpha(n, \epsilon) := \epsilon^2 / C(n), \qquad (19)$$
and the constant $D$ used in the projection to

$$D = D(n, \epsilon) := \frac{16 \bar{K}}{\underline{K}} \cdot \frac{n}{\epsilon} \log \Big\lceil \frac{2 \bar{K}}{\epsilon} \Big\rceil + \bar{K}. \qquad (20)$$
All the interval lengths (irrespective of interval) are set to
$$T_l = T(n, \epsilon) := \exp\Big( \hat{K} \Big( \frac{n^2}{\epsilon} \log \frac{n}{\epsilon} \Big) \Big) \qquad (21)$$

for some large enough constant $\hat{K}$.
###### Remark 2
The large value of $T_l$ in (21) is due to the poor bound on the conductance of the rate allocation Markov chain. The parameters given by (19), (20) and (21) are one possible choice of the parameters. We would like to emphasize that this choice is primarily for the purpose of the proofs. The choice of the right parameters (and even the update functions) in practice is subject to further study, especially based on the network configuration and delay requirements. Some comments on this are given in Section VIII.
We start with the optimization framework developed in the previous section. For the adaptive algorithm, the relevant optimization problem is as follows: given $\boldsymbol{\lambda}$ such that $\boldsymbol{\lambda} + \frac{\epsilon}{4}\mathbf{1}$ is strictly feasible,
maximize $F_\epsilon(v) := F\big(v, \lambda+\tfrac{\epsilon}{4}\mathbf{1}\big)$ subject to $v\in\mathbb{R}^n$.
The following result is an extension of Lemma 4.
###### Lemma 5
Consider any given and . Then, the optimization problem in (VI) is strictly concave in with gradient and Hessian
$$H(F(v)) = -\big(\mathbb{E}_{\pi^v}[rr^T] - \mathbb{E}_{\pi^v}[r]\,\mathbb{E}_{\pi^v}[r^T]\big).$$
Further, let . Then, it has a unique solution , which is finite, such that the offered service rate vector under is equal to , i.e., In addition, if , then the optimal value is such that
$$\|v^*\|_\infty \;\leq\; \frac{16\bar{K}}{\underline{K}}\,\frac{n}{\epsilon}\log\Big\lceil\frac{2\bar{K}}{\epsilon}\Big\rceil. \quad (23)$$
###### Proof:
See Appendix. \qed
The update step in (6), which is central to the adaptive algorithm, can be intuitively thought of as a gradient descent technique to solve the above optimization problem. Technically, it is different, as the arrival rate and offered service rate are replaced with their empirical values for decentralized operation. The algorithm parameters can be chosen in order to account for this. This forms the central theme of this section.
### Vi-a Within update interval
Consider a time interval . During this interval the algorithm uses parameters . For simplicity, in this subsection, we denote by and the vector by and by . For the rate allocation Markov chain (MC) introduced in Section V, we obtain an upper bound on the convergence time or the mixing time.
To obtain this bound, we perform uniformization of the CTMC (continuous-time MC) and use results given in Section IV on the mixing time of DTMC (discrete-time MC). The uniformization constant used is . The resulting DTMC has the same state-space with transition probability matrix . The transition probability from state to state is , and from state to itself is . With our choice of parameters given by (12), we can simplify (10) to
$$f(\hat{r},r) = \exp\Big(\sum_{i=1}^{n} r_i v_i\, I(r_i\neq\hat{r}_i)\Big). \quad (24)$$
For all , clearly . Since at most elements in every row of the transition rate matrix of the CTMC is positive for all . Therefore, is a valid probability transition matrix.
The DTMC has the same stationary distribution as the CTMC. In addition, the CTMC and the DTMC have one-to-one correspondence through an underlying independent Poisson process with rate In this subsection, time denotes the time within the update interval, i.e., denotes global time . Let be the distribution over given by the CTMC at time , and be a Poisson random variable with parameter . Then, we have
$$\mu(t) = \sum_{m\in\mathbb{Z}_+} \Pr(\zeta=m)\,\mu(0)P^m = \mu(0)\exp\big(At(P-I)\big), \quad (25)$$
where is the identity matrix. Next, we provide the upper bound on the mixing time of the CTMC.
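Before stating that bound, a toy numerical check of the uniformization identity in (25) may be helpful. The sketch below (Python, entirely illustrative; the 2-state generator and the uniformization constant are made-up values) compares the truncated Poisson-weighted sum against the matrix exponential.

```
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

# Hypothetical 2-state CTMC generator (rows sum to zero); illustration only.
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
A = 3.0                      # uniformization constant, A >= max_i |Q_ii|
P = np.eye(2) + Q / A        # transition matrix of the uniformized DTMC

mu0 = np.array([1.0, 0.0])   # initial distribution
t = 0.7

# Poisson-weighted sum over the number of jumps (left-hand side of (25)).
lhs = sum(poisson.pmf(m, A * t) * mu0 @ np.linalg.matrix_power(P, m)
          for m in range(200))

# mu(0) exp(A t (P - I)) = mu(0) exp(t Q) (right-hand side of (25)).
rhs = mu0 @ expm(A * t * (P - np.eye(2)))

print(lhs, rhs)              # the two vectors agree to numerical precision
```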
###### Lemma 6
Consider any . Then, there exists a constant , such that, if
$$t \geq \exp\Big(K_1\Big(n\|v\|_\infty + n\log\frac{1}{\epsilon}\Big)\Big)\log\frac{1}{\rho_1}, \quad (26)$$
then the total variation between the probability distribution at time given by (25) and the stationary distribution given by (14) is smaller than , i.e.,
###### Proof:
See Appendix. \qed
Lemma 6 is used to show that the error associated with using empirical values for arrival rate and offered service rate in the update rule (6) can be made arbitrarily small by choosing large enough . This is formally stated in the next lemma.
###### Lemma 7
Consider . Then, there exists a constant , such that, if the updating period
$$T \geq \exp\Big(K_2\Big(n\|v\|_\infty + n\log\frac{1}{\epsilon}\Big)\Big)\frac{1}{\rho_2},$$
then for any time interval
$$\mathbb{E}\big[\|\hat{\lambda}(l)-\lambda\|_1\big] + \mathbb{E}\big[\|\hat{s}(l)-s^v\|_1\big] \leq \rho_2. \quad (27)$$
###### Proof:
See Appendix. \qed
Thus, the important result is that, due to the mixing of the rate allocation Markov chain, the empirical offered service rate is close to the offered service rate. The next step is to address whether the offered service rates over multiple update intervals are higher than the arrival rates.
### Vi-B Over multiple update intervals
We consider multiple update intervals, and establish that the average empirical offered service rate is strictly higher than the arrival rate. This result follows from the observation that, if the error in approximating the true values by empirical values is sufficiently small, then the expected value of the gradient of over a sufficiently large number of intervals should be small. In this case, we can expect the average offered service rate to be close to . Since is strictly higher than the arrival rates, we can expect the average offered service rate to be strictly higher than the arrival rate. The result is formally stated next.
###### Lemma 8
Consider update intervals. Then, the average of empirical service rates over these update intervals is greater than or equal to , i.e.,
$$\frac{1}{N}\sum_{l=1}^{N}\mathbb{E}\big[\hat{s}(l)\big] \;\geq\; \lambda + \frac{\epsilon}{8}\mathbf{1}.$$
###### Proof:
See Appendix. \qed
Now, we proceed to show that the appropriate ‘drift’ required for stability is obtained.
### Vi-C Proof of Theorem 2
Consider the underlying network Markov chain consisting of all the queues in the network, the update parameters, and the resulting rate allocation vectors at time , i.e., for . It follows from the system model and the algorithm description that is a time-homogeneous Markov chain on an uncountable state-space . The σ-field on considered is the Borel σ-field associated with the product topology. For more details on dealing with general state-space Markov chains, we refer readers to [22].
We consider a Lyapunov function of the form , for . In order to establish positive Harris recurrence, for any such that , we use multi-step Lyapunov and Foster drift criteria (a special case of the state-dependent drift criteria in [22]) to establish positive recurrence of a set of the form , for some . From the assumption on the arrival processes, it follows that is a closed petite set (for definition and details see [22, 13]). It is well known that these two results imply positive Harris recurrence [22].
Next, we obtain the required drift criteria. For simplicity, we denote by in the rest of this section. Consider
$$\mathbb{E}\big[Q_i^2(TN)-Q_i^2(0)\big] = \mathbb{E}\big[(Q_i(TN)-Q_i(0))^2 + 2Q_i(0)(Q_i(TN)-Q_i(0))\big] \;\overset{(a)}{\leq}\; \big(\max(K,\bar{K})\,TN\big)^2 + 2Q_i(0)\,\mathbb{E}\big[Q_i(TN)-Q_i(0)\big].$$
Here, follows from the fact that over unit time the queue difference belongs to . Now, we look at two cases. If , clearly during the interval , as the service rate is less than or equal to . For this case, from Lemma 8,
$$2Q_i(0)\,\mathbb{E}\big[Q_i(TN)-Q_i(0)\big] = 2Q_i(0)\,T\sum_{l=1}^{N}\big(\lambda_i-\mathbb{E}[\hat{s}_i(l)]\big) \;\leq\; -\frac{\epsilon}{4}TN\,Q_i(0) \;\overset{(a)}{\leq}\; -\frac{\epsilon}{4}TN\,Q_i(0) + \frac{\epsilon}{4}\bar{K}(TN)^2.$$
Here, is trivial, but the extra term is added to ensure that the RHS evaluates to a non-negative value for . If , then clearly . Since the bounds for each case do not evaluate to negative values for the other case, we have
$$\mathbb{E}\big[Q_i^2(TN)-Q_i^2(0)\big] \;\leq\; -\frac{\epsilon}{4}TN\,Q_i(0) + \Big((K+\bar{K})^2+\frac{\epsilon}{4}\bar{K}\Big)(TN)^2.$$
Since both and are bounded, there exists some fixed such that
$$\mathbb{E}\big[v_i^2(TN)-v_i^2(0)\big] + \mathbb{E}\big[r_i^2(TN)-r_i^2(0)\big] \leq M(n,\epsilon).$$
Summing up over all , we obtain
$$\mathbb{E}\big[V(X(N))-V(X(0))\big] \;\leq\; -\frac{\epsilon}{4}TN\Big(\sum_{i=1}^{n}Q_i(0)\Big) + n\,M(n,\epsilon) + n\Big((K+\bar{K})^2+\frac{\epsilon}{4}\bar{K}\Big)(TN)^2.$$
This shows that there exists some such that for all with there is strict negative drift. Hence, the set |
## Types & Summary of Cracks in Reinforced Concrete Column
The occurrence of various crack patterns in a building mostly takes place during construction and/or after completion. A building component develops cracks whenever the stress in the component exceeds its strength. Stress in a building component is caused by externally applied forces/loads.
To start with, not all cracks are harmful; however, they should not go unnoticed. Cracks in reinforced concrete columns occur mainly due to an inadequate cross-section, insufficient reinforcing steel, or corrosion of the reinforcement.
Following are the major cracks that usually occur in reinforced concrete columns. We have tried to describe the possible reasons and important characteristics of cracks in reinforced concrete column.
### Splitting Cracks in Reinforced Concrete Column:
The figure below shows splitting cracks in a reinforced concrete column, which fails due to inadequate steel reinforcement and/or inferior concrete quality. This type of crack occurs when the load-carrying capacity of the column has been reached.
Courtesy - Gujarat Ambuja Cements Ltd.
#### Possible Reasons
Splitting Cracks in Reinforced Concrete Column
• Building in that region.
• Short parallel vertical cracks.
• Varying widths
• Inferior quality concrete.
• Load carrying capacity of the column exceeded either due to inadequate cross-section or reinforcement insufficient.
### Diagonal Cracks in Reinforced Concrete Column:
The figure below shows diagonal cracks in the reinforced concrete column due to inadequate cross-section and insufficient reinforcement steel. Figure in the first left shows diagonal cracks in end column due to inadequate load carrying capacity.
Courtesy - Gujarat Ambuja Cements Ltd.
#### Possible Reasons
Diagonal Cracks in Reinforced Concrete Column
• Runs diagonally across the section.
• Can occur anywhere in the height.
• Uniform thickness
• Cross-section or main reinforcement is insufficient
### Horizontal Cracks in Reinforced Concrete Column:
The figure below shows a horizontal crack in reinforced concrete column at the beam-column junction due to shear force.
Courtesy - Gujarat Ambuja Cements Ltd.
#### Possible Reasons
Horizontal Cracks in Reinforced Concrete Column
• Occurs near the beam-column junction.
• Moment resistance capacity of column inadequate in the corresponding region.
• Inadequate quantum of reinforcement or disposition of reinforcement not satisfactory.
### Corrosion Cracks in Reinforced Concrete Column:
The figure below shows corrosion cracks in a reinforced concrete column; they appear along the line of the reinforcement and expand with time.
Courtesy - Gujarat Ambuja Cements Ltd.
#### Important Characteristics
##### Possible Reasons
Corrosion Cracks in Reinforced Concrete Column
• Runs along the line of reinforcement.
• Uniform width in general
• Bond between reinforcing bars and concrete not satisfactory.
• May be due to corrosion of bars
# How to find $k$-th number whose digits are all even?
As my question says I have to find $k$-th number whose digits are all even. I figure out that all those numbers are made of of $\{0,2,4,6,8\}$ and there is a sequence in which the numbers change their value like:$$0,2,4,6,8,20,22,24,26,28,40,42,44,46,48\dots80,82,84,86,88,200,202,204,206,208\dots$$ But I am unable to figure out some kind of formula or short way to reach the $k$-th such number easily.
• Note that this is a duplicate of math.stackexchange.com/questions/1818439/… (but neither question has an answer yet). – almagest Jun 8 '16 at 13:21
• Half all the digits. What sequence do you get then? – Henno Brandsma Jun 8 '16 at 13:24
• half the digits will be: {0,1,2,3,4,10,11,12,13,14,20,21,22,23,24......and so on}.....how does it help me?? – agangwal Jun 8 '16 at 15:14
Let's denote the digits $\{0,2,4,6,8\}$ as $\{0,1,2,3,4\}$. Then the sequence becomes $$0, 1, 2, 3, 4, 10, 11, 12, \dots ,40, 41, 42,43, 44, 100, 101,\dots$$ which looks like the numbers in base-$5$ positional notation. So if we want to find the $k_{10}$-th term (the index $10$ means $k$ is in decimal notation), we write $k$ in base $5$ as $k_5$ and replace its digits as we did at the beginning, i.e., using the substitution $$\begin{pmatrix} 0 & 1 & 2 & 3 & 4 \\ 0 & 2 & 4 & 6 & 8 \end{pmatrix}.$$
For example, let's calculate the term of this sequence with index $8$. As $8_{10} = 1\cdot 5^1 + 3\cdot 5^0 = 13_5$, we get the number $26$ (replacing the digit $1$ with $2$ and the digit $3$ with $6$ in $13$), which is correct. Note that $0$ is counted as the zeroth term here, not the first. |
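A small sketch of this method (my own illustration, in Python): write $k$ in base $5$ and double each digit. With the convention that $0$ is the zeroth term, `kth_all_even(8)` reproduces the worked example above.

```
def kth_all_even(k: int) -> int:
    """Return the k-th number whose digits are all even (0 is the 0-th)."""
    if k == 0:
        return 0
    digits = []
    while k:
        k, d = divmod(k, 5)        # write k in base 5 ...
        digits.append(str(2 * d))  # ... and map each digit d -> 2*d, i.e. 0,2,4,6,8
    return int("".join(reversed(digits)))

# quick check against the sequence 0, 2, 4, 6, 8, 20, 22, ...
print([kth_all_even(k) for k in range(10)])  # [0, 2, 4, 6, 8, 20, 22, 24, 26, 28]
print(kth_all_even(8))                       # 26, as in the worked example
```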
# What If They Find the Bodies
A recurring dream
The dream always begins with the fear that I hadn't buried the bodies deep enough. You see, the floor of the fruit cellar was made of very hard-packed dirt, making it difficult to dig. Especially when in a hurry. I had buried them long ago and was not entirely certain that I'd done a sufficient job of covering my tracks.
Now, the police were snooping around while investigating an entirely unrelated crime. Surely they'll find something. If they do, there's no doubt the evidence will lead them quickly to me. A small crowd had gathered in the yard, anticipating something gruesome. Little do they know! I watch helplessly as several officers cautiously descend the small concrete steps leading into the cellar. Maybe it'll be fine. But then as they pull the short chain on the single bare bulb hanging from the ceiling, I can see beyond them and into the darkness. The floor shows obvious signs of being disturbed. My stomach lurches and I begin to run.
That is where the dream always ends. So far the bodies remain buried.
# Double Displacement Reaction
1. Oct 10, 2011
### Bashyboy
What exactly is a double displacement reaction? I searched my textbook for the term--yes, I did check the glossary first--and it proved futile. My teacher alludes to this term and even has one power point slide to it. It says, "When an aqueous solution of sodium carbonate is added to an aqueous solution of nitric acid, a gas evolves." And that is all he has pertaining to double displacement reactions. Now, I did come across precipitation and gas-evolution reactions, and they appear to be similar. Could someone give me a good definition of this sort of reaction?
Thank you
2. Oct 10, 2011 |
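The reply itself is cut off here. For reference (standard stoichiometry, not taken from the thread), the slide's example is a double displacement in which the two compounds swap partners, followed by decomposition of the unstable carbonic acid, which accounts for the gas evolution mentioned:

$$\mathrm{Na_2CO_3(aq) + 2\,HNO_3(aq) \longrightarrow 2\,NaNO_3(aq) + H_2CO_3(aq)}, \qquad \mathrm{H_2CO_3(aq) \longrightarrow H_2O(l) + CO_2(g)\uparrow}$$

More generally, a double displacement (metathesis) reaction follows the pattern $AB + CD \rightarrow AD + CB$; precipitation and gas evolution are simply the two common ways a product leaves the solution and drives the reaction forward.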
### Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 14 (2018), 068, 10 pages arXiv:1807.04442 https://doi.org/10.3842/SIGMA.2018.068
Contribution to the Special Issue on Painlevé Equations and Applications in Memory of Andrei Kapaev
### Numerical Approach to Painlevé Transcendents on Unbounded Domains
Christian Klein and Nikola Stoilov
Institut de Mathématiques de Bourgogne, UMR 5584, Université de Bourgogne-Franche-Comté, 9 avenue Alain Savary, 21078 Dijon Cedex, France
Received April 18, 2018, in final form July 02, 2018; Published online July 12, 2018
Abstract
A multidomain spectral approach for Painlevé transcendents on unbounded domains is presented. This method is designed to study solutions determined uniquely by a, possibly divergent, asymptotic series valid near infinity in a sector and approximates the solution on straight lines lying entirely within said sector without the need of evaluating truncations of the series at any finite point. The accuracy of the method is illustrated for the example of the tritronquée solution to the Painlevé I equation.
Key words: Painlevé equations; spectral methods.
# Direct from Dell
Euler problem 9.2
.
There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc.
```
g p =
[ [a, b, c]
| m <- [2 .. limit],
n <- [1 .. (m - 1)],
let a = m ^ 2 - n ^ 2,
let b = 2 * m * n,
let c = m ^ 2 + n ^ 2,
a + b + c == p
]
where
    limit = floor . sqrt . fromIntegral $ p
```
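For comparison, here is a rough Python transcription of the same Euclid-formula search (my own sketch, not part of the original post); for p = 1000 it finds the single triplet and its product.

```
from math import isqrt

def triplets(p: int):
    """Pythagorean triples (a, b, c) with a + b + c == p, via Euclid's formula."""
    limit = isqrt(p)
    return [
        (m * m - n * n, 2 * m * n, m * m + n * n)
        for m in range(2, limit + 1)
        for n in range(1, m)
        if (m * m - n * n) + 2 * m * n + (m * m + n * n) == p
    ]

a, b, c = triplets(1000)[0]
print(a, b, c, a * b * c)   # 375 200 425, product 31875000
```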
.
Euclid’s formula is a fundamental formula for generating Pythagorean triples given an arbitrary pair of integers $m$ and $n$ with $m > n > 0$. The formula states that the integers
$\displaystyle{ a=m^{2}-n^{2},\ \,b=2mn,\ \,c=m^{2}+n^{2}}$
form a Pythagorean triple. The triple generated by Euclid’s formula is primitive if and only if $m$ and $n$ are coprime and one of them is even. When both $m$ and $n$ are odd, then $a$, $b$, and $c$ will be even, and the triple will not be primitive; however, dividing $a$, $b$, and $c$ by 2 will yield a primitive triple when $m$ and $n$ are coprime.
Every primitive triple arises (after the exchange of $a$ and $b$, if $a$ is even) from a unique pair of coprime numbers $m$, $n$, one of which is even.
— Wikipedia on Pythagorean triple
— Me@2022-12-10 09:57:27 PM
.
. |
# statsmodels.tsa.deterministic.TimeTrendDeterministicTerm.constant¶
property TimeTrendDeterministicTerm.constant
Flag indicating that a constant is included |
## Algebra 2 (1st Edition)
$8.5 \ dollars \ per \ hour$
The total money earned is 85 dollars. The amount of time is 10 hours. Then we must find the rate by dividing dollars by hours. That equals $85/10$ or $8.5$ dollars per hour. This is correct by unit analysis, as it is a rate and the unit is $units/time$ |
# Homework Help: Measure space, null set
1. May 24, 2010
### complexnumber
1. The problem statement, all variables and given/known data
Let $$(X,\mathcal{A},\mu)$$ be a fixed measure space.
Let $$A_k \in \mathcal{A}$$ such that $$\displaystyle \sum^\infty_{k=1} \mu(A_k) < \infty$$. Prove that
\begin{align*} \{ x \in X | x \in A_k \text{ for infinitely many k} \} \end{align*}
is a null set.
2. Relevant equations
3. The attempt at a solution
Let $$S = \{ x \in X | x \in A_k \text{ for infinitely many k} \}$$.
Suppose $$\mu (S) > 0$$. Then $$\displaystyle \mu(\bigcap A_k) > 0, A_k \ni x, x \in S$$. Then $$\mu (A_k) > 0, A_k \ni x, x \in S$$ and hence
$$\displaystyle \sum^\infty_{k=1} \mu(A_k) = \infty$$, which |
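The thread is cut off at this point. For completeness, the standard direct argument (the first Borel–Cantelli lemma), which is not the original poster's attempt, runs roughly as follows. Note that $$S = \bigcap_{n \geq 1} \bigcup_{k \geq n} A_k,$$ so for every $n$, $$\mu(S) \leq \mu\Big(\bigcup_{k \geq n} A_k\Big) \leq \sum_{k=n}^{\infty} \mu(A_k),$$ and the right-hand side is the tail of a convergent series, hence tends to $0$ as $n \to \infty$. Therefore $\mu(S) = 0$, i.e., $S$ is a null set.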
# NAG Library Function Document
## 1Purpose
nag_ode_bvp_ps_lin_coeffs (d02uac) obtains the Chebyshev coefficients of a function discretized on Chebyshev Gauss–Lobatto points. The set of discretization points on which the function is evaluated is usually obtained by a previous call to nag_ode_bvp_ps_lin_cgl_grid (d02ucc).
## 2Specification
#include #include
void nag_ode_bvp_ps_lin_coeffs (Integer n, const double f[], double c[], NagError *fail)
## 3Description
nag_ode_bvp_ps_lin_coeffs (d02uac) computes the coefficients ${c}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n+1$, of the interpolating Chebyshev series
$\tfrac{1}{2} c_1 T_0(\bar{x}) + c_2 T_1(\bar{x}) + c_3 T_2(\bar{x}) + \cdots + c_{n+1} T_n(\bar{x}),$
which interpolates the function $f\left(x\right)$ evaluated at the Chebyshev Gauss–Lobatto points
$\bar{x}_r = -\cos\big((r-1)\pi/n\big), \quad r=1,2,\dots,n+1.$
Here ${T}_{j}\left(\stackrel{-}{x}\right)$ denotes the Chebyshev polynomial of the first kind of degree $j$ with argument $\stackrel{-}{x}$ defined on $\left[-1,1\right]$. In terms of your original variable, $x$ say, the input values at which the function values are to be provided are
$x_r = -\tfrac{1}{2}(b-a)\cos\big(\pi(r-1)/n\big) + \tfrac{1}{2}(b+a), \quad r=1,2,\dots,n+1,$
where $b$ and $a$ are respectively the upper and lower ends of the range of $x$ over which the function is required.
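As an informal cross-check of what the routine computes (this is not a NAG example program; it is a sketch in Python, and the coefficient convention below is an assumption based on the series given above), the same coefficients can be obtained from a degree-$n$ Chebyshev fit through the Gauss–Lobatto samples:

```
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_gauss_lobatto_coeffs(f, a, b, n):
    """Chebyshev coefficients c_1, ..., c_{n+1} (in the convention above, where the
    series starts with (1/2) c_1 T_0) of the degree-n interpolant of f on the n+1
    Chebyshev Gauss-Lobatto points mapped to [a, b]."""
    r = np.arange(1, n + 2)
    xbar = -np.cos((r - 1) * np.pi / n)                 # Gauss-Lobatto points on [-1, 1]
    x = -0.5 * (b - a) * np.cos((r - 1) * np.pi / n) + 0.5 * (b + a)
    ahat = C.chebfit(xbar, f(x), n)                     # interpolant: sum_k ahat[k] T_k(xbar)
    c = ahat.copy()
    c[0] *= 2.0                                         # account for the (1/2) c_1 T_0 term
    return c, xbar, x

# toy check with f = exp on [0, 2], n = 16 (d02uac additionally requires n even)
c, xbar, x = cheb_gauss_lobatto_coeffs(np.exp, 0.0, 2.0, 16)
recon = C.chebval(xbar, np.r_[c[0] / 2.0, c[1:]])
print(np.max(np.abs(recon - np.exp(x))))                # ~ machine precision
```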
## 4References
Canuto C (1988) Spectral Methods in Fluid Dynamics 502 Springer
Canuto C, Hussaini M Y, Quarteroni A and Zang T A (2006) Spectral Methods: Fundamentals in Single Domains Springer
Trefethen L N (2000) Spectral Methods in MATLAB SIAM
## 5Arguments
1: $\mathbf{n}$IntegerInput
On entry: $n$, where the number of grid points is $n+1$. This is also the largest order of Chebyshev polynomial in the Chebyshev series to be computed.
Constraint: ${\mathbf{n}}>0$ and n is even.
2: $\mathbf{f}\left[{\mathbf{n}}+1\right]$const doubleInput
On entry: the function values $f\left({x}_{\mathit{r}}\right)$, for $\mathit{r}=1,2,\dots ,n+1$.
3: $\mathbf{c}\left[{\mathbf{n}}+1\right]$doubleOutput
On exit: the Chebyshev coefficients, ${c}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n+1$.
4: $\mathbf{fail}$NagError *Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).
## 6Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}>1$.
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: n is even.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.
## 7Accuracy
The Chebyshev coefficients computed should be accurate to within a small multiple of machine precision.
## 8Parallelism and Performance
nag_ode_bvp_ps_lin_coeffs (d02uac) is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
nag_ode_bvp_ps_lin_coeffs (d02uac) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the x06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
The number of operations is of the order $n\mathrm{log}\left(n\right)$ and the memory requirements are $\mathit{O}\left(n\right)$; thus the computation remains efficient and practical for very fine discretizations (very large values of $n$). |
After 4,000 gallons of water were added to a large water tan
Math Expert
Joined: 02 Sep 2009
Posts: 44290
After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 01:55
The Official Guide For GMAT® Quantitative Review, 2ND Edition
After 4,000 gallons of water were added to a large water tank that was already filled to 3/4 of its capacity, the tank was then at 4/5 of its capacity. How many gallons of water does the tank hold when filled to capacity?
(A) 5,000
(B) 6,200
(C) 20,000
(D) 40,000
(E) 80,000
Problem Solving
Question: 61
Category: Algebra First-degree equations
Page: 69
Difficulty: 600
GMAT Club is introducing a new project: The Official Guide For GMAT® Quantitative Review, 2ND Edition - Quantitative Questions Project
Each week we'll be posting several questions from The Official Guide For GMAT® Quantitative Review, 2ND Edition and then after a couple of days we'll provide the Official Answer (OA) to them along with a solution.
We'll be glad if you participate in development of this project:
2. Please vote for the best solutions by pressing Kudos button;
3. Please vote for the questions themselves by pressing Kudos button;
4. Please share your views on difficulty level of the questions, so that we have most precise evaluation.
Thank you!
Math Expert
Joined: 02 Sep 2009
Posts: 44290
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 01:55
4
KUDOS
Expert's post
1
This post was
BOOKMARKED
SOLUTION
After 4,000 gallons of water were added to a large water tank that was already filled to 3/4 of its capacity, the tank was then at 4/5 of its capacity. How many gallons of water does the tank hold when filled to capacity?
(A) 5,000
(B) 6,200
(C) 20,000
(D) 40,000
(E) 80,000
4,000 gallons of water comprise 4/5 - 3/4 = 1/20, which makes the capacity of the tank equal to 4,000*20 = 80,000 gallons.
_________________
Senior RC Moderator
Status: It always seems impossible until it's done!!
Joined: 29 Aug 2012
Posts: 1116
Location: India
WE: General Management (Aerospace and Defense)
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 02:32
1
KUDOS
2
This post was
BOOKMARKED
Initially the tank is 75% full.
and 4000 gallons are added. then the tank becomes 4/5 or 80% full.
so 5% is 4000. Then the 100% will be
= $$4000 *100/5= 80,000$$.
_________________
Intern
Joined: 10 Apr 2012
Posts: 46
Concentration: Finance
Schools: Goizueta '19 (I)
WE: Analyst (Commercial Banking)
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 03:26
2
KUDOS
We can set up the following equation for the info in the question:
3/4 of x+4000=4/5 of x; where x is the capacity of the water tank.
x=80,000
We can alternatively work this out using proportion: we know that the additional 4000 gallons account for 4/5-3/4=1/20 of the tank. So if 4000 gallons account for 1/20 of the full tank, then 20/20 of the tank is 80,000.
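As a quick sanity check of this arithmetic (just an illustration, not part of the original post), solving the same equation symbolically gives the identical capacity:

```
from sympy import Eq, solve, symbols

x = symbols('x')                           # full capacity of the tank, in gallons
print(solve(Eq(3*x/4 + 4000, 4*x/5), x))   # [80000]
```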
Senior Manager
Joined: 06 Aug 2011
Posts: 380
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 08:54
1
KUDOS
1/20x=4000..x=80000..
_________________
Bole So Nehal.. Sat Siri Akal.. Waheguru ji help me to get 700+ score !
Intern
Status: I'm trying to GMAT?
Joined: 12 Feb 2013
Posts: 25
Location: United States
Concentration: Finance, General Management
GMAT Date: 06-22-2013
WE: Engineering (Consulting)
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 12:12
1
KUDOS
x=tank capacity
4000/x = (4/5-3/4)/1
4000 = 1/20(x)
x=80000
Please vote for the best solutions by pressing Kudos button;
Manager
Status: GMATting
Joined: 21 Mar 2011
Posts: 108
Concentration: Strategy, Technology
GMAT 1: 590 Q45 V27
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
28 Jan 2014, 21:41
1
KUDOS
Let us assume the total capacity of tank to be x.
Since the tank was already filled to 3/4th(or 75%) of its capacity and 4,000 gallons of water were added to bring it to 4/5 th(or 80%) of its capacity, we can make the following inference:
5% of capacity(80%-75%) = 4000;
(5/100) * x = 4,000
x = 80,000;
Ans is (E).
Math Expert
Joined: 02 Sep 2009
Posts: 44290
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
01 Feb 2014, 07:51
SOLUTION
After 4,000 gallons of water were added to a large water tank that was already filled to 3/4 of its capacity, the tank was then at 4/5 of its capacity. How many gallons of water does the tank hold when filled to capacity?
(A) 5,000
(B) 6,200
(C) 20,000
(D) 40,000
(E) 80,000
4,000 gallons of water comprise 4/5 - 3/4 = 1/20, which makes the capacity of the tank equal to 4,000*20 = 80,000 gallons.
_________________
Manager
Joined: 04 Oct 2013
Posts: 161
Location: India
GMAT Date: 05-23-2015
GPA: 3.45
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
02 Feb 2014, 06:48
After 4,000 gallons of water were added to a large water tank that was already filled to 3/4 of its capacity, the tank was then at 4/5 of its capacity. How many gallons of water does the tank hold when filled to capacity?
(A) 5,000
(B) 6,200
(C) 20,000
(D) 40,000
(E) 80,000
$$\frac{3}{4}$$ of Tank = 75% of Tank
$$\frac{4}{5}$$ of Tank = 80% of Tank
Given that, difference between 80% and 75% of Tank is 4000 gallons
Or, 5% of Tank = 4000 gallons
Or, Full capacity of Tank $$= 20 * 4000 = 80000$$ gallons of water
SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1838
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
11 Jul 2014, 01:07
1
KUDOS
Let full capacity of tank = x
3/4th of full capacity $$= \frac{3x}{4}$$
Addition of 4000 gallons $$= \frac{3x}{4} + 4000$$
Given that addition of water takes the level to 4/5th
$$\frac{3x}{4} + 4000 = \frac{4x}{5}$$
x = 4000 * 20
= 80000
_________________
Kindly press "+1 Kudos" to appreciate
Intern
Joined: 10 Jul 2014
Posts: 2
Re: After 4,000 gallons of water were added to a large water tan [#permalink]
Show Tags
11 Jul 2014, 05:54
Let x be the capacity of tank
(4/5 - 3/4) x = 4000
(0.8-0.75)x = 4000
0.05 x = 4000
x = 4000 * 20 = 80,000
# Category:Primitives involving Root of x squared minus a squared
This category contains results about primitives of expressions involving $\sqrt {x^2 - a^2}$. |
# For the cell reaction $2Fe^{3+}(aq)+2I^{-}(aq)\rightarrow 2Fe^{2+}(aq)+I_2(aq)$, $E^{\ominus}_{cell} = 0.24\ V$ at 298 K. The standard Gibbs energy $(\Delta_{r}G^{\ominus})$ of the cell reaction is: [Given that the Faraday constant $F = 96500\ C\; mol^{-1}$]
( A ) $-23.16 kJ\; mol^{-1}$
( B ) $- 46.32 kJ \;mol^{-1}$
( C ) $23.16 kJ \;mol^{-1}$
( D ) $46.32 kJ\; mol^{-1}$ |
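Worked out with the standard relation (for reference; $n = 2$ electrons are transferred in this reaction):
$$\Delta_r G^{\ominus} = -nFE^{\ominus}_{cell} = -2 \times 96500\ \mathrm{C\,mol^{-1}} \times 0.24\ \mathrm{V} = -46320\ \mathrm{J\,mol^{-1}} \approx -46.32\ \mathrm{kJ\,mol^{-1}},$$
which corresponds to option (B).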
# 2D curve with two parameters to single parameter
I have been thinking about the following problem. I have a curve in 2D space (x,y), described by the following equation: $a{x}^{2}+bxy+c{y}^{2}+d=0$ where a,b,c,d are known. It is obvious that it is a 1D curve embedded in a 2D space. So I would think there could be such a description of the curve, where only single parameter is present.
It is obvious that you can plug x and then solve a quadratic equation for y, but that is not what I'm looking for. The solution I expect is in the form $x={f}_{x}\left(t\right),y={f}_{y}\left(t\right),t\in ?$
which can be used in the case of circle equation with sine and cosine of angle.
My goal is to plot the curve in python, so I would like to start at some point of the curve and trace along it. Could you point me to a solution or some materials which are dedicated for such problems?
Brienueentismvh
Explanation:
Assuming $a>0$ and $c>0$, why not use $x=\frac{r\cos(t)}{\sqrt{a}}$ and $y=\frac{r\sin(t)}{\sqrt{c}}$, which make $ax^{2}+bxy+cy^{2}+d$ become $r^{2}\left(1+\frac{b\sin(2t)}{2\sqrt{ac}}\right)+d=0$; then solve for $r$ and substitute it back into $x$ and $y$.
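A minimal Python sketch of this substitution (my own illustration; the coefficients are made-up values with $a, c > 0$, $d < 0$ and $|b| < 2\sqrt{ac}$, so the curve is an ellipse and the expression under the square root stays positive):

```
import numpy as np
import matplotlib.pyplot as plt

# made-up coefficients of a*x^2 + b*x*y + c*y^2 + d = 0 (an ellipse here)
a, b, c, d = 2.0, 1.0, 3.0, -5.0

t = np.linspace(0.0, 2.0 * np.pi, 400)
denom = 1.0 + b * np.sin(2.0 * t) / (2.0 * np.sqrt(a * c))
r = np.sqrt(-d / denom)            # from r^2 * denom + d = 0

x = r * np.cos(t) / np.sqrt(a)
y = r * np.sin(t) / np.sqrt(c)

# sanity check: every point should satisfy the original equation
print(np.max(np.abs(a * x**2 + b * x * y + c * y**2 + d)))  # ~ 1e-15

plt.plot(x, y)
plt.axis("equal")
plt.show()
```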
jlo2ni5x
Step 1
The solution of the quadratic equation gives you one kind of parameterization of the ellipse, though you may not like it. You could find the minimum and maximum values of x and let x go from the minimum to the maximum and back again to the minimum as t increases. Going in one direction you set y to one of the solutions of the quadratic, and in the other direction you set y to the other solution, so that y is also a function of t.
There are other ways, however.
Since the equation is a quadratic equation in x and y, there are only a limited number of kinds of shapes that can solve it, and we can quickly rule out all of these shapes except for an ellipse. It is possible to find the center, major axis a, and minor axis b of the ellipse, as well as the angle the axes are rotated from the coordinate axes. Let ${x}^{\prime }=a\mathrm{cos}\theta$ and ${y}^{\prime }=b\mathrm{sin}\theta$ to get an axis-aligned ellipse centered at (0,0), then apply a coordinate transformation $x=f\left({x}^{\prime },{y}^{\prime }\right),$ $y=g\left({x}^{\prime },{y}^{\prime }\right)$ to translate and rotate the ellipse as needed. (The translation is zero in this case since it turns out the center of your ellipse is at (0,0).)
Step 2
You can save a lot of bother from the previous paragraph by knowing that at the end of the procedure you'll have something equivalent to $\begin{array}{rl}x& =h+m\mathrm{cos}\theta -n\mathrm{sin}\theta ,\\ y& =k+n\mathrm{cos}\theta +m\mathrm{sin}\theta .\end{array}$
Take a few points on the ellipse and solve for h, k, m, and n. (In this case $h=k=0$, but I've written this for a general ellipse.)
# caption package in twocolumn mode: Distinct styles for figure and (wide) figure*
I'm using a KOMA-script based twocolumn layout and modifying float captions with the caption package. Unfortunately, when redefining the default style, the new options will apply to both column-wide figures and text-wide figure*s. This results in captions that do not always have the intended width:
\documentclass[a4paper,twocolumn,DIV=16]{scrartcl}
\usepackage{blindtext}
\usepackage[
justification=RaggedRight,
width=.9\columnwidth,
]{caption}
\begin{document}
\blindtext
\begin{figure}[htpb]
\rule{\columnwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is as intended.
}
\end{figure}
\blindtext
\clearpage
\begin{figure*}[htpb]
\rule{\textwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is too small.
}
\end{figure*}
\clearpage
\captionsetup{
justification=RaggedRight,
width=.9\textwidth,
}
\blindtext
\begin{figure}[htpb]
\rule{\columnwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is too large.
}
\end{figure}
\blindtext
\clearpage
\begin{figure*}[htpb]
\rule{\textwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is as intended.
}
\end{figure*}
\clearpage
\end{document}
I'm circumventing this for now by manually switching to a separate style defined through \DeclareCaptionStyle{colfigure}{...} for column figures using \captionsetup{style=colfigure} but I'd prefer not to do this manually. Is there a way to automatise this?
• Usually using width=... as global option is a bad idea; it makes more sense when using as (local) option within an environment. Try something like calcwidth=.9\linewidth instead. – Axel Sommerfeldt Jul 5 '18 at 21:31
• @AxelSommerfeldt Can you add an answer too? I did not found option calcwidth in the package documentation. – esdd Jul 6 '18 at 7:43
• @esdd Yes, unfortunately the calcwidth option is only mentioned in the CHANGELOG file. I just took a look, it's available for about 7 years. Shame on me, hopefully an updated documentation will be available at the end of this year. – Axel Sommerfeldt Jul 6 '18 at 8:05
While the width=... option sets the width of the caption to a fixed amount immediately, calcwidth=... will evaluate the value if needed, i.e. every time a caption is actually typeset.
So usually using width=... as global option is a bad idea; it makes more sense when using as (local) option within a single figure or table.
So try something like calcwidth=.9\linewidth instead:
\documentclass[a4paper,twocolumn,DIV=16]{scrartcl}
\usepackage{blindtext}
\usepackage[
justification=RaggedRight,
calcwidth=.9\linewidth,
]{caption}
\begin{document}
\blindtext
\begin{figure}[htpb]
\rule{\columnwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is as intended.
}
\end{figure}
\blindtext
\clearpage
\begin{figure*}[htpb]
\rule{\textwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is as intended.
}
\end{figure*}
\end{document}
calcwidth=... was introduced with v3.2 of the caption package on 2011/07/30 but is still not part of the documentation. (It's only mentioned in the CHANGELOG file.) See also: https://gitlab.com/axelsommerfeldt/caption/issues/1
A new version v3.4 with a completely revised documentation is planned for the end of this year.
• Thanks @axel, that's precisely what I was looking for (and couldn't find in the docs ;-). I had guessed that this might be related to expansion issues, but didn't really know how to debug it further. – Wisperwind Jul 6 '18 at 13:26
• – Axel Sommerfeldt Jul 7 '18 at 10:53
Here is a suggestion that needs at least KOMA-Script version 3.25 and works only without package caption.
\documentclass[a4paper,twocolumn,DIV=16]{scrartcl}[2018/03/30]% needs version 3.25 or newer
\usepackage{blindtext}
\setcaptionalignment{l}% needs version 3.25
\setcapdynwidth{.9\linewidth}% needs version 3.20
\begin{document}
\blindtext
\begin{figure}[htpb]
\rule{\columnwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is as intended.
}
\end{figure}
\blindtext
\clearpage
\begin{figure*}[htpb]
\rule{\textwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is too small.
}
\end{figure*}
\end{document}
Result:
Or with package ragged2e and caption alignment L:
\documentclass[a4paper,twocolumn,DIV=16]{scrartcl}[2018/03/30]% needs version 3.25 or newer
\usepackage{blindtext}
\usepackage{ragged2e}
\setcaptionalignment{L}% needs version 3.25
\setcapdynwidth{.9\linewidth}% needs version 3.20
\begin{document}
\blindtext
\begin{figure}[htpb]
\rule{\columnwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is as intended.
}
\end{figure}
\blindtext
\clearpage
\begin{figure*}[htpb]
\rule{\textwidth}{2cm}
\caption{A few words that consume more space than a single line.
The width of this caption is too small.
}
\end{figure*}
\end{document}
Result:
• But this should work with caption package, too, so it seems I have additional work to do. I opened an issue: gitlab.com/axelsommerfeldt/caption/issues/23 – Axel Sommerfeldt Jul 6 '18 at 8:50
• Thank you, that looks like a reasonable fix to the problem. The example was somewhat simplified, my actual caption style uses more options (maybe that could be done in KOMA, too, didn't try). I prefer the key-value input syntax of the caption package over KOMA-scripts syntax however, thus accepting @AxelSommerfeldt 's answer. – Wisperwind Jul 6 '18 at 13:31 |
# Why is the random intercept variance so much larger in R than in SPSS in my model and how do I interpret the results?
I am new to Cross Validated so please forgive me if this question has been asked before. However, I did not find any post that answered my question, so here it is:
I am running a 3 level multilevel binary logistic regression (one binary outcome variable and one binary predictor variable) with 839 observations nested in 171 study participants nested in 29 groups. I am using the glmer() function of the lme4 package in R. When I am specifying the empty model and testing it against a “normal” logistic regression without a random intercept for groups and participants, the results clearly tell me that my data is clustered at the participant level and that I do need to use multilevel modeling.
Models:
M0_simple: Outcome ~ 1
M0: Outcome ~ (1 | Group/Person)
Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
M0_simple 1 975.15 979.87 -486.57 973.15
M0 3 831.07 845.25 -412.54 825.07 148.07 2 < 2.2e-16
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Moreover, when I look at the estimates of the empty multilevel model, the random intercept variance at the participant level (level 2) is very high. And when I calculated the VPC for the participant level, the result of .975 is also extremely high.
Random effects:
Groups Name Variance Std.Dev.
Person:Group (Intercept) 127.4 11.29
Group (Intercept) 0.0 0.00
Number of obs: 839, groups: Person:Group, 171; Group, 29
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 8.8693 0.8254 10.74 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
These are the results of the random intercept model once I put in my predictor variable:
M1 <- glmer(Outcome ~ Predictor + (1 | Group/Person), family =
binomial("logit"), data = data_M)
Random effects:
Groups Name Variance Std.Dev.
Person:Group (Intercept) 129.1 11.36
Group (Intercept) 0.0 0.00
Number of obs: 839, groups: Person:Group, 171; Group, 29
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 9.5390 0.9430 10.116 <2e-16 ***
Predictor -0.8154 0.4899 -1.664 0.0961 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
How should I interpret these huge random intercept variances at the participant level? I realize that the data is obviously strongly clustered at the participant level. Is this variance so high because the level 1 variance is fixed to 3.29 for multilevel logistic regressions? And does this large variance also affect the fixed effects?
I tried to calculate the predicted probabilities for the random intercept level and ended up with 99.9 %. Moreover, odds-ratios for the intercept of 13891.05 do seem weird. Did I misspecify the model somehow or what might be the issue here? When I run the same model with SPSS 23 it gives out much more reasonable results:
GENLINMIXED
/DATA_STRUCTURE SUBJECTS=Group*Person
/FIELDS TARGET=Outcome TRIALS=NONE OFFSET=NONE
/TARGET_OPTIONS DISTRIBUTION=BINOMIAL LINK=LOGIT
/FIXED EFFECTS=Predictor USE_INTERCEPT=TRUE
/RANDOM USE_INTERCEPT=TRUE SUBJECTS=Group
COVARIANCE_TYPE=VARIANCE_COMPONENTS
/RANDOM USE_INTERCEPT=TRUE SUBJECTS=Group*Person
COVARIANCE_TYPE=VARIANCE_COMPONENTS
/BUILD_OPTIONS TARGET_CATEGORY_ORDER=DESCENDING
INPUTS_CATEGORY_ORDER=DESCENDING
MAX_ITERATIONS=100 CONFIDENCE_LEVEL=95 DF_METHOD=RESIDUAL COVB=ROBUST
PCONVERGE=0.000001(ABSOLUTE)
SCORING=0 SINGULAR=0.000000000001
/EMMEANS_OPTIONS SCALE=ORIGINAL PADJUST=LSD.
Random effects:
Groups Name Variance Std.Error
Person:Group (Intercept) 5.052 .831
Group (Intercept) 0.0 …
Number of obs: 839, groups: Person:Group, 171; Group, 29
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.571 0.2413 10.652 .000
Predictor -0.445 0.2400 -1.853 .064
---
While the p-values do seem in the same ball park, the results from SPSS make much more sense to interpret. When I calculate the predicted probabilities from the fixed effects of the intercept and the predictor I get 89.3 % and 92.9 % and the odds-ratio for the intercept of 13.076 seem much more likely than 13891.05.
So what it comes down to, I guess, are the following questions:
I have read that if I want to use likelihood ratio tests to determine the significance of a predictor, I have to use a statistical program that uses Maximum Likelihood (ML) and not Restricted Maximum Likelihood (REML). This is why I use glmer() in R.
However, once I have established that a certain model (with the predictor) is correct or not, how do I interpret the results? Can I simply look at the estimates and interpret the odds-ratio and calculate VPCs and predicted probabilities? Or is this susceptible to mistakes, since the variance at level 1 for multilevel logistic regressions is fixed at 3.29 and the “higher” random variances are scaled accordingly?
Am I even allowed to calculate predicted probabilities from the random intercept model and why are the results of SPSS and R so different?
• I suspect you have mostly zeroes or mostly ones and also that many participants scored all zero or all one but that is only a guess. Sep 13, 2018 at 17:09
• Thanks for the comment @mdewey! Yes, I already checked the data and out of the 171 persons in the study, 141 had only either zeros or ones. So I guess there just is very little variation at the observation level, which rescales the person-level variance into this obscure high number, right? For another variable I investigated, only 93 out of the 141 persons had always either zeros or ones and here the variances seemed much more reasonable. Sep 13, 2018 at 19:13
## 1 Answer
Differences may potentially stem from the fact that SPSS is using the Penalized Quasi Likelihood (PQL) method to fit generalized linear mixed effects models. However, it is known in the literature that the PQL is not a good algorithm, especially for Bernoulli data and Poisson data with small counts. The glmer() function uses by default the Laplace approximation, which is better but still can be inferior to the "gold standard" which is the adaptive Gaussian quadrature (AGQ). In case you have random intercepts and only a single grouping factor, you can fit the model using AGQ with glmer() by setting a higher value into the nAGQ argument. If you want to include more than random intercepts, e.g., random slopes, you can have a look at the GLMMadaptive package.
• Thanks @Dimitris! This answer makes a lot of sense and I have found similar information elsewhere, too. However, in my case I can't use higher numerical integration with the nAGQ argument and will have to stay with the Laplace approximation because I have to estimate random intercepts for persons AND groups, right? Also, just to clarify I am not doing something wrong: I have read that using likelihood ratio tests to test for fixed effects is superior to just simply looking at the p-level for a predictor in the model. Is this correct? And can I still interpret the odds-ratio from the model? Sep 13, 2018 at 19:21
• @Sebastian yes, if you want to include random effects for both persons & groups, you can only do Laplace in R. With regard to the estimated coefficients, you have to be aware of the fact that in GLMMs because of the nonlinear link function used in the specification of the model, they have an interpretation conditional on the random effects. For more on this, check the discussion in this question: stats.stackexchange.com/questions/365907/… Sep 13, 2018 at 19:36
• Hi @Dimitris, thank you so much. You're helping me a lot! :) To make sure I unterstand correctly: The coefficients have to be interpreted by essentially saying "holding co-variables (which I don't have) and the random intercepts for persons and groups constant, the odds ratio for the coefficient is X and therefore the predictor increases the odds by...", right? So would the odds ratio give me the change in odds for the average person in the average group? I guess I'm still a bit unsure how to interpret it exactly. Sep 14, 2018 at 9:53
• Moreover, I tried using your GLMMadaptive package and it gives similar results than the glmer package I've used so far. So I guess the odds ratio from the coefficients from these models would be the change in odds for the same person in the same group, right? And the marginal_coefs() function doing in GLMMadaptive is giving me the change in odds across persons and groups , right? Sep 14, 2018 at 9:56
• @Sebastian yes, the interpretation will be conditional on the person. Most often you're interested in marginal interpretation. That is, what is the odds ratio between the group of persons with predictor value $x$ and the group of persons with predictor value $x + 1$. For example, what is the odds ratio between males and females (i.e., groups of people) not the odds ratio if you changed the sex of a specific person. For a summary of these points, check slide 332 of my course notes: drizopoulos.com/courses/EMC/CE08.pdf Sep 14, 2018 at 10:16 |
Let $\mathbf{R} = [x,y,z]$ be a cartesian vector, $R_\alpha$ it's tensor representation with $\alpha = x,y,z$ and let $R=\sqrt{x^2 + y^2 + z^2}$ be its norm. I want to do tensor derivatives of the Coulomb potential $1/R$. The first derivative is $\frac{\partial}{\partial R_\alpha} \frac{1}{R} = -\frac{R_\alpha}{R^3}$ and the second derivative is $\frac{\partial}{\partial R_\beta} \frac{\partial}{\partial R_\alpha} \frac{1}{R}= \frac{\delta_{\alpha\beta}R^2 - R_\alpha R_\beta }{R^5}$. I want to make further derivatives in Mathematica.
I tried
R = Sqrt[x^2 + y^2 + z^2]
$\sqrt{x^2 + y^2 + z^2}$
rR = 1/R
$\frac{1}{\sqrt{x^2 + y^2 + z^2}}$
drR = Grad[rR, {x, y, z}, "Cartesian"]
$\{-\frac{x}{(x^2 + y^2 + z^2)^{3/2}}, -\frac{y}{(x^2 + y^2 + z^2)^{3/2}}, -\frac{z}{(x^2 + y^2 + z^2)^{3/2}} \}$
So can I make Mathematica identify the denominators as $R^3$ and the numerators as $R_\alpha$ and get it to the compact form $-\frac{R_\alpha}{R^3}$, or is there some other way to do tensor calculus/arithmetics compactly?
Perhaps something like the following will suffice?
R /: D[R, R[α_], NonConstants->{R}] := R[α]/R
R /: D[R[α_], R[β_], NonConstants->{R}] := KroneckerDelta[α, β]
R /: MakeBoxes[R[α_], fmt_] := MakeBoxes[Subscript[R,α], fmt]
D[1/R, R[α], NonConstants->{R}] //TeXForm
$-\frac{R_{\alpha }}{R^3}$
D[1/R, R[α], R[β], NonConstants->{R}] //TeXForm
$\frac{3 R_{\alpha } R_{\beta }}{R^5}-\frac{\delta _{\alpha ,\beta }}{R^3}$
• This is beautiful, thanks a lot! Maybe you put in Clear[R] at the top, because I didn't get it to work first since $R$ was already defined in the notebook. – Jonatan Öström May 2 '17 at 10:06 |
Consider a spatial discretization of the domain into N = 3 regular intervals. MATLAB programming is selected for the computation of numerical solutions, and the code should run under both Octave and MATLAB. The methods of choice are upwind, downwind, centered, Lax-Friedrichs, Lax-Wendroff and Crank-Nicolson for the linear advection equation u_t + a u_x = 0, and FTCS (forward-time, centered-space), BTCS, Crank-Nicolson, leapfrog and DuFort-Frankel for the heat equation u_t = D u_xx, whose exact solution for a single Fourier mode is u(x,t) = e^(-D k^2 t) e^(i k x).

The von Neumann (Fourier) method is based on the assumption that the error admits a Fourier decomposition, u(x) = sum_k u_hat(k) e^(i k x). For a linear PDE with constant coefficients and periodic boundary conditions each mode evolves independently, so a scheme is stable when the amplification factor G(k) of every mode satisfies |G(k)| <= 1. For a linear advection equation we want the amplification factor to have modulus one, so that the wave does not grow or decay in time, and the same analysis also provides information about the numerical propagation (phase) speed of the waves. The quantity lambda*a, with lambda = dt/dx, is called the Courant number; in 1928 Courant, Friedrichs and Lewy (CFL) determined the corresponding stability criteria for time-marching solutions, and the upwind scheme generates stable solutions for Courant numbers not larger than one, with numerical diffusion increasing as the Courant number diminishes.

For the FTCS scheme applied to the heat equation, the growth of a single error mode is eps^(n+1)/eps^n = 1 - 4 (alpha*dt/h^2) sin^2(k h/2), i.e. G = 1 - 4 r sin^2(beta/2) with r = alpha*dt/h^2, and requiring |G| <= 1 gives the stability condition 0 < alpha*dt/h^2 <= 1/2. The Crank-Nicolson scheme, by the same analysis, is unconditionally stable. Together with the Lax equivalence theorem (for a consistent scheme applied to a well-posed linear problem, stability is equivalent to convergence), this determines which schemes can be used and with what time step.

Von Neumann analysis is not rigorous in general: it yields only a necessary condition for stability, because it does not consider the overall effect of boundary conditions (or of the coupling between subdomains) and it assumes a linear problem with constant coefficients. Matrix stability analysis and the energy method complement it; for example, stability conditions of high-order staggered-grid schemes for the three-dimensional (3D) elastic wave equation in heterogeneous media have been derived with the energy method, and similar analyses have been carried out for multiple-relaxation-time (MRT) lattice Boltzmann equations, for FD-TD methods in complex media, and for linearized forms of nonlinear problems such as the KdV equation.
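To make the FTCS result above concrete, here is a minimal MATLAB/Octave sketch (the grid size, mesh ratios and step counts are my own arbitrary choices, not taken from any of the course material quoted above) that first checks the amplification factor and then confirms the r <= 1/2 bound by time-stepping:

```matlab
% von Neumann check for FTCS applied to u_t = alpha*u_xx.
% Amplification factor: G(theta) = 1 - 4*r*sin(theta/2)^2, with r = alpha*dt/dx^2.
theta = linspace(0, pi, 401);              % Fourier phase angle k*dx
for r = [0.25 0.5 0.6]
    G = 1 - 4*r*sin(theta/2).^2;           % one amplification factor per mode
    if max(abs(G)) <= 1 + 1e-12
        verdict = 'stable';
    else
        verdict = 'unstable';
    end
    fprintf('r = %.2f: max|G| = %.2f (%s)\n', r, max(abs(G)), verdict);
end

% Numerical confirmation: march FTCS with a stable and an unstable mesh ratio.
N = 50;  dx = 1/N;  x = (0:N)'*dx;         % unit interval, u = 0 at both ends
for r = [0.4 0.6]
    u = sin(pi*x);                         % smooth initial condition
    for n = 1:500
        u(2:N) = u(2:N) + r*(u(3:N+1) - 2*u(2:N) + u(1:N-1));
    end
    fprintf('r = %.2f: max|u| after 500 steps = %.2e\n', r, max(abs(u)));
end
```

For r = 0.25 and r = 0.5 the printed max|G| stays at or below 1 and the time-stepped solution decays smoothly; for r = 0.6 the worst mode has |G| = 1.4, so round-off is amplified until the solution blows up.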
# There is an intermediate extension of degree $p$
Let $$K$$ be a Galois extension of $$F$$ and $$p$$ be a prime factor of the degree $$[K:F]$$.
I want to show that there is an intermediate extension $$F\subseteq L\subseteq K$$ with $$[K:L]=p$$.
Do we use for that the fact that the intermediate fields are in one-to-one correspondence with the subgroups of the Galois group?
Then, since $$p$$ is a prime factor of the degree $$[K:F]$$, there must be an intermediate field with that degree according to Lagrange's theorem.
Is that correct?
• So, it is a matter of finding a subgroup of the Galois group, of order $p$. This is done by Cauchy's theorem. – Sungjin Kim Nov 19 '18 at 22:24
• Here is Cauchy's theorem I am referring to: en.wikipedia.org/wiki/Cauchy%27s_theorem_(group_theory) – Sungjin Kim Nov 19 '18 at 22:26
• Ah ok! So since $p\mid [K:F]=|\text{Gal}(K/F)|$ the Galois group $\text{Gal}(K/F)$ has an element of order $p$, so there is a subgroup of the Galois group of order $p$, and so there is an intermediate field $L$ with $[K:L]=p$. Is everything correct? Do we use the same argument also for the following: Let $K$ be a finite and normal extension of $F$ and suppose that the extension $K/F$ has no proper intermediate extension. Show that the degree $[K:F]$ is a prime number. Do we use here again the one correspondence with the subgroups? But we don't have here a Galois group. @i707107 – Mary Star Nov 19 '18 at 22:54
• @MaryStar With $G = Aut(K/F)$ then $K/F$ is Galois iff $F = K^G$ (the subfield fixed by $G$). If $K/F$ is normal : if $G$ is non-trivial then look at $K^G$ and $K^H,H \le G$. If $G$ is trivial then show the minimal polynomial of any $a\in K$ is of the form $f(t) = (t-a)^n$ with $f$ irreducible thus $\gcd(f(t),f'(t)) = 1$ thus $n = {p^m}, p= char(F)$. – reuns Nov 20 '18 at 1:24
• Could you explain that further to me? I got stuck right now. @reuns – Mary Star Nov 24 '18 at 16:44 |
Seurat "Expression Level" units
2.2 years ago
paulranum11 ▴ 70
I am trying to understand what the correct units are for the violin plots output by Seurat. The Y axis is labeled "Expression Level" by default on their violin plots. If I input a matrix of counts values will my units then be log counts? likewise, if i input a matrix of TPM values will the units be log TPM?
Or, alternatively, are the units changed by the internal Seurat normalization process? Seurat (v1.4.0.8) has a normalization process that is run using Setup.
Setup(object, project, min.cells = 3, min.genes = 1000, is.expr = 0, do.logNormalize = T, total.expr = 10000, do.scale = TRUE, do.center = TRUE, names.field = 1, names.delim = "_", meta.data = NULL, save.raw = TRUE)
Some of the arguments for setup seem to include additional data normalization steps.
do.logNormalize = whether to normalize the expression data per cell and transform to log space.
do.scale = In object@scale.data, perform row-scaling (gene-based z-score)
do.center = In object@scale.data, perform row-centering (gene-based centering)
What are the appropriate units for data imported in this way?
NOTE: yes, I know Seurat (v1.4.0.8) is outdated. But I am trying to understand data that was processed using this version.
RNA-Seq • 4.1k views
2.2 years ago
I'm assuming that the behavior did not change in Seurat v2 -- in Seurat v2, the data stored in the data slot (not the counts, which are typically stored in raw.data) are used for the visualizations, and that slot will only be filled if you used the normalization parameters you mentioned above. Santosh, another biostars user, pointed me to this helpful FAQ page that explains the three different types of data that Seurat stores if you follow their standard workflow (Answer 7). The slot names may have changed in v2, but conceptually, I don't think Seurat would have ever used raw counts for the types of visualization that you mentioned.
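For what it's worth, my understanding of the do.logNormalize step in that generation of Seurat (this is an assumption worth checking against the v1.4 source; it uses total.expr = 10000 and natural logarithms) is that the values stored in object@data, which are what the violin plots draw, are per-cell depth-normalized, log-transformed counts rather than log-TPM:

$$\text{data}_{g,c} \;=\; \ln\!\left(1 + 10^{4}\,\frac{\text{counts}_{g,c}}{\sum_{g'} \text{counts}_{g',c}}\right)$$

If you feed in a TPM matrix instead of counts, the same transformation is applied to the TPM values, so the axis becomes the log of re-scaled TPM. As far as I know, the do.scale and do.center options only affect object@scale.data (the z-scored values used for PCA and heatmaps), not the "Expression Level" shown by the violin plots.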
# Math Help - Need help proving mathematical induction
1. ## Need help proving mathematical induction
Prove that 1(1!) + 2(2!) + ... + n(n!) = (n+1)! - 1, for n >= 1
Here's what I have:
Base case: n = 1.
s(n) = 1(1!) + 2(2!) + ... + n(n!) = (n+1)! - 1
Therefore, (1+1)! - 1 = 1
Correct...
Inductive step:
For n+1,
$n(n!) = (n+1)*(n+1)!$
$s(n+1) = s(n) + (n+1)*(n+1)!$
or...
$s(n+1) = (n+1)! - 1 + (n+1)*(n+1)!$
This is where I am stuck. What next?
2. Originally Posted by kalel918
Prove that 1(1!) + 2(2!) + ... + n(n!) = (n+1)! - 1, for n >= 1
Here's what I have:
Base case: n = 1.
s(n) = 1(1!) + 2(2!) + ... + n(n!) = (n+1)! - 1
Therefore, (1+1)! - 1 = 1
Correct...
Inductive step:
For n+1,
$n(n!) = (n+1)*(n+1)!$
$s(n+1) = s(n) + (n+1)*(n+1)!$
or...
$s(n+1) = (n+1)! - 1 + (n+1)*(n+1)!$
This is where I am stuck. What next?
$(n+1)! - 1 + (n+1)*(n+1)! = (n+1)! (1 + [n+1]) - 1 = (n+1)!(n+2) - 1 = (n+2)! - 1$
3. Originally Posted by mr fantastic
$(n+1)! - 1 + (n+1)*(n+1)! = (n+1)! (1 + [n+1]) - 1 = (n+1)!(n+2) - 1 = (n+2)! - 1$
Thank you so much for your help. I cannot thank you enough.
However, I am still confused as to how you reached the solution.
$(n+1)! (1 + [n+1]) - 1$
From what I see here, you factored out (n+1)!, correct? I follow at this point...
$= (n+1)!(n+2) - 1 = (n+2)! - 1$
Okay, you lost me here. Can you please clarify this once more?
4. Originally Posted by kalel918
Thank you so much for your help. I cannot thank you enough.
However, I am still confused as to how you reached the solution.
$(n+1)! (1 + [n+1]) - 1$
From what I see here, you factored out (n+1)!, correct? I follow at this point...
$= (n+1)!(n+2) - 1 = (n+2)! - 1$
Okay, you lost me here. Can you please clarify this once more?
$(n + 2)! = (n+2) {\color{red}(n+1)n(n-1) ..... 1} = (n+2) {\color{red}(n+1)!}$.
5. Originally Posted by mr fantastic
$(n + 2)! = (n+2) {\color{red}(n+1)n(n-1) ..... 1} = (n+2) {\color{red}(n+1)!}$.
Wow, it just clicked for me. Thank you so much for your help. |
# Math Help - Constructing one-sided CI from MLE
1. ## Constructing one-sided CI from MLE
$X_1,X_2,...,X_n$ is a sample from the distribution whose density is:
$f_{X}(x) = e^{-x-\theta} \mbox{ if }x \geq \theta$
Based on the MLE estimator of $\theta$ construct a one-sided confidence interval for the unknown parameter at confidence level $1-\alpha$
Here's what I have so far.
$f(x_1,...,x_n|\theta) = e^{-x_{i} + \theta}\cdot\cdot\cdot e^{-x_{n} + \theta}$
$=e^{\sum_{i=1}^{n} -x_i + n\theta}$
$\log{f(x_1...x_n|\theta)}=\sum_{i=1}^{n}-x_i + n\theta$
$\frac{d}{d\theta}\log{f(x_1...x_n|\theta)} = n$
$n=0 ?$
I'm not too sure where to go from here. Any help? Thanks.
2. YOU are ignoring the indicator function
BESIDES TWO TYPOs (the i and the negative sign in front of theta)
THE MLE is the smallest order stat.
YOU need to use common sense, not calculus (incorrectly)
to maximize the likelihood function
$f(x_1,...,x_n|\theta) = e^{-x_{1} + \theta}\cdot\cdot\cdot e^{-x_{n} + \theta}I(X_{(1)}>\theta)$
$= e^{-\sum x_{i} + n\theta}I(X_{(1)}>\theta)$
This is smallest, ZERO, when $X_{(1)}<\theta$
while you want $e^{-\sum x_{i} + n\theta}I(X_{(1)}>\theta)$ as large as possible WITH RESPECT TO THETA
So, you will need $X_{(1)}>\theta$
so to make $e^{-\sum x_{i} + n\theta}I(X_{(1)}>\theta)$ as large as possible
you want $n\theta$ or $\theta$ as big as possible
and it CANNOT exceed the data, hence the MLE (is our sufficient statistic by the way) $X_{(1)}$
3. Thank you! I was wondering what X_{(1)} means? Is that just X_1?
Also, what is the sufficient statistic? I looked it up on wikipedia but I don't really understand it.
4. Originally Posted by BERRY
Thank you! I was wondering what X_{(1)} means? Is that just X_1?
Also, what is the sufficient statistic? I looked it up on wikipedia but I don't really understand it.
$X_{(1)}$ is the minimum of $X_1, X_2,...,X_n$ |
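To sketch how the interval itself can then be finished (assuming the intended density is $e^{-(x-\theta)}$ for $x\ge\theta$, as used in the likelihood above): each $X_i-\theta$ is a standard exponential, so the pivot $X_{(1)}-\theta$ is exponential with rate $n$, and

$$P\left(X_{(1)}-\theta\le c\right)=1-e^{-nc}=1-\alpha \quad\Longrightarrow\quad c=\frac{\ln(1/\alpha)}{n}.$$

Hence $\left[\,X_{(1)}-\ln(1/\alpha)/n,\; X_{(1)}\,\right]$ covers $\theta$ with probability exactly $1-\alpha$ (the upper endpoint just uses the fact that $\theta\le X_{(1)}$ always).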
###### Variance development
lectures · 87 · 3 · 7 Jul '21
Hello, When rereading the course, I noticed that the following development (p. 93) does not look…
lectures · 35 · 0 · 7 Jul '21
This section is labelled as optional material in the lecture notes. Does that mean its contents wil…
###### Lecture10 - Duality [Slide 7] - Dual Lasso Problem
lectures · 77 · 2 · 6 Jul '21
Dear CS-439 Team, In the Q&A session, I remember that with Professor Jaggi, we discussed that the …
###### Iterated Expectations, Thrm 10.1
lectures · 39 · 3 · 1 Jul '21
Hello, When rereading the lecture notes about coordinate descent, I struggle understanding how…
###### lecture 10, slide 7
lectures · 59 · 5 · 1 Jul '21
Hi, In slide 7 of lecture 10, I don't understand how we can get from max to min in the last two …
###### Convexity of Open Domains
lectures · 17 · 1 · 30 Jun '21
This is probably trivial but for quite a few theorems dom(f) needs to be convex. However, often dom…
###### Mini-Batch SGD
lectures · 36 · 1 · 30 Jun '21
Hello, could you explain the second equality? And why the inner product after the third equality is…
###### Frank wolfe referential
lectures · 21 · 1 · 25 Jun '21
Hello, In the definition of the linear minimization oracle of Frank Wolfe algorithm, it makes se…
###### Question regarding Lipschitz continuity.
lectures · 44 · 2 · 23 Jun '21
I have a couple of questions regarding Lipschitz continuity: 1- If a function f is smooth, does it…
###### Lecture 10 slide 11
lectures · 46 · 4 · 8 Jun '21
Hello, In the slide 11 of the Lecture 10. I don't understand why we have the expectation of the g…
###### Lecture Notes for Lecture09 Coordinate Descent
lectures · 31 · 1 · 5 May '21
Hi, is there lecture notes for this lecture? Thanks.
###### Lecture 8
lectures · 75 · 6 · 3 May '21
Hello, Please correct me if I'm wrong, I think there is a problem on the video of lecture 8 at min…
###### Possible typo lemma 7.2
lectures · 23 · 1 · 29 Apr '21
Hello, in the following: ![one.JPG](https://oknoname-crm1.s3.amazonaws.com/spirit/images/116/5860…
###### Lecture notes for week 8
lectures · 34 · 2 · 29 Apr '21
Hello, I believe you forgot to update the lecture notes with the Frank-Wolfe content. It would b…
###### Lecture 6 slide 29
lectures · 32 · 2 · 28 Apr '21
Hello, regarding the proof for the potential function decrease in slide 29 in the last step I don'…
###### Update lecture notes
lectures · 27 · 1 · 18 Apr '21
Please update the lecture notes for chap. 7. Thank you in advance!
###### Slides for lecture 6
lectures · 41 · 2 · 15 Apr '21
Hello, I think you forgot to upload the slides on github for this week's video. Would it be poss…
###### No lectures this week?
lectures · 25 · 1 · 1 Apr '21
Hi, How come there are no video lectures posted this week? Best
###### Lecture 5 : slide 13
lectures · 88 · 4 · 26 Mar '21
Hello, I have a trouble understanding how the last equality have been made in the slide 13. I f…
###### Can we have a full version of the lecture notes?
lectures · 24 · 1 · 25 Mar '21
Hello, I am wondering if we could have a full version of the lecture notes rather than updating … |
# A certain shop repairs both audio and video components. Let $A$ denote the event that the next component brought in for repair is an audio component, and let $B$ be the event that the next component is a compact disc player (so the event $B$ is contained in $A$ ). Suppose that $P(A)=.6$ and $P(B)=.05 .$ What is $P(B | A) ?$
#### Topics
Probability Topics
### Video Transcript
All right, so let's just rehash the question quite quickly. We have a certain shop which repairs both audio and video components. The event A here we'll define to be the event that the next component brought in for repair is an audio component, and B is the event that the next component is a CD player. So what that basically means is that the event B is contained in A; in other words, B is a subset of A, so to speak. Okay, so given that the probability of A is 0.6 and the probability of B is 0.05, what is the probability of B given A? By definition, this will be a quick mathematical calculation: the probability of B intersect A divided by the probability of A, just going off the definition of conditional probability. We already have the denominator, since the probability of A is 0.6. So what's the probability of B intersect A? Since B is a subset of A, the probability of A intersect B is really just the probability of B, as the intersection of A and B is the smaller of the two sets when one is contained in the other. So in other words, the numerator ends up being the probability of B, and what we have is 0.05 divided by 0.6, which ends up being 1/12. So I'm going to leave it there. Thank you very much.
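In symbols, the computation in the transcript is just

$$P(B\mid A)=\frac{P(A\cap B)}{P(A)}=\frac{P(B)}{P(A)}=\frac{0.05}{0.60}=\frac{1}{12}\approx 0.083.$$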
# Throw error if font is not defined
I need to throw an error, should a certain font not be installed.
I have a font in a subfolder: /font/NinjaFont.ttf. This must be installed via certain commands (not important for this - but here: http://math.stanford.edu/~jyzhao/latexfonts.php)
EDIT:
If it's not installed I get about 200 error messages and 2 warnings: ...MiKTeX\2.9\tex\latex\pst-text\pst-text.sty:31: LaTeX Font Warning: Font shape 'T1/Fonts/NinjaFont/m' undefined(Font) using 'T1/cmr/m/n' instead on input line 31. ...pst-text.sty:31: LaTeX Font Warning: Some font shapes were not available, defaults substituted.
I need to detect whether T1/Fonts/NinjaFont/m is defined.
The following code can help me some of the way:
\PackageError{NinjaPackage}{The ninja font is not defined}{See ninja instructions.}
\stop
Not I just need to detect the font, so I can throw the error. Anyone have a solution or pointers to where I can find the answer?
More EDIT:
Example files:
T1-WGL4.enc: http://math.stanford.edu/~jyzhao/T1-WGL4.enc
t1ninja.fd:
\ProvidesFile{t1ninja.fd}
\DeclareFontFamily{T1}{ninja}{}
\DeclareFontShape{T1}{ninja}{m}{n}{ <-> ninja}{}
\pdfmapline{+ninja\space <ninja.ttf\space <T1-WGL4.enc}
I think it would be around here, that the compiler realizes, that the font does not exist in the folder with other fonts.
Can I do a \xifelsethen{ {warningThrown} {Error} {noError} }?
EDIT 4:
Based on previous answers this seems to work out nicely. I have to install the package globally; then I can be sure it will work.
\documentclass{article}
\usepackage{etoolbox}
\makeatletter
\def\define@newfont{%
\begingroup
\let\typeout\@font@info
\escapechar\m@ne
\expandafter\expandafter\expandafter
\split@name\expandafter\string\font@name\@nil
\expandafter\ifx
\csname\curr@fontshape\endcsname \relax
\expandafter\gdef\csname \curr@fontshape/sub\endcsname{}% new
\wrong@fontshape\else
\extract@font\fi
\endgroup}
\newcommand\ninjaFont[1]{%
{ \fontfamily{ninja}\selectfont #1
\ifcsname \f@encoding/\f@family/\f@series/n/sub\endcsname
\PackageError{NinjaSetup}{Not installed}{Do this}
\stop
\else
\fi
}}
\makeatother
\begin{document}
\ninjaFont{It works?}
\end{document}
• T1/Fonts/NinjaFont/m looks odd, I doubt that you did setup the font commands correctly. Beside this: You get only a warning not an error if this font is undefined. Better make a complete example that demonstrates your problem. – Ulrike Fischer Jan 30 '15 at 12:04
• You are correct. I messed up! – Rasmus Bækgaard Jan 30 '15 at 12:23
• Your font setup is wrong, you seem to use a foldername as familyname. You could probably adapt the code here tex.stackexchange.com/questions/218539/… but without an example I won't test. – Ulrike Fischer Jan 30 '15 at 12:47
• Added minimum example – Rasmus Bækgaard Jan 30 '15 at 13:32
• If your fd-filed is named t1ninja and font family "ninja" you must call the font as \usefont{T1}{ninja}{m}{n} . And if the ttf-file is named NinjaFont.ttf you must adapt the \pdfmapline command. Beside this it is unclear if you want to check if the fd-file or the ttf-file exists. – Ulrike Fischer Jan 30 '15 at 13:37 |
# How to average quantized and truncated data?
So I have data that has been quantized by an analogue-to-digital converter (continuous data has been turned into discrete data, and the values range from 0 to the saturation value, which is 127 in this case).
This particular instrument I used to gather the data is quite noisy, let's say there is added Gaussian noise to the real value. Luckily, when taking single measurements, I have enough time to take multiple measurements and average them to reduce the noise. Note that sampling rate is not an issue here since the thing that I'm taking measurements of is completely stable.
Obviously, taking the simple mean will produce a biased result because values cannot go below 0 or above 127 (for example, if you attempt to use plain old averaging on something with a "real" value of 126, you will get an estimated value that is less than 126, because the added Gaussian noise will never give you any value higher than 127 due to the truncation). So how do I take the average so that the result gives me an unbiased estimator of the real value?
• en.wikipedia.org/wiki/Winsorized_mean – whuber Mar 16 '11 at 22:59
• I guess that could help but I would really need to change how to take the winsorized mean when i'm close to saturation and zero so it really doesn't solve my problems.. I was looking for an approach with more mathematical foundations. I mean, this problem seems like such a common thing that someone must have done work on it already. – umps Mar 17 '11 at 14:50
• I don't understand the problem. The computation is simple. You only need to compute what proportion of your data are 0 and what proportion are 127 ("saturation"). Let $q$ be the larger of those two proportions. If $q\ge.50$ you're out of luck no matter what. Otherwise, let $\bar{m}$ be the mean of the middle $1-2q$ of the data (lying between the $q$ and $1-q$ quantiles), $x_{-}$ be the $q$ quantile, and $x_{+}$ be the $1-q$ quantile. The Winsorized mean equals $(1-2q)\bar{m}+q(x_{-}+x_{+})$. I don't understand the reference to "more mathematical foundations," either. – whuber Mar 17 '11 at 14:59
http://en.wikipedia.org/wiki/Truncation_%28statistics%29
This is not much help, but at least it gives the correct buzzword (truncated, not quantized; quantization is not your problem) and one pointer to a paper. This should do as a starting point for further search.
Oh, and Winsorized mean is the exact opposite from what you want.
• He's using an ADC, so he's got both truncation (i.e., saturation) and quantization effects. It's straightforward to show that if the noise is zero-mean Gaussian, then the expected value of each sample is something like $\sum_{i=1}^m q^\star_i \cdot (\Phi((q_{i} - \mu)/\sigma) - \Phi((q_{i-1}-\mu)/\sigma))$ where $q^\star_i$ is the quantization value of the $i$th bin, $q_i$ is the $i$th bin boundary and $\Phi(\cdot)$ is the standard normal cdf. If he wants something unbiased he has to find a way to get that $\mu$ outside of the $\Phi(\cdot)$ functions! – cardinal Mar 18 '11 at 3:52
• Yes, I have both truncation and quantization. That is correct, I will look into the truncation further to see if I can find anything. – umps Mar 18 '11 at 18:07
• "Exact opposite" in what sense? – whuber Mar 18 '11 at 19:51
If your data follow a truncated normal distribution, this link gives you a implementation in R language for the computation of the mean and variance of a truncated normal distribution :
http://www.r-bloggers.com/truncated-normal-distribution/ |
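One way to act on cardinal's comment is a method-of-moments style correction: compute the model-implied mean of the recorded value as a function of the true level, then numerically invert it at the observed sample mean. The MATLAB/Octave sketch below is only an illustration; the half-integer bin edges (round-to-nearest with saturation), the assumption that the noise standard deviation sigma is known, and the function name are all my own choices, and the root-find will fail if the readings are almost entirely stuck at 0 or 127.

```matlab
% Save as demean_quantized.m.  y: vector of recorded integer values in 0..127,
% sigma: known std. dev. of the additive Gaussian noise.  Returns an estimate
% of the true level mu by matching E[recorded value | mu] to mean(y), where
% E[.] is built from bin probabilities Phi((q_i - mu)/sigma) - Phi((q_{i-1} - mu)/sigma).
function mu_hat = demean_quantized(y, sigma)
    Phi   = @(z) 0.5*erfc(-z/sqrt(2));           % standard normal CDF, no toolbox needed
    edges = [-Inf, (0:126) + 0.5, Inf];           % bin i collects noisy values that record as i
    vals  = 0:127;                                % recorded value for each bin
    Ey    = @(mu) sum(vals .* (Phi((edges(2:end)   - mu)/sigma) ...
                             - Phi((edges(1:end-1) - mu)/sigma)));
    target = mean(double(y));
    mu_hat = fzero(@(mu) Ey(mu) - target, target);  % invert the mean map numerically
end
```

Something like mu_hat = demean_quantized(y, 1.5) would then stand in for the plain mean(y); a full maximum-likelihood fit over the same binned model is the natural next step if sigma also has to be estimated.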
# Graph classes in which CLIQUE is known to be NP-hard?
Given a graph $G$ and a positive integer $k$, the CLIQUE problem asks if $G$ contains a clique (complete subgraph) on at least $k$ vertices. This problem is long known to be NP-complete --- in fact, it was one of Karp's list of 21 NP-complete problems. My question is:
For what restricted families of graphs is CLIQUE known to be NP-complete?
I could find one such graph class with Google's help: the class of $t$-interval graphs for any $t\ge 3$ (Butman et al., TALG 2010) [1].
Do you know of other graph classes where this problem has been shown to be NP-complete?
[1] Butman, Hermelin, Lewenstein, Rawitz. Optimization problems in multiple-interval graphs. ACM Transactions on Algorithms 6(2), 2010
• ISGCI is your friend. – Tsuyoshi Ito Nov 26 '10 at 17:58
• @Tsuyoshi Ito - that's a fantastic resource, certainly worthy of answer status. – s8soj3o289 Nov 27 '10 at 2:52
• @blackkettle: It's in the FAQ. – András Salamon Nov 28 '10 at 21:22
It's NP-complete to find maximum cliques in claw-free graphs [Faudree, Ralph; Flandrin, Evelyne; Ryjáček, Zdeněk (1997), "Claw-free graphs — A survey", Discrete Mathematics 164 (1–3): 87–147] and in string graphs [Jan Kratochvíl and Jaroslav Nešetřil, INDEPENDENT SET and CLIQUE problems in intersection-defined classes of graphs, Commentationes Mathematicae Universitatis Carolinae, Vol. 31 (1990), No. 1, 85–93]. At least as of the 1990 paper it was open whether the problem remained hard for intersection graphs of straight line segments.
However, finding maximum cliques is easy for planar graphs, for minor-closed graph families, or more generally for any family of graphs with bounded degeneracy: find the minimum degree vertex, search for the largest clique among its O(1) neighbors, remove the vertex, and repeat. It's also easy for perfect graphs and the many important subfamilies of perfect graphs.
Although maximum independent set is hard for many other interesting graph classes, that doesn't generally lead to interesting hardness results for clique, because the complement of an interesting graph class is not necessarily itself interesting.
The equivalence of CLIQUE on $G$ and INDEPENDENT SET on $\overline{G}$ will perhaps help you find more classes for which the problem remains NP-complete.
Although this doesn't answer the question as stated (with NP-hardness) I'd like to point out that even though CLIQUE is known polytime solvable on perfect graphs, I believe it is still open to find a (non-ellipsoid-like) combinatorial algorithm for CLIQUE even on perfectly orderable graphs... (that is, a perfectly orderable graph when the perfect order is not part of the input.) |
# Magnetic Vector Potential
1. Dec 8, 2008
### lovinuz
1. The problem statement, all variables and given/known data
There is a disc with radius R which has a uniformly-distributed total charge Q, rotating with a constant angular velocity w.
(a) in a coordinate system arranged so that the disc lies in the xy plane with its center at the origin, and so that the angular momentum point in the positive z direction, the local current density can be written J(x,y,z) = K(x,y) d(z). determine the surface current K(x,y) in terms of Q, w, and R.
(b) using the law of Biot and Savart, determine the magnetic field at point r=sk, k is the vector direction. find the same for r=-sk.
2. Relevant equations
3. The attempt at a solution
Last edited: Dec 8, 2008
2. Dec 8, 2008
### lovinuz
I might add that we can use cylindrical coordinates, expressing this as K(r, phi) where r = sqrt(x^2 + y^2) and phi = tan^{-1}(y/x). This is for part (a).
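A sketch of how part (a) is usually set up, in the cylindrical coordinates suggested above (worth checking against your own conventions): the surface charge density is sigma = Q/(pi R^2), and a ring of charge at radius r moves with speed omega*r in the phi direction, so

$$\vec K(r,\phi)=\sigma\,\omega r\,\hat{\boldsymbol\phi}=\frac{Q\,\omega\,r}{\pi R^{2}}\,\hat{\boldsymbol\phi},\qquad r\le R,$$

and the volume current density is then J = K(r, phi) d(z) as stated in the problem.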
# Is there a name for a neither-increasing-nor-decreasing-but-alternating sequence like below?
A sequence of at least two distinct integers ${ \left\{x_1, x_2, x_3, x_4, \ldots, x_n | n \geq 2, x_i \ne x_j \forall i \ne j \right\} }$ that satisfies either the property: $$x_1 < x_2 > x_3 < x_4 \cdots <> x_n$$
or:
$$x_1 > x_2 < x_3 > x_4 \cdots <> x_n$$
Is there a name for such a sequence?
Given any sequence of distinct integers, it is possible to arrange (or, "sort") it to satisfy the above property.
Positive integers can be arranged like http://oeis.org/A065190 or http://oeis.org/A103889.
The "sorted" arrangements of $\{ 1, 2 \}$ is: $1 > 2$ and $2 > 1$. The "sorted" arrangements of $\{ 1, 2, 3 \}$ is: $1 < 3 > 2$ and $3 > 1 < 2$, and their mirrors: $2 < 3 > 1$ and $2 > 1 < 3$.
How many such (mirrored and non-mirrored) arrangements are possible?
Are there any other properties of such sequences?
That would just be an alternating sequence. At least, if you substract the mean, it literally is an alternating sequence. – Raskolnikov Apr 9 '12 at 3:53
When $x_1,\dots,x_n$ are a permutation of the numbers $1,\dots,n$, they are sometimes called up-down and down-up permutations, respectively. – Brian M. Scott Apr 9 '12 at 3:54
@Raskolnikov: I would normally understand alternating sequence to be one whose terms alternated in sign. – Brian M. Scott Apr 9 '12 at 3:54
@Raskolnikov: I can't see $1325476$ as the sum of a constant sequence and a sequence whose terms alternate in sign. – Rahul Apr 9 '12 at 4:32
@Rahul: You're right, I didn't think that through. Thanks for correcting me. – Raskolnikov Apr 9 '12 at 8:18
It doesn’t really matter what $x_1,\dots,x_n$ are, as long as they are distinct, so let’s take them to be $1,\dots,n$. These permutations are called zigzag permutations. A permutation of your first kind is an up-down permutation or an alternating permutation; one of your second kind is a down-up permutation.
It’s not hard to see that there are exactly as many up-down permutations as there are down-up permutations. If $\pi$ is a zigzag permutation of $1,\dots,n$, let $\hat\pi$ be the permutation obtained by changing each term $x$ into $n+1-x$. For example, if $n=5$ and $\pi=32514$, $\hat\pi=34152$. If $\pi$ is up-down, $\hat\pi$ is down-up, and if $\pi$ is down-up, $\hat\pi$ is up-down, so this bijectively pairs up the two kinds of zigzag permutations.
The number of up-down permutations of $1,\dots,n$ is often denoted by $A_n$; these numbers have been called the Euler zigzag numbers and the up-down numbers. To get the total number of zigzag permutations of $1,\dots,n$, just double $A_n$. As you’ve discovered, $A_2=1$ and $A_3=2$. The next few values are $A_4=5$, $A_5=16$, and $A_6=61$. These numbers are sequence A000111 in the On-Line Encyclopedia of Integer Sequences. To the best of my knowledge they have no nice closed form; a rather ugly formula for $A_n$ is derived here.
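If a computation helps, here is a small sketch (mine, using the boustrophedon/Seidel–Entringer–Arnold recurrence rather than anything from the links above) that produces $A_0, A_1, A_2, \dots$; for $n \ge 2$, doubling $A_n$ gives the total number of zigzag arrangements you asked about.

```python
def zigzag_numbers(n_max):
    """Euler zigzag numbers A_n (OEIS A000111), computed with the
    boustrophedon (Seidel-Entringer-Arnold) triangle: each new row is a
    running sum of the previous row taken in the reverse direction."""
    values = [1]                     # A_0 = 1
    row = [1]
    for n in range(1, n_max + 1):
        new_row = [0]
        for k in range(n):
            new_row.append(new_row[-1] + row[n - 1 - k])
        values.append(new_row[-1])   # A_n is the last entry of the new row
        row = new_row
    return values

print(zigzag_numbers(6))             # [1, 1, 1, 2, 5, 16, 61]
```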
# I just took a test and I want to know if I did this problem right? Springs+ Vibration
## Homework Statement
A 500 g block is released from rest and slides down a frictionless track that begins 2.00 m above the horizontal. At the bottom of the track, where the surface is horizontal, the block strikes and sticks to a light spring with a spring stiffness constant 20.0 N/m. Find the maximum distance the spring is compressed.
## Homework Equations
mgh= 1/2mv^2
1/2KA^2=1/2mvmax^2
## The Attempt at a Solution
9.8(2)=1/2v^2
v = 6.26 m/s
1/2(20) A^2= 1/2(.5)(6.26)^2
A= .99 m
I did this because I figured that if I found the velocity when the box first hit the spring, it would be considered the maximum velocity since the spring is at equilibrium at that point in time. Then, since it asked for the maximum distance it would be compressed, I just found the amplitude. Is this right? I really hope so because that is what I did on my test...Thanks for the help.
## Answers and Replies
Andrew Mason
Science Advisor
Homework Helper
Correct. But it would have been easier simply to equate gravitational potential with spring energy.
$$mgh = \frac{1}{2}kx^2$$
$$x = \sqrt{2mgh/k} = \sqrt{2*.5*9.8*2/20} = .99 m.$$
AM
Thanks. I was going to do it like that but my mind was convinced that it had to be more difficult than that.
WOOOO HOOO! I did it right and I got 100 on my test!! What an amazing day. :) |
### TeXShop Changes 4.19
Version 4.19 cleans up three minor issues in 4.18:
• In some situations, the initial bracket or brace of a command parameter was not syntax colored. The bug was hard to see in Lite mode, but stood out like a sore thumb in Dark mode. The bug is fixed. Many users may not have noticed this bug because it only occurred when the second of the three new "Spell Checking" options in TeXShop Preferences was on.
• John Farragut pointed out that the console transparency was set to the source transparency rather than to its own value as set in Themes. This is fixed.
• Before version 4.18, the tags menu was entirely reconstructed after each new text entry. This was wasteful of computer resources and risked making the editor sluggish. In 4.18, this code was deactivated and instead the tags menu is constructed just before it is displayed.
Tag entries can be displayed in three different menus. If the toolbar is in "Icon Mode" or "Icon + Text" mode, the icon itself is a drop-down menu which displays the tags. If the toolbar is in "Text Only" mode, the "Tags text" is a drop-down menu which displays the tags. Finally, there is a TeXShop Preference setting which adds a Tags menu to the menu bar, and this menu also displays the tags. Clicking on the Tags Icon updates all three menus.
However, there was a bug in 4.18, so that clicking on the "text only" tool or on the Tag menu in the menu bar did not update any menus. This caused problems for users whose toolbars are in "Text Only" mode, since nothing they clicked updated the menus. Consequently, all tag menus remained empty. This problem also affected the new Labels item. Most users set the toolbar to "Icon" or "Icon + Text" modes, and did not run into this problem. I got only one bug report.
Unfortunately, this problem does not have an easy solution. The code in version 4.18 depended on a feature of PopUp menus in Cocoa; they can send a notification when first clicked, before displaying their popup menu, giving time for the menu to be constructed on the fly. Ordinary menus and menus attached to "Text Only" mode do not come with this Cocoa feature. I spent several days trying to find a workaround, before deciding that any workaround would not be in the spirit of Cocoa and could easily break in the future. Returning to the old method of updating the Tag menu would fix the problem. But this would risk a sluggish editor, and benefit only the 10% of users who set the toolbar to text only mode.
Consequently, version 4.19 introduces another tool for these special users affected by the problem. The name of the tool is "Update" because clicking it updates all tag and labels menus to the current state of the text. Users who adopt text-only toolbars should add the Update tool to their Source toolbar. If they sometimes use "single window mode", they should also add the Update tool to that toolbar. Then use Tags and Labels as usual, even after editing text. But if you begin to notice that Tags or Labels doesn't take you to exactly the correct line, hit Update before continuing. Incidentally, when syntax coloring is on, the Tags and Labels menus are updated once when a document is first opened.
### TeXShop Changes 4.18
Version 4.17 was intended to remain the release version for several months. Two days after the release and out of the blue, I received four suggested code changes from Neil Sims, the Head of the Department of Mechanical Engineering at The University of Sheffield. Version 4.18 contains his changes and a minor extra bug fix. These changes are listed first. Sims' changes led to significant improvements in the underlying TeXShop code, and these improvements are explained at the end of this section for interested readers.
• There is a new tool in the Source and Combined Windows toolbars called "Labels". It behaves like the existing Tags tool except that it lists all labels in the source code. Selecting an item in the resulting pull down menu takes the user to the definition of that particular label.
• The Tags menu now also tags lines beginning with the following commands, for users of Beamer and Powerdot:
\begin{frame}
\begin{slide}
\begin{wideslide}
\begin{notes}
• When a TeXShop engine job runs, it will find that TeXShop has set a new environment variable for it called TS_CHAR. This variable holds the current selection location in the source file. Some engine authors may find this useful.
• Michael Beeson sent a crash report for TeXShop when using the "search" method of synchronization, a very old method mostly superseded by Laurens' SyncTeX. The crash is fixed in 4.18.
• The most controversial of Sims' additions, and the most useful for some, concerns spell checking in TeXShop. When spell checking is on, many LaTeX commands are marked as misspelled. This is annoying. One common solution is to install a LaTeX-aware spell checker like cocoAspell. Thanks to Sims, TeXShop can now handle this problem --- for some users --- while using the standard Apple spell checker and standard Apple dictionaries.
Apple provides three ways to spell check text in Cocoa, and TeXShop inherits these three methods. The methods are activated for the current file in TeXShop's Edit menu, and default values can be set in TeXShop Preferences.
The first of these items is titled "Check Spelling", and has a keyboard shortcut "command + semicolon". When this combination is pressed, the first misspelled word is highlighted. Each additional press causes TeXShop to jump to the next misspelled word and highlight it. This spell check command is thus a glorified search in which only misspelled words are found.
A second way to spell check is to activate the menu item "Correct Spelling Automatically." This converts your computer into a giant iPhone, constantly standing behind you and changing what you type into what it thinks you ought to have typed. This feature can be turned off in system preferences, but users had a hard time discovering how to do it. So I added this item to TeXShop, not because I wanted users to use it, but because I wanted users to easily turn it off !
The final way to spell check is to use the menu item "Check Spelling While Typing." This item underlines misspelled words as they are typed, and the user can then go back and correct these words. This is how I spell check in TeXShop. The new spelling code works well with this style of spell checking. The new code doesn't work with the other methods, but it does no harm there.
In TeXShop Preferences, there is a new box of selections labeled "Spell Checking". These items are off by default. Leave them off if you use cocoAspell or any spell checking method except "Check Spelling While Typing."
The first spell checking item turns off spell checking for all TeX command words: \documentclass, \usepackage, \begin, \alpha and the like. The second is explained in the next paragraph and the third turns off spell checking inside comments. Some users may write little essays as source comments and prefer to leave spell checking on for them.
Many TeX commands have optional parameters [...] and mandatory parameters {...}. The entries inside these parameters can also be specialized TeX words. That is true of the first example below, and it is annoying if the spell checker marks "parfill" and "parskip" as misspelled. On the other hand, in the second example the parameter is a user-supplied string which ought to be spell checked.
\usepackage[parfill]{parskip}
\emph{This remark has been made before}
The second item in the "Spell Checking" box of TeXShop Preferences turns on a somewhat crude method of handling both kinds of parameters. When this item is on, most TeX command parameters are spell checked. But TeXShop has an internal list of certain TeX commands whose parameters contain specialized words, and for these it turns off spell checking for the first two parameters (if they occur in the source). This internal list, incidentally, is exactly the list marked for special handling by cocoAspell. But cocoAspell is more intelligent, and can mark which parameters to spell check and which to skip.
There is a hidden preference setting to extend the list of specialized TeX commands. The first command below adds one more element to the existing array of user supplied exceptions. The second command erases the array so the user can start over. Note that neither command affects TeXShop's default list of special commands.
defaults write TeXShop UserCommandsWithoutParameterSpellCheck -array-add "\\verbatim"
defaults write TeXShop UserCommandsWithoutParameterSpellCheck -array
What is the mechanism used to turn off spell checking? I wish I had thought of Sims' idea. The text in the TeXShop source window is an "attributed string." This means it is an ordinary (often very long) string, with an additional data structure associated with the string that lists attributes like "text color" and "background color" for selected ranges of the string. Sims noticed that one of the available attributes is "do not spell check this selection." So Sims added lines to TeXShop's syntax coloring code which prevent the Mac from spell checking TeX commands or comments. This means in particular that the feature only works if syntax coloring is turned on.
Note that cocoAspell uses more sophisticated methods and operates at the optimal moment when the system is actually checking spelling, rather than at an earlier syntax coloring moment. So if you use cocoAspell, you will want to turn all the "Spell Checking" preferences off.
Because TeXShop doesn't act at the "spell checking moment", there are some minor glitches with our method. When a document is first opened, there can be a slight delay and then all TeX commands will be marked as misspelled. But a single click in the edit window will fix this problem. Similarly, while source is being typed, some commands may be marked as misspelled, but the mark will be removed when RETURN is pressed.
Unfortunately, the new attribute is totally ignored by the "Check Spelling" search item, so it will not help when you go through the manuscript word by word looking for misspellings.
Aside: Each time I release TeXShop, I fear a complaint will arrive that the editor has become sluggish. Last summer I got exactly that complaint from an author writing a new physics textbook. He told me that it was painful to add new source text for his book, and sent the source to me. I typed a phrase, and then looked in horror as the letters I typed appeared on the screen at a rate of one every second.
This author's book was written in Hebrew, and he told me that most technical books in Israel are in English and don't run into this problem. At first I didn't understand the significance of this clue. Then by accident I changed the font used in TeXShop's source editor and completely fixed the problem. It turned out that the author was using a source font which did not contain Hebrew, but the Macintosh was intelligent and switched to a different font every time he typed in Hebrew. Since each LaTeX command was in English and the actual text was in Hebrew, the entire document had hundreds of thousands of font switches. Selecting a source font which contained both English and Hebrew fixed the problem.
Confession: TeXShop's editor must remain crisp and rapid, but for years there has been a dirty little secret within that editor. You might think that nothing much happens when you type a few letters into the editor, but you would be wrong. Every time new letters are about to enter the source, the NSText Cocoa class warns TeXShop and allows it to modify the letters. Then just after the letters appear, the class notifies TeXShop that it has added them. At these two moments, TeXShop is able to perform other tasks, and it is quite greedy using this power.
First, TeXShop must add syntax colors to the new characters. This cannot be done in isolation, since there is no way of knowing if the user is adding to a comment or finishing a TeX command. So TeXShop syntax colors the entire visible text on the screen after each new letter.
But in addition, the new letters may form a new tag. So every time a new letter is typed, the entire tags menu is reconstructed, which means that the entire source file must be read! As you'll guess, there are optimizations to speed this up. First, tag lines start with a backslash or comment sign. So TeXShop looks at the first character of each line and discards lines that could not contain tags. Second, the menu is constructed in "chunks". TeXShop studies 1000 characters at a time, and then pauses for .02 of a second to allow other things to happen. One of these other possible things is a new typed character, and in that case the menu construction starts from scratch.
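To make the first optimization concrete, here is a toy version in Python of the "look only at lines that could contain a tag" idea. This is only a sketch, not TeXShop's Objective-C, and the list of tag markers below is purely illustrative.

```python
def collect_tags(source):
    """Return (line number, text) pairs for lines that might define a tag.
    Lines whose first non-blank character is neither a backslash nor a
    comment sign are discarded immediately, without further inspection."""
    markers = ("\\section", "\\subsection", "\\chapter", "%:")  # illustrative only
    tags = []
    for number, line in enumerate(source.splitlines(), start=1):
        stripped = line.lstrip()
        if not stripped.startswith(("\\", "%")):   # cheap first-character test
            continue
        if stripped.startswith(markers):
            tags.append((number, stripped))
    return tags
```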
Sims added an entirely new process to the list, which had to scan every word of the entire source to make his second popup menu. He hadn't optimized his code, so every time a new letter was added, TeXShop would have to read the entire source file a second time. So clearly I needed to optimize the label code, and I looked carefully at the tag code that I hadn't read for years. One interesting piece of code was added ten years ago by someone else just before the optimization to test the first letter of each line. That code read
if (1)
    return;
Said another way, it turned the optimization off!
Next I read Apple's documentation about PopupButtons, and discovered that such a button can notify the calling program when it is pressed, before it displays the menu. This suggested that TeXShop could postpone constructing the Tags and Labels popup menus until the user wants to use them, entirely removing those scary calls when entering text into the source. Experiments show that both menus are constructed rapidly. Thus in TeXShop 4.18 there is a new Labels tool from Neil Sims, and both the Tags and Labels popup menus are constructed on the fly when needed. Text entry should be much more efficient.
Some New Hidden Preferences, just in case: One lesson from the upgrades this summer is that things can go wrong and it is useful to protect users from new ideas. Therefore version 4.18 contains several hidden preference settings to switch back to old code if the new code causes problems.
• The first series of hidden preferences protects the Tag and Label code. If something goes wrong with the process of constructing the Tags menu on the fly, users can switch back to the old method via
defaults write TeXShop UseNewTagsAndLabels NO
But in this case, the Labels menu is constructed in the old unoptimized manner. It can be turned off by
defaults write TeXShop CreateLabelList NO
If the old Tag code is inefficient, it can be turned off (although the user might as well switch to the new method and just never use Tags!)
defaults write TeXShop CreateTagList NO
• The second series of hidden preferences is related to an earlier experiment with spelling and TeX command parameters. Originally I thought it might be better to turn off spell checking the first two parameters for all TeX commands, and then have a list of special commands whose parameters should be checked. This is the opposite of the current logic. It is possible to test this idea, although I suspect nobody will do it:
defaults write TeXShop ExceptionListExcludesParameters YES
In this case there is a very small list of exceptions, mainly containing \emph and \verbatim, but users can again add exceptions of their own using
defaults write TeXShop ExtraCommandsNotToCheckParameter -array-add "\\emph"
defaults write TeXShop ExtraCommandsNotToCheckParameter -array
Remarks on cocoAspell: cocoAspell is an alternate spell checker by Anton Leuski. Leuski's project allows users to replace Apple's own dictionaries with new dictionaries that can be made "LaTeX aware", while still using all the Apple spelling facilities. Leuski made the project open source recently; see https://github.com/leuski/cocoAspell.
I think of the project as the "gold standard" for TeX-aware spell checking. It has some minor problems. It can be difficult to install, and the latest commit to the open source project was August 5, 2017. Moreover, users must then use the dictionaries supplied with cocoAspell, rather than dictionaries by Apple or others. But it is my recommended approach.
### TeXShop Changes 4.17
There are just two changes in 4.17:
• Version 4.16 had an updated OgreKit Find Panel. Unfortunately, we only briefly tested this panel, and it caused several problems when 4.16 was released.
The new panel did not support Yosemite, making TeXShop crash on Yosemite. The new panel refused to run on isolated machines running Sierra, although it worked on most machines. The panel had other minor bugs which were reproducible but with easy workarounds.
Then a serious, reproducible, and dangerous bug was discovered. If the "Find All" button was pressed in OgreKit, it worked as expected, but afterward the editor appeared to accept no new edits. Although the user could type, no new material appeared in the source. But if the source window was closed and then reopened, these edits suddenly appeared. Thus a user could close a document in one state, and open it in a different state. This is not acceptable.
Version 4.17 reverts to the old OgreKit. This has minor font size problems and problems in Dark Mode, but no serious bugs.
• Version 4.17 has only one other change. In earlier versions of TeXShop, synctex from source to preview colored rather large sections of preview text. If a user in Dark Mode selected orange rather than yellow, a small section of orange appeared in a larger yellow selection. Now synctex colors a smaller section for greater accuracy, and with only one color.
### TeXShop Changes 4.16
This version has several simple changes:
• The OgreKit Find Panel has been modified by the author, Isao Sonobe, to use the latest Apple technologies. These changes include slight fixes to fully support Dark Mode. The changes are greatly appreciated.
• The Korean localization has been brought up to date by Kangsu Kim. Many thanks.
• The Matrix Panel has been improved slightly to support Dark Mode, although the table deliberately has a white background in both modes.
• Following a request of Kasper Steensgaard, syntax coloring for \footnote, \footcite, and \autocite is modified to also color the inserted text. For example, the entire source phrase "\footnote{This is well known}" is syntax colored. Steensgaard works in the field of law where footnotes are common, and this change makes editing them easier.
The color of these footnotes is initially the same as the color of other LaTeX commands, but this color is now editable in the Themes panel of TeXShop Preferences.
I have learned that some users object to even slight editor changes, so there is a hidden preference to turn this feature off:
defaults write TeXShop SyntaxColorFootnote NO
• The latest version of the Sparkle update code, adopted in TeXShop 4.14, supports delta updates. These updates load much faster because they only contain code that has changed since a previous version. The update to 4.16 will contain delta updates 4.14 --> 4.16 and 4.15 --> 4.16. Thus if you update from a version of TeXShop earlier than 4.14, you will download the complete program rather than the delta update.
Future updates will contain a delta update from the previous version only, so this is an incentive to keep the program up to date.
### TeXShop Changes 4.15
In TeXShop 4.14, a file named TeXShop.scriptSuite and a program named ScriptRunner were inadvertently omitted from the TeXShop Application Bundle. This broke several Applescript macros. The missing files are again present in TeXShop 4.15.
### TeXShop Changes 4.14
Version 4.14 is mainly about two small fixes:
• In macOS Lion, Apple added a "bounce" when the text in Text Edit scrolled to the top or bottom of the screen. Some users found this bounce excessive, and we added two hidden Preference Items to control it. The first,
defaults write TeXShop SourceScrollElasticity NO
was supposed to turn this bounce off, but in succeeding versions of macOS had less and less effect. The bounce also caused line number scrolling to break near the top and bottom of the text, and we added an extra fix for this problem:
defaults write TeXShop FixLineNumberScroll YES
But future versions of macOS fixed this "Line Number Scroll Bug", so our fix wasn't necessary and instead caused harm by dramatically increasing the bounce effect.
In version 4.14 we have given up on bounces and disabled both of these hidden preference items. The result is a mild bounce similar to the behavior of scrolling in other text programs. For many of us, the change improves the behavior of scrolling near the top and bottom of the source in TeXShop.
• In Dark Mode on Mojave, many symbols in the LaTeX Panel were barely visible. This is fixed.
• The Sparkle Update Framework in TeXShop has been updated to the latest version. Sparkle updates are protected by a public key encryption system. Until this update, that public key was DSA, but Sparkle has switched to EdDSA, a system based on elliptic curves. This version of TeXShop contains both public keys so updates from older versions of TeXShop still work. Once you have TeXShop 4.14, further Sparkle updates will use the EdDSA key. For some time to come, TeXShop will contain both keys to protect users who are slow to update.
• Latexmk has been updated to version 4.61.
• The German translation of buttons in the log window is fixed.
• The Sage engine documentation in TeXShop/Engines/Inactive has been updated for the current version of Sage.
• For a number of years, TeXShop has been signed using my Apple Developer ID. This protects users who download the program from the internet and have the default Apple security system enabled. The first time they run TeXShop, a dialog appears saying "TeXShop is an app downloaded from the internet. Are you sure you want to open it?" If we didn't sign the program, this dialog would instead report that it was from an unknown developer and should be thrown into the trash.
Starting next year, Apple will require two additional steps from developers. First, they will require that programs be compiled with a "Hardened Runtime." This is a system in which programs indicate that they intend to use facilities which could compromise security: camera, location services, address book, photo library, execution of JIT-compiled code, etc. Version 4.14 was compiled with the hardened runtime turned on, but did not have to turn on any of these exceptions. Note that a Hardened Runtime is NOT a sandboxed application. Sandboxing, which is required for applications in the App Store, could seriously affect TeXShop's interaction with the command line programs in TeX Live, so I have never even investigated sandboxing the program or adding it to the store.
The second additional step is to send the program to Apple before signing so they can "machine check" it for viruses and other security flaws. At the 2018 Developer Conference, Apple strongly emphasized that this was not a code review or interface validation, but just an additional check for security problems. The check takes from five to ten minutes and requires a hardened runtime in advance.
The two steps are optional this year, but become mandatory next year. TeXShop 4.14 passed both steps. There is a way for users to detect that the code has been submitted to Apple: in the dialog that appears when TeXShop is first opened, the text should end with the phrase "Apple checked it for malicious software and none was detected."
### TeXShop Changes 4.13
Version 4.13 fixes two small bugs in the Themes coloring system, and makes two other minor changes.
• The Apple Color Picker has many ways to select a color: by mousing in a color wheel, by using RGB and CMYK sliders, by selecting crayons, and by directly entering color values in a box. This final method caused the Picker to close in 4.** versions of TeXShop. This is fixed.
• In TeXShop Preferences under the Editor tab, in the second column, there is an item called "Flash Back for Isolated Parens". When selected, this item causes TeXShop to flash a slightly pink background color on the screen when a bracket or parenthesis with no matching symbol is typed, and then return to the original background color. On Mojave, the screen always returned to a white background, even in dark mode. Moreover, there was no way to change the "pink" color in the Themes editor. This is fixed. Now the command works in both Lite and Dark modes, and the "Flash" color for the Editor can be changed in the Themes tab.
• Applescript macros in TeXShop can run in two ways. If the macro begins with the phrase "-- applescript", a separate small program, ScriptRunner, embedded in TeXShop runs the macro. If the macro begins with the phrase "-- applescript direct", TeXShop itself runs the macro. Herbert Schulz pointed out that ScriptRunner code has not been modified in several years and still contains both 32 bit and 64 bit code. This is fixed.
• Version 4.08 introduced a new preference "Remaining Lines Paragraph Indent" under the Editing tab. By default, this value was set to 30, which caused TeXShop to format paragraphs of source code by indenting all lines after the first line. I received more mail about this than any other change in the 4.** series, and I learned an important lesson: "When a new feature is introduced which will change the appearance of the source code, the default value should make no change!" In version 4.13, the default value of this item is 0. Users who installed earlier versions and have been living with an 'undesirable feature' will need to change the default manually.
### TeXShop Changes 4.12
• Many TeXShop macros stopped working on Mojave. These macros use AppleScript and Apple Events to communicate with other programs. Apple has sandboxed AppleEvents in Mojave for security reasons. Now before such interaction is allowed, a dialog appears explaining what is about to happen, and giving the user the opportunity to allow or forbid the interaction. This dialog contains the line "TeXShop uses Apple Events to process AppleTalk scripts in the Macro Editor". This line is defined in a new element in the Info.plist file, which was absent in earlier versions of TeXShop, is present in version 4.12, and is required before sandboxed AppleEvents can be sent.
• Two users have pointed out that the preference item "Flash Back for Isolated Parens", in the second column under the Editor tab of TeXShop Preferences, breaks Dark Mode. Users of Dark Mode should turn this item off.
### TeXShop Changes 4.11
Versions 4.08, 4.09, 4.10, 4.11 are closely related, all dealing with Mojave issues. Read all of these change sections. The main purpose of 4.11 is to fix two Dark Mode problems on Mojave.
Users continue to complain that they cannot magnify source text with a keystroke. This is explained below, but to repeat, users must "Select All" first. So type
command-A command-= command-= etc.
Users also report that all but the first lines of paragraphs are indented. This is also explained below, but to repeat: To remove this feature, open TeXShop Preferences, select the Editor tab, and in the lower right corner change "Remaining Lines Paragraph Indent" from 30 to 0.
• Previously, {, }, and $ were syntax colored, but [, ], and & were not. Starting in 4.10, square brackets received the same syntax coloring. In 4.11, & also receives this coloring.
• When users push the "Set" button to change source fonts, a Sample Text window appears showing three sample lines of text. Font, Typeface, and Size changes in the font panel are applied to this sample text until the user presses "OK" in the window. After that, the changes appear in TeXShop source windows. If the user presses "OK" these changes become permanent. If the user presses "Cancel", these choices revert to their original values. However, in Dark Mode the text color in the Sample Window changed to white, but the background color remained white and the text became invisible. This is fixed, and reasonable values are selected for both Lite and Dark modes.
• The menu command "Open Macro Editor" opens a window showing an outline view of existing macros and an editing region where these macros can be changed and new macros can be entered. But in Dark Mode, the outline view had white text on a white background and became invisible. This is fixed, and reasonable values are selected for both Lite and Dark modes.

### TeXShop Changes 4.10

• When a blank new document was opened in 4.08 and 4.09, the text was colored black regardless of the chosen Theme. This is fixed.
• Previously, {, }, and $ were syntax colored, but [, ] were not. Now all of these symbols receive the same syntax coloring.
• TeXShop gives a Command Color to symbols beginning with \ and continuing with 'a' - 'z' or 'A' - 'Z'. These are the typical commands used by LaTeX authors.
LaTeX macro authors also use '@' in commands. A special hidden preference setting adds those to characters receiving a Command Color:
defaults write TeXShop MakeatletterEnabled YES
LaTeX3 programmers use '_', ':' and '@' in their commands, so a command begins with \ and continues with 'a' - 'z', 'A' - 'Z', '_', ':', or '@'. A special hidden preference setting adds all of these to characters receiving a Command Color. This preference alone is enough; the previous setting is then irrelevant.
defaults write TeXShop expl3SyntaxColoring YES
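For readers who want to see concretely what these settings change, here is a small illustration of the three command-matching rules written as regular expressions. This is not TeXShop's actual implementation, only a sketch of the character classes involved.

```python
import re

default_cmd  = re.compile(r"\\[A-Za-z]+")      # default: letters only
makeatletter = re.compile(r"\\[A-Za-z@]+")     # with MakeatletterEnabled
expl3_cmd    = re.compile(r"\\[A-Za-z_:@]+")   # with expl3SyntaxColoring

line = r"\cs_new:Npn \my@helper { \alpha }"
print(default_cmd.findall(line))   # ['\\cs', '\\my', '\\alpha']
print(expl3_cmd.findall(line))     # ['\\cs_new:Npn', '\\my@helper', '\\alpha']
```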
### TeXShop Changes 4.09
This version fixes a bug in the Theme Preference code of TeXShop 4.08. Apple's color picker has several modes, including options to choose colors using CMYK values or gray scale sliders. In version 4.08, TeXShop obtained colors from color wells, and asked these colors for their RGB values without first converting colors in other color spaces to RGB. Fixed.
In TeXShop 4.08 and 4.09, a slight change in the editor requires that users "Select All" before changing fonts or font sizes.
### TeXShop Changes 4.08
This version of TeXShop works on Yosemite and above, but has been compiled on Mojave. The main purpose of the release is to fix TeXShop bugs when running on Mojave, and to support Dark Mode there. Here are the key details:
• When previous versions of TeXShop ran on Mojave, several tools in the Source and Preview toolbars were missing. These items could be restored using several tricks, including opening the "Customize Toolbar" dialog. But they would again be missing the next time TeXShop ran.
This bug is fixed in version 4.08. But users who ran an earlier TeXShop on Mojave will have to take one of two actions to restore their tools. The safest is to open a project which has both a source window and a preview window. With the source window active, select the Windows menu item "Customize Toolbar..." and drag the custom set of tools to the toolbar. Repeat this operation with the preview window active. Then with the source window active, select the Windows menu item "Use One Window." Both source and preview will appear in a single window. With this window active, select the Windows menu "Customize Toolbar..." and drag the custom tools to the single-window toolbar.
Another more drastic way to fix the problem is to make sure TeXShop is not running and throw away ~/Library/Preferences/TeXShop.plist. Then run TeXShop. Tools will reappear. Reset any preference item you may have changed.
• When line numbers were showing on the Source Window in Mojave, the source could scroll by about half an inch in the horizontal direction. Scrolling to the left made the beginnings of line vanish under the line numbers column. Scrolling to the right made half an inch of the source vanish off the right side. This turned out to be a Mojave bug, which Apple fixed in the fifth developer beta.
• Programs must be recompiled on Mojave before they support Dark Mode on that system. When TeXShop was recompiled, the magnifying glass broke, the fix for a "flash after typesetting" broke, and two other features broke. All depended on drawing into an invisible overlay view above the Preview Window. This drawing code has been revised to work on Mojave, and the revised code also works on earlier systems.
• On Mojave, the "General" preference pane for Apple's System Preferences has the ability to switch between "Light" and "Dark" appearances of the interface. In Dark mode, the toolbars of windows have a dark background, Preference and Print panels have a dark background, and so forth. But Dark Mode does not change the content regions of program displays. So in initial Mojave betas, the TeXShop editor still had black text on a white background, and the TeXShop Preview window still had the standard appearance of typeset output.
Some Apple programs on Mojave change these content regions in Dark Mode and others do not. For instance, Apple's TextEdit shows black text on a white background, but the editor in XCode switches to white text on a dark background. Apple's Preview program continues to show pdf files with their standard appearance, including black text on a white background. This is not surprising since the alternative would be to reach into the pdf file and switch colors on the fly, a more or less hopeless task.
So the question is, what should TeXShop do in Dark Mode? Note that TeXShop has had the ability for many years to change text color and background color in the Editor, the Console, and the Log file. TeX pdf output contains black text on a transparent background, so the underlying paper color shines through when printed. Thus the color of the Preview window can be changed by changing the background color of that window, an ability that has been in TeXShop for some time.
In this version of TeXShop, we allow users to design their own "Dark Mode" for content regions. By default, the editor switches to white text on a black background in Dark Mode, and the Preview window receives a darker glow in that mode. But users can decide to keep the original black on white appearance of these content regions, or design their own color theme.
To make this work, the Preference Panel's color choices have been completely rewritten. There is now a tab called "Themes" devoted to coloring various components of the program. All of the color commands have been moved to this tab. These new color commands work on all systems supported by the program, not just on Mojave. In previous versions of TeXShop, many colors could only be changed using various obscure hidden Preference settings. Now all color choices are available in the Themes tab.
• The Themes portion of Preferences is shown above. On the right are all colors currently set by TeXShop. Some items have an obvious meaning and others are obscure. A full set of such choices is called a "Theme". TeXShop allows users to create as many themes as they like. These themes are listed in three pulldown menus on the left: Lite Mode Theme, Dark Mode Theme, Theme to Edit. The first menu sets the theme used on all systems below Mojave, and the theme used in Lite Mode on Mojave. The second menu sets the theme used in Dark Mode on Mojave. The final menu sets the theme which Preferences is currently editing.
TeXShop is shipped with several themes, including "LiteTheme" and "DarkTheme". These are the default themes for Lite Mode and Dark Mode. As explained later, there is a way for users to rename or remove Themes known to TeXShop. But TeXShop will always replace "LiteTheme" and "DarkTheme" and use them if other required themes are missing.
Gary Gray contributed two themes, GLG-Lite for Lite mode and GLG-Dark for Dark mode. Gray then tweaked GLG-Dark, and ended up with a dark theme that was so much better than mine that I ended up using it as the default and thus renaming it DarkTheme. So Gray lost credit, but gained users. Thanks.
Two other themes, SolarizedLite and SolarizedDark, appeared first on the internet before Mojave was introduced. The general page by Ethan Schoonover about this design is https://ethanschoonover.com/solarized/. Specific lite and dark designs were then created in 2012 by "johannesjh": https://github.com/altercation/solarized/issues/167.
A final theme, which I call Manteuffel, was created in 2016 by Christian Manteuffel based on the design of iA Writer. See http://christian.manteuffel.info/blog/ia-writer-inspired-theme-for-texshop/
There is no distinction between themes for Lite Mode and themes for Dark Mode. Thus both Lite Mode Theme and Dark Mode Theme could be set to LiteTheme if the user always wants dark text on a white background.
After editing a theme, push "Cancel" or "OK" to end a preference session. If "Cancel" is pressed, the edited colors will not be saved and the Lite Mode and Dark Mode themes will return to choices before opening the Preference Pane. If "OK" is pressed, the edited colors will be saved and Lite Mode and Dark Mode themes will change to their new values.
But some users may want to edit several different themes during a session. When these users are finished editing their first theme, they should press "Save Edited Theme." This will save the changes for that theme permanently, even if the entire session is ended using the "Cancel" button. Repeat the process for other themes.
• To create a new theme, first change "Theme to Edit" to obtain reasonable starting colors for your new theme. Then push "Create New Theme" and fill in the resulting dialog with a title for this theme. Do not use spaces in this title. The new theme will become the "Theme to Edit" and you can begin changing colors.
• You may have set color preferences for TeXShop in previous versions of the program. These color preferences still exist, but they are no longer used by the program. To create a theme using these old preference settings, push "New Theme from Prefs". You'll be asked to name the theme; please do not use spaces in this name.
Some people on the internet developed color themes for TeXShop and made them available as shell scripts which reset various TeXShop color setting preferences. These shell scripts still work, but they no longer affect the appearance of TeXShop. After running such a script, you can use "New Theme from Prefs" to convert the "preference color scheme" to a regular Theme.
• Recall that various TeXShop items which users can customize are set in ~/Library/TeXShop where Library is the Library folder in your home directory. This folder is often hidden in the Finder, but TeXShop has a menu item "Open ~/Library/TeXShop" to take you there. This folder has various subfolders. For example, one of the folders is named Templates. This folder contains the templates that appear in the Templates toolbar item. Each is an ordinary TeX source file. Adding new files to this Templates folder automatically creates new templates.
There is a new folder in ~/Library/TeXShop named "Themes". This folder contains very small ".plist" files describing the various Themes in TeXShop. If you create a theme you like, give it to others by putting its plist file on the Internet. To install a new theme of this kind, just drop its plist form in the Themes folder.
You can also remove Themes you no longer use by removing their plist files from the Themes folder. Avoid removing themes being used for Lite Mode or Dark Mode (although TeXShop should react gracefully when it runs into this situation). As explained earlier, the themes LiteTheme and DarkTheme will be recreated if they are removed.
• When a theme is selected for editing, TeXShop colors will temporarily be reset to those colors. Revising colors is then interactive; as soon as colors change in Preferences, they will also change in TeXShop's Source and Preview windows.
• Most colors at the top of the Preferences dialog are self explanatory. The colors "Invisible Chars, Enclosed Chars, Braces" are used for some features introduced by Yusuke Terada; see the menu item "Show Invisible Characters" and the item "Parens Targets & Highlight Color" in the Source Tab of Preferences, and the items "Show Invisible Characters" and "Parens Matching Settings" in the Editor Tab of Preferences. The items "Image Copy Foreground, Background" refer to features set in the Copy Tab of Preferences.
• Finally, notice that the transparency of the Source, Preview, and Console windows can be set. These settings bring up a full Color Well, but the colors of these items are ignored and only the alpha values of the choices matter. Here "alpha = 1" is the usual value, and smaller values of alpha make the window more transparent.
There are additional features of TeXShop 4.08 that are not related to Mojave:
• The first of these features comes from a bug report by Geoff Pointer. In TeXShop, double clicking on one of {, }, [, ], (, ), or <, > finds the matching symbol and highlights everything in the source between these symbols. Pointer complained that this procedure ignored comments and escaped symbols, so double clicking } might well select a matching { in code that had been commented out, or a match of the form \{.
These problems are fixed in version 4.08. When selecting a matching symbol, comments and escaped symbols are ignored. And by the way, TeXShop understands that \% does not begin a comment.
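As an illustration of the behavior just described, a toy matcher might look like the sketch below. This is not the code used in TeXShop, only a Python illustration of "skip comments and escaped symbols while matching".

```python
def find_matching_brace(text, open_index):
    """Find the index of the '}' matching the '{' at open_index, ignoring
    braces inside comments and skipping escaped symbols such as \\{ \\} \\%."""
    depth = 0
    in_comment = False
    i = open_index
    while i < len(text):
        ch = text[i]
        if in_comment:
            if ch == "\n":
                in_comment = False
        elif ch == "\\":              # escaped character: skip whatever follows
            i += 1
        elif ch == "%":               # an unescaped % starts a comment
            in_comment = True
        elif ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return i
        i += 1
    return None                       # unbalanced

src = "\\emph{a \\% sign % stray } inside a comment\nreal close}"
print(find_matching_brace(src, src.index("{")))   # index of the final '}'
```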
TeXShop has another series of methods to deal with such brackets, added to the program by Yusuke Terada. These methods provide immediate feedback as the user is typing. One item flashes the matching bracket as soon as a bracket is typed; another temporarily highlights the region between matching brackets. One item momentarily flashes the screen if an unmatched bracket is typed. Some users depend on these features, while others find them distracting, so each feature can be turned on or off by preference settings at the top of the right column under the TeXShop Preferences "Editor" tab.
The bug reported by Geoff Pointer also applies to these second methods, and has not been fixed there. Because these methods are applied in real time during typing, and because they are used in small regions where the user is actively working, efficiency of code seemed more important than global accuracy. At a later time, this decision may be revisited.
• The second of these features was requested by Brian Levine. If text is selected in the TeXShop editor and the selection is longer than two characters, then pressing (, {, [, or $ will enclose the selection in the appropriate brackets. This new behavior can be turned off in TeXShop Preferences by unchecking the item "Editor Can Add Brackets" under the Editor Tab.
• The third feature was requested by Stephen Moye. The print dialog now contains an item to set paper size. Moye works with the AMS using a printer with trays for various paper sizes. Previously he had to select the paper size using "Page Setup" before dealing with the Print Dialog and printing. Now only one dialog is involved instead of two. When Moye initially requested this feature, I told him that printing is controlled entirely by the underlying Cocoa system, so it would be impossible to fulfill his request. This proved to be not entirely true. Hence the new feature.
I'd like to use this occasion for a short aside. This aside may read like a rant about printers, but in fact its purpose is to explain why application programmers shouldn't have to deal with features of particular printers. For years I've used a $1000 Color Laserprinter weighing 60 pounds. Recently a gear broke on the printer. It would be easy to fix it except that I couldn't figure out how to get the printer down my stairs and into the car. So I decided to buy a new printer and discovered that Laserprinters now cost $400. The store I visited delivers to the doorstep. But they absolutely, positively refused to deliver it up my stairs, or remove my old printer. I asked the service representative what printer he'd recommend. He recommended a $79 HP. This seemed to me like a sort of "bait and switch in reverse," but I had to print, so I bought the $79 machine.
It prints faster than my old printer. The ink doesn't smudge. It has built-in internet and was immediately recognized by all my devices. It calls home when it runs out of ink and new ink is delivered to my door, but so far it ran out of ink only once. It scans. It's light and was easy to carry up stairs. Apparently I was years and years out of date regarding printers, and I have to apologize to all my friends who asked for advice on buying one. (Remember, however, that I printed often when I was teaching, and I print rarely after retiring.)
What has this got to do with TeXShop? Well, TeXShop has essentially no code for handling printers. All of the messy details are handled automatically by Cocoa, Apple, and the printer manufacturers. Imagine what life would be like if programmers had to be involved in that chain. How many printers could we support even if we wanted to?
There are three main interaction points between users and printers. First, printers have their own preference module in Apple's System Preferences where the default page size can be set. This makes sense for most printers, whose paper trays can be configured to hold paper of different sizes, but only one size at a time. Second, the paper size of printers can be changed in "Page Setup", a menu item in TeXShop and most Cocoa programs. And finally, the print dialog handles all sorts of choices, like saving to pdf rather than printing, or many other things.
What is the point of Page Setup? Why is paper size set there? Because many programs use that knowledge to reset the behavior of the program. Should my editor for personal letters be formatted for letter paper? Or a4 paper? Aha, Page Setup to the rescue.
However, in TeX, paper size is set by commands in the TeX source or by the configuration of the entire TeX Distribution. It would make no sense for TeXShop to reach into these sources and change them when Page Setup indicates a new paper size. So the truth is that TeXShop doesn't do anything when the user changes Page Setup. That menu is useless, particularly now that paper size is in the Print Dialog. But I'm keeping it, because otherwise I'd have to answer email questions of the form "where is page setup?"
• In recent versions of TeXShop, the syntax coloring code is turned off while the source file loads. Therefore, files aren't syntax colored until the user begins moving the mouse. It is possible that this code was added to fix bugs if syntax coloring is started too soon, but experiments suggest that the bug no longer exists. So in version 4.08, files are syntax colored as soon as they are opened. In case of trouble, it is possible to return to the old behavior using a hidden preference:
defaults write TeXShop ColorImmediately NO
• Gary Gray requested that TeXShop start paragraphs flush with the left margin, but indent remaining paragraph lines. TeXShop 4.08 has this feature. Some users are in the habit of inserting line feeds when their source lines approach the right margin; they will not notice any difference. Other users type several lines of source text between line feeds. The resulting "paragraphs" will now be visible for easier scanning.
This feature is controlled by two new preference settings, available under the Edit Tab. The first sets the indent of the initial paragraph line. By default this is set to 0.0. The second sets the indent of the remaining paragraph lines. By default this is set to 30.0.
The item to set the length of tabs has been grouped together with the two above preference settings. Moreover, one more setting, previously hidden, is available. This setting changes the interline spacing between lines of the source. In particular, users can double space the source text if they desire by changing this value.
The tab length is an integer, and roughly measures the number of letters between tab settings. Thus small values of this setting are reasonable. The entry works even if the edit font is not monospaced. But the remaining entries for First Line Paragraph Indent, Remaining Lines Paragraph Indent, and Interline Spacing are floating point numbers measured in points in user coordinates. Only limited ranges of these preference settings are allowed, and the Preference dialog will replace unreasonably large or small values by more reasonable maximum and minimum values.
• TeXShop has a preference to select the desired dictionary used by the program. Thus the system-wide dictionary can be a standard Apple dictionary, while TeXShop can be configured to use a cocoAspell dictionary which does not count LaTeX commands as misspelled. During the course of preparing TeXShop 4.08, we discovered that this Preference item was disconnected in most localizations. This is fixed. If the setting seemed to affect nothing earlier, please try again.
• Items in the Templates pull-down menu in the toolbar used to be listed sorted alphabetically. Later, this menu was extended to allow sub-menus, and the sorting feature was lost. It is restored in 4.08.
• Another request from Stephen Moye is to add a preference item forcing TeXShop to place the source window in front of the preview window when opening files.
(There is already a preference which causes TeXShop to activate the source window after each typesetting job.) A hidden preference item has been created to do this:
defaults write TeXShop OpenWithSourceInFront YES
This item's behavior is somewhat inconsistent, but users can try it if they wish.
• Many programs on the Mac access the internet. Apple recently required that programs use the https protocol rather than http for this access, due to the added security of https. But programs can opt out of that requirement. TeXShop directly accesses the internet in only two places (although it can use iCloud indirectly via Cocoa): it uses Sparkle for program updates, and it downloads two small movies if the user doesn't have them and asks to see them in TeXShop Help. Because faculty web pages at the University of Oregon were served with http, TeXShop opted out. But the University of Oregon recently switched to https for faculty pages, so Sparkle and movie downloads have been switched to https and TeXShop no longer opts out of this security requirement.
• Latexmk has been updated to version 4.60.
• The "About" panel has a line giving a range of copyright dates. The range ended in 2017 because I failed to notice that that line was localized. Now it correctly ends in 2018.
• Scrolling in the editor window has a "bounce" near the top. We added a hidden preference setting to remove that bounce:
defaults write TeXShop SourceScrollElasticity NO
This preference setting still exists, but it is no longer active because we now always set SourceScrollElasticity to NO. Sadly, this is having less and less effect.
• Herbert Schulz revised the "File Encoding.pdf" file in the TeXShop Help menu.
• The Help document "Comment Lines and Hidden Preferences" was revised to remove misprints pointed out by Herbert Schulz. Unfortunately, the document hasn't yet been extended with new information.
• When TeXShop toolbars showed Text, or both Icons and Text, the Text was broken in many localizations. That was because I did not realize that Xcode could set the encoding of the localization files ToolbarItems.strings. The encodings of these files are now all set to UTF-8 Unicode, and Text in the Toolbars finally looks reasonable.
• The author of the Spanish localization, Juan Luis Verona, pointed out an important consequence of changing the default encoding in TeXShop to UTF-8. Characters with accents and umlauts can be encoded in Unicode either as single characters, or as combinations of characters. For instance, ü can be encoded as U+00FC or as U+0075 and U+0308. When a LaTeX or pdfLaTeX file is encoded in UTF-8, the typesetting engine calls \usepackage[utf8]{inputenc} to interpret the input file. But this package does not understand combination characters. For an explanation of why these characters are hard to read, see https://tex.stackexchange.com/questions/94418/os-x-umlauts-in-utf8-nfd-yield-package-inputenc-error-unicode-char-u8̈-not/94420#94420.
Luckily, source characters with accents and umlauts typed by the user are encoded as single characters in TeXShop. But if a user copies text from a pdf and pastes it into the source, combination characters are used. These look fine in TeXShop, but typeset incorrectly because of the inputenc problem discussed above. Incidentally, this problem does not occur when using XeLaTeX or LuaLaTeX.
This problem appeared much earlier in Japan, and Yusuke Terada added code to fix the problem. This code is turned on by an item in TeXShop Preferences under the Misc tab.
The item used to read "During File Save (for Japan), Automatic UTF-8-Mac to UTF-8 Conversion". In version 4.08 of TeXShop, the words "for Japan" have been removed from this item, but it is still off by default. Users who run into the problem should turn it on. A little caution is required here; for instance, the item caused trouble for users writing in Hebrew (which is why we added the words "for Japan").
• In 2005 Michael Witten, then at M.I.T., added a "Wrap Lines" menu item to TeXShop. This menu offered to wrap lines "never", or "by word", or "by character". Witten added a hidden preference to set the default setting, and this preference is now made public in the Editor tab of TeXShop Preferences. Most users are likely to stick with the default setting, "by word". I've added the setting because I wanted to write a little essay about line feeds.
In TeX, two line feeds produce a new paragraph; but TeX ignores single line feeds in almost all cases. Exceptions include comments, where adding a line feed in the middle adds the last half of the comment to the active text, and displayed formulas, which often break when line feeds occur in the middle. But otherwise, line feeds are irrelevant. Thus a TeX paragraph can be written as one long line, or as several sentences, or as several lines broken in the middle.
The style users adopt can depend on their background. Writers like to write paragraphs unbroken by line feeds. Programmers, however, tend to add line feeds after each sentence because when they are writing programs rather than TeX, these line feeds show the logical structure of the text. As an extreme example, in Apple's programming language Swift, individual statements need not end with a semicolon if they end with a line feed, so semicolons are only needed when stringing several statements together on a single line.
There are several advantages to writing TeX source as a series of lines, rather than as full paragraphs. Errors in TeX are indicated by line, so they can be found more rapidly when the source is a series of lines. SyncTeX also works by line and can produce more accurate syncs when lines are used. Of course some programming languages ignore line feeds, and make it possible to write programs as long multi-command paragraphs, but such paragraphs are virtually impossible to read and programmers avoid them religiously. Since programs are in practice a series of fairly short lines, programmers have many useful utilities built on the premise that they will deal with files containing short lines. One example is "diff", which can compare two files and clearly list the differences. This utility works well on TeX files written as a series of lines, but becomes more or less useless if the paragraph style is adopted. When programmers moonlight as editors of journal articles and the like, they can become frustrated when their favorite tools no longer apply.
All of this is to suggest to new users that it could be handy to adopt the style of adding line feeds to keep individual lines short. But the advantages are relatively minor and seasoned users have more important things to worry about.
Why is this issue related to TeXShop? The first key point to understand is that TeXShop never adds line feeds to a source file behind the user's back. Any line feed in a source file is present because the user pushed RETURN. But what should happen if the user is typing and reaches the right side of the window? By default, TeXShop adds a "soft line feed" so additional characters appear on the next line.
A "soft line feed" is a line feed that affects the appearance of the text, but is not added to the source file. There are several indications that such line feeds are soft. Resize the window, and notice that the text is reformatted and line feeds appear at different places. But the source doesn't change in any way. This is actually an advantage, because users can resize windows on the fly, and because when a source is moved to a new larger screen, the full window is used rather than ending up with blank space on the right. In addition, such soft wraps are indicated in the line number column on the left of the window. The first line of a paragraph receives a line number, but if there are additional lines created by soft wraps rather than line feeds in the source, these lines have no line number because they are part of the line started above. Some programmers, however, intensely dislike soft wraps because they destroy the logical appearance of the source which the programmer has carefully created. These programmers prefer no wrapping by the editor. When the user reaches the right boundary of the text, the editor should begin horizontal scrolling so additional characters are shown on the same line. The disadvantage is that users must scroll the text horizontally to read everything (or make the window wider if the screen has room). The advantage is that the logical structure is visible. Programmers who work as editors of TeX articles may prefer no wrapping by the editor for another reason: it encourages authors to add those hard RETURN line feeds to the text and thus create source which is a series of fairly short lines. Thus the "Wrap Lines: Never" preference could be thought of as training wheels for the user. That's my little essay. Adopt the editor behavior which makes you most comfortable. Even if you stick with "Wrap Lines: by Word", you might like to get in the habit of adding more hard line feeds to the source. Final question: why would anyone ever want to "Wrap Lines: by Character"? I have no idea. It is one of the options Apple provides, so it is an option Michael Witten provided, and therefore it is in Preferences. Final observation: adding this Preference gave me a chance to look closely at Michael Witten's code from so long ago. He did not pick an easy programming task. Witten had to deal with the editor, and the scroll bar, and the layout manager, and "paragraph attributes", and lots of other things. In the end, I'm impressed that it all worked.

### TeXShop Changes 4.02 - 4.07

Versions 4.02 - 4.04 of TeXShop were never released. Version 4.05, the original Mojave Beta, had a number of problems. Versions 4.06 - 4.07 were never released.

### TeXShop Changes 4.01

Daniel Nowacki discovered that in some circumstances, most items in the File menu could be disabled in Single Window Mode. These included Show Console, Show Log File, Close, Save, Print, Print Source, Convert Tiff, Abort Typesetting, and Trash AUX Files. The problem is fixed. Other items in this menu are deliberately disabled in Single Window mode, like Duplicate, Rename, Move To, Revert To, and Page Setup. It is easy to work around these. But Daniel's expanded list was a real nuisance.

### TeXShop Changes 4.00

There are three changes in TeXShop 4.00:
• Until this year, an ordinary LaTeX source file with Unicode encoding had to include the line \usepackage[utf8]{inputenc} Such an "inputenc" line tells TeX which encoding was used when the input source file was written.
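For instance, a minimal pre-2018 UTF-8 document typically began like this (the body text here is only an illustration):

\documentclass{article}
\usepackage[utf8]{inputenc} % declares that the source file is saved as UTF-8
\begin{document}
Düsseldorf, crème brûlée, São Paulo.
\end{document}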
From 2018 on, the line is not required for UTF-8 input because LaTeX expects UTF-8 Unicode source files by default. Notice that a straight ASCII file is legal UTF-8, so the line above is also not required for ASCII input files. For many years the default TeXShop encoding was IsoLatin9, which contained ASCII code but also non-ASCII code for accents, umlauts, and other characters required in Western Europe. This was the default encoding in LaTeX, so no inputenc line was required. But from now on, if a source file is encoded in IsoLatin9 and contains non-ASCII characters, the line below is required in the header: \usepackage[latin9]{inputenc} By the way, XeTeX, XeLaTeX, LuaTeX, and LuaLaTeX require Unicode source files. The "inputenc" line tells LaTeX how to interpret source code, but it does nothing to guarantee that fonts are used which understand Unicode characters. Users in the United States with European collaborators and users in Western Europe need only deal with accents, umlauts, and the like, and this font problem is handled with one extra line, which usually comes before the inputenc line: \usepackage[T1]{fontenc} Appropriate LaTeX commands for users in other parts of the world are beyond my expertise. These users are likely to find XeLaTeX or LuaLaTeX particularly attractive. To match this LaTeX change, the default file encoding for TeXShop files has been changed from IsoLatin9 to UTF-8 Unicode. This change will not affect current TeXShop users because TeXShop doesn't change Preference settings that users may have already set. It affects new users and it also affects old users who install a copy of TeXShop on a new machine for the first time. TeXShop has special default values for users in Japan, and their defaults have not changed. If you have already switched to UTF-8 or you use an unusual encoding, then you know all about encodings and can stop reading. The rest of this section is for users who ignored encoding issues until now. These users may want to take this opportunity to switch to UTF-8 Unicode. To understand the issues, it is best to tell the story from the beginning. If you examine a CD or DVD with a microscope, you will discover that the disk contains a long stream of whole numbers, each between 0 and 255. These are called bytes, and the ability to encode virtually any kind of information into a stream of bytes defines the current digital age. Thus a byte stream might represent music on a CD, or a movie on a DVD, or a jpg picture, or a computer program, or an encyclopedia. A large fraction of computer files are just text files. From the beginning of the personal computer era, a standard encoding of text known as ASCII has been used. This method encodes all the characters on a standard American typewriter: small letters, capital letters, punctuation marks, numbers, tabs, carriage return. There are fewer than 128 such characters, so ASCII only uses bytes between 0 and 127. The original version of TeX expected ASCII input. ASCII had difficulties in Europe and in regions of the world that used completely different scripts. For instance, scripts in Western Europe use accented vowels, umlauts, upside down question marks, and unusual Scandinavian letters. To solve such problems, the 128 unused byte values in ASCII were used to represent new characters. Different encodings were invented for each country, each with different characters in those 128 spots. One of these encodings, IsoLatin1, contained all of the characters routinely used in Western Europe.
When the European currency was introduced, IsoLatin1 was extended by adding the Euro symbol, to become IsoLatin9, and TeXShop adopted that as default. But many more encodings are available in TeXShop Preferences. Sadly, files encoded in this way do not contain a "magic byte" defining the encoding. So the user has to know which encoding was used to write the file, because the computer has no way to know. But meanwhile, computer manufacturers realized that the increasing globalization of the world required a new approach to text. They formed an independent organization, which created and oversees Unicode, a standard that can represent all of the scripts of the world, such as hieroglyphics, Arabic, Chinese, and mathematics. Virtually all computers have switched to Unicode; it is a central part of macOS. Its NSTextView object uses it, and thus TeXShop is entirely Unicode-based internally. You can easily type English, Cyrillic, Chinese, Hebrew, and Arabic in a single TeXShop document; the Hebrew and Arabic will even be written from right to left. Unicode does not define a single standard method to read and write Unicode to a file. But one very popular method is called UTF-8 Unicode. This method has the distinctive feature that ordinary ASCII output is correct UTF-8 output. The remaining Unicode symbols are encoded using sequences of bytes between 128 and 255. Because there is no standard way to write Unicode to disk, every routine in macOS to read or write text requires an encoding parameter, which describes the type of encoding to be used by the operation. If this parameter is UTF-8, then the contents of the editor will be completely preserved. If the parameter is IsoLatin9, then ASCII and Western European characters will be preserved, but Chinese, Arabic, Cyrillic, etc. characters will be lost. There is one crucial difference between most of the encodings and UTF-8. Since most encodings just define characters for the bytes between 128 and 255, any random stream of bytes is an acceptable file. But UTF-8 files use coded input, so most random streams contain illegal bytes and a computer asked to read such a stream will reject the entire file. If that happens, TeXShop will put up an error dialog, and then open the file in IsoLatin9. If you are a user who ignored encodings up to now, or only used UTF-8 and IsoLatin9, then we can give easy advice about switching permanently to UTF-8. To actually switch, just choose UTF-8 Unicode as encoding in TeXShop Preferences. All new files will open fine and typeset easily with LaTeX, using only the "fontenc" header line mentioned earlier. Moreover, older files will also open fine and typeset easily if they only contain ASCII characters. But older files with accents or umlauts or other Latin9 characters will bring up an error message and then open using the IsoLatin9 encoding. When that happens, add the following to the top of the file: % !TEX encoding = IsoLatin9 and add the following to the header after the \documentclass line: \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} With these lines in place, the file will thereafter automatically open in Latin9 and typeset fine. This advice does not hold if you used other encodings or a mixture of encodings, but in that case you know how to juggle encodings and we have no extra advice to offer.
• Latexmk updated to version 3.55d.
• The change in version 3.98 to set interline spacing and kerning in TeXShop Preferences was hopelessly broken. This was pointed out to me in a phone call from Louis M. Guenin.
The method seemed to work, but any lines added later in the editor reverted to the original style. Consequently, this method has been disabled in 4.00. It is still possible to set interline spacing and kerning for individual files as in 3.98, but Preferences cannot set default values. The Preferences code is still in place and correctly sets font and font size. It also appears to set interline spacing and kerning, but when the user clicks "OK", those settings are ignored.

### TeXShop Changes 3.99

There are two changes in TeXShop 3.99:
• The German localization was updated.
• The original Preference dialog did not fit on the screen when users had an 11 inch or 13 inch portable. In version 3.99 of TeXShop, an additional Editor tab was added to the dialog and the Source items were split between the Source and Editor tabs. This allows the entire dialog to be shortened.
A user in Israel, Michael Gedalin, is writing a book in Hebrew using TeX. His source file often switches between English for TeX commands and Hebrew for the actual text. He complained that opening his source in TeXShop was slow and adding new text to the middle of the source file was very, very slow. Using either English or Hebrew, it was possible to type three words before any letters appeared on the screen. Debugging this problem revealed an interesting cause. If the font used in the TeXShop editor contained both ASCII and Hebrew characters, there was no slowdown. But if the source font did not contain Hebrew characters, the Macintosh was smart enough to switch to a Hebrew font for Hebrew portions of the text. Unfortunately, this switch, repeated over and over in the text, was extremely slow. The lesson is clear. If you are writing in an unusual script, pick an editor source font which contains both ASCII characters and characters for your script. The Font Book, in Applications, lists for each font the scripts supported by that font.

### TeXShop Changes 3.98

There are five changes in TeXShop 3.98:
• Early versions of High Sierra contained a bug which broke updating the Page Number box in the preview window during scrolling. The bug also broke the up and down arrows in the preview toolbar. This High Sierra bug is fixed in High Sierra 10.13.4, currently in beta release for developers. TeXShop 3.91 contains a workaround for the bug. The workaround runs a small routine to update the Page Number box once a second whenever the Preview Window is active, even when the user is not scrolling. In TeXShop 3.98, the workaround only runs on early versions of High Sierra, and the original more efficient TeXShop code runs on High Sierra 10.13.4 and above.
• Latexmk is updated to version 3.44a.
• Version 3.94 of TeXShop contained a fix for the "flash after typesetting" bug in High Sierra when the preview window is using multipage or double multipage modes. However, the fix was also applied in single page or double page modes, where it may have caused problems. In addition, the patch caused problems for some users who worked with an external editor, or turned on the "Automatic Preview Update" TeXShop preference. The patch has been reworked slightly to avoid all of these problems.
• The Font submenu in the Source menu has been enlarged with additional items from Apple allowing users to set interline spacing for the source text, and adjust the kerning and ligatures for this text. Keyboard shortcuts for interline spacing make it easy to adjust this spacing; for instance, double spaced source is possible.
Additional items allow copying and pasting this style information, so once one source window has been adjusted, the adjustments can easily be applied to other source windows. • The TeXShop Preference item to set the font for TeX source has been changed to also set interline spacing, kerning, and ligatures. Thus the style changes introduced above can be made default styles for all future files. This works as follows: clicking the Set button to initiate a font change makes a small sample window drop down to show the effect of Font and Style changes. The Font submenu in the Source menu is active for this small window, allowing Style changes as above. When the OK button for the sample window is pressed, these changes appear in all open TeXShop windows. The main Cancel or OK buttons in the Preference Dialog must still be pressed, either to retreat to previous choices for fonts and styles, or to make the changes permanent. The code to set default font styles has one minor bug I haven't been able to fix. If an empty window is opened, the font is correctly set for the window, but new font styles are not set. For example, suppose a user has requested double spaced source in TeXShop. If this user selects New'' to open an empty window and then selects the LaTeX Template, the new source in the window will be single spaced. It is easy to work around this bug. If the window is saved, closed, and reopened, the styles will take'' and the source will now be double spaced. Or if another window is open with correct spacing, the font style can be copied from this window and pasted into the new window. • Following a suggestion by Emerson Mello, who provides the TeXShop localization for Brazil, an additional item has been added to the TeXShop Help menu. This item lists all "special comment" lines understood by TeXShop, and lists all hidden Preference items which can be set for the program. The special comment list is complete, but in this version of TeXShop, only some hidden preference items are listed. The list will be completed in future TeXShop versions. ### TeXShop Changes 3.97 TeXShop 3.97 has a new preference setting determining whether the source editor is placed on the left or right side in Single Page mode. Because work on MacTeX-2018 is beginning, TeXShop will not be further updated for several months. ### TeXShop Changes 3.96 This version has one important change and other minor bug fixes: • For some time, TeXShop has had the ability to combine the source and preview windows of a document into a single window showing both views, or break such a window back into two windows. This is done with the commands "Use One Window" and "Use Separate Windows" in the Window menu. There is now a preference item in TeXShop Preferences to determine the mode used when a document is first opened. See the bottom of the Preview tab in TeXShop Preferences. If "Use One Window" is selected, both Source and Preview are placed in a single window whenever documents are opened. This also applies to documents opened automatically when the program restarts. The menu items in the Window menu are still present, so after documents are opened, some or all can be split if desired. If a document has not yet been typeset, only the source is available and it will open in a standard window. Later when the document is typeset, the preview will appear in a separate window. These windows can then be combined with "Use One Window" if desired. 
It doesn't really make sense to use tabs when source and preview are combined, so if the source contains a magic line to set tabs, then the source and preview windows will still open in separate windows. The command "Use One Window" in the Window menu does work with tabbed source windows, pulled the source tab out of the original source window but leaving the remaining tabs. Nevertheless, users of tabbed windows should probably ignore Single Window Mode. • The Sparkle update code in TeXShop 3.96 was updated to version 1.18.1. • In High Sierra when previewing in "multipage mode", each typesetting job caused a flash in the Preview window before new material appeared. This problem was fixed in versions 3.94 and 3.95. The fix worked by placing a picture of the old preview pdf over the Preview window just before switching to the new version of this pdf. The flash still occurred, but was hidden by the picture. One second later, the picture was removed, revealing the new pdf. The steps of placing a picture and later removing it were totally invisible to the user. The fix had a small downside: it caused a one second delay after typesetting before the new material appeared. Because of user complaints about the delay, a hidden preference setting was added in version 3.95 allowing users to adjust the delay. defaults write TeXShop FlashDelay 0.25 The value of the delay is measured in seconds and can be anything between 0.0 and 2.0. Tests show that a delay of 0.50 seconds works on almost all machines and is not longer noticeable by many users. In version 3.96, this is the default delay when users first update. But users who already changed the delay in version 3.95 will retain the value they set. If the delay is still noticeable, experiment with setting it smaller. On my machine, a delay of 0.25 seconds works fine and isn't perceptible. • Because of a High Sierra bug, scrolling the Preview window did not update the Page Number box in the Preview toolbar. TeXShop 3.92 contained a workaround for this bug, but the workaround did not apply when the source and preview were contained in a single window. Now it does. • If the source window font or font size changed, and the window was later split, new text added to the bottom portion appeared with the old font and size. This bug was pointed out by J. F. Groote, and is fixed in version 3.96. ### TeXShop Changes 3.95 In High Sierra when previewing in "multipage mode", each typesetting job caused a flash in the Preview window before new material appeared. I found this behavior so disturbing that I didn't update to High Sierra until this week. TeXShop 3.94 completely fixed the problem. This fix was particularly significant because Apple revised the way they render pdf files and reported that the flash could not be repaired at their end. Without a TeXShop fix, we'd be stuck with the flash for years to come. The fix worked by placing a picture of the old preview pdf over the Preview window just before switching to the new version of this pdf. The flash still occurred, but was hidden by the picture. One second later, the picture was removed, revealing the new pdf. The steps of placing a picture and later removing it were totally invisible to the user. The fix had one small downside which I found barely noticeable: it caused a one second delay after typesetting before the new material appeared. This lost second caused several users to complain to me. A few of them used the preview in "single page mode," which does not have the flash bug. 
They complained of losing a second for no reason. Other users told me that they barely noticed the flash, but were annoyed every time they had to wait that additional second. Huh? Didn't notice the flash??!! The only change in TeXShop 3.95 is additional preference settings to mollify these users. The program now has two hidden Preference settings. One turns the fix off. Note that the fix is only applied in High Sierra and above, so this setting only applies to those versions of macOS. To apply the fix, quit TeXShop, open Terminal in /Applications/Utilities, and type the following line: defaults write TeXShop FlashFix NO However, I strongly recommend not applying this fix. Instead, experiment with the second fix, which reduces the delay before removing the picture of the old pdf. To apply the fix, quit TeXShop and type the following in Terminal: defaults write TeXShop FlashDelay 0.25 The value of the delay is measured in seconds and can be anything between 0.0 and 2.0. Other values are constrained to these values. If the delay is too short, the flash may still be visible, but on my High Sierra machine, a Mac Pro, the value 0.25 completely eliminates the flash and yet produces a delay of only 1/4 of a second, which is not noticeable to me. If this value works for most others, it may become the default in future versions of TeXShop. If you still see the flash with this value, try 0.5. If you don't see a flash, but are still annoyed by the delay, try 0.01. If you complain of losing 1/100 of a second from your life every time you typeset, I will sympathize silently. ### TeXShop Changes 3.93 and 3.94 Version 3.93 was never released. The main purpose of release 3.94 is to fix crucial bugs in TeXShop running on High Sierra. Here are the bugs: • After each typesetting job, there is a momentary flash in the preview window before the new version appears. This flash is caused because Mac OS draws the background of the page before the actual page is ready to draw. I tend to typeset after every few sentences, and this bug was so distracting that I avoided switching to High Sierra until now. • Scrolling with a gesture or the mouse in the preview window while in multi-page or double-multi-page modes does not update the Page Number in the preview toolbar. • If a tabbed source or preview window is split, the split bar is much lower than usual and cannot be moved. • In previous versions of Mac OS, the TeXShop Save Panel has two accessories at the bottom, one to select encoding and one to select file type. But in High Sierra, these are missing in expanded mode. Moreover, the main display in this mode is "flaky"; sometimes it covers the panel and sometimes it is restricted to only three or four lines. • The preview window drawer shows a document outline in the top portion, and a search box and search results in the lower portion. The outline only appears when the hyperref package is used; otherwise the area is blank. When the outline exists, the lower portion shows search results in two columns. The first lists the word or phrase being searched, and the second lists the outline section where the word or phrase was found. But in High Sierra, all second column entries are equal. The program gets "stuck" on one, and continues to list it for all entries. After High Sierra was released, I dutifully noted these bugs and reported many of them to Apple developer support. Then I sat back and waited for fixes. The third entry was indeed fixed in High Sierra 10.13.2, but the other entries are still outstanding. 
However, unexpected behavior in a new version of Mac OS can have several causes. In some cases, TeXShop might have "shady code" which happened to work in earlier systems but was never really correct. In other cases, the problem could be an Apple bug. The most interesting situations occur when Apple rewrites code to improve the experience of most users, but that code breaks features of TeXShop and cannot be repaired. All of the bugs above are fixed in TeXShop 3.94. Some of these fixes require new program logic. In these cases, the fix only runs on High Sierra and above, while the old code is still used on earlier systems to avoid problems on these systems. Because these fixes solve almost all High Sierra problems, I intend to move over to that system immediately after TeXShop 3.94 is released. There is one additional feature in 3.94. John Collins updated latexmk to version 4.54c, which fixes a problem with the previous version of latexmk. That version required a recent version of Perl and failed for users with an older version of OS X. The new version should work on all versions of OS X supported by TeXShop 3.94. The rest of this report explains the fixes of the five bugs for those who are interested. Rather than taking them in order, I'll leave the most interesting case until last. The third item was indeed an Apple bug, and was recently fixed. The second item was fixed in TeXShop 3.91. I do not know if the cause was an Apple bug, so the workaround might eventually be removed or improved. I found a workaround for the fifth bug. There are two useful pieces of information which could be placed in the second column of the search list. This column could display some of the surrounding text, or it could list the corresponding outline section. In High Sierra, TeXShop 3.94 shows surrounding text, and therefore avoids the bug. On earlier systems, TeXShop 3.94 shows the corresponding outline section if an outline is present, because those systems don't have this bug. But if no outline is present, TeXShop 3.94 shows surrounding text, rather than leaving the second column blank. This brings us to the fourth bug, which was not an Apple bug at all but instead "shady code" in earlier versions of TeXShop. The Save Panel is mostly handled by Cocoa automatically. But programmers are allowed to provide an "accessory view" which will appear just above the buttons at the bottom, and extend features of the panel. If the programmer does not provide this accessory view, Apple provides one, showing a Popup Menu allowing the user to select the File Format of the saved file, which is essentially its extension. TeXShop wants two Popup Menus in this view, one to choose the encoding of the file, and the other to choose its extension. Most users rightly ignore these popups, but they are useful in special cases. Creating an accessory view, and adding an "Encoding Popup Menu" are straightforward tasks. But Apple has already created the "File Format Popup Menu" and it is just a matter of grabbing their popup and adding it to our accessory view. Earlier versions of TeXShop contain ingenious code to do just that. Another word for "ingenious" is "shady." Unfortunately, the first reaction on rereading that code is "would that even work?" The answer is that it works in Sierra but not in High Sierra. It has been replaced with much more straightforward code. A Google search shows that other programmers faced the same problem and selected the straightforward approach rather than the shady one.
Finally, we come to the "flash after typesetting" bug, which was for me the most important problem. This problem turns out to be caused by a reworking of the Mac OS code to render pdf files. The new Apple code will render large documents with greater speed, but a consequence is an unavoidable flash if a pdf file is opened and immediately repositioned to the center of the document. Let's face it, that is an unusual operation unless you are editing and typesetting a TeX document. Unfortunately, the flash is a problem that Apple cannot solve. However, TeXShop can solve the flash problem. Here's how that works. In Cocoa, "NSView" objects correspond to rectangular portions of the screen; these view objects know how to draw themselves. NSView objects can be layered, and in that case the top layer obscures lower layers unless the top layer is transparent. After typesetting is complete and just before switching to the new version of the pdf file, TeXShop 3.94 takes a snapshot of the screen. It then creates a NSView exactly the size of the old pdf view in the Preview window, and places this view on top of the old view. The new view draws by showing the appropriate portion of the screen snapshot. Then the preview window is loaded with the new pdf view, which draws, flash and all. But we see nothing because the drawing is obscured by the NSView on top. Exactly one second later, the top NSView is removed, showing the new pdf underneath. You might think that adding and removing this View would provide additional flashes, but such view manipulations have been a part of Cocoa since the beginning and the system is optimized to make such manipulations invisible. This method depends strongly on the technique to get a snapshot of the screen. Many such techniques are available, but do not work well. For instance, I first tried a technique which obtained the pdf data to draw a portion of the screen. When this data was redrawn, font weights changed, and the screen became blurred for that one second interval. Google led me to the code now used to get that snapshot, and that open source code works like a charm. See the source for details. There are a couple of possible problems with this fix, which users ought to know about. If you have several monitors, I do not know if the screen snapshot will provide the correct image. My High Sierra machine has only one monitor. I also don't know if one second is enough time to avoid the flash. It is for me, but my machine is quite fast. So in case of problems, please write. ### TeXShop Changes 3.92 • High Sierra has a bug which breaks updating the pageNumber field when scrolling in the Preview Window. TeXShop 3.91 has a workaround for this bug, but the workaround broke the ability to enter a new page number and go there. This is fixed. After going to a new page, click once in the pdf content region to activate page updates during scrolling. • TeXShop has two magic lines to activate tabs. The first, "useTabs", automatically adds "include" files as tabs, while the second, "useTabsWithFiles", gives the author much tighter control over which source files to use as tabs. Tommaso Pecorella wrote asking that the first command also automatically add "input" files as tabs. The reason I didn't do this at first is that the syntax for "input" allows situations that aren't really appropriate for tabs. For example, a source that is "input" can itself "input" other files. Some authors break the source into hundreds of pieces, inputting these pieces as necessary. 
So whether the request is appropriate or not depends on the writing style of the user. In TeXShop 3.92, the "useTabs" command will also create tabs for "input" files, but only if the user activates this feature with a hidden preference: defaults write TeXShop TabsAlsoForInputFiles YES ### TeXShop Changes 3.90 and 3.91 Version 3.90 was never released. Version 3.91 has the following changes: • In version 3.89, the Changes document was added to the TeXShop Help Menu, but it only listed changes in the latest version. From now on, the full document listing all changes in series 3 will be in the Help Menu. The latest changes are at the top. • In Version 3.89, the user can double click on the words "begin" or "end" in a begin-end pair (while holding down the option key) to select the matching termination word and all words in between. I'm not a fan of intricate string programming, and it took me many days to get this feature working, so I expected applause. Instead I got bug reports about edge cases I knowingly ignored. The first such message, from Jean-Pierre Olivier, contained the following example. Guess what TeXShop did with a double click on the initial "begin." \begin{enumerate} \item la liste \begin{enumerate} \item a \item b \end{enumerate} \item another \end{enumerate} I told Jean-Pierre that I was aware of the special case and didn't intend to fix it. Then I went to bed, but my conscience kept me awake. TeXShop 3.91 fixes this problem. Thus "\begin{key}" will match the appropriate "\end{key}" even when the selection between them contains one or several "\begin-\end" pairs with the same "key". TeXShop 3.91 also fixes a second special case: it now ignores comments, so "begin" will never match an "end" that has been commented out. • Updated latexmk to version 4.54. Thanks to John Collins and Herbert Schulz. • TeXShop depends on PDFKit for the preview display, and Apple has been rewriting PDFKit for the last three years and introducing bugs in the process. High Sierra was described by Apple as a release concentrating on polish and speed rather than on new features. So there were high hopes that High Sierra would fix the bugs. And indeed, it did, but it compensated by adding new bugs. The bug most often reported to me is that the page number box on the preview toolbar does not update when scrolling the pdf display with a gesture or mouse. This bug only affects multi-page and double-multi-page modes, and does not occur when the keyboard arrows or the spacebar are used to scroll. TeXShop 3.91 has a workaround for this bug. The workaround code is only activated when running High Sierra; otherwise the original code runs. When Apple fixes the bug, the workaround will be removed. The workaround is applied automatically, so you may wish to skip the remaining text in this item. Indeed, there are no other version 3.91 changes, so the rest of the text about 3.91 can be skipped. What caused this bug? TeXShop is written with Apple's Cocoa application framework. In this style of programming, duties are shared between the programmer and the API. In particular, one object, named "PDFView" in the API, displays and scrolls pdf documents automatically without programmer intervention. Handling scrolling this way is important since there are mice, and smart mice, and portable track pads, and stand alone track pads, and gestures, and so forth. If the duty of responding to them fell to the programmer, who doesn't even own many of these devices, nothing would work. 
TeXShop interacts with the PDFView object by sending it "messages." For instance, one message says to switch from single page mode to continuous page mode. But what if the PDFView has to interact with TeXShop? Apple programmers don't know the messages understood by TeXShop, so they cannot send messages to TeXShop. The answer is that Cocoa objects can post "Notifications". These notifications are fixed by the API; a programmer cannot extend them. One crucial notification posted by PDFView is called "PDFViewPageChanged". These notifications go to a "Notification Center" in Cocoa. Programs can ask this center to be called whenever a particular kind of notification arrives. TeXShop asks the notification center to be notified when PDFViewPageChanged is sent, and it then updates the Page TextField. The primary cause of the bug is that High Sierra does not send the notification when scrolled by a trackpad or mouse. However, there is a further problem. Listed below is the code which TeXShop runs when it gets a PDFViewPageChanged notification. aPage = [self currentPage]; theNumber = [[self document] indexForPage: aPage]; [self.pageNumber setIntegerValue:theNumber]; [self.pageNumber display]; This is objective C code, but almost self explanatory. TeXShop asks PDFView for the current Page, a data structure that has lots of information about the visible page. It asks this data structure for the page number of the page. It sets this page number as the number to be displayed by the page box. Then it redisplays this box. Unfortunately, the first line also fails. When you scroll in High Sierra, either by trackpad or by mouse and scrollbar, Cocoa does not update the “currentPage” variable. So after scrolling, this variable still has the old value. Hence the page number would not change even if the notification were sent. To work around the bug, I first had to find a replacement for [self currentPage]. Ultimately I found two replacements; one worked a little better than the other, but both worked in High Sierra. The "method that works a little better" is used by the "click once" action described below, but it does not work for the "timer" action below, which must use the second method. After working around [self currentPage], I had to find a replacement for the notification PDFViewPageChanged. Sadly, I never found a notification which worked. Several replacements are described in the documentation, but PDFView "behaves like an NSView in many cases" but "is not a formal subclass of NSView" and none of these notifications were actually posted by PDFView. Since the notification route fails, the workaround has to use a different approach. TeXShop 3.91 contains two approaches. The first requires user action to update the pageNumber box: click anywhere in the active pdf area. Thus users should scroll with a tracking action or mouse, and then click once to update the box. However, this procedure is turned off when you start TeXShop 3.91 because a second procedure is used instead. Cocoa allows programs to contain NSTimer objects which fire periodically. When running on High Sierra, the PDF display object sets up a timer to update the pageNumber box once a second. This allows automatic updating during scrolling. Since the timer has to do extra work to discover the current page, care has been taken to only do the work when necessary. The timer only updates if the corresponding PDF window is the front window. So it does no work when users are editing source. 
If several projects are open, at most one pdf timer is actually updating. I still have some fear that the timer will make TeXShop less responsive. This could only happen on High Sierra, and then only until Apple fixes the bug. So I've provided a way for users to turn the timer off. Quit TeXShop and then issue the following command in Terminal: defaults write TeXShop ContinuousHighSierraFix NO Then restart TeXShop. No timers will be created, so the pageNumber box will not update during scrolling. Click the mouse once in the pdf content area after scrolling to update the pageNumber. • There are no other new features in TeXShop 3.91. But perhaps a list of remaining High Sierra bugs will be useful. They are cosmetic bugs rather than bugs changing the operation of TeXShop. • When TeXShop typesets and then displays the revised pdf, there is a momentary gray flash before the new page appears. This only occurs on High Sierra and has been reported to Apple. • If a window which has been converted to a tab is split, the bar separating the split portions cannot be moved, and other things go haywire. This problem is fixed in the latest developer betas, so it should be fixed in the next High Sierra minor release. • I continue to get reports of fuzzy displays on some monitors. This problem has been with us through several iterations of OS X. It was extensively discussed in the changes document for version 3.55, where several TeXShop additions to deal with the problem are summarized in one spot. Fuzzy displays have never been a problem on machines with Retina displays. This includes almost all of Apple's current machines. It includes the 5K LG Display, made possible by Thunderbolt 3. Older displays also work fine; I have the original Thunderbolt display which was not a Retina display and yet shows very clear text. I am no longer able to obtain fuzzy output on any of my equipment. I suspect that Apple has stopped work on this problem because it will disappear as people upgrade their equipment. Let's recall an analogous situation. When color was first introduced on the Mac, it was 8-bit color which could only display 256 colors. This was not enough for high quality photographs, but the engineers had a work-around. Graphic display hardware contained a color table chip which could be programmed in real time. Thus the particular 256 colors available could be adjusted, depending on the requirements in the front-most window. So Apple introduced very elaborate color management software. A program could request, say, 40 colors that it "absolutely, positively had to have", and then 25 colors that it needed only approximately, listing how much variation was permissible for these colors. Apple also reserved a small number of colors for the system. The rules for this color management software seemed to change from system to system. One critic said "I dislike the Macintosh because when I request a color, I can never be sure of the color I'll get back.'' And then memory prices went down, and 8-bit color became 32-bit color and the color management software vanished. If you lived through those days, as I did, you probably feel that all the time spent on color management was time wasted. I recently discovered that new hires in the mathematics department who know much more than I do about computers have never heard of programmable color tables. With apologies to those dealing with the problem, I think the fuzzy display problem is a repeat of the 8-bit color situation. 
The problem has gone away for the majority of users, it will slowly go away for the rest of users, and it no longer makes sense to expect Apple engineers to deal with it. Sorry. ### TeXShop Changes 3.89 Version 3.89 has the following changes: • When new versions of TeXShop are released, two documents are created explaining features of the release. The first, titled "About This Release", is available as the first item in the TeXShop Help Menu, and describes those features that cannot be delivered automatically. For example, if new macros are available, they cannot be provided to users because users may have edited their macros. "About This Release" explains these features and how to obtain them. For most releases, there are no such items. The second document, titled "Changes", describes all new features in the release. In version 3.89 and future versions, it will be the second item in the TeXShop Help Menu. This document is essential reading because new features are often not visible in the interface, so the changes document is the only way to discover that they are available. The "Changes" document has always been available, but not in handy places. It is the information shown when a Sparkle "Check for Updates" command announces an update, it is available on the TeXShop web page just below the download link, and it is available in the TeXShop Help Panel under the heading "What's new". All these versions will still exist. We hope many more users will find the document in the Help menu. • If the user double clicks on {, }, [, ], or (, ), the corresponding brace will be found and the text between these delimiters will be highlighted. This feature has existed for years and is an essential debugging tool. Occasionally, users just want to select a brace, say {, rather than selecting all this text. Many users may not know that they can do this by holding down the option key while double clicking. Now they know. In version 3.89, this feature is extended to begin-end pairs, like \begin{theorem} There are infinitely many primes. \end{theorem} To select all text in the begin-end pair, hold down the option key and double click anywhere in the word "begin" or the word "end". In the above example, the computer will automatically find "{theorem}" and match with the corresponding begin-end pair for "{theorem}". Notice that there are differences in the selection of {, } pairs and the selection of begin-end pairs. For brace pairs, you must double click on a brace, while for begin-end pairs you must double click on an easier-to-hit word. Moreover, the option key is not needed for brace pairs, but is needed for begin-end pairs. Why? Most users are familiar with the following Mac convention: clicking on a spot pulls the cursor to that spot, double clicking on a spot selects a word there, triple clicking on a spot selects an entire sentence or paragraph there. It would be confusing if this behavior failed for the words "begin" and "end", but worked for all other words. To avoid this confusion, users must press the option key if they want to match pairs. I'd like to thank Claudio Beccari, who called my attention to the importance of begin-end pairs and asked for this feature in TeXShop. His request was so reasonable that I dropped everything else and implemented it. In turn, Beccari called my attention to an article in the latest Tugboat (Journal of the TeX User Group, volume 38, number 2), on debugging TeX files by Barbara Beeton. Beeton is the resident TeX expert at the American Mathematical Society. 
Her suggestions are given modestly, but ignoring them is the mark of a fool. There remains the $ and $$ problem; the trouble with these delimiters is that there is no difference between the opening and closing symbol, making similar code difficult to program. Recall that $-$ is equivalent to \begin{math}-\end{math} or to \(-\), and $$-$$ is equivalent to \begin{displaymath}-\end{displaymath} or to \[-\]. Perhaps $-$ and $$-$$ will be handled in the future, but begin-end selection already handles one equivalent of $-$ and one of $$-$$. One possible useful habit to develop might be to use $-$ for very short expressions like $\alpha$, but use \begin{math}-\end{math} for all longer expressions. In that case, I recommend inventing a keyboard shortcut to enter this pair and the equivalent display pair.
• One of the most important features of modern TeX distributions is SyncTeX, a creation of Jérôme Laurens. This software modifies TeX engines to output "synctex" files containing the information needed to sync between spots in the source and corresponding spots in the pdf output. Laurens also includes "synctex_parser", a C source file for front end developers allowing them to easily obtain information from the "synctex file." In 2017, Laurens substantially rewrote both engine synctex support and the synctex_parser. His new parser has been in the last several iterations of TeXShop. I strongly recommend that users updating TeXShop also update TeX Live, probably via MacTeX, so these two pieces of software will match. About two months ago, a couple of ConTeXt users complained to me that synctex doesn't work in the latest beta versions of ConTeXt. They also told me that it continues to work in other front ends. Further investigation showed that those front ends had not updated their copy of the synctex_parser, and then showed that the author of ConTeXt, Hans Hagen, wrote his own synctex code for ConTeXt, based unfortunately on the 2016 version of synctex. TeXShop 3.89 now contains both the new 2017 version of the synctex_parser, and the old 2016 version of this parser. This was not easy because Laurens used many of the same function names in both versions and the linker complained; eventually I had to change 2016 names by hand. A new magic line has been introduced for ConTeXt users: % !TEX useOldSyncParser This will work for all TeX users, but is only recommended for ConTeXt users. The magic line is read when a file is first opened, so the first time this line is added, the file should immediately be closed and then reopened to make it active.
• For the last several releases, a special version of TeXShop was provided for users running High Sierra. The Sierra version of the code ran on High Sierra, but one feature was missing. When High Sierra was released, Apple also released XCode 9, making it possible to compile one copy of TeXShop which runs completely on all recent systems, and in particular on both Sierra and High Sierra.
• An earlier version of TeXShop introduced the magic line % !TEX parameter = which sends a second piece of information to engines. Most engines just ignore this information, so it does no harm. But using the magic line, one or more flags can be passed to engines without rewriting the engines. Herbert Schulz rewrote many of the latexmk engines to use this magic line. For instance, it can be used to add a --shell-escape flag when pdflatex needs to call an external program during typesetting.
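As a concrete sketch of that last point (this assumes Schulz's pdflatexmk engine has been activated, and that the flag is simply placed after the equals sign as with TeXShop's other magic lines), the top of such a source file might read:

% !TEX TS-program = pdflatexmk
% !TEX parameter = --shell-escape
\documentclass{article}
\begin{document}
% a package that calls an external program during typesetting needs the flag above
Shell escape has been requested for this run.
\end{document}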
The old engine files still work, but Schulz recommends that users visit ~/Library/TeXShop/Engines/Inactive/latexmk and replace any active latexmk engines with their new versions. New documentation in this folder gives more details.
• An ".xml" file is an Extensible Markup Language file. For reasons that will be explained in a moment, TeXShop can now create xml files, and such files are marked as typesettable files, so typesetting engines can be called when one is active. XML files look a lot like HTML files. They consist of tag pairs like <titlepage> ..... </titlepage> One difference is that each opening tag must have a corresponding closing tag; in html this requirement is often not enforced and <p> may not be followed by </p>. Comment tags are written as follows <!-- this is a comment, which contains many characters -->
• TeXShop's new \begin-end selection code also works for xml tags. As with begin-end, the option key must be pressed and then the word defining the tag must be double clicked. For instance with the tag <titlepage> the double click should be on the word "titlepage" and not on the inequality signs at the beginning or end. For comments, the double click should be on the "--" symbol. In xml, the beginning tag may contain other elements, but the pairwise selection is a double click on the first word, not other symbols in the tag. Thus when facing the following tag, double click on "frontmatter." <frontmatter xml:id="index"> In rare cases, the comment tag's start contains only one dash. See the second line of the following example. TeXShop cannot select such a comment. <!-- Various third-party add-ons need some sort of token --> <!- Using an element here serves two purposes -->
• And now the point of all this. From time to time, I like to introduce TeXShop users to TeX-related developments that are not strictly about TeX. The development du jour is PreTeXt, with the motto "Write Once, Read Anywhere." It is the work of Bob Beezer from the University of Puget Sound, and it is supported by a crystal-clear series of web pages, http://mathbook.pugetsound.edu/index.html. (MathBook is the original name of this project, which was renamed in June, 2017.) The goal of the project is to write a document just once, but then output the document in pdf for a book, or in HTML for the web, or in EPUB for pad-based work. Documents for the web or EPUB can be interactive. To make this possible, the document text is written in a special xml-based markup language, but the mathematical content is still in TeX. I first heard of this project at a TUG conference in Portland, Oregon, and what caught my eye was an abstract algebra textbook written in TeX by my PhD student Tom Judson that had been converted into an interactive book by rewriting in PreTeXt. (But to be honest, the thing that really caught my attention was learning that Judson and Beezer bicycled the main part of the Tour de France route in France after the official race.) Then, as happens, I gradually lost contact with the project. A month ago, I was talking to a University of Oregon faculty member, Dev Sinha, and he asked me what I knew of xml. I told him not much, and he then enthusiastically described course notes he was writing using PreTeXt. It took me a couple of days to realize that this was the Beezer project I knew from Portland. TeXShop 3.89 comes with an engine file to typeset PreTeXt documents and open the pdf output in the preview window; a second engine file typesets the same document to HTML and opens the output in Safari.
These engine files are in ~Library/TeXShop/Engines/Inactive/PreTeXt. This folder contains additional documentation explaining what to download from the PreTeXt web page and where to put that material to make typesetting work. Finally, the document recommends downloading the PreTeXt source for a large sample article by Beezer, and typesetting that document as an example of the possibilities of PreTeXt. After that, you should open a new empty page in TeXShop, and use the Beezer sample to write your own PreTeXt document. When you first typeset this document, you'll be asked to save it as usual in TeXShop. In the resulting save dialog there is a pulldown menu to select the file type of the saved file. Select ".xml" rather than ".tex". I hope you'll want to learn more. Go to the PreTeXt site at http://mathbook.pugetsound.edu/index.html. This site has exciting material. Proceed. ### TeXShop Changes 3.88 Version 3.88 has the following changes: • In version 3.86, .bbl files were added to the list of files automatically removed by "Trash AUX Files". Two users complained, giving reasons. So these files are no longer automatically removed. Notice that there is a hidden preference to add file types to those removed, so users who want to remove.bbl files can still do so. • The syntax parser had a bug which could crash TeXShop. This bug was discovered by Yusuke Terada, who provided a fix. • Masson Thierry suggested three new features, and all are in version 3.88. He suggested adding \frametitle to entries added automatically to the tags menu. This should be helpful when using the Beamer slides package. • Masson suggested a new "magic line": % !TEX pdfSinglePage When this line is added to the top of a source file, the resulting pdf preview will show single pages, even if the default is to show a single scrollable document. This feature is aimed at Beamer authors, who want slides to display one slide at a time, but other documents to scroll. • Several years ago, Ramon Figueroa-Centeno provided beautiful macros to set the magic lines which determine the typesetting program, the encoding, and the program root. Immediately below these macros, the macro menu now has a menu listing all other possible magic lines as submenus. Selecting such a submenu adds the corresponding magic line at the current position of the cursor in the source editor. Thus users need no longer remember the syntax of these magic lines. See the "About This Release" document in the Help menu for instructions on how to obtain these new macros. ### TeXShop Changes 3.87 The bug fix for Bibtex allowing citation keys with spaces turns out to be a bad idea. Bibtex documentation states that citation keys cannot have spaces, and the fix broke other user's Bibtex interaction. The fix has been removed. There are no other changes. ### TeXShop Changes 3.86 TeXShop 3.86 fixes several minor issues reported by users since the release of 3.85. Most of these issues have been present for a long time. • The Chinese localization had overlapping text in Preferences; this is fixed. • Antti Knowles found two bugs and sent the code to fix them. When synctex is used to sync from the Preview window to the Source window, it colors the matching text yellow. After that in earlier versions of TeXShop, if a selection was made using only the keyboard, the selection would still be in yellow. The selection color would change to standard selection color only after a click of the mouse. This is fixed. • Knowles second fix concerns the autocomplete feature of BibTeX. 
If a LaTeX label contains a space, the autocomplete feature would show the full label in the list, but selecting this label would only include the label up to the first space. The fix for this is a little iffy. If users of TeXShop and BibTeX run into problems, please write me immediately.
• Tristan Hubsch pointed out that "hyperref tooltips" used with tables of contents and elsewhere could run off the page to the left or right. In that case, they were cut off. This annoying glitch is fixed (unless the page is so narrow that the tooltip could never fit on it).
• Added ".engine" and ".sh" (shell script) as file types that TeXShop can write.
• At the request of Simon Robinson, added ".bbl" and ".synctex(busy)" as file types which are automatically removed by the Remove AUX File commands.
• The remaining items are all motivated by email sent by Bob Kerstetter. He reported that http://tidbits.com/article/17351 had an article about the language Markdown, listing editors used on the Macintosh to create these sources, and TeXShop was in that list. Markdown is a very simple markup language invented by John Gruber whose files can be easily converted to html, pdf, latex, and other languages. Many conversion programs are available free on the internet, including a program called "pandoc". In the ~/Library/TeXShop/Engines/Inactive folder, there is a subfolder containing pandoc engines. But I discovered that the information about pandoc was out of date. The pandoc site now contains an open source install package for OS X, making it very easy to install pandoc. So I removed the existing engines, and placed a document called Pandoc.pdf in the pandoc folder, with links to the Gruber article and the pandoc site. Note that the pandoc site contains a large number of possible conversions, and details about how they work.
• I also received email from Alan Munn, who tried to create stationery for Markdown files (.md files) and failed. This caused me to revise the Stationery feature of TeXShop slightly. Originally, users could create two kinds of files and place them in ~/Library/TeXShop/Stationery. First, they could create a piece of stationery, with extension ".tex". Then they could create a comment file with the same name and extension ".comment" describing the stationery. After that, the TeXShop Stationery menu showed available stationery, with descriptions of each possibility. It turns out that the extension assigned to stationery was irrelevant. So in TeXShop 3.86, stationery files can have any extension except ".comment", or no extension at all. The extension is actually never used. Stationery is treated just like blank windows in TeXShop, except that stationery pages are marked as "dirty." If you try to close one, or typeset one, or whatever, a dialog will appear asking you to name the file and save it to a location of your choosing. This dialog contains a pull-down menu of file types which TeXShop can write, and that menu is how users actually choose the file type. Markdown stationery can be saved with type ".md" in this way, and stationery for any other file type can be handled the same way.
• The folder ~/Library/TeXShop/Engines/Inactive/pandoc contains two new engines. The first, Md2pdf.engine, converts a Markdown source file to a pdf file and opens the pdf file in TeXShop. The second, Md2HTML.engine, converts a Markdown source file to an HTML file and opens the HTML file in Safari.
Users should note that many other conversion engines for Markdown are available on the internet, and in most cases it is very easy to write engine files which call these conversion engines. • A few people use TeXShop as a general editor. I'm one of them, but I sort of thought I was alone. If you use TeXShop to edit other things than .tex files, the syntax coloring feature of TeXShop can be annoying. TeXShop 3.86 has a new menu item which turns syntax coloring on or off. This applies to the source window at the top of the stack. Users can have several source windows, some using syntax coloring and some not. The old "Syntax Color" item in TeXShop Preferences is still there, but it now selects the default choice for syntax coloring when a new document is opened. Changing this Preference does not affect syntax coloring in documents already open. It would, of course, be wonderful if someone would write general syntax coloring code for TeXShop, so users could choose one scheme for Markdown, one for HTML, one for C code, etc. I don't intend to do that, but I'd gratefully accept the code from someone else. ### TeXShop Changes 3.85 TeXShop 3.82 introduced "useTabs", an easy way to add tabs to projects with a root file and chapter files. TeXShop 3.84 added "useTabsWithFiles", a second method of adding tabs requiring a little more work for a lot more flexibility. Unhappily, the code for this second method broke the first method. Grrr. TeXShop 3.85 again activates both methods. In High Sierra, tabs can be given special short names in place of the names of the files they represent. As the number of tabs increases, this becomes more and more useful. The second method of adding tabs has always supported these shorter names. A similar technique is provided in TeXShop 3.85 for the first method. The magic line containing "useTabs" can be followed by an optional list of short names as in the example below: % !TEX useTabs (one, two, , short name, five) This additional parameter must be on the same line as "useTabs", but notice that single lines can wrap in the editor without adding a line feed. The short names are listed inside a pair of round brackets, and are separated by commas. White space at the beginning and end of a short name will be ignored, but a short name can contain more than one word, as in the above example. If the space between two commas is blank, the original name will be used for that file. If the list has fewer names than the number of tabs, original names will be used for the remaining tabs. If the list is longer than the number of tabs, names at the end will be ignored. Version 3.85 runs on the original list of supported systems, including High Sierra. Tabs require Sierra and higher, and short names require High Sierra and higher. Short names can be input on Sierra, but they will be ignored on that system. TeXShop 3.85 was compiled by XCode 8.3.3 running on Sierra. It runs fine on High Sierra, but the "short tab names" feature doesn't work there because XCode doesn't have API's for High Sierra. I tried compiling TeXShop on High Sierra using the beta copy of XCode provided for that system. The code worked fine in High Sierra and short tab names worked. But unfortunately, the resulting code had minor problems running on Sierra. The High Sierra version is available at the TeXShop web site at http://pages.uoregon.edu/koch/texshop/texshop.html. The TeXShop 3.85 source code has one line commented out which must be activated to get short tab names on High Sierra. 
If you want to compile yourself on High Sierra, search the source file TSDocument.m for "High Sierra" and uncomment the following line of code windowToTab.tab.title = self.includeFileShortNames[i]; ### TeXShop Changes 3.84 When version 3.82 of TeXShop was released, I said that it would be the final version of TeXShop until late fall. But bugs were discovered, so version 3.83 was released. These versions of TeXShop created only half of the promised support for tabs, and I found that I couldn't stop in the middle. Version 3.84 completes tab support, and should finally be the last release until late fall. Note that tabs require Sierra or higher because Apple first added tab support in that version of macOS. Tabs are not appropriate for all TeX projects. They work best on books and large articles with from five to fifteen chapters or divisions, each introduced with an \include command. Some authors prefer to divide their project into many more pieces, perhaps one file per section, and then associating a tab with each file would produce unmanageably many tabs. TeXShop has two mechanisms to enhance Sierra tab support. The first is very simple. Within the top 20 lines of the root file, add the line % !TEX useTabs When this command is given, TeXShop itself searches for \include files to associate with tabs; the mechanism should cover perhaps 70 percent of cases. The second mechanism gives the user considerably more control over the tabs. Within the top 20 lines of the root file, add the line % !TEX useTabsWithFiles Below that, within the top 50 lines of the root file, add a line for each tab % !TEX tabbedFile{chapter1/Introduction.tex} (One) In this command, a path to the file shown in the tab is given in curly brackets. In the example, the path starts from the folder containing the project root file, but see more details below. Notice that the file extension must be included. That is because the second mechanism allows pdf, tiff, jpg, log, aux, and other files as tabs. Authors sometimes give source files long descriptive names, which makes the tab titles very long. The final piece of the above line in round brackets is optional, and gives a shorter tab name. The optional short name will only be recognized in High Sierra, because it requires additional Apple API first made available there. Feel free to use the term in Sierra; it will cause no harm there, but will be ignored. Finally, we list some technical details. The first mechanism searches for \include lines after \begin{document} in the body of the root file. It is common to list files without extensions, and in that case TeXShop adds the extension ".tex" when creating the tab. In the second mechanism, however, TeXShop will not change the extension given by the user, or add a missing extension, because tab files can have unusual types so the extensions provide crucial information. Both methods create at most 20 tabs and ignore lines which might create more of them. The "useTabs" mechanism only works if the root file has at most 20,000 characters, to avoid very long searches for \include lines in gigantic root files. If a window with tabs is left open when TeXShop is closed, then the next time TeXShop is opened, macOS opens the window and recreates the tabs. The new tab mechanism recognizes this behavior and lets macOS do the job without itself creating tabs. However, macOS does not understand tabs made from pdf files, graphic files, and a few others, so some of the tabs may be missing. It is easy to get these tabs back. 
Close the document and then reopen it. This forces TeXShop to recreate the tabs, and then all tabs come back. Or open the missing files yourself and drag their windows to missing tabs. (This macOS behavior is not a bug; other features of TeXShop depend on it. We cannot have everything.) Finally, a word about the path information between the curly brackets in the "tabbedFile" magic lines. Three types of path strings are recognized. The first are strings that start in the location of the root file. Examples include {chapter1.tex} and {Chapter1/Introduction.tex}. Longer strings of directories are allowed. When it sees this sort of string, TeXShop prepends the full path to the folder containing the root file. Another possibility is a path starting at your home directory, like {~/Galois/Equations.tex}. Here ~ denotes the home directory, so this file is probably not in the project directory. Finally, TeXShop recognizes full paths like {/Users/koch/Galois/Equations.tex}. If you use still more Unix conventions, they may or may not work. No guarantees. Tests suggest that spaces are allowed in both directory names and file names, but I'm loath to recommend them. There are a few tricky points. The Finder often lists TeX source files without the ".tex" extension, but this extension is just hidden, not absent. It must be written as part of the tab file path. (During testing, I was confused by this point several times). When TeXShop is asked to create a tab, it opens the file exactly as if a user had dragged the file icon to TeXShop and dropped it there. Then the window described in the tab is "tabbed." This creates a few surprising cases that look like bugs but aren't. For example, when TeXShop opens a dvi file, it actually converts the file to pdf using dvips and Ghostscript, and then opens the pdf file. So tabbing a dvi file will give a pdf file as a tab. Here is another surprising case. Suppose that you are working on a project named "Galois.tex" and you earlier created a project named "Abel.tex". When you open Galois.tex, you want Abel.tex as a tab so you can refer to that source file as you write Galois. But if you drop the icon for Galois.tex on TeXShop, both Galois.tex and Galois.pdf will open in separate windows. Similarly dropping the icon for Abel.tex on TeXShop will open Abel.tex and Abel.pdf. After tabbing occurs, you'll have a tabbed window containing Galois.tex and Abel.tex, and you'll have Galois.pdf in a separate window. But you'll also have Abel.pdf in another window. The existence of this extra pdf file looks like a bug, but isn't. This release of TeXShop was compiled by XCode 8.3.3 running on Sierra. It runs fine on High Sierra, but the "short tab names" feature doesn't work there because XCode doesn't have API's for High Sierra. I tried compiling TeXShop on High Sierra using the beta copy of XCode provided for that system. The code worked fine in High Sierra and short tab names worked. But unfortunately, the resulting code had minor problems running on Sierra. No doubt these will be fixed before the release of High Sierra. Consequently, if you are beta testing High Sierra and want to use short tab names, you'll need to search the source file TSDocument.m for "High Sierra" and uncomment the following line of code windowToTab.tab.title = self.includeFileShortNames[i]; Then compile on High Sierra. ### TeXShop Changes 3.83 Murray Eisenberg discovered problems with the new "useTabs" feature and sent me his full source code to debug. This proved extremely useful! 
The problems I foresaw with this feature have not materialized, but Eisenberg's source revealed more elementary and embarrassing bugs, now fixed. The only files which receive tabs are those loaded by \include{myfile} statements after \begin{document} in the root file. Here "myfile" can be a file name, partial path, or full path. Murray's document loaded chapters in a more complicated way, but was easily modified to meet this condition. It would be easy to extend TeXShop so an alternate method could also be used, in which the user lists files to be tabbed using "% !TEX fileForTab = " statements. This technique could assign files to tabs even if they aren't part of the source (for instance, tables of symbols), and could specify which chapters are tabbed for books with enormously many chapters. Write if you want this feature, which however will not appear until fall. It is slightly possible that version 3.82 broke UTF-8 encoding in Japan and other far Eastern countries; the evidence is iffy at the moment. But if that happened, it is fixed in 3.83. ### TeXShop Changes 3.82 After the release of MacTeX-2017 in May, I have been spending time on TeXShop dealing with bugs by other programmers which crashed TeXShop --- and TeXShop bugs which were my fault. Now I want to turn to new summer projects, so this should be the last TeXShop update until late fall. I'll return earlier only if significant new bugs are discovered. This final summer release contains two features, one available only on Sierra and High Sierra, and the other only on High Sierra. We start with the High Sierra feature, which comes automatically to Cocoa applications without any new code by me. • Some time ago, TeXShop was revised to support Apple's Sharing toolbar item. For instance, if the source window is active and you select "Mail" in the item, a mail window opens containing the TeX source as an enclosure. If the preview window is active, this mail window contains the pdf output as an enclosure. Another sharing option is "Airdrop". I think of this as an option for graduate students relaxing in Starbucks. If such a student notices someone interesting drinking coffee, they can use Airdrop to share a selected portion of TeX source code, or a selected region of Preview output. I keep hoping to be invited to a Wedding due to this feature, but not yet. I have never actually used any of the features in the sharing tool. In High Sierra, the sharing tool is also available from a new "Share" menu in the File menu. This menu has an extra item called "Add People." To use it, save a TeX document in iCloud. Then in Add People, send an email message or other sharing notification to a friend offering to share this document. After that, you and your sharing partner can simultaneously edit the document. You can write the first line of a proof and your colleague can immediately add the next sentence. When the document is being shared with someone else, a gray "Share" message is displayed just right of the file title on the edit window header. • The other new feature is available in both Sierra and High Sierra. Recall that TeXShop allows large projects to be organized as a root document and various chapter files. The root contains header items and \include statements just after \begin{document}. These include statements input the source files for various chapters into the document. Chapter files include a header pointing back at the root document % !TEX root = ../MyRoot.tex but the root file has no such header. 
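To make this layout concrete, here is a hedged sketch of such a project. The file and folder names (MyRoot.tex, Chapters/Chapter1.tex) are hypothetical; the essential ingredients are the \include statements in the root and the root header in each chapter. The root file might read:

    \documentclass{book}
    \begin{document}
    \include{Chapters/Chapter1}
    \include{Chapters/Chapter2}
    \end{document}

and a chapter file such as Chapters/Chapter1.tex would then begin with the magic line pointing back at the root:

    % !TEX root = ../MyRoot.tex
    \chapter{Introduction}
    Some introductory text.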
When a chapter file is typeset, this magic line tells TeX to typeset the root and thus the entire document. The magic line also helps sync and "goto error" locate the correct chapter source, including opening it if it is not yet open. In Sierra, users can use the new "tabs" feature to manually move the chapter windows into the root source window as tab entries. But this is messy work which has to be done every time the project is reopened. The new feature automates this procedure. To activate this feature, first turn off two TeXShop preferences under the Misc tab: "Open root file automatically on opening a child" and "Miniaturize the opened root window." Both of these items probably represent bad ideas in the design of TeXShop, so the features might be removed in a later version of TeXShop. Then add a magic line to the top of the root file source: % !TEX useTabs When a project with this line is opened, the various chapter files are opened as tabs in the main window. Thus just two windows appear, the source under various chapter tabs, and the single output pdf file. Sierra already has the ability to recreate tabs in a window if the window is left open when TeXShop quits. But once such a window is closed, the tabs have to be recreated from scratch. The new header creates them automatically. If the source code has the magic line and its window is left open when TeXShop quits, then Sierra is allowed to recreate the tabs itself when the program reopens. The new code will only run if the user quits a document, and then later opens it again. This tab feature is somewhat experimental. It works fine for me now, but a number of tricky edge cases make me a little nervous. If you are going to try it, I suggest that you duplicate your project and work using the duplicate. In case of problems, carefully analyze exactly what you did that caused an error, and then send me a note. If possible, send me full source when a problem occurs. Once the tabs are active, I would expect everything to work without problems. It is only the step that creates the tabs that is slightly worrisome. But not enough to hold back this release. ### TeXShop Changes 3.81 Version 3.81 fixes a small number of bugs in version 3.80: • E. Lazard reported that inserting a single space in the search field in the preview window drawer causes a crash. This bug has existed for a long time, and is fixed. • The "Open for Preview" menu item broke in 3.80. This is fixed. Many users reporting this problem were not aware of the Preference item "Configure for External Editor," which is the more natural way to use an external editor. • In High Sierra, the list of user-defined engines in the popup engine menu was not sorted, although it was sorted in Sierra. This is fixed. • The French localization contains a translation by Rene Fritz of the latest version of Herbert Schulz's TeXShop Tips and Tricks, available in the Help menu. • After typesetting, the page number of the current page in the Preview window was set to 1, even when a later page was displayed. This is fixed. (The actual bug was that the page number was set to the current page of the lower half of the split window, even when that half was invisible.) • TeXShop's selection of the dictionary to be used when checking spelling is improved. This is a very minor matter for most users, but it requires an extensive discussion. The basic TeXShop design is that users may have several projects open on the screen at once. Using toolbar and menu items, each can be independently configured. 
One project may typeset using pdflatex, while a second may use LuaTeX. One source window may have magnified text while a second has regular text in a different font. The purpose of the TeXShop Preference Pane is to set default values for projects when they are first opened. Changing a default value usually does not affect files already open. On the Macintosh, the "Spelling & Grammar" pane is used to pick the spelling dictionary. Originally, TeXShop did not interact with this pane, so the pane worked via the default Apple method. This changed when cocoAspell appeared, because many TeXShop users wanted a dictionary that didn't mark TeX keywords as misspelled. These users didn't necessarily want to use that dictionary in other applications. So interaction with the spelling dictionary was implemented, but the implementation had a baroque, difficult to understand, structure. Version 3.81 of TeXShop finally treats selecting a dictionary on a par with other similar choices. A new Dictionary field in TeXShop Preferences under the Source tab has a pop-up menu which can be used to select the default dictionary. Many users will use this menu to select a cocoAspell dictionary, and then ignore everything else about spelling dictionaries. The one unexpected feature is that dictionaries are listed using the ISO 639-1 and ISO 639-2 standards rather than the localized names shown in the Spelling & Grammar pane. These are easy to decipher. When a new file is opened in TeXShop, it will be set to use the default dictionary. But this dictionary can be changed for that file by opening Spelling & Grammar and selecting a new dictionary. A user who writes in English but corresponds with a French relative can easily do that when writing a note to that relative. A more unusual situation occurs if a user has several files open at once, some written in one language and some in another. Activate each source file by clicking on the text, and then select the dictionary for that file using Spelling & Grammar. Then with Spelling & Grammar still open, click randomly on these source files and notice that the dictionary field changes to the correct dictionary for each file. Finally, if you intend to work on a file for an extended period of time and it does not use your default dictionary, the default dictionary for that file can be set with an instruction at the top similar to % !TEX spellcheck = de-DE This particular document will then open with spelling set to German. But the Spelling & Grammar panel can later be used to switch dictionaries temporarily, in case a German letter contains an English quotation which needs to be spell checked. After working on these changes, I mentioned them to a user who told me in a disappointed tone that he really just wanted to return to the days when TeXShop entirely ignored the Spelling & Grammar panel and let it "do its thing." There is a special hidden preference for that user: defaults write TeXShop OriginalSpelling YES ### TeXShop Changes 3.78 - 3.80 Versions 3.78 and 3.79 were never released. Version 3.80 has the following changes: • SyncTeX, which makes it possible to easily move back and forth between a spot in the source and the corresponding spot in the output, was written by Jérôme Laurens. This software consists of two pieces. One piece adds code to the various TeX engines, causing the creation of appropriate sync information and output of this information to the file myfile.synctex.gz during typesetting. 
The second piece can be used by authors of front ends; it opens the myfile.synctex.gz file, parses its contents, and deduces sync positions from the parsed data. TeXShop uses Jérôme's front end parsing code. By the way, I use SyncTeX every day and offer Jérôme a mighty thanks for creating it. In 2017, Synctex was revised by Jérôme; among other changes, syncing now works between code to input graphics in the source and the resulting image in the output. But when TeXLive 2017 was released, the revised code for front end authors was not yet ready. Luckily the old code continued to work with ordinary tex, latex, pdftex, and pdflatex. Unfortunately, this code did not work with LuaTeX and LuaLaTeX, so users working with these engines usually could not sync. Even worse, TeXShop often crashed when using these engines because the initial parsing of the file myfile.synctex.gz itself crashed. The new front end code is now available, and is used by TeXShop 3.80. The crashes of LuaTeX and LuaLaTeX have ceased, and synchronization works again, more accurately than in earlier years. Some new features require setting the flag which turns synctex on to 2 or higher. Thus users may want to write "--synctex=2" rather than "--synctex=1". This change can be made in TeXShop Preferences under the Engine tab, and in individual engines the user may have activated. • A new engine called "filltex" was written by David Gerosa, and is available in ~/Library/TeXShop/Engines/Inactive/filltex. This spectacular engine is very easy to install; here's what it does. Two databases are commonly used in the astronomy and theoretical physics scientific communities: ADS and INSPIRE. These databases list preprints and published papers, referencing each with a citation index like 2016PhRvL.116f1102A. Suppose you have written a scientific paper in one of these fields, and suppose your citations use the standard forms for ADS and INSPIRE. For instance, your paper might have many citations, like "for more details consult \cite{2016PhRvL.116f1102A}." When the paper is done, typeset it using filltex. The engine will scrape bibliographic data from ADS and INSPIRE using the web, construct a bibliography, add the bibliography to the article, and rewrite the citations appropriately. All of this happens in one run of the engine. To see an example, typeset the example in the Inactive/filltex folder. Many additional databases exist for other fields, and Gerosa tells me that using these databases with filltex is just a matter of revising the python code appropriately for these databases. He recommends that users interested in doing this consult his GitHub, as listed in the documents in TeXShop/Engines/Inactive/filltex. • A modified SageTeX engine is now in Engines/Inactive/sage, together with new instructions for setting it up. These changes are required because the latest release, SageMath-7.6, has new requirements and a new internal structure. • In some versions of macOS, opening a TeXShop Preview document in multipage mode scrolled down to the middle of the first page, rather than starting at the top. This is fixed. • Some users noticed a slight creep of the Preview Window with each typesetting job. This is fixed or at least improved. • TeXShop now contains latexmk 4.53a. • Herbert Schulz made changes in "TeXShop Tips & Tricks", available in the TeXShop Help menu. In addition to these changes, a small number of users ran into other issues running on macOS Sierra. 
Most users have had no trouble with Sierra, and find that it fixes a number of problems in the previous two or three systems, so these problems are rare: First, a few users included the pstricks package in the header of their document, but used no features of this package and typeset with pdflatex. Usually pstricks requires TeX + DVI mode, so including it in the header of a pdflatex document is an error. But in Sierra, typesetting such a document with pdflatex created a pdf file that crashed PDFKit, Apple's pdf rendering code, and thus crashed TeXShop. This bug is fixed in High Sierra. Second, some users writing beamer documents would typeset and scroll their document in TeXShop. A particular image in the middle of the document would create a glitch, and some following pages would be blank. Scrolling back up would give additional blank pages, even though they were correctly rendered earlier in the game. Eventually the document could crash TeXShop. This problem is caused by a PDFKit bug, and is fixed by Apple in High Sierra. But in the meantime, we discovered that typesetting the same source with LuaLaTeX or XeLaTeX produces pdf files without problems. In addition, opening a defective pdf file with Adobe Acrobat Reader, and then saving that pdf file in Reader, produces a pdf file without problems. One final problem occasionally occurs in Sierra. Many people use DropBox with TeXShop with no problems. A few of these users store their source files in the DropBox folder. A few of these folks report regular TeXShop crashes. In every case known to me, these crashes end when the TeX source files are removed from DropBox. What is the explanation? I don't know, but I have suspicions. Recall that TeXShop uses Apple's Automatic Saving code, introduced in Lion. Thus the system can save the source at random times. A source file in DropBox can also be moved to the cloud at random times. What if both the Mac and DropBox want to make changes at the same time? The Automatic Saving code is buried deep in Cocoa and isn't by me. The only piece of TeXShop code by me related to automatic saving says "turn automatic saving on." Here's all I know about this problem. • I have never heard of problems from users of iCloud with TeXShop. This is not surprising since Apple wrote both iCloud and Automatic Saving. • Users with this crashing problem report that crashes are fixed by creating symbolic links in DropBox to sources, rather than putting the sources there directly. I don't know why. • Other users with this crashing problem report that crashes are fixed by turning off DropBox syncing during typesetting sessions. • And all users report success moving their sources out of DropBox, and then dragging copies to DropBox at the end of typesetting sessions. But to repeat, many report no problems. ### TeXShop Changes 3.76 - 3.77 Version 3.76 was never released. Version 3.77 has the following changes: • Items in the Tags menu are indented to make entries easier to find. • A bug in Apple's search routines broke the search tool in the Preview Window's Drawer. This bug was fixed by Apple and search now works as before. It is conceivable that it is broken on Sierra 10.12.0 and 10.12.1; I no longer have such systems to test. Users who run into a problem on these systems should update the operating system to 10.12.2 or higher. If a user clicks in a search result at the bottom of the Drawer, the corresponding item in the pdf Preview is highlighted. The up and down arrows can be used to rapidly scan various search results. 
This ability temporarily broke in 3.76, but works again in 3.77. • At the suggestion of a user, the TeXShop Edit menu has an entry "Paste As Comment." This works essentially like "Paste" except that the newly pasted lines are marked as comments. This makes it possible to copy and paste a large selection from another document, and then carefully activate portions of the material. • The Sage engine in the "Inactive" portion of ~/Library/TeXShop/Engines was improved by Markus Baldauf. Thanks! • The latexmk file was updated to version 4.52c. • Updated the TeXShop Help Menu document "First Steps with TeXShop" and the document "Quickstart Guide for Command Completion" in ~/Library/TeXShop/Documents. • At the request of Br. Samuel, OSB, the types gabc, gtex, glog, and gaux are now recognized by TeXShop; these types are used by the Gregorio software. The files gabc and gtex are added to the types which receive syntax coloring and other "tex file" privileges, and gaux, glog, and gtex are added to file types deleted by Trash Aux Files. • TeX users on Unix platforms often define an environment variable named TEXINPUTS, which lists folders which TeX should search for input files, style files, and the like. Using this variable is actively discouraged in TeX Live and MacTeX, and these systems are configured to make the variable unnecessary. For instance, files used by an individual user can be placed in ~/Library/texmf. People who answer user questions about MacTeX sometimes run into problems associated with TEXINPUTS, since mistakes defining the variable can bring TeX crashing to a halt. And often users don't mention that they have set TEXINPUTS, leading to hours of useless debugging. With these warnings, we now confess that TeXShop has a new facility for those few users with a legitimate need to set TEXINPUTS. A user recently described such a case. This user belonged to a group whose members used common input files stored on a server. The members of this group worked on a variety of tasks which all used the same basic template, but then input different files depending on the task. These input files were given the same name, but stored in different folders on the server. To pick a task, a member of the group selected a particular server folder using TEXINPUTS, and proceeded. The user in charge of this group wanted a simple way to switch TEXINPUTS so the remaining members of the group could use the system without really understanding how it worked. To help this user, TeXShop 3.77 recognizes a new command to be added to the first twenty lines at the header of a source file. If a project has a root file and several input chapter files, the command should be in the root file. There are four entirely equivalent forms of this command:
% !TEX TS-parameter =
%!TEX TS-parameter =
% !TEX parameter =
%!TEX parameter =
Each space is mandatory, and the final equal sign must be followed by one space and then a word or sequence of symbols without spaces. For example, the following command might be issued by someone adding ~/MyTeXFiles to the standard TeX search locations:
% !TEX TS-parameter = ~/MyTeXFiles//:
The parameter defined by this command need not be connected to TEXINPUTS. It can be used for other purposes. So --- what does TeXShop do with this parameter? The parameter is ignored unless typesetting is done by a user-defined Engine file. For example, it is ignored if the user typesets with TeX, LaTeX, pdfTeX, or pdfLaTeX. But it is used with the XeLaTeX Engine, and all other Engines. 
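Returning to the group-on-a-server scenario above: in such a setup a member could switch tasks by editing only this header line in the shared template. The server path shown here is purely hypothetical, and the trailing colon again preserves the standard TeX Live search path:

    % !TEX TS-parameter = /Volumes/GroupServer/TaskA//: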
An Engine file is just a shell script, which TeXShop runs to typeset. This shell script is called with two parameters, "$0" and "$1". The parameter "$0" contains a full path to the engine file run by the typesetting command. The parameter "$1" contains the name of the file to be typeset. The parameter command adds a third parameter, "$2", containing the word on the right side of the command. The engine script can do anything it wants with this parameter, including just ignoring it. It is instructive to look at the XeLaTeX Engine file, which is very simple. The file begins with "#!/bin/tcsh", which says that it is a shell script for the tcsh shell. (There are many different shells, and each of their shell files has a different syntax.) Next comes a command setting the path, and then a final command runs xelatex:
xelatex -file-line-error -synctex=1 "$1"
We can add MyTeXFiles, defined above, to TEXINPUTS by adding the following line before the call to xelatex:
setenv TEXINPUTS "$2"
At this point, several warnings are important. The command to define TEXINPUTS is different in different shells, so copying the above line with another shell will fail. Turn back to the TEXINPUTS setting "~/MyTeXFiles//:" given earlier. The symbol // tells TeX to recursively search subfolders of MyTeXFiles, and the all-important colon at the end says to add this to the existing TEXINPUTS from TeX Live. Omit the colon and TeX will completely fail. It is easy to define engines which duplicate the built-in typesetting engines. For example, to obtain an object which typesets using pdflatex, just duplicate and rename the xelatex engine, and replace xelatex by pdflatex in the engine script. Therefore, this mechanism can be used to determine TEXINPUTS with any typesetting method. ### TeXShop Changes 3.75 There is only one change. In TeXShop 3.74 on Sierra 10.12.1, scrolling in the pdf window was jerky. This is fixed. ### TeXShop Changes 3.74 TeXShop 3.74 fixes a small number of minor issues. • In Version 3.72, when the mouse hovers over a hyperref link, a popup window shows the linked portion of the document. This did not work well for equations with an equation number on the right side of the page. Now if the link is in the right half of the page, the linked portion is selected further left. • TeXShop uses five files by Jerome Laurens to interpret the contents of synctex files. These files are occasionally updated with TeX Live updates. Previous versions of TeXShop used version 1.9 of the files; this version of TeXShop uses version 1.18. Users may notice small changes in sync due to this update. • The edit menu contains an item labeled "Experiment...". Selecting this item brings up a small source window; text can be copied from the main source window to this window, and then edited there. A "Typeset" button on the small window typesets the experiment and shows it in a second small window. Both windows can be enlarged and their contents can also be enlarged using the standard keyboard shortcuts to change font size or pdf size. This facility, suggested by Wendy McKay, is particularly useful when editing complicated equations or tables. Recently this feature was mentioned on the web site tex.stackexchange.com, and Denis Bitouzé suggested an improvement. Following his suggestion, if a selection of source is made first and then the menu is chosen, the selection is automatically copied into the experiment window. 
If no selection is made and instead the cursor is simply positioned by clicking at a point, then the Experiment window opens with its previous contents. Thus if a user carefully edits an equation, closes the experiment window, and then decides on a final change, the contents can be brought back for another edit. • A TeXShop preference allows users to set the background color of the preview window, but that preference was ignored by the initial Sierra release. It is working again in a developer update beta. So users will want to install that update when it is released by Apple. • CocoAspell is a spell checker by Anton Leuski which understands LaTeX and thus does not mark control words as misspelled. It is an extension of Apple's Spell Check system, controlled by a Spelling Preference Pane. Users get all the benefits of Apple's integration of spell checking with document source editors, but with a dictionary that is LaTeX aware. The preference pane associated with this Spell Checker broke in El Capitan, and the entire spell checker broke in Sierra. But luckily, Anton Leuski released a new version for El Capitan and Sierra on November 8, and converted the project to open source. See "http://people.ict.usc.edu/~leuski/cocoaspell/". Users need to download the spell checker at Anton's site because he makes many dictionaries available there depending on the language(s) needed. Highly recommended. ### TeXShop Changes 3.72 and 3.73 TeXShop 3.72 finishes the task of preparing for macOS 10.12, Sierra. It also contains the changes listed below. The only change in TeXShop 3.73 is to improve the responsiveness of popUps when mousing over links. • The LaTeX hyperref package adds active links and url's to pdf documents. Many are under author control, but the package automatically links table of contents items to the starts of chapters and sections, and links reference items to the corresponding bibliography entries. TeXShop 3.72 makes it easy to understand these links at a glance. If the mouse hovers over a link, a popup window appears for several seconds showing the linked portion of the document. This is particularly useful when checking references in the document. Normally the popup is on screen for four seconds and then disappears. If the option key is down at the end of these four seconds, the popup will remain on the screen until the mouse moves. There is one cosmetic flaw. When the mouse hovers over a link, a small popup from Apple also appears giving the page where the link points. I don't know how to eliminate their popup. It does not appear in Sierra, so if you find it bothersome, upgrade to Sierra. This feature was requested by Mark M. Wilde, who noticed that it is already present in Skim. Indeed Skim has a somewhat more elegant version. • TeXShop has a hidden preference setting to control the "ColorIndex" tool, as requested by Murray Eisenberg. Type the following command in Terminal to turn this item on by default for each new document: defaults write TeXShop IndexColorStart YES • The TeXShop magnifying glass has been enhanced, as requested by Steffen Wolfrum, but the enhancements are only available in El Capitan and higher. When either magnifying glass is being used, temporarily pushing the Command, Option, or Control keys will increase the amount of magnification, and temporarily pushing Shift-Command, Shift-Option, Shift-Control will decrease the magnification. • Herbert Schulz updated the Tips & Tricks Help File. 
• Following a request by Markus Gail, the Help commands "ShowHelpForPackage" and "openStyleFile" remove hidden white space, making them more robust. • TeXShop is now explicitly released under the GPLv2, and a copy of this license is available in the TeXShop menu. ### TeXShop Changes 3.71 This version differs from 3.70 only in the German and Dutch localizations: • In both German and Dutch, the pdf search field in the toolbar was not connected to the rest of the program. • Some German translations were improved by Michael Rößner. • The German source window had horizontal slack, and scrolled horizontally about half an inch. This was due to a bug in XCode 7.3.1. Merely opening an NSDocument.nib for a specific language creates the bug for that language. Luckily the bug is fixed in XCode 8.0 Beta. All localizations were tested in TeXShop 3.71 and none exhibit the bug. ### TeXShop Changes 3.69 and 3.70 Version 3.69 was never released. Version 3.70 has the following features. • Version 3.70 again uses the more extensive fixes by Martin Hairer for a memory leak problem, but with a small change to fix the bugs in the Preview toolbar that occurred in version 3.66. • The behavior of the pdf search field on the toolbar has been improved. As before, command-F activates the search field so text can immediately be typed into it. Push RETURN to select the first occurrence of the word or phrase typed. 
As before, after typing a phrase in the search field, type RETURN to select the first occurrence of the phrase in the pdf file. Type RETURN again and again to select later occurrences. Type SHIFT RETURN to do a backward search. • In Sierra, the old pdf search routine in the preview window Drawer stopped working. In TeXShop 3.66, this search field was deactivated on all systems and replaced by the new search field on the window toolbar. Some users missed the old method, so in 3.68 it is restored on all systems. Users who try both search methods will notice some harmless interaction between the two methods. Sierra users will discover that the old method fails. If Apple later repairs the old method on Sierra, both methods will be kept and the interaction between these methods will be eliminated. ### TeXShop Changes 3.66 • Version 3.66 has important code changes by Martin Hairer of the Department of Mathematics, University of Warwick. These code changes fix memory leaks in previous versions of the program. The changes include replacing several instance variables with class properties, whose retain/release behavior under ARC is more predictable. But the primary change is to remove some strong reference cycles in which object A referenced object B and object B referenced object A, and neither could be released without first releasing the other. Recent versions of Apple’s PDFKit use large amounts of memory because they create and cache bitmaps for faster display. If Activity Monitor is used while opening a document in TeXShop and scrolling through the preview pages, this memory usage can seem alarming, but in practice it does not cause problems. The problems solved by Hairer are different and can be seen by closing documents without quitting TeXShop. In this case, the program did not recover the memory used by the document being closed. Hairer’s fix dramatically improves this sort of problem, although it is likely that smaller memory problems remain. • A few users reported a related bug. They would close a document, reach into the document’s folder, and throw the pdf into the trash. But the Macintosh would claim that the pdf was still in use and refuse to empty the trash until TeXShop itself closed. This problem is a symptom of a memory problem and is fixed by Hairer's code. • In Sierra, the old pdf search routine in the preview window Drawer stopped working. In TeXShop 3.65, this search field was replaced with a more useful search field in the Preview Window toolbar; the new search works in all versions of OS X. In recent Sierra releases, the old pdf search came partly to life, and interfered with the new search. Temporarily, the old search is turned off in all versions of OS X. If the old search works again in later versions of Sierra, it may be turned on again, but we suspect most users will use the new toolbar search field. ### TeXShop Changes 3.65 This fixes a few problems introduced in 3.64: • Version 3.65 again works on Lion and Mountain Lion, and thus on all versions of Mac OS from Lion to Sierra. • I omitted the QTKit framework, and made AVKit and AVFoundation optional. The AV frameworks aren't available on Lion and Mountain Lion, hence the crash of 3.64 on these systems. Demo Movies will only play on Mavericks and above. • The sync fix from source to preview for Sierra is only added on Sierra, since it seems to cause minor problems on earlier systems. • The tool to search PDF files is now localized. • Sparkle in TeXShop is updated to version 1.14.0. 
• Yusuke Terada added code to recalculate tab lengths whenever the user changes the font or font size. ### TeXShop Changes 3.64 TeXShop 3.64 fixes a few problems with the Mac OS 10.12 (Sierra) beta. One change may be of independent interest. • The initial beta release of Sierra had substantial PDFKit problems, but most were fixed by Apple in the next beta. One problem was jumpy, non-responsive scrolling. I fixed this in TeXShop and Apple later fixed it in general. My initial fix does no harm and is still present in 3.64. • Sync from source to preview worked in Sierra, but the preview selection was not highlighted. This bug was due to TeXShop and is now fixed. • TeXShop's Help Menu has a Demo item which can download and display two short movies for new users. This was coded using Apple's QTKit, which was deprecated in Mac OS 10.9 in favor of AVKit. In Sierra, QTKit is gone. TeXShop 3.64 switches to AVKit. The movies, which used to be in .mov format, are now in .mp4 format. • TeXShop's Preview window has a drawer, which displays an outline of the pdf if the source uses hyperref. The bottom half of the drawer allows searches of the pdf file, but does not work in the Sierra beta. Since the code for the search in the drawer comes directly from PDFKit documentation, I suspect this is an Apple bug which will eventually be fixed. In the meantime, TeXShop 3.64 adds a new search field to the Preview window's toolbar. Type a word and push RETURN and TeXShop will select the first occurrence of the word in the pdf. Push RETURN again to display the next occurrence, etc. Click the mouse in the PDF window to restart searching at the top of the document. Select some actual text in the PDF window to start searching at that point. The search item will remain after Sierra is released, even if Apple fixes the drawer bug, because I suspect that many users will prefer it. The most-often-requested new TeXShop feature is tabs in the source and preview windows. Sierra provides this feature automatically, without any additional TeXShop code. Creating these tabbed views is straightforward. Further details will be provided when Sierra is released. ### TeXShop Changes 3.63 TeXShop 3.63 is an internal version, never released. ### TeXShop Changes 3.62 • Latexmk upgraded to version 4.45 • Slight improvement in TeXShop Tips & Tricks by Herbert Schulz • The menu item "Select TeXShop as Default TeX Editor" is now active when either the Source or the Preview window is on top • George Gratzer's "First Steps", the first section of his book "More Math Into LaTeX", has been updated to the contents of the 5th edition. • The TeXShop "Experiment" command ignores smart dashes, and thus does not convert multiple dashes into a character which confuses TeX • A few error messages referred to http://tug.org/MacTeX rather than the correct http://tug.org/mactex. This is fixed. • The TeXShop Preference Setting for "Show Console" was not initialized to the current value when TeXShop was opened after quitting. This is fixed. • Added a fix by Martin Hairer when using a git server with TeXShop. Martin wrote "Here is a small improvement which is very useful to me. I use a version control system (git) for some of my papers. When git updates a file, it deletes it and then replaces it more or less immediately by an updated copy with the same name. The current version of TeXShop detects this, but it often just notices that the file was pulled away under its feet and then either closes the document or replaces it by a blank. 
"I propose to introduce a small latency between the moment TeXShop gets notified of the file's deletion and the moment it checks for its status so that it correctly handles changes to the file made in that manner. More precisely, in TSDocument.h, I propose to add a sleep command at line 1580" ### TeXShop Changes 3.61 • "What's New" in TeXShop Help claimed that Sparkle was updated to version 1.31. The update version number was actually 1.13.1, the latest version. • In TeXShop Preferences under the Misc tab, the items "pTeX support" and "During File Save" are only useful in Japan. They are now labeled so users elsewhere will not be tempted to activate them. • The error message when typesetting fails to find pdflatex or another tool has been improved. • If the user is running El Capitan and has set the location of TeX binaries to /usr/texbin or /Library/TeX/texbin, but this second location does not exist, an error message explains what to do. • Slight improvements in Korean and Spanish localizations • Improved pTeX scripts by Yusuke Terada in ~/Library/TeXShop/bin. ### TeXShop Changes 3.60 • Updated to Sparkle 1.13.1 for security reasons. • TeXShop no longer adds a creator and type to files it writes. These were obsolete from the Classic version of the file system. • The command "Set TeXShop as Default TeX Editor" is now active when either the Source or Preview window is active • Slight cleanup in English and Korean localization. ### TeXShop Changes 3.59 There are several small changes in TeXShop 3.59: • A new item is provided in the TeXShop menu, "Select TeXShop as Default TeX Editor." Many programs are able to edit .tex files, so when the user double clicks such a file, it is not clear which program will open it. Selecting the new menu item informs Apple's Launch Services that TeXShop should open such files. The appropriate Launch Services API does not distinguish between different versions of a program. A few users are in the bad habit of keeping several copies of TeXShop on their computer, and then the new menu will select a random version to run. It is best to keep TeXShop in /Applications/TeX and update it via the Sparkle update mechanism. Then only one copy of TeXShop will be on the disk. The author of TeXShop sympathizes with sloppier users, since he has dozens of versions of TeXShop on his disk. He opens new files by dragging their icons to the TeXShop icon in the Dock to insure that the correct copy runs. • Alan Munn reported the following strange bug: \documentclass{article} \begin{document} \begin{enumerate} \item John started the car. \item John stopped the car. % environment completion here produces \stoped % even if the line is commented out % completing again produces the correct completion % but environment completion after these comments seems to work This bug is fixed. • The TeXShop menu has a new item just below Preferences called "Open ~/Library/TeXShop". When this item is selected, the Finder will open the ~/Library/TeXShop folder. In recent versions of OS X, ~/Library is a hidden folder. The new command makes it easy to access this folder without using special tricks. Files in this location customize TeXShop for the individual user. For example, the Templates subfolder has all templates which appear in the Templates pulldown menu. These templates can be opened and edited in TeXShop; other templates can be added to the folder. 
• For many years, users could type a line like % !TEX encoding = IsoLatinGreek at the top of a source file to indicate that the file should be opened or saved using the LatinGreek encoding. But if the encoding name, here "IsoLatinGreek", was not recognized by TeXShop, the program would open the file using the "Mac OS Roman" encoding without reporting an error. This bug is now fixed. An unrecognized encoding name will cause an error dialog to appear reporting the error, and then the file will be opened in "ISOLatin9", the current default encoding. The best way to avoid this sort of error is to use the Encodings Macro to write the displayed line, since this Macro lists all legal encodings and allows users to select the one they want. • The alternate front end TeXWorks uses the same encoding magic line, but has different names for many standard encodings. TeXShop now recognizes the TeXWorks names, making it easier for collaborators to share files. • The TeXShop Help menu now has a new document by Herbert Schulz titled "File Encoding in TeXShop". This document explains the significance of various encodings of the source file, a topic of importance to users who do not write in English. ### TeXShop Changes 3.58 Recent TeXShop versions have been released to fix or work around a series of El Capitan bugs, particularly in PDFKit. There are three major bugs: • Magnification broke because a Cocoa command to get the PDF data underneath a portion of a window returned a bitmap in El Capitan. A workaround for this bug was included in an earlier TeXShop release. • When a new version of a pdf document is loaded, there is a momentary flash before the document is displayed. This bug has not yet been addressed. • In several situations caused by loading a new file, PDFKit displays a blank page rather than the correct content on the page. This seems to be caused by a new Apple design to speed up pdf display by creating and caching bitmaps of recent pages. When the bug occurs, the bitmap is displayed too soon. When this bug occurs, it is fairly easy to obtain the missing image. With the blank page active, type command-shift-+ to zoom in and then command-shift-- to zoom back out. This causes the page to be displayed correctly. However, it is better to insure that the blank page does not occur. Several instances of this bug were fixed in earlier releases. This release fixes three other cases: • In external editor mode, typesetting caused blank pages to be displayed. This is fixed, or at least mostly fixed. It is important to configure TeXShop correctly so the fix works. There are two ways to use TeXShop in external editor mode. You can typeset with TeXShop, by making the pdf page active and typing command-T. Or you can typeset from the editor or from a shell, and configure TeXShop to update the display when the pdf file changes. In the first case, the TeXShop preference "Automatic Preview Update", under the Preview tab, should be off. In the second case, it should be on. • When the pdf window is split, the bottom half sometimes became blank. This bug is fixed. • When the pdf window is split and then the user typesets, half of the display is often blank. This version of TeXShop fixes the problem. The fix works well in "multipage format", which I recommend. It can have problems in "single page format" and "double page format", although it usually works in these cases. Because of these small problems, the fix can be turned off. 
To do so, type the following command in Terminal:
defaults write TeXShop FixSplitBlankPages NO
Incidentally, all three bugs have been reported to Apple. In addition, the following changes were made: • Max Horn fixed the "unindent" command. The previous version of this command would sometimes lose a character. • Max Horn also added #defines for NSAppKitVersion10_9 and the link in the TeXShop code base, so the source code should compile on Yosemite and Mavericks as well as on El Capitan. • James Crippen asked that "paragraph" and "subparagraph" be added as default tags because these items are used in the Memoir class. So currently in LaTeX, the following receive automatic tags: chapter section subsection subsubsection paragraph subparagraph macro environment • Fixed the French localization so both documents in Help that have been translated into French occur in the Help menu. The translations of documents by Herbert Schulz are by Rene Fritz. ### TeXShop Changes 3.57 • The command to show the Key Bindings editor broke in the German localization, and is now fixed. • The command to split the Source Window broke in 3.56 and is now fixed, at least in most localizations. This is not a bug in the code; instead it is a bug in how XCode processes nib files. Fixing it required dropping down to XCode 7.0 and editing very carefully there. The bug still exists in the text window when in single window mode. • TeXShop can now edit documents with extension lua. ### TeXShop Changes 3.56 Debugging code accidentally left in version 3.55 caused a pause between typesetting and display of the new pdf. This is fixed. ### TeXShop Changes 3.55 This version fixes two significant bugs when running in El Capitan: • The text boxes in the Preview toolbar were incorrectly sized, so numbers in them were not properly centered vertically. This includes the page number box, total page box, and magnification box. All are now fixed in all localizations. Localizations other than English are only lightly tested, so problems in the box display in these localizations should be brought to our attention. • The magnifying glass displayed bitmapped images rather than clear pdf text when run in El Capitan. This was discovered early on, and a bug report was filed with Apple on July 11. • Over the years, at least four different designs were written to create a TeXShop magnifying glass. All were eventually broken by Apple changes. The Yosemite magnifying glass contained three major calls:
thePDFData = [[self documentView] dataWithPDFInsideRect: [[self documentView] visibleRect]];
NSImage *theImage = [[NSImage alloc] initWithData: thePDFData];
[theImage drawInRect: magnifiedRect fromRect: sourceRect operation: NSCompositeSourceOver fraction: 1.0];
The first call asked the Preview Window to provide the raw pdf data creating the region visible on the screen. The second converted this to an NSImage, a common object for handling illustrations in Cocoa. The final call read from a section of this image, and wrote the result to a (usually larger) section, essentially enlarging the image. Our bug report claimed that the third call was broken in El Capitan. After furious work this week, it turned out that the third command works well and the bug is in the first call, which asks for the raw PDF data of a typeset document. Apple claims that drawing of pdf documents is faster in El Capitan; apparently that's because PDFKit converts several pages to bitmaps and saves them for later drawing. 
The first call occasionally returns raw PDF code, but more often it just returns a bitmap. Magnifying that bitmap gives unpleasant results. The bug is fixed by getting the raw PDF data a different way, on a page by page basis. Because I wanted to fix this bug rapidly, the current code only gets data from one page of a document. Imagine that several pages are shown on the screen, perhaps because scrolling has left part of one page and part of another page on the screen. Or perhaps the user is displaying two pages side by side. When magnification begins by clicking on a point, only the material on that page will appear in the magnifying glass. If the glass moves off of the initial page, it will show white contents with no magnified material. Release the mouse, and push it again to magnify a different page. This code may be improved in the future. On Yosemite and earlier, the old code is used. We could use the old code everywhere if Apple fixed the bug. • One user was bothered by bouncing behavior when scrolling. Here is advice from a past TeXShop release: In Yosemite, the source code window's scroll controls have some elasticity, so the source bounces slightly at the top and bottom of the document. Yusuke Terada noticed that these bounces sometimes obscure the first or last lines of the document, making it difficult to edit these lines. Yusuke Terada added a hidden preference to turn off this elasticity. To activate the hidden item, type the following in /Applications/Utilities/Terminal: defaults write TeXShop SourceScrollElasticity NO • Finally, complaints have surfaced that the display of pdf files in El Capitan is unsatisfactory for some combinations of monitor and computer. Everyone seems to agree that the display is sharp and clear on large Retina display machines, and the problem may be caused by Apple optimizing for these machines. If you are bothered by poor display quality, the best we can do is to repeat the advice given two years ago when this problem first surfaced: I am just barely able to notice a difference myself, using Apple's 27 inch Thunderbolt Display. A few users sent me email with pairs of png files, showing an image under Mavericks and under Yosemite and pointing out the fuzzy result on Yosemite, but in some of these cases I couldn't see the difference. So the problem seems to depend strongly on the computer screen used, and perhaps on user sensibilities. Some users may be completely happy while others are desperate for fixes. Version 3.44 of TeXShop has several hidden preference items which may help with this problem. There is no universal solution, so experimentation with these preference settings will be needed if you find the display fuzzy. TeXShop and many other graphical front ends for TeX and PDF display use Apple's PDFKit and Cocoa frameworks. These frameworks rasterize pdf images at an extremely low level not accessible to programmers. Version 3.44 tries to expose all the routines in Cocoa which could modify this rasterization. Notice that TeXWorks and Adobe Acrobat do not use PDFKit and Cocoa and thus behave differently. It does little good to call these programs to our attention since switching to a different pdf display library would betray a key feature of TeXShop (and other programs using these frameworks), namely that they are fully native applications. With these caveats, let us list possible solutions. Yusuke Terada noticed that in Japan, the display could appear fuzzy and then be made legible by tweaking the magnification.
So he added code in TeXShop to do this automatically each time a pdf file is opened or typeset. To turn this feature on, type the following in Terminal: defaults write TeXShop FixPreviewBlur YES If this tweak fixes the problem, leave it on and stop reading. Otherwise turn the feature off by using the same command with YES replaced by NO, since the tweak is likely to interfere with remaining experiments. Apple's System Preferences in the General Pane has an item labeled "Use LCD font smoothing when available." A few users discovered that turning this item off cured fuzzy behavior. I think this fix won't help most users, but it might be worth a try. TeXShop also has a preference item under the Preview tab labeled "Smooth text and line art." This item was originally added to fix a different problem. One user created an illustration with very thin lines. On a previous TeXShop, the lines vanished with regular monitors, although they appeared with the Retina display. The user discovered that the lines appeared in Acrobat, and by turning on antialiasing they also appeared in TeXShop. The code provided by Cocoa to turn on antialiasing has additional features not exposed in previous versions of TeXShop. Cocoa provides the ability to set the level of antialiasing. The previous "Smooth text and line art" preference set this value as "high". In TeXShop 3.44, hidden preference settings can select the interpolation level. To test various levels of antialiasing, turn on "Smooth text and line art" in TeXShop Preferences and then set the hidden preference defaults write TeXShop InterpolationValue 3 where the final value can be any integer between 0 and 4. Apple's API documentation provides the following names for these values, which perhaps give a hint of their function. The list is as follows; the strange reversal of 3 and 4 occurs in the official list: 0 = NSImageInterpolationDefault 1 = NSImageInterpolationNone 2 = NSImageInterpolationLow 4 = NSImageInterpolationMedium 3 = NSImageInterpolationHigh Frankly, I suspect an entirely different solution will be best for most people. That solution is to change the font used for your TeX document, via the wonderful macros written by Michael Sharpe and added to TeXShop late this summer. For detailed explanation, read the description below under "TeXShop Changes 3.39." Notice that it is possible to add Michael's commands for one particular font to your default template, so that font will always be used for new documents. All of these fonts are included in the full TeX Live distribution, so using them should cause no trouble when collaborating with Windows or Linux users. The font commands take up four or five lines in the preamble, and are easily discarded once the document is complete if you want the final document to have a plain vanilla look. On the other hand, Michael's choices come from an expert and may satisfy more readers than the previous default font choices. ### TeXShop Changes 3.54 The Sparkle update mechanism was broken in 3.53. It is fixed in 3.54. For some time, TeXShop has not contained the two movies which appear in the Help Menu. They are downloaded when the user requests them. This broke in 3.53, and is fixed in 3.54. ### TeXShop Changes 3.53 El Capitan introduces a significant change for TeX users. The location /usr is no longer writeable, even by users with root privileges. Consequently, the symbolic link /usr/texbin has been relocated to /Library/TeX/texbin. This new link is automatically written by both MacTeX-2015 and BasicTeX-2015.
If you updated to one of these, you have the link. Once the link exists, older versions of TeX like TeXLive-2013 also work fine. GUI programs must be reconfigured to look for the new link. TeXShop 3.52 and later does this automatically. For more information on TeX and El Capitan, see tug.org/mactex/elcapitan.html. TeXShop 3.53 includes the following changes: • On Mavericks, the Preview display of text was slightly blurry on many screens. This was probably caused by changes Apple added to support the Retina display. Many workarounds for the problem have been suggested. Terada Yusuke added code which improved the display for many. His code is activated by a hidden preference defaults write TeXShop FixPreviewBlur YES Apple improved the pdf display in Yosemite, and again in El Capitan. But some users still see traces of the old blur. Terada Yusuke has modified his code for El Capitan, and users unsatisfied with the Preview output should activate the above hidden preference. • Near the end of the Macro menu, there is an item "Tables --> Table" which inserts the outline for a table into the source. The first line of this source is \begin{table}[htdp]. However, "d" is not a legal parameter for tables. Previous versions of LaTeX ignored this error, but it is flagged as an error in the 2015 version. The easiest way to fix this is to open the TeXShop Macro editor and remove the "d". If you have never edited the default macros, you can also get the fix by quitting TeXShop, going to ~/Library/TeXShop and throwing the entire Macros folder in the trash. Then restart TeXShop and the current Macros folder will be created. • Some changes made in TeX Live 2015 caused a few programs to output unexpected SyncTeX code, which then caused TeXShop to lose control of the mouse. This was a TeXShop bug, not a SyncTeX bug. It was fixed in 3.52. When using SyncTeX, occasionally the system cannot find a match. In those cases, the older Sync method is called. But this older method turned out to crash for the same reason. So in 3.53, the old Sync is not called and in these unusual situations, no Sync is found. • One or two coding changes were made so TeXShop will compile with the latest XCode. ### TeXShop Changes 3.52 Many of the changes in version 3.52 prepare for OS X 10.11, El Capitan. In that system, the /usr/texbin link to the TeX binaries is replaced with a new link, /Library/TeX/texbin. Users who have installed MacTeX-2015 or BasicTeX-2015 already have this new link. • TeXShop 3.52 uses the new link automatically when appropriate, and thus needs no configuration for El Capitan. To be specific, on startup, if the path setting in TeXShop Preferences under the Engines tab is /usr/texbin AND /usr/texbin either does not exist or is not a symbolic link AND /Library/TeX/texbin exists and is a symbolic link, then the preference setting is changed to /Library/TeX/texbin. Similarly if the preference setting is /Library/TeX/texbin AND /Library/TeX/texbin either does not exist or is not a symbolic link AND /usr/texbin exists and is a symbolic link, then the preference setting is changed to /usr/texbin. In all other cases, the setting is not touched. • Engine files in ~/Library/TeXShop/Engines which define PATH have been modified by adding /Library/TeX/texbin to the path; /usr/texbin remains so these files will work on both old and new systems. • Macros in ~/Library/TeXShop/Macros have been modified to refer to /Library/TeX/texbin rather than /usr/texbin. (Only three occurrences of /usr/texbin were found in the old Macros file.)
Users do not automatically get the new macros, so users with an older system will not be affected. Users who installed MacTeX-2015 or BasicTeX-2015 can safely use the new Macros because they have the new link. • The English Help Panel has been modified to mention /Library/TeX/texbin rather than /usr/texbin. • In the current El Capitan beta, sync from source to preview switches to the correct pdf page, but does not highlight the new selection in yellow. This is fixed. • In the current El Capitan beta, the magnifying glass works but displays a bitmap rather than a sharp pdf image. I suspect this is an Apple bug which will be fixed. • The German localization has been improved by Lukas Christensen. Thanks! • Klaus Tichmann reported a synctex bug and a fix. The "synctex_scanner_get_name" function sometimes returns NULL rather than a valid string. This can lead to loss of mouse control. The fix is in version 3.52. ### TeXShop Changes 3.51 • TeXShop has two new macros by Michael Sharpe, tabularize and tabularize + space. These macros were suggested by Nils Enevoldsen and make it easier to construct and edit tables. To examine the Macros, go to ~/Library/TeXShop/New/Macros and copy the items tabularize.plist and tabularize.pdf to the desktop. The second is documentation for the macros. To add the Macros, select "Open Macro Editor" in the Macro menu. Then select "Add Macros from file...", which appears in this menu. Navigate to the desktop and select the file tabularize.plist. • Many missing German translations were added to the German localization by Lukas Christensen. • There is a minor fix to the Korean localization by Karnes Kim. • Latexmk was updated to 4.43. • The Help menu contains a short new document by Herbert Schulz, "TeXShop Feature Confusion". • There is a new item in the Edit Menu, "Correct Spelling Automatically", and a related new item in the Preferences Panel, "Correct Spelling". The menu item toggles spelling correction on or off for the topmost document. The Preferences item sets the default setting when a document is first created or opened. Spelling correction is a new feature inherited from the iPhone. When it is on, the Mac automatically corrects the spelling of misspelled words, and often suggests completions for words or phrases that are partially typed. Note that "check spelling" and "correct spelling" are different; the first underlines misspelled words in red, while the second actually changes the text. Many of us dislike spelling correction. When this system doesn't know a word, it can replace it with a new bizarre choice. For this reason, spelling correction is off by default until the Preference setting is changed. Spelling correction only works with Apple dictionaries. If cocoAspell has been installed on your system and one of its dictionaries is chosen, spelling correction won't do anything. • System 10.10.3 introduces a new emoji feature "Skin tone modifier" for ethnic diversity. Yusuke Terada's Character Info feature has been modified to support this feature. • Previously, typesetting in single window mode left the text side of the window active. This is fixed, and now the behavior is determined by the "After Typesetting" item in TeXShop Preferences. When this item is set to "Bring Preview Forward", typesetting in single window mode makes the preview side of the window active. • Yosemite 10.10.3, released on April 8, 2015, fixes the "preview blur" problem which cropped up when Yosemite was introduced.
The original system was optimized for the Retina display and produced blurry text in the Preview window on regular monitors. In 10.10.3, the display is again crisp on these monitors. Consequently, fixes for this problem introduced in TeXShop 3.44 and 3.45 are no longer needed. In particular, we recommend setting FixPreviewBlur to NO via the Terminal command defaults write TeXShop FixPreviewBlur NO ### TeXShop Changes 3.50 • Richard Koch's email address changed from koch@math.uoregon.edu to koch@uoregon.edu. All occurrences of this address in TeXShop were changed. • A TeXShop Preference item allows the user to set the size and location of the console window when it opens. This item has a button titled "Set with current position." In 3.49, this button was always active, even though it only makes sense if the option "All consoles start at fixed position" is selected. Now it is active only in this case. This behavior is consistent with similar Preference item behaviors for the Source and Preview windows. • Terada Yusuke updated the OgreKit spell panel to the latest version. In the process, he added features and fixed bugs: • There is a Chinese localization by Wei Wang, onev@onevcat.com. • When rich text was pasted to OgreKit, the style was mistakenly pasted. This is fixed. • Previously the Japanese yen mark was incorrectly passed to OgreKit. This is fixed. • A strange behavior of OgreKit when using OS X Spaces, pointed out by Daniel Grieser, is fixed. Previously the Preview and Source windows were attached to a space, but the Find window floated to whatever space was currently active. ### TeXShop Changes 3.49 • TeXShop installs two movies for beginners in ~/Library/TeXShop/Movies/TeXShop. These movies are quite large, 5.4 MB and 9.4 MB as gzipped files. Until version 3.49, every update of TeXShop contained these movies, slowing download times. TeXShop 3.49 finally does the right thing; it no longer contains the movies. If the user asks to view one of them, TeXShop downloads that movie from the web site, installs it in the above location, and runs it. Once downloaded, the movie remains on the user's machine. This simple change reduces the size of the TeXShop download from 54 MB to 39 MB, and the size of the unzipped program from 83 MB to 53 MB. • The Experiment feature of TeXShop, introduced in version 3.37, did not handle the "% !TEX encoding = ..." line in the header of the source file correctly, so users with UTF-8 files ran into trouble using the feature. This is fixed. • The Source <—> Preview menu item and associated keyboard shortcut now work in Single Window mode. • In the French Localization, the item in TeXShop Preferences to set the font of the source file was broken. This is fixed. • Three hidden preferences were added, allowing users to change the yellow highlight color to another color, when used by inverse sync from the pdf window to the source text. This is helpful if the user has changed the default colors of the source window. The new items are defaults write TeXShop ReverseSyncRed 1.00 defaults write TeXShop ReverseSyncGreen 1.00 defaults write TeXShop ReverseSyncBlue 0.00 • Latexmk is updated to version 4.42. • MakeIndex failed on files with multiple dots in the name; now fixed. • A new TeXShop Preference item under the Console tab allows users to set the default size and location of the Console Window. To set these values, open a document and use TeXShop’s “Show Console” menu to bring the console to the foreground. Adjust its size and location as desired.
Then open TeXShop Preferences, select the Console tab, choose “All consoles start at fixed position” and press the “Set with current position” button. Then click “OK.” ### TeXShop Changes 3.48.1 • Version 3.48 had a serious problem when used in French. Double clicking on a TeX file opened TeXShop, but not the source. This was caused by a missing connection in the French nib file. This was almost immediately corrected in 3.48.1. ### TeXShop Changes 3.48 • Currently the keyboard shortcut Cmd-F brings up the Find Panel. There are three panels, depending on a Preference choice. In 3.48, the command brings up the Find Panel unless the Preview Window is active. In that special case, Cmd-F opens the drawer for the Preview Window and places the cursor in its Search Field. This simple change was suggested by Markus Gail. • Sparkle has been updated from version 1.5 to version 1.8. The TeXShop Preference Pane now has items in the Misc tab controlling the action of Sparkle. Users can choose to update manually with the "Check for Updates" menu. In that case, there will be no Sparkle notification of new updates. Or Sparkle can search for updates daily, weekly, or monthly, and notify the user when an update is available. • There is another fix for the Yosemite scroll bug in Single Page and Double Page modes. • Updated to latexmk 4.40h. ### TeXShop Changes 3.47 • The height of the Templates toolbar item for the Source Window was increased by one pixel in most localizations to make it match other items in the same toolbar. • The code for keyboard shortcuts in the Preview window has been revised. TeXShop has always enhanced the left and right arrow keys to provide paging if there is no horizontal scroll bar. This code is still present. Otherwise, recent versions of TeXShop tried to fix a Yosemite bug in which scrolling paged in the wrong direction. This bug occurs in Single Page and Double Page modes, but not in Multiple Page or Double Multiple Page modes. The new code only fixes the bug in Single Page and Double Page modes, reverting to the App Kit code in the non-broken modes. Moreover, the new code only fixes the bug in Yosemite and above. If Apple fixes the bug, the fix can be turned off completely, using the hidden default defaults write TeXShop YosemiteScrollBug NO • If TeXShop is quit without closing windows, the windows will reappear when TeXShop is restarted. Initially, only the Source Window was restored, but in recent TeXShop versions the Preview window is also restored. Previously the Preview window was always restored to the main monitor, even if the user had multiple monitors. This is fixed and now the Preview window appears on the monitor it was originally on. ### TeXShop Changes 3.46 • In Yosemite with Single Page or Double Page modes in the Preview Window, the arrow keys scrolled in the wrong direction. This was fixed in a previous TeXShop version. However, the fix partly broke the behavior of the up and down arrows in all modes. In 3.46, the old behavior for up and down arrows returns; these keys scroll up or down one line until coming to the end of a page, and then page. • In previous versions of TeXShop, each typesetting job caused a slight adjustment in the portion of the page displayed. Typically, the window would gradually creep upward. This adjustment seemed to depend on magnification, the portion of the page displayed, and more. We tried various kludges to minimize the creep. It looks like Apple fixed this bug in the latest Yosemite, so we have removed the kludges. 
Preliminary tests show that creep is greatly reduced. Notice that the problem will definitely remain in earlier versions of OS X. Upgrade to the latest version if possible. • When TeXShop runs with Auto Saving active, some items in the file menu are modified automatically by Apple. These items are labeled Save As, Duplicate, Rename, Move To, Export, Revert to Saved, and Revert. In particular, Revert allows users to activate a "Time Machine"-like window and revert back to an earlier version. Some of these items do not work in Single Window Mode, and Revert causes TeXShop to crash. So (at least temporarily) the items are all disabled in Single Window Mode. Obviously they continue to work in the usual mode in which the source and preview are placed in different windows. This crash has never been reported, and we suspect that Single Window users seldom try these items. Note that "Save" is unaffected by the change, and often used. Warning: Apple has experimented with the placement of these items; in Lion, for instance, Revert was on a pull-down menu activated from the window title. We have not gone back to earlier versions of the operating system to make corresponding fixes. • There are minor improvements in the Japanese, Korean, and Italian localizations. • OgreKit has been updated to the latest version. Also OgreKit is now localized for Korean. • Yusuke Terada fixed "Remember last position" for the Console window. This is complementary to a previous addition of "Always Open Console at a Fixed Position". • Fixed minor toolbar glitches pointed out by Herbert Schulz: "Page" and "Scale" were improperly clipped in some localizations including English. • Added a new encoding option: Korean (Windows, DOS) ### TeXShop Changes 3.45.2 • A new TeXShop Preference item under the Console tab allows users to set the default size and location of the Console Window. To set these values, open a document and use TeXShop’s "Show Console" menu to bring the console to the foreground. Adjust its size and location as desired. Then open TeXShop Preferences, select the Console tab, choose "All consoles start at fixed position" and press the "Set with current position" button. Then click "OK." • The Source <—> Preview menu item and associated keyboard shortcut now work in Single Window mode. ### TeXShop Changes 3.45.1 • In 3.45, the Trash-AUX command did not work if some directory names in the full path to a file contained spaces. This is fixed. ### TeXShop Changes 3.45 • The left column of the Macro Editor window was essentially blank, showing separator images but no text. This error was caused because TeXShop 3.44 was compiled on Yosemite. The same source code compiled on Mavericks worked on both Mavericks and Yosemite. A workaround was found and the source now compiles and works on both Mavericks and Yosemite. • The command to trash AUX files had a bug and did not fully work. The problem is now fixed. • Karnes Kim produced a Korean localization of TeXShop, which is now part of the program. • Yusuke Terada provided tweaks and fixes for the following: • The hidden preference setting defaults write TeXShop FixPreviewBlur YES has been improved so that it works if users zoom the PDF by pinch gestures with the Magic Mouse or trackpad. • In TeXShop 3.41, the maximum possible PDF window magnification was increased from 1000% to 2000%. But a few constants in the code still contained the 1000% limit. These have been changed to 2000%. • A small error prevented the fix for remembering the PDF Window location when quitting.
This is fixed. • In the Macro Editor, dashes and quotes were automatically changed to smart dashes and smart quotes, breaking some AppleScript macros. This is fixed. ### TeXShop Changes 3.44 Version 3.44 has the following changes: • Several users reported a crash just after typesetting when TeXShop ran on Yosemite. The users told me that only certain documents caused crashes, and under unusual circumstances. For instance, some documents caused crashes if typeset twice in succession. By looking at crash logs, I could see that all these crashes occurred at the same spot in the code, but at first I was unable to reproduce these crashes. Finally a user named Tim Leathart reported a crash, sent the source for a document creating the crash, and gave very precise instructions about producing the crash. Using this information, I was able to crash my machine and thus find the bug. It turned out that only documents using hyperref crashed, and the crash was caused by incorrect code in TeXShop used to process the outline that such documents often contained. It was not necessary to open the Preview drawer and view the outline to get the crash. The code is fixed. • On Yosemite using a Preview window configured for "Single Page" or "Double Page" display, the keyboard shortcuts to page up or page down were reversed. This bug was noticed by Yusuke Terada and fixed. There are no changes in "MultiPage" or "Double MultiPage" modes since Yosemite behaves correctly for these modes. • Autosave for the Preview window was broken and is fixed. • The next two items discuss hidden preferences to fix minor TeXShop annoyances. Before adopting these hidden preferences, users should determine whether they find the current behavior annoying. In Yosemite, the source code window's scroll controls have some elasticity, so the source bounces slightly at the top and bottom of the document. Yusuke Terada noticed that these bounces sometimes obscure the first or last lines of the document, making it difficult to edit these lines. Others of us do not have this problem. Yusuke Terada added a hidden preference to turn off this elasticity. To activate the hidden item, type the following in /Applications/Utilities/Terminal: defaults write TeXShop SourceScrollElasticity NO • Herbert Schulz noticed that during scrolling of the source window, the line numbers scroll at a slightly different rate and require a fraction of a second to catch up. Yusuke Terada pointed out that this is due to a fix implemented in Lion for a Lion bug affecting line numbers. Apparently this line number bug has been fixed by Apple in Yosemite. A hidden preference is available to turn the fix off. When this is done, line numbers and text scroll in unison except near the ends when source elasticity applies. By turning off source elasticity as well, the line numbers and text scroll together: defaults write TeXShop FixLineNumberScroll NO defaults write TeXShop SourceScrollElasticity NO • One user discovered that "Aggressive Trash AUX", which is activated by a hidden preference setting, could remove items in hidden ".git" directories and thus impact repository management systems. In version 3.44, TeXShop does not search hidden directories in aggressive mode. • Finally, a number of users switching to Yosemite reported fuzzy preview images and inadequate antialiasing on certain monitors. I'm told the image looks fine on Retina displays including the new 27 inch iMac.
Perhaps Apple optimized pdf rasterization for these displays without worrying enough about other monitors. Frankly, I am just barely able to notice a difference myself, using Apple's 27 inch Thunderbolt Display. A few users sent me email with pairs of png files, showing an image under Mavericks and under Yosemite and pointing out the fuzzy result on Yosemite, but in some of these cases I couldn't see the difference. So the problem seems to depend strongly on the computer screen used, and perhaps on user sensibilities. Some users may be completely happy while others are desperate for fixes. Version 3.44 of TeXShop has several hidden preference items which may help with this problem. There is no universal solution, so experimentation with these preference settings will be needed if you find the display fuzzy. TeXShop and many other graphical front ends for TeX and PDF display use Apple's PDFKit and Cocoa frameworks. These frameworks rasterize pdf images at an extremely low level not accessible to programmers. Version 3.44 tries to expose all the routines in Cocoa which could modify this rasterization. Notice that TeXWorks and Adobe Acrobat do not use PDFKit and Cocoa and thus behave differently. It does little good to call these programs to our attention since switching to a different pdf display library would betray a key feature of TeXShop (and other programs using these frameworks), namely that they are fully native applications. With these caveats, let us list possible solutions. Yusuke Terada noticed that in Japan, the display could appear fuzzy and then be made legible by tweaking the magnification. So he added code in TeXShop to do this automatically each time a pdf file is opened or typeset. To turn this feature on, type the following in Terminal: defaults write TeXShop FixPreviewBlur YES If this tweak fixes the problem, leave it on and stop reading. Otherwise turn the feature off by using the same command with YES replaced by NO, since the tweak is likely to interfere with remaining experiments. Apple's System Preferences in the General Pane has an item labeled "Use LCD font smoothing when available." A few users discovered that turning this item off cured fuzzy behavior. I think this fix won't help most users, but it might be worth a try. TeXShop also has a preference item under the Preview tab labeled "Smooth text and line art." This item was originally added to fix a different problem. One user created an illustration with very thin lines. On a previous TeXShop, the lines vanished with regular monitors, although they appeared with the Retina display. The user discovered that the lines appeared in Acrobat, and by turning on antialiasing they also appeared in TeXShop. The code provided by Cocoa to turn on antialiasing has additional features not exposed in previous versions of TeXShop. Cocoa provides the ability to set the level of antialiasing. The previous "Smooth text and line art" preference set this value as "high". In TeXShop 3.44, hidden preference settings can select the interpolation level. To test various levels of antialiasing, turn on "Smooth text and line art" in TeXShop Preferences and then set the hidden preference defaults write TeXShop InterpolationValue 3 where the final value can be any integer between 0 and 4. Apple's API documentation provides the following names for these values, which perhaps give a hint of their function.
The list is as follows; the strange reversal of 3 and 4 occurs in the official list: 0 = NSImageInterpolationDefault 1 = NSImageInterpolationNone 2 = NSImageInterpolationLow 4 = NSImageInterpolationMedium 3 = NSImageInterpolationHigh Frankly, I suspect an entirely different solution will be best for most people. That solution is to change the font used for your TeX document, via the wonderful macros written by Michael Sharpe and added to TeXShop late this summer. For detailed explanation, read the description below under "TeXShop Changes 3.39." Notice that it is possible to add Michael's commands for one particular font to your default template, so that font will always be used for new documents. All of these fonts are included in the full TeX Live distribution, so using them should cause no trouble when collaborating with Windows or Linux users. The font commands take up four or five lines in the preamble, and are easily discarded once the document is complete if you want the final document to have a plain vanilla look. On the other hand, Michael's choices come from an expert and may satisfy more readers than the previous default font choices. ### TeXShop Changes 3.43 Version 3.43 has the following changes • Fixed a bug after drawing a selection rectangle in the Preview Window. If this window became inactive and then was activated again, drawing into the window would become erratic or worse. Now when the window is deactivated, the selection rectangle is removed. • TeXShop has a hidden preference reversing the order of the source and preview windows in single-window mode defaults write TeXShop SwitchSides YES After selecting this preference setting, use "Customize Toolbar" with the single window active to rearrange tools appropriately. • Yusuke Terada fixed Automatic UTF-8-Mac to UTF-8 Conversion, which is used mainly in Japan, so it correctly opens root file documents. • Yusuke Terada also fixed a couple of cases when TeXShop commands created a "ghost window" with no content, and made (command+1) switching source window also work on console windows. ### TeXShop Changes 3.42 Version 3.42 has the following changes • In Split Full Window mode, additional toolbar tools are provided: the "Scale" tool, the "Color Index" tool, and the "Key Bindings" tool. • Toolbar items for this window now have reasonable localized names, rather than the very technical names in version 3.41. • Various bugs in the Split Full Window mode are fixed. • The Spanish localization temporarily changed to Chinese. This is fixed. • The Macro menu failed to work in Split Full Window mode if the user switched to another window and then back to the split window. This bug is fixed. • In Full Window mode, if the text portion was split, both the Macro menu and the Sync between Preview and Source always used the same piece of text rather than switching when the other piece of text was made active. This is fixed. • Yusuke Terada improved the new "Character Info" command from 3.41. He writes • National Flags: An emoji character of a national flag consists of 2 unicode characters. For example, 🇺🇸 consists of U+1F1FA (REGIONAL INDICATOR SYMBOL LETTER U) and U+1F1F8 (REGIONAL INDICATOR SYMBOL LETTER S), and 🇯🇵 consists of U+1F1EF (REGIONAL INDICATOR SYMBOL LETTER J) and U+1F1F5 (REGIONAL INDICATOR SYMBOL LETTER P). Thus, they should be displayed as "a letter consisting of 2 characters", but it is displayed as "a letter consisting of 4 characters" in the current version of TeXShop. 
This is because the regional indicator symbol letters are positioned outside the BMP (Unicode Basic Multilingual Plane). I fixed this issue so that these national flags are correctly displayed as "a letter consisting of 2 characters". • Support for Ideographic Variation Selector (IVS) In Japan and China, some kanji characters have multiple variants. For example, both 神 and 神󠄀 have the same meanings (they mean "God") and the same readings. They are considered as variant forms of the same kanji. IVS, Ideographic Variation Selector, is used in order to distinguish them. For example, the single unicode character U+795E brings the output "神". On the other hand, the sequence "U+795E U+E0100" brings the output "神󠄀". U+E0100 itself is not a letter. This is a variation selector. In Unicode, there are variation selectors like this in U+FE00..U+FE0F (Variation Selectors), U+E0100..U+E01EF (Variation Selectors Supplement), U+180B..U+180D (Mongolian Free Variation Selectors). In the current version of TeXShop, the character info of "神󠄀" is displayed as "a letter consisting of 3 characters". I modified TSGlyphPopoverController.m for IVS support so that this is displayed as "CJK UNIFIED IDEOGRAPH-795E (Variant)", like "RED APPLE (Emoji Style)". ### TeXShop Changes 3.40 and 3.41 Version 3.40 was never released. Version 3.41 has the following changes • When editing a document, it is possible to place the source view and preview view in a single window rather than dealing with two windows for these views. This is particularly useful when using the "full screen" option, since the screen will contain both source and preview views. To implement this feature, a number of minor changes were required throughout the TeXShop source code. The initial implementation is therefore rather conservative. At the moment, for instance, no Preference item is provided to automatically open documents in this form. If a document has both source and preview windows, activate the source window and then select the item "Use One Window" in the Windows menu. The two windows will be replaced by a single window containing both views. With this window active, select "Use Separate Windows" to replace the single window with a pair of windows. The first time this happens, the single window will not have an appropriate size and position. Resize it appropriately. TeXShop will remember this size and position, but it is even better to carefully select an optimal size and position, and then with the window active choose "Save Source Position" in the Source menu. TeXShop will use that size and position in the future. A source window cannot be converted to single window form unless the corresponding preview window also exists. If a project has a root view and additional chapter views, only the root view can be converted to single window form. Thus the new feature is not likely to be useful for projects divided in this way. • In previous versions of TeXShop, duplicate copies of many documents in the Help menu were placed in the Localized directories of TeXShop, even though these documents had never been Localized. This duplication is cleaned up in version 3.41, reducing the size of the program by almost a third. The English versions of the documents will still appear when TeXShop is used in another language. • Yusuke Terada provided a new menu item in the Edit menu called "Character Info." To use it, select one or more letters in the source file and then select the menu item. 
If only one letter is chosen, a balloon will appear showing a magnified version of the glyph, its full unicode name, and its unicode character. When glyphs require two or three characters, all required characters will be displayed. If several letters are selected, the balloon will list the number of characters, words, and lines in the selection, and show the unicode character for each glyph. Below are sample balloons provided by this feature: • Christian Icking discovered that TeXShop became inoperative or crashed when opening a file containing % !TEX root = but with no non-space characters after the equal sign. This bug is fixed. • Typesetting for "Experiment" is more robust. It now uses the Preference settings for pdflatex and simpdftex latex and the personallatex setting. If the user's pulldown menu is set to latex, the three choices of 1) pdflatex, 2) tex + dvi, 3) personal script are obeyed. This should make the command work for TeX in Japan. • The commands Next Source Window and Previous Source Window in the Window menu again work, and do not bring up "ghosts" of closed windows. • In the previous version, the Indent and Unindent commands were modified to insert spaces rather than tabs in the source. A few users preferred tabs. There is now a preference item for these users; they should check "Use Tabs" in TeXShop Preferences, under the Source tab. • The Preference command to select the source font was broken when TeXShop was used in English, due to a missing connection in the English Localization File. This is fixed. • Yusuke Terada doubled the maximum magnification level allowed for PDF files, so it is now possible to look at very small details of the preview output. • Yusuke Terada provided new Preference settings under the Miscellaneous tab for projects with a root file. When opening a chapter file, TeXShop previously also opened the root file and then immediately miniaturized the root source. Users can choose to avoid this miniaturization, or to avoid opening the root file at all. In the second case, the root will open when needed, for instance during typesetting. ### TeXShop Changes 3.39 • The most significant change in 3.39 is the addition of Michael Sharpe's "Recent TeX Fonts" document and associated font Macros. Michael and I attended the TeX User Group meeting in Portland, Oregon at the end of July, 2014. I knew him as an AppleScript expert; several of his scripts are in TeXShop. Other macros from him will appear in a future version. To see his scripts, go to https://dl.dropboxusercontent.com/u/3825336/TeX/index.html. Pay particular attention to macrocopier.zip at this location, a stand-alone program which makes it easy to maintain and extend TeXShop macros. At the TUG meeting, I discovered that Sharpe is a font expert widely known to users on other platforms. TeX Live contains a very large number of TeX fonts, but it is not that easy to use them. Most font sets don't have mathematical symbols, and it becomes a design task to find pleasing combinations of fonts for text, sans-serif sections, and mathematics. Sharpe wrote a document called "Recent TeX Fonts", now available in the TeXShop Help menu. This document describes a number of pleasing font combinations, one per page. Each page lists the features of a set, provides an extensive sample of text and mathematics typeset using it, and contains the exact LaTeX code needed to use the font set. These sets are the result of extensive work by Sharpe; I understand that some of them took four months to perfect.
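For illustration only, the implementation block on such a page might look roughly like the following; this particular Times-like pairing is merely an example and is not necessarily one of Sharpe's exact combinations:

```latex
% Illustrative preamble block: a text font plus a matching math font.
\usepackage[T1]{fontenc}
\usepackage{newtxtext}   % Times-like text font
\usepackage{newtxmath}   % matching math font
```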
One way to use the document is to select an article or book written with standard fonts and copy Sharpe's implementation section into the document's header and retypeset. To make this even easier, Sharpe slightly modified the LaTeX template which comes with TeXShop, defining a section in the header bounded by %SetFonts comments. The space between two such comments can be empty when the document is originally written. Sharpe defined three macros called "GetFontSet," "SaveFontSet," and "TestFontSet." The first of these brings up a small dialog listing known font sets. When one is selected, its implementation code is written to the source document between the %SetFonts comments, replacing other implementation code there. So with one click and one typeset, the document can be seen written with a new set of fonts. Users can also put their own implementation code between the %SetFonts comments. The SaveFontSet macro reaches between the comments and saves the implementation code to a file in ~/Library/TeXShop/bin named "SetFonts", which is used by the GetFontSet macro to list known font combinations. Thus "SetFonts" gradually builds into a library of known font combinations. I'll let the TestFontSet macro speak for itself. To use these Sharpe additions, it is necessary to use the full TeX Live as installed by MacTeX, because BasicTeX doesn't contain many fonts. All the font sets defined by Sharpe have been tested and are available with TeX Live 2014, except two. The garamondx font has a license permitting free personal use provided the font is not sold. This font is on CTAN, but it cannot be in TeX Live because TUG sells a DVD containing TeX Live. However, a script named getnonfreefonts is available to download and install this font. See https://www.tug.org/fonts/getnonfreefonts/. The "SetFonts" template also has a Lucida entry. Lucida is a commercial font, sold by TUG and others. See https://www.tug.org/store/lucida/index.html. Many users own this set, and Sharpe's detailed and non-trivial code to use them will help them obtain the most from the fonts. It is our hope that the existence of these easy techniques will lead to more LaTeX documents that don't scream out "I was written with \TeX," and instead have a professional printed look. • TeXShop has a new icon by Thiemo Gamma in Switzerland, designed for Yosemite. Gamma also redesigned the dialog which appears when you select the menu item "About TeXShop". The story of TeXShop icons is complicated. The original icon was designed by Jérôme Laurens. Jérôme also designed the TeX Live Utility icon, and I like both a lot. When the Retina display appeared, it was necessary to redraw the icon, but unfortunately source code for Jérôme's icon did not exist. Other users then contributed revised icons, which I didn't quite like. So I tried myself and discovered that making icons is really, really hard. Eventually William Adams managed to create a high resolution version of Jérôme's icon and I like it a lot. Adams also provided a document icon. But Apple revised the look of icons in Yosemite. I intended to ignore the new design. Then completely unexpectedly, Gamma sent a new icon. I think it is wonderful for a reason I'll explain in a moment. Gamma is extremely modest about it. When he learns how difficult it is for others to make icons and how much in demand good designers are, he'll have a happy life. 
My goal for TeXShop has always been that it should vanish into the background, allowing writers to concentrate on the document they are writing without distraction. Sometimes this goal is met, and sometimes not. Gamma's icon is a symbol of this goal. It is simple and subdued, sitting there in the background. TeXShop still contains Adams' icon for documents. The source code also still contains his program icon. • TeXShop 3.38 has a significant bug, first discovered by James Crippen. When a source document is long and the preview window is active, clicking on the source window leads to a several second delay before the source window becomes active. This bug was caused by a single line of code, added to help a few users in Japan. The bug is fixed in 3.39. The line of code creating the bug was added for users who meet all of the following conditions: • They use Japanese input methods • They customize the background and foreground colors of the source window • They choose a dark color for the background These users can activate the bad line and live with the bug by typing the following in Terminal: defaults write TeXShop ResetSourceTextColorEachTime YES • Herbert Schulz provided a new version of the Help document TeXShop Tips and Tricks, and René Fritz provided a French translation. • The Brazilian localization in TeXShop 3.38 was mistakenly added by Koch to the localization for Portuguese in Portugal. This is fixed in 3.39. • Will Robertson asked that the TeXShop "indent" command insert spaces rather than a tab. This makes fine adjustments afterward easier. Version 3.39 gives Robertson his wish. • Yusuke Terada provided fixes for a number of small problems in TeXShop: The color matching options for copying from the Preview window, set in TeXShop Preferences under the Copy tab, were broken in recent versions. This is fixed. The routine which selects an image from the Preview window has been improved. Toolbar tips for mouse actions in the Preview window have been added and localized. • Yusuke Terada also fixed a more significant bug. If a particular font is set in TeXShop Preferences, either by the user or by default, but that font is no longer in the system, then TeXShop would refuse to run. This may explain some very obscure problems reported in the past. The bug is fixed. ### TeXShop Changes 3.38 • If the option key is down when a source file is opened, the associated pdf file will not be opened. • TeXShop is written in Cocoa, an object-oriented framework inherited from NeXT. Object-oriented programs create a large number of objects dynamically during program operation. "Memory management" is the task of disposing of these objects when they are no longer being used. If an object is disposed too soon, the program crashes when another part of the program tries to use it. If an object is left dangling and not disposed, memory gradually clogs up. Recently Apple introduced "automatic reference counting", a technology which leaves the memory management task to the compiler, allowing the programmer to ignore it. TeXShop adopted this technology in version 3.35, leading to increased stability and significantly fewer crashes. In reference counting, each object keeps a reference number counting the number of parts of the program using it. When a piece of the program is done with the object, that part sends the object a "release" message and the object decreases its reference count by one. When the count reaches zero, the object is automatically removed from memory.
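The next paragraph describes the one case the compiler cannot handle alone: two objects referencing each other. As a minimal Objective-C sketch of that situation and its weak-reference fix (the class names are hypothetical, not TeXShop's actual classes):

```objc
#import <Foundation/Foundation.h>

@class MyDocument;

// Hypothetical illustration of the weak-reference pattern under ARC.
@interface MyPreview : NSObject
// Weak back-reference: does not increase MyDocument's reference count,
// so the pair can still be deallocated once nothing else uses them.
@property (weak) MyDocument *document;
@end

@interface MyDocument : NSObject
// Strong (owning) reference from the document to its preview.
@property (strong) MyPreview *preview;
@end

@implementation MyPreview
@end

@implementation MyDocument
@end
```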
There is one situation which the compiler cannot handle automatically. Suppose object A is using object B and object B is using object A. Each object then has a reference count of at least one, and usually higher if the objects are being used by other parts of the program. Suppose now that the rest of the program is done with the two objects. The objects are sent a number of "release" messages, until eventually each has reference count one. Since the count is not zero, object A does not go away, so it does not send a release message to object B. Similarly ... The solution to this problem is to manually introduce a "weak reference" from one object to the other. We give object A a reference to object B, but only give object B a weak reference to object A. A weak reference does not increase the reference count. Thus when all other objects are done with the pair, object A has reference count zero, but object B has reference count one. So object A is removed from memory, and just before that happens it sends a "release" to object B. Now object B has reference number zero, and it too is removed from memory. TeXShop 3.38 completes the process of conversion to automatic reference counting, by correctly indicating weak references. Thus versions 3.35 through 3.37 could leave unused objects in memory, but 3.38 fixes that problem. ### TeXShop Changes 3.37 • Added a preference setting for the Preview window: "smooth text and line art." By default this is on. The setting was requested by Tom Burke, who created an illustration using GeoGebra which looked pixelated when displayed at small size in TeXShop. Mysteriously, the illustration looked fine under the magnifying glass, or when printed. It also looked fine on a Retina display, but in that case the circles and lines were very thin. To see what the setting does, turn the new preference setting off and open the file Burke.pdf with TeXShop. Notice that the illustration is pixelated, but looks fine under a magnifying glass. Then turn the preference on and open the same file again. This pdf file was created by typesetting the TeX source file Burke.tex together with the illustration abI3d8.pdf. The same preference setting is provided by Apple's Preview and other programs. Users with a Retina display may wish to turn it off. • Yusuke Terada added small changes for users in Japan. He wrote "during the time Hiragana is input until it is converted to Kanji, 'undecided characters' are displayed, but the source text color is not applied and they are displayed in black. This behavior has been modified." • A new item, "Experiment...", was added to the Edit menu and is available when the source window is active. This addition has been a recurring request of Wendy McKay. The item allows users to experiment with short, but complicated, fragments of TeX before copying the source into the main document. When the item is chosen, a panel appears. Type a TeX fragment into the panel, say $$\sqrt{x^2 + y^2}$$. Push the Typeset button at the bottom of the panel, and a second panel appears showing the result of typesetting the fragment. The fragment can contain anything: a displayed formula, ordinary text, several pages of mixed material. To typeset, TeXShop creates a new source file with the header of the current document up until "\begin{document}", the new fragment, and a final "\end{document}." This also works in a project with a root file. In that case the contents of the root file up until "\begin{document}" are used.
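As a rough sketch, the temporary file assembled for the fragment above might look like this (the preamble shown is only an example; TeXShop copies whatever preamble the current document actually contains):

```latex
% Hypothetical preamble, copied verbatim from the current document
% up to \begin{document}.
\documentclass{article}
\usepackage{amsmath}

\begin{document}
% The fragment typed into the Experiment panel.
$$\sqrt{x^2 + y^2}$$
\end{document}
```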
Both panels have close buttons. The "escape" key will also close panels when they are active.
Although the two panels do not have resize buttons, they can both be resized. TeXShop will remember the new sizes and locations and use them the next time "Experiment..." is selected. The font in the source panel will be the default TeXShop source font. The keyboard shortcuts "command +" and "command -" work in the source panel to enlarge the text if desired. Key bindings and command completion are available in the source panel, but with one caveat. Command completion uses the tab key in the panel even if it uses the escape key for regular source, since the escape key in a panel closes the panel.
The "Experiment..." feature requires a latex-like engine. It will not work with ordinary plain tex. The source panel's Typeset button looks at the main source window's toolbar to determine a typesetting engine., and also uses the "% \$TEX engine = ..." mechanism if available at the top of the source window. If "Plain TeX" or "Context" is selected, nothing happens. If "bibtex" or "make index" are chosen, pdflatex is used. Obviousy "pdflatex, xelatex, and lualatex" can be used. The panel will try to use any user-defined engine selected, but some such engines may fail if they don't expect latex-like code or output pdf.
The preview panel understands mouse scroll commands and trackpad gesture commands to scroll and resize. It understands "command-shift +" and "command-shift -" to resize contents.
If you close a panel during work and later reopen it, the contents will be remembered. But the contents are lost when quitting TeXShop. It is assumed that the panels will be used for short fragments of work; when the user is satisfied, they will transfer the source to the main document using copy and paste. Panel contents are not auto-saved and cannot be manually saved except via the copy mechanism.
Each document has its own source and preview panels, so if you have multiple documents open, you could also have multiple source and preview panels open, leading to a confusing mess. I expect users to exercise common sense and only experiment with one fragment at a time. One way to avoid confusion would be to hide the panels when a document becomes inactive. I didn't want to do that because a user constructing a complicated example might want to temporarily open a second document and copy source from that document into the panel as a starting point.
### TeXShop Changes 3.36.2
• Fixed magnification on Yosemite
• Fixed sharing icon on source toolbar
• Fixed two broken IB connections
• Changed [NSApp delegate] to (TSAppDelegate *)[NSApp delegate] in two spots to make the code compile on XCode 6, as sketched below
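For context, a sketch of why that cast helps (the method name below is hypothetical): -[NSApplication delegate] is declared as returning id<NSApplicationDelegate>, so calling a method specific to TSAppDelegate on the result is rejected by the compiler unless the concrete class is spelled out.

```objc
// Hypothetical call site; TeXShop's actual code differs.
TSAppDelegate *appDelegate = (TSAppDelegate *)[NSApp delegate];
[appDelegate doSomethingTeXShopSpecific];   // method specific to TSAppDelegate
```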
### TeXShop Changes 3.36.1
This version was released with version number 3.36.1 to test the operation of adding a final digit to the version number. In the future, this will be used for "silent updates" in which a minor problem is fixed in the first hours of a release. In the past we did not change the version number for such silent updates, but from now on we add a decimal digit to the end of the version in both the TeXShop home page and the program. We do not update change documents for minor updates. Source files are always updated at the same time as the program, even for minor updates.
• The command-1 shortcut to switch from the Preview Window to the Source Window failed if the Preview Window was closed and then reopened. This is fixed. Certain other keyboard shortcuts were also affected.
### TeXShop Changes 3.36
This is a minor update to clear up issues in the important version 3.35 release. It has the following changes:
• The "remember and restore" operation has been improved to remember the position and size of both the Source and Preview windows for all open files and restore them the next time TeXShop is restarted.
Recall that holding down the option key while quitting closes and forgets all windows; in that case, TeXShop will restart with a clean slate. Similarly, holding down the shift key while restarting TeXShop will restart with a clean slate.
It would be possible to remember other window positions: Console, Log File, etc. For the moment TeXShop does not remember these positions, since the operation seemed more confusing than helpful.
• Herbert Schulz improved Command Completion by fixing bugs and controlling whether a forward or reverse search is done during the operation.
• Fixed a bug for "Edit the Key Binding File." Closing this dialog and then bringing it up a second time produced a blank dialog window. The bug was pointed out by Emerson Mello and is now fixed.
• The "Enter Fullscreen" menu item was not localized, but as soon as a source window opened, it switched to the user's language. Now it is localized even when no file is open.
• Protected against a rare bug in which a Notification was sent to a closed Source Window.
• Attempted to make TeXShop Icon behavior more reliable by providing the extension of the Icon file in the Info.plist, by changing UTI types belonging to com.tug to instead belong to org.tug, and by using Apple defined UTI's when available. The Apple mechanism linking icons with particular applications continues to have problems in the open source world, where a particular file type can be claimed by multiple applications.
### TeXShop Changes 3.35
The step from TeXShop 2 to TeXShop 3 marked a significant boundary; version 3 has 64 bit code rather than 32 bit code and was compiled on Lion. The step from TeXShop 3.26 to TeXShop 3.35 marks a second significant boundary; version 3.35 uses Automatic Reference Counting rather than manual memory management and is compiled on Mavericks.
When Mavericks appeared a year ago, magnification code used in earlier TeXShop versions broke. It was replaced in 3.26 with code which worked on Mavericks, but not on earlier systems. TeXShop 3.26 used the older magnification code on older systems.
Later versions of TeXShop were compiled on Mavericks. Then the new magnification code worked on earlier systems, but the old magnification code broke. So TeXShop 3.35 uses the new magnification code on all systems. However, over the last couple of weeks testers discovered that this code leads to obscure crashes on Lion, but not on Mountain Lion and higher.
Consequently, in version 3.35, both magnification in the Preview window and selection of rectangular regions in the Preview window are disabled on Lion. Users of Lion should upgrade to Mountain Lion if at all possible, since these features will be active again. Users who cannot upgrade should consider moving to version 3.26 because the two disabled features work with that version. But version 3.26 will not be further upgraded and TeXShop Lion users will be stuck there in the same way that Snow Leopard users are stuck with TeXShop 2.
TeXShop 3.35 has the following additional changes:
• The OgreKit Find Panel was upgraded to the latest version by Yusuke Terada, and additional bugs in it were fixed. About the changes, he wrote "This version is based on the latest OgreKit. The latest OgreKit adopts Onigumo as its regular expression engine instead of Oniguruma, which OgreKit used previously. The development of Oniguruma is now stopped, and Onigumo is its fork version. Ruby also used to adopt Oniguruma until Ruby 1.9, but it was replaced by Onigumo since Ruby 2.0. By using Onigumo, new regular expressions like \K, \R, \X can be used.
" I built the latest OgreKit specialized for TeXShop. In order to solve the problems in which OS X replaced ordinary quotes with smart quotes, etc, I disabled the meddling functions by default:
- Smart Copy/Paste
- Smart Quotes
- Smart Dashes
- Automatic Data Detection
- Automatic Text Replacement
- Automatic Spelling Correction
"A message from Juan Luis Varona Malumbres said: Yusuke, please let to me to explain a small problem with the Ogrekit Find Panel in TeXShop: The election Origin: Top/Cursor (in Spanish Origen: Principio/Cursor) works very well in English, but not in Spanish: it always does a 'Cursor' search. Can you fix it, please?
"I've fixed it today.
"I found another bug of OgreKit in Spanish environment. When you select some range in TeX source and do 'replace all' with OgreKit Find Panel, the entire document is set to the replacement scope, even if you choose "Selection" as the scope of search. This issue occurred only in the Spanish environment. I've fixed this issue"
• Previous versions of TeXShop allowed paths with a tilde in the TeX Binary Path Setting Preference. For instance, this setting could be "~/Library/TeX/texbin". But such paths caused problems in a small number of minor TeXShop features. A full audit of the TeXShop code was performed and now such tildes should always work in the setting.
• TeXShop now contains Michael Sharpe's GotoLabel Macro. Here is his description. "This is macro for TeXShop's Macro Menu, allowing you to bring up a list of labels containing specified text, and move to the chosen label.
"To install, choose Macros/Open Macro Editor... (in the TeXShop main menus) and then from the Macros menu, choose Add Macros from File. This brings up a file selector with which you may select GotoLabel.plist. This install the macro in the Macro Menu, where you may move it to any convenient position, and, if you wish, give it a hot key.
"This provides an alternative to adding an item to TeXShop's tags by inserting a line %:tag_name in the source. With a lengthy document with many labels, it seems advantageous to be able to filter the list of labels."
• If you open the Edit Menu --> Spelling and Grammar panel, you can set the dictionary used by the Spell Checker at the bottom of the panel. TeXShop now remembers this choice, and will automatically use it the next time the program is started.
Unfortunately another feature of TeXShop can interfere with this process, the optional "% !TEX spellcheck = " command at the top of a file. I suspect that many users don't use the "% !TEX spellcheck" syntax. They will run into no problems.
If no files containing "% !TEX spellcheck" have been opened since TeXShop was opened, and the user chooses a different dictionary, then that different dictionary will be remembered as the new TeXShop default dictionary.
But if the user chooses a different dictionary AFTER opening some files containing the "% !TEX spellcheck" line, then that choice will not be remembered when TeXShop closes.
So if you do not use the "% !TEX spellcheck" syntax, you can change the default dictionary used by TeXShop by opening the Spelling and Grammar panel and making the change. But if you sometimes use the "% !TEX spellcheck" syntax, the foolproof way to change the default dictionary is to open TeXShop without opening files, change the dictionary with the Spelling and Grammar panel, and quit.
### TeXShop Changes 3.27 - 3.34
Versions 3.27 - 3.31 were never released. An experimental version of 3.32 had a limited release, and 3.33 was never released. Version 3.34 is the first regular release with these changes.
The most important feature of release 3.34 is that TeXShop's source code was revised to support ARC and the program was compiled using Automatic Reference Counting. Thus memory management is now done by the compiler rather than by hand. This should make the program much more stable.
Version 3.34 has the following additional changes:
• A localization for Brazilian Portuguese is provided by Emerson Ribeiro de Mello.
• Spell checking now remembers a dictionary chosen in the "Show Spelling and Grammar" panel. This makes it easier to use the CocoAspell package.
CocoAspell adds dictionaries to Apple's Spelling System which understand LaTeX and thus do not claim that LaTeX commands are misspelled. Obtain the system at http://cocoaspell.leuski.net/. After installing, notice that an extra Spelling Preference Pane has been added to Apple's System Preferences. Select a dictionary and turn on TeX/LaTeX filtering. Then either select this dictionary in TeXShop's "Show Spelling and Grammar" panel, or select it globally in the "Keyboard" pane of Apple's System Preferences under the Text tab.
• In Mavericks, the code for selecting a region of the Preview file and the code for magnifying a region of the Preview file broke. This was fixed in an earlier version of TeXShop by using the old code for Lion and Mountain Lion, but using entirely new code for Mavericks. It appears, however, that the old code for Lion and Mountain Lion did not work. Now the Mavericks code is used in all cases and works on Lion, Mountain Lion, and Mavericks.
• Added Haskell literate script (lhs) as a type that TeXShop can edit, and that can be typeset and syntax colored
• The Pythontex engine and documentation were slightly altered
• Rene Fritz provided a new French translation of TeXShop Tips & Tricks
• TeXShop now contains Latexmk 4.40
• TeXShop has a menu command to convert tiff illustrations to png. This command uses "convert" if available, but otherwise uses Apple's "sips" program, which has been part of OS X for a long time. Thus the removal of "convert" from MacTeX-2014 will not affect TeXShop.
• In Multipage mode, each typeset produced a small upward creep in the contents of the Preview window for some users. This is slightly improved, though not yet completely fixed.
• In Single Page mode, typesetting produced a momentary flash in the Preview window showing the document's first page, immediately replaced by the current page. This is fixed.
• New code by Dirk-Willem van Guik to set the "Job Title" for print jobs.
• The "Show Log File" command has been improved. When this window is first opened, it shows the full log file. Twelve check boxes at the top set various queries to texloganalyser, a script in TeX Live which can pull various pieces of information from the log file. The Redo button then displays this information. For instance, texloganalyser can display all warniings, or all overfull boxes, or all fonts used in a document. Unchecking all boxes and pushing Redo again displays the full log file.
If "Show Log File" is selected while holding down the Command key, a dialog appears asking for an extension. If the user types the extension "aux", then the window will display the aux file rather than the log file. Since texloganalyser only works on log files, checking boxes will then have no effect.
• A new Preference item is provided: "Tags Menu in Menubar". When this item is selected, a duplicate of the Tags menu will appear in the menubar. The new item allows advanced users to hide the Source window's toolbar: use command-T to typeset, select the appropriate engine using text at the top of the source, use the menu versions of the Macro and Tags menus, and notice that "Split Window" is provided as a menu command in the Windows menu.
• New versions of the LilyPond and MetaPost engines are provided by Nicola Vitacolonna.
### TeXShop Changes 3.26
3.26 has the following fixes:
• The combination of Preference Settings "Configure for External Editor" and "Automatic Preview Update" is now compatible with App Nap in Mavericks. Thanks to Zhiming Wang for pointing out this problem and its connection to App Nap.
In Mavericks, the Get Info panel for many applications has a check box to turn off App Nap. TeXShop has no such box. The reason turns out to be that TeXShop is compiled with XCode 5 on Mavericks, and Apple assumes that applications compiled on Mavericks have been fixed to deal with App Nap. TeXShop is now fixed in the approved manner.
• Michael Sharpe wrote a beautiful document describing Applescript in TeXShop and problems that might be encountered writing new Applescript Macros. I have added that document to the Help Menu. In addition, I added Michael to the list of TeXShop contributors, because his web page contains a number of AppleScript corrections which I have adopted in the Applescript Macros shipped with TeXShop.
• The following four Macros had problems with recent versions of OS X. Fixed versions by Michael Sharpe are included in this release.
Open Quickly
Insert Reference
New Array
New Tabular
• There is a new arara engine by Alan Munn in ~/Library/TeXShop/Engines/Inactive/Arara. The arara program is part of TeX Live 2013. Documentation for it is included in the above folder. Roughly speaking, it is a replacement for both latexmk and parts of the TeXShop engine mechanism.
• Small changes were made in the Japanese localization by Seiji Zenitani
### TeXShop Changes 3.25
TeXShop 3.25 contains the following fixes:
• Version 3.24 contained a new version of Sparkle, but unfortunately not the latest version. The version in 3.24 didn't have a Japanese localization. Terada Yusuke found and compiled the latest version, now in 3.25.
• A few users report crashes when quitting TeXShop or closing a file. I was never able to reproduce this crash. Terada Yusuke made a partial fix, so the number of such crashes should diminish.
• The default preference settings for users in Japan have a small fix for Kpsetool.
• Some locations referred to the old TeXShop web url, http://www.uoregon.edu/~koch/texshop/texshop.html. These have been changed to the new url, http://pages.uoregon.edu/koch/texshop/texshop.html.
• In version 3.17, the old Search sync method broke when syncing from the Preview page to the Source page. So it was disabled in version 3.18, and users were told to switch to the modern SyncTeX method. This is still recommended. But the Search sync method is now fixed, and will be used in those rare cases when SyncTeX does not find a match.
### TeXShop Changes 3.24
There is only one change. The Sparkle upgrade mechanism in previous versions of the TeXShop 3 series does not work on Mavericks. This is fixed in 3.24.
### TeXShop Changes 3.22 and 3.23
TeXShop 3.23 fixed one bug. In 3.22, if the user configured TeXShop to use an external editor and then set the hidden preference ExternalEditorTypesetAtStart to YES, the program crashed when opening a file. This is fixed in 3.23.
Version 3.22 has the following changes:
• If the option key is pressed, the "Typeset" menu changes to "Trash Aux & Typeset". Selecting it trashes aux and related files before typesetting. Recall that command-T is a keyboard shortcut for "Typeset". Similarly option-command-T is a keyboard shortcut for "Trash Aux & Typeset." This ingenious addition was suggested and made by Paul Smyth.
• There is a new engine for pythontex in ~/Library/TeXShop/Engines/Inactive. This folder also has a README explaining how to install pythontex, and a tex source file from the author of pythontex which can be used to test the engine and simultaneously see some of the features of pythontex in use.
• There is a new engine for Sage in ~/Library/TeXShop/Engines/Inactive/Sage. The method of installing sagetex.sty in TeX Live explained in "About Sage" in the Sage directory has also changed. These changes were suggested by Daniel Grambihler and improve the ease of use of Sage in TeX.
There were two basic problems with the previous method. First, the TeX file sagetex.sty changed with the version of Sage and thus had to be reinstalled each time Sage was updated. Second, the engine file calls the Sage binary, which is inside the Sage program wrapper. But the authors of Sage distribute the program with a name that includes the version number, so the engine file had to be rewritten whenever Sage was updated.
Daniel Grambihler recommended that we ask users to change the name of the program to "Sage" whenever they update, and that we install a symbolic link to sagetex.sty in TeX Live rather than the actual style file. The result is that when Sage is updated, the engine file automatically finds the new binary and TeX Live automatically uses the new style file.
• TeXShop now opens and edits files of type fdd.
• There is a new hidden preference to set the line spacing in the source window:
defaults write TeXShop SourceInterlineSpace 1.0
Only values between .5 and 40.0 will be accepted. The standard line spacing is given by the default 1.0, and double spacing is given by the value 10.0.
• Nicola Vitacolonna made extensive changes to the Italian localization of TeXShop. Minor changes for other localizations were also made by their localizers.
### TeXShop Changes 3.19, 3.20, 3.21
Versions 3.19 and 3.20 were never released.
Version 3.21 has the following changes:
• Three bugs appeared when running on OS X Mavericks: the magnifying glass in the Preview window broke, rubber band selecting of a region of the Preview window broke, and sharing the png of the selected region broke. All three are fixed. The code which runs on Mountain Lion and below remains in place, so small changes in behavior will only be seen on Mavericks.
The magnifying glass has an interesting history. The original code used an unusual call in Cocoa. It broke in Leopard. This code was replaced by tricky code relying on another unusual Cocoa routine. This broke in Mavericks. The new Mavericks code in 3.21 is straightforward, drawing in a temporary transparent overlay view for both magnification and rubber banding.
• In Mountain Lion, sharing of a region of the Preview window provides a png of this region. In Mavericks and above, it provides a pdf of the region, which can be resized without losing detail.
• For a long time we have recommended that users new to TeXShop arrange the location and size of the Source and Preview windows on the desktop as desired. Most users prefer a side by side configuration with the Source window on the left and the Preview window on the right. Then activate the Source window and in the Source menu select "Save Source Position". Similarly activate the Preview window and in the Preview menu select "Save Preview Position". From that point on, all TeXShop source and preview windows will appear in the selected positions.
If you have a portable connected to a large monitor, this configuration works as long as you are attached to the monitor. But when you are traveling, the windows will appear on the screen of your portable, and probably not in ideal positions. TeXShop 3.21 has an extra configuration to fix this. Arrange a source and preview window on your portable screen in ideal position; the portable can be attached to the large monitor at the time. Then activate the source window and in the Source menu select "Save Source Position for Portable". Activate the preview window and in the Preview menu select "Save Preview Position for Portable". After this step, windows will appear in the desired position when you are connected to the large monitor at home or office, and windows will appear in the desired position on the portable screen when you are traveling.
It will do no harm to skip this portable configuration.
If you have multiple screens, Mavericks has the ability to start applications on any screen. Thus it may be convenient to use the "portable" configuration for a second screen even if you do not have a portable.
• The following bug was pointed out by Simon C. Leemann. Until 3.21, double clicking on a blank space in the source window selected the space and the words on either side of the space. This is fixed and now only the space is selected.
• TeXShop allows users to select alternate engines on a file by file basis using the syntax
% !TEX TS-program =
TeXworks and other programs use the similar command
% !TEX program =
TeXShop 3.21 now accepts this alternate syntax to specify an engine.
For compatibility reasons, the space between "%" and "!" is optional, but highly recommended.
• The next feature was requested by Alan Munn about a year ago. Apologies for the delay. In the meantime, Mark Everitt wrote an ingenious script to provide this feature. The feature provided by that script is now built into TeXShop.
The command BibTeX in the TeXShop typeset menu runs BibTeX; notice that this command has a keyboard shortcut. In Japan, a different program is used instead, so Yusuke Terada provided an item in TeXShop Preferences under the Engine tab to select the program to be run when this menu item is selected. Examples are bibtex, biber, pbibtex, etc. The Preferences item can also be used to add flags to the command.
In TeXShop 3.21, the BibTeX engine can be selected on a file-by-file basis using the syntax
% !BIB TS-program =
The alternate syntax
% !BIB program =
will also be accepted. The item after the equal sign gives the name of the program (for instance "biber") and any required flags. This line should be written within the first twenty lines of a source file.
The "% !BIB TS-program = " line takes precedence. If it is absent, the Preference item determines which command is run.
• TeXShop 3.21 contains latexmk 4.37.
• The following bug was pointed out by Basil Grammaticos, and is fixed in TeXShop 3.21. TeXShop has an AutoCompletion feature. If the user begins typing a phrase like
\begin{
and then presses the escape key, TeXShop will complete the phrase. If there are several completions, TeXShop will cycle through the possibilities each time the escape key is pressed. These completions are listed in a file which users can edit using the command "Edit Command Completion File" in the Source menu.
But if no completions are found, TeXShop reverts to a different autocompletion method which is built into Cocoa. This time, pressing the escape key opens a small window listing all possible completions. Click on one to complete the phrase. These completions come from the system dictionary.
The first kind of completion is likely to appear when typing a TeX construction, and the second kind appears when typing an ordinary word or phrase.
Basil pointed out that in many programs, the second completion list begins with phrases which already appear in the document. In earlier versions of TeXShop, these nearby phrases were missing.
This bug was caused by adding Autocompletion to TeXShop too early. If no completion was found, the code called the dictionary to provide an autocompletion list. Then Apple added a similar call to TextEdit, which first finds nearby phrases and then asks the dictionary for further words. TeXShop now calls that TextEdit routine.
• TeXShop now has a hidden command to turn off AutoSaving:
defaults write TeXShop AutoSaveEnabled NO
WARNING: This command will cause crashes on Lion, so it should only be used on Mountain Lion and above. The TeXShop developers use AutoSave. If you turn off AutoSave, you are entering untested waters.
• The Sage engine script was revised in TeXShop 3.21, and the Sage instruction document in ~/Library/Engines/Inactive/Sage was rewritten.
• Ulrich Bauer added a patch to TeXShop 3.17 for users working with a server. This patch was supposed to be inactive unless AutoSave was on. But a mistake in the code caused it to be used by a few people who didn't use AutoSave. This is fixed.
• Many users thanked us for Bauer's code, but it caused trouble for a few. There is a hidden preference to turn the patch off:
defaults write TeXShop WatchServer NO
• Finally, Yusuke Terada provided a number of changes for users in Japan.
• At the bottom of the TeXShop Preference window, a pull down menu labeled "Defaults" offers to change all preferences to their default values. The first item in this menu is labeled "Regular" and is for most users. The remaining items are for users in Japan, who interact with TeX in a variety of unusual ways. It contains six new items to replace the original two legacy items.
• An earlier version of TeXShop contained an automatic UTF-8-Mac to UTF-8 conversion. But this routine converted some Japanese Kanji characters to different glyphs. This is fixed by excluding certain characters from the conversion, as explained in the Unicode Consortium report http://unicode.org/reports/tr15/#Primary_Exclusion_List_Table.
### TeXShop Changes 3.18
Version 3.18 has only a single change:
TeXShop contains an obsolete sync method called Search Sync, and a modern replacement by Jerome Laurens called SyncTeX. In recent versions of TeXShop, the obsolete Search Sync from the Preview Window to the Source Window randomly hangs, making TeXShop unresponsive. This was supposed to be fixed in version 3.17, but it wasn't. Unfortunately, when the modern SyncTeX cannot find a match, it calls the old Search Sync, so SyncTeX can indirectly hang as well.
It is silly to waste time on an obsolete method, so in TeXShop 3.18, Search Sync from the Preview Window to the Source Window is disabled and does nothing. Most users will notice no change. Users who misconfigured SyncTeX will lose synchronization.
Users should check that
• in TeXShop Preferences under the Typesetting tab, the "Sync Method" is set to SyncTeX;
• in TeXShop Preferences under the Engine tab, the two configuration lines for "pdfTeX" each contain the following flags
--file-line-error --syncTeX=1
• in TeXShop Preferences on the same page, the two "TeX + dvips + distiller" lines contain the following instruction
--extratexopts "-file-line-error -synctex=1"
The easy way to do this is to push the four "Default" buttons beside these four entries.
### TeXShop Changes 3.17
Version 3.17 has the following features:
• In 3.15, we introduced a hidden default to fix the rendering of the Monaco font when used in the source window on Mountain Lion.
defaults write TeXShop NSFontDefaultScreenFontSubstitutionEnabled -bool YES
This fix does not alter the display of the Monaco font when used in the log window or the console. Another hidden preference can fix Monaco in these windows. This preference causes TeXShop to call the font routine [font screenFontWithRenderingMode:NSFontDefaultRenderingMode] and pass the resulting font to an AppKit object, although Apple's documentation says not to do this. So the new hidden preference should only be used if you cannot tolerate the Monaco font's default rendering.
defaults write TeXShop ScreenFontForLogAndConsole -bool YES
• The spotlight indexer has not been distributed with the version 3 series of TeXShop. Now it is again included. This indexer was written by Norm Gail with additions by Max Horn. It was recently revised by Adam Maxwell.
• Choosing the "Source <=> Preview" menu item in External Editor Mode caused a crash. This is fixed.
• The previous version of TeXShop introduced an improved version of latexmk. TeXShop contains two latexmk engines which are active by default: pdflatexmk.engine and sepdflatexmk.engine. The "About This Release" item in TeXShop 3.17's Help Menu explains how to update these engines.
• A fix in version 3.16 replaced a deprecated method in the "Search sync" routine with a modern equivalent. Unfortunately, this fix had a bug which could crash TeXShop.
Most users use the SyncTeX method to sync. But when this method fails to find a reasonable match, TeXShop reverts to the old Search Sync method, and thus could crash the program. The old Search sync method is now fixed.
If by chance there are still problems with Search sync, a hidden TeXShop preference can turn off reverting to it when SyncTeX fails to find a match:
defaults write TeXShop SyncTeXOnly YES
### TeXShop Changes 3.16
Version 3.16 has the following features:
• After editing the Command Completion File, users had to save twice before open documents acknowledged the changes (although quitting and restarting TeXShop used the changes after a single click). This is fixed.
• New latexmk engines are available in ~/Library/TeXShop/Engines/Inactive/Latexmk. These engine files allow you to place a platexmkrc file in the same folder as the source you typeset. This "project resource" file provides further latexmk configuration. For instance, you could use this process to force latexmk to use texindy rather than the default makeindex for a given project.
This improvement was suggested by Michael McNeil Forbes and adopted to latexmk in TeXShop by Herbert Schulz.
• When the Macro Editor is activated, a new menu item named "Save selection to file..." appears. This menu was broken and could not save files, but the problem is fixed. A similar problem with "Add macros from file..." is also fixed.
• A related problem was fixed in the menu command "Save Selection To File..." which saves a selection from the preview page to disk.
• The Synctex synchronization method worked in 3.15 and earlier. When this method fails to find a match, TeXShop reverts to the older search method of synchronization, but this was broken in 3.15. The earlier method is now fixed.
• When a new version of TeXShop first runs, it updates a few subfolders of ~/Library/TeXShop. Versions 3.12 through 3.15 were broken and updated these folders every time they started. This is fixed. Thanks to Yusuke Terada for the bug report.
• Users with an external Trackpad or builtin Multi-Touch Trackpad can use the "App Expose" feature by activating it in the Trackpad Preference Pane of Apple's System Preferences. Place three fingers on the pad positioned over the TeXShop Icon in the Dock or a TeXShop Window, and swipe down (you can configure this to use four fingers in System Preferences). Then only TeXShop windows will appear on the screen, and a list of hidden accessory files will appear along the bottom of the screen. The recent switch to UTI's has activated this feature of OS X.
### TeXShop Changes 3.15
Version 3.15 has the following features:
• The default editing font has been changed to Menlo 12, which Apple now recommends as a fixed width font. This will not affect old users, whose original preference setting remains.
Some users have chosen Monaco 9 or 10 as an editing font. This font may look somewhat fuzzy on Mountain Lion because Apple has optimized the text and font rendering routines for the Retina display. To get back to the old behavior, type the following command in Terminal. This is not recommended unless you are unhappy with the appearance of text in the edit window.
defaults write TeXShop NSFontDefaultScreenFontSubstitutionEnabled -bool YES
• TeXShop 3.14 began the process of switching from Apple's old style indication of document types in the TeXShop Info.plist to the new style using Universal Type Identifiers (UTI). The process is complete in TeXShop 3.15. The change involved extensively rewriting the Info.plist file, and replacing deprecated Cocoa file commands with modern equivalents. The change may improve system acceptance of the new high resolution icons by William Adams, but I still expect trouble and recommend the techniques outlined in the description of 3.14 to fix them.
• There are a few Japanese localization changes and a code fix by Yusuke Terada. Thanks.
• Small glitches have been reported when using magnification in the preview window. These glitches have been fixed. In case of remaining trouble, please give concrete details explaining how to reproduce the problem.
• The default encoding in TeXShop has been changed from MacOSRoman to ISOLatin9. This will not affect old users except at one minor spot.
To understand the change, recall a few encoding basics. A computer file is just a long sequence of bytes, each an integer between 0 and 255. Other data, including picture data and sound data, is encoded in this form when written to disk. The majority of computer files contain ordinary text. Text was originally encoded in Ascii format, which assigns a byte to each key on an American typewriter; Ascii only uses the first 127 bytes, so the bytes from 128 to 255 are available for other purposes. The Ascii encoding was later extended for use in Europe and elsewhere by adding accents, umlauts, and other characters to the upper 128 vacant spots. Many such encodings were invented, and a number of them are available in TeXShop. ISO Latin 9 is such an encoding. It encodes ascii characters in the first 128 positions, and all symbols commonly used in Western Europe in the upper 128 positions. ISO Latin 9 is essentially the same as the earlier ISO Latin 1, except that it includes the Euro currency symbol.
Eventually, the computer industry invented Unicode, which is theoretically capable of handling the symbols used in all of the world's languages. Internally, TeXShop and other Mac OS X programs represent and process text in Unicode. There is no standard Unicode encoding for writing to disk, so all Apple routines which read text from disk or write text to disk require an extra parameter listing the encoding to be used. A commonly used encoding for Unicode is UTF-8. It has the advantage that ordinary ascii files are legal UTF-8 files. The disadvantage of UTF-8 is that random collections of bytes may not contain legal UTF-8 code, so when the computer tries to open a file in UTF-8 which was written in another encoding, the computer sees garbage and returns nil. Encodings which extend ascii by adding symbols to the upper 128 places do not have this problem; if a file written with one such encoding is opened with a different encoding, the computer will not complain, but some symbols may appear with unexpected shapes.
TeXShop must deal with this design in two spots. When TeXShop is asked to open a file, it reads the first few bytes in MacOSRoman to check whether a " opens it in ISOLatin9.
Some users have requested that TeXShop's default encoding be UTF-8. Users can achieve this result by simply switching the default encoding to UTF-8 in TeXShop Preferences. UTF-8 is not the current default because I believe that many users have old files which were written with Ascii or some other earlier encoding. If these files contain straight ascii, they work fine as UTF-8 files. But if by chance a stray non-ascii character was entered by mistake, then users will be faced with a mysterious dialog when TeXShop reports that the file cannot be opened in UTF-8.
• In version 3.14 the command "Edit Command Completion File" did not display the file to be edited. Now it does again.
• When displaying the Preview Page in fullscreen mode, users can mouse to the top of the screen and select menu options to change the Page Style and ResizeOption. In version 3.15, these new choices are remembered in TeXShop Preferences, and thus will be used again even if TeXShop quits between sessions.
• TeXShop 3.15 contains a patch by Ulrich Bauer for file handling. This patch will be important for users working with version control or with a server which might change the source while it is being edited in TeXShop. For instance, one such user report stated "we are several authors on a paper and we use svn to keep the versions coordinated. If I have a version of the file in the editor and perform an svn update in the terminal, the file changes on disk. However, if I save or typeset, the local version in the editor gets saved and I get no conflict warnings!" With Bauer's patch, "an open document is monitored for external changes to the file, and updated automatically if an external change occurs." Thanks very much to Ulrich Bauer for this important change.
### TeXShop Changes 3.12 - 3.14
Versions 3.12 and 3.13 were never released. Some users downloaded beta copies of 3.12 to fix 3.11 bugs. Version 3.14 has the following features:
• New high resolution icons are provided for TeXShop itself, and for .tex and .pdf files. The icons are by William Adams.
The original TeXShop icons were made by Jerome Laurens; I like them. With the introduction of the Retina display, high resolution icons became essential. A few users sent me samples which I claimed I'd use. But the new icons were not easily recognizable on the screen. So I tried to create my own icons, and some users will have versions of TeXShop with these icons. This lesson taught me that I am incapable of creating icons.
Finally William Adams agreed to create icons closely following Jerome's original idea. I'm very happy with the result. The TeXShop icon itself has changed only a little. For TeX files, Adams was able to build on and improve Jerome's icons using high resolution techniques.
Thanks, William; having tried, I know it wasn't easy. And thanks Jerome for the original idea.
TeXShop has received small tweaks in hopes that OS X will pick up the icons, but it may be necessary to provide some help. Moving TeXShop into the /Applications/TeX folder will help the system notice the icons. Then select a .tex file, and click "Get Info" in the Finder. Go down to the "Open with" section and select TeXShop. Then press the "Change All" button. In one case on my system, a TeX source file was displayed in the Finder with an incorrect icon and no ".tex" extension. Adding that extension caused the Finder to associate the correct icon.
• On Mountain Lion, sharing support has been added. New sharing items are available for both the source window toolbar and the Preview window toolbar. It may be necessary to execute the menu command "Customize Toolbar" to obtain them.
If text is selected in the Source window when the Sharing item is pressed, the program will offer to share the selection. If no text is selected, the program will offer to share the entire source document. Similarly when a piece of text and/or illustration is selected in the Preview window, the program will offer to share the resulting graphic fragment. If there is no selection, the program will offer to share the entire pdf output file.
Only appropriate sharing venues will appear, depending on the selection. For instance, it does not make sense to post an entire pdf document to Facebook. In all cases, the program will share to Email, Messages, or AirDrop. Depending on the selection, it will also share to Facebook, Twitter, Flickr, and other venues. Note that some services must be activated in Apple's System Preferences before sharing can take place.
• On Mountain Lion, TeXShop opened an empty window when the user tried to use the program with an external editor, and also when the user opened a pdf, png, jpg, or eps file. This is fixed.
• On Lion and Mountain Lion, selecting a region of the Preview left garbage lines on the screen as the mouse moved. This bug is mostly fixed.
• Latexmk is upgraded to version 4.34. A new engine, sepdflatexmk, is available in the Inactive/LaTeXmk folder. This engine calls pdflatex with the --shell-escape flag, for users who need packages which call external programs during typesetting.
• Yusuke Terada fixed two bugs in 3.11. First, the encoding popup button was ignored in the open dialog; this is fixed. Second, problems in CommentOrIntentForTag were found and fixed.
• There is now a Preference interface to change the source text color. A preference to change the source background color was present in earlier versions.
• There are new metafun and metapost engines by Nicola Vitacolonna.
### TeXShop Changes 3.11
• The ConTeXt engines have been renamed. I promised to make this change a year ago, but checking MacTeX-2012 shortly before release, I found that the promise was ignored. The old and new names are
ConTeXt-MKIV.engine --> ConTeXt (LuaTeX).engine
ConTeXt-xetex.engine --> ConTeXt (XeTeX).engine
ConTeXt .engine --> ConTeXt (pdfTeX).engine
The new names make explicit the TeX program which will run the ConTeXt macros for that engine.
### TeXShop Changes 3.10
• The "--shell-escape'' flag has been removed from preference settings for pdftex and pdflatex. This flag presented security risks. Old users need to activate the change by selecting TeXShop Preferences, clicking the Engine tab, and pushing the "Default" buttons in the configuration section for pdfTeX and pdfLaTeX.
Recall that pdflatex can accept illustrations in several different formats, including pdf, jpg, and png. But it cannot accept eps illustrations used by many old TeX documents. The epstopdf package solved this problem by calling Ghostscript to convert eps files to pdf format automatically during typesetting. This package required --shell-escape and that is why previous versions of TeXShop set the flag.
Two years ago, TeXLive made conversion of eps files to pdf format easier and safer by introducing a restricted shell escape mode for pdflatex in which only a limited number of safe programs can be called during typesetting. This conversion was made automatic without including epstopdf, provided the graphicx package was included by the source document.
We could have dropped the --shell-escape flag at that time, but there was another reason to continue using it. Originally, pdflatex accepted tif and tiff files. Eventually this feature was removed, but it was possible to convert these files to png format during typesetting using /usr/local/convert from ImageMagick. Unfortunately, TeX Live does not label convert as safe because in the Windows world there is an unrelated program which presents security risks. TeXShop 3.10 solves this problem by introducing a new method to convert tif and tiff files to png format.
• TeXShop 3.10 has a menu command "Convert Tiff" which is active when a source window is active. This command opens a dialog which shows all tiff files in the folder containing the source file. Users can choose one tiff file or several. Push the "Convert" button to create png forms of all such illustrations. This calls convert from ImageMagick if present, and otherwise calls the native sips program.
• A new Latex Template is provided to reflect these changes. Old users can obtain this template by moving it from ~/Library/TeXShop/New/Templates to ~/Library/TeXShop/Templates.
• TeXShop 3.10 omits the Create Project Root menu item. Use the alternate "% !TEX root = " syntax instead. Old projects using Create Project Root will continue to typeset.
### TeXShop Changes 3.09
• When a pdf document is printed, TeXShop now selects Portrait or Landscape mode automatically. Moreover, "orientation selection buttons" have been added to the Print Panel, so the user can change the orientation if the auto selection mechanism fails. A "scale selection" was also added, so the user can rescale the document before printing.
• These printing changes also apply when printing TeX source. "Orientation selection buttons" and "scale selection" were added to the Print Panel.
• The split window command for the Preview window has been improved. The second portion now opens on the section of the document shown in the top portion rather than the top of the document. It has the same magnification as the top section of the window. Finally the magnification toolbar button is now in sync with magnification in appropriate sections of the split window.
• In the German localization, menu items to set the PDF display mode were mislabeled, and check marks in this menu didn't work. Both problems are fixed.
• Herb Schulz fixed a bug in Command Completion. When multiple windows were open, command completion in one window could interfere with command completion in another window. This problem is fixed.
### TeXShop Changes 3.08
• Fixed a bug when double clicking on a left brace. This click again selects the text between this brace and its matching right brace.
• "TeXShop Tips & Tricks" is updated slightly.
### TeXShop Changes 3.07
• TeXShop is now signed, as required in Mountain Lion. See the Gatekeeper documentation at http://www.apple.com/macosx/mountain-lion/features.html#gatekeeper.
• The "Sparkle" update mechanism now works with versions of TeXShop in the Lion series, 3.00 and higher.
• Herb Schulz' "TeXShop Tips & Tricks" was updated to version 0.5.3
• LatexMk was updated to version 4.31. This version of Latexmk creates a file list named "file.fls", which helps latexmk keep track of all file dependencies. The TrashAUX command has been extended to remove files with this extension.
• pdflatexmk is now one of the default engines. Only new users will notice this change.
• TeXShop now creates a ~/Library/TeXShop/Documents folder containing important documents. Currently many are duplicated from elsewhere in ~/Library/TeXShop, but this will be the spot to look in the future.
• New TeXShop releases will automatically update the Documents folder, just as they now automatically update bin, Engines/inactive, and scripts.
• In the German localization, there was a bug in the Preview Preferences for "Default page style." The buttons for "Double Sided" and "Single Sided, Continuous" were reversed, so they didn't do what they claimed to do. This is fixed.
• TeXShop contains the latest customized OgreKit by Yusuke Terada. As in TeXShop 3.06, this pane uses the same font as TeXShop source windows. Moreover, syntax coloring, parenthesis match highlighting, command completion, showing invisible characters and so on work in the OgreKit Panel. However, KeyBinding (AutoCompletion) is disabled in this OgreKit, as requested by a number of users in the TeX on OS X mailing list.
• Double clicking on one of the end characters of a "<" ... ">" pair selects both ends and all characters in between.
• If the hidden preference MakeatletterEnabled is YES, selection of sequences containing '@' by double-clicking is supported. An example is " \@latex@error"
• A small number of crashes were isolated and repaired
• "Show Full Path" is improved on Lion, if chosen by the user in an optional tool bar item for the source window
• As before, if text is selected and the "comment" item is chosen, the entire paragraph containing the selection is commented out. But now the text selection is preserved. This also works with the "indent" command.
• When the source window was active and split and a file was drag-and-dropped to the bottom view, the action did not work. Now it does.
• AppleScript macros are now saved with UTF8 encoding, so scripts can be written containing Japanese and other languages. This required a small modification in the "ScriptRunner" program which runs scripts which start with --applescript rather than --applescript direct.
Richard Koch
Department of Mathematics
University of Oregon
Eugene, Oregon 97403 |
# 2018 Student Paper Award Announcement
We are pleased to announce the winners of the 2018 INFORMS Railway Applications Section Student Paper Contest. This year the competition was unusually strong, with many very high quality papers. Fourteen papers were submitted. These papers were each reviewed by multiple judges, and ranked by composition, methodology, and contribution to the literature. The Railway Applications Section is grateful for the assistance of the judges: Carl Van Dyke, Francesco Corman, Javier Faulin, Lingyun Meng, Seyed Mohammad Nourbakhsh, Steven Tyber, and Zhijie (Sasha) Dong.
The winners of the 2018 Railway Applications Section Student Paper Award are:
First Prize: Fei Yan; Nikola Bešinović; Rob M.P. Goverde, “Multi-objective Periodic Railway Timetabling”, Technical University of Delft
Second Prize: Rolf N. van Lieshout; Paul C. Bouman; Dennis Huisman, “Determining and Evaluating Alternative Line Plans in (Near) Out-of-Control Situations”, Erasmus University Rotterdam
Third Prize: Manuel Fuentes; Luis Cadarso; Ángel Marín, “A Hybrid Model for Robust Crew Scheduling in Rapid Transit Networks”, Technical University of Madrid
The winners will present their research at the INFORMS Annual Meeting, 4 November 2018, in Phoenix, Arizona.
### Prior Announcement:
RAS (Railway Applications Section), a subdivision of INFORMS (Institute for Operations Research and Management Sciences), is sponsoring a student research paper contest on analytics and fact-based decision making in railway applications with three cash prizes totalling US$1,750. Awards are offered at three levels: $1,000 First Place, $500 Second Place, $250 Third Place.
In addition, the First Place paper will be offered expedited review in the journal Networks, with a recommendation from the RAS judges. More details about this journal can be found here. RAS is grateful to its 2017 sponsors for supporting this award.
Overview
Railway Applications is a section of INFORMS (Institute for Operations Research and Management Sciences). RAS provides a forum for bringing together practitioners, consultants, and academics interested in applying Operations Research and Management Science techniques (OR/MS) to the railroad industry. The student paper contest is intended to stimulate interest in OR/MS applications to railways, and to encourage students to pursue an education in railway applications. The methodologies of OR/MS include mathematical programming, simulation, analytics, data analysis, and scientific management. The focus in RAS and INFORMS is the management of technology, resources, and service delivery.
## Rules
1. Award recipients must travel to the INFORMS Annual Meeting, usually held in the fall in the United States, at their own expense and give a presentation on their paper. Recipients will receive instructions on when and how to attend. Failure to present disqualifies the recipient and the award is re-allocated.
2. The paper must be written by one or more students enrolled in an academic institution at any time during the year ending on the submission deadline. For example, a student now employed who graduated earlier in the year is acceptable.
3. One or more advisers may appear as co-authors of a paper, but the student(s) must be the "primary author" and the content must be at least 80% attributable to the student authors. Advisers of finalist papers will be contacted to verify this information.
4. The paper must demonstrate a subject area of OR/MS that is representative of INFORMS subjects, and may apply to any railway system (freight, passenger, heavy rail, light rail, etc.).
5. The paper must represent original research (not literature reviews) and not have been previously published in a peer-reviewed journal, book, or published conference proceeding.
6. The paper must follow the formatting and length instructions provided with the submission instructions. Papers are expected to follow the general expectations of structure and length for journal publication. The judges may reject papers that are excessively long or inappropriately formatted. Generally, papers should be less than 40 pages when double spaced.
7. The judges reserve the right to reject papers at an early stage in the competition due to irrelevant subject matter or fundamental errors in composition.
8. Papers must be written in US or UK English.
9. In the event of any disagreement over the enforcement of the rules, the decision of the judges is final.
## How to enter
Deadline for submission is June 30, 2018 (extended to 14 July!).
Record your contest entry at: https://dtumanagement.eu.qualtrics.com/jfe/form/SV_80S3cGQNkDT1fP7
Do not send papers or submissions to this email address.
Dr. Steven Harrod, Technical University of Denmark - email: stehar@dtu.dk
RAS reserves the right to suspend the contest if no suitable papers are received.
# Past Competition Results
2017:
- First Place: Thomas Breugem, Erasmus University Rotterdam, "An Optimization Framework for Fairness-oriented Crew Rostering"
- Second Place: Sofie Van Thielen, KU Leuven, "Benefits of a dynamic impact zone when dealing with train conflicts"
- Third Place: Pan Shang, Tsinghua University, "Equity-oriented skip-stopping schedule optimization in an oversaturated urban rail transit network"
# Capital Pi Symbol (Π)
The capital Greek letter Π (capital Pi) is used in math to represent the product operator. Typically, the product operator appears in an expression with an index, a lower bound written beneath the symbol, and an upper bound written above it (a reconstructed example is shown below).
In plain language, this means take the product of the sequence of expressions being indexed, starting from the lower bound and iterating until the upper bound.
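Since the original expression did not survive in the text, here is the usual form of such an expression, written in TeX notation:

$$\prod_{i=1}^{n} x_i = x_1 \cdot x_2 \cdots x_n$$

For example, $\prod_{i=1}^{4} i = 1 \cdot 2 \cdot 3 \cdot 4 = 24$.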
| Symbol | Format | Data |
| --- | --- | --- |
| Π | Unicode | 928 |
| Π | TeX | \Pi |
## Usage
The product operator is represented by the Π (capital pi) symbol and is used to represent the operation of multiplying a sequence of expressions together.
## Related Symbols
The Greek letter π (pi) is used in trigonometry as a constant to represent a half-rotation around a circle in radians. The value of π is approximately 3.14 and appears in the geometric formulas for finding the circumference and area of the circle. The value of π can be calculated by dividing any circle's circumference by its diameter. |
# Talking about derivatives of exponential functions
My older son is starting the section in Stewart’s Calculus book on exponential functions. We’ve already spent a couple of days talking about inverse functions and the topic for today was finding derivatives of exponential functions.
I started by asking how he thought you’d even approach trying to find the derivative of an exponential function. It has been a while since we’ve talked about derivatives, so it took a few minutes before he came to the idea of using the definition of the derivative. Once we began to approach the problem via the definition of the derivative, we found that finding the derivative of an exponential function came down to a single limit:
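The limit itself did not survive in the post, but starting from the definition of the derivative it is presumably the standard computation:

$$\frac{d}{dx}\,a^x = \lim_{h \to 0} \frac{a^{x+h} - a^x}{h} = a^x \lim_{h \to 0} \frac{a^h - 1}{h},$$

so the whole problem reduces to evaluating $\lim_{h \to 0} \frac{a^h - 1}{h}$, a single limit that depends only on the base $a$.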
Next we went to Mathematica to see if we could make any sense of this limit. Without realizing it, I had an error in the code that was causing the code to output numerical approximations. My son noticed the error and had me fix it. Unfortunately fixing that error spoiled the surprise in the answer . . . whoops 😦
Now we went back to the board to finish our computations for the derivative of an exponential function. It is pretty neat to see that the derivatives of all exponential functions are related to each other in a fairly simple way.
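The relationship he has in mind is presumably the standard result: the limit above equals $\ln a$, so

$$\frac{d}{dx}\,a^x = (\ln a)\, a^x.$$

Every exponential function's derivative is a constant multiple of the function itself, and $e$ is the special base for which that constant is 1.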
This was a fun discussion. The follow up discussion later was a neat problem from Stewart that asked you to show that:
$e^x > 1 + x + x^2 / 2! + \ldots + x^n / n!$ for every n. That problem was a nice exercise in derivatives of exponential functions and also techniques of proof.
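One standard route for that exercise (a sketch of my own, not taken from the post, and valid for $x > 0$) is induction on $n$. Let

$$f_n(x) = e^x - \left(1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}\right).$$

Then $f_n(0) = 0$ and $f_n'(x) = f_{n-1}(x)$. Since $f_0(x) = e^x - 1 > 0$ for $x > 0$, each $f_n$ is increasing on $(0, \infty)$ and hence positive there, which gives the inequality for all $n$ when $x > 0$.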
# Merging directories and keep files that have more lines
Goal
My goal is to merge directories. Whenever a file has the same name in two or more directories, only the one that has the highest number of lines should be kept. If the files have the same number of lines but differ, an error message should be written. Note that files having more lines are also bigger (in my specific case), which might be another way to compare files having the same name.
My Code
Here is my code that I think works fine
### Parameters ###
GeneralPath="/Users/remi/Documents/Biologie/Vancouver/PhD/Thesis/BackgroundSelection/Simulations/s_and_Pi/outputs/4.0.2_1.0.5/"
cd ${GeneralPath}
Directories=( HR OR OS ) # Array of directories to be merged with the destination directory
Destination=HS
errorFile="${GeneralPath}MergeDirs.err"

### Do Stuff ###
for d in ${Directories[@]}; do
    echo "${d}"
    cd $d
    for f in *; do
        echo "${f}"
        if [ ! -f "../${Destination}/${f}" ]; then
            echo cp1
            cp ${f} ../${Destination}/
        else
            nblinesFrom=$(wc -l ${f} | awk -F" " '{print $1}')
            nblinesDest=$(wc -l "../${Destination}/${f}" | awk -F" " '{print $1}')
            if [ ${nblinesFrom} -gt ${nblinesDest} ]; then
                echo cp2
                cp ${f} ../${Destination}/
            elif [ ${nblinesDest} -gt ${nblinesFrom} ]; then
                echo "Destination is bigger - nothing to do"
            else
                DoTheyDiffer=$(diff ${f} "../${Destination}/${f}" | wc -l)
                if [ ${DoTheyDiffer} -gt 0 ]; then
                    echo "${f} and ../${Destination}/${f} diff but have the same number of lines" >> ${errorFile}
                fi
            fi
        fi
    done
    cd ..
done
My code seems quite complicated and I feel like a good combination of find -exec, awk, cp and diff might do something much fancier.
• Use More Quotes™.
• Use a shebang line (disclosure: I wrote that answer).
• Don't use single character variables. Maintainability is the most important feature of code.
• Rather than echo cp1 etc., simply use cp -v to print every copy command verbatim.
• You don't need to count the number of lines that diff returns, you can simply do if diff foo bar, or the safer option:
diff foo bar
exit_code=$?
if [ "$exit_code" -eq 0 ]
then
    [no difference]
elif [ "$exit_code" -eq 1 ]
then
    [difference]
else
    exit "$exit_code" # WTF
fi
• Some people love mixing languages. I think cut -d' ' -f1 is much nicer than even a short awk script.
• If you pass the file to wc on standard input, it doesn't print the filename, and so you don't need to process the output at all: wc -l < /path
• You can use if cp --no-clobber source destination to try copying the file instead of checking whether the target exists.
• I'm not sure I understand why you're copying rather than moving files (unless this is a one-off script, which you're not going to test, and it's only going to take a few seconds anyway).
• Be wary of using cd in scripts. It changes the context in a significant way, and makes it harder to reason about what the script will do. Instead, simply do for path in "$directory"/*.
• I can definitely recommend set -o errexit -o noclobber -o nounset -o pipefail. You could also use -o xtrace to make all those logging commands obsolete.
• A common convention to get used to is to never end paths with a slash. Firstly because cp a b/ and cp a b are the same as long as b is a directory, and secondly because it makes it more natural to concatenate paths without ending up with double slashes.
# Compute $P\left(\int_0^1W(t)dt>\frac{2}{\sqrt3}\right)$ where $W(t)$ is a Wiener process
I'm working through problems I found on the net for which there are no answers given. Therefore I'm looking for someone to check my work.
Q: $P\left(\int_0^1W(t)dt>\frac{2}{\sqrt3}\right)$ where $W(t)$ is a Wiener process (Brownian motion).
So, let's denote $X_t = \int_0^TW(t)dt$
Then $X_t \sim N(0,\sigma(t)^2)$ since $W(t) \sim N(0,t)$ and the sum of Normal R.V.s is still Normal, correct?
Then, since Gaussians are parametrized by mean (which is zero here) and variance, I just need to find (and verify the above) the variance $\mathbb{E}(X_t^2)$, and then I can find the probability from a Normal CDF. I started with something like:
\begin{align*} d(tW_t) = W_t dt + t dW_t \end{align*} \to
$\int_0^TW_tdt=X_t=tW_t-\int{tdW_t} \to$
$X_t^2=t^2W_t^2+\left(\int{tdW_t}\right)^2-2tW_t\int{tdW_t}$
Now by linearity of expectation and Ito's Isometry: $\mathbb{E}(X_t^2)=t^3+\frac{t^3}{3}-2t*COV(W_t, \int{tdW_t})$
The $COV(W_t, \int{tdW_t})$ part is where I'm stuck. A wild guess would be that I could write $W_t$ as $\int{dW_t}$ (is this OK to do with a stochastic process/stochastic calculus?), and then by Ito's Isometry and the fact that it "respects the inner product" (whatever that means) we can turn $COV(\int{dW_t}, \int{tdW_t}) \to \int{tdt}=\frac{t^2}{2}$
Then finally I get $\mathbb{E}(X_t^2)=\frac{t^3}{3}$....which seems like it could be legit from some other things I've found online, but not totally sure.
Can someone check my logic in all of this and verify?
$\int_0^1 W_s ds$ is indeed Gaussian with mean 0, but to arrive at this conclusion and get its variance, the easiest way is to write it as a Wiener integral, i.e. $\int_0^1 ... dW_s$.
As a matter of fact, we can write it as:
$$\int_0^t W_s ds = \int_0^t (t-s)dW_s$$ (see for example proof here: https://quant.stackexchange.com/a/29506/26242)
We know this Wiener integral is gaussian with mean $0$ and variance: $$Var\left(\int_0^t W_s ds\right) = \int_0^t(t-s)^2 ds =\frac{1}{3}t^3.$$
For $t = 1$, the mean is zero and variance is $\frac{1}{3}$.
Now you can compute your probability using the standard normal CDF $\Phi$: \begin{aligned} & P\left(\int_0^1 W_s ds > \frac{2}{\sqrt{3}}\right)\\ & = P\left(Z \frac{1}{\sqrt{3}}> \frac{2}{\sqrt{3}}\right)\\ & = P(Z>2)\\ & = 1 - P(Z \leq 2) \\ & = 1 - \Phi(2) \end{aligned}
In the above, $Z$ is a standard gaussian random variable. |
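For completeness, the covariance term the question got stuck on can also be handled directly with the Itô isometry (writing $W_t=\int_0^t dW_s$ is indeed legitimate), and it reproduces the same variance: $$\operatorname{Cov}\left(W_t,\int_0^t s\,dW_s\right)=\mathbb{E}\left[\int_0^t 1\,dW_s\int_0^t s\,dW_s\right]=\int_0^t s\,ds=\frac{t^2}{2},$$ so $$\mathbb{E}(X_t^2)=t^3+\frac{t^3}{3}-2t\cdot\frac{t^2}{2}=\frac{t^3}{3},$$ matching the variance obtained above (and giving $1/3$ at $t=1$).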
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$$\frac{1}{2}$$
We simplify the expression using the fact that dividing by a fraction is the same as multiplying by the inverse: $$\frac{3}{8}\times \frac{4}{3}\\ \frac{3\times \:4}{8\times \:3}\\ \frac{4}{8}\\ \frac{1}{2}$$ |
# Bernoulli and Binomial Distribution
A Bernoulli random variable is a random variable that takes a value of 1 in case of a success and a value of 0 in case of a failure. We can also say that this random variable has a Bernoulli distribution. A classic example is a single toss of a coin. When we toss a coin, the outcome can be heads (success) with probability p or tails (failure) with probability (1 – p). The important point here is that it is a single toss of a coin.
Now suppose we perform n trials. Each trial is independent and will result in a success with probability p or a failure with probability (1 – p). Let X represent the number of successes in the n trials. Then X is a binomial random variable with parameters (n, p). Note that a Bernoulli random variable is a special case of a binomial random variable with parameters (1, p). The variable X has a binomial distribution.
The binomial distribution has the following characteristics:
• For each trial there are only two possible outcomes, success or failure.
• Probability of success, p, of each trial is fixed.
• There are n trials.
• Each trial is independent
• The binomial probability function defines the probability of x successes from n trials.
The binomial probability function is given using the following formula.
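In standard notation, the probability of exactly $x$ successes in $n$ trials is $$P(X = x) = \binom{n}{x} p^x (1-p)^{n-x}, \qquad x = 0, 1, \dots, n.$$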
Let’s take an example to understand how this can be applied.
You have a pool of stocks having returns either above 5% or below 5%. The probability of selecting a stock with above 5% returns is 0.70. You are going to pick up 5 stocks. Assuming binomial distribution, what is the probability of picking 2 stocks with above 5% returns?
Let’s define our problem.
Success = Pick stock with above 5% returns
p = 0.70
n = 5
x = 2
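Plugging these values into the binomial probability function gives $$P(X = 2) = \binom{5}{2}(0.7)^2(0.3)^3 = 10 \times 0.49 \times 0.027 = 0.1323,$$ so there is roughly a 13% chance of picking exactly 2 stocks with above 5% returns.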
Expected Value and Variance of a Binomial Distribution
For a binomial distribution, the expected value and variance are given as below: |
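$$E(X) = np, \qquad Var(X) = np(1-p).$$ For the stock example above, $E(X) = 5 \times 0.7 = 3.5$ and $Var(X) = 5 \times 0.7 \times 0.3 = 1.05$.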
Question: WGCNA gene clusters vs eigengenes
bioming10 wrote:
Hello, I'm trying to go through the WGCNA tutorial on mice liver data from https://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/Tutorials/.
I understood the concepts from expression matrix -> soft thresholding -> adjacency matrix -> dissTOM -> hclust. This is where I'm starting to get confused: after hclust, I generate the dendrogram, cut it using "dynamic tree cut" so that it detects a set of modules, and color the modules with "dynamicColors". But then the tutorial uses moduleEigengenes() to generate another set of modules (albeit fewer modules than from hclust()).
My question is:
does moduleEigengenes() use any information from hclust() generated modules? or is it just another way to generate modules? and you compare that to modules generated by hclust()?? but then I read from a presentation slide (https://edu.isb-sib.ch/pluginfile.php/158/course/section/65/01SIB2016_wgcna.pdf) that moduleEigengenes() merges similar modules... so... does moduleEigengenes() merge similar modules generated by hclust()?? But from the code
MEList = moduleEigengenes(datExpr, colors = dynamicColors)
the only thing moduleEigengenes() takes as input that remotely comes from the hclust() is dynamicColors, doesn't seem to be using modules generated by hclust() at all... am I missing something?
but after moduleEigengenes(), the tutorial runs hclust() again, using "as.dist(MEDiss)" instead of "dissTOM" as with the first hclust()...
very confused, any insight would be very appreciated thanks!
Ming
This was cross-posted (and answered) on Biostars: https://www.biostars.org/p/397668/
Also note that WGCNA is not a Bioconductor package; however, one of the WGCNA developers is a member here, and may answer WGCNA-related questions. |
spockmonkey40
2022-06-30
Find the matrix A for the linear transformation T defined by $T\left(p\left(x\right)\right)=4x\,p\left(x\right)$ relative to the bases $B=\left\{1,x,{x}^{2}\right\}$ and $B\prime =\left\{1,x,{x}^{2},{x}^{3}\right\}$ such that $T\left(\stackrel{\to }{x}\right)=A\stackrel{\to }{x}$
Kaya Kemp
Expert
First, we see how T acts on the members of B, and put that in terms of B':
$T\left(\stackrel{\to }{{u}_{0}}\right)=4x=4\stackrel{\to }{{v}_{1}}={\left[\begin{array}{c}0\\ 4\\ 0\\ 0\end{array}\right]}_{B\prime }$
$T\left(\stackrel{\to }{{u}_{1}}\right)=4{x}^{2}=4\stackrel{\to }{{v}_{2}}={\left[\begin{array}{c}0\\ 0\\ 4\\ 0\end{array}\right]}_{B\prime }$
$T\left(\stackrel{\to }{{u}_{2}}\right)=4{x}^{3}=4\stackrel{\to }{{v}_{3}}={\left[\begin{array}{c}0\\ 0\\ 0\\ 4\end{array}\right]}_{B\prime }$
Then we use those as the column vectors for our transformation matrix:
$A=\left[\begin{array}{ccc}0& 0& 0\\ 4& 0& 0\\ 0& 4& 0\\ 0& 0& 4\end{array}\right]$
Let's check to see if it works.
Given a polynomial $\stackrel{\to }{p}={\left[\begin{array}{c}a\\ b\\ c\end{array}\right]}_{B}=a+bx+c{x}^{2}\in {P}^{2}$ we have
$T\left(\stackrel{\to }{p}\right)=4x\left(a+bx+c{x}^{2}\right)$
$=4ax+4b{x}^{2}+4c{x}^{3}$
$=0\stackrel{\to }{{v}_{0}}+4a\stackrel{\to }{{v}_{1}}+4b\stackrel{\to }{{v}_{2}}+4c\stackrel{\to }{{v}_{3}}$
$={\left[\begin{array}{c}0\\ 4a\\ 4b\\ 4c\end{array}\right]}_{B\prime }$
$=\left[\begin{array}{ccc}0& 0& 0\\ 4& 0& 0\\ 0& 4& 0\\ 0& 0& 4\end{array}\right]{\left[\begin{array}{c}a\\ b\\ c\end{array}\right]}_{B}$
$=A\stackrel{\to }{p}$
Lucia Grimes
Expert
The idea behind the notation of using $\stackrel{\to }{p}$ as both a vector and a polynomial is that when your vector space is ${P}^{n}$, the polynomials themselves are the vectors. In ${ℝ}^{2}$, for example, you can consider a vector $\left(a,b\right)$ as being the sum $a\left(1,0\right)+b\left(0,1\right)$; you can think of the analogs in ${P}^{1}$ as being $\left(a,b\right)$ and $a\cdot 1+bx$, where both vectors are being written in terms of the standard bases $\left\{\left(1,0\right),\left(0,1\right)\right\}$ or $\left\{1,x\right\}$.
You can have a nonstandard basis in ${ℝ}^{2}$, such as $B\prime =\left\{\left(1,1\right),\left(-1,1\right)\right\}$, as long as the vectors are linearly independent. If we write a vector in terms of that basis, such as ${\left(a,b\right)}_{B\prime }$, then we should think of it as being the linear combination $a\left(1,1\right)+b\left(-1,1\right)$ which would have the representation $\left(a-b,a+b\right)$ in the standard basis.
Similarly, you can have nonstandard bases in ${P}^{n}$, or in any vector space for that matter. One simply must remember that a vector in a given basis is a linear combination of the elements of that basis.
While this problem uses the standard bases for ${P}^{2}$ and ${P}^{3}$, we will instead treat them as if they are arbitrary bases, using the relabelings
$B=\left\{1,x,{x}^{2}\right\}=\left\{\stackrel{\to }{{u}_{0}},\stackrel{\to }{{u}_{1}},\stackrel{\to }{{u}_{2}}\right\}$ and $B\prime =\left\{1,x,{x}^{2},{x}^{3}\right\}=\left\{\stackrel{\to }{{v}_{0}},\stackrel{\to }{{v}_{1}},\stackrel{\to }{{v}_{2}},\stackrel{\to }{{v}_{3}}\right\}$
For the problem itself, when we wish to find the matrix representation of a given transformation, all we need to do is see how the transformation acts on each member of the original basis and put that in terms of the target basis. The resulting vectors will be the column vectors of the matrix.
The Arithmetic Seminar
TOPICS: Arithmetic in the broadest sense that includes Number Theory (Elementary Arithmetic, Algebraic, Analytic, Combinatorial, etc.), Algebraic Geometry, Representation Theory, Lie Groups and Lie Algebras, Diophantine Geometry, Geometry of Numbers, Tropical Geometry, Arithmetic Dynamics, etc.
PLACE and TIME: This semester the seminar meets on Mondays at 3:30 p.m. in WH 100E, with possible special lectures at other days. Before the talks, there will be refreshments in WH-102.
ORGANIZERS: Alexander Borisov, Marcin Mazur, Adrian Vasiu, Jaiung Jun, Patrick Milano, and Micah Loverro.
To receive announcements of seminar talks by email, please join the seminar's mailing list.
The number theory group at Binghamton University presently consists of three faculty members (Alexander Borisov, Marcin Mazur, and Adrian Vasiu), one post-doc (Jaiung Jun) and several Ph.D. students (John Brown, Patrick Carney, Micah Loverro, Patrick Milano, Changwei Zhou).
Past Ph.D. students in number theory related topics that graduated from Binghamton University: Ilir Snopce (Dec. 2009), Xiao Xiao (May 2011), Jinghao Li (May 2015), Ding Ding (Dec. 2015).
Fall 2017
• August 22 (Tuesday, 10:00 am – 12:00 pm)
Speaker: Micah Loverro (Binghamton)
Title: Relating G-modules and Lie(G)-modules
Abstract: Given a fixed representation V of G_K over a field K, where K is the field of fractions of a Noetherian normal domain R, and the group scheme G over R is reductive, we investigate relations between Lie(G)-modules and G-modules inside V. If M inside V is a G-module, then M is always a Lie(G)-module. We have conditions in some cases which imply that if M is a Lie(G)-module, then it is also a G-module. In particular, we show that we can reduce the problem to the case where R is a complete discrete valuation ring with residue field algebraically closed.
• August 22 (Tuesday, 2:00 pm – 4:00 pm)
Speaker: John Brown (Binghamton)
Title: Classifying finite hypergeometric groups, height one balanced integral factorial ratio sequences, and some step functions
Abstract: In this talk we will discuss some connections between hypergeometric series, factorial ratio sequences, and non-negative bounded integer-valued step functions. We will start with a finiteness criterion for hypergeometric groups by Beukers and Heckman, then show how this leads to the classification by Bober of integral balanced factorial ratio sequences of height one, and thus a proof that a conjectured classification of a certain class of step functions by Vasyunin is complete.
• August 28
Speaker: N/A
Title: Organizational Meeting
Abstract: We will discuss schedule and speakers for this semester
• September 11
Speaker: Jaiung Jun (Binghamton)
Title: Geometry over hyperfields
Abstract: In this talk, we illustrate how hyperfields can be used to show that certain topological spaces (underlying topological spaces of schemes, Berkovich analytification of schemes, and real schemes) are homeomorphic to sets of rational points of schemes over hyperfields.
• September 18
Speaker: Martin Ulirsch (Michigan)
Title: Realizability of tropical canonical divisors
Abstract: We solve the realizability problem for tropical canonical divisors: Given a pair $(\Gamma, D)$ consisting of a stable tropical curve $\Gamma$ and a divisor $D$ in the canonical linear system on $\Gamma$, we develop a purely combinatorial condition to decide whether there is a smooth curve realizing $\Gamma$ together with a canonical divisor that specializes to $D$. In this talk I am going to introduce the basic notions needed to understand this problem and outline a comprehensive solution based on recent work of Bainbridge-Chen-Gendron-Grushevsky-Möller on compactifications of strata of abelian differentials. Along the way, I will also develop a moduli-theoretic framework to understand the specialization of divisors to tropical curves as a natural tropicalization map in the sense of Abramovich-Caporaso-Payne.
This talk is based on joint work with Bo Lin, as well as on an ongoing project with Martin Möller and Annette Werner.
• September 25
Speaker: Jaiung Jun (Binghamton)
Title: Picard groups for tropical toric varieties.
Abstract: From any monoid scheme $X$ (also known as an $\mathbb{F}_1$-scheme) one can pass to a semiring scheme (a generalization of a tropical scheme) $X_S$ by scalar extension to an idempotent semifield $S$. We prove that for a given irreducible monoid scheme $X$ (with some mild conditions) and an idempotent semifield $S$, the Picard group $Pic(X)$ of $X$ is stable under scalar extension to $S$. In other words, we show that the two groups $Pic(X)$ and $Pic(X_S)$ are isomorphic. We also construct the group $CaCl(X_S)$ of Cartier divisors modulo principal Cartier divisors for a cancellative semiring scheme $X_S$ and prove that $CaCl(X_S)$ is isomorphic to $Pic(X_S)$.
• October 2
Speaker: Patrick Milano (Binghamton)
Title: Ghost spaces and some applications to Arakelov theory
Abstract: Arakelov theory provides a method for completing arithmetic curves like Spec(Z) by adding formal points “at infinity.” There is an Arakelov divisor theory for such completed arithmetic curves that is analogous to the theory of divisors on projective algebraic curves. In order to describe the cohomology of an Arakelov divisor, Borisov introduced the notion of a ghost space. After some background and motivation, we will define ghost spaces and look at some of their applications.
• October 9
Speaker: Christian Maire (Cornell, Besançon)
Title: Fixed points in p-adic analytic extensions of number fields and ramification (joint work with Farshid Hajir)
Abstract: In this talk, I will present two arithmetic applications of the presence of fixed points in p-adic analytic extensions of number fields: (i) for the μ-invariant of the p-class group; (ii) for some evidence towards the tame version of the Fontaine-Mazur conjecture. As we will see, the nature of the ramification (tame versus wild) is essential. The lecture will be accessible to non-specialists.
• October 23
Speaker: Max Kutler (Yale)
Title: Faithful tropicalization of hypertoric varieties
Abstract: A hypertoric variety is a “hyperk\”ahler analogue” of a toric variety. Each hypertoric variety comes equipped with an embedding into a toric variety, called the Lawrence toric variety, and hence has a natural tropicalization. We explicitly describe the polyhedral structure of this tropicalization. Using a recent result of Gubler, Rabinoff, and Werner, we prove that there is a continuous section of the tropicalization map.
• October 30
Speaker: Alina Vdovina (CUNY, Newcastle)
Title: Buildings, quaternions and fake quadrics
Abstract: We'll present construction of buildings as universal covers of certain complexes. A very interesting case is when the fundamental group of such a complex is arithmetic, since the construction can be carried forward to get new algebraic surfaces, namely fake quadrics. Fake projective planes are already classified following series of works of D. Mumford, G. Prasad, S.-K. Young, D.Cartwright, T.Steger, but the fake quadrics remain mysterious.
• November 6
Speaker: Micah Loverro (Binghamton)
Title: G-modules and Lie(G)-modules with examples from SL_2
Abstract: Given a fixed representation $V$ of a simply-connected semisimple group $G_K$ over a field $K$, we seek to determine which Lie$(G)$-modules $M$ inside $V$ are also $G$-modules, where $G$ is a smooth affine group scheme of finite type over a Noetherian normal domain $R$ whose field of fractions is $K$. Previously we showed that we can assume $R$ is a complete discrete valuation ring with algebraically closed residue field. In this talk, we will go through the details of the case when $G$ is $SL_2$, and then show how the Frobenius map could be used in a more general setting to produce Lie$(G)$-modules which are not $G$-modules under certain conditions depending on the weights of the representation.
• November 13
Speaker: Tom Price (Toronto)
Title (tentative): A global sections functor for Arakelov bundles
Abstract: We exhibit a class of real-valued functions on Abelian groups, which have some non-trivial properties generalizing the behaviour of indicator functions of subgroups, such as the Regev Stephens-Davidowitz inequality. We construct a category of groups equipped with these functions, use this to create an analogue of a global sections functor for Arakelov bundles, and demonstrate that this functor has some properties we should expect.
• November 20
Speaker: Patrick Carney (Binghamton)
Title: Geometry and divisors on rational curves and surfaces
Abstract: In his 2014 paper A. Borisov constructed two invariants of divisorial valuations at infinity. We will discuss some algebraic geometry notions and constructions used in that paper, specifically the theory of divisors and linear equivalence on the projective line, the projective plane, and other compactifications of the affine plane. The blow-up of a point construction will also be presented in detail. This is the first of two talks, that deals with the prerequisites for the paper. It will be followed by the second talk that discusses the combinatorial methods and results of that paper.
• November 27
Speaker: Sayak Sengupta (Binghamton)
Title: Valuations
Abstract: The main topic of the talk is how to recover the valuation function from a valuation ring of a field. The talk starts with the definition of a valuation ring and an idea of how to construct a valuation ring from any given field, followed by a short discussion of valuation functions and discrete valuation functions, leading to the final part of the talk, i.e., establishing the main topic.
• December 4
Speaker: Philipp Jell (Georgia Tech)
Title: Non-archimedean Arakelov theory and cohomology of differential forms on Berkovich spaces
Abstract: Arakelov theory studies varieties over number fields by combining analytic geometry over the complex numbers (representing the infinite places) with algebraic intersection theory on suitable models over the ring of integers (representing the finite places). However, due to lack of resolution of singularities in mixed characteristic, such models are hard to come by. It has always been a goal to unify the approaches and replace intersection theory on models by analytic geometry over the finite places.
In 2012, Chambert-Loir and Ducros made a promising step in this direction, introducing real-valued differential forms and currents on Berkovich analytic spaces and proving among other things a Poincaré-Lelong formula and existence of Chern classes for line bundles.
In this talk, we will give a brief introduction to Arakelov theory and introduce the forms defined by Chambert-Loir and Ducros. We will then discuss the cohomology theory defined by these forms for varieties over non-archimedean fields. In particular we explain a Poincaré lemma result and results on duality for curves.
• December 6 (Room: WH 329)
Speaker: Patrick Carney (Binghamton)
Title: Geometry and divisors on rational curves and surfaces, Part 2
Abstract: This is a continuation of the November 20 talk. We will discuss the structure of the divisor class group on arbitrary compactifications of the affine plane that are obtained from the projective plane by a sequence of blowups. We will discuss the intersection form on this group and define the two invariants of the divisorial valuations at infinity studied in that 2014 paper by Borisov. We will explain the properties of these invariants and the main results regarding them.
# From a cask of milk containing 30 litres, 6 litres are drawn out and the cask is filled up with water. If the same process is repeated a second, then a third time, what will be the number of litres of milk left in the cask? Option 1) Option 2) Option 3) Option 4) Option 5)
$X=30\: L$
$Y= 6\: L$
$N= 3$
$Final\: milk = X (1- \frac{Y}{X}) ^{n}$
$=30 (1-\frac{6}{30})^{3}$
$=30 \times 0.512$
$= 15.36\: L$
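Each draw removes the fraction $\frac{Y}{X}$ of whatever milk remains (the cask is topped up with water afterwards, so the total volume stays at $X$ litres), which is why the milk left after $n$ repetitions is the initial amount multiplied by $\left(1-\frac{Y}{X}\right)^{n}$.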
# The average analytic rank of elliptic curves
@article{HeathBrown2003TheAA,
title={The average analytic rank of elliptic curves},
author={D. R. Heath-Brown},
journal={Duke Mathematical Journal},
year={2003},
volume={122},
pages={591-623}
}
All the results in this paper are conditional on the Riemann Hypothesis for the L-functions of elliptic curves. Under this assumption, we show that the average analytic rank of all elliptic curves over Q is at most 2, thereby improving a result of Brumer [2]. We also show that the average within any family of quadratic twists is at most 3/2, improving a result of Goldfeld [3]. A third result concerns the density of curves with analytic rank at least R, and shows that the proportion of such…
Average analytic ranks of elliptic curves over number fields
. We give a conditional bound for the average analytic rank of elliptic curves over an arbitrary number field. In particular, under the assumptions that all elliptic curves over a number field K are
A conditional determination of the average rank of elliptic curves
It is said that nonreal zeros of elliptic curve $L$-functions in a family have a direct influence on the average rank in this family.
Variation in the number of points on elliptic curves and applications to excess rank
Michel proved that for a one-parameter family of elliptic curves over Q(T) with non-constant j(T) that the second moment of the number of solutions modulo p is p^2 + O(p^{3/2}). We show this bound is
On the distribution of analytic ranks of elliptic curves
• Mathematics, Computer Science
• 2020
An upper bound for the probability for an elliptic curve with analytic rank $\leq a$ for $a \geq 11$ is given and an upper bound of n-th moments of analytic ranks of elliptic curves is given.
Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves
• Mathematics
• 2010
We prove a theorem giving the asymptotic number of binary quartic forms having bounded invariants; this extends, to the quartic case, the classical results of Gauss and Davenport in the quadratic and
Low-lying zeros of elliptic curve L-functions: Beyond the Ratios Conjecture
• Mathematics
Mathematical Proceedings of the Cambridge Philosophical Society
• 2016
Abstract We study the low-lying zeros of L-functions attached to quadratic twists of a given elliptic curve E defined over $\mathbb{Q}$. We are primarily interested in the family of all twists
Counting elliptic curves with local conditions and its applications
• Mathematics, Computer Science
• 2020
An upper bound of $n$-th moments of analytic ranks of elliptic curves, and an upper bounds for the probability that an elliptic curve has analytic rank $\leq a$ for $a \geq 11$ under GRH for elliptic $L$-functions are given.
## References
SHOWING 1-10 OF 18 REFERENCES
On the modularity of elliptic curves over Q
• Mathematics
• 1999
In this paper, building on work of Wiles [Wi] and of Wiles and one of us (R.T.) [TW], we will prove the following two theorems (see §2.2). Theorem A. If E/Q is an elliptic curve, then E is modular.
Ranks of elliptic curves
• Mathematics
• 2002
This paper gives a general survey of ranks of elliptic curves over the field of rational numbers. The rank is a measure of the size of the set of rational points. The paper includes discussions of
Low lying zeros of families of L-functions
• Mathematics
• 1999
In Iwaniec-Sarnak [IS] the percentages of nonvanishing of central values of families of GL_2 automorphic L-functions was investigated. In this paper we examine the distribution of zeros which are at
Modular Elliptic Curves and Fermat's Last Theorem (excerpt) (Fermat's conjecture finally solved!?)
When Andrew John Wiles was 10 years old, he read Eric Temple Bell’s The Last Problem and was so impressed by it that he decided that he would be the first person to prove Fermat’s Last Theorem. This
Zeroes of zeta functions and symmetry
• Mathematics
• 1999
Hilbert and Polya suggested that there might be a natural spectral interpretation of the zeroes of the Riemann Zeta function. While at the time there was little evidence for this, today the evidence
Ring-Theoretic Properties of Certain Hecke Algebras
• Mathematics
• 1995
The purpose of this article is to provide a key ingredient of [W2] by establishing that certain minimal Hecke algebras considered there are complete intersections. As is recorded in [W2], a method
The Grothendieck Festschrift
• Mathematics
• 1990
The many diverse articles presented in these three volumes, collected on the occasion of Alexander Grothendieck’s sixtieth birthday and originally published in 1990, were offered as a tribute to one
Random Matrices
• Computer Science
• 2005
This workshop was unusually diverse, even by MSRI standards; the attendees included analysts, physicists, number theorists, probabilists, combinatorialists, and more.
Formules explicites et minoration de conducteurs de variétés algébriques
© Foundation Compositio Mathematica, 1986, all rights reserved. Access to the archives of the journal "Compositio Mathematica" (http://www.compositio.nl/) implies agreement with the conditions
### Home > PC3 > Chapter 8 > Lesson 8.2.1 > Problem8-61
8-61.
There are no buttons for secant, cosecant, or cotangent on a calculator. In order to evaluate these expressions, the definition of the reciprocal function needs to be utilized. Use a calculator to evaluate each of the expressions below. The first answer is given as a check. All angles are in radians.
1. $\operatorname { csc } ( 3 ) \approx 7.086$
1. $\cot\left(\frac{2}{7}\right)$
$\frac{1}{\tan\left(\frac{2}{7}\right)}$
1. $\sec\left(\frac{4\pi}{5}\right)$
$\frac{1}{\cos\left(\frac{4\pi}{5}\right)}$ |
# [XeTeX] character offsets in math fonts
J P Blevins jpb39 at cam.ac.uk
Fri Jun 11 14:21:41 CEST 2004
If I use the mathpi package with XeTeX, all of the characters in the
Mathematical Pi font sets are offset by one character. So $\alpha$ gives
'beta' (0062), not 'alpha' (0061) in Math Pi 1, $\mathbb{R}$ gives
'blackboard S' (0053), not 'blackboard R' (0052) in Math Pi 6, etc. The
same source file gives the expected characters in pdflatex.
Is there any simple way to correct this?
-Jim |
# Tilesets - how to make the pink color appear transparent
Recently I searched for some free-to-use tilesets on the internet. Wherever I found a sheet of sprites, there was this pink color in the background. I know that all the spots having that color should be transparent later on, but how do I do that? And does it have any other special use or effect on the tile data? An example of those sheets can be found here: http://opengameart.org/content/rpg-indoor-tileset-expansion-1
• Where exactly are you looking to do that conversion? (Do you want to convert from a file containing the image to another containing the same image but with transparency? Or do you have it in an in-memory data buffer that you want to translate to another? Or have an in-memory buffer and want to translate it only when rendering?)
– Anko
May 20 '15 at 20:19
• Also, that tileset only has pink colour in the preview image. The actual download is a transparent PNG.
– Anko
May 20 '15 at 20:21
• I already loaded this tileset into a BufferedImage using Java, now I want to convert it to another BufferedImage containing the same content but with transparency. I know that the download is transparent but whenever I draw some tiles the gaps appear pink again. May 21 '15 at 5:01
You could use the transparent PNG instead. Sprites with a pink background were used a lot in old games, usually in images with a palette (indexed colors). Commonly the pink color is Red = 255, Blue = 255, Green = 0, and it is defined as the first color in the palette (index 0), meaning those pixels should be treated as transparent. Defining the color as pink made these areas easy to spot back when painting tools were far more limited than they are now. Using indexed colors could also reduce image size considerably, and it is very common to store a set of images sharing a common palette (like in this tileset).
In doubt, when you can not obtain a version of the tileset where the "pink" areas are indeed transparent, you can simply replace all pink pixels of the image with transparent ones, with a method like replaceColor in this example:
import java.awt.Color;
import java.awt.GridLayout;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
public class ColorReplaceExample
{
public static void main(String[] args)
{
SwingUtilities.invokeLater(new Runnable()
{
@Override
public void run()
{
createAndShowGUI();
}
});
}
private static void replaceColor(
BufferedImage image, Color oldColor, Color newColor)
{
for (int y=0; y<image.getHeight(); y++)
{
for (int x=0; x<image.getWidth(); x++)
{
int color = image.getRGB(x, y);
if (color == oldColor.getRGB())
{
image.setRGB(x, y, newColor.getRGB());
}
}
}
}
private static void createAndShowGUI()
{
JFrame f = new JFrame();
BufferedImage image0 = null;
BufferedImage image1 = null;
try
{
// Hypothetical input paths -- substitute the actual tileset file here.
// Note: the transparent replacement only shows up if the image has an
// alpha channel (e.g. TYPE_INT_ARGB); a palette-based PNG may need converting.
image0 = ImageIO.read(new File("tileset.png"));
image1 = ImageIO.read(new File("tileset.png"));
}
catch (IOException e)
{
e.printStackTrace();
}
replaceColor(image1, new Color(255,0,255), new Color(0,0,0,0));
f.getContentPane().setLayout(new GridLayout(1,2));
// Show the original and the color-replaced image side by side
f.getContentPane().add(new JLabel(new ImageIcon(image0)));
f.getContentPane().add(new JLabel(new ImageIcon(image1)));
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.pack();
f.setVisible(true);
}
}
Thread: LLL in GP/Pari
2015-11-17, 13:04 #3
paul0
Sep 2011
3·19 Posts
Quote:
Originally Posted by WraithX Be sure to check the built-in documentation for what a function does, using either ? or ??: Code: ?qflll qflll(x,{flag=0}): LLL reduction of the vectors forming the matrix x (gives the unimodular transformation matrix T such that x*T is LLL-reduced). flag is...
Thank you :)
Last fiddled with by paul0 on 2015-11-17 at 13:04 |
## Cryptology ePrint Archive: Report 2016/184
Efficiently Enforcing Input Validity in Secure Two-party Computation
Jonathan Katz and Alex J. Malozemoff and Xiao Wang
Abstract: Secure two-party computation based on cut-and-choose has made great strides in recent years, with a significant reduction in the total number of garbled circuits required. Nevertheless, the overhead of cut-and-choose can still be significant for large circuits (i.e., a factor of $\rho$ in both communication and computation for statistical security $2^{-\rho}$).
We show that for a particular class of computation it is possible to do better. Namely, consider the case where a function on the parties' inputs is computed only if each party's input satisfies some publicly checkable predicate (e.g., is signed by a third party, or lies in some desired domain). Using existing cut-and-choose-based protocols, both the predicate checks and the function would need to be garbled $\rho$ times. Here we show a protocol in which only the underlying function is garbled $\rho$ times, and the predicate checks are each garbled only \emph{once}. For certain natural examples (e.g., signature verification followed by evaluation of a million-gate circuit), this can lead to huge savings in communication (up to 80$\times$) and computation (up to 56$\times$). We provide detailed estimates using realistic examples to validate our claims.
Category / Keywords: cryptographic protocols / secure computation, garbled circuit
Date: received 22 Feb 2016, last revised 28 Feb 2016
Contact author: amaloz at cs umd edu
Available format(s): PDF | BibTeX Citation |
D. TV Shows
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
There are $n$ TV shows you want to watch. Suppose the whole time is split into equal parts called "minutes". The $i$-th of the shows is going from $l_i$-th to $r_i$-th minute, both ends inclusive.
You need a TV to watch a TV show and you can't watch two TV shows which air at the same time on the same TV, so it is possible you will need multiple TVs in some minutes. For example, if segments $[l_i, r_i]$ and $[l_j, r_j]$ intersect, then shows $i$ and $j$ can't be watched simultaneously on one TV.
Once you start watching a show on some TV it is not possible to "move" it to another TV (since it would be too distracting), or to watch another show on the same TV until this show ends.
There is a TV Rental shop near you. It rents a TV for $x$ rupees, and charges $y$ ($y < x$) rupees for every extra minute you keep the TV. So in order to rent a TV for minutes $[a; b]$ you will need to pay $x + y \cdot (b - a)$.
You can assume, that taking and returning of the TV doesn't take any time and doesn't distract from watching other TV shows. Find the minimum possible cost to view all shows. Since this value could be too large, print it modulo $10^9 + 7$.
Input
The first line contains integers $n$, $x$ and $y$ ($1 \le n \le 10^5$, $1 \le y < x \le 10^9$) — the number of TV shows, the cost to rent a TV for the first minute and the cost to rent a TV for every subsequent minute.
Each of the next $n$ lines contains two integers $l_i$ and $r_i$ ($1 \le l_i \le r_i \le 10^9$) denoting the start and the end minute of the $i$-th TV show.
Output
Print exactly one integer — the minimum cost to view all the shows taken modulo $10^9 + 7$.
Examples
Input
5 4 3
1 2
4 10
2 4
10 11
5 9
Output
60
Input
6 3 2
8 20
6 22
4 15
20 28
17 25
20 27
Output
142
Input
2 1000000000 2
1 2
2 3
Output
999999997
Note
In the first example, the optimal strategy would be to rent $3$ TVs to watch:
• Show $[1, 2]$ on the first TV,
• Show $[4, 10]$ on the second TV,
• Shows $[2, 4], [5, 9], [10, 11]$ on the third TV.
This way the cost for the first TV is $4 + 3 \cdot (2 - 1) = 7$, for the second is $4 + 3 \cdot (10 - 4) = 22$ and for the third is $4 + 3 \cdot (11 - 2) = 31$, which gives $60$ in total.
In the second example, it is optimal watch each show on a new TV.
In the third example, the two shows overlap at minute $2$, so each must be watched on its own TV. Note that the answer is to be printed modulo $10^9 + 7$.
# Building a powerful electromagnet for repulsion
I have to make an electromagnet for my project. Its role is to repel a permanent magnet, so that an N42 ring neodymium magnet (outer diameter 26.75 mm, inner diameter 16 mm, height 5 mm) is hard to press down toward it from around 2-3 inches away. Then, if a pulsing voltage is applied to the electromagnet, the permanent magnet should feel a vibration.
EDIT
I now have some results to talk about.
I bought a few electromagnets on the market, but they are too weak for this purpose; they were probably not built for repulsion. I have made 4 electromagnets myself so far with different specifications and have positive results. Now I want some feedback on what I am planning to do next, based on the results so far.
Here are the specifications and results of the 4 electromagnets I built:
I used ferrite cores of different diameters and heights and enameled copper wire of different diameters. To get repulsion from a greater distance, I put a permanent neodymium magnet on top of the core; this added distance to the repulsion and removed the attraction of the core to the magnet. The total distance over which repulsion is felt once I power the electromagnet is 30-35 mm, with a different repulsion push from each electromagnet: about 30 mm from the permanent magnet on the core, plus 5-7 mm added by the electromagnet. Below is a table listing the core and wire specifications and how much repulsion each gives to the permanent magnet when the electromagnet is switched ON/OFF.
From these results, I concluded that going for a core with a larger diameter and using 1.0 mm wire is the way to go. Now I am thinking of using a thicker core, i.e. 4 cm, 6 cm, and maybe 8 cm. I have a few questions about this:
1. Is there any formula or online tool to calculate which core size with which wire size is optimal, or any rule for making the most powerful electromagnet? I suspect there may be a saturation point beyond which a thicker core has no effect, or where adding more turns only adds resistance. There is an online electromagnet calculator available, but it is for air cores; it is still very helpful for finding the coil resistance and number of turns.
2. Any suggestion based on these results for what I am planning to do next will be very helpful.
• Using a pair of fairly similar magnets I got some resistance (but not a real lot) at that distance so the real question is probably how to make an electromagnet of at least the same (but probably double) the strength of a 26.75 x 16 Neodymium magnet. I'm not sure how to calculate that but my gut feel is that it will need hundreds of watts of power. Did you anticipate having that sort of power available? – PeterJ Nov 29 '13 at 13:01
• If I use two of the same N42 magnets of this size to repel each other, I get a fair push at 2 cm. I get a similar push with a 3x4 cm coil, where the winding is 1 cm thick and the rest is the metal inner core, at 9 V and 21 A from the power supply, though the coil gets hot very quickly. I am thinking some optimized coil may give more range? This is actually what I am currently looking into. Of course, increasing the diameter and thickness of the permanent N42 magnet could also be done; what I mentioned is the magnet I am currently using. – enterprize Dec 2 '13 at 15:24
• 30 cm - are you sure you don't mean 30mm? Clearly your results indicate that a bigger diameter gives you force at a greater distance but this distance is only 7mm not 30 cm. What are you saying? – Andy aka Dec 23 '13 at 22:58
• Sorry, typo; I corrected the OP. Yes, 7 mm so far, but that much is enough to be felt, since the vibration is then from 30 mm to 37 mm. The distance from which repulsion is felt is one main issue, so placing a permanent neodymium magnet makes the repulsion felt at 30 mm, removes the attraction of the core, and the 7 mm is the arm length of the vibration in this case. What I want to know now is the ratio/extent to which I can increase the diameter. I have 4, 6, and 8 cm ferrite cores available, and the current winding is 1 cm deep; how about adding more layers of winding? Like, if I increase the core diameter, should I also increase the winding? These parts are costly. – enterprize Dec 24 '13 at 1:06
• If I place a more powerful magnet on the core, which is now an option with the increased core diameter, and also use a more powerful reader magnet, I hope the distance can be increased. Also, when I used 1 mm diameter wire with the 2 cm core, the repulsion felt much stronger. – enterprize Dec 24 '13 at 1:09
# Tag Info
93
Firstly, Mars has a mean distance from the Sun of 1.524 AU, so by the inverse square law the energy it gets from the Sun is about 40% of what the Earth gets. But the main reason that Mars is so cold is that its atmosphere is very thin compared to Earth's (as well as very dry, see below). From Wikipedia Atmosphere of Mars: The atmosphere of Mars is much ...
87
It depends on where in outer space you are. If you simply stick it in orbit around the Earth, it'll sublimate: the mean surface temperature of something at Earth's distance from the Sun is about 220K, which is solidly in the vapor phase for water in a vacuum, and the solid-vapor transition at that temperature doesn't pass through the liquid phase. On the ...
38
I think that your thought process is flawed in that you assume that by drastically increasing the temperature you are guaranteed to get heavy elements. As odd as this may sound, this isn't the case (especially during the Big Bang Nucleosynthesis (BBN)) for a few reasons. In fact, if you took a hydrogen-only star and made it go supernova, you wouldn't get ...
27
I'm just going to expand and deepen on what the other answers already said. In the following I contrast the atmospheric transmission ($T$) and absorption ($A$, which is $A=1-T$) of Mars and Earth. The Mars plot (top) is from Prof. J. Irwin via this review by P. Read et al. 2015 and the terrestrial data (bottom) is from wikipedia. The plots of $A$ and $1-T$...
23
You can stick a thermometer in space, and if it is a super-high-tech one, it might show you the temperature of the gas. But since the interstellar medium (ISM) is so dilute, a normal thermometer will radiate energy away faster than it can absorb it, and thus it won't reach thermal equilibrium with the gas. It won't cool all the way to 0 K, though, since the ...
18
Yes, metals and other elements and molecules can exist in gaseous form under the right conditions of temperature and pressure. A "gas" is simply one of the fundamental states of matter, as in solid, liquid, or gas (and a few other states outside the scope of this question). But as a gas, these substances exist entirely as either individual atoms, individual ...
17
The answer depends on what you'd want to consider as a "star." If you're just thinking about stars on the main sequence, then you can just refer to the classical stellar type letters, "OBAFGKM" (which has relatively recently been extended to accommodate the coolest brown dwarfs with the letters "LTY"), where O-stars are the hottest stars (~30,000 K) and Y-...
16
Mars does have a greenhouse effect, only somewhat weaker than Earth's. Mars' atmosphere is very dilute, with a with a surface pressure only 0.6% of Earth's. So even if 95% of it is CO2, that's not a lot. However, it is actually a higher absolute abundance of CO2 molecules than on Earth, which only has a CO2 abundance of 0.04% (by volume; e.g. NOAA, ...
14
Hydrodynamic models of the Sun allow one method of estimating its internal properties. To do this, the Mass, radius, surface temperature, and total luminosity (radiative energy emitted)/s of the Sun must be known (determined observationally). Making several assumptions, e.g., that the Sun behaves as a fluid and that local thermodynamic equilibrium applies, ...
12
It would sublimate. The frozen mass of water would decrease in size as the water converts from a solid to a gas (without becoming a liquid) and drifts away.
9
Yes, there is a limit. If the radiation pressure gradient exceeds the local density multiplied by the local gravity, then no equilibrium is possible. Radiation pressure depends on the fourth power of temperature. Radiation pressure gradient therefore depends on the third power of temperature multiplied by the temperature gradient. Hence for stability $$T^...
9
The title of the question asks about interstellar space, but the body asks about the interstellar medium. These are two very different questions. The temperature of the interstellar medium varies widely, from a few kelvins to over ten million kelvins. By all accounts, the vast majority of the interstellar medium is at least "warm", where "warm" means several ...
9
Let $n$, $T$, and $x_i$ be the number density of hydrogen, the temperature of the gas, and $n_i/n$, where $n_i$ is the number density of the $i$th component of the interstellar medium. We can then write the criteria for thermal equilibrium as $$n^2\Lambda(n,T,x_i)-n\Gamma(n,T,x_i)\equiv n^2\mathcal{L}=0$$ where $\Lambda$ and $\Gamma$ are the heating and ...
8
The answer to your first question has to do with luminosity. It's a measure of power, the energy given off by an object in a certain amount of time, which you can think of as brightness. The more luminous the object, the brighter it appears. We can treat the Sun as an idealized object called a black body, which emits thermal radiation according to something ...
8
An answer to your question is contained within What is the largest hydrogen-burning star? The hottest observed main sequence stars are of type O3V, with photospheric temperatures of about 50,000 K. However, it is indeed possible that hotter main sequence stars may exist in the present-day universe, but have simply evolved into Wolf-Rayet stars (and lost a ...
7
In astronomy, there is no formal definition of the threshold between gas and dust. Gas can be monoatomic, diatomic, or molecular (or made of photons, in principle). Molecules can be very large, and in principle, dust particles are just very large molecules. I've seen various authors use various definitions, ranging from $\sim100$ to $\sim1000$ atoms. This ...
7
No, absolutely not. The core of a core-collapse supernova is one of the hottest places in the present-day universe. The temperature as the star runs out of nuclear fuel in its core is around 6-10 billion Kelvin. As it collapses, the core gets even hotter, perhaps as high as 100 billion Kelvin for a few seconds, before neutrino cooling starts to become ...
7
Without any other information, you cannot distinguish between the two effects. $$T = T_0 (1 + z)$$ A blackbody spectrum of temperature $T$ is identical to a blackbody spectrum of temperature $T_0$ with redshift $z$. For stellar/galactic radiation, we can use the fact that the radiation is not a perfect blackbody. For the CMB, we can use the fact that ...
6
The heliosphere is mainly defined by the region dominated by solar wind against the interstellar medium. "The solar wind is divided into two components, respectively termed the slow solar wind and the fast solar wind. The slow solar wind has a velocity of about 400 km/s, a temperature of 1.4–1.6×10^6 K and a composition that is a close match to the corona....
6
The composition can be determined by taking spectra. Additionally, the mass can be determined through dynamics. If you combine these two, under the assumption that the star is in a state of hydrostatic equilibrium (which means that the outward thermal pressure of the star due to fusion of hydrogen into helium is in balance with the inward tug of gravity), ...
6
The Boomerang Nebula (or Bow Tie Nebula) is a cloud of gas being expelled from a dying low-mass star, at $164~\mathrm{km}~\mathrm{s}^{-1}$. In general, when a gas expands, it cools (see extended explanation below). If the gas were optically thin to the CMB — that is, if it were sufficiently dilute that CMB photons could easily penetrate — it ...
6
In a white dwarf, the dense matter is not in its lowest energy configuration. Energy can still be extracted from the white dwarf material by fusion, provided it can be ignited. What exothermic nuclear reactions would there be that could take place in a neutron star? The bulk of the material is in the form of neutrons with a small number of protons and ...
6
The slowest reaction rate in the pp chain determines how quickly hydrogen can "burn" in the core of a sun-like star. That rate-determining step is actually the fusion of two protons to form deuterium via the diproton and a weak interaction decay. The fusion of lithium, whereby it fuses with a proton and then splits into two Helium nuclei is actually part of ...
6
I think that there isn't a strict answer to this question. However, I believe the answer is that there's a difference between the core of a hydrogen-burning star and the core of a protostar or star-forming gas cloud. For a hydrogen-burning star, the core, as you say, is the region of the star where fusion is taking place. This is surrounded by the ...
6
I am not sure what you mean by "thermal" pressure. Jupiter is supported by pressure, just like all objects that are in (approximate) hydrostatic equilibrium. That pressure is provided by your everyday, temperature-dependent Maxwell-Boltzmann ideal gas pressure in the outer parts, but the free electrons in the interior become degenerate and so in these ...
6
It should probably be added that the article includes a glaring error of the type you often see when the science writer apparently did not take an elementary astronomy class (this is why we have such classes!). When the article states that the "lost matter exists as filaments of oxygen gas", you can be sure that Michael Shull never said any such thing, ...
5
It depends on the distance from the central body. This gives the temperature $T$ at a given point as a function of the distance from that point to the center ($R$): $$T(R)=\left[\frac{3GM \dot{M}}{8 \pi \sigma R^3} \left(1-\sqrt{\frac{R_{\text{inner}}}{R}} \right) \right]^{\frac{1}{4}}$$ where $G$, $\pi$, and $\sigma$ are the familiar constants, $M$ is the ...
5
Eventually, yes. Interesting information about Venus: Venus is hotter than Mercury, despite being nearly twice as far from the Sun. Earth, despite being further from the Sun, receives more energy from the Sun than Venus, due to Venus's very high albedo. As you might guess by this information, the major factor that keeps Venus hot isn't how much energy it ...
5
In the vacuum of space the most important consideration is to consider how much radiation an ice cube would absorb from, for example, nearby stars and how fast the ice cube itself would radiate away energy (using Wien's law), finding what ice cube temperature would produce an equilibrium (the temperature at which the ice cube radiate energy at the same rate ...
Actuarial Outpost Ito's Lemma general question
#1
09-18-2008, 12:39 AM
langstafftigerpizza Member Join Date: Jan 2007 Studying for nothing right now Favorite beer: heineken Posts: 420
Ito's Lemma general question
I wonder how (dS)^2, the middle term, is calculated.
For example, ASM page 279 14G, 2nd last line, how does (dZ)^2 become dt?
#2
09-18-2008, 12:50 AM
Actiger Member SOA CCA AAA Join Date: May 2007 Location: NYC Studying for Married Life Posts: 1,373
This is the multiplication rule of the SDE.
dZ^2 = dt
dt^2 = 0
dt*dZ = 0
#3
09-18-2008, 01:13 AM
langstafftigerpizza Member Join Date: Jan 2007 Studying for nothing right now Favorite beer: heineken Posts: 420
Quote:
Originally Posted by Actiger This is the multiplication rule of the SDE. dZ^2 = dt dt^2 = 0 dt*dZ = 0
Still confused. Can you explain a little further?
Also for Quiz 14-6, the solution shows (dY)^2 = 0.4^2dt?
Thanks
#4
09-18-2008, 05:36 AM
jraven Member Join Date: Aug 2007 Location: New Hampshire Studying for nothing! College: Penn State Posts: 1,305
Quote:
Originally Posted by langstafftigerpizza Still confused. Can you explain a little further? Also for Quiz 14-6, the solution shows (dY)^2 = 0.4^2dt? Please help. Thanks
The idea is that if, say, $dY = 0.1 \,dt + 0.4 \,dZ$ (I don't have a copy of the manual on-hand to see what it really uses), then
$(dY)^2 = (0.1 \,dt + 0.4 \,dZ)^2 = (0.1)^2 \,(dt)^2 + 2 (0.1) (0.4) \,dt\,dZ + (0.4)^2 \,(dZ)^2$
Then you use the multiplication table that Actiger gave to change that to
$(dY)^2 = (0.1)^2 (0) + 2 (0.1) (0.4) (0) + (0.4)^2 \,dt = (0.4)^2 \,dt$
As for why the multiplication table is what it is... that's a little (or a lot) complicated, and of no use whatsoever in understanding the material. You just need to know the multiplication table that Actiger provided.
__________________
The Poisson distribution wasn't named after a fish -- it was named after a man ... who was named after a fish.
#5
09-18-2008, 08:25 PM
langstafftigerpizza Member Join Date: Jan 2007 Studying for nothing right now Favorite beer: heineken Posts: 420
thanks a lot jraven, I guess I will just memorize the multiplication table, and it is good to go.
#6
09-24-2008, 10:50 PM
Fermat83 Member Join Date: May 2008 Posts: 295
Quote:
Originally Posted by langstafftigerpizza I wonder how is (dS)^2, which is the middle term, calculated. For example, ASM page 279 14G, 2nd last line, how does (dZ)^2 become dt? Thanks in advance!
Just memorize the multiplication table and everything will be fine. If the multiplication table seems weird, it's because it is. You are dealing with stochastic
random variables, and to manipulate them with calculus and see what they're doing instantaneously, some bright boys had to come up with some new axioms to deal with these strange objects. I actually read a very theoretical book on the subject and it was kind of interesting but did nothing to help with the exam.
#7
09-25-2008, 11:14 AM
volva yet Note Contributor Join Date: Feb 2006 Location: Nomadic Studying for GHC/DMAC College: PSU '07 Favorite beer: Oskar Blues Old Chub Scotch Ale Posts: 4,949
Quote:
Originally Posted by jraven The idea is that if, say, $dY = 0.1 \,dt + 0.4 \,dZ$ (I don't have a copy of the manual on-hand to see what it really uses), then $(dY)^2 = (0.1 \,dt + 0.4 \,dZ)^2 = (0.1)^2 \,(dt)^2 + 2 (0.1) (0.4) \,dt\,dZ + (0.4)^2 \,(dZ)^2$ Then you use the multiplication table that Actiger gave to change that to $(dY)^2 = (0.1)^2 (0) + 2 (0.1) (0.4) (0) + (0.4)^2 \,dt = (0.4)^2 \,dt$ As for why the multiplication table is what it is... that's a little (or a lot) complicated, and of no use whatsoever in understanding the material. You just need to know the multiplication table that Actiger provided.
I am extremely interested in why, and there is no source that will tell me why. I am just forced to know that these rules are the law and I must abide. I'm not a fan of this as it does not help me truly understand the bridge between stochastic stock price modeling and the financial derivatives based on stock prices that is Ito's Lemma.
#8
09-25-2008, 11:55 AM
raidersfan Member SOA Join Date: May 2007 Studying for MLC & C Favorite beer: NewCastle Posts: 40
Quote:
Originally Posted by colby2152 I am extremely interested in why, and there is no source that will tell me why. I am just forced to know that these rules are the law and I must abide. I'm not a fan of this as it does not help me truly understand the bridge between stochastic stock price modeling and the financial derivatives based on stock prices that is Ito's Lemma.
Quote:
Originally Posted by colby2152 I am extremely interested in why, .
No you're not.
Quote:
Originally Posted by colby2152 and there is no source that will tell me why. .
Yes there is.
Quote:
Originally Posted by colby2152 I am just forced to know that these rules are the law and I must abide. .
This should be the least of your worries on this exam.
Quote:
Originally Posted by colby2152 it does not help me truly understand the bridge between stochastic stock price modeling and the financial derivatives based on stock prices that is Ito's Lemma..
No such bridge exists.
__________________
Looking to go 2 for 2:
Corporate Finance VEE: Check
MFE: Passed. In your face Vorian Atreides!
#9
09-25-2008, 06:34 PM
Fermat83 Member Join Date: May 2008 Posts: 295
Quote:
Originally Posted by colby2152 I am extremely interested in why, and there is no source that will tell me why. I am just forced to know that these rules are the law and I must abide. I'm not a fan of this as it does not help me truly understand the bridge between stochastic stock price modeling and the financial derivatives based on stock prices that is Ito's Lemma.
There are plenty of books on the theory of this stuff. I was also annoyed that I had to take these multiplication rules and Ito's Lemma as is, with no explanation of why. I spent a few days digging in a very theoretical book and saw why stochastic random variables model stock behavior well; as for stochastic calculus, it was incredibly abstract and strange. Overall it was a waste of study time, and I'm guessing that's why they don't go into this stuff. I've realized you can't be an applied mathematician and also fully understand the theoretical aspects behind everything without getting 3 hours of sleep and having 0 social life. Theoretical mathematicians develop the tools and applied mathematicians use them to solve complicated problems in the real world.
#10
09-25-2008, 11:53 PM
Nonpareil Note Contributor Join Date: Nov 2006 Location: Rocket City Studying for Exam C Posts: 833
Quote:
Originally Posted by colby2152 I am extremely interested in why, and there is no source that will tell me why. I am just forced to know that these rules are the law and I must abide. I'm not a fan of this as it does not help me truly understand the bridge between stochastic stock price modeling and the financial derivatives based on stock prices that is Ito's Lemma.
Look at page 658 of DM, which has a decent heuristic explanation.
__________________
"One must do no violence to nature, nor model it in conformity to any blindly formed chimera." Janos Bolyai
"Theoria cum praxis." Gottfried von Leibniz |
## DMOPC '19 Contest 3 P0 - What is it?
Points: 3
Time limit: 2.0s
Memory limit: 256M
Author:
Problem type
Allowed languages
Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig
Veshy needs help in math class. He has N sequences of 10 space-separated terms $t_1, t_2, \ldots, t_{10}$, given in order. For each sequence he wants to know if it is arithmetic, geometric, or neither. Output the answer to the $i$-th sequence on the $i$-th line. Terms are guaranteed to be integers.
Note:
An arithmetic sequence is a sequence that can be written in the form $t_i = a + (i-1)d$, where $a$ and $d$ are constants.
A geometric sequence is a sequence that can be written in the form $t_i = a \cdot r^{i-1}$, where $a$ and $r$ are constants.
It may be helpful to know that in an arithmetic sequence, $t_{i+1} - t_i$ is constant, and in a geometric sequence, $t_{i+1} / t_i$ is constant.
In all tests,
#### Input Specification
The first line of input is $N$, the number of sequences.
Each of the following $N$ lines contains 10 integers, $t_1, t_2, \ldots, t_{10}$, a sequence of numbers.
#### Output Specification
Your output must have $N$ lines such that the answer to the $i$-th sequence is on the $i$-th line.
If the sequence is arithmetic, output arithmetic.
If the sequence is geometric, output geometric.
If the sequence is neither arithmetic nor geometric, output neither.
If the sequence is both arithmetic and geometric, output both.
#### Sample Input
4
1 2 3 4 5 6 7 8 9 10
2 4 8 16 32 64 128 256 512 1024
1 1 0 0 1 1 0 0 1 1
1 1 1 1 1 1 1 1 1 1
#### Sample Output
arithmetic
geometric
neither
both |
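A minimal solution sketch follows (JavaScript for Node.js, reading all of standard input). It is not an official reference solution, and the handling of sequences whose first term is zero is an assumption about the intended geometric-sequence definition.

```javascript
// Classify each of the N sequences as arithmetic, geometric, both, or neither.
const lines = require('fs').readFileSync(0, 'utf8').trim().split('\n');
const n = parseInt(lines[0], 10);
const out = [];
for (let i = 1; i <= n; i++) {
  const t = lines[i].trim().split(/\s+/).map(Number);
  // Arithmetic: every consecutive difference equals the first difference.
  const d = t[1] - t[0];
  const isArith = t.every((v, j) => j === 0 || v - t[j - 1] === d);
  // Geometric: constant ratio, checked by cross-multiplication to avoid
  // dividing by zero; a leading zero is treated as geometric only if the
  // whole sequence is zero (an assumption about that edge case).
  const isGeo = t[0] === 0
    ? t.every(v => v === 0)
    : t.every((v, j) => j === 0 || v * t[0] === t[j - 1] * t[1]);
  out.push(isArith && isGeo ? 'both' : isArith ? 'arithmetic' : isGeo ? 'geometric' : 'neither');
}
console.log(out.join('\n'));
```

On the sample input above this prints arithmetic, geometric, neither, both, matching the sample output.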
# Preclinical longitudinal imaging of tumor microvascular radiobiological response with functional optical coherence tomography
## Abstract
Radiation therapy (RT) is widely used for cancer treatment, alone or in combination with other therapies. Recent RT advances have revived interest in delivering higher dose in fewer fractions, which may invoke both cellular and microvascular damage mechanisms. Microvasculature may thus be a potentially sensitive functional biomarker of RT early response, especially for such emerging RT treatments. However it is difficult to measure directly and non-invasively, and its time course, dose dependencies, and overall importance in tumor control are unclear. We use functional optical coherence tomography for quantitative longitudinal in vivo imaging in preclinical models of human tumor xenografts subjected to 10, 20 and 30 Gy doses, furnishing a detailed assessment of vascular remodeling following RT. Immediate (minutes to tens of minutes) and early (days to weeks) RT responses of microvascular supply, as well as tumor volume and fluorescence intensity, were quantified and demonstrated robust and complex temporal dose-dependent behaviors. The findings were compared to theoretical models proposed in the literature.
## Introduction
Radiation therapy (RT), alone or in combination with other therapies, is one of the most commonly used treatment strategies for managing cancer. Typical clinical doses for targeting cancer cells in tumors are 2 Gy per fraction, administered daily for 5–6 weeks for a total cumulative dose of 50–70 Gy1. Such fractionation has been considered to be the most clinically effective, increasing the therapeutic ratio by repairing normal tissues and enhancing tumor cell kill compared to the equivalent single-fraction dose. However, recent advances in RT delivery and monitoring of the radiobiological tumor effects have led to the development of stereotactic body radiation therapy (SBRT), which delivers higher doses per fraction and fewer fractions for improved local control and lower damage to surrounding normal tissues2.
Preclinical studies provide emerging evidence that higher doses of radiation induce additional tumor cell kill through “non-classical” radiobiological mechanisms, mediated by tumor microvascular damage3,4,5,6. Specifically, Fuks and Kolesnick suggested that increased anti-tumor RT-effects are due to vascular damage7,8,9, with a minimum threshold dose of ~8–10 Gy8. Similarly, tumors receiving high dose RT were found to respond above the levels predicted with existing radiobiological models of cell death alone10. This was linked to the significantly increased proliferation rate of tumor vascular endothelial cells (EC) undergoing angiogenesis, potentially making tumor vasculature more sensitive to ionizing radiation11. Death of tumor EC was reported to initiate the inflammation cascade12, yielding hypoxic, acidic and nutrient deprived microenvironment, and enhancing radiation toxicity13.
In spite of intense research in this area, the underlying biological mechanisms of tumor response after high-dose RT remain unclear14,15,16,17. Little is also known about the dynamics of vascular changes, organization of tumor vasculature, angiogenesis and neovascularization at various post-RT stages. This is mostly due to the inability to study the dynamic response in-situ at the capillary level. A recent review of over 40 preclinical studies demonstrates lack of experimental consensus on the RT microvascular response5. The conflicting data arises from the variation in experimental protocols (animal models, cell lines, x-ray energies, dose levels), as well as differences in imaging and quantification techniques (immunohistochemistry ex-vivo, Doppler sonography and computed tomography in-vivo, etc.). Although some theoretical mechanistic models are proposed for RT vascular response effects, little direct experimental in-vivo data exists to support and validate these models. A good example is Kozin et al.’s model of neovascularization after high single-dose RT in rodents based on a thorough analysis of (conflicting) published data covering last 50 years of research in the field18. The numerous questions raised in this work about vascular dynamics in irradiated tumors demonstrate “… the urgent need for tracking vascular changes at the capillary level post-RT using advanced modern technologies” (ref.18). If successful, this line of research should enable better understanding of post-RT microvascular effects and provide early (inter-fraction) response metrics, potentially enabling personalization of the radiation treatments (adaptive RT). Addressing this problem is particularly timely because higher-dose radiation treatments such as SBRT, with their suggested greater involvement of the tumor microvasculature, are currently under active investigation in radiation oncology.
Tumor capillaries are known to be particularly sensitive to radiation5, but most imaging modalities (ultrasound, magnetic resonance imaging, confocal fluorescence microscopy, etc.) either do not have the requisite resolution or require potentially toxic contrast agents to visualize them and monitor their response longitudinally. Here we provide new insight into the response of tumor microvasculature to RT using functional optical coherence tomography (OCT). OCT is an emerging label-free non-invasive 3D optical imaging modality for visualizing subsurface tissue details in-vivo at resolutions approaching microscopy and blood flow details at the microcirculation level19. Its functional extension, called speckle variance OCT (svOCT), enables three-dimensional depth-resolved imaging of microvasculature in-vivo20. The endogenous contrast of svOCT images originates from the different temporal light scattering properties between the blood within vessels and the surrounding "solid" tissues. Other than not requiring contrast agents, significant advantages of svOCT for tracking tumor vasculature post-RT include fast volumetric scanning (a few seconds to a few minutes depending on the tumor size), rapid processing, 1 to 3 mm imaging depth (depending on tissue and tumor type), and blood flow/direction independence; this last characteristic is advantageous in that it maximizes microvascular detection and visualization, but may be a drawback if flow speed information is required. In addition, OCT scanners are now relatively cheap and portable.
The current “shedding light on radiotherapy” study builds on a decade of background work. Initially Mariampillai et al. developed svOCT method for microvasculature monitoring21. Leung et al. designed the heated animal restrainer for svOCT imaging, irradiation protocol and dose verification22. Maeda et al. optimized the well-established, but occasionally disadvantageous dorsal skin window chamber (DSWC) model23 and conducted a pilot study of a short-term response (2 weeks) to 30 Gy single-dose RT24. Conroy et al. developed post-processing techniques for vasculature quantification with biological metrics25. We build on this decade of previous experience, improving and refining essentially every aspect of this imaging and analysis platform, to now enable identification of vascular radiobiological response.
We selected the NOD-Rag1null IL2rγnull (NRG) mouse strain for this study because of its radio-resistant and immune-deficient nature26. Driven by current emerging clinical interest in SBRT for treating pancreatic cancer27,28,29, we used Bx-PC3 human pancreatic cancer cells to study its response to irradiation. From a variety of microvascular metrics developed by us and others over the years (vessel tortuosity, branching, length, fractal dimension, etc.25,30,31,32), here we report on the vascular volume density due to its calculation simplicity (number of vascular pixels divided by total pixels in the selected volume), robustness, minimal operator dependence and potential ease for results replication by other research groups. Two additional vasculature-independent measures were also performed for tracking RT response: tumor volume via caliper measurements and tumor cell fluorescence intensity via fluorescence microscopy after each svOCT imaging session. Tumor resections for histological staining and histopathologic evaluation were also performed at selected post-RT stages in several animals to support and validate the in-vivo longitudinal observations.
## Materials and Methods
### Mouse model, cell culture and tumor model
All animal procedures were performed in accordance with appropriate standards under protocol approved by the University Health Network Institutional Animal Care and Use Committee in Toronto, Canada (AUP #3256). Human DsRed-labeled BxPC-3 pancreatic cancer cells33 were purchased from AntiCancer Inc. (San Diego, CA, USA) and cultured in RPMI 1640 medium supplemented with 2 mM L-glutamine, 10% fetal bovine serum and 1% Penicillin Streptomycin (GIBCO BRL) at 5% CO2 and 37 °C. DsRed-labelled-BxPC-3 tumors were generated by injection of 2.5 × 105 cells prepared in 10 μL of 1:1 PBS:Matrigel (BD Biosciences, ON, Canada) solution into the dorsal skin of seven- to eight-week-old NRG mice (Jackson Labs, ME, USA) using a 30 G needle. The DSWC surgery was performed 15–21 days post injection after the tumors reached 3–5 mm diameter (Fig. 1(a–c)). Titanium window chambers were surgically implanted into the dorsal skinfold of anesthetized (mixture of 80 mg/kg of ketamine and 5 mg/kg of xylazine) mice using the procedure described in ref.23. svOCT imaging was performed after a recovery period of three to five days post-DSWC installation. Optimized dorsal skin DSWC model allowed for monitoring the response for significantly longer period of time (Fig. 1(d)) compared to similar studies reported in the literature34.
Ionizing radiation was delivered to the tumor using a commercial small animal X-ray micro-irradiator system (XRad225Cx, Precision X-Ray Inc., North Branford, CT, USA) (Fig. 1(e)). With computer control, the system delivered single focal radiation beams (225 kVp, 13 mA, added filtration of 0.32 mm Cu) at doses of 10, 20 and 30 Gy with a diameter of 8 mm directly to BxPC-3 tumors, with a dose rate of 2.63 Gy/min. The X-ray tube was mounted on a rotating gantry with a flat panel detector located opposite the isocenter, which facilitated imaging and irradiation of the target at any given angle. The irradiator was calibrated to ensure accurate dose delivery with tissue phantoms using methods previously described35.
Prior to irradiation, mice were anesthetized using 5% isoflurane and maintained using 2% isoflurane delivered through a mask. In order to align the center of the tumor within the window chamber to the isocenter of the radiation beam, fluoroscopy images were taken and animal stage position adjusted accordingly. The location of RT and dose levels (Fig. 1(f) and (g)) were confirmed with calibrated Gafchromic EBT-2 film (ISP Inc., Wayne, NJ, USA) consisting of a radiosensitive monomer that polymerizes and changes color with absorbed dose.
### Experimental study schema
The time course of conducted experiments is shown in Fig. 2(a). Initially, tumor cells were injected into the dorsal skin. After the tumor volume reached 3–5 mm in diameter ~2 weeks later, the DSWC was implanted. Irradiation was performed ~10 days after DSWC installation. This delay ensured adequate tumor and vascular growth, assessed by periodic svOCT and fluorescence imaging. At day “R”, tumor was treated with a single-dose of radiation using the small animal irradiator. For five to eight weeks following irradiation, tumor changes were monitored repeatedly with caliper measurements (tumor volume), svOCT imaging (vasculature), and epi-fluorescence microscopy (tumor cell status). Specifically, tumor size at the back side of the window chamber (Fig. 2(b)) was measured in three perpendicular directions with calipers prior to every imaging session. svOCT from the front side of the window chamber (Fig. 2(c)) was used to image tumor microvasculature (Fig. 2(d)) within the area labeled by the black rectangle. DsRed (535 nm excitation, 580 nm emission) tumor cell fluorescence images (Fig. 2(e)) were obtained with an epi-fluorescence microscope with consistent exposure settings (Leica MZ FLIII, Leica Microsystems, Richmond Hill, ON, Canada), and analyzed using MATLAB by computing the average intensity of all pixels.
To support longitudinal in-vivo observations, several animals were sacrificed and tissue sections were histologically stained at various time points. Mice were euthanized by anesthesia with ketamine/xylazine followed by cervical dislocation. Tumors were resected, fixed in 10% formalin and processed for histologic staining. Hematoxylin and eosin (H&E) staining was used to view cellular morphology, and labeling of DNA fragments (TUNEL antibody assay) was used to quantify cellular apoptosis. Slides were scanned with an Aperio scanner, and TUNEL positivity was measured for the entire tumor section using Aperio ImageScope software (Leica Biosystems, Concord, ON, Canada).
### Optical coherence tomography system
All OCT images were acquired using a previously-described swept source OCT system based on a quadrature interferometer to suppress the complex conjugate artifact, as shown in Fig. 3 (refs36,37). Briefly, the source (HS2000-HL, Santec, Japan) had a central wavelength of 1320 nm, a full width at half-maximum wavelength of 110 nm and an average output power of 10 mW. The repetition scan rate of the source was 20 kHz with a duty cycle of 68%. The light output was split in the first 2 × 2 coupler and 90% was directed toward the tissue. A 2D galvo scanning system enabled lateral beam translation and thus 3D volumetric imaging (GVS-012, Thorlabs, NJ, USA). Tissue back-scattered light was coupled back within the optical fiber and fed into a semiconductor optical amplifier (SOA - BOA1017, Covega, MD, USA), with gain adjusted to 35 dB, to boost the signal level. The SOA had the same center wavelength and bandwidth as the laser source. We used a polarization controller (located before the SOA) to minimize the differences between the shape of normalized light spectra in the reference arm and after the SOA. The amplified signal was combined with the reference signal in a 3 × 3 coupler followed by a 2 × 2 coupler. Two channels balanced detection was used to extract the complementary components of the complex interferometric signal. Two attenuators were used to match the optical power entering the balanced detectors (PDB150C, Thorlabs, NJ, USA) with a saturation level of 5 mW. Two detector outputs were digitized using a data acquisition card (ATS9625, Alazartech, Montreal, Canada) with 16-bit resolution and sampling rate of 250MS/s. The resultant axial and lateral resolutions (in air) were 8 µm and 15 µm, respectively. To ensure consistency of obtained in-vivo data over time and between animals, OCT optical power at the probe output was measured before each imaging session to be 5 mW, OCT probe imaging angle was set to 84°, relative to horizontal, and imaging speed was fixed at 40 frames per second.
### Imaging, data processing and representation
#### svOCT
Tumor-bearing mice (n = 60 with 45 irradiated and 15 non-irradiated tumors) were anesthetized by inhalation of 2% isoflurane and placed on a mouse restrainer22 with built-in 37 °C heating element to prevent motion artifacts and maintain physiological temperature during imaging procedures. OCT volumetric images (Fig. 4(b)) were taken over a 6 × 6 mm2 field of view with 800 A-scans per frame and a gate length of N = 8 (number of sequential same-location B-scans), to enable inter-frame comparison required for svOCT analysis (Fig. 4(c)). This gate length may be optimal for low bulk tissue motion scenarios, such as the DSWC20. The svOCT algorithm (Fig. 4(d)) was used to calculate the inter-frame intensity variance from the same spatial location, with the contrast arising from differences in time-varying speckle properties at each pixel:
$$SV_{zx}=\frac{1}{N}\sum_{i=1}^{N}\left(I_{izx}-\overline{I}_{zx}\right)^{2} \qquad (1)$$
where N is the number of B-scans acquired at the same spatial location within a tissue volume, $I_{izx}$ is the intensity of the (z,x)th pixel of the i-th B-scan, z is the axial coordinate, x is the lateral coordinate, and $\overline{I}_{zx}$ is the mean intensity of the (z,x)th pixel over the N consecutive B-scans. This procedure was then repeated for all spatial locations within the scanned tissue volume to obtain $SV_{zx}$ vascular cross-sections, as shown in Fig. 4(e).
In Eq. (1), if the N B-scans are acquired faster than the "stationary" solid-tissue decorrelation time, then the value of ($I_{izx}-\overline{I}_{zx}$) for these pixels approaches zero, thereby suppressing the tissue signal in the resulting SV image. Here, the B-scan acquisition interval was set to 25 ms: this was fast enough that signals from stationary tissues did not de-correlate between frames (thus ~0 svOCT signal), while being sufficiently slow to ensure complete inter-frame de-correlation for pixels representing vascular blood (thus high svOCT signal).
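To make Eq. (1) concrete, the sketch below shows the per-pixel computation in JavaScript (the authors used MATLAB; the array shapes and names here are illustrative assumptions, not their code).

```javascript
// Minimal sketch of Eq. (1): speckle variance from N co-located B-scans.
// `frames` is assumed to be an array of N B-scans, each a 2D array [z][x]
// of OCT intensities.
function speckleVariance(frames) {
  const N = frames.length;
  const Z = frames[0].length, X = frames[0][0].length;
  const sv = Array.from({ length: Z }, () => new Float64Array(X));
  for (let z = 0; z < Z; z++) {
    for (let x = 0; x < X; x++) {
      // Mean intensity of this pixel across the N repeated frames
      let mean = 0;
      for (let i = 0; i < N; i++) mean += frames[i][z][x];
      mean /= N;
      // Inter-frame variance: high for flowing blood, near zero for static tissue
      let v = 0;
      for (let i = 0; i < N; i++) v += (frames[i][z][x] - mean) ** 2;
      sv[z][x] = v / N;
    }
  }
  return sv;
}
```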
Volumetric vascular images were composed of hundreds of $SV_{zx}$ vascular cross-sections per millimeter, taken along the lateral $y$ dimension. Those images were post-processed for vascular volume density (VVD) calculation, vascular en-face 2D projection (Fig. 4(f)) and depth-encoded 2D (Fig. 4(g)) and 3D (Fig. 4(h)) representation using (i) a morphological opening/closing algorithm for noise and artifact removal38 to minimize contributions from non-vessel signals such as bulk tissue motion; (ii) binarization with Otsu's thresholding method39 in the depth direction to retain deep-vessel information otherwise suppressed due to the exponential attenuation of the OCT signal; (iii) tumor surface masking and leveling for correct depth encoding while preserving blood vessel topology, orientation and connectivity. VVD was calculated as the fraction of vascular pixels out of the total number of pixels in the analyzed volume. A green-yellow-red-grey-black color map (256 color gradations) was chosen for depth encoding (green = top tissue layers just below the glass coverslip, black = deepest tissues). Matlab software (Mathworks, MA, USA) was used for processing the data.
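The VVD metric itself then reduces to a simple pixel fraction over the binarized volume. A minimal sketch, assuming that denoising and depth-wise Otsu binarization have already produced a 0/1 volume (again illustrative, not the authors' MATLAB code):

```javascript
// VVD = (number of vascular voxels) / (total voxels in the analyzed volume).
// `binaryVolume` is assumed to be a 3D array of 0/1 values after
// noise removal and depth-wise Otsu thresholding.
function vascularVolumeDensity(binaryVolume) {
  let vascular = 0, total = 0;
  for (const slice of binaryVolume) {
    for (const row of slice) {
      for (const v of row) {
        total++;
        if (v) vascular++;
      }
    }
  }
  return vascular / total;
}
```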
### Scientific rigour and statistical considerations
Many literature studies of radiobiological microvascular responses often report conflicting results, likely due to the variations in experimental protocols (animal and tumor models, irradiation methods, etc.), imaging methodologies, and quantification techniques5; these difficulties underscore the subtle and complex nature of the problem. After more than a decade of careful background preparation, the current study finally ensures robust and unbiased experimental design and analysis of results, rigorously quantifying vascular radiobiological response of irradiated tissues. For the three reported dose levels of 10, 20 and 30 Gy, a total of 60 mice were used: 15 animals for each dose plus 15 un-irradiated. As some animals were used for validating histology, animal numbers reduced towards the latest time points (~8 weeks), from 15 to 7–8. This relative reduction of animal numbers is reflected in the size of error bars in the plots reported below; the initially large number of 15 animals per dose was chosen to ensure robust results throughout the imaging time course regardless of this histological attrition. Further, as sex is an important and potentially confounding biological variable, only female mice were used to exclude this uncertainty. A new batch of pancreatic tumor cells and ‘fresh’ chemicals were purchased from official vendors to further reduce the risk of laboratory-to-laboratory differences and increase rigor and robustness of the reported trends.
Repeated measures analysis of variance (ANOVA) was performed using SPSS Statistics software (IBM, Armonk, NY). Two-way repeated measures ANOVA with Bonferroni post-test was used for serial imaging data to compare the results for the groups irradiated with different doses. The number of samples (n) indicates the number of mice per treatment group. In all cases, P < 0.05 was considered statistically significant, and all error bars represent mean ± standard deviation.
## Results and Discussion
svOCT imaging and its associated post-processing steps provide a powerful platform for assessing volumetric tumor vasculature growth and response to radiation. As seen in Fig. 5, pancreatic tumor xenograft vasculature aggressively developed within ~2 weeks after subcutaneous injection of tumor cells into dorsal skin. At day 3 after injection seen in Fig. 5(a), there was microvascular growth from neighboring normal tissue vessels40. Vessel growth continued, forming the “claws” as seen at one-week time point in Fig. 5(b). After these connect at ~day 10 (Fig. 5(c)), further new vessels quickly sprout inside the tumor to fully vascularize it within a few days (day 16 in Fig. 5(d)). The structure of vascular bed in tumor is seen to be markedly different from that in normal tissue, where microarchitecture of vascular network is more hierarchically organized as shown in Fig. 5(e), with more ordered and evenly distributed vessels to allow adequate perfusion of nutrients and oxygen to all cells41. In contrast, tumor vessels are immature, tortuous, irregular in diameter, and often sharply bent. They form a disorganized labyrinth with a lack of conventional blood vessel hierarchy in which arterioles, capillaries, and venules are not clearly identifiable42.
Introducing single-dose radiation treatment into this course of tumor development changes its growth dynamics. Prior to examining longer-term responses (days-weeks), we look closely at the immediate (minutes-scale) response. In other tumor types43,44 and in our earlier investigations using intravital microscopy45, irradiation with high single doses causes rapid vascular alterations in human tumor xenografts. Depth-encoded svOCT panels in Fig. 6 demonstrate the immediate microvascular effects following 10 Gy irradiation. The vascular volume density (VVD) markedly decreased by 26% half an hour post-RT (Fig. 6(b)) from its initial state before irradiation (Fig. 6(a)). Interestingly, maximum response at this time point is seen in small vessels (10–30 μm in diameter); small-to-medium size vessels (30–70 μm in diameter) appear less affected.
Many of these alterations seem non-permanent, with majority of these vessels re-appearing later: at 45 min and 60 min time points, the circulation recovery was detected (Fig. 6(c,d)) reaching 90% of initial vascularity at 90 min post-RT (Fig. 6(e)). This may be an indication of temporary transient microvascular thrombosis or capillary anastomosis bypass after irradiation24,45,46. In other words, those vessels that reappeared at later time points post-RT were not permanently damaged by irradiation. Permanent disappearance of ~10% of vessels may be an indication of radiation-induced death of endothelial cells and collapse of the fragile tumor vessels as a result of an interstitial fluid pressure elevation caused by extravasation of plasma proteins47,48.
Figure 7 shows the effect of a 20 Gy single dose on tumor microvasculature over 6 weeks (from 1 week pre-RT to 5 weeks post-RT). The tumor was irradiated after being fully vascularized (“Day -0”). Initial response is seen at 1.5 hours after irradiation (“Day +0” image), where VVD = 83% of pre-RT vasculature; svOCT images at t = 2, 6 and 8 days post-RT clearly demonstrate that vessels in the tumor core are preferentially affected compared with those in the tumor rim. This supports the previous conjecture that parts of vascular networks in the tumor periphery are ~ normal tissue blood vessels sprouting by angiogenesis into the tumor mass18; these might be more resistant to radiation compared to the new tumor blood vessels in the core formed by vasculogenesis49,50.
Data beyond ~10 days provides clear evidence of tumor re-vascularization via growth of the surviving vessels, in accord with earlier studies and hypothesis that tumor regrowth after local irradiation is dependent on blood vessel formation by surviving endothelial cells18,51. It is also interesting to note that throughout this > 10 days revascularization process, the tumor region appears to be getting smaller.
The individual ‘case studies’ presented above are interesting and do provide some insights, but the real value of the developed svOCT platform is in its large imaging throughput capability and quantifiable metric extraction. We thus present a quantitative summary of the entire n = 60 animal study, for the three irradiation dose levels (plus un-irradiated controls) showing the three measured variables: tumor VVDs extracted from svOCT images (Fig. 8(a)), volumes from caliper measurements (Fig. 8(b)), and fluorescence intensity from microscopy (Fig. 8(c)).
The longitudinal monitoring data is shown over the entire ~10 week temporal observation interval. Error bars are calculated at each experimental point, but are only shown at selective intervals for clarity – in the pre-RT regime, in the midcourse, and towards the end of the post-RT observation interval. Figure 8(d) shows the three metrics on a single panel for the 20 Gy dose case. Also shown in Fig. 8(e) is the proposed literature model18 for the temporal course of microvascular changes post RT; as mentioned previously, this model was not based on direct experimental observations. It will be used here to help interpret the derived experimental data of Fig. 8(a)–(d).
Starting with Fig. 8(a), several important trends of tumor microvascular response to single-dose irradiation become evident:
• magnitude of inhibition increases with dose levels (~10% drop 2 weeks after 10 Gy, ~70% drop 4 weeks after 30 Gy); the decrease is temporary (thus single dose is not enough to permanently control the tumor), and eventually VVD returns to pre-irradiation levels;
• time-to-return increases with dose (~3.5 weeks for 10 Gy, > 8 weeks for 30 Gy);
• tumor microvasculature response within 1.5 hours after irradiation is more pronounced for higher doses (23% of microvessels exhibited temporary shutdown after 30 Gy, versus 10% after 10 Gy). The majority of these were seen to be microvessels of less than 30μm in diameter;
• the described statistical analysis of variance was performed on the three irradiated and one un-irradiated control group over the course of corresponding temporal trajectories, to check if the four dose cohorts were indeed different from each other. For t > 1.5 weeks, this was definitely so, with P-values in the 0.0001–0.01 range. Immediately following irradiation for up to 1–1.5 weeks, the situation was ambiguous, with P-values in the 0.03–0.15 range (largest P-value for the 10Gy-to-0Gy cohort difference at t < 1 weeks). We thus conclude that the differences in the temporal trajectory of the microvascular response increase with dose, and take ~1–1.5 weeks to manifest unequivocally.
These direct and robust experimental observations of longitudinal microvascular RT response in-vivo yield solid results for de-novo mechanistic model development, and can also serve as empirical foundation/validation for previously-proposed models (e.g., one shown in Fig. 8(e), as discussed below).
Tumor volume response to different doses (Fig. 8(b)) also demonstrates complex dynamics over the monitored time period, its overall shape and dose dependence being somewhat similar to the VVD behavior. The temporal response is overall slower than the microvasculature, in that the maximal tumor shrinkage (minimum tumor volumes) are reached at t ~ 4–5 weeks following dose deposition, independent of dose levels. This sequence of radiation damage events – first microvascular response followed by cellular/tissue shrinkage – makes sense in light of existing radiobiological models mentioned previously5,18. It also suggests that functional imaging approaches, such as svOCT that target earlier-responding microvasculature may indeed be preferable for potential treatment adjustment/personalization compared to ‘conventional’ anatomical tumor-volume-based imaging methods (e.g., x-ray based portal imaging or cone-beam CT52). Analogous to VVD, we note that the maximal tumor shrinkage increases with dose (10 Gy – 20%, 30 Gy – 80%), and the time to initial volume recovery is also dose dependent (10 Gy – 5.5 weeks, 20 Gy – 8 weeks, 30 Gy > 8 weeks (beyond our experimental observation interval). There is also some indication of complex early growth inhibition (without significant shrinkage) for the first 1.5–3 weeks following irradiation followed by a rapidly accelerating rate of tumor volume decrease (nadir at 4–5 weeks), and then recovery. These experimental observations will (1) need to be examined in additional tumor models to test and verify their generalizability and (2) will have to be accounted for in future predictive radiobiological models that can explain such complicated growth dynamics, including the complex interplay between vascular and cellular compartments.
Tumor DsRed fluorescence intensity has been reported to indicate cancer cell viability levels53 and to serve as indirect measure of the proportion of hypoxic cells in the tumor45. Figure 8(c) shows the response curves of this metric for the three doses. Once again, the general shape of the curves is similar to that of VVD and tumor volume, with greater resemblance to the latter; one significant difference is the sharp drop in fluorescence intensity very early following irradiation. Specifically, within one day post-RT, a significant decrease is seen in tumor cell fluorescence intensity (14% drop for 10 Gy, 30% for 20 Gy, and 48% for 30 Gy). There follows a 1–2 week long slight increase, followed by another drop (nadirs at 3 weeks and 85% for 10 Gy; 4 weeks and 60% for 20 Gy; and 5 weeks and 30% for 30 Gy). The subsequent time-to-recovery is also dose-dependent – 4 weeks (10 Gy), 6.5 weeks (20 Gy) and >7.5 + weeks (30 Gy).
Such advanced radiobiological models are indeed starting to appear in the literature. An example is shown in Fig. 8(e), put forth by Kozin et al.18 in 2012, based on the varying (and often conflicting) reports of irradiated tissue studies to date. Despite sub-optimal data for hypothesis generation, these authors were able to propose purported mechanisms of microvascular dynamics following high single dose of radiation, including the resultant effects of tumor volume shrinkage and subsequent regrowth. As seen in Fig. 8(e), the general shape of the theoretically predicted tumor volume curve (with purported vascular mechanisms shown along the abscissa axis) agrees well with the experimental data of our study (particularly VVD and tumor volume metrics of Fig. 8(a) and (b), respectively). In this context, these results can be seen as the direct and successful response to Kozin et al.’s charge to “… the urgent need for tracking vascular changes at the capillary level post-RT using advanced modern technologies” (ref.18). It will be interesting to see how this and related radiobiological models will be adjusted in light of the detailed results presented in this paper.
The exploration of additional microvascular metrics may provide more insights into radiation-induced tumor vascular response and, importantly, prediction of therapeutic outcomes. Among those in OCT angiography research, most promising may be vessel tortuosity (to evaluate the efficiency of blood transport and vascular remodeling), total and average vessel lengths (to measure vessel/capillary pruning), fractal dimension (to quantify the vascular space-filling properties and vascular network complexity), and tissue vascularity (to identify tumor regions that are likely to be hypoxic)25,30,31,32.
Fig. 9 presents Hematoxylin and Eosin (H&E) and TUNEL staining of tumor regions for the control and the three irradiated (10, 20 and 30 Gy) cohorts. Representative images at selected times (t = 2 weeks here) following irradiation are shown. Control staining (Fig. 9(a)) shows that tumor cells are in active proliferation state prior to irradiation (TUNEL), with many small and medium vessels (H&E). Two weeks following 10 Gy (Fig. 9(b)), mainly stromal cells are affected, with 16% of cancer cells undergoing apoptosis (TUNEL). Moderate damage (48%) is seen at 2 weeks post 20 Gy (Fig. 9(c)) to cancer and stromal cells (TUNEL). Finally Fig. 9(d) shows almost complete damage (98%) of stromal and cancer cells in tumor core (TUNEL) at 2 weeks post 30 Gy RT. Similar to the tumor in-vivo dynamics, these histological ex-vivo discrete point snapshots underscore the importance of (1) dose level and (2) time post-RT in furnishing the complex trajectory of tumor radiobiological response.
## Conclusion
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1.
2. Lo, S. S. et al. Stereotactic body radiation therapy: A novel treatment modality. Nat. Rev. Clin. Oncol. 7, 44–54 (2010).
3. Timmerman, R. D., Herman, J. & Cho, L. C. Emergence of stereotactic body radiation therapy and its impact on current and future clinical practice. J. Clin. Oncol. 32, 2847–2854 (2014).
4. Kim, D. W. et al. Noninvasive assessment of tumor vasculature response to radiation-mediated, vasculature-targeted therapy using quantified power Doppler sonography: implications for improvement of therapy schedules. J. Ultrasound. Med. 25, 1507–1517 (2006).
5. Park, H. J., Griffin, R. J., Hui, S., Levitt, S. H. & Song, C. W. Radiation-induced vascular damage in tumors: implications of vascular damage in ablative hypofractionated radiotherapy (SBRT and SRS). Radiat. Res. 177, 311–327 (2012).
6. Song, C. W. et al. Indirect tumor cell death after high-dose hypofractionated irradiation: Implications for stereotactic body radiation therapy and stereotactic radiation surgery. Int. J. Radiat. Oncol. Biol. Phys. 93, 166–172 (2015).
7. Garcia-Barros, M. et al. Tumor response to radiotherapy regulated by endothelial cell apoptosis. Science 300, 1155–1159 (2003).
8. Fuks, A. & Kolesnick, R. Engaging the vascular component of the tumor response. Cancer Cell 8, 89–91 (2005).
9. Garcia-Barros, M. et al. Impact of stromal sensitivity on radiation response of tumors implanted in scid hosts revisited. Cancer Res. 70, 8179–8186 (2010).
10. Kocher, M. et al. Computer simulation of cytotoxic and vascular effects of radiosurgery in solid and necrotic brain metastases. Radiother. Oncol. 54, 149–156 (2000).
11. Hobson, B. & Denekamp, J. Endothelial proliferation in tumours and normal tissues: continuous labelling studies. Br. J. Cancer. 49, 405–413 (1984).
12. Barker, H. E., Paget, J. T. E., Khan, A. A. & Harrington, K. J. The tumour microenvironment after radiotherapy: mechanisms of resistance and recurrence. Nat. Rev. Cancer 15, 409–425 (2015).
13. Stancevic, B. et al. Adenoviral transduction of human acid sphingomyelinase into neo-angiogenic endothelium radiosensitizes tumor cure. PLoS ONE 8, e69025 (2013).
14. Song, C. W. et al. Is there indirect cell death involved in response of tumor to SRS and SBRT? Int. J. Radiat. Oncol. Biol. Phys. 89, 924–925 (2014).
15. Sperduto, P. W., Song, C. W., Kirkpatrick, J. & Glatstein, E. A hypothesis on indirect cell death in the radiosurgery era. Int. J. Radiat. Oncol. Biol. Phys. 91, 11–13 (2015).
16. Brown, J. M., Carlson, D. J. & Brenner, D. J. The tumor radiobiology of SRS and SBRT: Are more than the 5Rs involved? Int. J. Radiat. Oncol. Biol. Phys. 88, 254–262 (2014).
17. Karam, S. D. & Bhatia, S. The radiobiological targets of SBRT: Tumor cells or endothelial cells? Ann. Transl. Med. 19, 290 (2015).
18. Kozin, S. V., Duda, D. G., Munn, L. L. & Jain, R. K. Neovascularization after irradiation: what is the source of newly formed vessels in recurring tumors? J. Natl. Cancer Inst. 104, 809–905 (2012).
19. Popescu, D. P. et al. Optical coherence tomography: fundamental principles, instrumental designs and biomedical applications. Biophys. Rev. 3, 155–169 (2011).
20. Mariampillai, A. et al. Optimized speckle variance OCT imaging of microvasculature. Opt. Lett. 35, 1257–1259 (2010).
21. Mariampillai, A. et al. Speckle variance detection of microvasculature using swept-source optical coherence tomography. Opt. Lett. 33, 1530–1532 (2008).
22. Leung, M. K. K. A platform to monitor tumour cellular and vascular response to radiation therapy by optical coherence tomography and fluorescence microscopy in-vivo. MSc thesis, Medical Biophysics, University of Toronto (2010).
23. Maeda, A. & DaCosta, R. S. Optimization of the dorsal skinfold window chamber model and multi-parametric characterization of tumor-associated vasculature. IntraVital 3, e27935 (2014).
24. Maeda, A. et al. In vivo optical imaging of tumor and microvascular response to ionizing radiation. PLoS ONE 7, e42133, 1–15 (2012).
25. Conroy, L., DaCosta, R. S. & Vitkin, I. A. Quantifying microvasculature with speckle variance OCT. Opt. Lett. 37, 3180–3182 (2012).
26. Pearson, T. et al. Non-obese diabetic-recombination activating gene-1 (NOD-Rag1 null) interleukin (IL)-2 receptor common gamma chain (IL2r gamma null) null mice: A radioresistant model for human lymphohaematopoietic engraftment. Clin. Exp. Immunol. 154, 270–284 (2008).
27. Brunner, T. B., Nestle, U., Grosu, A.-L. & Partridge, M. SBRT in pancreatic cancer: What is the therapeutic window? Radiother. Oncol. 114, 109–116 (2015).
28. de Geus, S. W. L. et al. Stereotactic body radiotherapy for unresected pancreatic cancer: A nationwide review. Cancer (in press), https://doi.org/10.1002/cncr.30856 (2017).
29. Rosati, L. M., Kumar, R. & Herman, J. M. Integration of stereotactic body radiation therapy into the multidisciplinary management of pancreatic cancer. Semin. Radiat. Oncol. 27, 256–267 (2017).
30. Norrby, K. Microvascular density in terms of number and length of microvessel segments per unit tissue volume in mammalian angiogenesis. Microvasc. Res. 55, 43–53 (1998).
31. Tao, Y. K., Kennedy, K. M. & Izatt, J. A. Velocity-resolved 3D retinal microvessel imaging using single-pass flow imaging spectral domain optical coherence tomography. Opt. Express 17, 4177–4188 (2009).
32. Reif, R. et al. Quantifying optical microangiography images obtained from a spectral domain optical coherence tomography system. J. Biomed. Imaging 509783, 1–11 (2012).
33. Suetsugu, A. et al. Non-invasive fluorescent-protein imaging of orthotopic pancreatic-cancer-patient tumorgraft progression in nude mice. Anticancer Res. 32, 3063–3068 (2012).
34. Baron, V. T., Welsh, J., Abedinpour, P. & Borgström, P. Intravital microscopy in the mouse dorsal chamber model for the study of solid tumors. Am. J. Cancer Res. 1, 674–686 (2011).
35. Clarkson, R. et al. Characterization of image quality and image-guidance performance of a preclinical microirradiator. Med. Phys. 38, 845–856 (2011).
36. Mao, Y., Sherif, S., Flueraru, C. & Chang, S. 3 × 3 Mach-Zehnder interferometer with unbalanced differential detection for full-range swept-source optical coherence tomography. Appl. Opt. 47, 2004–2010 (2008).
37. Mao, Y., Flueraru, C., Chang, S., Popescu, D. & Sowa, M. High-quality tissue imaging using a catheter-based swept-source optical coherence tomography system with an integrated semiconductor optical amplifier. IEEE Trans. Instrum. Meas. 60, 3376–3383 (2011).
38. Fitzpatrick, J. M. & Sonka, M. Handbook of Medical Imaging, Volume 2. Medical Image Processing and Analysis. (SPIE Press Book, 2000).
39. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst., Man, Cybern. SMC-9, 62–66 (1979).
40. Folkman, J. Tumor angiogenesis. Adv. Cancer Res. 43, 175–203 (1985).
41. Jain, R. K. Delivery of novel therapeutic agents in tumors: physiological barriers and strategies. J. Nat. Cancer Inst. 81, 570–576 (1989).
42. Vaupel, P., Kallinowski, F. & Okunieff, P. Blood flow, oxygenation and nutrient supply, and metabolic microenvironment of human tumors: a review. Cancer Res. 49, 6449–6465 (1989).
43. Bruberg, K. J., Thuen, M., Ruud, E. B. M. & Rofstad, E. K. Fluctuations in pO2 in irradiated human melanoma xenografts. Radiat. Res. 165, 16–25 (2006).
44. Kioi, M. et al. Inhibition of vasculogenesis, but not angiogenesis, prevents the recurrence of glioblastoma after irradiation in mice. J. Clin. Invest. 120, 694–705 (2010).
45. Maeda, A. et al. In-vivo imaging reveals significant tumor vascular dysfunction and increased tumor hypoxia-inducible factor-1α expression induced by high single-dose irradiation in a pancreatic tumor model. Int. J. Radiat. Oncol. Biol. Phys. 97, 184–194 (2017).
46. Mouthon, M. A., Vereycken-Holler, V., Van der Meeren, A. & Gaugler, M. H. Irradiation increases the interactions of platelets with the endothelium in vivo: analysis by intravital microscopy. Radiat. Res. 160, 593–599 (2003).
47. Wong, H. H., Song, C. W. & Levitt, S. H. Early changes in the functional vasculature of Walker carcinoma 256 following irradiation. Radiology 108, 429–434 (1973).
48. Song, C. W., Sung, J. H., Clement, J. J. & Levitt, S. H. Vascular changes in neuroblastoma of mice following x-irradiation. Cancer Res. 34, 2344–2350 (1974).
49. Ng, Q. S. et al. Acute tumor vascular effects following fractionated radiotherapy in human lung cancer: in vivo whole tumor assessment using volumetric perfusion computed tomography. Int. J. Radiat. Oncol. Biol. Phys. 67, 417–424 (2007).
50. Fenton, B., Lord, E. M. & Paoni, S. F. Effects of radiation on tumor intravascular oxygenation, vascular configuration, development of hypoxia, and clonogenic survival. Radiat. Res. 155, 360–368 (2001).
51. Duffy, J. P., Eibl, G., Reber, H. A. & Hines, O. J. Influence of hypoxia and neoangiogenesis on the growth of pancreatic cancer. Mol. Cancer 2, 1–10 (2003).
52. Khan, F. M. & Gibbons, J. P. Khan's The Physics of Radiation Therapy, 5th Edition. (Wolters Kluwer, 2014).
53. Jhingran, A. et al. Tracing conidial fate and measuring host cell antifungal activity using a reporter of microbial viability in the lung. Cell Reports 2, 1762–1773 (2012).
54. Denekamp, J. Vascular endothelium as the vulnerable element in tumours. Acta Radiol. Oncol. 23, 217–225 (1984).
55. Kumar, V. et al. Radiomics: the process and the challenges. Magn. Reson. Imaging 30, 1234–1248 (2012).
56. Kang, J., Schwartz, R., Flickinger, J. & Beriwal, S. Machine learning approaches for predicting radiation therapy outcomes: A clinician's perspective. Int. J. Radiation. Oncol. Biol. Phys. 93, 1127–1135 (2015).
57. Davoudi, B. et al. Quantitative assessment of oral microstructural and microvascular changes in late oral radiation toxicity, using noninvasive in-vivo optical coherence tomography. Photon. Lasers Med. 5, 21–32 (2015).
58. Maslennikova, A. V. et al. In-vivo patient study of microvascular changes in irradiated oral mucosa using optical coherence tomography. Int. J. Radiat. Oncol. Biol. Phys. (in press).
## Acknowledgements
The OCT system was developed at the National Research Council of Canada with contribution from Drs. Linda Mao, Shoude Chang, Sherif Sherif and Erroll Murdock. The authors thank Dr. Ralph DaCosta of University Health Network for fruitful discussions. This study was supported by the Canadian Institutes of Health Research and the Ministry of Science and Education, Russian Federation (IAV). VD was supported by the Alexander Graham Bell Graduate Scholarship from the Natural Sciences and Engineering Research Council of Canada.
## Author information
### Affiliations
1. #### University of Toronto, Department of Medical Biophysics, Toronto, Canada
• Valentin Demidov
• , Azusa Maeda
• & I. Alex Vitkin
2. #### University Health Network, Princess Margaret Cancer Centre, Toronto, Canada
• Mitsuro Sugita
• & I. Alex Vitkin
4. #### University of Toronto, Department of Chemistry, Toronto, Canada
• Costel Flueraru
• I. Alex Vitkin
### Contributions
I.A.V. and V.D. developed the concept for the study. V.D., A.M. and V.M. designed the experimental procedures. C.F. developed the OCT system, V.D. and M.S. modified it for in-vivo studies. V.D. and partially A.M. with V.M. performed the experiments and collected the data. V.D. and S.S. processed and analyzed the data. V.D., A.M. C.F. and I.A.V. participated in writing the manuscript. All authors approved the final version of the manuscript for submission and are responsible for its content.
### Competing Interests
The authors declare that they have no competing interests.
### Corresponding author
Correspondence to Valentin Demidov. |
### Revision(s):
Revision #2 to TR16-051 | 17th March 2017 06:31
#### Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex
Revision #2
Authors: Ronald Cramer, Chaoping Xing, chen yuan
Accepted on: 17th March 2017 06:31
Keywords:
Abstract:
Reed-Muller codes are among the most important classes of locally correctable codes. Currently local decoding of Reed-Muller codes is based on decoding on lines or quadratic curves to recover one single coordinate. To recover multiple coordinates simultaneously, the naive way is to repeat the local decoding for recovery of a single coordinate. This decoding algorithm might be more expensive, i.e., require higher query complexity.
In this paper, we focus on Reed-Muller codes with the usual parameter regime, namely, the total degree of evaluation polynomials is $d=\Theta({q})$, where $q$ is the code alphabet size (in fact, $d$ can be as big as $q/4$ in our setting).
By introducing a novel variation of codex, i.e., interleaved codex (the concept of codex has been used for arithmetic secret sharing \cite{C11,CCX12}), we are able to locally recover an arbitrarily large number $k$ of coordinates of a Reed-Muller code simultaneously at the cost of querying $O(q^2k)$ coordinates. It turns out that our local decoding of Reed-Muller codes shows ({\it perhaps surprisingly}) that accessing $k$ locations is in fact cheaper than repeating the procedure for accessing a single location $k$ times. Precisely speaking, to get the same success probability from repetition of local decoding for recovery of a single coordinate, one has to query $O(qk^2)$ coordinates. Thus, the query complexity of our local decoding is smaller for $k=\Omega(q)$. In addition, our local decoding is efficient, i.e., the decoding complexity is $\poly(k,q)$. Construction of an interleaved codex is based on concatenation of a codex with a multiplication-friendly pair, while the main tool to realize a codex is based on algebraic function fields (or more precisely, algebraic geometry codes). Our estimation of the success error probability is based on the error probability bound for $t$-wise linearly independent variables given in \cite{BR94}.
Changes to previous version:
Our local decoding algorithm now works for d=O(q). Actually, d can be as large as q/4.
Revision #1 to TR16-051 | 15th March 2017 16:26
#### Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex
Revision #1
Authors: Ronald Cramer, Chaoping Xing, chen yuan
Accepted on: 15th March 2017 16:26
Keywords:
Abstract:
Reed-Muller codes are among the most important classes of locally correctable codes. Currently local decoding of Reed-Muller codes is based on decoding on lines or quadratic curves to recover one single coordinate. To recover multiple coordinates simultaneously, the naive way is to repeat the local decoding for recovery of a single coordinate. This decoding algorithm might be more expensive, i.e., require higher query complexity.
In this paper, we focus on Reed-Muller codes with the usual parameter regime, namely, the total degree of evaluation polynomials is $d=\Theta({q})$, where $q$ is the code alphabet size. By introducing a novel variation of codex, i.e., interleaved codex (the concept of codex has been used for arithmetic secret sharing \cite{C11,CCX12}), we are able to locally recover an arbitrarily large number $k$ of coordinates of a Reed-Muller code simultaneously at the cost of querying $O(q^2k)$ coordinates. It turns out that our local decoding of Reed-Muller codes shows ({\it perhaps surprisingly}) that accessing $k$ locations is in fact cheaper than repeating the procedure for accessing a single location $k$ times. Precisely speaking, to get the same success probability from repetition of local decoding for recovery of a single coordinate, one has to query $O(qk^2)$ coordinates. Thus, the query complexity of our local decoding is smaller for $k=\Omega(q)$. In addition, our local decoding is efficient, i.e., the decoding complexity is $\poly(k,q)$. Construction of an interleaved codex is based on concatenation of a codex with a multiplication-friendly pair, while the main tool to realize a codex is based on algebraic function fields (or more precisely, algebraic geometry codes). Our estimation of the success error probability is based on the error probability bound for $t$-wise linearly independent variables given in \cite{BR94}.
Changes to previous version:
We substantially improve our result. Our local decoding algorithm can be applied to d=O(q) while our previous version only works for d=O(q^{0.5}).
### Paper:
TR16-051 | 7th April 2016 10:47
#### On Multi-Point Local Decoding of Reed-Muller Codes
TR16-051
Authors: Ronald Cramer, Chaoping Xing, chen yuan
Publication: 7th April 2016 11:11
In this paper, we focus on Reed-Muller codes with evaluation polynomials of total degree $d\lesssim\sigma\sqrt{q}$ for some $\sigma\in(0,1)$. By introducing a local decoding of Reed-Muller codes via the concept of codex that has been used for arithmetic secret sharing \cite{C11,CCX12}, we are able to locally recover arbitrarily large number $k$ of coordinates simultaneously at the cost of querying $O(k\sqrt{q})$ coordinates, where $q$ is the code alphabet size. It turns out that our local decoding of Reed-Muller codes shows ({\it perhaps surprisingly}) that accessing $k$ locations is in fact cheaper than repeating the procedure for accessing a single location for $k$ times. In contrast, by repetition of local decoding for recovery of a single coordinate, one has to query $\Omega(k\sqrt{q}\log k/\log q)$ coordinates for $k=q^{\Omega(\sqrt{q})}$ (and query $O(kq)$ coordinates for $k=q^{O(\sqrt{q})}$, respectively). Furthermore, our decoding success probability is $1-\epsilon$ with $\epsilon=O\left(\left(\frac1{\sqrt{q}}\right)^k\right)$. To get the same success probability from repetition of local decoding for recovery of a single coordinate, one has to query $O(k^2\sqrt{q}\log k/\log q)$ coordinates (or $O(k^2q)$ coordinates for $k=q^{O(\sqrt{q})}$, respectively). In addition, our local decoding also works for recovery of one single coordinate as well and it gives a better success probability than the one by repetition of local decoding on curves. The main tool to realize codex is based on algebraic function fields (or more precisely, algebraic geometry codes). Our estimation of success error probability is based on error probability bound for $t$-wise linearly independent variables given in \cite{BR94}.
# Buildings made from cubes
In this challenge, you are provided with a set of $n$ identical blocks and need to determine how many unique buildings can be constructed with them. Buildings must satisfy the following rules:
1. No overhangs - each block must either be on the ground or supported by one or more blocks directly underneath it.
2. All blocks must be aligned to a unit-sized grid.
3. All blocks in a building must be connected to at least one other block by at least one face, and the blocks must form a single connected unit.
4. Buildings are not unique if they can be mapped to another building by reflection or rotation in the X/Y plane.
e.g. These are the same:
1. If a building is rotated between horizontal and vertical, that does result in a different building
e.g. These are different:
A building with two storeys each of two rooms:
A building with one storey containing 4 rooms:
The challenge is to determine how many different house designs are possible using a given number of cubes. Input and output are both a single integer (using any standard method).
Clearly for 1 cube, only 1 design is possible. For 2 cubes, 2 designs are possible (lying down and standing up). For 3 cubes, there are 4 possibilities, and for 4 cubes there are 12 (see images below; please note the colours are just for display to make it easier to see the individual cubes, but don’t have any significance beyond that).
The first 8 terms are:
n | output
1 | 1
2 | 2
3 | 4
4 | 12
5 | 35
6 | 129
7 | 495
8 | 2101
This is a fastest-code challenge. The winning entry is the one that can determine the number of buildings for the highest value of $n$. If more than one answer can calculate the result for the same $n$ within 10 minutes, the one that is fastest for that value wins. This will be tested on an 8th generation Core i7 with 16 GB RAM running Ubuntu 19.10. There must therefore be a freely available interpreter or compiler for any code posted. Default loopholes and IO rules apply.
Cube images generated using usecubes.
• If the cubes are identical, why are you shading them different colors? I don't get how the rotate horizontal then vertical example is different – HiddenBabel Jan 18 at 16:19
• @HiddenBabel one is a building with two storeys each of two rooms, the other is a one storey building with four rooms. The colours are just so that the cubes can be seen as separate cubes but are arbitrary. – Nick Kennedy Jan 18 at 17:41
• I've got a good approximation: $\sqrt{x^x}/2$, alternatively $x^{x/2}/2$ – S.S. Anne Jan 18 at 19:29
• Here's an idea if anyone want to develop it. A building is fully specified by its "base" lying on the ground, which is a connected polyomino, and the number of blocks stacked atop each base block. One could generate every possible base, then count how many ways the remaining cubes can be stacked atop that base using a standard partition formula. This is complicated by needing to consider symmetries for cubes stacked atop symmetric bases, though in the limit, most bases won't have any symmetries. – xnor Jan 18 at 20:39
• @xnor that sounds sensible. Wikipedia has details of some of the algorithms for enumerating free polyominoes, and your suggestion could be used on top of one of those. – Nick Kennedy Jan 19 at 6:56
# JavaScript (Node.js), $$\N=10\$$ in 4m02s1
1: on an Intel Core i7, 7th Gen
This only includes some trivial optimizations and is therefore quite inefficient. It does at least confirm the results listed in the challenge.
function build(n) {
let layer = [],
cube = new Set,
count = 0,
x, y;
for(y = 0; y < n; y++) {
for(layer[y] = [], x = 0; x < n; x++) {
layer[y][x] = 0;
}
}
function fill(k, alignTop) {
let x, y;
if(k == 0) {
if(!cube.has(layer + '')) {
let transf;
count++;
cube.add((transf = rotate(n, layer)) + '');
cube.add((transf = rotate(n, transf)) + '');
cube.add((transf = rotate(n, transf)) + '');
cube.add((transf = mirror(layer)) + '');
cube.add((transf = rotate(n, transf)) + '');
cube.add((transf = rotate(n, transf)) + '');
cube.add((transf = rotate(n, transf)) + '');
}
return;
}
let y0;
for(y0 = 0; !layer[y0].some(v => v); y0++) {}
for(y = Math.max(0, y0 - 1); y < n; y++) {
for(x = 0; x < n; x++) {
if(
!layer[y][x] && (
(y && layer[y - 1][x]) ||
(y < n - 1 && layer[y + 1][x]) ||
(x && layer[y][x - 1]) ||
(x < n - 1 && layer[y][x + 1])
)
) {
for(let i = 1; i <= (alignTop ? k : k - y0 - 1); i++) {
layer[y][x] = i;
fill(k - i, alignTop || !y);
layer[y][x] = 0;
}
}
}
}
}
for(y = 0; y < n; y++) {
for(let i = 1; i <= n - y; i++) {
layer[y][0] = i;
fill(n - i, !y);
layer[y][0] = 0;
}
}
return count;
}
function rotate(n, layer) {
let rot = [],
x, y;
for(y = 0; y < n; y++) {
for(rot[y] = [], x = 0; x < n; x++) {
rot[y][x] = layer[n - x - 1][y];
}
}
return align(rot);
}
function mirror(layer) {
return align([...layer].reverse());
}
function align(layer) {
while(!layer[0].some(v => v)) {
let s = layer.shift();
layer = [...layer, s];
}
while(!layer[0].some((_, y) => layer[y][0])) {
layer = layer.map(r => {
return [...r.slice(1), 0];
});
}
return layer;
}
Try it online! (up to $$\N=8\$$)
### Output
N = 1 --> 1
time: 10.352ms
N = 2 --> 2
time: 0.935ms
N = 3 --> 4
time: 0.877ms
N = 4 --> 12
time: 2.530ms
N = 5 --> 35
time: 9.060ms
N = 6 --> 129
time: 33.333ms
N = 7 --> 495
time: 157.160ms
N = 8 --> 2101
time: 1559.707ms
N = 9 --> 9154
time: 18555.900ms
N = 10 --> 41356
time: 242855.989ms
• There's got to be some formula for this. – S.S. Anne Jan 18 at 18:54
• @S.S.Anne I suspect not, but it would be really interesting if there were. – Nick Kennedy Jan 18 at 20:22
• @S.S.Anne There is no known formula to count free polyominoes. Finding a formula for this 3D version is unlikely to be any easier. – Arnauld Jan 19 at 8:18
# Java 8, $$\n=14\$$ in 2m31s1
1. Using the AdoptOpenJDK8 distribution, on a 2-core Amber Lake Intel Core i5-based Mac. On an Amazon EC2 m5.xlarge, takes 1m16s.
This takes an inductive approach where, for each rank, it builds off all the buildings of the previous rank by placing cubes in all legal positions (on top of and next to existing cubes), and then removing buildings that are (possibly transformed) duplicates of other buildings. This means that to enumerate the buildings in a rank, all previous ranks must also be computed. Both compute time and memory are constrained resources — this algorithm relies on keeping millions of Building objects in memory — so I've had a hard time computing beyond \$n=14\$ on my machine even without the 10 minute time limit.
This solution includes a parallel-Stream-based approach (which can be enabled with the parallel JVM system property) which is faster on a multi-core system but also more memory-hungry. This approach was used for the timing results above. The non-parallel approach takes almost twice as long to count $$\n=14\$$ but is able to do so using only a third of the memory.
Garbage collector settings and tuning can have a significant impact on the runtime and ability of the process to complete. I've also tested with OpenJDK 13, but for whatever reason have seen the best results on 8 so far.
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
public final class Building {
/**
* A flattened two-dimensional matrix of heights (see toIndex).
* Buildings are always assumed to be "aligned", such that they exactly
* fit within their (width, height) bounding box.
*/
private final byte[] stacks;
private final int hashCode;
private final byte width;
private final byte height;
public Building() {
this(new byte[]{1}, 1);
}
private Building(byte[] stacks, int width) {
assert stacks.length % width == 0;
this.stacks = stacks;
this.width = (byte) width;
this.height = (byte) (stacks.length / width);
this.hashCode = 31 * width + Arrays.hashCode(stacks);
}
/**
* Return the building created by adding a cube at the specified coordinates.
* The coordinates must be within the current building bounds or else
* directly adjacent to one of the sides, but this is not validated.
*/
Building add(int x, int y) {
if (x < 0) {
byte[] newStacks = widen(true);
newStacks[y * (width + 1)]++;
return new Building(newStacks, width + 1);
} else if (x < width) {
byte[] newStacks;
if (y < 0) {
newStacks = new byte[stacks.length + width];
System.arraycopy(stacks, 0, newStacks, width, stacks.length);
y = 0;
} else if (y * width < stacks.length) {
newStacks = Arrays.copyOf(stacks, stacks.length);
} else {
newStacks = Arrays.copyOf(stacks, stacks.length + width);
}
newStacks[toIndex(x, y)]++;
return new Building(newStacks, width);
} else {
byte[] newStacks = widen(false);
newStacks[x + y * (width + 1)]++;
return new Building(newStacks, width + 1);
}
}
byte[] widen(boolean shift) {
byte[] newStacks = new byte[stacks.length + height];
int writeIndex = shift ? 1 : 0;
for (int i = 0; i < stacks.length; i++) {
newStacks[writeIndex++] = stacks[i];
if (i % width == width - 1) {
writeIndex++;
}
}
return newStacks;
}
int toIndex(int x, int y) {
return x + y * width;
}
boolean inBounds(int x, int y) {
return x >= 0 && y >= 0 && x < width && y < height;
}
/**
* Return a stream of all legal buildings that can be created by adding a
* cube to this building.
*/
Stream<Building> grow() {
int wider = width + 2;
int max = (height + 2) * wider;
return StreamSupport.stream(new Spliterators.AbstractSpliterator<Building>(max, 0) {
int i = -1;
@Override
public boolean tryAdvance(Consumer<? super Building> action) {
while ((++i) < max) {
// Try adding a cube to every position on the grid,
// as well as adjacent to it
int x = i % wider - 1;
int y = i / wider - 1;
int index = toIndex(x, y);
if (x < 0) {
if (y >= 0 && y < height) {
if (stacks[index + 1] > 0) {
return true;
}
}
} else if (x < width) {
if (y < 0) {
if (stacks[index + width] > 0) {
return true;
}
} else if (y < height) {
// it is on the existing grid
if (stacks[index] > 0) {
return true;
} else {
// is it adjacent to a stack?
for (Direction d : Direction.values()) {
int x2 = x + d.x, y2 = y + d.y;
if (inBounds(x2, y2) && stacks[toIndex(x2, y2)] > 0) {
return true;
}
}
}
} else if (stacks[index - width] > 0) {
return true;
}
} else if (y >= 0 && y < height) {
if (stacks[index - 1] > 0) {
return true;
}
}
}
return false;
}
}, false);
}
Building reflect() {
byte[] newStacks = new byte[stacks.length];
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
newStacks[y + x * height] = stacks[toIndex(x, y)];
}
}
return new Building(newStacks, height);
}
Building rotate() {
byte[] newStacks = new byte[stacks.length];
for (int x = 0; x < width; x++) {
for (int y = 0, x2 = height - 1; y < height; y++, x2--) {
newStacks[x2 + x * height] = stacks[toIndex(x, y)];
}
}
return new Building(newStacks, height);
}
Collection<Building> transformations() {
List<Building> bs = new ArrayList<>(7);
Building b1 = this, b2 = this.reflect();
// collect the reflection plus the three rotations of both the original and the reflection
bs.add(b2);
for (int i = 0; i < 3; i++) {
bs.add(b1 = b1.rotate());
bs.add(b2 = b2.rotate());
}
return bs;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Building building = (Building) o;
return width == building.width &&
Arrays.equals(stacks, building.stacks);
}
@Override
public int hashCode() {
return hashCode;
}
private enum Direction {
N(0, 1), E(1, 0), S(0, -1), W(-1, 0);
final int x;
final int y;
Direction(int x, int y) {
this.x = x;
this.y = y;
}
}
public static int count(int rank) {
long start = System.nanoTime();
Collection<Building> buildings = new HashSet<>();
for (int i = 1; i <= rank; i++) {
if (i == 1) {
buildings = Arrays.asList(new Building());
} else if (Boolean.getBoolean("parallel")) {
// Using parallel streams is generally faster, but requires
// more memory since more Buildings are retained before being discarded.
ConcurrentMap<Building, Integer> map =
new ConcurrentHashMap<>(buildings.size() * 4);
AtomicInteger atomicInt = new AtomicInteger();
buildings.parallelStream()
.flatMap(Building::grow)
.forEach(b -> {
map.putIfAbsent(b, atomicInt.incrementAndGet());
});
// Keep only the buildings that do not have a transformation
// with a lower index
buildings = map.entrySet().parallelStream()
.filter(entry -> {
int index = entry.getValue();
for (Building b2 : entry.getKey().transformations()) {
Integer index2 = map.get(b2);
if (index2 != null && index2 < index) {
return false;
}
}
return true;
})
.map(Map.Entry::getKey)
.collect(Collectors.toList());
} else {
Set<Building> set = new HashSet<>(buildings.size() * 4);
// Only add a building to the set if it doesn't already have a
// transformation in it.
buildings.stream()
.flatMap(Building::grow)
.forEach(b -> {
if (!set.contains(b)) {
for (Building t : b.transformations()) {
if (set.contains(t)) return;
}
// no equivalent building seen yet, so keep this one
set.add(b);
}
});
buildings = set;
}
System.err.println(i + " --> " + buildings.size());
long now = System.nanoTime();
double ms = ((double) now - start) / 1000000;
System.err.println("time: " + (ms < 1000 ? ms + " ms" : ms / 1000 + " s"));
}
return buildings.size();
}
public static void main(String[] args) {
System.out.println(Building.count(Integer.parseInt(args[0])));
}
}
Try it online! (non-parallel, up to n=12)
### Invocation
javac Building.java
java -XX:+UseParallelGC -Xms14g -Xmx14g -Dparallel=true Building 14
ParallelGC is the default on Java 8, but is included in case you are using a later version JDK where G1GC is the default.
### Output
1 --> 1
time: 0.410181 ms
2 --> 2
time: 97.807367 ms
3 --> 4
time: 99.648279 ms
4 --> 12
time: 101.00362 ms
5 --> 35
time: 102.4856 ms
6 --> 129
time: 105.723149 ms
7 --> 495
time: 113.747058 ms
8 --> 2101
time: 130.012756 ms
9 --> 9154
time: 193.924776 ms
10 --> 41356
time: 436.551396 ms
11 --> 189466
time: 991.984875 ms
12 --> 880156
time: 3.899371721 s
13 --> 4120515
time: 18.794214388 s
14 --> 19425037
time: 151.782854829 s
19425037
(For reference, $$\15 \rightarrow 92{,}038{,}062\$$)
# Haskell, $$\n=16\$$ in about 9 minutes
As observed by @xnor in the comments, we can break the problem down into two parts: generate polyominoes (where I reused a lot from here), then count the ways to distribute the remaining cubes.
The symmetries are accounted for by using Burnside's lemma. So we need to know how many buildings of a given symmetric shape are fixed by a symmetry. Consider for example a shape has one mirror symmetry where the axis of reflection goes through $$\s\$$ squares of the shape and the reflection identifies $$\d\$$ pairs of further squares of the shape (so its size is $$\s+2d\$$). Then the buildings of this shape with $$\r\$$ additional cubes that have this symmetry correspond to the solutions of $$\x_1+\dots+x_s+2y_1+\dots+2y_d=r\$$ with nonnegative integers. The number of solutions is added to the total number of possibly equivalent buildings, and the sum divided by two. Note that a rotational symmetry always fixes zero or one square of a shape.
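As a small sanity check (a worked example of my own, not part of the original write-up): take the \$1\times 2\$ domino base with \$n=3\$, so \$r=1\$ cube remains to be stacked. The symmetry group of the domino has four elements. The identity fixes \$\binom{2+1-1}{1}=2\$ stackings; the half turn would need \$2y_1=1\$ and fixes none; the mirror along the long axis passes through both squares (\$s=2, d=0\$) and fixes the \$2\$ solutions of \$x_1+x_2=1\$; the mirror across the short axis (\$s=0, d=1\$) again fixes none. Burnside gives \$(2+0+2+0)/4=1\$ building on this base, namely the single "domino with one cube on top" among the four buildings for \$n=3\$.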
Compile the code with something like ghc -O2 -o buildings buildings.hs. The executable takes one optional parameter. If it is given, it will compute the number of buildings with that many cubes. Otherwise, it will compute all values.
{-# LANGUAGE BangPatterns #-}
import Data.List (sort)
import qualified Data.Set as S
import System.Environment (getArgs)
data P = P !Int !Int deriving (Eq, Ord)
start :: P
start = P 0 0
neighs :: P -> [P]
neighs (P x y) = [ p | p <- [P (x+1) y, P (x-1) y, P x (y+1), P x (y-1)],
p > start ]
count :: Int -> Int -> S.Set P -> S.Set P -> [P] -> Int
count 0 c _ _ _ = c
count _ c _ _ [] = c
count n c used seen (p:possible) =
let !c' = count n c used seen possible
!n' = n-1
next = S.insert p used
!sz = S.size next
!c'' = c' + combinations n' sz (S.toAscList next)
new = [ n | n <- neighs p, n S.notMember seen ]
in count n' c'' next (foldr S.insert seen new) (new++possible)
class Geom g where
translate :: Int -> Int -> g -> g
rot :: g -> g
mirror :: g -> g
instance Geom P where
translate !dx !dy (P x y) = P (x+dx) (y+dy)
rot (P x y) = P (-y) x
mirror (P x y) = P (-x) y
instance (Geom g, Ord g) => Geom [g] where
translate !dx !dy = map $ translate dx dy
rot = sort . map rot
mirror = sort . map mirror
normalize :: [P] -> [P]
normalize fig = let (P x y) = head fig
in translate (-x) (-y) fig
-- fixed points of horizontal mirror symmetry
myf :: Int -> Int -> [P] -> Int
myf r sz fig = let w = (maximum [ x | P x _ <- fig ])
wh = w `div` 2
myb = sum [ 1 | P x _ <- fig, x == wh ]
in if even w -- odd width!
then c12 myb ((sz-myb) `div` 2) r
else c1h sz r
-- fixed points of diagonal mirror symmetry
mdf :: Int -> Int -> [P] -> Int
mdf r sz fig = let lo = minimum [ y | P _ y <- fig ]
mdb = sum [ 1 | P x y <- fig, x == y-lo ]
in c12 mdb ((sz-mdb) `div` 2) r
combinations :: Int -> Int -> [P] -> Int
combinations r sz fig =
let rotated = take 4 $ iterate (normalize . rot) fig
rfig = rotated !! 1
mirrored = map (normalize . mirror) rotated
alts = tail rotated ++ mirrored
cmps = map (compare fig) alts
-- All fixed points computations assume that the symmetry exists.
-- fixed points of quarter turn:
qtfc = if even sz then c1q sz r else sc1x 4 sz r
-- fixed points of half turn:
htfc = if even sz then c1h sz r else sc1x 2 sz r
-- fixed points of reflections:
mirror_fc = [ fun r sz f |
f <- [ fig, rfig ],
fun <- [ myf, mdf ] ]
-- all possibilities, i.e. fixed points of identity:
idfc = c1 sz r
fsc = [ qtfc, htfc, qtfc] ++ mirror_fc
-- fixed points of symmetries that really exist:
allfc = idfc : [ fc | (fc,EQ) <- zip fsc cmps ]
-- group size of symmetry group:
gs = length allfc
res = if r==0 then 1 else sum allfc `div` gs
in if any (GT ==) cmps
-- only count if we have the smallest representative
then 0 else res
-- Number of ways to express t as sum of n nonnegative integers.
-- binomial(n+t-1, n-1)
c1 n t = foldl (\v x -> v * (n+x-1) `div` x) 1 [1..t]
-- Number of ways to express t as twice the sum of n/2 nn-integers
c1h n t | even t = c1 (n `div` 2) (t `div` 2)
| otherwise = 0
-- Number of ways to express t as four times the sum of n/4 nn-integers.
c1q n t | t `mod` 4 == 0 = c1 (n `div` 4) (t `div` 4)
| otherwise = 0
-- Number of ways to express t as an nn-integer plus m times the sum
-- of n/m nn-integers
sc1x m n t = c1 (1 + n `div` m) (t `div` m)
-- Number of ways to express t as the sum of s nn-integers
-- plus twice the sum of d nn-integers
c12 s d t = sum [ c1 s (t-2*t2) * c1 d t2 | t2 <- [ 0 .. t `div` 2 ] ]
count_buildings :: Int -> Int
count_buildings n = count n 0 S.empty S.empty [start]
output :: Int -> IO ()
output n = putStrLn $ show n ++ ": " ++ show (count_buildings n)
main = do args <- getArgs
case args of
[] -> mapM_ output [1..]
[n] -> output (read n)
Try it online!
### Results
15: 92038062
16: 438030079
17: 2092403558
18: 10027947217 (2 1/2 h)
19: 48198234188 (10 h)
20: 232261124908 (40 h)
• Okay, I didn't know that we were allowed to use magic. :) As far as I can tell, this solution is able to enumerate polyomino bases with fairly modest memory requirements (using roughly O(n²) memory (I think) where n is the number of cubes), and can count the number of buildings based on those polyominos with O(1) memory, so the program only requires a few megabytes of memory and is only limited by compute time. The math involved is still a bit over my head though. – Miles Feb 25 at 4:07
• "Counting polyominoes: yet another attack" by Redelmeier is helpful for understanding the definition of count – Miles Feb 25 at 8:03
• @Miles I really enjoy throwing math at problems. Not allowing that, or not allowing magic, is among the things to avoid in challenges. – Christian Sievers Feb 26 at 16:17
• "Allowed" was supposed to be tongue-in-cheek; it's a very impressive solution. – Miles Mar 1 at 5:31
• @Miles That answer also tried to be tongue-in-cheek ;-) – Christian Sievers Mar 1 at 22:04
# Java 11, $$\n=17\$$ in about 8.5 minutes
### Based on Haskell solution by Christian Sievers – upvote his!
This answer is the result of learning enough Haskell to be able to understand Christian's answer, translating it into Java, applying numerous micro-optimizations, and throwing multiple cores at it. The exact runtime varies significantly depending on the number of cores available; this timing result is from my own two-core machine. A 48-core EC2 c5.24xlarge is able to compute $$\n=17\$$ in 16 seconds, and $$\n=20\$$ in 18 minutes.
Parallelism can be disabled by adding the JVM argument -Djava.util.concurrent.ForkJoinPool.common.parallelism=0. Single-threaded performance is slightly better than double that of the Haskell solution.
Some of the optimizations include:
• Representing a point using a single int value
• Using simplified hand-rolled collections based on int arrays, avoiding the primitive boxing required for the standard Java collections
• Reimplementing polyomino enumeration based on this paper -- my initial attempt at a direct translation of the Haskell code performed extra throwaway work that didn't actually contribute to the computation
• Replacing higher-level Stream-based implementations with inlined code, making it very ugly and verbose
The bulk of the processing time is spent in Arrays.sort calls in normalizeInPlace. Finding a way to compare polyomino transformations without sorting could easily result in a further 4x speedup. The forking is also not done very intelligently, which leads to unbalanced tasks and unused cores at higher levels of parallelism.
import java.util.Arrays;
import java.util.concurrent.RecursiveTask;
import java.util.function.IntPredicate;
import java.util.function.IntUnaryOperator;
import java.util.function.LongSupplier;
import java.util.function.ToLongFunction;
/**
* Utility methods for working with an int that represents a pair of short values.
*/
class Point {
static final int start = p(0, 0);
static final int[] neighbors = new int[] {-0x10000, -0x1, 0x1, 0x10000};
static int x(int p) {
return (p >> 16) - 0x4000;
}
static int y(int p) {
return (short)(p) - 0x4000;
}
static int p(int x, int y) {
return ((x + 0x4000) << 16) | (y + 0x4000);
}
static int rot(int p) {
return p(-y(p), x(p));
}
static int mirror(int p) {
return p(-x(p), y(p));
}
}
/**
* Minimal primitive array-based collections.
*/
class IntArrays {
/** Concatenates the end of the first array with the beginning of the second. */
static int[] arrayConcat(int[] a, int aOffset, int[] b, int bLen) {
int aLength = a.length - aOffset;
int[] result = new int[aLength + bLen];
System.arraycopy(a, aOffset, result, 0, aLength);
System.arraycopy(b, 0, result, aLength, bLen);
return result;
}
/** Adds a new value to a sorted set, returning the new result */
static int[] setAdd(int[] set, int val) {
int[] dst = new int[set.length + 1];
int i = 0;
for (; i < set.length && set[i] < val; i++) {
dst[i] = set[i];
}
dst[i] = val;
for (; i < set.length; i++) {
dst[i + 1] = set[i];
}
return dst;
}
private static final int[] primes = new int[] {
5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131
};
/**
* Allocate an array large enough to hold a fixed-capacity hash table
* that can contain "seen" points for generating polyominos of size n.
*/
static int[] makeHashTable(int n) {
return new int[primes[-(Arrays.binarySearch(primes, n * 3) + 1)]];
}
/** Inserts a new value to a hash table, in-place */
static void hashInsert(int[] table, int val) {
int pos = (val * 137) % table.length, startPos = pos;
if (table[pos] != 0) {
while ((table[pos = (pos + 1) % table.length]) != 0) {
if (pos == startPos) {
throw new AssertionError("table full");
}
}
}
table[pos] = val;
}
/** Checks whether a hash table contains the specified value */
static boolean hashContains(int[] table, int val) {
int pos = (val * 137) % table.length, startPos = pos;
while (true) {
int curr = table[pos];
if (curr == val) return true;
if (curr == 0) return false;
pos = (pos + 1) % table.length;
if (pos == startPos) {
throw new AssertionError("table full");
}
}
}
}
/**
* Recursively generates int arrays representing collections of Points,
* applying a function to each array to compute a long, and returns the sum
* of all such values.
*/
class PolyominoVisitor extends RecursiveTask<Long> {
PolyominoVisitor(ToLongFunction<? super int[]> func, int n) {
this(func, n, 0, 1, new int[0], IntArrays.makeHashTable(n), new int[]{Point.start});
}
private PolyominoVisitor(ToLongFunction<? super int[]> action, int n,
int i, int limit, int[] used, int[] seen, int[] untried) {
this.func = action;
this.n = n;
this.start = () -> visit(i, limit, used, seen, untried);
}
private final boolean visitSmaller = true;
private final ToLongFunction<? super int[]> func;
private final int n;
private final LongSupplier start;
@Override
protected Long compute() {
return start.getAsLong();
}
private long visit(int i, int limit, int[] used, int[] seen, int[] untried) {
long val = 0;
if (used.length + 1 == n) {
// reached the second to last level, so we can apply the function
// directly to our children
for (; i < limit; i++) {
val += func.applyAsLong(IntArrays.setAdd(used, untried[i]));
}
} else if (used.length + 6 < n && limit - i >= 2) {
// eligible to split
PolyominoVisitor[] tasks = new PolyominoVisitor[limit - i];
for (int j = 0; j < tasks.length; j++) {
tasks[j] = new PolyominoVisitor(func, n,
i + j, i + j + 1, used, seen, untried);
}
// run the sub-tasks and accumulate their results
invokeAll(tasks);
for (PolyominoVisitor task : tasks) {
val += task.join();
}
return val;
} else {
// recursively visit children
int[] newReachable = new int[4];
IntPredicate inSeen = p -> !IntArrays.hashContains(seen, p);
for (; i < limit; i++) {
int candidate = untried[i];
int[] child = IntArrays.setAdd(used, candidate);
int reachableCount = neighbors(candidate, inSeen, newReachable);
int[] newSeen = seen.clone();
for (int j = 0; j < reachableCount; j++) IntArrays.hashInsert(newSeen, newReachable[j]);
int[] newUntried = IntArrays.arrayConcat(untried, i + 1, newReachable, reachableCount);
val += visit(0, newUntried.length, child, newSeen, newUntried);
}
}
if (visitSmaller && used.length > 0 && limit == untried.length) {
val += func.applyAsLong(used);
}
return val;
}
/**
* Write the greater-than-origin neighbors of the given point
* that pass the provided predicate into the provided array,
* returning the count written.
*/
private static int neighbors(int p, IntPredicate pred, int[] dst) {
int count = 0;
for (int offset : Point.neighbors) {
int n = p + offset;
if (n > Point.start && pred.test(n)) {
dst[count++] = n;
}
}
return count;
}
}
/**
* Function that computes how many buildings are constructable on a given
* polyomino base. Considers symmetry, returning 0 if the figure is not the
* canonical version (i.e. has a smaller transformation).
*
* Adapted largely unchanged from Christian Sievers
* https://codegolf.stackexchange.com/a/199919
*/
class BuildingCounter implements ToLongFunction<int[]> {
private final int n;
public BuildingCounter(int n) {
this.n = n;
}
@Override
public long applyAsLong(int[] fig) {
return combinations(n - fig.length, fig);
}
private static int[] map(int[] fig, IntUnaryOperator func) {
int[] result = new int[fig.length];
for (int i = 0; i < fig.length; i++) {
result[i] = func.applyAsInt(fig[i]);
}
return result;
}
private static int[] normalizeInPlace(int[] fig) {
Arrays.sort(fig);
int d = fig[0] - Point.start;
for (int i = 0; i < fig.length; i++) {
fig[i] -= d;
}
return fig;
}
private static int[] rot(int[] ps) {
return normalizeInPlace(map(ps, Point::rot));
}
private static int[] mirror(int[] ps) {
return normalizeInPlace(map(ps, Point::mirror));
}
private static int myf(int r, int sz, int[] fig) {
int max = Integer.MIN_VALUE;
for (int p : fig) {
if (p > max) max = p;
}
int w = Point.x(max);
if (w % 2 == 0) {
int wh = w / 2;
int myb = 0;
for (int p : fig) {
if (Point.x(p) == wh) myb++;
}
return c12(myb, (sz - myb)/2, r);
} else {
return c1h(sz, r);
}
}
private static int mdf(int r, int sz, int[] fig) {
int lo = Integer.MAX_VALUE;
for (int p : fig) {
int tmp = Point.y(p);
if (tmp < lo) lo = tmp;
}
int mdb = 0;
for (int p : fig) {
if (Point.x(p) == Point.y(p) - lo) mdb++;
}
return c12(mdb, (sz-mdb)/2, r);
}
private static long combinations(int r, int[] fig) {
int[][] alts = new int[7][];
alts[0] = rot(fig);
alts[1] = rot(alts[0]);
alts[2] = rot(alts[1]);
alts[3] = mirror(fig);
alts[4] = mirror(alts[0]);
alts[5] = mirror(alts[1]);
alts[6] = mirror(alts[2]);
int[] rfig = alts[0];
int[] cmps = new int[7];
for (int i = 0; i < 7; i++) {
if ((cmps[i] = Arrays.compare(fig, alts[i])) > 0) {
return 0;
}
}
if (r == 0) {
return 1;
}
int sz = fig.length;
int qtfc = (sz % 2 == 0) ? c1q(sz, r) : sc1x(4, sz, r);
int htfc = (sz % 2 == 0) ? c1h(sz, r) : sc1x(2, sz, r);
int idfc = c1(sz, r);
int[] fsc = new int[] {qtfc, htfc, qtfc,
myf(r, sz, fig), mdf(r, sz, fig),
myf(r, sz, rfig), mdf(r, sz, rfig)};
int gs = 1;
int allfc = idfc;
for (int i = 0; i < fsc.length; i++) {
if (cmps[i] == 0) {
allfc += fsc[i];
gs++;
}
}
return allfc / gs;
}
private static int c1(int n, int t) {
int v = 1;
for (int x = 1; x <= t; x++) {
v = v * (n+x-1) / x;
}
return v;
}
private static int c1h(int n, int t) {
return c1d(n, t, 2);
}
private static int c1q(int n, int t) {
return c1d(n, t, 4);
}
private static int c1d(int n, int t, int q) {
if (t % q == 0) {
return c1(n / q, t / q);
} else {
return 0;
}
}
private static int sc1x(int m, int n, int t) {
return c1(1 + n / m, t / m);
}
private static int c12(int s, int d, int t) {
int sum = 0;
for (int i = t/2; i >= 0; i--) {
sum += c1(s, t-2*i) * c1(d, i);
}
return sum;
}
}
public class Main {
public static long count(int n) {
return new PolyominoVisitor(new BuildingCounter(n), n).compute();
}
public static void main(String[] args) {
if (args.length > 0) {
System.out.println(args[0] + ": " + count(Integer.parseInt(args[0])));
} else {
for (int i = 1; i <= 99; i++) {
System.out.println(i + ": " + count(i));
}
}
}
}
### Invocation
javac Main.java
java Main 17
Try it online!
### Results
(when run without an argument)
...
16: 438030079
17: 2092403558
18: 10027947217
19: 48198234188
20: 232261124908
21: 1121853426115 |
# Tangent graph: y = tan x
##### Examples
###### Lessons
1. Sketch the basic trigonometric functions:
1. $y = \tan x$ |
## truncated Normal moments
An interesting if presumably hopeless question spotted on X validated: a lower-truncated Normal distribution is parameterised by its location, scale, and truncation values, μ, σ, and α. There exist formulas to derive the mean and variance of the resulting distribution, that is, when α=0,
$\Bbb{E}_{\mu,\sigma}[X]= \mu + \frac{\varphi(\mu/\sigma)}{1-\Phi(-\mu/\sigma)}\sigma$
and
$\text{var}_{\mu,\sigma}(X)=\sigma^2\left[1-\frac{\mu\varphi(\mu/\sigma)/\sigma}{1-\Phi(-\mu/\sigma)} -\left(\frac{\varphi(\mu/\sigma)}{1-\Phi(-\mu/\sigma)}\right)^2\right]$
but there is no easy way to choose (μ, σ) from these two quantities, beyond numerical resolution of both equations. One of the issues is that (μ, σ) is not a location-scale parameter for the truncated Normal distribution when α is fixed.
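For what it is worth, the numerical resolution is essentially a one-dimensional problem: writing r = μ/σ and δ(r) = φ(r)/Φ(r), the two formulas above give E = σ(r + δ) and var = σ²(1 − rδ − δ²), so var/E² depends on r alone and σ then follows from the mean. A minimal sketch of this (my own illustration, not from the post; it assumes the target pair is attainable, i.e. var/E² ∈ (0,1), and that the ratio is monotone on the bracketed range):

#include <cmath>
#include <cstdio>

const double SQRT2PI = 2.5066282746310002;

double phi(double x) { return std::exp(-0.5 * x * x) / SQRT2PI; }     // standard normal pdf
double Phi(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); } // standard normal cdf

// var/mean^2 of N(mu, sigma^2) truncated to (0, inf), as a function of r = mu/sigma only
double cv2(double r) {
    double d = phi(r) / Phi(r);
    return (1.0 - r * d - d * d) / ((r + d) * (r + d));
}

int main() {
    double m = 1.0, v = 0.25;        // target mean and variance of the truncated variable
    double target = v / (m * m);     // needs to lie in (0, 1)

    double lo = -8.0, hi = 8.0;      // cv2 decreases from about 1 to about 0 over this range
    for (int i = 0; i < 200; ++i) {  // plain bisection on r
        double mid = 0.5 * (lo + hi);
        if (cv2(mid) > target) lo = mid; else hi = mid;
    }
    double r = 0.5 * (lo + hi);
    double sigma = m / (r + phi(r) / Phi(r));  // from E[X] = sigma * (r + phi(r)/Phi(r))
    double mu = r * sigma;
    std::printf("mu = %.6f  sigma = %.6f\n", mu, sigma);
}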
# quadratic forms over fields of characteristic 2
I was wondering if anyone knows any good sources for the theory of quadratic forms over fields of characteristic 2 which are written in English?
• What have you done so far? Typing the title into google gives a lot of results. So what? Mar 20 '12 at 13:02
• Indeed. For instance, you could look at this: jstor.org/stable/2372942 Mar 20 '12 at 13:04
• While indeed you could have put more work into this question, I feel obligated to mention that maybe the biggest tool in the study of quadratic forms over a field of char $\ne 2$ is the bijection with certain bilinear forms. Over a field of char $=2$ there can be many quadratic forms to a single bilinear form. Therefore, one surprising place to start if you know some alg. geom is the theory of theta characteristics, where you fix the Weil pairing on an abelian variety as your bilinear form. Mar 20 '12 at 13:53
• I added the obvious tag "quadratic-forms". Mar 21 '12 at 9:45
This book by Manfred Knebusch starts with the limerick
$$\begin{array}{l}\text{A Mathematician Said Who}\cr\text{Can Quote Me a Theorem that’s True?}\cr\text{For the ones that I Know}\cr\text{Are Simply not So,}\cr\text{When the Characteristic is Two!}\end{array}$$
It gives a uniform treatment of quadratic forms in all characteristics including two.
• Sorry for the strange formatting, I don't know how to get around that... Mar 21 '12 at 0:12
• Like this. :) Mar 21 '12 at 0:48
• (two asterisks on each side of the word) Mar 21 '12 at 0:49
• Irving Kaplansky liked to quote Marshall Hall: "The trouble with two is not that it's so small. It's that it's so even." Mar 21 '12 at 0:52
• The link is now maths.ucd.ie/~tpunger/papers/book.pdf Feb 10 '18 at 19:05
The Algebraic and Geometric Theory of Quadratic Forms by Elman, Karpenko and Merkurjev is a standard recent reference for the theory of quadratic forms, paying special attention to the differences between the theory of bilinear forms and the theory of quadratic forms in characteristic 2.
http://en.wikipedia.org/wiki/Arf_invariant would be a start. |
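To make the point from the comments above concrete (a small worked example of my own, not taken from these references): in characteristic 2 the polar form $b_q(x,y)=q(x+y)-q(x)-q(y)$ of a quadratic form $q$ satisfies $b_q(x,x)=q(2x)-2q(x)=4q(x)-2q(x)=2q(x)=0$, so it is always alternating and $q$ cannot be recovered from it. Over $\mathbb{F}_2$ in one variable, for instance, $q(x)=x^2$ and $q(x)=0$ are distinct quadratic forms with the same (zero) associated bilinear form, which is exactly the many-to-one collapse that the usual bijection in characteristic $\neq 2$ avoids.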
# How computer multiply two numbers.?
Suppose I have to perform the multiplication 100*100. How many multiplications are actually performed inside the computer to do this? I searched on Stack Overflow, but I didn't understand the logic there: http://stackoverflow.com/questions/3060064/how-computer-multiplies-2-numbers (asked 13 Aug '15, 14:59)
Hi va1ts7_100, computers use Booth's algorithm for multiplication. The process is slightly different from how we do normal multiplication; there is also shifting of bits. There are many videos on Booth's algorithm on YouTube and it is easily understandable. Booth's multiplication algorithm wiki link (answered 14 Aug '15, 17:08); thanks @deepakmourya (14 Aug '15, 18:20)
I guess, from taking a computer organisation course, that it is done in a 32-bit or 64-bit register (depending on your processor): the two numbers are first converted into n binary digits and then summed up to n times. (answered 15 Aug '15, 14:25 by genes123)
For more detail, you can look at the CO201 Computer Architecture and Organization course (generally taught in the 3rd semester of a B.Tech at the NITs). For multiplication there are two major algorithms:
• Robertson's Multiplication Algorithm
• Booth's Multiplication Algorithm
These algorithms are really simple; have a look and you will understand them very quickly. It is quite straightforward. There are two major operations (here we do multiplication at the binary level, e.g. 101*011):
compare: if we multiply bits 1 and 1 the result is 1; if we multiply 0 and 1, 1 and 0, or 0 and 0 the result is 0;
shift: where we use the cross mark 'x' for shifting in the 3rd-grade multiplication method, here we use bit shifting.
0101
x011
-----
0101
01010
000000
------
001111
Now you see how simple it is: it is just the same as we do in decimal, only the computer does it in binary. Go through the course for more detail. Happy coding. (answered 15 Aug '15, 15:57)
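To illustrate the shift-and-add idea from the answer above in code (this is only a software sketch of the principle; real ALUs use Booth recoding and hardware adder trees rather than a loop), a minimal C++ version might look like this:

#include <cstdint>
#include <iostream>

// For every set bit of the multiplier, add a correspondingly shifted copy of the
// multiplicand -- the binary version of the long multiplication shown above.
uint64_t multiply(uint32_t a, uint32_t b) {
    uint64_t result = 0;
    uint64_t partial = a;
    while (b != 0) {
        if (b & 1)          // current multiplier bit is 1: add this partial product
            result += partial;
        partial <<= 1;      // next partial product is shifted one place to the left
        b >>= 1;            // move to the next multiplier bit
    }
    return result;
}

int main() {
    std::cout << multiply(100, 100) << "\n"; // 10000
    std::cout << multiply(5, 3) << "\n";     // 15, i.e. the 0101 x 011 example above
}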
# I/O and file systems
Warning
The ARCHER2 Service is not yet available. This documentation is in development.
## Using the ARCHER2 file systems
Different file systems are configured for different purposes and performance. ARCHER2 has three file systems available to users:
| Node type | Available file systems |
| --- | --- |
| Login | /home, /work |
| Compute | /work, SSS |
| PP | |
Warning
Any data used in a parallel jobs should be located on /work (Lustre) or the solid state storage.
### Home
Warning
This file system is backed up for disaster recovery purposes only. Data recovery for accidental deletion is not supported.
Home directories provide a convenient means for a user to have access to files such as source files, input files or configuration files. This file system is only mounted on the login nodes. The home directory for each user is located at:
/home/[project code]/[group code]/[username]
where
[project code] is the code for your project (e.g., x01);
[group code] is the code for your project group, if your project has groups, (e.g. x01-a) or the same as the project code, if not;
[username] is your login name.
Each project is allocated a portion of the total storage available, and the project PI will be able to sub-divide this quota among the groups and users within the project. As is standard practice on UNIX and Linux systems, the environment variable $HOME is automatically set to point to your home directory. It should be noted that the home file system is not designed, and does not have the capacity, to act as a long term archive for large sets of results.

### Work

Warning

There is no separate backup of data on any of the work file systems, which means that in the event of a major hardware failure, or if a user accidentally deletes essential data, it will not be possible to recover the lost files.

High-performance Lustre file system mounted on the compute nodes. All parallel calculations must be run from directories on the /work file system and all files required by the calculation (apart from the executable) must reside on /work. Each project will be assigned space on a particular Lustre partition with the assignments chosen to balance the load across the available infrastructure.

The work directory for each user is located at:

/work/[project code]/[group code]/[username]

where

[project code] is the code for your project (e.g., x01);

[group code] is the code for your project group, if your project has groups, (e.g. x01-a) or the same as the project code, if not;

[username] is your login name.

Links from the /home file system to directories or files on /work are strongly discouraged. If links are used, executables and data files on /work to be used by applications on the compute nodes (i.e. those executed via the aprun command) should be referenced directly on /work.

### Solid State Storage (SSS)

Warning

This section is in development and it will be completed as soon as possible.

The 1.1 PiB ARCHER2 solid state file system significantly increases the I/O performance for all file sizes and access patterns.

## Disk quotas

## Sharing data with other ARCHER2 users

How you share data with other ARCHER2 users depends on whether they belong to the same project as you or not. Each project has two levels of shared directories that can be used for sharing data.

### Sharing data with users in your project

Each project has a directory called:

/work/[project code]/[project code]/shared

that has read/write permissions for all project members. You can place any data you wish to share with other project members in this directory. For example, if your project code is x01 the shared project directory would be located at:

/work/x01/x01/shared

### Sharing data with all users

Each project also has a higher level directory called:

/work/[project code]/shared

that is writable by all project members and readable by any user on the system. You can place any data you wish to share with other ARCHER2 users who are not members of your project in this directory. For example, if your project code is x01 the sharing directory would be located at:

/work/x01/shared

## Common I/O patterns

There are a number of I/O patterns that are frequently used in applications:

### Single file, single writer (Serial I/O)

A common approach is to funnel all the I/O through a single master process. Although this has the advantage of producing a single file, the fact that only a single client is doing all the I/O means that it gains little benefit from the parallel file system.

### File-per-process (FPP)

One of the first parallel strategies people use for I/O is for each parallel process to write to its own file. This is a simple scheme to implement and understand but has the disadvantage that, at the end of the calculation, the data is spread across many different files and may therefore be difficult to use for further analysis without a data reconstruction stage.

### Single file, multiple writers without collective operations

There are a number of ways to achieve this. For example, many processes can open the same file but access different parts by skipping some initial offset; parallel I/O libraries such as MPI-IO, HDF5 and NetCDF also enable this. Shared-file I/O has the advantage that all the data is organised correctly in a single file, making analysis or restart more straightforward. The problem is that, with many clients all accessing the same file, there can be a lot of contention for file system resources.

### Single Shared File with collective writes (SSF)

The problem with having many clients performing I/O at the same time is that, to prevent them clashing with each other, the I/O library may have to take a conservative approach. For example, a file may be locked while each client is accessing it, which means that I/O is effectively serialised and performance may be poor. However, if I/O is done collectively, where the library knows that all clients are doing I/O at the same time, then reads and writes can be explicitly coordinated to avoid clashes. It is only through collective I/O that the full bandwidth of the file system can be realised while accessing a single file.

## Achieving efficient I/O

This section provides information on getting the best performance out of the parallel /work file systems on ARCHER2 when writing data, particularly using parallel I/O patterns.

### Lustre

The ARCHER2 /work file systems use Lustre as a parallel file system technology. The Lustre file system provides POSIX semantics (changes on one node are immediately visible on other nodes) and can support very high data rates for appropriate I/O patterns.

### Striping

One of the main factors leading to the high performance of Lustre file systems is the ability to stripe data across multiple Object Storage Targets (OSTs) in a round-robin fashion. Files are striped when the data is split up in chunks that will then be stored on different OSTs across the Lustre system. Striping might improve the I/O performance because it increases the available bandwidth: multiple processes can read and write the same files simultaneously. However, striping can also increase the overhead. Choosing the right striping configuration is key to obtaining high performance.

Users have control of a number of striping settings on Lustre file systems. Although these parameters can be set on a per-file basis, they are usually set on the directory where your output files will be written so that all output files inherit the settings.

#### Default configuration

The /work file systems on ARCHER2 have the same default stripe settings:

• A default stripe count of 1
• A default stripe size of 1 MiB (1048576 bytes)

These settings have been chosen to provide a good compromise for the wide variety of I/O patterns that are seen on the system but are unlikely to be optimal for any one particular scenario.

The Lustre command to query the stripe settings for a directory (or file) is lfs getstripe. For example, to query the stripe settings of an already created directory res_dir:

[user@archer2]$ lfs getstripe res_dir/
res_dir
stripe_count: 1 stripe_size: 1048576 stripe_offset: -1
#### Setting Custom Striping Configurations
Users can set stripe settings for a directory (or file) using the lfs setstripe command. The options for lfs setstripe are:
• [--stripe-count|-c] to set the stripe count; 0 means use the system default (usually 1) and -1 means stripe over all available OSTs.
• [--stripe-size|-s] to set the stripe size; 0 means use the system default (usually 1 MB) otherwise use k, m or g for KB, MB or GB respectively
• [--stripe-index|-i] to set the OST index (starting at 0) on which to start striping for this file. An index of -1 allows the MDS to choose the starting index and it is strongly recommended, as this allows space and load balancing to be done by the MDS as needed.
For example, to set a stripe size of 4 MiB for the existing directory res_dir, along with maximum striping count you would use:
[user@archer2]\$ lfs setstripe -s 4m -c -1 res_dir/
## I/O Profiling
ARCHER2 has a number of tools available for users to profile and analyse the I/O activity of software applications. |
# Math Help - Getting a closed truth tree while it should be open!
1. ## Getting a closed truth tree while it should be open!
Hello every one!
It is about a propositional logic problem; I have tried it very hard... maybe it is just because I am too tired...
This is it: a^b-> c |- (a->c)^(b->c) which means that (a->c)^(b->c) is a Logical consequence of a^b-> c. Also means that this trunk of tree:
a^b-> c -requirement
~[(a->c)^(b->c) ] -negative consequence
should give an open tree because {a=t,b=c=f} and {a=f, b=t, c=f} make true this:
((a^b) -> c)^~((a->c)^(b->c)).
This is the tree:
a^b->c(1)
~((a->c)^(b->c))(2)
---------------------
(3) ~(a^b) c from (1)
--------------------
(4) ~(a->c) (5)~(b->c) from (2)
----------------------
~a ~b from (3)
-----------------------------------------
a
~c from (4)
-------------------------------------
b
~c from (5)
------------------------------------------
this tree has all branches closed. Why is this, what I am doing wrong? Any help, any minor idea would be very appreciated!
Melsi
2. I assume you are using the method of analytic tableaux. However, I don't see a tree in your writing; I only see a linear sequence. In particular, a ^ b -> c produces two branches and the left one has the following shape.
Code:
(1) ~(a ^ b)
|
(2) ~((a -> c) ^ (b -> c))
/\
/ \
/ \
(3) ~(a -> c) ~(b -> c) from (2)
/\ /\
/ \ / \
(4) ~a ~b ~a ~b from (1)
| | | |
(5) a a b b from (3)
| | | |
(6) ~c ~c ~c ~c from (3)
So, the first and fourth branches are closed, but the second and third are open. Note that they correspond to two truth assignments that make the original two formulas (the premise and the negation of the conclusion) true.
3. ## Problem solved
Hello,
Thank you very much for your help. I appreciate it a lot and I am grateful.
After so much effort and very deep analysis I came to a solution exactly like yours, which is great because I can confirm my solution and have no doubt about it.
E.g. let's say A is a sentence to be decomposed and P is the result of decomposition. Your solution shows that P should be applied to every branch found under A. I had misunderstood the process and thought that P should be applied everywhere, even to branches that are not under A!!!
I thought I knew the process very well, but this example brought up this problem; I am so glad I cleared it up before the examination.
I wish I had found it earlier so you would not have had to spend your time... however, thank you very much again!
Sincerely,
Melsi |
# Castrol Perfecto HT 2
#### Description
Perfecto™ HT 2 (previously called Transcal LT) is a high quality mineral oil combining low vapour pressure and high levels of thermal stability, specific heat and thermal conductivity with exceptional low temperature fluidity.
#### Application
Perfecto HT 2 is recommended primarily for non-pressurized closed liquid phase heating systems that incorporate both heating and cooling branches (eg: where an exothermic or heat releasing reaction takes place). The low temperature fluidity ensures that adequate circulation occurs in the coolest parts of the circuit. The maximum recommended bulk fluid temperature for Perfecto HT 2 is 250°C, and the fluid also operates effectively in cooling systems down to bulk temperatures of 30°C. Before being commissioned, the system should be pressure tested for leaks and then thoroughly flushed with Perfecto HT 2. Water should never be used.
With the system flushed and drained, it should be filled with fresh Perfecto HT 2. All air must be completely vented from the system before full temperature is imposed. For maximum efficiency, the heat transfer fluid should be circulated in conditions of turbulent flow. Care must be taken to ensure that bulk fluid temperature does not exceed 250°C, as this could lead to degradation of the oil. Despite the excellent oxidation stability of Perfecto HT 2 various precautions must be taken to minimize exposure to air, especially if the temperature of the fluid in the expansion chamber exceeds 50°C. A floating cover can be provided or the oil can be blanketed with inert gas. Perfecto HT 2, because of its low viscosity and freedom from additives, can also be used as the sealing medium in oil film barrier seals on rotating equipment such as gas compressors and main oil line pumps. Perfecto HT 2 has excellent low temperature properties and high temperature stability in order to give good performance over a wide range of conditions and to maximise seal life.
#### Advantages
• Excellent heat transfer properties which can be maintained over long periods of time.
• Suitability for systems incorporating both heating and cooling branches.
# How to add plots into an existing ps file
Hi,
I am trying to add new plots to an existing ps file. I think the code
``````TPostscript ps;
ps.Open("myOldPs.ps");
ps.On();
....
ps.Close();``````
should work because I just open the existing ps file. But in fact, my old ps file is always overwritten and created anew. Could anybody give me some hints on how to solve the problem? I don't like the approach of merging ps files.
Thanks,
Zhiyi
Once a postscript file has been closed it cannot be opened again in update mode.
For writing multiple canvases to a ps file, read the section
"Writing several canvases to the same Postscript file"
at: http://root.cern.ch/root/htmldoc//TPad.html#TPad:Print
Rene |
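For reference, the pattern described in that section looks roughly like the following macro (a minimal sketch, not tested against the original poster's setup): a "(" suffix on the first Print call opens the multi-page file, plain calls append pages, and a ")" suffix writes the last page and closes the file. All canvases therefore have to be printed within the same session; a closed file still cannot be reopened for appending.

#include "TCanvas.h"
#include "TH1F.h"

void multipage()
{
   TCanvas c1("c1"), c2("c2"), c3("c3");
   TH1F h1("h1", "first page",  100, -3, 3);
   TH1F h2("h2", "second page", 100, -3, 3);
   TH1F h3("h3", "third page",  100, -3, 3);

   c1.cd(); h1.Draw(); c1.Print("plots.ps(");  // "(" opens plots.ps and writes the first page
   c2.cd(); h2.Draw(); c2.Print("plots.ps");   // appends a page
   c3.cd(); h3.Draw(); c3.Print("plots.ps)");  // ")" writes the last page and closes the file
}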
# Returning Temporary Value?
## Recommended Posts
Q3: Returning Temporary Value? One last question about pointers. When I have a function that returns a value, let's say we have this method:
//mystring.h
class MyString{
public:
std::string MyString(string, string);
private:
std::string theString
}
std::string MyString::Concat(std::string para_one, std::string para_two){
theString=para_one;
theString+=para_two;
return theString;
}
//main.cpp
main(){
hello=&MyString("fooz","baz");
}
if I do something like the above I get a warning, address of a temporary value or something. why does it give such an error? and if I want the address of theString, then would I have to change the return type to a reference, as I intended?
Q2: Reverse Inline?
and another thing...
std::string ReturnAnswer(){
return "abc"
}
int ReturnAnswer(){
return 123;
}
void Calculate(int somenumb){
return somenumb*2
}
int main(){
Calculate(ReturnAnswer());
}
the above is sort of like the opposite of inlines I guess? b/c the compiler chooses which function ( ReturnAnswer() ) to use depending on the return type, which in the above case is an int, but an inline the compiler chooses which function depending on the input parameters not the return value.
Q1: Passing References and Const
passing references and the compiler complaining that there is no match, and thus forced to make it a const and later on, there is a type mismatch. do u know what i mean? how can I avoid these or am I understanding something wrong? damn, this is so aggravating,
[Edited by - Tradone on April 17, 2006 9:01:28 AM]
##### Share on other sites
jflanglois 1020
No, I don't know what you mean. How about a code example?
##### Share on other sites
rip-off 10976
Post code, errors or both.
I understand you have some problem relating to const and references, but I'm not going to be able to guess them...
##### Share on other sites
Antheus 2409
void foo(char *);
const char *x;
foo(x); // I believe this causes the problem
// error C2664: 'foo' : cannot convert parameter 1 from 'const char *' to 'char *'
Or is it something else?
##### Share on other sites
Guest Anonymous Poster
That's because foo() takes a char*. Make it take a const char* and all will be tickity-boo.
##### Share on other sites
sorry about that, I thought this issue was extremely general. b/c I encounter this more than half the time.
Error Messages:
new_shenu.cpp: In function 'int main(int, char**)':
new_shenu.cpp:97: error: no matching function for call to 'Skin::Display(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >)'
Skin.h:127: note: candidates are: void Skin::Display()
Skin.h:128: note: void Skin::Display(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >&)
new_shenu.cpp:123: error: no matching function for call to 'Skin::Display(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >)'
Skin.h:127: note: candidates are: void Skin::Display()
Skin.h:128: note: void Skin::Display(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >&)
*** Error code 1
//mainData is a Config type.
std::map <std::string, std::string> Config::Get(){
return config;
}
//and config is a map
std::map <std::string, std::string> config;
//new_shenu.cpp:97
skin.Display( mainData.Get() );
//Skin.h:127
void Display();
//Skin.h:128
void Display(std::map<std::string, std::string>& para_skinData);
##### Share on other sites
and this is what I tried next..
//mainData is a Config type.
std::map <std::string, std::string> Config::Get(){
return config;
}
//and config is a map
std::map <std::string, std::string> config;
//new_shenu.cpp:97
skin.Display( mainData.Get() );
//Skin.h:127
void Display();
//Skin.h:128
void Display( const std::map<std::string, std::string>& para_skinData);
New Errors:
Skin.cpp: In member function 'void Skin::Display(const std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >&)':
Skin.cpp:958: error: no matching function for call to 'Skin::Read(const char [5], const std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >&)'
Skin.cpp:172: note: candidates are: void Skin::Read(std::string)
Skin.cpp:186: note: void Skin::Read(std::string, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<const std::string, std::string> > >&)
*** Error code 1
SOOOO ANNOYING!!!
void Skin::Display( const std::map<std::string, std::string>& para_skinData){
print::Instance().Header( "success" );
print::Instance().Body( "" );
Read("Head");
Read("Body", para_skinData); // this is 958
Read("Tail");
print::Instance().Tail( "" );
}
and I searched for the corresponding method, which was Read.
void Skin::Read( std::string para_loadKey, std::map<std::string, std::string>& para_skinData ){
skinData=NULL;
skinData=&para_skinData;
/*
std::cout << &*skinData;
for( loop=(*skinData).begin(); loop!=(*skinData).end(); loop++ ){
std::cout << loop->first << "=" << loop->second << "\n";
}
*/
if ( TestingMode ) std::cout << "**************Begin Skin::Read(std::string, std::map )****************\n\n";
Read( para_loadKey);
if ( TestingMode ) std::cout << "**************End Skin::Read(std::string, std::map )****************\n\n";
}
Now, many other classes use Read(string, map&), so I can't change Read, but I should change Display. [cry]
##### Share on other sites
Quote:
Original post by Anonymous PosterThat's because foo() takes a char*. Make it take a const char* and all will be tickity-boo.
that's the thing there's so many things that depend on that method that is causing the problem. I need to fix so many things... if I do decide to make that one thing into a const. So I want to know why and when pointers and references when passed through functions are required to change to consts and such...
God.. I'm so tired.
Is it common that everybody has to go through this, or is this just me?
##### Share on other sites
Quote:
Original post by Antheusvoid foo(char *);const char *x;foo(x); // I believe this causes the problem// error C2664: 'foo' : cannot convert parameter 1 from 'const char *' to 'char *'Or is it something else?
I think this is what it is, except my case is a reference.
Actually, that's another thing, I more understand pointers than references. Aren't references supposed to be easier to use and more abstract? idk about that.
If you can just briefly explain to me.... thank you so much.
you just saved me like 3 days of stress and working backwords
##### Share on other sites
Antheus 2409
Quote:
Original post by Tradonethat's the thing there's so many things that depend on that method that is causing the problem. I need to fix so many things... if I do decide to make that one thing into a const. So I want to know why and when pointers and references when passed through functions are required to change to consts and such...
Here's a nice overview of const-related cases:
http://www.parashift.com/c++-faq-lite/const-correctness.html
It gives just about all cases of where const can be used, and also what effect it has.
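To tie that back to the errors above: the temporary map returned by Get() can only bind to a reference-to-const, and once a function takes a const reference it can only hand the object on to functions that also take const references. A stripped-down sketch (my own names, not the poster's actual classes):

#include <map>
#include <string>

typedef std::map<std::string, std::string> StringMap;

StringMap Get() { return StringMap(); }              // returns a temporary, like mainData.Get()

void Read(const StringMap& data) { (void)data; }     // only reads, so it can take a const reference
void Display(const StringMap& data) { Read(data); }  // a const& argument must be passed on as const&

// void DisplayNonConst(StringMap& data);            // a temporary cannot bind to a non-const reference

int main() {
    Display(Get());             // fine: the temporary binds to the const reference
    // DisplayNonConst(Get());  // error: invalid initialization of non-const reference from an rvalue
}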
##### Share on other sites
http://www.parashift.com/c++-faq-lite/const-correctness.html
Wow, I've seen that link like a thousand times.
Guess there was a reason why.
##### Share on other sites
iMalc 2466
Whilst const-correctness is a very good thing, usually you'd already have experience with it and would write everything correctly from the start. Adding it in later isn't as easy, but it is certainly do-able.
The easiest place to start is with parameters to functions. Change any references passed in into const references if the object is not modified.
##### Share on other sites
guess I just have to deal with it.
just sucks, i hate modifying my code.
I feel so imperfect.
##### Share on other sites
and an unrelated topic:
I am not deleting the pData, but I am recording its address pData->Set(std::string), so basically I am using memory leakage to dynamically allocate memory. Is this generally a good use of pointers? Just curious.
for( int i=(int)algorithms::Instance().ToInt( nextDbf.Get("dbf") )-1; i > 0; i-- ){
if( algorithms::Instance().DoesFileExist( "./system/db/" + mainParameter.GetParameterValue("db") + "/data/" + algorithms::Instance().ToString(i) + ".cgi" ) ){
if ( countPage == (int)algorithms::Instance().ToInt(mainParameter.GetParameterValue("page")) ){
std::cout << "./system/db/" << mainParameter.GetParameterValue("db") << "/data/" + algorithms::Instance().ToString(i) << ".cgi";
pData = new Data;
pData->Set( "./system/db/" + mainParameter.GetParameterValue("db") + "/data/" + algorithms::Instance().ToString(i) + ".cgi" );
dataMap[algorithms::Instance().ToString(i)]=pData->Get();
}
if ( countData == (int)algorithms::Instance().ToInt(mainConfig.Get("SnPageLine")) ){
countData=0;
countPage++;
}
countData++;
}
}
##### Share on other sites
when I have a function that returns a value let's we have this method:
//mystring.h
class MyString{
public:
std::string MyString(string, string);
private:
std::string theString
}
std::string MyString::Concat(std::string para_one, std::string para_two){
theString=para_one;
theString+=para_two;
return theString;
}
//main.cpp
main(){
hello=&MyString("fooz","baz");
}
if I do something like the above I get a warning,
address of a temporary value or something.
why does it return such an error?
and if I want the address of theString,
then would I have to change the return type to a reference, as I intended?
and another thing...
```cpp
std::string ReturnAnswer(){ return "abc" }
int ReturnAnswer(){ return 123; }
void Calculate(int somenumb){ return somenumb*2 }
int main(){
    Calculate(ReturnAnswer());
}
```
the above is sort of like the opposite of inlines I guess?
b/c the compiler chooses which function ( ReturnAnswer() ) to use depending on the return type, which in the above case is an int, but with an inline the compiler chooses which function depending on the input parameters, not the return value.
##### Share on other sites
TDragon 679
Quote:
Original post by Tradone
One last question about pointers. When I have a function that returns a value, let's say we have this method:
*** Source Snippet Removed ***
if I do something like the above I get a warning, address of a temporary value or something. Why does it return such an error? And if I want the address of theString, then would I have to change the return type to a reference, as I intended?
The returned value of a function is "temporary" to the expression it's used in. In other words, if you were to think of the function call as being replaced by a variable, the variable would cease to exist once that statement was evaluated.
In your particular example (wherein I presume the Mystring declaration in the class to actually be Concat), I can see absolutely no reason to take the address of the return value; std::string hello = mystringobj.Concat("fooz", "baz"); should be plenty.
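A small hedged sketch of the lifetime issue, using an invented MakeGreeting function rather than the poster's class:

```cpp
#include <string>

std::string MakeGreeting(const std::string& a, const std::string& b) {
    return a + b;  // returns a temporary std::string by value
}

int main() {
    // Ill-formed (at best a compiler warning): taking the address of a temporary.
    // The temporary dies at the end of the full expression, so the pointer would dangle.
    // const std::string* bad = &MakeGreeting("fooz", "baz");

    // Fine: copy (or move) the result into a named object that owns it.
    std::string good = MakeGreeting("fooz", "baz");

    // Also fine: binding to a const reference extends the temporary's lifetime
    // to the lifetime of the reference.
    const std::string& alsoGood = MakeGreeting("fooz", "baz");

    (void)good; (void)alsoGood;
    return 0;
}
```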
Quote:
and another thing...
*** Source Snippet Removed ***
the above is sort of like the opposite of inlines I guess?
Does the above even compile?
Quote:
b/c the compiler chooses which function ( ReturnAnswer() ) to use depending on the return type, which in the above case is an int, but an inline the compiler chooses which function depending on the input parameters not the return value.
Technically, the compiler is not allowed to do this; a decent compiler should give an error like "Redeclaration of 'ReturnAnswer()'; previously defined at xxx". You appear to be confusing the term "overloading" with "inline". In function overloading, two different functions are allowed to share the same name as long as their parameters differ; the compiler can always correctly determine which to call by the parameters passed to it, but there are far too many cases where the compiler would not be able to decide based only on return value (nor would one want it to). |
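A brief sketch of that rule, with invented function names: overloads may share a name when their parameter lists differ, but two functions that differ only in return type are a redeclaration error.

```cpp
#include <string>

// Legal overloading: same name, different parameter lists.
int    Parse(int raw)                { return raw * 2; }
double Parse(const std::string& raw) { return static_cast<double>(raw.size()); }

// Not legal: differs from the first Parse only by its return type.
// double Parse(int raw) { return raw * 2.0; }   // error: redeclaration of Parse(int)

int main() {
    int a = Parse(21);          // calls Parse(int), chosen by the argument type
    double b = Parse("hello");  // calls Parse(const std::string&)
    (void)a; (void)b;
    return 0;
}
```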
# Using different text and math fonts with unicode-math
When using pdfLaTeX it is sometimes possible, when using different text and math font sets, to take the fonts for operators like sin and cos from the text font even within formulae. Thus, using the lucimatx package, I can write
\renewcommand{\rmdefault}{bch}
\usepackage[onlymath=true]{lucimatx}
and then formulae will be set using Lucida for the math fonts, but Charter (i.e., bch) for text-like objects like sin and cos, matching the text used in the document body. This is what one wants from a typographic standpoint.
Is something similar possible using unicode-math? If I use
\usepackage{unicode-math}
\setmainfont{Charis SIL} % Charis is a charter clone
\setmathfont{LucidaBrightMathOT.otf}
it seems to use Lucida fonts for everything in math mode. The same happens if I use the XITS math fonts. I may, of course, be missing the equivalent of the onlymath option in unicode-math. Any help would be appreciated.
Regards Geoff Vallis
• Welcome to TeX.SE! A quick hint: if you indent lines by four spaces, they will be autoformatted as LaTeX code. – Mico Oct 14 '13 at 12:24
• Thanks for the edits and the hints. I'm new to TeX.SE (and to unicode-math). – GeoffV Oct 14 '13 at 13:02
I'm not really sure you want to mix Charis SIL and Lucida Bright Math. However, here's the workaround you're looking for:
\documentclass{article}
\usepackage{unicode-math}
\setmainfont{Charis SIL} % Charis is a charter clone
\setmathfont[Scale=MatchLowercase]{LucidaBrightMathOT.otf}
\setmathfont[range={A-Z,a-z}]{Charis SIL}
\begin{document}
sin $\sin x=y$ \textit{x}
\end{document}
The “sin” outside the formula is just to show that Charis SIL is really used for the operator. The trailing italic “x” is for comparing with the math mode “x”.
You'll get warnings such as
Font 'Charis SIL' does not contain script 'Math'.
that are unavoidable. Note the Scale option for loading Lucida Bright Math.
• Thanks. Your solution almost works but not quite because upright operators like \nabla need to be taken from the original math font and not the text font. Building on your code it seems that a working solution mimicking the 'onlymath' option is: \documentclass{article} \usepackage{unicode-math} \setmainfont{Charis SIL} % Charis is a charter clone \setmathfont[Scale=MatchLowercase]{LucidaBrightMathOT.otf} \setmathfont[range=\mathup/{latin,Latin,num}]{Charis SIL} \begin{document} sin $\sin x=y$ \textit{x} and $\nabla x = 123$123 \end{document} – GeoffV Oct 14 '13 at 17:07
• @Geoff Check with the new version. – egreg Oct 14 '13 at 17:11
• @egreg Or, to get the numbers in the text font: \setmathfont[range={A-Z,a-z,1-9}]{Charis SIL} (I haven't figured out the proper formatting in posts yet, sorry!) – GeoffV Oct 14 '13 at 17:19
• I guess that range=\mathup/{latin,Latin,num} is the best approach. – egreg Oct 14 '13 at 17:24
• Yes I agree. I consider this question answered and will so mark it if I can figure that out... – GeoffV Oct 14 '13 at 17:36 |
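Pulling the answer and the comments together, a minimal sketch of the resulting preamble might look like this (assuming the same fonts are installed; it simply restates the range-based workaround arrived at above):

```latex
\documentclass{article}
\usepackage{unicode-math}
\setmainfont{Charis SIL} % Charis is a charter clone
\setmathfont[Scale=MatchLowercase]{LucidaBrightMathOT.otf}
% Take Latin letters and digits from the text font; keep symbols such as
% \nabla from Lucida Bright Math.
\setmathfont[range=\mathup/{latin,Latin,num}]{Charis SIL}
\begin{document}
sin $\sin x = y$ \textit{x} and $\nabla x = 123$ 123
\end{document}
```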
# All Questions
410 questions
617 views
### efficiency and flow separation
I am studying about wind turbine and turbine. I have some questions, but first, I explain some cases as follow: Low velocity air over an airfoil at a zero-incidence angle does not initiate flow ...
575 views
### What is the strongest known *metallic* material which can be used on Earth?
The discovery of nanotubes and graphene has pushed the limit of material strength high enough that building a space elevator could be managable. But most of the new materials are created from carbon ...
8k views
### How to make smoke for a small wind tunnel?
I am making a small (desktop) wind tunnel for educational purposes, I want to have 10 fairly thick smoke-streams about 3cm apart. I have experimented with incense but the stream is not thick enough ...
6k views
### How do I size metal plates to get the correct dimensions after folding?
I am designing a metal plate that will be laser-cut (or machine-cut) and then folded. I want to know how to size the pre-folded plate in order to get the right dimensions after folding. My actual ...
4k views
### How do I calculate the forces on a desk and its legs?
I have a design for a desk, and I'd like to not just guess at how strong it'll be, but I can't find an explanation on how to figure out all the forces involved that doesn't assume I already know a lot ...
433 views
### How is gas flow through (extremely long) pipelines monitored and controlled?
(This is closely linked with measuring the Mach number inside a nozzle but it is not regarding the supersonic flow) Friction and heat transfer have effects on the Mach number of the compressible flow ...
9k views
### How do train tracks handle really cold weather?
First of all, I'm interested in train track only, not the rolling stock. How are tracks built to cope with really cold weather? An example might be some place in Canada or Siberia. Ice would ...
669 views
### How do I use FEM to derive the torsional constant of an arbitrary shape?
In this question I ask about how to perform a first-principle derivation of the torsional constant of a section. It appears that there is no such analytical derivation for torsional constant, so my ...
1k views
### How do you model a real-life truss in structural analysis software?
I am trying to recreate the following model of a wooden roof truss provided to me by a truss manufacturer: My question is twofold: What is the common/correct way to model the boundary conditions and ...
216 views
### Hanging a steel mesh without significant deformation
I have an welded steel mesh, like in the following image. . Material: steel Diameter of the wires: 4mm Mesh: 15x15cm I found the following information about the welds: the minimal relative ...
677 views
### Velocity of flue gases through a pipe
I was wondering if there is any equation that helps to calculate flue gas velocity through a pipe? Can I use the Poiseuille Equation to solve this problem by assuming the flue gas is acting as a ...
20k views
### Physical meaning of shear lag [closed]
What is the physical meaning behind the concept of shear lag in fibre reinforced composite structures or the concept in general for any structure?
805 views
### How to secure a rotating disc so that it has no wobble
For a 3D printing application, I need to have a spinning disc(that I will rotate via geared stepper motors) that will wobble as little as possible. By wobble, I mean that it will only rotate around it'...
6k views
### What is Saint Venant Principle?
I am having a problem with the application of the Saint Venant principle. I have heard that quite often we use it knowingly or unknowingly. I would be very grateful to anyone who would explain this very ...
1k views
### Is the reliability of automotive sensor technology impeding the success of self driving car?
The reality is that sci-fi KITT is almost here. Recently I read articles Delphi car driving itself across the country as well as Tesla cars will be self-driving this summer. As of 2013 four states (...
891 views
### How to design a cooling chimney for a computer server?
I'm building a server type computer consisting of integrated motherboard, CPU, RAM and hard disk. There will also be a standard power supply unit. This is to be a Steam Punked computer, so... I ...
250 views
### How does a Levitron work?
I just watched this video: https://www.facebook.com/HigherPerspective/videos/1162007547164896/ I don't understand what I see. If the spinner's magnetic, the bowl is made of iron and that the wood ...
780 views
### Would a double helix spring be more or less effective than two springs?
Say I have a spring in the shape of the backbone of DNA. If I unwind the two parts which would have more potential energy when pushed all the way down, the double helical or both of the individual ...
60 views
### How can I calculate the power and torque required for the motor on a wheeled robot/vehicle?
How can I calculate the power and torque required for the motor on a wheeled robot or vehicle if a particular acceleration or movement up an incline is required?
6k views
### Why doesn't a lightning strike destroy the lightning rod?
Lightning strikes have been known to cause massive amounts of damage. The stats on a lightning bolt are: current levels sometimes in excess of 400 kA, temperatures to 50,000 degrees F., and speeds ...
23k views
### Would it make sense to have a 12V lighting circuit in a house?
Modern LED bulbs must convert the standard household supply (for example $240\text{V AC}$ in the UK) into a DC supply at a lower voltage (usually $12\text{V DC}$ I think) for the LED array. This is ...
2k views
### How to design a house to be cooled passively? [duplicate]
I live in Louisiana these days, in an area that is known for its numerous antebellum plantation homes (circa early 1800s). While touring one of these homes it was clear that almost everything about ...
5k views
### Why does the microwave plate start in a random direction?
...or what type of motor is used there? I found this type of motor - usually powered with low-voltage AC (~12V), but at times with 230V, in several appliances that require very slow rotation and ...
1k views
### How should the public raise questions about unsafe structures in the United States?
In a program on NPR that I was listening to, there was a bit about a bridge that from the description sounded to a layman as unsound and is still in use. The program described it as an old wooden ...
515 views
### What are the lightest springs?
I am wondering if there is a spring strong enough and light enough to inflate an airtight membrane so that it floats at Standard Temp & Pressure. As an example, a 10cm cylinder would displace ...
14k views
### How much clearance does a car need when turning a corner?
I am contemplating buying a new car. However, the approach to the underground garage in my apartment has a 90 degree frustrating turn. Given the dimensions of the approach and car, what is the max ...
852 views
### How to determine the lateral earth pressure in a double-walled cofferdam?
The design of a retaining wall commonly involves determining the lateral earth pressure using either Rankine theory or Coulomb theory. Both theories involve mobilising the shear resistance of a ...
1k views
### How to calculate Power of motor when used as a generator?
How can you calculate how much power can be generated by a motor when used as a generator (like a motor used in a wind turbine)? Does the power depend on how fast the magnet is rotated? If so, is ...
375 views
### Why is “regular” gasoline standard instead of something more knock-resistant?
The standard light petroleum distillate for vehicle engines, "regular gasoline," is (or is equivalent to) some mix of heptane (C7) and octane (C8). Higher proportions of C8 are more knock-resistant, ...
3k views
### What is the difference between Bluetooth Low Energy and Bluetooth BR/EDR in Park mode?
It is known that Bluetooth Low Energy transmits data only during short time intervals called Connection Events. Connection events occur regularly with a predefined period. The rest of the time Bluetooth ...
4k views
### Reading stepper motor datasheets to get torque and speed
I don't understand how to use the information in a stepper motor datasheet to understand how much torque it can generate. I was going to do a simple board and try to lift a small weight as an example ...
1k views
### Motor for a hydraulic pump in a hydraulic system
I am a complete beginner to hydraulic systems, and I've wanted to learn more about this area. I'm designing a hydraulic system that involves using hydraulics to push/pull objects using pistons. I have ...
14k views
### What do you call the difference between the on and off temperatures in a simple thermostat?
A simple thermostat will turn on at one temperature and off at a higher temperature. This keeps the thermostat from cycling on and off too quickly. The difference between these values is sometimes ...
996 views
### Can I weaken a coil spring consisting of spring steel?
I have a steel coil spring that is used by compression. It is too strong, and I would like to reduce its strength by some fraction, ideally keeping its length. Reducing it to roughly half would be ...
6k views
### Why are concrete bridges more prevalent than steel bridges in the United States?
According to the United States 2014 National Bridge Inventory the number of concrete bridges far outstrips the number of steel bridges. Reinforced Concrete: 253,336 Prestressed Concrete: 148,333 ...
1k views
### Large deflection of a cantilever beam with distributed normal load
I have a strip of stainless steel encastree'd at one end to which is applied a constant pressure on one side, and I need to know what the deflection equation y = f(x) is at equilibrium. If the ...
1k views
### Are there standard vibration and shock specs for shipment of a product?
Suppose I want to ship my product via UPS, or whatever other professional carrier service. What vibration and shock forces should I design my product to withstand? And how can I effectively test my ...
9k views
### Force required to empty a syringe
How would you go about solving the following problem. I believe Bernoulli's equation needs to be employed, but I'm not sure how. Find the magnitude of the force that needs to be applied to a piston ...
760 views
### Remote start on a car with manual transmission
I'm making a remote starter for a car using a Raspberry Pi. The problem is that the car has a manual transmission. The driver needs to remember to place the gear shift in neutral when parking, ...
6k views
### What method can I use to make a pivot point in an aluminum linkage?
I recently had a project where I made an aluminum linkage (similar to a scissor lift) operated by linear actuators. Aluminum pieces were fabricated from 6061 "standard" aluminum stock, 6mm (0.25 inch) ...
474 views
### Are smaller heat-pumps more efficient?
I was a doing a placement at a company, and they were designing a new heat pump. I was told the more compact you can make a heat-pump, the better it is for efficiency, and they mentioned something ...
1k views
### Truss manufacturer drawings unclear
I am currently looking at a set of truss layouts produced by our truss manufacturing partner for a residential building project. I have reason to believe these layouts are generated using Alpine (...
384 views
### Differential equation of the vertical displacement of a cable
Given a cable (totally flexible) fixed at both ends, subjected to a vertical force $f(x)$ in its plane, with variable area $A(x)$, and variable elasticity $E(x)$ I want to find the differential ...
1k views
### What is the densest ceramic?
I know ceramics are generally considerably less dense than pure metals or alloys but sand is used in barbells for its probably cheapness, user-friendly and easy availability but its a decent density ...
233 views
### mass calculation from transfer function or bodeplot
I have a double mass rotary model. The two masses are connected by an axis which together acts as a spring damper system of the fourth order. The system is actuated by a motor and the angle of ...
2k views
### Involute Gear Calculation
I've written a program to generate gears and write them to dxf files. I thought all was well until I looked at the mesh of two gears. From everything I've read there should be no backlash when the ...
66 views
### What diameter of particles will result in a blocked borehole?
My mine is looking to install a 600 mm open borehole to drop particles up to 150 mm through. It's a 320m shaft to drop ballast down to save time compared to using the drift. When it gets blocked ...
1k views
### Why do bluetooth headsets get interference (choppy sound quality) outdoors?
This is more of a general physics question to help me understand how to choose sports headsets in the future, however it is too specific to a certain use case (bluetooth headset) to belong in the ... |
### Feedback
From users who are members of Transition to university :
### History
#### Christian Lawson-Perfect5 years, 8 months ago
Gave some feedback: Ready to use
#### Christian Lawson-Perfect5 years, 8 months ago
Saved a checkpoint:
I've done a lot of rewording, and added random names.
Published this.
#### Elliott Fletcher5 years, 8 months ago
Gave some feedback: Needs to be tested
#### Chris Graham commented 5 years, 8 months ago
Elliott, since you've been working on this one already, can we add a step with the equation for an arithmetic sequence and gaps for the first value and difference. In addition, let's put everything before "How many customers..." into the statement.
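For reference, the step being requested would show the standard n-th term formula with the first term and common difference left as gaps to fill in; a minimal sketch using the usual symbols (a for the first term, d for the common difference, neither taken from the question itself):

```latex
% n-th term of an arithmetic sequence with first term a and common difference d
\[ a_n = a + (n - 1)d \]
```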
#### Chris Graham5 years, 8 months ago
Gave some feedback: Has some problems
#### Elliott Fletcher commented 5 years, 8 months ago
I have mentioned that ticket number 1 receives strawberry ice cream in the question statement.
#### Elliott Fletcher5 years, 8 months ago
Gave some feedback: Needs to be tested
#### Elliott Fletcher commented 5 years, 8 months ago
Do you want me to edit this question as well Chris?
#### Chris Graham commented 5 years, 8 months ago
The advice assumes that the student knows that ticket number 1 receives strawberry ice cream. That works out OK, but perhaps this should be stated explicitly in the statement.
#### Chris Graham5 years, 8 months ago
Gave some feedback: Has some problems
#### Aiden McCall commented 5 years, 8 months ago
I have added advice to the question.
#### Aiden McCall5 years, 8 months ago
Gave some feedback: Needs to be tested
#### Christian Lawson-Perfect5 years, 8 months ago
Saved a checkpoint:
I've split the parts into separate questions, since they're unrelated.
This leaves the ice cream shop question, and it needs new advice.
#### Christian Lawson-Perfect5 years, 8 months ago
Gave some feedback: Has some problems
#### Hannah Aldous5 years, 8 months ago
Gave some feedback: Needs to be tested
#### Christian Lawson-Perfect5 years, 8 months ago
Gave some feedback: Has some problems
#### Christian Lawson-Perfect5 years, 8 months ago
Saved a checkpoint:
Part d should follow on from c, using the same sequence, and only ask for one (large) term. As it is, I've got to work out the formula for two new sequences and only get marked on the final answer.
Has something gone awry with the ice cream question? Jenny's friends both have tickets with larger numbers than hers. How does that help work out how many people got strawberry before Jenny? Unless you don't want to say explicitly how many flavours there are, that information is sort of a red herring.
#### Hannah Aldous5 years, 8 months ago
Gave some feedback: Needs to be tested
#### Lauren Richards5 years, 8 months ago
Gave some feedback: Has some problems
#### Lauren Richards commented 5 years, 8 months ago
• Typo in part b) - it says sequeces instead of sequences. Also, I think you should separate the different questions in part a) and part b) by i) and ii).
• i) in part c) should be in italics and the writing for the question should be on the line underneath it. I would say the sentences coming after the i) should be capitalised, too. You're missing full stops at the end of the sentences in part c).
• Is part c)ii) necessary? I'm not sure that it is particularly clear and don't think it is testing anything of importance.
• The sequence I got given in part d) was exactly the same as the sequence I got in part c). Is it randomised? Is there a way of making sure you don't get the same sequence? I think I would put part d) as an extension of part c) and get rid of c)ii).
• part e) - not sure that I would say "cycles through" - I think "alternates between sequentially" might be better.
• For part c) in the advice, you have said "We can use the formula to find the 6 term." but in the parts you have managed to formulate it so it says "6th term". Also, maybe at the end of c)ii) advice, reiterate that the answer you get is the value of the 6th term.
• I would definitely make part d)i) to be another section of part c) if the sequence is supposed to be the same.
• I don't think the middle section of the advice for part e) is particularly clear. In the question, I actually don't think I would tell them how many flavours there are; I think that should be part of the question.
• There is a typo in the advice for part e) - it says "are" instead of "our".
• I think part a) and b) should be swapped around. For a) they need to know how to calculate the common difference and then use that to generate values but in b) they only need to calculate common differences, which is less difficult.
• I do really like this question, and particularly part e).
#### Hannah Aldous5 years, 9 months ago
Gave some feedback: Needs to be tested
#### Christian Lawson-Perfect5 years, 9 months ago
Gave some feedback: Has some problems
#### Christian Lawson-Perfect commented 5 years, 9 months ago
In part c, I'd ask the student to give an expression for $a_n$ in terms of $n$ before getting them to compute a value. (Or, ask for $a_n$ at a particular small $n$ to check they've got the correspondence right, then ask for an expression, then calculate $a_n$ for a given big $n$)
Part d smells a lot like a fake context. Suppose you're Jenny, and you want to know how many other customers have had the strawberry ice cream before you. What information do you have? You wouldn't be told "the 1st, 6th, 11th, ... customers receive strawberry". You'd either be told "there are five flavours, and the shop cycles through them", or you'd notice that people 5 and 10 places in front of you in the queue also got strawberry. You also need to give a reason for Jenny to know which number customer she is.
Maybe turn it around slightly: Jenny counts $x$ people buy ice creams before her, and the people $y$ and $y+a$ places in front of her got strawberry. How many people were given strawberry before Jenny?
#### Christian Lawson-Perfect commented 5 years, 9 months ago
Before I even look at this question: never say "basic"!!!
#### Hannah Aldous5 years, 9 months ago
Gave some feedback: Needs to be tested
#### Hannah Aldous5 years, 9 months ago
Gave some feedback: Has some problems
#### Bradley Bush commented 5 years, 9 months ago
Great question, I only have a few really pedantic points.
For part b) of the advice, I might add another step for 2d=\simplify{variable 1}{variable 2} into the solution so that you aren't jumping two lines of algebra.
With the equation punctuation in the advice, I'm not sure you are treating every equation like part of a sentence.
Your last line of advice reads "..she is the 31th person"; maybe either reword your question so this doesn't happen, remove variables that won't fit the current sentence, or alter the "th" to be a variable dependent on the number before it to solve this.
#### Hannah Aldous5 years, 9 months ago
Gave some feedback: Needs to be tested
#### Hannah Aldous5 years, 9 months ago
Created this.
Arithmetic sequences in an ice cream shop Ready to use Hannah Aldous 20/11/2019 14:36
Compute the partial sum of an arithmetic sequence Ready to use Hannah Aldous 20/11/2019 14:36
Finding the $n^{\text{th}}$ Term of a Quadratic Sequence Ready to use Hannah Aldous 20/11/2019 14:39
Identifying different types of sequences Should not be used Hannah Aldous 01/12/2020 16:24
Finding the formula for the $n^{\text{th}}$ term of linear sequences Ready to use Hannah Aldous 20/11/2019 14:39
Partial sum of an arithmetic sequence - birthday money Ready to use Hannah Aldous 20/11/2019 14:41
Find common difference in arithmetic sequences with gaps Ready to use Christian Lawson-Perfect 20/11/2019 14:39
Fill in the gaps in an arithmetic sequence Ready to use Christian Lawson-Perfect 20/11/2019 14:39
Write down and apply the formula for an arithmetic sequence. Ready to use Christian Lawson-Perfect 20/11/2019 14:46
Finding the Formula for the $n^{\text{th}}$ Term of Linear Sequences draft Chris Graham 20/07/2017 15:08
Inbbavathie's copy of Find common difference in arithmetic sequences with gaps draft Inbbavathie Ravi 24/07/2017 03:48
Find a particular term of a sequence using the given formula Ready to use Christian Lawson-Perfect 20/11/2019 14:38
Johan's copy of Write down and apply the formula for an arithmetic sequence. Has some problems Johan Maertens 02/08/2017 17:51
Finding the formula for the $n^{\text{th}}$ term of linear sequences draft steve kilgallon 19/11/2017 08:09
Compute the partial sum of an arithmetic sequence draft steve kilgallon 19/11/2017 08:11
Identifying different types of sequences draft Luis Hernandez 31/12/2018 01:38
Calcular la suma parcial de una sucesión aritmética Ready to use Luis Hernandez 02/12/2020 15:42
Identificar diferentes tipos de sucesiones... Ready to use Luis Hernandez 02/11/2020 02:05
Suma parcial de una secuencia aritmética - Cumpleaños Ready to use Luis Hernandez 02/12/2020 15:42
Encontrar la diferencia común en una Secuencia Aritmética Ready to use Luis Hernandez 02/12/2020 15:42
Encontrar el término de una sucesión usando la fórmula dada... Ready to use Luis Hernandez 02/12/2020 15:42
Encontrar la fórmula para el término $n^{\text {th}}$ de una secuencia lineal Ready to use Luis Hernandez 02/12/2020 15:42
Simon's copy of Write down and apply the formula for an arithmetic sequence. draft Simon Thomas 25/02/2019 10:20
Simon's copy of Find common difference in arithmetic sequences with gaps draft Simon Thomas 25/02/2019 10:24
Simon's copy of Finding the formula for the $n^{\text{th}}$ term of linear sequences draft Simon Thomas 25/02/2019 10:30
Identifying different types of sequences draft Xiaodan Leng 10/07/2019 21:48
Write down and apply the formula for an arithmetic sequence. draft Xiaodan Leng 11/07/2019 01:48
Fill in the gaps in an arithmetic sequence draft Xiaodan Leng 11/07/2019 01:48
Find common difference in arithmetic sequences with gaps draft Xiaodan Leng 11/07/2019 01:49
Compute the partial sum of an arithmetic sequence draft Xiaodan Leng 11/07/2019 01:51
Partial sum of an arithmetic sequence - birthday money draft Xiaodan Leng 11/07/2019 01:54
Arithmetic sequences in an ice cream shop draft Xiaodan Leng 11/07/2019 01:55
Finding the formula for the $n^{\text{th}}$ term of linear sequences draft Xiaodan Leng 11/07/2019 01:55
Find a particular term of a sequence using the given formula draft Xiaodan Leng 11/07/2019 01:56
Finding the $n^{\text{th}}$ Term of a Quadratic Sequence draft Xiaodan Leng 11/07/2019 01:57
Paul's copy of Inbbavathie's copy of Find common difference in arithmetic sequences with gaps draft Paul Verheyen 17/04/2020 12:54
ibrahim's copy of Finding the formula for the $n^{\text{th}}$ term of linear sequences draft ibrahim khatib 21/12/2019 11:17
Sucesiones aritméticas draft David Vanegas 02/12/2020 15:42
Compute the partial sum of an arithmetic sequence Ready to use Vicky Hall 07/10/2020 12:45
Ashley's copy of Identifying different types of sequences draft Ashley Cusack 16/09/2020 20:54
Abrari's copy of Fill in the gaps in an arithmetic sequence Ready to use Abrari Hasmi 19/09/2020 16:58
Compute the sum of an arithmetic series draft Vicky Hall 22/09/2020 13:49
Sequences and Series - Finding the $n^{\text{th}}$ Term of a Quadratic Sequence Ready to use Apodytes-ATG apodytes 25/10/2020 18:53
Sequences and Series - Identifying different types of sequences Ready to use Apodytes-ATG apodytes 25/10/2020 18:54
Sequences and Series - Write down and apply the formula for an arithmetic sequence. Ready to use Apodytes-ATG apodytes 25/10/2020 18:58
Sequences and Series - Compute the partial sum of an arithmetic sequence Ready to use Apodytes-ATG apodytes 25/10/2020 18:51
Sequences and Series - Partial sum of an arithmetic sequence - birthday money Ready to use Apodytes-ATG apodytes 25/10/2020 18:54
Encontrar un término particular de una secuencia dada por una fórmula Ready to use Luis Hernandez 02/12/2020 15:42
Dados los primeros términos de una progresión aritmética, escriba su fórmula ... draft Luis Hernandez 02/12/2020 15:42
Jean jinhua's copy of Identifying different types of sequences Ready to use Jean jinhua Mathias 01/12/2020 15:29
Write down the next 2 terms. Ready to use Dr Palat Meethale Ushasree 02/01/2021 12:50
Missing terms in the sequence Ready to use Dr Palat Meethale Ushasree 02/01/2021 12:50
Compute the sum of an arithmetic series draft Kate Henderson 18/01/2021 12:48
Agnieszka's copy of Write down and apply the formula for an arithmetic sequence. draft Agnieszka Kulacka 27/02/2021 12:44
Kopy of Fill in the gaps in an arithmetic sequence draft Raul Duarte 03/02/2022 17:02
MSP Transition: Compute the sum of an arithmetic series draft Tom Lowe 25/08/2021 20:35
Ugur's copy of Write down and apply the formula for an arithmetic sequence. Ready to use Ugur Efem 12/01/2022 11:45
M1 Terry's copy of Write down and apply the formula for an arithmetic sequence. draft Terry Young 15/12/2021 17:11
M1 Terry's copy of Find a particular term of a sequence using the given formula draft Terry Young 15/12/2021 17:11
Fill in the gaps in an arithmetic sequence Ready to use Arif Hermawan 21/07/2022 01:32
Compute the partial sum of an arithmetic sequence Ready to use Arif Hermawan 21/07/2022 08:02
Finding the formula for the $n^{\text{th}}$ term of linear sequences Ready to use Arif Hermawan 21/07/2022 02:32
Finding the $n^{\text{th}}$ Term of a Quadratic Sequence draft Arif Hermawan 21/07/2022 08:06
Find common difference in arithmetic sequences with gaps Ready to use Arif Hermawan 21/07/2022 08:19
Chris's copy of Write down and apply the formula for an arithmetic sequence. draft Chris Templet 24/08/2022 18:51
Chris's copy of Fill in the gaps in an arithmetic sequence draft Chris Templet 24/08/2022 18:58
Chris's copy of Find common difference in arithmetic sequences with gaps draft Chris Templet 24/08/2022 19:10
Chris's copy of Partial sum of an arithmetic sequence - birthday money draft Chris Templet 24/08/2022 18:37
Chris's copy of Arithmetic sequences in an ice cream shop draft Chris Templet 24/08/2022 19:34
Chris's copy of Finding the formula for the $n^{\text{th}}$ term of linear sequences draft Chris Templet 24/08/2022 18:37
Chris's copy of Compute the partial sum of an arithmetic sequence draft Chris Templet 24/08/2022 18:37
7 - Dilyniannau - 9 draft Angharad Thomas 20/10/2022 10:09
# Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at sqrt(s) = 7 TeV with the ATLAS experiment
5 Laboratoire de Physique Corpusculaire
LPC - Laboratoire de Physique Corpusculaire - Clermont-Ferrand
Abstract : This paper describes an analysis of the angular distribution of W->enu and W->munu decays, using data from pp collisions at sqrt(s) = 7 TeV recorded with the ATLAS detector at the LHC in 2010, corresponding to an integrated luminosity of about 35 pb^-1. Using the decay lepton transverse momentum and the missing transverse energy, the W decay angular distribution projected onto the transverse plane is obtained and analysed in terms of helicity fractions f0, fL and fR over two ranges of W transverse momentum (ptw): 35 < ptw < 50 GeV and ptw > 50 GeV. Good agreement is found with theoretical predictions. For ptw > 50 GeV, the values of f0 and fL-fR, averaged over charge and lepton flavour, are measured to be : f0 = 0.127 +/- 0.030 +/- 0.108 and fL-fR = 0.252 +/- 0.017 +/- 0.030, where the first uncertainties are statistical, and the second include all systematic effects.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00679734
Contributor : Claudine Bombar
Submitted on : Friday, March 16, 2012 - 11:45:20 AM
Last modification on : Tuesday, December 1, 2020 - 2:32:13 PM
### Citation
G. Aad, L. Aperio Bella, B. Aubert, N. Berger, J. Colas, et al.. Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at sqrt(s) = 7 TeV with the ATLAS experiment. European Physical Journal C: Particles and Fields, Springer Verlag (Germany), 2012, 72, pp.2001. ⟨10.1140/epjc/s10052-012-2001-6⟩. ⟨in2p3-00679734⟩
Exercise 7.9
Consider a vocabulary with only four propositions, $A$, $B$, $C$, and $D$. How many models are there for the following sentences?
1. $B\lor C$.
2. $\lnot A\lor \lnot B \lor \lnot C \lor \lnot D$.
3. $(A \Rightarrow B) \land A \land \lnot B \land C \land D$.
## Tuesday, April 03, 2018
### Challenge: Is there a small NFA for { a^i : i\ne 1000} ?
(Added later- a reader left a comment pointing to a paper with the answer and saying that the problem is not original. My apologies- upon rereading I can see why one would think I was claiming it was my problem. It is not. I had heard the result was folklore but now I have a source! So I thank the commenter and re-iterate that I am NOT claiming it is my problem.)
Consider the language
L = { a^i : i ≠ 1000 }
There is a DFA for L of size 1002 and one can prove that there is no smaller DFA.
a) Show that any NFA for L requires roughly 1000 states
or
b) Show that there is a small NFA for L, say less than 500 states
or
c) State that you think the question is unknown to science.
I will reveal the answer in my next post, though it's possible that (c) is the answer and the comments will convert it to either (a) or (b).
1. A universal finite automaton recognizes the complement of this language (i.e., an NFA recognizes this language) with Θ(√n) states, where n = 1000 for this language.
2. I can see a sqrt(n) upper bound using the Frobenius coin problem, I wonder if it's the same that you have.
3. Oh well, I see by the time I got back from vacation, the solution has been posted and it's essentially the same... |
# Has a magnetic field flip of a distant star ever been measured?
The magnetic field of the Sun flips during each solar cycle, with the flip occurring when sunspot cycle is near its maximum. Levels of solar radiation and ejection of solar material, the number and size of sunspots, solar flares, and coronal loops all exhibit a synchronized fluctuation, from active to quiet to active again, with a period of 11 years.
Has this phenomenon ever been measured, directly or indirectly, on stars outside the solar system?
The Sun's magnetic activity cycle of $$\sim 22$$ years involves a large-scale reversal of the polarity of the magnetic field every $$\sim 11$$ years.
To directly measure the reversing cycles in magnetic polarity requires spatially resolved maps of the vector magnetic field. Such spatially resolved maps are possible for fast-rotating, and hence highly magnetically active stars through Zeeman Doppler Imaging. In general, highly magnetically active stars appear not to show magnetic activity variations as strongly as the Sun. Nevertheless, recent instrumental developments have led to (difficult) observations of some solar-type stars with intermediate rotation rates. There is now plenty of evidence for magnetic polarity reversals in many of these (e.g. in Chi$$^1$$ Ori, Rosen et al. 2016; in LQ Hya, Lehtinen 2019; in V1358 Ori, Willamo et al. 2021). |
# University of Hertfordshire
## A search for white dwarfs in the Galactic plane: the field and the open cluster population
Research output: Contribution to journal › Article › peer-review
### Standard
A search for white dwarfs in the Galactic plane: the field and the open cluster population. / Raddi, R.; Catalan, S.; Gaensicke, B. T.; Hermes, J. J.; Napiwotzki, R.; Koester, D.; Tremblay, P. -E.; Barentsen, G.; Farnhill, H. J.; Mohr-Smith, M.; Drew, J. E.; Groot, P. J.; Guzman-Ramirez, L.; Parker, Q. A.; Steeghs, D.; Zijlstra, A.
In: Monthly Notices of the Royal Astronomical Society, Vol. 457, No. 2, 01.04.2016.
Research output: Contribution to journal › Article › peer-review
### Harvard
Raddi, R, Catalan, S, Gaensicke, BT, Hermes, JJ, Napiwotzki, R, Koester, D, Tremblay, P-E, Barentsen, G, Farnhill, HJ, Mohr-Smith, M, Drew, JE, Groot, PJ, Guzman-Ramirez, L, Parker, QA, Steeghs, D & Zijlstra, A 2016, 'A search for white dwarfs in the Galactic plane: the field and the open cluster population', Monthly Notices of the Royal Astronomical Society, vol. 457, no. 2. https://doi.org/10.1093/mnras/stw042
### APA
Raddi, R., Catalan, S., Gaensicke, B. T., Hermes, J. J., Napiwotzki, R., Koester, D., Tremblay, P. -E., Barentsen, G., Farnhill, H. J., Mohr-Smith, M., Drew, J. E., Groot, P. J., Guzman-Ramirez, L., Parker, Q. A., Steeghs, D., & Zijlstra, A. (2016). A search for white dwarfs in the Galactic plane: the field and the open cluster population. Monthly Notices of the Royal Astronomical Society, 457(2). https://doi.org/10.1093/mnras/stw042
### Author
Raddi, R. ; Catalan, S. ; Gaensicke, B. T. ; Hermes, J. J. ; Napiwotzki, R. ; Koester, D. ; Tremblay, P. -E. ; Barentsen, G. ; Farnhill, H. J. ; Mohr-Smith, M. ; Drew, J. E. ; Groot, P. J. ; Guzman-Ramirez, L. ; Parker, Q. A. ; Steeghs, D. ; Zijlstra, A. / A search for white dwarfs in the Galactic plane: the field and the open cluster population. In: Monthly Notices of the Royal Astronomical Society. 2016 ; Vol. 457, No. 2.
### Bibtex
@article{19c2bf15896042e189ecaf4958ef5d3b,
title = "A search for white dwarfs in the Galactic plane:: the field and the open cluster population",
abstract = "We investigated the prospects for systematic searches of white dwarfs at low Galactic latitudes, using the VLT Survey Telescope (VST) H$\alpha$ Photometric Survey of the Galactic plane and Bulge (VPHAS+). We targeted 17 white dwarf candidates along sightlines of known open clusters, aiming to identify potential cluster members. We confirmed all the 17 white dwarf candidates from blue/optical spectroscopy, and we suggest five of them to be likely cluster members. We estimated progenitor ages and masses for the candidate cluster members, and compared our findings to those for other cluster white dwarfs. A white dwarf in NGC 3532 is the most massive known cluster member (1.13 M$_{\odot}$), likely with an oxygen-neon core, for which we estimate an $8.8_{-4.3}^{+1.2}$ M$_{\odot}$ progenitor, close to the mass-divide between white dwarf and neutron star progenitors. A cluster member in Ruprecht 131 is a magnetic white dwarf, whose progenitor mass exceeded 2-3 M$_{\odot}$. We stress that wider searches, and improved cluster distances and ages derived from data of the ESA Gaia mission, will advance the understanding of the mass-loss processes for low- to intermediate-mass stars.",
keywords = "astro-ph.SR, astro-ph.GA, stars: AGB and post-AGB, stars: mass-loss, stars:neutron, white dwarfs, open clusters and associations : general",
author = "R. Raddi and S. Catalan and Gaensicke, {B. T.} and Hermes, {J. J.} and R. Napiwotzki and D. Koester and Tremblay, {P. -E.} and G. Barentsen and Farnhill, {H. J.} and M. Mohr-Smith and Drew, {J. E.} and Groot, {P. J.} and L. Guzman-Ramirez and Parker, {Q. A.} and D. Steeghs and A. Zijlstra",
note = "This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society. The Version of Record [R. Raddi, et al, {\textquoteleft}A search for white dwarfs in the Galactic plane: the field and the open cluster population{\textquoteright}, Monthly Notices of the Royal Astronomical Society, Vol. 457 (2): 1988-2004, first published online 5 February 2016] is available online at doi: https://doi.org/10.1093/mnras/stw042. {\textcopyright} 2016 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved. ",
year = "2016",
month = apr,
day = "1",
doi = "10.1093/mnras/stw042",
language = "English",
volume = "457",
journal = "Monthly Notices of the Royal Astronomical Society",
issn = "0035-8711",
publisher = "Oxford University Press",
number = "2",
}
### RIS
TY - JOUR
T1 - A search for white dwarfs in the Galactic plane:
T2 - the field and the open cluster population
AU - Raddi, R.
AU - Catalan, S.
AU - Gaensicke, B. T.
AU - Hermes, J. J.
AU - Napiwotzki, R.
AU - Koester, D.
AU - Tremblay, P. -E.
AU - Barentsen, G.
AU - Farnhill, H. J.
AU - Mohr-Smith, M.
AU - Drew, J. E.
AU - Groot, P. J.
AU - Guzman-Ramirez, L.
AU - Parker, Q. A.
AU - Steeghs, D.
AU - Zijlstra, A.
N1 - This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society. The Version of Record [R. Raddi, et al, ‘A search for white dwarfs in the Galactic plane: the field and the open cluster population’, Monthly Notices of the Royal Astronomical Society, Vol. 457 (2): 1988-2004, first published online 5 February 2016] is available online at doi: https://doi.org/10.1093/mnras/stw042. © 2016 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved.
PY - 2016/4/1
Y1 - 2016/4/1
N2 - We investigated the prospects for systematic searches of white dwarfs at low Galactic latitudes, using the VLT Survey Telescope (VST) H$\alpha$ Photometric Survey of the Galactic plane and Bulge (VPHAS+). We targeted 17 white dwarf candidates along sightlines of known open clusters, aiming to identify potential cluster members. We confirmed all the 17 white dwarf candidates from blue/optical spectroscopy, and we suggest five of them to be likely cluster members. We estimated progenitor ages and masses for the candidate cluster members, and compared our findings to those for other cluster white dwarfs. A white dwarf in NGC 3532 is the most massive known cluster member (1.13 M$_{\odot}$), likely with an oxygen-neon core, for which we estimate an $8.8_{-4.3}^{+1.2}$ M$_{\odot}$ progenitor, close to the mass-divide between white dwarf and neutron star progenitors. A cluster member in Ruprecht 131 is a magnetic white dwarf, whose progenitor mass exceeded 2-3 M$_{\odot}$. We stress that wider searches, and improved cluster distances and ages derived from data of the ESA Gaia mission, will advance the understanding of the mass-loss processes for low- to intermediate-mass stars.
AB - We investigated the prospects for systematic searches of white dwarfs at low Galactic latitudes, using the VLT Survey Telescope (VST) H$\alpha$ Photometric Survey of the Galactic plane and Bulge (VPHAS+). We targeted 17 white dwarf candidates along sightlines of known open clusters, aiming to identify potential cluster members. We confirmed all the 17 white dwarf candidates from blue/optical spectroscopy, and we suggest five of them to be likely cluster members. We estimated progenitor ages and masses for the candidate cluster members, and compared our findings to those for other cluster white dwarfs. A white dwarf in NGC 3532 is the most massive known cluster member (1.13 M$_{\odot}$), likely with an oxygen-neon core, for which we estimate an $8.8_{-4.3}^{+1.2}$ M$_{\odot}$ progenitor, close to the mass-divide between white dwarf and neutron star progenitors. A cluster member in Ruprecht 131 is a magnetic white dwarf, whose progenitor mass exceeded 2-3 M$_{\odot}$. We stress that wider searches, and improved cluster distances and ages derived from data of the ESA Gaia mission, will advance the understanding of the mass-loss processes for low- to intermediate-mass stars.
KW - astro-ph.SR
KW - astro-ph.GA
KW - stars: AGB and post-AGB
KW - stars: mass-loss
KW - stars:neutron
KW - white dwarfs
KW - open clusters and associations : general
U2 - 10.1093/mnras/stw042
DO - 10.1093/mnras/stw042
M3 - Article
VL - 457
JO - Monthly Notices of the Royal Astronomical Society
JF - Monthly Notices of the Royal Astronomical Society
SN - 0035-8711
IS - 2
ER - |
The étale topos of a scheme is the classifying topos of…?
By a theorem of Joyal and Tierney, every Grothendieck topos is the classifying topos of a localic groupoid. It has been proved (e.g. C. Butz and I. Moerdijk. Representing topoi by topological groupoids. Journal of Pure and Applied Algebra 130, 223-235, 1998) that topoi "with enough points" actually admit a representation as classifying topoi of topological groupoids.
Now my question is the following: take a well-known topos, as the étale topos for a scheme. This is the classifying topos of a localic groupoid, but which one? Do you know if someone has ever investigated that? Thank you in advance.
• You might want to look at the work of Ingo Blechschmidt, whose is working on similar things (characterizing the big and small Zariski topoi for example), the answer might just be in his Thesis. github.com/iblech/internal-methods – user45878 Apr 8 '18 at 20:25
• The answer isn't actually in that work, at least from a first look, but thank you very much for the reference! It really looks interesting. Any other suggestions are welcome. – W. Rether Apr 9 '18 at 17:50
• This is an excellent question. While some parts of my thesis might be tangentially relevant, it doesn't contain an answer to this question. Could you migrate this question to MathOverflow? – Ingo Blechschmidt Jun 3 '18 at 10:30
• I know very little algebraic geometry, but my understanding from what I have heard other people say in passing is the following: if $X=\mathrm{Spec}(R)$ for a local ring $R$, then the \'etale topos of $X$ is sheaves on the absolute Galois group of the residue field of $R$; in general, the topological groupoid you get is obtained by gluing these together. It looks like Pirashvili - The \'Etale Fundamental Groupoid as a Terminal Costack has a theorem along these lines, but I have not read it. (Aside: Wraith - Generic Galois Theory of Local Rings answers the question in your title, but not body). – ne- Jul 17 '18 at 18:03
• This is too old to migrate but you may want to consider asking it there. – Alexander Gruber Oct 16 '18 at 14:57 |
The thermodynamic reasons why membrane proteins form stable complexes inside the hydrophobic lipid bilayer remain poorly understood. This is largely because of a lack of membrane–protein systems amenable for equilibrium studies and a limited number of methods for measuring these reactions. Recently, we reported the equilibrium dimerization of the CLC-ec1 Cl/H+ transporter in lipid bilayers (Chadda et al. 2016. eLife. https://doi.org/10.7554/eLife.17438), which provided a new type of model system for studying protein association in membranes. The measurement was conducted using the subunit-capture approach, involving passive dilution of the protein in large multilamellar vesicles, followed by single-molecule photobleaching analysis of the Poisson distribution describing protein encapsulation into extruded liposomes. To estimate the fraction of dimers (FDimer) as a function of protein density, the photobleaching distributions for the nonreactive, ideal monomer and dimer species must be known so that random co-capture probabilities can be accounted for. Previously, this was done by simulating the Poisson process of protein reconstitution into a known size distribution of liposomes composed of Escherichia coli polar lipids (EPLs). In the present study, we investigate the dependency of FDimer and ΔG° on the modeling through a comparison of different liposome size distributions (EPL versus 2:1 POPE/POPG). The results show that the estimated FDimer values are comparable, except at higher densities when liposomes become saturated with protein. We then develop empirical controls to directly measure the photobleaching distributions of the nonreactive monomer (CLC-ec1 I201W/I422W) and ideal dimer (WT CLC-ec1 cross-linked by glutaraldehyde or CLC-ec1 R230C/L249C cross-linked by a disulfide bond). The measured equilibrium constants do not depend on the correction method used, indicating the robustness of the subunit-capture approach. This strategy therefore presents a model-free way to quantify protein dimerization in lipid bilayers, offering a simplified strategy in the ongoing effort to characterize equilibrium membrane–protein reactions in membranes.
Passive dilution is a straightforward method for measuring protein self-assembly reactions. For water-soluble proteins, this is an easy experiment to carry out because one simply dilutes the sample with buffer to drive the reaction toward dissociation. For membrane proteins, the exact same experiment becomes inherently challenging, because the protein is now solvated in a two-dimensional lipid bilayer. Methods for rapidly diluting membranes do not exist because lipids or vesicles added to the bulk do not spontaneously incorporate into preformed lipid bilayers. Adding to the challenge, membranes occupy a small fraction of the sample volume, and so the protein signal is considerably lower, limiting the use of bulk detection methods. Therefore, special considerations must be made in order to overcome these hurdles to study membrane–protein reactions in membranes by passive dilution approaches.
To address these issues, we developed a single-molecule approach that uses fluorescence microscopy to examine the equilibrium protein population in large membranes (Chadda et al., 2016; Chadda and Robertson, 2016). In this method, referred to as the subunit-capture approach, the protein is incubated in large multilamellar vesicles (MLVs), which are then fractionated by extrusion forming a population of smaller vesicles. Each liposome traps zero, one, two, or more of the fluorescently labeled subunits based on their prior proximity in the membrane, and this occupancy is counted by single-molecule photobleaching analysis. The probability distribution of photobleaching steps is calculated from hundreds of vesicles, representing a fluorescent version of the Poisson distribution of protein reconstitution. This distribution depends on the protein density in the membrane, the size of the liposome compartments, and the population of oligomeric states that existed in the MLV membranes, with the latter containing information about the equilibrium constant of protein association. Recently, this method was used to measure the equilibrium dimerization reaction of the CLC-ec1 Cl/H+ antiporter for the WT protein, as well as the destabilization caused by addition of a tryptophan at the dimerization interface (I422W, “W”; Chadda et al., 2016).
The subunit-capture approach of trapping protein into liposomes has several advantages beyond studying the protein in planar bilayers. First, the act of liposome formation captures the equilibrium distribution in the MLV membrane, analogous to a rapid, irreversible cross-linking event. Although it is possible that the oligomeric state of the protein could change after capture, this does not affect the measured photobleaching distribution. Therefore, this method allows us to freeze the state of the protein in time, separating it from the actual imaging step, and making it significantly easier to examine membranes under different experimental conditions. Second, the liposomes used in these studies can be loaded onto the slide at high density without rupture, increasing the likelihood of observing a protein at the lowest density limit. This enables a wide dynamic range of densities (from 10−9 to 10−5 subunits/lipid) that can be studied using this approach. Finally, examination of the protein by photobleaching analysis provides a rigorous method of counting all protein subunits. This serves as an important quality control step that also informs on potential aggregation, observed as an increase in liposomes with more than three steps, which can easily confound equilibrium membrane–protein reactions.
However, the subunit-capture approach is not without its own challenges. First and foremost, it requires a priori information about the capture process in order to properly quantify the protein population. Although it is straightforward to measure protein stoichiometry at low dilutions where the majority of liposomes are unoccupied (Fang et al., 2007; Walden et al., 2007; Robertson et al., 2010; Stockbridge et al., 2013), at higher densities, there is a significant probability of random co-capture of subunits. Following the Poisson distribution, an increase in density leads to an increase in multioccupied liposomes by chance alone, reporting a false signal of oligomerization. This has previously been referred to as “artifactual togetherness” or “forced cohabitation” and has been shown to occur for membrane proteins in detergent micelles (Tanford and Reynolds, 1976; Kobus and Fleming, 2005). One way of correcting this is to simulate the Poisson process of fluorescent-subunit capture into liposomes to generate the expected photobleaching distributions for nonreactive monomer and dimer populations (Chadda et al., 2016). As long as the fluorescent labeling yield is known, this is straightforward to simulate, but it requires knowledge about the liposome size distribution. Because liposome populations are often heterogenous (Walden et al., 2007), this must be measured experimentally by a high-resolution method such as cryo-electron microscopy (cryo-EM). This adds a technically challenging step to the approach, which must be conducted every time a new experimental condition is investigated, such as temperature or lipid composition. It is also not clear whether a single measurement of the liposome distribution is sufficiently precise to allow for a robust determination of the equilibrium constant. To address these issues, the dependency of the equilibrium constant on variations in the liposome size distributions is investigated, comparing the previous liposome distribution comprised of Escherichia coli polar lipids (EPLs; Walden et al., 2007) to a new distribution measured from 2:1 POPE/POPG liposomes. In addition, empirical, nonreactive monomer and dimer controls are developed based on the CLC-ec1 scaffold, presenting a model-free option for quantifying membrane–protein dimerization. Analysis of CLC-ec1 association using either correction method yields comparable values for the free energy of dimerization, demonstrating the robustness of the subunit-capture approach for quantifying equilibrium protein association in membranes.
The bulk of the methods used in this study follow those reported in Chadda et al. (2016). Details of experiments specific to this study are outlined here.
### Equilibrium dimerization in membranes
Equilibrium protein dimerization provides a simple model for studying the thermodynamics of protein self-assembly in membranes. In this reaction, two monomers (M) bind resulting in a dimer (D) complex in the membrane:
$M+M⇌D,$
(1)
with an equilibrium constant of the reaction defined as
$K_{eq}=\frac{\chi_{D}}{\chi_{M}^{2}}.$
(2)
As the proteins are primarily solvated by lipids, the protein density is represented as the reactive mole fraction, χ*, of each protein species (M or D):
$\chi^*=\frac{1}{2}\frac{N_{\mathrm{protein}}}{N_{\mathrm{protein}}+N_{\mathrm{solvent}}}\approx\frac{1}{2}\frac{N_{\mathrm{protein}}}{N_{\mathrm{solvent}}}.$
(3)
Reconstitution of CLC-ec1 by dialysis leads to randomly oriented protein in the lipid bilayer (Matulef and Maduke, 2005; Garcia-Celma et al., 2013). Here, we assume that dimerization only occurs between oriented subunits and hence use the reactive mole fraction, χ* subunits/lipid, which is equivalent to the reconstituted mole fraction χ subunits/lipid divided by 2. Note that the mole fraction simplifies to the mole ratio at dilute conditions (Nsolvent >> Nprotein).
The equilibrium constant of the reaction is simply obtained by diluting the protein with solvent and measuring the fraction of protein in the dimer state (FDimer). For an equilibrium reaction, this will follow the dimerization isotherm:
$F_{\mathrm{Dimer}}=\frac{1+4\chi^*K_{eq}-\sqrt{1+8\chi^*K_{eq}}}{4\chi^*K_{eq}}.$
(4)
Thus, plotting FDimer versus χ* yields Keq and the underlying free energy of dimerization, ΔG° = −RTln(Keqχ°), where χ° = 1 subunit/lipid represents the mole fraction standard state.
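For reference, the isotherm in Eq. 4 can be recovered from Eq. 2 together with conservation of subunits (a brief sketch; here χM and χD denote the reactive mole fractions of monomeric subunits and dimers, so that χ* = χM + 2χD):
$2K_{eq}\chi_{M}^{2}+\chi_{M}-\chi^{*}=0\;\Rightarrow\;\chi_{M}=\frac{-1+\sqrt{1+8\chi^{*}K_{eq}}}{4K_{eq}},$
so that
$F_{\mathrm{Dimer}}=\frac{2\chi_{D}}{\chi^{*}}=1-\frac{\chi_{M}}{\chi^{*}}=\frac{1+4\chi^{*}K_{eq}-\sqrt{1+8\chi^{*}K_{eq}}}{4\chi^{*}K_{eq}}.$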
### The lower-density limit in MLVs
A vesicle with a diameter of 10 µm has a surface area of 4πr^2 ∼ 300 µm^2 = 3 × 10^8 nm^2. Using Alipid = 0.6 nm^2, this means that each leaflet contains 5 × 10^8 lipids, with the entire bilayer containing 10^9 lipids. Therefore, the lower mole fraction limit corresponds to 2 subunits/10^9 lipids, χ = 2 × 10^−9 subunits/lipid (χ* = 10^−9 subunits/lipid).
### Cryo-EM measurements of liposome size distributions
Liposomes were freeze-thawed seven times, incubated at room temperature, and then extruded through a 400-nm nucleopore filter (GE Life Sciences) 21 times before sample freezing. 3 µl of the undiluted sample was loaded onto glow-discharged Lacey carbon support films (Electron Microscope Sciences), blotted, and plunged into liquid ethane using a Vitrobot System (FEI). Images were collected at 300 kV on a JEOL 3200 fs microscope with a K2 Summit direct electron detector camera (GATAN). Magnifications of 15,000 and 30,000 were used. For size determination, liposomes were manually outlined in Fiji and ImageJ (Schindelin et al., 2012; Schneider et al., 2012) to measure the outer radii of all liposomes, including those located on the carbon. The normalized frequency histograms were averaged from two independent preparations (sample sizes, 140 and 686), and the mean ± SD is reported in Table 1.
### Cross-linking of “WT” C85A/H234C CLC-ec1
For SDS-PAGE, glutaraldehyde (Sigma-Aldrich) was added to 8 µM WT in size exclusion buffer (150 mM NaCl, 20 mM MOPS, pH 7.5, 5 mM analytical-grade DM; Anatrace), for a final concentration of glutaraldehyde of 0.4% wt/vol (∼40 mM). The reaction was allowed to proceed for 8 min, after which 10× Tris or glycine buffer was added to quench the reaction. For reconstitution, WT protein on the C85A/H234C background was labeled with Cy5-maleimide as described previously and then cross-linked with glutaraldehyde and quenched before reconstitution into 2:1 POPE/POPG liposomes (Avanti Polar Lipids). For the R230C/L249C disulfide cross-linked construct (Nguitragool and Miller, 2007), mutations were added to the C85A/H234C background using a QuikChange II site-directed mutagenesis kit (Agilent Technologies). Purification was performed as described previously (Chadda et al., 2016) in the presence of 1 mM TCEP until the size exclusion chromatography (SEC) purification step. Labeling and reconstitution was performed as before. All samples were run on nonreducing gels. For DTT reduction, 10 µM protein was incubated with 100 mM DTT at 30°C for 1 h.
### Calculation of FDimer
For a homogeneous liposome population with a single protein species, the statistics of subunit capture is described by the Poisson distribution:
$P\left(n\right)=\frac{\mu^{n}e^{-\mu}}{n!},$
(5)
where n is the number of subunits captured in a liposome and μ is the expected occupancy, set by the protein density and the liposome surface area.
In the subunit-capture method, it is the photobleaching probability distribution that is being measured, which can be considered as a fluorescent version of the Poisson distribution. In the case of monomer–dimer equilibrium and a heterogeneous liposome population, the system becomes sufficiently complex, making an analytical solution of the expected photobleaching distribution intractable. Instead, a stochastic simulation of the Poisson process directly calculates the nonreactive monomer and dimer distributions. Complete details on the procedure for simulating the expected monomer and dimer photobleaching distributions, as well as MATLAB simulation scripts, are available elsewhere (Chadda et al., 2016; Chadda and Robertson, 2016).
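As an illustration of this simulation step (a minimal sketch only, not the published MATLAB scripts; the size distribution, density, and labeling yield below are placeholder values), the following C++ program draws liposomes from a hypothetical size distribution, captures subunits with Poisson statistics, thins them by the labeling yield, and tallies the photobleaching-step probabilities expected for a nonreactive monomer:

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Hypothetical liposome size distribution {outer radius (nm) : fraction of
    // vesicles}; the real EPL and 2:1 POPE/POPG histograms are measured by cryo-EM.
    const std::vector<double> radius = {25, 50, 100, 200};
    const std::vector<double> weight = {0.30, 0.40, 0.20, 0.10};
    const double pi     = 3.14159265358979;
    const double Alipid = 0.6;    // nm^2 per lipid
    const double chi    = 1e-6;   // reconstituted density, subunits/lipid (placeholder)
    const double pCy5   = 0.72;   // labeling yield per subunit
    const int    nVes   = 200000; // number of liposomes to simulate

    std::mt19937 rng(1);
    std::discrete_distribution<int> pickSize(weight.begin(), weight.end());
    std::vector<long> count(6, 0);  // vesicles showing 0, 1, ..., 5+ bleaching steps

    for (int v = 0; v < nVes; ++v) {
        const double r      = radius[pickSize(rng)];
        const double nLipid = 2.0 * 4.0 * pi * r * r / Alipid;   // both leaflets
        const int nSub = std::poisson_distribution<int>(chi * nLipid)(rng);
        const int nLab = std::binomial_distribution<int>(nSub, pCy5)(rng);
        count[std::min(nLab, 5)]++;
    }
    const double occupied = nVes - count[0];
    for (int k = 1; k <= 5; ++k)   // P1..P5+ among fluorescently occupied vesicles
        std::printf("P%d%s = %.3f\n", k, k == 5 ? "+" : "", count[k] / occupied);
}
```

An ideal-dimer control can be sketched the same way by capturing dimers at half the subunit density and counting up to two potential fluorophores per captured particle.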
A MATLAB app was created to calculate the fraction of dimer using the various models and experimental controls based on least-squares (R2) analysis (Chadda et al., 2016). The app file is available for download as a source file in the supplemental information (MATLAB 2016b or higher is required). All of the models and experimental control data are implemented in the code. For the modeling, PCy5 = 0.72, Pnon-specific = 0.14 was used, and bias = 4, i.e., bins 1–4 in the radius probability distribution (i.e., r < 25 nm) were excluded for the dimer model, and Alipid = 0.6 nm2.
### Online supplemental material
MATLAB application file for least-squares calculation of the fraction of dimer based on the various monomer/dimer benchmarks presented in this paper (MATLAB 2016b or higher required) is available for download.
When studying protein assembly reactions, it is important to have benchmarks that map out the expected signals for the dissociated and associated states. Previously, we estimated this using a stochastic simulation of the Poisson process of subunit reconstitution into a defined liposome population based on the “Walden” distribution of 400 nm extruded vesicles comprised of EPL (Fig. 1, A and B; Walden et al., 2007). Although the data and model agreed at lower densities, it systematically deviated for χ* > 10−6 subunits/lipid, where the data contained fewer single steps (P1) and more multistep photobleaching events (P3+) than the model (see Figs. 2 A, 5 A, and 6 A). We hypothesized that larger liposomes were underrepresented in the Walden distribution, leading to a significant underestimation of liposomes containing more than three steps in the model. This could arise because of size selection during the freezing of cryo-EM samples, or it could be a result of minor differences in the lipid composition. EPL is a crude extract with ∼67% PE, 20% PG, and 10% cardiolipin, whereas our experimental lipid conditions represent a synthetic mimic made of 67% POPE and 33% POPG. To investigate this, the 400-nm extruded 2:1 POPE/POPG liposome size distribution was measured by cryo-EM (Fig. 1, C–E), showing that there is a population of larger liposomes that was not observed in the Walden distribution (Table 1). The difference is small, but the effect is pronounced when considering the fractional surface area (Fig. 1 F), which dictates the Poisson process. With the updated liposome size distribution, we reexamined the I201W/I422W (“WW”) CLC-ec1 photobleaching data from Chadda et al. (2016). Previously, this construct was found to be monomeric in detergent by both glutaraldehyde cross-linking and x-ray crystallography and also in 3:1 egg PC/POPG liposomes reconstituted at χ* = 10−5 subunits/lipid (1 µg/mg; Robertson et al., 2010). In addition, the fraction of empty liposomes (F0) measured by single-molecule colocalization microscopy for the Cy5-labeled protein and Alexa Fluor 488–labeled liposomes indicated that the protein occupancy was consistent with a monomer at saturating densities (Chadda et al., 2016). However, when FDimer was calculated using the Walden distribution, a weak apparent dimerization reaction was observed (Fig. 2 A), suggesting that dimers were either forming or that the model was incorrect at higher densities. With the 2:1 POPE/POPG distribution, the experimental WW data now correspond to the ideal monomer probabilities, and the apparent dimerization is no longer present (Fig. 2 B). This, together with the other evidence presented in previous studies, demonstrates that WW is monomeric in our experimental range of measurements and can serve as a control in the subunit-capture method.
With a monomeric control in place, the next step was to identify a dimer control to establish an upper bound for the dimerization reaction. For this, we turned to covalent cross-linking methods that have already been well established for CLC-ec1. Glutaraldehyde has been shown to specifically cross-link the dimer state, as demonstrated by SDS-PAGE (Fig. 3 B; Maduke et al., 1999; Robertson et al., 2010). Glutaraldehyde is a short chain bis-reactive molecule that cross-links primary amine groups present on lysines and the N terminus. Although CLC-ec1 has 13 native lysine residues (Fig. 3 A), under our reaction conditions, glutaraldehyde captures the majority of the protein in a dimeric form, with only a small amount of protein cross-linked as a non-specific tetramer (Fig. 3, B and C). Measurement of the photobleaching probability distribution shows that WT + glutaraldehyde proteoliposomes follow the 2:1 POPE/POPG dimer model, as well as the saturating range of the WT data (Fig. 3 F). However, upon measurement of functional activity, it was found that a large fraction of the protein is nonfunctional (Fig. 3, D and E). Therefore, glutaraldehyde cross-linked WT serves as a structural dimer control in the membrane, but not one with a proper biological fold.
For an alternate approach, we investigated disulfide cross-linking across the dimerization interface. Previously, Nguitragool and Miller (2007) demonstrated that the CLC-ec1 dimer spontaneously cross-linked via a disulfide bond between R230C and L249C during expression and/or purification. R230C/L249C was introduced onto the C85A/H234C WT background (Fig. 4 A), purified as a dimer in detergent micelles, and ran as a dimer on nonreducing SDS-PAGE (Fig. 4, B and C). The disulfide bond was not modified by the reducing agent tris(2-carboxyethyl)phosphine (TCEP) included in the purification, which allows for the protein to remain reactive for Cy5 labeling comparable to the WT (PCy5 = 0.72 ± 0.02, n = 5). We interpret this as a disulfide bond formed between R230C and L249C, with H234C available for Cy5 labeling. However, it is possible that L249C may form the disulfide bond with H234C instead because they are positioned at a similar distance. Still, the comparable labeling yield suggests that the cysteine being modified is H234C, which is directly accessible to the surrounding solution, as opposed to R230C, which is visibly buried in the crystal structure. The photobleaching probability distribution shows that R230C/L249C corresponds to the ideal dimer simulation based on the updated 2:1 POPE/POPG liposome size distribution, as well as the saturating range of the WT data (Fig. 4 F). In addition, chloride transport function of the R230C/L249C proteoliposomes reconstituted at χ* = 10−5 subunits/lipid showed comparable function to WT (Fig. 4, D and E). Therefore, R230C/L249C provides a functionally competent dimer control for CLC-ec1 dimerization reactions, preserving the native functional fold.
With these ideal monomer and dimer controls, we recalculated FDimer vs. χ* subunits/lipid for the WT (Fig. 5) and W (Fig. 6) constructs. The fits of the equilibrium dimerization isotherm are improved using either WT + glutaraldehyde or R230C/L249C for the dimer state and WW defining the monomeric state. However, there is no significant difference in the ΔG° values obtained using either the empirical controls or the Poisson simulation based on the 2:1 POPE/POPG distribution (Table 2 and Fig. 7). This agreement demonstrates the overall robustness of this method, whereas the development of empirical controls greatly simplifies the practical requirements of the subunit-capture approach.
The single-molecule subunit-capture method presents a way of measuring protein association reactions in membranes by passive dilution. It does not require actual knowledge of the protein structure, as the fluorophore could arbitrarily be attached to one of the termini, but it does require quantitative fluorescent labeling of the protein of interest. From there, the protein is reconstituted and introduced into the MLV state and incubated as a function of time and temperature, and then the equilibrium distribution is reported through the capture statistics of protein into liposomes. This approach follows the same principles of membrane–protein reconstitution for functional studies (Maduke et al., 1999; Walden et al., 2007; Stockbridge et al., 2013), which can be performed in parallel for rigorous interrogation of the protein fold. If the oligomeric distribution shows a reversible dependency on the density in the membrane, then this provides a way of studying the thermodynamics of membrane–protein association in lipid bilayers.
Although certain aspects of the subunit-capture approach may seem complex, the method addresses several long-standing roadblocks that have limited this area of study. First, equilibrium membrane–protein reactions depend on the membrane-like solvent and not the surrounding water. This has been outlined previously (White and Wimley, 1999) and explicitly shown for the equilibrium association of membrane proteins in detergent micelles (Fleming, 2002). Therefore, the most direct method to dilute membrane proteins is to increase the area of the bilayer. Unfortunately, spontaneous fusion of liposomes is slow, and mixing proteoliposomes with empty vesicles does not readily dilute the reaction. One solution is to drive fusion of liposomes together through repeated freeze–thaw cycles, resulting in the formation of large, 10-µm-diameter MLVs in the case of 2:1 POPE/POPG membranes (Pozo Navas et al., 2005). In this state, subunits may exchange with one another and sample the complete area of the lipid bilayer, resulting in a condition where the new equilibrium can be accessed. It is the protein distribution in this MLV state that reflects the reaction equilibrium, and this is why we measure the statistical distribution of subunit capture rather than the actual state of the protein that is trapped in the liposomes.
The second issue that arises is the limited protein signal when studying membrane proteins in membranes. Although MLVs allow dilutions as low as χ* = 10−9 subunits/lipid, at a working lipid concentration of 30 mM, this leads to a bulk protein concentration of 30 pM. This is lower than the biological limit of dilution in cell membranes, and it pushes the technical limits of bulk detection methods. Because of this, many studies have been limited to the examination of weaker complexes, where the reaction can be observed at saturating liposome densities (Yano et al., 2002, 2011; Yano and Matsuzaki, 2006; Mathiasen et al., 2014). Alternatively, equilibrium biasing methods, such as redox-driven disulfide exchange (Cristian et al., 2003; North et al., 2006) or steric trapping of the dissociated state by streptavidin binding (Hong et al., 2010), provide elegant approaches to study stronger membrane–protein complexes at high densities in liposomes. However, these methods require a sufficient knowledge of the protein structure for engineering of the protein complex. In contrast, single-molecule photobleaching analysis of subunit capture can be performed without prior knowledge of the protein structure (Stockbridge et al., 2013). Although the fluorophore labeling must be subunit specific, this is a minimal requirement for protein modification and thus presents a general method for investigating membrane–protein oligomerization in membranes. Most importantly, the single-molecule approach means that the protein signal is detected with equal quality at all densities within the membrane. At the lowest limits of dilution, observation of protein spots will become rare, but this problem is simply solved by imaging more fields or loading more liposomes onto the slide.
The development of empirical controls greatly simplifies the subunit-capture approach, adding it to the already existing arsenal of methods for studying membrane–protein oligomerization in lipid bilayers (Cristian et al., 2003; Hong et al., 2010; Yano et al., 2011, 2015; Mathiasen et al., 2014). It is important to note that the controls developed in this study can serve as monomer and dimer benchmarks in the study of other oligomerization reactions as well. For example, WT CLC-ec1 has been used as a dimeric control in the determination of the stoichiometry of the Fluc F channel in liposomes by functional analysis of the Poisson distribution (Stockbridge et al., 2013). These controls should offer reasonable comparisons at low densities where liposomes are rarely occupied (χ* < 10−6). At higher densities, caution must be exercised, because the saturation of liposomes depends on the accessible liposome population, and this may be protein dependent. For example, colocalization microscopy indicates that CLC-ec1 dimers, with ∼10-nm end-to-end distance, are excluded from liposomes with radius smaller than 25 nm, presumably because of curvature effects (Chadda et al., 2016). In general, it is advisable to construct protein-specific controls, if possible, and this study validates methods of constructing monomeric controls by tryptophan mutagenesis (Robertson et al., 2010; Schmidt and Sturgis, 2017; Yu et al., 2017) and dimer controls by intersubunit cross-linking (Nguitragool and Miller, 2007). However, it is also possible to use the Poisson simulation approach if the liposome size distribution is known because our investigation demonstrates that these two methods converge in their quantification of the CLC-ec1 dimerization reaction. With that, we expect that these studies will simplify the methods to study other membrane–protein systems and build a path toward understanding the thermodynamic reasons why greasy membrane proteins form stable complexes in greasy lipid bilayers.
We acknowledge instrumentation support from Tom Moninger at the Microscopy Core Facility at the University of Iowa and Jonathan Remis at the Structural Biology Facility at Northwestern University.
The Structural Biology Facility is partially supported by the R.H. Lurie Comprehensive Cancer Center of Northwestern University. This research was supported by the National Institutes of Health/National Institute of General Medical Sciences (grants R00GM101016 and R01GM120260) and a Roy J. Carver Charitable Trust Foundation Early Investigator Award.
The authors declare no competing financial interests.
Author contributions: R. Chadda, L. Cliff, M. Brimberry, and J.L. Robertson designed experiments. R. Chadda, L. Cliff, and M. Brimberry carried out the experiments. R. Chadda, L. Cliff, M. Brimberry, and J.L. Robertson analyzed data and wrote the manuscript.
### References

Chadda, R., and J.L. Robertson. 2016. Measuring Membrane Protein Dimerization Equilibrium in Lipid Bilayers by Single-Molecule Fluorescence Microscopy. Methods Enzymol. 581:53–82.

Chadda, R., V. Krishnamani, K. Mersch, J. Wong, M. Brimberry, A., L. Kolmakova-Partensky, L.J. Friedman, J. Gelles, and J.L. Robertson. 2016. The dimerization equilibrium of a ClC Cl(-)/H(+) antiporter in lipid bilayers. eLife. 5:e17438.

Cristian, L., J.D. Lear, and W.F. DeGrado. 2003. Use of thiol-disulfide equilibria to measure the energetics of assembly of transmembrane helices in phospholipid bilayers. Proc. Natl. Acad. Sci. USA. 100:14772–14777.

Fang, Y., L. Kolmakova-Partensky, and C. Miller. 2007. A bacterial arginine-agmatine exchange transporter involved in extreme acid resistance. J. Biol. Chem. 282:176–182.

Fleming, K.G. 2002. Standardizing the free energy change of transmembrane helix-helix interactions. J. Mol. Biol. 323:563–571.

Garcia-Celma, J., A. Szydelko, and R. Dutzler. 2013. Functional characterization of a ClC transporter by solid-supported membrane electrophysiology. J. Gen. Physiol. 141:479–491.

Hong, H., T.M. Blois, Z. Cao, and J.U. Bowie. 2010. Method to measure strong protein-protein interactions in lipid bilayers using a steric trap. Proc. Natl. Acad. Sci. USA. 107:19802–19807.

Kobus, F.J., and K.G. Fleming. 2005. The GxxxG-containing transmembrane domain of the CCK4 oncogene does not encode preferential self-interactions. Biochemistry. 44:1464–1470.

Maduke, M., D.J. Pheasant, and C. Miller. 1999. High-level expression, functional reconstitution, and quaternary structure of a prokaryotic ClC-type chloride channel. J. Gen. Physiol. 114:713–722.

Mathiasen, S., S.M. Christensen, J.J. Fung, S.G.F. Rasmussen, J.F. Fay, S.K. Jorgensen, S. Veshaguri, D.L. Farrens, M. Kiskowski, B. Kobilka, and D. Stamou. 2014. Nanoscale high-content analysis using compositional heterogeneities of single proteoliposomes. Nat. Methods. 11:931–934.

Matulef, K., and M. Maduke. 2005. Side-dependent inhibition of a prokaryotic ClC by DIDS. Biophys. J. 89:1721–1730.

Nguitragool, W., and C. Miller. 2007. CLC Cl-/H+ transporters constrained by covalent cross-linking. Proc. Natl. Acad. Sci. USA. 104:20659–20665.

North, B., L. Cristian, X. Fu Stowell, J.D. Lear, J.G. Saven, and W.F. DeGrado. 2006. Characterization of a membrane protein folding motif, the Ser zipper, using designed peptides. J. Mol. Biol. 359:930–939.

Pozo Navas, B., K. Lohner, G. Deutsch, E. Sevcsik, K.A. Riske, R. Dimova, P. Garidel, and G. Pabst. 2005. Composition dependence of vesicle morphology and mixing properties in a bacterial model membrane system. Biochim. Biophys. Acta. 1716:40–48.

Robertson, J.L., L. Kolmakova-Partensky, and C. Miller. 2010. Design, function and structure of a monomeric ClC transporter. Nature. 468:844–847.

Schindelin, J., I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, et al. 2012. Fiji: an open-source platform for biological-image analysis. Nat. Methods. 9:676–682.

Schmidt, V., and J.N. Sturgis. 2017. Making Monomeric Aquaporin Z by Disrupting the Hydrophobic Tetramer Interface. ACS Omega. 2:3017–3027.

Schneider, C.A., W.S. Rasband, and K.W. Eliceiri. 2012. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods. 9:671–675.

Stockbridge, R.B., J.L. Robertson, L. Kolmakova-Partensky, and C. Miller. 2013. A family of fluoride-specific ion channels with dual-topology architecture. eLife. 2:e01084.

Tanford, C., and J.A. Reynolds. 1976. Characterization of membrane proteins in detergent solutions. Biochim. Biophys. Acta. 457:133–170.

Walden, M., A. Accardi, F. Wu, C. Xu, C. Williams, and C. Miller. 2007. Uncoupling and turnover in a Cl-/H+ exchange transporter. J. Gen. Physiol. 129:317–329.

White, S.H., and W.C. Wimley. 1999. Membrane protein folding and stability: physical principles. Annu. Rev. Biophys. Biomol. Struct. 28:319–365.

Yano, Y., and K. Matsuzaki. 2006. Measurement of thermodynamic parameters for hydrophobic mismatch 1: self-association of a transmembrane helix. Biochemistry. 45:3370–3378.

Yano, Y., T. Takemoto, S. Kobayashi, H. Yasui, H. Sakurai, W. Ohashi, M. Niwa, S. Futaki, Y. Sugiura, and K. Matsuzaki. 2002. Topological stability and self-association of a completely hydrophobic model transmembrane helix in lipid bilayers. Biochemistry. 41:3073–3080.

Yano, Y., A. Yamamoto, M. Ogura, and K. Matsuzaki. 2011. Thermodynamics of insertion and self-association of a transmembrane helix: a lipophobic interaction by phosphatidylethanolamine. Biochemistry. 50:6806–6814.

Yano, Y., K. Kondo, R. Kitani, A. Yamamoto, and K. Matsuzaki. 2015. Cholesterol-induced lipophobic interaction between transmembrane helices using ensemble and single-molecule fluorescence resonance energy transfer. Biochemistry. 54:1371–1379.

Yu, X., G. Yang, C. Yan, J.L. Baylon, J. Jiang, H. Fan, G. Lu, K. Hasegawa, H. Okumura, T. Wang, et al. 2017. Dimeric structure of the uracil:proton symporter UraA provides mechanistic insights into the SLC4/23/26 transporters. Cell Res. 27:1020–1033.
# Let $f$ be a function with measurable domain $D$. Show that $f$ is measurable iff the function $g$ is measurable
I am working through some problems in Real Analysis (Royden) and I came across this one.
Let $f$ be a function with measurable domain $D$. Show that $f$ is measurable if and only if the function $g:\mathbb{R}\to \mathbb{R}$, defined by $g(x)=f(x)$ for $x \in D$ and $g(x)=0,$ for $x\notin D$, is measurable.
I realized that in one direction, if $g$ is measurable then $$\{x \in D : f(x)>c\}=\{x \in \mathbb{R}:g(x)>c \}\cap D$$ since $g(x)=f(x)$ for $x \in D$. Because the sets $D$ and $\{x \in \mathbb{R}:g(x)>c \}$ are measurable , $f$ is measurable.
On the other hand if $f$ is measurable, then $$\{x\in\mathbb{R}:g(x)>c\}= \begin{cases} \{x\in D:f(x)>c\}, &\mbox{ if }x\in D\\ D^c, &\mbox{ if }x\notin D \mbox{ and } c<0\\ \varnothing, &\mbox{ if }x\notin D \mbox{ and } c\geq0. \end{cases}$$ Because $\{x\in D:f(x)>c\}$, $D^c$, and $\varnothing$ are measurable sets, it follows that $g$ is measurable.
This illustration is almost clear to me except the part where we consider the cases where $c \ge 0$ and $c<0$. I don't understand why we do that. Can someone explain it to me or give me a similar example.
• @Summarizing the comments, I edited your question. Dec 22 '16 at 7:48
$\Rightarrow$ If $f$ is measurable and $\alpha\in\mathbb{R}$ then
$$\{x\in\mathbb{R}:g(x)<\alpha\}= \begin{cases} \{x\in D:f(x)<\alpha\}, &\mbox{ if }x\in D\\ D^c, &\mbox{ if }x\notin D \mbox{ and } \alpha >0\\ \varnothing, &\mbox{ if }x\notin D \mbox{ and } \alpha \leq0. \end{cases}$$ The case split on the sign of $\alpha$ is needed because $g(x)=0$ for every $x\notin D$: such a point satisfies $g(x)<\alpha$ exactly when $\alpha>0$, so $D^c$ contributes to the preimage only in that case. Each of the sets on the right is measurable, which shows that $g$ is measurable.
$\Leftarrow$ Suppose $g$ is measurable and let $\alpha\in\mathbb{R}$. Then $$f^{-1}((-\infty,\alpha))=g^{-1}((-\infty,\alpha))\cap D.$$ This shows that $f$ is measurable.
• Could it also go this way. $$\Rightarrow$$ if $f$ is mble and $\alpha \in R$ then $$g^{-1}((\alpha ,\infty))= \left\{ \begin{array}{ll} f^{-1}((\alpha ,\infty)) & \text{if} \quad x\in D \\ D^c & \text{if} \quad x \notin D \quad \text{and} \quad \alpha < 0 \\ \phi & \text{if} \quad x\notin D \quad \text{and} \quad\alpha \ge 0 \end{array} \right.$$ And that G is measurable? Dec 22 '16 at 7:21
• Yes, and thank you. Dec 22 '16 at 7:31
• Your first equation doesn't make sense. The right side refers to a variable $x$ that is not present on the left. Dec 22 '16 at 7:51
• @mario, what are the differences between the first and the edited one, can you explain further? if possible in grammar as well. thanks Dec 22 '16 at 8:16
• @J.Kyei The edit is the same as the original, and still has the same issue, although it is now harder to see - the $x$ in the if statement cases is not bound, where as the two set comprehensions have an (unrelated) bound variable $x$. I believe the correct equation is: $$g^{-1}((-\infty,\alpha))=\begin{cases}f^{-1}((-\infty,\alpha))\cup D^c,&\mbox{if }\alpha>0\\f^{-1}((-\infty,\alpha)),&\mbox{if }\alpha\le 0\end{cases}.$$ Dec 22 '16 at 11:56
The second part (the part you are questioning) follows directly from Proposition 5(ii) in your text Real Analysis (Royden).
# If the Equations of Two Diameters of a Circle Are 2x + y = 6 and 3x + 2y = 4 and the Radius is 10, Find the Equation of the Circle
If the equations of two diameters of a circle are 2x + y = 6 and 3x + 2y = 4 and the radius is 10, find the equation of the circle.
#### Solution
Let (h, k) be the centre of a circle with radius a.
Thus, its equation will be
$\left( x - h \right)^2 + \left( y - k \right)^2 = a^2$
Solving 2x + y = 6 and 3x + 2y = 4 simultaneously (from the first equation, y = 6 − 2x; substituting into the second gives 3x + 12 − 4x = 4, so x = 8 and y = −10), the point of intersection is (8, −10).
The diameters of a circle intersect at the centre.
Thus, the coordinates of the centre are (8, −10).
∴ h = 8, k = −10
Thus, the equation of the required circle is
$\left( x - 8 \right)^2 + \left( y + 10 \right)^2 = a^2$
Also, a = 10
Substituting a = 10 in the equation above:
$\left( x - 8 \right)^2 + \left( y + 10 \right)^2 = 100$
$\Rightarrow x^2 + y^2 - 16x + 64 + 100 + 20y = 100$
$\Rightarrow x^2 + y^2 - 16x + 20y + 64 = 0$
Hence, the required equation of the circle is
$x^2 + y^2 - 16x + 20y + 64 = 0$
Concept: Circle - Standard Equation of a Circle
#### APPEARS IN
RD Sharma Class 11 Mathematics Textbook
Chapter 24 The circle
Exercise 24.1 | Q 6 | Page 21 |
# Rational Number
A symbol for the set of rational numbers
The rational numbers (${\displaystyle \mathbb {Q} }$) are included in the real numbers (${\displaystyle \mathbb {R} }$), while themselves including the integers (${\displaystyle \mathbb {Z} }$), which in turn include the natural numbers (${\displaystyle \mathbb {N} }$)
In mathematics, a rational number is a number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[1] For example, -3/7 is a rational number, as is every integer (e.g. 5 = 5/1). The set of all rational numbers, also referred to as "the rationals",[2] the field of rationals[3] or the field of rational numbers is usually denoted by a boldface Q (or blackboard bold ${\displaystyle \mathbb {Q} }$, Unicode 𝐐 MATHEMATICAL BOLD CAPITAL Q or DOUBLE-STRUCK CAPITAL Q);[4] it was thus denoted in 1895 by Giuseppe Peano after quoziente, Italian for "quotient", and first appeared in Bourbaki's Algèbre.[5]
The decimal expansion of a rational number either terminates after a finite number of digits (example: 3/4 = 0.75), or eventually begins to repeat the same finite sequence of digits over and over (example: 9/44 = 0.20454545...).[6] Conversely, any repeating or terminating decimal represents a rational number. These statements are true in base 10, and in every other integer base (for example, binary or hexadecimal).
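As a small illustration of this fact, the following sketch computes the decimal expansion of p/q by long division, marking the repeating block in parentheses (the function name and output format are only illustrative):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Decimal expansion of p/q (0 <= p < q) by long division, with the repetend
// marked in parentheses. A repeated remainder means the digits repeat from there.
std::string decimals(std::int64_t p, std::int64_t q) {
    std::string out = "0.";
    std::map<std::int64_t, std::size_t> seen;   // remainder -> position of its digit
    while (p != 0 && !seen.count(p)) {
        seen[p] = out.size();
        p *= 10;
        out += static_cast<char>('0' + p / q);  // next decimal digit
        p %= q;
    }
    if (p != 0) {                // a remainder repeated: expansion is periodic from there
        out.insert(seen[p], "(");
        out += ")";
    }
    return out;                  // terminating expansions (p == 0) are returned as-is
}
```

For example, decimals(3, 4) returns "0.75" and decimals(9, 44) returns "0.20(45)", matching the examples above.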
A real number that is not rational is called irrational.[5] Irrational numbers include √2, π, e, and φ. The decimal expansion of an irrational number continues without repeating. Since the set of rational numbers is countable, and the set of real numbers is uncountable, almost all real numbers are irrational.[1]
Rational numbers can be formally defined as equivalence classes of pairs of integers (p, q) with q ≠ 0, using the equivalence relation defined as follows:
${\displaystyle \left(p_{1},q_{1}\right)\sim \left(p_{2},q_{2}\right)\iff p_{1}q_{2}=p_{2}q_{1}.}$
The fraction p/q then denotes the equivalence class of (p, q).[7]
Rational numbers together with addition and multiplication form a field which contains the integers, and is contained in any field containing the integers. In other words, the field of rational numbers is a prime field, and a field has characteristic zero if and only if it contains the rational numbers as a subfield. Finite extensions of Q are called algebraic number fields, and the algebraic closure of Q is the field of algebraic numbers.[8]
In mathematical analysis, the rational numbers form a dense subset of the real numbers. The real numbers can be constructed from the rational numbers by completion, using Cauchy sequences, Dedekind cuts, or infinite decimals (for more, see Construction of the real numbers).[]
## Terminology
The term rational in reference to the set Q refers to the fact that a rational number represents a ratio of two integers. In mathematics, "rational" is often used as a noun abbreviating "rational number". The adjective rational sometimes means that the coefficients are rational numbers. For example, a rational point is a point with rational coordinates (i.e., a point whose coordinates are rational numbers); a rational matrix is a matrix of rational numbers; a rational polynomial may be a polynomial with rational coefficients, although the term "polynomial over the rationals" is generally preferred, to avoid confusion between "rational expression" and "rational function" (a polynomial is a rational expression and defines a rational function, even if its coefficients are not rational numbers). However, a rational curve is not a curve defined over the rationals, but a curve which can be parameterized by rational functions.[]
### Etymology
Although nowadays rational numbers are defined in terms of ratios, the term rational is not a derivation of ratio. On the contrary, it is ratio that is derived from rational: the first use of ratio with its modern meaning was attested in English about 1660,[9] while the use of rational for qualifying numbers appeared almost a century earlier, in 1570.[10] This meaning of rational came from the mathematical meaning of irrational, which was first used in 1551, and it was used in "translations of Euclid (following his peculiar use of ἄλογος)".[11][12]
This unusual history originated in the fact that ancient Greeks "avoided heresy by forbidding themselves from thinking of those [irrational] lengths as numbers".[13] So such lengths were irrational, in the sense of illogical, that is "not to be spoken about" (ἄλογος in Greek).[14]
This etymology is similar to that of imaginary numbers and real numbers.
## Arithmetic
### Irreducible fraction
Every rational number may be expressed in a unique way as an irreducible fraction a/b, where a and b are coprime integers and b > 0. This is often called the canonical form of the rational number.
Starting from a rational number a/b, its canonical form may be obtained by dividing a and b by their greatest common divisor, and, if b < 0, changing the sign of the resulting numerator and denominator.
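A short sketch of this reduction in code (illustrative; the struct and function names are not from any particular library):

```cpp
#include <cstdint>
#include <cstdlib>
#include <numeric>   // std::gcd (C++17)

struct Rational {
    std::int64_t num;  // after canonical(): gcd(|num|, den) == 1
    std::int64_t den;  // after canonical(): den > 0
};

// Reduce a/b (b != 0) to canonical form: divide out the gcd and, if the
// denominator is negative, move the sign to the numerator.
Rational canonical(std::int64_t a, std::int64_t b) {
    const std::int64_t g = std::gcd(std::abs(a), std::abs(b));
    a /= g;
    b /= g;
    if (b < 0) { a = -a; b = -b; }
    return {a, b};
}
// Example: canonical(36, -60) yields {-3, 5}.
```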
### Embedding of integers
Any integer n can be expressed as the rational number n/1, which is its canonical form as a rational number.
### Equality
${\displaystyle {\frac {a}{b}}={\frac {c}{d}}}$ if and only if ${\displaystyle ad=bc}$
If both fractions are in canonical form, then:
${\displaystyle {\frac {a}{b}}={\frac {c}{d}}}$ if and only if ${\displaystyle a=c}$ and ${\displaystyle b=d}$[7]
### Ordering
If both denominators are positive (particularly if both fractions are in canonical form):
${\displaystyle {\frac {a}{b}}<{\frac {c}{d}}}$ if and only if ${\displaystyle ad<bc.}$
On the other hand, if either denominator is negative, then each fraction with a negative denominator must first be converted into an equivalent form with a positive denominator--by changing the signs of both its numerator and denominator.[7]
### Addition

Two fractions are added as follows:
${\displaystyle {\frac {a}{b}}+{\frac {c}{d}}={\frac {ad+bc}{bd}}.}$
If both fractions are in canonical form, the result is in canonical form if and only if b and d are coprime integers.[7][15]
### Subtraction
${\displaystyle {\frac {a}{b}}-{\frac {c}{d}}={\frac {ad-bc}{bd}}.}$
If both fractions are in canonical form, the result is in canonical form if and only if b and d are coprime integers.[15][verification needed]
### Multiplication
The rule for multiplication is:
${\displaystyle {\frac {a}{b}}\cdot {\frac {c}{d}}={\frac {ac}{bd}}.}$
where the result may be a reducible fraction--even if both original fractions are in canonical form.[7][15]
### Inverse
Every rational number a/b has an additive inverse, often called its opposite,
${\displaystyle -\left({\frac {a}{b}}\right)={\frac {-a}{b}}.}$
If a/b is in canonical form, the same is true for its opposite.
A nonzero rational number a/b has a multiplicative inverse, also called its reciprocal,
${\displaystyle \left({\frac {a}{b}}\right)^{-1}={\frac {b}{a}}.}$
If a/b is in canonical form, then the canonical form of its reciprocal is either b/a or -b/-a, depending on the sign of a.
### Division
If b, c, and d are nonzero, the division rule is
${\displaystyle {\frac {\frac {a}{b}}{\frac {c}{d}}}={\frac {ad}{bc}}.}$
Thus, dividing a/b by c/d is equivalent to multiplying a/b by the reciprocal of c/d:
${\displaystyle {\frac {ad}{bc}}={\frac {a}{b}}\cdot {\frac {d}{c}}.}$[15][verification needed]
### Exponentiation to integer power
If n is a non-negative integer, then
${\displaystyle \left({\frac {a}{b}}\right)^{n}={\frac {a^{n}}{b^{n}}}.}$
The result is in canonical form if the same is true for a/b. In particular,
${\displaystyle \left({\frac {a}{b}}\right)^{0}=1.}$
If a ≠ 0, then
${\displaystyle \left({\frac {a}{b}}\right)^{-n}={\frac {b^{n}}{a^{n}}}.}$
If a/b is in canonical form, the canonical form of the result is b^n/a^n if a > 0 or n is even. Otherwise, the canonical form of the result is −b^n/−a^n.
## Continued fraction representation
A finite continued fraction is an expression such as
${\displaystyle a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{\ddots +{\cfrac {1}{a_{n}}}}}}}}},}$
where the a_n are integers. Every rational number a/b can be represented as a finite continued fraction, whose coefficients a_n can be determined by applying the Euclidean algorithm to (a, b).
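A minimal sketch of this computation (illustrative names; it assumes b > 0 so that C++ integer division matches the Euclidean algorithm):

```cpp
#include <cstdint>
#include <vector>

// Coefficients [a0; a1, a2, ...] of the finite continued fraction of a/b,
// obtained by repeated division with remainder (the Euclidean algorithm).
std::vector<std::int64_t> continuedFraction(std::int64_t a, std::int64_t b) {
    std::vector<std::int64_t> coeffs;
    while (b != 0) {
        coeffs.push_back(a / b);
        const std::int64_t r = a % b;
        a = b;
        b = r;
    }
    return coeffs;  // e.g. continuedFraction(415, 93) returns {4, 2, 6, 7}
}
```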
## Other representations
Common fractions, decimal expansions, and continued fractions are different ways to represent the same rational value.
## Formal construction
A diagram showing a representation of the equivalence classes of pairs of integers
The rational numbers may be built as equivalence classes of ordered pairs of integers.[7][15]
More precisely, let (Z × (Z \ {0})) be the set of the pairs (m, n) of integers such that n ≠ 0. An equivalence relation is defined on this set by
${\displaystyle \left(m_{1},n_{1}\right)\sim \left(m_{2},n_{2}\right)\iff m_{1}n_{2}=m_{2}n_{1}.}$[7][15]
Addition and multiplication can be defined by the following rules:
${\displaystyle \left(m_{1},n_{1}\right)+\left(m_{2},n_{2}\right)\equiv \left(m_{1}n_{2}+n_{1}m_{2},n_{1}n_{2}\right),}$
${\displaystyle \left(m_{1},n_{1}\right)\times \left(m_{2},n_{2}\right)\equiv \left(m_{1}m_{2},n_{1}n_{2}\right).}$[7]
This equivalence relation is a congruence relation, which means that it is compatible with the addition and multiplication defined above; the set of rational numbers Q is then defined as the quotient set by this equivalence relation, (Z × (Z \ {0})) / ~, equipped with the addition and the multiplication induced by the above operations. (This construction can be carried out with any integral domain and produces its field of fractions.)[7]
The equivalence class of a pair (m, n) is denoted m/n. Two pairs (m1, n1) and (m2, n2) belong to the same equivalence class (that is, are equivalent) if and only if m1n2 = m2n1. This means that m1/n1 = m2/n2 if and only if m1n2 = m2n1.[7][15]
Every equivalence class m/n may be represented by infinitely many pairs, since
${\displaystyle \cdots ={\frac {-2m}{-2n}}={\frac {-m}{-n}}={\frac {m}{n}}={\frac {2m}{2n}}=\cdots .}$
Each equivalence class contains a unique canonical representative element. The canonical representative is the unique pair (m, n) in the equivalence class such that m and n are coprime, and n > 0. It is called the representation in lowest terms of the rational number.
The integers may be considered to be rational numbers by identifying the integer n with the rational number n/1.
A total order may be defined on the rational numbers, that extends the natural order of the integers. One has
${\displaystyle {\frac {m_{1}}{n_{1}}}\leq {\frac {m_{2}}{n_{2}}}}$
if
${\displaystyle (n_{1}n_{2}>0\quad {\text{and}}\quad m_{1}n_{2}\leq n_{1}m_{2})\qquad {\text{or}}\qquad (n_{1}n_{2}<0\quad {\text{and}}\quad m_{1}n_{2}\geq n_{1}m_{2}).}$
## Properties
Illustration of the countability of the positive rationals
The set Q of all rational numbers, together with the addition and multiplication operations shown above, forms a field.[7]
Q has no field automorphism other than the identity.
With the order defined above, Q is an ordered field[15] that has no subfield other than itself, and is the smallest ordered field, in the sense that every ordered field contains a unique subfield isomorphic to Q.[]
Q is a prime field, which is a field that has no subfield other than itself.[16] The rationals are the smallest field with characteristic zero. Every field of characteristic zero contains a unique subfield isomorphic to Q.[]
Q is the field of fractions of the integers Z.[17] The algebraic closure of Q, i.e. the field of roots of rational polynomials, is the field of algebraic numbers.[]
The set of all rational numbers is countable (see the figure), while the set of all real numbers (as well as the set of irrational numbers) is uncountable. Being countable, the set of rational numbers is a null set, that is, almost all real numbers are irrational, in the sense of Lebesgue measure.[]
The rationals are a densely ordered set: between any two rationals, there sits another one, and, therefore, infinitely many other ones.[7] For example, for any two fractions such that
${\displaystyle {\frac {a}{b}}<{\frac {c}{d}}}$
(where ${\displaystyle b,d}$ are positive), we have
${\displaystyle {\frac {a}{b}}<{\frac {a+c}{b+d}}<{\frac {c}{d}}.}$
Any totally ordered set which is countable, dense (in the above sense), and has no least or greatest element is order isomorphic to the rational numbers.[18]
## Real numbers and topological properties
The rationals are a dense subset of the real numbers: every real number has rational numbers arbitrarily close to it.[7] A related property is that rational numbers are the only numbers with finite expansions as regular continued fractions.
By virtue of their order, the rationals carry an order topology. The rational numbers, as a subspace of the real numbers, also carry a subspace topology. The rational numbers form a metric space by using the absolute difference metric d(x, y) = |x − y|, and this yields a third topology on Q. All three topologies coincide and turn the rationals into a topological field. The rational numbers are an important example of a space which is not locally compact. The rationals are characterized topologically as the unique countable metrizable space without isolated points. The space is also totally disconnected. The rational numbers do not form a complete metric space; the real numbers are the completion of Q under the metric d(x, y) = |x − y| above.[15]
In addition to the absolute value metric mentioned above, there are other metrics which turn Q into a topological field:
Let p be a prime number and for any non-zero integer a, let |a|p = p^−n, where p^n is the highest power of p dividing a.
In addition set |0|p = 0. For any rational number a/b, we set |a/b|p = |a|p/|b|p.
Then dp(x, y) = |x − y|p defines a metric on Q.[19]
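A short sketch of this definition in code (illustrative only; overflow and the sign of the inputs are not handled carefully):

```cpp
#include <cmath>
#include <cstdint>

// p-adic valuation v_p(a): exponent of the highest power of the prime p
// dividing a (requires a != 0).
int valuation(std::int64_t a, std::int64_t p) {
    int v = 0;
    while (a % p == 0) { a /= p; ++v; }
    return v;
}

// |a/b|_p = p^{-(v_p(a) - v_p(b))}, with |0|_p = 0 by convention.
double padicAbs(std::int64_t a, std::int64_t b, std::int64_t p) {
    if (a == 0) return 0.0;
    return std::pow(static_cast<double>(p), -(valuation(a, p) - valuation(b, p)));
}
// Example: padicAbs(9, 4, 3) == 1.0 / 9, since v_3(9) = 2 and v_3(4) = 0.
```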
The metric space (Q, dp) is not complete, and its completion is the p-adic number field Qp. Ostrowski's theorem states that any non-trivial absolute value on the rational numbers Q is equivalent to either the usual real absolute value or a p-adic absolute value.
## References
1. ^ a b Rosen, Kenneth (2007). Discrete Mathematics and its Applications (6th ed.). New York, NY: McGraw-Hill. pp. 105, 158-160. ISBN 978-0-07-288008-3.
2. ^ Lass, Harry (2009). Elements of Pure and Applied Mathematics (illustrated ed.). Courier Corporation. p. 382. ISBN 978-0-486-47186-0. Extract of page 382
3. ^ Robinson, Julia (1996). The Collected Works of Julia Robinson. American Mathematical Soc. p. 104. ISBN 978-0-8218-0575-6. Extract of page 104
4. ^ Rouse, Margaret. "Mathematical Symbols". Retrieved 2015.
5. ^ a b Weisstein, Eric W. "Rational Number". mathworld.wolfram.com. Retrieved .
6. ^ "Rational number". Encyclopedia Britannica. Retrieved .
7. Biggs, Norman L. (2002). Discrete Mathematics. India: Oxford University Press. pp. 75-78. ISBN 978-0-19-871369-2.
8. ^ Gilbert, Jimmie; Linda, Gilbert (2005). Elements of Modern Algebra (6th ed.). Belmont, CA: Thomson Brooks/Cole. pp. 243-244. ISBN 0-534-40264-X.
9. ^ Oxford English Dictionary (2nd ed.). Oxford University Press. 1989. Entry ratio, n., sense 2.a.
10. ^ Oxford English Dictionary (2nd ed.). Oxford University Press. 1989. Entry rational, a. (adv.) and n.1, sense 5.a.
11. ^ Oxford English Dictionary (2nd ed.). Oxford University Press. 1989. Entry irrational, a. and n., sense 3.
12. ^ Shor, Peter (2017-05-09). "Does rational come from ratio or ratio come from rational". Stack Exchange. Retrieved .
13. ^ Coolman, Robert (2016-01-29). "How a Mathematical Superstition Stultified Algebra for Over a Thousand Years". Retrieved .
14. ^ Kramer, Edna (1983). The Nature and Growth of Modern Mathematics. Princeton University Press. p. 28.
15. "Fraction - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved .
16. ^ Sūgakkai, Nihon (1993). Encyclopedic Dictionary of Mathematics, Volume 1. London, England: MIT Press. p. 578. ISBN 0-2625-9020-4.
17. ^ Bourbaki, N. (2003). Algebra II: Chapters 4 - 7. Springer Science & Business Media. p. A.VII.5.
18. ^ Giese, Martin; Schönegge, Arno (December 1995). Any two countable densely ordered sets without endpoints are isomorphic - a formal proof with KIV (PDF) (Technical report). Retrieved 2021.
19. ^ Weisstein, Eric W. "p-adic Number". mathworld.wolfram.com. Retrieved . |
Expressions
Calling functions on arrays of data is performed lazily using C++ template expressions. This allows better optimization and does not require saving temporary data.
For example, subtracting one univector from another gives expression type, not univector:
univector<int, 5> x{1, 2, 3, 4, 5};
univector<int, 5> y{0, 0, 1, 10, -5};
auto z = x - y; // z is of type expression, not univector.
// This only constructs an expression and does not perform any calculation
But you can always convert expression back to univector to get actual data:
univector<int, 5> x{1, 2, 3, 4, 5};
univector<int, 5> y{0, 0, 1, 10, -5};
univector<int, 5> z = x - y;
Note
when an expression is assigned to a univector variable, the expression is evaluated and the values are written to the variable.
The same applies to calling KFR functions on univectors: this does not calculate values immediately. Instead, a new expression is created.
univector<float, 5> x{1, 2, 3, 4, 5};
sqrt(x); // only constructs an expression
univector<float, 5> values = sqrt(x); // constructs an expression and writes data to univector
Input expressions can be read from and output expressions can be written to. A class can be an input and an output expression at the same time; univector is an example of such a class.
The data type of an input expression can be determined by using value_type_of<Expression>. However, not all expressions have their types specified. In such cases value_type_of will return the special type generic.
The size (length) of an expression may or may not be specified. counter is an example of a generic (untyped) expression without a size.
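A small combined example of the points above (a sketch; counter() is assumed here to be the callable form of the generic counting expression just mentioned):

```cpp
univector<float, 8> t = counter();   // generic expression; evaluated on assignment
auto expr = sqrt(t * t + 1.0f);      // still an expression: no computation yet
univector<float, 8> out = expr;      // evaluation happens here; data written to out
```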
# Polynomial Identity Testing (PIT)
The Polynomial Identity Testing (PIT) is such a problem: given as input two polynomials, determine whether they are identical. It plays a fundamental role in Identity Testing problems.
First, let's consider the univariate ("one variable") case:
• Input: two polynomials $f, g\in\mathbb{F}[x]$ of degree $d$.
• Determine whether $f\equiv g$ ($f$ and $g$ are identical).
Here the $\mathbb{F}[x]$ denotes the ring of univariate polynomials on a field $\mathbb{F}$. More precisely, a polynomial $f\in\mathbb{F}[x]$ is
$f(x)=\sum_{i=0}^\infty a_ix^i$,
where the coefficients $a_i$ are taken from the field $\mathbb{F}$, and the addition and multiplication are also defined over the field $\mathbb{F}$. And:
• the degree of $f$ is the highest $i$ with non-zero $a_i$;
• a polynomial $f$ is a zero-polynomial, denoted as $f\equiv 0$, if all coefficients $a_i=0$.
Alternatively, we can consider the following equivalent problem by comparing the polynomial $f-g$ (whose degree is at most $d$) with the zero-polynomial:
• Input: a polynomial $f\in\mathbb{F}[x]$ of degree $d$.
• Determine whether $f\equiv 0$ ($f$ is the 0 polynomial).
The problem is trivial if the input polynomial $f$ is given explicitly: one can trivially solve the problem by checking whether all $d+1$ coefficients are $0$. To make the problem nontrivial, we assume that the input polynomial is given implicitly as a black box (also called an oracle): the only way the algorithm can access to $f$ is to evaluate $f(x)$ over some $x$ from the field $\mathbb{F}$, $x$ chosen by the algorithm.
A straightforward deterministic algorithm is to evaluate $f(x_1),f(x_2),\ldots,f(x_{d+1})$ over $d+1$ distinct elements $x_1,x_2,\ldots,x_{d+1}$ from the field $\mathbb{F}$ and check whether they are all zero. By the fundamental theorem of algebra (equivalently, by polynomial interpolation), a nonzero polynomial of degree at most $d$ cannot vanish at $d+1$ distinct points, so this procedure correctly decides whether a degree-$d$ univariate polynomial satisfies $f\equiv 0$.
Fundamental Theorem of Algebra Any non-zero univariate polynomial of degree $d$ has at most $d$ roots.
The reason for this fundamental theorem holding generally over any field $\mathbb{F}$ is that any univariate polynomial of degree $d$ factors uniquely into at most $d$ irreducible polynomials, each of which has at most one root.
The following simple randomized algorithm is natural:
Algorithm for PIT
• suppose we have a finite subset $S\subseteq\mathbb{F}$ (to be specified later);
• pick $r\in S$ uniformly at random;
• if $f(r) = 0$ then return “yes” else return “no”;
This algorithm evaluates $f$ at one point chosen uniformly at random from a finite subset $S\subseteq\mathbb{F}$. It is easy to see the followings:
• If $f\equiv 0$, the algorithm always returns "yes", so it is always correct.
• If $f\not\equiv 0$, the algorithm may wrongly return "yes" (a false positive). But this happens only when the random $r$ is a root of $f$. By the fundamental theorem of algebra, $f$ has at most $d$ roots, so the probability that the algorithm is wrong is bounded as
$\Pr[f(r)=0]\le\frac{d}{|S|}.$
By fixing $S\subseteq\mathbb{F}$ to be an arbitrary subset of size $|S|=2d$, this probability of false positive is at most $1/2$. We can reduce it to an arbitrarily small constant $\delta$ by repeating the above test independently $\log_2 \frac{1}{\delta}$ times, since the error probability decays geometrically as we repeat the algorithm independently.
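A minimal sketch of this test (illustrative: the black-box polynomial is modeled as a callable that returns field elements in canonical form, with 0 denoting the zero element, and the sample set $S=\{0,1,\ldots,2d-1\}$ assumes the field has at least $2d$ distinct elements):

```cpp
#include <cstdint>
#include <functional>
#include <random>

// Randomized identity test for a black-box polynomial f of degree <= d.
// Returns true ("f == 0") only if all independent evaluations are zero; a
// nonzero polynomial survives each round with probability at most d/|S| = 1/2.
bool isZeroPolynomial(const std::function<std::uint64_t(std::uint64_t)>& f,
                      std::uint64_t d, int trials = 20) {
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<std::uint64_t> pickR(0, 2 * d - 1);  // S, |S| = 2d
    for (int t = 0; t < trials; ++t)
        if (f(pickR(rng)) != 0) return false;  // witness found: f is not identically 0
    return true;  // error probability at most 2^-trials when f != 0
}
```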
## Communication Complexity of Equality
Communication complexity was introduced by Andrew Chi-Chih Yao as a model of computation with more than one entity, each with partial information about the input.
Assume that there are two entities, say Alice and Bob. Alice has a private input $a$ and Bob has a private input $b$. Together they want to compute a function $f(a,b)$ by communicating with each other. The communication follows a predefined communication protocol (the "algorithm" in this model). The complexity of a communication protocol is measured by the number of bits communicated between Alice and Bob in the worst case.
The problem of checking identity is formally defined by the function EQ as follows: $\mathrm{EQ}:\{0,1\}^n\times\{0,1\}^n\rightarrow\{0,1\}$ and for any $a,b\in\{0,1\}^n$,
$\mathrm{EQ}(a,b)= \begin{cases} 1& \mbox{if } a=b,\\ 0& \mbox{otherwise.} \end{cases}$
A trivial way to solve EQ is to let Bob send his entire input string $b$ to Alice and let Alice check whether $a=b$. This costs $n$ bits of communications.
It is known that for deterministic communication protocols, this is the best we can get for computing EQ.
Theorem (Yao 1979) Any deterministic communication protocol computing EQ on two $n$-bit strings costs $n$ bits of communication in the worst-case.
This theorem is considerably harder to prove than it looks, because Alice and Bob are allowed to interact with each other in arbitrary ways. The proof is in Yao's celebrated 1979 paper with a humble title, which pioneered the field of communication complexity.
If we allow randomness in protocols, and also tolerate a small probabilistic error, the problem can be solved with significantly less communications. To present this randomized protocol, we need a few preparations:
• We represent the inputs $a,b \in\{0,1\}^{n}$ of Alice and Bob as two univariate polynomials of degree at most $n-1$, respectively
$f(x)=\sum_{i=0}^{n-1}a_ix^{i}$ and $g(x)=\sum_{i=0}^{n-1}b_ix^{i}$.
• The two polynomials $f$ and $g$ are defined over finite field $\mathbb{Z}_p=\{0,1,\ldots,p-1\}$ for some suitable prime $p$ (to be specified later), which means the additions and multiplications are modulo $p$.
The randomized communication protocol is then as follows:
A randomized protocol for EQ Bob does: pick $r\in\mathbb{Z}_p$ uniformly at random; send $r$ and $g(r)$ to Alice; Upon receiving $r$ and $g(r)$ Alice does: compute $f(r)$; If $f(r)= g(r)$ return "yes"; else return "no".
The communication complexity of the protocol is given by the number of bits used to represent the values of $r$ and $g(r)$. Since the polynomials are defined over the finite field $\mathbb{Z}_p$ and the random number $r$ is also chosen from $\mathbb{Z}_p$, this is bounded by $O(\log p)$.
On the other hand, the protocol makes mistakes only when $a\neq b$ but it wrongly answers "yes". This happens only when $f\not\equiv g$ but $f(r)=g(r)$. Since the degrees of $f, g$ are at most $n-1$ and $r$ is chosen among $p$ distinct values, we have
$\Pr[f(r)=g(r)]\le \frac{n-1}{p}$.
By choosing $p$ to be a prime in the interval $[n^2, 2n^2]$ (by Chebyshev's theorem, such a prime $p$ always exists), the above randomized communication protocol solves the Equality function EQ with a false-positive error probability of at most $O(1/n)$ and communication complexity $O(\log n)$, an exponential improvement over any deterministic communication protocol!
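The protocol is short enough to simulate in a few lines of Python, with both players run in one process. The sketch below assumes a suitable prime $p$ has been agreed in advance (any prime in $[n^2,2n^2]$ will do) and uses Horner's rule for the evaluations; the helper names are illustrative rather than standard.

```python
import random

def eval_poly(bits, r, p):
    """Evaluate sum_i bits[i] * r**i over Z_p using Horner's rule."""
    acc = 0
    for b in reversed(bits):
        acc = (acc * r + b) % p
    return acc

def eq_protocol(a, b, p):
    """One-round randomized protocol for EQ on n-bit strings a and b.
    Bob sends (r, g(r)); Alice accepts iff her own f(r) matches.
    If a != b, the protocol wrongly accepts with probability <= (n-1)/p."""
    r = random.randrange(p)                  # Bob's random evaluation point
    message = (r, eval_poly(b, r, p))        # only O(log p) bits are sent
    return eval_poly(a, message[0], p) == message[1]

n, p = 16, 331                               # 331 is a prime in [n^2, 2n^2]
x = [random.randint(0, 1) for _ in range(n)]
print(eq_protocol(x, x, p))                          # True
print(eq_protocol(x, [1 - x[0]] + x[1:], p))         # False except w.p. <= 15/331
```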
## Schwartz-Zippel Theorem
Now let's see the true form of Polynomial Identity Testing (PIT), for multivariate polynomials:
• Input: two $n$-variate polynomials $f, g\in\mathbb{F}[x_1,x_2,\ldots,x_n]$ of degree $d$.
• Determine whether $f\equiv g$.
The $\mathbb{F}[x_1,x_2,\ldots,x_n]$ is the ring of multivariate polynomials over field $\mathbb{F}$. An $n$-variate polynomial of degree $d$, written as a sum of monomials, is:
$f(x_1,x_2,\ldots,x_n)=\sum_{i_1,i_2,\ldots,i_n\ge 0\atop i_1+i_2+\cdots+i_n\le d}a_{i_1,i_2,\ldots,i_n}x_{1}^{i_1}x_2^{i_2}\cdots x_{n}^{i_n}$.
The degree or total degree of a monomial $a_{i_1,i_2,\ldots,i_n}x_{1}^{i_1}x_2^{i_2}\cdots x_{n}^{i_n}$ is given by $i_1+i_2+\cdots+i_n$ and the degree of a polynomial $f$ is the maximum degree of monomials of nonzero coefficients.
As before, we also consider the following equivalent problem:
• Input: a polynomial $f\in\mathbb{F}[x_1,x_2,\ldots,x_n]$ of degree $d$.
• Determine whether $f\equiv 0$.
If $f$ is written explicitly as a sum of monomials, then the problem can be solved by checking whether all coefficients are zero, and there are at most ${n+d\choose d}\le (n+d)^{d}$ coefficients in an $n$-variate polynomial of degree at most $d$.
A multivariate polynomial $f$ can also be presented in its product form, for example:
Example The Vandermonde matrix $M=M(x_1,x_2,\ldots,x_n)$ is defined as that $M_{ij}=x_i^{j-1}$, that is $M=\begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^{n-1}\\ 1 & x_2 & x_2^2 & \dots & x_2^{n-1}\\ 1 & x_3 & x_3^2 & \dots & x_3^{n-1}\\ \vdots & \vdots & \vdots & \ddots &\vdots \\ 1 & x_n & x_n^2 & \dots & x_n^{n-1} \end{bmatrix}$. Let $f$ be the polynomial defined as $f(x_1,\ldots,x_n)=\det(M)=\prod_{j\lt i}(x_i-x_j).$
For polynomials in product form, it is quite efficient to evaluate the polynomial at any specific point from the field over which the polynomial is defined, however, expanding the polynomial to a sum of monomials can be very expensive.
The following is a simple randomized algorithm for testing identity of multivariate polynomials:
Randomized algorithm for multivariate PIT suppose we have a finite subset $S\subseteq\mathbb{F}$ (to be specified later); pick $r_1,r_2,\ldots,r_n\in S$ uniformly and independently at random; if $f(\vec{r})=f(r_1,r_2,\ldots,r_n) = 0$ then return “yes” else return “no”;
This algorithm evaluates $f$ at one point chosen uniformly from an $n$-dimensional cube $S^n$, where $S\subseteq\mathbb{F}$ is a finite subset. And:
• If $f\equiv 0$, the algorithm always returns "yes", so it is always correct.
• If $f\not\equiv 0$, the algorithm may wrongly return "yes" (a false positive). But this happens only when the random $\vec{r}=(r_1,r_2,\ldots,r_n)$ is a root of $f$. The probability of this bad event is upper bounded by the following famous result due to Schwartz (1980) and Zippel (1979).
Schwartz-Zippel Theorem Let $f\in\mathbb{F}[x_1,x_2,\ldots,x_n]$ be a multivariate polynomial of degree $d$ over a field $\mathbb{F}$. If $f\not\equiv 0$, then for any finite set $S\subset\mathbb{F}$, and $r_1,r_2\ldots,r_n\in S$ chosen uniformly and independently at random, $\Pr[f(r_1,r_2,\ldots,r_n)=0]\le\frac{d}{|S|}.$
The Schwartz-Zippel Theorem states that for any nonzero $n$-variate polynomial of degree at most $d$, the number of roots in any cube $S^n$ is at most $d\cdot |S|^{n-1}$.
Dana Moshkovitz gave a surprisingly simple and elegant proof of the Schwartz-Zippel Theorem, using some advanced ideas. Here we introduce the standard proof by induction.
Proof.
The theorem is proved by induction on $n$.

Induction basis: For $n=1$, this is the univariate case. Assume that $f\not\equiv 0$. Due to the fundamental theorem of algebra, any polynomial $f(x)$ of degree at most $d$ must have at most $d$ roots, thus $\Pr[f(r)=0]\le\frac{d}{|S|}.$

Induction hypothesis: Assume the theorem holds for all $m$-variate polynomials with $m\lt n$.

Induction step: For any $n$-variate polynomial $f(x_1,x_2,\ldots,x_n)$ of degree at most $d$, we write $f$ as $f(x_1,x_2,\ldots,x_n)=\sum_{i=0}^kx_n^{i}f_i(x_1,x_2,\ldots,x_{n-1})$, where $k$ is the highest degree of $x_n$, which means the degree of $f_k$ is at most $d-k$ and $f_k\not\equiv 0$.

In particular, we write $f$ as a sum of two parts: $f(x_1,x_2,\ldots,x_n)=x_n^k f_k(x_1,x_2,\ldots,x_{n-1})+\bar{f}(x_1,x_2,\ldots,x_n)$, where both $f_k$ and $\bar{f}$ are polynomials: $f_k\not\equiv 0$ is as above, with degree at most $d-k$, and $\bar{f}(x_1,x_2,\ldots,x_n)=\sum_{i=0}^{k-1}x_n^i f_i(x_1,x_2,\ldots,x_{n-1})$, so that $\bar{f}(x_1,x_2,\ldots,x_n)$ has no $x_n^{k}$ factor in any term.

By the law of total probability, we have \begin{align} &\Pr[f(r_1,r_2,\ldots,r_n)=0]\\ = &\Pr[f(\vec{r})=0\mid f_k(r_1,r_2,\ldots,r_{n-1})=0]\cdot\Pr[f_k(r_1,r_2,\ldots,r_{n-1})=0]\\ &+\Pr[f(\vec{r})=0\mid f_k(r_1,r_2,\ldots,r_{n-1})\neq0]\cdot\Pr[f_k(r_1,r_2,\ldots,r_{n-1})\neq0]. \end{align}

Note that $f_k(r_1,r_2,\ldots,r_{n-1})$ is a polynomial on $n-1$ variables of degree at most $d-k$ such that $f_k\not\equiv 0$. By the induction hypothesis, we have \begin{align} (*) &\qquad &\Pr[f_k(r_1,r_2,\ldots,r_{n-1})=0]\le\frac{d-k}{|S|}. \end{align}

Now we look at the case conditioning on $f_k(r_1,r_2,\ldots,r_{n-1})\neq0$. Recall that $\bar{f}(x_1,\ldots,x_n)$ has no $x_n^k$ factor in any term, thus the condition $f_k(r_1,r_2,\ldots,r_{n-1})\neq0$ guarantees that $f(r_1,\ldots,r_{n-1},x_n)=x_n^k f_k(r_1,r_2,\ldots,r_{n-1})+\bar{f}(r_1,r_2,\ldots,r_{n-1},x_n)=g_{r_1,\ldots,r_{n-1}}(x_n)$ is a nonzero univariate polynomial in $x_n$ of degree $k$, for which we already know that the probability of $g_{r_1,\ldots,r_{n-1}}(r_n)=0$ is at most $\frac{k}{|S|}$. Therefore, \begin{align} (**) &\qquad &\Pr[f(\vec{r})=0\mid f_k(r_1,r_2,\ldots,r_{n-1})\neq0]=\Pr[g_{r_1,\ldots,r_{n-1}}(r_n)=0\mid f_k(r_1,r_2,\ldots,r_{n-1})\neq0]\le\frac{k}{|S|}. \end{align}

Substituting both $(*)$ and $(**)$ back into the total probability, and bounding the remaining conditional probability and the probability of the conditioning event by 1, we have $\Pr[f(r_1,r_2,\ldots,r_n)=0] \le\frac{d-k}{|S|}+\frac{k}{|S|}=\frac{d}{|S|},$ which proves the theorem.
$\square$
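As a concrete illustration of the multivariate test, the following Python sketch checks the Vandermonde identity $\det(M)=\prod_{j\lt i}(x_i-x_j)$ from the example above by evaluating both sides at a single random point of $\mathbb{Z}_p^n$. Using SymPy's exact integer determinant and then reducing mod $p$ is just one convenient way to evaluate the left-hand side; it is an implementation choice for this sketch, not part of the method.

```python
import random
from sympy import Matrix    # used only for an exact integer determinant

def vandermonde_det_mod(xs, p):
    """Evaluate det(M), where M_ij = x_i^(j-1), reduced mod p."""
    n = len(xs)
    return int(Matrix(n, n, lambda i, j: pow(xs[i], j, p)).det()) % p

def product_form_mod(xs, p):
    """Evaluate prod_{j < i} (x_i - x_j) mod p."""
    val = 1
    for i in range(len(xs)):
        for j in range(i):
            val = val * (xs[i] - xs[j]) % p
    return val

# Both sides have degree n(n-1)/2; if the two forms were NOT identical,
# a single random evaluation would agree with probability at most n(n-1)/(2p).
p, n = 10**9 + 7, 6
r = [random.randrange(p) for _ in range(n)]
print(vandermonde_det_mod(r, p) == product_form_mod(r, p))    # True: the identity holds
```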
# Fingerprinting
The polynomial identity testing algorithm in the Schwartz-Zippel theorem can be abstracted as the following framework: Suppose we want to compare two objects $X$ and $Y$. Instead of comparing them directly, we compute random fingerprints $\mathrm{FING}(X)$ and $\mathrm{FING}(Y)$ of them and compare the fingerprints.
The fingerprints have the following properties:
• $\mathrm{FING}(\cdot)$ is a function, meaning that if $X= Y$ then $\mathrm{FING}(X)=\mathrm{FING}(Y)$.
• It is much easier to compute and compare the fingerprints.
• Ideally, the domain of fingerprints is much smaller than the domain of original objects, so storing and comparing fingerprints are easy. This means the fingerprint function $\mathrm{FING}(\cdot)$ cannot be an injection (one-to-one mapping), so it's possible that different $X$ and $Y$ are mapped to the same fingerprint. We resolve this by making fingerprint function randomized, and for $X\neq Y$, we want the probability $\Pr[\mathrm{FING}(X)=\mathrm{FING}(Y)]$ to be small.
In Schwartz-Zippel theorem, the objects to compare are polynomials from $\mathbb{F}[x_1,\ldots,x_n]$. Given a polynomial $f\in \mathbb{F}[x_1,\ldots,x_n]$, its fingerprint is computed as $\mathrm{FING}(f)=f(r_1,\ldots,r_n)$ for $r_i$ chosen independently and uniformly at random from some fixed set $S\subseteq\mathbb{F}$.
With this generic framework, for various identity testing problems, we may design different fingerprints $\mathrm{FING}(\cdot)$.
## Communication protocols for Equality by fingerprinting
Now consider again the communication model where the two players Alice with a private input $x\in\{0,1\}^n$ and Bob with a private input $y\in\{0,1\}^n$ together compute a function $f(x,y)$ by running a communication protocol.
We still consider the communication protocols for the equality function EQ
$\mathrm{EQ}(x,y)= \begin{cases} 1& \mbox{if } x=y,\\ 0& \mbox{otherwise.} \end{cases}$
With the language of fingerprinting, this communication problem can be solved by the following generic scheme:
Communication protocol for EQ by fingerprinting Bob does: choose a random fingerprint function $\mathrm{FING}(\cdot)$ and compute the fingerprint of his input $\mathrm{FING}(y)$; send both the description of $\mathrm{FING}(\cdot)$ and the value of $\mathrm{FING}(y)$ to Alice; Upon receiving the description of $\mathrm{FING}(\cdot)$ and the value of $\mathrm{FING}(y)$, Alice does: compute $\mathrm{FING}(x)$ and check whether $\mathrm{FING}(x)=\mathrm{FING}(y)$.
In this way we have a randomized communication protocol for the equality function EQ with false positive. The communication cost as well as the error probability are reduced to the question of how to design this random fingerprint function $\mathrm{FING}(\cdot)$ to guarantee:
1. A random fingerprint function $\mathrm{FING}(\cdot)$ can be described succinctly.
2. The range of $\mathrm{FING}(\cdot)$ is small, so the fingerprints are succinct.
3. If $x\neq y$, the probability $\Pr[\mathrm{FING}(x)=\mathrm{FING}(y)]$ is small.
### Fingerprinting by PIT
As before, we can define the fingerprint function as: for any bit-string $x\in\{0,1\}^n$, its random fingerprint is $\mathrm{FING}(x)=\sum_{i=1}^n x_i r^{i}$, where the additions and multiplications are defined over a finite field $\mathbb{Z}_p$, and $r$ is chosen uniformly at random from $\mathbb{Z}_p$, where $p$ is some suitable prime which can be represented in $\Theta(\log n)$ bits. More specifically, we can choose $p$ to be any prime from the interval $[n^2, 2n^2]$. Due to Chebyshev's theorem, such prime must exist.
As we have shown before, it takes $O(\log p)=O(\log n)$ bits to represent $\mathrm{FING}(y)$ and to describe the random function $\mathrm{FING}(\cdot)$ (since a random function $\mathrm{FING}(\cdot)$ from this family is uniquely identified by a random $r\in\mathbb{Z}_p$, which can be represented within $\log p=O(\log n)$ bits). And it follows easily from the fundamental theorem of algebra that for any distinct $x, y\in\{0,1\}^n$,
$\Pr[\mathrm{FING}(x)=\mathrm{FING}(y)] \le \frac{n-1}{p}\le \frac{1}{n}.$
### Fingerprinting by randomized checksum
Now we consider a new fingerprint function: We treat each input string $x\in\{0,1\}^n$ as the binary representation of a number, and let $\mathrm{FING}(x)=x\bmod p$ for some random prime $p$ chosen from $[k]=\{0,1,\ldots,k-1\}$, for some $k$ to be specified later.
Now a random fingerprint function $\mathrm{FING}(\cdot)$ can be uniquely identified by this random prime $p$. The new communication protocol for EQ with this fingerprint is as follows:
Communication protocol for EQ by random checksum Bob does: for some parameter $k$ (to be specified), choose a prime $p\in[k]$ uniformly at random; send $p$ and $y\bmod p$ to Alice; Upon receiving $p$ and $y\bmod p$, Alice does: check whether $x\bmod p=y\bmod p$.
The number of bits to be communicated is obviously $O(\log k)$. When $x\neq y$, we want to upper bound the error probability $\Pr[x\bmod p=y\bmod p]$.
Suppose without loss of generality $x\gt y$. Let $z=x-y$. Then $z\lt 2^n$ since $x,y\in[2^n]$, and $z\neq 0$ for $x\neq y$. It holds that $x\equiv y\pmod p$ if and only if $p\mid z$. Therefore, we only need to upper bound the probability
$\Pr[z\bmod p=0]$
for an arbitrarily fixed $0\lt z\lt 2^n$, and a uniform random prime $p\in[k]$.
The probability $\Pr[z\bmod p=0]$ is computed directly as
$\Pr[z\bmod p=0]\le\frac{\mbox{the number of prime divisors of }z}{\mbox{the number of primes in }[k]}$.
For the numerator, any positive $z\lt 2^n$ has at most $n$ prime factors. To see this, by contradiction assume that $z$ has more than $n$ prime factors. Note that any prime number is at least 2. Then $z$ must be greater than $2^n$, contradicting the fact that $z\lt 2^n$.
For the denominator, we need to lower bound the number of primes in $[k]$. This is given by the celebrated Prime Number Theorem (PNT).
Prime Number Theorem Let $\pi(k)$ denote the number of primes less than $k$. Then $\pi(k)\sim\frac{k}{\ln k}$ as $k\rightarrow\infty$.
Therefore, by choosing $k=2n^2\ln n$, we have that for any $0\lt z\lt 2^n$ and a random prime $p\in[k]$,
$\Pr[z\bmod p=0]\le\frac{n}{\pi(k)}\sim\frac{1}{n}$,
which means that for any inputs $x,y\in\{0,1\}^n$ with $x\neq y$, the probability of a false positive is bounded as
$\Pr[\mathrm{FING}(x)=\mathrm{FING}(y)]\le\Pr[|x-y|\bmod p=0]\le \frac{1}{n}$.
Moreover, by this choice of parameter $k=2n^2\ln n$, the communication complexity of the protocol is bounded by $O(\log k)=O(\log n)$.
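A small Python sketch of the checksum fingerprint follows, with the parameter choice $k=2n^2\ln n$ taken from the analysis above. It uses SymPy's randprime to draw a prime below $k$; randprime is not exactly uniform over the primes, so treat this purely as an illustration of the scheme.

```python
import math
import random
from sympy import randprime     # random prime in the interval [a, b)

def checksum_fingerprint(bits, p):
    """FING(x) = x mod p, reading the bit string as a binary integer."""
    x = int("".join(map(str, bits)), 2)
    return x % p

n = 64
k = int(2 * n * n * math.log(n))            # k = 2 n^2 ln n
p = randprime(2, k)                          # Bob's random prime (approximately uniform)
x = [random.randint(0, 1) for _ in range(n)]
y = list(x)
y[-1] ^= 1                                   # y differs from x in the last bit
print(checksum_fingerprint(x, p) == checksum_fingerprint(x, p))   # True
print(checksum_fingerprint(x, p) == checksum_fingerprint(y, p))   # False: p never divides |x - y| = 1
```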
# Checking distinctness
Consider the following problem of checking distinctness:
• Given a sequence $x_1,x_2,\ldots,x_n\in\{1,2,\ldots,n\}$, check whether every element of $\{1,2,\ldots,n\}$ appears exactly once.
Obviously this problem can be solved in linear time and linear space (in addition to the space for storing the input) by maintaining an $n$-bit vector that indicates which numbers among $\{1,2,\ldots,n\}$ have appeared.
When this $n$ is enormously large, $\Omega(n)$ space cost is too expensive. We wonder whether we could solve this problem with a space cost (in addition to the space for storing the input) much less than $O(n)$. This can be done by fingerprinting if we tolerate a certain degree of inaccuracy.
## Fingerprinting multisets
We consider the following more generalized problem, checking identity of multisets:
• Input: two multisets $A=\{a_1,a_2,\ldots, a_n\}$ and $B=\{b_1,b_2,\ldots, b_n\}$ where $a_1,a_2,\ldots,a_n,b_1,b_2,\ldots,b_n\in \{1,2,\ldots,n\}$.
• Determine whether $A=B$ (multiset equivalence).
Here for a multiset $A=\{a_1,a_2,\ldots, a_n\}$, its elements $a_i$ are not necessarily distinct. The multiplicity of an element $a_i$ in a multiset $A$ is the number of times $a_i$ appears in $A$. Two multisets $A$ and $B$ are equivalent if they contain the same set of elements and the multiplicities of every element in $A$ and $B$ are equal.
Obviously the above problem of checking distinctness can be treated as a special case of checking identity of multisets: by checking the identity of the multiset $A$ and set $\{1,2,\ldots, n\}$.
The following fingerprinting function for multisets was introduced by Lipton for solving multiset identity testing.
Fingerprint for multiset Let $p$ be a uniform random prime chosen from the interval $[(n\log n)^2,2(n\log n)^2]$. By Chebyshev's theorem, such a prime must exist. Consider the finite field $\mathbb{Z}_p=[p]$. Given a multiset $A=\{a_1,a_2,\ldots,a_n\}$, we define a univariate polynomial $f_A\in\mathbb{Z}_p[x]$ over the finite field $\mathbb{Z}_p$ as follows: $f_A(x)=\prod_{i=1}^n(x-a_i)$, where $+$ and $\cdot$ are defined over the finite field $\mathbb{Z}_p$. We then define the random fingerprinting function as: $\mathrm{FING}(A)=f_A(r)=\prod_{i=1}^n(r-a_i)$, where $r$ is chosen uniformly at random from $\mathbb{Z}_p$.
Since all computations of $\mathrm{FING}(A)=\prod_{i=1}^n(r-a_i)$ are over the finite field $\mathbb{Z}_p$, the space cost for computing the fingerprint $\mathrm{FING}(A)$ is only $O(\log p)=O(\log n)$.
Moreover, the fingerprinting function $\mathrm{FING}(A)$ is invariant under permutation of elements of the multiset $A=\{a_1,a_2,\ldots,a_n\}$, thus it is indeed a function of multisets (meaning every multiset has only one fingerprint). Therefore, if $A=B$ then $\mathrm{FING}(A)=\mathrm{FING}(B)$.
For two distinct multisets $A\neq B$, it is possible that $\mathrm{FING}(A)=\mathrm{FING}(B)$, but the following theorem due to Lipton bounds this error probability of false positive.
Theorem (Lipton 1989) Let $A=\{a_1,a_2,\ldots,a_n\}$ and $B=\{b_1,b_2,\ldots,b_n\}$ be two multisets whose elements are from $\{1,2,\ldots,n\}$. If $A\neq B$, then $\Pr[\mathrm{FING}(A)= \mathrm{FING}(B)]=O\left(\frac{1}{n}\right)$.
Proof.
Let $\tilde{f}_A(x)=\prod_{i=1}^n(x-a_i)$ and $\tilde{f}_B(x)=\prod_{i=1}^n(x-b_i)$ be two univariate polynomials defined over the reals $\mathbb{R}$. Note that, in contrast to $f_A(x)$ and $f_B(x)$, the $+$ and $\cdot$ in $\tilde{f}_A(x), \tilde{f}_B(x)$ are not taken modulo $p$. It is easy to verify that the polynomials $\tilde{f}_A(x), \tilde{f}_B(x)$ have the following properties:

• $\tilde{f}_A\equiv \tilde{f}_B$ if and only if $A=B$. Here $A=B$ means multiset equivalence.
• By the properties of the finite field, for any value $r\in\mathbb{Z}_p$, it holds that $f_A(r)=\tilde{f}_A(r)\bmod p$ and $f_B(r)=\tilde{f}_B(r)\bmod p$.

Therefore, assuming that $A\neq B$, we must have $\tilde{f}_A(x)\not\equiv \tilde{f}_B(x)$. Then by the law of total probability: \begin{align} \Pr[\mathrm{FING}(A)= \mathrm{FING}(B)] &= \Pr\left[f_A(r)=f_B(r)\mid f_A\not\equiv f_B\right]\Pr[f_A\not\equiv f_B]\\ &\quad\,\,+\Pr\left[f_A(r)=f_B(r)\mid f_A\equiv f_B\right]\Pr[f_A\equiv f_B]\\ &\le \Pr\left[f_A(r)=f_B(r)\mid f_A\not\equiv f_B\right]+\Pr[f_A\equiv f_B]. \end{align}

Note that the degrees of $f_A,f_B$ are at most $n$ and $r$ is chosen uniformly from $[p]$. By the Schwartz-Zippel theorem for univariate polynomials, the first probability satisfies $\Pr\left[f_A(r)=f_B(r)\mid f_A\not\equiv f_B\right]\le \frac{n}{p}=o\left(\frac{1}{n}\right),$ since $p$ is chosen from the interval $[(n\log n)^2,2(n\log n)^2]$.

For the second probability $\Pr[f_A\equiv f_B]$, recall that $\tilde{f}_A\not\equiv \tilde{f}_B$, therefore there is at least one non-zero coefficient $c\le n^n$ in $\tilde{f}_A-\tilde{f}_B$. The event $f_A\equiv f_B$ occurs only if $c\bmod p=0$, which means \begin{align} \Pr[f_A\equiv f_B] &\le \Pr[c\bmod p=0]\\ &\le\frac{\text{number of prime factors of }c}{\text{number of primes in }[(n\log n)^2,2(n\log n)^2]}\\ &\le \frac{n\log_2n}{\pi(2(n\log n)^2)-\pi((n\log n)^2)}. \end{align} By the prime number theorem, $\pi(N)\sim \frac{N}{\ln N}$ as $N\to\infty$. Therefore, $\Pr[f_A\equiv f_B]=O\left(\frac{n\log n}{n^2\log n}\right)=O\left(\frac{1}{n}\right).$

Combining everything together, we have $\Pr[\mathrm{FING}(A)= \mathrm{FING}(B)]=O\left(\frac{1}{n}\right)$.
$\square$
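To close the section, here is a minimal Python sketch of Lipton's multiset fingerprint; the prime is drawn from $[(n\log n)^2,2(n\log n)^2]$ with SymPy's randprime (only approximately uniform), and the sample multisets are illustrative. Checking distinctness of a sequence then amounts to comparing its fingerprint against that of the set $\{1,2,\ldots,n\}$.

```python
import math
import random
from sympy import randprime

def multiset_fingerprint(A, r, p):
    """FING(A) = prod_i (r - a_i) over Z_p; invariant under reordering of A."""
    val = 1
    for a in A:
        val = val * (r - a) % p
    return val

n = 100
lo = int((n * math.log(n)) ** 2)
p = randprime(lo, 2 * lo)                    # prime in [(n log n)^2, 2 (n log n)^2]
r = random.randrange(p)

A = list(range(1, n + 1))                    # the set {1, ..., n}
B = A[:]
random.shuffle(B)                            # same multiset, different order
C = A[:-1] + [1]                             # 1 appears twice, n is missing
print(multiset_fingerprint(A, r, p) == multiset_fingerprint(B, r, p))   # True
print(multiset_fingerprint(A, r, p) == multiset_fingerprint(C, r, p))   # False w.p. 1 - O(1/n)
```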
# What is the intersection of the closures of left invertible operators and right invertible operators?

From Douglas Zare's answer (see http://mathoverflow.net/questions/99777/does-x-embed-in-y-and-y-embed-in-x-always-imply-that-x-isomorphic-on), one knows that $$\overline{G_{l}(X,Y)} \bigcap \overline{G_{r}(X,Y) } = \overline{G(X,Y)}$$ does not hold in general, where $G_{l}(X,Y)$, $G_{r}(X,Y)$ and $G(X,Y)$ denote the sets of left invertible operators, right invertible operators and invertible operators. (We say an operator $T$ is left invertible if $ST=I$ for some operator $S$.) But does this equality hold when $X=Y$? If not, for what kinds of $X$ does it hold? Furthermore, does this equality hold when $B(X)$ is replaced by a Banach algebra $A$ with an identity?
# Are there any “cumulative signatures” for decentralized networks?
Let's assume that there is a decentralized network $N$ with participants $A,B,C,D$ and that there was a message $m$ that all of $A,B,C,D$ agreed to. An outsider $X$ wants to know, via signatures, that $m$ was indeed agreed to by all of $N$.
Is there a way of combining the signatures of $A,B,C,D$ such that $X$ just has to check one signature? So if $m$ arrives at $A$ first, then at $B,C,D$, they sign $D(C(B(A(m))))$, which would translate into $N(m)$, which $X$ only has to check once, regardless of the order of $A,B,C,D$ in $D(C(B(A(m))))$. Is there any literature on this, or does such an algorithm not exist (yet)?
The Fundamental Theorem of Arithmetic (FTA), also called the unique factorization theorem, states that every natural number greater than 1 is either a prime or a product of a finite number of primes, and that this factorization is unique except for the rearrangement of the factors. For example, 10 is 2×5, 11 is prime, 12 is 2×2×3, 13 is prime, 14 is 2×7, and 15 is 3×5. With a prime factorization $n=p_1\cdots p_n$, we understand the prime factors $p_j$ of $n$ to be ordered as $p_i\le p_{i+1}$. The theorem was proved by Carl Friedrich Gauss in 1801, and it can be obtained as a corollary of the first of Euclid's theorems (Hardy and Wright 1979). Some people say that it is fundamental because it establishes the importance of primes as the building blocks of positive integers; a common objection is that one could just as easily "build up" the positive integers simply by iterating +1's starting from 0. As for real-life contexts, prime numbers are used to encrypt information through communication networks utilised by mobile phones and the internet, and knowing multiples of 2, 5 and 10 helps when counting coins.

Let $\mathbb{N}=\{0,1,2,3,\ldots\}$ be the set of natural numbers. If $a,b\in\mathbb{Z}$ we say that $a$ divides $b$ (or is a divisor of $b$), and we write $a\mid b$, if $b=ac$ for some $c\in\mathbb{Z}$; thus $2\mid 0$ but $0\nmid 2$. Definition 1.1: a number $p\in\mathbb{N}$ with $p>1$ is said to be prime if $p$ has just two divisors in $\mathbb{N}$, namely 1 and itself; equivalently, a prime is a natural number greater than 1 that is not a product of two smaller natural numbers. The proof of the fundamental theorem of arithmetic, as most commonly presented in textbooks, is done in two steps: one proves the existence of a prime factorization and then its uniqueness.

Before proving the theorem, it is important to realize that not all sets of numbers have this property. Writing UPF-S for unique prime factorization with respect to a set $S$ of primes: if UPF-S holds, then $S$ is infinite; equivalently, if $S$ is finite, then UPF-S is false. The infinitude of $S$ is a necessary condition, but clearly not a sufficient condition, for UPF-S: for instance, the set $S=\{3,5,\ldots\}$ of primes other than 2 is infinite, but UPF-S fails to hold.

The label "fundamental" is attached to several other theorems, with varying justification. The fundamental theorem of calculus, which connects derivatives and integrals, is one of the most important theorems in the history of mathematics: derivatives tell us about the rate at which something changes, and integrals tell us how to accumulate some quantity. Its first part relates the rate at which an integral grows to the function being integrated, indicating that integration and differentiation can be thought of as inverse operations; given an area function that sweeps out the area under $f(t)$, the rate at which area is being swept out equals the height of the original function. By contrast, despite its name, it is often claimed that the fundamental theorem of algebra (which shows that the complex numbers are algebraically closed, that is, every non-constant single-variable polynomial with complex coefficients has at least one complex root; this is not to be confused with the claim that a polynomial of degree $n$ has at most $n$ roots) is not considered fundamental by algebraists, as it is not needed for the development of modern algebra. It was classically stated as asserting that a polynomial may be decomposed into linear and quadratic factors, and, like the fundamental theorem of arithmetic, it is an "existence" theorem: it tells you the roots are there, but it does not help you to find them.
HERE Map Feedback API
Map Feedback API Developer's Guide
# SLI-specific Properties
The SLI blurring request handles information to blur specific images used in the Street Level Imagery product to comply with quality as well as legal and privacy regulations.
• Image Quality
• incorrect location
• obstructed view
• poor quality
• Image Blurring
• license plate
• face
• building
• other
All angles are in decimal degrees. The coordinate system for the angles is polar and azimuth based. The polar angle is measured from up (meaning the normal vector to the surface plane of the Earth) toward the Earth's surface. Its value ranges from 0 to 180: a value of 0 points straight up, a value of 90 is in the plane of the Earth, and a value of 180 points straight down at the ground. The azimuth angle is measured from East in a counter-clockwise fashion as viewed from above. Its value ranges from 0 to 360: a value of 0 points straight East, a value of 90 points North, a value of 180 points West, and a value of 270 points South.
SLI feedback must be sent as POINT data.
Table 1. SLI Blurring Feedback Attributes

| Element | Required | Data Type | Description |
|---|---|---|---|
| error | Yes | String | 900 for SLI |
| domain | Yes | Object | SLI Blurring Request structure with the following attributes encoded. |
| imageId | Yes | String | Image Id as provided by the SLI APIs. |
| subType | Yes | Integer | Specific information about the nature of the request. |
| areaOnImage | No | Object | Structure describing the section of the image to be blurred as a region around a center point. |
| centerPoint | Yes | Object | Describes the center point of the blur region. |
| polarAngleInDegrees | Yes | Number (0-180) | A value of 0 points straight up, a value of 90 is in the plane of the Earth, and a value of 180 points straight down. Property of object centerPoint. |
| azimuthAngleInDegrees | Yes | Number (0-360) | A value of 0 points straight East, a value of 90 points North, a value of 180 points West, and a value of 270 points South. Property of object centerPoint. |
| widthInDegrees | Yes | Number | Size (width) of the affected image region in degrees. |
| heightInDegrees | Yes | Number | Size (height) of the affected image region in degrees. |
Table 2. SLI subType Definitions

| SubType | Code | Description |
|---|---|---|
| Blurring | 1 | license plate |
| Blurring | 2 | face |
| Blurring | 3 | building |
| Blurring | 4 | other |
| Image quality | 11 | incorrect location |
| Image quality | 12 | obstructed view |
| Image quality | 13 | poor quality |
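The tables above describe the attributes, but this section does not show a complete request body, so the Python snippet below only sketches how those attributes might be assembled into a nested payload. The field nesting follows Tables 1 and 2; the surrounding envelope, the placeholder image id, and the specific angle values are assumptions made for illustration only, not a documented wire format.

```python
import json

# Hypothetical SLI blurring feedback payload assembled from Tables 1 and 2.
# Everything not listed in the tables (envelope shape, placeholder values) is assumed.
feedback = {
    "error": "900",                             # 900 for SLI
    "domain": {
        "imageId": "EXAMPLE-IMAGE-ID",          # as provided by the SLI APIs (placeholder)
        "subType": 1,                           # 1 = license plate (see Table 2)
        "areaOnImage": {
            "centerPoint": {
                "polarAngleInDegrees": 95.0,    # just below the plane of the Earth
                "azimuthAngleInDegrees": 180.0, # pointing West
            },
            "widthInDegrees": 4.0,              # width of the blur region in degrees
            "heightInDegrees": 2.5,             # height of the blur region in degrees
        },
    },
}

print(json.dumps(feedback, indent=2))
```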
# dualizing sheaf of a nodal curve
I'm trying to understand the dualizing sheaf $\omega_C$ on a nodal curve $C$, in particular why is $H^1(C,\omega_C)=k$, where $k$ is the algebraically closed ground field. I know this sheaf is defined as the push-forward of the sheaf of rational differentials on the normalization $\tilde{C}$ of $C$ with at most simple poles at the points lying over the nodal points of $C$ and such that the sum of residues at the two points lying over the node will be zero. I can show that this is indeed an invertible sheaf on $C$, but I have no clue, despite my many attempts, how to show that $H^1(C,\omega_C)=k$. I've been able to show it in some very simple cases using Cech cohomology, but can someone explain to me how to do it in general?
• Doesn't Serre duality imply that group is dual to $H^0(C,\mathcal{O}_C)$? – S. Carnahan Mar 15 '11 at 19:04
• Dear Scott, Yes, but my impression was that the OP wanted a direct proof, so to speak. Regards, Matt – Emerton Mar 15 '11 at 19:32
• How does Serre duality imply this? In order to prove Serre Duality for a singular curve one first needs the dualizing sheaf, no? – HNuer Mar 15 '11 at 21:22
• HNuer: It's a question of terminology. A dualizing sheaf is usually understood as the thing which satisfies Grothendieck-Serre duality. So your question is really: why does the thing defined in your second sentence behave (to some extent) ike a dualizing sheaf? – Donu Arapura Mar 15 '11 at 21:47
If $\tilde{C}$ is the normalization, with two points $x$ and $y$ being identified under the map $\pi: \tilde{C} \to C$ to the node $z$ of $C$, then we have an exact sequence $$0 \to \Omega^1_{\tilde C} \to \Omega^1_{\tilde C}(x + y) \to k_x \oplus k_y \to 0,$$ where $k_x$ and $k_y$ are the skyscraper sheaves at the points $x$ and $y$. Pushing forward (which is exact because the map $\pi$ is finite, and so in particular affine) we get an exact sequence $$0 \to \pi_* \Omega^1_{\tilde C} \to \pi_*\Omega^1_{\tilde C}(x+y) \to k_z^{\oplus 2} \to 0.$$ Now there is a short exact sequence $0 \to k_z \to k_z^{\oplus 2} \to k_z \to 0$, where the third arrow is just given by adding the two components, and $\omega_C$ is the preimage of (the first copy of) $k_z$ under the surjection $\pi_* \Omega^1_{\tilde C}(x+y) \to k_z^{\oplus 2}$. In conclusion, we have an exact sequence $$0 \to \pi_* \Omega^1_{\tilde C} \to \omega_{C} \to k_z \to 0.$$
Now taking cohomology (and recalling that $H^i(C,\pi_*\mathcal F) = H^i(\tilde{C},\mathcal F)$ for a coherent sheaf on $\tilde{C}$), we obtain $$0 \to H^0(\tilde{C},\Omega^1_{\tilde C}) \to H^0(C,\omega_C) \to H^0(C,k_z) \to H^1(\tilde{C},\Omega^1_{\tilde C}) \to H^1(C,\omega_C) \to 0.$$ (The point here being that $H^1$ of a skyscraper sheaf such as $k_z$ vanishes.)
I claim that in this exact sequence the map $H^1(\tilde{C},\Omega^1_{\tilde C}) \to H^1(C,\omega_C)$ is an isomorphism, and hence that the latter is one-dimensional, since the former is.
For this, it is equivalent to show that the map $H^0(C,\omega_C) \to H^0(C,k_Z) = k$ is surjective.
Now $H^0(C,\omega_C) \subset H^0(C,\pi_*\Omega^1_{\tilde C}(x+y)) = H^0(\tilde{C},\Omega^1(x+y)).$ The residue theorem shows that we may find a differential $\omega \in H^0(\tilde{C},\Omega^1(x+y))$ whose residues at $x$ and $y$ are non-zero. (These residues are then negative to one another.) Thought of as a section of $H^0(C,\pi_*\Omega^1_{\tilde C}(x+y))$, this differential $\omega$ clearly lies in $H^0(C,\omega_C)$. Its image under the map $H^0(C,\omega_C) \to k$ is non-zero (equal to the residue at either $x$ or at $y$, depending on a choice that was implicitly made above), and so indeed $H^0(C,\omega_C) \to k$ is surjective.
Summary: The residue theorem guarantees the existence of sections of $H^0(C,\omega_C)$ which have non-zero residues at $x$ and $y$ when pulled back to $\tilde{C}$, and this in turn shows that $H^1(C,\omega_C)$ is isomorphic to $H^1(\tilde{C},\Omega^1_{\tilde C})$, and hence is one-dimensional.
• Dear Matt, in the middle of the displayed short exact sequence in the tenth line, I think you meant $\omega_C$ rather than $\omega_{\tilde C}$. Let me use the occasion to thank you for the recurring pleasure of reading your always lucid and beautifully written posts. – Georges Elencwajg Mar 15 '11 at 21:06
• How does the residue theorem imply the existence of differentials with non-zero residues? I thought it just implied that the sum of the residues of any meromorphic differential was zero. – HNuer Mar 15 '11 at 21:58
• Dear Georges, Thank you for the correction and for the kind words. Best wishes, Matt – Emerton Mar 16 '11 at 0:12
• Dear HNuer, I am referring to the stronger result which says that summing to zero is the only obstruction for finding a differential on a smooth projective curve with at worst simple poles with prescribed residues at some finite set of points (and holomorphic everywhere else). If you aren't already familiar with this statement, I'll leave it as an exercise. Best wishes, Matt – Emerton Mar 16 '11 at 0:15
• Dear Matt, I am indeed unfamiliar with that statement, and only know of the result I quoted from Hartshorne's brief discussion in III.7 of his book. Do you know of a reference for the stronger statement? Also, in your answer above, isn't $\Omega^1_{\tilde{C}}(x+y)$ the differentials with zeroes at x and y? Wouldn't $\Omega^1_{\tilde{C}}(-x-y)$ be the sheaf of differentials with poles there? This may just be a stupid question, but I want to make sure I understand. Thanks for all the help. – HNuer Mar 16 '11 at 5:24
# 16.4: Net Ionic Equations
Created by: CK-12
## Lesson Objectives
• Write net ionic equations for double-replacement reactions that produce precipitates, gases, or molecular compounds.
• Write net ionic equations for single-replacement reactions.
• Use the solubility rules to predict precipitate formation.
## Lesson Vocabulary
• ionic equation
• molecular equation
• net ionic equation
• spectator ion
### Recalling Prior Knowledge
• What are the five types of chemical reactions?
• What is dissociation?
Several types of reactions were introduced in the chapter Chemical Reactions: combination, decomposition, single-replacement, double-replacement, and combustion. Single-replacement and double-replacement reactions occur most frequently in aqueous solution. In this lesson, you will learn about various ways to depict these chemical reactions.
## Aqueous Reactions
When ionic compounds are dissolved into water, the polar water molecules break apart the solid crystal lattice, resulting in the hydrated ions being evenly distributed through the water. As you have learned, this process is called dissociation, and it is the reason that ionic compounds tend to be strong electrolytes. When two different ionic compounds that have been dissolved in water are mixed, a chemical reaction may occur between certain pairs of the hydrated ions.
Consider the double-replacement reaction that occurs when a solution of sodium chloride is mixed with a solution of silver nitrate. The driving force behind this reaction is the formation of the silver chloride precipitate.
$\mathrm{NaCl}{(aq)}+\mathrm{AgNO}_{3}{(aq)} \rightarrow \mathrm{NaNO}_{3}{(aq)}+\mathrm{AgCl}{(s)}$
This is called a molecular equation. A molecular equation is an equation in which the formulas of the compounds are written as though all substances exist as molecules. However, there is a better way to show what is happening in this reaction. All of the aqueous compounds can be written as ions, because they are actually present in the water as dissociated ions.
$\mathrm{Na}^+{(aq)}+\text{Cl}^-{(aq)}+\text{Ag}^+{(aq)}+\text{NO}^-_{3}{(aq)} \rightarrow \text{Na}^+{(aq)}+\text{NO}^-_{3}{(aq)}+\text{AgCl}{(s)}$
This equation is called an ionic equation, an equation in which dissolved ionic compounds are shown as free ions.
If you look carefully at the last equation, you will notice that the sodium ion and the nitrate ion appear unchanged on both sides of the equation. When the two solutions are mixed, neither the Na+ nor the NO3 ions participate in the reaction. Although they are still present in the solution, they do not need to be included when describing the chemical reaction that occurs upon mixing.
$\cancel{\text{Na}^+{(aq)}}+\text{Cl}^-{(aq)}+\text{Ag}^+{(aq)}+\cancel{\text{NO}^-_{3}{(aq)}} \rightarrow \cancel{\text{Na}^+{(aq)}}+\cancel{\text{NO}^-_{3}{(aq)}}+\text{AgCl}{(s)}$
A spectator ion is an ion that does not take part in the chemical reaction and is found in solution both before and after the reaction. In the above reaction, the sodium ion and the nitrate ion are both spectator ions. The equation can now be written without the spectator ions.
$\text{Ag}^+{(aq)}+\text{Cl}^-{(aq)} \rightarrow \text{AgCl}{(s)}$
The net ionic equation is the chemical equation that shows only those elements, compounds, and ions that are directly involved in the chemical reaction. Notice that in writing the net ionic equation, the positively charged silver cation was written first on the reactant side, followed by the negatively charged chloride anion. This is somewhat customary because that is the order in which the ions must be written in the silver chloride product. However, it is not absolutely necessary to order the reactants in this way.
Net ionic equations must be balanced by both mass and charge. An equation that is balanced by mass has equal amounts of each element on both sides. Balancing by charge means that the total charge is the same on both sides of the equation. In the above equation, the overall charge is zero, or neutral, on both sides of the equation. As a general rule, if you balance the molecular equation properly, the net ionic equation will end up being balanced by both mass and charge.
Sample Problem 16.9: Writing and Balancing Net Ionic Equations
When aqueous solutions of copper(II) chloride and potassium phosphate are mixed, a precipitate of copper(II) phosphate is formed. Write a balanced net ionic equation for this reaction.
Step 1: Plan the problem.
Write and balance the molecular equation first, making sure that all formulas are correct. Then write the ionic equation, showing all aqueous substances as ions. Carry through any coefficients. Finally, eliminate spectator ions and write the net ionic equation.
Step 2: Solve.
Molecular equation:
$\text{3CuCl}_{2}{(aq)}+2\text{K}_3\text{PO}_{4}{(aq)} \rightarrow 6\text{KCl}{(aq)}+\text{Cu}_3(\text{PO}_4)_{2}{(s)}$
Ionic equation:
$\text{3Cu}^{2+}{(aq)}+6\text{Cl}^-{(aq)}+6\text{K}^+{(aq)}+2\text{PO}^{3-}_{4}{(aq)} \rightarrow 6\text{K}^+{(aq)}+6\text{Cl}^-{(aq)}+\text{Cu}_3(\text{PO}_4)_{2}{(s)}$
Notice that the balancing is carried through when writing the dissociated ions. For example, there are six chloride ions on the reactant side because the coefficient of 3 is multiplied by the subscript of 2 in the copper(II) chloride formula. The spectator ions are K+ and Cl and can be eliminated.
Net ionic equation:
$\text{3Cu}^{2+}{(aq)}+2\text{PO}^{3-}_{4}{(aq)} \rightarrow \text{Cu}_3(\text{PO}_4)_{2}{(s)}$
For a precipitation reaction, the net ionic equation always shows the two ions that come together to form the precipitate. The equation is balanced by mass and charge.
Practice Problem
1. Write the net ionic equation for the reaction of calcium nitrate with lithium hydroxide. The products are aqueous lithium nitrate and a calcium hydroxide precipitate.
Some other double-replacement reactions do not produce a precipitate as one of the products. The production of a gas and/or a molecular compound such as water may also drive the reaction. For example, consider the reaction of a solution of sodium carbonate with a solution of hydrochloric acid (HCl). The products of the reaction are aqueous sodium chloride, carbon dioxide, and water. The balanced molecular equation is:
$\text{Na}_2\text{CO}_{3}{(aq)}+2\text{HCl}{(aq)} \rightarrow 2\text{NaCl}{(aq)}+\text{CO}_{2}{(g)}+\text{H}_2\text{O}{(l)}$
The ionic equation is:
$\text{2Na}^+{(aq)}+\text{CO}^{2-}_{3}{(aq)}+2\text{H}^+{(aq)}+2\text{Cl}^-{(aq)} \rightarrow 2\text{Na}^+{(aq)}+2\text{Cl}^-{(aq)}+\text{CO}_{2}{(g)}+\text{H}_2\text{O}{(l)}$
The sodium and chloride ions are spectator ions, making the final net ionic equation:
$\text{2H}^+{(aq)}+\text{CO}^{2-}_{3}{(aq)} \rightarrow \text{CO}_{2}{(g)}+\text{H}_2\text{O}{(l)}$
You will obtain the correct net ionic equation for any reaction as long as you follow the steps in the examples.
A single-replacement reaction is one in which a pure, neutral element replaces another element in a compound. A neutral element would not carry a charge, so it will not be a spectator ion. The example below shows the reaction of solid magnesium metal with aqueous silver nitrate to form aqueous magnesium nitrate and silver metal.
Balanced molecular equation:
$\text{Mg}{(s)}+2\text{AgNO}_{3}{(aq)} \rightarrow \text{Mg(NO}_{3}{)}_{2}{(aq)}+2\text{Ag}{(s)}$
Ionic equation:
$\text{Mg}{(s)}+2\text{Ag}^+{(aq)}+2\text{NO}^-_{3}{(aq)} \rightarrow \text{Mg}^{2+}{(aq)}+2\text{NO}^-_{3}{(aq)}+2\text{Ag}{(s)}$
The only spectator ion is the nitrate ion, so the net ionic equation is:
$\text{Mg}{(s)}+2\text{Ag}^+{(aq)} \rightarrow \text{Mg}^{2+}{(aq)}+2\text{Ag}{(s)}$
Notice that the overall charge on both sides of the equation is now +2, instead of zero like it was in the previous examples. This is typical for a single-replacement reaction. Because both sides of the reaction carry the same total charge, it is still balanced. This type of single-replacement reaction is called a metal replacement. Other common categories of single-replacement reactions are hydrogen replacement and halogen replacement. These were discussed in the chapter Chemical Reactions.
## Predicting Precipitates
Some combinations of aqueous reactants result in the formation of a solid precipitate as a product. However, some combinations will not produce such a product. If solutions of sodium nitrate and ammonium chloride are mixed, no reaction occurs. One could write a molecular equation showing a double-replacement reaction, but both products, sodium chloride and ammonium nitrate, are soluble and would remain in the solution as ions. Every ion is a spectator ion, so there is no net ionic equation.
It is useful to be able to predict when a precipitate will form from a given mixture of ions. To do so, you can use a set of guidelines called the solubility rules (Table below).
Solubility Rules for Ionic Compounds in Water
| Solubility | Ionic Compound |
|---|---|
| Soluble | Compounds containing the alkali metal ions (Li+, Na+, K+, Rb+, Cs+) or the ammonium ion (NH4+) |
| Soluble | Compounds containing the nitrate ion (NO3−), acetate ion (CH3COO−), chlorate ion (ClO3−), or bicarbonate ion (HCO3−) |
| Mostly soluble | Compounds containing the chloride ion (Cl−), bromide ion (Br−), or iodide ion (I−). Exceptions are those compounds that also contain silver (Ag+), mercury(I) (Hg22+), or lead(II) (Pb2+) |
| Mostly soluble | Compounds containing the sulfate ion (SO42−). Exceptions are the sulfate salts of silver (Ag+), calcium (Ca2+), strontium (Sr2+), barium (Ba2+), mercury(I) (Hg22+), or lead(II) (Pb2+) ions |
| Mostly insoluble | Compounds containing the carbonate ion (CO32−), phosphate ion (PO43−), chromate ion (CrO42−), sulfide ion (S2−), or silicate ion (SiO32−). Exceptions are those compounds that also contain the alkali metals or ammonium |
| Mostly insoluble | Compounds containing the hydroxide ion (OH−). Exceptions are hydroxide salts of the alkali metals and the barium ion (Ba2+) |
As an example of how to use the solubility rules, predict if a precipitate will form when solutions of cesium bromide and lead(II) nitrate are mixed.
$\text{Cs}^+{(aq)}+\text{Br}^-{(aq)}+\text{Pb}^{2+}{(aq)}+2\text{NO}^-_{3}{(aq)} \rightarrow \ ?$
The potential precipitates from a double-replacement reaction are cesium nitrate and lead(II) bromide. According to the solubility rules table, cesium nitrate is soluble because all compounds containing the nitrate ion, as well as all compounds containing the alkali metal ions, are soluble. Most compounds containing the bromide ion are soluble, but lead(II) is an exception. Therefore, the cesium and nitrate ions are spectator ions and the lead(II) bromide is a precipitate. The balanced net ionic reaction is:
$\text{Pb}^{2+}{(aq)}+2\text{Br}^-{(aq)} \rightarrow \text{PbBr}_{2}{(s)}$
## Lesson Summary
• Single-replacement and double-replacement reactions often take place in aqueous solution, in which dissociated ionic compounds are more accurately represented as free ions.
• Spectator ions do not participate directly in the chemical reaction. An equation that shows only the substances in the reaction is called a net ionic equation. Net ionic equations must be balanced by both mass and charge.
• The solubility rules can be used to predict whether a precipitate is produced from a given mixture of ions.
## Lesson Review Questions
### Reviewing Concepts
1. Substances in which state(s) in a molecular equation are broken down into ions to make an ionic equation?
2. One or more of three possible types of products are generally formed in a double-replacement reaction. What are they?
3. What happens to a spectator ion during a chemical reaction?
4. Which statement below is true concerning a net ionic equation?
1. The overall charge on both sides of the equation must be zero.
2. The overall charge on both sides of the equation must be equal.
5. Use the solubility rules to determine whether each of the following compounds is soluble or insoluble in water.
1. (NH4)3PO4
2. CaBr2
3. AgBr
4. Li2SO4
5. Mn(OH)2
6. SrCO3
7. Pt(NO3)2
8. Fe2(CrO4)3
### Problems
1. Write a balanced net ionic equation for the reactions represented by each of the following unbalanced molecular equations:
1. $\text{Na}_3\text{PO}_4{(aq)}+\text{Fe(NO}_3)_3{(aq)} \rightarrow \text{NaNO}_3{(aq)}+\text{FePO}_4{(s)}$
2. $\text{HCl}{(aq)}+\text{Mg(OH)}_2{(s)} \rightarrow \text{MgCl}_2{(aq)}+\text{H}_2\text{O}{(l)}$
3. $\text{Na}{(s)}+\text{H}_2\text{O}{(l)} \rightarrow \text{NaOH}{(aq)}+\text{H}_{2}{(g)}$
4. $\text{K}_2\text{S}{(aq)}+\text{HBr}{(aq)} \rightarrow \text{KBr}{(aq)}+\text{H}_2\text{S}{(g)}$
2. Will a precipitate form when aqueous solutions of the following salts are mixed? If so, write the formula of the precipitate.
1. CaCl2 and Mg(NO3)2
2. K2CrO4 and MgI2
3. NaCH3COO and Ni(ClO3)2
4. Ba(OH)2 and Al2(SO4)3
3. Finish the molecular equations below, balance them, and then write a corresponding net ionic equation. You will need to use the solubility rules to determine whether any of the products will precipitate.
1. $\text{Na}_2\text{CO}_3{(aq)}+\text{ZnCl}_2{(aq)} \rightarrow$
2. $\text{NH}_4\text{Cl}{(aq)}+\text{Pb(NO}_3)_2{(aq)} \rightarrow$
3. $\text{Al}{(s)}+\text{HI}{(aq)} \rightarrow$
4. $\text{HCl}{(aq)}+\text{Ba(OH)}_2{(aq)} \rightarrow$
5. $\text{Zn}{(s)}+\text{Fe(NO}_3)_3{(aq)} \rightarrow$
6. $\text{Cl}_2{(g)}+\text{KBr}{(aq)} \rightarrow$
4. Write a balanced net ionic equation from each of the word equations below.
1. Aqueous cobalt(III) chloride reacts with aqueous ammonium sulfate to produce aqueous ammonium chloride and solid cobalt(III) sulfate.
2. Aluminum metal reacts with a solution of copper(II) acetate to produce aqueous aluminum acetate and copper metal.
3. Nickel metal reacts with sulfuric acid (H2SO4) to produce aqueous nickel(II) sulfate and hydrogen gas.
5. Write a balanced net ionic equation for each of the following situations. You will need to determine the products first.
1. Aqueous solutions of potassium hydroxide and chromium(III) chloride are mixed.
2. Magnesium metal is dipped into a solution of lead(II) nitrate.
3. Solid iron(III) hydroxide is added to a solution of hydrochloric acid.
4. Fluorine gas is bubbled into a solution of sodium iodide.
## Points to Consider
Heat and energy are important concepts in chemistry, as most chemical reactions are accompanied by a transfer of energy.
• What is heat and how does a transfer of heat occur?
• How do different substances respond to an input or loss of heat?
# Introduction
IndexNumR is a package for computing indices of aggregate prices or quantities using information on the prices and quantities of multiple products over multiple time periods. Such indices are routinely computed by statistical agencies to measure, for example, the change in the general level of prices, production inputs and productivity for an economy. Well known examples are consumer price indices and producer price indices.
In recent years, advances have been made in index number theory to address biases in many well known and widely used index number methods. One area of development has been the adaptation of multilateral methods, commonly used in cross-sectional comparisons, to the time series context. This typically involves more computational complexity than bilateral methods. IndexNumR provides functions that make it easy to estimate indices using common index number methods, as well as multilateral methods.
# The IndexNumR package
## Data organisation
This first section covers the inputs into the main index number functions and how the data are to be organised to use these functions.
### Index number input dataframe
The index number functions such as priceIndex, quantityIndex and GEKSIndex all take a dataframe as their first argument. This dataframe should contain everything needed to compute the index. In general this includes columns for,
• prices
• quantities
• a time period variable (more on this below)
• a product identifier that uniquely identifies each product.
One exception to the above is when elementary indexes are estimated using the priceIndex function. A quantity variable is not required in this case because the index is unweighted, and in many cases quantities may not be available (for example, when statistical agencies collect sample prices on individual products).
The dataframe must have column names, since character strings are used in other arguments to the index number functions to specify which columns contain the data listed above. Column names can be set with the colnames function of base R. The sample dataset CES_sigma_2 is an example of the minimum dataframe required to compute an index.
head(CES_sigma_2)
## time prices quantities prodID
## 1 1 2.00 0.3846154 1
## 2 2 1.75 0.5846626 1
## 3 3 1.60 0.7135502 1
## 4 4 1.50 0.9149417 1
## 5 5 1.45 1.0280574 1
## 6 6 1.40 1.2058234 1
In this case, the dataframe is sorted by the product identifier prodID, but it need not be sorted at all.
### The time period variable
To be able to compute indices, the data need to be subset in order to extract all observations on products for given periods. The approach used in IndexNumR is to require a time period variable as an input into many of its functions that will be used for subsetting. This time period variable must satisfy the following,
• start at 1
• increase in integer increments of 1
• continuous (that is, no gaps).
The variable may, and in fact likely will, have many observations for a given time period, since there are generally multiple items with price and quantity information. For example, the CES_sigma_2 dataset has observations on 4 products for each time period. We can see this by observing the first few rows of the dataset sorted by the time period.
head(CES_sigma_2[order(CES_sigma_2$time),])
## time prices quantities prodID
## 1 1 2.00 0.3846154 1
## 13 1 1.00 1.5384615 2
## 25 1 1.00 1.5384615 3
## 37 1 0.50 12.3076923 4
## 2 2 1.75 0.5846626 1
## 14 2 0.50 7.1621164 2
The user can provide their own time variable, or if a date variable is available, IndexNumR has four functions that can compute the required time variable: yearIndex, quarterIndex, monthIndex and weekIndex. Users should be aware that if there are a very large number of observations then these functions can take some time to compute, but once the time variable has been computed it is easier and faster to work with than dates.
### Time aggregation
A related issue is that of aggregating data collected at some higher frequency to a lower frequency. When computing index numbers, this is often done by computing a unit value as follows, $$$UV_{t} = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}q^{t}_{n}}$$$ That is, sum total expenditure on each item over the required period, and divide by the total quantity. Provided that a time period variable as described above is available, the unit values can be computed using the function unitValues. This function returns the unit values, along with the aggregate quantities for each time period and each product. The output also includes the product identifier and time period variable, so the output dataframe from the unitValues function contains everything needed to compute an index number.
## Sample data
IndexNumR provides a sample dataset, CES_sigma_2, that contains prices and quantities on four products over twelve time periods, consistent with consumers displaying CES preferences with an elasticity of substitution equal to two. This dataset is calculated using the method described in (W. Erwin Diewert and Fox 2017). We start with prices for each of $$n$$ products in each of $$T$$ time periods, an n-dimensional vector of preference parameters $$\alpha$$, and a T-dimensional vector of total expenditures. Then calculate the expenditure shares for each product in each time period using, $$$s_{tn} = \frac{\alpha_{n}p_{tn}^{1-\sigma}}{\sum_{n=1}^{N}\alpha_{n}p_{tn}^{1-\sigma}}$$$ and use those shares to calculate the quantities, $$$q_{tn} = \frac{e_{t}s_{tn}}{p_{tn}}$$$
IndexNumR provides the function CESData to produce datasets assuming CES preferences as above for any elasticity of substitution $$\sigma$$, using the prices, $$\alpha$$, and expenditure values assumed in (W. Erwin Diewert and Fox 2017). The vector $$\alpha$$ is, $$$\alpha = \begin{bmatrix} 0.2 & 0.2 & 0.2 & 0.4 \end{bmatrix}$$$ and the prices and expenditures are,

| t | p1 | p2 | p3 | p4 | e |
|---|------|------|------|------|----|
| 1 | 2.00 | 1.00 | 1.00 | 0.50 | 10 |
| 2 | 1.75 | 0.50 | 0.95 | 0.55 | 13 |
| 3 | 1.60 | 1.05 | 0.90 | 0.60 | 11 |
| 4 | 1.50 | 1.10 | 0.85 | 0.65 | 12 |
| 5 | 1.45 | 1.12 | 0.40 | 0.70 | 15 |
| 6 | 1.40 | 1.15 | 0.80 | 0.75 | 13 |
| 7 | 1.35 | 1.18 | 0.75 | 0.70 | 14 |
| 8 | 1.30 | 0.60 | 0.72 | 0.65 | 17 |
| 9 | 1.25 | 1.20 | 0.70 | 0.70 | 15 |
| 10 | 1.20 | 1.25 | 0.40 | 0.75 | 18 |
| 11 | 1.15 | 1.28 | 0.70 | 0.75 | 16 |
| 12 | 1.10 | 1.30 | 0.65 | 0.80 | 17 |

## Matched-sample indexes
A common issue when computing index numbers is that the sample of products over which the index is computed changes over time. Since price and quantity information is generally needed on the same set of products for each pair of periods being compared, the index calculation functions provided in IndexNumR provide the option sample="matched" to use only a matched sample of products. How the matching is performed depends on whether the index is bilateral or multilateral.
For bilateral indices, the price and quantity information is extracted for a pair of periods, any non-overlapping products are removed, and the index is computed over these matched products. This is repeated for each pair of periods over which the index is being computed. For multilateral indexes it is somewhat different. For the GEKS index, the matching is performed for each bilateral comparison that enters into the calculation of the multilateral index (see the section on the GEKS index below). For the Geary-Khamis and Weighted-Time-Product-Dummy methods, matching can be performed over each window of data. That is, only products that appear in all time periods within each calculation window are kept. For these two indexes a matched sample is not required; by default, IndexNumR sets price and quantity to zero for all missing observations, to allow the index to be computed. For the WTPD index, this can be shown to give the same result as running a weighted least squares regression on the available pooled data.
Matched-sample indexes may suffer from bias. As a simple assessment of the potential bias, the function evaluateMatched calculates the proportion of total expenditure that the matched sample covers in each time period. The function provides output for expenditure as well as counts, and can evaluate overlap using either a chained or fixed base index. The first four columns of the output present the base period information: base_index (the time index of the base period), base (total base period expenditure or count), base_matched (the expenditure or count of the base period for matched products), and base_share (share of total expenditure in the base period that remains after matching). Columns 5-8 report the same information for the current period. Columns 4 and 8 can be expressed as, $$$\lambda_{t} = \frac{\sum_{n\in I(1)\cap I(0)}p_{n}^{t}q_{n}^{t}}{\sum_{n\in I(t)}p_{n}^{t}q_{n}^{t}} \quad \text{for } t \in \{1,0\},$$$ where $$I(t)$$ is the set of products available in period $$t$$, $$t=1$$ refers to the current period and is used to compute column 8, and $$t=0$$ refers to the comparison period, which is used to compute column 4.
The count matrix has two additional columns, “new” and “leaving.” The new column gives the number of products that exist in the current period but not the base period (products entering the sample). The leaving column gives the count of products that exist in the base period but not the current period (products leaving the sample). Matching removes both of these types of products.
## Data imputation
### Carry forward/backward prices
An alternative to using a matched sample of products is to impute the missing data. One technique for doing this is to replace missing values with the last actual price observation. If the data have both prices and quantities, then the corresponding quantities are set to zero. If the missing observations occur at the beginning of the time series, then the first actual observation is carried backward to the first time period. IndexNumR performs this carry price imputation with the imputeCarryPrices function; however, this is only needed if the imputed data themselves are of interest. Otherwise, the price index functions can use carry price imputation by setting the parameter imputePrices = "carry". In the example below, the first two observations on product 1 are missing, so the price from the third period is carried backwards to fill the missing observations.
Observations 3 and 4 are missing on product 2, so the price in period 2 is carried forward to fill them. The corresponding quantities are set to zero.
# create a dataset with some missing observations on product 1 and 2
df <- CES_sigma_2[-c(1,2,15,16),]
df <- df[df$prodID %in% 1:2 & df$time <= 6,]
dfMissing <- df[, c("time", "prices", "prodID")] %>%
  tidyr::pivot_wider(id_cols = time, names_from = prodID, values_from = prices)
dfMissing[order(dfMissing$time),]
## # A tibble: 6 x 3
## time 1 2
## <int> <dbl> <dbl>
## 1 1 NA 1
## 2 2 NA 0.5
## 3 3 1.6 NA
## 4 4 1.5 NA
## 5 5 1.45 1.12
## 6 6 1.4 1.15
# compute carry prices
carryPrices <- imputeCarryPrices(df, pvar = "prices", qvar = "quantities",
pervar = "time", prodID = "prodID")
# print the data with the product prices in columns to see the filled data
carryPrices[, c("time", "prices", "prodID")] %>%
tidyr::pivot_wider(id_cols = time, names_from = prodID, values_from = prices)
## # A tibble: 6 x 3
## time 1 2
## <dbl> <dbl> <dbl>
## 1 1 1.6 1
## 2 2 1.6 0.5
## 3 3 1.6 0.5
## 4 4 1.5 0.5
## 5 5 1.45 1.12
## 6 6 1.4 1.15
# print the data with the product quantities in columns to see the corresponding zeros
carryPrices[, c("time", "quantities", "prodID")] %>%
tidyr::pivot_wider(id_cols = time, names_from = prodID, values_from = quantities)
## # A tibble: 6 x 3
## time 1 2
## <dbl> <dbl> <dbl>
## 1 1 0 1.54
## 2 2 0 7.16
## 3 3 0.714 0
## 4 4 0.915 0
## 5 5 1.03 1.72
## 6 6 1.21 1.79
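The imputed prices do not need to be computed separately before estimating an index; as noted above, the price index functions accept the imputePrices = "carry" argument directly. As a small sketch (the choice of a chained Tornqvist index here is arbitrary), an index can be estimated on the df created above, with its missing observations, as follows:
# estimate a chained Tornqvist index on df, carrying prices forward/backward
# for the missing observations (with the corresponding quantities set to zero)
priceIndex(df,
           pvar = "prices",
           qvar = "quantities",
           pervar = "time",
           prodID = "prodID",
           indexMethod = "tornqvist",
           output = "chained",
           imputePrices = "carry")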
## Bilateral index numbers
Bilateral index numbers are those that examine the movement between two periods. All of the bilateral index numbers can be computed as period-on-period, chained or fixed base. Period-on-period simply measures the change from one period to the next. Chained indices give the cumulative change, calculated as the cumulative product of the period-on-period index. The fixed base index compares each period to the base period; it is also called a direct index because, unlike a chained index, it does not go through other periods to measure the change since the base period. Formulae used to compute the bilateral index numbers from period t-1 to period t are given below.
• Carli index (Carli 1804), $\begin{equation*} P(p^{t-1},p^{t}) = \frac{1}{N}\sum_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right) \end{equation*}$
• Jevons index (Jevons 1865), $\begin{equation*} P(p^{t-1},p^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1/N)} \end{equation*}$
• Dutot index (Dutot 1738), $\begin{equation*} P(p^{t-1},p^{t}) = \frac{\sum_{n=1}^{N}p^{t}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}} \end{equation*}$
• Laspeyres index (Laspeyres 1871), $\begin{equation*} P(p^{t-1},p^{t},q^{t-1}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t-1}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{t-1}_{n}} \end{equation*}$
• Paasche index (Paasche 1874) $\begin{equation*} P(p^{t-1},p^{t},q^{t}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{t}_{n}} \end{equation*}$
• Geometric Laspeyres index (Konüs and Byushgens 1926) $\begin{equation*} P(p^{t-1},p^{t},q^{t-1}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{s^{t-1}_{n}}, \end{equation*}$ where $$s^{t}_{n} = \frac{p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}$$ is the share of period $$t$$ expenditure on good $$n$$.
• Geometric Paasche index (Konüs and Byushgens 1926) $\begin{equation*} P(p^{t-1},p^{t},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{s^{t}_{n}}, \end{equation*}$ where $$s^{t}_{n}$$ is defined as above for the geometric laspeyres index.
• Lowe index (Lowe 1823) $\begin{equation*} P(p^{t-1},p^{t},q^{b}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{b}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{b}_{n}}, \end{equation*}$ where $$b$$ can be any period, or range of periods, in the dataset.
• Young index (Young 1812) $\begin{equation*} P(p^{t-1},p^{t},p^{b},q^{b}) = \sum_{n=1}^{N}s^{b}_{n}\frac{p^{t}_{n}}{p^{t-1}_{n}}, \end{equation*}$ where $$b$$ can be any period, or range of periods, in the dataset.
• Drobish index (Drobish 1871) $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = (P_{L}+P_{P})/2, \end{equation*}$ where $$P_{L}$$ is the Laspeyres price index and $$P_{P}$$ is the Paasche price index.
• Marshall-Edgeworth index (Marshall 1887), (Edgeworth 1925) $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}p_{n}^{t}(q_{n}^{t-1}+q_{n}^{t})}{\sum_{n=1}^{N}p_{n}^{t-1}(q_{n}^{t-1}+q_{n}^{t})} \end{equation*}$
• Palgrave index (Palgrave 1886) $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \sum_{n=1}^{N}s^{t}_{n}\frac{p^{t}_{n}}{p^{t-1}_{n}}, \end{equation*}$ where $$s^{t}_{n}$$ is defined as above for the geometric laspeyres index.
• Fisher index (Fisher 1921), $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = [P_{P}P_{L}]^{\frac{1}{2}}, \end{equation*}$ where $$P_{P}$$ is the Paasche index and $$P_{L}$$ is the Laspeyres index. The Fisher index has other representations, but this is the one used by IndexNumR in its computations.
• Tornqvist index (Törnqvist 1936; Törnqvist and Törnqvist 1937), $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{\left(s^{t-1}_{n}+s^{t}_{n}\right)/2}, \end{equation*}$ where $$s^{t}_{n}$$ is defined as above for the geometric laspeyres index.
• Walsh index, $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}\sqrt{q^{t-1}_{n}q^{t}_{n}}\cdot p^{t}_{n}}{\sum_{n=1}^{N}\sqrt{q^{t-1}_{n}q^{t}_{n}}\cdot p^{t-1}_{n}} \end{equation*}$
• Sato-Vartia index (Sato 1976; Vartia 1976), $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{w_{n}} \end{equation*}$ where the weights are normalised to sum to one, $\begin{equation*} w_{n} = \frac{w^{*}_{n}}{\sum_{n=1}^{N}w^{*}_{n}} \end{equation*}$ and $$w^{*}_{n}$$ is the logarithmic mean of the shares, $\begin{equation*} w^{*}_{n} = \frac{s^{t}_{n}-s^{t-1}_{n}}{\log (s^{t}_{n}) - \log (s^{t-1}_{n})} \end{equation*}$
• Geary-Khamis (Khamis 1972) $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}h(q^{t-1}_{n}, q^{t}_{n})p^{t}_{n}}{\sum_{n=1}^{N}h(q^{t-1}_{n}, q^{t}_{n})p^{t-1}_{n}} \end{equation*}$ where h() is the harmonic mean.
• Stuvel index (Stuvel 1957) $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = A + \sqrt{A^2 + V^{t}/V^{t-1}}, \end{equation*}$ where $$V^{t}$$ is value of total sales in period $$t$$, $$A = (P_{L}-Q_{L})/2$$, $$P_{L}$$ is the laspeyres price index and $$Q_{L}$$ is the laspeyres quantity index.
• CES index, also known as the Lloyd-Moulton index (Lloyd 1975; Moulton 1996), $\begin{equation*} P(p^{t-1},p^{t},q^{t-1}) = \left[\sum_{n=1}^{N}s_{n}^{t-1}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1-\sigma)}\right]^{\left(\frac{1}{1-\sigma}\right)}, \end{equation*}$ where $$\sigma$$ is the elasticity of substitution.
### Time dummy methods
• Time-product-dummy
This is a regression model approach where log prices are modelled as a function of time and product dummies. The regression equation is given by,
$\begin{equation*} \ln{p_{n}^{t}} = \alpha + \beta_{1} D^{t} + \sum_{n = 2}^{N}\beta_{n}D_{n} + \epsilon_{n}^{t}, \end{equation*}$ where $$D^{t}$$ is equal to 1 in period $$t$$ and 0 in period $$t-1$$, and $$D_{n}$$ is equal to 1 if the product is product $$n$$ and 0 otherwise.
The price index is then given by, $\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \exp({\hat{\beta_{1}}}) \end{equation*}$
However, this is a biased estimate (Kennedy 1981), so IndexNumR optionally calculates the following adjusted estimate,
$\begin{equation*} P(p^{t-1},p^{t},q^{t-1},q^{t}) = \exp({\hat{\beta_{1}} - 0.5 \times Var(\hat{\beta_{1}})}) \end{equation*}$
The time-product-dummy equation can be estimated in IndexNumR using one of three weighting schemes, chosen via the weights parameter: ordinary least squares; weighted least squares where the weights are the product expenditure shares; or weighted least squares where the weights are the average of the expenditure shares in the two periods. In the first case, the index produced is the same as the matched sample Jevons index, which does not use quantity information. The second option produces a matched sample harmonic share weights index, and the last option produces the matched sample Tornqvist index. See (Walter E. Diewert 2005b) for a discussion of these results.
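To make the regression formulation above concrete, the following sketch fits the unweighted (ordinary least squares) version of the model by hand for the first two periods of CES_sigma_2 and applies the Kennedy adjustment. This is only an illustration of the formulas above, not how IndexNumR computes the index internally, and the object names (tpdData, fit) are arbitrary:
# fit the time-product-dummy regression by OLS for periods 1 and 2 of CES_sigma_2
tpdData <- CES_sigma_2[CES_sigma_2$time %in% 1:2, ]
tpdData$D <- as.numeric(tpdData$time == 2)   # time dummy: 1 in period 2, 0 in period 1
fit <- lm(log(prices) ~ D + factor(prodID), data = tpdData)
b1 <- coef(fit)["D"]                         # estimated time dummy coefficient
v1 <- vcov(fit)["D", "D"]                    # its estimated variance
exp(b1)                                      # unadjusted index (the matched-sample Jevons index)
exp(b1 - 0.5 * v1)                           # Kennedy (1981) adjusted estimate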
### Examples
To estimate a simple chained Laspeyres price index,
priceIndex(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
indexMethod = "laspeyres",
output = "chained")
## [,1]
## [1,] 1.0000000
## [2,] 0.9673077
## [3,] 1.2905504
## [4,] 1.3382002
## [5,] 1.2482444
## [6,] 1.7346552
## [7,] 1.6530619
## [8,] 1.4524186
## [9,] 1.8386215
## [10,] 1.7126802
## [11,] 2.1810170
## [12,] 2.2000474
Estimating multiple different index numbers on the same data is straightforward,
methods <- c("laspeyres","paasche","fisher","tornqvist")
prices <- lapply(methods,
function(x) {priceIndex(CES_sigma_2,
pvar = "prices",
qvar = "quantities",
pervar = "time",
prodID = "prodID",
indexMethod = x,
output = "chained")})
as.data.frame(prices, col.names = methods)
## laspeyres paasche fisher tornqvist
## 1 1.0000000 1.0000000 1.0000000 1.0000000
## 2 0.9673077 0.8007632 0.8801048 0.8925715
## 3 1.2905504 0.8987146 1.0769571 1.0789612
## 4 1.3382002 0.9247902 1.1124543 1.1146080
## 5 1.2482444 0.6715974 0.9155969 0.9327861
## 6 1.7346552 0.7858912 1.1675831 1.1790710
## 7 1.6530619 0.7472454 1.1114148 1.1223220
## 8 1.4524186 0.5836022 0.9206708 0.9379711
## 9 1.8386215 0.6431381 1.0874224 1.0961295
## 10 1.7126802 0.5145138 0.9387213 0.9527309
## 11 2.1810170 0.5736947 1.1185875 1.1288419
## 12 2.2000474 0.5745408 1.1242851 1.1346166
This illustrates the Laspeyres index’s substantial positive bias, the Paasche index’s substantial negative bias, and the similar estimates produced by the Fisher and Tornqvist superlative index numbers.
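Part of the gap between the chained Laspeyres and Paasche indexes reflects chain drift, which is discussed further in the section on chain-linked indices below. As a quick check, the direct (fixed base) Laspeyres comparison of period 12 with period 1 can be computed by hand in base R; this is only an illustrative sketch, not a package feature:
# direct (fixed base) Laspeyres comparing period 12 with period 1, computed by hand
p1 <- CES_sigma_2[CES_sigma_2$time == 1, ]
p12 <- CES_sigma_2[CES_sigma_2$time == 12, ]
p12 <- p12[match(p1$prodID, p12$prodID), ]   # align the product ordering
sum(p12$prices * p1$quantities) / sum(p1$prices * p1$quantities)
The result is roughly 1.33, well below the chained Laspeyres value of about 2.20 in the final row above.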
## The elasticity of substitution parameter
The CES index number method requires an elasticity of substitution parameter in order to be calculated. IndexNumR provides a function elasticity to estimate the elasticity of substitution parameter, following the method of (Balk 2000). The basic method is to solve for the value of the elasticity of substitution that equates the CES index to a comparison index. One comparison index noted by Balk is the ‘current period’ CES index, $$$\left[\sum_{n=1}^{N}s_{n}^{t}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{-(1-\sigma)}\right]^{\left(\frac{-1}{1-\sigma}\right)}.$$$ Therefore, we numerically calculate the value of $$\sigma$$ that solves, $$$\left[\sum_{n=1}^{N}s_{n}^{t-1}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1-\sigma)}\right]^{\left(\frac{1}{1-\sigma}\right)} - \left[\sum_{n=1}^{N}s_{n}^{t}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{-(1-\sigma)}\right]^{\left(\frac{-1}{1-\sigma}\right)} = 0.$$$
This is done using the uniroot function of the stats package distributed with base R. Note that this equation can be used to solve for sigma for any $$t=2,\cdots,T$$, so there are $$T-1$$ potential estimates of sigma. The elasticity function will return all $$T-1$$ estimates as well as the arithmetic mean of the estimates. In addition to the current period CES index, Balk also notes that the Sato-Vartia index can be used, while (Ivancic, Diewert, and Fox 2010) note that a Fisher index could be used. Any of these three indexes can be used as the comparison index by specifying the compIndex option as either "fisher", "ces" or "satovartia". The current period CES index is the default.
The dataset available with IndexNumR, CES_sigma_2, was calculated assuming a CES cost function with an elasticity of substitution equal to 2. Running the elasticity function on this dataset,
elasticity(CES_sigma_2,
pvar="prices",
qvar="quantities",
pervar="time",
prodID="prodID",
compIndex="ces")
## $sigma
## [1] 2
##
## $allsigma
## [,1]
## [1,] 2.000000
## [2,] 2.000001
## [3,] 2.000000
## [4,] 1.999999
## [5,] 2.000000
## [6,] 2.000000
## [7,] 2.000000
## [8,] 2.000000
## [9,] 2.000000
## [10,] 2.000000
## [11,] 2.000000
##
## $diff
## [,1]
## [1,] -5.418676e-09
## [2,] -5.665104e-08
## [3,] 3.426148e-13
## [4,] 1.213978e-07
## [5,] 2.196501e-10
## [6,] -1.141232e-11
## [7,] 3.118616e-13
## [8,] 9.429124e-12
## [9,] -7.997090e-09
## [10,] 4.536105e-11
## [11,] 5.087042e-13
which recovers the value of $$\sigma$$ used to construct the dataset. There is one additional item of output labelled ‘diff.’ This is the value of the difference between the CES index and the comparison index, and is returned so that the user can check that the value of this difference is indeed zero. If it is non-zero, it may indicate that uniroot was not able to find a solution within the specified upper and lower bounds for $$\sigma$$. These bounds can be changed with the options upper and lower of the elasticity function. The defaults are 20 and -20 respectively.
## Chain-linked indices and the linking period problem
One problem with chain-linked indices is the potential for chain drift. Take an example where prices increase in one period and then return to their original level in the next period. An index suffering from chain drift will increase when prices increase, but won’t return to its original level when prices do. In the above examples, it was noted that there is substantial positive bias in the Laspeyres index and substantial negative bias in the Paasche index. Part of this is due to chain drift. One way of reducing the amount of chain drift is to choose linking periods that are ‘similar’ in some sense (alternatively, use a multilateral method). This method of linking has been mentioned by Diewert and Fox (W. Erwin Diewert and Fox 2017), and Hill (Hill 2001) takes the concept further to choose the link period based on a minimum cost spanning tree.
### Dissimilarity measures
To choose the linking period we need a measure of the similarity between two periods. For each period we have information on prices and quantities. The Hill (2001) method compares the two periods based on the Paasche-Laspeyres spread, $$$PL (p^{t},p^{T+1},q^{t},q^{T+1}) = \Bigg|{ln\Bigg(\frac{P_{T+1,t}^{L}}{P_{T+1,t}^{P}}\Bigg)}\Bigg|,$$$ where $$P^{L}$$ is a Laspeyres price index and $$P^{P}$$ is a Paasche price index. Since the Laspeyres and Paasche indices are biased in opposite directions, this choice of similarity measure is designed to choose linking periods that minimise the influence of index number method choice.
Alternative measures exist that compute the dissimilarity of two vectors. Two such measures, recommended by Diewert (Walter E. Diewert 2002), are the weighted log-quadratic index of relative price dissimilarity and the weighted asymptotically linear index of relative price dissimilarity, given by the following, \begin{align} LQ(p^{t},p^{T+1},q^{t},q^{T+1}) = \sum_{n=1}^{N}\frac{1}{2}&(s_{T+1,n} + s_{t,n})[ln(p_{T+1,n}/P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n})]^{2} \label{eq:logQuadratic} \\ AL(p^{t},p^{T+1},q^{t},q^{T+1}) = \sum_{n=1}^{N}\frac{1}{2}&(s_{T+1,n} + s_{t,n})[(p_{T+1,n}/P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n}) + \nonumber \\ & (P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n}/p_{T+1,n}) - 2] \end{align} where $$P(p^{t},p^{T+1},q^{t},q^{T+1})$$ is a superlative index number.
Another measure, proposed by Fox, Hill and Diewert (Fox, Hill, and Diewert 2004), is a measure of absolute dissimilarity given by, $$$AD(x_{j},x_{k}) = \frac{1}{M+N}\sum_{l=1}^{M+N}\Bigg[ln\Bigg(\frac{x_{kl}}{x_{jl}}\Bigg) - \frac{1}{M+N}\sum_{i=1}^{M+N}ln\Bigg(\frac{x_{ki}}{x_{ji}}\Bigg)\Bigg]^{2} + \Bigg[\frac{1}{M+N}\sum_{i=1}^{M+N}ln\Bigg(\frac{x_{ki}}{x_{ji}}\Bigg)\Bigg]^{2},$$$ where $$M+N$$ is the total number of items in the vector and $$x_{j}$$ and $$x_{k}$$ are the two vectors being compared. The authors use this in the context of detecting outliers, but it can be used to compare the price and quantity vectors of two time periods. One way to do this is to use only price information, or only quantity information. There are two ways to use both price and quantity information: stack the price and quantity vectors for each time period into a single vector and compare the two ‘stacked’ vectors; or calculate separate measures of absolute dissimilarity for prices and quantities before combining these into a single measure. The former method is simple to implement, but augments the price vector with a quantity vector that may be of considerably different magnitude and variance. The latter option computes the absolute dissimilarity using prices and quantities separately, then combines them by taking the geometric average.
The final measure is the predicted share measure of relative price dissimilarity employed by Diewert, Finkel, Sayag and White in the Seasonal Products chapter of the Update of the Consumer Price Index Manual, Consumer Price Index Theory (draft available here). To introduce this measure, first we define some notation. The share of expenditure on product $$n$$ in period $$t$$ is given by $$s_{t,n} = p_{t,n}q_{t,n}/ \sum_{i=1}^{K}p_{t,i}q_{t,i}$$. The ‘predicted’ share of expenditure on product $$n$$ in period $$t$$, using the quantities of period $$t$$ and the prices of period $$r$$, is given by $$s_{r,t,n} = p_{r,n}q_{t,n}/ \sum_{i=1}^{K}p_{r,i}q_{t,i}$$. We also define the predicted share error $$e_{r,t,n}$$ as the actual share minus the predicted share, $$s_{t,n} - s_{r,t,n}$$. The predicted share measure of relative price dissimilarity between the periods $$t$$ and $$r$$ is given by: $$$PS_{r,t} = \sum_{n=1}^{N} (e_{r,t,n})^2 + \sum_{n=1}^{N} (e_{t,r,n})^2$$$
When the dataset being used does not have quantities, and an elementary index is being constructed, we cannot compute the shares in the above formulas. In this case, quantities are imputed in such a way that the expenditure shares on each product available in a time period are equal. The quantities are constructed by setting quantity equal to $$1/(P_{n,t}\times N_{t})$$, where $$N_{t}$$ is the number of products available in period $$t$$. IndexNumR does this with the function imputeQuantities; however, price indexes can be estimated without calling this function directly. Calling priceIndex and setting qvar = "" will trigger IndexNumR to impute the quantities used in the estimation of the predicted share relative price dissimilarity measure.
### Estimating similarity-linked indexes
IndexNumR provides three functions enabling the estimation of the dissimilarity measures above. The first function, relativeDissimilarity, calculates the Paasche-Laspeyres spread, log-quadratic, asymptotically linear and predicted share relative dissimilarity measures, and the second function, mixScaleDissimilarity, computes the mix, scale and absolute measures of dissimilarity.
All three functions provide the same output: a data frame with three columns containing the indices of the pairs of periods being compared in the first two columns, and the value of the dissimilarity measure in the third column. Once these have been computed, the function maximumSimilarityLinks can take the output data frame from these functions and compute the maximum similarity linking periods.
The function priceIndex effectively computes a similarity-linked index as follows,
• Compute the measure of dissimilarity between all possible combinations of time periods.
• Set the price index to 1 in the first period.
• Compute the price index for the second period and chain it with the first period, $\begin{equation*} P_{chain}^{2} = P_{chain}^{1} \times P(p^{1},p^{2},q^{1},q^{2}), \end{equation*}$ where $$P(p^{1},p^{2},q^{1},q^{2})$$ is any bilateral index number formula.
• For each period $$t$$ from $$3,\dots,T$$, find the period $$t^{min}$$ with the minimum dissimilarity, comparing period $$t$$ to all periods $$1, \dots, t-1$$.
• Compute the similarity chain-linked index number, $\begin{equation*} P_{chain}^{t} = P_{chain}^{t^{min}} \times P(p^{t^{min}},p^{t},q^{t^{min}},q^{t}) \end{equation*}$
### Examples
Using the log-quadratic measure of relative dissimilarity, the dissimilarity between the periods in the CES_sigma_2 dataset is as follows,
lq <- relativeDissimilarity(CES_sigma_2,
                            pvar="prices",
                            qvar="quantities",
                            pervar = "time",
                            prodID = "prodID",
                            indexMethod = "fisher",
                            similarityMethod = "logquadratic")
head(lq)
## period_i period_j dissimilarity
## 1 1 2 0.09726451
## 2 1 3 0.02037395
## 3 1 4 0.04164311
## 4 1 5 0.28078294
## 5 1 6 0.08880177
## 6 1 7 0.08531212
The output from estimating the dissimilarity between periods can then be used to estimate the maximum similarity links,
maximumSimilarityLinks(lq)
## xt x0 dissimilarity
## 1 1 1 0.000000000
## 2 2 1 0.097264508
## 3 3 1 0.020373951
## 4 4 3 0.003832972
## 5 5 4 0.130990853
## 6 6 4 0.008684012
## 7 7 6 0.001122913
## 8 8 2 0.041022738
## 9 9 7 0.001367896
## 10 10 5 0.006962106
## 11 11 9 0.002946674
## 12 12 11 0.003612044
To estimate a chained Laspeyres index linking together the periods with maximum similarity as estimated above,
priceIndex(CES_sigma_2,
           pvar = "prices",
           qvar = "quantities",
           pervar = "time",
           prodID = "prodID",
           indexMethod = "laspeyres",
           output = "chained",
           chainMethod = "logquadratic")
## [,1]
## [1,] 1.0000000
## [2,] 0.9673077
## [3,] 1.1000000
## [4,] 1.1406143
## [5,] 1.0639405
## [6,] 1.2190887
## [7,] 1.1617463
## [8,] 1.0551558
## [9,] 1.1357327
## [10,] 1.0928877
## [11,] 1.1732711
## [12,] 1.1835084
## Multilateral index numbers
Multilateral index number methods use data from multiple periods to compute each term in the index. IndexNumR provides the functions GEKSIndex, GKIndex and WTPDIndex to use the GEKS, Geary-Khamis or Weighted Time-Product-Dummy multilateral index number methods respectively.
### The GEKS method
The GEKS method is attributable to Gini (Gini 1931), Eltetö and Köves (Eltetö and Köves 1964), and Szulc (Szulc 1964) in the cross-sectional context. The idea of adapting the method to the time series context is due to Balk (Balk 1981), and was developed further by Ivancic, Diewert and Fox (Ivancic, Diewert, and Fox 2011). The user must choose the size of the window over which to apply the GEKS method, typically one or two years of data plus one period to account for seasonality. Denote this as $$w$$. The basic method followed by the function GEKSIndex is as follows.
Choose a period, denoted period $$k$$, within the window as the base period. Calculate a bilateral index number between period $$k$$ and every other period in the window. Repeat this for all possible choices of $$k$$. This gives a matrix of size $$w\times w$$ of bilateral indexes between all possible pairs of periods within the window. Then compute the GEKS indexes for the first $$w$$ periods as, $$$\left[ \prod_{k=1}^{w}P^{k,1} \right]^{1/w}, \left[ \prod_{k=1}^{w}P^{k,2} \right]^{1/w}, \cdots, \left[ \prod_{k=1}^{w}P^{k,w} \right]^{1/w},$$$ where the term $$P^{k,t}$$ is the bilateral index between period $$t$$ and base period $$k$$. IndexNumR offers the Fisher, Tornqvist, Walsh, Jevons and time-product-dummy index number methods for the index $$P$$ via the indexMethod option. The Tornqvist index method is the default. The $$w\times w$$ matrix of bilateral indexes is as follows, $P = \begin{pmatrix} P^{1,1} & \cdots & P^{1,w} \\ \vdots & \ddots & \vdots \\ P^{w,1} & \cdots & P^{w,w} \end{pmatrix}$ So the first term of the GEKS index is the geometric mean of the elements in the first column of the above matrix, the second term is the geometric mean of the second column, and so on. Note that IndexNumR makes use of two facts about the matrix above to speed up computation: it is (inversely) symmetric, so that $$P^{j,k} = 1/P^{k,j}$$; and the diagonal elements are 1.
#### Intersection GEKS (int-GEKS)
The intersection GEKS (int-GEKS) method was developed by Claude Lamboray and Frances Krsinich (C. Lamboray and Krsinich 2015) to deal with the asymmetry with which products enter the index when there are appearing or disappearing products. The issue arises because, when calculating the GEKS index comparing two adjacent periods, products that are not matched for the two periods may still contribute to the index, either in period $$t-1$$ or period $$t$$, but not both. To see this, note that the GEKS index between periods $$t-1$$ and $$t$$, using the window going from period 1 to $$w$$, can be written as: $$$P^{t-1,t}_{[1:w]} = \prod_{k=1}^{w}(P_{t-1,k}\times P_{k,t})^{1/w},$$$ where $$P_{t-1,k}$$ is the bilateral price index between periods $$t-1$$ and $$k$$, and $$P_{k,t}$$ is similarly defined.
The usual GEKS procedure performed by IndexNumR when using the function GEKSIndex and specifying sample = "matched" performs matching between period $$t$$ and period $$k$$ only. The int-GEKS method performs matching between periods $$t-1$$, $$t$$ and $$k$$. Since more matching is performed, fewer data points are used in estimating the index, particularly if product turnover is high. It is also computationally somewhat slower, as more matching must be performed.
### The Geary-Khamis method
The Geary-Khamis, or GK method, was introduced by Geary (Geary 1958) and extended by Khamis (Khamis 1970, 1972). This method involves calculating a set of quality adjustment factors, $$b_{n}$$, simultaneously with the price levels, $$P_{t}$$. The two equations that determine both of these are: $$$b_{n} = \sum_{t=1}^{T}\left[\frac{q_{tn}}{q_{n}}\right]\left[\frac{p_{tn}}{P_{t}}\right]$$$ $$$P_{t} = \frac{p^{t} \cdot q^{t}} {b \cdot q^{t}}$$$ These equations can be solved by an iterative method, where a set of $$b_{n}$$ are arbitrarily chosen and used to calculate an initial vector of price levels. This vector of price levels is then used to generate a new $$b$$ vector, and so on, until the changes become smaller than some threshold.
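As an illustration, the iterative scheme just described can be sketched in a few lines of base R. This is only a sketch of the algorithm, not the implementation used by the package; p and q are hypothetical $$T \times N$$ matrices of prices and quantities, and the starting values and tolerance are arbitrary choices:
# sketch of the iterative Geary-Khamis solution (not the IndexNumR implementation)
# p and q are hypothetical T x N matrices of prices and quantities
gkIterative <- function(p, q, tol = 1e-8, maxit = 1000) {
  qTotal <- colSums(q)                # total quantity of each product over all periods
  b <- rep(1, ncol(p))                # arbitrary starting quality adjustment factors
  for (i in seq_len(maxit)) {
    P <- rowSums(p * q) / as.vector(q %*% b)              # price levels given current b
    bNew <- colSums(sweep(q, 2, qTotal, "/") * (p / P))   # updated b given those price levels
    if (max(abs(bNew - b)) < tol) { b <- bNew; break }
    b <- bNew
  }
  P <- rowSums(p * q) / as.vector(q %*% b)
  list(index = P / P[1], b = b)       # normalise so the first period equals 1
}
For a dataset in which every product is observed in every period, the resulting index should agree with GKIndex up to the convergence tolerance.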
IndexNumR can use the iterative method by specifying the parameter solveMethod = "iterative". However, there is an alternative method using matrix algebra that is significantly more efficient. To use the more efficient method discussed below, specify solveMethod = "inverse". As discussed in (W. Erwin Diewert and Fox 2017), and following Diewert (Walter E. Diewert 1999), the problem of finding the $$b$$ vector can be solved using the following system of equations: $$$\left[I_{N} - C\right]b = 0_{N},$$$ where $$I_{N}$$ is the $$N \times N$$ identity matrix, $$0_{N}$$ is an N-dimensional vector of zeros and the $$C$$ matrix is given by, $$$C = \hat{q}^{-1} \sum_{t=1}^{T}s^{t}q^{t\textbf{T}},$$$ where $$\hat{q}^{-1}$$ is the inverse of an $$N \times N$$ diagonal matrix $$\hat{q}$$ whose diagonal elements are the total quantities purchased for each good over all time periods, $$s^{t}$$ is a vector of the expenditure shares for time period $$t$$, and $$q^{t\textbf{T}}$$ is the transpose of the vector of quantities purchased in time period $$t$$.
It can be shown that the matrix $$[I-C]$$ is singular, so a normalisation is required to solve for $$b$$. IndexNumR follows the method discussed by Irwin Collier Jr. in his comment on (Walter E. Diewert 1999) and assumes the following normalisation, $$$\sum_{n=1}^{N}b_{n}q_{n} = 1,$$$ which is, in matrix form, $$$c = R\begin{bmatrix} b_{1}q_{1} \\ \vdots \\ b_{n}q_{n} \end{bmatrix},$$$ where $$c$$ is the $$N \times 1$$ vector $$\begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix}^{\textbf{T}}$$, and $$R$$ is the $$N \times N$$ matrix, $$$R = \begin{bmatrix} 1 & 1 & \dots & 1 \\ 0 & \dots & \dots & 0 \\ \vdots & & & \vdots \\ 0 & \dots & \dots & 0 \end{bmatrix}$$$ Adding the constraint to the original equation, we now have the solution for $$b$$, $$$b = [I_{N} - C + R]^{-1}c.$$$ Once the $$b$$ vector has been calculated, the price levels can be computed from the GK equations above.
### The Weighted Time-Product-Dummy method
The weighted time-product-dummy method can be seen as the country-product-dummy method (Summers 1973) adapted to the time-series context, and supposes the following model for prices: $$$p_{tn} = \alpha_{t}b_{n}e_{tn},$$$ where $$\alpha_{t}$$ can be interpreted as the price level in period $$t$$, $$b_{n}$$ is the quality adjustment factor for product $$n$$ and $$e_{tn}$$ is a stochastic error term. The problem is to solve for $$\alpha$$ and $$b$$ using least squares minimisation. Following (Rao 1995), it is formulated as a weighted least squares minimisation, where the weights are based on economic importance. Diewert and Fox show that this can be written as the solution to the system of equations, $$$[I_{N} - F]\beta = f,$$$ where $$I_{N}$$ is the $$N \times N$$ identity matrix, $$F$$ is the following $$N \times N$$ matrix, $$$F = \begin{bmatrix} f_{11} & \dots & f_{1N} \\ \vdots & \dots & \vdots \\ f_{N1} & \dots & f_{NN} \end{bmatrix},$$$ the elements of $$F$$ are the following, $$$f_{nj} = w_{nj}/\sum_{k=1}^{N}w_{nk} \quad n,j = 1, \dots, N,$$$ with the $$w_{nj}$$ given by, $$$w_{nj} = \sum_{t=1}^{T}w_{tnj} \quad n,j = 1, \dots, N,$$$ and the $$w_{tnj}$$ given by, $$$w_{tnj} = s_{tn}s_{tj} \quad n \neq j, n = 1, \dots, N; j = 1, \dots, N; t = 1, \dots, T.$$$ $$f$$ on the right-hand side is the following, $$$f = [f_{1}, \dots, f_{N}]^{\textbf{T}},$$$ where the $$f_{n}$$ are given by, $$$\sum_{t=1}^{T}\sum_{j=1}^{N}f_{tnj}(y_{tn} - y_{tj}) \quad for \space n = 1, \dots, N$$$ and $$y_{tn} = log(p_{tn})$$.
The matrix $$[I_{N} - F]$$ is singular, so a normalisation must be used to solve the system of equations. IndexNumR uses the method discussed in (W. Erwin Diewert and Fox 2017); $$\beta_{N}$$ is assumed to be zero and the last equation is dropped to solve for the remaining coefficients.
### Extending multilateral indexes
The multilateral indexes are normalised by dividing by the first term, to give an index for the first $$w$$ periods that starts at 1. If the index only covers $$w$$ periods then no further calculation is required. However, if there are $$T>w$$ periods in the dataset then the index must be extended. Extending a multilateral index can be done in a multitude of ways. Statistical agencies generally do not revise price indices like the consumer price index, so the methods offered by IndexNumR to extend multilateral indexes are methods that do not lead to revisions. More specifically, these are called splicing methods and the options available are the movement, window, half, mean, fbew (Fixed Base Expanding Window), fbmw (Fixed Base Moving Window), wisp (window splice on published data), hasp (half-splice on published data) and mean splice on published data.
The idea behind most of these methods is that we start by moving the window forward by one period and calculating the index for the new window. There will be $$w-1$$ overlapping periods between the initial index and the index computed on the window that has been rolled forward one period. Any one of these overlapping periods can be used to extend the multilateral index. The variants of window, half and mean splice that are on published data use the same method as the classical counterparts, but splice onto the published series instead of the previously calculated window.
Let $$P_{OLD}$$ be the index computed over periods $$1$$ to $$w$$ and let $$P_{NEW}$$ be the index computed over the window rolled forward one period, from periods $$2$$ to $$w+1$$. Let the final index simply be $$P$$. For the first $$w$$ periods $$P = P_{OLD}$$, then $$P^{w+1}$$ is computed using the splicing methods as follows.
• Movement splice (Ivancic, Diewert, and Fox 2011) $$$P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}}{P_{NEW}^{w}}$$$ That is, the movement between the final two periods of the index computed over the new window is used to extend the original index from period $$w$$ to $$w+1$$.
• Window splice (Krsinich 2016) $$$P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}/P_{NEW}^{2}}{P_{OLD}^{w}/P_{OLD}^{2}}$$$ In this case, the ratio of the movement between the first and last periods computed using the new window, to the movement between the first and last periods using the old window, is used to extend the original index.
• Half splice $$$P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}/P_{NEW}^{\frac{w-1}{2}+1}}{P_{OLD}^{w}/P_{OLD}^{\frac{w-1}{2}+1}}$$$ The half splice uses the period in the middle of the window as the overlapping period to calculate the splice.
• Mean splice (Ivancic, Diewert, and Fox 2011) $$$P^{w+1} = P^{w} \times \left( \prod_{t=1}^{w-1} \frac{P_{NEW}^{w+1}/P_{NEW}^{t+1}}{P_{OLD}^{w}/P_{OLD}^{t+1}} \right)^{\frac{1}{(w-1)}}$$$ The mean splice uses the geometric mean of the movements between the last period and every other period in the window to extend the original index.
• FBMW (Claude Lamboray 2017) $$$P^{w+1} = P^{base} \times \frac{P_{NEW}^{w+1}}{P_{NEW}^{base}}$$$ This method uses a fixed base period that is updated periodically.
For example, if the data are monthly then the base period could be each December, which would be achieved by ensuring that December is the first period in the data and specifying a window length of 13. The splice is calculated by using the movement between the final data point and the base period in the new window to extend the index. If the new data point being calculated is the first period after the base period, then this method produces the same price growth as the movement splice. Using the same example, if each December is the base period, then this method will produce the same price growth for January on December as the movement splice.
• FBEW (Chessa 2016) This method uses the same calculation as FBMW, but uses a different set of data for the calculation. It expands the size of the window used to compute the new data point each period to include the latest period of data. If the data are monthly and the base period is each December, then the window used to compute the new data point in January includes only the December and January months. In February it includes the December, January and February months, and so on until the next December, where it includes the full 13 months (assuming a window length of 13). This method produces the same result as the FBMW method when the new period being calculated is the base period. Using the same example, if each December is the base period, then each December this will produce the same result as the FBMW method.
The splicing methods are used in this fashion to extend the series up to the final period in the data.
# Assume that the data in CES_sigma_2 are quarterly data with time period
# 1 corresponding to the December quarter.
splices <- c("window", "half", "movement", "mean", "fbew", "fbmw", "wisp", "hasp", "mean_pub")
# estimate a GEKS index using the different splicing methods. Under
# the above assumptions, the window must be 5 to ensure the base period is
# each December quarter.
result <- as.data.frame(lapply(splices, function(x){
  GEKSIndex(CES_sigma_2,
            pvar = "prices",
            qvar = "quantities",
            pervar = "time",
            prodID = "prodID",
            indexMethod = "tornqvist",
            window=5,
            splice = x)
}))
colnames(result) <- splices
result
## window half movement mean fbew fbmw wisp
## 1 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
## 2 0.8927770 0.8927770 0.8927770 0.8927770 0.8927770 0.8927770 0.8927770
## 3 1.0781723 1.0781723 1.0781723 1.0781723 1.0781723 1.0781723 1.0781723
## 4 1.1132724 1.1132724 1.1132724 1.1132724 1.1132724 1.1132724 1.1132724
## 5 0.9292537 0.9292537 0.9292537 0.9292537 0.9292537 0.9292537 0.9292537
## 6 1.1784816 1.1789102 1.1772392 1.1783996 1.1746060 1.1772392 1.1784816
## 7 1.1221118 1.1207012 1.1205204 1.1214679 1.1179392 1.1191128 1.1225753
## 8 0.9383833 0.9371778 0.9368880 0.9376351 0.9348518 0.9352286 0.9386945
## 9 1.0951022 1.0942446 1.0941491 1.0947038 1.0914207 1.0914207 1.0914207
## 10 0.9515914 0.9510825 0.9507333 0.9512963 0.9486380 0.9483625 0.9517028
## 11 1.1280620 1.1274679 1.1268628 1.1277415 1.1241728 1.1242435 1.1288942
## 12 1.1336566 1.1327009 1.1323917 1.1332152 1.1297415 1.1297611 1.1354276
## hasp mean_pub
## 1 1.0000000 1.0000000
## 2 0.8927770 0.8927770
## 3 1.0781723 1.0781723
## 4 1.1132724 1.1132724
## 5 0.9292537 0.9292537
## 6 1.1789102 1.1783996
## 7 1.1191128 1.1214484
## 8 0.9383566 0.9373834
## 9 1.0925320 1.0940276
## 10 0.9524902 0.9512759
## 11 1.1253883 1.1276104
## 12 1.1341850 1.1330457
On the assumptions in the above example, periods 1, 5 and 9 are December quarters. Periods 1-5 are computed using full information and periods 6-12 are computed using the splicing methods. Notice that fbew = fbmw in period 9 (a December quarter) and fbmw = movement in period 6 (the first quarter after the base period).
## The differences approach to index numbers
The above index number methods are derived from a ratio approach, which decomposes the value change from one period to the next into the product of a price index and a quantity index. An alternative approach is to decompose the value change into the sum of a price indicator and a quantity indicator. The theory dates back to the 1920s, and an excellent paper on this approach has been written by Diewert (Walter E. Diewert 2005a).
There are a number of methods available for computing the indicator, and IndexNumR exposes the following via the priceIndicator function:
• Laspeyres indicator $$$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N}q_{n}^{t-1}\times(p_{n}^{t}-p_{n}^{t-1})$$$
• Paasche indicator $$$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N}q_{n}^{t}\times(p_{n}^{t}-p_{n}^{t-1})$$$
• Bennet indicator (Bennet 1920) $$$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N} \frac{(q_{n}^{t}+q_{n}^{t-1})}{2} \times(p_{n}^{t}-p_{n}^{t-1})$$$
• Montgomery indicator (Montgomery 1929) $$$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N} \frac{p_{n}^{t}q_{n}^{t}-p_{n}^{t-1}q_{n}^{t-1}}{log(p_{n}^{t}q_{n}^{t}) - log(p_{n}^{t-1}q_{n}^{t-1})} \times log\left(\frac{p_{n}^{t}}{p_{n}^{t-1}}\right)$$$
### Examples
Price indicators for the CES_sigma_2 dataset are as follows:
methods <- c("laspeyres", "paasche", "bennet", "montgomery")
p <- lapply(methods,
            function(x) {priceIndicator(CES_sigma_2,
                                        pvar = "prices",
                                        qvar = "quantities",
                                        pervar = "time",
                                        prodID = "prodID",
                                        method = x)})
as.data.frame(p, col.names = methods)
## laspeyres paasche bennet montgomery
## 1 NA NA NA NA
## 2 -0.3269231 -3.23451167 -1.78071737 -1.27874802
## 3 4.3441768 1.19889566 2.77153621 2.23764163
## 4 0.4061429 0.33835480 0.37224887 0.37329461
## 5 -0.8066580 -5.65501233 -3.23083515 -2.35138599
## 6 5.8451382 1.89061744 3.86787782 3.23912451
## 7 -0.6114830 -0.72404798 -0.66776546 -0.66571059
## 8 -1.6992746 -4.76683536 -3.23305498 -2.74535253
## 9 4.5203554 1.38856453 2.95445995 2.45168559
## 10 -1.0274652 -4.49985294 -2.76365909 -2.28761791
## 11 4.9221471 1.65051935 3.28633320 2.85483403
## 12 0.1396069 0.02503502 0.08232098 0.08391295
Quantity indicators can also be produced using the same methods as outlined above via the quantityIndicator function. This allows for the value change from one period to the next to be decomposed into price and quantity movements. To facilitate this, IndexNumR contains the valueDecomposition function, which can be used as follows to produce a decomposition of the value change for CES_sigma_2 using a Bennet indicator:
valueDecomposition(CES_sigma_2,
                   pvar = "prices",
                   qvar = "quantities",
                   pervar = "time",
                   prodID = "prodID",
                   priceMethod = "bennet")
## price quantity changes values
## 1 NA NA NA NA
## 2 -1.78071737 4.7807174 3 13
## 3 2.77153621 -4.7715362 -2 11
## 4 0.37224887 0.6277511 1 12
## 5 -3.23083515 6.2308351 3 15
## 6 3.86787782 -5.8678778 -2 13
## 7 -0.66776546 1.6677655 1 14
## 8 -3.23305498 6.2330550 3 17
## 9 2.95445995 -4.9544600 -2 15
## 10 -2.76365909 5.7636591 3 18
## 11 3.28633320 -5.2863332 -2 16
## 12 0.08232098 0.9176790 1 17
Note that for this decomposition, the method is specified for the price indicator and IndexNumR uses the appropriate quantity indicator. For Bennet and Montgomery indicators, the same method is used for the quantity indicator as for the price indicator. If a Laspeyres price indicator is requested then the corresponding volume indicator is a Paasche indicator. The reverse is true if the Paasche indicator is used for prices.
## Group indexes
If a variable is available in the data set that identifies the group to which a product belongs, it is possible to estimate indexes on each of the groups in the sample using the function groupIndexes. An example is if products come from different geographic regions, or belong to different product categories. groupIndexes will split the data into the different groups and estimate a price index on each group.
Any of the price index functions can be used by specifying the indexFunction parameter and then supplying the arguments to the price index function as a named list.
# add a group variable to the CES_sigma_2 dataset
# products 1 and 2 will be in group 1, products 3 and 4 in group 2
df <- CES_sigma_2
df$group <- c(rep(1, 24), rep(2, 24))
# put the arguments to the priceIndex function into a named list
argsList <- list(x = df, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID",
indexMethod = "fisher", output = "chained")
# estimate a bilateral chained fisher index on the groups
groupIndexes("group", "priceIndex", argsList)
## [[1]]
## prices time group
## 1 1.0000000 1 1
## 2 0.5877029 2 1
## 3 0.9380405 3 1
## 4 0.9389789 4 1
## 5 0.9349687 5 1
## 6 0.9341755 6 1
## 7 0.9316160 7 1
## 8 0.6112508 8 1
## 9 0.9039828 9 1
## 10 0.9039828 10 1
## 11 0.8944109 11 1
## 12 0.8797538 12 1
##
## [[2]]
## prices time group
## 1 1.0000000 1 2
## 2 1.0661653 2 2
## 3 1.1246802 3 2
## 4 1.1750587 4 2
## 5 0.9188633 5 2
## 6 1.2641295 6 2
## 7 1.1815301 7 2
## 8 1.1086856 8 2
## 9 1.1552277 9 2
## 10 0.9533554 10 2
## 11 1.2068426 11 2
## 12 1.2237077 12 2
# put the arguments for the GEKSIndex function in a named list
argsGEKS <- list(x = df, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID",
indexMethod = "fisher", window = 12)
# estimate a GEKS index on the groups
groupIndexes("group", "GEKSIndex", argsGEKS)
## [[1]]
## prices time group
## 1 1.0000000 1 1
## 2 0.6008332 2 1
## 3 0.9469454 3 1
## 4 0.9468472 4 1
## 5 0.9423350 5 1
## 6 0.9409859 6 1
## 7 0.9378510 7 1
## 8 0.6170438 8 1
## 9 0.9111127 9 1
## 10 0.9104236 10 1
## 11 0.9002828 11 1
## 12 0.8851572 12 1
##
## [[2]]
## prices time group
## 1 1.0000000 1 2
## 2 1.0585842 2 2
## 3 1.1114981 3 2
## 4 1.1577551 4 2
## 5 0.9040366 5 2
## 6 1.2531574 6 2
## 7 1.1713514 7 2
## 8 1.0996714 8 2
## 9 1.1441569 9 2
## 10 0.9354981 10 2
## 11 1.1960967 11 2
## 12 1.2099160 12 2
## Year-over-year Indexes
Year-over-year indexes are those that calculate the price change between the same periods across years. If the data are monthly then there are twelve indexes; one for each month of the year. Each element of the January index would measure the price movement from January in the base year to January in the comparison year. The second index does the same for February, and so on. IndexNumR provides the function yearOverYearIndexes to estimate these, given the frequency as either ‘monthly’ or ‘quarterly.’ This is effectively a form of group index, where the month or quarter gives the group to which the product belongs. IndexNumR will create a group variable based on the supplied frequency and call the groupIndexes function to estimate the indexes. The data must be structured in the frequency that you specify (if you specify ‘quarterly’ as the frequency in the yearOverYearIndexes function, the time period variable in the data set must be quarterly).
The output from the yearOverYearIndexes function will have a column for the month or quarter. The quarter labelled as quarter 1 will have been constructed from the time periods 1, 5, 9, 13, etc of the data set. The quarter labelled 2 would have been estimated from time periods 2, 6, 10, 14, etc of the data set.
# Assume the CES_sigma_2 data are quarterly observations over three years.
# This results in 4 indexes (one for each quarter) of 3 periods each.
# Estimate year-over-year chained fisher indexes.
argsList <- list(x = CES_sigma_2, pvar = "prices", qvar = "quantities", pervar = "time",
prodID = "prodID", indexMethod = "fisher", output = "chained")
yearOverYearIndexes("quarterly", "priceIndex", argsList)
## [[1]]
## prices time quarter
## 1 1.000000 1 1
## 2 0.892041 2 1
## 3 1.052713 3 1
##
## [[2]]
## prices time quarter
## 1 1.000000 1 2
## 2 1.324124 2 2
## 3 1.058836 3 2
##
## [[3]]
## prices time quarter
## 1 1.000000 1 3
## 2 1.040759 2 3
## 3 1.046379 3 3
##
## [[4]]
## prices time quarter
## 1 1.0000000 1 4
## 2 0.8388911 2 4
## 3 1.0246700 3 4
# Development
IndexNumR is hosted on GitHub at https://github.com/grahamjwhite/IndexNumR. There users can find instructions to install the development version directly from GitHub, as well as report and view bugs or improvements.
# References
Balk, B M. 1981. “A Simple Method for Constructing Price Indices for Seasonal Commodities.” Statistische Hefte 22 (1).
———. 2000. “On Curing the CPI’s Substitution and New Goods Bias.” Research Paper 0005. Statistics Netherlands.
Bennet, T. L. 1920. “The Theory of Measurement of Changes in Cost of Living.” Journal of the Royal Statistics Society 83: 455–62.
Carli, G-R. 1804. “Del Valore e Della Proporzione Dei Metalli Monetati.” Scrittori Classici Italiani Di Economia Politica 13: 297–366.
Chessa, A. 2016. “A New Methodology for Processing Scanner Data in the Dutch CPI.” No.1/2016. EURONA.
Diewert, W. Erwin, and Kevin Fox. 2017. “Substitution Bias in Multilateral Methods for CPI Construction Using Scanner Data.” Discussion Paper 17–02. Department of Economics, University of British Columbia.
Diewert, Walter E. 1999. “Axiomatic and Economic Approaches to International Comparisons.” In International and Interarea Comparisons of Income, Output and Prices, edited by A. Heston and R. E Lipsey. Vol. 61. Studies in Income and Wealth. Chicago: The University of Chicago Press.
———. 2002. “Similarity and Dissimilarity Indexes: An Axiomatic Approach.” Discussion Paper 02–10. Department of Economics, University of British Columbia.
———. 2005a. “Index Number Theory Using Differences Rather Than Ratios.” American Journal of Economics and Sociology 64 (1): 331–60.
———. 2005b. “Weighted Country Product Dummy Variable Regressions and Index Number Formulae.” Review of Income and Wealth 51: 561–70.
Drobish, M. W. 1871. “Über Die Berechnung Der Veränderungen Der Waarenpreise Und Des Geldwerths.” Jahrbücher für Nationalökonomie Und Statistik 16: 143–56.
Dutot, N. 1738. Réflections Politiques Sur Les Finances Et Le Commerce. La Haye: Les frères Vaillant et N. Prevost.
Edgeworth, F. Y. 1925. Papers Relating to Political Economy. New York: Burt Franklin.
Eltetö, Ö, and P Köves. 1964. “On a Problem of Index Number Computation Relating to International Comparisons.” Statisztikai Szemle 42: 507–18.
Fisher, I. 1921. “The Best Form of Index Number.” Journal of the American Statistical Association 17: 533–37.
Fox, Kevin, Robert Hill, and W. Erwin Diewert. 2004. “Identifying Outliers in Multi-Output Models.” Journal of Productivity Analysis 22 (1/2): 73–94.
Geary, R. G. 1958. “A Note on Comparisons of Exchange Rates and Purchasing Power Between Countries.” Journal of the Royal Statistical Society Series A 121: 97–99.
Gini, C. 1931. “On the Circular Test of Index Numbers.” International Review of Statistics 9 (2): 3–25.
Hill, Robert. 2001. “Measuring Inflation and Growth Using Spanning Trees.” International Economic REview 42 (1): 167–85. http://dx.doi.org/10.1111/1468-2354.00105.
Ivancic, Loraine, Walter E. Diewert, and Kevin J. Fox. 2010. “Using a Constant Elasticity of Substitution Index to Estimate a Cost of Living Index: From Theory to Practice.” {School of Economics Discussion Paper} 2010/15. University of New South Wales.
———. 2011. “Scanner Data, Time Aggregation and the Construction of Price Indexes.” Journal of Econometrics 161: 24–35. https://doi.org/10.1016/j.jeconom.2010.09.003.
Jevons, W S. 1865. “The Variation in Prices and the Value of the Currency Since 1782.” Journal of the Statistical Society of London 28: 294–320. https://doi.org/10.2307/2338419.
Kennedy, P. E. 1981. “Estimation with Correctly Interpreted Dummy Variables in Semilogarithmic Equations.” American Economic Review 71 (4): 801.
Khamis, S. H. 1970. “Properties and Conditions for the Existence of a New Type of Index Number.” Sankhya: The Indian Journal of Statistics, Series B (1960-2002) 32: 81–98.
———. 1972. “A New System of Index Numbers for National and International Purposes.” Journal of the Royal Statistical Society Series A 135: 96–121.
Konüs, A. A, and S. S. Byushgens. 1926. “K Probleme Pokupatelnoi Cili Deneg.” Voprosi Konyunkturi 2: 151–72.
Krsinich, F. 2016. “The FEWS Index: Fixed Effects with a Window Splice.” Journal of Official Statistics 32: 375–404.
Lamboray, C., and F. Krsinich. 2015. “A Modification of the GEKS Index When Product Turnover Is High.” Paper presented at the fourteenth Ottawa Group meeting. url: http://www.stat.go.jp/english/info/meetings/og2015/pdf/t1s1p2_pap.pdf.
Lamboray, Claude. 2017. “The Geary Khamis Index and the Lehr Index: How Much Do They Differ?” In 15th Meeting of the Ottawa Group 2017.
Laspeyres, E. 1871. “Die Berechnung Einer Mittleren Waarenpreissteigerung.” Jahrbücher für Nationalökonomie Und Statistik 16: 296–314.
Lloyd, P J. 1975. “Substitution Effects and Biases in Nontrue Price Indices.” The American Economic Review 65 (3): 301–13.
Lowe, J. 1823. The Present State of England in Regard to Agriculture, Trade and Finance, Second Edition. Longman, Hurst, Rees, Orme; Brown.
Marshall, A. 1887. “Remedies for Fluctuations of General Prices.” Contemporary Review 51: 355–75.
Montgomery, J. K. 1929. “Is There a Theoretically Correct Price Index of a Group of Commodities?” Poliglotta (privately printed paper, 16 pages). Rome: Roma L’Universale Tipogr.
Moulton, B R. 1996. “Constant Elasticity Cost-of-Living Index in Share-Relative Form.” Mimeo. Bureau of Labour Statistics.
Paasche, H. 1874. Über Die Preisentwicklung Der Letzten Jahre Nach Den Hamburger Borsennotirungen.” Jahrbücher für Nationalökonomie Und Statistik 12: 168–78.
Palgrave, R. H. I. 1886. “Currency and Standard of Value in England, France and India and the Rates of Exchange Between These Countries.” in Memorandum submitted to the Royal Commission on Depression of Trade and Industry, Third Report, Appendix B, 312-390.
Rao, D. S. Prasada. 1995. “On the Equivalence of the Generalized Country-Product-Dummy (CPD) Method and the Rao-System for Multilateral Comparisons.” Working Paper No. 5. Centre for International Comparisons, University of Pennsylvania, Philadelphia.
Sato, K. 1976. “The Ideal Log-Change Index Number.” The Review of Economics and Statistics 53: 223–28.
Stuvel, G. 1957. “A New Index Number Formula.” Econometrica 25: 123–31.
Summers, R. 1973. “International Comparisons with Incomplete Data.” Review of Income and Wealth 29: 1–16.
Szulc, B J. 1964. “Indices for Multiregional Comparisons.” Przeglad Statystyczny 3: 239–54.
Törnqvist, L. 1936. “The Bank of Finland’s Consumption Price Index.” Bank of Finland Monthly Bulletin 10: 1–8.
Törnqvist, L, and E Törnqvist. 1937. “Vilket är fö Rhå Llandet Mellan Finska Markens Och Svenska Kronans köpkraft?” Ekonomiska Samfundets Tidskrift 39: 121–60.
Vartia, Y O. 1976. “Ideal Log-Change Index Numbers.” The Scandinavian Journal of Statistics 3: 121–26.
Young, A. 1812. An Inquiry into the Progressive Value of Money in England as Marked by the Price of Agricultural Products. London. |
## Recommended Posts
Q: Why do I have to use header files in C++?
A: Technically, you can put all your code into the .h file, but that is considered very bad coding practice. When C++ was created, computers didn't have the RAM or the processing power they have now, so the C++ compiler doesn't do as much "housekeeping" as more modern languages. Header files were created to help the compiler figure out how to put the various CPP files together without shooting itself in the foot.
Q: I get an error saying that there are multiple definitions of my class (or variable or function). Alternatively, you may get an error saying that you have recursive (or nested) includes.
A: You may have included the same header more than once in a file. Most likely, you have included one header file in another header file, and then included both of them in a CPP file.
// this is myheader.h
class A
{
...
};

// this is myheader2.h
#include "myheader.h" // notice I've included the above file
class B
{
};

// this is myCPP.cpp file
#include "myheader.h"
#include "myheader2.h"
// I've included myheader.h directly, and also a file that includes
// myheader.h, so the compiler thinks it should be included
// twice!
To fix this problem, you need to add "include guards" (also called header guards). You can use preprocessor directives to make sure every header file is "included" only once - thus preventing your code from being included multiple times. You should do this in every .h file.
// during compilation, this will ask the compiler
// if MYFILE_H is not defined
// this is like an if statement for the compiler
#ifndef MYFILE_H
// if we get here, it means it is not defined, so define it
// you don't need to give it a value
#define MYFILE_H
...
// if it was already defined, then the #ifndef will jump
// the compiler over the code to this line, thus
// not including any of the data!
#endif
hold on did you just answer your own question in the same post? kinda pointless to post it if you ask me
Quote:
Original post by raptorstrike: hold on did you just answer your own question in the same post? kinda pointless to post it if you ask me
Good thing no one did.
I'm sure some newbie will find this post useful. Not everyone knows to use #ifndef to keep from including header files more than once.
...
...
... it was an attempt at wit. Leading by not-immediately-obvious example, you see.
Oh never mind.
Quote:
Posted by ontheheap: I'm sure some newbie will find this post useful. Not everyone knows to use #ifndef to keep from including header files more than once.
That's the long & short of it. I see that question come up at least once a week, so instead of retyping all that I decided to just make one post about it. I forgot about Kylotan's guide - oops.
# CAT Practice : Speed Time, Races
The following questions cover Speed, Time and Races from the Arithmetic portion of the CAT quant section. Detailed explanatory answers, solution videos and slide decks are also provided.
1. ### Speed Time - Races
Two friends A and B simultaneously start running around a circular track. They run in the same direction. A travels at 6 m/s and B runs at b m/s. If they cross each other at exactly two points on the circular track and b is a natural number less than 30, how many values can b take?
1. 3
2. 4
3. 7
4. 5
2. ### Speed Time - Geometry
Consider a square ABCD. EFGH is another square obtained by joining the midpoints of the sides of the square ABCD, where E, F, G and H are the midpoints of AB, BC, CD and DA respectively. Lakshman and Kanika start from points B and D respectively at speeds ‘l’ kmph and ‘k’ kmph respectively and travel towards each other along the sides of the square ABCD. Jagadeesh starts from Point E and travels along the Square EFGH in the anti-clockwise direction at ‘j’ kmph. Lakshman and Kanika meet for the second time at H where Jagadeesh also meets them for the first time. If l : k : j is 1: 3 : 5$\sqrt {2}$, then the distance travelled by Jagadeesh is
1. 7.5 × $\sqrt {2}$ times the side of the square ABCD
2. 7.5 × $\sqrt {2}$ times the side of the square EFGH
3. 7.5 times the side of the square ABCD
4. 7.5 times the side of the square EFGH
7.5 × $\sqrt {2}$ times the side of the square ABCD
• Polygons at top speed
• Hard
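A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: let the side of square ABCD be $s$ and Lakshman's speed be $l$, so the three speeds are $l$, $3l$ and $5\sqrt{2}l$. B and D are half the perimeter ($2s$) apart along the square, so running towards each other Lakshman and Kanika first meet when they have together covered $2s$, and meet again after every additional $4s$. The second meeting therefore happens after a combined distance of $6s$, i.e. after time $6s/(l + 3l) = 1.5s/l$, which places Lakshman (who has run $1.5s$ from B) and Kanika (who has run $4.5s$ from D) at H, as stated. In that time Jagadeesh covers $5\sqrt{2}l \times 1.5s/l = 7.5\sqrt{2}$ times the side of the square ABCD.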
3. ### Speed Time - Cars
Three cars leave A for B in equal time intervals. They reach B simultaneously and then leave for Point C which is 240 km away from B. The first car arrives at C an hour after the second car. The third car, having reached C, immediately turns back and heads towards B. The first and the third car meet at a point that is 80 km away from C. What is the difference between the speed of the first and the third car?
1. 60 kmph
2. 80 kmph
3. 20 kmph
4. 40 kmph
4. ### Speed Time - Cars
Three friends A, B and C decide to run around a circular track. They start at the same time and run in the same direction. A is the quickest and when A finishes a lap, it is seen that C is as much behind B as B is behind A. When A completes 3 laps, C is at the exact same position on the circular track as B was when A finished 1 lap. Find the ratio of the speeds of A, B and C.
1. 5 : 4 : 2
2. 4 : 3 : 2
3. 5 : 4 : 3
4. 3 : 2 : 1
5. ### Speed Time
Mr. X decides to travel from Delhi to Gurgaon at a uniform speed and decides to reach Gurgaon after T hr. After 30 km, there is some engine malfunction and the speed of the car becomes ${ {4 \over 5} ^{th}}$ of the original speed. So, he travels the rest of the distance at a constant speed ${ {4 \over 5} ^{th}}$ of the original speed and reaches Gurgaon 45 minutes late. Had the same thing happened after he travelled 48 km, he would have reached only 36 minutes late. What is the distance between Delhi and Gurgaon?
1. 90 km
2. 120 km
3. 20 km
4. 40 km
6. ### Speed Time - Meeting point
Two friends A and B leave City P and City Q simultaneously and travel towards Q and P at constant speeds. They meet at a point in between the two cities and then proceed to their respective destinations in 54 minutes and 24 minutes respectively. How long did B take to cover the entire journey between City Q and City P?
1. 60
2. 36
3. 24
4. 48
7. ### Speed Time - Races
A swimming pool is of length 50 m. A and B enter a 300 m race starting simultaneously at one end of the pool at speeds of 3 m/s and 5 m/s. How many times will they meet while travelling in opposite directions before B completes the race?
1. Twice
2. Thrice
3. Once
4. 5 times
Thrice
• Racing on a swimming pool
• Medium
8. ### Relative Speed
Car A trails car B by 50 meters. Car B travels at 45km/hr. Car C travels from the opposite direction at 54km/hr. Car C is at a distance of 220 meters from Car B. If car A decides to overtake Car B before cars B and C cross each other, what is the minimum speed at which car A must travel?
1. 36 km/hr
2. 45 km/hr
3. 67.5 km/hr
4. 18 km/hr
67.5 km/hr
• Minimum overtaking speed
• Medium
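A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: B and C approach each other at $45 + 54 = 99$ km/hr $= 27.5$ m/s, so they cross after $220/27.5 = 8$ seconds. To overtake B within that time, A must make up the 50 m gap, i.e. gain on B at $50/8 = 6.25$ m/s $= 22.5$ km/hr, so A must travel at a minimum of $45 + 22.5 = 67.5$ km/hr.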
9. ### Speed Time - Races
A and B stand at distinct points of a circular race track of length 120m. They run at speeds of a m/s and b m/s respectively. They meet for the first time 16 seconds after they start the race and for the second time 40 seconds from the time they start the race. Now, if B had started in the opposite direction to the one he had originally started in, they would have met for the first time after 40 seconds. If B is quicker than A, find B’s speed.
1. 3 m/s
2. 4 m/s
3. 5 m/s
4. 8 m/s
10. ### Boats and Streams
City A to City B is a downstream journey on a stream which flows at a speed of 5km/hr. Boats P and Q run a shuttle service between the two cities that are 300 kms apart. Boat P, which starts from City A has a still-water speed of 25km/hr, while boat Q, which starts from city B at the same time has a still-water speed of 15km/hr. When will the two boats meet for the first time? (this part is easy) When and where will they meet for the second time?
1. 7.5 hours and 15 hours
2. 7.5 hours and 18 hours
3. 8 hours and 18 hours
4. 7.5 hours and 20 hours
7.5 hours and 20 hours
• Downstream Upstream
• Medium
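A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: P travels downstream at $25 + 5 = 30$ km/hr and Q upstream at $15 - 5 = 10$ km/hr, so they first meet after $300/(30 + 10) = 7.5$ hours. P reaches city B at $t = 10$ hours and turns back upstream at $25 - 5 = 20$ km/hr; at that moment Q, still heading for A at 10 km/hr, is 100 km from B. P closes that 100 km gap at $20 - 10 = 10$ km/hr, i.e. in 10 more hours, so the second meeting is at $t = 20$ hours, 100 km from city A.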
11. ### Relative Speed
Cities M and N are 600km apart. Bus A starts from city M towards N at 9AM and bus B starts from city N towards M at the same time. Bus A travels the first one-third of the distance at a speed of 40kmph, the second one-third at 50kmph and the third one-third at 60kmph. Bus B travels the first one-third of the total time taken at a speed of 40kmph, the second one-third at 50kmph and the third one-third at 60kmph. When and where will the two buses cross each other?
1. 300 kms from A
2. 280 kms from A
3. 305 kms from A
4. 295 kms from A
295 kms from A
• Buses crossing
• Medium
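A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: bus A takes $5$, $4$ and $3\frac{1}{3}$ hours for its three 200 km stretches. Bus B spends equal times at 40, 50 and 60 kmph, so its average speed is 50 kmph, its total time is $600/50 = 12$ hours, and each 4-hour stage covers 160, 200 and 240 km. Between $t = 5$ and $t = 8$ hours both buses are in their 50 kmph stages, and together they have covered $[200 + 50(t-5)] + [160 + 50(t-4)] = 100t - 90$ km. Setting this equal to 600 gives $t = 6.9$ hours (3:54 PM), with bus A at $200 + 50 \times 1.9 = 295$ km from its starting city.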
12. ### Relative Speed
A car of length 4m wants to overtake a trailer truck of length 20m travelling at 36km/hr within 10 seconds. At what speed should the car travel?
1. 12 m/s
2. 14.8 m/s
3. 12.4 m/s
4. 7.6 m/s
13. ### Crossing and overtaking trains
Train A travelling at 63 kmph takes 27 sec to cross Train B when travelling in the opposite direction, whereas it takes 162 seconds to overtake it when travelling in the same direction. If the length of train B is 500 meters, find the length of Train A.
1. 400 m
2. 810 m
3. 500 m
4. 310 m
310 m
• Crossing and overtaking trains
• Medium
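A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: let train A's length be $L$ m and train B's speed be $v$ m/s; train A moves at 63 kmph $= 17.5$ m/s. Crossing in opposite directions, $(L + 500)/(17.5 + v) = 27$; overtaking, $(L + 500)/(17.5 - v) = 162$. Dividing the two, $(17.5 + v)/(17.5 - v) = 6$, so $v = 12.5$ m/s, and then $L + 500 = 27 \times 30 = 810$, giving $L = 310$ m.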
14. ### Speed Time - Races
P cycles at a speed of 4 m/s for the first 8 seconds, 5 m/s for the next 8 seconds, 6 m/s for the next 8 and so on. Q cycles at a constant speed of 6.5 m/s throughout. If P and Q had to cycle for a 400 m race, how much lead, in terms of distance, can P give Q and still finish at the same time as Q?
1. 43.4 m
2. 56.6 m
3. 32.1 m
4. P cannot give a lead as Q is always ahead of P
15. ### Ratio of Speeds
A bus starts from a bus stop P and goes to another bus stop Q. In between P and Q, there is a bridge AB of certain length. A man is standing at a point C on the bridge such that AC:CB = 1:3. When the bus starts at P and if the man starts running towards A, he will meet the bus at A. But if he runs towards B, the bus will overtake him at B. Which of the following is true?
1. Bus travels 3x times faster than the man
2. Bus travels 2x times faster than the man
3. The bus and the man travel at the same speed
4. 4x the speed of the man is equal to 3x the speed of the bus
Bus travels 2x times faster than the man
• Ratio of Speeds
• Medium
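A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: let the bus's distance from P to A be $d$, the bridge length be $4u$ (so $AC = u$, $CB = 3u$), and the speeds of the bus and the man be $b$ and $m$. Meeting at A gives $u/m = d/b$; being overtaken at B gives $3u/m = (d + 4u)/b$. Subtracting, $2u/m = 4u/b$, so $b = 2m$: the bus travels twice as fast as the man.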
16. ### Fuel Consumption
Ramesh takes 6.5 hours to go from city A to city B at 3 different speeds 30 kmph, 45 kmph, and 60 kmph covering the same distance with each speed. The respective mileages per liter of fuel are 11 km, 14 km and 18 km for the above speeds. Ramesh's friend Arun is an efficient driver and wants to minimise his friend's car's fuel consumption. So he decides to drive Ramesh's car one day from city A to city B. How much fuel will he be able to save?
1. 4.2 liters
2. 4.5 liters
3. 0.7 liters
4. 0.3 liters
4.5 liters
• Fuel Consumption
• Medium
17. ### Speed in a race
Amar, Akbar and Antony decide to have an ‘x’ m race. Antony completes the race 14 m ahead of Amar. Akbar finishes 20 m ahead of Antony and 32 m ahead of Amar. What is Amar’s speed?
1. 9/10th of Antony's speed
2. 5/8th of Akbar's speed
3. 14/15th of Antony's speed
4. 10/7th of Akbar's speed
9/10th of Antony's speed
• Speed in a race
• Medium
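A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: when Antony finishes $x$ m, Amar has run $x - 14$; when Akbar finishes, Antony has run $x - 20$ and Amar $x - 32$. The ratio of Amar's speed to Antony's is the same in both snapshots, so $(x-14)/x = (x-32)/(x-20)$, which gives $x = 140$. Hence Amar's speed is $(140-14)/140 = 9/10$ of Antony's speed.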
18. ### Distance between A and B
Tom, Jerry and Bill start from point A at the same time in their cars to go to B. Tom reaches point B first and turns back and meets Jerry at a distance of 9 miles from B. When Jerry reaches B, he too turns back and meets Bill at a distance of 7 miles from B. If 3 times the speed with which Tom drives his car is equal to 5 times Bill’s speed, what could be the distance between the points A and B?
1. 40 miles
2. 24 miles
3. 31 miles
4. 63 miles
63 miles
• Distance between A and B
• Medium
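A brief worked solution (an editor's sketch, not from the original page), consistent with the stated answer: let $D$ be the distance AB. When Tom (after turning at B) meets Jerry 9 miles from B, Tom has covered $D + 9$ and Jerry $D - 9$, so $v_T/v_J = (D+9)/(D-9)$; similarly $v_J/v_B = (D+7)/(D-7)$. Since $v_T/v_B = 5/3$, we get $(D+9)(D+7)/[(D-9)(D-7)] = 5/3$, which simplifies to $D^2 - 64D + 63 = 0$, i.e. $D = 63$ miles (rejecting $D = 1$).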
19. ### Starting time
Kumar started from Chennai at x hrs y minutes and travelled to Vellore. He reached Vellore at y hrs z minutes. If the total travel time was z hrs and x minutes, his starting time in Chennai could have been ______ (Assume clock format to be 0 to 24 hrs).
1. 02:08 hrs
2. 13:03 hrs
3. 00:02 hrs
4. 12:01 hrs
20. ### Increasing Speed
When Sourav increases his speed from 20 Km/hr to 25 Km/hr, he takes one hour less than the usual time to cover a certain distance. What is the distance usually covered by him?
1. 125 Km
2. 100 Km
3. 80 Km
4. 120 Km
21. ### Increasing Speed
Distance between the office and the home of Alok is 100 Km. One day, he was late by an hour than the normal time to leave for the office, so he increased his speed by 5 Km/hr and reached office at the normal time. What is the changed speed of Alok?
1. 25 Km/hr
2. 20 Km/hr
3. 16 Km/hr
4. 50 Km/hr
22. ### Increasing Speed
Akash, when going slower by 15 Km/hr, reaches late by 45 hours. If he goes faster by 10 Km/hr from his original speed, he reaches early by 20 hours compared to the original time. Find the distance he covers.
1. 8750 Km
2. 9750 Km
3. 1000 Km
4. 3750 Km
23. ### Increasing Speed
Raj was travelling to his hometown from Mumbai. He met with a small accident 80 Km away from Mumbai and continued the remaining journey at 4/5 of his original speed and reached his hometown 1 hour and 24 minutes late. If he had met with the accident 40 Km further, he would have been an hour late.
i) What is Raj's normal speed?
a) 20 Km/hr b) 15 Km/hr c) 30 Km/hr d) 25 Km/hr
ii) What is the distance between Mumbai and Raj's hometown?
a) 140 Km b) 200 Km c) 220 Km d) 250 Km
24. ### Distance betw A and B
Two persons A and B start moving towards each other from points P and Q respectively, which are 1400 Km apart. Speed of A is 50 Km/hr and that of B is 20 Km/hr. How far is A from Q when he meets B for the 22nd time?
1. 1000 Km
2. 400 Km
3. 800 Km
4. 1400 Km
25. ### Distance betw A and B
What would happen in the previous question if both A and B had started at point P?
1. 800 Km
2. 600 Km
3. 1000 Km
4. 350 Km
26. ### Trains A and B
Two trains A and B are 100 m and 150 m long and are moving towards one another at 54 Km/hr and 36 Km/hr respectively. Arun is sitting on coach B1 of train A. Calculate the time taken by Arun to completely cross Train B.
1. 10 s
2. 6 s
3. 4 s
4. 8 s
27. ### Three Trains
Two trains start together from a Station A in the same direction. The second train can cover 1.25 times the distance of first train in the same time. Half an hour later, a third train starts from same station and in the same direction. It overtakes the second train exactly 90 minutes after it overtakes the first train. What is the speed of third train, if the speed of the first train is 40 Km/hr?
1. 20 Km/hr
2. 50 Km/hr
3. 60 Km/hr
4. 80 Km/hr
28. ### Relative Speed
Two trains left from two stations P and Q towards station Q and station P respectively. 3 hours after they met, they were 675 Km apart. First train arrived at its destination 16 hours after their meeting and the second train arrived at its destination 25 hours after their meeting. How long did it take the first train to make the whole trip?
1. 18h
2. 36h
3. 25h
4. 48h
29. ### Perpendicular Directions
Arjun travels from A to B, a distance of 200 Km, at the speed of 40 Km/hr. At the same time, Rakesh starts from point C at a speed of 20 Km/hr along a road which is perpendicular to AB. Find the time in which Arjun and Rakesh will be closest to each other.
1. 1.5 h
2. 3.33 h
3. 5 h
4. 4 h
## Tuesday, May 17, 2011
At Climate Audit, Steve McIntyre has been frequently writing about Chladni patterns in connection with principal components of autocorrelated data. He developed the idea for the Steig paper, and was disappointed when J Climate wouldn't allow its inclusion in the response. And he's recently claimed an appearance in another paper.
Chladni patterns are modes of oscillation, originally of a vibrating plate. Now people more often think of a drum membrane, which is a slightly different wave equation, but the idea, and patterns, are similar.
I must admit that I hadn't heard of Chladni before Hans Erren drew attention to them in Steve's first post. Some interesting history there. But I am familiar with the modes in question.
Steve thinks that if a Chladni pattern emerges, that somehow means the result is showing that pattern rather than the climate information being sought, so the information content is reduced. I don't agree - there are reasons why the patterns arise, and they are just as informative in PCA as they are in wave studies. I'll try to show why.
Warning - mathematics (and $$\LaTeX$$) after the jump.
#### Resonance and wave equations
Resonance is familiar from acoustics. If you speak in the open air, your voice propagates away in all directions, attenuating without reflection or selective amplification. If you stand in a partly enclosed cavity, your voice is slightly louder in certain frequency bands. In a totally enclosed bare room, you hear a characteristic booming response - some frequencies are much louder.
These are the ones that excite a resonant mode, in which the air oscillates, but the normal velocity at the boundary is zero. A 3D version of the modes of a vibrating stretched string, which has zero velocity at its ends.
The essential requirement is that the wave energy must be confined but not dissipated. That is why the resonance improves in deeper cavities, for example.
The wave equation for pressure, say, is (with c=speed of sound) $$\nabla^2 p = \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2}$$ If you substitute a resonant mode $$p=P \sin(\omega t)$$, then the equation is $$\nabla^2 P = -(\frac{\omega}{c})^2 P$$ P is the resonant mode, and so is an eigenvector of the Laplacian $$\nabla^2$$. The resonant frequencies correspond to the eigenvalues.
For Chladni's plate the wave equation is more complicated, but the principle is the same.
#### Spatial autocorrelation.
Steve McI also described the simple Toeplitz autocorrelation coefficient matrix that you get in one dimension for a spatial model. With N+1 equally spaced points, the coefficients can be assumed to be powers of r - the correlation of adjacent sites. The matrix is: $$R = \left(\begin{array}{ccccc} 1 & r & r^2 & \ldots & r^N\\ r & 1 & r & r^2 & \ldots \\ r^2 & r & 1 & r & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots\\ r^N & \ldots & r^2 & r & 1 \end{array}\right)$$ The Toeplitz property is that all terms on each diagonal are the same. This correlation matrix has a simple inverse: $$R^{-1} = \left(\begin{array}{ccccc} q & -qr & 0 & \ldots & 0\\ -qr & 2q-1 & -qr & 0 & \ldots \\ 0 & -qr & 2q-1 & -qr & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \ldots & 0 & -qr & q \end{array}\right),\quad q=\frac{1}{1-r^2}$$ Still a Toeplitz matrix, almost, but also banded - tridiagonal. The deviation from Toeplitz is at the top left and bottom right corner terms.
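As a quick check of that inverse (a small worked example added here, not in the original post), take N = 2, i.e. three points: $$R = \left(\begin{array}{ccc} 1 & r & r^2\\ r & 1 & r\\ r^2 & r & 1\end{array}\right),\quad R^{-1} = q\left(\begin{array}{ccc} 1 & -r & 0\\ -r & 1+r^2 & -r\\ 0 & -r & 1\end{array}\right),\quad q(1+r^2)=2q-1.$$ Multiplying the first row of R into the columns of $$R^{-1}$$ gives $$q(1-r^2)=1$$, $$q(-r+r+r^3-r^3)=0$$ and $$q(-r^2+r^2)=0$$, and the other rows work out the same way, so $$R R^{-1}=I$$ as claimed.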
#### Relation to Laplacian and the wave equation
From the last equation, \begin{align} R^{-1} &= qr\left(\begin{array}{ccccc} 1/r & -1 & 0 & \ldots & 0\\ -1 & r+1/r & -1 & 0 & \ldots \\ 0 & -1 & r+1/r & -1 & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \ldots & 0 & -1 & 1/r \end{array}\right) \\ &= qr\left(\begin{array}{ccccc} 1 & -1 & 0 & \ldots & 0\\ -1 & 2 & -1 & 0 & \ldots \\ 0 & -1 & 2 & -1 & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \ldots & 0 & -1 & 1 \end{array}\right) + q(1-r)^2 \left(\begin{array}{ccccc} 1/(1-r) & 0 & 0 & \ldots & 0\\ 0 & 1 & 0 & 0 & \ldots \\ 0 & 0 & 1 & 0 & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \ldots & 0 & 0 & 1/(1-r) \end{array}\right) \end{align} If you remember finite differences, the first matrix is just the negative of the second difference operator, corresponding to the 1D Laplacian. And the second is close to a multiple of the identity: its interior entries are small when r is close to 1, while the corner entries differ - a point taken up below.
That's the key to the connection between Chladni patterns and the autocorrelation matrix R. The inverse of R and the discrete Laplacian of the wave equation differ by a scale factor and, approximately, a multiple of the identity, which means they have almost the same eigenvectors. And the eigenvectors of R and $$R^{-1}$$ are the same, since R is symmetric and positive definite.
#### Subtle differences
Well, an unsubtle difference is that Chladni patterns are not 1D. But the same reasoning works - it's too messy to set out here.
The subtler difference is that the diagonal correction is not quite Toeplitz. That relates to the notion of boundary conditions for the wave equation, which is indeed critical for resonance.
At this stage I'll have to just arm-wave on that - it does in fact give the zero normal boundary condition, which is sufficient for resonance.
#### Conclusion
The same mathematics that gives Chladni for the wave equations gives similar eigenvectors for the spatially autocorrelated matrix. It isn't a spurious consequence. Consequently their appearance in, say, Steig et al 2009 doesn't mean that the PC's are "just Chladni" any more than they are just autocorrelation. Spatial autocorrelation is an essential part of the results, and the Chladni patterns reflect that.
I think that's enough heavy maths for now - I could later go more into the properties of Chladni patterns, and in particular into the implications for eigenvalue pairing (for PCs) which SM criticised in Steig et al. That would generate pretty pictures.
Meanwhile, I'll just relay these pictures of resonant modes from Wikipedia where more Chladni patterns on the disc and sphere can be found.
1. You need to read back over some of the classics of EOF/PCA literature. Sadly, however, I can't remember the names - but it's back in the 1980s, I think.
The assertion is fairly simple: that if the EOFs you're picking up are just the "resonant modes" of the plate, then they are likely noise {{citation needed}}.
The key point is that the modes are resonant modes of the EOF process (so to speak) but aren't actually modes of anything in the physical system itself. If you were analysing the actual motion of a metal plate, of course, the situation would be different: then, they would be physical modes of the system.
2. Belette,
My contention is that the eigenvectors of the wave equation (resonant mode) are the same as eigenvectors of the autocorrelation matrix, with basic spatial autocorrelation. On that basis the patterns aren't noise, but evidence of that spatial correlation, which is a real part of the PC/EOF pattern.
3. This paper talks about standing waves in the Antarctic. It was published many years before McIntyre thought of the concept. He talks about the Chladni patterns as if they are something novel he has thought up.
4. Anon,
Yes, circumferential modes etc have been studied for a long time. However, Steve M's angle is different. The pattern he talks about comes from PC analysis of the correlation matrix of temperature. He seems to think they are an artefact which diminishes the analysis.
My argument here is that mathematically, spatial correlation and standing waves should produce similar eigenfunctions, based on their form. Of course, it is also likely that standing waves contribute to the temperature correlation.
5. Nick's post is right on, and of course, this is why the Journal of Climate editor agreed to my objection to Steve's rant in the O'Donnell paper about Chladni patterns. This was just another ad hoc idea that they *claimed* might somehow make our analysis problematic, but they didn't bother doing any work to show that it actually was problematic. O'Donnell called this 'armwaving objections', but I was merely objecting to their armwaving. Of course, they'll say whatever the heck they want, substantiated or no.
6. Nick Stokes, much of what you say about origin and structure of Chladni patterns I would not doubt. However, I think your interpretation and that of SteveM are subtly different. You both agree that they arise from spatial autocorrelations, but I think that you merely stop by saying that since that correlation is the essence of the PC analysis its interpretation cannot be spurious.
What if those patterns were unique to the PC rendition of the AVHRR data and not part of the original AVHRR data? If one then attempted to use those patterns for explanatory purposes, one could be misinterpreting. I have been doing some regressions of the various temperature anomaly data sets and reconstructions for the 1982-2006 period and was hoping to make some posts at your thread titled "Trends in Antarctica". Will you be responding to my previous posts at that thread? I have found that, for instance, the trends in the S(09) PC rendition, and the correlations of that S(09) data with other data series for the 1982-2006 period, are much more sensitive to latitude, longitude and altitude than the raw AVHRR anomaly data using the same comparisons.
I would suppose that one would want to look for the Chladni patterns in all the reconstructions from S(09) and O(10). The O(10) reconstruction, RLS, uses spatial correlations without the use of PCs while the EW reconstruction uses spatial correlations with the PCs.
I am a bit disappointed that you have not delved more deeply into some of this subject matter that you introduced before walking away to other subjects
7. Kenneth,
I'm all too conscious of things that I had been hoping to do, but got distracted with other things. But on this one, I haven't been distracted yet. I will write more.
I think Steve's specific complaint about S09 was not the rendered AVHRR data, but interpretation of the PC's. "S09 utilize a criterion that only modes that appear visually similar to known physical modes are deemed significant. They determine the first 3 modes to meet this criterion. Specifically, the first spatial eigenvector is claimed to be correlated with the SAM index and the second is claimed to reflect the zonal wave-3 pattern."
And he says that autocorrelation could have produced the same result. I contend that that is not a refutation. Resonant modes in turn produce correlation. You can't separate the cause and effect.
BTW, I'm sorry that I went overseas just as your use of TempLS was getting interesting. But I'm back now, and catching up. I'll be happy to engage on that, and I'm very interested to hear how it is going.
8. Eric: Nick's post is right on, and of course, this is why the Journal of Climate editor agreed to my objection to Steve's rant in the O'Donnell paper about Chladni patterns
Not to be argumentative here, but I apparently missed that part in the exchanges. It is my reading that your criticism was more along the lines of "unoriginal and misleading" than "misinformed". Is that not true? Also, where specifically did the Editor agree with your criticism? (From the exchanges, it appears the authors voluntarily withdrew this section rather than having that mandated or recommended by the editor, who appears to have said nothing at all regarding Chladni patterns in his decision letter). Also, where did you state in your criticism anything of a similar vein to what Nick wrote here? I don't see too much commonality in rereading your original review.
Nick, the question with Chladni patterns arising from EOFs is distinguishing spurious correlations from real ones. Validation is the key here, and the onus should be on the person using the EOFs to demonstrate that he is applying them properly, and not simply chasing spurious correlations in his EOF decomposition.
Pointing out that they can arise spuriously doesn't demonstrate that they did, only that they could, and that this therefore is something that needs to be checked in the EOF analysis by the authors of the EOF analysis and not simply by its critics.
I don't think you need an entire section of O10 dedicated to Chladni patterns if it isn't going to delve into the question in any more depth than the original paper did (and I even agree with Eric's choice of words "misleading" in that context). The O10 authors (enough of them anyway) agreed to this, as is evidenced in their response to Eric's comments:
"[...] we agree that we spend insufficient time developing this as it might apply to S09."
9. Carrick,
"Validation is the key here, and the onus should be on the person using the EOFs to demonstrate that he is applying them properly, and not simply chasing spurious correlations in his EOF decomposition. "
In the course of describing the PC's, S09 made this observation:
"The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data. The first principal component is significantly correlated with the SAM index (the first principal component of sea-level-pressure or 500-hPa geopotential heights for 20u S–90u S), and the second principal component reflects the zonal wave-3 pattern, which contributes to the Antarctic dipole pattern of sea-ice anomalies in the Ross Sea and Weddell Sea sectors."
I don't believe that the use of the PC's as basis functions for interpolating ground temperatures obliges them to identify the physical causes. The argument is simply that, since they worked for satellite readings post 1980, they are likely to be a good basis for ground station interpolation (as opposed to, say, orthogonal polynomials). The fact that physical patterns can be recognised is a bonus.
10. Since we're dealing with satellite data here, there is certainly the possibility that some of the spatial structure in the AVHRR EOFs is associated with non-climate-related processes arising from the AVHRR systematics (e.g., shifts in time of measurement).
So yeah, it's kind of important to understand the origin of the EOFs you are observing, if you want to use the EOFs derived from AVHRR data to interpolate ground based data. It might not matter (assuming stationarity really holds) if you wanted to apply AVHRR derived EOFs to interpolate AVHRR data of course.
11. Eric,
You are incorrect on this. In fact, if you look at a similar spherical shape to the Antarctic and add a peninsular region to the North having spatial autocorrelation, you would see a rotation of the node of PC2 and 3.
That is the key to what Steve pointed out and that is a real issue to what was unfortunately successfully eliminated from our paper.
The only reason I'm even wasting the bits on this (because I know people don't get it) is that this has become a 'trend' in climatology. Patterns pulled from autocorrelation.
PCA is designed to find the primary axes of movement, having a near circular autocorrelated dataset (antarctic) with an extrusion (peninsula) guaranteed the axis of PC2.
12. circular not spherical
13. I asked Steve to elaborate and prove some points wrt modes being spurious. He could not/would not/did not. Old news...and repeated pattern. Real skeptics should eschew this silly crew and their blog histrionics.
14. Jeff #11,
I think that's one of the misunderstandings about the modes. Yes, a circle does generate repeated eigenvalues, and that actually creates a 2D subspace of eigenvectors. Two basis values are shown, orthogonal, but you can rotate the pattern to any angle and it's still a mode.
However, as soon as you break the symmetry, the eigenvalues separate, and unique eigenvectors emerge. The higher eigenvalues separate more rapidly - ie big separation for small loss of symmetry. You've looked at just adding the peninsula and noted that the eigenvectors then split with this as the axis. I would expect that. If the shape is non-circular in other ways, as it is, then other influences will determine the orientation. But I can't see why this matters.
15. While contemplating Nick Stokes' algorithm for in-filling Antarctic temperature data from ground station and AVHRR satellite grids, I decided to download all the data series from O(10) including the RLS and EW reconstructions used in O(10), the S(09) reconstruction featured in S(09), and referred to hereafter as S09, and the raw (after cloud masking) AVHRR data. Also included were the ground station data from the manned and AWS stations.
I was primarily interested in the 1982-2006 period as I wanted to make comparisons with the AVHRR data that became available during that period. The downloaded data was put into monthly anomaly form based on the 1982-2006 time period. In my analysis I used 50 of the ground station data that were most complete for that period and further were within the area covered by the AVHRR grids.
The RLS method in O(10) uses only the spatial component of the AVHRR data for in-filling the missing ground station areas, while the EW method in O(10) uses both the temporal and spatial relationships between ground stations and AVHRR data.
The S09 method uses RegEM and combines ground and AVHRR data. A major part of the S09 methodology involves the retention of just 3 AVHRR PCs.
In my initial analysis of these data sets I have included correlations and trends of these data using latitude, longitude and altitude as explanatory variables. I must admit that I was surprised at how influential these variables turned out to be. I think that perhaps a better regression would have used a reference point and distances from it, but in the analysis presented below I simply used latitude, longitude and altitude as taken from the downloads from O(10).
This thread has discussed the effects of Chladni patterns on a PCA that retains just a few PCs and I thought perhaps the differences I saw between the O(10) and S09 methods with regards to latitude and longitude and possibly altitude might be related to the effects of Chladni patterns.
RLS does not use AVHRR PCs, while the EW method use 150 AVHRR PCs as compared to S09 which uses only 3 AVHRR PCs. In the linked first two tables below it can be seen that, when regressing trends or correlations versus latitude, longitude and altitude, overall S09 is more influenced by these variables than are the RLS and EW reconstructions. The questions that arise in my mind are if you see a change in influence of these variables from the raw data to that data reduced to a few PCs is that change one that you might observe because the PCs are eliminating sufficient noise or are the changes a matter of an influence of geometry that might be considered spurious?
While the ground station data is limited, it is interesting to look at the correlations and trends of the reconstructions and AVHRR data (nearest grid to station coordinates) versus the ground stations (linked below in the 4 tables in the second link below). We can see that the RLS reconstruction gives the best correlation at 0.96 and EW next at 0.87, the S09 lags far behind at 0.49 and the AVHRR data is in the middle at 0.71. The same order exists for the regressing trends of AVHRR versus the RLS, EW and S09.
In the linked tables we can see that, although the data are small, the ground station and corresponding AVHRR data when regressed as trends and correlations show little or no influence from the latitude, longitude and altitude. The influence of these explanatory variables is seen to a greater extent in the O(10) and S09 reconstructions with by far the greatest influence being seen with the S09. The ground station data generally confirms what is seen in the analysis that uses all the AVHRR grid point data for all the reconstructions.
http://img695.imageshack.us/img695/9256/rlsews09avhrraltlatlon.png
http://img10.imageshack.us/img10/3707/groundstatrlsews09avhrr.png
16. Based on these relationships shown in my previous post, I would think we could consider the RLS reconstruction a good proxy for the ground station data. In that form we can debate whether one would expect the ground station data or the AVHRR data to better reflect the true temperatures in the Antarctic by comparing the RLS and AVHRR data for the 1982-2006 period. I am considering looking at breakpoints (if any exist) of the difference series derived between the AVHRR and the RLS series.
17. Unfortunately my previous post has not yet posted. I'll repost after I am assured it is not merely in a queue somewhere.
19. Kenneth,
Again, apologies for the spam filter. I see you have three comments there which seem to be identical. Please let me know if I should restore either of the other two.
20. Kenneth,
Apologies here too for the spam filter. I'm looking at your results, and will respond in a few hours. It's rather early morning here.
21. Nick, you left the one I wanted and where I wanted it. I think my problem may have occurred when I did not insure that the preview registered before I posted - or the post was lengthy and susceptible to the spam filter.
Also my post at the original thread was my mistake.
22. McI has been babbling Chladni for years, but has never buckled down and made a clear assertion of an artifact. I challenged him on this years ago. Told him to make a clear assertion of differences and the like. He just refused and went back to his yuck yuck blog games. Pushing the Chladni into the Steig crit paper was just stupid kitchen sinkism. The guy is so used to running his own blog that he has lost the ability to make clear arguments. Could you imagine any business person paying this guy for a report? He's just all over the map disorganized, adhom, and illogical.
23. I am responsible for some of the best acoustic guitars on the planet. We used to mess with Chladni patterns on guitar tops. It's mildly interesting, but they suck. They're for the marketing department: a sucker is born every whatever stuff...
Give me a man with an ear and a sharp tool every single time.
I wonder if he used them to find minerals?
24. Nick. I think your interpretation
"Spatial autocorrelation is an essential part of the results, and the Chladni patterns reflect that."
is very sound. If you analyse a given dataset with important spatial autocorrelations, the leading EOF will (obviously) tend to have a Buell (Chladni) pattern. Obviously, if you change the domain (including some of the Antarctic Peninsula), other variability appears, but this is self-evident since you have changed the dataset. Additionally, I don't understand why spatial autocorrelation should automatically mean there-is-no-signal-here. It happens that the signal shows spatial correlation, not that there is no signal.
Quoting from:
Monahan, Adam H., John C. Fyfe, Maarten H. P. Ambaum, David B. Stephenson, Gerald R. North, 2009: Empirical Orthogonal Functions: The Medium is the Message. J. Climate, 22, 6501–6514.
doi: 10.1175/2009JCLI3062.1
"When a field is characterized by spatially homogeneous statistics (i.e., invariant from place to place), the EOFs will be strongly influenced by the size and shape of the domain (e.g., Buell 1975, 1979; Richman 1986; Dommenget 2007)."
The starting point is that "When a field is characterized by spatially homogeneous statistics" ... then it will appear a "Buell pattern". This does not mean that "In any event that the EOF pattern is Buell-like .... then, the field is just spatially homogeneous noise"
I think that there are some more severe criticisms that can be done to EOFs. Still, they are useful, particularly for data compression.
jon
25. Isn't this basic Sturm-Liouville theory? The wave equation is S-L, so will produce mutually orthogonal eigenfunctions, just like EOFs.
26. Anon - yes, I agree with all that, including the last.
Martin, Yes, my 1-D wave equation example is just Sturm-Liouville. The 2-D and 3-D patterns of alternating high-low also are governed by S-L concepts, though the theory doesn't apply directly. The orthogonality can be derived similarly (symmetric, positive definite).
My main point is that the autocorrelation matrices have S-L like operators as their inverses.
27. Chladni patterns can look very much like the atomic orbital shapes from the Schrodinger wave equations. I have no reason to doubt that Chladni patterns have meaning for violin acoustics.
The leap in reasoning here appears to be that, since Chladni patterns are a function of spatial correlations and have meaning in some applications, they somehow have meaning for all applications, lending the patterns some physical meaning beyond the spatial correlation. The point that you all appear to be ignoring is that SteveM related the S09 Chladni patterns to the geometry of a circle. Would that not be that much different than a wave equation of atomic orbitals telling us something about the geometry of the orbitals? In the case of the Antarctic and what S09 was searching for, the application would appear to be very different.
I pointed out that regressions of S09 against latitude, longitude and altitude are very different than the cloud masked raw AVHRR data and are more dependent on those variables. I did not see any discussion of that point.
28. Here is a link to what O'Donnell found with PCA and spurious teleconnections of the earth's temperatures. You might want to discuss the points he made - that PCA can be a very useful tool in some applications, but that in itself does not preclude it being abused and misrepresented.
http://noconsensus.wordpress.com/2010/03/30/pca-sampling-error-and-teleconnections/
29. Kenneth,
One thing that bothered me is that Steve related the Chladni patterns to the disc, and then deduced that the eigenvalues should be paired. But that pairing fails as soon as the circular symmetry is lost.
Another way of seeing the relation between autocorrelation and the wave equation is that the point Green's function for the Helmholtz equation (i.e. eigenfunctions of the Laplacian, or the Fourier transform of the wave equation) is an exponential-like function (Bessel Kn in the plane), which is pretty much what you'd write down for autocorrelation.
The reason that atomic orbitals look similar is that they are the solution, not of a Helmholtz equation $\nabla^2 \phi = \lambda \phi$
but of $\nabla^2 \phi = (\lambda + V(r)) \phi$
where V is a potential function. But since the orbitals are usually visualised in the region where $(\lambda + V(r))$ has not varied by a lot, the patterns are similar.
30. I'll try that Latex again:
The reason that atomic orbitals look similar is that they are the solution, not of a Helmholtz equation $$\nabla^2 \phi = \lambda \phi$$
but of $$\nabla^2 \phi = (\lambda + V(r)) \phi$$
where V is a potential function. But since the orbitals are usually visualised in the region where $$(\lambda + V(r))$$ has not varied by a lot, the patterns are similar.
31. Kenneth,
I've been re-reading that tAV link. I'm still trying to relate the sphere example to the wave equation interpretation. But I think this is the key para that I dispute:
The issue of teleconnections and physical meaningfulness has come up here before – most notably with the work on the Steig Antarctic temperature reconstruction. The primary point of contention there was that Steig limited his analysis to the first 3 eigenvector pairs by using the “physical meaning” argument and a sampling error argument from North (1982). Steve McIntyre showed that very similar (and physically non-meaningful) patterns result from EOF analysis on an object shaped like Antarctica with exponential correlation functions and precisely zero physical dynamics.
I think the assertion that they are physically non-meaningful is wrong. I would go back to my demonstration that the inverse of the discretized wave operator is very like an autocorrelation matrix. Now we're used to a wave equation having modes and attaching physical significance to them, but we think of autocorrelation as being just a local relation. But when you decompose a correlation matrix (derived from points in space) into eigenvectors, you are actually doing something that is not local but involves all the points. It does indeed teleconnect. That 1-D matrix that I wrote has trig function eigenvectors.
32. Nick, I see that the point that O'Donnell made with reference to teleconnections seems to be missed in your replies. That you can show that the EOF patterns have a mathematical basis and are tied to spatial correlations is not the point at all. Nobody is disagreeing with that observation and analysis. The point is that scientists can interpret these patterns as something physical, with cause and effect, that they are not. The scientist does not go through the analysis you did and present the patterns as having a mathematical basis; rather they point out, or sometimes hint, that the patterns have a given and specific physical meaning. You seem unable to admit that that interpretation could be wrong and keep going back to a mathematical analysis, never saying what that means in terms of a specific physical interpretation.
O'Donnell showed a case where teleconnections did not exist in the original data but could be seen in the EOF patterns.
33. Kenneth, I don't think I'm missing that at all. My point is basically that space correlation and wave-like pde's are intertwined. So I don't think it is at all unreasonable for a scientist to seek to interpret EOF's from a correlation matrix as having physical significance in terms of wave effects. Which means teleconnections. It's saying, in effect, that a wave equation has been observed.
There's a much higher bar to claiming that the correlation proves the teleconnections. Then you have to determine how robust the patterns are to changes in the correlation matrix.
But if you have other physically based reasons for expecting resonant modes, which was Steig's situation, then I think it is appropriate to look at the correlation eigenmodes to try to identify them with standing wave patterns.
34. Actually in Steig's case I believe he attempted to place a physical interpretation on the 3 EOFs he retained in order to rationalize why he limited his selection to 3. It all gets very arbitrary at some point - but I suppose they can always fall back on the Nick Stokes defense.
You have never answered or commented on my observation that the original AVHRR data has much less dependence on latitude,longitude and altitude in regressions than the S09 reconstruction. Does that mean anything with regards to Chladni patterns? Is that occurrence an indication of a problem and/or artifact of the S09 methodology?
35. Kenneth, re:6/6/11 11:21am
When in doubt, go with the measured data. Reconstructing a data set in order to get something that is more manageable mathematically cannot really make the results more robust.
Unfortunately climate data tends to be so sparse that any in-depth analysis quickly runs into the fact that there simply isn't enough data to extract a signal from the noise.
# How does skewed data affect deep neural networks?
I'm playing around with deep neural networks for a regression problem. The dataset I have is skewed right and for a linear regression model, I would typically perform a log transform. Should I be applying the same practice to a DNN?
Specifically, I'm curious how skewed data affects regression with a DNN and, if the effect is negative, whether the same methods that would be applied to a linear regression model are the right way to go about fixing it. I couldn't find any research articles about it, but if you know of any, feel free to link them in your answer!
• When you say you have skewed data, what specifically do you mean? Skewed distributions of predictor variables? Skewed distribution of the response variable? Neither of those even matter to linear regression, where if there is a normality assumption (we get Gauss-Markov without such an assumption), it is about the error term, not about the data. I get the feeling that you're using the $\log$ transform when you don't need to in linear regression. – Dave Nov 9 '20 at 17:36
• The context I had in mind was one skewed predictor variable in multivariate regression with the rest of the predictors being normally distributed. – shaye059 Nov 10 '20 at 18:33 |
Holomorphic vector bundles over Riemann surfaces and the Kadomtsev-Petviashvili equation. I. (Russian) Zbl 0393.35061
The paper starts with the Kadomtsev-Petviashvili equation, written in the form $0=\frac34 \frac{\partial^2U}{\partial y^2}+\frac{\partial}{\partial x}\left[\frac{\partial U}{\partial t}+\frac14 \left(6U\frac{\partial U}{\partial x}+\frac{\partial^3U}{\partial x^3}\right)\right],$ for which the second author [Funct. Anal. Appl. 8, 236–246 (1974); translation from Funkts. Anal. Prilozh. 8, No. 3, 54–66 (1974; Zbl 0299.35017)] and V. E. Zakharov have given solutions generalizing those of the Korteweg-de Vries equation. The aim of the present paper is to find solutions depending on arbitrary functions. With the aid of Baker-Akhiezer functions corresponding to a set of matrices $$\psi_0$$, an algebraic curve $$\Gamma$$, and a point $$P_0\in\Gamma$$, one can construct a solution of this equation. In particular cases, each solution of the Korteweg-de Vries equation or of the Boussinesq equations, respectively, generates solutions of the Kadomtsev-Petviashvili equation.
Reviewer: A. Haimovici
##### MSC:
35Q53 KdV equations (Korteweg-de Vries equations)
14F05 Sheaves, derived categories of sheaves, etc. (MSC2010)
32L05 Holomorphic bundles and generalizations
37K20 Relations of infinite-dimensional Hamiltonian and Lagrangian dynamical systems with algebraic geometry, complex analysis, and special functions