Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
217 | 2 | null | 214 | 27 | null | In the interest rates field there is one paradox in today's market conditions (i.e. since the crisis) that is quite tricky to understand properly.
This is the fact that one needs several curves to price correctly even simple interest rate derivatives, such as a swap whose floating index is set against some Libor reference.
Simply and crudely speaking, you first have to build a discount curve (generally based on the OIS swap curve) and then use this curve to compute "adjusted" forward Libor rates (a procedure that I improperly qualify as "forwarding"). Forward Libor rates used to be calculated by discounting and "forwarding" (sorry for the term) on the very same curve.
This is due to the fact that the once negligible spreads between the OIS and Libor curves are now large enough to generate significant arbitrage if not properly taken into account.
The paradox comes from the fact that the "usual" theory of pricing linear interest rate derivatives asserts that there can be only one curve for discounting cash flows and "forwarding" floating index references; otherwise there is arbitrage.
Moreover, the choice of the right discount curve can be even more problematic if multicurrency trades are involved (the collateral currency and rate are then important aspects of this topic).
The extension of the multicurve framework to risk-neutral pricing is not easy to implement, and many attempts have now been published.
Here are a few references on the subject:
- Fujii, Shimada, Takahashi — "A Note on Construction of Multiple Swap Curves with and without Collateral"
- Bianchetti — "Two curves, One price"
- Henrard — "The Irony in the Derivatives Discounting"
- Henrard — "The Irony in the Derivatives Discounting II"
- Mercurio — "Interest Rates and The Credit Crunch, New Formulas and Market Models"
- Mercurio — "Libor Market Models with Stochastic Basis"
- Morini — "Solving the Puzzle of Interest Rate Market"
- Moreni, Pallavicini — "Parsimonious HJM Models for Multiple Yield-Curve Dynamics"
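The two-curve procedure described above can be sketched in a few lines. Everything below is illustrative: the flat OIS and Libor curves, the rates, and the tenors are made-up numbers, not market data.

```python
import math

# Hypothetical flat curves (continuously compounded), purely illustrative:
# an OIS curve for discounting and a separate "Libor" curve for projection.
ois_rate, libor_rate = 0.01, 0.015

def df(rate, t):
    """Discount factor at time t (in years) from a flat curve."""
    return math.exp(-rate * t)

# Forward Libor for the accrual period [T1, T2], read off the projection
# curve only ("forwarding" in the answer's terminology).
T1, T2 = 0.5, 1.0
tau = T2 - T1
fwd = (df(libor_rate, T1) / df(libor_rate, T2) - 1.0) / tau

# The projected floating cash flow is then discounted on the OIS curve.
pv_float = tau * fwd * df(ois_rate, T2)
```

In the pre-crisis single-curve world, `libor_rate` and `ois_rate` would have been read off the same curve; the nonzero spread between them is exactly what forces the two-curve setup.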
| null | CC BY-SA 3.0 | null | 2011-02-07T15:07:08.987 | 2013-12-29T14:45:04.677 | 2013-12-29T14:45:04.677 | 263 | 92 | null |
218 | 2 | null | 208 | 2 | null | 1) This last DE is an implicit equation. It can't be solved analytically; I guess you can solve it by a finite difference method.
2) The last term is indeed a differential term of order 1/2. However, it is the time-difference term and it can remain in the equation as it is. In the final formula it will likewise come out as a difference term, implying that we use the time difference when substituting into the formula to compute the stock price, or the option price for that matter.
3) I am not sure about this part.
| null | CC BY-SA 2.5 | null | 2011-02-07T16:11:01.410 | 2011-02-07T16:11:01.410 | null | null | null | null |
219 | 1 | 221 | null | 32 | 19940 | What is the intuition behind cointegration? What does the Dickey-Fuller test do to test for it? Ideally, a non-technical explanation would be appreciated.
Say you need to explain it to an investor and justify why your pairs trading strategy should make him rich!
| What is the intuition behind cointegration? | CC BY-SA 3.0 | null | 2011-02-07T16:16:19.097 | 2014-11-12T20:43:03.297 | 2012-01-01T21:32:04.370 | 1106 | 40 | [
"time-series",
"statistics",
"cointegration",
"intuition"
] |
220 | 2 | null | 219 | 50 | null | This one is quite easy: think of a man walking his dog. He walks along and his dog strolls around him, running back and forth. Man and dog are mathematically "cointegrated".
As an investor you bet that the dog will come back to his master, or that the leash has only a certain length.
| null | CC BY-SA 2.5 | null | 2011-02-07T16:18:47.113 | 2011-02-07T16:18:47.113 | null | null | 12 | null |
221 | 2 | null | 219 | 35 | null | The standard story (also told by @vonjd) is of "The Drunk and Her Dog". This is based on ["A Drunk and Her Dog: An Illustration of Cointegration and Error Correction"](http://www.uta.edu/faculty/crowder/papers/drunk%20and%20dog.pdf) (1994). The story is itself based on the standard illustration for a [random walk](http://en.wikipedia.org/wiki/Random_walk) known as the "drunkard's walk".
The [Dickey-Fuller](http://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test) test is used to check for a [unit root](http://en.wikipedia.org/wiki/Unit_root). It can be used as part of the general Engle-Granger two-step method (although it isn't the only option).
In this case, while the two assets themselves are not stationary, you are able to test whether the residuals of a regression between the two assets are [stationary](https://quant.stackexchange.com/questions/183/what-is-stationary-process/185#185). Most people prefer another approach, the [Johansen test](http://en.wikipedia.org/wiki/Johansen_test), which uses a VECM model.
The intuition behind pairs trading is that two cointegrated instruments will follow the same long-run path (since they presumably have some common factor, such as they are both oil companies and are heavily influenced by the price of oil), and any deviations will ultimately return back to the mean. Needless to say, pairs trading (or any other form of statistical arbitrage) is still a risky endeavor, as should be clear by the performance of arbitrage funds.
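A minimal sketch of the Engle-Granger two-step idea on simulated data (everything here is synthetic, and the Dickey-Fuller regression is hand-rolled for illustration; in practice use a library implementation, and remember that the t-statistic must be compared against Dickey-Fuller critical values, not normal ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a cointegrated pair: y is a random walk (non-stationary),
# and x = 2*y + stationary noise, so the spread x - 2*y is stationary.
n = 2000
y = np.cumsum(rng.normal(size=n))
x = 2.0 * y + rng.normal(size=n)

# Step 1: regress x on y; the residuals are the estimated spread.
beta = np.polyfit(y, x, 1)[0]
resid = x - beta * y

# Step 2: Dickey-Fuller-style regression on the residuals:
#   delta_e[t] = phi * e[t-1] + u[t]
# A strongly negative t-statistic on phi suggests the spread mean-reverts.
e_lag = resid[:-1]
de = np.diff(resid)
phi = (e_lag @ de) / (e_lag @ e_lag)
u = de - phi * e_lag
se = np.sqrt((u @ u) / (len(u) - 1) / (e_lag @ e_lag))
t_stat = phi / se
```

Here `t_stat` comes out far below the roughly -3.4 Dickey-Fuller critical value, so the spread is judged stationary and the pair cointegrated.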
| null | CC BY-SA 3.0 | null | 2011-02-07T16:24:21.430 | 2014-06-01T17:21:53.897 | 2017-04-13T12:46:23.037 | -1 | 17 | null |
222 | 2 | null | 219 | 23 | null | Two time series $X_1$ and $X_2$ are cointegrated if a linear combination $aX_1+bX_2$ is stationary i.e. it has constant mean, standard deviation and autocorrelation function for some $a$ and $b$. In other words, the two series never stray very far from one another.
Cointegration might provide a more robust measure of the linkage between two financial quantities than correlation, which is very unstable in practice.
I have borrowed the following two examples from Wilmott's Frequently Asked Questions in Quantitative Finance; one is typical for a hedge-fund trader and the other illustrates the job of a mutual fund manager.
>
A. Suppose you have two stocks $S_1$ and $S_2$ and you find that $S_1 − 3 S_2$ is stationary, so that this combination never strays too far from its mean. If one day this ‘spread’ is particularly large then you would have sound statistical reasons for thinking the spread might shortly reduce, giving you a possible source of statistical arbitrage profit. This can be the basis for pairs trading.
B. Suppose we find that the S&P500 index is cointegrated with a portfolio of 15 stocks. We can then use these fifteen stocks to track the index. The error in this tracking portfolio will have constant mean and standard deviation, so should not wander too far from its average. This is clearly easier than using all 500 stocks for the tracking (when, of course, the tracking error would be zero).
| null | CC BY-SA 2.5 | null | 2011-02-07T16:43:53.940 | 2011-02-07T17:08:44.230 | 2011-02-07T17:08:44.230 | 70 | 70 | null |
223 | 2 | null | 214 | 14 | null | There is also the so-called Hakansson’s paradox that can be found in [Derman's article](http://www.ederman.com/new/docs/qf-Illusions-dynamic.pdf) on dynamic replication.
>
Hakansson’s so-called paradox (Hakansson 1979, Merton
1992) encapsulates the skepticism about dynamic replication:
if options can only be priced because they can be
replicated, then, since they can be replicated, why are they
needed at all?
| null | CC BY-SA 3.0 | null | 2011-02-07T18:04:39.440 | 2013-12-29T14:44:11.827 | 2013-12-29T14:44:11.827 | 263 | 15 | null |
224 | 2 | null | 156 | 15 | null | This isn't particularly insightful, but worth pointing out in this thread. Many people get caught up in the elegance and beauty of the mathematics and tend to be disconnected from the real world.
| null | CC BY-SA 2.5 | null | 2011-02-07T18:48:43.657 | 2011-02-07T18:48:43.657 | null | null | 122 | null |
225 | 2 | null | 208 | 10 | null | (1) You analytically solve a stochastic differential equation (SDE) using [Ito's lemma](http://en.wikipedia.org/wiki/It%C5%8D%27s_lemma). Your second equation (the discretized one) is how you could model one path over one step. To find the solution, you would model many of these paths over many steps and then take the expectation (i.e., Monte Carlo methods). The solution to the SDE models all of these paths simultaneously in expectation. You can't switch directly to discretized version and solve without some numerical technique like Monte Carlo. The differential notation is really just short hand for the more formal way to write the Ito process. For example: $$S_t = x + \int_0^t \mu_s S_s ds + \int_0^t \sigma_s S_s dW_s \Leftrightarrow dS_t = \mu_t S_t dt + \sigma_t S_t dW_t$$
Then use the expectation operator to find the expected stock price $S$ at time $t$ given the parameters and original stock price $x$. Do you have a specific problem? Maybe someone here could help you along. The stuff's pretty tricky, but notes that Steve Shreve later built into a textbook series are still available [for free](http://www.stat.berkeley.edu/users/evans/shreve.pdf). His textbooks are pretty approachable if you'd like to learn more.
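A small sketch of the Monte Carlo expectation described above, using the closed-form (lognormal) solution of the GBM SDE; the parameters are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative GBM parameters: initial price, drift, volatility, horizon.
s0, mu, sigma, t = 100.0, 0.05, 0.2, 1.0

# Exact solution: S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t), so one
# normal draw per path suffices for the terminal value.
n_paths = 200_000
w = rng.normal(scale=np.sqrt(t), size=n_paths)
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w)

mc_mean = s_t.mean()         # Monte Carlo estimate of E[S_t]
exact = s0 * np.exp(mu * t)  # lognormal mean, S_0 e^{mu t}
```

The Monte Carlo average converges to $S_0 e^{\mu t}$ as the number of paths grows, matching the expectation taken from the SDE solution.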
(2) Different concept. Ito calculus is a bit different from the calculus we learn in high school and undergrad. The power of $1/2$ is in your second equation because of the variance-standard deviation conversion. I think the wikipedia and Shreve links above are the best place to start.
(3) I am not familiar with Stratonovich integrals.
| null | CC BY-SA 2.5 | null | 2011-02-07T19:23:25.403 | 2011-02-08T11:34:34.153 | 2011-02-08T11:34:34.153 | 106 | 106 | null |
226 | 1 | null | null | 38 | 67149 | How do [volatility](http://en.wikipedia.org/wiki/Volatility_(finance)) and variance differ in finance and what do both imply about the movement of an underlying?
| What is the difference between volatility and variance? | CC BY-SA 3.0 | null | 2011-02-07T21:48:03.303 | 2020-01-08T11:03:14.143 | 2015-02-15T08:14:20.730 | 12 | 205 | [
"volatility",
"variance"
] |
227 | 2 | null | 212 | 0 | null | You can use Monte Carlo methods to generate paths.
| null | CC BY-SA 2.5 | null | 2011-02-07T21:48:24.903 | 2011-02-07T21:48:24.903 | null | null | 89 | null |
228 | 2 | null | 37 | 7 | null | A high quality and reliable solution:
[http://www.nanex.net/](http://www.nanex.net/)
I found out that you can't afford it if you have to ask for the price.
| null | CC BY-SA 2.5 | null | 2011-02-07T21:56:26.457 | 2011-02-07T21:56:26.457 | null | null | null | null |
229 | 2 | null | 226 | 28 | null | Volatility is typically unobservable, and as such estimated --- for example via the (sample) variance of returns, or more frequently, its square root yielding the standard deviation of returns as a volatility estimate.
There are also countless models for volatility, from old applied models like [Garman/Klass](http://www.fea.com/resources/pdf/a_estimation_of_security_price.pdf) to exponentially weighted schemes and formal models such as [GARCH](http://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity) or
[Stochastic Volatility](http://en.wikipedia.org/wiki/Stochastic_volatility).
As for forecasts of the movement: that is a different topic, as movement is the first moment (mean, location) whereas volatility is a second moment (dispersion). So in a certain sense, volatility estimates do not give you estimates of future direction but of future ranges of movement.
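For concreteness, here is the simplest estimator mentioned above, the annualized standard deviation of daily returns, on simulated data (252 trading days per year assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily log returns with a "true" annualized volatility of 20%.
true_vol = 0.20
daily = rng.normal(loc=0.0, scale=true_vol / np.sqrt(252), size=252 * 4)

# Sample standard deviation, scaled back to annual terms.
vol_estimate = daily.std(ddof=1) * np.sqrt(252)
```

With four years of daily data the estimate lands close to the 20% used to generate the sample.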
| null | CC BY-SA 2.5 | null | 2011-02-07T22:01:37.103 | 2011-02-07T22:07:43.987 | 2011-02-07T22:07:43.987 | 69 | 69 | null |
230 | 2 | null | 226 | 1 | null | Volatility is essentially quadratic variation. It is a property of sample paths, not probability measures. In other words, it can be calculated given a single historical path and doesn't depended upon the probability you assign to that path.
Variance, and standard deviation, are functions of the probability you assign to events.
| null | CC BY-SA 2.5 | null | 2011-02-07T22:06:15.590 | 2011-02-07T22:06:15.590 | null | null | 212 | null |
231 | 2 | null | 212 | 9 | null | There are many numerical approaches to solving stochastic integrals such as the one above. Assuming that there is no closed-form sleight of hand, the easiest approach is the Monte Carlo approach. I would recommend referring to Glasserman's excellent "Monte Carlo Methods in Financial Engineering".
If you are not familiar with MC, think of it as evaluating millions of possible paths in N dimensional space (the space of your random variable x time) and computing the expectation from a probability weighted average.
Making MC work for you involves:
- modeling your distribution accurately
- being able to randomly sample your distribution over the simulation in such a way as to sample uniformly on its cumulative probability function
- having a good random N dimensional number generator with period > total # of samples
- various tricks to reduce the required sample space
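As one example of the variance-reduction "tricks" in the last bullet, antithetic sampling pairs each normal draw with its negation. The sketch below prices a vanilla call (which of course has a closed form) purely to illustrate the mechanics; all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

# Price E[e^{-rT} max(S_T - K, 0)] under GBM: plain MC vs. antithetic MC.
s0, r, sigma, t, k = 100.0, 0.02, 0.3, 1.0, 100.0

def discounted_payoff(z):
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(s_t - k, 0.0)

z = rng.normal(size=100_000)
plain = discounted_payoff(z).mean()
# Antithetic: average each draw's payoff with that of its mirror image -z,
# which cancels much of the sampling noise in the drawn normals.
antithetic = 0.5 * (discounted_payoff(z) + discounted_payoff(-z)).mean()
```

Both estimators converge to the Black-Scholes value (about 12.8 here); the antithetic one typically does so with a noticeably smaller standard error for the same number of draws.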
| null | CC BY-SA 2.5 | null | 2011-02-07T22:06:15.860 | 2011-02-07T22:06:15.860 | null | null | 51 | null |
232 | 2 | null | 226 | 13 | null | By volatility people usually refer to the annualized standard deviation of an asset's returns. For an asset it's usually quoted as a percentage of the asset price (i.e. the return volatility). For a portfolio, it is often quoted in currency units. Variance is the square of the standard deviation. It is usually not quoted directly because it doesn't have an intuitive unit of measure. Instead, it is used in variance decomposition, e.g. the idiosyncratic variance of a portfolio is 6% of the total portfolio variance.
| null | CC BY-SA 2.5 | null | 2011-02-07T22:08:32.250 | 2011-02-07T22:08:32.250 | null | null | 194 | null |
235 | 1 | 403 | null | 33 | 17585 | For a vanilla option, I know that the probability of the option expiring in the money is simply the delta of the option... but how would I calculate the probability, without doing monte carlo, of the underlying touching the strike at some time at or before maturity?
| Probability of touching | CC BY-SA 2.5 | null | 2011-02-07T22:50:44.870 | 2022-08-05T12:09:23.050 | null | null | 214 | [
"option-pricing",
"probability"
] |
236 | 1 | 280 | null | 9 | 700 | How can I use a binomial tree to price a European option based on a portfolio of equity products? I have the volatilities and the correlation matrix of all the underlying products.
Looking for a formula based solution so that I use in Matlab. Thanks.
| How to use binomial tree for portfolio of equity products | CC BY-SA 2.5 | null | 2011-02-07T22:54:40.780 | 2011-02-08T14:59:52.827 | null | null | 223 | [
"option-pricing",
"equities",
"binomial-tree"
] |
237 | 2 | null | 27 | 6 | null | Short answer: volatility skew.
Longer answer:
Investors are willing to pay more for out-of-the-money puts (a disaster hedge).
This buying bids up the price of puts, which makes the volatility implied by those prices go up.
Calls and puts at the same strike must trade at roughly the same implied volatility, otherwise there is arbitrage; this is why you see the same phenomenon for lower-strike calls.
(Investors are less willing to pay up for out-of-the-money calls (higher strikes), so those options typically trade at lower prices and lower implied volatilities.)
| null | CC BY-SA 2.5 | null | 2011-02-07T22:56:54.110 | 2011-02-07T22:56:54.110 | null | null | 214 | null |
238 | 2 | null | 235 | 2 | null | This surely isn't the most efficient way, but if you want something quick and dirty:
You could run a vanilla model that calcs delta for each expiration date between now and expiration, and grab the delta for each. That would give you the likelihood that it's in the money at the close on any day.
From that, you can pretty easily calculate the odds that it's not in the money each day (just subtract the delta from one), multiply them all together, and subtract the product from one to determine the likelihood that it closes above the strike between now and expiration.
This does require running the formula to calc delta many times, and it ignores the risk of an intra-day touch, but it doesn't require writing something to calc the exotic you're describing.
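A sketch of this quick-and-dirty procedure, with interest rates ignored for simplicity. The independence assumption across days is the rough part (daily closes are strongly dependent), so the output is only a crude approximation:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_itm(s, k, sigma, t):
    """P(S_t > k) under driftless lognormal dynamics (rates ignored);
    essentially the N(d2) term of the Black-Scholes call formula."""
    d2 = (math.log(s / k) - 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    return norm_cdf(d2)

def quick_touch_estimate(s, k, sigma, days):
    """One minus the product of the daily probabilities of closing below
    the strike, treating the daily closes as independent."""
    p_no_touch = 1.0
    for d in range(1, days + 1):
        p_no_touch *= 1.0 - prob_itm(s, k, sigma, d / 252.0)
    return 1.0 - p_no_touch
```

By construction the estimate is at least as large as the probability of finishing in the money on the final day, as a touch estimate should be.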
| null | CC BY-SA 2.5 | null | 2011-02-07T23:00:35.663 | 2011-02-08T00:52:51.910 | 2011-02-08T00:52:51.910 | 205 | 205 | null |
239 | 2 | null | 195 | 4 | null | There's a very interesting article about this: [http://www.wired.com/magazine/2010/11/ff_midas](http://www.wired.com/magazine/2010/11/ff_midas)
---
Wall Street Firm Uses Algorithms to Make Sports Betting Like Stock Trading
The cornerstone of the operation is a piece of number-crunching software called Midas. It functions like the predictive computer programs that Amaitis dealt with on Wall Street: Midas acquires information, processes it, finds mathematical patterns and correlations, and uses all of that to divine the ever-shifting odds of sporting events. The system is robust enough to handle the play-by-play handicapping that keeps Jimmy E. glued to every pitch of the Tigers-White Sox game. During basketball season, things move so quickly that the bettors at the M have about eight seconds to consider a wager before the odds change.
| null | CC BY-SA 2.5 | null | 2011-02-07T23:40:46.930 | 2011-02-07T23:40:46.930 | null | null | 225 | null |
240 | 1 | 242 | null | 15 | 5065 | I've always wondered about this.
If you have a series of options with the expiries spaced, say, one week apart, and for each expiration date you search for the option with the smallest premium, would the series of strikes represent the market's currently predicted path of the asset?
Can you use that information to speculate on the future spot price?
I believe that this paper is about a similar approach: [http://ideas.repec.org/p/ihs/ihsesp/104.html](http://ideas.repec.org/p/ihs/ihsesp/104.html)
| Is it possible to use a series of option prices to predict the most likely path of an asset? | CC BY-SA 2.5 | null | 2011-02-07T23:50:26.813 | 2018-01-29T14:38:54.967 | null | null | 225 | [
"option-strategies"
] |
241 | 2 | null | 226 | 19 | null |
- The main underlying difference is in their definition. Variance has a fixed mathematical definition, however volatility does not as such. Volatility is said to be the measure of fluctuations of a process.
- Volatility is a subjective term, whereas variance is an objective term i.e. given the data you can definitely find the variance, while you can't find volatility just having the data. Volatility is associated with the process, and not with the data.
- In order to know the volatility you
need to have an idea of the process
i.e you need to have an
observation of the dispersion of the
process. All the different processes will have different methods to compute volatilities based on the underlying assumptions of the process.
| null | CC BY-SA 2.5 | null | 2011-02-08T00:19:44.017 | 2011-02-08T00:19:44.017 | null | null | null | null |
242 | 2 | null | 240 | 9 | null | If you think of a path as a series of ranges then your idea kind of makes sense.
However, I don't think you would get a path out of this approach, just a series of ranges.
Example:
Taking one expiry, the prices in a chain imply a range of price movements between today and expiration.
Take the SPX (at, say, 1300) and the VIX, which, for example, is at 15.8, and an SPX option chain that is 30 days from expiry. That tells you that there is approximately a 68% probability that the SPX will be between approximately 1370 and 1230 at the end of the 30 days, or a 68% probability that it will be within 1% of 1300 in 1 day.
Running this example on multiple chains would only expand the range (if implied vol is increasing in later expiries) or contract the range (if implied vol is decreasing in later expiries).
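The range arithmetic above is just the square-root-of-time rule; a minimal version of the calculation (normal approximation, calendar-day convention assumed, so the numbers come out close to but not exactly the rounded figures quoted):

```python
import math

# The example above: SPX at 1300, VIX (annualized implied vol) at 15.8%.
spx, vix, days = 1300.0, 0.158, 30

# One-standard-deviation move over `days` calendar days.
one_sd = spx * vix * math.sqrt(days / 365.0)
lo, hi = spx - one_sd, spx + one_sd  # ~68% range under a normal approximation
```

This gives a one-sigma band of roughly 60 index points on either side of the spot.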
----- idea -----
If you had access to a standardized/liquid market of path-dependent options, you might be able to narrow the range down somewhat.
If you did arrive at a narrow path estimate, it would change frequently with volatility... what would be the value of this path estimate?
| null | CC BY-SA 2.5 | null | 2011-02-08T01:03:27.863 | 2011-02-08T01:03:27.863 | null | null | 214 | null |
244 | 2 | null | 240 | 3 | null | If you're asking "can I get a prediction of a future price from an option chain", then, no, I don't think so. The value of an option does not depend on the underlying stock's drift, or price expectation, because this expectation is already reflected in the stock's current price. Given the risk-free rate and the time to expiration, all that you can back out of the option price is the implied volatility.
The intuition is that we don't really value options in absolute terms, but in terms of the underlying stock.
| null | CC BY-SA 2.5 | null | 2011-02-08T02:17:19.120 | 2011-02-08T02:17:19.120 | null | null | 106 | null |
245 | 2 | null | 240 | 6 | null | Just would like to expand on user214's answer: you can use options to predict underlying in probabilistic sense. As you know option prices imply a certain distribution - you can find expected value for stock, and volatility around that value.
If you assume a particular distribution (for example a normal distribution for returns) you can derive the expected high (over some period of time), expected low, expected range, expected drawdown, probabilities for different paths, etc. This is not something you should do naively in practice; if you do it, know the limitations of such model-based estimates (that is what I do in trading).
If you have access to exotic, particularly path-dependent options you can fit more complicated models, and figure out what they predict about the stock price. While you can fit more realistic models to only vanilla options, such fits are not robust, because they depend only on terminal distribution of the underlying, and not its path.
| null | CC BY-SA 2.5 | null | 2011-02-08T02:17:50.630 | 2011-02-08T02:17:50.630 | null | null | 185 | null |
246 | 2 | null | 209 | 3 | null | It appears like you're asking for steps on how to process large datasets.
The best solution for handling/processing/filtering these things is a relational database.
I use MySQL or Oracle most of the time.
Here is something that I usually do:
- Lay down all your data in one sheet/table (make sure you have a column for date, and another unique identifier like hour of day)
- Import the data into a MySQL/Oracle Database table.
- Apply your filters for each query you make. (select according to a range)
- Update the tables according to corporate events, e.g. if there are par-value adjustments, multiply by the appropriate factors. You can even keep a separate table listing the events and use nested SQL to filter them as they update.
- You now have an adjusted Dataset.
- Live like a boss.
:D
| null | CC BY-SA 2.5 | null | 2011-02-08T02:30:53.693 | 2011-02-08T02:30:53.693 | null | null | 233 | null |
247 | 1 | null | null | 10 | 1940 | For the Black-Scholes model my feeling is that the volatility parameter is like sweeping stuff under the rug.
Are there models which improve on the volatility aspect of Black-Scholes by adding other parameters (I'm guessing things like the distribution of past returns, or perhaps some measure of the debt load held by the company)?
| Extensions of Black-Scholes model | CC BY-SA 2.5 | null | 2011-02-08T02:34:17.810 | 2013-07-31T15:43:43.337 | null | null | 188 | [
"black-scholes",
"option-pricing",
"volatility"
] |
248 | 1 | 256 | null | 5 | 3393 | Does the debt load of a company have an impact on the stock price of a company and its volatility? Also, how does the market react to the announcement of a company issuing bonds?
| Does the debt load affect the volatility of equity? | CC BY-SA 3.0 | null | 2011-02-08T02:36:37.700 | 2011-09-09T17:39:14.223 | 2011-09-09T16:40:15.170 | 1106 | 188 | [
"volatility",
"equities",
"fixed-income"
] |
249 | 1 | 290 | null | 14 | 4896 | Forward volatility implied by SPX options and that implied by VIX futures get out of line. If there existed VIX-SQUARED futures, they could easily be replicated (and arbitraged) with a strip of SPX options. However, replicating VIX futures would theoretically require dynamic trading in options (at all strikes) and would also depend on the model for the distribution of vol of vol.
Question for traders: have you or someone you know ever traded SPX options (variance) against VIX futures? If yes, please provide some clues. Please don't write that there is an academic article about this; I'm asking whether someone has done this in practice.
| SPX options vs VIX futures trading | CC BY-SA 2.5 | null | 2011-02-08T02:37:32.067 | 2016-04-26T16:38:47.610 | null | null | 185 | [
"volatility",
"vix",
"arbitrage"
] |
250 | 1 | 274 | null | 5 | 1120 | The model assumption of the Black-Scholes formula has two parameters for the geometric Brownian motion, the volatility $\sigma$ and the expected growth $\mu$ (which disappears in the option formulae). How can this parameter $\mu$ be estimated?
| Expected Growth | CC BY-SA 2.5 | 0 | 2011-02-08T02:41:46.133 | 2011-02-08T22:00:33.247 | null | null | 188 | [
"black-scholes",
"interest-rates",
"brownian-motion"
] |
251 | 1 | 281 | null | 26 | 4553 | How much is QuantLib used in industry and how much street cred does it have?
| QuantLib in industry | CC BY-SA 2.5 | null | 2011-02-08T02:56:39.583 | 2012-01-27T15:51:49.440 | null | null | 188 | [
"software",
"quantlib"
] |
252 | 2 | null | 250 | 2 | null | I take it that $\mu$ is the drift of the long-term equilibrium price.
Let's take the lognormal model as an example:
$$dS = \mu S \, dt + \sigma S \, dz$$
where:
$S$ = spot,
$t$ = time,
$T-t$ = length of time,
$\mu$ = drift rate,
$\sigma$ = volatility,
$dz$ = increment of a Wiener process.
In order to solve for $\mu$, you might first want to look for the expected spot price. Given $X = \ln(S)$, Ito's lemma gives
$$dX = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma \, dz$$
Integrating from $t$ to $T$ yields
$$S_T = S_t \, e^{(\mu - \sigma^2/2)(T-t) + \sigma (z_T - z_t)}$$
Taking the expected value of both sides:
$$E[S_T] = S_t \, e^{\mu(T-t)}$$
This equation can be used to back out your expected growth, $\mu$.
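In practice $\mu$ is usually estimated from historical log returns, since for GBM $E[\ln(S_T/S_t)] = (\mu - \sigma^2/2)(T-t)$. Below is a sketch on simulated data; note the well-known caveat that the drift estimate stays noisy even with decades of data, while the volatility estimate is comparatively sharp:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate 40 years of daily GBM log returns with known mu and sigma.
mu_true, sigma_true = 0.08, 0.25
dt, n = 1.0 / 252.0, 252 * 40
log_ret = rng.normal((mu_true - 0.5 * sigma_true**2) * dt,
                     sigma_true * np.sqrt(dt), n)

# Recover the parameters: sigma from the sample standard deviation,
# mu from the mean log return plus the sigma^2/2 correction.
sigma_hat = log_ret.std(ddof=1) / np.sqrt(dt)
mu_hat = log_ret.mean() / dt + 0.5 * sigma_hat**2
```

Even with 40 years of data the standard error of `mu_hat` is about $\sigma/\sqrt{40} \approx 4\%$ per year, which is one reason the drift is so hard to pin down empirically.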
| null | CC BY-SA 2.5 | null | 2011-02-08T03:20:14.277 | 2011-02-08T03:20:14.277 | null | null | 233 | null |
253 | 2 | null | 250 | 0 | null | The drift term, $\mu$ is assumed to follow from the 'no-arbitrage' assumption. That is, if $\mu$ were greater than the risk-free rate, one would borrow at the risk-free rate, invest in the stock, and collect the difference. If the stock may be freely borrowed, and $\mu$ is less than the risk free rate, one would short the stock and invest in the risk-free rate, and collect the difference.
| null | CC BY-SA 2.5 | null | 2011-02-08T03:33:50.020 | 2011-02-08T03:33:50.020 | null | null | 108 | null |
254 | 1 | 321 | null | 8 | 732 | Can someone explain to me what this term means, and how it's used?
| Black-Equivalent Volatility | CC BY-SA 2.5 | 0 | 2011-02-08T03:39:19.983 | 2011-02-08T22:11:12.113 | null | null | 233 | [
"volatility"
] |
255 | 2 | null | 189 | 7 | null | I think this has something to do with my question ("[Black Equivalent Volatility](https://quant.stackexchange.com/questions/254/black-equivalent-volatility)"). I just realized that the answer might be your question:
>
Knowing this information what could be the appropriate measure of computing volatility of the economic returns?
In Energy Markets, like oil and electricity, one model we use is the mean reversion in the natural log of the spot prices.
$$d(\ln S) = a(b-\ln S)\,dt + v\,dz$$
where:
$S$ = spot price
$t$ = time of observation
$a$ = rate of mean reversion
$v$ = volatility
$b$ = long-term equilibrium
$dz$ = random stochastic variable
Now there are books that would show you how to solve for the volatility in that equation but i think the best one for me is Dragana Pilipovic's book entitled "Energy Risk 2nd Ed" (chapter 5, page 108)
And I think the black-equivalent volatility is a short-form equation that you can use off the bat. So here is my answer to your question:
black-equivalent volatility = volatility $\times \sqrt{(1-e^{-2aT})/(2aT)}$
where:
$T$ = period of time (20 years)
$a$ = rate of mean reversion
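The formula above as a function (a sketch; the variable names are mine), with the behaviour you would expect: it tends to the raw volatility as the mean-reversion rate $a \to 0$ and shrinks as mean reversion strengthens:

```python
import math

def black_equivalent_vol(v, a, t):
    """Black-equivalent volatility per the formula above:
    v * sqrt((1 - exp(-2 a T)) / (2 a T))."""
    return v * math.sqrt((1.0 - math.exp(-2.0 * a * t)) / (2.0 * a * t))
```

For example, with v = 30% and T = 20 years, negligible mean reversion leaves the number near 30%, while strong mean reversion (a = 2) shrinks it to a few percent.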
| null | CC BY-SA 3.0 | null | 2011-02-08T05:17:44.093 | 2016-07-12T09:30:21.863 | 2017-04-13T12:46:23.000 | -1 | 233 | null |
256 | 2 | null | 248 | 6 | null | Q1 - Yes, debt load has an impact on the stock price. For instance, say you are valuing a company with a discounted cash flow model: while the interest won't affect the operational cash flows, it will increase the cost of capital. With that, the perceived value will be less than that of a similar company with less debt. Debt will also affect the volatility of the equity. As debt becomes a larger portion of the old Assets = Liabilities + Equity equation, changes in asset value will have a larger effect on equity.
Q2 - Depends. If the market perceives that the company needs more debt to fund capital expenditures due to opportunity, then the market should react positively. If they're issuing debt and the market thinks this is a poor choice, the market will punish the stock. As a caveat, if they issue debt to buy back stock the market will almost always react positively due to fewer shares available for trade.
| null | CC BY-SA 2.5 | null | 2011-02-08T06:23:36.907 | 2011-02-08T06:23:36.907 | null | null | 60 | null |
257 | 1 | 414 | null | 8 | 1050 | Many times, we want to calculate VaR using some parametric approach (the delta-normal approximation, for instance) when historical simulation or Monte Carlo is simply too slow. This is fine as long as only the deltas are needed and the instruments are reasonably linear.
But when the approach is extended to use both deltas and gammas, it is no longer certain that it is computationally efficient compared to the simulation-based methods, since the calculation of a complete matrix of cross-gammas between the risk factors (number of risk factors > 10,000) becomes a very heavy task.
Are there any "justifications"/old wives' tales/ad hoc methods for limiting the number of gammas to calculate that have been used with good results in the industry? One obvious simplification is of course to skip all cross-gammas and only calculate the diagonal of the matrix, but that feels dangerous.
| How to limit the number of cross-gamma calculations in a delta-gamma VaR calculation? | CC BY-SA 3.0 | null | 2011-02-08T07:49:55.303 | 2012-08-08T12:57:00.600 | 2012-08-08T12:57:00.600 | 35 | 32 | [
"value-at-risk"
] |
258 | 2 | null | 251 | 16 | null | Based on anecdata (conversations with other quants), not much. Banks develop their own models and if they do outsource the effort, they pay someone for the code + support (there are companies like Numerix or Pricing Partners which do that). QuantLib is criticized for being poorly documented and convoluted.
My own observation is that the parts of QuantLib I was interested in (LMM) weren't particularly advanced, so if I were to make a call, I would see no point in making the effort to integrate this code with the rest of my bank's systems.
| null | CC BY-SA 2.5 | null | 2011-02-08T08:17:42.957 | 2011-02-08T08:17:42.957 | null | null | 89 | null |
260 | 1 | null | null | 8 | 762 | It was pointed out in another question that ensemble methods can help reduce curve fitting.
What are your experiences with these, and which one seems the most appropriate? If I had two forecasters that gave reasonably good results, would it be better to use both and invest half in each (diversification), or to use one of the ensemble methods?
| What are the ensemble techniques to forecast returns? | CC BY-SA 2.5 | null | 2011-02-08T09:12:56.647 | 2011-02-08T13:41:46.523 | null | null | 155 | [
"backtesting",
"forecasting"
] |
261 | 1 | null | null | 3 | 20124 | What is the difference between the two?
Today in the FT I see that UBS is the second-biggest 'wealth manager' after BofA, whilst I was always under the impression that BlackRock was the largest asset manager.
| Wealth Management Vs Asset Management | CC BY-SA 2.5 | null | 2011-02-08T09:48:36.943 | 2011-02-09T02:17:55.490 | 2011-02-09T02:17:55.490 | 117 | 103 | [
"finance"
] |
262 | 1 | null | null | 4 | 2852 | This is a bit of a subjective question and relates primarily to the UK market
There are a number of banks who are lending at BOE + 1.49% (ie: 1.99 %) whilst at the same time accepting deposits paying 2.75%
Granted the 2.75 is a bonus rate but I just cannot understand how writing these kind of mortgages are beneficial froma a banks perspective
EDIT: The terms were in fact on a two year fix with the ability to refinance tpo another provider. The answer to this question was in fact that the bank is able to artifically create moiney by fractional reserve banking and is further able to source funds from the BoE direct.
| How do banks actually make money on mortgages | CC BY-SA 3.0 | null | 2011-02-08T09:49:04.230 | 2014-10-01T10:30:04.017 | 2014-10-01T10:30:04.017 | 103 | 103 | [
"finance"
] |
263 | 2 | null | 235 | 7 | null | Allow me to disagree with Jaydles' proposal ; his methodology is valid only if the events of touching the barrier on each were independent.
If you are working within the standard Black-Scholes framework, you're looking for the probability of a drifted Brownian motion hitting a fixed level before a fixed time ; this probability is derived in most stochastic calculus texts, see for example Karatzas-Shreve or Chesney-Jeanblanc-Yor.
Another way of seeing it : you're trying to price a knock-in digital option with 0 interest rate, or knock-in zero bond. You can find formulae for these in Peter Carr's work on barrier options.
| null | CC BY-SA 2.5 | null | 2011-02-08T10:15:05.880 | 2011-02-08T10:15:05.880 | null | null | 250 | null |
264 | 2 | null | 219 | 12 | null | The somewhat tongue-in-cheek blog post [http://www.portfolioprobe.com/2010/10/18/american-tv-does-cointegration/](http://www.portfolioprobe.com/2010/10/18/american-tv-does-cointegration/) includes the example of two classes of shares on the same company.
In this case you have two assets that are essentially the same but with a few details different. The buying and selling of these assets will make the prices fluctuate from each other. However they are unlikely to stray too far from each other because there will be arbitrageurs that will bring the prices back together. Arbitrage is the leash in the human-canine analogy.
But there is a difference between cointegration and high correlation. I'm guessing that a lot of pairs trading based on "cointegration" is actually based on high correlation. The difference is risk: if two assets are truly cointegrated, then they will eventually snap back towards each other; two assets that have a history of high correlation need not snap back together.
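The "leash" can be illustrated with a toy simulation (entirely hypothetical numbers): two series built from the same random walk plus independent stationary noise are cointegrated, so each price wanders far from its start while the spread between them stays bounded:

```python
import random, statistics

random.seed(42)
trend = [0.0]  # common stochastic trend: the "owner" in the dog-walking analogy
for _ in range(2000):
    trend.append(trend[-1] + random.gauss(0, 1))

# two "share classes" on the same company: same trend plus independent,
# stationary noise -- the arbitrage leash that keeps them together
a = [t + random.gauss(0, 0.5) for t in trend]
b = [t + random.gauss(0, 0.5) for t in trend]

spread = [x - y for x, y in zip(a, b)]
# each price wanders far from its start; the spread does not
print(statistics.stdev(a) > 2 * statistics.stdev(spread))
```

Two merely correlated series would have no such stationary spread: nothing guarantees the difference snaps back, which is exactly the risk distinction made above.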
| null | CC BY-SA 2.5 | null | 2011-02-08T10:21:00.437 | 2011-02-08T10:21:00.437 | null | null | 249 | null |
266 | 2 | null | 82 | -1 | null | I have worked with First Futures at the time developing their Strategy Studio against our platform, has back testing and algo tuning possibilities: [http://www.firstfuturessoftware.com/products/strategystudio.php](http://www.firstfuturessoftware.com/products/strategystudio.php)
| null | CC BY-SA 2.5 | null | 2011-02-08T10:48:24.853 | 2011-02-08T10:48:24.853 | null | null | null | null |
267 | 2 | null | 121 | 2 | null | I think a good approach is to compare your two covariance matrices on a set of random portfolios (see for instance [http://www.portfolioprobe.com/about/applications-of-random-portfolios/assess-risk-models/](http://www.portfolioprobe.com/about/applications-of-random-portfolios/assess-risk-models/)).
What you want is a high correlation (across the portfolios) between the predicted and realized portfolio volatility. We're never going to estimate the level of volatility especially well. But if you get the right ranking across portfolios, then that is as much as you can ask.
It would be best to generate random portfolios that look like the ones you will actually have, but even naively generated portfolios may be good enough.
| null | CC BY-SA 2.5 | null | 2011-02-08T11:29:32.447 | 2011-02-08T11:29:32.447 | null | null | 249 | null |
268 | 2 | null | 247 | 5 | null | The Black-Scholes model assumes that the underlying volatility is constant over the life of the derivative, which is indeed a gross oversimplification. [Stochastic Volatility](http://en.wikipedia.org/wiki/Stochastic_volatility) models improve on that assumption by making volatility dependent on additional parameters such as distribution of returns and variance itself. However, the well known stochastic volatility models do not include company fundamentals among their parameters.
| null | CC BY-SA 2.5 | null | 2011-02-08T11:32:16.530 | 2011-02-08T11:32:16.530 | null | null | 53 | null |
269 | 2 | null | 140 | 7 | null | This blog post points to a presentation about backtesting and data snooping: [http://www.portfolioprobe.com/2010/11/05/backtesting-almost-wordless/](http://www.portfolioprobe.com/2010/11/05/backtesting-almost-wordless/)
I think the only method that truly avoids data snooping is to trade live. But the problem of data snooping can be reduced by seeing how significant the backtest result is compared to what would have happened if the trades were random. Using this technique also makes it clear that backtesting results can easily be deceiving.
| null | CC BY-SA 2.5 | null | 2011-02-08T11:40:55.330 | 2011-02-08T11:40:55.330 | null | null | 249 | null |
270 | 2 | null | 260 | 7 | null | Ensemble methods, or [ensemble learning](http://en.wikipedia.org/wiki/Ensemble_learning) are a class of statistical methods that, loosely speaking, operate on many rather than a single instance of the data. Think bootstrapping, but then combine the estimates for an aggregate. The [Wikipedia link](http://en.wikipedia.org/wiki/Ensemble_learning) has more.
Combining two forecasters is something else that is sometimes called [pooling forecasts](http://www.google.com/webhp?sourceid=chrome-instant&ie=UTF-8&ion=1&nord=1#hl=en&sugexp=ldymls&xhr=t&q=pooling+forecasts&cp=0&qe=cG9vbGluZyBmb3Jl&qesig=ZJHlsP2SSloq8fj5g4vhww&pkc=AFgZ2tlTEByw1xxXxY8Syk-Ov0zN_-HfCTHAxwPXXx8CsMAedOgKyGqhehOqwXTc9w3FPMfQVI1eKiP3rH6bvjxy891Pu9cv9A&pf=p&sclient=psy&nord=1&site=webhp&source=hp&aq=0v&aqi=&aql=&oq=pooling+fore&pbx=1&fp=5b2df7f8ff10eaf9&ion=1) or, more generally, [consensus forecast](http://en.wikipedia.org/wiki/Consensus_forecast).
The main difference is that the pieces in an ensemble method are related (pooling is from the same class or instance of an estimate), whereas pooled forecasts aggregate over different forecasts which may not have any commonality.
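A pooled (consensus) forecast can be as simple as a weighted average. The sketch below is a generic illustration with made-up numbers, not a recommendation of specific weights:

```python
def pooled_forecast(forecasts, weights=None):
    """Consensus forecast: a (weighted) average of individual forecasts.

    With equal weights and uncorrelated errors of variance v, the pooled
    error variance drops to v/n, which is the diversification argument
    for combining forecasters rather than picking one.
    """
    if weights is None:
        weights = [1.0 / len(forecasts)] * len(forecasts)
    return sum(w * f for w, f in zip(weights, forecasts))

# two return forecasters pooled with equal weight
consensus = pooled_forecast([0.02, 0.04])
print(round(consensus, 4))
```

In practice the weights are often set by each forecaster's historical accuracy, but equal weighting is a surprisingly hard benchmark to beat.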
| null | CC BY-SA 2.5 | null | 2011-02-08T12:48:51.157 | 2011-02-08T13:41:46.523 | 2011-02-08T13:41:46.523 | 69 | 69 | null |
271 | 2 | null | 261 | 1 | null | Wealth management typically refers to the management of the personal wealth of high net worth (HNW) individuals. Asset management, on the other hand, usually means managing the assets on the behalf of a larger entity (pension funds, insurance companies, endowments, etc).
- Some banks provide both services
- Some larger entities do their own asset management (some less successfully, like Harvard)
- Wealth management could be used even by smaller banks for retail customer assets (401k allocation, etc)
| null | CC BY-SA 2.5 | null | 2011-02-08T13:03:46.107 | 2011-02-08T13:03:46.107 | null | null | 253 | null |
272 | 2 | null | 240 | 10 | null | A further comment on user214's answer: the probability distribution of the future value of the index that you imply from option prices is its distribution under the (market) risk-neutral measure, which is generally different from the true historical measure. In particular, option prices do not give information about the risk premium. There is a vast literature about this, but a good start is [this paper from Chris Rogers and Steve Satchell](http://www.statslab.cam.ac.uk/~chris/papers/caution.pdf).
Furthermore, European option prices give you information about the marginal distributions of the index at fixed maturities, but they give you no clue about the dynamical properties of the value process, that is, the distribution of the paths.
| null | CC BY-SA 2.5 | null | 2011-02-08T13:16:26.903 | 2011-02-08T13:16:26.903 | null | null | 250 | null |
273 | 2 | null | 240 | 5 | null | This is the paper for you:
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1107464](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1107464)
From the abstract:
>
The shape of the volatility smirk has significant cross-sectional predictive power for future equity returns. Stocks exhibiting the steepest smirks in their traded options underperform stocks with the least pronounced volatility smirks in their options by around 10.9% per year on a risk-adjusted basis. This predictability persists for at least six months, and firms with the steepest volatility smirks are those experiencing the worst earnings shocks in the following quarter. The results are consistent with the notion that informed traders with negative news prefer to trade out-of-the-money put options, and that the equity market is slow in incorporating the information embedded in volatility smirks.
For information on the volatility smirk a good starting point is here:
[http://en.wikipedia.org/wiki/Volatility_smile](http://en.wikipedia.org/wiki/Volatility_smile)
| null | CC BY-SA 2.5 | null | 2011-02-08T13:45:08.937 | 2011-02-08T13:45:08.937 | null | null | 12 | null |
274 | 2 | null | 250 | 5 | null | As you said, $\mu$ is the expected return, that is, the expected value (mathematical expectation) of the random variable "stock return" under the objective probability measure. Assuming that returns are stationary*, the obvious way to estimate it is to compute a large number $N$ of returns $R_i$, then to average them. You also want to annualize this average (multiply by 252).
Now if you are using consecutive periods, and logarithmic returns, this simply amounts to computing the overall return $\log S_{T_N} / S_{T_0}$ and dividing it by the time lapse $T_N-T_0$ (in year fraction).
(*) : This is a dubious hypothesis, and the estimate will indeed be very unstable.
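In code, the telescoping of consecutive log returns makes the estimate a one-liner (toy prices below, dividends ignored; these are illustrative numbers only):

```python
import math

def annualized_drift(prices, years):
    """Estimate mu from consecutive log returns.

    With log returns the sample average telescopes, so only the first
    and last prices matter: log(S_T / S_0) divided by the year fraction.
    """
    total_log_return = math.log(prices[-1] / prices[0])
    return total_log_return / years

# a stock going from 100 to 121 over two years
mu = annualized_drift([100.0, 104.0, 110.0, 115.0, 121.0], 2.0)
print(round(mu, 4))  # ln(1.21)/2 = ln(1.1), about 0.0953
```

Note that the intermediate prices cancel out entirely, which is exactly why this estimator is so noisy: effectively only two observations determine it.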
| null | CC BY-SA 2.5 | null | 2011-02-08T13:48:24.193 | 2011-02-08T13:48:24.193 | null | null | 250 | null |
275 | 2 | null | 209 | 17 | null | >
Q: How do strategies deal with
corporate actions?
A: Very carefully.
Jokes aside, it is not trivial, and the answer depends on what you want to do with the data. Yahoo provides adjusted stock prices/returns for splits/mergers/dividends, as explained by Shane. The resulting time series is not very useful for predictive and risk-management purposes. Commercial providers (or their resellers) do not follow this approach. They develop an internal and unique ID for every asset (Barra ID, Axioma Db, Thomson Reuters has its own, etc.) that is mapped to the ticker/CUSIP/SEDOL at any given date. Prices are not merged and dividends are reported separately. Why is that? Because if you want to actually predict asset returns, you're better off with the pristine time series. For example, if you want to develop a model to predict returns of HPQ, you are better off looking at the HP time series pre-merger with Compaq, and HPQ afterwards. By combining their returns, you would corrupt the value of predictive variables like company size, EPS, etc. One more complication: from the announcement of the merger of CPQ and HP, their returns were effectively coupled. How you go about modeling this correlation of their idiosyncratic returns is highly subjective, but it shouldn't be ignored.
Finally, one more point: when building a prediction model of stock returns based on daily or monthly data, the dividend should be ignored, since the value of the dividend is already priced in the asset prices and returns.
I am just skimming the surface of the issue. But the take-away messages are:
- it depends on what you want to do, so think hard about what the data mean;
- it is complicated, and that it is why data providers sell their service for a small fortune.
Daily returns from public sources do not backfill for delisted assets, so you have a survivorship bias. If you are using those, I recommend having a small universe, a short time interval (2-3 years), and using unadjusted returns.
| null | CC BY-SA 2.5 | null | 2011-02-08T13:52:19.157 | 2011-02-08T13:52:19.157 | null | null | 194 | null |
276 | 2 | null | 180 | 10 | null | One approach is Conditional Value at Risk (CVaR) a.k.a. Expected Shortfall (ES). It does, as you suggest, take into account the whole set of returns. However, instead of traditional VaR which asks "what is the worst 1% or 5% loss I can expect" in a given time frame, conditional VaR asks "assuming I sustain losses of at least 95% or 99% (and perhaps am capitalized to sustain losses of only this amount), what is my expected loss (or shortfall)" for this time period? It can be argued this is more relevant for understanding the impact of more dire scenarios.
Another approach from Extreme Value Theory is concerned with strictly modeling the heavy tail of the returns. Generalized distributions (e.g. Gumbel, Frechet) can be fit to the tail(s) in question via something called the Hill Estimation technique. These are covered in depth in literature should you be interested in more detail.
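A minimal historical (non-parametric) sketch of the two measures, using made-up returns; note that CVaR is always at least as large as VaR at the same level:

```python
def var_cvar(returns, level=0.95):
    """Historical VaR and CVaR (expected shortfall) at the given level.

    VaR is the loss at the (1-level) tail quantile; CVaR averages the
    losses at or beyond it, so CVaR >= VaR always holds.
    """
    losses = sorted(-r for r in returns)      # losses as positive numbers
    cutoff = int(level * len(losses))
    tail = losses[cutoff:]
    var = losses[cutoff]
    cvar = sum(tail) / len(tail)
    return var, cvar

rets = [0.01, -0.02, 0.005, -0.05, 0.02, -0.01, 0.03, -0.08, 0.0, 0.015,
        -0.005, 0.025, -0.03, 0.01, -0.015, 0.02, -0.04, 0.005, -0.06, 0.01]
v, c = var_cvar(rets, 0.9)
print(v, round(c, 4))
```

With twenty observations at the 90% level, the tail holds only the two worst losses; real estimates need far more data for the tail average to be stable.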
| null | CC BY-SA 2.5 | null | 2011-02-08T13:55:07.393 | 2011-02-08T13:55:07.393 | null | null | 253 | null |
277 | 2 | null | 43 | 27 | null | There is a family of models that is so commonly used among practitioners that it can be almost regarded as standard. For a survey, check out [Rob Almgren's](http://www.courant.nyu.edu/~almgren/papers/eqf.pdf) entry in the Encyclopedia of Quantitative Finance. Check out also Barra, Axioma and Northfield's handbooks. In general, the impact term per unit traded currency is of the form
$$MI \propto \sigma_n \cdot \text{(participation rate)}^\beta$$
where the exponent is somewhere between 1/2 and 1, depending on the model being used, and the participation rate is the percentage of total volume that the trade represents during the trading interval itself. When including the total MI in an optimization, the models commonly used are the "3/2" model and the "5/3" model, in which the costs are proportional to $(\text{dollar value traded in asset } i)^{3/2}$ or $(\cdot)^{5/3}$ respectively. Since the term is not quadratic (and not solvable by a quadratic optimizer), some people approximate it by a linear term plus a quadratic one, or by a piecewise-linear convex function.
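A sketch of the impact family above with illustrative, made-up parameters (the constant k and the daily volatility are assumptions; calibrating them is the hard part):

```python
def impact_cost(dollars, adv_dollars, sigma_daily, beta=0.5, k=1.0):
    """Per-dollar market impact of the square-root family:

        MI = k * sigma * (participation rate)^beta,  beta in [1/2, 1].

    Total cost is dollars * MI; with beta = 1/2 and a fixed trading
    interval, total cost grows like dollars^(3/2), the "3/2" model.
    """
    participation = dollars / adv_dollars
    return k * sigma_daily * participation ** beta

# trading $1M, or 1% of a $100M daily volume, in a stock with 2% daily vol
mi = impact_cost(1e6, 1e8, 0.02)
print(round(mi * 1e4, 1), "bps per dollar traded")
```

The concavity in participation is what makes the optimization non-quadratic, hence the linear-plus-quadratic or piecewise-linear approximations mentioned above.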
| null | CC BY-SA 2.5 | null | 2011-02-08T14:20:15.810 | 2011-02-08T14:20:15.810 | null | null | 194 | null |
278 | 2 | null | 251 | 1 | null | I've never heard of it, but I've only been in the industry 2.5 years. Our C++ guys haven't even mentioned it either. They prefer using PACK/LAPACK which is mostly rooted in academia & heavily debugged. We also make heavy use of the IMSL FORTRAN libraries for hardcore statistical computation and Extreme Optimization (for .NET).
One of our other researchers has reported some interest in the Intel Math Kernel Library, but that faded once they saw the price tag.
To echo what @quant_dev said, we much prefer to build our own models.
| null | CC BY-SA 2.5 | null | 2011-02-08T14:25:41.173 | 2011-02-08T14:25:41.173 | null | null | 19 | null |
279 | 2 | null | 262 | 0 | null | I am no expert in banking, but I hope to share my thoughts.
First, a bank's capital charge for a mortgage is not 100%, so with $100 of deposits, banks can lend out, say, $500, depending on the capital charge.
Second, I think the deposit rate and the mortgage rate you mentioned are not of the same duration. I am not familiar with the rates in the UK. It's possible that the deposit rate of 2.75% is, for example, a 3-year rate, while the 1.49% mortgage rate is a floating rate (1-month rate). Usually short-term rates are lower than long-term rates.
Let me know if I am wrong. Thanks.
| null | CC BY-SA 2.5 | null | 2011-02-08T14:37:00.767 | 2011-02-08T14:37:00.767 | null | null | null | null |
280 | 2 | null | 236 | 4 | null | Under the typical Black-Scholes model, you "cannot" do it, because the assumption is that each of the securities in the portfolio has a lognormal terminal distribution, and the sum of lognormally distributed variables is not itself lognormally distributed. In theory one needs an N-dimensional tree (or grid) to treat an N-element portfolio.
I write "cannot" in quotes because this problem is actually quite commonly encountered and solved in one of a few ways, none of which involves a binomial tree:
- If you are comfortable using historical estimates, simply look at the volatility of the portfolio hypothetically over history. This has two significant disadvantages: (i) historical volatility is generally smaller than forward volatility due to survivorship bias, and (ii) there may have been IPOs or other corporate events that make the portfolio value unknown before some date
- Use Monte Carlo to simulate every element of the portfolio, pricing the option by the usual MC methods.
- Use the trick of moment-matching, where a little mathematics tells you the equivalent lognormal (or sometimes shifted-lognormal) distribution to your portfolio. You can then use the usual closed-form option pricing formulas. The technique has been around since at least the mid-90s. Since not all the papers from back then are easy to see online, here's a URL to a recent rediscovery of the trick in which they go so far as to run a binomial tree for American basket options.
The final technique is almost certainly what you want to use.
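Here is a rough sketch of the moment-matching trick for a two-asset basket, assuming zero rates and dividends (all numbers illustrative): match the basket's first two moments to a single lognormal and read off an equivalent forward and volatility to plug into a standard Black-Scholes formula.

```python
import math

def match_lognormal(spots, vols, corr, weights, T):
    """Match a basket's first two moments to a single lognormal.

    Zero rates and dividends assumed, so forwards equal spots. For
    correlated GBMs, E[S_i S_j] = S_i S_j exp(rho_ij sigma_i sigma_j T);
    the equivalent variance is log(m2 / m1^2) / T.
    """
    n = len(spots)
    m1 = sum(w * s for w, s in zip(weights, spots))   # E[B_T]
    m2 = 0.0                                          # E[B_T^2]
    for i in range(n):
        for j in range(n):
            m2 += (weights[i] * weights[j] * spots[i] * spots[j]
                   * math.exp(corr[i][j] * vols[i] * vols[j] * T))
    eq_vol = math.sqrt(math.log(m2 / (m1 * m1)) / T)
    return m1, eq_vol

corr = [[1.0, 0.5], [0.5, 1.0]]
fwd, vol = match_lognormal([100.0, 80.0], [0.2, 0.3], corr, [0.5, 0.5], 1.0)
print(round(fwd, 2), round(vol, 4))
```

The equivalent volatility sits below a naive weighted average of the component vols whenever correlation is below one, which is the diversification effect the single-asset Black-Scholes inputs must absorb.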
| null | CC BY-SA 2.5 | null | 2011-02-08T14:59:52.827 | 2011-02-08T14:59:52.827 | null | null | 254 | null |
281 | 2 | null | 251 | 14 | null | It's a very good and useful question. And it is bloody hard to answer, just like many other questions relating to the proprietary nature of how banks and funds implement their core technology. A better route may be to ask on the QL lists, and/or to inquire as to who actually attended the first [Quantlib forum](http://www.statpro.com/quantlib_forum.aspx) in London last month. Another route would be to see who [StatPro](http://www.statpro.com/) lists as clients.
I have been near QL for a long time based on my [RQuantLib](http://dirk.eddelbuettel.com/code/rquantlib.html) bindings to [GNU R](http://www.r-project.org). That obviously covers only a subset of users (those who like R) as well as functionality but I can assure you that I have been in contact with a number of places about it.
Which makes perfect sense: this is open source code, so you can always bring it in to at least provide a benchmark or reference implementation.
| null | CC BY-SA 2.5 | null | 2011-02-08T15:09:58.430 | 2011-02-08T15:09:58.430 | null | null | 69 | null |
282 | 1 | 384 | null | 7 | 422 | This sentence in the following paper got me thinking:
"Some traders [...] trade every pattern whether proven or not, expecting
authentic ones to produce positive results, whilst the profits and losses of
fake patterns cancel each other out."
[http://www.seasonalcharts.com/img/ZUTEXTEN/saisonalitaet_e.pdf](http://www.seasonalcharts.com/img/ZUTEXTEN/saisonalitaet_e.pdf)
Then I got my hands on another paper - seemingly unrelated...: "The
evolution of superstitious and superstition-like behaviour":
[http://rspb.royalsocietypublishing.org/content/276/1654/31.full.pdf](http://rspb.royalsocietypublishing.org/content/276/1654/31.full.pdf)
The bottom line is that it can be rational to act irrationally; in other words, there is a trade-off between being superstitious and being ignorant, and being superstitious (i.e. seeing patterns where there are none) can under certain circumstances be beneficial, which is why it survived evolution up until now.
Have you come across research that systematizes this approach for the trading arena, i.e.
first: what is the right confidence level to trade a signal, and
second: "opportunistic trading" as an approach of its own?
With "opportunistic trading" I mean a framework to trade every signal out
of some class (with money and risk management attached for not going
bust) in the hope that the false signals cancel each other out and the real
ones make money.
Perhaps you have some thoughts how to backtest these ideas, too.
| Is there something like opportunistic "superstitious" trading? | CC BY-SA 2.5 | null | 2011-02-08T15:29:51.080 | 2011-02-10T03:58:46.130 | null | null | 12 | [
"backtesting",
"trading",
"trading-patterns"
] |
283 | 1 | 383 | null | 8 | 779 | Are there any papers about possible trading strategies you can apply when you know where a large cluster of orders is located in the order-book?
These seem to fall in the liquidity-provisioning/hunting range of strategies, is this correct?
| What quant terms to use to search for papers about "stop-hunting" trading strategies? | CC BY-SA 2.5 | null | 2011-02-08T16:14:56.500 | 2011-09-25T19:51:18.747 | null | null | 225 | [
"strategy",
"liquidity"
] |
284 | 1 | 285 | null | 8 | 2309 | On average, how much slippage (measured in lost % return potential) is typical for an operating quant fund that trades in, say, major U.S. equities?
| How significant is slippage in a successful quant fund? | CC BY-SA 2.5 | null | 2011-02-08T16:16:42.410 | 2011-10-31T23:58:56.180 | null | null | 80 | [
"slippage"
] |
285 | 2 | null | 284 | 9 | null | It depends almost entirely on the size of the positions the fund is trading.
More specifically, it is a function of the liquidity of the stock.
ex-post liquidity can be observed from volume in aggregate and time & sales, in particular.
ex-ante liquidity can be estimated (often reasonably well using a bucketed look-back approach).
It also depends on whether the fund is trading to provide liquidity (against momentum) or to remove liquidity (momentum following). In the case of the former, the slippage can often be negative. In the latter, slippage increases as trade size increases. Slippage can be lowered by using an algo like VWAP, IS, or TWAP, but trades take longer to place under these block break-up approaches.
For large US equities, you can consistently trade in the low thousands of shares multiple times per day, with little to no impact on slippage.
This guy is generally considered the expert on this topic:
[http://www.courant.nyu.edu/~almgren/](http://www.courant.nyu.edu/~almgren/)
his papers describe some novel approaches to minimizing slippage for large blocks.
| null | CC BY-SA 2.5 | null | 2011-02-08T16:28:31.800 | 2011-02-08T16:28:31.800 | null | null | 214 | null |
286 | 1 | null | null | 3 | 1259 | What are some trading strategies for stocks (just stocks, no derivatives) using freely available [online data sources](https://quant.stackexchange.com/questions/141/data-sources-online)?
| basic stock trading strategies | CC BY-SA 2.5 | null | 2011-02-08T16:35:43.010 | 2011-02-08T16:45:13.347 | 2017-04-13T12:46:22.953 | -1 | 128 | [
"strategy",
"trading"
] |
287 | 2 | null | 262 | 0 | null | A banking textbook will tell you that banks earn their profits from their net interest margin(difference between rates charged, and rates paid on banks borrowing).
Your question makes it clear that the net interest margin math does not add up.
Look at the 10-K (or the 10-Q for that matter) filings for a publicly traded bank that is a significant mortgage originator.
What you'll see in the filings is that the income of the bank is actually driven primarily by fee income (this fee income is overdraft fees, ATM fees, and loan (mortgage) origination fees). For most banks, fee income drives 50% or more of their net income.
This is how they make money on doing mortgages.
| null | CC BY-SA 2.5 | null | 2011-02-08T16:36:48.203 | 2011-02-08T16:36:48.203 | null | null | 214 | null |
288 | 2 | null | 286 | 3 | null | There are two broad categories of trading strategies.
- Momentum strategies (e.g. trend following)
- Mean reversion strategies.
Elder's book outlines the major ones:
[http://www.amazon.com/Trading-Living-Psychology-Tactics-Management/dp/0471592242](http://rads.stackoverflow.com/amzn/click/0471592242)
And the depressing truth about most of these strategies is outlined by Aronson:
[http://www.amazon.com/Evidence-Based-Technical-Analysis-Scientific-Statistical/dp/0470008741/](http://rads.stackoverflow.com/amzn/click/0470008741)
| null | CC BY-SA 2.5 | null | 2011-02-08T16:45:13.347 | 2011-02-08T16:45:13.347 | null | null | 214 | null |
289 | 2 | null | 282 | 2 | null | Not exactly definitive, but I once met a lady at a seminar I was teaching that gave an overview of how she uses astrology to make trading decisions.
| null | CC BY-SA 2.5 | null | 2011-02-08T16:46:53.297 | 2011-02-08T16:46:53.297 | null | null | 214 | null |
290 | 2 | null | 249 | 10 | null | Short answer: yes.
Long answer: the challenge in trading these things, like you mentioned, is that each contract is not perfectly hedgeable. This is an intentional choice made by the exchanges that list these products, so that they can provide an incentive for trading firms (locals) to provide liquidity for these new products and help boost trading volume.
The primary challenge in trading spx options vs vix futures, or spx options vs vix options, or vix options vs vix futures, is the fact that all three have different expirations.
This fact, combined with the wildly different notionals for each (VIX futures = 1000 × VIX, VIX options = 100 × VIX, and SPX = 250 × the index), makes it more difficult to hedge and trade one versus the other.
What most liquidity providers do, instead of trying some kind of variance-gamma model approach, is simply hedge VIX options or VIX futures using SPX straddles, updating their hedges several times a day. This isn't a perfect hedge, and has lots of additional timing risks. However, there is something like 1% to 1.5% edge in trading this approach, so levering up can make it a reasonable strategy when combined with the mean-reverting nature of VIX.
| null | CC BY-SA 2.5 | null | 2011-02-08T16:58:04.037 | 2011-02-08T16:58:04.037 | null | null | 214 | null |
291 | 2 | null | 261 | 0 | null | Put Simply:
asset management is about producing growth.
wealth management is about preserving principle.
| null | CC BY-SA 2.5 | null | 2011-02-08T16:59:10.667 | 2011-02-08T16:59:10.667 | null | null | 214 | null |
292 | 2 | null | 262 | 5 | null | Most large banks generally sell the mortgages they originate to investors, but they retain the servicing rights. Therefore, they make money via origination and servicing fees; the spread between deposit rates and mortgage loan rates isn't as simple or important as your question suggests.
| null | CC BY-SA 2.5 | null | 2011-02-08T17:23:46.940 | 2011-02-08T17:23:46.940 | null | null | 56 | null |
293 | 1 | 301 | null | 15 | 4115 | What are the main categories of systematic trading strategies (e.g. momentum, mean reversion), as might be considered by an index or fund-of-fund analyst?
Are there any common sub-strategies?
| Categories of systematic trading strategies? | CC BY-SA 2.5 | null | 2011-02-08T17:34:23.037 | 2019-04-02T12:26:38.807 | 2011-02-08T20:37:58.537 | 17 | 262 | [
"strategy",
"mean-reversion"
] |
294 | 2 | null | 293 | 9 | null | There is no official taxonomy of quant trading models. After all, "valuations" are inherently subjective, no matter how much math we put behind them. But there are some industry-standard terms that might be helpful.
[Inside the Black Box](http://rads.stackoverflow.com/amzn/click/0470432063) has the following break-down:
- Price
  - Trend
  - Reversal
- Fundamental
  - Yield
  - Growth
  - Quality
It's also possible to break-down by implementation:
- Time horizon: ranging from long-term to high-frequency
- Bet structure: relative or intrinsic
- Instruments: liquid or illiquid
And these don't even get into portfolio construction, position limits, risk monitoring, etc.
As for what works, keep this maxim in mind:
>
Bulls make money, bears make money, but pigs get slaughtered.
And lastly, comparing chartists to quants is like comparing astrologists to astronomers.
| null | CC BY-SA 2.5 | null | 2011-02-08T18:58:41.920 | 2011-02-08T18:58:41.920 | null | null | 35 | null |
295 | 2 | null | 262 | 1 | null | Check out the amortization table on your mortgage. You'll find that for the first five years, you are paying 90+% of your payment towards interest. So, if you need to sell end your loan anytime before full maturity, the majority of what you would have paid is interest, not principle. So, your effective interest rate for your specific loan duration would have been much higher.
I don't think this factor is stressed enough, and really ought to be illegal. Why not just divide the interest evenly over the life of the loan so that interest and principle are applied equally? A real shameful way to do business...
| null | CC BY-SA 2.5 | null | 2011-02-08T19:20:41.707 | 2011-02-08T19:25:42.203 | 2011-02-08T19:25:42.203 | null | null | null |
296 | 7 | null | null | 0 | null | The Quantitative Finance Stack Exchange is intended for professionals and academics involved in quant modeling or trading.
Basically, you must either
- be earning a living at this,
- or be studying this specifically in graduate school.
Otherwise, your question will probably be off topic.
Good questions are focused on an actual problem you face in the course of your work as a quant or academic researcher. If you have a question about
- securities valuation
- risk modeling
- market microstructure
- portfolio management
- financial engineering
- econometrics
then you're in the right place to ask your question.
Some questions are often asked on here repeatedly; we generally redirect those to the canonical answers. Common questions include:
- Where can I find free data online?
- What programming language should I use to implement my trading system?
- What books should I read?
There are also some common questions that are totally off-topic and will be closed. Usually these come from people outside the industry. Examples include:
- How do I become a quant?
- What should my thesis topic be?
- Could someone help me develop a trading strategy?
There may be other questions that are better suited for our sister sites:
- Programming: Stack Overflow
- Statistics: Cross Validated
- Basic Finance and Definitions: Personal Finance & Money
We can answer questions for software packages that are specific to quantitative finance, though bug reports and requests for support should be filed with the software vendor.
Please note, however, that cross-posting is not encouraged.
| null | CC BY-SA 3.0 | null | 2011-02-08T19:34:31.400 | 2013-02-04T16:25:37.370 | 2013-02-04T16:25:37.370 | 467 | -1 | null |
297 | 1 | 307 | null | 8 | 1522 | I don't quite understand how anyone would invest in VXX (aside from short-term trades)... Since the VIX term structure is generally in contango, VXX is doomed to bleed to death.
Therefore, how exactly does this type of structure (ETN) work? Eventually, the capital raised by selling these notes will run out; what happens then? They have already reverse split the VXX; what's next? Who is investing in it in the first place?
| How to make sense of VXX and the people who bought it? | CC BY-SA 2.5 | null | 2011-02-08T20:33:33.290 | 2011-02-08T21:10:05.473 | 2011-02-08T20:41:50.950 | 17 | 290 | [
"vix"
] |
298 | 1 | 308 | null | 21 | 18190 | I'm curious about high performance computing and consider algo/program trading as an interesting source of information about what are performant technologies that are used to trade the markets.
Is Scala being used out there? Is it a viable language for a startup prop shop? Would it be considered an advantageous language, given its more expressive syntax (and thus less code) compared to Java/C++, while being just as speedy?
| Is Scala used in trading systems | CC BY-SA 2.5 | null | 2011-02-08T20:34:09.507 | 2020-10-26T19:45:36.303 | null | null | 57 | [
"trading-systems"
] |
299 | 2 | null | 298 | 9 | null | Empirics should count for something, and the (awesome) [Language Shootout](http://shootout.alioth.debian.org/) does show that [C++ still very clearly dominates Scala](http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=scala&lang2=gpp) in execution time and memory use -- though Scala looks better in code size.
Shops with an existing investment in Java like Scala as a next-generation Java, given that the latter hasn't moved all that much of late. I have also heard of some startups betting on it. I am not in the Java camp, so my money is still on C++ (especially when performance really matters).
| null | CC BY-SA 2.5 | null | 2011-02-08T20:39:12.170 | 2011-02-08T20:39:12.170 | null | null | 69 | null |
300 | 2 | null | 298 | 5 | null | To the best of my knowledge, there is no Scala implementation of an execution platform. C/C++ is still the language of choice for mission-critical financial applications, followed by Java for those more recent shops that didn't have the burden of much legacy C++ code. I cannot imagine Scala taking hold, given that it is slower and not nearly as robust as C++ and Java.
For UIs, everything is fair game, from Java to Python to C# to Ruby. But choosing the right language for a UI has never been the competitive advantage of any investment firm, I believe.
| null | CC BY-SA 2.5 | null | 2011-02-08T20:55:13.113 | 2011-02-08T20:55:13.113 | null | null | 194 | null |
301 | 2 | null | 293 | 12 | null | There are other strategy types not covered by mean-reversion/trend following:
- arbitrage - keep correlated assets close in price (SPX index versus the 500 stocks contained in it, or Gold trading in London versus Gold trading in New York)
- market making - buy on bid, sell on ask, gain the spread
- liquidity rebate - some venues pay you for putting limit orders in the book. Put in a limit order to buy; when it's hit, try to sell at the same price you bought at (or better) and gain the rebate. Works best on high-volume, low-price assets.
- predatory trading - seek big hidden liquidity in the market and front-run it
- behavioral trading - quantify market sentiment and trade on it (analyze tweets, determine global/regional mood and use known psychological theories to predict the effect on market behavior)
- event trading - analyze news (electronic, paper, blogs, tweets) and predict the market impact of new relevant facts (litigation, new products, new management, ...)
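The arbitrage bullet above can be sketched as a toy signal check. This is a hypothetical illustration (the function name, weights, and threshold are made up); real index arbitrage must also account for the index divisor, fees, borrow costs, and latency:

```python
def index_arb_signal(index_price, weights, stock_prices, threshold=0.001):
    """Toy index-arbitrage check: compare a quoted index level against the
    weighted basket of its constituents and signal a trade when the
    relative deviation exceeds a threshold."""
    fair_value = sum(w * p for w, p in zip(weights, stock_prices))
    deviation = (index_price - fair_value) / fair_value
    if deviation > threshold:
        return "sell index / buy basket"
    if deviation < -threshold:
        return "buy index / sell basket"
    return "no trade"
```

For example, an index quoted at 4010 against a basket worth 4000 deviates by 0.25% and would trigger "sell index / buy basket".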
| null | CC BY-SA 3.0 | null | 2011-02-08T20:57:47.933 | 2013-10-26T02:11:50.650 | 2013-10-26T02:11:50.650 | 35 | 225 | null |
302 | 2 | null | 298 | 5 | null | Yes, it's used. However, its use for now is mainly limited to the creation of Domain Specific Languages for prototyping, because of its easy integration with existing Java infrastructure. Here Scala also competes with Groovy, another JVM-based language, and with F#, which offers relatively easy interaction with Excel.
| null | CC BY-SA 2.5 | null | 2011-02-08T20:59:41.760 | 2011-02-08T20:59:41.760 | null | null | 15 | null |
303 | 1 | null | null | 36 | 12033 | Similar to [this other question about Scala](https://quant.stackexchange.com/questions/298/is-scala-used-in-trading-systems), I'm interested in knowing whether F# is used to any measurable degree in financial circles. Have there been any successful shops using it, any research on performance and viability?
| Is F# used in trading systems? | CC BY-SA 2.5 | null | 2011-02-08T21:00:27.773 | 2020-06-24T04:52:00.707 | 2017-04-13T12:46:23.037 | -1 | 288 | [
"programming"
] |
304 | 2 | null | 298 | 14 | null | We can play these language wars until pigs fly, but there are a few very basic things that most people agree on:
- As has already been said, C++ is the standard language where performance is really important (and Java comes in second). An example of how this shows up: C++ is taught in the Wilmott quant finance certificate and in MFE programs. It also appears the most often in job postings.
- For other areas, where performance is not the top priority, a wide variety of other languages undoubtedly get used. For instance, Jane Street is very public about their OCaml implementation. Many hedge funds use languages like R, MATLAB, and Python, even in their production environments.
Are firms using Scala? Probably. Do you want to become a Scala expert in the hope of using it in quantitative finance? Not a good bet.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:04:59.577 | 2011-02-08T21:04:59.577 | null | null | 17 | null |
305 | 2 | null | 303 | 5 | null | Not that I know of, although I know .net is sometimes used as a platform by (larger) asset management companies, so F# seems a good candidate to parallelize computationally intensive jobs.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:05:48.970 | 2011-02-08T21:05:48.970 | null | null | 194 | null |
306 | 1 | null | null | 27 | 92080 | What programming languages are the most common in quantitative finance, and why are these languages used?
Note: I do not mean, what languages are used to develop the accounting system at a hedge fund: this is specifically related to aspects of valuation and trading.
| What programming languages are most commonly used in quantitative finance? | CC BY-SA 2.5 | null | 2011-02-08T21:10:03.827 | 2013-06-12T10:13:15.257 | null | null | 17 | [
"programming"
] |
307 | 2 | null | 297 | 6 | null | Generally, one holds VXX (or VXZ) for the same reasons one holds any long-volatility position, either (a) as a directional bet on volatility or (b) as a hedge to large directional moves or implicit short volatility positions. Obviously the former reason is often shorter-term.
In the second case, it's relatively easy to see that, say, an equity portfolio of leveraged companies will get creamed in precisely the same circumstances in which VXX will spike. In effect, the dividends or appreciation of those stocks will pay for the bleeding of the VXX position. The same is even more evidently true of positions belonging to the subset of options traders whose personal style often puts them in short volatility (short gamma) positions. In these cases VXX provides something of a fire-and-forget hedge that does not need constant rolls, as an option or VIX futures position might.
Essentially all long-volatility positions tend to "bleed" value from day to day, at least on those days when nothing "interesting" happens. Options traders call this the theta bill.
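The day-to-day bleed can be illustrated with a toy roll model -- a static futures curve and a 21-day roll cycle. This is a sketch under stated assumptions, not the actual methodology of the index VXX tracks:

```python
def contango_drag(front, second, roll_days=21):
    """Daily drag from rolling down a static contango curve: roughly
    1/roll_days of the front-to-second-month spread is lost each day,
    expressed as a fraction of the front price."""
    return (second - front) / (roll_days * front)

def value_after(front, second, trading_days=252, roll_days=21):
    """Compound the daily drag over a holding period (start value 1.0)."""
    drag = contango_drag(front, second, roll_days)
    return (1.0 - drag) ** trading_days
```

With a front future at 20 and a second month at 22, the toy drag is about 0.48% per day, so a position held for a year retains only about 30% of its value -- the "bleed" the question refers to.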
With respect to the mechanics, Barclays has the right to redeem the notes. The capital won't run all the way out, but may get so low that they decide to redeem and reissue. I would be willing to bet that, if they do so, they will work out a way to keep the stock symbol unaltered.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:10:05.473 | 2011-02-08T21:10:05.473 | null | null | 254 | null |
308 | 2 | null | 298 | 20 | null | EDF Trading uses it (or used it): [http://cufp.org/videos/scala-edf-trading-implementing-domain-specific-language-derivative-p](http://cufp.org/videos/scala-edf-trading-implementing-domain-specific-language-derivative-p)
In general, many financial institutions use functional programming languages. Andrei is correct in that they often are used to develop domain-specific languages (DSLs).
Some examples:
- Credit Suisse uses F# http://cufp.org/archive/2008/abstracts.html#MansellHoward (they had toyed around with Haskell for a bit to create a DSL called Paradise http://urchin.earth.li/~ganesh/icfp08.pdf)
- Jane Street Capital uses OCaml ( http://ocaml.janestreet.com/?q=node/61 )
- Barclays uses a DSL in Haskell for describing exotic trades ( https://web.archive.org/web/20160313025331/http://www.lexifi.com/files/resources/frankau.pdf )
A "complete" list can be found by exploring proceedings from Commercial Users of Functional Programming workshop proceedings: [http://cufp.org/conference](http://cufp.org/conference)
If you are smart about DSL creation, the programs you write can actually have better performance because you can perform smarter compiler optimizations (e.g., [http://infoscience.epfl.ch/record/148814/files/paper.pdf](http://infoscience.epfl.ch/record/148814/files/paper.pdf))
One last note, since Scala targets the JVM, you could always integrate the Java and Scala code (or even make native calls to C++ code via the Java Native Interface).
| null | CC BY-SA 4.0 | null | 2011-02-08T21:10:40.270 | 2020-10-26T19:45:36.303 | 2020-10-26T19:45:36.303 | 33410 | 75 | null |
309 | 2 | null | 303 | 12 | null | I worked for a big investment bank a few years ago that announced it was moving all quant models to F#. The goal behind the switch was that F# is a functional programming language and available on .NET, both of which were desirable qualities for this particular company. I left before they got started on the transition, so I don't know what came of it.
As for the related OCaml, [Jane Street](http://www.janestcapital.com/technology/ocaml.php) famously uses that.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:10:41.510 | 2011-02-08T21:10:41.510 | null | null | 35 | null |
310 | 1 | 314 | null | 45 | 9505 | How many trades per second are we talking about?
What kind of strategies are used in this time frame?
Can the small guy play the game?
| How 'High' is the frequency in HFT? | CC BY-SA 2.5 | null | 2011-02-08T21:18:52.283 | 2017-05-29T01:41:17.163 | 2014-04-12T15:56:30.683 | 2299 | 262 | [
"high-frequency",
"market-microstructure",
"strategy"
] |
311 | 2 | null | 303 | 24 | null | Credit Suisse has publicly stated that they use F# for some valuation tasks (which tend to be very parallelizable). Here's a link to a talk abstract from a Commercial Users of Functional Programming workshop:
[http://cufp.org/archive/2008/abstracts.html#MansellHoward](http://cufp.org/archive/2008/abstracts.html#MansellHoward)
I'm not sure if there's a video of the talk floating around or not.
Since F# targets the .NET framework, it gives you a lot of flexibility in integrating other applications that might have been written in, say, C#.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:18:54.517 | 2011-02-08T21:18:54.517 | null | null | 75 | null |
312 | 2 | null | 306 | 29 | null | A choice of C, C++, or Java is practically required somewhere in the stack since most data vendors only supply bindings for one of those languages. Once the data arrives, though, the trading desk can use whatever it wants.
In addition to the above three, I've seen these used in production:
- Visual Basic / Excel
- q / kdb+
- R
- Python
- MATLAB
- OCaml
I've also seen Flex and AJAX used for some front-end components.
And finally, some firms build their own in-house proprietary languages. Goldman Sachs famously has Slang.
---
Related question:
What language should I use in quantitative finance?
Related answer:
- Whatever your boss pays you to use.
- At least one of the "Big Three" above to get the data feed.
- If you actually do get a choice, whatever you feel most comfortable using.
As per #3, that's why you'll notice most language choices are either mathematical in nature (like R and MATLAB) or very high-level in nature (like Python and OCaml).
| null | CC BY-SA 2.5 | null | 2011-02-08T21:22:35.853 | 2011-02-08T21:34:40.397 | 2011-02-08T21:34:40.397 | 35 | 35 | null |
313 | 2 | null | 303 | 6 | null | There was a huge buzz about F# in the City, and a few banks/funds tried very aggressively to hire people with F# knowledge and experience in finance. For example, Luca Bolognese, one of the F# authors, joined Credit Suisse almost 2 years ago. Also, Don Syme used to conduct visiting lectures about F# and its possible applications to finance in the City.
| null | CC BY-SA 3.0 | null | 2011-02-08T21:23:09.010 | 2011-09-09T15:01:03.863 | 2011-09-09T15:01:03.863 | 15 | 15 | null |
314 | 2 | null | 310 | 29 | null | You could for example look at [this research paper released by Deutsche Bank's Research group](http://www.dbresearch.com/PROD/DBR_INTERNET_EN-PROD/PROD0000000000269468.pdf) ([mirror](https://web.archive.org/web/20170529002243/http://www.dbresearch.com/PROD/DBR_INTERNET_EN-PROD/PROD0000000000269468.pdf)) just yesterday which defines both high-frequency and ultra-high-frequency trading.
In the paper it says
>
Typically, a high frequency trader would not hold a position open for
more than a few seconds. Empirical evidence reveals that the average
U.S. stock is held for 22 seconds.
And in a footnote it says
>
There even is a subcategory of high-frequency trading, Ultra-HFT,
which is sensitive to a latency down to the microsecond. Here,
co-location [of servers] is exceedingly significant, and shaving off
further microseconds is of utmost importance.
And no, the small guy can't play for reason well-put in the paper, co-location probably being the single most important one.
| null | CC BY-SA 3.0 | null | 2011-02-08T21:24:19.053 | 2017-05-29T01:41:17.163 | 2017-05-29T01:41:17.163 | 2183 | 69 | null |
315 | 2 | null | 310 | 17 | null | A survey by FinAlternatives in 2009 concluded that "86% believe that the term “high-frequency trading” referred strictly to holding periods of only one day or less." (Aldridge 2009):
![enter image description here](https://i.stack.imgur.com/kIav2.gif)
There are two problems with this survey for our present discussion: (1) the meaning of the term has been clarified significantly since that survey and (2) it surveyed a wide spectrum of people. This latter point isn't necessarily a problem, but it amounts to asking non-experts questions about a very esoteric field: you are bound to get a wide variance in the answers. High frequency trading is primarily a proprietary trading phenomenon (including when it is done by banks like Goldman Sachs), because its primary benefit is very high returns with low risk on a small amount of capital. Trading at high speeds is not a high-capacity strategy, so it is not as suitable to a hedge fund structure which is (a) fee-driven and (b) subject to client scrutiny.
Long story short: in order to have any meaning, HFT needs to mean something akin to latency arbitrage, in which case co-location and going as fast as possible are critical. This means different things depending on the asset class, strategy, etc. But firms like GETCO are making edits to their Linux distributions in order to get higher speeds: that's your competition.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:24:53.780 | 2011-02-08T21:36:45.720 | 2011-02-08T21:36:45.720 | 17 | 17 | null |
316 | 2 | null | 306 | 13 | null | I believe that C++ is the most common quantitative infrastructure language. I don't know of a single hedge fund or investment bank that doesn't use it extensively or exclusively (and I spoke to a lot of them at some point in the past). In some cases, as at the former Lehman Brothers (now Barclays), C++ was the only language of choice, which is a bit extreme, given that C++ is not as easy to use as a scripting language. Most companies I know pair C++ with a scripting language of choice. Traditionally, this was MATLAB, which is still amazingly popular for prototyping. More recently, R and Python have become more popular. R especially is now used at several hedge funds I know, and has taken many MATLAB seats; and I know of Python users at JPM and UBS. Python also has nice bindings to BLAS, LAPACK, NAG and MOSEK. Goldman has differentiated itself by developing a proprietary language, Slang; the very definition of non-popular. I don't think Java is used heavily, or at all, for numerical work. Just try to find maintained Java bindings for BLAS and LAPACK.
Some crazy market maker will differentiate itself via language, as in the case of Jane St., a lonely user of OCaml. I would not be surprised to find a LISP-only 10-person shop flying under the radar, and very wealthy. But I have never heard of a single Clojure/Scala/Groovy/Ruby/[add your trendy language] user in a serious company. At least, I would never invest my money in a company using an immature web-development language to manage wealth.
Summing up, I would say C++, and then a number of prototyping languages, with MATLAB still dominating, but R having a strong positive momentum.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:29:42.303 | 2011-02-08T21:29:42.303 | null | null | 194 | null |
317 | 2 | null | 306 | 8 | null | Let me quote a few excerpts from Paul & Dominics Guide to Quant Careers (version 2.0):
>
Most quant jobs ask for C++, with much smaller demand for C# and Excel VBA and Java. Although Excel is the second most common skill, alas Excel VBA is regarded as “trivial” so few employers will be impressed by mastering it. This attitude is responsible for major efforts at the large banks to defuse the vast number of actively disruptive, yet critical spreadsheets that enjoy the reliability of a British train... Some quants get sucked into roles that we call “Excel Jockeys”. Although some Excel work is cutting edge trading floor work that makes money every time you get the sheet to work properly, the majority is looking after risk reports, data ingest and sheets that even users don't know why they have.
MatLab is common in both academia and finance, and it does not harm to have used it, but again like Excel VBA the view amongst managers is that if you are smart enough to do real quant work you can pick up whatever MatLab you need though we do see a few jobs for extreme high end MatLab gurus.
Fortran is still quite popular in academia, but has only a tiny market share in banks... Much “C++” code in banks is really much like C, and one does see “C++” code that is written using the C subset but trying to be like Fortran.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:37:10.020 | 2011-02-08T21:37:10.020 | 2020-06-17T08:33:06.707 | -1 | 70 | null |
318 | 2 | null | 310 | 15 | null | I. Re: # of trades...
[According to WK Selph](http://howtohft.blogspot.com/2011/02/how-to-build-fast-limit-order-book.html) (former quant turned blogger) @ [WK's High Frequency Trading How To](http://howtohft.blogspot.com):
>
To give some idea of the data volumes,
the Nasdaq TotalView ITCH feed, which
is every event in every instrument
traded on the Nasdaq, can have data
rates of 20+ gigabytes/day with spikes
of 3 megabytes/second or more. The
individual messages average about 20
bytes each so this means handling
100,000-200,000 messages per second
during high volume periods.
Doesn't speak to the # of trades executed, but to the # of transactions analyzed by an HFT trading system in search of opportunistic trades...
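The quoted figures are internally consistent: at roughly 20 bytes per message, a 3 MB/s burst implies on the order of 150,000 messages per second, as a quick sanity check shows:

```python
def messages_per_second(bytes_per_second, avg_message_bytes=20):
    """Back out the message rate implied by a feed's byte rate,
    given an average message size."""
    return bytes_per_second / avg_message_bytes

# A 3 MB/s burst with ~20-byte messages:
peak_rate = messages_per_second(3_000_000)  # 150,000 messages/second
```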
II. re: "can the small guy play the game?"...
Can't recommend highly enough reading WK Selph's post on "[How I Built a Startup HFT Firm](http://howtohft.blogspot.com/2011/01/how-i-built-startup-hft-fund-part-1.html)", e.g.
He started an HFT startup with only one partner:
>
my focus will be mainly on the
technical challenges I faced as the
engineering half of a two person HFT
startup, the business challenges, such
as acquiring trading capital and
negotiating ever-lower trading costs,
were similarly massive.
On the never-ending challenge for lower cost-per-trade:
>
Another problem, especially acute in
high frequency trading, is that the
cost per trade directly impacts the
profitability of a trading strategy
so, for a given per-trade cost,
strategies that would otherwise be
profitable are not. Clearing firms
will only provide a low per-trade cost
to customers that execute many trades,
but to execute many trades one needs a
low per-trade cost. So in addition to
the purely technical challenges,
building a HFT firm from scratch means
solving two chicken-and-egg problems,
that of building a track record and
that of negotiating a low per-trade
cost, simultaneously.
| null | CC BY-SA 2.5 | null | 2011-02-08T21:44:00.723 | 2011-02-08T21:52:22.243 | 2011-02-08T21:52:22.243 | 266 | 266 | null |
319 | 2 | null | 180 | 11 | null | Perhaps you may want to consider the article by [D. Levine - Modeling Tail Behavior with Extreme Value Theory](http://www.soa.org/library/newsletters/risk-management-newsletter/2009/september/jrm-2009-iss17-levine.pdf), which gives a practical example of how EVT can be used to calculate tail probabilities of returns using the Pickands-Balkema-de Haan theorem and the generalized Pareto distribution. It also contains some criteria and pointers on other methods that can be used to determine the threshold value for the PBH theorem:
>
Contrary to this notion is the fact that the PBH theorem
states a result based on the assumption of threshold values
approaching the right endpoint of the distribution F. This
implies that better GPD fits are expected for larger choices
of the threshold u.
One must strike a balance between choosing u large
enough so that the theorem is applicable from a practical
standpoint and small enough so that a sufficient number
of data points can be used in estimation of the parameters
of the GPD.
There is no hard and fast rule describing the “right”
choice of the threshold value. Some methods for threshold
selection can be found in Bensalah’s “Steps in Applying
Extreme Value Theory to Finance: A Review.”
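As an illustrative sketch of the fitting step the quoted passage discusses, here is a method-of-moments GPD fit to the excesses over a chosen threshold (function names are my own; real applications typically use maximum likelihood plus diagnostics such as mean-excess plots):

```python
import numpy as np

def fit_gpd_moments(excesses):
    """Method-of-moments fit of the generalized Pareto distribution to
    excesses over a threshold u: with sample mean m and variance s2,
    shape  xi   = 0.5 * (1 - m^2/s2)
    scale  beta = 0.5 * m * (m^2/s2 + 1)."""
    ex = np.asarray(excesses, dtype=float)
    m = ex.mean()
    s2 = ex.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / s2)
    beta = 0.5 * m * (m * m / s2 + 1.0)
    return xi, beta

def gpd_tail_prob(y, xi, beta):
    """P(excess > y) under a fitted GPD (exponential limit as xi -> 0)."""
    if abs(xi) < 1e-12:
        return float(np.exp(-y / beta))
    return float((1.0 + xi * y / beta) ** (-1.0 / xi))
```

The choice of threshold u then trades bias (u too low for the theorem to apply) against variance (too few excesses to estimate from), exactly as the quoted passage explains.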
| null | CC BY-SA 2.5 | null | 2011-02-08T21:58:15.363 | 2011-02-08T21:58:15.363 | null | null | 15 | null |
320 | 2 | null | 250 | 0 | null | In the risk-neutral measure, $\mu$ is equal to the risk-free rate. In the real-world measure, $\mu$ is unknown and must be estimated using statistical methods: either directly from historical data, or from a model calibrated to historical data. Note that, in general, there is nothing preventing you from using today's (or past) prices as inputs to this model. For example, no-arbitrage relationships ought to hold in both "worlds".
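A minimal sketch of the "estimate directly from historical data" route, assuming a GBM model (the function name is illustrative): under GBM, log returns are i.i.d. normal with mean $(\mu - \sigma^2/2)\Delta t$ and variance $\sigma^2 \Delta t$, so:

```python
import math

def estimate_gbm_params(prices, dt):
    """Estimate real-world GBM drift mu and volatility sigma from a price
    series sampled every dt years, by inverting the log-return moments
    mean = (mu - sigma^2/2)*dt and variance = sigma^2*dt."""
    logret = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(logret)
    mean = sum(logret) / n
    var = sum((r - mean) ** 2 for r in logret) / (n - 1)
    sigma = math.sqrt(var / dt)
    mu = mean / dt + 0.5 * sigma ** 2
    return mu, sigma
```

Note how slowly the drift estimate converges: its standard error is roughly $\sigma/\sqrt{T}$, with $T$ the total sample length in years, which is one reason the real-world $\mu$ is so hard to pin down (and why risk-neutral pricing conveniently sidesteps it).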
| null | CC BY-SA 2.5 | null | 2011-02-08T22:00:33.247 | 2011-02-08T22:00:33.247 | null | null | 89 | null |
321 | 2 | null | 254 | 5 | null | Quite a lot of options on an asset $S(t) > 0$ have a payoff at time $T$ equal (at least approximately -- it's a bit more complicated in the case of e.g. credit index options) to
$$(S(T) - K)^+$$
You can always find a number $\sigma$ such that, when plugged into Black formula together with strike $K$, spot price $S(t)$, interest rate $r$ and time to expiry $T-t$, you will recover the market price of the option $V(t)$. This number is called the Black implied volatility of the option. Basically, it's a quoting convention for the option prices. Traders use it because:
- it makes it easier for them to compare prices of options on different days, with different strikes
- Black vols tend to be similar across strikes and expiries (not always!)
- it is better (in the sense: you're less likely to suffer lots of arbitrage) to interpolate market prices in the $\sigma$ space than directly; that is, if prices for strikes $K_1$ and $K_2$ are quoted, it's better to use some interpolation method on their Black vols $\sigma_1$ and $\sigma_2$ than on their prices $V_1$ and $V_2$
- it fits their intuition better (a paramount argument)
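The "find a number $\sigma$" step above can be sketched as a one-dimensional root search on the Black formula. This is a minimal bisection sketch, assuming forward-style inputs and a simple discount factor:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(fwd, strike, sigma, expiry, df=1.0):
    """Black (1976) call on a forward: df * (F*N(d1) - K*N(d2))."""
    srt = sigma * math.sqrt(expiry)
    d1 = (math.log(fwd / strike) + 0.5 * sigma * sigma * expiry) / srt
    d2 = d1 - srt
    return df * (fwd * norm_cdf(d1) - strike * norm_cdf(d2))

def implied_vol(price, fwd, strike, expiry, df=1.0, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert black_call for sigma by bisection; the price is assumed
    arbitrage-free, so a unique root exists in [lo, hi] because the
    Black price is monotone increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_call(fwd, strike, mid, expiry, df) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Pricing an option at $\sigma = 0.25$ and inverting recovers the same vol to tight tolerance -- which is exactly the quoting convention described in the bullets above.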
| null | CC BY-SA 2.5 | null | 2011-02-08T22:11:12.113 | 2011-02-08T22:11:12.113 | null | null | 89 | null |