Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1069
|
5
| null | null |
0
| null |
[R](http://www.r-project.org) is available on a wide variety of UNIX platforms, Windows and MacOS, and can be downloaded from [CRAN](http://cran.r-project.org/). It is an implementation of the S programming language combined with lexical scoping semantics inspired by Scheme. R was created by Ross Ihaka and Robert Gentleman and is now developed by the R Development Core Team. It is easily extended through a packaging system on [CRAN](http://cran.r-project.org/).
| null |
CC BY-SA 3.0
| null |
2011-04-26T23:19:21.510
|
2011-04-27T00:58:47.313
|
2011-04-27T00:58:47.313
|
126
|
126
| null |
1070
|
4
| null | null |
0
| null |
An open source programming language and software environment for statistical computing and graphics.
| null |
CC BY-SA 3.0
| null |
2011-04-26T23:19:21.510
|
2011-12-14T15:32:48.880
|
2011-12-14T15:32:48.880
|
1106
|
126
| null |
1071
|
5
| null | null |
0
| null | null |
CC BY-SA 3.0
| null |
2011-04-26T23:22:30.477
|
2011-04-26T23:22:30.477
|
2011-04-26T23:22:30.477
|
-1
|
-1
| null |
|
1072
|
4
| null | null |
0
| null | null |
CC BY-SA 3.0
| null |
2011-04-26T23:22:30.477
|
2011-04-26T23:22:30.477
|
2011-04-26T23:22:30.477
|
-1
|
-1
| null |
|
1073
|
5
| null | null |
0
| null |
One related question with a clear definition
- Measuring liquidity
| null |
CC BY-SA 3.0
| null |
2011-04-26T23:24:54.453
|
2017-06-17T17:37:35.947
|
2017-06-17T17:37:35.947
|
2299
|
-1
| null |
1074
|
4
| null | null |
0
| null |
Liquidity is easy to define qualitatively (the ease of buying or selling an asset), but difficult to measure.
| null |
CC BY-SA 3.0
| null |
2011-04-26T23:24:54.453
|
2017-06-17T17:37:35.947
|
2017-06-17T17:37:35.947
|
2299
|
-1
| null |
1075
|
5
| null | null |
0
| null | null |
CC BY-SA 3.0
| null |
2011-04-26T23:26:51.067
|
2011-04-26T23:26:51.067
|
2011-04-26T23:26:51.067
|
-1
|
-1
| null |
|
1076
|
4
| null | null |
0
| null |
Mortgage-backed securities.
| null |
CC BY-SA 3.0
| null |
2011-04-26T23:26:51.067
|
2012-01-18T21:19:19.307
|
2012-01-18T21:19:19.307
|
1106
|
1106
| null |
1077
|
2
| null |
1004
|
29
| null |
By "cryptography" you mean information theory. Information theory is useful for portfolio optimization and for optimally allocating capital between trading strategies (a problem which is not well addressed by other theoretical frameworks.)
See:
--- J. L. Kelly, Jr., "A New
Interpretation of Information Rate,"
Bell System Technical Journal, Vol.
35, July 1956, pp. 917-26
--- E. T. Jaynes, Probability Theory: The
Logic of Science
[http://amzn.to/dtcySD](http://amzn.to/dtcySD)
--- [http://en.wikipedia.org/wiki/Gambling_and_information_theory](http://en.wikipedia.org/wiki/Gambling_and_information_theory)
- http://en.wikipedia.org/wiki/Kelly_criterion
In the simple case, you would use "The Kelly Rule". More complicated information theory based strategies for allocating capital between trading strategies take into account correlations between the performance of trading strategies and the relationship between market conditions and strategy performance.
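For concreteness, here is a minimal R sketch of the simple Kelly rule mentioned above; the function name and example inputs are illustrative, not taken from the references.
```
kelly_fraction <- function(p, b) {
  # p: probability of winning; b: net odds received on a win
  q <- 1 - p
  (b * p - q) / b  # fraction of capital to wager each bet
}
kelly_fraction(p = 0.55, b = 1)  # even-odds bet with a 55% edge -> 0.10
```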
As for natural language processing and speech recognition: when you examine the founders of Renaissance Technologies, you will notice that many of the early employees had backgrounds in natural language processing. Naively, you might assume that RT is using NLP-based strategies.
However, you will find that all of RT's NLP-related hires have backgrounds (published research, PhD theses) in speech recognition, and specifically in hidden Markov models and Kalman filters. The academic background and published research of RT employees give you a good idea of the algorithms they are using.
The information that has leaked out of RT suggests that RT heavily uses hierarchical hidden Markov models for latent-variable extraction from market time series. It is also believed that RT has developed a proprietary algorithm for "layering" multiple trading strategies for trade-signal generation.
RT does not have a single secret trading strategy that magically generates billions of dollars a year. Renaissance Technologies' trading strategies are based upon the integration of information from multiple mathematical models.
| null |
CC BY-SA 4.0
| null |
2011-04-27T13:40:46.850
|
2018-09-13T11:01:34.117
|
2018-09-13T11:01:34.117
|
-1
|
806
| null |
1078
|
1
|
1083
| null |
11
|
1408
|
The algorithm is introduced in the paper, [Can We Learn to Beat the Best Stock](http://www.aaai.org/Papers/JAIR/Vol21/JAIR-2117.pdf).
The obvious advantage is superior risk-adjusted returns (if you can actually achieve them). Transaction costs and sensitivity to window size seem to be potential disadvantages.
I'm hoping someone can provide more insightful advantages / disadvantages.
|
What are the advantages / disadvantages of the ANTICOR algorithm?
|
CC BY-SA 3.0
| null |
2011-04-27T15:47:45.587
|
2011-04-28T02:20:16.870
|
2011-04-27T16:53:26.643
|
35
|
56
|
[
"correlation",
"portfolio-selection"
] |
1079
|
1
|
1080
| null |
9
|
3934
|
I feel like I'm missing something fundamental here, but I can't shake the feeling that these two series should be equivalent.
/edit: there is also dailyReturn(Cl(SPY)). I've seen all 3 of these methods used to calculate stock returns on various blogs, and I'm wondering which is 'correct.' They all give slightly different results...
/edit2: and there is also Delt(Cl(SPY)), which seems to be equivalent to ClCl(SPY)
|
Quantmod: what's the difference between ROC(Cl(SPY)) and ClCl(SPY)
|
CC BY-SA 3.0
| null |
2011-04-27T21:44:57.803
|
2011-05-01T12:38:56.243
|
2011-04-27T21:56:50.783
|
500
|
500
|
[
"r",
"returns"
] |
1080
|
2
| null |
1079
|
3
| null |
`TTR::ROC` calculates log returns by default. `quantmod::ClCl` uses `quantmod::Delt`, which calculates arithmetic returns by default.
`ROC(Cl(SPY), type="discrete")` should match `ClCl(SPY)`. Which is 'correct' depends on your purpose.
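A quick way to check this claim in R (assuming quantmod is installed and Yahoo data is reachable; quantmod loads TTR):
```
library(quantmod)
getSymbols("SPY")                      # fetches SPY into the workspace
head(ROC(Cl(SPY), type = "discrete"))  # arithmetic returns
head(ClCl(SPY))                        # should match the line above
head(ROC(Cl(SPY)))                     # default "continuous" = log returns
```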
| null |
CC BY-SA 3.0
| null |
2011-04-27T21:57:27.897
|
2011-04-27T21:57:27.897
| null | null |
56
| null |
1081
|
1
| null | null |
13
|
1558
|
Most quantitative investment strategies focus on the changing prices of a commodity or equity over time. Derivatives, however, make this more complicated. How can I apply quantitative strategies to something that is not a single product with a single price, but rather separate products for each month (each with their own price and volatility)?
For example, how should I know whether I should be buying an orange futures contract for July 2011 rather than July 2012? Additionally, how could I apply formulas such as a moving average to all of these different prices?
Thank you for bearing with me to answer such a fundamental question, I feel like most quantitative strategies I have read about are in an equity context.
|
Quantitative Derivatives Trading vs. Time
|
CC BY-SA 3.0
|
0
|
2011-04-27T21:57:37.510
|
2014-02-02T02:35:03.383
| null | null |
807
|
[
"derivatives"
] |
1082
|
2
| null |
1079
|
6
| null |
To expand on what Joshua has already stated, here is a truncated parameter list of similar functions, along with the package to which they belong.
```
quantmod::Delt(x1,type = c("arithmetic", "log"))
quantmod::periodReturn(x, type='arithmetic') # log would be "log"
TTR::ROC(x, type=c("continuous", "discrete"))
PerformanceAnalytics::CalculateReturns(prices, method=c("compound","simple"))
```
Your choices are simple returns, which are `(today's_close - yesterday's_close) / yesterday's_close`, or log returns, which are `log(today's_close) - log(yesterday's_close)` or, equivalently, `log(today's_close / yesterday's_close)`. If you decide on simple returns, you must multiply the gross returns to get the total return at the end of a period. With log returns you get to add them, which is preferred when your vector may have zeroes in it, for obvious reasons. If you have a simple 1 or -1 signal, then you're only going to have a zero in the beginning, or an NA, depending on which function you choose. But once you have a system that goes flat, or signals a 0, then you will have some trouble with simple returns.
Simple returns are referred to as arithmetic, discrete or simple in the above functions. The log returns are alternately referred to as log, continuous or compound.
The `Delt` function is sort of an artifact and has been updated with the `dailyReturn` function. Here is a snapshot of what each function generates on the first two lines of a trading system. Notice also that some have their defaults set to simple returns and others have default set to log returns. Each function allows you to change the default.
```
SLV.Close Delt dailyReturn ROC CalculateReturns
2010-01-04 17.23 NA 0.000000000 NA NA
2010-01-05 17.51 0.016250725 0.016250725 0.016120096 0.016120096
```
Remember that once you convert your returns to log returns, you need to convert back to get simple returns again; this is accomplished by applying `exp(log_return) - 1`.
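A small R illustration of the two conventions and the conversion, using the first two closes from the table above plus one made-up price:
```
p <- c(17.23, 17.51, 17.40)          # closes; the third value is made up
log_ret    <- diff(log(p))           # log returns: they add up
simple_ret <- diff(p) / head(p, -1)  # simple returns: they compound
sum(log_ret)                         # total log return over the period
prod(1 + simple_ret) - 1             # total simple return over the period
exp(sum(log_ret)) - 1                # same number: the conversion above
```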
My recent blog post about this topic may be of interest to you. [http://www.milktrader.net/2011/04/chop-slice-and-dice-your-returns-in-r.html](http://www.milktrader.net/2011/04/chop-slice-and-dice-your-returns-in-r.html)
| null |
CC BY-SA 3.0
| null |
2011-04-28T01:24:18.543
|
2011-05-01T12:38:56.243
|
2011-05-01T12:38:56.243
|
291
|
291
| null |
1083
|
2
| null |
1078
|
4
| null |
One of the major assumptions is that you have zero transaction costs. Another is that your returns are tax-free. Otherwise it looks to me like a windowed version of CBAL (constant rebalanced portfolio).
A more technical analysis can be found at:
Castonguay, [Portfolio Management: An empirical study of the Anticor algorithm](http://digitool.library.mcgill.ca/webclient/StreamGate?folder_id=0&dvs=1303954512559~100) (An MS thesis)
Cover and Gluss, [Empirical Bayes Stock Market Portfolios](http://www.stanford.edu/~cover/papers/paper67.pdf) (CBAL)
| null |
CC BY-SA 3.0
| null |
2011-04-28T02:20:16.870
|
2011-04-28T02:20:16.870
| null | null |
126
| null |
1084
|
2
| null |
832
|
2
| null |
Wavelets and Kalman filtering.
| null |
CC BY-SA 3.0
| null |
2011-04-28T02:26:32.080
|
2011-04-28T02:26:32.080
| null | null |
126
| null |
1085
|
2
| null |
955
|
5
| null |
Take a look at White's Reality Check.
Another very crude way would be to calculate a "skill score" (from The Mathematics of Technical Analysis, p. 325):
$$\text{skill score} = \frac{\text{SKILL\_correct} - \text{NOSKILL\_correct}}{\text{Total decisions} - \text{NOSKILL\_correct}}$$
- SKILL_correct: the profitable trades
- NOSKILL_correct: randomly assigned trades that were profitable
- Total decisions: number of trades
If this number is 0 or negative, it indicates that you are mostly dealing with a lucky investor, and not a skilled one.
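A minimal R sketch of the skill score above (the function and argument names are illustrative):
```
skill_score <- function(skill_correct, noskill_correct, total_decisions) {
  # skill_correct: profitable trades; noskill_correct: profitable
  # randomly assigned trades; total_decisions: number of trades
  (skill_correct - noskill_correct) / (total_decisions - noskill_correct)
}
skill_score(60, 50, 100)  # 0.2 -> some evidence of skill
```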
| null |
CC BY-SA 3.0
| null |
2011-04-28T11:16:17.090
|
2013-03-15T23:42:50.793
|
2013-03-15T23:42:50.793
|
104
|
659
| null |
1086
|
2
| null |
946
|
7
| null |
If you get paid enough theta it absolutely makes sense to be short gamma. And the closer to expiration, the faster the time-value flees. Most of the time, most people would prefer to be gamma long though. It's simply a safer bet because of uncertainty: unexpected events can seriously damage your book if you're short vol.
| null |
CC BY-SA 3.0
| null |
2011-04-28T11:20:58.210
|
2011-04-28T11:20:58.210
| null | null |
659
| null |
1087
|
2
| null |
942
|
21
| null |
Yahoo rounds the adjusted price to 2 decimals even though dividend amounts often have 3 decimal places. Since they apply the adjustment formula to adjusted prices, if you go far enough back in time, the value they give for Adjusted Price will be different than it would be if there were no rounding.
edit: For example, for C (Citigroup), on January 2, 1990, Yahoo gives a close value of 29.37 and an Adjusted value of 1.50. Using the dividend data that Yahoo supplies, if they didn't round to cents on every adjustment, the adjusted value would be 1.677.
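To see how per-step rounding compounds, here is an R sketch with synthetic adjustment factors (not Yahoo's actual dividend data):
```
set.seed(1)
factors <- 1 - runif(80, 0.001, 0.01)  # 80 hypothetical adjustments
exact <- 29.37 * prod(factors)         # no intermediate rounding
rounded <- 29.37
for (f in factors) rounded <- round(rounded * f, 2)  # round to cents
c(exact, rounded)                      # the two values drift apart
```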
| null |
CC BY-SA 3.0
| null |
2011-04-29T14:13:12.343
|
2011-04-29T14:28:13.617
|
2011-04-29T14:28:13.617
|
508
|
508
| null |
1088
|
1
|
1089
| null |
6
|
392
|
Would it be considered appropriate risk-management or overkill to utilize multiple brokers to manage a given trading strategy?
A couple specifics:
- I'm interested in what a reasonably-sized hedge fund might typically do, not an individual day trader.
- Let's assume major equities and that either broker could easily handle the whole strategy alone; the point is avoiding the risk of one broker being unavailable or unable to execute a trade.
|
Is it common to use multiple brokers for risk reduction?
|
CC BY-SA 3.0
| null |
2011-04-29T14:54:37.953
|
2011-04-29T18:38:09.683
| null | null |
80
|
[
"risk-management",
"order-execution",
"broker"
] |
1089
|
2
| null |
1088
|
6
| null |
The shops I've worked for have had access to multiple brokers, but not for redundancy as your question implies. It's often because no one broker can handle every task.
For example, I might need a floor broker, a dark-pool broker, an algo broker, and a separate prime broker. Each agency handles a different requirement. Even if one broker could handle all of these tasks, it may be more cost effective to pick and choose. (An analogy on the retail side is that I can have a credit card, checking account, and IRA from separate banks just because the rates are better.)
So yes, it's common to have multiple brokers, but not for risk reduction. It's just to silo the required tasks.
| null |
CC BY-SA 3.0
| null |
2011-04-29T15:36:42.180
|
2011-04-29T15:36:42.180
| null | null |
35
| null |
1090
|
2
| null |
1088
|
4
| null |
Can't speak to the cash equity space, but at futures shops I think it is common to have the phone number of a give-up broker in case the power goes out or something, but it is uncommon to ever use them.
| null |
CC BY-SA 3.0
| null |
2011-04-29T18:38:09.683
|
2011-04-29T18:38:09.683
| null | null |
508
| null |
1091
|
2
| null |
908
|
10
| null |
Keynes introduced this idea with the notion of the Keynesian beauty contest:
[http://en.wikipedia.org/wiki/Keynesian_beauty_contest](http://en.wikipedia.org/wiki/Keynesian_beauty_contest)
Anyone who uses a rolling-window regression where the parameters and/or parameter estimates are re-fitted periodically is implicitly accounting for this reflexivity (i.e., the market's changing behavior as agents respond and adapt to each other's actions).
With respect to a scientific framework, agent-based modeling fits the bill, as does game theory:
[http://en.wikipedia.org/wiki/Agent-based_model](http://en.wikipedia.org/wiki/Agent-based_model)
| null |
CC BY-SA 3.0
| null |
2011-04-29T21:56:09.857
|
2011-04-29T21:56:09.857
| null | null |
1800
| null |
1092
|
2
| null |
1081
|
3
| null |
The best approach is to model each contract separately, or to develop an equilibrium model that constrains the relationship among the various spot and futures contracts. With the latter, if you estimate one contract, you can make inferences about the other contracts.
The [term structure of interest rates](http://en.wikipedia.org/wiki/Yield_curve#Market_expectations_.28pure_expectations.29_hypothesis) or [covered interest parity](http://en.wikipedia.org/wiki/Interest_rate_parity#Covered_interest_rate_parity) would be examples of the latter.
| null |
CC BY-SA 3.0
| null |
2011-04-29T22:00:37.030
|
2011-04-29T22:00:37.030
| null | null |
1800
| null |
1093
|
2
| null |
1081
|
3
| null |
The time behavior of derivatives does not resemble that of a commodity or equity towards the end of their lifetime. Before the expiry date of a derivative there are correlation models that can be used in both areas, but your question is about making choices between options...
I have practical experience with stocks and sports betting, but not derivatives. Regardless, it looks like sports-betting approaches are similar when it comes to making choices. A moving average does not really help in making choices, nor, in general, do all sorts of Markovian online learning models. Traditional derivatives tools like the [Greeks](http://en.wikipedia.org/wiki/Greeks_%28finance%29), [Black-Scholes](http://en.wikipedia.org/wiki/Black%E2%80%93Scholes) and other methods look at the history of the data and come out with forward-looking measures, which is better in your case. Then you can compare values and choose accordingly.
| null |
CC BY-SA 3.0
| null |
2011-04-30T13:01:38.387
|
2011-04-30T13:01:38.387
| null | null |
803
| null |
1094
|
1
|
1095
| null |
14
|
1880
|
Motivation: I am running a quantitative analysis that requires long-term, exchange rate data.
Problem: Does anyone have methods for dealing with the EURUSD exchange rate prior to the Euro's existence? Is there some "natural" weighting scheme of the various European currencies prior to the Euro that I might use? Or, is there some proxy for the Euro prior to its debut?
Thanks in advance...
|
Effective Euro-USD (EURUSD) Exchange Rate Prior to Euro's Existence
|
CC BY-SA 3.0
| null |
2011-05-01T03:26:46.407
|
2019-06-10T05:35:31.053
| null | null |
819
|
[
"data",
"analysis",
"fx",
"numerical-methods",
"currency"
] |
1095
|
2
| null |
1094
|
13
| null |
There was a proxy called the [ECU](http://en.wikipedia.org/wiki/European_Currency_Unit).
You should be able to use the weights on the Wikipedia page to get a time series back to 1979. Alternatively, the [St. Louis FRED](http://research.stlouisfed.org/fred2/series/EXUSEC?cid=280) also provides this time series.
| null |
CC BY-SA 3.0
| null |
2011-05-01T06:36:10.510
|
2013-03-12T07:09:19.537
|
2013-03-12T07:09:19.537
|
848
|
371
| null |
1096
|
2
| null |
1029
|
1
| null |
There are quite a lot of places where you can find free data. It really depends on what kind of data you're looking for, as well as what you want to do with it.
- It's fairly easy to find data on governmental websites: this is especially true for inflation and currency end-of-day quotes.
- Exchanges often provide historical and end-of-day data, and some delayed forward quotes may also be available.
- But when you want to do real stuff, a paid connection to exchanges or a news agency is necessary.
[Here](http://commoditymodels.com/2010/08/17/commodity-prices/) is a list for commodities; it could be extended to other asset classes.
Anyway, the real problem is that each of these free data sources comes with its own format, so they're not easy to process.
Also, as already stated in a previous answer, free data access doesn't mean you will be able to redistribute data that is not yours without paying the original owner, and that makes perfect sense.
| null |
CC BY-SA 3.0
| null |
2011-05-01T20:49:06.920
|
2011-05-01T22:12:06.770
|
2011-05-01T22:12:06.770
|
820
|
820
| null |
1097
|
2
| null |
788
|
1
| null |
There are a few books on commodity markets, generally available on Amazon. I also suggest you look at [GARP's Energy Risk Professional curriculum](http://garpdigitallibrary.org/display/ERP_Course_Pack.asp); their resources have been carefully chosen.
| null |
CC BY-SA 3.0
| null |
2011-05-01T21:01:53.607
|
2011-05-01T21:01:53.607
| null | null |
820
| null |
1098
|
2
| null |
511
|
4
| null |
This [guy](http://commoditymodels.com/2010/02/25/key-papers-in-commodities-finance/) compiled a list of key papers on commodity price modeling. That could perhaps help you get started.
| null |
CC BY-SA 3.0
| null |
2011-05-01T21:44:21.090
|
2011-05-01T21:44:21.090
| null | null |
820
| null |
1099
|
2
| null |
841
|
2
| null |
There are already quite a lot of software packages that do that, although most of them are quite expensive. It depends on whether you're interested in a trading system (trade capture and so on) or a pricing engine.
Trading systems: Murex, Misys Summit, Calypso, etc. provide tools to structure deals and value them. Deals are then processed front to back.
Pricing engines: NumeriX, Pricing Partners, etc. are able to define payoff scripts and value them.
Disclaimer: I used to work for one of these vendors, but I don't think my answer is biased.
| null |
CC BY-SA 3.0
| null |
2011-05-01T21:49:16.173
|
2011-05-01T21:49:16.173
| null | null |
820
| null |
1100
|
2
| null |
251
|
1
| null |
Same feedback as [quant_dev](https://quant.stackexchange.com/users/89/quant-dev).
A few quants are involved with [Premia](http://www-rocq.inria.fr/mathfi/Premia/index.html), but again, it's not directly used in production. It's more of a research consortium, and the latest version is not disclosed publicly (but you can pay for it if you want).
| null |
CC BY-SA 3.0
| null |
2011-05-01T22:01:15.527
|
2011-05-01T22:01:15.527
|
2017-04-13T12:46:22.823
|
-1
|
820
| null |
1101
|
2
| null |
925
|
2
| null |
It is commonly used in FX options markets; see [Wikipedia](http://en.wikipedia.org/wiki/Vanna_Volga_pricing).
| null |
CC BY-SA 3.0
| null |
2011-05-01T22:17:10.997
|
2011-05-01T22:17:10.997
| null | null |
820
| null |
1102
|
1
| null | null |
28
|
13124
|
I am trying to reconcile some research with some published values of 'Sharpe ratio', and would like to know the 'standard' method for computing the same:
- Based on daily returns? Monthly? Weekly?
- Computed based on log returns or relative returns?
- How should the result be annualized (I can think of a wrong way to do it for relative returns, and hope it is not the standard)?
|
Should Sharpe ratio be computed using log returns or relative returns?
|
CC BY-SA 3.0
| null |
2011-05-01T22:31:04.530
|
2014-04-19T21:34:11.530
| null | null |
108
|
[
"returns",
"sharpe-ratio"
] |
1103
|
2
| null |
689
|
7
| null |
Well, that's still a very general question. A few elements of an answer:
Bonds pay interest on a regular basis: semiannually for US Treasury and corporate bonds, annually for others such as Eurobonds, and quarterly for still others.
You need to distinguish between fixed-coupon bonds, zero-coupon bonds, bonds with an amortization schedule, floating-rate notes based on LIBOR or an equivalent, etc.
Different quotation methods are available (clean vs. dirty especially), and the basis used to compute the accrued interest can differ as well.
| null |
CC BY-SA 3.0
| null |
2011-05-01T22:40:24.200
|
2011-05-01T22:40:24.200
| null | null |
820
| null |
1104
|
2
| null |
1102
|
5
| null |
I don't feel I can give you an authoritative answer on what the "standard" approach is; maybe someone with more hands-on experience will be able to help. But here are my quick thoughts.
As to the period, I've seen both daily and monthly returns being used; weekly probably not that often. But in the end you annualize them either way to make them comparable.
The method I know is to multiply by $\sqrt{12}$ (for monthly data), as can be seen in Kestner, 2003.
I would go with log returns, but that's rather gut instinct. I haven't really thought about it, so feel free to correct me/validate this statement.
There's one implication to arbitrarily changing your measurement interval: it can (should) alter the deviation. See Spurgin, 2002 for details.
And all this has to be done under the assumption that you can describe your performance using only the first two moments of the distribution. But the pitfalls of using the Sharpe ratio are another issue to discuss.
---
- Kestner, Lars: Quantitative trading strategies: harnessing the power of quantitative techniques to create a winning trading program. McGraw-Hill Professional, 2003. p. 84 (preview at Google books)
- Spurgin, Richard: How to Game Your Sharpe Ratio @ http://www.hedgeworld.com/research/download/howtogameyoursharperatio.pdf
| null |
CC BY-SA 3.0
| null |
2011-05-01T23:40:47.680
|
2011-05-02T13:48:11.533
|
2011-05-02T13:48:11.533
|
38
|
38
| null |
1105
|
2
| null |
1102
|
6
| null |
In long-short equities, it's common to use daily returns in $\frac{\mu}{\sigma}$ and then multiply by $\sqrt{252}$ to annualize.
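A one-line R version of this convention, with hypothetical daily returns:
```
set.seed(42)
ret <- rnorm(252, mean = 4e-4, sd = 0.01)  # made-up daily returns
mean(ret) / sd(ret) * sqrt(252)            # annualized Sharpe ratio
```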
| null |
CC BY-SA 3.0
| null |
2011-05-02T04:51:20.700
|
2011-05-02T04:51:20.700
| null | null |
35
| null |
1106
|
1
| null | null |
8
|
1195
|
Can someone provide (or point me to) a summary of back office processing nuances specific to FX trading? For example, I know that there are several FX-specific risks that must be managed. They include transaction risk, translation risk, and short- and long-term exposure risks. GAAP and IFRS have published guidelines for fund and tax accounting for FX instruments. What other FX-specific nuanced processing challenges exist within the various back office functions including pricing, risk management, fund and tax accounting, clearance, settlement, custody, cash management, collateral management, and financial reporting?
|
Back office processing for FX trades
|
CC BY-SA 3.0
|
0
|
2011-05-02T13:34:04.180
|
2014-09-30T17:47:44.733
|
2011-05-03T01:14:42.257
|
70
|
772
|
[
"fx"
] |
1107
|
1
| null | null |
8
|
505
|
I watched every documentary on the financial crisis and CDOs, tried to understand Wikipedia etc., but I'm still not getting the full picture, as examples seem to be limited (or complicated).
Say Local-Bank-A starts providing mortgages to US subprime borrowers: big marketing campaign and a sweet 100% mortgage offer, starting 1 June and finishing 31 August, so a 3-month marketing window. The sales and marketing guys believe they are going to close 10,000 mortgages with an average loan of US$100,000.
Where do they get the money from first to finance 10,000 mortgages? Do they borrow US$1 billion, and from where? When they borrow from other banks, what collateral do they accept?
Okay, 1st September comes round and Local-Bank-A has sold all the expected 10,000 mortgages, exhausted the 1 billion, and now wants to sell it on to Goldman Sachs etc. What do they charge to sell it? How much money is in selling US$1 billion of 100% mortgages to the Wall Street investment banks?
So they sell, they get the US$1 billion back plus whatever profit, and settle their original loan of 1 billion. They make some money out of this and start the process again?
Goldman Sachs now has 1 billion of mortgages on its books and works quickly to CDO this and shift it on. How much were they making on this, as obviously they had to shell out at least 1 billion for them (as Local-Bank-A wouldn't be selling at a loss)?
When the bank sells these mortgages on, who is responsible for collecting the monthly payments and handling redemptions etc.? Do US mortgages have a separate handling fee to a 3rd-party entity, like $10 a month? Who answers the phone when changing repayment amounts etc.?
I talk in the current sense but obviously I mean 2006/2007.
|
Understanding CDOs
|
CC BY-SA 3.0
| null |
2011-05-02T16:49:15.263
|
2011-05-03T12:12:33.563
| null | null |
825
|
[
"cdo"
] |
1108
|
2
| null |
1107
|
5
| null |
>
When the bank sells these mortgages on, who is responsible for collecting the monthly payments and handling redemptions etc.? Do US mortgages have a separate handling fee to a 3rd-party entity, like $10 a month? Who answers the phone when changing repayment amounts etc.?
This would be the "servicer", which is often the bank/lending institution that wrote the loan in the first place. The servicer collects the monthly payment, subtracts its fee, and passes the payment on to the next guy.
It might be helpful to look at how GNMA/FNM/FRE handle the situation and turn the results into a "mortgage-backed security" (MBS):
- The lender (usually a bank) writes the original loan by lending money it already has.
- If the loan meets the underwriting standards, then GNMA/FNM/FRE will purchase the loan from the lender.
- They will then bundle the loans into large pools so that the risk of an individual foreclosure/default won't adversely sink any individual bond. Sample list of GNMA pools and bonds from April 2011.
- They'll carve the huge bundle of loans into individual bonds (usually denominated in $5k chunks).
Each pool is an MBS and will have its own unique [CUSIP](http://en.wikipedia.org/wiki/CUSIP).
>
Where do they get the money from first to finance 10,000 mortgages?
The lenders themselves don't need to "have enough to finance 10k mortgages", because they sell them as they get them.
>
Okay, 1st September comes round and Local-Bank-A has sold all the expected 10,000 mortgages, exhausted the 1 billion, and now wants to sell it on to...
They will sell the notes within a day or two after the closing.
I know this isn't going to come close to answering all your questions, as some of them ("How much were they making on this?") cannot be answered.
| null |
CC BY-SA 3.0
| null |
2011-05-02T18:22:08.687
|
2011-05-02T18:58:57.983
|
2011-05-02T18:58:57.983
|
35
|
126
| null |
1109
|
2
| null |
1102
|
5
| null |
For fixed income hedge funds, monthly returns are almost always used to calculate the Sharpe ratio, because some securities held are relatively illiquid and the dealers who do the pricing for the hedge funds are only willing to do month-end pricing. Daily returns are not available to be calculated for most such funds.
| null |
CC BY-SA 3.0
| null |
2011-05-02T18:49:23.847
|
2011-05-02T18:49:23.847
| null | null |
828
| null |
1110
|
1
|
1441
| null |
8
|
658
|
Ladies and Gents,
I'm writing a quick routine to calculate implied vols for options on EUR$ futures with Bloomberg data. My question concerns the part where I have all my inputs and am ready to pass them to my implied vol function. Assume we have the following market quotes (example ticker: EDK1C 98.750 Comdty):
Strike = 98.750
Underlying Spot = 99.7050
Option Price = 0.9550
When passing these to my function, do I convert the Underlying spot and strike to
S = (100 - Underlying Spot)/100 and K = (100 - Strike)/100 respectively
and use the market option price as is so our implied vol method is some function IV = f(S,K,Option Price,...)
OR
convert the option price to
oP = 100 - (Option Price)*100 and leave the spot and strike such that our implied vol method is some function
IV = f(Strike ,Underlying Spot,oP,...)
???
The latter has yielded a rational result but I would love some feedback.
Thanks.
|
Parameters for pricing option on EDF
|
CC BY-SA 3.0
| null |
2011-05-02T18:56:26.963
|
2018-12-05T14:14:23.317
| null | null |
826
|
[
"options",
"implied-volatility"
] |
1111
|
2
| null |
1107
|
3
| null |
>
Where do they get the money from first to finance 10,000 mortgages? Do they borrow US$1 billion, and from where? When they borrow from other banks, what collateral do they accept?
It sounds like you're missing a basic understanding of how banking works here. In the fractional reserve system, a bank doesn't need to have 1 billion on its books to issue 1 billion in loans; it only needs to maintain the minimum capital ratio specified by the government. Say this ratio is 10%: they would need to have 100 million to lend out 1 billion.
| null |
CC BY-SA 3.0
| null |
2011-05-03T11:19:49.777
|
2011-05-03T11:19:49.777
| null | null |
351
| null |
1112
|
2
| null |
1107
|
2
| null |
I wanted to post this as a comment to @codebolt's answer, but I'm not sure it would fit within the length limit.
I'm not sure I agree with the current wording of your response.
I think we mean the same thing, but I'd rather say they (the bank) need 1 billion in the first place (or, exactly, 1.11... billion, i.e. 10/9 of a billion), for example coming from deposits. They leave 10% of this as reserves, so they now have 1 billion left for lending.
Let's say they use the whole billion for lending. Now all this money will come back somehow to the financial system. If you borrow money, you have to keep it somewhere (let's assume it's not in your wallet). Even if you use it to pay somebody, he will have the same "problem".
So this way most of this 1 billion of initial lending will come back to our bank (or some other banks) as new deposits.
Rinse and repeat till no money comes back or you can't find new clients to lend money to.
| null |
CC BY-SA 3.0
| null |
2011-05-03T12:12:33.563
|
2011-05-03T12:12:33.563
| null | null |
38
| null |
1113
|
2
| null |
955
|
3
| null |
I remember an article from graduate school that describes a methodology for measuring the true timing ability of a money manager. I don't remember the name of the article or the author; however, I do remember some of the details. Maybe someone else has run across it and would be kind enough to post the appropriate reference.
Let's assume that a manager has the ability to be either in cash, earning the risk-free rate, or in a long position in a basket of stocks. If the money manager had superior timing ability, he would be in the basket when the basket was returning more than the risk-free rate, and he would be in a cash position when the basket was returning less than the risk-free rate. What you basically have is a return profile that looks a lot like the payoff of a call option.
If you plot market return on the x-axis and manager return on the y-axis, the return should be flat at the risk-free rate for everything to the left of the risk-free rate on the x-axis. At the risk-free rate on the x-axis, the return should follow a 45-degree line up and to the right. Over time, you measure manager return against market return, and if he is any good, you should see the call-option payoff diagram being roughly traced out.
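The described profile is just $\max(r_f, r_m)$; a tiny R sketch with illustrative numbers:
```
rf  <- 0.03                    # risk-free rate (made up)
mkt <- seq(-0.20, 0.30, 0.01)  # market returns on the x-axis
mgr <- pmax(rf, mkt)           # perfect timer's return profile
plot(mkt, mgr, type = "l")     # flat at rf, then a 45-degree line
```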
| null |
CC BY-SA 3.0
| null |
2011-05-03T23:40:32.730
|
2011-05-03T23:40:32.730
| null | null |
832
| null |
1114
|
1
|
1116
| null |
23
|
4499
|
I'm reading Natenberg's book, and he says that all options trades should be delta neutral.
I understand that this prevents small changes in the underlying price from changing the price of the option, but couldn't there be a case where you would want that?
I (think I) also understand that if you're betting against just volatility, it would make sense, since you don't care what direction the underlying price moves, but I don't entirely understand why he says all options trades should be delta neutral.
|
Why are options trades supposed to be delta-neutral?
|
CC BY-SA 3.0
| null |
2011-05-04T12:56:24.527
|
2014-01-12T14:37:33.317
| null | null |
833
|
[
"options",
"delta-neutral"
] |
1115
|
2
| null |
1114
|
5
| null |
Well, if you're not delta neutral, this means you are taking a position with a certain view on the market.
This can be done very comfortably when you think a stock price will rise sharply but you don't want to spend all your money on acquiring the stock: you buy a call on it, which is quite cheap, and get the same payoff.
| null |
CC BY-SA 3.0
| null |
2011-05-04T13:43:37.140
|
2011-05-04T13:43:37.140
| null | null |
647
| null |
1116
|
2
| null |
1114
|
14
| null |
I haven't read Natenberg, but it of course depends on your side of the trade:
Are you a market maker or a risk taker? That is, do you live on the spread (the first) or are you trying to make money based on, e.g., forecasts of direction (the second)?
This is the great divide in quant finance!
Only in the first case will all your option trades be delta neutral.
There is a nice short paper which elaborates on both concepts (it calls the first one Q and the second P):
Meucci: 'P' Versus 'Q': Differences and Commonalities between the Two Areas of Quantitative Finance
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1717163](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1717163)
| null |
CC BY-SA 3.0
| null |
2011-05-04T15:39:41.437
|
2011-05-04T15:39:41.437
| null | null |
12
| null |
1117
|
2
| null |
1114
|
2
| null |
As I recall, Natenberg recommends selling time premium and places himself in the market maker camp that @vonjd describes.
You are correct in noting that delta neutral holds for small changes in the underlying price. You can probably imagine a case where you sell a lot of deep out-of-the-money puts and sell a few slightly out-of-the-money calls. This would be delta neutral while the underlying remained steady, but would not be delta neutral if the underlying dropped sharply.
Part of the reason is that this position is not gamma neutral: the deltas of the puts and calls would change as the underlying moved away from its opening position.
| null |
CC BY-SA 3.0
| null |
2011-05-04T18:58:32.180
|
2011-05-04T18:58:32.180
| null | null |
343
| null |
1118
|
1
| null | null |
8
|
14237
|
Say I'm given a set of monthly returns over 10 years on a monthly basis. What is the correct way to find the geometric returns of this data? I ask because a classmate and I are on different sides of the coin.
I found the cumulative returns of each year, then found the geometric mean of the 10 years. He found the cumulative returns of the entire time period, then took the (months in a year / total months) root of that data.
The numbers turn out to be very close, but if I remember correctly mine are slightly lower over all the funds we measured.
Or is there a different way and we're both wrong?
|
Correct way to find the mean of annual geometric returns of monthly returns?
|
CC BY-SA 3.0
| null |
2011-05-04T21:10:25.703
|
2011-05-05T12:27:25.453
|
2011-05-04T22:46:33.600
|
35
|
60
|
[
"returns"
] |
1119
|
1
|
1147
| null |
5
|
570
|
I'd like to expand on the [Data Sources Online](https://quant.stackexchange.com/q/141/343) question. I found these sites for a German convertible bond, all free and not requiring a sign-up.
- Börse Stuttgart
- German Google
I am looking for similar information for a Norwegian bond.
|
Where can I find European and Scandinavian convertible bond prices?
|
CC BY-SA 3.0
| null |
2011-05-04T21:32:37.643
|
2011-05-09T17:10:21.337
|
2017-04-13T12:46:23.000
|
-1
|
343
|
[
"market-data"
] |
1120
|
2
| null |
1118
|
6
| null |
If I understand you correctly, your question is whether this is true:
\begin{equation}
\sqrt[10]{\prod_{i=1}^{10}{Y_i}} < \sqrt[10]{A}
\end{equation}
where $Y$ is the yearly cumulative returns (your method), and $A$ is the absolute cumulative return (your classmate's method).
The question then becomes whether you find this relationship:
\begin{equation}
\prod_{i=1}^{10}{Y_i} < A
\end{equation}
But that can't be! The absolute cumulative return must be equal to the product of the yearly cumulative returns. So if your yearly returns don't multiply to be his absolute return, then one of you has made a mistake.
If you believe that your and his math are both correct, then the culprit is most likely a rounding error.
| null |
CC BY-SA 3.0
| null |
2011-05-04T23:10:05.367
|
2011-05-04T23:10:05.367
| null | null |
35
| null |
1121
|
1
| null | null |
5
|
577
|
Totally new to the world of quant finance, so perhaps this is an odd question...
Does there exist an American equivalent to the German-style "Knock-out Zertifikate"? (The name might be slightly wrong.)
Not even exactly sure what they are, but from what I did learn about their behavior, the nearest thing I could find on Wikipedia was:
[http://en.wikipedia.org/wiki/Turbo_warrant](http://en.wikipedia.org/wiki/Turbo_warrant)
If there is not an American equivalent, does anybody know where to get English-language details and pricing history/data?
Thank you
|
European turbo warrants
|
CC BY-SA 3.0
| null |
2011-05-05T02:36:57.970
|
2011-05-05T09:17:45.273
|
2011-05-05T02:49:55.960
|
837
|
837
|
[
"options",
"option-pricing",
"eurex"
] |
1122
|
2
| null |
1121
|
4
| null |
I haven't encountered them, but if you mean "Hebel-Zertifikate" as [defined on Wikipedia](http://de.wikipedia.org/wiki/Zertifikat_%28Wirtschaft%29#Hebel-Zertifikate_.28auch%3a_Turbo-_oder_Knock-out-Zertifikate.2C_Mini-Futures.29) they just seem to be leveraged certificates with an obligatory stop-loss knock-out.
Those turbo warrants you mention are more or less the same as [Mini Futures](http://www.vam.ag/product-know-how/leveraged-certificates/mini-futures.html). Depending on the issuer and small differences, they can also appear as Turbo / Short Certificates (naming used by The Royal Bank of Scotland) or [knock-out certificates](http://en.wienerborse.at/prices_statistics/certificates/knockout/).
So you can encounter many names for essentially similar/identical products.
Your pricing question is a bit too vague. There are so many products that you would have to define what you're interested in (preferably what market/exchange). You can browse through the websites of the exchanges where products you're interested in are listed. For example [Wiener Börse](http://en.wienerborse.at/certificates/) has some pricing data for numerous certificates.
| null |
CC BY-SA 3.0
| null |
2011-05-05T09:17:45.273
|
2011-05-05T09:17:45.273
| null | null |
38
| null |
1123
|
2
| null |
1118
|
1
| null |
I assume you have net simple monthly returns. 12 months and 10 years gives you 120 monthly returns $r_1, r_2, ..., r_{120}$. You want to know the annual geometric return. Then solve for $r_g$:
$$ (1+r_1)\times(1+r_2)\times \dots \times(1+r_{120})=(1+r_g)^{10}$$
By convention you start multiplying with the oldest return ($r_1$), although the product is of course the same in any order.
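In R, solving the equation above for $r_g$ is one line (with hypothetical monthly returns):
```
set.seed(1)
r <- rnorm(120, mean = 0.005, sd = 0.03)  # 120 made-up monthly returns
r_g <- prod(1 + r)^(1/10) - 1             # annual geometric return
```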
| null |
CC BY-SA 3.0
| null |
2011-05-05T11:51:36.590
|
2011-05-05T11:51:36.590
| null | null |
796
| null |
1124
|
2
| null |
1114
|
4
| null |
One other consideration is the cost of the trade. If you're not delta-neutral, you're expressing a directional view, and there are cheaper ways to express a direction view than options. (Namely, just owning the underlying.)
So, conceptually, it's a little easier to think of there being two separate trades going on: an expensive vol trade (the options) and a cheap direction trade (underlying).
| null |
CC BY-SA 3.0
| null |
2011-05-05T11:52:00.350
|
2011-05-05T11:52:00.350
| null | null |
596
| null |
1125
|
2
| null |
1118
|
5
| null |
@chrisaycock already gave you a correct answer, but I thought I would add a more verbose version (and practice some MathJax along the way).
In fact, when I began answering I thought it was going to be a straightforward answer, but having spent some more time with this question I see there are some potential traps you can fall into.
Especially since some of the steps you name are not 100% clear, I assumed the worst-case scenario (AKA everything wrong). I suppose some of them are just shorthand notation. Sorry if you already do it the right way and my explanations seem ridiculous, but at least one of the steps must be to blame, since you are getting different results.
---
So, going through your task:
- Say I'm given a set of monthly returns over 10 years on a monthly basis.
Let's call them
$$
r_{1_{jan}}, \ ...,\ r_{1_{dec}}, \ ...,\ r_{10_{jan}}, \ ...,\ r_{10_{dec}} \ [eq. 1]
$$
---
What you do is:
- I found the cumulative returns of each year
Your cumulative return for a year is a product of monthly returns:
$$
R_{i} = (1+r_{i_{jan}}) * \ ... \ * (1+r_{i_{dec}}) - 1 \ [eq. 2]
$$
OK, straightforward. Not that many options here.
- then found the geometric mean of the 10 years
if you mean that literally (I warned you I would take the worst approach possible, sorry), as in found the [geometric mean](http://en.wikipedia.org/wiki/Geometric_mean) of those 10 returns:
$$
R_{G} = \sqrt[10]{R_{1} * R_{2} * \ ... \ * R_{10}} \ [eq. 3]
$$
we have our first problem. While technically you can calculate anything (as long as it's not negative), it doesn't make sense. We are looking for a [geometric average rate of return](http://en.wikipedia.org/wiki/Rate_of_return#Geometric_average_rate_of_return) instead:
$$
R_{G} = \sqrt[10]{(1 + R_{1}) * (1 + R_{2}) * \ ... \ * (1 + R_{10})} - 1 \ [eq. 4]
$$
OK, done, should be the correct answer.
---
Your classmate's version:
- He found the cumulative returns of the entire time period,
He calculated it either this way:
$$
AR = (1+r_{1_{jan}}) * \ ... \ * (1+r_{1_{dec}}) * \ ... \ * (1+r_{10_{jan}}) * \ ... \ * (1+r_{10_{dec}}) - 1 \ [eq. 5]
$$
or just used $\frac{P_{last}}{P_{first}} - 1$ which is the same. No problem here.
- then took the (months in a year / total months) root of that data.
First assumption - I suppose you meant power here (or `total months / months in a year` root), because otherwise it wouldn't make much sense.
Now, if we literally take the root out of our accumulated returns ($AR$):
$$
\sqrt[\frac{120}{12}]{AR} = \sqrt[10]{(1+r_{1_{jan}}) * \ ... \ * (1+r_{1_{dec}}) * \ ... \ * (1+r_{10_{jan}}) * \ ... \ * (1+r_{10_{dec}}) - 1} \ [eq. 6]
$$
using $[eq. 2]$ we get:
$$
= \sqrt[10]{(1+R_{1})*(1+R_{2})* \ ... \ * (1+R_{10}) - 1}
$$
Oops, seems similar to $[eq. 4]$, but it's not the same. We did something wrong.
In fact we wanted it this way (remembering that we're looking for annual returns):
$$
R_{G} = \sqrt[10]{1 + AR} - 1 \ [eq. 7]
$$
Now plugging $[eq. 5]$ and $[eq. 2]$:
$$
= \sqrt[10]{1 + (1+R_{1})*(1+R_{2})* \ ... \ * (1+R_{10}) - 1} -1
$$
$$
= \sqrt[10]{(1+R_{1})*(1+R_{2})* \ ... \ * (1+R_{10})} - 1
$$
and this is the same as $[eq. 4]$
---
This way you see that both methods should give equivalent results. If not, then either it's a calculation mistake/rounding issue or you're using different methods and someone is not calculating an actual geometric average rate of return.
I hope now you can find where the issue was.
| null |
CC BY-SA 3.0
| null |
2011-05-05T12:27:25.453
|
2011-05-05T12:27:25.453
| null | null |
38
| null |
1126
|
2
| null |
955
|
5
| null |
Read Fooled by Randomness by Nassim Taleb. In a nutshell, he says that you can only tell the difference by understanding the risks that were taken. Lucky investors can win for many years before blowing up. Even if he doesn't blow up, there is no way to know what might have happened if the risks turned out badly.
| null |
CC BY-SA 3.0
| null |
2011-05-05T13:34:16.153
|
2011-05-05T13:34:16.153
| null | null |
838
| null |
1127
|
1
|
1142
| null |
4
|
3056
|
In my quest for simulated data, I am trying to generate prices for Total Return Swaps by calculating the NPVs of the fixed and floating leg. My problem: Given the fixed leg, how do I set the spread on the floating leg so that the value of the swap at the beginning equals Zero?
On a more technical side: Using RQuantLib, I use FloatingRateBond to calculate the NPV. How exactly do I set the spread there? The documentation is a bit unclear at that point.
|
Valuing Total Return Swaps
|
CC BY-SA 3.0
| null |
2011-05-05T15:26:45.320
|
2014-03-22T16:05:34.703
| null | null |
357
|
[
"r",
"simulations",
"swaps",
"total",
"return"
] |
1128
|
2
| null |
815
|
4
| null |
Claudio Albanese has a paper on the topic of GPUs and CVA computations. Here is one of his papers: [link to paper](http://www.level3finance.com/gmcse.pdf)
| null |
CC BY-SA 3.0
| null |
2011-05-05T20:29:02.893
|
2011-05-05T20:29:02.893
| null | null |
842
| null |
1129
|
1
| null | null |
15
|
11489
|
Suppose I want to build a linear regression to see if returns of one stock can predict returns of another. For example, let's say I want to see if the VIX return on day X is predictive of the S&P return on day (X + 30). How would I go about this?
The naive way would be to form pairs (VIX return on day 1, S&P return on day 31), (VIX return on day 2, S&P return on day 32), ..., (VIX return on day N, S&P return on day N + 30), and then run a standard linear regression. A t-test on the coefficients would then tell if the model has any real predictive power. But this seems wrong to me, since my points are autocorrelated, and I think the p-value from my t-test would underestimate the true p-value. (Though IIRC, the t-test would be asymptotically unbiased? Not sure.)
So what should I do? Some random thoughts I have are:
- Take a bunch of bootstrap samples on my pairs of points, and use these to estimate the empirical distribution of my coefficients and p-values. (What kind of bootstrap do I run? And should I be running the bootstrap on the coefficient of the model, or on the p-value?)
- Instead of taking data from consecutive days, only take data from every K days. For example, use (VIX return on day 1, S&P return on day 31), (VIX return on day 11, S&P return on day 41), etc. (It seems like this would make the dataset way too small, though.)
Are any of these thoughts valid? What are other suggestions?
|
Using linear regression on (lagged) returns of one stock to predict returns of another
|
CC BY-SA 3.0
| null |
2011-05-05T23:42:56.643
|
2014-08-21T13:26:17.297
| null | null |
672
|
[
"forecasting",
"vix",
"prediction",
"regression"
] |
1130
|
2
| null |
1129
|
11
| null |
A few thoughts.
Yes, your return series are autocorrelated (i.e., stocks don't exactly follow a random walk), so you should use [Newey-West standard errors](http://en.wikipedia.org/wiki/Newey%E2%80%93West_estimator).
If you do this as a univariate regression $$R_{i,t} = \alpha_i + \beta_i R_{j,t-1} + \epsilon_{i,t}$$ then there's almost certainly an omitted variable inside $\epsilon$ that is moving both $R_i$ and $R_j$. So make sure you throw in at least some of the normal "predictors", like market return, default premium, term premium, or the other standard factors ([Fama and French's SMB, HML, and UMD](http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html)). In the hedge fund research you also see a [set of seven or so factors](http://faculty.fuqua.duke.edu/~dah7/HedgeFund/HedgeFund.htm).
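A sketch of the lagged regression with Newey-West standard errors, using the sandwich and lmtest packages on synthetic series (all data here are made up):
```
library(sandwich)
library(lmtest)
set.seed(7)
n <- 500
vix_ret <- rnorm(n)                             # stand-in VIX returns
lag30 <- c(rep(NA, 30), head(vix_ret, n - 30))  # VIX return 30 days back
spx_ret <- 0.05 * lag30 + rnorm(n)              # stand-in S&P returns
fit <- lm(spx_ret ~ lag30)                      # NAs dropped automatically
coeftest(fit, vcov = NeweyWest(fit))            # HAC t-statistics
```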
| null |
CC BY-SA 3.0
| null |
2011-05-06T00:37:17.340
|
2011-05-06T00:37:17.340
| null | null |
106
| null |
1131
|
2
| null |
30
|
3
| null |
There are two main approaches:
- Comparables (depends on having an existing sample of similar project outcomes, which can be difficult to obtain [similar to historical volatility for stocks])
- Simulation (when comparables, or rather enough of them, aren't available)
Here's a link to a paper that provides a technique for estimating volatility for real options:
[http://new.vmi.edu/media/ecbu/cobb/EE4902.pdf](http://new.vmi.edu/media/ecbu/cobb/EE4902.pdf)
Generally, comparables are the preferable approach in real options, provided that you have a large enough historical sample. In most cases, however, simulation is used, as "true" comparables are notoriously difficult to find en masse.
| null |
CC BY-SA 3.0
| null |
2011-05-06T01:33:53.307
|
2011-05-06T01:33:53.307
| null | null |
214
| null |
1133
|
1
|
1134
| null |
11
|
4059
|
From [Optimal Trading Strategies](http://rads.stackoverflow.com/amzn/click/0814407242):
> There are two main reasons why traders execute orders using a VWAP trading strategy. First, a VWAP strategy is the trading strategy that minimizes market impact cost. Traders executing orders utilizing VWAP strategies are interested in participating with market volumes so to cause the least disruption to the supply-demand imbalance in each period. Second, for many traders their execution performance is measured against the VWAP benchmark. An average execution price more favorable than the VWAP price benchmark is considered high-quality performance while an average execution price less favorable than the VWAP benchmark is considered inadequate performance.
But, when it comes to backtesting, the market impact is thus assumed to be zero. One is assuming he does not affect the market as the VWAP benchmark price stays untouched. So how can one backtest his VWAP strategy effectively and more realistically?
|
What is an effective way of backtesting VWAP execution?
|
CC BY-SA 3.0
| null |
2011-05-06T08:45:32.677
|
2012-10-19T20:08:38.280
|
2011-09-22T14:55:45.517
|
1106
|
42
|
[
"backtesting",
"order-execution",
"market-impact"
] |
1134
|
2
| null |
1133
|
6
| null |
This isn't exclusive to VWAP; any assumed trade price (NBBO, arrival price, etc.) has the same vulnerabilities. Many shops lump market impact together with slippage and transaction costs when modeling the difference between the ideal price and the realized price.
To model the impact during a backtest for a given trade price, assess a penalty:
- This penalty can be fixed (X basis points added to each buy price) or it can be variable (Y percentage extra cost for each buy price). One could use a combination of both.
- For variable penalties, the percentage can be a single number for all trading activity, or it can grow quadratically with respect to the size of the trade. An aggressive penalty models removing liquidity in the order book.
- To be really swanky, one could have a different penalty for each unique instrument in the portfolio.
Many shops consider their transaction-cost models to be very proprietary. It takes a lot of post-trade analysis to determine proper values for X and Y.
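As a purely hypothetical sketch of such a penalty (the function and coefficients are illustrative, not any shop's model):
```
penalized_price <- function(price, shares,
                            fixed_bps = 1, impact_coef = 1e-12) {
  # fixed charge in basis points plus a variable component that grows
  # quadratically with trade size (buy side: price gets worse)
  price * (1 + fixed_bps / 1e4 + impact_coef * shares^2)
}
penalized_price(100, 5e4)  # 100.26: ~26 bps of assumed impact
```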
| null |
CC BY-SA 3.0
| null |
2011-05-06T12:10:54.323
|
2011-05-06T12:10:54.323
| null | null |
35
| null |
1135
|
1
|
1141
| null |
11
|
1971
|
Reading from "www.nadex.com" - the copy reads "Binaries are similar to traditional options but with one key difference: their final settlement value will be 0 or 100. This means your maximum risk and reward are always known and capped.".
Isn't that true when you are using traditional options? (Assuming the markets are the same.)
Addition: Can't you essentially replicate the payoff of a binary option using a vertical ATM option spread?
|
Do binary options make any sense?
|
CC BY-SA 3.0
| null |
2011-05-06T18:19:56.237
|
2015-09-13T05:30:25.710
|
2011-05-14T17:32:31.683
|
70
|
520
|
[
"options",
"risk",
"binary-options"
] |
1136
|
2
| null |
1135
|
5
| null |
No. If you are long a vanilla option, your reward is unlimited. If you are short an option, your risk is unlimited.
| null |
CC BY-SA 3.0
| null |
2011-05-06T18:41:14.357
|
2011-05-06T18:41:14.357
| null | null |
508
| null |
1137
|
2
| null |
1135
|
6
| null |
The key difference, besides the cap, is that there is nothing in between: it's 100 or nothing (binary!). With traditional options you have $S-K$ as long as $S>K$ (for a call).
You can find out more here:
[http://en.wikipedia.org/wiki/Binary_option](http://en.wikipedia.org/wiki/Binary_option)
| null |
CC BY-SA 3.0
| null |
2011-05-06T19:17:33.190
|
2011-05-06T19:17:33.190
| null | null |
12
| null |
1138
|
2
| null |
1133
|
4
| null |
To get a feel for it, think about an extreme scenario. Suppose I have an order to buy $100bln VWAP in IBM over the course of one day. My "cost" relative to VWAP will be near zero, because I will be on the buy side of every trade that day. However, my market impact will be extremely high because IBM will fall like a rock the next day, leaving me with huge losses. This is your true cost.
The bottom line is that trading will create impact; that is part of your cost. It increases with trade size and decreases with the liquidity of the asset. VWAP merely spreads the trade out to reduce the impact. Look at Almgren and Chriss for a discussion of transaction-cost functions, or talk to the provider of your VWAP algo for more details.
| null |
CC BY-SA 3.0
| null |
2011-05-07T01:04:14.320
|
2011-05-07T01:04:14.320
| null | null |
443
| null |
1139
|
2
| null |
1135
|
2
| null |
For a long or short position in a vanilla put, the maximum risk/reward is known and capped. For a long position in a vanilla call, the maximum risk is also always known and capped. The maximum reward is therefore known and capped for a short vanilla call position. However, the reward is unbounded for a long vanilla call position, and therefore the risk is likewise unbounded for a short vanilla call.
| null |
CC BY-SA 3.0
| null |
2011-05-07T16:51:07.517
|
2011-05-07T16:51:07.517
| null | null |
847
| null |
1140
|
2
| null |
1133
|
0
| null |
A factor to take into account: there are algorithms searching for other VWAP algorithms present in the market and then gaming them.
[http://www.qsg.com/PDFReader.aspx?PUBID=722](http://www.qsg.com/PDFReader.aspx?PUBID=722)
| null |
CC BY-SA 3.0
| null |
2011-05-08T04:27:12.857
|
2011-05-08T04:27:12.857
| null | null |
225
| null |
1141
|
2
| null |
1135
|
15
| null |
The text of your question doesn't actually match the question title. The answer to your title is of course yes, binary options make sense. And as others have pointed out, with binary options your reward is limited, and conversely the risk involved in writing them is less.
To answer your additional question: you can replicate a binary option with a tight call spread around the strike (not ATM as you suggest). So, for example, if you have a binary call struck at K, which pays off 1 if it's ITM and 0 if not, you can replicate that with
$$
(C(K) - C(K+\epsilon))/\epsilon,
$$
where $\epsilon$ is typically the smallest strike unit you can trade.
Risk managing these positions can get tricky when the level of the forward gets close to the strike, as the greeks can change sign.
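A quick R illustration of the payoff replication (illustrative strikes):
```
K <- 100; eps <- 0.5
S <- seq(95, 105, by = 0.01)               # terminal underlying levels
call <- function(k) pmax(S - k, 0)         # vanilla call payoff
binary <- as.numeric(S > K)                # digital: pays 1 if ITM
spread <- (call(K) - call(K + eps)) / eps  # tight call spread
range(spread - binary)  # payoffs differ only on the interval (K, K+eps)
```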
| null |
CC BY-SA 3.0
| null |
2011-05-08T09:24:52.340
|
2011-05-08T09:24:52.340
| null | null |
371
| null |
1142
|
2
| null |
1127
|
4
| null |
Not sure I understand your question. If I have a fixed stream of payments, it has some value $V_{fixed}$, and I can always solve for a spread to LIBOR by simply adding the spread $S$ to my calculated stream of LIBOR payments.
That is, the value of the LIBOR + spread leg is
$$
V_{LIBOR}(S) = \sum_{n=1}^{N} D(t_{n}) \alpha(t_{n-1},t_{n}) [L(t_{n-1},t_{n}) + S]
$$
where $D(t_{n})$ is the discount factor, $\alpha$ is the day count fraction, and $L$ is the LIBOR rate. I just solve
$$
V_{LIBOR}(S) = V_{fixed}
$$
for $S$.
Computing the value of the fixed leg for a TRS might be tricky, as you have to factor in the default probability. But you can hopefully get that from the CDS market.
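Since $V_{LIBOR}(S)$ is linear in $S$, the solve is a single division. A minimal sketch, with hypothetical discount factors, accruals and forwards:
```python
# A minimal sketch, assuming the discount factors, accrual fractions and
# forward LIBORs below are given (all values hypothetical, per unit notional).
D     = [0.995, 0.989, 0.982, 0.974]   # discount factors D(t_n)
alpha = [0.25, 0.25, 0.25, 0.25]       # day count fractions
L     = [0.020, 0.022, 0.024, 0.026]   # forward LIBOR fixings
V_fixed = 0.0285                       # PV of the fixed stream

# V_LIBOR(S) = sum D*alpha*(L + S) is linear in S, so solve directly:
pv_libor_flat = sum(d * a * l for d, a, l in zip(D, alpha, L))
annuity = sum(d * a for d, a in zip(D, alpha))
S = (V_fixed - pv_libor_flat) / annuity
print(S)  # par spread over LIBOR
```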
| null |
CC BY-SA 3.0
| null |
2011-05-08T09:36:18.400
|
2011-05-08T09:36:18.400
| null | null |
371
| null |
1143
|
1
|
1144
| null |
12
|
1166
|
Is there a quantitative method in monitoring trades to reduce the possibility of correlated trades?
|
Minimizing Correlation
|
CC BY-SA 3.0
| null |
2011-05-09T04:18:48.443
|
2011-05-10T21:59:51.287
| null | null |
2318
|
[
"risk"
] |
1144
|
2
| null |
1143
|
5
| null |
The most straightforward approach is to develop a covariance matrix to ensure that you are not overweighting the same factors or bets in your trading. The covariance matrix can be built off of a factor model, for example, or you can construct a covariance matrix based on your prediction signals if you have multiple models. In this way you can understand the ex-ante correlation of your trades.
Note that there is a considerable amount of art in designing a covariance matrix (See Fabozzi).
Without understanding more about your process, it's hard to be more helpful than the above.
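That said, here is a minimal sketch of the signal-correlation idea (the random data is only a placeholder for your own model or backtest return streams):
```python
import numpy as np

# A minimal sketch: one column of historical (or backtested) returns per
# strategy/model; random data below stands in for the real thing.
rng = np.random.default_rng(0)
strategy_returns = rng.normal(size=(250, 3))   # 250 days x 3 strategies

cov = np.cov(strategy_returns, rowvar=False)
corr = np.corrcoef(strategy_returns, rowvar=False)
print(corr)  # large off-diagonal entries flag overlapping bets
```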
| null |
CC BY-SA 3.0
| null |
2011-05-09T12:36:44.760
|
2011-05-09T12:36:44.760
| null | null |
1800
| null |
1145
|
2
| null |
1143
|
3
| null |
Historical correlation isn't as useful as you might think. Like volatility, correlation is not constant. During times of stress, it is common for the correlation of many different assets to increase/change.
As examples, look at the data for August 2007, the last quarter of 2008, and May 6, 2010. Any trading scheme that minimized correlation before those periods probably had a much different effect during those periods.
Edit 1 (05/10/2011) ===========================
I've bumped into all sorts of problems with this issue, with the estimation itself involving large errors. If you dig around, you'll find several papers with important improvements.
[http://www.ledoit.net/honey.pdf](http://www.ledoit.net/honey.pdf)
[http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf](http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf)
[http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf](http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf)
[http://www.christoffersen.ca/CHRISTOP/2007/RM2006_introduction.pdf](http://www.christoffersen.ca/CHRISTOP/2007/RM2006_introduction.pdf)
[http://www.kevinsheppard.com/images/4/47/Chapter8.pdf](http://www.kevinsheppard.com/images/4/47/Chapter8.pdf)
[http://www.kevinsheppard.com/images/d/de/CES_JFeC.pdf](http://www.kevinsheppard.com/images/d/de/CES_JFeC.pdf)
[http://www.kevinsheppard.com/images/c/c6/Patton_Sheppard.pdf](http://www.kevinsheppard.com/images/c/c6/Patton_Sheppard.pdf)
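For instance, the shrinkage approach from the first link (Ledoit and Wolf) is implemented in scikit-learn; a minimal sketch, with random data standing in for asset returns:
```python
import numpy as np
from sklearn.covariance import LedoitWolf

# A minimal sketch of Ledoit-Wolf shrinkage; the random data is only a
# placeholder for a real panel of asset returns.
rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 20))   # 250 days x 20 assets

lw = LedoitWolf().fit(returns)
print(lw.shrinkage_)          # how far the sample covariance was pulled
print(lw.covariance_[:3, :3]) # shrunk estimate (top-left corner)
```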
| null |
CC BY-SA 3.0
| null |
2011-05-09T15:05:46.310
|
2011-05-10T21:59:51.287
|
2011-05-10T21:59:51.287
|
392
|
392
| null |
1146
|
2
| null |
1004
|
5
| null |
The Brown et al. paper and its connection with trading is discussed here: [http://jochenleidner.posterous.com:80/from-speech-recognition-to-trading](http://jochenleidner.posterous.com:80/from-speech-recognition-to-trading) ([mirror](https://web.archive.org/web/20110528070947/http://jochenleidner.posterous.com:80/from-speech-recognition-to-trading))
| null |
CC BY-SA 3.0
| null |
2011-05-09T16:21:19.633
|
2017-07-17T07:33:43.107
|
2017-07-17T07:33:43.107
|
2183
|
854
| null |
1147
|
2
| null |
1119
|
1
| null |
Let me expand on the problem scope a little, to include company information and stock quotes as well.
## German company information
- BaFin has an English page
## Norwegian company information
- Newsweb; choose your company name from the Issuer pulldown. Change the From Date/To Date and a search will give you headlines on the issues.
They are not as detailed as a prospectus, but they had the salient details for convertible bonds that I was looking for.
## Norwegian stock quotes
- Reuters; enter the company name in the search field. The .OL suffix means you're on the Oslo exchange.
- MarketWatch; enter the company name in the search field. The NO: prefix means that you're looking at Norway.
## Norwegian bond information
- Oslo Børs; Enter the ISIN.
The bond I was looking for was not found on the Oslo Børs. Instead, I went to
[Google Norway](http://www.google.no) and typed in the word "marked" (Market) and the ISIN, which had a NO prefix.
This led me to:
- Börse Frankfurt. I entered the ISIN in the Name/ISIN/Symbol Search field.
That got me the quote I was looking for. Note that while Börse Frankfurt had the quote, Börse Stuttgart did not. Further, the convertible bond I was looking for was denominated in EUR, not NOK, which is probably why it was quoted on a German exchange, but not an Oslo exchange.
| null |
CC BY-SA 3.0
| null |
2011-05-09T17:10:21.337
|
2011-05-09T17:10:21.337
| null | null |
343
| null |
1148
|
1
| null | null |
5
|
1003
|
In their paper "European Real Options: An intuitive algorithm for the Black and Scholes Formula", Datar and Mathews provide a proof in the appendix on page 50, which is not really clear to me. It is meant to show the equivalence of their formula $E_{o}(max(s_{T}e^{-\mu T}-xe^{-rT},0))$ and Black and Scholes.
They refer to Hull (2000), define $y=s_{T}e^{-\mu T}$, and then do the following transformation:
$E_{o}(max(s_{T}e^{-\mu T}-xe^{-rT},0))$
$=\intop_{-xe^{-rT}}^{\infty}(s_{T}*e^{-\mu T})g(y)dy$
$=E(s_{T}e^{-\mu T})N_{d_{1}}-xe^{-rT}N_{d_{2}}$
An addition: actually, in the paper it says $E_{o}(max(s_{T}e^{-\mu T}-xe^{-rT}),0)$, so the 0 is outside the brackets. However, I am not sure if that is a typo and whether it should rather be $E_{o}(max(s_{T}e^{-\mu T}-xe^{-rT},0))$. I am not familiar with a function of the form $E(max(x),0)$.
$\mu$ and $r$ are two different discount rates, one being the WACC and the other one the riskless rate.
Could I substitute $V=s_{0}e^{-\mu T},K=xe^{-rT}$, go through the BS steps and re-substitute? In other words, under what constraints is $E\left[max(V-K,0)\right]=E(V)N(d_{1})-KN(d_{2})$ valid?
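For what it's worth, a quick Monte Carlo check (a minimal sketch with hypothetical numbers) suggests the identity holds when $V$ is lognormally distributed and $K$ is a known constant:
```python
import numpy as np
from scipy.stats import norm

# Minimal Monte Carlo check (hypothetical numbers): V lognormal with
# mean F and log-stdev v, K a known constant.
rng = np.random.default_rng(0)
F, v, K = 100.0, 0.30, 95.0

# Lognormal V with E[V] = F needs ln-mean ln(F) - v^2/2
V = np.exp(np.log(F) - 0.5 * v**2 + v * rng.standard_normal(2_000_000))

d1 = (np.log(F / K) + 0.5 * v**2) / v
d2 = d1 - v
closed_form = F * norm.cdf(d1) - K * norm.cdf(d2)
print(np.maximum(V - K, 0).mean(), closed_form)  # agree to MC error
```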
The research this relates to is a comparison of different real option pricing methods.
Could anybody help me out?
Thanks in advance.
Corn
|
Better understanding of the Datar Mathews Method - Real Option Pricing
|
CC BY-SA 3.0
| null |
2011-05-10T10:00:36.507
|
2011-05-18T09:53:51.463
|
2011-05-18T09:53:51.463
|
860
|
860
|
[
"option-pricing",
"black-scholes",
"financial-engineering",
"quantitative"
] |
1150
|
1
|
1154
| null |
66
|
82813
|
Let the Black-Scholes formula be defined as the function $f(S, X, T, r, v)$.
I'm curious about functions that are computationally simpler than Black-Scholes but that yield results approximating $f$ for a given set of inputs $S, X, T, r, v$.
I understand that "computationally simpler" is not well-defined. But I mean simpler in terms of the number of terms used in the function, or, even more specifically, the number of distinct computational steps that need to be completed to arrive at the Black-Scholes output.
Obviously Black-Scholes is computationally simple as it is, but I'm ready to trade some accuracy for an even simpler function that would give results that approximate B&S.
Do any such simpler approximations exist?
|
What are some useful approximations to the Black-Scholes formula?
|
CC BY-SA 3.0
| null |
2011-05-10T15:44:39.800
|
2022-06-29T17:38:51.460
|
2018-04-04T08:44:56.523
|
30479
|
526
|
[
"options",
"option-pricing",
"black-scholes",
"optimization"
] |
1152
|
2
| null |
1150
|
45
| null |
This one is the best approximation I have ever seen:
>
If you hate computers and computer
languages don't give up it's still
hope! What about taking Black-Scholes
in your head instead? If the option is
about at-the-money-forward and it is a
short time to maturity then you can
use the following approximation:
call = put = StockPrice * 0.4 *
volatility * Sqrt( Time )
Source: [http://www.espenhaug.com/black_scholes.html](http://www.espenhaug.com/black_scholes.html)
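A quick numerical illustration of how good this rule of thumb is at the money forward (hypothetical inputs):
```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

S, T, r, sigma = 100.0, 0.25, 0.03, 0.20
K = S * exp(r * T)                 # at-the-money-forward strike

print(bs_call(S, K, T, r, sigma))  # exact: ~3.99
print(0.4 * S * sigma * sqrt(T))   # rule of thumb: 4.00
```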
| null |
CC BY-SA 3.0
| null |
2011-05-10T16:30:48.853
|
2011-05-10T16:30:48.853
| null | null |
12
| null |
1153
|
2
| null |
1150
|
23
| null |
In addition to what vonjd already posted, I would recommend you to look at E.G. Haug's article - [The Options Genius. Wilmott.com](http://www.wilmott.com/pdfs/010721_collector_01.pdf). You can find some approximations of BS not only for vanilla European calls and puts but even for some exotics. For example:
- chooser option: call = put = $0.4F_{0} e^{-\mu T}\sigma(\sqrt{T}-\sqrt{t})$
- asian option: call = put = $0.23F_{0} e^{-\mu T}\sigma(\sqrt{T}+2\sqrt{t})$
- floating strike lookback call = $0.8F_{0} e^{-\mu T}\sigma\sqrt{T} - 0.25{\sigma^2}T$
- floating strike lookback put = $0.8F_{0} e^{-\mu T}\sigma\sqrt{T} + 0.25{\sigma^2}T$
- forward spread: call = put = $0.4F_{1} e^{-\mu T}\sigma\sqrt{T}$
| null |
CC BY-SA 3.0
| null |
2011-05-10T19:34:10.337
|
2011-05-10T19:34:10.337
| null | null |
15
| null |
1154
|
2
| null |
1150
|
58
| null |
This is just to expand a bit on [vonjd's answer](https://quant.stackexchange.com/a/1152/129).
The approximate formula mentioned by vonjd is due to Brenner and Subrahmanyam ("A simple solution to compute the Implied Standard Deviation", Financial Analysts Journal (1988), pp. 80-83). I do not have a free link to the paper so let me just give a quick and dirty derivation here.
For the at-the-money call option, we have $S=Ke^{-r(T-t)}$. Plugging this into the standard Black-Scholes formula
$$C(S,t)=N(d_1)S-N(d_2)Ke^{-r(T-t)},$$
we get that
$$C(S,t)=\left[N\left(\frac{1}{2}\sigma\sqrt{T-t}\right)-N\left(-\frac{1}{2}\sigma\sqrt{T-t}\right)\right]S.\qquad\qquad(1)$$
Now, Taylor's formula implies for small $x$ that
$$N(x)=N(0)+N'(0)x+N''(0)\frac{x^2}{2}+O(x^3).\qquad\qquad\qquad\qquad(2)$$
Combining (1) and (2), we will get with some obvious cancellations that
$$C(S,t)=S\left(N'(0)\sigma\sqrt{T-t}+O(\sigma^3\sqrt{(T-t)^3})\right).$$
But
$$N'(0)=\frac{1}{\sqrt{2\pi}}=0.39894228...$$
so finally we have, for small $\sigma\sqrt{T-t}$, that
$$C(S,t)\approx 0.4S\sigma\sqrt{T-t}.$$
The modified formula
$$C(S,t)\approx 0.4Se^{-r(T-t)}\sigma\sqrt{T-t}$$
gives a slightly better approximation.
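A quick check of how the error behaves as $\sigma\sqrt{T-t}$ shrinks, consistent with the $O(\sigma^3\sqrt{(T-t)^3})$ term above (hypothetical inputs, with $S=Ke^{-r(T-t)}$ throughout):
```python
from scipy.stats import norm

# Error of the 0.4 approximation as sigma*sqrt(T-t) shrinks,
# with S = K e^{-r(T-t)} throughout (hypothetical inputs).
S = 100.0
for s in (0.4, 0.2, 0.1, 0.05):          # s = sigma * sqrt(T - t)
    exact = S * (norm.cdf(0.5 * s) - norm.cdf(-0.5 * s))  # equation (1)
    approx = 0.4 * S * s
    print(s, exact - approx)              # error vanishes quickly
```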
| null |
CC BY-SA 3.0
| null |
2011-05-10T19:37:19.613
|
2017-12-01T08:28:17.153
|
2017-12-01T08:28:17.153
|
70
|
70
| null |
1155
|
2
| null |
1148
|
3
| null |
I don't know what $\mu$ stands for in the model so let me just recall the standard Black-Scholes formalism. It's likely that everything can be extended with minor modifications to the model you're interested in.
The price of the vanilla call option with a strike $K$ is equal to the expectation of the discounted pay-off
$$C_K=\mathbb E(e^{-rT}(S_T-K)_+),$$
where $(S_T-K)_+:=\max(S_T-K,0)$ and $\mathbb E$ is taken with respect to the risk-neutral measure $\mathbb P$. Assuming that $\mathbb P$ admits a continuous density $p(y)$, we have that
$$\mathbb E(e^{-rT}(S_T-K)_+)=\int_{-\infty}^{\infty}e^{-rT}(S_T-K)_+p(S_T)dS_T.$$
Now, in the risk-neutral Black-Scholes world
$$S_T=S_0\exp\left(rT-\frac{1}{2}\sigma^2T+\sigma\sqrt{T}N(0,1)\right).$$
Recalling that the density of $N(0,1)$ is $\frac{1}{\sqrt{2\pi}}e^{-s^2/2}$, we get that
$$C_K=\frac{e^{-rT}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-s^2/2}\left(S_0\exp\left(rT-\frac{1}{2}\sigma^2T+\sigma\sqrt{T}s\right)-K\right)_+ds.$$
The integrand is non-zero if and only if
$$S_0\exp\left(rT-\frac{1}{2}\sigma^2T+\sigma\sqrt{T}s\right)>K,$$
i.e. when
$$s> a=\frac{\ln(K/S_0)+\sigma^2T/2-rT}{\sigma\sqrt{T}}.$$
Therefore
$$C_K=\frac{e^{-rT}}{\sqrt{2\pi}}\int_{a}^{\infty}e^{-s^2/2}S_0\exp\left(rT-\frac{1}{2}\sigma^2T+\sigma\sqrt{T}s\right)ds-\frac{e^{-rT}}{\sqrt{2\pi}}\int_{a}^{\infty}e^{-s^2/2}Kds,$$
which implies after some straightforward manipulations the standard Black-Scholes formula
$$C_K=S_0N(d_1)-Ke^{-rT}N(d_2).$$
Note that $S_0=\mathbb E(e^{-rT}S_T)$ since the discounted stock price is a martingale under the risk-neutral measure.
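A Monte Carlo sanity check of the derivation (hypothetical inputs): simulate $S_T$ under the risk-neutral dynamics above, discount the pay-off, and compare with the closed form:
```python
import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma = 100.0, 95.0, 1.0, 0.03, 0.25
rng = np.random.default_rng(0)

# Simulate S_T under the risk-neutral dynamics above
Z = rng.standard_normal(2_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc = np.exp(-r * T) * np.maximum(ST - K, 0).mean()

d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
closed = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
print(mc, closed)  # agree to Monte Carlo error
```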
| null |
CC BY-SA 3.0
| null |
2011-05-11T13:13:32.337
|
2011-05-11T14:24:16.200
|
2011-05-11T14:24:16.200
|
70
|
70
| null |
1156
|
2
| null |
1110
|
1
| null |
If it is an option on a future, you don't need the spot price at all.
Use the unaltered units (the futures price for S, and K as quoted), and you should get results for IV that make sense.
If you think they don't make sense, just plug your IV result back in and verify that you recover the option price correctly.
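A minimal sketch of that round trip using the Black-76 formula for options on futures (all numbers hypothetical):
```python
from math import exp, log, sqrt
from scipy.stats import norm
from scipy.optimize import brentq

# Black-76 for options on futures; the round trip checks the IV solve.
def black76_call(F, K, T, r, sigma):
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

F, K, T, r, price = 105.0, 100.0, 0.5, 0.02, 8.50

iv = brentq(lambda s: black76_call(F, K, T, r, s) - price, 1e-6, 5.0)
print(iv)                            # implied volatility
print(black76_call(F, K, T, r, iv))  # recovers 8.50
```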
| null |
CC BY-SA 3.0
| null |
2011-05-12T00:21:53.153
|
2011-05-12T00:21:53.153
| null | null |
214
| null |
1157
|
1
|
3625
| null |
17
|
1204
|
I tried this on several symbols and timeframes with the same result:
$$\frac {mean(HIGH-LOW)}{mean(|CLOSE-OPEN|)}$$
```
Symbol Result
------ ------
EURUSD:W1 1.9725
EURUSD:D1 2.0023
EURUSD:H1 2.1766
USDJPY:W1 1.9949
USDJPY:D1 2.0622
USDJPY:H1 2.2327
SAN.MC:D1 2.0075
BBVA.MC:D1 2.0075
REP.MC:D1 2.1320
```
|
Why is the ratio of Hi-Low range to Open-Close range close to 2?
|
CC BY-SA 3.0
| null |
2011-05-12T08:58:41.877
|
2012-06-16T13:21:31.823
|
2012-06-16T13:21:31.823
|
1355
|
867
|
[
"data",
"analysis"
] |
1158
|
2
| null |
998
|
8
| null |
Maybe a better answer: [http://quantivity.wordpress.com/2011/05/08/manifold-learning-differential-geometry-machine-learning/#more-5397](http://quantivity.wordpress.com/2011/05/08/manifold-learning-differential-geometry-machine-learning/#more-5397)
| null |
CC BY-SA 3.0
| null |
2011-05-12T10:25:30.383
|
2011-05-12T10:25:30.383
| null | null |
134
| null |
1159
|
1
| null | null |
13
|
654
|
I have recently seen a paper about the Boeing approach that replaces the "normal" Stdev in the BS formula with the Stdev
\begin{equation}
\sigma'=\sqrt{\frac{ln(1+\frac{\sigma}{\mu})^{2}}{t}}
\end{equation}
$\sigma$ and $\mu$ being the "normal" Stdev and Mean, respectively. (Both in absolute values, resulting from a simulation of the pay-offs.)
Since it is about real options, it sounds reasonable to have the volatility decrease approaching the execution date of a project, but why design the volatility like this? I have plotted the function [here](http://www.wolframalpha.com/input/?i=plot%20%5Csqrt%7B%5Cfrac%7Bln%281%2b%5Cfrac%7B5%7D%7B100%7D%29%5E%7B2%7D%7D%7Bt%7D%7D%20for%20t=0..10) via Wolframalpha.com. Even though the volatility should be somewhere around 10% in this example, it never assumes that value. Why does that make sense?
I've run a simulation and compared the values. Since the volatility changes significantly, the option value changes, of course, are significant.
Here some equivalent expressions. Maybe it reminds somebody of something that might help?
$\Longleftrightarrow t\sigma'^{2}=\ln\left[\left(1+\frac{\sigma}{\mu}\right)^{2}\right]$
$\Longleftrightarrow\sqrt{\exp(t\sigma'^{2})}-1=\frac{\sigma}{\mu}$
$\Longleftrightarrow\sigma=\mu\left[\sqrt{\exp(t\sigma'^{2})}-1\right]$
It somehow looks similar to the arithmetic moments of the log-normal distribution, but it does not fit 100%.
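A quick numerical check of this chain (hypothetical $\mu$, $\sigma$, $t$; note that the square must be read as sitting inside the logarithm for the equivalences to hold):
```python
from math import exp, log, sqrt

mu, sigma, t = 100.0, 10.0, 2.0   # hypothetical mean/stdev of the pay-offs

sigma_prime = sqrt(log((1 + sigma / mu) ** 2) / t)
sigma_back = mu * (sqrt(exp(t * sigma_prime ** 2)) - 1)
print(sigma_prime, sigma_back)    # sigma_back recovers sigma = 10
```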
|
Transformation of Volatility - BS
|
CC BY-SA 3.0
| null |
2011-05-12T13:42:49.857
|
2021-04-02T12:01:45.173
|
2011-05-18T14:55:56.017
|
860
|
860
|
[
"option-pricing",
"stochastic-calculus",
"stochastic-volatility"
] |
1160
|
2
| null |
447
|
6
| null |
My research so far:
OptionsXPress - with commissions of about USD 1.25/contract. USD 1K minimum account opening.
Interactive Brokers (IB) - with commissions of about USD 0.70-1.00/contract. USD 10K minimum account opening.
TradeStation - with commissions of about USD 1/contract. One point to note is that TradeStation's [EasyLanguage](http://en.wikipedia.org/wiki/EasyLanguage) platform is NOT a true, full API; per the sales rep, orders can only be executed via EasyLanguage. So if you want to execute more complicated trading strategies (e.g. conditional on front month call IV exceeding some threshold) you won't be able to execute them there. USD 1K minimum account opening.
Lightspeed - with commissions of about USD 0.60/contract. USD 30K Minimum account opening for using APIs.
| null |
CC BY-SA 3.0
| null |
2011-05-12T17:23:16.910
|
2011-05-12T17:59:10.567
|
2011-05-12T17:59:10.567
|
35
|
869
| null |
1161
|
1
|
1667
| null |
15
|
1331
|
In the paper "[Why We Have Never Used the Black-Scholes-Merton Option Pricing Formula](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1012075)" ([Espen Gaarder Haug](http://en.wikipedia.org/wiki/Espen_Gaarder_Haug), [Nassim Nicholas Taleb](http://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb)), a couple of model-free arbitrage conditions are mentioned which limit the degrees of freedom for an option trader.
The four conditions mentioned in the paper are:
- Put-call parity (obviously)
- A call with strike $K$ cannot trade at a lower price than call $K+\delta K$ (avoidance of negative call and put spreads)
- A call struck at $K$ plus a call struck at $K+2\delta K$ cannot together be cheaper than twice the price of a call struck at $K+\delta K$ (avoidance of negative butterflies)
- Horizontal calendar spreads cannot be negative (when interest rates are low)
What other such model/assumption-free no-arbitrage conditions exist in options trading?
That is, conditions that reduce the degrees of freedom for a rational option trader regardless of his or her subjective beliefs (such as belief in a certain model, etc.).
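For concreteness, here is a minimal sketch of how the spread and butterfly conditions above can be checked on a grid of quoted call prices (the quotes are hypothetical):
```python
import numpy as np

# Hypothetical call quotes C[i] at equally spaced strikes K[i]
K = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
C = np.array([12.3, 8.7, 5.9, 3.8, 2.4])

mono = np.diff(C) <= 0                    # C(K) >= C(K + dK): no negative call spreads
fly = C[:-2] - 2 * C[1:-1] + C[2:] >= 0   # convexity: no negative butterflies
print(mono.all(), fly.all())
```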
|
What are important model and assumption-free no-arbitrage conditions in options trading?
|
CC BY-SA 3.0
| null |
2011-05-12T17:41:36.200
|
2015-05-16T07:22:10.790
|
2011-05-12T18:36:21.207
|
526
|
526
|
[
"options",
"option-pricing"
] |
1162
|
1
|
1165
| null |
24
|
2768
|
Put-call parity is given by $C + Ke^{-r(T-t)} = P + S$.
The variables $C$, $P$ and $S$ are directly observable in the market place. $T-t$ follows by the contract specification.
The variable $r$ is the risk-free interest rate -- the theoretical rate of return of an investment with zero risk.
In theory that's all very simple. But in practice there is no one objective risk-free interest rate.
So in reality, how would you go about setting $r$? Why?
|
Setting the r in put-call parity?
|
CC BY-SA 3.0
| null |
2011-05-12T19:20:21.213
|
2018-06-21T16:12:54.790
| null | null |
526
|
[
"options",
"interest-rates",
"put-call-parity"
] |
1163
|
2
| null |
1162
|
9
| null |
I think you might use the relevant OIS rate, like EONIA or the Fed Funds rate; at least this is the current fad when discounting interest rate swaps.
| null |
CC BY-SA 3.0
| null |
2011-05-12T21:44:14.747
|
2011-05-12T21:44:14.747
| null | null |
357
| null |
1164
|
2
| null |
1162
|
18
| null |
This is not a trivial question. Here's a relevant excerpt (an appetizer, really) from [Hull's book](http://rads.stackoverflow.com/amzn/click/0135052831) (7th Edition, P. 75):
>
It is natural to assume that the rates on Treasury bills and Treasury bonds are the correct benchmark risk-free rates for derivative traders working for financial institutions. In fact, these derivative traders usually use LIBOR rates as short-term risk-free rates. This is because they regard LIBOR as their opportunity cost of capital (see Section 4.1). Traders argue that Treasury rates are too low to be used as risk-free rates because:
1. Treasury bills and Treasury bonds must be purchased by financial institutions to fulfill a variety of regulatory requirements. This increases demand for these Treasury instruments, driving the price up and the yield down.
2. The amount of capital a bank is required to hold to support an investment in Treasury bills and bonds is substantially smaller than the capital required to support a similar investment in other instruments with very low risk.
3. In the United States, Treasury instruments are given a favorable tax treatment compared with most other fixed-income investments because they are not taxed at the state level.
LIBOR is approximately equal to the short-term borrowing rate of a AA-rated company. It is therefore not a perfect proxy for the risk-free rate. There is a small chance that a AA borrower will default within the life of a LIBOR loan. Nevertheless, traders feel it is the best proxy for them to use. LIBOR rates are quoted out to 12 months. As we shall see in Chapter 7, the Eurodollar futures market and the swap market are used to extend the trader's proxy for the risk-free rate beyond 12 months.
| null |
CC BY-SA 3.0
| null |
2011-05-13T00:12:28.030
|
2011-05-13T00:12:28.030
| null | null |
70
| null |
1165
|
2
| null |
1162
|
16
| null |
If your trades are collateralized/margined, you should use the rate paid on your collateral/margin.
| null |
CC BY-SA 3.0
| null |
2011-05-13T06:05:52.757
|
2011-05-13T06:05:52.757
| null | null |
89
| null |
1166
|
1
|
1181
| null |
7
|
272
|
Assume a stock Foo with a single share class.
Furthermore, assume a dual class stock Bar with classes I and II with different voting rights.
The shares in the different classes have equal cash flow rights.
The market for shares in class I is substantially more liquid than the market for shares in class II.
Why would a market maker prefer making markets in share Bar (the dual class share) rather than in the single class share (Foo)? Or more generally, what market making opportunities arise only in dual class shares?
|
How are dual class shares different from non dual class shares from a market makers' perspective?
|
CC BY-SA 3.0
| null |
2011-05-13T15:05:09.630
|
2011-05-19T17:43:08.853
| null | null |
526
|
[
"high-frequency",
"equities",
"market-making"
] |
1167
|
2
| null |
1004
|
13
| null |
In a very, very general sense, what Renaissance Technologies does well [and others try to do, many do less well] is understand where the "true" signal is (i.e. where prices should be) and what is noise (i.e. over-/under-reactions by others in the market) in the total signal of market prices. Generally, trading profits are made by taking the opposing position to someone who over- / under-reacts to where the market price is going to be because the market will "come back" to the true price.
Cryptographic algorithms and speech recognition algorithms have been developed to accomplish essentially the same thing ... it's necessary to separate noise from underlying signal in those and other applications of information theory; basically this is machine learning ... in general, becoming proficient in machine learning is a skill that applies in many fields, and for the mathematically-inclined individual it is well worth studying [because there are so many applications, not just trading]. A good starting point would be the [lectures by Andrew Ng of Stanford University](http://www.youtube.com/results?search_query=Machine%20Learning%20%28Stanford%29&aq=f)
Attempting to beat the hedge funds really isn't something that should be copied by amateurs, i.e. it's an expensive way to get an education ... the algorithms are dynamic; the people using these algorithms must be faster, better, smarter than others who are using these algorithms ... continually updated, adjusted, advanced, accelerated, sharpened by teams of very smart people ... Renaissance isn't alone; it has smart competitors; there's a constant struggle by well-capitalized new firms with smart founders to make these adjustments with smarter, brighter, more capable teams of people who have access to better, faster technologies ... for example, many of these algorithms must now execute so rapidly that it's necessary to use purpose-built hardware such as ASICs, i.e. software on a supercomputer isn't fast enough to execute the trades ... the oscillation of the noise around the signal happens faster and faster.
Generally, the world benefits from the development of these algorithms and hardware, because the advances in this technology can eventually "spill over" into other projects in the rest of the world ... you aren't likely to find out what Renaissance Technologies and its competitors are doing, mostly because the people doing it don't have time [while they are doing it] to write papers and explain; it's not really that they want to keep secrets (i.e. there's some advantage to having your competition look at what you were doing last month, last year because by the time they understand it, you already know how to beat your old strategy), it's mostly that they don't have the time / inclination to write up the explanation just yet ... as with all pursuits, the day eventually arrives where a person just doesn't really care any more and maybe wants to answer questions or tell the story.
Anyone who smugly tells you that they can't tell you because it's a secret basically is telling you they're just somebody like Mr. HR-clerical-job-app-screening-person and they couldn't begin to understand -- so it is easier for them to tell you that it's a secret. It's not that much of a secret ... mostly, it just happens fast ... so you need to be able to pick it up on your own without somebody explaining ... if you need to learn by having it explained, you shouldn't try to use it.
| null |
CC BY-SA 3.0
| null |
2011-05-13T16:15:17.590
|
2011-05-13T16:24:56.137
|
2011-05-13T16:24:56.137
|
121
|
121
| null |
1168
|
2
| null |
1135
|
4
| null |
They would make sense in certain narrow applications; one can perhaps think about scenarios where a binary option might be the most efficient, quickest or easiest way either to benefit from a particular insight OR to hedge against some sort of event ... the real question is whether the volume in the markets for binary options will continue to be sufficient to generate enough liquidity to render them a practical alternative for the different people who might have a reason to engage in either side of the trade.
In other words, whether they make sense or not depends upon whether there are enough people who believe that enough other people believe that they make sense. This is true for any asset or derivative market. All markets fail (i.e. stop making sense) when people stop believing that other people still believe the market makes sense.
| null |
CC BY-SA 3.0
| null |
2011-05-13T17:00:57.600
|
2011-05-13T17:00:57.600
| null | null |
121
| null |
1169
|
2
| null |
998
|
1
| null |
It is neither Holy Grail nor next Madoff, although it could be perceived as the former if it continues to do well or as the latter if it crashes and burns ... but that's just because the general populace [including the financial news media] are so clueless about economic theory, quantitative finance and the practical details of how trading is done ... the methods of Renaissance are pretty straightforward; they are not about some sort of magic-talisman voodoo witchcraft OR about aggressively seeking out idiots and tricking honest people into believing in some sort of magic talisman.
| null |
CC BY-SA 3.0
| null |
2011-05-13T17:18:57.553
|
2011-05-13T17:18:57.553
| null | null |
121
| null |
1170
|
2
| null |
1157
|
3
| null |
Unfortunately, I cannot answer your question fully, though I'll give you my partial answer.
First of all, using entire price history of an index (from Yahoo), this is what I got:
```
Daily Weekly Monthly
DJIA 2.91 2.37 2.33
NASDAQ 100 1.74 1.94 1.91
NYSE Composite 1.61 1.59 1.85
S&P 100 1.75 1.90 1.97
S&P 600 Small Cap 1.59 1.82 1.99
```
Based on this, I don't think we can claim that the ratio is always 2.
So you agree that the ratio is not always 2. But still, you want to know why it is equal to 2.91 or 1.59 or whatever.
This is how I would proceed in answering the question. First, the ratio can be expressed as
$$ \frac{E[max(P_1, P_2,...,P_n)-min(P_1, P_2,...,P_n)]} {E[|P_1 - P_n|]}$$
Second, I would start expanding the fraction in order to get a better picture of what exactly influences the ratio. I would hope to expand the fraction, have some terms cancel each other out, and then obtain as an answer 2, $\sigma / \mu$, or something else concise and beautiful. The problem is, it is extremely hard (at least for me) to obtain an analytical expression for the expected value of a maximum (or minimum) of a sequence of correlated non-normal variables--the prices. I do not think anyone can give you an analytical expression for that. So this is where it ends as far as an analytical answer goes.
You can also use numerical methods, something like Monte Carlo simulation. Assume some model for prices, simulate them, and do some sensitivity analysis on the parameters of the model to see how they affect the ratio.
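As a starting point, here is a minimal sketch of that (assuming a driftless random walk, which is of course a simplification). For driftless Brownian motion one can show $E[\max-\min]=2\sqrt{2t/\pi}$ while $E|P_n-P_1|=\sqrt{2t/\pi}$, so the ratio is exactly 2; drift, fat tails and discrete sampling push real data away from 2:
```python
import numpy as np

# Minimal Monte Carlo sketch: driftless random-walk "days" of 390 bars.
rng = np.random.default_rng(0)
steps = rng.standard_normal((50_000, 390))
paths = np.concatenate(
    [np.zeros((50_000, 1)), np.cumsum(steps, axis=1)], axis=1
)

hl = paths.max(axis=1) - paths.min(axis=1)   # HIGH - LOW
oc = np.abs(paths[:, -1] - paths[:, 0])      # |CLOSE - OPEN|
print(hl.mean() / oc.mean())                 # just below 2
```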
Finally, one thing I do not get is why you are interested in that ratio. Shouldn't you instead be interested in $$E\left(\frac{\max(P_1, P_2,...,P_n)-\min(P_1, P_2,...,P_n)}{|P_1 - P_n|}\right)?$$
For example if the true expected value of the above ratio is equal to 10, and during a trading day the price is such that the ratio is 20, you would start buying the asset because you expect the close price to be higher than the current price. You would then sell the asset for a profit. Using your definition of the ratio, I see no usefulness in it. Care to explain?
| null |
CC BY-SA 3.0
| null |
2011-05-15T20:50:54.777
|
2011-05-15T20:50:54.777
| null | null |
796
| null |
1171
|
1
|
1188
| null |
6
|
232
|
I am looking at closed-form options approximations, in particular the [Bjerksund-Stensland model](http://old.nhh.no/for/dp/2002/0902.pdf).
I have run into a very basic question. How should I scale the input variables in regard to time? My assumptions would be:
- r (riskless interest rate) should be an annual rate, so if the 3-month T-bill yields 0.03%, then I should use .12.
- σ (variance) should be annual (and, according to How to calculate future distribution of price using volatility?, variance scales with the square root of time).
- T (time to expiration) should be in years, so expiration in 83 days would be 83/365 (=0.227397) years.
- b (cost of carry) should be an annual rate.
Have I got that right? Are there other input variables that I will need to scale before plugging in?
|
How to scale option pricing components in regard to time
|
CC BY-SA 3.0
| null |
2011-05-17T16:33:09.370
|
2011-05-20T02:31:16.230
|
2017-04-13T12:46:22.823
|
-1
|
343
|
[
"option-pricing",
"interest-rates",
"models",
"variance"
] |