Id (string, 1-5 chars) | PostTypeId (string, 6 classes) | AcceptedAnswerId (string, 2-5 chars, ⌀) | ParentId (string, 1-5 chars, ⌀) | Score (string, 1-3 chars) | ViewCount (string, 1-6 chars, ⌀) | Body (string, 0-27.8k chars) | Title (string, 15-150 chars, ⌀) | ContentLicense (string, 3 classes) | FavoriteCount (string, 2 classes) | CreationDate (string, 23 chars) | LastActivityDate (string, 23 chars) | LastEditDate (string, 23 chars, ⌀) | LastEditorUserId (string, 1-5 chars, ⌀) | OwnerUserId (string, 1-5 chars, ⌀) | Tags (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
859
|
2
| null |
848
|
4
| null |
If you use a Box-Jenkins model, look at this research, which uses an ARIMA framework to define clusters and then measures the similarity of the time series via cepstral coefficients derived from the autoregressive parameters.
[http://www.csee.umbc.edu/~kalpakis/homepage/papers/ICDM01.pdf](http://www.csee.umbc.edu/~kalpakis/homepage/papers/ICDM01.pdf)
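As a rough sketch of that workflow under assumptions of my own (fit an AR model to each series, then cluster on a distance between the fitted parameters; the paper itself converts the AR parameters to cepstral coefficients before measuring similarity), using `statsmodels` and `scipy`:
```
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from statsmodels.tsa.ar_model import AutoReg

def ar_features(series, lags=3):
    # Fit an AR(lags) model and keep only the lag coefficients (drop the constant).
    fit = AutoReg(np.asarray(series, dtype=float), lags=lags, trend="c").fit()
    return fit.params[1:]

def cluster_series(series_list, n_clusters=2, lags=3):
    # Hierarchical clustering on the AR parameter vectors.
    feats = np.array([ar_features(s, lags) for s in series_list])
    z = linkage(feats, method="ward")
    return fcluster(z, t=n_clusters, criterion="maxclust")
```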
| null |
CC BY-SA 2.5
| null |
2011-03-30T15:45:05.417
|
2011-03-30T15:45:05.417
| null | null |
520
| null |
860
|
2
| null |
431
|
14
| null |
Khan Academy now offers [finance videos](https://www.khanacademy.org/science/core-finance) (he has already started with, e.g., the basics of option trading strategies and arbitrage pricing).
| null |
CC BY-SA 3.0
| null |
2011-03-30T18:31:42.747
|
2013-01-07T20:16:20.710
|
2013-01-07T20:16:20.710
|
35
|
12
| null |
861
|
2
| null |
857
|
6
| null |
From a theoretical point of view (you mentioned beta, so assume we're in a CAPM world), you should hold the market portfolio (let's assume S&P500 index) and be long (or short) the risk-free asset to decrease (or increase) your return and risk. That is, if you'd like higher returns than the S&P500 offers and are willing to accept the risk, trade the S&P500 index on margin. Again, theoretically, you want to hold a portfolio along the line tangent to the efficient frontier.
In the real world, you may not be able to lever up and trade on margin (say, borrowing from the bank to buy an S&P 500 index fund), so then you'd want to look at an equity-only portfolio with a beta greater than one.
If you're looking at a portfolio with a beta below one, then it seems that you should just hold the market and risk-free debt (maybe a money-market savings account or a low-yield corporate debt index fund, depending on your liquidity needs), which should outperform your low-beta portfolio for a given level of risk.
Assuming that you've minimized risk for a given return (i.e., you're on the efficient frontier), then ultimately your risk aversion will determine your trade off between risk and return (i.e., picking the right beta). I agree with Chris that the Sharpe and Treynor ratios are good ways of quantifying the risk-return trade-offs in each of these portfolios.
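For reference, the two ratios mentioned above compare excess return to total risk and to market risk, respectively:
$$
\text{Sharpe} = \frac{E[R_p] - r_f}{\sigma_p}, \qquad \text{Treynor} = \frac{E[R_p] - r_f}{\beta_p},
$$
where $\sigma_p$ is the volatility of the portfolio's returns and $\beta_p$ its beta to the market.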
Although not viewed as rational in the strict sense, given that you likely need this capital for retirement or buying a house, you may want to also look at value-at-risk (VaR) or expected shortfall (ES) to quantify how much you could lose in some hypothesized worst-case scenario. Of course, there are some problems with these measures, but they're illustrative.
| null |
CC BY-SA 2.5
| null |
2011-03-30T18:46:38.190
|
2011-03-30T18:46:38.190
| null | null |
106
| null |
862
|
2
| null |
614
|
2
| null |
I haven't tried it myself, but from what I have seen, your best bet would be the managed accounts from Interactive Brokers.
There is a nice post about it here: [http://leighdrogen.com/the-hedge-fund-structure-is-dead/](http://leighdrogen.com/the-hedge-fund-structure-is-dead/)
- A quick and dirty option would be to use an API to pass the orders and loop through all the accounts doing that (i.e. one file with the list of orders, one with the list of accounts and a small program to pass the orders).
Btw, do you have a track record? ;)
| null |
CC BY-SA 2.5
| null |
2011-03-30T18:51:56.940
|
2011-03-30T18:51:56.940
| null | null |
262
| null |
863
|
2
| null |
857
|
4
| null |
The empirical evidence shows that low to medium beta portfolios beat high beta portfolios on a risk adjusted basis. Search SSRN for "betting against beta." This flies completely in the face of CAPM but frankly CAPM is crap.
| null |
CC BY-SA 2.5
| null |
2011-03-30T21:12:13.357
|
2011-03-30T21:12:13.357
| null | null |
352
| null |
864
|
1
| null | null |
13
|
1391
|
One way investors analyze stocks is on a technical basis. Looking at Bollinger Bands (a 20-day moving average ± 2 standard deviations) is one of the most popular technical tools. Some stocks trade between their Bollinger Bands and rarely break through the bands.
Research suggests that price changes in stocks follow a random walk. Also, most people believe that past news and other factors have little, if any, impact on future price movements.
How do I reconcile stocks that trade almost exclusively between their Bollinger Bands with the two points highlighted above (random walk, past performance doesn't suggest future performance)? Should Bollinger Bands only be used when deciding when to buy a stock after doing fundamental and other analysis? Or is simply looking at Bollinger Bands enough to make a short-term trading call?
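For concreteness, here is a minimal sketch of the band computation described above (20-day moving average ± 2 standard deviations), assuming a pandas Series of closing prices:
```
import pandas as pd

def bollinger_bands(close: pd.Series, window: int = 20, n_std: float = 2.0) -> pd.DataFrame:
    # Rolling mean plus/minus n_std rolling standard deviations.
    mid = close.rolling(window).mean()
    sd = close.rolling(window).std()
    return pd.DataFrame({"middle": mid,
                         "upper": mid + n_std * sd,
                         "lower": mid - n_std * sd})
```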
|
Trading a stock (or other asset) based on Bollinger Bands.
|
CC BY-SA 2.5
| null |
2011-03-31T00:08:41.337
|
2012-07-11T03:31:11.993
| null | null |
299
|
[
"moving-average",
"technicals"
] |
865
|
2
| null |
864
|
14
| null |
Seeing a pattern in a chart is the finance equivalent of a Rorschach test: the discerned pattern says more about the person than the image. And really, if you want to trade that way, you may as well use astrology.
Your real question seems to be:
>
How can I accept or reject the hypothesis that Bollinger bands are an acceptable trading signal?
For that, you'll need to backtest a trading model. You'll have to get some data and write some software to simulate technical analysis. Of course, you must be absolutely careful that you haven't biased your sample (survivorship, selection, look-ahead, etc.). Only a well-executed backtest can give you the evidence to make an informed decision.
And once you've done that, you'll be a quant. Until then, you're just staring at ink on a page.
| null |
CC BY-SA 2.5
| null |
2011-03-31T01:55:07.463
|
2011-03-31T01:55:07.463
| null | null |
35
| null |
866
|
2
| null |
864
|
5
| null |
Just like everyone else who's been down this path, you'll have to prove this stuff to yourself. Make sure that one of your competing tests is a "noise test", where the decision to go long or short is driven by a meaningless random number generator. If your method can't statistically outperform noise, then it is not doing anything meaningful.
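A minimal sketch of such a noise test, under my own assumptions about the setup (daily returns, a position signal of +1/-1/0, and mean daily P&L as the statistic); the question is whether the tested signal beats the distribution generated by coin-flip strategies:
```
import numpy as np

def mean_pnl(signal, returns):
    # Mean daily P&L of holding the signal's position (+1 long, -1 short, 0 flat).
    return float(np.mean(np.asarray(signal) * np.asarray(returns)))

def noise_test(signal, returns, n_trials=1000, seed=0):
    # Fraction of random long/short strategies that do at least as well as the signal.
    rng = np.random.default_rng(seed)
    actual = mean_pnl(signal, returns)
    random_pnls = np.array([
        mean_pnl(rng.choice([-1, 1], size=len(returns)), returns)
        for _ in range(n_trials)
    ])
    return float(np.mean(random_pnls >= actual))  # empirical p-value
```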
| null |
CC BY-SA 2.5
| null |
2011-03-31T02:15:12.000
|
2011-03-31T02:15:12.000
| null | null |
392
| null |
867
|
2
| null |
864
|
2
| null |
Apparently in Forex markets, technical analysis is becoming less and less effective: [http://forextradingtipsdaily.com/fed-paper-power-of-technical-analysis-in-forex-is-declining/](http://forextradingtipsdaily.com/fed-paper-power-of-technical-analysis-in-forex-is-declining/)
I wonder if this is also the case for equities.
| null |
CC BY-SA 2.5
| null |
2011-03-31T06:19:30.103
|
2011-03-31T06:19:30.103
| null | null |
89
| null |
868
|
2
| null |
841
|
-1
| null |
This sounds like a perfect application for genetic algorithms to me.
| null |
CC BY-SA 2.5
| null |
2011-03-31T07:31:41.897
|
2011-03-31T07:31:41.897
| null | null |
334
| null |
869
|
1
|
872
| null |
10
|
1986
|
One commonly used sample estimator of volatility is the standard deviation of the log returns.
It is indeed a very good estimator (unbiased, ...) when the sample is large.
But I don't like it for small samples as it tends to overweight outliers in the log returns.
Do you know of any other statistical dispersion measure that can be used to estimate the volatility of a stock? (I don't care about statistical properties; I just want a different / better estimate of the daily risk of this stock.)
PS: I have already tried using the $L_1$ norm instead of the Euclidean norm. Any other ideas / remarks?
|
better estimator of volatility for small samples
|
CC BY-SA 2.5
| null |
2011-03-31T08:05:24.220
|
2019-04-28T07:14:58.043
|
2013-10-27T09:02:30.917
|
2299
|
134
|
[
"volatility",
"risk",
"high-frequency-estimators"
] |
870
|
2
| null |
364
|
10
| null |
I teach Derivative Securities in the mathematical finance program at NYU and was rather surprised to learn that there is no proof of the FTAP that is accessible to master's-level students. So I wrote [this](http://kalx.net/ftapd.pdf). It is a simple proof for the discrete-time case.
One bonus of the proof of the one-period case is that it tells you how to find the arbitrage if one exists.
| null |
CC BY-SA 2.5
| null |
2011-03-31T13:17:30.693
|
2011-03-31T13:17:30.693
| null | null |
2357
| null |
871
|
2
| null |
41
|
7
| null |
Richardh is spot on. The price of the VIX option is a weighted sum of put (strikes < forward) and call (strikes > forward) options on the S&P 500. The weights are proportional to 1/strike^2. As the S&P goes down the out of the money puts become more valuable and those have the highest weights.
I will leave arguments about the market as a whole to fuzzy headed pundits.
| null |
CC BY-SA 2.5
| null |
2011-03-31T13:30:39.810
|
2011-03-31T13:30:39.810
| null | null |
2357
| null |
872
|
2
| null |
869
|
5
| null |
You could use something like the interquartile or semi-interquartile range, which is somewhat less sensitive to extremes. This is a better measure to use if your data are skewed; however, if your data are normally distributed, it is still better to use the standard deviation.
[http://en.wikipedia.org/wiki/Interquartile_range](http://en.wikipedia.org/wiki/Interquartile_range)
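A short sketch comparing the standard deviation with IQR- and MAD-based volatility estimates; the scaling constants assume normality and are only there to put the robust measures on the same scale as the standard deviation:
```
import numpy as np
from scipy import stats

def vol_estimates(log_returns):
    r = np.asarray(log_returns, dtype=float)
    iqr = np.percentile(r, 75) - np.percentile(r, 25)
    mad = np.median(np.abs(r - np.median(r)))
    return {
        "std": r.std(ddof=1),
        "iqr_based": iqr / (2 * stats.norm.ppf(0.75)),  # IQR / 1.349 under normality
        "mad_based": mad / stats.norm.ppf(0.75),        # MAD * 1.4826 under normality
    }
```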
| null |
CC BY-SA 2.5
| null |
2011-03-31T13:37:19.057
|
2011-03-31T17:09:02.327
|
2011-03-31T17:09:02.327
|
520
|
520
| null |
873
|
2
| null |
843
|
2
| null |
Yeah, I've found this formula. So you just need to put
$$
\Delta t < \frac{\log{u}}{r}.
$$
Edited: To avoid arbitrage one should have $0<d<1+r<u$ (Shreve, Stochastic Calculus for Finance I), or in your case $0<d(\Delta t)<\mathrm{e}^{r\Delta t}<u(\Delta t)$. Only under this condition your formula
$$
p = \frac{\mathrm{e}^{r\Delta t}-d}{u-d}
$$
is valid and the probability will be less than $1$ and greater than $0$; in fact I told you the same from the beginning. Using $u(\Delta t)=\mathrm{e}^{\sigma\sqrt{\Delta t}}$, the condition $\mathrm{e}^{r\Delta t}<u(\Delta t)$ becomes $r\Delta t<\sigma\sqrt{\Delta t}$, which requires a time step
$$
\Delta t < \frac{\sigma^2}{r^2}.
$$
It's strange that these conditions are not presented on Wikipedia. Moreover, they abuse the notation for $u(\Delta t)$ and $d(\Delta t)$, using $t$ there rather than $\Delta t$.
| null |
CC BY-SA 2.5
| null |
2011-03-31T15:12:57.857
|
2011-04-01T08:40:44.203
|
2011-04-01T08:40:44.203
|
464
|
464
| null |
874
|
2
| null |
869
|
7
| null |
You mention "daily" risk, so I'm assuming you're looking at a daily frequency. Yang-Zhang volatility ([Drift-independent Volatility Estimation Based on High, Low, Open and Close Prices](http://www.atmif.com/papers/range.pdf)) fits the bill for what you're asking; it takes intraday fluctuations into account as well.
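A sketch of the Yang-Zhang estimator as I understand it from the paper above (overnight variance plus a weighted combination of open-to-close variance and the Rogers-Satchell term); treat the details as an assumption and check them against the paper before use:
```
import numpy as np

def yang_zhang_vol(open_, high, low, close, periods_per_year=252):
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    n = len(c) - 1
    log_oc = np.log(o[1:] / c[:-1])          # overnight (close-to-open) returns
    log_co = np.log(c[1:] / o[1:])           # open-to-close returns
    sigma_o2 = np.var(log_oc, ddof=1)
    sigma_c2 = np.var(log_co, ddof=1)
    # Rogers-Satchell term (drift-independent, uses high/low/open/close)
    rs = (np.log(h[1:] / c[1:]) * np.log(h[1:] / o[1:])
          + np.log(l[1:] / c[1:]) * np.log(l[1:] / o[1:]))
    sigma_rs2 = rs.mean()
    k = 0.34 / (1.34 + (n + 1) / (n - 1))
    var_yz = sigma_o2 + k * sigma_c2 + (1 - k) * sigma_rs2
    return np.sqrt(var_yz * periods_per_year)   # annualised volatility
```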
| null |
CC BY-SA 2.5
| null |
2011-03-31T18:28:42.023
|
2011-03-31T18:28:42.023
| null | null |
122
| null |
875
|
2
| null |
751
|
7
| null |
Personally, I've dealt with volatility estimation using wavelets on HF data. The estimates seem reasonable, and it's fairly quick computation-wise compared to other methods. There's quite a bit of literature on the subject; I would recommend starting off with [An Introduction to Wavelets and Other Filtering Methods in Finance and Economics](http://rads.stackoverflow.com/amzn/click/0122796705)
| null |
CC BY-SA 2.5
| null |
2011-03-31T18:37:54.357
|
2011-03-31T18:37:54.357
| null | null |
122
| null |
876
|
2
| null |
869
|
1
| null |
In order to suppress the effect of outliers, you can use an indicator transform or a rank transform. Once you convert your data into one of those forms, you can then compute the volatility. Personally, I like the indicator transform as it is easy, popular in many applications, and gives good results.
| null |
CC BY-SA 2.5
| null |
2011-03-31T22:16:46.393
|
2011-03-31T22:42:42.510
|
2011-03-31T22:42:42.510
| null | null | null |
877
|
2
| null |
751
|
7
| null |
Relevant paper:
Efficient Estimation of Volatility using High Frequency Data (Zumbach, Corsi, and Trapletti 2002)
[http://www.olsen.ch/fileadmin/Publications/Working_Papers/020221-efficientVolEstimator.pdf](http://www.olsen.ch/fileadmin/Publications/Working_Papers/020221-efficientVolEstimator.pdf)
| null |
CC BY-SA 2.5
| null |
2011-03-31T23:22:54.207
|
2011-03-31T23:22:54.207
| null | null |
225
| null |
878
|
2
| null |
764
|
3
| null |
This question has been answered many times over already, though hopefully this will provide a bit more insight. If I understand your question correctly, you're basically asking if you can use BSM as a trading indicator.
So let's think about what it really means to be trading an option. Every single variable in the model (i.e. the price of the underlying asset, the strike price, the risk-free rate, and the time until expiration) is observable in the market, except volatility. So what does this mean? When we trade an option, we are mostly trading a difference in volatility. So let's assume you believe that your view of volatility is better than what's currently being priced by the market. That doesn't get us very far if we only want to trade the underlying.
Now, some people may look at ratios of calls/puts or call/put volume as well. I would definitely warn against this type of indicator too. You're basing your indicator on the assumption that, say, people only buy calls if they think the underlying is going to rise. This is simply untrue; those positions may be there for a market maker hedging inventory risk.
Hope this helps!
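To make the "everything is observable except volatility" point concrete, here is a minimal sketch that backs out the implied volatility from a quoted call price (plain Black-Scholes, no dividends; `scipy` assumed available):
```
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call, no dividends.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # Solve bs_call(sigma) = market price by root bracketing on sigma.
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, 1e-6, 5.0)

# Example: implied_vol(10.45, S=100, K=100, T=1.0, r=0.05) is roughly 0.20.
```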
| null |
CC BY-SA 2.5
| null |
2011-03-31T23:28:56.463
|
2011-03-31T23:28:56.463
| null | null |
122
| null |
879
|
1
|
880
| null |
24
|
7203
|
I am not familiar with the concept of entropy for time series. I am looking for good reference papers and examples of use.
|
Can the concept of entropy be applied to financial time series?
|
CC BY-SA 2.5
| null |
2011-04-01T02:04:42.227
|
2020-03-25T12:07:35.387
|
2011-04-01T02:27:57.943
|
35
|
134
|
[
"time-series"
] |
880
|
2
| null |
879
|
17
| null |
As a good starting point read this recent paper by Jing Chen:
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1734526](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1734526)
For a special use of the entropy concept for forecasting the '87-crash read this paper:
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=959547](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=959547)
(Although I tried to contact the authors to get the data to reproduce their findings, and they didn't send it, it is still an enlightening read.)
For a more popular exposition of the use of entropy in money management (key word: 'Kelly formula') you should read this intelligent page-turner by Poundstone: [Fortune's Formula](http://books.google.de/books?id=ywLao52cY08C&dq=fortunes%20formula&hl=de&ei=7W-VTYXvO4ybOp_Znb0H&sa=X&oi=book_result&ct=result&resnum=1&sqi=2&ved=0CDEQ6AEwAA)
EDIT: Quite an interesting paper is this one where Black-Scholes is derived through the use of concepts of relative entropy: [http://www.mdpi.com/1099-4300/2/2/70/](http://www.mdpi.com/1099-4300/2/2/70/)
| null |
CC BY-SA 2.5
| null |
2011-04-01T06:27:58.400
|
2011-04-01T16:21:20.417
|
2011-04-01T16:21:20.417
|
12
|
12
| null |
881
|
1
| null | null |
33
|
8847
|
Have a good quant joke? Share it here. The guiding principle is that it "should be of interest to quants".
I would be particularly keen to learn jokes which involve some nontrivial finance/mathematics. I am looking for jokes which make you laugh and think at the same time.
This is a community wiki question: One quant joke or pun per answer please.
|
Good quant finance jokes
|
CC BY-SA 2.5
| null |
2011-04-01T11:05:20.063
|
2021-10-20T23:13:43.707
| null | null |
70
|
[
"soft-question"
] |
882
|
2
| null |
879
|
7
| null |
I have applied the concept of entropy, and more specifically conditional entropy, to spreading (i.e., as a pricing model to get a sense of value) and execution decisions. It's useful whenever you face a problem of the sort: given X, what is the probability density function of Y?
Also, the concept of mutual information, which evaluates the mutual dependence between two (random) variables, can be useful in many applications. Again, spreading comes to mind, as well as risk management, etc.
No papers on hand, but as usual the Wikipedia article is a pretty good place to get going.
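As a quick illustration of the mutual-information idea, here is a rough histogram-based sketch for two return series (the bin count is an arbitrary choice, and histogram estimators are biased for small samples):
```
import numpy as np

def mutual_information(x, y, bins=20):
    # Histogram estimate of I(X;Y) in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```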
| null |
CC BY-SA 2.5
| null |
2011-04-01T14:38:02.440
|
2011-04-01T14:38:02.440
| null | null |
471
| null |
883
|
1
| null | null |
15
|
14565
|
Are there any free C++ libraries that provide some of the functions used in developing a trading strategy? For instance, calculating drawdown, volatility forecasting, MAE, MFE, etc.
I know I could code these, but this would help me save some time and focus on the strategy and not the report generation.
|
Trading C++ Libraries
|
CC BY-SA 2.5
| null |
2011-04-01T15:07:21.120
|
2019-10-14T16:10:17.513
| null | null |
681
|
[
"backtesting",
"algorithmic-trading",
"programming"
] |
884
|
2
| null |
883
|
7
| null |
Here are some suggestions.
Search Amazon (or your favorite bookseller) for books concerning "C++ quantitative finance." I found several titles that look promising.
I went to [SourceForge](http://sourceforge.net/search/?q=trading%20system) (searching on "Trading Systems") and saw several promising systems that might give you a leg up on drawdown, MAE, etc.
I use TradeStation 9.0 for comparing various trading strategies. It will provide MAE/MFE graphs, trade equity curves, and rank strategies based on maximum drawdown. But be sure to read Trading Systems That Work: Building and Evaluating Effective Trading Systems by Thomas Stridsman for an apt critique of TradeStation's generated reports.
| null |
CC BY-SA 2.5
| null |
2011-04-01T15:51:41.217
|
2011-04-01T15:51:41.217
| null | null |
343
| null |
885
|
2
| null |
883
|
2
| null |
QuantLib is widely used
[http://quantlib.org/docs.shtml](http://quantlib.org/docs.shtml)
| null |
CC BY-SA 2.5
| null |
2011-04-01T16:39:08.153
|
2011-04-01T16:39:08.153
| null | null |
471
| null |
886
|
2
| null |
883
|
5
| null |
For actually creating your trading strategy, you could use the open-source TA-Lib (written in C/C++) available [here](http://ta-lib.org/). To test it, you could use [R](http://www.r-project.org/) and the [PerformanceAnalytics package](http://cran.r-project.org/web/packages/PerformanceAnalytics/).
| null |
CC BY-SA 4.0
| null |
2011-04-01T19:09:42.937
|
2019-10-14T16:10:17.513
|
2019-10-14T16:10:17.513
|
26556
|
252
| null |
887
|
2
| null |
881
|
21
| null |
Investment bank Dog competition:
A researcher, a risk manager and a trader each bring a dog to a competition.
The first one to display is the researcher's dog. The researcher brought a bottle of milk and a bowl and placed them on the floor. Then he commanded the dog to take the bottle and pour the milk into the bowl up to the maximum amount it could hold. And that is exactly what his dog did.
The second one to display is the risk manager's dog. The risk manager brought a bottle of milk and a bowl and placed them on the floor. Then he commanded the dog to take the bottle and pour the milk into the bowl up to two thirds of the maximum it could hold, without spilling any milk during the whole process. And that is exactly what his dog did.
The third one to display is the trader's dog. The trader did not bring anything other than the dog. Then he commanded the dog to do what it had been taught. The dog stood up, walked over to each of the bowls, and drank all the milk.
| null |
CC BY-SA 2.5
| null |
2011-04-01T20:04:33.263
|
2011-04-01T20:04:33.263
| null | null |
122
| null |
888
|
2
| null |
881
|
38
| null |
Efficient Market Hypothesis Joke:
>
"There is an old joke, widely told among economists, about an economist strolling down the street with a companion when they come upon a 100 dollar bill lying on the ground. As the companion reaches down to pick it up, the economist says ‘Don’t bother — if it were a real 100 dollar bill, someone would have already picked it up’.”
| null |
CC BY-SA 2.5
| null |
2011-04-02T00:30:39.790
|
2011-04-02T00:30:39.790
| null | null | null | null |
889
|
1
|
894
| null |
6
|
1043
|
I have been asked to map financial products into XML. It can be in any format. I know there is an open specification, FpML. Is it worth adopting FpML rather than my own standard? Also, how common is FpML in the industry?
|
Financial Products Markup Language
|
CC BY-SA 2.5
| null |
2011-04-02T06:07:20.797
|
2011-04-03T10:03:41.277
|
2011-04-02T09:46:48.997
|
601
|
601
|
[
"quants"
] |
890
|
2
| null |
889
|
1
| null |
Be prepared to extend any specification you use.
| null |
CC BY-SA 2.5
| null |
2011-04-02T16:27:43.443
|
2011-04-02T16:27:43.443
| null | null |
89
| null |
891
|
1
| null | null |
7
|
329
|
I recently became interested in finance. Many books discuss options as simple examples of derivatives. I also read some "popular books". I read in "The Poker Face of Wall Street" that almost no farmers actually use commodity exchanges (p. 91), while in many places the opposite is claimed (for example, "Traders, Guns, & Money", p. 25, says "The major users [of commodity futures] were really farmers"). Why do these two accounts differ?
|
Did farmers really buy options on the CBOE?
|
CC BY-SA 2.5
| null |
2011-04-02T23:16:55.887
|
2011-04-02T23:34:14.100
| null | null |
686
|
[
"futures",
"books"
] |
892
|
2
| null |
891
|
6
| null |
- Farmers (usually referred to as hedgers) typically buy Futures, not options.
- The CBOE does not transact in agricultural commodities
- The CME, and other exchanges, transact in agricultural commodities.
- These exchanges grew, for the most part, directly from the trading interactions of farmers(hedgers/producers) and middle-men(speculators), in the area where these commodities were brought to be processed and sold.
- Even if a farmer never buys or sells an exchange-traded commodity contract, he will almost surely check the listed exchange price when making a deal with a broker to sell his crop. Also, larger farmers/producers and consumers/food-and-beverage manufacturers are more likely to use exchange-traded commodities to reduce their risk/costs.
| null |
CC BY-SA 2.5
| null |
2011-04-02T23:34:14.100
|
2011-04-02T23:34:14.100
| null | null |
214
| null |
893
|
2
| null |
881
|
27
| null |
Adapted from a joke about economists:
A quant (financial engineer), a mechanical engineer, and a chemical engineer are shipwrecked on a deserted island. They are starving and tired of eating coconuts. They come upon a can of food on the beach....
The mechanical engineer says, 'let's wedge the can between these two palm trees and rig a branch to chop the top of the can off.' The other two quickly point out that this will destroy the can and likely send the contents all over the place, wasting them...
The chemical engineer says, 'let's take the sea water and reverse the polarity of the water extracting the salt to make an acid and then burn one end of the can off' The other two quickly point out that while this is clever it will likely ruin the contents of the can.
The quant says, 'let's make a simplifying modeling assumption.... let's begin by assuming that we have a can opener'
:-)
| null |
CC BY-SA 2.5
| null |
2011-04-02T23:43:57.320
|
2011-04-02T23:43:57.320
| null | null |
214
| null |
894
|
2
| null |
889
|
8
| null |
We used FpML all the time at Morgan. Of course, it's not surprising given their involvement in it.
| null |
CC BY-SA 2.5
| null |
2011-04-03T09:52:20.687
|
2011-04-03T09:52:20.687
| null | null |
689
| null |
895
|
2
| null |
889
|
7
| null |
Quite common: products such as Calypso and Murex use their own derivations of FpML, CalypsoML and MxML respectively.
| null |
CC BY-SA 2.5
| null |
2011-04-03T10:03:41.277
|
2011-04-03T10:03:41.277
| null | null |
690
| null |
896
|
2
| null |
115
|
9
| null |
I just ran across an interesting [presentation](http://www.slideshare.net/gaetanlion/black-swan-the-fat-tail-issue) comparing the effectiveness of the Normal, Cauchy, and Student's t distributions in modeling the S&P. It concludes that the normal distribution underestimates extreme movements, that the Cauchy overestimates them (although a comment on the presentation points out that Mandelbrot used different parameters than the author did), and that the Student's t is a fairly good fit.
| null |
CC BY-SA 2.5
| null |
2011-04-03T18:00:40.800
|
2011-04-03T18:00:40.800
| null | null |
80
| null |
897
|
1
| null | null |
7
|
2463
|
What are the limitations of Brownian motion in its applications to finance?
|
What are the limitations of brownian motion in finance?
|
CC BY-SA 2.5
| null |
2011-04-04T00:00:54.743
|
2011-04-04T01:54:56.390
| null | null | null |
[
"brownian-motion"
] |
898
|
2
| null |
897
|
4
| null |
From [this paper](http://pages.stern.nyu.edu/~churvich/Forecasting/Handouts/Scholes.pdf):
>
The geometric Brownian motion model implies that the series of first differences of the log prices must be uncorrelated. But for the S&P 500 as a whole, observed over several decades, daily from 1 July 1962 to 29 Dec 1995, there are in fact small but statistically significant correlations in the differences of the logs at short time lags.
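A quick sketch for checking the quoted claim on your own data (sample autocorrelation of the first differences of log prices at a given lag):
```
import numpy as np

def log_return_autocorr(prices, lag=1):
    # Sample autocorrelation of log-price first differences at the given lag.
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    r = r - r.mean()
    return float(np.dot(r[lag:], r[:-lag]) / np.dot(r, r))
```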
| null |
CC BY-SA 2.5
| null |
2011-04-04T00:49:52.733
|
2011-04-04T00:49:52.733
| null | null |
35
| null |
899
|
2
| null |
897
|
5
| null |
So where to begin? Continuity is a big one, as it fails to take jumps into account; the Gaussian assumption is another big one. Looking deeper into it, stationarity is a huge problem as it applies to financial time series.
That said, it does an OK job at simulation in the long run.
| null |
CC BY-SA 2.5
| null |
2011-04-04T00:57:43.877
|
2011-04-04T00:57:43.877
| null | null |
122
| null |
900
|
2
| null |
897
|
4
| null |
It is a binomial tree in disguise.
| null |
CC BY-SA 2.5
| null |
2011-04-04T01:54:56.390
|
2011-04-04T01:54:56.390
| null | null |
2357
| null |
901
|
2
| null |
214
|
5
| null |
Not really a paradox, but kind of surprising that delta is not necessarily the derivative of option value with respect to the price of the underlying in the standard MBA one period binomial model.
Suppose the realized return over the period is $R$, the stock price at the beginning is $s$ and can go to either $sd$ or $su$ at the end. We can replicate any payoff function, $f$, by solving two linear equations in two unknowns: $m$, the amount invested in the bond, and $n$, the number of shares of stock to buy.
$f(sd) = m R + n sd$
$f(su) = m R + n su$
We find the delta hedge ratio $n = \frac{f(su) - f(sd)}{su - sd}$ and the option value $v = \frac{(R - d) f(su) + (u - R) f(sd)}{R(u - d)}$.
Find a payoff $f$ such that $n \neq dv/ds$.
| null |
CC BY-SA 3.0
| null |
2011-04-04T02:06:17.770
|
2013-12-29T14:43:16.837
|
2013-12-29T14:43:16.837
|
263
|
2357
| null |
903
|
1
| null | null |
17
|
3480
|
What is the role of skewness in portfolio optimization?
|
Role of skewness in portfolio optimization?
|
CC BY-SA 3.0
| null |
2011-04-04T04:05:13.860
|
2011-08-29T20:11:07.047
|
2011-08-29T14:27:54.193
|
1106
|
696
|
[
"optimization",
"portfolio-management",
"modern-portfolio-theory",
"skewness"
] |
904
|
2
| null |
214
|
9
| null |
[Parrondo's paradox](http://en.wikipedia.org/wiki/Parrondo%27s_paradox) is a paradox in game theory in which a combination of losing strategies wins in the long term. It seems the paradox is only used in textbook examples in finance and has few applications in practice, though.
| null |
CC BY-SA 2.5
| null |
2011-04-04T09:51:16.920
|
2011-04-04T09:51:16.920
| null | null |
156
| null |
905
|
1
|
912
| null |
5
|
381
|
So, any European-type option can be characterized by a payoff function $P(S)$, where $S$ is the price of the underlying at maturity.
Let us consider some model $M$ such that within this model $V(S,\tau,P)$ is the price of an option with payoff $P$ at time to maturity $\tau$ when the asset's price is $S$.
Using only non-arbitrage principles we obtain
$$
V(S,\tau,P_1+P_2) = V(S,\tau,P_1)+V(S,\tau,P_2).
$$
With this formula we get one symmetry for the equation on the price of the option which holds regardless of the model $M$. Usually such equations are linear in the payoff since they are linear themselves, which comes from the fact that these equations are obtained using infinitesimal generators of the stochastic processes for the price.
Are there any other non-arbitrage principles which can impose additional restrictions on the equations for the price of a European option? I thought about using some facts about $V(S,\tau,P_1(P_2))$ and options on options.
Edited: the non-arbitrage argument for the linearity is the following. Suppose for some $S,\tau$ we have $V(S,\tau,P_1+P_2) > V(S,\tau,P_1)+V(S,\tau,P_2)$. Then we can short $V(...,P_1+P_2)$ and go long $V(...,P_1)$ and $V(...,P_2)$, so at maturity we have nothing to pay, while at the current time the difference $$V(S,\tau,P_1+P_2) - V(S,\tau,P_1)-V(S,\tau,P_2)$$ is our profit.
|
An equation for European options
|
CC BY-SA 2.5
| null |
2011-04-04T19:51:34.640
|
2011-04-05T16:50:32.847
|
2011-04-05T08:05:38.177
|
464
|
464
|
[
"option-pricing",
"differential-equations"
] |
906
|
1
|
907
| null |
8
|
836
|
I have an MSc degree in the area of Financial Mathematics, but I am now doing research in another field of stochastics. Could you please tell me about the most important problems of (stochastic) financial mathematics nowadays?
Are they only looking for new processes for the price of an asset (and solutions of the corresponding pricing problems)? Maybe they are also wondering about distributions of log-returns?
- These are the problems which always seem to be current, as far as I remember.
Are there any other striking problems? Please do not hesitate to just make suppositions: what are the important problems in your opinion?
|
Modern problems in financial mathematics
|
CC BY-SA 2.5
| null |
2011-04-04T20:29:33.110
|
2012-10-10T09:46:55.880
| null | null |
464
|
[
"general"
] |
907
|
2
| null |
906
|
7
| null |
My personal favourites:
- pricing and hedging in incomplete markets, in particular credit derivatives
- stochastic prediction-discount and inter-tenor basis in the interest rate market; see "Two curves, one price" (Bianchetti) and "Interest Rates and The Credit Crunch: New Formulas and Market Models" (Mercurio)
| null |
CC BY-SA 3.0
| null |
2011-04-05T05:43:17.840
|
2011-04-08T05:43:15.580
|
2011-04-08T05:43:15.580
|
89
|
89
| null |
908
|
1
|
1091
| null |
20
|
3046
|
In his books, Mr. Soros talked about principles which are not used by today's financial mathematics, namely the reflexivity of all actions in the market. Simply put, it can be described as follows: traders' expectations are based on news and historical prices; they trade based on their expectations and hence influence prices, which then influence expectations on the next "turn".
I have never come across an implementation of these ideas in a scientific framework. Have you? If yes, please give me a reference.
|
George Soros models
|
CC BY-SA 3.0
| null |
2011-04-05T06:10:13.113
|
2015-09-03T04:03:22.150
|
2011-04-16T22:02:09.233
|
263
|
464
|
[
"models"
] |
909
|
1
| null | null |
7
|
1272
|
I am doing some research involving the Black-Scholes model and got stuck on dividend-paying stocks when valuing options. What is the real-world approach to handling situations when the underlying pays a dividend?
Thank you
|
Black-Scholes No Dividends assumption
|
CC BY-SA 2.5
| null |
2011-04-05T06:22:59.270
|
2011-04-06T19:02:52.947
| null | null |
428
|
[
"option-pricing",
"black-scholes"
] |
910
|
2
| null |
909
|
1
| null |
Well, the real issue (IMO) is not the dividend but rather estimating the forward price of the underlying at the exercise date (in the BS setting for vanillas, which I think is your setting).
So my answer, with which I think you will not be really satisfied, is that any model (continuous, discrete, even stochastic dividends) will do the trick for pricing purposes as long as you get the correct forward of the underlying stock of your option (for hedging this is a different issue).
| null |
CC BY-SA 2.5
| null |
2011-04-05T07:06:59.263
|
2011-04-05T14:52:39.117
|
2011-04-05T14:52:39.117
|
35
|
92
| null |
911
|
1
| null | null |
11
|
335
|
Consider $U_1(\mu,\Sigma)$ and $U_2(\mu,\Sigma)$, where $U_1(\mu, \cdot) = U_2(\mu, \cdot)$, $U_1(\cdot, \Sigma) = U_2(\cdot, \Sigma)$ such that
\begin{equation*}
arg\inf\limits_{\mu \in U_1(\mu, \cdot), \Sigma \in U_1(\cdot, \Sigma)} \left(w^T \mu - \alpha w^T \Sigma w \right)
\equiv arg\inf\limits_{\mu \in U_2(\mu, \cdot), \Sigma \in U_2(\cdot, \Sigma)} \left(w^T \mu - \alpha w^T \Sigma w\right)
\end{equation*}
Solutions for inner problems are the same
$(\mu, \Sigma)$ from $U_1$ are positively related ($\Sigma \uparrow$ as $\mu \uparrow \Rightarrow$ positively skewed)
$(\mu, \Sigma)$ from $U_2$ are negatively related ($\Sigma \downarrow$ as $\mu \uparrow \Rightarrow$ negatively skewed)
Solutions for inner problem: low $\mu$, high $\Sigma$
\begin{equation}
arg\inf\limits_{(\mu, \Sigma) \in U_1(\mu, \Sigma)} \left(w^T \mu - \alpha w^T \Sigma w \right)
\geq arg\inf\limits_{(\mu, \Sigma) \in U_2(\mu, \Sigma)} \left(w^T \mu - \alpha w^T \Sigma w \right) \quad (1)
\end{equation}
$\Rightarrow$ negative skewness is penalized by linking $U(\mu)$ and $U(\Sigma)$.
$\begin{align}
\max\limits_w \left(w^T \mu - \alpha w^T \Sigma w\right) &\geq \max\limits_w\inf\limits_{(\mu, \Sigma) \in U(\mu, \Sigma)} \left(w^T \mu - \alpha w^T \Sigma w\right)\\ &\geq \max\limits_w\inf\limits_{\mu \in U(\mu), \Sigma \in U(\Sigma)} \left(w^T \mu - \alpha w^T \Sigma w\right)\quad (2)
\end{align}$
Can anyone help me in proving equations (1) and (2)?
Here the uncertainty set for $\mu$ (the mean) could be an ellipsoid, which turns the problem into a second-order cone problem, and the uncertainty set for $\Sigma$ (the covariance matrix) could be a rectangular box, which turns the problem into a semidefinite programming problem. We can also choose others, as I mention in the comments.
|
penalizing negative skewness by linking $U(\mu)$ and $U(\Sigma)$
|
CC BY-SA 3.0
| null |
2011-04-05T07:11:34.723
|
2011-10-03T15:04:36.980
|
2011-04-11T02:42:38.897
|
696
|
696
|
[
"financial-engineering",
"portfolio-management"
] |
912
|
2
| null |
905
|
3
| null |
I think you provided the two that must be met. Pricing must be linear.$$V(\ldots, 2P) = 2 \cdot V(\ldots, P)$$ And pricing must meet the law of iterated value. Where $\tau \in (t, T)$ $$V_t(\ldots, \tau, V_{\tau}(\ldots, T, P)) = V_t(\ldots, T, P)$$ These two laws must be met for any cash flow to prevent arbitrage.
| null |
CC BY-SA 2.5
| null |
2011-04-05T09:35:26.530
|
2011-04-05T16:50:32.847
|
2011-04-05T16:50:32.847
|
106
|
106
| null |
913
|
2
| null |
545
|
18
| null |
There are a lot of people here talking about how GAs are empirical, don't have theoretical foundations, are black boxes, and the like. I beg to differ! There's a whole branch of economics devoted to looking at markets in terms of evolutionary metaphors: Evolutionary Economics!
I highly recommend the Dopfer book, The Evolutionary Foundations of Economics, as an intro.
[http://www.cambridge.org/gb/knowledge/isbn/item1158033?site_locale=en_GB](http://www.cambridge.org/gb/knowledge/isbn/item1158033?site_locale=en_GB)
If your philosophical view is that the market is basically a giant casino, or game, then a GA is simply a black-box and doesn't have any theoretical foundation. However, if your philosophy is that the market is a survival-of-the-fittest ecology, then GA's have plenty of theoretical foundations, and it's perfectly reasonable to discuss things like corporate speciation, market ecologies, portfolio genomes, trading climates, and the like.
| null |
CC BY-SA 2.5
| null |
2011-04-05T15:42:36.530
|
2011-04-05T15:42:36.530
| null | null |
700
| null |
914
|
2
| null |
903
|
3
| null |
Assuming you're talking about optimizing a portfolio that has options included in its investment universe.
Skewness isn't directly modeled in the optimization, although many formulations involve using implied vol as the currency numeraire. (i.e. modeling the components of skewness, instead of skewness itself)
The main impact on the optimization, though, is that the type of solver used changes if you want to guarantee solution uniqueness/global optimality (a second-order cone solver).
However, this solver change is a result of simply having options in the investment universe, not from skewness per se.
| null |
CC BY-SA 2.5
| null |
2011-04-05T18:19:57.437
|
2011-04-05T18:19:57.437
| null | null |
214
| null |
915
|
1
|
1623
| null |
13
|
25220
|
What does a CVA (Credit Valuation Adjustment) desk do, and how are its activities different from other trading desks? Can you work as a quant for a CVA desk and consider your role "front office"?
|
What is the role of Credit Valuation Adjustment (CVA) desks in investment banks?
|
CC BY-SA 3.0
| null |
2011-04-05T20:48:08.800
|
2014-04-20T14:41:19.677
|
2011-08-08T03:32:58.350
|
1106
|
89
|
[
"career",
"cva",
"banks"
] |
916
|
1
|
917
| null |
22
|
8945
|
Does anybody know a site where I can download historical data on stocks, including companies that have gone bankrupt, such as Lehman Brothers?
It appears that bankrupt companies no longer appear on the finance sites that I know...
|
data on historical stock price of bankrupt companies
|
CC BY-SA 2.5
| null |
2011-04-05T22:46:51.513
|
2021-03-17T13:49:36.880
| null | null |
703
|
[
"data",
"finance",
"equities"
] |
917
|
2
| null |
916
|
13
| null |
Google and Yahoo Finance have a survivorship bias -- they only include firms that are still around. I know of no free source that provides the data you seek. I get my data from Compustat and CRSP via Wharton Research Data Services, but these (or Bloomberg or Reuters) are likely too expensive for an individual.
Have you asked your broker if they will sell you the data you want?
| null |
CC BY-SA 2.5
| null |
2011-04-05T23:09:05.790
|
2011-04-05T23:09:05.790
| null | null |
106
| null |
919
|
1
| null | null |
4
|
236
|
Curious as to whether or not there is any sort of all-time record. Any index, future, or stock will do. Volatility must be well above the average 1-year volatility for all periods.
|
What is the longest number of consecutive days that options implied volatility has stayed "extremely high" for any particular underlying?
|
CC BY-SA 2.5
| null |
2011-04-06T02:40:33.817
|
2017-07-21T05:34:52.150
| null | null |
520
|
[
"volatility",
"options",
"implied-volatility"
] |
920
|
1
| null | null |
2
|
1251
|
My friends and I are from Europe, and we went to the US. At the end of the trip they bought an item worth USD 100, but because they were short of money, I paid for the item first with my own USD 100.
After we came back to Europe, they had to pay me back. They would like to pay me in EUR. What is the fair amount of EUR they should pay me?
Assume that at the money-exchange counter the rates are as follows (buy/sell are from their perspective):
1 USD buys 1 EUR
1 EUR buys $\frac{1}{1.05}$ USD.
|
The Fair Value of Paying in Currency X for Goods Bought in Currency Y
|
CC BY-SA 2.5
| null |
2011-04-06T03:33:09.547
|
2011-05-12T08:50:22.407
| null | null |
129
|
[
"currency"
] |
921
|
2
| null |
909
|
2
| null |
Assuming that by "real-world", you mean "with dividends", you can find extensions of the Black-Scholes models which include dividends on [this wikipedia page](http://en.wikipedia.org/wiki/Black%E2%80%93Scholes#Instruments_paying_continuous_yield_dividends).
As @TheBridge mentioned in his answer, there are several assumptions that are made within the BS framework, so your model can become more complicated depending on the assumptions you make.
| null |
CC BY-SA 2.5
| null |
2011-04-06T06:37:00.583
|
2011-04-06T06:37:00.583
| null | null |
467
| null |
922
|
2
| null |
920
|
1
| null |
Not really a quant question, but funny though.
Here are my two euro-cents:
I would say that you should ask them for 100 USD times the exchange rate on the date you lent them the money, plus the interest amount, which can be calculated by compounding the EUR EONIA rate over the period you lent them that money.
Regards
| null |
CC BY-SA 2.5
| null |
2011-04-06T06:40:33.400
|
2011-04-06T06:40:33.400
| null | null |
92
| null |
923
|
2
| null |
915
|
8
| null |
In principle you could say they mainly do risk management at the bank level, but they also make money along the way by trading out the counterparty risks.
Quoting a post on Wilmott:
"here's how you make profits on a CVA desk.
1) you get paid by an internal desk to cover their c/p risk. you stay long and the credit tightens... you make money (similar to #2 below)
2) Prop trading in a credit you may or may not have a risk in
3) you way overcharge a moronic internal desk that doesn't have bloomberg or know wtf a CDS is AKA arbing your own firm
4) crossing dealers on illiquid credits AKA being a spiv"
| null |
CC BY-SA 2.5
| null |
2011-04-06T07:52:57.413
|
2011-04-06T07:52:57.413
| null | null |
647
| null |
924
|
2
| null |
916
|
8
| null |
There is a very cheap, i.e. free, way of obtaining the list of companies included in the S&P 500 at any given time.
Check the revision history for the [S&P 500 List updates on Wikipedia](http://en.wikipedia.org/w/index.php?title=List_of_S&P_500_companies&limit=500&action=history).
It is ugly and unreliable but you usually get what you pay for :) ... it should be okay if you are just playing around with your own strategies.
This doesn't solve the main issue since you still need a way to get the data for those companies. But having the right ticker usually helps...
| null |
CC BY-SA 3.0
| null |
2011-04-06T09:47:26.723
|
2012-10-06T10:47:53.123
|
2012-10-06T10:47:53.123
|
999
|
262
| null |
925
|
1
| null | null |
13
|
8003
|
What is the practical use for Vanna in trading?
How can it be used for a PnL attribution?
|
Vanna - any practical uses for risk or pnl attribution purposes?
|
CC BY-SA 2.5
| null |
2011-04-06T10:46:34.720
|
2023-04-03T22:24:38.237
|
2011-04-06T21:25:02.423
|
690
|
690
|
[
"greeks"
] |
926
|
2
| null |
925
|
13
| null |
Pretty much irrelevant for vanilla markets but really cannot be ignored when pricing exotics such as barriers. Basically, if you do not hedge vega you are likely to sell lots of cheap exotics.
Webb discusses the practical relevance of vanna and vomma in ["The Sensitivity of Vega"](http://www.derivativesstrategy.com/magazine/archive/1999/1199fea1.asp) (Derivatives Strategy, November (1999), pp. 16 - 19).
| null |
CC BY-SA 2.5
| null |
2011-04-06T11:52:18.050
|
2011-04-06T11:52:18.050
| null | null |
70
| null |
927
|
2
| null |
881
|
29
| null |
When asked "your money or your life", attempts to calculate the implied volatility.
| null |
CC BY-SA 2.5
| null |
2011-04-06T15:20:10.543
|
2011-04-06T15:20:10.543
| null | null | null | null |
928
|
1
|
930
| null |
9
|
2644
|
I have a time series with monthly data from which I compute the expected shortfall empirically, following the classical definition which can be found, for example, in [wikipedia's definition](http://en.wikipedia.org/wiki/Expected_shortfall).
That is, assuming I have 200 monthly returns and I am looking to compute the 10% expected shortfall, I take the worst 20 returns and I compute their mean to get my $ES_{10}$.
The thing is, I have a "monthly" expected shortfall, and I would like to annualize this result.
I wonder whether I should consider it as a "return" and do $(1+ES_{10})^{12}-1$ or maybe use the volatility annualization $ES_{10} \cdot \sqrt{12}$? Or is it something different?
|
How to annualize Expected Shortfall?
|
CC BY-SA 2.5
| null |
2011-04-06T17:57:33.510
|
2018-06-20T13:33:05.957
| null | null |
467
|
[
"time-series",
"risk-management"
] |
929
|
1
| null | null |
6
|
178
|
Does anyone know of a source of machine-readable (XML,etc) OCC Infomemos? The PDF files available below contain all the information I want, but are a pain to parse.
[http://www.optionsclearing.com/market-data/infomemos/infomemos1.jsp](http://www.optionsclearing.com/market-data/infomemos/infomemos1.jsp)
|
Availability of machine-readable OCC Infomemos?
|
CC BY-SA 2.5
| null |
2011-04-06T18:28:45.977
|
2011-04-07T09:25:13.290
| null | null |
236
|
[
"data"
] |
930
|
2
| null |
928
|
5
| null |
If you are willing to hypothesize a normal distribution of returns for these purposes, then you scale by the square root. However, normal tails are fairly skinny, so a lot of people like to fit Student-t or Pareto distributions to the tails. In this case, you have to convolve 12 copies of the fitted distribution together, and in the general case there is no simple formula to help you.
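A rough sketch of both routes mentioned above, under assumptions of my own (empirical ES on monthly returns, then either square-root scaling or a Monte Carlo "convolution" of twelve copies of a fitted Student-t):
```
import numpy as np
from scipy import stats

def empirical_es(returns, level=0.10):
    # Mean of the worst `level` fraction of returns.
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.floor(level * len(r))))
    return float(r[:k].mean())

def annual_es_sqrt(monthly_returns, level=0.10):
    # Square-root-of-time scaling (implicitly assumes i.i.d. normal months).
    return empirical_es(monthly_returns, level) * np.sqrt(12)

def annual_es_student_t(monthly_returns, level=0.10, n_sims=100_000, seed=0):
    # "Convolve 12 copies" by simulation: fit a Student-t to monthly returns,
    # draw 12 months at a time, compound them, and take the ES of the result.
    df, loc, scale = stats.t.fit(monthly_returns)
    rng = np.random.default_rng(seed)
    sims = stats.t.rvs(df, loc=loc, scale=scale, size=(n_sims, 12), random_state=rng)
    annual = np.prod(1.0 + sims, axis=1) - 1.0
    return empirical_es(annual, level)
```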
| null |
CC BY-SA 2.5
| null |
2011-04-06T18:43:38.057
|
2011-04-06T18:43:38.057
| null | null |
254
| null |
931
|
2
| null |
909
|
5
| null |
The simplest common approach is to assume a continuous dividend yield. This is treated mathematically in the same way as foreign interest rates on FX options, and the necessary changes to the formula can be found with a quick web search. If the options are European exercise then you might be perfectly happy with this.
A more robust treatment of dividends involves using dynamic programming (such as trinomial trees) to price the option, where dividends are treated as partly proportional to stock price, and partly fixed according to taste. In this case, the dividend is treated as a boundary condition on your solution grid. It's quite a bit more difficult to get that right. You can see how it works in Hull's book.
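For reference, the continuous-yield adjustment mentioned above replaces $S$ by $S e^{-qT}$, so a European call becomes
$$
C = S e^{-qT} N(d_1) - K e^{-rT} N(d_2), \qquad d_{1,2} = \frac{\ln(S/K) + (r - q \pm \sigma^2/2)T}{\sigma\sqrt{T}},
$$
where $q$ is the continuous dividend yield (playing the same role as the foreign interest rate in the FX analogy).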
| null |
CC BY-SA 2.5
| null |
2011-04-06T19:02:52.947
|
2011-04-06T19:02:52.947
| null | null |
254
| null |
932
|
2
| null |
764
|
0
| null |
So, reading the question, I see two parts: first, can Black-Scholes be part of the buying process? and second, is Black-Scholes, by itself, sufficient to evaluate a stock and make a decision on whether to purchase it?
The answer to the first is yes. The answer to the second is basically no. Black-Scholes generates the price of an option on a futures market, not the value of a stock on a regular stock market. That being said, the futures markets are intimately related to stock markets, and an option can be considered an "if-then" clause which defines the availability of a stock.
Black-Scholes is basically telling you how much it's going to cost to have an option to say yes-or-no to proceed with a purchase, at a certain day in the future, at today's price. (It's a little like getting on a waiting list to purchase scalped tickets for a sold-out concert.) Whether or not you want to go ahead with the purchase has to be based on some other function. But Black-Scholes can certainly be part of the process, if you want to be that sophisticated.
| null |
CC BY-SA 2.5
| null |
2011-04-06T19:38:25.700
|
2011-04-06T19:38:25.700
| null | null |
700
| null |
933
|
2
| null |
929
|
1
| null |
I do not. But I do know how to parse these specific pdfs quite easily.
Using the open source pdftotext tool ( [http://en.wikipedia.org/wiki/Pdftotext](http://en.wikipedia.org/wiki/Pdftotext) ).
The table data in the output text file will always begin after the line ExDate and end before DISCLAIMER, with a blank line occurring between each value (testing the types of the values should ensure that everything is working as expected in case of changes).
(Use a regular expression to grab the text between ExDate\n\n and \n\nDISCLAIMER, and you have a plain-text, double-newline-delimited representation of the data.)
| null |
CC BY-SA 2.5
| null |
2011-04-07T09:13:00.263
|
2011-04-07T09:25:13.290
|
2011-04-07T09:25:13.290
|
133
|
133
| null |
934
|
1
| null | null |
37
|
9371
|
What types of neural networks are most appropriate for forecasting returns? Can neural networks be the basis for a high-frequency trading strategy?
Types of neural networks include:
- Radial Basis Function Networks
- Multilayer Perceptron (standard architecture)
- Convolutional Neural Networks
- Recurrent Neural Networks
- Q-learning Networks or Deep Reinforcement Learning
What about advanced architectures such as:
Convolutional
- LeNet-5
- AlexNet
- Inception networks
- VGGNet
- ResNet
Recurrent
- Bi-directional
- Attention Models
|
What types of neural networks are most appropriate for trading?
|
CC BY-SA 4.0
| null |
2011-04-07T09:48:23.133
|
2019-04-06T10:30:52.003
|
2019-04-06T10:30:52.003
|
19222
|
707
|
[
"quant-trading-strategies",
"forecasting",
"high-frequency",
"machine-learning"
] |
935
|
2
| null |
764
|
2
| null |
This is basically how one would use Black-Scholes to "purchase a stock"...
```
// Pseudocode sketch: price the option, then decide whether to exercise it.
double optionPrice = blackScholes(stock, strike, volatility, rate, time);
if (optionPurchased) {
    // Exercise (i.e. buy the stock) only below the chosen threshold price.
    if (stockPrice < thresholdPrice) {
        ExerciseOption(PurchaseStock());
    }
}
```
Note that the purchase of the option would be on a futures exchange, not the regular stock market. Also note the double if/then clauses. First you have to purchase the option, then use it to evaluate whether you want to proceed with the purchase. There are a couple different evaluation functions you could use to set the thresholdPrice.
[http://sourceforge.net/projects/chipricingmodel/](http://sourceforge.net/projects/chipricingmodel/)
ps. for what it's worth, I used to work at the Booth School of Business in Chicago, as a tutor in the computing labs, helping MBA students do Black-Scholes pricing for assignments.
| null |
CC BY-SA 2.5
| null |
2011-04-07T15:12:28.020
|
2011-04-07T15:19:42.193
|
2011-04-07T15:19:42.193
|
700
|
700
| null |
936
|
2
| null |
181
|
3
| null |
Evolutionary economics provides a framework for reasoning about meta-models about markets. The primary one being ecological. Thus, predicting a model's half life would involve discussing market ecologies, niche specialization of a model, comparative fitness of models, and the like.
| null |
CC BY-SA 2.5
| null |
2011-04-07T15:52:22.727
|
2011-04-07T15:52:22.727
| null | null |
700
| null |
937
|
1
| null | null |
2
|
123
|
ok, so I can't post more than 1 link since I'm a noobie noob, and in the course of correcting this, Stack managed to delete my post. so here's the more abbreviated, slightly irritated author version…
I'd like to calculate the fair market capitalization of the Metropolitan Transportation Authority of New York.
Facts:
According to their Wikipedia page, they have: a daily ridership of 11.5 million.
According to this NY Post article, the average fare price is USD 1.29.
Conclusions:
the MTA's daily revenue is USD 14.8M, and their annual revenue is USD 5.4B.
Layman's bad calculation:
comparing Nike's annual operating revenue (from Wikipedia) of USD 2.5 billion and their market capitalization of USD 40 billion, we extrapolate a factor of 16 to determine the MTA's market cap of USD 86.4 billion.
Yes, this is wrong. Please show me how you would do it.
(is there a way to take into account that the MTA is a pseudo monopoly, and as such: 1) gets access to cheap loans if needed from the state, 2) isn't afraid of losing market share to private companies, and as such, is less under fire of losing revenue)
Thanks!
|
How to calculate the Metropolitan Transportation Authority of New York's market capitalization or fair market value?
|
CC BY-SA 3.0
| null |
2011-04-07T16:13:22.437
|
2019-06-29T13:41:50.027
|
2019-06-29T13:41:50.027
|
16595
| null |
[
"market-capitalization"
] |
938
|
2
| null |
934
|
13
| null |
It depends on the data you are trying to model. If your data experience regime changes, then use something with a sigmoid activation function (arctan, tanh, ...).
If your data is mostly linear but does have some deviation, use a radial basis function.
These are general guidelines for neural networks. The frequency of the data is not relevant to the above statements.
Remember that any set of basis functions can be made to fit any set of data. The idea is to use functions that reveal some underlying truth about the data.
| null |
CC BY-SA 3.0
| null |
2011-04-07T16:20:18.433
|
2016-04-17T16:50:18.673
|
2016-04-17T16:50:18.673
|
15390
| null | null |
939
|
1
|
941
| null |
10
|
880
|
Can anyone tell me what the first column and the last column in this FX tick dataset mean? The first seems like some kind of ID, and what does the D in the last column mean?
368412956 AUD/CAD 12/30/2007 17:00:03.000 0.8598 0.8604 D
368413005 AUD/CAD 12/30/2007 17:00:58.000 0.8599 0.8605 D
368413022 AUD/CAD 12/30/2007 17:01:06.000 0.8600 0.8606 D
368413102 AUD/CAD 12/30/2007 17:01:37.000 0.8599 0.8605 D
368413110 AUD/CAD 12/30/2007 17:01:42.000 0.8601 0.8607 D
368413262 AUD/CAD 12/30/2007 17:03:55.000 0.8602 0.8608 D
|
FX Tick Data question
|
CC BY-SA 2.5
| null |
2011-04-07T18:10:13.797
|
2011-04-08T01:50:14.033
| null | null |
43
|
[
"data",
"fx"
] |
940
|
2
| null |
939
|
5
| null |
Looking at the [original headers](http://ratedata.gaincapital.com/2011/03%20March/.%5CAUD_CAD_Week2.zip) for some recent data we have something like this:
```
lTid cDealable CurrencyPair RateDateTime RateBid RateAsk
```
So `D` would seem to be "dealable", but to be honest I couldn't find an example of a `non-D` value in the files (I haven't looked thoroughly though), so I don't really get it.
The cryptic ID is a tick identifier, but no information is given as to how it is really constructed. I assume it should be automatically incremented on every incoming tick (across all currencies).
If that's true, then the actual data seems to be sorted by the `date/time` column (if you look at some recent files, some ticks occur with equal times and are out of order when comparing IDs). So you should verify it first.
Unfortunately, I couldn't find anything better on this.
| null |
CC BY-SA 2.5
| null |
2011-04-07T21:45:17.110
|
2011-04-07T21:45:17.110
| null | null |
38
| null |
941
|
2
| null |
939
|
9
| null |
The first column is just a unique id tagged by Gain; this allows you to separate multiple messages that come in with the same timestamp.
D means "dealable": this means that a trade could take place. [According to this thread](http://forex-scam.com/general-trading-discussions-forex/26474-gain-error-11-pair-not-dealable.html), Gain is known for not dealing around events like major news announcements.
[Note: In EBS data, which is much more reliable, "D" means "deal" (or trade) while "Q" means "quote". This is an entirely different meaning from in the Gain data.]
| null |
CC BY-SA 3.0
| null |
2011-04-08T00:57:29.967
|
2011-04-08T01:50:14.033
|
2011-04-08T01:50:14.033
|
17
|
17
| null |
942
|
1
|
1087
| null |
29
|
19366
|
Yahoo Finance allows you to download tables of their daily historical stock price data.
The data includes an adjusted closing price that I thought I might use to calculate daily log returns as a first step to other kinds of analyses.
To calculate the adj. close you need to know all the splits and dividends, and ex-div and ex-split dates. If someone gets this wrong it should create some anomalous returns on the false vs. true ex dates.
Has anyone seen any major problems in the adj. closing price data on Yahoo Finance?
|
Any known bugs with Yahoo Finance adjusted close data ?
|
CC BY-SA 3.0
| null |
2011-04-08T08:40:58.820
|
2014-11-01T23:27:34.953
| null | null |
627
|
[
"data",
"equities",
"adjustments",
"yahoo"
] |
944
|
2
| null |
249
|
0
| null |
I trade VIX futures exclusively. My hedge is one VIX future pair vs. another future pair in a ratio. Thorough understanding of these and screen time are necessary IMO to earn.
| null |
CC BY-SA 3.0
| null |
2011-04-08T12:39:47.600
|
2011-04-08T17:04:16.983
|
2011-04-08T17:04:16.983
|
35
| null | null |
945
|
2
| null |
942
|
5
| null |
I read your question as you planning to calculate the daily log return as
\begin{equation}
\log{\frac{adj.close(t)}{adj.close(t-1)}}
\end{equation}
which I think might be problematic as the adjusted close does not necessarily represent a "price" that would have been traded at in the past. I think it would be better if you used the adjusted close to derive the actual close value and calculate returns thus:
\begin{equation}
\log{\frac{derived.actual.close(t)}{derived.actual.close(t-1)}}
\end{equation}
| null |
CC BY-SA 3.0
| null |
2011-04-08T12:52:16.577
|
2012-08-01T13:42:06.777
|
2012-08-01T13:42:06.777
|
35
|
252
| null |
946
|
1
|
1034
| null |
38
|
44520
|
As far as I understand, most investors are willing to buy options (puts and calls) in order to limit their exposure to the market in case it moves against them; buying options makes them long gamma.
Being short gamma means that exposure to the underlying becomes longer as the underlying price drops and shorter as it rises, so exposure grows as P&L deteriorates and shrinks as it improves.
Hence I wonder: who is willing to be short gamma? Is it a bet on low volatility?
Also, for a market maker in the option market, writing (selling) an option means being short gamma, so if there is no counterparty willing to be short gamma, how are they going to hedge their gamma?
|
What type of investor is willing to be short gamma?
|
CC BY-SA 3.0
| null |
2011-04-08T13:32:36.977
|
2012-03-26T12:40:29.503
|
2011-09-11T21:57:48.740
|
1106
|
553
|
[
"options",
"hedging",
"portfolio-management",
"greeks",
"investing"
] |
947
|
2
| null |
946
|
3
| null |
I might be misunderstanding your question. My thoughts:
- being short gamma is being short volatility, i.e. a bet that realized volatility stays low
- your comment about gamma exposure changing regardless of direction only holds for ATM options. For ITM options, being short gamma is being long the underlying; for OTM options, being short gamma is being short the underlying.
Some graphs (an R sketch of the bsgamma helper used here appears after the list):
- Below, except as noted, the underlying is at 1, the interest rate
is 0%, and the expiration date is 1 year.
- The gamma of an ATM call as its volatility varies from .05 to .15:
Plot[bsgamma[1,1,1,0,v],{v,.05,0.15}]

- The gamma of a .95 ITM call as the underlying varies from .95 to 1.05:
Plot[bsgamma[x,0.95,1,0,.10],{x,.9,1.1}]

- The gamma of a 1.05 OTM call as the underlying varies from .95 to 1.05:
Plot[bsgamma[x,1.05,1,0,.10],{x,.9,1.1}]

| null |
CC BY-SA 3.0
| null |
2011-04-08T15:17:09.877
|
2011-04-08T15:17:09.877
| null | null | null | null |
948
|
1
|
949
| null |
82
|
38252
|
What would be the best approach to handle real-time intraday data storage?
For personal research I've always imported from flat files only into memory (historical EOD), so I don't have much experience with this. I'm currently working on a side project, which would require daily stock quotes updated every minute from an external feed. For the time being, I suppose any popular database solution should handle it without sweating too much in this scenario. But I would like the adopted solution to scale easily when real-time ticks become necessary.
A [similar problem](https://quant.stackexchange.com/questions/461/store-tick-data) has been mentioned by Marko, though it was mostly specific to R. I'm looking for a universal data storage accessible both for lightweight web front-ends (PHP/Ruby/Flex) and analytical back-end (C++, R or Python, don't know yet).
From what [chrisaycock mentioned](https://quant.stackexchange.com/questions/461/store-tick-data/468#468) column oriented databases should be the most viable solution. And it [seems to be the case](http://www.wallstreetandtech.com/blog/archives/2008/03/columnbased_dat.html).
But I'm not sure I understand all the intricacies of column oriented storage in some exemplary usage scenarios:
- Fetching all or subset of price data for a specific ticker for front-end charting
Compared to row-based solutions, fetching price data should be faster because it's a sequential read. But how does storing multiple tickers in one place influence this? For example, a statement like "select all timestamps and price data where ticker is equal to something": don't I have to compare the ticker on every row I fetch? And when I have to provide complete data for some front-end application, wouldn't serving a raw flat file for the requested instrument be more efficient?
- Analytics performed in the back-end
Things like computing single values for a stock (e.g. variance, return for the last x days) and dependent time series (daily returns, technical indicators, etc.). Fetching input data for computations should be more efficient, as in the preceding case, but what about writing? The gain I see is bulk-writing the final result (like the value of a computed indicator for every timestamp), but I still don't know how the database handles my mashup of different tickers in one table. Does horizontal partitioning/sharding handle it for me automatically, or am I better off splitting manually into a table-per-instrument structure (which seems unnecessarily cumbersome)?
- Updating the database with new incoming ticks
Using row-based orientation would be more efficient here, wouldn't it? The same goes for updating aggregated data (for example, daily OHLC tables). Won't this be a possible bottleneck?
All of this is in the context of available open-source solutions. I thought initially about [InfiniDB](http://infinidb.org/) or [HBase](http://hbase.apache.org/), but I've seen MonetDB and InfoBright mentioned around here too. I don't really need "production quality" (at least not yet), as mentioned by chrisaycock in the [referenced question](https://quant.stackexchange.com/questions/461/store-tick-data/468#468), so would any of these be a better choice than the others?
And the last issue: from approximately what load point do specialized time-series databases become necessary? Unfortunately, things like [kdb+](http://kx.com/kdb+.php) or [FAME](http://www.sungard.com/fame) are out of scope in this case, so I'm contemplating how much can be done on commodity hardware with standard relational databases (MySQL/PostgreSQL) or key-value stores (like [Tokyo](http://fallabs.com/tokyocabinet/)/[Kyoto Cabinet](http://fallabs.com/kyotocabinet/)'s B+ tree). Is that really a dead end? Should I just stick with one of the aforementioned column-oriented solutions given that my application is not mission critical, or is even that an unnecessary precaution?
Thanks in advance for your input on this. If some part is too convoluted, let me know in a comment. I will try to amend accordingly.
EDIT:
It seems that strictly speaking HBase is [not a column oriented store](http://dbmsmusings.blogspot.com/2010/03/distinguishing-two-major-types-of_29.html) but rather a [sparse, distributed, persistent multidimensional sorted map](http://jimbojw.com/wiki/index.php?title=Understanding_Hbase_and_BigTable), so I've crossed it out from the original question.
After some research I'm mostly inclined towards InfiniDB. It has all the features I need, supports SQL (standard MySQL connectors/wrappers can be used for access) and a full DML subset. The only thing missing in the open-source edition is on-the-fly compression and scaling out to clusters. But I guess it's still good bang for the buck, considering it's free.
|
Efficiently storing real-time intraday data in an application agnostic way
|
CC BY-SA 3.0
| null |
2011-04-08T15:44:46.773
|
2016-08-22T00:55:44.643
|
2017-04-13T12:46:23.037
|
-1
|
38
|
[
"time-series",
"software",
"performance",
"database",
"column-oriented"
] |
949
|
2
| null |
948
|
52
| null |
Column-oriented storage is faster for reading because of the cache efficiency. Looking at your sample query:
```
select price, time from data where symbol = `AAPL
```
Here I'm concerned with three columns: `price`, `time`, and `symbol`. If all ticks were stored by row, the database would have to read through all rows just to search for the symbols. It would look like this on disk:
```
IBM | 09:30:01 | 164.05; IBM | 09:30:02 | 164.02; AAPL | 09:30:02 | 336.85
```
So the software must skip over the price and time entries just to read the symbols. That would cause a cache miss for every tick!
Now let's look at the column-oriented storage:
```
IBM | IBM | AAPL; 09:30:01 | 09:30:02 | 09:30:02; 164.05 | 164.02 | 336.85
```
Here the database can sequentially scan the symbol list. This is cache efficient. Once the software has the array indices that represent the symbol locations of interest, the database can jump to the specific time and price entries via random access. (You may notice that the columns are actually associative arrays; the first element in each column refers to the first row in aggregate, so jumping to the Nth row means simply accessing the Nth element in each array.)
As you can imagine, column-oriented storage really shines during analytics. To compute the moving average of the prices per symbol, the database will index-sort the symbol column to determine the proper ordering of the price entries, and then begin the calculation with the prices in contiguous (sequential) layout. Again, cache efficient.
---
Beyond the column-oriented layout, many of these really new databases also store everything in memory when performing calculations. That is, if the data set is small enough, the software will read the entire tick history into memory, which will eliminate page faults when running queries. Thus, it will never access the disk!
A second optimization that kdb+ does is that it will automatically enumerate text. (This feature is inspired by [Lisp symbols](http://en.wikipedia.org/wiki/Symbol_%28programming%29)). So searching for a particular stock does not involve typical string searching; it's simply an integer search after the initial enumeration look-up.
With the sequential storage, in-memory allocation, and the automatic text enumeration, searching for a symbol is really just scanning for an integer in an array. That's why a database like kdb+ is a few orders of magnitude faster than common relational databases for reading and analytics.
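As a toy illustration of the layout (not kdb+; just an R sketch where each column is a plain vector and factor levels play the role of the enumeration table):
```r
# Toy sketch of columnar storage: each column is a vector, and the symbol column
# is enumerated to integers via a factor, so a symbol search is an integer scan.
symbol <- factor(c("IBM", "IBM", "AAPL"))
time   <- c("09:30:01", "09:30:02", "09:30:02")
price  <- c(164.05, 164.02, 336.85)
code <- which(levels(symbol) == "AAPL")        # enumeration lookup, done once
idx  <- which(as.integer(symbol) == code)      # no per-row string comparison
data.frame(time = time[idx], price = price[idx])
mean(price[idx])                               # analytics touch only the columns they need
```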
---
As you've pointed-out in your question, writing is a weakness of column-oriented storage. Because each column is an array (in-memory) or file (on-disk), changing a single row means updating each array or file individually as opposed to simply streaming the entire row at once. Furthermore, appending data in-memory or on-disk is pretty straightforward, as is updating/inserting data in-memory, but updating/inserting data on-disk is practically impossible. That is, the user can't change historical data without some massive hack.
For this reason, historical data (stored on-disk) is often considered append-only. In practice, column-oriented databases require the user to adopt a bitemporal or point-in-time schema. (I advise this schema for financial applications anyway for both better time-series analysis and proper compliance reporting.)
---
I don't know enough about your application to determine performance or production-level requirements. I just hope the above guide will help you make an informed decision with regard to why column-oriented storage is often your best bet for analytics.
| null |
CC BY-SA 3.0
| null |
2011-04-08T18:00:58.587
|
2011-04-08T18:00:58.587
| null | null |
35
| null |
950
|
2
| null |
948
|
30
| null |
I have long hungered for the ultimate, super-fast, super-scalable data storage solution. I have used relational databases, kdb, flat files, and binary files. In the end, I used binary files in my research language of choice. My advice is to KISS. The choice of storage is actually not that critical (unless maybe you're working with options tick data). What is critical is how you decide to splay the data.
If you look at kdb, it can actually be quite slow if you don't splay (segment) the data for your particular need. It just gives you a fast management layer, but it is up to you to design the data storage on disk for your need. What you are trying to do is store the data in such a way so that you group together the data that you need and minimize the amount of extra data that has to be read off disk.
For me, I found storing data in binary format in the language that I do research in is the least amount of overhead. Managing a simple splay is easy. One key is don't be afraid to store multiple copies of your data for different research tasks, so long as the creation of the copies is driven off of one golden source. So for example, if you very often need all ticks for one stock for the past 5 years, then I would splay by stock. But if you also need all stocks for a given day, then I would store another dataset that splays by day. Process and store the data in a way that will be most useful to you.
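A minimal sketch of the splay-by-symbol idea, assuming R as the research language and RDS files as the binary format (the toy data and file names are illustrative):
```r
# Minimal sketch: splay one golden source into per-symbol binary files.
# A second copy could just as easily be splayed by day for other research tasks.
ticks <- data.frame(symbol = c("IBM", "IBM", "AAPL"),
                    time   = c("09:30:01", "09:30:02", "09:30:02"),
                    price  = c(164.05, 164.02, 336.85))
dir.create("by_symbol", showWarnings = FALSE)
for (s in unique(ticks$symbol)) {
  saveRDS(ticks[ticks$symbol == s, ], file.path("by_symbol", paste0(s, ".rds")))
}
aapl <- readRDS(file.path("by_symbol", "AAPL.rds"))  # read back only what the task needs
```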
If you are a big institution, then by all means spend the big $ to get kdb and hire a hotshot q programmer (because you are probably not going to figure it out on your own very easily). It is quite nice. But if you are an individual, do the simple thing and move on to more interesting work.
| null |
CC BY-SA 3.0
| null |
2011-04-08T19:25:23.570
|
2011-04-08T19:25:23.570
| null | null |
443
| null |
951
|
2
| null |
948
|
16
| null |
Personally I make a distinction between two conflicting goals: (1) storing data incoming in real-time for immediate processing and (2) storing the gathered data for "offline" purposes. Such an approach makes things a lot easier if we're talking about a home-grown solution.
(1) must be as fast as possible but not necessarily scalable beyond a few dozen million ticks (remember, we're still talking about a home-grown solution and not a full-blown system for an investment bank).
(2) mustn't be slow and must scale well to hold billions of ticks.
The simplest and fastest solution for (1) is arrays kept in memory. At the end of the trading day you just put them all into (2). They have one drawback: in case of a crash your data is lost. If that worries you, then you have to replace them (or back them up) with a DB. Any non-toy DB will easily accommodate a few million records.
(2) is more demanding: you can't just put billions of ticks into a DB straight away, and even if you do, retrieval performance will be abysmal. You need to split your data in a way that serves your purposes best; there is no universal silver bullet here, and what works for others may not work so well for you.
Personally I store the data gathered from my real-time feed in RAM, and only for drawing purposes. For the real deal I put the ticks into my CEP (complex event processing) subsystem. After the session is over I download all the ticks via my brokerage account and put them into (2).
In my solution historical tick data is stored in a database. I use it solely as a storage engine, since I hit the filesystem's (NTFS in this case) limits after saving approximately 20 million files. My DB has one big table that holds all the data as BLOBs. Each BLOB corresponds to the quotations of a single security for a given day. Of course this approach has its drawbacks, but there are two important advantages: it was easy to develop and it can accommodate huge amounts of data without any performance drop.
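A hedged sketch of the two-tier flow described above, in R; every name here is illustrative and not the poster's actual implementation:
```r
# Hedged sketch of goals (1) and (2): an in-memory buffer for today's ticks,
# flushed to on-disk storage once the session ends. All names are illustrative.
session <- new.env()
session$ticks <- data.frame(time = character(0), price = numeric(0),
                            stringsAsFactors = FALSE)
on_tick <- function(time, price) {                       # goal (1): in-memory append
  session$ticks[nrow(session$ticks) + 1, ] <- list(time, price)
}
end_of_day <- function(day) {                            # goal (2): durable offline storage
  saveRDS(session$ticks, paste0("ticks_", day, ".rds"))
  session$ticks <- session$ticks[0, ]                    # reset the intraday buffer
}
```
Appending row-by-row like this is slow in R; a preallocated buffer or per-column vectors would be closer to the arrays-in-memory approach described above, but the flow is the same.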
| null |
CC BY-SA 3.0
| null |
2011-04-08T23:41:29.913
|
2011-04-08T23:41:29.913
| null | null |
493
| null |
952
|
1
|
953
| null |
7
|
453
|
I am attempting to get a high-level understanding of the role that quantitative finance plays in the global financial markets. I am open to any suggestion on how to do this, but I figure a starting point might be to estimate the percentage of earnings and losses globally, on an annual basis, attributable to quantitative finance.
|
What is the effect of quant finance on global markets?
|
CC BY-SA 3.0
| null |
2011-04-09T16:16:51.000
|
2011-04-18T05:12:20.773
|
2011-04-09T18:22:14.213
|
719
|
719
|
[
"market-impact"
] |
953
|
2
| null |
952
|
7
| null |
This answers a slightly larger question than you're asking, but I highly recommend ["An Engine, Not a Camera: How Financial Models Shape Markets"](http://rads.stackoverflow.com/amzn/click/0262134608) by Donald MacKenzie, which is part of the burgeoning field of "Social Studies of Finance". Some relevant links:
- http://socfinance.wordpress.com/
- http://www.sociology.ed.ac.uk/finance/
Regarding your specific question: you would need to start by defining "quantitative finance" in this context. Does it include any derivative trading (for instance)? What about index funds (they were based on the insights from CAPM, after all)? Or do you just want to limit it to things like quantitative hedge funds?
| null |
CC BY-SA 3.0
| null |
2011-04-09T16:50:28.563
|
2011-04-09T18:27:52.217
|
2011-04-09T18:27:52.217
|
69
|
17
| null |
954
|
2
| null |
115
|
3
| null |
Fat-tailed distributions have extreme values that follow a Fréchet distribution. Try calculating VaR using the method outlined in Tsay's Analysis of Financial Time Series.
If you are trying to determine a good distribution for stock prices in a non-academic setting, use the hyperbolic secant. It is a wrong answer, but it is much easier to fit than a Lévy distribution and it gives better answers than the normal. The hyperbolic secant is bell-shaped, so many time-series techniques "plug" right in.
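A minimal sketch in R of fitting a location-scale hyperbolic secant by maximum likelihood; the returns here are simulated placeholders, not real data:
```r
# Minimal sketch, assuming daily returns in a numeric vector x (simulated here).
# Location-scale hyperbolic secant density with location mu and scale s.
dsech <- function(x, mu = 0, s = 1) 1 / (2 * s * cosh(pi * (x - mu) / (2 * s)))
negll <- function(par, x) -sum(log(dsech(x, par[1], exp(par[2]))))  # log-scale keeps s > 0
x   <- rnorm(2500, sd = 0.01)                      # placeholder for observed daily returns
fit <- optim(c(mean(x), log(sd(x))), negll, x = x)
c(mu = fit$par[1], s = exp(fit$par[2]))            # fitted location and scale
```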
| null |
CC BY-SA 3.0
| null |
2011-04-10T03:59:59.393
|
2011-04-10T03:59:59.393
| null | null | null | null |
955
|
1
|
958
| null |
29
|
4808
|
Fund managers operate in a highly stochastic environment. What methods do you know of to systematically separate skillful fund managers from those who were just lucky?
Every idea, reference, or paper is welcome! Thank you!
|
Separating the wheat from the chaff: What quant methods separate skillful managers from lucky ones?
|
CC BY-SA 3.0
| null |
2011-04-10T09:36:31.907
|
2013-07-25T09:29:34.230
|
2012-02-12T08:01:09.940
|
35
|
12
|
[
"performance-evaluation"
] |
956
|
2
| null |
955
|
4
| null |
I was going to suggest that you use alpha, which is the measure of a manager's excess return beyond their benchmark. But here is an alternative view which is quite interesting.
[http://money.usnews.com/money/blogs/Fund-Observer/2010/06/24/just-how-lucky-is-your-mutual-fund-manager](http://money.usnews.com/money/blogs/Fund-Observer/2010/06/24/just-how-lucky-is-your-mutual-fund-manager)
| null |
CC BY-SA 3.0
| null |
2011-04-10T17:03:11.583
|
2011-04-10T17:03:11.583
| null | null |
520
| null |
957
|
2
| null |
955
|
11
| null |
Some links:
[http://www.scribd.com/doc/34567320/Untangling-Skill-and-Luck](http://www.scribd.com/doc/34567320/Untangling-Skill-and-Luck)
[http://web.ist.utl.pt/adriano.simoes/tese/referencias/Papers%20-%20Pedro/UK%20mutual%20fund%20performance%20Skill%20or%20luck.pdf](http://web.ist.utl.pt/adriano.simoes/tese/referencias/Papers%20-%20Pedro/UK%20mutual%20fund%20performance%20Skill%20or%20luck.pdf)
Below is some code that I used recently to illustrate luck (and con-games). The story went like this:
I'll dream up your lucky lottery number for 2010.....let's say it's 20639. The number doesn't matter because we're going to use that for the seed of a random number generator. Then, I'll take the first three digits of your lucky lottery number (206) and reverse them (-.602) and use that as a multiplier on that random number. Since the S&P500 started off in 2010 at about 1100, I'll start the model at that level. Here is the code:
```
library(quantmod)
#Read 2010 S&P500 data from Yahoo
tem <- as.zoo(getSymbols("^gspc", from="2010-01-01", to="2011-01-01", auto.assign=FALSE, src = "yahoo"))
#Build a lucky lottery model based on the seed
set.seed(20639) #Your lucky lottery number
yt <- tem$GSPC.Adjusted
coredata(yt) <- 1100 * exp(-0.602 * cumsum(rnorm(length(yt), sd=0.0113)))
#Plot the results
plot(tem$GSPC.Adjusted, type="l", main="S&P500 for Year 2010", ylab="S&P500", xlab="", lwd=3, col="darkgray")
lines(yt, lwd=2, col="red")
legend("bottomright", legend=c("Actual S&P500", "Dredged S&P500"), lwd=c(3, 2), col=c("darkgray", "red"))
```

This is all meaningless BS, but it took a while for some people to untangle luck from the con. The really hard part is to convince yourself that your "skill" actually found something real. Something "real" is a very rare event in the world of investing.
| null |
CC BY-SA 3.0
| null |
2011-04-10T17:36:15.507
|
2011-04-10T17:42:46.080
|
2011-04-10T17:42:46.080
|
392
|
392
| null |
958
|
2
| null |
955
|
30
| null |
Larry Harris has a chapter on performance evaluation in [Trading and Exchanges](http://rads.stackoverflow.com/amzn/click/0195144708). He states that over a long period of time, a skilled asset manager will consistently have excess returns whereas a lucky one will be expected to have random and unpredictable returns. Thus, we start with the portfolio's market-adjusted return standard deviation:
\begin{equation}
\sigma_{adj} = \sqrt{\sigma^2_{port} + \sigma^2_{mk} - 2\rho\sigma_{port}\sigma_{mk}}
\end{equation}
where $\rho$ is the correlation between the market and portfolio returns.
For a sample of size $n$ (generally the number of years), the average excess return, and the adjusted standard deviation from above, we have a [t-statistic](http://en.wikipedia.org/wiki/Student%27s_t-test):
\begin{equation}
t = \frac{\overline{R_{port}} - \overline{R_{mk}}}{\frac{\sigma_{adj}}{\sqrt{n}}}
\end{equation}
Now we can simply determine the probability that the manager's excess returns were luck by plugging this t-statistic into the [t-distribution](http://en.wikipedia.org/wiki/Student%27s_t-distribution) with $n - 1$ degrees of freedom and reading off the upper-tail probability. The lower that probability, the more we can believe the manager's excess returns came from skill.
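A minimal sketch in R with hypothetical annual returns; note that $\sigma_{adj}$ is just the standard deviation of the excess-return series, so this is equivalent to a one-sided paired t-test:
```r
# Minimal sketch with hypothetical annual returns for the portfolio and its benchmark.
port <- c(0.12, 0.08, -0.03, 0.15, 0.10, 0.07)
mkt  <- c(0.10, 0.06, -0.05, 0.11, 0.09, 0.05)
n    <- length(port)
rho  <- cor(port, mkt)
sigma.adj <- sqrt(var(port) + var(mkt) - 2 * rho * sd(port) * sd(mkt))
t.stat  <- (mean(port) - mean(mkt)) / (sigma.adj / sqrt(n))
p.value <- pt(t.stat, df = n - 1, lower.tail = FALSE)   # probability the excess return is luck
# Equivalent shortcut: t.test(port - mkt, alternative = "greater")
```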
| null |
CC BY-SA 3.0
| null |
2011-04-10T20:07:44.723
|
2011-04-10T20:07:44.723
| null | null |
35
| null |
959
|
1
| null | null |
28
|
6268
|
I am currently working at a firm that does algo trading. We do all of our work in Java, and we do make money with it. We debate all the time whether we would have made more money with native code or with VHDL on network cards.
We don't do super-high-frequency trading, though our trades are more complicated; even so, we need to be first. After working there for quite a while, I am curious whether Java is popular in this area. (And since no one in that field will talk about it, I would like to raise the issue here.)
From my experience, a lot depends on the reliability of the exchange or the broker. If it is not very reliable (as is the case with many exchanges around the world), a delay of 2 milliseconds matters much more than the language you choose. But still, how many shops choose managed code?
|
How good is managed code for algo trading?
|
CC BY-SA 3.0
| null |
2011-04-10T22:55:04.710
|
2014-11-18T19:19:18.150
|
2011-04-10T23:16:51.010
|
35
|
722
|
[
"programming",
"algorithm"
] |
960
|
2
| null |
959
|
17
| null |
There are [tons of languages used in this field](https://quant.stackexchange.com/q/306/35). As for Java-based trading platforms, [Marketcetera](http://www.marketcetera.com/site/) is popular with customers.
To justify switching languages, you'd need to show that there is a bottleneck preventing your team from collecting more P&L. Have you run a profiler and compared the results with tcpdump? You must show that your existing platform is the cause of the delay that loses opportunities before diving into C++.
(And if we are talking milliseconds, I assume you're co-located at the exchanges in question, if they allow it. If you aren't co-lo, then dealing with that would probably have more impact than changing languages.)
| null |
CC BY-SA 3.0
| null |
2011-04-10T23:23:37.617
|
2011-04-10T23:23:37.617
|
2017-04-13T12:46:22.953
|
-1
|
35
| null |
961
|
2
| null |
959
|
7
| null |
I know (by word of mouth from persons directly involved, so cannot back it up with a reference) that Goldman Sachs uses Java a lot in algo trading.
| null |
CC BY-SA 3.0
| null |
2011-04-11T07:04:13.470
|
2011-04-11T07:04:13.470
| null | null |
89
| null |