Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
111 | 1 | 113 | null | 143 | 174906 | I am not very sure if this question fits in here.
I have recently begun reading and learning about machine learning. Can someone throw some light on how to go about it, or rather share their experience and a few basic pointers on how to go about it, or at least how to start applying it to see some results from data sets? How ambitious does this sound?
Also, do mention standard algorithms that should be tried or looked at while doing this.
| How can I go about applying machine learning algorithms to stock markets? | CC BY-SA 2.5 | null | 2011-02-01T18:35:41.513 | 2023-05-24T06:34:51.480 | 2011-12-18T20:52:41.500 | 1106 | 175 | [
"machine-learning",
"prediction",
"mathematics"
]
|
113 | 2 | null | 111 | 161 | null | There seems to be a basic fallacy that someone can come along and learn some machine learning or AI algorithms, set them up as a black box, hit go, and sit back while they retire.
My advice to you:
Learn statistics and machine learning first, then worry about how to apply them to a given problem. There is no free lunch here. Data analysis is hard work. Read ["The Elements of Statistical Learning"](http://web.stanford.edu/~hastie/ElemStatLearn/) (the pdf is available for free on the website), and don't start trying to build a model until you understand at least the first 8 chapters.
Once you understand the statistics and machine learning, then you need to learn how to backtest and build a trading model, accounting for transaction costs, etc. which is a whole other area.
After you have a handle on both the analysis and the finance, then it will be somewhat obvious how to apply it. The entire point of these algorithms is trying to find a way to fit a model to data and produce low bias and variance in prediction (i.e. that the training and test prediction error will be low and similar). [Here is an example of a trading system using a support vector machine in R](http://quantumfinancier.wordpress.com/2010/06/26/support-vector-machine-rsi-system/), but just keep in mind that you will be doing yourself a huge disservice if you don't spend the time to understand the basics before trying to apply something esoteric.
[Edit:]
Just to add an entertaining update: I recently came across this master's thesis: ["A Novel Algorithmic Trading Framework Applying Evolution and Machine Learning for Portfolio Optimization"](http://blog.andersen.im/wp-content/uploads/2012/12/ANovelAlgorithmicTradingFramework.pdf) (2012). It's an extensive review of different machine learning approaches compared against buy-and-hold. After almost 200 pages, they reach the basic conclusion: "No trading system was able to outperform the benchmark when using transaction costs." Needless to say, this does not mean that it can't be done (I haven't spent any time reviewing their methods to see the validity of the approach), but it certainly provides some more evidence in favor of the [no-free lunch theorem](http://en.wikipedia.org/wiki/No_free_lunch_theorem).
| null | CC BY-SA 3.0 | null | 2011-02-01T18:48:50.043 | 2017-08-03T21:48:24.560 | 2017-08-03T21:48:24.560 | 25351 | 17 | null |
114 | 1 | null | null | 16 | 1927 | How do I price OANDA box options without using their slow and
machine-unfriendly user interface?:
- http://fxtrade.oanda.com (free demo account) sells "box options":
If you already know what a box option is (or visit
http://fxtrade.oanda.com/trade-forex/fxtrade/box-options or sign up
for a free demo account), you can skip the rest of this section.
A box option is a highly flexible binary option where you choose
a FOREX currency pair, a price range and a time range. You win if
the FOREX currency pair hits the price range sometime during the
time range.
Here's how a box option looks on a chart on OANDA's user interface:
And the same box option in a form:
In the above example, you're betting that USDCAD will trade
between 0.98758 and 0.99674 sometime between 1455 and 2000
GMT. Note that USDCAD does not have to remain in this range the
entire time: if USDCAD plunged to 0.98000 before 1455 but then
rebounded to 0.98760 at 1900 (or any time between 1455 and 2000),
you still win.
$1000, the purchase price, is the most you can lose.
If you win, OANDA will pay back $1005.48 for a profit of
$5.48. This isn't much, because it's fairly likely that you'll
win.
You can also buy an option betting the exact opposite: that
USDCAD won't trade between 0.98758 and 0.99674 any time between
1455 and 2000 (in other words, USDCAD remains below 0.98758 from
1455 to 2000, or remains above 0.99674 from 1455 to 2000).
For the opposite option, OANDA pays a little better,
since it's unlikely that USDCAD won't hit that range between 1455 and 2000.
- I'm trying to figure out how OANDA prices these options:
I'm trying to optimize certain values, so I need quotes for
every price range and every time range. It's infeasible to do this
using their standard interface. Additionally, it's hard to record
values from this user interface into a file.
The prices obviously relate in some way to the probability of
hitting the time/price range. I know how to calculate these
probabilities
(https://github.com/barrycarter/bcapps/blob/master/box-option-value.m),
but can't find a correlation between the probabilities and OANDA's
price.
OANDA obviously includes a "safety factor" and "commission" in
their quotes. If a box option is 50% likely to win, they won't
return $2000 on your $1000, since that would mean no profit
for them.
When the "hit" and "miss" prices are identical (roughly meaning
they feel the box option has 50% chance of success), they seem to
pay out about $1400 on a $1000 bet (meaning a $400 profit). That's
just a rough observation though.
This might be more of a project, but I'm looking for help, tips
on how to get started, brilliant insights, etc.
I realize OANDA charges a large "commission" on these options, but still think they can be useful in some cases.
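One way to get started, as a minimal sketch rather than OANDA's actual model: estimate the hit probability by Monte Carlo under a geometric Brownian motion assumption (all parameters below are hypothetical), then compare the implied zero-edge payout to OANDA's quote to back out their margin:
```
# Estimate P(price enters [lo, hi] at some point during [t1, t2]) under GBM.
# A zero-edge payout on a $1000 stake is 1000 / that probability; OANDA's
# quote should sit below it, the gap being their "safety factor" + commission.
box_hit_prob <- function(S0 = 0.9920, sigma = 0.10, mu = 0,
                         lo = 0.98758, hi = 0.99674,
                         t1 = 5 / (24 * 365), t2 = 10 / (24 * 365),
                         n_paths = 10000, n_steps = 500) {
  dt <- t2 / n_steps
  hits <- 0
  for (i in seq_len(n_paths)) {
    z <- rnorm(n_steps)
    S <- S0 * exp(cumsum((mu - 0.5 * sigma^2) * dt + sigma * sqrt(dt) * z))
    t <- seq_len(n_steps) * dt
    w <- t >= t1 & t <= t2                      # inside the time window
    if (any(S[w] >= lo & S[w] <= hi)) hits <- hits + 1
  }
  hits / n_paths
}

p <- box_hit_prob()
c(hit_probability = p, zero_edge_payout_on_1000 = 1000 / p)
```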
| How do I price OANDA box options? | CC BY-SA 3.0 | null | 2011-02-01T18:59:40.630 | 2011-09-09T18:34:01.053 | 2011-09-09T18:34:01.053 | 1355 | null | [
"option-pricing",
"software"
]
|
115 | 1 | 116 | null | 45 | 6654 | Since Mandelbrot, Fama and others have performed seminal work on the topic, it has been suspected that stock price fluctuations can be more appropriately modeled using Lévy alpha-stable distributions rather than the normal distribution. Yet the subject is somewhat controversial: there is a lot of literature defending the normal law and criticizing distributions without bounded variation. Moreover, precisely because of the unbounded variation, the whole standard framework of quantitative analysis cannot simply be copy/pasted to deal with these more "exotic" distributions.
Yet I think there should be something to say about how to value the risk of fluctuations. After all, the approaches using the variance are just shortcuts; what one really has in mind is the probability of a fluctuation of a certain size. So I was wondering if there is any literature investigating that in particular.
In other words: what is the current status of financial theories based on Lévy alpha-stable distributions? What are good review papers of the field?
| Lévy alpha-stable distribution and modelling of stock prices. | CC BY-SA 2.5 | null | 2011-02-01T19:07:12.580 | 2021-09-10T03:11:11.110 | 2011-02-02T08:42:46.467 | 156 | 156 | [
"risk",
"equities",
"variance",
"probability"
]
|
116 | 2 | null | 115 | 24 | null | I recently read ["Modeling financial data with stable distributions"](http://academic2.american.edu/~jpnolan/stable/StableFinance23Mar2005.pdf) (Nolan 2005) which gives a survey of this area and might be of interest (I believe it was contained in ["Handbook of Heavy Tailed Distributions in Finance"](http://rads.stackoverflow.com/amzn/click/0444508961)). Another more recent reference is ["Alpha-Stable Paradigm in Financial Markets"](http://www.finanalytica.com/uploads/TechnicalReports/09review.pdf) (2008).
I'm not aware of anything covering "risk of fluctuations" and this is still certainly not at the center of the field (i.e. most theory still includes some version of Gaussian or mixture of Gaussians). Would also be interested in other references.
| null | CC BY-SA 2.5 | null | 2011-02-01T19:28:09.343 | 2011-02-01T19:33:55.123 | 2011-02-01T19:33:55.123 | 17 | 17 | null |
118 | 2 | null | 111 | 10 | null | One basic application is predicting financial distress.
Get a bunch of data with some companies that have defaulted, and others that haven't, with a variety of financial information and ratios.
Use a machine learning method such as SVM to see if you can predict which companies will default and which will not.
Use that SVM in the future to short high-probability-of-default companies and go long low-probability-of-default companies, funded with the proceeds of the short sales.
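A minimal sketch of this idea, using the e1071 package's `svm()` on simulated stand-ins for real financial ratios and default flags (with real data you would also need proper out-of-sample validation):
```
library(e1071)   # for svm()

set.seed(1)
n <- 200
ratios <- data.frame(
  leverage  = rnorm(n, 2, 1),      # hypothetical debt/equity
  coverage  = rnorm(n, 4, 2),      # hypothetical interest coverage
  liquidity = rnorm(n, 1.5, 0.5)   # hypothetical current ratio
)
# simulated default flag: more leverage / less coverage -> more likely to default
p <- plogis(-2 + 0.8 * ratios$leverage - 0.5 * ratios$coverage)
default <- factor(rbinom(n, 1, p), levels = c(0, 1),
                  labels = c("no_default", "default"))

train <- 1:150; test <- 151:n
fit  <- svm(x = ratios[train, ], y = default[train])
pred <- predict(fit, ratios[test, ])
table(predicted = pred, actual = default[test])
# predicted "default" -> candidate short; predicted "no_default" -> candidate long
```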
| null | CC BY-SA 2.5 | null | 2011-02-01T20:02:38.270 | 2011-02-01T20:02:38.270 | null | null | 74 | null |
119 | 2 | null | 115 | 9 | null | I am still a beginner to this topic, and have been working through Cont and Tankov's textbook Financial Modelling With Jump Processes (2003), which is a fairly elementary treatment of the subject. I think a revised second edition is to come out later this year.
One interesting area of applications that has become more prominent with a recent wave of papers are those that use Bayesian methodology to evaluate stochastic volatility, for example see: [Jacquier, Polson & Rossi](http://harrisd.net/papers/ARCHSV/Multivariate%20Models%20in%20the%20Literature/JacquierPolsonRossi2004JoE.pdf) and [Szerszen](http://www.federalreserve.gov/pubs/feds/2009/200940/200940pap.pdf) among others.
| null | CC BY-SA 2.5 | null | 2011-02-01T20:55:14.413 | 2011-02-01T20:55:14.413 | null | null | 99 | null |
120 | 2 | null | 5 | 11 | null | There is a fairly recent (2010) [monograph](http://books.google.com/books?id=f73jwdRa3t4C&printsec=frontcover&dq=Risk+Management+in+Credit+Portfolios:+Concentration+Risk+and+Basel+II&hl=en&ei=YHlITcTGF4SYOs3j5acE&sa=X&oi=book_result&ct=result&resnum=1&ved=0CDAQ6AEwAA#v=onepage&q&f=false) by Martin Hibbeln entirely devoted to this very question. He starts with the standard Asymptotic Single Risk Factor model and shows how it can be modified in order to be consistent with the Basel II framework. He also compares the accuracy and runtime of several modern models which have been developed to measure sector concentration risk.

| null | CC BY-SA 2.5 | null | 2011-02-01T21:49:17.587 | 2011-02-01T21:49:17.587 | null | null | 70 | null |
121 | 1 | 192 | null | 22 | 8587 | Suppose you have two sources of covariance forecasts on a fixed set of $n$ assets, method A and method B (you can think of them as black box forecasts, from two vendors, say), which are known to be based on data available at a given point in time. Suppose you also observe the returns on those $n$ assets for a following period (a year's worth, say). What metrics would you use to evaluate the quality of these two covariance forecasts? What statistical tests?
For background, the use of the covariances would be in a vanilla mean-variance optimization framework, but one can assume little is known about the source of alpha.
edit: forecasting a covariance matrix is a bit different, I think, than other forecasting tasks. There are some applications where getting a good forecast of the eigenvectors of the covariance would be helpful, but the eigenvalues are not as important. (I am thinking of the case where one's portfolio is $\Sigma^{-1}\mu$, rescaled, where $\Sigma$ is the forecast covariance, and $\mu$ is the forecast returns.) In that case, the metric for forecasting quality should be invariant with respect to scale of the forecast. For some cases, it seems like forecasting the first eigenvector is more important (using it like beta), etc. This is why I was looking for methods specifically for covariance forecasting for use in quant finance.
| How do you evaluate a covariance forecast? | CC BY-SA 2.5 | null | 2011-02-01T22:51:01.403 | 2022-03-04T01:02:49.437 | 2011-02-04T03:31:53.430 | 108 | 108 | [
"forecasting",
"statistics",
"covariance"
]
|
122 | 2 | null | 111 | 5 | null | I echo much of what @Shane wrote. In addition to reading ESL, I would suggest an even more fundamental study of statistics first. Beyond that, the problems I outlined in [in another question on this exchange](https://quant.stackexchange.com/questions/45/how-are-risk-management-practices-applied-to-ml-ai-based-automated-trading-system/68#68) are highly relevant. In particular, the problem of datamining bias is a serious roadblock to any machine-learning based strategy.
| null | CC BY-SA 2.5 | null | 2011-02-01T23:10:25.770 | 2011-02-01T23:10:25.770 | 2017-04-13T12:46:23.037 | -1 | 108 | null |
123 | 2 | null | 121 | 4 | null | You probably want to take it back to how one evaluates forecast models in general: using some metrics over one- or many-step forecasts, see e.g. [here for a Wikipedia discussion](http://en.wikipedia.org/wiki/Calculating_demand_forecast_accuracy). But instead of forecasting first moments, it would now be second moments.
This can still use (root) mean squared error, or [mean absolute percentage error](http://en.wikipedia.org/wiki/Mean_absolute_percentage_error), or related measures; see e.g. this paper by Rob Hyndman [on comparisons of methods](http://www.buseco.monash.edu.au/ebs/pubs/wpapers/2005/wp13-05.pdf).
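A minimal sketch of scoring two covariance forecasts this way, with simulated returns standing in for the real data and for vendors A and B:
```
# Element-wise (Frobenius) RMSE between a forecast covariance matrix and the
# realized covariance of the following period.
cov_forecast_rmse <- function(Sigma_forecast, next_period_returns) {
  Sigma_realized <- cov(next_period_returns)
  sqrt(mean((Sigma_forecast - Sigma_realized)^2))
}

set.seed(1)
R <- matrix(rnorm(252 * 5, sd = 0.01), ncol = 5)   # a year of daily returns, 5 assets
Sigma_A <- cov(R) * 1.1                            # placeholder "vendor" forecasts
Sigma_B <- cov(R) * 0.7
c(A = cov_forecast_rmse(Sigma_A, R), B = cov_forecast_rmse(Sigma_B, R))
```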
| null | CC BY-SA 2.5 | null | 2011-02-01T23:40:54.520 | 2011-02-01T23:40:54.520 | null | null | 69 | null |
124 | 2 | null | 111 | 40 | null | My Advice to You:
There are several Machine Learning/Artificial Intelligence (ML/AI) branches out there:
[http://www-formal.stanford.edu/jmc/whatisai/node2.html](http://www-formal.stanford.edu/jmc/whatisai/node2.html)
I have only tried genetic programming and some neural networks, and I personally think that the "learning from experience" branch seems to have the most potential. GP/GA and neural nets seem to be the most commonly explored methodologies for the purpose of stock market predictions, but if you do some data mining on [Predict Wall Street](http://www.predictwallstreet.com/), you might be able to do some sentiment analysis too.
Spend some time learning about the various ML/AI techniques, find some market data and try to implement some of those algorithms. Each one will have its strengths and weaknesses, but you may be able to combine the predictions of each algorithm into a composite prediction (similar to what the winners of the NetFlix Prize did).
Some Resources:
Here are some resources that you might want to look into:
- Max Dama's blog: http://www.maxdama.com/search/label/Artificial%20Intelligence
- AI Stock Market Forum: http://www.ai-stockmarketforum.com/
- Weka is a data mining tool with a collection of ML/AI algorithms: http://www.cs.waikato.ac.nz/ml/weka/
The Chatter:
The general consensus amongst traders is that Artificial Intelligence is a voodoo science: you can't make a computer predict stock prices and you're sure to lose your money if you try doing it. Nonetheless, the same people will tell you that just about the only way to make money on the stock market is to build and improve on your own trading strategy and follow it closely (which is not actually a bad idea).
The idea of AI algorithms is not to build [Chip](http://en.wikipedia.org/wiki/Not_Quite_Human_%28film%29) and let him trade for you, but to automate the process of creating strategies. It's a very tedious process and by no means is it easy :).
Minimizing Overfitting:
As we've heard before, a fundamental issue with AI algorithms is [overfitting](http://en.wikipedia.org/wiki/Overfitting) (aka datamining bias): given a set of data, your AI algorithm may find a pattern that is particularly relevant to the [training set](http://en.wikipedia.org/wiki/Training_set), but it may not be relevant in the [test set](http://en.wikipedia.org/wiki/Test_set).
There are several ways to minimize overfitting:
- Use a validation set: it doesn't give feedback to the algorithm, but it allows you to detect when your algorithm is potentially beginning to overfit (i.e. you can stop training if you're overfitting too much).
- Use online machine learning: it largely eliminates the need for back-testing and it is very applicable for algorithms that attempt to make market predictions.
- Ensemble Learning: provides you with a way to take multiple machine learning algorithms and combine their predictions. The assumption is that various algorithms may have overfit the data in some area, but the "correct" combination of their predictions will have better predictive power.
Fun Facts:
Apparently [rats can trade too](http://www.rattraders.com/)!
| null | CC BY-SA 3.0 | null | 2011-02-02T00:15:30.567 | 2013-04-09T07:09:32.420 | 2013-04-09T07:09:32.420 | 5093 | 78 | null |
125 | 2 | null | 66 | 12 | null | Quantitative pair trading (as we are on the quantitative finance forum) is based on cointegration.
Two stocks are said to be cointegrated if they move together, which means that they share the same long-term trend.
Precisely:
There exists a linear relationship between the prices of the 2 stocks such that the resulting combination is mean reverting (for instance, the difference between the 2 is mean reverting). But it can be another relation.
Once you have a mean-reverting basket, you can study this mean reversion (average, speed of reversion to the mean, etc.), and there exist optimal strategies to trade this basket.
Don't forget that past behavior is not always a good indicator of future behavior. A cointegration relationship can evolve/break. Then:
1/ Think also about the exit/stop loss strategies.
2/ Try to make all your coefficients time varying
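Below is a minimal sketch of the cointegration check described above, using simulated prices for the two stocks (with real data, replace `px` and `py` with your own price series; `adf.test` is in the tseries package):
```
library(tseries)   # for adf.test

set.seed(1)
common <- cumsum(rnorm(500))                 # shared long-term trend
px <- 50 + common + rnorm(500)               # price of stock 1
py <- 30 + 0.5 * common + rnorm(500)         # price of stock 2

beta   <- coef(lm(px ~ py))[2]               # linear (cointegrating) relationship
spread <- px - beta * py                     # the mean-reverting basket

adf.test(spread)                             # small p-value -> spread is mean reverting

m <- mean(spread); s <- sd(spread)           # characterize the reversion
c(sell_spread_above = m + 2 * s,             # possible entry bands around the mean
  buy_spread_below  = m - 2 * s)
```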
| null | CC BY-SA 2.5 | null | 2011-02-02T03:29:07.433 | 2011-02-02T03:29:07.433 | null | null | 134 | null |
126 | 2 | null | 104 | 2 | null | My personal opinion after 2 years of interest in MFE:
>
NYU's Courant Institute of Mathematics
UC Berkeley
Columbia University
Princeton University
I can't speak to others beyond these top 4, IMO.
| null | CC BY-SA 2.5 | null | 2011-02-02T05:56:49.757 | 2011-02-02T05:56:49.757 | null | null | null | null |
127 | 2 | null | 81 | 4 | null | In my view, there is never any reason that a pattern or anything else should repeat in the future. Actually, I don't see any difference between pattern recognition and mean reversion/trend following in terms of theoretical justification. One can read Prof. Andrew Lo's "Foundations of Technical Analysis". It tries to give a theoretical background to TA by studying the empirical distribution of stock returns, conditioned or not on the presence of predefined chart patterns. His result is that there is a difference, which justifies the use of TA.
| null | CC BY-SA 2.5 | null | 2011-02-02T08:47:03.850 | 2011-02-02T08:47:03.850 | null | null | 155 | null |
128 | 2 | null | 103 | 36 | null | We bet on a fair coin toss -- heads you get $\$100$, tails you get $\$0$. So the expected value is $\$50$. But it is unlikely that you'll pay $\$50$ to play this game because most people are risk averse. If you were risk neutral, then you WOULD pay $\$50$ for an expected value of $\$50$ for an expected net payoff of $\$0$. A risk neutral player will accept risk and play games with expected net payoffs of zero. Or equivalently, a risk neutral player doesn't need a positive expected net payoff to accept risk.
Let's say that you would pay $\$25$ to play this game. That means if you were risk-neutral, that you'd be assigning probabilities of 1/4 to heads and 3/4 to tails for an expected value of $\$25$ and an expected net payoff of $\$0$.
So if we can convert from the real-world probability measure $(1/2, 1/2)$ to a risk-neutral probability measure $(1/4, 3/4)$, then we can price this asset with a simple expectation.
So if you can find the risk neutral measure for an asset based on a set of outcomes, then you can use this measure to easily price other assets as an expected value.
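To make that last step concrete with the numbers above (ignoring discounting): a hypothetical second asset paying $\$60$ on heads and $\$20$ on tails would be priced under the risk-neutral measure $(1/4, 3/4)$ as $\frac{1}{4}\cdot 60 + \frac{3}{4}\cdot 20 = \$30$, with no further adjustment for risk aversion.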
| null | CC BY-SA 2.5 | null | 2011-02-02T10:40:52.510 | 2011-02-07T03:53:16.317 | 2011-02-07T03:53:16.317 | 69 | 106 | null |
129 | 2 | null | 85 | 4 | null | Maybe you should think about a model other than a diffusion?
There is an article on [wilmott.com](http://www.wilmott.com/detail.cfm?articleID=346) about the Korn-Kreer-Lenssen Model.
| null | CC BY-SA 2.5 | null | 2011-02-02T11:05:40.823 | 2011-02-02T11:05:40.823 | null | null | 183 | null |
130 | 2 | null | 82 | 25 | null | I am a big believer in do-it-yourself (DIY) backtesting and data analysis, that is, obtaining your own data and writing your own code. I use my own simple [Python](http://www.python.org) scripts to process, test, analyze, and backtest, starting with text-input data files (either OHLC bars or tick data). The reason for DIY: in order to have an effective backtest, analysis, etc., you must completely understand all the assumptions, explicit and implicit, that go into the test or analysis. You must understand how that relates to the trading algorithm you implement.
As a quick example, people commonly say you must take off a tick or two in backtest results to account for slippage. However, I have found that for several of my backtest methods, I can actually count on getting better entries, on average, than the backtest. Whatever the case, I can sleep at night without worrying about someone changing something in the way the software works, which would throw off my tests without me knowing about it.
For algorithm execution, I also use a DIY Java API and Java applications built on the [TWS API](http://www.interactivebrokers.com/en/pagemap/pagemap_APISolutions.php). However, the reason for that is just to save a few bucks.
Edit: Not sure I got this point across, but there is an intimate connection between back-test code, historical data, execution code, and real-time data. The relationship is different depending on what you are doing and what you are using, but it always important to understand the relationship.
| null | CC BY-SA 2.5 | null | 2011-02-02T16:46:15.607 | 2011-02-03T17:00:24.420 | 2011-02-03T17:00:24.420 | 47 | 47 | null |
131 | 1 | 132 | null | 22 | 2660 | I am currently trading futures products on some contracts that have low volumes. More accurately, the volumes of working orders in the book are fairly light. I am trying to execute a relatively large order that could move the market. I would like to break the order up to prevent this but I am unsure of the best algorithm to do it. I know there are iceberg orders but is there any other way to do this?
| How to execute a large futures order? | CC BY-SA 3.0 | null | 2011-02-02T18:13:38.567 | 2011-10-18T16:41:17.603 | 2011-10-18T11:55:06.570 | 1106 | 176 | [
"futures",
"order-execution"
]
|
132 | 2 | null | 131 | 7 | null | You are correct that large orders should be algorithmically broken-up. Perhaps the most straightforward algo is the [VWAP](http://en.wikipedia.org/wiki/VWAP) (volume-weighted average price), which most brokers offer. Since a VWAP is easy to compute, the trading details are often transparent to the user.
There are more sophisticated algos, like [Arrival Price](http://www.itg.com/news_events/papers/implementation_shortfall.pdf), though not every broker offers these. Here's a list of common [broker algos](http://www.barx.com/futures/algorithmic/index.html).
In general, refrain from submitting a standard market order for a large number of contracts and you should be alright.
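As an illustration of the slicing idea (not any broker's actual algorithm), here is a minimal sketch that splits a parent order in proportion to an assumed intraday volume profile; the profile and order size are placeholders:
```
# Slice a parent order across the session in proportion to an assumed
# (historical) intraday volume profile.
vwap_slices <- function(total_qty, volume_profile) {
  w <- volume_profile / sum(volume_profile)   # fraction of daily volume per bucket
  round(total_qty * w)                        # child order size per bucket
}

profile <- c(0.25, 0.15, 0.10, 0.10, 0.15, 0.25)   # U-shaped hourly profile
vwap_slices(500, profile)                          # 125  75  50  50  75 125
```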
| null | CC BY-SA 2.5 | null | 2011-02-02T21:04:28.867 | 2011-02-02T21:14:49.927 | 2011-02-02T21:14:49.927 | 35 | 35 | null |
133 | 1 | null | null | 36 | 1238 | Weak schemes, such as [Ninomiya-Victoir](https://wiki.duke.edu/download/attachments/9505461/Ninomiya+and+Victoir+-+2008+-+Weak+approximation+of+stochastic+differential+equa.pdf?version=1) or Ninomiya-Ninomiya, are typically used for discretization of stochastic volatility models such as the [Heston Model](http://en.wikipedia.org/wiki/Heston_model).
Can anyone familiar with [Cubature on Wiener Spaces](http://www.math.hu-berlin.de/~finance/papers/victoir1.pdf) explain why (i.e. with a detailed proof or a reference) these weak schemes can be seen as a Cubature scheme over Wiener Spaces?
| How to show that this weak scheme is a cubature scheme? | CC BY-SA 3.0 | null | 2011-02-02T22:16:43.913 | 2023-03-22T14:45:00.500 | 2011-09-21T14:50:40.987 | 1106 | 92 | [
"option-pricing",
"stochastic-volatility"
]
|
134 | 1 | 135 | null | 28 | 2112 | The sensitivity of the option value $V$ to volatility $\sigma$ (a.k.a. vega) is different from the other greeks. It is a derivative with respect to a parameter and not a variable. To quote from Paul Wilmott On Quantitative Finance (Wiley, 2nd edition, p. 127):
>
It’s not even Greek. Among other things it is an American car, a star (Alpha Lyrae), the real name of Zorro, there are a couple of 16th century Spanish authors called Vega, an Op art painting by Vasarely and a character in the computer game ‘Street Fighter.’ And who could forget Vincent, and his brother?
Question. Does anyone know who has suggested to use the term vega for $\frac{\partial V}{\partial\sigma}$ and why it was named this way?
| Who has introduced the term 'vega' and why? | CC BY-SA 2.5 | null | 2011-02-02T22:35:46.180 | 2019-08-20T20:30:35.937 | 2017-06-26T09:35:45.790 | 2183 | 70 | [
"options",
"greeks",
"history",
"terminology"
]
|
135 | 2 | null | 134 | 16 | null | I dusted off my oldest option theory books and searched the indexes for "vega". The oldest reference I found was in Option Volatility and Pricing Strategies (1st ed.) by Sheldon Natenberg, copyright 1988. When discussing the sensitivity of prices to volatility (p. 132), he says,
>
[T]here is no single commonly accepted
term for this number. It is sometimes
referred to as vega, kappa, omega,
zeta or sigma prime.
Continuing, he adds (p. 134),
>
Because several computer services
popular among traders use the term
vega, we will also use this term to
refer to an option's change in
theoretical value with respect to a
change in volatility.
At the time, a popular option pricing service was the Schwartzatron (yes, that was the name), later purchased by Reuters. I have a dim memory that it used the term "vega". Natenberg may have been referring to that service, maybe some other.
That's the oldest reference I can find. Perhaps someone can find an older one.
(PS - I still don't have a clue why they called it "vega".)
| null | CC BY-SA 2.5 | null | 2011-02-03T01:01:44.647 | 2011-02-03T01:01:44.647 | null | null | 37 | null |
136 | 2 | null | 41 | 9 | null | Technically, yes, the VIX is a measure of implied volatility. But practically speaking, it is a measure of market uncertainty: when market participants are uncertain of the future, they buy options to protect their positions, driving up option premiums and increasing implied volatility.
The broader market hates uncertainty, however, so that same uncertainty drives some participants to sell off their holdings or, at least, stop buying. That drives down market prices, creating a correlation between rising implied volatility and falling prices.
If you want a "more pure" volatility index, perhaps [realized variance](http://en.wikipedia.org/wiki/Realized_variance) could be useful to you. That is a backward-looking measure, of course, but any forward-looking measure will inevitably be tainted by people's emotions and, hence, less pure.
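For reference, realized variance over a window is just the sum of squared log returns; a minimal sketch, assuming `prices` is a numeric vector of closing prices:
```
realized_variance <- function(prices) {
  r <- diff(log(prices))   # log returns
  sum(r^2)                 # realized variance over the window
}
# annualized realized volatility from daily closes:
# sqrt(realized_variance(prices) * 252 / (length(prices) - 1))
```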
| null | CC BY-SA 2.5 | null | 2011-02-03T01:26:22.683 | 2011-02-03T01:26:22.683 | null | null | 37 | null |
137 | 2 | null | 41 | 7 | null | VIX is [mechanically determined](http://www.cboe.com/micro/vix/vixwhite.pdf) from the price of S&P500 call and put options. So if the demands for S&P500 calls/puts rise, then the prices rise, then the implied vol from these options rises. During a down market there's a lot of demand for portfolio protection. If you're diversified, then S&P500 puts are good protection, so the prices for puts rise and the implied vol from puts rises. The vol rise from puts drives the VIX up. In most cases the implied vol from calls probably contributes, too, but it's the puts driving VIX.
| null | CC BY-SA 2.5 | null | 2011-02-03T02:05:59.397 | 2011-02-03T02:05:59.397 | null | null | 106 | null |
138 | 2 | null | 134 | 6 | null | I have no reference, but it's largely phonetic.
Most variables in econ/finance are Greek versions of the English letter you'd want to use: $\omega$ for weight, $\rho$ for rate, $\epsilon$ for error, and so on.
Vega is the partial derivative of price with respect to Volatility. But there's no Greek letter for V, and "vega" sounds kind of Greek.
| null | CC BY-SA 2.5 | null | 2011-02-03T02:14:38.197 | 2011-02-03T02:14:38.197 | null | null | 106 | null |
139 | 1 | 143 | null | 35 | 10450 | One cannot directly buy and sell the VIX index. Theoretically, however, one could approximate the index by purchasing an at-the-money straddle on the SP500, then delta-hedging the straddle.
Does anyone have experience with such a "synthetic" replication of the index? It might be very useful for betting on volatility or for spreads against the VIX futures (a sort of basis trade), but I can see potential problems if the replication is too inaccurate.
(To anticipate your comments: I'm aware of the many VIX-related ETFs; but, no, I would not consider using them. I'm also aware that the VIX calculation uses other strikes beyond the ATM options; this proposed synthetic is admittedly an approximation.)
| Trading a synthetic replication of the VIX index | CC BY-SA 2.5 | null | 2011-02-03T03:43:04.120 | 2014-03-14T08:18:25.690 | null | null | 37 | [
"vix",
"delta-neutral"
]
|
140 | 1 | 191 | null | 36 | 5863 | Are there common procedures prior or posterior backtesting to ensure that a quantitative trading strategy has real predictive power and is not just one of the thing that has worked in the past by pure luck? Surely if we search long enough for working strategies we will end up finding one. Even in a walk forward approach that doesn't tell us anything about the strategy in itself.
Some people talk about white's reality check but there are no consensus in that matter.
| What are the popular methodologies to minimize data snooping? | CC BY-SA 2.5 | null | 2011-02-03T08:11:53.253 | 2011-04-07T02:35:14.510 | 2011-04-07T02:35:14.510 | 356 | 155 | [
"backtesting",
"strategy"
]
|
141 | 1 | 168 | null | 320 | 235636 | What sources of financial and economic data are available online? Which ones are free or cheap? What has your experience been like with these data sources?
| What data sources are available online? | CC BY-SA 3.0 | null | 2011-02-03T13:31:00.310 | 2023-04-21T06:21:08.450 | 2022-05-08T07:37:00.647 | 2299 | 188 | [
"finance",
"economics",
"data-source"
]
|
142 | 2 | null | 140 | 18 | null | Building an effective backtest is not significantly different than building any other kind of predictive model. The goal is to have similar behavior out of sample as you have in sample. As such, there are methodologies developed in statistics and machine learning that can be useful:
- Understand the bias/variance tradeoff. This is covered in many places. For a technical discussion, see lecture 9 of Andrew Ng's machine learning class at Stanford.
- You can certainly use a training and test dataset. But there are also other kinds of approaches that can be used. To list two common options: cross-validation (similar to having segmented data, but can help with parameter selection) and ensemble methods (using multiple models can outperform just one and further reduce the curve-fitting problem).
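For strategies, cross-validation usually means time-ordered (walk-forward) splits so that parameter selection never peeks at future data; a minimal sketch of generating such splits, with arbitrary placeholder window sizes:
```
# Generate time-ordered train/test index windows so that parameters are always
# chosen on past data and scored on the data that follows.
walk_forward_splits <- function(n, train_len = 500, test_len = 100) {
  starts <- seq(1, n - train_len - test_len + 1, by = test_len)
  lapply(starts, function(s) list(
    train = s:(s + train_len - 1),                          # fit / select parameters here
    test  = (s + train_len):(s + train_len + test_len - 1)  # evaluate out of sample here
  ))
}

splits <- walk_forward_splits(2000)
length(splits)           # number of out-of-sample windows
splits[[1]]$test[1:5]    # the first test window starts right after its train window
```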
So a few general recommendations:
- Your guiding principle should be Einstein's razor: 'Everything should be kept as simple as possible, but no simpler.' In other words, fewer degrees of freedom in your model equates to less chance of overfitting. In the statistics world, this can involve eliminating unnecessary parameters through a selection or regularization method.
- Robustness (in every respect) is also critical. Parameters that result in sharp changes in expected prediction error will be more open to the risk of overfitting. Similarly, if the model has no fundamental basis, then it should be applicable to a wide number of assets.
- Lastly, this applies to any kind of model: understand your data, your model, your objectives, assumptions, etc. There have been countless mistakes made over time from people not understanding the meaning of their models, implications, and risks. This includes things like execution assumptions and transaction costs. Make sure that you take everything into account. Lead by being skeptical of your data, constantly asking what can go wrong, or how can the future be different. Is there any survivorship bias in your data, and if so, how can you control for it? Have you introduced any look-ahead bias?
| null | CC BY-SA 2.5 | null | 2011-02-03T14:41:05.943 | 2011-02-03T14:46:06.013 | 2011-02-03T14:46:06.013 | 17 | 17 | null |
143 | 2 | null | 139 | 19 | null | A synthetic model for the VIX would be quite useful. I just mention this since it has been covered elsewhere in the past, although I don't think that it's a real solution to your problem (for a number of reasons).
Several blogs posted on the "William's VIX Fix" (WVF) in the past: [marketsci](http://marketsci.wordpress.com/2010/09/04/williams%E2%80%99-vix-fix/), [trading the odds](http://www.tradingtheodds.com/2010/09/williams%E2%80%99-vix-fix/), [mindmoneymarkets](http://davesbrain.blogs.com/mindmoneymarkets/2010/08/ftse-returns-volatility.html). The WVF is intended to be a synthetic VIX calculation, derived by [Larry Williams](http://en.wikipedia.org/wiki/Larry_R._Williams) (see [the original article here](http://www.ireallytrade.com/newsletters/VIXFix.pdf)), and is represented by the following formula:
$wvf = \frac{Highest(Close, 22) - Low}{Highest(Close, 22)} * 100$
In R, this can be represented as:
```
wvf <- function(x, n=22) {
  # requires quantmod (Cl, Lo), xts (as.xts) and zoo (rollmax)
  # highest close over the past n bars (rolling max, right-aligned)
  hc <- as.xts(rollmax(as.zoo(Cl(x)), k=n, align="right"))
  # percentage pullback of the current low from that highest close
  100*(hc-Lo(x))/hc
}
```
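A usage sketch (this assumes the quantmod package, and that Yahoo still serves the ^GSPC and ^VIX symbols via `getSymbols`):
```
library(quantmod)   # getSymbols(), Cl(), Lo(); loads xts/zoo for rollmax()

getSymbols(c("^GSPC", "^VIX"), from = "1995-01-01", to = "2010-12-31")
synthetic <- wvf(GSPC)                      # the function defined above
m <- merge(synthetic, Cl(VIX))              # align the two series by date
cor(m[, 1], m[, 2], use = "complete.obs")   # correlation of the WVF with the VIX close
```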
This has had a reasonable correlation to the VIX: from 1995-2010 it was +0.75:

| null | CC BY-SA 3.0 | null | 2011-02-03T14:59:55.217 | 2012-10-09T01:18:28.140 | 2012-10-09T01:18:28.140 | 2257 | 17 | null |
144 | 2 | null | 140 | 7 | null | The output of your model will be a realization of your assumptions. Shane's given you a great answer. Besides doing out of sample testing (i.e., calibrating on period X then testing in period Y only using info available at the time of each trade), I would add that you should test it in sub-periods. If you have a big chunk of data, break it up and see how it works on each subset of the data.
| null | CC BY-SA 2.5 | null | 2011-02-03T15:01:54.137 | 2011-02-03T15:01:54.137 | null | null | 106 | null |
145 | 2 | null | 82 | 0 | null | I have found NinjaTrader to be a very powerful free platform, but as I needed to add support for options, I ended developing and backtesting strategies. I haven't gotten to the point of doing to automated trading yet, but I have read it is possible.
Brian
| null | CC BY-SA 2.5 | null | 2011-02-03T15:27:43.917 | 2011-02-03T15:27:43.917 | null | null | 123 | null |
146 | 2 | null | 140 | -2 | null | Thanks for the answer as it tackles a lot of backtesting flaws, model parsimony, overfitting, survivorship bias, look ahead...
But actually one can look at thousands of technical trading rules and other more sofisticated strategies, and maybe find the few ones that will answer all these problems. Nevetheless we would still be left with data snooping ie we have used our data set untill we find a satisfactory result.
| null | CC BY-SA 2.5 | null | 2011-02-03T15:44:30.997 | 2011-02-03T15:44:30.997 | null | null | 155 | null |
147 | 1 | null | null | 47 | 3515 | Prompted in part by [this question on data snooping](https://quant.stackexchange.com/questions/140/what-are-the-popular-methodologies-to-minimize-data-snooping), I would be interested to know:
What are the key risks that should be considered when developing a quantitative strategy based on: (a) historical data or (b) simulated data?
| What are the key risks to the quantitative strategy development process? | CC BY-SA 2.5 | null | 2011-02-03T16:28:51.397 | 2014-06-06T09:45:59.160 | 2017-04-13T12:46:23.000 | -1 | 17 | [
"backtesting",
"risk-management"
]
|
148 | 1 | 153 | null | 30 | 8093 | I work with practical, day-to-day trading: just making money. One of my small clients recently hired a smart, new MFE. We discussed potential trading strategies for a long time. Finally, he expressed surprise that I never mentioned (much less used) stochastic calculus, which he spent many long hours studying in his MFE program. I use the products of stochastic calculus (e.g., the Black-Scholes equation) but not the calculus itself.
Now I am wondering, does stochastic calculus play a role in day-to-day trading strategies? Am I under-utilizing a potentially valuable tool?
If this client was a Wall Street investment bank that was making markets in complicated derivatives, I'm sure their research department would use stochastic calculus for modeling. But they're not, so I'm not sure how we would use stochastic calculus.
(Full disclosure: I have Masters degrees but not a PhD. I'm an applied mathematician, not a theoretician.)
| What is the role of stochastic calculus in day-to-day trading? | CC BY-SA 2.5 | null | 2011-02-03T16:37:44.127 | 2011-02-03T19:00:48.793 | null | null | 37 | [
"differential-equations",
"stochastic-calculus"
]
|
149 | 2 | null | 141 | 21 | null | I'm only aware about 3 free data sources of which 1 is still working in June 2018:
- [GAIN Capital](http://ratedata.gaincapital.com/). It contains infomation about FX rates only
Below ones are not available anymore:
- EuroNext. Bonds and Equities are available. "Search by Criteria" -> select instrument -> "Data downloads".
- RBS Databank. Interest rates, FX rate, commodities and CPI
| null | CC BY-SA 4.0 | null | 2011-02-03T16:57:52.177 | 2018-06-18T23:40:24.903 | 2018-06-18T23:40:24.903 | 31793 | 15 | null |
150 | 2 | null | 147 | 43 | null | Here are a few risks when using historical data:
- Data fidelity: Is your data an accurate reflection of history? For stocks, should you use actual closing prices or adjusted prices? For futures, how should you construct a realistic, continuous contract?
- Simulation realism: Are you making realistic assumptions about trade execution? Are you naively assuming, for example, that you can perfectly execute at the day's closing price? Did you remember frictional costs?
- Sampling variability: Is your historical sample representative of a wide range of market conditions, or did you (happen to) pick a favorable dataset?
- Curve fitting: Did you fiddle with too many parameters for too long, eventually finding a model that worked great last year but won't make a penny in the future?
- Optimism: Your actual profits will likely be only a fraction of your simulated profits. Are you assuming otherwise?
- Model risk: Even if your model back-tests well, what is its half-life? How long will it be tradable? Very few ideas work forever.
This is not a theoretical list. I've made all these mistakes personally.
| null | CC BY-SA 2.5 | null | 2011-02-03T17:05:09.967 | 2011-02-03T17:05:09.967 | null | null | 37 | null |
151 | 2 | null | 147 | 12 | null | Accuracy
The trader must make sure the data is not only right, but that the timestamps are usable. That's why a good data warehouse will be bitemporal or point-in-time. Thus, we know not only when the item was announced, but when we received it and could act on it.
Gaps
An aggressive safety check on incoming data might inadvertently exclude correct data. For example, a tick-capture that compares today's opening price against yesterday's closing price might exclude legitimate bankruptcy notices.
Retro-fitting
The desk's manager must guard against data mining and other techniques that can cause look-ahead bias. I worked for one hedge fund that required traders to submit their models weeks before production so they could be backtested again without the benefit of hindsight.
| null | CC BY-SA 2.5 | null | 2011-02-03T17:06:25.547 | 2011-02-03T17:06:25.547 | null | null | 35 | null |
152 | 1 | null | null | 11 | 540 | When faced with a black box trading strategy with extensive historical data available, how would one select/construct a representative benchmark?
As a trivial example, when a strategy historically consists only of long trades on tech stock, the Nasdaq Composite index might be a suitable benchmark.
What about benchmarks for strategies that do not exhibit such clear tendencies in the types of trades they perform?
I can imagine constructing a composite benchmark would be appropriate. Which guidelines/ frameworks/methodologies are applicable?
| How to select/construct benchmarks for black-box trading strategies? | CC BY-SA 2.5 | null | 2011-02-03T18:06:20.490 | 2011-02-03T19:10:34.633 | null | null | 53 | [
"trading",
"strategy",
"benchmark"
]
|
153 | 2 | null | 148 | 24 | null | This is pure speculation:
MFE's are really tailored toward valuation models (how can we develop a model to price x swap, etc.). You don't entirely have to worry about those details in order to trade them: you're just quoted a price based on these models. But if you go in-house at a bank and are working as a product quant (structured products, etc.), then you really need to worry about these things.
Alternatively, it could be relevant to a trading strategy if you think that the current model is mispricing things and there's an arbitrage opportunity. This is why banks have put so much effort into having good models, and jump at opportunities for very minor improvements. This kind of behavior is documented in Derman's ["My Life as a Quant"](http://rads.stackoverflow.com/amzn/click/0471394203).
Short of that, if you are simply trading an asset in order to gain a specific kind of exposure, stochastic calculus is not really used very much.
As a final note, I would point to the draft of Steven Shreve's ["Stochastic Calculus and Finance"](http://www.stat.berkeley.edu/users/evans/shreve.pdf) as a free reference, if you're looking for one.
| null | CC BY-SA 2.5 | null | 2011-02-03T18:50:12.893 | 2011-02-03T19:00:48.793 | 2011-02-03T19:00:48.793 | 17 | 17 | null |
154 | 2 | null | 152 | 4 | null | Clearly, it's much more difficult than for a white-box strategy.
But you still have some information:
- What is the return profile?
- What is the average holding period?
- Does it go long/short?
- What assets are traded?
Now you can choose a benchmark of an index that matches these criteria as closely as possible. If an appropriate benchmark doesn't exist, then you can create one: produce a very simple model that characterizes the return profile and run it as your own index (just for benchmarking purposes).
| null | CC BY-SA 2.5 | null | 2011-02-03T19:10:34.633 | 2011-02-03T19:10:34.633 | null | null | 17 | null |
155 | 2 | null | 60 | 8 | null | Your question's title suggests the market prices are mean reverting. I strongly suggest verifying that assumption via one of the usual tests, such as the Augmented Dickey-Fuller test (implemented in the tseries package of R by the adf.test function, and in other R packages, too).
If the market is truly mean reverting, a possible strategy is
- Detrend the data.
- Monitor the market for an extreme high or extreme low, based on its historical range.
- Buy or sell-short the market at those extremes.
- Cover at a logical point: at the mean or at the half-way point, for example.
- Repeat.
Detrending is useful to eliminate the long-term trend (in stocks) or eliminate the effects of carry (in futures). "Extreme highs" and "extreme lows" must really be extreme: I look for prices in the upper 90 to 95th percentile or lower 10th to 5th percentile, based on a few years of history.
Buying or selling-short at the extremes is fine ... unless the market decides to exceed its historical limits, in which case you'll experience drawdown, potentially large. I use a momentum filter and that helps but it's not perfect.
My experience is mostly in trading mean-reverting spreads. Your mileage may vary.
(PS - I found no connection between the RSI indicator and mean reversion. I don't use it.)
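A minimal sketch of steps 1-4 above on a simulated mean-reverting series (replace `price` with your own detrended market or spread data; `adf.test` is in the tseries package):
```
library(tseries)   # for adf.test

set.seed(1)
price <- 100 + arima.sim(list(ar = 0.95), n = 1000)   # simulated mean-reverting prices

adf.test(price)                              # 1. verify mean reversion first

trend <- fitted(lm(price ~ seq_along(price)))
detr  <- as.numeric(price) - trend           # 2. detrend (remove drift / carry)

hi_level <- quantile(detr, 0.95)             # 3. "extreme" thresholds from history
lo_level <- quantile(detr, 0.05)

last <- tail(detr, 1)
list(sell_short = last > hi_level,           # 4. fade the upper extreme...
     buy        = last < lo_level,           # ...or the lower extreme
     cover_near = mean(detr))                # 5. ...and cover near the mean
```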
| null | CC BY-SA 2.5 | null | 2011-02-03T19:47:03.307 | 2011-02-03T19:47:03.307 | null | null | 37 | null |
156 | 1 | null | null | 111 | 15781 | There are a few things that form the common canon of education in (quantitative) finance, yet everybody knows they are not exactly true, useful, well-behaved, or empirically supported.
So here is the question: which is the single worst idea still actively propagated?
Please make it one suggestion per post.
| What concepts are the most dangerous ones in quantitative finance work? | CC BY-SA 2.5 | null | 2011-02-03T21:16:21.317 | 2022-03-04T06:37:05.757 | 2011-02-03T21:39:50.437 | 69 | 69 | [
"theory",
"research"
]
|
157 | 2 | null | 156 | 55 | null | >
CAPM as an allocation strategy.
Market efficiency was predicated on several fallacious ideas, including:
- Everyone can borrow (and lend) at the same rate, indefinitely (i.e. no matter their leverage)
- All information is known instantaneously by all market participants.
- There are no transaction costs.
- Rational behavior.
One conclusion is that the higher the beta, the higher the return, but this has clearly been shown to be violated.
While it is useful for segmenting $\alpha$ and $\beta$ (and for portfolio/strategy evaluation), it simply isn't entirely reliable as a portfolio allocation strategy.
As Fama/French concluded in ["The Capital Asset Pricing Model: Theory and Evidence"](http://www1.american.edu/academic.depts/ksb/finance_realestate/mrobe/Library/capm_Fama_French_JEP04.pdf) (2004):
>
The CAPM, like Markowitz's (1952,
1959) portfolio model on which it is
built, is nevertheless a theoretical
tour de force. We continue to teach
the CAPM as an introduction to the
fundamental concepts of portfolio
theory and asset pricing, to be built
on by more complicated models like
Merton's (1973) ICAPM. But we also
warn students that despite its
seductive simplicity, the CAPM's
empirical problems probably invalidate
its use in applications.
Note that CAPM adds many assumptions to Markowitz's fundamental model to built itself. Therein lies its fallacy because as said above, those are difficult assumptions. Markowitz' model itself is fairly general in that you can inject 'views' of higher returns or greater volatility etc into the basic framework (or not!) and still be quite rooted in reality for mid-long term horizons.
| null | CC BY-SA 3.0 | null | 2011-02-03T21:21:36.933 | 2013-01-07T18:21:44.930 | 2013-01-07T18:21:44.930 | 3509 | 17 | null |
158 | 2 | null | 156 | 46 | null | Everybody's favourite whipping boy: Identically and independently distributed returns, i.e. draws from $N(\mu, \sigma)$ to describe returns.
We could of course split this is arguing
- identically distributed (and mixture modeling as well as robust methods help)
- independently distributed (and everybody agrees that there is some serial correlation, though a good formal model is hard to come by)
- the Normal assumption (and everybody agrees on fatter tails) yet $N(\mu, \sigma)$ makes things so temptingly tractable
| null | CC BY-SA 3.0 | null | 2011-02-03T21:22:26.737 | 2015-01-09T16:23:15.070 | 2015-01-09T16:23:15.070 | 6947 | 69 | null |
159 | 2 | null | 35 | 5 | null | By William Bernstein, [source](http://www.efficientfrontier.com/ef/499/scg.htm):
>
In June of 1992 academicians Eugene
Fama and Kenneth French ("F/F") rocked
the investing world with a study
published in the Journal of Finance,
innocuously entitled "The
Cross-Section of Expected Stock
Returns." The piece is the cognitive
equivalent of an enormous hunk of
marzipan cake which sits in your
freezer for months—there’s no way
you’ll get through it in one whack,
and is properly consumed only in small
sittings. In fact, unless you’ve
gotten considerably beyond Stat 101,
it’s probably best avoided. So, here’s
the short course:
"Beta," the measure of market exposure of a given stock or
portfolio, which was previously
thought to be the be-all/end-all
measurement of stock risk/return, is
of only limited use. F/F convincingly
showed that this parameter did not
predict the returns of all equity
portfolios, although it is still
useful in predicting the return of
stock/bond and stock/cash mixes.
so it depends on the type of portfolio, more info in the paper in italics.
| null | CC BY-SA 2.5 | null | 2011-02-03T22:42:01.747 | 2011-02-03T22:47:23.643 | 2011-02-03T22:47:23.643 | 189 | 189 | null |
160 | 2 | null | 141 | 16 | null | >
-- (historical) stock prices --
What do you mean by that? Nominal, real, corrected for monetary-base changes, corrected for other things? What is your goal?
>
I have been able to download (historical) stock prices via yahoo and google.
Alas, looking at historical data from Google/Yahoo's screeners can be highly misleading, and making conclusions based on it very dangerous. Please note that you cannot always trust the data: sometimes the figures are nominal or real, and sometimes you won't know which. Google/Yahoo are only third parties providing you the historical data.
Commercial Data
- CSI Data: it claims to be the provider to Google, Yahoo, Microsoft and other resellers
- Yahoo's providers here and notice the small writings at the bottom here
Educational and Research Data
- Shiller Data about stock market data
- the huge data collection by Ibbotson, book, inflation, interest rates and such things which you must take into account to do any serious research
- Yale databases (massive work done) here
- Intelligent Asset Allocator -book, by William Bernstein, in the very end has a summary of very good data sources
| null | CC BY-SA 2.5 | null | 2011-02-03T23:00:56.373 | 2011-02-03T23:23:27.313 | 2011-02-03T23:23:27.313 | 189 | 189 | null |
161 | 2 | null | 156 | 20 | null | >
Perfect delta hedging
In my opinion delta hedging is also a dangerous one, but it definitely should be taught nonetheless. In the BS framework, it is an allegedly perfect way of covering the risk incurred by buying (or selling) a derivative product (such as calls and puts in the simplest cases). Nevertheless, due to several real-world facts, this doesn't work that well in practice:
- discrete time rebalancing of porfolio
- constant volatility so much things have been said on this I won't comment any further
- possibility of market jumps (not little ones) this affects deeply your
daily P&L
- transction costs affects the cost of the rebalancing portfolio in a way
that is not negligeable
- liquidity, if you are holding big positions in derivatives, your delta
hedging will impact the price dynamics
- etc...
The main advantage of BS delta hedging is that it nevertheless presents the big principles of hedging; the rest is a matter of sophistication and the derivatives trader's vista (or luck).
| null | CC BY-SA 4.0 | null | 2011-02-03T23:05:51.547 | 2022-03-01T15:47:38.653 | 2022-03-01T15:47:38.653 | 39672 | 92 | null |
162 | 1 | 1866 | null | 17 | 1282 | I'd like to get a feel for the operating parameters of official market makers. I'm looking more for discerning characteristics, rather than exact numbers or an exhaustive list of each MM.
Examples: what is their working capital? How much leverage do they use intraday? What returns do they achieve? What is their contribution to trade volume? How do these numbers compare to other market participants?
| Operating parameters of market makers? | CC BY-SA 2.5 | null | 2011-02-03T23:23:39.660 | 2015-10-22T12:51:41.340 | 2011-09-08T20:46:30.100 | 1355 | 53 | [
"models",
"market-making"
]
|
163 | 2 | null | 156 | 8 | null | >
To trust yourself.
Concepts must be based on logical ideas and proper premises. It is easy to forget a premise and then misuse a model, such as using CAPM as an asset-allocation method as suggested by [Shane](https://quant.stackexchange.com/questions/156/what-concepts-are-the-most-dangerous-ones-in-quantitative-finance-work/157#157), so recheck things. Do not make things personal. Do not abuse models with too complicated schemes (you may violate some basic assumption) -- and even then, don't pretend `to know`; rather, aim `to engineer`.
| null | CC BY-SA 3.0 | null | 2011-02-03T23:41:09.677 | 2011-05-31T22:04:59.340 | 2017-04-13T12:46:22.823 | -1 | 189 | null |
164 | 2 | null | 162 | 8 | null | Market makers covers a broad range of shops, from large investment banks to small proprietary trading firms. So working capital can be in the millions or the billions, and leverage can be anywhere from 2x to 30x. This is no different from buy-side firms, which includes a variety of both asset managers and retail investors. There is tons of diversity among market makers.
As for contribution to volume, a liquidity provider merely quotes a price; the trade doesn't happen until a buy-side counter-party decides to accept. For example, given a trade of 100 shares between a market maker and an asset manager, we wouldn't say that the market maker was responsible for 50 shares of the trade. We would just say that 100 shares were traded.
With this in mind, we can claim that the market maker is responsible for all trading volume, or we can claim that the market maker is responsible for no trading volume (and that the buy-side firm is responsible for all volume). Either seems plausible.
| null | CC BY-SA 2.5 | null | 2011-02-04T00:19:03.267 | 2011-02-04T00:19:03.267 | null | null | 35 | null |
165 | 2 | null | 156 | 8 | null | That value stocks are necessarily riskier than growth; that there has to be a hidden risk factor that we haven't yet found. The [Lakonishok, Shleifer, and Vishny](http://www.economics.harvard.edu/faculty/shleifer/files/ContrarianInvestment.pdf) abstract says it better than I can:
>
For many years, stock market analysts have argued that value strategies outperform the market. These value strategies call for buying stocks that have low prices relative to earnings, dividends, book assets, or other measures of fundamental value. While there is some agreement that value strategies produce higher returns, the interpretation of why they do so is more controversial. This paper provides evidence that value strategies yield higher returns because these strategies exploit the mistakes of the typical investor and not because these strategies are fundamentally riskier.
| null | CC BY-SA 2.5 | null | 2011-02-04T00:30:05.933 | 2011-02-04T00:30:05.933 | null | null | 106 | null |
166 | 2 | null | 82 | 1 | null | [Marketcetera.](http://www.marketcetera.com/site/products/overview) I haven't tried it yet, so I can't personally comment on its quality. But the website looks great, plus it's an open source. You can download it for free and make some modifications. The platform supports Java, Ruby, and Python.
| null | CC BY-SA 2.5 | null | 2011-02-04T01:17:03.160 | 2011-02-04T01:17:03.160 | null | null | null | null |
167 | 2 | null | 140 | 10 | null | I have seen Hansen's SPA ('Superior Predictive Ability') test and stepwise variants used for this purpose. Hansen's test is a Studentized version of White's Reality Check. The stepwise variants allow one to accept or reject the null of no predictive ability on a subset of some tested strategies while maintaining a familywise error rate.
In his book, 'Evidence-Based Technical Analysis,' David Aronson discusses the overfit bias very well, although I believe his techniques for minimizing the bias may only apply to technical strategies, because they rely on Monte Carlo simulations.
References
- P. R. Hansen, 'A Test for Superior Predictive Ability,' Journal of Business & Economic Statistics, vol. 23, no. 4, 2005, http://pubs.amstat.org/doi/abs/10.1198/073500105000000063
- SPA google group
- Hsu, Po-Hsuan, Hsu, Yu-Chin and Kuan, Chung-Ming, 'Testing the Predictive Ability of Technical Analysis Using a New Stepwise Test Without Data Snooping Bias,' 2008, http://ssrn.com/abstract=1087044
- Hsu, Po-Hsuan and Hsu, Yu-Chin, 'A Stepwise SPA Test for Data Snooping and its Application on Fund Performance Evaluation,' 2006, http://ssrn.com/abstract=885364
- David Aronson's Evidence-Based TA.
| null | CC BY-SA 2.5 | null | 2011-02-04T03:45:19.377 | 2011-02-04T03:45:19.377 | null | null | 108 | null |
168 | 2 | null | 141 | 270 | null | This post is Quant Stack Exchange's master list of data sources.
Please append your links to other data sources to the list below. Note that a source listed under a certain topic may provide extensive data on other types of instruments as well.
## Economic Data
See [What are the most useful sources of economics data?](https://stats.stackexchange.com/questions/27237/what-are-the-most-useful-sources-of-economics-data) on Cross Validated SE.
World
- https://macrovar.com/macrovar-database/ includes free data for 5,000+ Financial and Macroeconomic Indicators of the 35 largest economies of the world. It includes macroeconomic indicators and financial markets covering equity indices, fixed income, foreign exchange, credit default swaps, futures and commodities. It also provides free financial and economic research.
- OECD.StatExtracts includes data and metadata for OECD countries and selected non-member economies.
- http://www.assetmacro.com/ includes data for 20,000+ Macroeconomic and Financial Indicators of 150 countries
- https://db.nomics.world is an open platform with more than 16,000 datasets among 50+ providers.
United Kingdom
- http://www.statistics.gov.uk/
United States
- Federal Reserve Economic Data - FRED (includes URL-based API)
- http://www.census.gov/
- http://www.bls.gov/
- http://www.ssa.gov/
- http://www.treasury.gov/
- http://www.sec.gov/
- http://www.economagic.com/
- http://www.forecasts.org/
---
## Foreign Exchange
- 1Forge Realtime FX Quotes
- OANDA Historical Exchange Rates
- Dukascopy - Historical FX prices; XML and CSV. There is a non-affiliated downloader called tickstory.
- ForexForums Historical Data - Historical FX downloads via Amazon S3
- FXCM provides an open repository of tick data starting from January 4th 2015, with a download script on github.
- GAIN Capital - Historical FX rates (in ZIP format)
- TrueFX - Historical FX rates (in ZIP/CSV format). A download helper script is available on GitHub. TrueFX.com asks for free registration. Same files are linked from Pepperstone, no registration needed.
- TraderMade - Real Time Forex Data
- RTFXD - Real Time FX Data: Delivered via ssh. Very low pricing.
- Olsen Data / Olsen Financial Technologies: Historical FX data can be ordered online in custom format. Download link sent in 2 business days. Real time data service. Expensive but very high quality.
- Zorro: 1Minute bars from 2010 in t6 format (OHLC and tick volume)
- http://polygon.io
- Norgate Data: Historical FX data covering 74 currency and 14 bullion crosses with daily updates.
- PortaraCQG - Historical Forex Data Supplies FX 1 min, tick and level 1 from 1987. Updates and data tools included.
- Databento Real-time and historical data direct from colocation facilities. Integrates with Python, C++ and raw TCP. Includes order book, tick data, and subsampled OHLCV aggregates at 1s, 1min, 1h, daily granularity.
## Equity and Equity Indices
- http://finance.yahoo.com/
- http://www.iasg.com/managed-futures/market-quotes
- http://kumo.swcp.com/stocks/
- Kenneth French Data Library
- http://unicorn.us.com/advdec/
- http://siblisresearch.com/
- usfundamentals.com - Quarterly and annual financial data for US companies for the five years up until 2016
- http://simfin.com/
- Olsen Data / Olsen Financial Technologies
- https://www.tiingo.com/welcome - Equity, ETF, and Mutual Fund price and fundamental data
- http://polygon.io
- Norgate Data - Deep daily history of US, Australian and Canadian equities and indices, survivorship bias-free, and daily updates.
- PortaraCQG - Historical Intraday Data - Supplies global indices 1 min, tick and level 1 from 1987. Updates and data tools included.
- Databento - Real-time and historical data direct from colocation facilities. Integrates with Python, C++ and raw TCP. Includes order book, tick data, and subsampled OHLCV aggregates at 1s, 1min, 1h, daily granularity.
- EquityRT - Historic stock trading, index and detailed fundamental equity (historic and forecast) data and financial analysis along with industry-specific financial analysis and comparisons, institutional shareholding data and news reports provided through Excel APIs or a web browser. Service includes foreign exchange, commodity and cryptocurrency prices in addition to some fixed income, and macroeconomic data. Service was geared towards emerging countries' markets but recently expanded to include developed country markets. Not free but quite reasonably priced.
- Investing.com - Trading and some fundamental data for equities in addition to pricing data for commodities, futures, foreign exchange, fixed income and cryptocurrencies through a web browser. Only pricing data can be downloaded for free. Historic financials and forecasts and financial analysis available for companies with the pro subscription through a browser along with downloading and charting, no APIs.
- algoseek - Non-free provider of intraday and other data through various types of APIs and platforms for equities, ETFs, options, cash forex, futures, and cryptocurrencies mainly for US markets.
- tidyquant - Provides APIs for R to get financial data (historic stock prices, financial statements, corporate action as well as historic economic and FX data) in a “tidy” data frame format from web-based sources.
---
## Fixed Income
- FRB: H.15 Selected Interest Rates
- Barclays Capital Live (institutional clients only)
- CDS spreads
- PortaraCQG - Institutional Supplier - Supplies Tullet Prebon Sovereign Debt 1 min, tick and level 1 from 1987. Updates and data tools included.
- Credit Rating Agency Ratings History Data - Corporate rating histories from multiple agencies converted to CSV format.
- Data on historical, cross-country nominal yield curves - A Q&A on fixed income yield data providing links to official and commercial sources for many different countries.
---
## Options and Implied Volatility
- http://www.ivolatility.com/
- http://www.optionmetrics.com/
- http://www.historicaloptiondata.com/
- https://www.commodityvol.com/
- https://datamine.cmegroup.com/
- Olsen Data / Olsen Financial Technologies
---
## Futures
- http://www.simiansavants.com/cmedata.shtml
- http://www.cmegroup.com/market-data/index.html
- http://www.quandl.com
- Olsen Data / Olsen Financial Technologies
- Norgate Data - Deep daily history of 100 futures markets from 11 worldwide exchanges, and daily updates.
- PortaraCQG - Historical Futures Data Supplies Global Futures. Daily from 1899, 1 min, tick and level 1 from 1987. Updates and data tools included.
- Databento - Real-time and historical data direct from colocation facilities. Integrates with Python, C++ and raw TCP. Includes order book, tick data, and subsampled OHLCV aggregates at 1s, 1min, 1h, daily granularity.
---
## Commodities
- LIFFE Commodity Derivatives - 15 min delay; free registration
- PortaraCQG - Historical Commodities Data Global Commodities including LME, Asian and Russian commodity exchanges.
---
## Multiple Asset Classes and Miscellaneous
- http://www.eoddata.com/
- Robert Shiller Online Data
- S#.Data is a free application for downloading and storing market data from various sources
---
## Specific Exchanges
- Spanish Futures & Options (MEFF)
- CBOE Futures Exchange (CFE Vix Futures)
| null | CC BY-SA 4.0 | null | 2011-02-04T04:10:19.353 | 2023-04-21T06:21:08.450 | 2023-04-21T06:21:08.450 | 47484 | 43 | null |
169 | 2 | null | 156 | 82 | null | >
Correlation
Correlations are notoriously unstable in financial time series - yet correlation is one of the most used concepts in quant finance, because there is no good theoretical substitute for it. You could say the theory doesn't really work with it, yet it doesn't work without it either.
For example, the concept is used for diversification across supposedly uncorrelated assets or for the modelling of credit default swaps (correlation of defaults). Unfortunately, when you need it most (e.g. in a crash) it just vanishes. This is one of the reasons the financial crisis started: quants modeled the CDSs under certain assumptions concerning default correlations - but when a regime shift happens this no longer works.
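A quick way to see this instability is a rolling correlation; below is a minimal sketch with simulated returns (substitute real asset returns for `x` and `y`).
```
set.seed(42)
n <- 1500
x <- rnorm(n)
y <- 0.3 * x + sqrt(1 - 0.3^2) * rnorm(n)    # population correlation fixed at 0.3
win <- 60
roll_cor <- sapply(win:n, function(i) cor(x[(i - win + 1):i], y[(i - win + 1):i]))
plot(roll_cor, type = "l", xlab = "time", ylab = "60-day rolling correlation")
abline(h = 0.3, lty = 2)
# Even with a constant "true" correlation the rolling estimate swings widely;
# with real data, regime shifts make the swings larger still.
```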
Edit
See my follow-up question: [What is the most stable, non-trivial dependence structure in finance?](https://quant.stackexchange.com/questions/27674/what-is-the-most-stable-non-trivial-dependence-structure-in-finance)
| null | CC BY-SA 3.0 | null | 2011-02-04T07:12:30.120 | 2016-06-17T16:47:31.373 | 2017-04-13T12:46:22.953 | -1 | 12 | null |
170 | 2 | null | 147 | 9 | null | I think the biggest risk is trusting your model too much.
I would summarize modelling like that: Model for the best but risk-manage for the worst!
As an example, for a portfolio of derivatives that could mean: use Black-Scholes for option pricing (the model), but manage your risk by assuming a power-law distribution and varying its alpha to see the effect on your portfolio (simulation for risk management).
I learned that bit in a joint seminar by Wilmott and Taleb - good practical stuff.
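As a toy version of that kind of stress (not the seminar's actual exercise), one can compare a tail quantile under the normal model with the same quantile under a heavy-tailed alternative while varying the tail exponent; here a rescaled Student-t stands in for a power-law tail, and the volatility number is a placeholder.
```
sigma <- 0.01                      # daily volatility implied by the pricing model
p <- 0.001                         # a 1-in-1000-day loss
q_normal <- qnorm(p, sd = sigma)
# A Student-t with nu degrees of freedom has power-law tails with exponent nu;
# rescale it to the same standard deviation as the normal model and vary nu.
for (nu in c(5, 4, 3)) {
  q_t <- sigma / sqrt(nu / (nu - 2)) * qt(p, df = nu)
  cat(sprintf("nu = %d: tail loss %.2f%% vs normal %.2f%%\n", nu, 100 * q_t, 100 * q_normal))
}
```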
| null | CC BY-SA 2.5 | null | 2011-02-04T07:26:50.377 | 2011-02-04T07:26:50.377 | null | null | 12 | null |
172 | 2 | null | 156 | 32 | null | >
Value at Risk
It is a great idea to have a systematic indicator of risk exposure, but problems arise when
- it's used as the main or only indicator without looking at other risks (e.g. credit risk or liquidity risk). Emanuel Derman wrote about it recently in his blog:
>
But they (GS) did it not with a new formula or a single rule. They did it by being smart rather than doctrinaire. They were eclectic; they had limits on all sorts of exposures -- on VaR, on the fraction of a portfolio that hadn't been modified in a year ... There isn't a formula for avoiding future losses because there isn't one cause of future losses.
- VaR becomes the purpose of risk management - rather than the situations when losses exceed VaR
“An airbag that works all the time, except when you have a car accident.” (c) D. Einhorn
- It's focused on manageable risk in normal situations, with the assumption that tomorrow will be like today and yesterday, and without taking rare events into account.
- It becomes another parameter (like profit) that could be gamed (the same profit but with low risk).
And there are two extremely good articles about VaR, among the best I've ever read:
- Risk Management - What Led to financial meltdown. NY Times
- Derivatives Strategy - Roundtable on the Limits of VaR
| null | CC BY-SA 2.5 | null | 2011-02-04T08:59:27.813 | 2011-02-04T08:59:27.813 | null | null | 15 | null |
173 | 2 | null | 79 | 8 | null | MCMC can be used for Bayesian inference of other models with hidden variables. Gibbs sampling, for example, is used in Hidden Markov Models. Here is a [paper](http://ba.stat.cmu.edu/journal/2008/vol03/issue04/ryden.pdf) that discusses the differences between MCMC and the more classical approach using the EM algorithm.
The question is: Are HMMs a useful model in finance? Some academics argue that they have predictive power.
One can look at: [Stock Market Forecasting Using Hidden Markov Model: A New Approach](http://doi.ieeecomputersociety.org/10.1109/ISDA.2005.85). I'm not convinced by the approach they use.
On the other hand, HMM can be used to build volatility filters for trend following strategies.
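A rough sketch of such a filter, assuming the depmixS4 package (interface details may vary by version) and a placeholder return series: fit a two-state Gaussian HMM and switch the trend signal off when the high-volatility state is active.
```
library(depmixS4)
set.seed(7)
ret <- c(rnorm(500, 0, 0.005), rnorm(250, 0, 0.02))   # placeholder: calm then turbulent returns
df <- data.frame(ret = ret)

mod <- depmix(ret ~ 1, data = df, nstates = 2, family = gaussian())
fm  <- fit(mod)
post <- posterior(fm)                                  # inferred state per observation

# Label the high-volatility state and allow trading only outside of it
high_vol <- which.max(tapply(ret, post$state, sd))
trade_allowed <- post$state != high_vol
table(post$state)
```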
There are certainly other model parameters that can be inferred using MCMC. I personally find it very time consuming (this is only based on experience and not on convergence analysis). Furthermore, as stated in the first paper, if one wants to use Bayesian inference then the EM algorithm can be used for computing MAP parameters.
All in all I haven't found it very useful.
| null | CC BY-SA 3.0 | null | 2011-02-04T11:13:39.907 | 2011-09-08T21:05:25.420 | 2011-09-08T21:05:25.420 | 1106 | 155 | null |
174 | 1 | 179 | null | 18 | 594 | [Wikipedia](http://en.wikipedia.org/wiki/Stress_testing#Financial_sector) lists three of them:
- Extreme event: hypothesize the portfolio's return given the recurrence of a historical event. Current positions and risk exposures are combined with the historical factor returns.
- Risk factor shock: shock any factor in the chosen risk model by a user-specified amount. The factor exposures remain unchanged, while the covariance matrix is used to adjust the factor returns based on their correlation with the shocked factor.
- External factor shock: instead of a risk factor, shock any index, macro-economic series (e.g., oil prices), or custom series (e.g., exchange rates). Using regression analysis, new factor returns are estimated as a result of the shock.
But I was wondering if there is a more comprehensive list.
| What approaches are there for stress testing a portfolio? | CC BY-SA 2.5 | null | 2011-02-04T13:19:21.993 | 2011-02-04T17:23:46.037 | null | null | 159 | [
"risk",
"value-at-risk",
"stress"
]
|
175 | 1 | null | null | 20 | 1642 | Usually even good-performing quant trading strategies work for a while and then returns start to shrink. I see two reasons for that, each of which would probably give rise to a different analysis:
- The strategy became known to too many traders and has been arbitraged away.
- Market conditions have changed (will or will not revert).
| Is there a way to estimate (predict) the half life of a quantitative trading system? | CC BY-SA 2.5 | null | 2011-02-04T14:37:41.727 | 2013-02-22T22:02:42.410 | 2011-02-06T16:16:50.943 | 35 | 155 | [
"algorithmic-trading"
]
|
176 | 1 | null | null | 27 | 3986 | Let's say I have a brand new fancy model for some asset class (calibration procedure over a set of vanilla options included) with which I truly believe I have made a step forward compared to the existing literature (a fanciful situation, I recognize).
Which criteria should be met so that I can claim to be in a position to "validate" this model?
PS:
We will assume that the math is OK and that I can prove that there is no bug in the model implementation.
| Model Validation Criteria | CC BY-SA 3.0 | null | 2011-02-04T15:02:34.870 | 2020-06-16T17:32:14.230 | 2020-06-15T18:52:57.950 | 36636 | 92 | [
"models",
"model-validation",
"model-risk-management"
]
|
177 | 2 | null | 175 | 10 | null | I go out on a limb and say No.
You can of course observe how it does, but making a prediction about how and when it decays is difficult to impossible with any degree of precision. You'd need a meta-model of the market as a whole. And, well, if you had that, wouldn't you use that knowledge to make your model better?
That said, you can of course measure pnl and other return characteristics and extrapolate, but that isn't a proper predictive model in my book.
| null | CC BY-SA 2.5 | null | 2011-02-04T15:11:50.610 | 2011-02-04T15:24:35.753 | 2011-02-04T15:24:35.753 | 69 | 69 | null |
179 | 2 | null | 174 | 8 | null | Well, what you have cited covers most of what you can do with scenarios, but maybe I can add another one which is portfolio dependent. Instead of looking at arbitrary scenarios, you first identify the combination of factors to which your portfolio is most sensitive, and then look for scenarios that specifically impact that combination of risk factors.
Anyway, scenario losses can be quite an effective tool in risk management because they show in a simple way where your risk stands, but they shouldn't be viewed as a standalone indicator.
One flaw is that they usually tell you something about only a few scenarios, but you can't really be sure that other scenarios won't impact you more. Moreover, the more scenarios you have, the less you know how to order them with respect to each other or with respect to some kind of likelihood.
Otherwise, in the capital-requirements area, I am starting to hear about stressed VaR, which means among other things that you raise correlations between risk factors considerably (as usually happens during a crisis) as well as the volatility of the risk factors.
In the end, scenarios and stressed VaR look a little bit like "cuisine", with no real theoretical grounds to support the whole edifice, and that's the problem.
Regards
| null | CC BY-SA 2.5 | null | 2011-02-04T17:23:46.037 | 2011-02-04T17:23:46.037 | null | null | 92 | null |
180 | 1 | 319 | null | 22 | 1841 | Let's say you want to calculate a VaR for a portfolio of 1000 stocks. You're really only interested in the left tail, so do you use the whole set of returns to estimate mean, variance, skew, and shape (let's also assume a skewed generalized error distribution - SGED)? Or would you just use the left tail (let's say the bottom 10% of returns)?
For some reason, using the whole set of returns seems more correct to me (by using only the left 10% of returns you'd really be approaching a non-parametric VaR). But using the whole set of returns would likely cause some distortion in the left tail in order to get a better fit elsewhere.
How are the pros doing this? Thanks!
| How are distributions for tail risk measures estimated in practice? | CC BY-SA 2.5 | null | 2011-02-04T19:14:34.403 | 2016-01-29T04:11:24.493 | 2011-09-22T22:07:59.113 | 999 | 106 | [
"risk",
"probability",
"estimation",
"value-at-risk",
"expected-return"
]
|
181 | 1 | 370 | null | 22 | 1731 | If I design a trading model, I might want to know the [model's half life](https://quant.stackexchange.com/questions/147/what-are-the-key-risks-to-the-quantitative-strategy-development-process/150#150). Unfortunately, it doesn't seem possible to predict alpha longevity without a [meta-model of the market](https://quant.stackexchange.com/questions/175/is-there-a-way-to-estimate-predict-the-half-life-of-a-quantitative-trading-syst/177#177). Intuitively, such a meta-model does not exist, but has that ever been proven? This sounds like a Russell's paradox or Gödel's incompleteness theorems for the financial markets.
---
I'm adding a bounty to see if I can get some more responses.
| Why is there no "meta-model"? | CC BY-SA 2.5 | null | 2011-02-04T19:52:56.263 | 2014-02-13T19:48:26.607 | 2017-04-13T12:46:22.823 | -1 | 35 | [
"strategy"
]
|
182 | 1 | 196 | null | 18 | 2424 | The practice of using Gaussian copulas in modeling credit derivatives has come under a lot of criticism in the past few years. What are the major arguments against using the copula method in this respect?
| What are the limitations of Gaussian copulas in respect to pricing credit derivatives? | CC BY-SA 2.5 | null | 2011-02-05T01:20:15.523 | 2011-02-06T16:16:56.943 | null | null | 117 | [
"credit",
"valuation"
]
|
183 | 1 | 185 | null | 29 | 9946 | How do you explain what a stationary process is? In the first place, what is meant by process, and then what does the process have to be like so it can be called stationary?
| What is a stationary process? | CC BY-SA 3.0 | null | 2011-02-05T12:41:20.743 | 2019-09-21T14:24:45.307 | 2019-09-21T14:24:45.307 | 20795 | 40 | [
"time-series",
"stochastic-processes",
"stochastic-calculus",
"stationarity",
"terminology"
]
|
184 | 2 | null | 182 | 9 | null | If you want a 'pop science' account for it, the [Wired article by Felix Salmon](http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all) is a pretty good start.
If you want harder technical stuff, well then you can start at [the Wikipedia article and its section on Applications](http://en.wikipedia.org/wiki/Copula_%28statistics%29#Applications) and follow the references:
>
[...] Some believe the methodology of applying the Gaussian copula to credit derivatives to be one of the reasons behind the global financial crisis of 2008–2009.[6][7] Despite this perception, there are documented attempts of the financial industry, occurring before the crisis, to address the limitations of the Gaussian copula and of Copula functions more generally, specifically the lack of dependence dynamics and the poor representation of extreme events[8]. The volume "Credit Correlation: Life After Copulas", published in 2007 by World Scientific, summarizes a 2006 conference held by Merrill Lynch in London where several practitioners attempted to propose models rectifying some of the copula limitations. See also the article by Donnelly and Embrechts [9] and the book by Brigo, Pallavicini and Torresetti [10].
| null | CC BY-SA 2.5 | null | 2011-02-05T13:39:39.967 | 2011-02-06T16:16:56.943 | 2011-02-06T16:16:56.943 | 117 | 69 | null |
185 | 2 | null | 183 | 26 | null | [A stationary process](http://en.wikipedia.org/wiki/Stationary_process) is one where the mean and variance don't change over time. This is technically "second order stationarity" or "weak stationarity", but it is also commonly the meaning when seen in literature.
In first order stationarity, the distribution of $(X_{t+1}, ..., X_{t+k})$ is the same as $(X_{1}, ..., X_{k})$ for all values of $(t, k)$.
You can see whether a series is stationary through its [autocorrelation function](http://en.wikipedia.org/wiki/Autocorrelation) (ACF): $\rho_k = Corr(X_t, X_{t-k})$. When the ACF of the time series decreases only slowly, this is an indication that the mean is not stationary; conversely, the ACF of a stationary series should decay to zero quickly.
For instance, white noise is stationary, while a random walk is not. We can simulate these distributions easily in R ([from a prior answer of mine](https://stats.stackexchange.com/questions/2623/autocorrelation-in-the-presence-of-non-stationarity)):
```
op <- par(mfrow = c(2,2), mar = .5 + c(0,0,0,0))
N <- 500
# Simulate a Gaussian noise process
y1 <- rnorm(N)
# Turn it into integrated noise (a random walk)
y2 <- cumsum(y1)
plot(ts(y1), xlab="", ylab="", main="", axes=F); box()
plot(ts(y2), xlab="", ylab="", main="", axes=F); box()
acf(y1, xlab="", ylab="", main="", axes=F); box()
acf(y2, xlab="", ylab="", main="", axes=F); box()
par(op)
```
Which ends up looking somewhat like this:

If a time series' mean or variance varies over time, it is possible [to make it stationary](https://stats.stackexchange.com/questions/2077/how-to-make-a-time-series-stationary) through a number of different techniques, the most common being differencing.
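Continuing the simulation above, differencing the random walk `y2` recovers a stationary series, which a unit-root test confirms (this sketch assumes the tseries package is installed):
```
library(tseries)
adf.test(y2)         # the random walk: fails to reject a unit root (non-stationary)
adf.test(diff(y2))   # the differenced series: strongly rejects (stationary)
```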
| null | CC BY-SA 2.5 | null | 2011-02-05T14:12:55.267 | 2011-02-07T01:17:19.893 | 2017-04-13T12:44:17.637 | -1 | 17 | null |
186 | 2 | null | 183 | 8 | null | A process is defined [here](http://en.wikipedia.org/wiki/Stochastic_process) and is simply a collection of random variables indexed (in general) by time.
The concept Shane describes is known as "weak stationarity"; strongly stationary processes are those whose probability law does not evolve through time.
More formally, let $X_t$ be a given process and let $P_X$ denote its probability law. Then, for any finite set of times $(t_1,...,t_n)$, any Borel sets $(A_1,...,A_n)$ and any real $h>0$, $X_t$ is said to be (strongly) stationary iff:
$P_X(X_{t_1}\in A_1,...,X_{t_n}\in A_n)=P_X(X_{t_1+h}\in A_1,...,X_{t_n+h}\in A_n)$
For Gaussian processes, weak stationarity is equivalent to strong stationarity.
Regards
| null | CC BY-SA 2.5 | null | 2011-02-05T14:47:19.200 | 2011-02-05T22:34:42.937 | 2011-02-05T22:34:42.937 | 92 | 92 | null |
187 | 2 | null | 176 | 12 | null | I don't think that there is a precise point in time when we can say that a model is valid (well, it's a model, not a law). For example, [E. Derman in his article on Model risk](http://www.ederman.com/new/docs/gs-model_risk.pdf) describes the verification of a model as an iterative process:
>
It is impossible to avoid errors during model development, especially when they are created under trading floor duress .... So, after the model is built, the developer tests it extensively. Thereafter, other developers “play” with it too. Next, traders who depend on the model for pricing and hedging use it. Finally, it’s released to salespeople. After a suitably long period during which most wrinkles are ironed out, it’s given to appropriate clients. This slow diffusion helps eliminate many risks, slowly but steadily.
He also describes six types of possibilities which constitute model risk:
- Incorrect model
- Correct model, incorrect solution
- Correct model, inappropriate use
- Badly approximated solution
- Software and hardware bugs
- Unstable data
In addition to that he provides some tips that can be used to avoid model risk and its consequences (to name just a few):
- Test complex models in simple cases first
- Test the model’s boundaries
- Don’t ignore small discrepancies
| null | CC BY-SA 2.5 | null | 2011-02-05T16:03:23.840 | 2011-02-05T16:03:23.840 | null | null | 15 | null |
188 | 2 | null | 183 | 9 | null | A very excellent discussion of stationarity as it relates to trading can be found in Sherry's [Mathematics of Technical Analysis](http://search.barnesandnoble.com/The-Mathematics-of-Technical-Analysis/Clifford-J-Sherry/e/9780595012077) (poorly organized, but very useful book). As he puts it, if the price changes of a stock, etc., are stationary over a time period, the underlying rules generating the price changes are effectively unchanged. The hypothesis is that a trading algorithm has little chance of working on a non-stationary series of price changes.
Below are some charts I've generated using the technique he describes in the book. Basically, you break your data set into two parts, histogram the prices changes, and construct cumulative distribution functions (cdfs). Then, do a Pearson's $\chi^2$ test to see if one cdf is significantly different than the other. You can also just look at the cdfs by eye. The case below is a borderline case. You can see the cdfs look a little bit different, but $\chi^2$ needs to be better than 14.07 to reach the 0.05 significance level in Pearson's test.

| null | CC BY-SA 2.5 | null | 2011-02-05T16:17:46.503 | 2011-02-05T16:17:46.503 | null | null | 47 | null |
189 | 1 | 255 | null | 11 | 462 | In order to use Real Option Valuation (ROV) with the Black-Scholes equation, I must know the volatility of the economic returns over T years. Given this, what would be an appropriate measure for computing the volatility of the economic returns from my reservoir?
The distribution of PV for any particular year turns out to be Gaussian. No historical data is available.
| Appropriate measure of Volatility for economic returns from an asset? | CC BY-SA 3.0 | null | 2011-02-05T20:04:31.503 | 2016-07-12T09:30:21.863 | 2016-07-11T23:13:12.483 | null | null | [
"options",
"volatility",
"black-scholes",
"real-options"
]
|
190 | 2 | null | 181 | 7 | null | I would conjecture that the reason a proof does not seem to exist is that, in a purely theoretical framework, such a model could exist for 'corner cases'.
Under the assumption that the existence of a meta-model would modify the usage of the model / affect the market, something like Russell's paradox would seem to occur.
The exception would be the case where a stable market/meta-model relationship could be achieved, i.e., we could construct some theoretical markets (not a model, but a purely theoretical complete description of the market) in which the existence of a meta-model would not affect the market, or which in a finite number of steps would converge to a final market/meta-model pair in which the existence of the final meta-model does not cause changes to the market.
| null | CC BY-SA 2.5 | null | 2011-02-05T22:35:46.190 | 2011-02-05T22:35:46.190 | null | null | 133 | null |
191 | 2 | null | 140 | 23 | null | Strictly speaking, data snooping is not the same as in-sample vs out-of-sample model selection and testing, but rather has to do with sequential or multiple hypothesis tests based on the same data set. To quote Halbert White:
>
Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results.
Let me provide an example. Suppose that you have a time series of returns for a single asset, and that you have a large number of candidate model families. You fit each of these models, on a test data set, and then check the performance of the model prediction on a hold-out sample. If the number of models is high enough, there is a non-negligible probability that the predictions provided by one model will be considered good. This has nothing to do with bias-variance trade-offs. In fact, each model may have been fitted using cross-validation on the training set, or other in-sample criteria like AIC, BIC, Mallows etc. For examples of a typical protocol and criteria, check Ch.7 of Hastie-Friedman-Tibshirani's "[The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/)". Rather the problem is that implicitly multiple tests of hypothesis are being run at the same time. Intuitively, the criterion to evaluate multiple models should be more stringent, and a naive approach would be to apply a [Bonferroni correction](http://en.wikipedia.org/wiki/Bonferroni_correction). It turns out that this criterion is too stringent. That's where [Benjamini-Hochberg](http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_hochberg1995.pdf), [White](http://weber.ucsd.edu/~hwhite/pub_files/hwcv-077.pdf), and [Romano-Wolf](http://www-stat.wharton.upenn.edu/~steele/Courses/956/Resource/MultipleComparision/RomanoWolf05.pdf) kick in. They provide efficient criteria for model selection. The papers are too involved to describe here, but to get a sense of the problem, I recommend Benjamini-Hochberg first, which is both easier to read and truly seminal.
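In practice the Benjamini-Hochberg adjustment is a one-liner via `p.adjust`; here is a small sketch with made-up p-values from many candidate strategies:
```
set.seed(11)
# Hypothetical p-values from testing 100 candidate strategies: 95 useless, 5 with some signal
pvals <- c(runif(95), rbeta(5, 1, 50))
c(naive      = sum(pvals < 0.05),                          # ignores multiplicity
  bonferroni = sum(p.adjust(pvals, "bonferroni") < 0.05),  # very stringent
  BH         = sum(p.adjust(pvals, "BH") < 0.05))          # controls the false discovery rate
```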
| null | CC BY-SA 2.5 | null | 2011-02-05T22:55:07.890 | 2011-02-05T22:55:07.890 | null | null | 194 | null |
192 | 2 | null | 121 | 10 | null | You are correct: evaluating volatility forecasts is quite different from evaluating forecasts in general, and it is a very active area of research.
Methods can be classified in several ways. One criterion is to consider evaluation methods for single forecasts (e.g., for the time series of returns of a specific portfolio) vs multiple simultaneous forecasts (e.g., for an investable universe). Another criterion is to separate direct evaluation methods from indirect evaluation methods (more on this later).
Focusing on single-asset methods: historically the most commonly used approach by practitioners, and the one advocated by Barra, is the "bias" statistic. If you have a return process $r_t$ and a volatility forecast $h_t$, then under the null hypothesis that the forecast is correct, $r_t/h_t$ has unit variance. The bias statistic is defined as $T^{-1} \sum_{t=1}^T (r_t/h_t)^2$, which is asymptotically normally distributed with unit mean and st.dev. $1/\sqrt{T}$, which can be used for hypothesis testing.
An alternative is Mincer-Zarnowitz regression, in which one runs a regression between realized variance (say, 20-trading day estimate between $t$ and $t+20$) and the forecast:
$$\hat\sigma^2_t =\alpha +\beta h^2_t + \epsilon_t$$
Under the null one tests the joint hypothesis $\alpha=0, \beta=1$. [Patton and Sheppard](http://www.kevinsheppard.com/images/c/c6/Patton_Sheppard.pdf) also recommend the regression, which yields a more powerful test:
$$(\hat\sigma_t/h_t)^2 =\alpha/h^2_t +\beta + \epsilon_t$$
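A small sketch of both single-asset checks, with simulated data standing in for real forecasts and realizations (here the squared return is used as a noisy one-period proxy for realized variance):
```
set.seed(5)
n_obs <- 1000
h <- sqrt(exp(rnorm(n_obs, log(0.01^2), 0.3)))   # hypothetical volatility forecasts
r <- rnorm(n_obs, 0, h)                          # returns generated consistently with the forecasts

# Bias statistic: mean squared standardized return, close to 1 under the null
bias_stat <- mean((r / h)^2)

# Mincer-Zarnowitz regression of (a noisy proxy of) realized variance on forecast variance;
# the joint null is alpha = 0, beta = 1
mz <- lm(I(r^2) ~ I(h^2))
c(bias = bias_stat, alpha = unname(coef(mz)[1]), beta = unname(coef(mz)[2]))
```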
Both these tests can be (non-rigorously) extended to multiple forecasts by simulating random portfolios and generating statistics for each portfolio, or by assuming an identical relationship between forecasts and realizations across asset pairs:
$$vech(\Sigma_t) = \alpha + \beta vech ( H_t) + \epsilon_t$$
in which $vech$ is the "stacking" operator on a matrix (in this case, the forecast and realized variance matrices).
As for indirect tests, a popular approach is the minimum variance portfolio for risk model comparison. One finds the minimum variance portfolio under a unit budget constraint using two or more asset covariance matrices. One can prove that the true covariance matrix would result in the portfolio with the lowest realized variance. In other words, better models hedge better. The advantage of this approach is that it does take into account the quality of the forecast of $\Sigma^{-1}$, which is used in actual optimization problems; and it doesn't require providing alphas, so that the test is not a joint test of risk model and manager skill.
| null | CC BY-SA 2.5 | null | 2011-02-06T04:05:43.783 | 2011-02-06T04:05:43.783 | null | null | 194 | null |
193 | 2 | null | 35 | 11 | null | I would split the question into two sub-questions:
- Is market beta useful at all?
- Is market beta useful for high-frequency strategies that are fully hedged EOD?
With regards to the first question, I would summarize the hundreds of papers on the subject as: yes, but not as much as it was initially believed. The reason being that multi-factor models are empirically superior to CAPM and intertemporal CAPM, and in these models the market factor explains relatively little volatility, and in some cases it is entirely omitted, as the volatility is entirely captured by industry factors. Yet, risk models are not universally adopted, especially outside of equities, and moreover market beta can still be useful to assess risk premia (as is usually done in investment banking) or to tactically hedge portfolios.
With regards to the second question: it depends on the time scale. If you estimate beta using daily returns, you cannot use this loading to hedge intraday exposure to market risk, since this risk is not captured by the estimation interval. So you'd have to use 30- or 15-minute returns. This is not trivial, however, because of asynchronicity effects and the induced autocorrelation of returns (the Epps effect). I don't know of commercial high-frequency factor models, and I am not sure whether statistical arbitrageurs use in-house models, but there are many technical hurdles to overcome.
| null | CC BY-SA 2.5 | null | 2011-02-06T04:33:58.627 | 2011-02-07T03:24:59.033 | 2011-02-07T03:24:59.033 | 194 | 194 | null |
194 | 1 | 205 | null | 8 | 450 | I have several measures:
```
1. Profit and loss (PNL).
2. Win to loss ratio (W2L).
3. Avg gain to drawdown ratio (AG2AD).
4. Max gain to maximum drawdown ratio (MG2MD).
5. Number of consecutive gains to consecutive losses ratio (NCG2NCL).
```
If there were only 3 measures (A, B, C), then I could represent the "total" measure as a magnitude of a 3D vector:
R = SQRT(A^2 + B^2 + C^2)
If I want to combine those 5 measures into a single value, would it make sense to represent them as the magnitude of a 5D vector? Is there a better way to combine them? Is there a way to put more "weight" on certain measures, such as the PNL?
| How to combine various equity measures into a single measure (vector magnitude) | CC BY-SA 2.5 | null | 2011-02-06T08:46:44.807 | 2011-02-07T05:42:11.163 | null | null | 78 | [
"analysis",
"statistics",
"quantitative"
]
|
195 | 1 | 199 | null | 23 | 7821 | I notice that, on the surface, there are some similarities between quantitative sports betting and quantitative finance. Both have the concept of arbitraging, etc.
What are the applications of quantitative finance that lend themselves readily to sports betting?
| The application of quantitative finance in sports betting | CC BY-SA 4.0 | null | 2011-02-06T10:43:16.347 | 2019-09-22T10:13:58.033 | 2019-09-22T10:13:58.033 | 20795 | 129 | [
"application",
"betting"
]
|
196 | 2 | null | 182 | 16 | null | The limitations of the Gaussian copula were well-known among the quantitative finance practitioners before the crisis. See [this paper by D. Brigo](http://arxiv.org/abs/0912.5427).
To answer the question:
- no "fat tails"
- unable to fit the market prices without tweaks (base correlation) which make the model arbitrageable
- it's a static model (e.g. forward-starting tranches are impossible to price -- but nobody trades them now anyway)
This said, all other models are either worse or offer cosmetic improvements. Changing the Gaussian factors to some others doesn't really give you much. A few years ago the Random Factor Loading model was en vogue, but it turned out to be much harder to calibrate, and still not flexible enough.
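The "no fat tails / no tail dependence" point is easy to see numerically. Here is a small sketch (assuming the mvtnorm package) comparing joint tail exceedances under Gaussian and Student-t dependence with the same correlation parameter:
```
library(mvtnorm)
set.seed(9)
n <- 2e5
S <- matrix(c(1, 0.5, 0.5, 1), 2, 2)     # same correlation parameter in both cases
g  <- rmvnorm(n, sigma = S)              # Gaussian dependence
st <- rmvt(n, sigma = S, df = 3)         # Student-t dependence (heavy tails)

q <- 0.01                                # look at the worst 1% of each margin
joint_tail <- function(x) mean(x[, 1] < quantile(x[, 1], q) & x[, 2] < quantile(x[, 2], q))
c(gaussian = joint_tail(g), student_t = joint_tail(st), if_independent = q^2)
# The t model produces many more simultaneous extreme outcomes than the Gaussian one.
```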
| null | CC BY-SA 2.5 | null | 2011-02-06T16:09:27.307 | 2011-02-06T16:09:27.307 | null | null | 89 | null |
197 | 2 | null | 195 | 3 | null | To be precise, it's not so much an application of quant finance as of general statistics, which is used in myriad fields. For example, you could use Monte Carlo simulation (part of statistics) to determine each horse's chances of winning a race. For that we can form distributions of weather conditions, track conditions, horses' success rates, etc. However, since these parameters are not independent, we must know the correlation between them to sample their outcomes.
Having said that, there are other fields where the application of quant finance is prevalent, such as the valuation of tangible and real assets using options theory. It is widely used to assess the risk of developing or venturing into a certain product, property, etc. Evaluating risk ultimately helps us make decisions. So in essence, we use options theory to do risk and decision analysis for real and tangible properties/products.
| null | CC BY-SA 2.5 | null | 2011-02-06T19:36:02.513 | 2011-02-06T19:36:02.513 | null | null | null | null |
198 | 2 | null | 194 | 4 | null | A multi-alpha trading model ranks each asset according to the individual signals. For example, if I have two metrics and three stocks, I could just create this reverse-sorted table:
```
Rank| PNL W2L
----| ---------
3 | AAPL AAPL
2 | MSFT YHOO
1 | YHOO MSFT
```
Because this ranking/sorting method is non-parametric, I can just average each metric's rank by stock:
```
stock| score
-----| -----
AAPL | 3.0
MSFT | 1.5
YHOO | 1.5
```
And now it's easy to make a weighted average of the ranks; if I want PNL to be 2/3 of the value and W2L to be 1/3, I have:
```
stock| score
-----| -----
AAPL | 3.000
MSFT | 1.667
YHOO | 1.333
```
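A minimal R sketch of the same scheme, using hypothetical metric values whose ordering matches the tables above:
```
metrics <- data.frame(
  row.names = c("AAPL", "MSFT", "YHOO"),
  PNL = c(25, 10, 5),     # hypothetical values; only the ordering matters
  W2L = c(1.8, 1.1, 1.3)
)
ranks   <- apply(metrics, 2, rank)           # per-metric ranks, 1 = worst
weights <- c(PNL = 2/3, W2L = 1/3)
score   <- setNames(as.vector(ranks %*% weights), rownames(metrics))
sort(score, decreasing = TRUE)               # AAPL 3.000, MSFT 1.667, YHOO 1.333
```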
| null | CC BY-SA 2.5 | null | 2011-02-06T20:28:08.410 | 2011-02-06T20:28:08.410 | null | null | 35 | null |
199 | 2 | null | 195 | 7 | null | Well in my opinion a good parallel can be made between sport betting and bookmakers on the one hand and derivative pricing and market makers of those derivative on the other hand. I'll try to explain that if I can.
Suppose you are given a set of bookmakers taking bets on, let's say, one underlying event, and that they quote this event as a percentage (as for binary options). If they are smart enough, they shouldn't care about the statistics of the event itself; what they should try to do is balance their quotes with respect to the positions in their outstanding books. By this I mean that they should work out the P&L scenarios of each outcome of the bets in their books and try to balance things so they can earn a living while taking as little risk as possible. This will make the quotes of the events evolve dynamically as bets come into their books. Of course the clients do care about statistics, so they will think twice before taking bets on outcomes that rarely occur (or paying too much for them).
If you think about the market making of binary options, the situation is much the same as with the bookie, but in the end what you get is the risk-neutral probability. It is set by market participants and is in a way "an opinion". This risk-neutral probability is not always coherent with the statistics of the history of the underlying.
This, however, can make sense if all the market makers are in some way minimizing the aggregate risk of their returns (I deliberately don't define the word "risk" here, but this is where the limits set by risk managers come into play).
Of course, none of this is properly and mathematically formalised, but the parallel between the two situations appears quite striking to me.
Best Regards
| null | CC BY-SA 2.5 | null | 2011-02-06T21:33:29.710 | 2011-02-06T22:50:50.187 | 2011-02-06T22:50:50.187 | 92 | 92 | null |
200 | 2 | null | 115 | 16 | null | There are several applications of Lévy alpha-stable distributions to finance, especially in insurance and reinsurance. I believe that [Embrechts-Klüppelberg-Mikosch's "Modelling Extremal Events for Insurance and Finance"](http://books.google.com/books?id=BXOI2pICfJUC&printsec=frontcover&dq=mikosh+extremal+events&source=bl&ots=yS18YW3H5a&sig=7o9ofLu_T0Mw4Yo6RAefvXGzaKg&hl=en&ei=FVdPTYRnhvaAB9rQ2Tk&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBMQ6AEwAA#v=onepage&q&f=false) is still an excellent reference. However, in the modeling of stock prices, this line of research is essentially inactive. The reason is that there is conclusive evidence that stock prices have finite second moments (for a survey, see [Taylor's book](http://books.google.com/books?id=I-GMGwAACAAJ&dq=taylor+asset+price+dynamics+volatility&hl=en&ei=M1hPTeiJE8LagQe3wLky&sa=X&oi=book_result&ct=result&resnum=1&ved=0CDIQ6AEwAA) or [Cont's nice survey](http://www.proba.jussieu.fr/pageperso/ramacont/papers/empirical.pdf)). This essentially rules out all stable distributions except the Gaussian one. Stochastic volatility models using mixtures of normals for the unconditional distributions and/or diffusion/jump processes are far more popular.
| null | CC BY-SA 2.5 | null | 2011-02-07T02:31:06.587 | 2011-02-07T02:31:06.587 | null | null | 194 | null |
201 | 2 | null | 103 | 21 | null | Suppose that you and other bettors participate in a lottery with $N$ possible outcomes; outcome $n$ will occur with probability $\pi_n$. There are $N$ basic contracts available for purchase. Contract $n$ costs $p_n$ and entitles you to one dollar if outcome $n$ occurs, zero otherwise.
Now, imagine that you have a contingent claim that pays a complex payoff based on the outcome, say $f(n)$. The expected value of the payoff is
$$E(f(n))=\sum_n \pi_n f(n) =E(f)$$
Now, consider a portfolio of $f(1)$ units of basic contract $1$, $f(2)$ units of basic contract $2$, etc. This portfolio has exactly the same random payoff as the contingent claim. Because of the [law of one price](http://en.wikipedia.org/wiki/Law_of_one_price), it must have the same price as the contingent claim. Hence, the contingent claim has price equal to
$$\text{price}(f)=\sum_n p_n f(n)$$
Define $r= 1/(\sum_{i=1}^N p_i)$ and set $\tilde p_n := r p_n$, which is a probability measure, and you can rewrite
$$\text{price}(f)=r^{-1} \sum_n \tilde p_n f(n)=r^{-1} E^*(f)$$
So the risk-neutral probabilities are essentially the normalized prices of "state-contingent claims", i.e., outcome-specific bets. And the price of any claim is the discounted expectation according to this probability distribution. $r$ is easy to identify: if the contingent claim pays 1 dollar in every outcome, then its price is the discounted value of a dollar using the risk-free interest rate. Hence $r$ is one plus the risk-free interest rate.
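A tiny numerical illustration of the argument, with hypothetical state prices:
```
p <- c(0.30, 0.45, 0.20)          # prices of the three state-contingent claims
f <- c(2, 1, 0)                   # payoff of some contingent claim in each state

r       <- 1 / sum(p)             # gross risk-free return: a sure dollar costs sum(p)
p_tilde <- r * p                  # risk-neutral probabilities (they sum to one)
c(replication_price   = sum(p * f),
  discounted_rn_value = sum(p_tilde * f) / r,   # same number via the risk-neutral expectation
  sum_p_tilde         = sum(p_tilde))
```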
Where do these prices come from? There are three ways to think about price determination:
- They are determined by a non-arbitrage condition, where no bettor can make something for nothing almost surely;
- They are determined by an equilibrium condition, where all bettors optimize their utility;
- They are determined by a single-agent utility optimization problem.
All conditions imply that the prices are strictly positive. For more information, Duffie's Dynamic Asset Pricing is still the standard reference.
The basic intuition behind this framework goes back 35 years to Cox-Rubinstein. Harrison-Kreps extended the result, and it has been extended further since then. The most general forms, to an almost useless level of technicality, are due to Delbaen and Schachermayer.
| null | CC BY-SA 2.5 | null | 2011-02-07T03:22:11.923 | 2011-02-07T21:09:32.740 | 2011-02-07T21:09:32.740 | 194 | 194 | null |
202 | 2 | null | 156 | 25 | null | Easily, the
>
Efficient Market Hypothesis
For many reasons. First, many adherents support it, and many critics reject it, for the wrong (often ideological) reasons. This applies even to well-known economists like John Quiggin. Second, even fewer people know the full extent and scope of the anomalies; the literature can get very technical. So even smart people rejecting the EMH, or publishing anomalies, end up being over-optimistic about their ability to beat the market.
| null | CC BY-SA 2.5 | null | 2011-02-07T03:34:27.217 | 2011-02-07T14:59:24.077 | 2011-02-07T14:59:24.077 | 194 | 194 | null |
203 | 2 | null | 156 | 19 | null | My second best is
>
Copulas
I won't go as far as declaring the Gaussian copula "[The formula that killed Wall Street](http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all)" (warning: lousy article), but will defer to T. Mikosch in his [very good paper](http://www.math.ku.dk/~mikosch/Preprint/Copula/s.pdf) on misuses of copulas.
| null | CC BY-SA 2.5 | null | 2011-02-07T03:39:37.260 | 2011-02-07T14:59:57.803 | 2011-02-07T14:59:57.803 | 194 | 194 | null |
204 | 2 | null | 156 | 5 | null | It is dangerous when large proportions of the trading population "religiously" believe [as a matter of unexamined faith] that certain necessary assumptions which govern the accuracy of the models they use will always hold. It is best to really understand how the models have been derived and to have a skeptic's understanding of these assumptions and their impact. All models have flaws -- yet it is possible to use flawed models if you can get consistent indicators from several different or contrarian approaches based upon radically different assumptions. When the valuations from different approaches diverge, it is necessary to understand why -- when this happens, it is necessary to investigate the underlying assumptions ... this sort of environment often provides trading opportunities, but the environment can also quite easily be an opportunity for disaster.
Of course, implicit and explicit assumptions are absolutely necessary to sufficiently simplify any mathematical analysis and to make it possible to derive models that can give lots of traders the generally useful trading "yardsticks" that they rely upon. As an example, consider the Black-Scholes model. The Black-Scholes model is ubiquitous; a commonly used "yardstick" for option valuations. The Black-Scholes model of the market for a particular equity explicitly assumes that the stock price follows a geometric Brownian motion with constant drift and volatility.
This assumption of "geometric Brownian motion with constant drift and volatility" is never exactly true in the very strictest sense but, most of the time, it is a very useful, simplifying assumption because stock prices are often "like" this. It might not be reality, but the assumption is a close enough approximation of reality. This assumption is highly useful because it makes it possible to apply stochastic partial differential equation methodology to the problem of determining appropriate option valuations. However, the assumption of "constant drift and volatility" is a very dangerous one in times when judgement, wisdom and intuition would tell an experienced investor "Something is odd; it's as if we're in the calm before the storm." or "Crowd psychology and momentum seem to be a more palpable factor in the prices right now."
| null | CC BY-SA 2.5 | null | 2011-02-07T05:02:31.963 | 2011-02-07T05:02:31.963 | null | null | 121 | null |
205 | 2 | null | 194 | 4 | null | One approach would be to rescale these metrics so that they are approximately normally distributed with unit variance under the null hypothesis that the stock's price is an unbiased geometric random walk (equivalently, that the log returns are zero mean). This rescaling effectively 'downweights' the statistics with a large amount of variance. Once they have been rescaled to approximate normality, one *could* combine them as you have done, in which case the sum of their squares would be a chi-square with 5 degrees of freedom under the null. It would probably be more appropriate, however, to simply take their mean, because sign should not be discarded, I think.
The first two metrics should be easy to rescale. The metrics involving maximum drawdown, however, are a bit tricky. You probably want to estimate the variance of these statistics under the null via a Monte Carlo simulation.
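A rough sketch of that Monte Carlo step, simulating zero-mean return paths under the null (the parameters are placeholders to be matched to the actual strategy's length and volatility):
```
set.seed(2)
n_days <- 252
sigma  <- 0.01                                  # match to the strategy's observed daily volatility

max_drawdown <- function(logret) {
  equity <- cumsum(logret)
  max(cummax(c(0, equity))[-1] - equity)        # largest peak-to-trough fall, in log terms
}

null_draws <- replicate(5000, max_drawdown(rnorm(n_days, 0, sigma)))
c(null_mean = mean(null_draws), null_sd = sd(null_draws))
# A drawdown-based metric from the real track record can then be centred and scaled
# by these null moments before being combined with the other standardized measures.
```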
| null | CC BY-SA 2.5 | null | 2011-02-07T05:42:11.163 | 2011-02-07T05:42:11.163 | null | null | 108 | null |
206 | 2 | null | 156 | 16 | null | Backtesting - pure and simple. It's the logical and obvious thing to do, right? Yet so many pitfalls lie in wait. Be very careful, people. Do it as little as possible and as late as possible.
| null | CC BY-SA 2.5 | null | 2011-02-07T09:59:54.193 | 2011-02-07T09:59:54.193 | null | null | 200 | null |
207 | 2 | null | 195 | 5 | null | There are certain overlaps. One example is the so called [favourite-longshot-bias](http://en.wikipedia.org/wiki/Favourite-longshot_bias), e.g.,
- Stewart D. Hodges, Robert G. Tompkins, William T. Ziemba, The favorite-longshot bias in S&P 500 and FTSE 100 index futures options: the return to bets and the cost of insurance [PDF], November 27, 2002.
- Favourite-Longshot Bias
Another field is [arbitrage betting](http://en.wikipedia.org/wiki/Arbitrage_betting), so called sure bets. The third field that comes to mind is risk management, especially the [Kelly formula](http://en.wikipedia.org/wiki/Kelly_criterion) and [this wonderful book (Fortune's Formula by W. Poundstone)](https://books.google.de/books?id=xz4y3u-qM04C&lpg=PP1&dq=Fortune's%20Formula&pg=PP1#v=onepage&q&f=false).
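As a reminder of the Kelly formula mentioned in the last point: for a simple bet with win probability $p$ and net odds $b$, the optimal stake is $f^* = (bp - q)/b$ with $q = 1-p$. A minimal sketch:
```
kelly_fraction <- function(p, b) (b * p - (1 - p)) / b   # f* = (bp - q) / b, q = 1 - p

kelly_fraction(p = 0.55, b = 1)     # even-money bet with a 55% win rate: stake 10% of bankroll
kelly_fraction(p = 1/6,  b = 5)     # longshot priced at fair odds: stake nothing
```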
| null | CC BY-SA 4.0 | null | 2011-02-07T11:36:41.677 | 2018-05-14T17:52:18.393 | 2018-05-14T17:52:18.393 | 12 | 12 | null |
208 | 1 | 225 | null | 14 | 1588 | In Paul Wilmott on Quantitative Finance Sec. Ed. in vol. 3 on p. 784 and p. 809 the following stochastic differential equation: $$dS=\mu\ S\ dt\ +\sigma \ S\ dX$$ is approximated in discrete time by $$\delta S=\mu\ S\ \delta t\ +\sigma \ S\ \phi \ (\delta t)^{1/2}$$ where $\phi$ is drawn from a standardized normal distribution.
This reasoning which seems to follow naturally from the definition of a [Wiener process](http://en.wikipedia.org/wiki/Wiener_process) triggered some thoughts and questions I cannot solve. Think of the following general diffusion process: $$dS=a(t,S(t))\ dt\ +b(t,S(t))\ dX$$ Now transform the second term in a similar fashion as above and drop the stochastic component:$$dS=a(t,S(t))\ dt\ +b(t,S(t))\ (dt)^{1/2}$$
NB: The last term is no [Riemann-Stieltjes integral](http://en.wikipedia.org/wiki/Riemann-Stieltjes) (that would e.g. be $d(t)^{1/2}$)
My questions:
(1) How would you interpret the last formula and how can the (now deterministic) differential equation be solved analytically? Will you get an additional term with a second derivative like in [Ito's lemma](http://en.wikipedia.org/wiki/Ito%27s_lemma)?
(2) Is the last term a [fractional integral](http://mathworld.wolfram.com/FractionalIntegral.html) of order 1/2 (which is a [Semi-Integral](http://mathworld.wolfram.com/Semi-Integral.html))? Or is this a completely different concept?
(3) Will there be a different result in the construction of the limit like with Ito integrals (from left endpoint) and [Stratonovich integrals](http://en.wikipedia.org/wiki/Stratonovich_integral) (average of left and right endpoint)?
Note: This is a cross-posting from [mathoverflow](https://mathoverflow.net/questions/20091/deterministic-interpretation-of-stochastic-differential-equation) where I got no answer to these questions.
| Deterministic interpretation of stochastic differential equation | CC BY-SA 2.5 | null | 2011-02-07T11:48:42.807 | 2021-09-24T02:34:36.403 | 2017-04-13T12:58:47.233 | -1 | 12 | [
"differential-equations",
"stochastic-calculus"
]
|
209 | 1 | 275 | null | 16 | 2369 | There are many corporate actions that will affect the stock price, like dividend, stock split and rights.
Given a large series of historical price data, how do we adjust the data to filter out the effect of the corporate action?
| How do strategies deal with corporate actions? | CC BY-SA 2.5 | null | 2011-02-07T12:01:02.797 | 2011-09-08T20:44:41.943 | 2011-09-08T20:44:41.943 | 1355 | 191 | [
"trading-systems",
"corporate-actions"
]
|
212 | 1 | 231 | null | 12 | 1091 | I've asked this [question](https://physics.stackexchange.com/questions/1873/the-concepts-of-path-integral-in-quantitative-finance) here at Physics SE, but I figured that some parts would be more appropriate to ask here. So I'm rephrasing the question again.
We know that for option value calculation, path integral is one way to solve it. But the solution I get from the Black-Scholes formula (derived from the [above question](https://physics.stackexchange.com/questions/1873/the-concepts-of-path-integral-in-quantitative-finance)):
$$\begin{array}{rcl}\mathbb{E}\left[ F(e^{x_T})|x(t)=x \right] & = & \int_{-\infty}^{+\infty} F(e^{x_T}) p(x_T|x(t)=x) dx_T \\
& = & \int_{-\infty}^{+\infty} F(e^{x_T}) \int_{\tilde{x}(t)=x}^{\tilde{x}(T)=x_T} p(x_T|\tilde{x}(\tilde{t})) p(\tilde{x}(\tilde{t})|x(t)=x) d\tilde{x}(\tilde{t}) dx_T \end{array}$$
is very cryptic and simply unusable on a computer.
My question is, how can we program this solution? Or more generally, how can we devise computer algorithms to solve path integral problem in quantitative finance?
| Solving Path Integral Problem in Quantitative Finance using Computer | CC BY-SA 3.0 | null | 2011-02-07T14:21:19.540 | 2015-01-03T07:50:35.953 | 2017-04-13T12:40:17.967 | -1 | 129 | [
"stochastic-calculus"
]
|
213 | 2 | null | 147 | 13 | null | Knowledge leads to profit BUT NOT the reverse
YOU are the biggest risk to the process. All the hopes, wishes and biases you come with get in the way of making good decisions. The more you want something to be true, the more you have to kick the tires. So many people try out a bunch of random stuff, find a pattern that has a notional profit and just get blinded by dollar signs. That's when you start to skip over important details and you make all the sorts of errors detailed in the other answers here. Even when you are aware of these risks, they only get lip service while you are too busy grasping at the profit. You need to keep your feet on the ground and make reality-based decisions. (Note that when you start to lose money, fear takes the place of greed and you continue to make bad decisions.)
Remember that in the end your own or your clients' wealth will rest on the decisions you make. You will have down days; when you see your wealth falling, what will you do? Unless you have rock-solid faith in your process, you will start to tinker and go into a death spiral. To have that faith you need to have absolute discipline right from the start.
Luckily, there is a well-established approach that has been developed over thousands of years to tackle this problem, and it has proved very successful. It's called the [Scientific method](http://en.wikipedia.org/wiki/Scientific_method). At this point people throw up their arms and say that's too hard to understand or I don't have time for this. Well, if you aren't prepared to work hard and spend the time, you've already lost. Believe me, I've done this for a long time now and I'm convinced this is the most valuable thing you can learn. It doesn't mean you have to have a PhD, wear a white coat and smoke a pipe.
The scientific method really is your friend here. Wandering away from reality into fantasy is so tempting but will lead to failure. Step away from the P&L! Don't look at the notional dollars instead work out what you know and what you can prove.
Question -> observe -> theory -> predict -> measure -> record/publish/peer review -> repeat
Think long and hard about what you believe are the facts/processes that underlie your strategy or idea. Ask questions - What properties can you measure?, what can you predict?, what is the cause? how long does it last?
Observe the world - yes this requires data but it doesn't require P&L! and probably not trading either. Form a theory on how the bits fit together then test this theory - make prediction and measure their success (accuracy etc. NOT P&L).
Objectively assess this new theory. If you are lucky enough to work in a team then this is where peer review and discussion really pay off. If you are on your own, then you face the hard task of dealing with all your behavioral biases and facing up to reality. Good luck with that.
If you make it to the end. Great job, you have learnt something new about the world or at least have an estimate and some associated uncertainty. Most importantly you can have faith that it's right not just wishful thinking.
Now repeat this process until you've built up a useful body of knowledge - a complete picture regarding your idea. Then and only then do you look at trying to best exploit what you know. This will probably lead to more research.
But finally you should have a strategy which you can finally back-test. As little as possible. With the minimum amount of data. If you've done the right job above you will already know what to expect and there shouldn't be many surprises. Resist the urge to tweak, and if you must - only do it with some careful thought and justification.
If it's not the money machine that you expected, then you made a mistake upstream - go back and find the piece you are missing. Start fiddling now and you're off to la-la land again.
Do it right and you get to sleep better at night.
Sorry to ramble, but I really do believe that not sticking to the scientific method is the biggest risk to the process.
| null | CC BY-SA 3.0 | null | 2011-02-07T14:29:52.927 | 2014-06-06T09:45:59.160 | 2014-06-06T09:45:59.160 | 200 | 200 | null |
214 | 1 | null | null | 55 | 6603 | Everyone seems to agree that the option prices predicted by the Black-Merton-Scholes model are inconsistent with what is observed in reality. Still, many people rely on the model by using "the wrong number in the wrong formula to get the right price".
>
Question. What are some of the most important contradictions that one encounters in quantitative finance? Are there any model-independent inconsistencies? Are some of these apparent paradoxes born more equal than the others (i.e. lead to better models)?
I would like to limit the scope of the question to the contradictions arising in quantitative finance (so the well-documented paradoxes of economics and probability theory such as the [St. Petersburg paradox](http://en.wikipedia.org/wiki/St._Petersburg_paradox) or [Allais paradox](http://en.wikipedia.org/wiki/Allais_paradox) are deliberately excluded).
Edit (to address Shane's comment). Hopefully, this question is different in focus and has a slightly narrower scope than the previous question concerning [the most dangerous concepts in quantitative finance work](https://quant.stackexchange.com/questions/156/what-concepts-are-the-most-dangerous-ones-in-quantitative-finance-work). For instance, using VaR "naively" does not lead to immediate contradictions the way a naive application of the BS model does. VaR may be considered inadequate because it seriously underestimates tail risks, but it is not self-contradictory per se (please correct me if I'm wrong). Similarly, the EMH in its weaker forms may not be inconsistent with market reality (at least the opposite has not been demonstrated decisively yet).
| Paradoxes in quantitative finance | CC BY-SA 3.0 | null | 2011-02-07T14:31:40.857 | 2015-10-28T22:41:26.067 | 2017-04-13T12:46:22.953 | -1 | 70 | [
"black-scholes",
"models"
]
|
215 | 2 | null | 209 | 5 | null | You can take a corporate action and look at the price before and after the event. In general, adjusting for corporate actions simply means applying a series of multiplicative or additive factors.
As an example, with a $2:1$ stock split, the price following the event will be $1/2$ what it was prior to the split. If you want to see prices as they were before the split, you need to multiply all post-split prices by 2. On the other hand, if you want to see historical prices so that they are continuous with the new price level, you need to divide them by 2.
Stock dividends are also multiplicative, while cash dividends are additive (meaning that you would add or subtract the values).
Many good price sources can give you either adjusted or unadjusted prices.
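To make the arithmetic concrete, here is a minimal sketch of backward adjustment in Python with pandas. The prices, dates and the `back_adjust` helper are made up for illustration; it simply divides everything before a split's ex-date by the split ratio and subtracts cash dividends from everything before their ex-dates, following the additive convention described above:

```python
import pandas as pd

# Hypothetical daily closes around a 2:1 split on 2020-01-07 (values are made up).
prices = pd.Series(
    [100.0, 102.0, 104.0, 52.0, 53.0, 51.5],
    index=pd.to_datetime(
        ["2020-01-02", "2020-01-03", "2020-01-06",
         "2020-01-07", "2020-01-08", "2020-01-09"]
    ),
)

def back_adjust(prices, splits=None, cash_dividends=None):
    """Adjust history so it is continuous with the latest (post-event) price level.

    splits:         {ex_date: ratio}, e.g. {"2020-01-07": 2.0} for a 2:1 split;
                    prices strictly before the ex-date are divided by the ratio.
    cash_dividends: {ex_date: amount}; the amount is subtracted from prices
                    strictly before the ex-date (the additive convention).
    """
    adj = prices.copy()
    for ex_date, ratio in (splits or {}).items():
        mask = adj.index < pd.Timestamp(ex_date)
        adj.loc[mask] = adj.loc[mask] / ratio
    for ex_date, amount in (cash_dividends or {}).items():
        mask = adj.index < pd.Timestamp(ex_date)
        adj.loc[mask] = adj.loc[mask] - amount
    return adj

# The three pre-split closes (100, 102, 104) become 50, 51, 52 --
# continuous with the post-split level, as in the 2:1 example above.
print(back_adjust(prices, splits={"2020-01-07": 2.0}))
```

The reverse direction (viewing post-event prices at the old level) would multiply instead of divide, as noted in the split example.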
| null | CC BY-SA 2.5 | null | 2011-02-07T14:38:05.580 | 2011-02-07T14:38:05.580 | null | null | 17 | null |
216 | 2 | null | 214 | 16 | null | A very good book addressing such "puzzles of finance" — highly recommended!
- Puzzles of Finance: Six Practical Problems and Their Remarkable Solutions by Mark P. Kritzman
The paradoxes that are treated here are:
- Siegel's Paradox.
- Likelihood of Loss.
- Time Diversification.
- Why the Expected Return Is Not To Be Expected.
- Half Stocks All the Time or All Stocks Half the Time?
- The Irrelevance of Expected Return on Option Valuation.
| null | CC BY-SA 3.0 | null | 2011-02-07T14:42:47.587 | 2013-12-29T14:41:04.920 | 2013-12-29T14:41:04.920 | 263 | 12 | null |