Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
639
|
2
| null |
584
|
0
| null |
You might want to perform a [Two sample Kolmogorov-Smirnov](http://en.wikipedia.org/wiki/Kolmogorov_Smirnov_Test#Two-sample_Kolmogorov.E2.80.93Smirnov_test) test on the empirical CDFs of the distributions: your forecast and the realized distribution. Then use the p-value of the test as the metric of interest. Other statistical tests of difference in distribution which can be so abused are the [Baumgartner, Weiss, Schindler test](http://www.ncbi.nlm.nih.gov/pubmed/15284098), and the variations thereupon. I believe there is also a 2-sample Anderson-Darling type test. There are also the non-symmetric [Kullback Leibler divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence), and a [half-dozen other](http://en.wikipedia.org/wiki/Statistical_distance) definitions of statistical distance. Probably you are most limited by the statistical package or programming language you are working with. If using R, I would guess you can find implementations of all of the above.
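A minimal Python sketch of two of the measures mentioned above, assuming scipy is available; the sample arrays and the bin grid are purely illustrative:
```
import numpy as np
from scipy import stats

# Hypothetical samples standing in for the forecast and the realized distribution
forecast = np.random.normal(0.0, 1.0, size=1000)
realized = np.random.normal(0.1, 1.2, size=1000)

# Two-sample Kolmogorov-Smirnov test on the empirical CDFs
ks_stat, p_value = stats.ks_2samp(forecast, realized)
print("KS statistic:", ks_stat, "p-value:", p_value)

# Kullback-Leibler divergence between histogram estimates of the two densities
bins = np.linspace(-5, 5, 51)
p, _ = np.histogram(forecast, bins=bins, density=True)
q, _ = np.histogram(realized, bins=bins, density=True)
eps = 1e-12  # avoid division by zero in empty bins
print("KL divergence:", stats.entropy(p + eps, q + eps))
```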
| null |
CC BY-SA 2.5
| null |
2011-03-03T03:38:47.373
|
2011-03-03T03:38:47.373
| null | null |
108
| null |
641
|
2
| null |
298
|
8
| null |
[My company](http://www.miriamlaurel.com) currently uses Scala for all new projects in algorithmic trading. We also have an internal portfolio management / monitoring application written in Scala (with [Circumflex](http://www.circumflex.ru) web framework).
| null |
CC BY-SA 2.5
| null |
2011-03-04T18:54:54.583
|
2011-03-04T18:54:54.583
| null | null |
527
| null |
642
|
2
| null |
593
|
2
| null |
There is a [surplus of production](http://www.theoildrum.com/node/7519) between the Mexican Gulf and Canadian sources of crude. This makes refineries in the Midwestern US oversupplied, driving the price of WTI down.
| null |
CC BY-SA 2.5
| null |
2011-03-05T16:51:21.707
|
2011-03-05T16:51:21.707
| null | null |
126
| null |
643
|
2
| null |
557
|
6
| null |
Forget about BOVESPA; nobody in Brazil is really doing anything that relies on speed and stability. I can say that from my personal experience. I would say that, depending on your demands, QuickFIX can be as good as FIX.
| null |
CC BY-SA 2.5
| null |
2011-03-05T18:51:56.677
|
2011-03-05T18:51:56.677
| null | null |
531
| null |
644
|
2
| null |
575
|
0
| null |
ARMA, VAR, VEC. Please, check this out. [http://www.amazon.com/Analysis-Financial-Time-Ruey-Tsay/dp/0471415448](http://rads.stackoverflow.com/amzn/click/0471415448)
| null |
CC BY-SA 2.5
| null |
2011-03-05T18:56:28.223
|
2011-03-05T18:56:28.223
| null | null |
531
| null |
645
|
2
| null |
545
|
0
| null |
Well, the goal of a genetic algo is to find the best solution without going through all the possible scenarios, because that would take too long. So of course it is curve fitting; that's the goal!
| null |
CC BY-SA 2.5
| null |
2011-03-06T20:40:37.093
|
2011-03-06T20:40:37.093
| null | null | null | null |
646
|
1
| null | null |
4
|
245
|
Can anyone provide a simple example of picking from two distributions, such that the two generated time series give a specified value of Pearson's correlation coefficient? I would like to do this in a simple monte-carlo risk assessment. Ideally, the method should take two arbitrary CDFs and a correlation coefficient as input.
I asked a [similar question on picking from correlated distributions](https://stats.stackexchange.com/questions/7515/what-are-some-techniques-for-sampling-two-correlated-random-variables) on [stats.stackexchange.com](http://stats.stackexchange.com) and learned that the mathematical machinery required is called a copula. However, I found quite a steep learning curve waiting after consulting the references... some simple examples would be extremely helpful.
Thanks!
|
Picking from two correlated distributions
|
CC BY-SA 2.5
| null |
2011-03-06T23:02:33.750
|
2011-03-08T18:13:49.087
|
2017-04-13T12:44:17.637
|
-1
|
47
|
[
"risk",
"time-series",
"monte-carlo"
] |
647
|
1
| null | null |
18
|
13374
|
Has anybody else out there made this switch? I'm considering it right now. What were the negatives and positives of the switch?
|
Switching from Matlab to Python for Quant Trading and Research
|
CC BY-SA 2.5
| null |
2011-03-07T00:02:02.093
|
2012-12-29T10:15:04.197
| null | null |
443
|
[
"quant-trading-strategies"
] |
648
|
2
| null |
647
|
2
| null |
I've only found Matlab to be useful for modeling and testing algorithms, with the implementation done in another language.
| null |
CC BY-SA 2.5
| null |
2011-03-07T01:22:38.997
|
2011-03-07T01:22:38.997
| null | null |
126
| null |
649
|
1
|
1014
| null |
5
|
3513
|
What is a persistent variable in the context of regression analysis? For example, the dividend-to-price ratio (D/P) is considered to be a persistent variable when used to model future returns (Stambaugh bias literature).
|
What is a persistent variable?
|
CC BY-SA 2.5
| null |
2011-03-07T12:43:42.557
|
2011-04-19T14:56:43.673
|
2011-03-08T06:15:13.317
|
35
|
40
|
[
"regression"
] |
650
|
2
| null |
647
|
0
| null |
Everything has pros and cons, so it comes down to personal preference, but more importantly, to what your shop uses. Most use a mix of interpreted and compiled languages.
| null |
CC BY-SA 2.5
| null |
2011-03-07T15:38:18.733
|
2011-03-07T15:38:18.733
| null | null |
471
| null |
651
|
2
| null |
647
|
17
| null |
I made the switch years ago, and it has been great. I even switched the class I teach from Matlab to Python. Here are some things to consider
- Others can run your Python code when you share it with them. Matlab has compilers and the like, but they are an extra step you must take since most people do not have Matlab on their desk.
- Python and its extensions are open source and so allow you to see under the hood
- Python ctypes is slightly nicer than Matlab C integration
- Python syntax is excellent (e.g. list comprehensions), and NumPy syntax for arrays is also cleaner than Matlab's
- Python is easier to integrate with external data sources and files
On the other hand
- Matlab integrates nicely with Java
- Matlab optimization routines are really excellent
- Matlab 3D plotting is better
| null |
CC BY-SA 2.5
| null |
2011-03-07T18:44:07.800
|
2011-03-07T18:44:07.800
| null | null |
254
| null |
652
|
1
|
666
| null |
13
|
370
|
I am currently doing a project involving the Monte-Carlo method. I wonder if there are papers dealing with a "learning" refinement method to enhance MC convergence, for example:
Objective: estimate $E(X)\thickapprox \frac{1}{10\,000}\sum_{i=1}^{10\,000}X_i$
-> step 1 (500 simulations): $approx_1=\frac{1}{500}\sum_{i=1}^{500}X_i$
(i) Define an 'acceptance interval'
>
$A_1 = [approx_1-\epsilon_1,approx_1+\epsilon_1]$
where $\epsilon_1$ could be a function of the empirical variance and other statistics
-> step 2 (500 further simulations): discard all simulations that fall outside the interval $A_1$, then $approx_2=\frac{1}{500}\sum_{i=1}^{500}X_i^{(2)}$
New 'acceptance interval'
>
$A_2 = [approx_2-\epsilon_2,approx_2+\epsilon_2]$
where $\epsilon_2 < \epsilon_1$
...
$\epsilon \xrightarrow{} 0$
Thank you for your help
|
Enhancing Monte-Carlo convergence (crude method)
|
CC BY-SA 2.5
| null |
2011-03-07T20:37:19.963
|
2011-11-03T14:42:55.423
| null | null |
539
|
[
"simulations"
] |
653
|
2
| null |
646
|
4
| null |
Those people citing copulas are actually answering a different question, because they are leading you to a solution whose transformed distribution function has the requested correlation.
You have two distributions $P_1$ and $P_2$. Let me begin by pointing out that this problem is not actually solvable in the general case. That's because either $P_1$ or $P_2$ can in principle be a point distribution with 100% of its density at a single value. In that case, of course, all correlations will be zero.
More generally, the shapes of $P_1$ and $P_2$ will put a ceiling $\rho_{\text{max}}$ on the size of correlation $\rho$ that is achievable even in principle. That ceiling may be 100% but it is difficult to compute in the general case.
Your best bet would be to use a copula with 100% correlation $r$, to get a lower bound estimate for the maximum possible correlation. Compute the Pearson correlation $\rho$ of your actual distribution from your $r=$100% copula and you have an estimate for $\rho_{\text{max}}$. If your target correlation is smaller than that, you can use a root-finder with copula correlation $r$ as input and resulting correlation $\rho$ as an output. You'll have to keep recomputing $\rho$ of course, which may in principle involve a nasty integral.
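A rough Python sketch of the procedure described above (Gaussian copula, estimate the ceiling at $r$ close to 1, then root-find on the copula parameter); the two marginals, the target correlation, and the sample size are illustrative assumptions only:
```
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
n = 200_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)   # fixed driving noise

# Hypothetical marginals: exponential and lognormal (any inverse CDFs would do)
ppf1 = stats.expon(scale=2.0).ppf
ppf2 = stats.lognorm(s=0.8).ppf

def sample_with_copula(r):
    """Join the two marginals with a Gaussian copula of parameter r."""
    g1, g2 = z1, r * z1 + np.sqrt(1.0 - r ** 2) * z2       # correlated normals
    u1, u2 = stats.norm.cdf(g1), stats.norm.cdf(g2)        # uniforms carrying the dependence
    return ppf1(u1), ppf2(u2)                              # push through target inverse CDFs

def pearson_for(r):
    x, y = sample_with_copula(r)
    return np.corrcoef(x, y)[0, 1]

rho_max = pearson_for(0.9999)          # estimate of the achievable ceiling described above
target = 0.5
if target < rho_max:
    r_star = optimize.brentq(lambda r: pearson_for(r) - target, 0.0, 0.9999)
    print("copula parameter:", round(r_star, 4), "achievable ceiling:", round(rho_max, 4))
```
Because the driving normals are held fixed, `pearson_for` is deterministic and the root-finder behaves; Monte Carlo error still blurs the answer for small sample sizes.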
| null |
CC BY-SA 2.5
| null |
2011-03-07T22:49:50.020
|
2011-03-08T18:13:49.087
|
2011-03-08T18:13:49.087
|
254
|
254
| null |
654
|
2
| null |
647
|
14
| null |
Rich, you might find this [cheatsheet](http://mathesaurus.sourceforge.net/matlab-python-xref.pdf) useful on your journey.
I was advocating Python over Matlab to a co-worker just minutes ago. I should start by saying that Matlab is a fine piece of software - its documentation is amazing, as are the pdfs that accompany the various toolboxes (as I'm sure you know).
However, regarding Python, Brian B brings up many good points. Two big advantages I would like to emphasize:
- I know that I will be able to develop Python anywhere I might work in the future (including at home over the weekends). In other words, learning the language is time well spent. Learn once, and benefit for years. It's the same reason why I love working on the command line in *nix environments, instead of GUIs (MS Office ribbons come to mind).
- I acknowledge that a very large portion of quant research is simple, unglamorous data manipulations - Python serves as a strong glue language (like Perl, but with much stronger numerical libraries). I can set cron jobs for Python scripts that load data, send me emails, etc. I'm sure there are those who do this in Matlab (just like there are those that do all sorts of crazy stuff in VBA), but Python is a far better tool for these jobs.
Having said all of that, all legit quant shops can afford Matlab (and all of the costly toolboxes required for database access, xls read/write, compilation - which really should be free IMO). If you are purely research, then you can probably get by with only Matlab, but I find it somewhat restrictive and, perhaps, somewhat risky in terms of availability.
| null |
CC BY-SA 2.5
| null |
2011-03-08T01:03:56.827
|
2011-03-08T01:03:56.827
| null | null |
390
| null |
657
|
5
| null | null |
0
| null |
Black and Scholes first proposed the model in 1973 in a paper titled "The Pricing of Options and Corporate Liabilities".
The equation at the center of the model uses partial differential equations to calculate the price of options.
In 1997, Scholes received the Nobel Prize in Economics for his work. Black was ineligible as he died in 1995.
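For reference, the closed-form European call price that comes out of that PDE can be evaluated in a few lines; a minimal Python sketch with illustrative parameters:
```
from math import log, sqrt, exp
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """European call price under the Black-Scholes model (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Illustrative numbers only
print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))
```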
| null |
CC BY-SA 2.5
| null |
2011-03-08T04:51:05.137
|
2011-03-08T06:18:19.647
|
2011-03-08T06:18:19.647
|
126
|
126
| null |
658
|
4
| null | null |
0
| null |
Black-Scholes is a mathematical model used for pricing options.
| null |
CC BY-SA 2.5
| null |
2011-03-08T04:51:05.137
|
2011-03-08T06:18:16.620
|
2011-03-08T06:18:16.620
|
126
|
126
| null |
659
|
5
| null | null |
0
| null |
Many times it is not possible to determine an exact solution to a problem because there are random(ish) possible inputs, and therefore random outcomes.
The typical "Monte Carlo method" goes something like:
1. Determine the possible inputs.
2. Generate the random inputs based on the probability distribution of your inputs.
3. Compute the output based on that particular random input.
4. Compile the results.
Monte Carlo simulation in the context of quantitative finance refers to a set of techniques used to generate artificial time series of the stock price over time, from which option prices can be derived. There are several choices available in this regard. The first choice is to apply a standard scheme such as the Euler, Milstein, or implicit Milstein scheme. The advantage of these schemes is that they are easy to understand and their convergence properties are well known. The other choice is to use a method that is better suited to, or specifically designed for, the model. These schemes are designed to have faster convergence to the true option price and, in some cases, to avoid the negative variances that can sometimes be generated by standard methods.
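A minimal sketch of the first choice above (an Euler discretisation of geometric Brownian motion used to price a European call); the drift, volatility, and step counts are illustrative only:
```
import numpy as np

def euler_gbm_paths(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Euler discretisation of dS = mu*S*dt + sigma*S*dW (geometric Brownian motion)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        s = s + mu * s * dt + sigma * s * dw
    return s

# Illustrative parameters: price a European call by averaging discounted payoffs
s0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
sT = euler_gbm_paths(s0, r, sigma, T, n_steps=252, n_paths=100_000)
price = np.exp(-r * T) * np.maximum(sT - K, 0.0).mean()
print("Monte Carlo call price:", price)
```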
| null |
CC BY-SA 3.0
| null |
2011-03-08T05:01:11.960
|
2015-07-24T14:18:25.360
|
2015-07-24T14:18:25.360
| null |
126
| null |
660
|
4
| null | null |
0
| null |
Monte Carlo simulation methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
| null |
CC BY-SA 3.0
| null |
2011-03-08T05:01:11.960
|
2015-07-24T14:16:34.387
|
2015-07-24T14:16:34.387
| null |
126
| null |
661
|
2
| null |
431
|
26
| null |
I'm putting up videos about what I'm learning while I read through Paul Wilmott on Quantitative Finance, it's at [NathansLessons.com](http://nathanslessons.com). So far I have 23 videos covering chapters 1 through 16. The videos are in "virtual blackboard" format, like Khan Academy.
| null |
CC BY-SA 2.5
| null |
2011-03-08T06:13:02.667
|
2011-03-08T06:13:02.667
| null | null | null | null |
665
|
2
| null |
629
|
6
| null |
Maybe I completely misunderstood the question, but it seems to me that you are looking to find a model structure as opposed to fitting a specified/known model. In your context the model specification (the trading rules) is unknown... Am I right?
If that is the case, maybe genetic programming:
[http://en.wikipedia.org/wiki/Genetic_programming](http://en.wikipedia.org/wiki/Genetic_programming)
is what you need?
In a nutshell, it is a sub-class of GA which applies an evolutionary approach to finding the model structure (a program) which is most fit... throughout generations of evolutionary improvements.
My guess is that the language dictionary in this case is the set of constructs (variables) you have at your disposal, and the language grammar is the rules...
Just a thought!
Btw, good question!
| null |
CC BY-SA 2.5
| null |
2011-03-08T15:36:31.407
|
2011-03-08T15:36:31.407
| null | null |
40
| null |
666
|
2
| null |
652
|
7
| null |
If your variable of integration is truly one-dimensional, as you seem to be saying, then you should be using quadrature to evaluate the expectation integral. The computational efficiency of quadrature is much higher than Monte Carlo in one dimension (even accounting for modified sampling).
If your problem is actually multidimensional, your best bet is to use the first few iterations (you suggest 500 above) to help choose a scheme for importance sampling. Your windowing scheme is a different trick sometimes labeled stratified sampling, and tends to get tricky from a coding perspective.
To perform importance sampling, you will modify the distribution with what is known as an equivalent measure of your random samples so that most of them fall in the "interesting" region. The easiest technique is to ensure you are sampling from the multivariate normal distribution, and then shift the mean and variance of your samples such that, say, 90% of them fall within your "interesting" region.
Having shifted your samples, you need to then track their likelihood ratio (or Radon-Nikodym derivative) versus the original distribution, because your samples now need to be weighted by that ratio in your Monte Carlo sum. In the case of a shift in normal distributions, this is fairly easy to compute for each sample $\vec{x}$, as
$$
\frac{1}{\sqrt{2\pi\ \text{det}A^{-1}}}\exp{\left[-\frac12 (x-b)A(x-b)^t\right]}
$$
where $A$ and $b$ are the covariance matrix and mean of your change to the original multivariate normal.
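A one-dimensional toy version of this measure change, assuming a standard normal shifted by a constant mean $b$; the rare-event payoff and the shift are illustrative, not taken from the text above:
```
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
b = 3.0                          # shift the sampling mean into the "interesting" region

# Target: E[1{X > 3}] with X ~ N(0, 1), a rare event under plain Monte Carlo
x_shifted = rng.normal(b, 1.0, size=n)           # sample from the shifted measure
weights = np.exp(-b * x_shifted + 0.5 * b ** 2)  # likelihood ratio (Radon-Nikodym derivative)
estimate = np.mean((x_shifted > 3.0) * weights)  # weighted Monte Carlo sum

plain = np.mean(rng.normal(0.0, 1.0, size=n) > 3.0)
print("importance sampling:", estimate, "plain MC:", plain)
```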
| null |
CC BY-SA 3.0
| null |
2011-03-08T22:47:06.013
|
2011-11-03T14:42:55.423
|
2011-11-03T14:42:55.423
|
254
|
254
| null |
667
|
5
| null | null |
0
| null | null |
CC BY-SA 2.5
| null |
2011-03-09T02:52:57.143
|
2011-03-09T02:52:57.143
|
2011-03-09T02:52:57.143
|
-1
|
-1
| null |
|
668
|
4
| null | null |
0
| null |
Interactive Brokers (IB) is a multi-asset retail and institutional brokerage firm.
| null |
CC BY-SA 3.0
| null |
2011-03-09T02:52:57.143
|
2011-09-08T19:23:01.327
|
2011-09-08T19:23:01.327
|
1355
|
1355
| null |
669
|
5
| null | null |
0
| null | null |
CC BY-SA 2.5
| null |
2011-03-09T03:06:36.820
|
2011-03-09T03:06:36.820
|
2011-03-09T03:06:36.820
|
-1
|
-1
| null |
|
670
|
4
| null | null |
0
| null |
High-frequency trading (HFT) is the use of sophisticated technological tools to trade securities like stocks or options, and is typically characterized by fast order entry and cancellation, low-latency market access, and co-location of servers on an exchange.
| null |
CC BY-SA 3.0
| null |
2011-03-09T03:06:36.820
|
2011-12-14T15:35:35.383
|
2011-12-14T15:35:35.383
|
1106
|
1355
| null |
671
|
5
| null | null |
0
| null |
Time series analysis uses methods to extract meaningful information and statistics about a particular time series.
Time series forecasting uses the results of time series analysis to predict future events. For example, a temporal observation set of stock closing prices represents a time series, while time series forecasting is an attempt to predict a future closing price of that stock.
Time series can be uniformly sampled (one observation every day) or event-based (one observation each time a trade is done, like tick-by-tick tapes). The techniques used to work with these two different kinds of stochastic processes are not the same.
| null |
CC BY-SA 4.0
| null |
2011-03-09T03:12:52.413
|
2022-11-06T02:36:52.523
|
2022-11-06T02:36:52.523
|
2299
|
126
| null |
672
|
4
| null | null |
0
| null |
A temporal sequence of events measured at discrete points in time.
| null |
CC BY-SA 3.0
| null |
2011-03-09T03:12:52.413
|
2015-04-22T14:38:57.800
|
2015-04-22T14:38:57.800
|
2553
|
126
| null |
674
|
2
| null |
629
|
3
| null |
Here is an example of the 75% trading rule coded in R: [Can one beat the random walk](http://intelligenttradingtech.blogspot.com/2011/03/can-one-beat-random-walk-impossible-you.html)
This is how the author describes the rule:
>
The following script will generate a random series of data and follow the so-called 75% rule, which says: Pr[Price > Price(n-1) & Price(n-1) < Price_median] or [Price < Price(n-1) & Price(n-1) > Price_median] = 75%.
| null |
CC BY-SA 2.5
| null |
2011-03-09T11:09:24.750
|
2011-03-09T11:09:24.750
| null | null |
357
| null |
675
|
1
| null | null |
28
|
20362
|
Does anyone know of a Python library/source that is able to calculate the traditional mean-variance portfolio? To press my luck, any resources where the library/source also contains functions such as alternative covariance estimators (e.g. shrinkage), lower partial moment portfolio optimization, etc...
Like everyone else, I have developed and implemented one or two variants myself. Is it just me, or isn't there much out there in terms of Python for financial/portfolio applications? At least nothing out there matching efforts like Rmetrics for R.
|
What is the reference python library for portfolio optimization?
|
CC BY-SA 3.0
| null |
2011-03-09T21:39:17.253
|
2020-06-22T08:15:46.740
|
2016-04-17T05:11:58.620
|
467
|
557
|
[
"portfolio",
"optimization",
"python"
] |
677
|
2
| null |
675
|
18
| null |
Sorry for not being able to give more than one hyperlink, please do some web search for the project pages.
Portfolio optimization could be done in python using the cvxopt package
which covers convex optimization. This includes quadratic programming as a special case for the risk-return optimization. In this sense, the following example could be of some use:
[http://abel.ee.ucla.edu/cvxopt/examples/book/portfolio.html](http://abel.ee.ucla.edu/cvxopt/examples/book/portfolio.html)
Ledoit-Wolf shrinkage is, for example, covered in scikit-learn.
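A minimal sketch of the quadratic-programming formulation with cvxopt's `solvers.qp`; the expected returns, covariance matrix, and risk-aversion parameter below are made-up inputs, not part of the linked example:
```
import numpy as np
from cvxopt import matrix, solvers

# Hypothetical expected returns and covariance matrix for four assets
mu = np.array([0.08, 0.10, 0.12, 0.07])
Sigma = np.array([[0.10, 0.02, 0.04, 0.01],
                  [0.02, 0.12, 0.03, 0.02],
                  [0.04, 0.03, 0.15, 0.02],
                  [0.01, 0.02, 0.02, 0.08]])
gamma = 5.0                      # risk-aversion parameter
n = len(mu)

# Maximise mu'w - (gamma/2) w'Sigma w, i.e. minimise (1/2) w'(gamma Sigma) w - mu'w
P = matrix(gamma * Sigma)
q = matrix((-mu).tolist())
G = matrix(-np.eye(n))           # long-only constraint: -w <= 0
h = matrix([0.0] * n)
A = matrix(np.ones((1, n)))      # fully invested: 1'w = 1
b = matrix([1.0])

solvers.options['show_progress'] = False
w = np.array(solvers.qp(P, q, G, h, A, b)['x']).flatten()
print("optimal weights:", w.round(3))
```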
| null |
CC BY-SA 2.5
| null |
2011-03-09T22:05:39.043
|
2011-03-09T22:13:40.647
|
2011-03-09T22:13:40.647
|
560
|
560
| null |
678
|
1
| null | null |
5
|
823
|
To the degree in which it's possible, I'd like to know what the community believes are the objective skills/knowledge required to run a successful Quant book.
I'm not interested in strategies, obscure maths or programming languages here. I'm interested in the practical realities of successfully managing/executing a strategy - the nitty-gritty, practical knowledge outside of the strategy that is required in practice (or a huge advantage to know). Such topics are often somewhat boring and are rarely taught in school.
This knowledge will overlap significantly with that needed by traditional PMs. Let's assume this Quant PM is at a relatively small shop which does not have a large back-office or particularly sophisticated traders.
I'm looking to fill in any potential missing areas of knowledge. I imagine other traditionally trained quants may benefit from this discussion.
Some possible suggestions:
- The ins and outs of Securities Lending and traditional services offered by PBs
- Marketing of strategy (internally, but perhaps formally to outside investors)
- Regulatory Environment / Pending Regulation
- Deep understanding of 'Liquidity' (beyond simply historical ADVs, this may include how crowded you believe your trade to be)
- Algorithmic Execution (what is the best trading approach given strategy's alpha decay? Should you always be a liquidity demander?)
- ???
|
Quant PMs need to know the following...
|
CC BY-SA 2.5
| null |
2011-03-10T03:09:16.037
|
2011-03-10T03:34:59.103
| null | null |
390
|
[
"portfolio-management"
] |
679
|
2
| null |
678
|
6
| null |
What does a PM need to know is very specific to the investment goals of a firm.
Regulatory issues are particular to location and asset class. For example, a firm that trades US equities may or may not have to become a member of FINRA, which in turn would dictate whether the PM must take the Series 7. The firm's lawyer should be able to address that.
Issues like liquidity and execution approach will be particular to the alpha model. Market makers primarily provide quotes, whereas large stat arb groups need to know their broker algos. And the broker algos for equities can be very different from the algos for interest rates.
The reason books and schools focus on programming and math is that those topics have some degree of universality. A lot of the extras are things a trader will learn on the job---often the "hard way"---which is why nobody becomes a PM without a lot of experience.
| null |
CC BY-SA 2.5
| null |
2011-03-10T03:34:59.103
|
2011-03-10T03:34:59.103
| null | null |
35
| null |
681
|
1
|
4128
| null |
8
|
2009
|
What is the varswap basis? I am not completely sure what this number represents. Is it the basis between the estimated future realized volatility and the vol surface implied volatilty at a specified tenor?
|
Varswap Basis - What is it in practice?
|
CC BY-SA 2.5
| null |
2011-03-10T11:39:42.917
|
2012-09-16T20:42:06.320
| null | null |
293
|
[
"volatility",
"swaps"
] |
682
|
2
| null |
557
|
30
| null |
In order to answer your question (for you) you would need something to compare to. You would need numbers to know if it is slower/faster, how much, and if it will impact your system overall. Also knowing your performance goals could narrow down the options.
My advice is to take a look at the overall architecture of the system you have or intend to build. To just look at QuickFIX is rather meaningless without the whole chain involved in processing information and reacting to it. As an example, say QuickFIX is 100 times faster than some part (in the chain of processing) you have or build. Now, replacing QuickFIX with another part which is 100 times faster than QuickFIX would not change anything because you're still held back by the slowest point. And remember that network hops are usually very expensive compared to in-memory processing of data.
If you for some reason cannot compare different candidates against each other, why not start with e.g. QuickFIX, but make the system in such a way that it can be replaced with something faster later on.
Generally speaking, QuickFIX is not the fastest option, but the key point is that it might not have to be. If performance is very critical and one has resources, you usually end up buying something or building something yourself. Drawbacks here are having resources like time, money and skilled people.
To answer your question better one would need to know other aspects as well, like available resources (money, time, skill), overall system overview, performance expectations and other factors that limit decisions. E.g. if money is not a limiting factor, just find the fastest option and buy it.
| null |
CC BY-SA 2.5
| null |
2011-03-10T13:09:20.057
|
2011-03-10T14:00:36.357
|
2011-03-10T14:00:36.357
|
563
|
563
| null |
683
|
1
|
685
| null |
10
|
1286
|
CBOE has introduced [credit event binary options](http://www.cboe.com/micro/credit/introduction.aspx), kind of as a retail trader's CDS. These binary options are worth $1 if there is a credit event (ie, bankruptcy) before expiration, and $0 if there is no credit event (ie, solvency) at expiration. The option's premium is quoted in pennies and indicates the chance of a bankruptcy during the option's lifetime (eg, $0.11 is 11% chance).
How would someone price one of these options? My gut is that the premium should be similar to the delta of a deeply out-of-the-money put option. Any other thoughts?
|
How would one price a "credit event binary option"?
|
CC BY-SA 2.5
| null |
2011-03-10T16:37:52.143
|
2011-03-10T18:47:44.793
| null | null |
35
|
[
"options"
] |
684
|
2
| null |
683
|
0
| null |
The CBOE site states that the premium will approximately reflect the probability of bankruptcy. Usually the delta reflects the probability that the OTM option will be ITM, so I am not sure what is involved in the premium calculation. However, if you can compute the delta, you can compare it to the premium to see if it is over- or under-valued.
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-10T17:05:17.050
|
2011-03-10T17:05:17.050
| null | null |
520
| null |
685
|
2
| null |
683
|
5
| null |
I would see if a binomial tree gives reasonable answers (i.e., can you get close to the CEBOs with high volume). You could determine the probability of default over a given interval using the KMV-Merton model. Then use the probability over each of these intervals to determine the probabilities for each of the branches (since the payoff is in default, the tree will be very one-sided). Then discount each of branches back at your risk-free rate.
I don't have first-hand experience calculating the KMV-Merton model, but it's pretty common, so I think you should be able to find code out there for it (it's calculated iteratively).
Another option could be to think about no arbitrage with any CDS and swaps that are already written on the underlying. But given that your CEBO are traded, there may also be a liquidity premium wrapped up in them.
Looking quickly at the website, it doesn't look like retail investors can sell protection. Is that right? I wonder who has the other side of the option.
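A back-of-the-envelope sketch of the discounting logic described above, assuming a constant risk-neutral default intensity in place of a full KMV-Merton calibration (all numbers illustrative):
```
import numpy as np

def cebo_price(hazard_rate, r, T, payoff=1.0, steps=12):
    """Price a $1-on-default binary with no recovery under a constant hazard rate.

    hazard_rate : assumed annualised default intensity (e.g. from a structural model)
    r           : risk-free rate used for discounting
    T           : option maturity in years
    """
    t = np.linspace(0.0, T, steps + 1)
    survival = np.exp(-hazard_rate * t)
    default_prob = survival[:-1] - survival[1:]      # probability of default in each sub-interval
    midpoints = 0.5 * (t[:-1] + t[1:])
    discount = np.exp(-r * midpoints)
    return payoff * np.sum(discount * default_prob)

# Illustrative numbers: 11% annual default intensity, 3% risk-free, 1-year option
print(cebo_price(hazard_rate=0.11, r=0.03, T=1.0))
```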
| null |
CC BY-SA 2.5
| null |
2011-03-10T17:49:30.770
|
2011-03-10T17:49:30.770
| null | null |
106
| null |
686
|
2
| null |
683
|
2
| null |
Since there is no recovery value, any credit default model should be suitable, where I suppose reduced-form models would be more appropriate.
| null |
CC BY-SA 2.5
| null |
2011-03-10T18:47:44.793
|
2011-03-10T18:47:44.793
| null | null |
357
| null |
687
|
2
| null |
681
|
2
| null |
Whatever it is, it clearly is not a common term of art in the industry. Three possibilities come to mind:
- To the options: Since one can (under certain assumptions about continuity etc) synthesize a variance swap from European option contract prices, the basis may represent the difference between varswap price and the synthesized price.
- To the VIX: If we are talking about SPX variance swaps there is a convexity-related basis to the VIX futures.
- To another tenor: As with futures, there is a calendar basis in variance swaps.
| null |
CC BY-SA 2.5
| null |
2011-03-10T19:40:34.577
|
2011-03-10T19:40:34.577
| null | null |
254
| null |
688
|
2
| null |
156
|
8
| null |
"Persistent autocorrelations in volatility processes are due to long-term memory only." I cannot help but sigh at the hundreds of papers which work under this assumption. Haven't people heard of regime shifts?
| null |
CC BY-SA 2.5
| null |
2011-03-11T03:53:05.187
|
2011-03-11T03:53:05.187
| null | null | null | null |
689
|
1
|
1103
| null |
7
|
692
|
Let's restrict the scope of the question a little bit: I'm interested to learn about major differences in pricing formulae for nominal government bonds. The pricing formulae for inflation-linked bonds could well make a topic of another (and more difficult) question.
[The question](http://area51.stackexchange.com/proposals/117/quantitative-finance/685#685) was previously asked by [cletus](http://area51.stackexchange.com/users/149/cletus) during the definition phase of the site.
|
How do bond pricing formulae differ between the US, UK and the Euro zone?
|
CC BY-SA 2.5
| null |
2011-03-11T14:21:30.933
|
2022-07-01T09:58:20.080
| null | null |
70
|
[
"fixed-income",
"pricing-formulae"
] |
690
|
1
|
703
| null |
10
|
1713
|
This question is inspired by the remark due to Vladimir Piterbarg made in a related [thread](http://wilmott.com/messageview.cfm?catid=19&threadid=17173) on Wilmott back in 2004:
>
Not to be a party-pooper, but Malliavin calculus is essentially useless in finance. Any practical result ever obtained with Malliavin calculus can be obtained by much simpler methods by eg differentiating the density of the underlying process.
At the same time, it seems that recently there has been a rather noticeable flow of academic papers and books devoted to applications of the Malliavin calculus to finance (see, e.g., [Malliavin Calculus for Lévy Processes with Applications to Finance](http://books.google.com/books?id=G9EvB_HZCVwC&pg=PR2&dq=Malliavin+Calculus+for+L%C3%A9vy+Processes+with+Applications+to+Finance+%28Universitext%29&hl=en&ei=-D56TY6EIIS84gaB3rCHBg&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCgQ6AEwAA#v=onepage&q&f=false) by Di Nunno, Øksendal, and Proske, [Stochastic Calculus of Variations in Mathematical Finance](http://books.google.com/books?id=VsmdN58iWAYC&printsec=frontcover&dq=Stochastic+Calculus+of+Variations+in+Mathematical+Finance&hl=en&src=bmrr&ei=tUN6TduZBpW04gbGpPHhBQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCgQ6AEwAA#v=onepage&q&f=false) by Malliavin and Thalmaier, and the references therein).
Question. So do practitioners actually use the Malliavin calculus to compute Greeks these days? Are there any other real-world financial applications of the Malliavin calculus? Or does Dr. Piterbarg's assessment of the practical potential of the theory remain to be essentially accurate?
|
How do practitioners use the Malliavin calculus (if at all)?
|
CC BY-SA 2.5
| null |
2011-03-11T16:12:27.277
|
2011-03-19T17:36:57.320
| null | null |
70
|
[
"stochastic-calculus",
"application"
] |
691
|
1
| null | null |
12
|
460
|
I'm working on some financial analysis code which I'd like to test against a historical data series to analyze the correlations of my algorithm's output with some non-finance-related data. Ideally, I'd like to test against a data series which provides a monthly float value as far back as 1997. Maybe it's the temperature somewhere on the 1st of the month. Maybe it's the number of traffic accidents on the 1st day of each month in a city. Basically, just something somewhat random, but with a monthly value to test against.
Can anybody think of a publicly available data series which might have a monthly data point as far back as 1997? A link to the data source would be even better.
|
Seeking Historical Non-Finance Datapoints for Backtesting
|
CC BY-SA 2.5
| null |
2011-03-11T20:03:23.573
|
2011-03-18T19:21:29.017
| null | null |
570
|
[
"data",
"backtesting"
] |
692
|
1
|
732
| null |
15
|
3726
|
I'm looking at doing some research drawing comparisons between various methods of approaching option pricing. I'm aware of the Monte Carlo simulation for option pricing, Black-Scholes, and that dynamic programming has been used too.
Are there any key (or classical) methods that I'm missing? Any new innovations? Recommended reading or names would be very much appreciated.
Edit: Additionally, what are the standard methods and approaches for assessing/analysing a model or pricing approach?
|
Methods for pricing options
|
CC BY-SA 2.5
| null |
2011-03-11T20:16:01.970
|
2013-01-15T10:12:21.693
|
2011-03-11T21:49:02.250
|
473
|
473
|
[
"options",
"option-pricing",
"black-scholes",
"monte-carlo"
] |
693
|
2
| null |
691
|
6
| null |
Here are a few links:
[http://www.economagic.com/](http://www.economagic.com/)
[http://www.eia.doe.gov/electricity/data.cfm](http://www.eia.doe.gov/electricity/data.cfm)
[http://www.drought.unl.edu/dm/source.html](http://www.drought.unl.edu/dm/source.html)
[http://gcmd.nasa.gov/KeywordSearch/Home.do?Portal=GCMD&MetadataType=0](http://gcmd.nasa.gov/KeywordSearch/Home.do?Portal=GCMD&MetadataType=0)
| null |
CC BY-SA 2.5
| null |
2011-03-11T20:36:51.047
|
2011-03-11T20:36:51.047
| null | null |
392
| null |
694
|
2
| null |
691
|
8
| null |
The website [www.infochimps.com](http://www.infochimps.com) has a lot of unusual datasets, many of which are free.
| null |
CC BY-SA 2.5
| null |
2011-03-11T21:29:57.567
|
2011-03-11T21:29:57.567
| null | null |
571
| null |
695
|
2
| null |
692
|
5
| null |
I would also look into pricing models based upon distributions other than the lognormal (Black-Scholes). Do some research on "fat-tailed" or stable distributions. These are also known by their specific distribution names: Lévy, Lévy-Poisson, or Cauchy.
[http://en.wikipedia.org/wiki/Fat_tail](http://en.wikipedia.org/wiki/Fat_tail)
| null |
CC BY-SA 2.5
| null |
2011-03-11T22:46:47.287
|
2011-03-12T22:39:43.387
|
2011-03-12T22:39:43.387
|
35
|
520
| null |
696
|
1
| null | null |
3
|
1576
|
I'm currently studying for my undergrad in CS, and considering doing a grad degree in both CS and Math. I would like to see what you guys would recommend to prepare myself for a field as a quant. I am specifically interested in an algotrading (algorithmic quant) position, to be able to write models and software to do automated trading. Any suggestions on where to begin? I have no familiarity with trading. Is it possible to set up a practice account and begin doing some trades so I understand how that works?
Thanks!
|
On my way to becoming a Quant
|
CC BY-SA 2.5
| null |
2011-03-12T05:05:20.987
|
2011-03-12T09:47:47.703
| null | null |
566
|
[
"algorithmic-trading",
"software",
"trading-systems",
"learning",
"soft-question"
] |
697
|
2
| null |
696
|
5
| null |
The best way to do it is by getting an internship as an entry-level analyst of some sort. They do not necessarily expect you to know computational finance (they will teach you), though you need to be bright, have an outstanding academic record, and of course, good communication skills. Once you get in there, you can then ask around about the specifics of what you are looking for. In general, quant positions themselves require you to have a Masters, a Ph.D., or a lot of experience, so your plan of getting into a graduate program may work just fine. In my opinion, however, it makes more sense to get a Bachelors degree first, get a job at one of the financial institutions, work there for a couple of years and then come back to pursue your graduate studies. As far as internships go, a starting place could be D.E. Shaw; they have summer internships available. Since you have a programming background, they might like you.
Good Luck!
| null |
CC BY-SA 2.5
| null |
2011-03-12T07:01:13.037
|
2011-03-12T07:01:13.037
| null | null |
428
| null |
698
|
2
| null |
692
|
3
| null |
There are two more methods that I am aware of from my academic curriculum. I am not sure if they are applicable to the real world, but you can read up on them.
Method 1: Binomial valuation;
Method 2: Risk-neutral valuation.
Both of the methods are fairly easy to implement (in terms of writing a program, or simply using Excel spreadsheets).
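A minimal sketch of both ideas at once (a Cox-Ross-Rubinstein binomial tree priced by risk-neutral valuation); the parameters are illustrative, and the price should approach Black-Scholes as the step count grows:
```
import numpy as np

def crr_european_call(S0, K, T, r, sigma, n_steps=500):
    """Cox-Ross-Rubinstein binomial tree price of a European call (risk-neutral valuation)."""
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    # Terminal stock prices and payoffs, indexed by the number of up moves
    j = np.arange(n_steps + 1)
    ST = S0 * u ** j * d ** (n_steps - j)
    values = np.maximum(ST - K, 0.0)
    # Step backwards through the tree, discounting the risk-neutral expectation
    for _ in range(n_steps):
        values = np.exp(-r * dt) * (p * values[1:] + (1 - p) * values[:-1])
    return values[0]

# Illustrative parameters only
print(crr_european_call(S0=100, K=100, T=1.0, r=0.05, sigma=0.2))
```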
| null |
CC BY-SA 2.5
| null |
2011-03-12T07:07:52.173
|
2011-03-12T07:07:52.173
| null | null |
428
| null |
699
|
2
| null |
696
|
3
| null |
Quantivity has three great posts about how to learn algorithmic trading
[http://quantivity.wordpress.com/2010/01/10/how-to-learn-algorithmic-trading/](http://quantivity.wordpress.com/2010/01/10/how-to-learn-algorithmic-trading/)
[http://quantivity.wordpress.com/2010/01/12/how-to-learn-algorithmic-trading-part-2/](http://quantivity.wordpress.com/2010/01/12/how-to-learn-algorithmic-trading-part-2/)
[http://quantivity.wordpress.com/2010/01/12/how-to-learn-algorithmic-trading-part-3/](http://quantivity.wordpress.com/2010/01/12/how-to-learn-algorithmic-trading-part-3/)
| null |
CC BY-SA 2.5
| null |
2011-03-12T09:47:47.703
|
2011-03-12T09:47:47.703
| null | null |
82
| null |
700
|
1
|
704
| null |
9
|
1684
|
I was looking into the factorial function in an R package called gregmisc and came across the implementation of the gamma function, instead of a recursive or iterative process as I was expecting. The gamma function is defined as:
$$ \Gamma(z)=\int_{0}^{\infty}e^{-t}t^{z-1}dt $$
A brief history of the function points to Euler's solution to the factorial problem for non-integers (although the equation above is not his). It has some application in physics and I was curious if it is useful to any quant models, apart from being a fancy factorial calculator.
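The identity the factorial implementation relies on, $\Gamma(n+1)=n!$, is easy to check; a throwaway Python snippet (the question is about R's gregmisc, but the identity is the same):
```
import math

# Gamma(n + 1) == n!  -- the identity a gamma-based factorial relies on
for n in range(1, 6):
    print(n, math.factorial(n), math.gamma(n + 1))
```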
|
Does the gamma function have any application in quantitative finance?
|
CC BY-SA 2.5
| null |
2011-03-12T15:47:32.820
|
2011-03-12T19:52:15.310
| null | null |
291
|
[
"models"
] |
701
|
2
| null |
700
|
2
| null |
In certain cases some stochastic differential equations (SDEs) have closed-form (deterministic) solutions in the form of well-known ordinary differential equations (ODEs), partial differential equations (PDEs), and special functions like the gamma function.
Here's an example from a paper where an SDE has a closed form solution in terms of the gamma function:
[http://www.siam.org/books/dc13/DC13samplechpt.pdf](http://www.siam.org/books/dc13/DC13samplechpt.pdf)
Solving SDEs (preferably quickly), for example with a closed-form solution when one is available, is a core activity in quantitative finance.
| null |
CC BY-SA 2.5
| null |
2011-03-12T17:40:54.933
|
2011-03-12T17:40:54.933
| null | null |
214
| null |
702
|
2
| null |
700
|
0
| null |
Gamma distributions are used to model the default rate of credit portfolios in the CreditRisk+ framework.
| null |
CC BY-SA 2.5
| null |
2011-03-12T18:03:46.473
|
2011-03-12T18:03:46.473
| null | null |
357
| null |
703
|
2
| null |
690
|
6
| null |
Well, the problems where Malliavin calculus is applicable mostly concern Greeks of exotic derivatives, where some non-smoothness in the payoff function creates trouble when trying to get them by finite difference methods. The thing is, in my opinion, that Malliavin calculus is only an opening, as it gives you basically an infinite number of ways to get those derivatives by Monte Carlo simulation. Then you have to determine an optimal weight, and it appears that when the probability law of the dynamics is tractable (maybe this is related to Piterbarg's comment) you can use the likelihood ratio method to get those bad Greeks, and this method is optimal. Anyway, Malliavin calculus alone won't really improve the solution to the problem if you don't use some other techniques together with it (such as variance reduction techniques), and it might be the case that those techniques provide sufficient improvements to make finite difference methods good enough, so that you avoid resorting to Malliavin calculus at all. But it is ALWAYS a good thing to have as many methods as possible when dealing with a problem. So Piterbarg's comment is a little provocative, and it should not prevent you from forming your own opinion on the subject by implementing some Malliavin calculus techniques on concrete examples.
Here are a few references if you want to get acquainted with Malliavin calculus and its application to Greek computations:
Benhamou - Smart Monte Carlo: Various Tricks Using Malliavin Calculus
Friz - An Introduction to Malliavin Calculus
Glasserman - Malliavin Greeks without Malliavin Calculus (this one is fun)
Prével - Greeks via Malliavin
Regards.
| null |
CC BY-SA 2.5
| null |
2011-03-12T19:07:03.803
|
2011-03-14T07:40:24.487
|
2011-03-14T07:40:24.487
|
92
|
92
| null |
704
|
2
| null |
700
|
4
| null |
It shows up in Bayesian analysis where a binomial distribution is involved (integer values apply):
$$ \Gamma(k + 1) = k! $$
That allows the following integral to be evaluated in closed form:
$$ \int_{0}^{1}p^{j-1}(1-p)^{k-1}dp = \frac{\Gamma(k)\Gamma(j)}{\Gamma(j+k)} $$
That integral can easily show up in the numerator and/or denominator of Bayes Equation.
| null |
CC BY-SA 2.5
| null |
2011-03-12T19:52:15.310
|
2011-03-12T19:52:15.310
| null | null |
392
| null |
705
|
1
|
713
| null |
7
|
1125
|
A [recent article from Forbes](http://www.forbes.com/forbes/2011/0314/investing-malcolm-baker-stocks-google-harvard-low-risk.html) seems to indicate that low volatility stocks outperform high volatility stocks over the long run. Does anyone have any supporting or contradicting evidence to this claim? The study cited studied returns for the entire stock market and the 1,000 largest stocks from 1968 to 2008.
Note that the answer to this question is really a simple yes or no, however the larger issue is whether or not the period under study was cherry picked and if different results would have been obtained using different end dates.
|
Do low volatility stocks outperform high volatility stocks over the long run?
|
CC BY-SA 3.0
| null |
2011-03-12T20:53:58.857
|
2011-09-17T08:42:09.217
|
2011-09-16T16:32:47.040
|
1106
|
520
|
[
"volatility",
"risk",
"research",
"equities"
] |
706
|
1
| null | null |
14
|
2556
|
Recently I was trying to reproduce the results of ["Intraday Patterns in the Cross-section of Stock Returns"](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1107590) ([published in the Journal of Finance 2010](http://dx.doi.org/10.1111/j.1540-6261.2010.01573.x)). The authors used cross-sectional regression to determine which intraday lags have predictive power.
From my understanding when doing cross-sectional regression all variables have to be from the same time period. For example, I can take the one day returns of all stocks and regress them against the number of employees in each company.
The following is a short description of how cross-sectional regression was used in the research:
>
For each lag, $k$, we run cross-sectional regressions of half-hour stock returns on returns lagged by $k$ half-hour periods,
$$
r_{i,t}=\alpha_{k,t}+\gamma_{k,t}r_{i,t-k}+u_{i,t},
$$
where $r_{i,t}$ is the return on stock i in the half-hour interval $t$. The slope coefficients $\gamma_{k,t}$ represent the response of returns at half-hour $t$ to returns over a previous interval lagged by $k$ half-hour periods, the “return responses.”
If I understood correctly, the returns of one period were regressed against the returns of another period and a slope was obtained from each regression. Later, autocorrelation analysis was performed on the slopes.
Unless my thoughts are wrong, I don't see the point of regressing the returns of one period against another - $R^2$ values are close to zero. Here is an example:

Did I get the cross-sectional regression wrong? By the way, I was working with a relatively small number of stocks, but I thought that 38 should be enough.
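For what it's worth, here is a minimal Python sketch of how I read the procedure: one cross-sectional OLS per half-hour, then Fama-MacBeth-style aggregation of the slopes. The toy data below are pure noise, so the average $\gamma$ should be near zero; real half-hour returns would replace them:
```
import numpy as np

def lag_return_responses(returns, k):
    """Cross-sectional slopes gamma_{k,t} from regressing r_{i,t} on r_{i,t-k}.

    returns : (n_periods, n_stocks) array of half-hour returns (hypothetical input)
    """
    gammas = []
    for t in range(k, returns.shape[0]):
        y = returns[t, :]          # cross-section of returns at time t
        x = returns[t - k, :]      # same stocks, lagged by k half-hour periods
        X = np.column_stack([np.ones_like(x), x])
        alpha, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
        gammas.append(gamma)
    return np.array(gammas)

# Toy data: 500 half-hour periods, 38 stocks of pure noise
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=(500, 38))
g = lag_return_responses(r, k=13)
print("mean gamma:", g.mean(), "t-stat:", g.mean() / (g.std(ddof=1) / np.sqrt(len(g))))
```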
|
How do I reproduce the cross-sectional regression in "Intraday Patterns in the Cross-section of Stock Returns"?
|
CC BY-SA 3.0
| null |
2011-03-12T21:19:06.437
|
2017-12-27T15:06:30.550
|
2016-02-06T15:23:25.793
|
19321
|
537
|
[
"equities",
"research",
"statistics",
"regression",
"theory"
] |
708
|
1
| null | null |
8
|
8546
|
On a betting exchange the price (the odds that an event will happen, expressed as a decimal: 1/(percentage chance of the event occurring)) of a runner can experience a great deal of volatility before the event in question begins. Since very little about the event in question actually changes before it starts, the movement in price must be down to pure market forces. This is especially true in the minutes leading up to the start of the event. Prime examples of this are the ten minutes before the start of a horse race or the two minutes before the start of a greyhound race.
It is possible to monitor the market in real time (including the currently best available back and lay prices, and the amount of money it is possible to back or lay). Given all of this, what devices from quantitative finance could best be used to predict the immediate movement of the odds of a runner? Given a predictive model of price movements, it is possible to trade on a betting exchange as you would on a normal stock exchange.
|
Predicting Price Movements on a Betting Exchange
|
CC BY-SA 2.5
| null |
2011-03-12T23:07:48.770
|
2019-09-22T20:25:47.697
|
2019-09-22T20:25:47.697
|
20795
|
578
|
[
"modeling",
"algorithmic-trading",
"pricing-formulae",
"betting"
] |
709
|
2
| null |
706
|
12
| null |
The $R^2$s are usually close to zero for single stock regressions. The big $R^2$s that a lot of asset pricing research shows is by forming portfolios. Forming portfolios cancels a lot of the idiosyncratic returns, which has a smoothing effect.
The $R^2$s should be low here, although I don't see any in the paper for you to compare. This probably means they are very low. We don't expect that the lagged 30-min return should predict much of the future 30-min return. Maybe one percent or two? Heston et al's point is that they are correlated. We can't expect that the lagged return will tell us exactly the future return (i.e., $R^2 = 100\%$), but the two are correlated.
They run the regression $$r_{i,t} = \alpha_{k,t} + \gamma_{k,t} r_{i,t-k} + u_{i,t}$$ which finds the autocorrelation between $r_{i,t}$ and its $k^{th}$ lag $r_{i,t-k}$. This seems endogenous to me because there's an omitted variable that drives both the return and its lag (maybe some news?). This should bias their $\gamma$, but I don't know a priori which way. When Jegadeesh (1990) looked at short-term reversals he did some other tricks to get around it, but I don't see that here. Heston et al. are very well respected, so there's likely something I'm missing and I'm not very familiar with the intra-day literature, although I didn't see any discussion of this in the paper. I am interested to see this in its peer-reviewed and published form.
Regardless, their $\gamma$s aren't autocorrelated, so they should be fine using the Fama-MacBeth approach. Oh, and that you're only doing 38 stocks shouldn't affect your $R^2$, just the t-statistics on your $\gamma$s.
| null |
CC BY-SA 2.5
| null |
2011-03-12T23:47:42.243
|
2011-03-12T23:58:48.740
|
2011-03-12T23:58:48.740
|
106
|
106
| null |
710
|
1
|
717
| null |
7
|
1250
|
If orders are filled pro rata, is there still incentive to engage in HFT? Because pro rata nullifies the time precedence rule, my intuition is no, but I figure there could be other aspects to it I'm unaware of.
|
Does HFT make sense in a pro-rata market?
|
CC BY-SA 2.5
| null |
2011-03-13T00:12:15.070
|
2013-10-25T01:56:58.487
|
2012-04-18T02:03:38.570
|
2299
|
451
|
[
"algorithmic-trading",
"high-frequency",
"market-microstructure",
"order-handling"
] |
711
|
2
| null |
710
|
2
| null |
My guess is you are right in that it will be unprofitable to be a liquidity provider because of the lack of time priority.
However, a liquidity-taking strategy (taking out an order that is mis-priced) is still a speed game.
| null |
CC BY-SA 2.5
| null |
2011-03-13T01:32:38.237
|
2011-03-13T01:32:38.237
| null | null |
443
| null |
712
|
2
| null |
710
|
3
| null |
Market makers still have to consume market data. The techniques required to scale a live order book in real-time will be the same regardless of the intended use case. So while the strategies will be different from what we know as HFT (and even the participants different), the systems in use will be very similar.
| null |
CC BY-SA 2.5
| null |
2011-03-13T04:42:48.023
|
2011-03-13T04:42:48.023
| null | null |
35
| null |
713
|
2
| null |
705
|
5
| null |
This current paper is highly relevant to your question:
[Risk and Return in General: Theory and Evidence (Eric G. Falkenstein)](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1420356)
>
Empirically, standard, intuitive measures of risk like volatility and beta do not generate a positive correlation with average returns in most asset classes. It is possible that risk, however defined, is not positively related to return as an equilibrium in asset markets. This paper presents a survey of data across 20 different asset classes, and presents a model highlighting the assumptions consistent with no risk premium. The key is that when agents are concerned about relative wealth, risk taking is then deviating from the consensus or market portfolio. In this environment, all risk becomes like idiosyncratic risk in the standard model, avoidable so unpriced.
BTW: The original paper Forbes refers to can be found here:
[Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly (Malcolm Baker, Brendan Bradley, and Jeffrey Wurgler)](http://people.hbs.edu/mbaker/cv/papers/Benchmarks_as_Limits_unpub_01_14_10_faj.v67.n1.4.pdf)
| null |
CC BY-SA 2.5
| null |
2011-03-13T12:13:45.137
|
2011-03-13T19:30:33.903
|
2011-03-13T19:30:33.903
|
12
|
12
| null |
714
|
1
|
1486
| null |
13
|
535
|
At work we were talking about currency hedging our equity index exposures but I am struggling to understand how this happens in a typical iShares ETF.
If we take the [Japan ETF IJPN](http://uk.ishares.com/en/pc/products/IJPN) then we see this index is against the MSCI Japan index.
However it is priced in GBP and its base and benchmark currency is USD.
So my question is: What sensitivity does this have to FX (USD/JPY and GBP/JPY) if any and why?
|
Currency Hedged ETFs
|
CC BY-SA 4.0
| null |
2011-03-13T13:49:25.650
|
2020-09-17T04:46:16.167
|
2020-09-17T04:46:16.167
|
20092
|
103
|
[
"equities",
"fx",
"etf"
] |
716
|
1
| null | null |
5
|
5360
|
Most brokers compute rollover once a day (2200 GMT), but OANDA calculates it continuously.
I thought I'd cleverly found an arbitrage opportunity, but it turns out OANDA knows about this and advertises it. Quoting from [http://www.oanda.com/corp/story/innovations](http://www.oanda.com/corp/story/innovations):
```
Professional traders can exploit this flexibility by arbitraging the
continuous and discrete interest-calculation scenarios through two
trading lines--one with an established player, such as Citibank or
UBS, and the other with OANDA. Whenever they sell a
high-interest-rate currency (such as South African Rand) they can do
so with the traditional player, where they will pay no intra-day
interest for shorting that currency. On the other hand, they can
always buy a high-interest-rate currency through OANDA, where they
earn the "carry" (interest-rate differential) for the position,
however briefly they may hold it.
```
Has anyone done this? I realize the bid/ask spread on both sides would have to be small, but this still seems viable?
My form of arbitrage is slightly different: hold a high-interest position w/ a regular broker for 1 minute on each side of rollover time, just to get the rollover (for the entire 24 hour period). Take the opposite position w/ OANDA. You'll pay rollover, but for only 2 minutes.
EDIT: Apologies, I never got around to testing this. Has anyone else had a chance? I realize oanda.com's higher spreads (which are non-negotiable) may cover the arbitrage.
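A rough arithmetic sketch of the intended edge, with made-up numbers and ignoring spreads, margin, and financing conventions entirely:
```
# Illustrative numbers only: a 5% annualised carry differential on a 100,000-unit position
position = 100_000
carry_rate = 0.05

daily_carry = position * carry_rate / 365            # credited once at the discrete-rollover broker
minutes_held_at_oanda = 2                             # opposite position held around rollover time
continuous_carry_paid = daily_carry * (minutes_held_at_oanda / (24 * 60))

print("received at discrete-rollover broker:", round(daily_carry, 4))
print("paid at continuous-rollover broker:  ", round(continuous_carry_paid, 6))
print("gross edge before spreads:           ", round(daily_carry - continuous_carry_paid, 4))
```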
|
Arbitraging OANDA continuous rollover vs other brokers' discrete rollover
|
CC BY-SA 3.0
| null |
2011-03-13T17:14:32.953
|
2022-03-09T17:32:15.313
|
2011-08-11T22:45:22.367
| null | null |
[
"arbitrage"
] |
717
|
2
| null |
710
|
8
| null |
The Eurodollar market is partially pro-rata. And there is a lot of HFT on it. Getting out of the book when conditions are not right is very much HFT.
| null |
CC BY-SA 2.5
| null |
2011-03-13T17:20:22.580
|
2011-03-13T17:20:22.580
| null | null |
225
| null |
718
|
2
| null |
691
|
7
| null |
Here's a link to daily weather data. It looks like it goes as far back as the 1940s. There's a link to a CSV file at the bottom of the page. It will only give you one year's worth of data at a time, so you'll have to manually download several files.
[http://www.wunderground.com/history/airport/KNYC/2011/3/13/CustomHistory.html](http://www.wunderground.com/history/airport/KNYC/2011/3/13/CustomHistory.html)
| null |
CC BY-SA 2.5
| null |
2011-03-13T19:15:15.063
|
2011-03-13T19:15:15.063
| null | null |
584
| null |
720
|
2
| null |
27
|
4
| null |
It's well known that when the market price goes up, the [implied volatility](http://www.livevol.com/historical_files.html) goes down, and vice versa. So the strike prices have the reverse relationship when examining the option chains.
| null |
CC BY-SA 2.5
| null |
2011-03-14T07:37:28.763
|
2011-03-14T07:37:28.763
| null | null | null | null |
721
|
2
| null |
705
|
6
| null |
The theory predicts that expected risk and expected return should be positively related. But no one has convincingly proved this. The results are very sensitive to how you determine the expectations of risk and return and the timeline you use. Many have shown that there isn't a positive trade-off between risk and return in a CAPM-framework (i.e., Fama and French 1992 showed that $\beta$ has no predictive power in the cross section of returns -- although in 1993 they "saved" $\beta$ by adding two more factors).
What is odd about this Baker et al. research is that they're buying the 20% with the highest vol and holding them for only one year. They're constantly investing in the most uncertain stocks without ever really allowing the uncertainty to get resolved. I would be interested in how these results change with a two to five year holding period (e.g., I imagine in the mid 90s both amazon.com and pets.com were volatile; if you held both for one year, then you may or may not come out ahead; but if you held both for five years, pets.com delists and amazon.com skyrockets; but this is anecdotal and far from convincing).
| null |
CC BY-SA 2.5
| null |
2011-03-14T11:10:05.733
|
2011-03-14T11:10:05.733
| null | null |
106
| null |
722
|
2
| null |
708
|
1
| null |
I'm not familiar with betting exchanges, but what you're describing reminds me a lot of binary options, where the payout is 1 if something occurs (i.e. AAPL trades above $1000) and 0 if the event doesn't occur. So at any time before expiration, the option is worth the probability that the event will indeed occur, so the option is always worth something in [0, 1]. Option pricing has nothing to do with path prediction; it has everything to do with volatility pricing, which to me seems perfect for what you're describing.
>
On a betting exchange the price of a runner can experience a great deal of volatility before the event in question begins
| null |
CC BY-SA 2.5
| null |
2011-03-14T11:51:49.717
|
2011-03-14T11:51:49.717
| null | null |
471
| null |
723
|
1
|
726
| null |
13
|
8911
|
Can someone explain to me the rationale for why the market may be moving towards OIS discounting for fully collateralized derivatives?
|
Rationale for OIS discounting for collateralized derivatives?
|
CC BY-SA 2.5
| null |
2011-03-14T21:23:14.543
|
2017-10-09T15:45:49.563
|
2017-10-09T15:45:49.563
|
20454
|
223
|
[
"valuation",
"ois-discounting",
"collateral"
] |
724
|
2
| null |
723
|
3
| null |
The OIS rate is more stable than Libor, right? And according to [this article](http://www.scribd.com/doc/37180714/Risk-Magazine-The-Price-is-Wrong) from Risk Magazine:
>
The party that is owed money at the end of the swap will have been paying an OIS rate on the collateral it has been holding, and so the ultimate value of the cash it will receive will be the sum it is owed minus the overnight interest rate it has had to pay on this collateral.
| null |
CC BY-SA 2.5
| null |
2011-03-14T23:00:07.710
|
2011-03-14T23:17:42.953
|
2011-03-14T23:17:42.953
|
35
|
35
| null |
725
|
1
|
734
| null |
7
|
221
|
Does anyone know of any research or data on US corporate bankruptcy rates as a function of standard valuation ratios, such as P/B, P/E, etc.?
I'm trying to adjust the results of backtests to account for survivorship bias. My first thought was to put an upper bound on the impact of survivorship bias by assuming a certain percentage of holdings go bankrupt each period. I was looking for some data on what that percentage should be.
I'm aware of values like Altman's Z-score, but that doesn't quite apply to what I'm trying to do. That's meant to predict bankruptcy, but I'm looking for a typical bankruptcy rate. Even an overall rate would be better than nothing, but it would be more useful if it was broken down cross-sectionally in some way.
Alternatively, are there any better methods of dealing with survivorship bias when working with incomplete data sets?
|
Data on US bankruptcy rate vs. standard valuation ratios
|
CC BY-SA 2.5
| null |
2011-03-15T05:01:46.573
|
2011-03-16T11:32:06.240
| null | null |
584
|
[
"backtesting"
] |
726
|
2
| null |
723
|
9
| null |
Most counterparty agreements specify some sort of OIS rate for the interest paid/received on posted collateral. So the OIS rate is the appropriate one to use for discounting future cash flows.
Prior to 2008 the OIS/Libor spread was small and stable, so you didn't really need to worry about this, but now it's much larger, so people are taking it into account. The reason it's "big news" now is that properly switching pricing systems over to use OIS discounting is a large change, so most places are only now getting this online.
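To get a feel for the size of the effect, here is a toy sketch of my own (the rates are made-up placeholders, not market data) comparing the present value of a single collateralized cash flow discounted at an assumed OIS rate versus an assumed Libor rate:
```
#include <cmath>
#include <iostream>

// Toy comparison: discount one cash flow at an assumed OIS rate vs. an assumed
// Libor rate (both continuously compounded). The inputs are placeholders chosen
// only to show that a 50bp OIS/Libor spread is material over a long horizon.
int main()
{
    double notional = 1.0e7;  // cash flow received in 10 years
    double T = 10.0;
    double ois   = 0.015;     // hypothetical OIS zero rate
    double libor = 0.020;     // hypothetical Libor zero rate

    double pv_ois   = notional * std::exp(-ois * T);
    double pv_libor = notional * std::exp(-libor * T);

    std::cout << "PV at OIS:   " << pv_ois << "\n"
              << "PV at Libor: " << pv_libor << "\n"
              << "Difference:  " << pv_ois - pv_libor << std::endl;
}
```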
| null |
CC BY-SA 2.5
| null |
2011-03-15T06:37:10.120
|
2011-03-15T06:37:10.120
| null | null |
371
| null |
727
|
2
| null |
447
|
2
| null |
Check out MB Trading. Their API is quite good and their support is excellent.
[http://mbtrading.com/developers.aspx](http://mbtrading.com/developers.aspx)
| null |
CC BY-SA 2.5
| null |
2011-03-15T12:16:48.903
|
2011-03-15T12:16:48.903
| null | null |
593
| null |
728
|
1
|
731
| null |
60
|
24321
|
I've only recently begun exploring and learning R (especially since Dirk recommended RStudio and a lot of people in here speak highly of R). I'm rather C(++) oriented, so it got me thinking - what are the limitations of R, in particular in terms of performance?
I'm trying to weigh the C++/Python/R alternatives for research and I'm considering if getting to know R well enough is worth the time investment.
Available packages look quite promising, but there are some issues in my mind that keep me at bay for the time being:
- How efficient is R when it comes to importing big datasets? And first of all, what's big in terms of R development? I used to process a couple hundred CSV files in C++ (around 0.5M values I suppose) and I remember it being merely acceptable. What can I expect from R here? Judging by Jeff's spectacular results I assume with a proper long-term solution (not CSV) I should be even able to switch to tick processing without hindrances. But what about ad-hoc data mangling? Is the difference in performance (compared to more low level implementations) that visible? Or is it just an urban legend?
- What are the options for GUI development? Let's say I would like to go further than research oriented analysis, like developing full blown UIs for investment analytics/trading etc. From what I found mentioned here and on StackOverflow, with proper bindings I am free to use Python's frameworks here and even further chain into Qt if such a need arises. But deploying such a beast must be a real nuisance. How do you cope with it?
In general I see R's flexibility allows me to mix and match it with a plethora of other languages (either way round - using low level additions in R or embed/invoke R in projects written in another language). That seems nice, but does it make sense (I mean like thinking about it from start/concept phase, not extending preexisting solutions)? Or is it better to stick with one-and-only language (insert whatever you like/have experience with)?
So to sum up: In what quant finance applications is R a (really) bad choice (or at least can be)?
|
Switching from C++ to R - limitations/applications
|
CC BY-SA 2.5
| null |
2011-03-15T18:20:56.460
|
2016-12-03T11:30:21.613
|
2017-04-13T12:46:22.953
|
-1
|
38
|
[
"r",
"development"
] |
729
|
1
| null | null |
6
|
3305
|
I am trying to implement the LSM method with the algorithm below, but my price is always too low. For example, for an American put option with the following parameters:
>
S0 = 36, Strike = 40, rate = 6%, T = 1 year, discrete path = 50, volatility = 20%
I get 4 dollars, but the Longstaff and Schwartz article lists 4.7 dollars. With a volatility of 40% the error is bigger: 5 dollars for me vs. 7.3 dollars for L&S. With my tree pricer, however, I get the same result as the L&S article.
Could you help me find the error, please?
```
void LeastSquaresMC::calcLeastSquaresMC()
{
mu_ = (rate_ - vol_*vol_*0.5)*dt; // drift
voldt = vol_*sqrt(dt); // diffusion
for (i = 0; i < M_; i++)
{
Paths(i,0) = 36;
for (j = 1; j < b; j++)
{
// generate deviate
deviate = G();
Paths(i,j) = Paths(i,j-1)*exp(mu_+voldt*deviate);
}
}
// initialize cash flow matrix by zero
for (i = 0; i < z; i++)
{
for (j = 0; j < b; j++)
{
CashFlow(i,j,0);
}
}
for (i = 0; i < z; i++)
{
for (j = 0; j < b; j++)
{
Exercise(i,j) = MAX(strike_-Paths(i,j),0);
}
}
// compute cash flows at maturity
for (i = 0; i < z; i++)
{
CashFlow(i,b-1,(Exercise(i,b-1)));
}
//cout <<CashFlow << endl;
// recursion
computeLSM(b-1, Paths, CashFlow, Exercise);
}
double LeastSquaresMC::computeLSM(int time, Matrix& Paths, Matrix& CashFlow, Matrix& Exercise)
{
double disc = exp(-rate_*dt); // discount factor
vector<double> Y; // vector of payoffs (dependent variables)
vector<double> B; // vector of regression coefficients
vector<double> C; // continuation
vector<int> num;
vector<double> stock;
vector<int>::iterator i = num.begin();
/*long z = M_*2;*/
for (j = 0; j < z; j++)
{
if(Exercise(j,time-1)>0)
{
Y.push_back(MAX(CashFlow(j,time),0)*disc);
num.push_back(j);
stock.push_back(Paths(j,time-1));
}
}
if (time > 1)
{
if(num.empty()==false)
{
int size_l = Y.size();
Matrix X(size_l,3); // 1 X X^2 (columns)
for (j = 0; j < size_l; j++)
{
X(j,0,1);
X(j,1,stock[j]);
X(j,2,stock[j]*stock[j]);
}
B = ((X.transpose()*X).Inverse())*(X.transpose()*Y);
C = X*B;
j=0;
for(i = num.begin() ; i != num.end(); ++i)
{
if (Exercise(*i,time-1)>C[j])
{
CashFlow(*i,time-1,Exercise(*i,time-1));
for (l = time; l < b; l++)
{
CashFlow(*i,l,0);
}
j++;
}
computeLSM(time-1, Paths, CashFlow, Exercise);
}
else
{
computeLSM(time-1, Paths, CashFlow, Exercise);
}
}
else
{
return computeValue(CashFlow);
}
return 0.0;
}
double LeastSquaresMC::computeValue (Matrix& CashFlow)
{
double discValue = 0.0; // discounted value
for (i = 0; i < z; i++)
{
for (j = 1; j < b; j++)
{
if (CashFlow(i, j) > 0)
{
discValue = discValue + CashFlow(i, j)*exp(-0.06*j);
}
}
}
cout <<"prix:"<<discValue/z << endl;
return discValue/z;
}
```
|
Longstaff Schwartz method
|
CC BY-SA 2.5
| null |
2011-03-15T19:45:22.620
|
2013-01-08T11:02:33.393
|
2011-03-15T19:58:56.883
|
35
| null |
[
"option-pricing"
] |
730
|
2
| null |
728
|
34
| null |
Don't have to switch -- it's not either / or after all.
Use either where it has an advantage: R for explorations, modeling, ... and C++ for industrial-strength and high-performance implementations (at a possible cost in terms of time to code).
And (with the obvious [Rcpp](http://dirk.eddelbuettel.com/code/rcpp.html) plug coming) you can even combine both of them.
| null |
CC BY-SA 2.5
| null |
2011-03-15T20:31:39.590
|
2011-03-15T20:31:39.590
| null | null |
69
| null |
731
|
2
| null |
728
|
26
| null |
R can be pretty slow, and it's very memory-hungry. My data set is only 8 GB or so, and I have a machine with 96 GB of RAM, and I'm always wrestling with R's memory management. Many of the model estimation functions capture a link to their environment, which means you can be keeping a pointer to each subset of the data that you're dealing with. SAS was much better at dealing with large-ish data sets, but R is much nicer to deal with. (This is in the context of mortgage prepayment and default modeling.)
Importing the data sets is pretty easy and fast enough, in my experience. It's the ballooning memory requirements for actually processing that data that's the problem.
Anything that isn't easily vectorizable seems like it would be a problem. P&L backtesting for a strategy that depends on the current portfolio state seems hard. If you're looking at the residual P&L from hedging a fixed-income portfolio, with full risk metrics, that's going to be hard.
I doubt many people would want to write a term structure model in R or a monte-carlo engine.
Even with all that, though, R is a very useful tool to have in your toolbox. But it's not exactly a computational powerhouse.
I don't know anything about the GUI options.
| null |
CC BY-SA 2.5
| null |
2011-03-15T20:34:13.547
|
2011-03-15T20:34:13.547
| null | null |
596
| null |
732
|
2
| null |
692
|
22
| null |
There are a wide variety of models (by which I mean the theoretical / mathematical formulation of how the underlying financial variable(s) of interest behave). The most popular ones differ depending on the asset class under consideration (though some are mathematically the same and named differently). Some examples are:
- Black-Scholes / Black / Garman-Kohlhagen
- Local-volatility [aka Dupire model]
- Stochastic-volatility - a generic term for extensions of Black-Scholes where there is a second stochastic factor driving the volatility of the spot; examples are Heston, SABR
- Levy processes (usually actually log-Levy): a wide class of models with some features that make them theoretically / technically nice; examples are VG, CGMY
- jumps (often compound Poisson) of various kinds can be added to the above models, for example Merton model is Black-Scholes with jumps
- CIR, OU processes show up in fixed-income
- There are multi-factor (ie multiple driving Brownian motions) versions of the above; e.g. Libor market model, correlated log-normal models
- Models for pricing, say, credit-default swaps are often Poisson processes with random hazard rates
- Commodities such as electricity can require specialized models to handle the particular features of that market
- etc.
Implementation methodologies can include:
- analytic formulae (usually involving special functions): examples are the classic case of European-style vanilla options in Black-Scholes / CEV / VG; many exotics in Black-Scholes can be "solved" in this way
- approximate analytic - for example, one might price average-rate options in Black-Scholes by approximating the final distribution (by moment-matching) with a shifted-lognormal and using the closed-form for the shifted-lognormal
- Binomial / trinomial trees can be viewed as a discretization technique for approximating, say, Black-Scholes. (Note that some people might view the approximation as a model in its own right --- a conflation of model & implementation and more of a philosophical stance than a practical consideration.)
- Numerical methods for solving or approximating the PDE governing the option price; this could be solved by finite-difference methods, finite-element methods, etc.
- Monte-carlo is a nice brute-force way to handle almost any kind of model and most options (though there are complications with early-exercise style features of options), but it typically takes a lot of computing power to get any accuracy in the price
- Interpolation could be viewed as a technique --- if you know the price of a collection of options (varying in some parameters) you can price a new option by interpolating based on the parameters (volatility surfaces implemented by interpolating a grid of given options are examples of this)
- etc.
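To make the Monte Carlo bullet above concrete, here is a bare-bones sketch of my own (not part of the original list; all parameters are placeholders) pricing a European call under Black-Scholes dynamics by simulating the terminal spot:
```
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

// Bare-bones Monte Carlo price of a European call under Black-Scholes dynamics:
// simulate S_T = S0 * exp((r - 0.5*sigma^2)*T + sigma*sqrt(T)*Z) and average the
// discounted payoff. The error shrinks only like 1/sqrt(paths), which is the
// "lot of computing power" caveat mentioned above.
double mc_european_call(double S0, double K, double r, double sigma, double T,
                        int paths, unsigned seed = 42)
{
    std::mt19937_64 gen(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    double drift = (r - 0.5 * sigma * sigma) * T;
    double vol   = sigma * std::sqrt(T);
    double sum   = 0.0;
    for (int i = 0; i < paths; ++i)
    {
        double ST = S0 * std::exp(drift + vol * z(gen));
        sum += std::max(ST - K, 0.0);
    }
    return std::exp(-r * T) * sum / paths;
}

int main()
{
    // Hypothetical inputs: spot 100, strike 100, 5% rate, 20% vol, 1 year.
    std::cout << mc_european_call(100.0, 100.0, 0.05, 0.20, 1.0, 1000000) << std::endl;
}
```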
| null |
CC BY-SA 2.5
| null |
2011-03-15T21:08:52.050
|
2011-03-15T21:08:52.050
| null | null |
540
| null |
733
|
1
| null | null |
4
|
242
|
I'm having a hard time getting my expected loss calculations to tie out with the standard recursion method when implementing the proxy distribution algorithm described by the Back To Normal CDO paper by Shelton.
Anyone have working code examples for this?
|
Any example code implementing the Shelton CDO 'Back To Normal' Paper?
|
CC BY-SA 2.5
| null |
2011-03-15T23:23:03.760
|
2011-03-19T17:31:39.523
| null | null | null |
[
"structured-credit",
"cdo"
] |
734
|
2
| null |
725
|
3
| null |
I get my stock data from Univ of Chicago's [Center for Research in Security Prices](http://www.crsp.com/) (CRSP), which keeps stocks after delisting, so there's no survivorship bias. I assume that Bloomberg and Reuters do the same. But if you use Yahoo! or Google, there will be a survivorship bias.
I think that trying to look at bankruptcies to remove a survivorship bias would be really challenging (not all bankruptcies result in delisting), and might insert an error bigger than survivorship bias. But [Compustat](http://www.compustat.com/) provides more corporate finance data than almost anyone knows what to do with.
I am not sure that there's a clean work-around for survivorship bias short of moving to a data set like CRSP. HTH.
| null |
CC BY-SA 2.5
| null |
2011-03-16T11:32:06.240
|
2011-03-16T11:32:06.240
| null | null |
106
| null |
735
|
2
| null |
708
|
6
| null |
Have you looked into "noise trader" models? This seems like a market that is mostly noise. A few bettors may have some information on or real knowledge of who might win, but certainly nothing like equity markets, where there are a lot of people who really know the ins and outs of the firms they're trading.
The classic model is Pete Kyle's, which should give you some of the intuition, although here I don't think there are any "liquidity" traders -- just noise and informed. This model isn't very applied, but the conclusions should help you develop your picture, and you can probably find a lot of working papers on SSRN.com that apply the model in a way that you can use...
I found the [Kyle paper](http://www.arw-suisse.ch/doctoralpapers/Kyle%201985.pdf), but it's Econometrica, so it may be a little dense for an introduction. [This textbook chapter](http://www.londoninternational.ac.uk/current_students/programme_resources/lse/lse_pdf/further_units/invest_man/23_invest_man_chap4.pdf) may be a better introduction to see if it's something you'd like to pursue.
| null |
CC BY-SA 2.5
| null |
2011-03-16T11:46:59.440
|
2011-03-16T11:46:59.440
| null | null |
106
| null |
737
|
1
|
744
| null |
11
|
815
|
Do you think Walk Forward Analysis is a good method to estimate the predictability or edge of a trading system? Are there similar methods to know (estimate) how much alpha can capture an algo (in the future)?
|
Is Walk Forward Analysis a good method to estimate the edge of a trading system?
|
CC BY-SA 3.0
| null |
2011-03-16T14:43:08.900
|
2011-09-22T01:25:23.240
|
2011-09-22T01:25:23.240
|
1355
|
82
|
[
"analysis",
"simulations"
] |
738
|
1
|
745
| null |
13
|
2885
|
I built risk models using cluster analysis in a previous life. Years ago I learned about principal component analysis and I've often wondered whether that would have been more appropriate. What are the pros and cons of using PCA as opposed to clustering to derive risk factors?
If it makes a difference, I'm using the risk models for portfolio optimization, not performance attribution.
|
Cluster analysis vs PCA for risk models?
|
CC BY-SA 2.5
| null |
2011-03-16T16:33:53.857
|
2011-03-18T16:44:59.183
| null | null |
35
|
[
"risk-models",
"pca"
] |
739
|
1
|
58450
| null |
13
|
18079
|
The nice list of free online data sources [What data sources are available online?](https://quant.stackexchange.com/questions/141/data-sources-online) does not mention any data from ratings agencies.
>
Are historical time series available for sovereign credit ratings (other than as proprietary datasets)?
I realise that the ratings agencies probably would claim their ratings of sovereign debt as proprietary data, yet due to the way this information is disseminated and used it appears to be in the public interest for it to be available freely, and it also appears possible to recover some parts of the time series from the public domain. However, I cannot locate any free sources.
The Guardian made available [a snapshot in July 2010](http://www.guardian.co.uk/news/datablog/2010/apr/30/credit-ratings-country-fitch-moodys-standard), but I would like to analyse how the ratings have changed over time. I would be happy with a subset where nontrivial changes have taken place: for instance, a series for just Greece and Japan would be of interest. I can of course deduce some of the few unchanging series myself.
|
Are public historical time series available for ratings of sovereign debt?
|
CC BY-SA 2.5
| null |
2011-03-16T18:23:28.733
|
2021-03-12T05:23:18.923
|
2017-04-13T12:46:23.000
|
-1
|
599
|
[
"data",
"time-series",
"credit-ratings"
] |
740
|
1
|
1874
| null |
16
|
10742
|
I am seeing a regression model which is regressing Year-on-Year stock index returns on lagged (12 months) Year-on-Year returns of the same stock index, credit spread (difference between monthly mean of risk-free bonds and corporate bond yields), YoY inflation rate and YoY index of industrial production.
It looks thus (though you'd substitute the data specific to India in this case):
```
SP500YOY(T) = a + b1*SP500YOY(T-12) + b2*CREDITSPREAD(T)
              + b3*INDUSTRIALPRODUCTION(T+2) + b4*INFLATION(T+2)
              + b5*INFLATIONASYMM(T+2)
```
`SP500YOY` is the year-on-year return series for the S&P 500 index. To compute it, the monthly average of S&P 500 values is computed and then converted to year-on-year returns for each month (i.e., Jan'10-Jan'11, Feb'10-Feb'11, Mar'10-Mar'11, ...). On the explanatory variables side, a 12-month lagged value of `SP500YOY` is used along with `CREDITSPREAD` at time `T`, and `INFLATION` and `INDUSTRIALPRODUCTION` two periods AHEAD. `INFLATIONASYMM` is a dummy for whether inflation is above a threshold value of 5.0%. The index in parentheses shows the time index for each variable.
This is estimated by standard OLS linear regression. To use this model for forecasting the 1-, 2- and 3-month-ahead YoY returns of the S&P 500, one has to generate 3-, 4- and 5-month-ahead forecasts for inflation and the index of industrial production. These forecasts are made after fitting an ARIMA model to each of the two individually. The `CREDITSPREAD` forecasts for 1, 2 and 3 months ahead are just thrown in as mental estimates.
I'd like to know whether this OLS linear regression is correct/incorrect, efficient/inefficient or generally valid statistical practice.
The first problem I see is that of using overlapping data. i.e. the daily values of the stock index are averaged each month, and then used to compute yearly returns which are rolled over monthly. This should make the error term autocorrelated. I would think that one would have to use some 'correction' on the lines of one of the following:
- White's heteroscedasticity consistent covariance estimator
- Newey & West heteroscedasticity and autocorrelation consistent (HAC) estimator
- heteroscedasticity-consistent version of Hansen & Hodrick
Does it really make sense to apply standard OLS linear regression (without any corrections) to such overlapping data, and moreover to use 3-period-ahead ARIMA forecasts of the explanatory variables in the original OLS regression for forecasting `SP500YOY`? I have not seen such a form before and hence cannot really judge it, apart from the obvious need to correct for the use of overlapping observations.
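For what it's worth, here is a rough univariate sketch of the Newey-West (HAC) correction mentioned in the list above, applied to a simple regression of y on x. This is my own illustration: the data and lag length are placeholders, and with 12-month overlapping monthly returns the lag would typically be set to at least 11.
```
#include <cmath>
#include <iostream>
#include <vector>

// Rough sketch of a Newey-West (HAC) standard error for the slope of a simple
// regression y = a + b*x + e, of the kind one might apply to overlapping YoY
// returns. Bartlett weights are used; lag should cover the overlap length.
double newey_west_se(const std::vector<double>& x, const std::vector<double>& y, int lag)
{
    int n = static_cast<int>(x.size());
    double xbar = 0.0, ybar = 0.0;
    for (int t = 0; t < n; ++t) { xbar += x[t]; ybar += y[t]; }
    xbar /= n; ybar /= n;

    double sxx = 0.0, sxy = 0.0;
    for (int t = 0; t < n; ++t)
    {
        sxx += (x[t] - xbar) * (x[t] - xbar);
        sxy += (x[t] - xbar) * (y[t] - ybar);
    }
    double b = sxy / sxx;          // OLS slope
    double a = ybar - b * xbar;    // OLS intercept

    // Scores u_t = (x_t - xbar) * e_t, with e_t the OLS residual.
    std::vector<double> u(n);
    for (int t = 0; t < n; ++t) u[t] = (x[t] - xbar) * (y[t] - a - b * x[t]);

    // HAC "meat": Gamma_0 + 2 * sum_l w_l * Gamma_l with Bartlett weights.
    double S = 0.0;
    for (int t = 0; t < n; ++t) S += u[t] * u[t];
    for (int l = 1; l <= lag; ++l)
    {
        double w = 1.0 - static_cast<double>(l) / (lag + 1);
        double gamma = 0.0;
        for (int t = l; t < n; ++t) gamma += u[t] * u[t - l];
        S += 2.0 * w * gamma;
    }
    return std::sqrt(S) / sxx;     // sqrt( S / sxx^2 )
}

int main()
{
    // Placeholder series, purely to exercise the function.
    std::vector<double> x = {0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.10, 0.05, 0.12, 0.18, 0.22, 0.30};
    std::vector<double> y = {0.12, 0.18, 0.20, 0.28, 0.22, 0.18, 0.09, 0.07, 0.10, 0.20, 0.25, 0.27};
    std::cout << "Newey-West slope s.e.: " << newey_west_se(x, y, 3) << std::endl;
}
```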
|
Time Series Regression with Overlapping Data
|
CC BY-SA 3.0
| null |
2011-03-16T18:42:23.607
|
2011-09-09T15:07:30.500
|
2011-09-09T13:01:57.130
|
1355
|
139
|
[
"time-series",
"regression",
"auto-correlation"
] |
741
|
2
| null |
739
|
12
| null |
Fitch should be available right here: [Sovereign Ratings History](http://www.fitchratings.com/web_content/ratings/sovereign_ratings_history.xls)
With Moody's it's not so easy, I don't know if there's a complete source available free of charge. But [Sovereign Default and Recovery Rates, 1983-2007](http://v2.moodys.com/cust/content/content.ashx?source=StaticContent/Free%20Pages/Credit%20Policy%20Research/documents/current/2007100000482445.pdf) has some data in the appendix III, though not so up to date and in a not that convenient format.
Same goes about S&P, [Sovereign Ratings History Since 1975](http://www2.standardandpoors.com/spf/pdf/fixedincome/KR_sovereign_SovRatingsHistorySince1975.pdf).
>
UPDATE: That link no longer works. Start page for all S&P ratings is
here: http://www.standardandpoors.com/ratings/en/ap/ You can see
latest ratings only. Clicking a country name requires a (free) login,
but still only shows the latest data (though it does then have links
to methodology, background articles, etc.)
Maybe someone will be able to come up with more edible formats though.
| null |
CC BY-SA 3.0
| null |
2011-03-16T21:25:56.153
|
2013-09-04T01:03:55.317
|
2013-09-04T01:03:55.317
|
1587
|
38
| null |
742
|
1
| null | null |
4
|
953
|
I am implementing a program in Java that needs the Bank of England base rate. Rather than the user inputting this into the system, I have heard that there is a way to get a live feed of the base rate directly from (I am guessing) the Bank of England (i.e. gets automatically updated). Has anyone heard of or used something similar? - any advice would be helpful.
Thanks.
|
Bank of England base rate feed
|
CC BY-SA 2.5
| null |
2011-03-17T00:01:16.817
|
2011-03-17T11:01:11.940
| null | null |
603
|
[
"interest-rates"
] |
743
|
2
| null |
742
|
2
| null |
I'm not sure if this is what you're looking for, but they keep it updated on their homepage.
[http://www.bankofengland.co.uk/](http://www.bankofengland.co.uk/)
The current Bank Rate is shown at the top of the box in the bottom right corner!
| null |
CC BY-SA 2.5
| null |
2011-03-17T00:13:31.780
|
2011-03-17T00:13:31.780
| null | null |
604
| null |
744
|
2
| null |
737
|
7
| null |
The term "Walk Forward Analysis" typically comes from technical analysis schemes. If that's the case here, I would be careful with whatever you're considering (or reading).
Even if you tune your model parameters (and I'm not talking about any TA scheme) with 80% of the data and then "check" your model with the remaining 20%, you're still using that last 20% to determine if your model is useful. As a result, whatever model you assemble/test/retest, the reality is, even if there is a solid basis for the structure of the model, the only way to prove its value is to use it in real time. So, it is worthwhile to build tests that allow you to dump your model at the first sign of trouble.
From personal experience, I can tell you that you'll dump far more models than you'll keep.
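For readers who have not seen the term before, a walk-forward test just rolls a fit window and a subsequent evaluation window through the data, re-fitting at each step. A toy sketch of the windowing (my own illustration; the window lengths are arbitrary placeholders, and none of this removes the caveats above):
```
#include <iostream>

// Toy illustration of walk-forward windowing: fit on a rolling in-sample block,
// evaluate on the next out-of-sample block, then slide forward and repeat.
// The sizes below are placeholders.
int main()
{
    int n_obs = 1000;      // total observations
    int in_sample = 200;   // fit window length
    int out_sample = 50;   // evaluation window length

    for (int start = 0; start + in_sample + out_sample <= n_obs; start += out_sample)
    {
        int fit_begin  = start;
        int fit_end    = start + in_sample;       // exclusive
        int test_begin = fit_end;
        int test_end   = fit_end + out_sample;    // exclusive
        // fit model on [fit_begin, fit_end), evaluate on [test_begin, test_end)
        std::cout << "fit [" << fit_begin << ", " << fit_end << ")  "
                  << "test [" << test_begin << ", " << test_end << ")\n";
    }
}
```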
| null |
CC BY-SA 2.5
| null |
2011-03-17T01:50:55.597
|
2011-03-17T01:50:55.597
| null | null |
392
| null |
745
|
2
| null |
738
|
5
| null |
I've played around with both schemes, but not for portfolio optimization.
I used PCA on some interest rate models. That turned into a Partial Least Squares scheme, then into some non-linear thing. I wasn't impressed with the results.
My Cluster Analysis scheme morphed into a classification scheme, and it turned out that the K-Nearest-Neighbor method worked just as well, and possibly better. Again, this wasn't for portfolio optimization, so it may not apply to your situation.
From what I've seen, if you're depending on the computational method to find excess returns (or lower risk), you'll probably be disappointed. On the other hand, it is common for various methods to highlight some problems that weren't originally obvious. For instance, bootstrapping your portfolio(s) to determine just how good they are compared to luck. I've dumped a lot of ideas because of that issue.
| null |
CC BY-SA 2.5
| null |
2011-03-17T02:18:02.813
|
2011-03-17T02:18:02.813
| null | null |
392
| null |
746
|
2
| null |
742
|
2
| null |
As suggested by Owe, you could use the Statistics part of the BoE website. It's not like the rate is changing every second, so you don't really need a live feed, I suppose.
You could update it on every start - for example using the [Interest and Exchange Rates tables](http://www.bankofengland.co.uk/mfsd/iadb/index.asp?Travel=NIxSTxTIx&levels=1&XNotes=Y&G0Xtop.x=44&G0Xtop.y=13&C=13T&XNotes2=Y&Nodes=X3687X3688X3691X3694X3716X3738X3754X3757X3764X3771X3774&SectionRequired=I&HideNums=-1&ExtraInfo=false#BM). Just generate the format you like, and download/import it to your program. For instance, here's a direct link to [daily data in CSV](http://www.bankofengland.co.uk/mfsd/iadb/fromshowcolumns.asp?Travel=NIxSTxTIxSUx&FromSeries=1&ToSeries=50&DAT=ALL&VFD=Y&csv.x=8&csv.y=26&CSVF=TT&C=13T&Filter=N). Parsing it shouldn't be much of a problem.
| null |
CC BY-SA 2.5
| null |
2011-03-17T11:01:11.940
|
2011-03-17T11:01:11.940
| null | null |
38
| null |
747
|
1
|
2125
| null |
15
|
481
|
If one wants to smooth a time series using a window function such as Hanning, Hamming, Blackman etc. what are the considerations for favouring any one window over another?
|
What should be considered when selecting a windowing function when smoothing a time series?
|
CC BY-SA 3.0
| null |
2011-03-17T12:58:09.200
|
2011-10-06T17:04:17.837
|
2011-09-12T01:04:35.067
|
1355
|
252
|
[
"time-series"
] |
748
|
1
|
750
| null |
16
|
5139
|
Hi, I posted this question first at [mathflow.net](http://www.mathflow.net); they suggested this page, which I was not aware of.
Question
Let $(X_1,X_2)$ be a multivariate normal random vector ($X_1$ and $X_2$ need not be independent). Is it possible to calculate
$$VaR_{\alpha}(e^{X_1}+e^{X_2})$$
analytically?
Or is it even possible to calculate it in terms of
$VaR_{\alpha}(e^{X_1})$ and $VaR_{\alpha}(e^{X_2})$
i.e. is there a representation (a function $g(\cdot,\cdot)$ ) of the form
$$VaR_{\alpha}(e^{X_1}+e^{X_2})=g(VaR_{\alpha}(e^{X_1}),VaR_{\alpha}(e^{X_2})).$$
The case where $X_1$ and $X_2$ are independent could be approached via convolution, which to my eye gives no indication of whether it is analytically tractable.
|
How to compute the Value-at-Risk of the sum of two dependent lognormal random variables?
|
CC BY-SA 3.0
| null |
2011-03-17T18:22:34.583
|
2016-11-23T01:19:35.820
|
2016-11-23T01:19:35.820
|
467
|
608
|
[
"risk-management",
"value-at-risk"
] |
749
|
2
| null |
748
|
1
| null |
For independent random variables the variance of a sum is the sum of the variances. If the random variables are not independent, then there's a covariance term $$Var(X_1 + X_2) = VarX_1 + VarX_2 + 2*Cov(X_1, X_2)$$
Exponentiating doesn't change this relation; it just makes your random variable log-normal.
Maybe you're looking for [Stein's lemma](http://en.wikipedia.org/wiki/Stein%27s_lemma)? Is there a further downstream question?
| null |
CC BY-SA 2.5
| null |
2011-03-17T20:16:59.863
|
2011-03-17T20:16:59.863
| null | null |
106
| null |
750
|
2
| null |
748
|
6
| null |
The answer is no, to my knowledge.
I suggest you take a look at [this thread on MO](https://mathoverflow.net/questions/30619/sum-of-log-normal-random-variables) about the sum of log-normal random variables. A few of the articles mentioned there might help you.
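In practice, absent a closed form, one usually falls back on simulation. A minimal Monte Carlo sketch of my own (not from the linked thread; all parameters are placeholders) estimating the $\alpha$-quantile of $e^{X_1}+e^{X_2}$ for bivariate normal $(X_1,X_2)$ with correlation $\rho$:
```
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Monte Carlo estimate of the alpha-quantile of L = exp(X1) + exp(X2), where
// (X1, X2) is bivariate normal with means mu1, mu2, std devs s1, s2 and
// correlation rho. Adjust the sign/centering convention of "VaR" to taste.
double quantile_sum_lognormal(double mu1, double s1, double mu2, double s2,
                              double rho, double alpha, int n, unsigned seed = 7)
{
    std::mt19937_64 gen(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    std::vector<double> loss(n);
    for (int i = 0; i < n; ++i)
    {
        double z1 = z(gen);
        double z2 = rho * z1 + std::sqrt(1.0 - rho * rho) * z(gen); // 2x2 Cholesky
        loss[i] = std::exp(mu1 + s1 * z1) + std::exp(mu2 + s2 * z2);
    }
    std::size_t k = static_cast<std::size_t>(alpha * n);
    std::nth_element(loss.begin(), loss.begin() + k, loss.end());
    return loss[k];
}

int main()
{
    // Placeholder parameters: zero means, 20%/30% vols, correlation 0.5, 99% level.
    std::cout << quantile_sum_lognormal(0.0, 0.2, 0.0, 0.3, 0.5, 0.99, 1000000) << std::endl;
}
```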
| null |
CC BY-SA 3.0
| null |
2011-03-17T20:24:11.480
|
2012-01-27T06:08:49.367
|
2017-04-13T12:58:47.233
|
-1
|
92
| null |
751
|
1
| null | null |
26
|
4576
|
There is a large literature covering volatility forecasts with high-frequency tick data. Much of this has surrounded the concept of "realized volatility", such as:
- "Realized Volatility and Correlation" (Andersen, Bollerslev, Diebold, and Labys 1999)
- "Modeling and Forecasting Realized Volatility" (Andersen, Bollerslev, Diebold, and Labys 1999)
Other efforts have looked at high/low data to improve the forecast without including all the tick data.
Robert Almgren has [a nice lecture on the subject](http://www.courant.nyu.edu/~almgren/timeseries/notes7.pdf) as part of his ["Time Series Analysis and Statistical Arbitrage"](http://www.courant.nyu.edu/~almgren/timeseries/) course at NYU.
What's the best way for forecast volatility using high-frequency data?
Note: A similar question was previously [asked on Wilmott](http://www.wilmott.com/messageview.cfm?catid=38&threadid=62514).
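For reference, the basic realized-variance estimator these papers build on is just the sum of squared intraday log returns. A minimal sketch (my own illustration; the price path is a placeholder, and in practice the sampling frequency matters because of microstructure noise):
```
#include <cmath>
#include <iostream>
#include <vector>

// Basic realized volatility from intraday prices: the square root of the sum of
// squared log returns over the day. The price path below is a placeholder.
double realized_vol(const std::vector<double>& prices)
{
    double rv = 0.0;
    for (std::size_t i = 1; i < prices.size(); ++i)
    {
        double r = std::log(prices[i] / prices[i - 1]);
        rv += r * r;
    }
    return std::sqrt(rv); // daily realized volatility
}

int main()
{
    std::vector<double> p = {100.0, 100.2, 100.1, 100.4, 100.3, 100.6, 100.5};
    std::cout << realized_vol(p) << std::endl;
}
```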
|
How to forecast volatility using high-frequency data?
|
CC BY-SA 2.5
| null |
2011-03-18T01:31:59.063
|
2022-12-12T07:02:24.237
| null | null |
17
|
[
"volatility",
"forecasting",
"high-frequency"
] |