Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
434 | 1 | null | null | 3 | 352 | The first thing I want to do is load data (.csv) into R. The data I am concentrating on are the COT reports. I want to load a single column into R and calculate its correlation with the proper instrument.
| How to calculate correlations (COT report) in R? | CC BY-SA 2.5 | null | 2011-02-11T11:39:25.043 | 2011-02-11T11:39:25.043 | null | null | 354 | [
"correlation"
]
|
435 | 2 | null | 422 | 7 | null | You can get 1.5 years of dense, millisecond-tagged forex tick data from here: [http://www.truefx.com/?page=downloads](http://www.truefx.com/?page=downloads) (free account required)
There are a few problems with it - some missing days and a couple of days with bad data.
EUR/USD has an average of 200,000 ticks/day for January 2011.
The Dukascopy ticks mentioned in another answer are also good. Other good forex tick data sources are demo accounts with FXCM, dbFX, or MB Trading, but you need to use their respective APIs to retrieve the data.
You can also get free equity market order book data from Trading Physics - [http://www.tradingphysics.com/Feeds/DownloadHistoricalITCH.aspx](http://www.tradingphysics.com/Feeds/DownloadHistoricalITCH.aspx). A free account is required and you can retrieve a limited number of data files per week.
| null | CC BY-SA 2.5 | null | 2011-02-11T12:38:05.333 | 2011-02-11T12:38:05.333 | null | null | 225 | null |
436 | 2 | null | 431 | 10 | null | Quite a lot of lectures on Wilmott.com: [http://wilmott.com/av.cfm](http://wilmott.com/av.cfm)
| null | CC BY-SA 2.5 | null | 2011-02-11T13:18:33.397 | 2011-02-11T13:18:33.397 | null | null | 89 | null |
437 | 2 | null | 399 | 1 | null | In many popular copula models, a factor driving a certain event (e.g. default) has the form
\begin{equation}
Y_i = \sum_k a_{ik} X_k + b_i Z_i
\end{equation}
where $\lbrace X_k \rbrace, Z_i$ are independent random factors. The coefficients $a_{ik}$ and $b_i$ are commonly called "factor loadings".
| null | CC BY-SA 2.5 | null | 2011-02-11T13:22:19.820 | 2011-02-11T13:22:19.820 | null | null | 89 | null |
438 | 2 | null | 419 | 9 | null | If your intention is to use the resulting continuous-contract time series to perform return-based calculations, as would be the case with a volatility analysis, then what you want to use is the ratio-adjustment method.
If you are happy to roll on a single day, this is trivially implemented by taking the ratio of the next contract's settlement price to the leading contract's settlement price on the roll day, multiplying all historical leading-contract prices by that ratio, and repeating the process backwards along the contract history.
This will result in a time series that exhibits the returns of an excess return commodity index. Note that benchmark commodity indices roll on a multi-day window, e.g. DJ-UBS or GSCI roll in 20% increments over 5 days.
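For concreteness, here is a minimal R sketch of the single-day ratio adjustment described above; the price vectors and roll index are hypothetical, and a real implementation would repeat this backwards over every roll date.
```
# Ratio-adjust a (hypothetical) leading-contract history on a single roll day.
# lead, nxt: settlement prices of the leading and next contracts on the same dates
# roll: index of the roll day within those vectors
ratio_adjust <- function(lead, nxt, roll) {
  ratio    <- nxt[roll] / lead[roll]                  # next / leading on the roll day
  adjusted <- lead * ratio                            # rescale the whole leading history
  c(adjusted[1:roll], nxt[(roll + 1):length(nxt)])    # splice into one continuous series
}

ratio_adjust(lead = c(100, 101, 102, 103), nxt = c(104, 105, 106, 107), roll = 3)
```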
| null | CC BY-SA 2.5 | null | 2011-02-11T13:24:41.503 | 2011-02-11T13:24:41.503 | null | null | 335 | null |
439 | 2 | null | 430 | 10 | null | If you're looking for Java or C/C++/C#, then you will have a much harder time with this than if you looked at R, Matlab, or Python (with Scipy).
For those other languages, I recommend:
- Java: Weka is one of the most complete data mining libraries out there. Fortunately, it also comes with a very good book -- "Data Mining: Practical Machine Learning Tools and Techniques" -- that covers the field of data mining. They just came out with a new edition.
- C++: In my experience, the most complete, well-documented library for this is Shark. Just one note there: it is currently going through a pretty major revision as they start to use Boost to replace their existing Array library.
In general, I don't know why you wouldn't use R for this. It's freely available, very complete, has lots of documentation, and can be easily interfaced from Java ([RJava](http://www.rforge.net/rJava/)) and C++ ([Rcpp](http://dirk.eddelbuettel.com/code/rcpp.html)). Plus, if you're using "The Elements of Statistical Learning": that textbook used S-Plus/R to do all of its analysis. And R is the only language that I know which includes all of the algorithms from the book (including things like [lars](http://cran.r-project.org/web/packages/lars/), which was created by one of the book's authors). And I am starting to slowly reproduce most of the key examples from that book in R [on my blog](http://www.statalgo.com/).
| null | CC BY-SA 3.0 | null | 2011-02-11T14:35:23.233 | 2016-12-22T08:14:17.990 | 2016-12-22T08:14:17.990 | 2183 | 17 | null |
440 | 2 | null | 341 | 8 | null | One often finds the argument that a random walk in price changes would be proof of the efficient market hypothesis, but this is (IMO) a logical fallacy: just because the EMH implies random walks in price changes, finding random walks does not automatically imply that the EMH is true.
| null | CC BY-SA 2.5 | null | 2011-02-11T15:24:03.607 | 2011-02-11T15:24:03.607 | null | null | 357 | null |
441 | 1 | 446 | null | 7 | 2007 | I am trying to analyze valuation methods for swaptions. Does anyone know of free example data for these OTC-traded securities?
| Free data on swap options | CC BY-SA 2.5 | null | 2011-02-11T15:28:02.467 | 2015-06-03T16:04:03.907 | null | null | 357 | [
"data"
]
|
442 | 2 | null | 441 | 7 | null | I will go out on a limb and say that this doesn't exist, unless you have a good relationship and can get some from your broker.
| null | CC BY-SA 2.5 | null | 2011-02-11T15:31:02.453 | 2011-02-11T15:31:02.453 | null | null | 17 | null |
443 | 2 | null | 424 | 4 | null | Unless I'm missing something, your question simply boils down to arithmetic as you have the portfolio allocation and sector returns explicitly identified:
Portfolio Return = (Sector 1 Allocation) * (Sector 1 Return) + (Sector 2 Allocation) * (Sector 2 Return) + ... + (Sector n Allocation) * (Sector n Return)
where the allocations among the n sectors add up to 100%.
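As a minimal R illustration of this arithmetic (the allocations and returns below are made up):
```
alloc   <- c(0.60, 0.40)     # sector allocations, summing to 100%
returns <- c(0.05, -0.02)    # sector returns over the period
sum(alloc * returns)         # portfolio return: 0.022, i.e. 2.2%
```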
| null | CC BY-SA 2.5 | null | 2011-02-11T16:11:58.940 | 2011-02-11T16:11:58.940 | null | null | 253 | null |
444 | 2 | null | 38 | 6 | null | I can't speak for all structured products, but valuing an MBS is straightforward, though not easy. It's straightforward because you just need to calculate the net present value of the discounted cash flows. That said, accurately determining those cash flows is hard.
The most difficult cash flows to determine--prepayments and defaults/severity--also have the largest impact on MBS value. Prepayments are highly dependent on interest rates, so an OAS ([option-adjusted spread](http://en.wikipedia.org/wiki/Option-adjusted_spread)) approach is superior to a static assumption.
The valuation is fairly consistent across different MBS, especially for MBS from the GSEs. A private MBS would likely be more complicated because they don't have the pooling restrictions of the GSEs. [CMOs](http://en.wikipedia.org/wiki/Collateralized_mortgage_obligation) are even more complicated to value because of the tranche structure.
| null | CC BY-SA 2.5 | null | 2011-02-11T16:32:06.883 | 2011-02-11T16:32:06.883 | null | null | 56 | null |
445 | 2 | null | 422 | 3 | null | NYSE has a bunch of sample data on their FTP site: [ftp://sampledata:datacap@ftp2.nyxdata.com/custom/](ftp://sampledata%3adatacap@ftp2.nyxdata.com/custom/)
For NYSE's data products that I've worked with, the sample set tends to be a day's worth of data for their entire universe.
| null | CC BY-SA 2.5 | null | 2011-02-11T16:41:44.437 | 2011-02-11T16:41:44.437 | null | null | 90 | null |
446 | 2 | null | 441 | 7 | null | I agree with Shane; I seriously doubt you're going to find publicly available swaption data for free. You might get some sample data with a textbook, or from a published journal article.
If you only need one example, you can find one in the documentation for the `BermudanSwaption` function in the [RQuantLib](http://cran.r-project.org/web/packages/RQuantLib/) R package.
| null | CC BY-SA 2.5 | null | 2011-02-11T16:45:06.957 | 2011-02-11T16:45:06.957 | null | null | 56 | null |
447 | 1 | 1160 | null | 6 | 15178 | I'm looking for an options broker that provides an execution API.
I'd ideally like to test on a paper-trading version of it before connecting to a real execution engine. I know IB offers that, but they require a funded account with a 10K minimum balance.
I was hoping for no more than 2-3K.
I would appreciate your suggestions.
| Option trading API other than Interactive Brokers | CC BY-SA 3.0 | null | 2011-02-11T17:03:58.373 | 2013-05-20T11:26:34.087 | 2011-09-08T19:22:51.750 | 1355 | 358 | [
"options",
"programming",
"interactive-brokers",
"trade",
"broker"
]
|
448 | 2 | null | 441 | 0 | null | You can try begging for them, if you're an academician.
| null | CC BY-SA 2.5 | null | 2011-02-11T17:04:31.337 | 2011-02-11T17:04:31.337 | null | null | 89 | null |
449 | 2 | null | 447 | 0 | null | I don't use them, but [thinkorswim](http://www.thinkorswim.com) has a $3,500 account minimum. They also have an [API](http://www.thinkpipes.com/api/) via [thinkpipes](http://www.thinkpipes.com/). I'm not sure if it costs extra to use the API.
| null | CC BY-SA 2.5 | null | 2011-02-11T17:20:38.647 | 2011-02-11T17:20:38.647 | null | null | 56 | null |
450 | 1 | null | null | 21 | 15034 | Are there any free data sources for historical US equity data? Yahoo Finance has daily prices, but I'm looking for something more granular that goes back 2 or more years (it doesn't have to be close to tick data; hourly is fine).
I've looked through [Data sources online](https://quant.stackexchange.com/questions/141/data-sources-online), there isn't much on stock market data.
| Free intra-day equity data source | CC BY-SA 2.5 | null | 2011-02-11T18:08:36.227 | 2022-12-14T00:09:05.203 | 2017-04-13T12:46:22.953 | -1 | 128 | [
"data",
"equities"
]
|
451 | 2 | null | 450 | 4 | null | Open an account with IB, and you can get access to equity and options data for free via their API.
| null | CC BY-SA 2.5 | null | 2011-02-11T18:42:58.273 | 2011-02-11T18:42:58.273 | null | null | 214 | null |
452 | 2 | null | 441 | 6 | null | Just for future reference, if you are student or academic, you can request for market data on [http://www.quantnet.com/forum/](http://www.quantnet.com/forum/).
Many of our members are Wall Street practitioners and, as a policy, they will provide such data to help with your research (hence students/academics only). I have been the conduit for many such transactions in the past.
You will need to be precise about the type of data you need (series, ticker name, timeline, etc.). These helpers are not going to waste their time if you have no clue about what data you need.
| null | CC BY-SA 2.5 | null | 2011-02-11T18:47:07.270 | 2011-02-11T18:47:07.270 | null | null | 43 | null |
453 | 2 | null | 358 | 4 | null | I haven't seen a framework for options specifically, however...
The way I have done this in the past is to essentially set up a time series (xts or zoo) for each option (underlying, type, strike, expiry). Obviously doing this via code is important because it is intensely error-prone.
We use a build function to put those into the workspace.
It is still difficult and brittle to do longer tests that span multiple expiries.
We eventually gave up on R and MATLAB in favor of a functional programming approach. This way, as we evaluate, the code scans a mapped structure instead of an array.
Clearly this is slower, but it really simplifies the programming and is easily parallelized. It performs reasonably well on a live data feed. Probably not tractable for HFT (calcs are in ms, not microseconds).
| null | CC BY-SA 2.5 | null | 2011-02-11T18:55:24.080 | 2011-02-11T18:55:24.080 | null | null | 214 | null |
454 | 2 | null | 447 | 1 | null | TradeStation does options, not necessarily through IB.
[http://www.tradestation.com/](http://www.tradestation.com/)
| null | CC BY-SA 2.5 | null | 2011-02-11T19:08:17.830 | 2011-02-11T19:08:17.830 | null | null | 214 | null |
455 | 1 | null | null | 8 | 614 | What are the most effective techniques for reject inference in the context of retail credit scoring? [Parcelling](http://analytics.ncsu.edu/sesug/2008/ST-160.pdf) is something I use frequently... Are there any other approaches out there?
| Reject inference | CC BY-SA 2.5 | null | 2011-02-11T21:27:54.637 | 2011-02-12T15:40:24.327 | null | null | 40 | [
"credit-scoring",
"reject-inference"
]
|
456 | 2 | null | 455 | 2 | null | Have you looked at the [bivariate probit model](http://www.defaultrisk.eu/img_auth.php/a/a3/%7E2660380.pdf) at all?
> Bivariate probit model with sample selection assumes that the distribution of the accepted applicant population is different from that of the rejected applicant population. That is, it is assumed that $P(default|X, rejected) \neq P(default|X, accepted)$ for some vector of explanatory variables X of the model predicting the default of companies.
In addition to that paper, there's an article that highlights the different approaches available here: [Theoretical approaches of reject inference](http://www.defaultrisk.eu/img_auth.php/2/26/Theoretical_approaches_of_reject_inference.pdf).
It gives overviews of:
- Several different parceling methods
- Fuzzy reclassification
- Iterative reclassification
- Three-groups approach
among others.
| null | CC BY-SA 2.5 | null | 2011-02-12T00:02:34.433 | 2011-02-12T15:40:24.327 | 2020-06-17T08:33:06.707 | -1 | 117 | null |
457 | 2 | null | 424 | 1 | null | The portfolio's return minus the return of a hypothetical tech and industrials portfolio with market weighting. For example, if the total market cap of the industrial sector is twice that of tech, then a two-sector portfolio using market-relative weights would be 66.67% industrials and 33.33% tech.
If you are investing in all sectors, then it is as trivial as subtracting the market return from your portfolio's return. This, however, won't tell you what alpha, if any, you've achieved.
| null | CC BY-SA 2.5 | null | 2011-02-12T00:19:09.367 | 2011-02-12T00:19:09.367 | null | null | 352 | null |
458 | 2 | null | 447 | 0 | null | Sterling Trader has an API that can handle options. I have not personally used it but it may be worth checking out. Sterling is mainly used by prop firms or larger independent accounts.
| null | CC BY-SA 2.5 | null | 2011-02-12T00:43:53.697 | 2011-02-12T00:43:53.697 | null | null | 341 | null |
460 | 2 | null | 306 | 7 | null | C++, Java, OCaml
Very good link for further information:
[http://www.selectorweb.com/algorithmic_trading.html](http://www.selectorweb.com/algorithmic_trading.html)
| null | CC BY-SA 2.5 | null | 2011-02-12T08:19:24.130 | 2011-02-12T08:19:24.130 | null | null | 354 | null |
461 | 1 | null | null | 32 | 26606 | How should I store tick data? For example, if I have an IB trading account, how should I download and store the tick data directly to my computer? Which software should I use?
| How should I store tick data? | CC BY-SA 3.0 | null | 2011-02-12T09:48:20.777 | 2017-06-06T21:31:12.473 | 2011-08-07T13:34:16.267 | 1106 | 354 | [
"data",
"interactive-brokers"
]
|
462 | 2 | null | 141 | 6 | null | [http://www.mbtrading.com/developersPriceServer.aspx](http://www.mbtrading.com/developersPriceServer.aspx)
MBT Quote API was designed for third-party software developers and provides access to the following data feeds:
```
* NASDAQ Market
* New York Stock Exchange - NYSE
* American Stock Exchange - AMEX
* Toronto Stock Exchange - TSX
* INET and ARCA ECN books
* CBOE Options quotes
* CME Futures Quotes
* CBOT Futures quotes
* Foreign Currencies
```
Under development.
| null | CC BY-SA 2.5 | null | 2011-02-12T09:54:56.123 | 2011-02-12T09:54:56.123 | null | null | 354 | null |
463 | 1 | 504 | null | 9 | 731 | What methods can be used to map the correlation skew of a credit index on a bespoke CDO portfolio?
| Correlation skew mapping | CC BY-SA 2.5 | 0 | 2011-02-12T10:50:52.440 | 2011-02-14T18:17:10.400 | null | null | 89 | [
"structured-credit",
"cdo"
]
|
464 | 1 | 473 | null | 15 | 2419 | The Libor Market Model (LMM) models the interest rate market by simulating a set of simply compounded, non-overlapping Libor rates which reset and mature on predefined dates. How do I obtain from them a Libor rate which resets and/or matures between these fixed dates?
| Rate interpolation in Libor Market Model | CC BY-SA 2.5 | null | 2011-02-12T10:56:08.987 | 2022-11-06T03:42:11.640 | 2022-11-06T03:42:11.640 | 2299 | 89 | [
"interest-rates",
"libor",
"libor-market-model"
]
|
465 | 1 | 488 | null | 8 | 755 | How do I model the randomness of recovery rate given default when pricing credit derivatives?
| Stochastic recovery rates | CC BY-SA 2.5 | 0 | 2011-02-12T10:57:19.830 | 2011-02-13T20:39:34.113 | null | null | 89 | [
"credit"
]
|
466 | 1 | 3407 | null | 8 | 2270 | What is an efficient method of pricing callable range accruals on rate spreads? As an example:
A cancellable 30-year swap which pays 6M Libor every 6M, multiplied by the number of days the spread between the 10-year and 2-year CMS rates is above 0, in exchange for a fixed or floating coupon.
Using LMM for this is dog slow, and one-factor models are not enough because both Libor and swap rates are involved.
| Pricing callable range accruals on spreads | CC BY-SA 2.5 | null | 2011-02-12T11:06:42.113 | 2012-05-02T09:03:30.077 | null | null | 89 | [
"interest-rates",
"exotics"
]
|
467 | 2 | null | 430 | 7 | null | An interesting pick if you'd like to use Python within the Numpy/Scipy environment is [scikits.learn](http://scikit-learn.sourceforge.net/). And an other viable Java package is [Apache's Mahout](http://mahout.apache.org/).
| null | CC BY-SA 2.5 | null | 2011-02-12T13:18:12.640 | 2011-02-12T13:18:12.640 | null | null | 243 | null |
468 | 2 | null | 461 | 18 | null | What do you want to do with the tick data later? Run analytics? You can save tick data to a flat file for all the software cares, but that would be really slow to access later.
Instead, you should ideally save the data:
- Column-oriented - all elements in a field are stored contiguously for better caching
- Binary - all elements are ready for immediate use; no lexical casting required
There are a number of column-oriented databases, though no production-quality ones are open-source at the moment. You can try the [non-commercial version of q/kdb+](http://kx.com/Developers/software.php) to see what you think of it, though it's a huge learning curve if you aren't used to it already.
Something else to think about when storing tick data is the physical medium. Ideally you'll want:
- Local storage - fetching across NFS is going to be painful
- Solid state - fetching from disk is also painful
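As a rough illustration of the column-oriented, binary layout described above, here is a minimal R sketch that writes one field as a contiguous binary vector; the file name and tick count are hypothetical.
```
prices <- runif(1e6, 100, 101)               # hypothetical tick prices
con <- file("prices.bin", "wb")
writeBin(prices, con)                        # contiguous doubles, no lexical casting needed
close(con)

con <- file("prices.bin", "rb")
prices2 <- readBin(con, "double", n = 1e6)   # read the whole column straight back
close(con)
```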
| null | CC BY-SA 2.5 | null | 2011-02-12T16:03:54.603 | 2011-02-12T16:03:54.603 | null | null | 35 | null |
469 | 2 | null | 430 | 8 | null | If you are programming in C#, you may have a look at [AForge.NET](http://www.aforgenet.com/) and [Accord.NET](http://accord-net.origo.ethz.ch/) too
| null | CC BY-SA 2.5 | null | 2011-02-12T16:55:00.930 | 2011-02-12T16:55:00.930 | null | null | 191 | null |
470 | 2 | null | 430 | 7 | null | A good resource for open-source statistical learning / machine learning libraries is [mloss.org](http://www.mloss.org/software).
| null | CC BY-SA 2.5 | null | 2011-02-12T17:11:39.603 | 2011-02-12T17:11:39.603 | null | null | 69 | null |
471 | 2 | null | 464 | 3 | null | You could bootstrap a curve based on the forward rates you get, plus your standard interpolation scheme.
That's certainly what you'd do if the rates were presented to you as a set of market quotes for FRAs. It does ignore the evolution of the future forward rates though, so I'd expect it to work best if the interpolated rate is close to one of your simulated rates.
| null | CC BY-SA 2.5 | null | 2011-02-12T17:42:18.560 | 2011-02-12T17:42:18.560 | null | null | 371 | null |
472 | 2 | null | 465 | 2 | null | CreditMetrics uses monte carlo simulation assuming a beta-distribution fitted to historical recovery rates.
| null | CC BY-SA 2.5 | null | 2011-02-12T17:42:55.270 | 2011-02-12T17:42:55.270 | null | null | 357 | null |
473 | 2 | null | 464 | 5 | null | The subject is interesting and not so easy if you want to interpolate in an arbitrage-free way. To my knowledge, a good paper on the subject is [this one](http://ideas.repec.org/p/uts/rpaper/71.html).
| null | CC BY-SA 2.5 | null | 2011-02-12T17:46:49.753 | 2011-02-12T17:46:49.753 | null | null | 92 | null |
474 | 2 | null | 431 | 13 | null | I strongly recommend Robert Shiller's ["Financial Markets"](https://youtube.com/playlist?list=PL8FB14A2200B87185).
| null | CC BY-SA 4.0 | null | 2011-02-12T18:33:30.360 | 2023-03-10T13:07:42.027 | 2023-03-10T13:07:42.027 | 66036 | 17 | null |
475 | 2 | null | 461 | 1 | null | If you have an IB account, you can use their API to request market data and save it to a flat file. That being said, IB does not offer true tick data; it is filtered, and you may want to consider a different data feed if you need true tick data.
| null | CC BY-SA 2.5 | null | 2011-02-12T20:02:29.320 | 2011-02-12T20:02:29.320 | null | null | 341 | null |
476 | 2 | null | 461 | 4 | null | I am using just a filesystem to store raw tick data. I am using protocol buffers to easily allow multiple languages to consume the data. Part of the reason is that I am moving more stuff onto Amazon's EC2 to use their GPU compute instances, and storing data in blobs allows for easy integration with S3. I would love a proper column store, but right now this has worked well with low development overhead.
We have also tuned our workloads to get around these constraints. Data is typically worked on for minutes or hours after it is retrieved. We are doing pure algo development and don't have (or need) low latency access in a UI.
| null | CC BY-SA 2.5 | null | 2011-02-12T23:03:46.633 | 2011-02-12T23:03:46.633 | null | null | 373 | null |
477 | 2 | null | 306 | 13 | null | F# was used at Credit Suisse and, I believe, a number of other desks. According to people I know at Microsoft, the banks told MS to make it a supported language; otherwise it would have stayed a project at Microsoft Research.
I have also seen Haskell used for derivatives trading.
| null | CC BY-SA 2.5 | null | 2011-02-12T23:17:58.883 | 2011-02-12T23:17:58.883 | null | null | 373 | null |
478 | 5 | null | null | 0 | null | null | CC BY-SA 2.5 | null | 2011-02-12T23:21:45.207 | 2011-02-12T23:21:45.207 | 2011-02-12T23:21:45.207 | -1 | -1 | null |
|
479 | 4 | null | null | 0 | null | The process of using a computer program to place orders to trade securities in financial markets. Typically, these trades are made in exchange-traded instruments, such as listed equities, options, and futures. However, automated trading can occur in numerous other products such as bonds and currencies. | null | CC BY-SA 3.0 | null | 2011-02-12T23:21:45.207 | 2011-12-14T15:29:39.137 | 2011-12-14T15:29:39.137 | 1106 | 214 | null |
480 | 5 | null | null | 0 | null | The kelly-criterion is a risk management strategy (or wagering system) developed by J.L. Kelly of Bell Labs.
Kelly's formula, as it is also called, provides an optimal risk apportionment system that relies on having 2 calculated probabilities. Namely, the odds of an event occurring and the edge the investor (gambler) has in the event.
It is optimal in the sense that it is guaranteed not to bankrupt the investor (gambler), while maximizing the long-run expected growth rate of the bankroll.
| null | CC BY-SA 3.0 | null | 2011-02-12T23:27:32.830 | 2013-05-21T04:37:54.570 | 2013-05-21T04:37:54.570 | 771 | -1 | null |
481 | 4 | null | null | 0 | null | The kelly-criterion is a risk management strategy (or wagering system) providing an optimal risk apportionment system that relies on having 2 calculated probabilities. | null | CC BY-SA 3.0 | null | 2011-02-12T23:27:32.830 | 2013-05-21T04:37:54.570 | 2013-05-21T04:37:54.570 | 771 | 214 | null |
482 | 1 | 486 | null | 8 | 692 | I have been testing a trend-following strategy. The results show massive drawdowns, which make the equity curve very unstable. What are some ways in which I can reduce the volatility (increase the smoothness) of the equity curve? Better exits? Better entries?
| System Development / Optimization | CC BY-SA 2.5 | null | 2011-02-13T07:20:41.557 | 2011-02-14T15:52:35.103 | 2011-02-13T15:15:09.797 | 17 | 2318 | [
"trading-systems",
"drawdown",
"equity-curve"
]
|
483 | 1 | 502 | null | 12 | 784 | Is a JV model simply Local Vol + Jump Diffusion?
If so, it seems logical that an existing JV model could be used for the valuation of both vanilla and exotic options. Is this true? Does a local vol model not capture the smile parameters (skew and kurtosis) at all, and is that why JVol is used in equity option calculations?
| What are the main differences in Jump Volatility and Local Volatility | CC BY-SA 2.5 | null | 2011-02-13T10:01:34.830 | 2011-02-14T15:23:48.797 | null | null | 293 | [
"volatility",
"black-scholes",
"local-volatility"
]
|
484 | 2 | null | 482 | 4 | null | Be careful when you optimize the exit parameters (and any other parameter), as you could get better results in a backtest that are only due to overfitting. If you haven't done so yet, use in-sample and out-of-sample tests to verify your improvements. After that you can try to build entry filters. In my experience, trend-following systems usually have bigger drawdowns than mean-reverting ones.
| null | CC BY-SA 2.5 | null | 2011-02-13T11:04:13.620 | 2011-02-13T13:23:58.230 | 2011-02-13T13:23:58.230 | 155 | 155 | null |
486 | 2 | null | 482 | 6 | null | Another possibility is to analyze the equity curve itself so as to go live with the system when good performance is expected and to either reduce risk or just paper trade when performance is expected to be negative. Is a series of positive returns followed by negative returns (i.e. is there mean reversion)? Does a trend-following "meta-system" and/or a trailing stop loss (on the total account equity) reduce risk or at least make it more tolerable? A couple of other ideas might be to try [supervised learning](http://en.wikipedia.org/wiki/Supervised_learning) for the drawdown periods or incorporate concepts from [control charts](http://cssanalytics.wordpress.com/2010/06/08/concepts-from-statistical-control-theory/).
There is some danger that you might just end up multiplying your [model risk](http://en.wikipedia.org/wiki/Model_risk) by overlaying a meta-strategy on your original system. Statistically significant changes may be very hard to come by and in the end you might just have to trade smaller size. Good luck.
| null | CC BY-SA 2.5 | null | 2011-02-13T13:05:34.733 | 2011-02-14T15:52:35.103 | 2011-02-14T15:52:35.103 | 352 | 352 | null |
487 | 2 | null | 431 | 10 | null | A couple of lecture note links, no video or audio, but these are pretty useful nonetheless.
Notes from Emmanuel Derman's 2007 Columbia course on the [Volatility Smile](http://www.ederman.com/new/docs/laughter.html)
Andrew Lesniewski's 2009 notes on Interest Rate and Credit pricing, on his [Lectures and Presentations](http://lesniewski.us/presentations.html) page, there are a few other interesting presentations there as well.
| null | CC BY-SA 2.5 | null | 2011-02-13T20:23:57.293 | 2011-02-13T20:23:57.293 | null | null | 371 | null |
488 | 2 | null | 465 | 5 | null | The standard reference is Anderson and Sidenius [Extensions to the Gaussian Copula: Random recovery and random factor loadings](http://www.defaultrisk.com/pa_model_15.htm). Random recovery proved necessary in 2007/2008 when you couldn't calibration standard one factor base correlation models. [This paper](http://www.defaultrisk.com/pp_recov_45.htm) discusses this, and might be an easier starting point than the Anderson and Sidenius paper.
| null | CC BY-SA 2.5 | null | 2011-02-13T20:39:34.113 | 2011-02-13T20:39:34.113 | null | null | 371 | null |
489 | 1 | null | null | 21 | 36470 | If you are interested in determining whether there is a correlation between the Federal Reserve Balance Sheet and PPI, would you calculate the correlation between values (prices) or period-to-period change (returns)?
I've massaged both data sets to be of equal length and same date range and have labeled them WWW (WRESCRT) and PPP (PPIACO). Passing them into R we get the following:
```
> cor(WWW, PPP)
[1] 0.7879144
```
Then applying the Delt() function:
```
> PPP.d <- Delt(PPP)
```
Then applying the na.locf() function:
```
PPP.D <- na.locf(PPP.d, na.rm=TRUE)
```
Then passing it through cor() again:
```
> cor(WWW.D, PPP.D)
[1] -0.406858
```
So, the bottom line is that it matters.
NOTE: To see how I created the data, view [http://snipt.org/wmkpo](http://snipt.org/wmkpo). Warning: it needs refactoring, but the good news is that it's only 27 lines.
| Correlation between prices or returns? | CC BY-SA 2.5 | null | 2011-02-13T20:48:11.220 | 2018-06-28T13:46:12.330 | 2011-02-14T00:39:17.563 | 291 | 291 | [
"correlation"
]
|
490 | 2 | null | 489 | 15 | null | Short answer: you want to use the correlation of returns, since you're typically interested in the returns on your portfolio rather than the absolute levels.
Also, correlations on price series have very strange properties. If you think about a time series of prices, you could write it out as $[P_0, P_1, P_2, \ldots, P_N]$, or
$[P_0, P_0+R_1, P_0+R_1+R_2, \ldots, P_0+R_1+\cdots+R_N]$, where $R_i = P_i - P_{i-1}$. Written this way you can see that the first return, $R_1$, contributes to every entry in the series, whereas the last contributes to only one. This gives the early values in the correlation of prices more weight than they should have. See the answers in [this thread](http://www.wilmott.com/messageview.cfm?catid=34&threadid=45783) for some more details.
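A quick R illustration of this effect, assuming two independent simulated random walks (the seed and sample size are arbitrary):
```
set.seed(1)
r1 <- rnorm(1000); r2 <- rnorm(1000)   # independent "returns"
p1 <- cumsum(r1);  p2 <- cumsum(r2)    # corresponding "price" levels
cor(r1, r2)   # near zero, as expected
cor(p1, p2)   # often far from zero, despite independence
```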
| null | CC BY-SA 2.5 | null | 2011-02-13T21:44:41.583 | 2011-02-13T21:44:41.583 | null | null | 371 | null |
491 | 2 | null | 489 | 8 | null | It depends. Usually you would want to measure correlation between variables that are both stationary; otherwise you will almost always measure a correlation between variables developing with a trend, even if they are unrelated. In this case I would guess that you should use first differences.
| null | CC BY-SA 2.5 | null | 2011-02-13T22:22:22.957 | 2011-02-13T22:22:22.957 | null | null | 357 | null |
492 | 1 | 495 | null | 13 | 2197 | I am currently developing a commercial automated trading program in which users can write their own proprietary code and develop strategies, like in NinjaTrader, MetaTrader etc. Right now I am working on the order handling part, which is really important. I can not figure out how to reference the orders entered during the execution of a strategy.
Suppose in a strategy, there is this piece of code:
```
if(some_condition)
EnterLong("IBM",100)
```
My problem is `some_condition` can be set to true several times, which results in generating several buy orders. Somehow I have to provide the ability for users to modify or cancel some of these orders.
NinjaTrader, for example, has a feature called `EntriesPerDirection`, which limits the entry number for a direction (long or short). So you can have a certain number of order objects (or an array of orders) that are returned by order entry functions and say `order1.Cancel();` However, this does not make any sense for an iceberging strategy in which thousands of orders could be generated. Both programs enable the user to iterate over the orders that have not led to a transaction yet. This again could be painful for iceberging purposes. I think those two programs are not specifically built for handling large numbers of orders or for developing execution algorithms (e.g., VWAP, Arrival Price, Implementation Shortfall) which are widely used among buy-side firms.
Also both NT and MT have events (`OnOrderUpdate`, `OnExecution`, `OnTrade`) that are fired when the status of an order is changed. This is a good idea, but I am not sure it is the best solution.
Are there any approaches or methodologies addressing my problem that are commonly used by other automated trading software?
| What approaches are there to order handling in automated trading? | CC BY-SA 3.0 | null | 2011-02-13T23:37:11.597 | 2011-09-22T14:50:45.260 | 2011-09-22T14:50:45.260 | 1106 | 42 | [
"algorithmic-trading",
"software",
"programming",
"order-handling"
]
|
493 | 2 | null | 461 | 13 | null | If you're planning on analyzing the data later in R, you should take a look at the [indexing and mmap packages](https://r-forge.r-project.org/R/?group_id=648). Though, as [@chrisaycock](https://quant.stackexchange.com/users/35/chrisaycock) said, you'll need to save the data in a column-oriented, binary format.
If you're downloading the IB data with R, using [IBrokers](http://cran.r-project.org/web/packages/IBrokers/), you can write your own eWrapper to store the data in whatever format you want.
| null | CC BY-SA 2.5 | null | 2011-02-14T00:08:35.867 | 2011-02-14T00:08:35.867 | 2017-04-13T12:46:22.953 | -1 | 56 | null |
494 | 2 | null | 492 | 4 | null | What I've done in the past is create an OnOrderSubmit event/method that fires when an order is placed. Set a semaphore in that method so that your tick/analytical method ignores order placement instructions until an execution occurs or a timer expires. Then flip the semaphore.
(If you're using multiple threads you want to make sure to serialize access to each thread by symbol.)
| null | CC BY-SA 2.5 | null | 2011-02-14T00:52:32.200 | 2011-02-14T00:52:32.200 | null | null | 214 | null |
495 | 2 | null | 492 | 11 | null | Regarding your order management issue, every order should have a unique identifier that the user can reference; in FIX, this is the ClOrdID. The parameters of every order the user requests should be stored in a table keyed by this identifier.
If your goal is to prevent duplicate orders from going out, consider having a trade volume limit per each symbol. That way, subsequent order requests will be rejected by your order manager even if the condition has passed. A trade volume limit is also desirable to prevent moving the market (especially when coupled with a position limit) and can act as a safety mechanism if things get out of hand (we call this the blowout preventer at my current firm).
>
... both NT and MT have events (OnOrderUpdate, OnExecution, OnTrade) that are fired when the status of an order is changed.
Event-driven programming makes real-time trading much more manageable. This paradigm is known as "complex event processing" and is quite common in institutional trading.
>
I think those two programs are not specifically built for handling large numbers of orders ...
That's because they were designed for day traders who want to pretend they're quants. No institutional trader would ever use those software packages.
| null | CC BY-SA 2.5 | null | 2011-02-14T01:35:32.223 | 2011-02-14T01:35:32.223 | null | null | 35 | null |
496 | 2 | null | 306 | 3 | null | I've seen the following languages in use:
- C++
- C#
- Excel (VBA)
- F#
- Q
- MatLab
- R
- Python
| null | CC BY-SA 3.0 | null | 2011-02-14T02:37:26.717 | 2012-04-23T13:37:09.840 | 2012-04-23T13:37:09.840 | 2312 | 141 | null |
497 | 2 | null | 461 | 25 | null | Using IBrokers from R is going to be the easiest route. A quick example of capturing data to disk would be:
```
library(IBrokers)
tws <- twsConnect()
aapl.csv <- file("AAPL.csv", open="w")
# run an infinite-loop ( <C-c> to break )
reqMktData(tws, twsSTK("AAPL"),
eventWrapper=eWrapper.MktData.CSV(1),
file=aapl.csv)
close(aapl.csv)
close(tws)
```
This will send CSV style output to disk. Additionally the data can be stored in xts objects within the loop which can be appended to/filled to provide a constant in-memory object to use for analytics. Objects can be shared with many tools - including using the RBerkeley package on CRAN to share objects with other programs with Berkeley DB bindings. This latter approach, if managed intelligently is very, very fast.
Given the symbol limit of IB (100 concurrent more or less) and the 250ms updates - R can typically handle all of this without breaking a sweat (i.e. the JVM running IB's TWS or even IBGateway client is likely to be far surpassing the R/IBrokers process in terms of CPU usage).
You can even extend the syntax above to record more than one symbol by passing a list of Contracts, increasing the number on the eWrapper, and making sure you have a suitable list of files to write to.
In terms of something closer to long-term storage/access, the packages Josh referred to (mmap and indexing) are also very useful. I've given talks with some basic options data examples that are 3-4GB in size without derived columns (12GB total), and I can pull using R-style subsetting syntax any subset I need nearly instantly. e.g. finding 90k+ contracts for AAPL in 2009 (out of 70MM rows) took tens of milliseconds. All without keeping anything in RAM, and all running on a laptop with 2GB of RAM.
I'll likely get some more presentation material for the latter packages put together soon, and will be giving some talk(s) at the upcoming [R/Finance](http://www.RinFinance.com) conference in Chicago. I am also planning on some public workshops through [lemnica](http://www.lemnica.com) related to R and IB for 2011.
| null | CC BY-SA 2.5 | null | 2011-02-14T03:42:15.450 | 2011-02-14T03:42:15.450 | null | null | 347 | null |
498 | 2 | null | 492 | 3 | null | Instead of sending orders each time the condition is met, try to set a "wanted holding" in the trade logic thread. Trade execution will then make sure (by issuing a sufficient number of orders) to achieve your wanted holding. For example, the first time the signal happens, you set the wanted holding to 100 shares; the next time it happens, you only confirm that you want 100 shares - you do not send another order!
Some other class/thread looks after the actual order management, not the trading logic.
| null | CC BY-SA 2.5 | null | 2011-02-14T07:42:41.203 | 2011-02-14T07:42:41.203 | null | null | 40 | null |
499 | 2 | null | 37 | 2 | null | Try the OEC API at [http://www.openecry.com/services/api_highlights.cfm](http://www.openecry.com/services/api_highlights.cfm)
It is a free .NET based API for Futures. Very easy to work with and can give you both current and historical tick data. Not sure of the frequency, however.
| null | CC BY-SA 2.5 | null | 2011-02-14T14:45:47.867 | 2011-02-14T14:45:47.867 | null | null | 386 | null |
500 | 1 | null | null | 17 | 8766 | How does left tail risk differ from right tail risk? In what context would an analyst use these metrics?
| How does left tail risk differ from right tail risk? | CC BY-SA 3.0 | null | 2011-02-14T15:22:50.260 | 2011-09-22T20:01:12.840 | 2011-09-22T20:01:12.840 | 999 | 389 | [
"risk",
"probability",
"expected-return"
]
|
501 | 1 | 507 | null | 18 | 2419 | I can't find S&P 500 index (SPX) futures data with Greeks to create delta-hedged portfolios. Do these data exist? I have access to most of the common data sources.
In the meantime, I am trying to form these delta-hedged portfolios "manually". Unfortunately, I can't find SPX futures data with maturities, so I use a continuous e-mini S&P 500 future from Datastream and form the delta-neutral portfolio based on guidance from Chapter 14 of Hull.
\begin{equation}
H_{fut} = H_{index} \exp \left( -(R_f - R_{div})T \right)
\end{equation}
where $R_{div}$ is the continuous dividend yield on SPX, $R_f$ is the one-month US Treasury bill, and $H$ are the dollar holdings of each asset. Of course this won't work without the right time to maturity. Is there a "correct" time to maturity to use with an e-mini? Or is there a better source for futures data? Thanks!
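For what it's worth, a toy R calculation of the hedge ratio above, with entirely made-up inputs, looks like this:
```
H_index <- 1e6       # dollar holding in the index to be hedged
R_f     <- 0.02      # one-month T-bill rate (continuously compounded)
R_div   <- 0.018     # continuous dividend yield on SPX
TT      <- 0.25      # time to futures maturity in years
H_index * exp(-(R_f - R_div) * TT)   # dollar holding in the futures leg
```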
| Where to find Greeks for futures to form delta-hedged futures portfolio of S&P 500 index/futures | CC BY-SA 2.5 | null | 2011-02-14T15:23:02.757 | 2011-02-14T21:00:22.423 | null | null | 106 | [
"futures",
"hedging",
"delta-neutral",
"index"
]
|
502 | 2 | null | 483 | 5 | null | Jump volatility is a term sometimes used to describe randomly varying jump sizes in a model with asset value jumps. So strictly speaking it is merely a parameter in generic jump diffusion.
Both local volatility models and jump diffusions end up resulting in skew and kurtosis (of Black-Scholes volatilities). However, they are complementary in practice, at least with sane parameters.
Because jumps tend to "average out" over time, jump-diffusion models have trouble reproducing skew at long tenors. At the same time, it is difficult for a continuous process to achieve the kinds of skews (or equivalently implied probability distributions) observed at short tenors.
Hence, you often see exotics desks combine the two, or at least have both available.
| null | CC BY-SA 2.5 | null | 2011-02-14T15:23:48.797 | 2011-02-14T15:23:48.797 | null | null | 254 | null |
503 | 2 | null | 413 | 4 | null | You cannot derive the probability distribution you require, because for VaR you need a real-world probability distribution. From the options prices, it is only possible to obtain a risk-neutral distribution.
Now, if you are willing to assume some kind of parametric relationship between the risk-neutral and real-world distributions, then you might find the options prices useful. The resulting mathematics for a stochastic volatility model is somewhat tricky, however. You can find most of it in Jim Gatheral's books. A sloppy treatment would just take the risk-neutral distribution and shift its mean.
Obtaining the approximate risk-neutral distribution is fairly simple. Let p(S) be the time-T risk-neutral probability density. Then we see that (TeX notation alert)
\begin{equation}
C := Call(T) = B(0,T) \int_0^\infty Max(0,S-K) p(S) dS
\end{equation}
\begin{equation}
\frac{dC}{dK} = B(0,T) \int_0^\infty 1[S>=K] (-1) p(S) dS \qquad\text{[differentiate under integral] }
\end{equation}
\begin{equation}
\frac{dC}{dK} = B(0,T) \int_K^\infty (-1) p(S) dS
\end{equation}
\begin{equation}
\frac{d^2C}{dK^2} = B(0,T) p(K) \qquad \text{ [Fundamental thm of calculus]}
\end{equation}
Alternatively, you could say that p(S) is the density, and is the derivative of the cumulative distribution function P(S), and write
\begin{equation}
C := Call(T) = B(0,T) \int_0^\infty Max(0,S-K) p(S) dS
\end{equation}
\begin{equation}
\frac{dC}{dK} = B(0,T) \int_0^\infty 1[S>=K] (-1) p(S) dS \qquad\text{[differentiate under integral] }
\end{equation}
\begin{equation}
\frac{dC}{dK} = B(0,T) \int_K^\infty (-1) p(S) dS
\end{equation}
\begin{equation}
\frac{dC}{dK} = B(0,T) (-1) ( P(\infty) - P(K))
\end{equation}
\begin{equation}
\frac{d^2C}{dK^2} = B(0,T) p(K)
\end{equation}
Either way you end up finding the density
\begin{equation}
p(x) = \frac{1}{B(0,T)} \frac{d^2C(x)}{dx^2}
\end{equation}
where $x$ is the strike. So an approximate density comes from using the actual option prices available to you. You can spline interpolate, or if you have a regular grid of strikes spaced by dK you can make a histogram of values
\begin{equation}
\frac{ C(K+dK) - 2C(K) +C(K-dK) }{ dK^2}
\end{equation}
and divide by the discount factor to find your risk-neutral distribution.
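A minimal R sketch of this finite-difference construction, assuming a hypothetical vector of call prices observed on a regular strike grid:
```
# strikes: regular grid of strikes; calls: call prices; disc: discount factor B(0,T)
rn_density <- function(strikes, calls, disc) {
  dK  <- strikes[2] - strikes[1]
  n   <- length(calls)
  d2C <- (calls[3:n] - 2 * calls[2:(n - 1)] + calls[1:(n - 2)]) / dK^2   # second difference
  list(strike = strikes[2:(n - 1)], density = d2C / disc)               # divide out B(0,T)
}
```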
| null | CC BY-SA 2.5 | null | 2011-02-14T15:59:25.617 | 2011-02-14T18:11:58.667 | 2011-02-14T18:11:58.667 | 254 | 254 | null |
504 | 2 | null | 463 | 6 | null | Find the most similar (in terms of credit risk and industry) quoted index tranches you can. Then map the index's base correlation skew over to your bespoke portfolio, preserving expected loss (EL) levels.
The basic formula is
\begin{equation}
c_\text{bespoke}(z) = c_\text{index}\left( z \frac{EL_\text{index}}{EL_\text{bespoke}} \right)
\end{equation}
though sometimes a scale factor $f$ is included like this
\begin{equation}
c_\text{bespoke}(z) = c_\text{index}\left( z \left( \frac{EL_\text{index}}{EL_\text{bespoke}} \right)^f \right)
\end{equation}
The main flaw here is that the dispersion of your portfolio may differ from that of the index. Try to keep that difference to a minimum.
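A small R sketch of the expected-loss mapping above, where `index_skew` stands for a hypothetical function returning the index base correlation at a given detachment level:
```
map_bespoke_corr <- function(z, index_skew, EL_index, EL_bespoke, f = 1) {
  # f = 1 recovers the first formula; other values apply the scale exponent
  index_skew(z * (EL_index / EL_bespoke)^f)
}
```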
| null | CC BY-SA 2.5 | null | 2011-02-14T16:28:45.533 | 2011-02-14T18:17:10.400 | 2011-02-14T18:17:10.400 | 254 | 254 | null |
505 | 2 | null | 500 | 5 | null | Tail risk represents the probability that the magnitude of returns on an asset/portfolio will exceed some threshold (usually three standard deviations) on the normal curve. If you visualize a normal curve on standard axes, the tail on the left side corresponds to an extreme low return and the tail on the right side corresponds to an extreme high return.
In other words, left vs right is a measurement of (the likelihood of) extreme low or high returns. An analyst might look at these in order to estimate the impact of rare but significant events.
| null | CC BY-SA 2.5 | null | 2011-02-14T16:57:11.817 | 2011-02-14T16:57:11.817 | null | null | 80 | null |
506 | 2 | null | 500 | 7 | null | Here's a partial answer:
- This partly depends on the return characteristics. One way to look at this is to analyze the skewness and kurtosis of the returns. Most strategies have a negative skewness, which roughly means that they have mostly consistent small positive returns, with the occasional large negative return. Alternatively, some strategies have "option-like features", which results in the opposite distribution: positive skewness. See, for instance "The Risk in Hedge Fund Strategies" (Fung, Hsieh 2001).
- You might want to look at some of the work done on "post-modern portfolio theory" (PMPT) which attempted to differentiate between upside and downside risks. As an example, one simple adjustment based on this would be to use the Sortino ratio instead of the Sharpe ratio as a risk/reward metric.
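For reference, here is a minimal R sketch of one common definition of the Sortino ratio mentioned above (the target rate and the simulated return series are hypothetical):
```
sortino <- function(returns, target = 0) {
  downside <- pmin(returns - target, 0)                 # only returns below the target count
  (mean(returns) - target) / sqrt(mean(downside^2))     # excess return over downside deviation
}
set.seed(7)
sortino(rnorm(250, 0.0005, 0.01))
```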
| null | CC BY-SA 2.5 | null | 2011-02-14T16:58:32.333 | 2011-02-14T17:05:20.267 | 2011-02-14T17:05:20.267 | 17 | 17 | null |
507 | 2 | null | 501 | 6 | null | The delta factor you seek is simply the spot-to-futures price ratio; you don't need all those parameters.
Now to answer your actual question:
Since you are getting futures data, you presumably have the tickers. You can infer the expiration date from the ticker.
Expiration dates are always on the third Friday of the month, and the ticker contains four letters. The first two letters are always SP. The next letter is a month code (H=March, M=June, U=Sep, Z=Dec). The final letter is a year.
Example: SPZ2 expires on Friday, Dec 21, 2012. "Z" tells you December, and "2" tells you 2012.
Note that you can infer $R_{div}$ from the futures contract price and the interest rates (which won't always be 1 month T-bills).
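A tiny R helper along these lines (hypothetical, and assuming a single-digit year code in the 2010s, as in the example above):
```
parse_sp_ticker <- function(ticker) {
  months <- c(H = 3, M = 6, U = 9, Z = 12)              # month codes from the answer above
  list(month = unname(months[substr(ticker, 3, 3)]),
       year  = 2010 + as.integer(substr(ticker, 4, 4)))
}
parse_sp_ticker("SPZ2")   # month 12, year 2012
```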
| null | CC BY-SA 2.5 | null | 2011-02-14T17:23:42.937 | 2011-02-14T17:59:14.967 | 2011-02-14T17:59:14.967 | 254 | 254 | null |
509 | 2 | null | 329 | 8 | null | Each shop will differ - there is no widely used, unified framework shared across firms. Competitive advantages vary across shops, which ultimately reflect the biases/characteristics of the particular shop. Some will be far more mathematically sophisticated/inclined than others. Some maintain strong aversion to quantiative techniques such as risk models.
Regardless, the major shops all have some form of top-down risk management systems. Some will use Monte Carlo simulations to assess daily VaR. Some shops will heavily use factor models for risk assessment across positions. Some will go much further and use factor models for research to vary their level of conviction for a particular trade (e.g., for alpha generation).
| null | CC BY-SA 2.5 | null | 2011-02-14T19:17:47.617 | 2011-02-14T19:17:47.617 | null | null | 390 | null |
510 | 2 | null | 501 | 1 | null | Not sure this helps, but visit:
[http://delayedquotes.cboe.com/new/options/options_chain.html?symbol=SPX&ID_NOTATION=8941848&ID_OSI=10614550&ASSET_CLASS=IND](http://delayedquotes.cboe.com/new/options/options_chain.html?symbol=SPX&ID_NOTATION=8941848&ID_OSI=10614550&ASSET_CLASS=IND)
and click on any option to see its Greeks.
| null | CC BY-SA 2.5 | null | 2011-02-14T21:00:22.423 | 2011-02-14T21:00:22.423 | null | null | null | null |
511 | 1 | 514 | null | 11 | 770 | Aside from Black-Scholes with crazy skews, what major models are used for energy derivatives? I'm thinking particularly of electricity derivatives, but I'm also interested in natural gas and other volatile contracts(*).
(*): pun intended
| What are the major models for energy derivatives, particularly electricity derivatives? | CC BY-SA 2.5 | null | 2011-02-14T21:09:09.663 | 2011-05-01T21:44:21.090 | null | null | 254 | [
"options",
"derivatives"
]
|
512 | 1 | null | null | 14 | 1409 | What are the current computational (non-network) bottlenecks in a quant's workflow? What computational tasks would be revolutionized by a 10-100x improvement in performance using general-purpose GPUs?
| What are some computational bottlenecks that quants face? | CC BY-SA 2.5 | null | 2011-02-14T21:20:07.487 | 2012-08-08T12:22:54.900 | 2012-08-08T12:22:54.900 | 2299 | 394 | [
"performance",
"gpgpu",
"hardware"
]
|
513 | 2 | null | 388 | 9 | null | Centroidal Voronoi methods, you mean? I.e., approximating a continuous space with discrete points (generators) and, for the sake of modeling, treating the neighborhood around each generator as having the same value?
Example: here is a guy who encodes images with Unicode on Twitter. He is quantizing in both the spatial and color spaces.
[http://www.flickr.com/photos/quasimondo/3518306770/in/photostream/](http://www.flickr.com/photos/quasimondo/3518306770/in/photostream/)
Here is a paper I wrote about it in 2001. You can use these methods to cluster the behavior of time series data, which is useful both for portfolio diversification and for building finer models for each cluster of time series data.
[http://orion.math.iastate.edu:80/reu/2001/voronoi_paper/voronoi.pdf](http://orion.math.iastate.edu:80/reu/2001/voronoi_paper/voronoi.pdf)
| null | CC BY-SA 2.5 | null | 2011-02-14T21:30:44.187 | 2011-02-14T21:30:44.187 | null | null | 394 | null |
514 | 2 | null | 511 | 10 | null | That's a complicated question. There are many paths.
One path is to build a model of the underlying supply/demand relationships. For example, the sudden loss of a power supplier (or transmission corridor) shifts the supply curve to the left, spiking the price. The key to the game is data, data, and more data (price, weather/wind, season, power loads, current power generation, stand-by generation, transmission line overload, etc.).
There are several books written on the subject. If you dig around, you'll find everything from over-simplified books, to books that over-kill on a specific area of the business. Just a quick Google gives:
[http://www.amazon.com/Managing-Energy-Price-Risk-Challenges/dp/1904339190](http://rads.stackoverflow.com/amzn/click/1904339190)
[http://www.amazon.com/Understanding-Todays-Electricity-Business-Shively/dp/0974174416](http://rads.stackoverflow.com/amzn/click/0974174416)
My reputation level is too low to post more links.
| null | CC BY-SA 2.5 | null | 2011-02-14T21:41:28.053 | 2011-02-14T21:41:28.053 | null | null | 392 | null |
515 | 2 | null | 512 | 7 | null | Coming from an HPC background myself, I know too well the feeling of owning a hammer and yet having no nail. Your question is about computational bottlenecks that can be relieved with GPGPU, though I'm afraid to admit that there aren't many in finance. For realtime applications, the network is the bottleneck; for historical applications, the memory is the bottleneck. The CPU is rarely saturated in my line of work.
However, there is one particular area that does appear to be CPU bound: interpretation. Namely, the feed handler and the FIX parser both require many small amounts of data to be transformed from one representation to another. FPGA-based feed handlers are starting to become more popular; I haven't seen anything similar for FIX parsers though.
If you could show how to parse a FIX message with a GPU off the wire, then that might be interesting. FIXT 1.1 can support InfiniBand, so NVIDIA / Mellanox's GPU Direct set-up would be especially noteworthy, though not required. (There aren't many trading venues supporting FIXT right now anyway, so there's no rush there.)
If you wanted to generalize your work for all key-value pairs communicated over a network, you might be able to apply some of your findings to parsing HTTP headers in realtime. No doubt many cloud vendors would be pleased to see that.
By the way, the reason I advocated FIX parsing instead of feed handling is that most data vendors ship their own proprietary API. Good luck getting Wombat to cooperate with you until you have some results of your own to show.
| null | CC BY-SA 2.5 | null | 2011-02-15T02:39:37.360 | 2011-02-15T02:39:37.360 | null | null | 35 | null |
516 | 1 | 528 | null | 11 | 2802 | Can anyone explain the process and the calculations needed to select a portfolio of liquid futures assets with the least correlation? Given a set of returns for a series of assets, how do I select the best subset such that I minimize their correlation with each other?
| How can I select the least correlated portfolio of assets? | CC BY-SA 3.0 | null | 2011-02-15T06:27:59.480 | 2011-08-23T05:02:46.027 | 2011-08-21T01:22:43.570 | 1106 | 2318 | [
"correlation",
"portfolio",
"portfolio-selection"
]
|
517 | 2 | null | 512 | 2 | null | In exotic options pricing, there are lots of CPU bottlenecks -- for example the calculation of fast Fourier transforms or Monte Carlo simulation. When I price a range accrual in the Libor Market Model, I don't use a lot of data (carefully optimized, everything should fit in a few MB of L2 cache), but I do a lot of calculations. This is where, I think, a GPU may be useful.
| null | CC BY-SA 2.5 | null | 2011-02-15T09:20:54.527 | 2011-02-15T09:20:54.527 | null | null | 89 | null |
519 | 1 | 520 | null | 15 | 856 | Assuming a naive stochastic process for modelling movements in stock prices we have:
$dS = \mu S \, dt + \sigma S \epsilon \sqrt{dt}$
where $S$ is the stock price, $t$ is time, $\mu$ is a drift constant, $\sigma$ is the volatility, and $\epsilon$ is a standard normal random draw (the stochastic term).
I'm currently reading Hull and they consider a simple example where volatility is zero, so the change in the stock price is a simple compounding interest formula with a rate of mu.
$\frac{dS}{S} = \mu dt$
The book states: "Integrating between time zero and time T, we get"
$S_{T} = S_{0} e^{\mu T}$
i.e. the standard continuously compounding interest formula. I understand all the formulae but not the steps taken to get from the second to the third. This may be a simple request as my calculus is a bit rusty but can anyone fill in the blanks?
| Missing step in stock price movement equations | CC BY-SA 2.5 | null | 2011-02-15T12:12:39.427 | 2012-12-12T00:52:04.013 | null | null | 403 | [
"stochastic-calculus",
"equities"
]
|
520 | 2 | null | 519 | 7 | null | This is the separable differential equation for simple continuous compounding!
See this very accessible article for a step-by-step derivation (esp. under continuous compounding):
[http://plus.maths.org/content/have-we-caught-your-interest](http://plus.maths.org/content/have-we-caught-your-interest)
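For completeness, the missing step is just separation of variables followed by integration:
\begin{equation}
\int_{S_0}^{S_T} \frac{dS}{S} = \int_0^T \mu \, dt
\quad\Longrightarrow\quad
\ln S_T - \ln S_0 = \mu T
\quad\Longrightarrow\quad
S_T = S_0 e^{\mu T}
\end{equation}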
| null | CC BY-SA 2.5 | null | 2011-02-15T12:43:56.377 | 2011-02-15T12:43:56.377 | null | null | 12 | null |
521 | 2 | null | 431 | 23 | null | This is a great question. I hope there are many valuable contributions.
- The recent (Jan 27, 28) MIT 150 Symposium, "Economics and Finance: From Theory to Practice to Policy". https://mit150.mit.edu/symposia/economics-finance.html
Specifically, the Jan 28 session should be of interest (Finance). I particularly enjoyed Ross.
- "Finding Alpha" videos (based on Falkenstein's Wiley Finance book): http://www.efalken.com/video/index.html
I haven't watched any yet, nor read the book - but intend to - so I cannot vouch for quality.
- There are also some nuggets buried in here (J. Simons, A. Lo, etc.): http://mitworld.mit.edu/browse/topic/13
| null | CC BY-SA 4.0 | null | 2011-02-15T13:32:22.650 | 2019-06-14T21:04:08.633 | 2019-06-14T21:04:08.633 | 33410 | 390 | null |
522 | 1 | 525 | null | 32 | 7123 | The new kid on the block in finance seems to be random matrix theory. Although RMT as a theory is not so new (about 50 years old) and was first used in quantum mechanics, its use in finance is a fairly recent phenomenon.
RMT in itself is a fascinating area of study concerning the eigenvalues of random matrices and the laws that govern their distribution (a little bit like the central limit theorem for random variables). These laws show up in all kinds of areas (even in such arcane places as the spacings of the zeros of the Riemann zeta function in number theory) - and nobody really understands why...
For a good first treatment see this non-technical [article by Terence Tao](http://terrytao.wordpress.com/2010/09/14/a-second-draft-of-a-non-technical-article-on-universality/).
My question:
Do you know (other) accessible intros to Random Matrix Theory - and its application in finance?
| Random matrix theory (RMT) in finance | CC BY-SA 2.5 | null | 2011-02-15T14:47:17.550 | 2014-03-28T08:26:34.943 | 2012-01-24T21:10:08.320 | 1800 | 12 | [
"probability",
"mathematics",
"random-matrix-theory"
]
|
523 | 2 | null | 516 | 3 | null | I would start by calculating the efficient frontier, which maximizes expected return for a given level of risk.
Here is the Wikipedia article which specifies how you would do the calculation:
[http://en.wikipedia.org/wiki/Modern_portfolio_theory](http://en.wikipedia.org/wiki/Modern_portfolio_theory)
-Ralph Winters
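As a rough sketch of that calculation (my own toy example, not taken from the Wikipedia article: simulated returns, a 25-point target grid and unconstrained long/short weights), the frontier can be traced with the quadprog package in R:
```
# Trace the mean-variance efficient frontier by solving one small QP
# per target return (full-investment and target-return equality constraints).
library(quadprog)

set.seed(1)
returns <- matrix(rnorm(250 * 5, mean = 0.0005, sd = 0.01), ncol = 5)  # placeholder data
mu    <- colMeans(returns)
Sigma <- cov(returns)
ones  <- rep(1, length(mu))

frontier <- sapply(seq(min(mu), max(mu), length.out = 25), function(target) {
  sol <- solve.QP(Dmat = 2 * Sigma, dvec = rep(0, length(mu)),
                  Amat = cbind(ones, mu), bvec = c(1, target), meq = 2)
  w <- sol$solution
  c(risk = sqrt(drop(t(w) %*% Sigma %*% w)), ret = target)
})

plot(frontier["risk", ], frontier["ret", ], type = "l",
     xlab = "Portfolio volatility", ylab = "Expected return")
```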
| null | CC BY-SA 2.5 | null | 2011-02-15T15:19:04.267 | 2011-02-15T15:19:04.267 | null | null | null | null |
524 | 1 | null | null | 17 | 570 | While many systems like to treat dividends as a continuous yield when pricing equity options, that approach works quite poorly for short-dated options.
In the short run, deterministic dividends are clearly the way to go, since the upcoming dividend is usually known with fairly high precision. In the medium term, we may start to think of those dividends as being linked to the stock price, but still want to treat them discretely so as to get early exercise dates right. In the long term, tracking all those discrete dividends becomes a pain and it feels nicest to go back to a yield.
Advanced option pricing frameworks allow for mixtures of these 3 treatments. What are some good ways of selecting a reasonable mixture of dividend treatments in any given circumstance?
| How do you characterize dividends for equity options? | CC BY-SA 2.5 | null | 2011-02-15T17:11:42.897 | 2019-08-09T13:20:52.017 | null | null | 254 | [
"options",
"equities"
]
|
525 | 2 | null | 522 | 14 | null | Check out page 55 in "Quantitative Equity Investing: Techniques and Strategies," Fabozzi et al.
Section is titled "Random Matrix Theory" - very intro. The context pertains to the estimation of a large covariance matrix.
Also, see work at Capital Fund Management, filed under:
[Random Matrix and Finance : correlations and portfolio optimisation](http://arxiv.org/abs/physics/0507111)
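As a quick, self-contained illustration of the covariance-estimation angle (my own sketch, not taken from either reference), the eigenvalues of the sample correlation matrix of purely random data crowd into the Marchenko-Pastur band, so empirical eigenvalues falling outside that band are candidates for genuine structure:
```
# Eigenvalue spectrum of a sample correlation matrix of i.i.d. noise
# compared with the Marchenko-Pastur band.
set.seed(1)
N <- 100; T <- 500                          # number of assets, number of observations
R <- matrix(rnorm(N * T), nrow = T)         # returns with no true correlation
ev <- eigen(cor(R), only.values = TRUE)$values

q <- N / T
mp.edges <- c((1 - sqrt(q))^2, (1 + sqrt(q))^2)   # Marchenko-Pastur support
hist(ev, breaks = 40, main = "Eigenvalues vs Marchenko-Pastur band", xlab = "eigenvalue")
abline(v = mp.edges, col = "red", lwd = 2)
```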
| null | CC BY-SA 3.0 | null | 2011-02-15T17:42:57.977 | 2012-12-13T11:12:47.477 | 2012-12-13T11:12:47.477 | 467 | 390 | null |
526 | 1 | 541 | null | 18 | 2422 | Assuming a directional strategy (no pairs or spread trades), is there a "standard" method for quantifying mean reversion? Should autocorrelation, variance ratios, the Hurst exponent, or some other measure be preferred in all cases, or are there advantages to each given the context?
| Is there a standard method for quantifying mean-reversion for use in directional trading? | CC BY-SA 2.5 | null | 2011-02-15T18:46:55.187 | 2011-02-17T18:35:14.073 | null | null | 352 | [
"trading",
"mean-reversion"
]
|
527 | 1 | null | null | 12 | 64745 | There are many online sources about common risk factors in investing and trading, e.g. market risk, credit risk and interest rate risk. There are various factor models (Fama-French, Carhart) and risk-management methods to mitigate them.
What are examples of non-financial risk, such as hardware or network connection failure, death/injury of an employee, that quant trading firms face? Are there any decent examples of risk mitigation or contingency planning methods for such risks that are available online?
| What are some examples of non-financial risks and contingency plans? | CC BY-SA 3.0 | null | 2011-02-15T19:09:25.027 | 2013-01-04T03:06:20.967 | 2011-08-21T01:39:35.633 | 1106 | 352 | [
"risk",
"risk-management"
]
|
528 | 2 | null | 516 | 5 | null | Since you are asking for low correlation of the assets, I'm guessing that you are really trying to get a low (or minimum) volatility portfolio. If that is the case, then the steps for one approach are:
- estimate the variance matrix of the universe of assets
- use a portfolio optimizer to select the minimum variance portfolio given your constraints
This assumes that you don't have preferences in terms of expected returns of some assets over others, which seems to be implied by your question.
You don't indicate the size of your universe. If it is large, then you'll want to use a factor model or shrinkage model rather than the sample estimate to estimate the variance matrix.
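A minimal sketch of those two steps in R (unconstrained long/short case, so the closed form w = Sigma^{-1} 1 / (1' Sigma^{-1} 1) applies; with long-only or other constraints you would hand Sigma to a quadratic-programming optimizer instead; the return data below are placeholders):
```
# Step 1: estimate the variance matrix; Step 2: minimum-variance weights.
set.seed(1)
returns <- matrix(rnorm(250 * 10, sd = 0.01), ncol = 10)  # placeholder return history
Sigma <- cov(returns)                                     # step 1

w <- solve(Sigma, rep(1, ncol(Sigma)))                    # Sigma^{-1} * 1
w <- w / sum(w)                                           # step 2: normalise to sum to 1
sqrt(drop(t(w) %*% Sigma %*% w))                          # resulting portfolio volatility
```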
| null | CC BY-SA 2.5 | null | 2011-02-15T19:32:53.797 | 2011-02-15T19:32:53.797 | null | null | 249 | null |
529 | 1 | 531 | null | 24 | 2907 | I would like to find stock pairs that exhibit low correlation. If the correlation between A and B is 0.9 and the correlation between A and C is 0.9, is there a minimum possible correlation for B and C? I'd like to save on search time: if I know that it is mathematically impossible for B and C to have a correlation below some level, given the A-B and A-C correlations, I obviously wouldn't have to waste time calculating the correlation of B and C.
Is there such a "law"? If not, what are other methods of decreasing the search time?
| How to quickly estimate a lower bound on correlation for a large number of stocks? | CC BY-SA 3.0 | null | 2011-02-15T20:27:56.303 | 2012-06-07T03:55:20.177 | 2011-10-18T21:05:32.860 | 1106 | 352 | [
"time-series",
"correlation",
"numerical-methods"
]
|
530 | 1 | 583 | null | 37 | 34297 | There is a concept of trading or observing the market with signal processing originally created by [John Ehler](http://www.mesasoftware.com/). He wrote three books about it.
[Cybernetic Analysis for Stocks and Futures](http://rads.stackoverflow.com/amzn/click/0471463078)
[Rocket Science for Traders](http://rads.stackoverflow.com/amzn/click/0471405671)
[MESA and Trading Market Cycles](http://rads.stackoverflow.com/amzn/click/0471151963)
There are a number of indicators and mathematical models invented by John Ehler that are widely accepted and used by some trading software (even MetaStock), like MAMA, the Hilbert Transform, the Fisher Transform (as substitutes for the FFT), the Homodyne Discriminator, the Hilbert Sine Wave, the Instant Trendline, etc.
But that is it. I have never heard of anybody other than John Ehler studying this area. Do you think it is worth learning digital signal processing? After all, each transaction is a signal, and bar charts are a somewhat filtered form of these signals. Does it make sense?
| Digital Signal Processing in Trading | CC BY-SA 2.5 | null | 2011-02-15T20:46:15.527 | 2017-02-02T19:17:50.603 | 2011-02-15T22:58:08.627 | 42 | 42 | [
"trading",
"digital-signal-processing"
]
|
531 | 2 | null | 529 | 25 | null | Yes, there is such a rule and it is not too hard to grasp. Consider the 3-element correlation matrix
$$\left(\begin{matrix}
1 & r & \rho \\
r & 1 & c \\
\rho & c & 1
\end{matrix}\right)$$
which must be positive semidefinite. In simpler terms, that means all its eigenvalues must be nonnegative.
Assuming that $\rho$ and $r$ are known positive values, the smallest eigenvalue of this matrix goes negative when
\begin{equation}
c<\rho r-\sqrt{1-\rho ^2+\rho ^2 r^2-r^2}.
\end{equation}
Therefore the right-hand side of this expression, which simplifies to $\rho r-\sqrt{(1-\rho ^2)(1-r^2)}$, is the lower bound for the BC correlation $c$ that you seek, with $r$ being the AB correlation and $\rho$ being the AC correlation.
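A quick numerical check of the bound for the 0.9/0.9 example in the question (just a sanity check, not part of the derivation):
```
# Lower bound for the B-C correlation given the A-B and A-C correlations,
# and a check that the matrix sits exactly on the PSD boundary there.
ab <- 0.9; ac <- 0.9
lower <- ab * ac - sqrt((1 - ab^2) * (1 - ac^2))   # = 0.62 here

C <- matrix(c(1,  ab,    ac,
              ab, 1,     lower,
              ac, lower, 1), nrow = 3, byrow = TRUE)
min(eigen(C, only.values = TRUE)$values)           # ~0: smallest eigenvalue at the bound
```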
| null | CC BY-SA 3.0 | null | 2011-02-15T20:57:30.387 | 2011-10-05T14:29:49.353 | 2011-10-05T14:29:49.353 | 254 | 254 | null |
532 | 2 | null | 530 | 9 | null | Ehler's [website](http://www.mesasoftware.com/) has a technical papers section with papers available for free download, including code, so you can try things out for yourself. I personally have taken some of his ideas, combined them with other reading, forums etc. on the net, and think that applying DSP to trading shows great promise and is definitely worth investigating. If you are interested, I am blogging about my progress in applying these principles [here](http://dekalogblog.blogspot.com/).
| null | CC BY-SA 2.5 | null | 2011-02-15T22:08:07.513 | 2011-02-15T22:08:07.513 | null | null | 252 | null |
533 | 2 | null | 530 | 6 | null | Cycle analysis and signal processing might be useful for seasonal patterns, but without knowing more about the performance of such an approach to trading I would not pursue a degree in signal processing just for trading. Would you be happy applying what you learn to standard engineering-type problems? That may be what you end up stuck doing if it doesn't work well enough for trading.
| null | CC BY-SA 2.5 | null | 2011-02-15T22:10:13.373 | 2011-02-15T22:10:13.373 | null | null | 416 | null |
534 | 2 | null | 530 | 14 | null | You need to learn to distinguish interpolation methods from extrapolation methods. It's easy to build a model that repeats the past (just about any interpolation scheme will do the trick). The problem is, that model is typically worthless when it comes to extrapolating into the future.
When you hear/see the word "cycles", a red flag should be going up. Dig into the application of "Fourier Integral", "Fourier Series", "Fourier Transform", etc, and you'll find that with enough frequencies you can represent any time series well enough that most retail traders can be convinced that "it works". The problem is, it has no predictive power whatsoever.
The reason Fourier methods are useful in engineering/DSP is because that "signal" (voltage, current, temperature, whatever) typically repeats itself in the circuit/machine where it was generated. As a result, interpolating then becomes related to extrapolating.
In case you're using R, here's some simple code to try:
```
library(gam)

#Generate and plot a 1000 data point time series (a random walk)
set.seed(1)
x <- 1:1000
y <- cumsum(rnorm(1000))
plot(x, y, type="l")

#Fit the first 500 points using a Generalized Additive Model (it'll fit anything)
#The red line is an example of interpolating
train <- data.frame(x=x[1:500], y=y[1:500])
gam.object <- gam(y ~ s(x), data=train)
lines(1:500, predict(gam.object, newdata=data.frame(x=1:500)), lwd=2, col="red")

#Now, predict the future points
#The blue line is an example of extrapolating (from an interpolation model)
lines(501:1000, predict(gam.object, newdata=data.frame(x=501:1000)), lwd=2, col="blue")

#Now, notice the difference in the "fit" of the blue line versus the red line.
```

| null | CC BY-SA 3.0 | null | 2011-02-16T00:40:39.923 | 2013-01-21T18:27:51.310 | 2013-01-21T18:27:51.310 | 3383 | 392 | null |
535 | 2 | null | 526 | 2 | null | As usual, no standard method.
You can maybe test for a unit root over a rolling window, but as usual (again) you are going to be lagging. It will all depend on your choice of window and on the persistence of mean reversion in the market you consider.
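A minimal sketch of what that might look like in R (the window length and the simulated series below are arbitrary placeholders; in practice you would feed in your own prices and tune the window):
```
# Rolling Augmented Dickey-Fuller test as a rough mean-reversion gauge.
library(tseries)   # adf.test()
library(zoo)       # rollapply()

set.seed(42)
prices <- cumsum(rnorm(1000))            # placeholder series
window <- 250                            # lookback window, a free parameter

# Lower p-values suggest stronger evidence of stationarity (mean reversion)
# within the corresponding window.
adf.p <- rollapply(prices, width = window,
                   FUN = function(w) suppressWarnings(adf.test(w)$p.value),
                   align = "right")
plot(adf.p, type = "l", ylab = "rolling ADF p-value")
```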
| null | CC BY-SA 2.5 | null | 2011-02-16T02:48:27.160 | 2011-02-16T02:48:27.160 | null | null | 134 | null |
536 | 2 | null | 530 | 3 | null | Forget all these so-called "technical indicators". They are crap, especially if you don't know how to use them. My advice: buy a good wavelet book and create your own strategy.
| null | CC BY-SA 2.5 | null | 2011-02-16T02:52:27.373 | 2011-02-16T02:52:27.373 | null | null | 134 | null |
537 | 2 | null | 527 | 6 | null | To give an example of a source of risk that isn't one of the ones you mentioned but is still broadly on-topic for a Quant Finance site: [operational risk](http://en.wikipedia.org/wiki/Operational_risk), for which there are many references on contingency plans. This is the domain of the back office. Trades are created (priced and analysed) by quants, executed by traders, and approved, preferably, by at least one other person (so-called "four eyes" approval) before being officially agreed.
This is concerned with Trade Administration. A trade goes through several states with different permissions required to move a trade from one state to the next e.g. pending, approved, rejected, confirmed, cancelled, expired, with only certain transitions being allowed.
Back office software can ensure that different sets of employees within a bank or other financial institution can only perform certain tasks, and it can enforce limits on trading positions, etc. The added complexity is the price paid for minimising operational risk and, in general, preventing "rogue trader" scenarios.
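As a purely illustrative sketch (the state names come from the list above, but the code is hypothetical rather than any particular back-office system), such a workflow can be enforced with an explicit transition table that the software consults before moving a trade:
```
# Hypothetical allowed state transitions for a trade workflow.
allowed <- list(
  pending   = c("approved", "rejected", "cancelled"),
  approved  = c("confirmed", "cancelled"),
  confirmed = c("expired", "cancelled"),
  rejected  = character(0),
  cancelled = character(0),
  expired   = character(0)
)

# Refuse any move that is not in the table ("only certain transitions are allowed").
transition <- function(state, new.state) {
  if (!new.state %in% allowed[[state]])
    stop(sprintf("illegal transition: %s -> %s", state, new.state))
  new.state
}

transition("pending", "approved")        # fine
# transition("rejected", "confirmed")    # would raise an error
```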
| null | CC BY-SA 2.5 | null | 2011-02-16T16:52:08.100 | 2011-02-16T16:52:08.100 | null | null | 403 | null |