Id: string, length 1-5
PostTypeId: string, 6 classes
AcceptedAnswerId: string, length 2-5
ParentId: string, length 1-5
Score: string, length 1-3
ViewCount: string, length 1-6
Body: string, length 0-27.8k
Title: string, length 15-150
ContentLicense: string, 3 classes
FavoriteCount: string, 2 classes
CreationDate: string, length 23
LastActivityDate: string, length 23
LastEditDate: string, length 23
LastEditorUserId: string, length 1-5
OwnerUserId: string, length 1-5
Tags: list
538
2
null
527
3
null
Data is the lifeblood of a quantitative strategy, so I would say that the primary operational risks facing quantitative models are related to data. Some places where this can be an issue:

- Misinterpreting post-hoc data: Many economic indicators are revised on a periodic basis, and it's critical to understand what the numbers meant on a real-time basis. Similarly, some exchanges will provide price corrections, and you need to determine whether these would have been applied to your data. Lastly, some historical price data is indicative (i.e. not real tick data) because it has been scrubbed or averaged in some way.
- Data outliers and changes: Data inevitably has problems due to any number of factors, ranging from outliers or errors to re-scalings (and corporate actions); a minimal outlier-screening sketch follows below.
- Unexpected infrastructure failures: No computer system is perfect. It's critical to understand the likelihood of downtime, whether related to a data feed or some other aspect of your infrastructure, and either build these assumptions into your model or else mitigate these risks through redundancy and failover.
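To illustrate the outlier point above, here is a minimal sketch (my addition, not part of the original answer) that screens a price series for suspect ticks with a rolling median/MAD filter; the data frame and column names are hypothetical.

```
# Minimal sketch: flag ticks that sit far from a rolling median, measured in
# robust (MAD) units. A data frame `px` with columns `time` and `price` is assumed.
library(zoo)

flag_outliers <- function(price, window = 21, k = 5) {
  med <- rollapply(price, window, median, fill = NA, align = "right")
  dev <- rollapply(price, window, mad,    fill = NA, align = "right")
  abs(price - med) > k * dev   # TRUE where the tick looks suspect
}

# px$suspect <- flag_outliers(px$price)
```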
null
CC BY-SA 2.5
null
2011-02-16T17:18:33.513
2011-02-16T17:18:33.513
null
null
17
null
539
1
540
null
13
231
We were talking about merger arb in a class I had last night, and when we got to deal construction it was mentioned that the different deal structures can be viewed as different options. For instance, a fixed deal with a collar can be viewed as a short call once the upside limit is met, and a long put once the downside limit is met. What are some other areas where larger financial transactions can be broken down into easier-to-value options? Best,
Breaking Transactions Down into Derivatives
CC BY-SA 2.5
null
2011-02-17T16:07:02.060
2011-02-17T18:01:24.863
2011-02-17T18:01:24.863
17
60
[ "options", "financial-engineering" ]
540
2
null
539
7
null
I could be wrong, but this question seems to be about taking one set of cash flows and representing it using a set of derivatives. There are an almost unlimited number of applications for this kind of approach. There is an entire field of study dedicated to it: [financial engineering](http://en.wikipedia.org/wiki/Financial_engineering). A good textbook on the subject is ["Principles of Financial Engineering"](http://rads.stackoverflow.com/amzn/click/0123735742) (Neftci 2008), which provides a comprehensive yet accessible framework for understanding how structured products are engineered. It is simplest to consider this in terms of a payoff diagram: once you understand the structure of the object that you're trying to replicate, you can take various structures and combine them to match the original. This is especially common around risky credit and mortgages. For instance, Neftci provides numerous examples of how one could break down the risks involved in a risky bond by using a CDS and other swaps.
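As an illustration of the payoff-diagram idea (my addition, not part of the original answer), a minimal R sketch that plots the legs of a collar (long stock, long put, short call) next to their sum; strikes and prices are made up.

```
# Minimal sketch: build a combined payoff from its component legs at expiry.
S  <- seq(50, 150, by = 1)          # terminal underlying prices (illustrative)
S0 <- 100; K_put <- 90; K_call <- 110

stock  <- S - S0                    # P&L of the stock position
put    <- pmax(K_put - S, 0)        # long put payoff
call   <- -pmax(S - K_call, 0)      # short call payoff
collar <- stock + put + call        # the replicated structure

matplot(S, cbind(stock, put, call, collar), type = "l", lty = 1,
        xlab = "Underlying at expiry", ylab = "Payoff / P&L")
legend("topleft", c("stock", "long put", "short call", "collar"),
       col = 1:4, lty = 1, bty = "n")
```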
null
CC BY-SA 2.5
null
2011-02-17T17:46:06.483
2011-02-17T17:46:06.483
null
null
17
null
541
2
null
526
10
null
There is no standard method and many techniques can work well, including simple time-series z-scoring. In many cases, I would recommend using the simpler approaches unless the added complexity can be justified. However, the challenge with all techniques is proper calibration, which is very much context-sensitive. The parameter selection needs to be guided by the characteristics of the specific security and the investment horizon. Furthermore, should the parameters themselves be dynamic? Nearly all processes are driven by non-stationary distributions. The amount of leverage used may also be an important consideration.
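For concreteness (my addition, not the answerer's), a minimal sketch of the rolling z-score the answer mentions; the window length is exactly the calibration choice being discussed, and the default here is illustrative.

```
# Minimal sketch: rolling z-score of a series `x` (e.g. a spread or signal).
library(zoo)

roll_z <- function(x, window = 60) {
  mu  <- rollmeanr(x, window, fill = NA)          # rolling mean
  sig <- rollapplyr(x, window, sd, fill = NA)     # rolling standard deviation
  (x - mu) / sig
}
```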
null
CC BY-SA 2.5
null
2011-02-17T18:35:14.073
2011-02-17T18:35:14.073
null
null
390
null
542
1
null
null
5
1455
I heard that trading firms employ CEP platforms such as StreamBase, Marketcetera, JBoss Drools, etc., to implement trading systems. I wonder which one is most popular and what the recent trends in the related technology are.
Which CEP platform is most popular for trading systems?
CC BY-SA 2.5
null
2011-02-17T18:42:15.343
2011-02-17T18:42:15.343
null
null
434
[ "trading", "trading-systems", "order-handling", "order-execution", "cep" ]
543
2
null
524
5
null
Time to expiration is what should guide the choice. A tractable approach is to make the distinction between discrete dividends and a dividend yield at the LEAP boundary (or, more simply, options with expiration more than 1yr into the future). When the options are long dated, like LEAPs for example, the simplicity of the yield approach is usually 'good enough'. It usually makes sense to use the discrete dividend approach ONLY for options near expiration.
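A minimal sketch of the two treatments the answer contrasts (my addition; all numbers are illustrative): a Black-Scholes call priced with a continuous dividend yield versus the same call priced by subtracting the present value of discrete dividends from spot.

```
# Minimal sketch: continuous-yield vs discrete-dividend adjustment of spot
# before plugging into Black-Scholes. Inputs are illustrative.
bs_call <- function(S, K, r, sigma, tau) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
}

S <- 100; K <- 100; r <- 0.03; sigma <- 0.25; tau <- 1.5   # a LEAP-like maturity

# yield treatment: discount the spot by a continuous dividend yield q
q <- 0.02
px_yield <- bs_call(S * exp(-q * tau), K, r, sigma, tau)

# discrete treatment: subtract the present value of the scheduled dividends
divs <- data.frame(t = c(0.25, 0.75, 1.25), amt = c(0.50, 0.50, 0.50))
px_discrete <- bs_call(S - sum(divs$amt * exp(-r * divs$t)), K, r, sigma, tau)

c(yield = px_yield, discrete = px_discrete)
```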
null
CC BY-SA 2.5
null
2011-02-17T19:24:56.557
2011-02-17T19:24:56.557
null
null
214
null
544
2
null
522
18
null
Google for the paper "Financial applications of random matrix theory: Old laces and new pieces" by Marc Potters, Jean-Philippe Bouchaud, and Laurent Laloux. You can also check Prof. Gatheral's presentation about Random Matrix Theory: [http://www.math.nyu.edu/fellows_fin_math/gatheral/RandomMatrixCovariance2008.pdf](http://www.math.nyu.edu/fellows_fin_math/gatheral/RandomMatrixCovariance2008.pdf) In R, the package "tawny" has an implementation of RMT to filter noise out of correlation and covariance matrices.
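To make the filtering idea concrete without depending on any particular package's interface (my addition), a minimal sketch that clips eigenvalues of the sample correlation matrix below the Marchenko-Pastur upper edge; the returns matrix is assumed to have T rows and N columns.

```
# Minimal sketch of RMT-style cleaning of a correlation matrix.
# R: T x N matrix of (roughly standardized) returns.
rmt_clean <- function(R) {
  Tn <- nrow(R); N <- ncol(R); q <- N / Tn
  e  <- eigen(cor(R))
  lambda_max  <- (1 + sqrt(q))^2             # Marchenko-Pastur upper edge
  vals        <- e$values
  noise       <- vals < lambda_max
  vals[noise] <- mean(vals[noise])           # flatten the "noise" eigenvalues
  C <- e$vectors %*% diag(vals) %*% t(e$vectors)
  cov2cor(C)                                 # rescale so the diagonal is 1 again
}
```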
null
CC BY-SA 2.5
null
2011-02-18T06:38:54.063
2011-02-18T06:38:54.063
null
null
440
null
545
1
547
null
59
34641
There is [a large body of literature](http://www.google.com/search?q=evolutionary+algorithm+financial+market) on the "success" of the application of evolutionary algorithms in general, and the genetic algorithm in particular, to the financial markets. However, I feel uncomfortable whenever I read this literature. Genetic algorithms can over-fit the existing data. With so many combinations, it is easy to come up with a few rules that work. The result may not be robust, and there is no consistent explanation of why one rule works and others don't, beyond the mere (circular) argument that "it works because the testing shows it works". What is the current consensus on the application of genetic algorithms in finance?
How useful is the genetic algorithm for financial market forecasting?
CC BY-SA 3.0
null
2011-02-18T09:00:24.720
2015-06-09T06:09:02.680
2015-06-09T06:09:02.680
129
129
[ "quant-trading-strategies", "forecasting", "algorithm" ]
546
2
null
545
3
null
If you backtest your GA properly (using only past data to generate the time series of the indicator), then you can trust the result. But I agree with you that genetic algorithms are purely empirical, and thus I don't feel very comfortable using them.
null
CC BY-SA 2.5
null
2011-02-18T09:11:06.927
2011-02-18T09:11:06.927
null
null
134
null
547
2
null
545
38
null
I've worked at a hedge fund that allowed GA-derived strategies. For safety, it required that all models be submitted long before production to make sure that they still worked in the backtests. So there could be a delay of up to several months before a model would be allowed to run. It's also helpful to separate the sample universe; use a random half of the possible stocks for GA analysis and the other half for confirmation backtests.
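A minimal sketch (my addition) of the split this answer describes: pick a random half of the universe for GA fitting and reserve the other half for the confirmation backtest. The ticker list is made up.

```
# Minimal sketch: random half-split of a stock universe into a GA-fitting set
# and a hold-out confirmation set.
set.seed(42)                                   # fix the split for reproducibility
tickers      <- paste0("STOCK", 1:500)         # illustrative universe
fit_half     <- sample(tickers, length(tickers) %/% 2)
confirm_half <- setdiff(tickers, fit_half)     # used only for confirmation backtests
```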
null
CC BY-SA 2.5
null
2011-02-18T09:51:51.487
2011-02-18T09:51:51.487
null
null
35
null
548
2
null
545
25
null
I think the biggest problems that genetic algorithms have are overfitting, [data snooping bias](http://en.wikipedia.org/wiki/Data_dredging) and the fact that they are black boxes (not as much as neural networks, but still - it depends on the way they are implemented). I think they are not used very much. I guess there are a few hedge funds out there that use them, but all in all they were hyped and then busted. (But they are still useful for getting a paper accepted ;-) BTW: There is never a real consensus in finance - everybody tries to outsmart everybody else. This is why it is so interesting. (Or put another way: this is why there are still buyers AND sellers - a real consensus is a crash ;-)
null
CC BY-SA 2.5
null
2011-02-18T10:24:35.110
2011-02-18T10:24:35.110
null
null
12
null
549
1
551
null
93
20629
Finance is drowning in a deluge of data. Humans are not very good at comprehending large amounts of data. One way out may be visualization. Traditional ways of visualizing patterns, complexities and contexts are of course charts and, for derivatives, e.g. payoff diagrams; a more modern approach is [heat maps](http://en.wikipedia.org/wiki/Heat_map). My question: Do you know of any innovative (or experimental) ways of visualizing financial and/or derivatives data?
Innovative ways of visualizing financial data
CC BY-SA 2.5
null
2011-02-18T10:43:15.563
2022-12-06T09:28:46.430
null
null
12
[ "data", "derivatives", "visualization" ]
550
1
1219
null
16
2816
[Esper and its .NET port NEsper](http://esper.codehaus.org/index.html) are components that enable Complex Event Processing (CEP) and Event Stream Processing (ESP) engines. They are especially suitable for trading applications. They can, among other things, make it easier to deal with data bursts and threading, and they use a query language tailored for event processing. I am interested in:

- drawbacks of (N)Esper in a trading environment, when compared to alternatives (e.g. competing ESP/CEP engines, databases, or custom code)
- the type of problems one may encounter when building a trading application around (N)Esper.
Drawbacks & Caveats of using (N)Esper for ESP/CEP in trading systems?
CC BY-SA 2.5
null
2011-02-18T12:40:35.823
2012-04-28T05:03:32.803
null
null
53
[ "programming", "cep" ]
551
2
null
549
61
null
Visualization should lead to truth and understanding. As such, I find that simple visualizations tend to be the best. My favorite visualization for showing relationships is the [scatterplot](http://en.wikipedia.org/wiki/Scatter_plot). Once you start to introduce even a line plot, you are implying continuities between data points that may not exist. And trying to introduce more advanced visualizations like network diagrams ([ex](https://mbostock.github.io/protovis/ex/arc.html)) or complicated pie charts ([ex](https://mbostock.github.io/protovis/ex/crimea-rose.html)) can lead to more confusion than understanding if misapplied. A few thoughts:

- I think that you have already mentioned a few good ones. Heatmaps are good because they allow you to show three (or more) dimensions of data without the added issues that arise when trying to create a 3D visualization. Payoff diagrams are simple, but they accomplish their goal efficiently as a result.
- The FinViz website has a few nice examples of visualizations, including a simple bar chart, candlesticks, and a heatmap.
- People often don't consider that it is possible to include more dimensions in a typical plot by changing the width, size, color, or intensity of a shape. This is a much better idea than trying to plot more than 2 dimensions spatially (a short ggplot2 sketch of this follows below).
- The fourth real dimension is time, and time plays a very important role in financial data. One popular way to incorporate it as another dimension in a visualization is through video. A great example is Gapminder, the software created by Hans Rosling, which made for some very compelling TED talks about global poverty. It was acquired by Google and is now available as part of their web toolkit (also mentioned by Ben Hoffstein).

Visualization techniques from other fields are still very appropriate in finance, and the best starting point is [Edward Tufte](http://www.edwardtufte.com/tufte/), especially ["The Visual Display of Quantitative Information"](http://www.edwardtufte.com/tufte/books_vdqi) and ["Envisioning Information"](http://www.edwardtufte.com/tufte/books_ei). You can also benefit from learning a visualization language. I recommend any of these three (in order of complexity):

- R with ggplot2 (plotly now provides an easy way to make ggplot graphs interactive)
- Protovis
- Processing

These each have a learning curve, but once you learn how to use them they all allow for [exploratory data analysis](http://en.wikipedia.org/wiki/Exploratory_data_analysis) in a way that can't be achieved with other tools. There are also many great and innovative commercial tools. To mention a few that are all used by banks and hedge funds:

- Panopticon does an amazing job with real-time visualization.
- Tableau, Spotfire, and Qlikview all allow for interactive visualization of data using in-memory databases.
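As a concrete example of packing extra dimensions into a simple plot (my addition, with a randomly generated data frame), a minimal ggplot2 sketch mapping size and colour to additional variables:

```
# Minimal sketch: a scatterplot that carries two extra dimensions via point
# size and colour. The data frame is made up for illustration.
library(ggplot2)

df <- data.frame(
  vol    = runif(200, 0.1, 0.5),
  ret    = rnorm(200, 0.05, 0.10),
  mktcap = rlnorm(200, meanlog = 8, sdlog = 1),
  sector = sample(LETTERS[1:5], 200, replace = TRUE)
)

ggplot(df, aes(x = vol, y = ret, size = mktcap, colour = sector)) +
  geom_point(alpha = 0.6) +
  labs(x = "Volatility", y = "Return", size = "Market cap", colour = "Sector")
```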
null
CC BY-SA 4.0
null
2011-02-18T15:01:06.773
2022-12-06T09:28:46.430
2022-12-06T09:28:46.430
-1
17
null
552
2
null
545
10
null
Assuming you avoid data-snooping bias and all the potential pitfalls of using the past to predict the future, trusting genetic algorithms to find the "right" solution pretty much boils down to the same bet you make when you actively manage a portfolio, whether quantitatively or discretionarily. If you believe in market efficiency, then increasing your transaction costs through active management is illogical. If, however, you believe there are structural & psychological patterns or "flaws" to be exploited, and the payoff is worth the time and money spent researching and implementing a strategy, the logical choice is active management. Running a GA-derived strategy is an implicit bet against market efficiency. You're basically saying "I think there are mis-valuations that occur for some reason" (masses of irrational people, mutual funds herding because of mis-aligned incentives, etc.) and "running this GA can sort this mass of data out way quicker than I can."
null
CC BY-SA 2.5
null
2011-02-18T15:49:16.750
2011-02-18T15:49:16.750
null
null
352
null
553
2
null
545
18
null
I've applied GAs to all sorts of things. I had some success in the deterministic world where a pattern actually existed and I knew that some physical structure existed (seismic analysis, vibration analysis, inventory calcs, etc). After I found a GA model that behaved, the real work started: figuring out why it behaved. I also generated a lot of GA garbage from financial data that "worked" looking backward but was worthless looking forward. Techniques aren't the issue in finance; the structure is. And, of course, there is never enough (useful) data.
null
CC BY-SA 2.5
null
2011-02-18T18:53:30.343
2011-02-18T18:53:30.343
null
null
392
null
554
1
null
null
15
587
A [commentary attributed to Matt Rothman](http://www.zerohedge.com/article/barclays-quant-commentary-worst-returns-environment-disciplined-stock-pickers-60-years), which had recently (in the past six months) been making the rounds of the internet echo chamber, claimed that "there are no idiosyncratic returns available in the market". How would one turn this into a proper statistical hypothesis test? As part of the commentary, a plot of 'implied correlation' was given: ![from zerohedge](https://i.stack.imgur.com/Fj68a.jpg) This does not tell me much, but it suggests perhaps a test based on the two leading eigenvalues of the correlation matrix; I am not sure how to pose the problem.
How would you test the hypothesis "There are no idiosyncratic returns available in the market"?
CC BY-SA 2.5
null
2011-02-18T19:21:47.163
2011-07-20T00:55:16.110
2011-02-20T22:18:37.080
108
108
[ "correlation", "market" ]
555
2
null
549
8
null
Here are a few recent examples: [https://stackoverflow.com/questions/4951193/find-largest-5-value-less-than-1-lowest-5-values](https://stackoverflow.com/questions/4951193/find-largest-5-value-less-than-1-lowest-5-values) [http://tables2graphs.com/doku.php?id=04_regression_coefficients#figure_6](http://tables2graphs.com/doku.php?id=04_regression_coefficients#figure_6) [http://tables2graphs.com/doku.php?id=03_descriptive_statistics#figure_5](http://tables2graphs.com/doku.php?id=03_descriptive_statistics#figure_5) [http://chartporn.org/category/innovative/](http://chartporn.org/category/innovative/)
null
CC BY-SA 2.5
null
2011-02-18T19:33:10.937
2011-02-18T19:33:10.937
2017-05-23T12:41:23.377
-1
392
null
556
2
null
549
25
null
The [Google Motion Chart](http://code.google.com/apis/visualization/documentation/gallery/motionchart.html) is a particularly elegant visualization for 'replaying' time series data. There is also an [R package](http://code.google.com/p/google-motion-charts-with-r/) to interface with it. ![enter image description here](https://i.stack.imgur.com/qndV0.png)
null
CC BY-SA 2.5
null
2011-02-18T23:05:51.603
2011-02-18T23:05:51.603
null
null
5
null
557
1
682
null
39
30234
In my firm we are beginning a new OMS (Order Management System) project, and there is a debate about whether we should use QuickFIX or go for a professional FIX engine. There is a common concern that QuickFIX is not fast enough, and obviously we will not get any technical support. I heard that it was used at BOVESPA for a while; they are replacing it with a paid one now. Well, that is enough for me: if they could use it, I can use it. Should I choose a professional engine over QuickFIX? Is it not good enough?
How fast is QuickFix ?
CC BY-SA 2.5
null
2011-02-18T23:48:16.457
2015-09-02T22:19:23.133
2011-09-08T20:41:05.040
1355
42
[ "fix" ]
558
2
null
557
13
null
How fast do you need? Have you measured round-trip times from the exchange and determined that you need a better solution? I wouldn't call QuickFIX quick; it has a lot of unnecessary overhead (temporary objects, temporary strings, nested functions, etc). But I've used it in projects before without too much trouble. It is definitely possible to make a much faster FIX engine from scratch. You can use inlined accessors, call `writev()` instead of concatenating strings, etc. Then there are the standard's requirements, like replaying old FIX messages or handling multiple values per key. It's a ton of work to do something like that correctly, so you'll need to evaluate just how much pain you're willing to go through. I haven't used any of the commercial off-the-shelf engines; hopefully someone else can chime in with a specific recommendation there. Ideally the cost of that will be a fraction of doing it in-house.
null
CC BY-SA 2.5
null
2011-02-19T02:11:11.063
2011-02-19T02:11:11.063
null
null
35
null
559
1
null
null
8
8160
I figure this is perhaps the best place to ask this. What technology stack is used in Bloomberg? I'm a C++ developer, and I definitely prefer C++ to C, so I don't want to touch C unless it's strictly necessary. I've had my fair share of maintaining legacy systems, and it's not something I enjoy. From my reading it seems they use C heavily.
Technology stack used in Bloomberg
CC BY-SA 2.5
null
2011-02-19T13:48:03.200
2012-10-23T11:17:34.937
null
null
424
[ "programming", "bloomberg" ]
560
2
null
559
5
null
But what you see from the outside is just an interface, no? And C interfaces are much easier to handle, e.g. from Excel and other common tools. Almost a decade ago I wrote a (firm-internal, unreleased) interface from R to Bloomberg, which only worked because the C interface permitted me to use gcc / MinGW (as required by R) along with the dll/lib from Bloomberg (presumably built with a Visual C compiler). You cannot mix and match C++ libraries across compilers. That said, I think I have seen job ads for Bloomberg mentioning C++ skills. As an aside, I also recall an NVidia press release from 1 1/2 years ago when [Bloomberg switched to a bunch of GPUs for simulation-based pricing models](http://www.wallstreetandtech.com/it-infrastructure/showArticle.jhtml?articleID=220200055), so they probably fall into the 'whatever gets the job done' school of tool deployment.
null
CC BY-SA 2.5
null
2011-02-19T14:42:20.467
2011-02-19T18:52:00.750
2011-02-19T18:52:00.750
35
69
null
561
2
null
559
4
null
I hate to recite [Wikipedia](http://en.wikipedia.org/wiki/Bloomberg_Terminal#Architecture), but it seems like the answer is there. To summarize, the legacy Bloomberg server was written in a mix of Fortran and C, and more recently they've begun adding C++ and embedded Javascript components. Different clients use languages appropriate to their platforms.
null
CC BY-SA 2.5
null
2011-02-19T15:49:40.950
2011-02-19T15:49:40.950
null
null
80
null
562
2
null
554
5
null
Take the CAPM regression (it's not exactly correct, but it's instructional) $$(R_i - R_f) = \alpha_i + \beta_i (R_{mkt} - R_f) + \epsilon_i$$ The author is saying that these days the $(R_{mkt} - R_f)$ term is driving all returns and that the $\alpha_i$ and $\epsilon_i$ terms are not significantly different from zero because all returns are correlated. Another way of looking at it is that before this period of high correlation you could find the stocks that were mis-priced by looking at fundamentals. If you bought the stocks that were under-priced, then the market would soon enough find out that you were correct, buy the stock and return it to the correct price, and you would capture this return in excess of that predicted by $\beta_i$. He is saying that even if these stocks are mis-priced, because everything is correlated, there are no price corrections and you're only seeing returns based on $\beta_i$. That the correlation isn't 1.0 shows that he's not absolutely correct, but it's a good point. Because everything is "correlated" you can't get an excess return without buying a high-$\beta_i$ stock and accepting market risk (i.e., this is a "risk on" period when all stocks are correlated). I can think of two things to test:

- Fama-MacBeth regressions -- calculate $\beta_i$ for all the stocks, then do a cross-sectional regression of the excess returns on $\beta_i$ each period and take the time-series average. If he's correct, then the average coefficient on $\beta_i$ will be positive and significant (a minimal sketch follows below).
- Standard CAPM regressions (although to get a decent estimate of $\alpha_i$ you will need more risk factors) -- if he's correct, then the intercepts should not be significantly different from zero and the idiosyncratic error $\epsilon_i$ should be smaller than in previous periods.
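For reference (my addition, not the answerer's code), a minimal sketch of the two-pass Fama-MacBeth procedure described in the first bullet; `ex_ret` is a T x N matrix of excess stock returns and `mkt` a length-T vector of excess market returns, both hypothetical.

```
# Minimal sketch of a two-pass Fama-MacBeth test.
fama_macbeth <- function(ex_ret, mkt) {
  # Pass 1: time-series regressions give each stock's beta.
  betas <- apply(ex_ret, 2, function(r) coef(lm(r ~ mkt))[2])

  # Pass 2: cross-sectional regression each period; average the slopes over time.
  gammas <- apply(ex_ret, 1, function(rt) coef(lm(rt ~ betas))[2])
  c(mean_gamma = mean(gammas),
    t_stat     = mean(gammas) / (sd(gammas) / sqrt(length(gammas))))
}
```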
null
CC BY-SA 2.5
null
2011-02-19T15:50:06.343
2011-02-19T16:00:11.770
2011-02-19T16:00:11.770
106
106
null
563
2
null
549
12
null
Shane's advice is good. I think it's worth adding the following two techniques not already mentioned: [Self-Organizing Maps](http://en.wikipedia.org/wiki/Self-organizing_map) (SOMs) and [Seriation](http://cran.r-project.org/web/packages/seriation/vignettes/seriation.pdf) (pdf pertaining to the R package `seriation`, but a great intro to the topic). They are not explicit visualization techniques, per se. Instead, they are algos that transform the underlying data in ways that aim to lead to greater/new insight into the underlying data. Thus, in my mind, the above approaches share a common objective with xy plots, contour plots, scatter-plot matrices, heat maps, etc. For strict quantitative visualization, Tufte, as mentioned above, is a great place to start. Personally, I've gotten more out of Wong's "The Wall Street Journal Guide to Information Graphics" and Janert's "Data Analysis with Open Source Tools". However, keep in mind that each has a different audience and objectives in mind. I also believe Processing (mentioned by Shane) has a very bright future in finance - it's been used heavily by multimedia artists, primarily because of its relative ease and great flexibility.
null
CC BY-SA 2.5
null
2011-02-19T16:22:53.153
2011-02-19T16:22:53.153
null
null
390
null
564
2
null
358
5
null
FWIW, here's the approach I used. I keep the dates as an integer in YYYYMMDD form and merge the calls and puts into a data frame `both`. Then I use `ddply` to operate on each matched call and put to find the future SPX close and call/put bid-offer average `boa`.

```
library(plyr)

both <- merge(calls,
              puts[, c("date", "exdate", "strike", "boa", "delta", "vega")],
              by = c("date", "exdate", "strike"),
              suffixes = c(".calls", ".puts"), sort = T)
```

Which gives:

```
> head(both)
      date   exdate strike  close    Xms boa.calls delta.calls vega.calls
1 19960104 19960120    490 588.97 -98.97   100.375    0.965229   8.954686
2 19960104 19960120    495 588.97 -93.97    95.375    0.964108   9.199321
3 19960104 19960120    510 588.97 -78.97    80.375    0.960100  10.064710
4 19960104 19960120    515 588.97 -73.97    75.375    0.958478  10.407010
5 19960104 19960120    525 588.97 -63.97    65.500    0.949837  12.130740
6 19960104 19960120    530 588.97 -58.97    60.625    0.942449  13.559780
  boa.puts delta.puts vega.puts
1  0.03125  -0.002597  0.960498
2  0.03125  -0.002725  1.003923
3  0.09375  -0.007769  2.548534
4  0.12500  -0.010310  3.279840
5  0.28125  -0.021612  6.162532
6  0.28125  -0.023069  6.512128

both <- transform(both,
                  date.ym   = date %/% 100,
                  exdate.ym = exdate %/% 100,
                  exdate.d  = exdate %% 100)

# in.parallel is a logical flag defined elsewhere (TRUE if a parallel backend
# is registered for plyr)
both <- ddply(both, .variables = c("exdate", "strike"), .fun = my.shift,
              direc = "fut", value.col = c("close", "boa.calls", "boa.puts"),
              .parallel = in.parallel)
```

Which gives:

```
> head(both)
      date   exdate strike    close      Xms boa.calls delta.calls vega.calls
1 19960109 19960120    565 582.7998 -17.7998  18.50000    0.953215   9.323605
2 19960109 19960120    575 582.7998  -7.7998   9.06250    0.834628  23.915020
3 19960109 19960120    580 582.7998  -2.7998   6.06250    0.626327  36.502140
4 19960109 19960120    585 582.7998   2.2002   3.81250    0.445721  38.105320
5 19960109 19960120    590 582.7998   7.2002   2.06250    0.287108  32.854870
6 19960109 19960120    595 582.7998  12.2002   1.03125    0.166120  24.049020
  boa.puts delta.puts vega.puts date.ym exdate.ym exdate.d close.fut
1   3.1875  -0.220331  28.53901  199601    199601       20    581.99
2   5.1875  -0.345956  35.53591  199601    199601       20    581.99
3   7.0000  -0.432961  37.89640  199601    199601       20    581.99
4   9.6250  -0.525538  38.34777  199601    199601       20    581.99
5  12.8750  -0.610547  36.90346  199601    199601       20    581.99
6  17.1250  -0.671490  34.82305  199601    199601       20    581.99
  boa.calls.fut boa.puts.fut
1      17.62500      0.34375
2       8.06250      1.12500
3       4.43750      2.40625
4       1.84375      4.93750
5       0.71875      8.93750
6       0.28125     13.37500
```

Where the `my.shift` function is:

```
my.shift <- function(x, date.col = "date", value.col = NULL, steps = 1,
                     direc = NULL) {
  if (direc == "fut") {
    x.shift <- tail(x[, c(date.col, value.col)], -1 * steps)
    x.shift[, date.col] <- head(x[, date.col], -1 * steps)
  } else {
    x.shift <- head(x[, c(date.col, value.col)], -1 * steps)
    x.shift[, date.col] <- tail(x[, date.col], -1 * steps)
  }
  x <- merge(x, x.shift, by = date.col,
             suffixes = c("", paste("", direc, sep = ".", collapse = "")))
  return(x)
}
```

HTH (I use the `.ym` and `.d` columns later for picking the right time to expiry).
null
CC BY-SA 2.5
null
2011-02-19T16:31:10.047
2011-02-19T16:31:10.047
null
null
106
null
565
1
790
null
12
7561
What is the market standard for pricing quanto CDS (i.e. CDS which pay the contingent leg in a different currency than the premium leg)?
Quanto CDS modeling
CC BY-SA 2.5
null
2011-02-19T18:01:48.280
2017-12-01T12:32:04.540
null
null
89
[ "credit", "cds" ]
566
2
null
559
3
null
Bloomberg has client/server API libs for most modern programming languages including C++, Java, and .Net. You don't have to hit the C level api, unless you really want to :-)
null
CC BY-SA 2.5
null
2011-02-21T03:50:05.530
2011-02-21T03:50:05.530
null
null
214
null
567
1
568
null
9
2605
What is a good place to acquire cheap/free historical quarterly filings data for US Companies? Specifically, I'm interested in the consolidated financial statements. The data should be well-structured and machine-readable (csv,tsv,json, etc.) I'm familiar with CompuStat and CRSP but both are well out of my price range. Coverage need not be exhaustive, but most of the Russell 3000 (with dead companies) for the last 5-10 years would be great.
Free/cheap source of structured historical quarterly filings?
CC BY-SA 2.5
null
2011-02-21T04:31:28.683
2017-06-02T16:25:02.093
null
null
436
[ "data", "history", "fundamentals" ]
568
2
null
567
7
null
You can get quite a bit of structured data for free from the SEC's EDGAR system via XML: [http://www.sec.gov/edgar/quickedgar.htm](http://www.sec.gov/edgar/quickedgar.htm) [http://xbrl.sec.gov/](http://xbrl.sec.gov/) Even the older stuff that's not XML-based is fairly readily parsable. Another source that is easier to deal with, but not free, and possibly expensive, is CapitalIQ (which is where Yahoo Finance gets its data).
null
CC BY-SA 2.5
null
2011-02-21T04:45:10.440
2011-02-21T20:34:26.503
2011-02-21T20:34:26.503
35
214
null
569
1
580
null
33
13867
Currently I trade through IB. I run my HFT strategies (100 round trips per hour) but I think that latency is killing me and my profits are shrinking. I need the fastest possible broker out there which meets the following requirements:

- Willing to deal with small accounts (50k); hence Lime is off the table.
- Has a Java API (or it's easy to buy a third-party API for it)
- Can trade futures
- Offers colocation (US and EU)
- Offers a data feed
- Can run on a Unix box
What is a good broker for HFT?
CC BY-SA 3.0
null
2011-02-21T09:36:04.077
2016-03-25T03:55:11.417
2011-10-07T11:43:23.787
1106
40
[ "data", "futures", "high-frequency", "broker", "programming" ]
570
2
null
559
3
null
Tons of legacy Fortran and C, some recent C++ and Java. Lots of in-house technologies of debatable quality, including for things where there exist good open-source alternatives. If you care about using modern technologies and don't want to touch legacy systems, I'd go elsewhere. Edit: there is some variability across teams with regards to the technology stack they use on a day-to-day basis. You may be able to find a team that suits your requirements. However, bear in mind that your mobility within the organization will be restricted if you're not prepared to deal with legacy technologies and systems.
null
CC BY-SA 2.5
null
2011-02-21T10:51:39.677
2011-02-21T11:01:41.597
2011-02-21T11:01:41.597
90
90
null
571
2
null
431
7
null
I like [http://videolectures.net/](http://videolectures.net/), lots of conferences/lectures in computer science and maths, some in finance ...
null
CC BY-SA 2.5
null
2011-02-21T11:35:51.500
2011-02-21T11:35:51.500
null
null
null
null
572
1
579
null
13
1098
I am currently wrapping up my thesis. My final chapter is on applying the SABR model for pricing purposes. I am valuing a constant maturity swap by replicating its value using plain vanilla European payer and receiver swaptions as described by P. Hagan ([http://pds4.egloos.com/pds/200702/26/99/convexit.pdf](http://pds4.egloos.com/pds/200702/26/99/convexit.pdf)) in equation (2.19a). I use the SABR model to inter- and extrapolate market volatility smiles. To be more specific, I am pricing a 5Y CMS swap swapping the 10Y EURIBOR6M swap rate against a floating payment of EURIBOR3M with payments being made quarterly. I have (somewhat arbitrarily) chosen to price the CMS swap as if today is June 1st 2010, but $\beta$ should be fairly stable and a contemporary estimate would be equally helpful/interesting. My "problem" is that using $\beta=0.25$ gives me a CMS spread of 162 bp while using $\beta=0.85$ gives me a CMS spread of 176 bp. The Bloomberg mid quote for this specific product on June 1st 2010 is 175.5 bp, but I feel that simply choosing the $\beta$ that fits better is not very... academic.
What SABR $\beta$ to use for EURIBOR swaption smiles
CC BY-SA 2.5
null
2011-02-21T13:54:38.567
2011-02-21T19:32:04.067
null
null
463
[ "structured-finance" ]
573
1
585
null
8
1456
Given a historical distribution of weekly prices and price changes for a stock, how can I estimate the option premium for a nearly at-the-money (ATM) option, say with an expiration date 3 months in the future? We can also assume we have the stock's historical beta and the current option premium if desired. To keep things simple, assume the volatility of the stock is constant. I am writing a little Monte Carlo sim for a buddy in Excel and would like to simulate a strategy of writing ATM covered calls, letting the option expire if OTM and re-writing. I will assume exercise at the end of the option period if the call is ITM. I have never modeled options before, so another title for this question might be "Options Modeling 101". For starters I would just like to improve upon my current model, which is constant premiums. Thanks!
Simple model for option premium (for covered call simulation)?
CC BY-SA 2.5
null
2011-02-21T14:54:08.353
2011-02-22T18:22:23.587
null
null
47
[ "options", "option-pricing", "monte-carlo", "modeling" ]
574
2
null
569
4
null
Here's a discussion/conclusion about the Flash Crash: [http://www.nanex.net/FlashCrashFinal/FlashCrashAnalysis_Theory.html](http://www.nanex.net/FlashCrashFinal/FlashCrashAnalysis_Theory.html) Quoting the above link: "....A sale (or purchase) of 2,000+ contracts which rips through one-side of the depth of book in 50-100 milliseconds, will immediately overload many systems....." My point is that HFT currently takes place in computers that are located next door to the exchange's computers and happens in milliseconds. Retail trades are not involved. I now see that HFT has been discussed in the past: [How 'High' is the frequency in HFT?](https://quant.stackexchange.com/questions/310/how-high-is-the-frequency-in-hft)
null
CC BY-SA 3.0
null
2011-02-21T15:00:26.330
2011-04-26T16:40:53.273
2017-04-13T12:46:22.953
-1
392
null
575
1
763
null
17
814
I am working in the area of probability theory, and for a case study I would like to make some calculations in finance. Since I am developing theory for discrete time, I am interested in models for the stock price in discrete time. As I recall, GARCH is a good one. Could you advise me on something else? It would also be nice if you could tell me which models can be used for the currency markets (since they are quite a bit faster than the stock markets, I think there should be other relevant models).
Discrete-time model: stock dynamics
CC BY-SA 2.5
null
2011-02-21T15:44:25.237
2011-03-18T23:00:00.450
null
null
464
[ "models", "stochastic-processes" ]
576
2
null
395
0
null
Recently I came across some results that compute such probabilities in a very nice and accurate way for Markov models. If you are interested, I will be happy to tell you where you can find them.
null
CC BY-SA 2.5
null
2011-02-21T15:54:09.630
2011-02-21T15:54:09.630
null
null
464
null
577
2
null
208
4
null
(1) You can easily solve it in the case of constant coefficients. The answer will be $\infty$; in fact, this equation has no solution on any interval. The intuition is the following. For an SDE like $$ dS_t = \mu(t,S_t)dt+ \sigma(t,S_t)dw_t $$ you can write $dw_t = \xi_t\sqrt{dt}$, where $\xi_t\sim\mathcal{N}(0,1)$ are standard Gaussian i.i.d. random variables. When you do the "infinite summation" for the integral, it looks like $$ \int\limits_0^T \sigma(t,S_t)\xi_t\sqrt{dt}. $$ Because the expectation of $\xi$ is zero, and thanks to the Law of Large Numbers, this integral makes sense: the integrand is positive about as often as it is negative on any small interval. In your equation, on the other hand, you drop the term $\xi_t$ - and now on any small interval the integrand keeps a single sign, so the integral diverges. You can also consider the following motivation. When you discretize time and simulate $$ \Delta S_t = \mu(t,S_t)\Delta t +\sigma(t,S_t)\xi_t\sqrt{\Delta t} $$ then for any $\Delta t$ you will have nice behaviour of the trajectories. On the other hand, try to simulate $$ \Delta S_t = \mu(t,S_t)\Delta t +\sigma(t,S_t)\sqrt{\Delta t} $$ and decrease the time step. You will see immediately that even on the segment $[0,1]$ your trajectory diverges for sufficiently small time steps (a minimal simulation sketch follows below). (2) Surely, these are different notions. (3) If you are talking about your deterministic equation, then there will be no difference; the integrals will blow up. I hope this was useful - otherwise please comment and ask.
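A minimal sketch (my addition) of the experiment just described, using geometric-Brownian-style coefficients $\mu(t,S)=\mu S$ and $\sigma(t,S)=\sigma S$ with made-up parameters: the Euler scheme with the Gaussian factor stays well behaved as the step shrinks, while the scheme without it explodes.

```
# Minimal sketch: Euler steps on [0, 1] with and without the Gaussian factor.
set.seed(1)
mu <- 0.05; sigma <- 0.2; S0 <- 1

simulate <- function(dt, with_noise = TRUE) {
  n <- round(1 / dt); S <- S0
  for (i in seq_len(n)) {
    xi <- if (with_noise) rnorm(1) else 1     # dropping xi leaves a sqrt(dt) "drift"
    S  <- S + mu * S * dt + sigma * S * xi * sqrt(dt)
  }
  S
}

sapply(c(1e-2, 1e-3, 1e-4), function(dt)
  c(dt = dt, with_xi = simulate(dt), without_xi = simulate(dt, FALSE)))
```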
null
CC BY-SA 2.5
null
2011-02-21T16:10:31.213
2011-02-21T16:10:31.213
null
null
464
null
578
2
null
212
0
null
It seems to me that you are making the problem more complicated than it actually is. What is the process $X_t$, and what is the motivation for finding this expectation as a path integral? If you would like to find the value of the integral along the trajectory of the diffusion process, I think it is undefined.
null
CC BY-SA 2.5
null
2011-02-21T16:14:36.360
2011-02-21T16:14:36.360
null
null
464
null
579
2
null
572
7
null
There's a paper by Fabio Mercurio called "Smiling at Convexity" which discusses this and proposes doing basically what you've done, namely setting $\beta$ to match the market prices of CMS swaps. In the Hagan et al. SABR paper they discuss ways of setting $\beta$ based on plotting ATM vols versus the forward rates. The idea here is to plot log vol vs log fwd rate, the slope of which is $\beta - 1$ (so its magnitude gives $1-\beta$). This gives you a rough idea, but it's very noisy, so you wouldn't really be able to distinguish (say) $\beta$ = 0.5 and $\beta$ = 0.6 (a minimal sketch of this regression follows below). The convexity adjustment will be very sensitive to how you extrapolate the smile, which in the SABR model is strongly affected by the value of $\beta$. You also have to be a bit careful with how far you take the extrapolation; the SABR model will just keep the vol going up forever as you integrate the strike up to infinity. EDIT: Just to be clear, there's no unique answer; different desks will do different things. Fitting the CMS swaps is probably a good idea, but given only one parameter $\beta$ you would only get a best fit. I've not tried to do this, but I suspect that your $\beta$ values would not be very stable against moves in the CMS market, which seems unsatisfying, as you'd expect $\beta$ to change only very slowly with time.
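A minimal sketch (my addition) of the log-log regression mentioned above, under the SABR ATM approximation $\sigma_{ATM} \approx \alpha f^{\beta-1}$, so the fitted slope estimates $\beta - 1$; the input vectors are hypothetical market observations.

```
# Minimal sketch: estimate SABR beta from ATM vols and forwards via a
# log-log regression (slope = beta - 1 under sigma_ATM ~ alpha * f^(beta - 1)).
est_beta <- function(atm_vol, fwd) {
  fit <- lm(log(atm_vol) ~ log(fwd))
  unname(1 + coef(fit)[2])
}
```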
null
CC BY-SA 2.5
null
2011-02-21T19:22:42.730
2011-02-21T19:32:04.067
2011-02-21T19:32:04.067
371
371
null
580
2
null
569
17
null
Look at [Genesis Trading](http://www.gndt.com/). Most of the sales guys there are kinda like used car salesmen, but they will work with you. Starting up with 50K should not be a problem for them. They offer full depth-of-book feeds if you are colocated with them. They do offer DMA, and you can specify all routing instructions for your orders rather than getting stuck on IB's router. You are not doing HFT, but don't get discouraged. If your executions are not good and you are getting lots of slippage, it may be due to IB.
null
CC BY-SA 2.5
null
2011-02-21T20:47:07.787
2011-02-21T20:47:07.787
null
null
373
null
581
1
582
null
19
5713
Say you have an Exponential Moving Average being continuously updated over a time series using 1-second-long time periods. What should happen if there is no value for the next second, e.g. there were no price updates? Should the function decay in some way since there are no new values? Is there a correct or accepted way of handling this case?
How to update an exponential moving average with missing values?
CC BY-SA 3.0
null
2011-02-21T23:43:12.633
2011-09-23T02:36:16.830
2011-09-23T02:36:16.830
1106
373
[ "data", "time-series", "moving-average" ]
582
2
null
581
14
null
You can either - reuse the last computed EMA, or - fill-forward the previous period's sample data and recompute the EMA. I generally prefer the second option, which should cause a decay. Only go for the first option if your application won't change its logic based on missing data.
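A minimal sketch (my addition) of the second option, fill forward and recompute; the update rule, alpha, and the toy series are all illustrative.

```
# Minimal sketch: EMA over 1-second bars where some seconds have no trade (NA).
ema_update <- function(prev, x, alpha = 0.1) alpha * x + (1 - alpha) * prev

prices <- c(100, 101, NA, NA, 102)   # NA = no price update that second
ema  <- NA
last <- NA
for (px in prices) {
  if (!is.na(px)) last <- px                       # fill the gap forward
  if (!is.na(last)) ema <- if (is.na(ema)) last else ema_update(ema, last)
}
ema
```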
null
CC BY-SA 2.5
null
2011-02-22T00:20:33.360
2011-02-22T00:20:33.360
null
null
35
null
583
2
null
530
33
null
Wavelets are just one form of "basis decomposition". Wavelets in particular decompose in both frequency and time and thus are more useful than Fourier or other purely frequency-based decompositions. There are other time-frequency decompositions (for instance the HHT) which should be explored as well. Decomposition of a price series is useful in understanding the primary movement within a series. In general with a decomposition, the original signal is the sum of its basis components (potentially with some scaling multiplier). The components range from the lowest frequency (a straight line through the sample) to the highest frequency, a curve that oscillates with a frequency maximum approaching N / 2. How this is useful:

- denoising a series
- determining the principal component of movement in the series
- determining pivots

Denoising is accomplished by recomposing the series by summing up the components from the decomposition, less the last few highest-frequency components (a minimal one-level sketch follows below). This denoised (or filtered) series, if chosen well, often gives a view on the core price process. Assuming continuation in the same direction, it can be used to extrapolate for a short period forward. As the timeseries ticks in real time, one can look at how the denoised (or filtered) price process changes to determine whether a price movement in a different direction is significant or just noise. One of the keys, though, is determining how many levels of the decomposition to recompose in any given situation. Too few levels (low freq) will mean that the recomposed price series responds very slowly to events. Too many levels (high freq) will mean fast response but, perhaps, too much noise in some price regimes. Given that the market shifts between sideways movements and momentum movements, a filtering process needs to adjust to regime, becoming more or less sensitive to movements in projecting a curve. There are many ways to evaluate this, such as looking at the power of the filtered series versus the power of the raw price series, targeting a certain % depending on regime. Assuming one has successfully employed wavelet or other decompositions to yield a smooth, appropriately reactive signal, one can take the derivative and use it to detect minima and maxima as the price series progresses. Problems:

- One needs a basis that has "good behavior" at the endpoint so that the slope of the curve at the endpoint projects in an appropriate direction.
- The basis needs to provide consistent results at the endpoint as the timeseries ticks and not be positionally biased.

Unfortunately, I am not aware of any wavelet basis that avoids the above problems. There are some other bases that can be chosen that do better. Conclusion: If you want to pursue wavelets and build trading rules around them, expect to do a lot of research. You may also find that though the concept is good, you will need to explore other decomposition bases to get the desired behavior. I don't use decompositions for trade decisions, but I have found them useful in determining market regime and other backward-looking measures.
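To make the recomposition idea concrete (my addition, not the answerer's code), a minimal one-level Haar example written out by hand so it does not depend on any particular wavelet package: averaging adjacent pairs keeps the low-frequency part, and setting the pairwise differences (the highest-frequency details) to zero before inverting gives the denoised series.

```
# Minimal sketch: one-level Haar decomposition with the detail (high-frequency)
# coefficients zeroed out. `p` must have even length; the values are illustrative.
haar_denoise_once <- function(p) {
  odd    <- p[seq(1, length(p), by = 2)]
  even   <- p[seq(2, length(p), by = 2)]
  approx <- (odd + even) / 2        # low-frequency (approximation) coefficients
  # detail <- (odd - even) / 2      # high-frequency coefficients, discarded here
  rep(approx, each = 2)             # inverse transform with the details set to zero
}

haar_denoise_once(c(100, 101, 103, 102, 105, 107))
```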
null
CC BY-SA 2.5
null
2011-02-22T01:19:27.890
2011-02-22T01:19:27.890
null
null
51
null
584
1
null
null
5
266
Say for a stock I want to do a simulation using 30 days of historical returns, and maybe generate 1000 paths, with 2 days as the forecast horizon. Say I have 100 of these 5 day blocks used for generating the distribution, matched with the actual values of the 2 days I am interested in forecasting. How would I estimate the accuracy of this method of generating a distribution? And what is a method I could use to compare it with a different means of forecasting the distribution? Any suggestions would help.
estimating the accuracy of a method for forecasting the distribution
CC BY-SA 2.5
null
2011-02-22T04:37:31.503
2011-03-03T03:38:47.373
null
null
null
[ "time-series" ]
585
2
null
573
9
null
The main component of that option premium is (forward-looking) volatility $\sigma$. The very simplest formula you could use for ATM options is the Bachelier model \begin{equation} \text{Call}_T = \sigma S \sqrt{\frac{T}{2\pi}} \end{equation} where the time to expiration is $T$ and $S$ is the current underlying price. This formula is "wrong" strictly speaking, but only by a factor of $\sigma^3T^{\frac32}$ which in your case will be around 5%. You'll also be ignoring a somewhat smaller error due to nonzero interest rates. To obtain $\sigma$ you can work with your available historical data to get a historical volatility. Historical volatility is not always the very best choice but it is far better here than your current constant price assumption, and it is very simple to calculate: \begin{equation} \sigma_{\text{Hist}} = \sqrt{\frac1{N-1}\sum_{i=1}^N{(r_i-\bar{r})^2}} \end{equation} where the $r_i$ are the periodic returns \begin{equation} r_i = \frac{\frac{S_{i+1}}{S_i}-1}{\Delta t_i} \end{equation} taken of the underlying $S_i$ at times $t_i$, $\Delta t_i=(t_{i+1}-t_i)$ and $\bar{r}$ is their mean. (The Black-Scholes model would have used log returns instead.) If you are happy with a crude estimate, you may assume $\bar{r}$ is zero rather than bothering to calculate it. And for a very crude estimate of historical volatility, you can instead use \begin{equation} \sigma_\text{Inaccurate} = \text{Mean}\left[|r_i|\right] \frac1{\sqrt{\text{Mean}\left[\Delta t_i\right]}} \end{equation} For maximum accuracy, you would of course want to use the Black-Scholes model. But frankly you would have an easier time finding the requisite historical option prices than you would finding a historical time series of forward-looking (implied) Black-Scholes volatilities.
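A minimal sketch (my addition) that turns the recipe above into code; it uses the usual annualization of weekly log returns rather than the exact scaling written in the answer's formulas, and all inputs are illustrative.

```
# Minimal sketch: historical vol from weekly prices, then the ATM premium via
# the sigma * S * sqrt(T / (2*pi)) approximation given above.
atm_call_premium <- function(prices, periods_per_year = 52, T_exp = 0.25) {
  r     <- diff(log(prices))                     # weekly log returns
  sigma <- sd(r) * sqrt(periods_per_year)        # annualized historical volatility
  S     <- tail(prices, 1)                       # current underlying price
  sigma * S * sqrt(T_exp / (2 * pi))
}
```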
null
CC BY-SA 2.5
null
2011-02-22T18:22:23.587
2011-02-22T18:22:23.587
null
null
254
null
586
2
null
554
4
null
One way of thinking about the problem is with a statistical factor model. Consider the two cases: - You have more assets than time points In this case if you accept enough factors, then there is no idiosyncratic risk. But there will be idiosyncratic risk if you restrict the number of factors. - You have more time points than assets In this case even if you accept as many factors as there are assets, you will in general have idiosyncratic risk as well. In order for your hypothesis to be true (over the time period of the data), there would need to be no idiosyncratic risk visible in this case. I don't see a good way of testing the hypothesis, but it doesn't seem like a realistic possibility to me. I think a more reasonable hypothesis is that the distribution of idiosyncratic risk is [smaller, more skewed, ...] in time frame X relative to time frame Y. Such hypotheses can be tested reasonably by estimating factor models in the two periods. Use the same number of factors in each period and then plot the densities of idiosyncratic risks. And probably try a few different choices of the number of factors. You could bootstrap to get a sense of how variable the idiosyncratic distributions are.
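A minimal sketch (my addition) of the comparison suggested in the last paragraph, using a k-factor statistical (PCA) model in each period; `R1` and `R2` are hypothetical T x N return matrices for the two time frames.

```
# Minimal sketch: per-stock idiosyncratic volatility from a k-factor PCA model.
idio_sd <- function(R, k = 3) {
  fac <- prcomp(R, scale. = TRUE)$x[, 1:k]            # statistical factors
  apply(R, 2, function(r) sd(resid(lm(r ~ fac))))     # residual vol per stock
}

# plot(density(idio_sd(R1))); lines(density(idio_sd(R2)), lty = 2)
```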
null
CC BY-SA 2.5
null
2011-02-22T20:38:54.970
2011-02-23T08:22:06.960
2011-02-23T08:22:06.960
249
249
null
587
2
null
584
1
null
Compare the density and cumulative distribution function of your forecast volatility and of the realized volatility. ![enter image description here](https://i.stack.imgur.com/TFExE.png)
null
CC BY-SA 2.5
null
2011-02-22T21:20:24.970
2011-02-24T23:47:07.247
2011-02-24T23:47:07.247
471
471
null
588
1
null
null
9
536
How does one calculate the implied liquidity of a specific option contract given a set of vanilla puts and calls with various strikes and maturities on a single underlying?
How do you calculate the implied liquidity of an option?
CC BY-SA 3.0
null
2011-02-23T05:16:15.610
2011-09-09T18:11:53.053
2011-09-09T18:11:53.053
1106
214
[ "options", "liquidity" ]
589
2
null
588
7
null
From [Implied Liquidity : Towards stochastic liquidity modeling and liquidity trading](https://perswww.kuleuven.be/~u0009713/ImpliedLiquidity.pdf) > We will call the parameter, fitting the bid-ask spread (under a symmetric distortion) around the mid price, the implied liquidity parameter. Hence for the European Call option (strike $K$ and maturity $T$) with given market bid ($b$) and ask ($a$) prices, the implied liquidity parameter is the specific $\lambda > 0$ such that: $$a = -\exp(-rT)\,E_\lambda\!\left[-(S_T - K)^+\right] \quad\text{and}\quad b = \exp(-rT)\,E_\lambda\!\left[(S_T - K)^+\right]$$
null
CC BY-SA 2.5
null
2011-02-23T12:20:19.113
2011-02-23T12:20:19.113
null
null
471
null
590
1
591
null
7
476
So this is a "work homework" question. As part of my job they are sending us through sort of a training course. I'm looking for advice, or a link to a site that explains how to do this with maybe some formulas or something. I am NOT looking for answers. I want to figure this stuff out myself, I just need a little guidance. Here is the question: > Given Information 3-month Libor 1.85% 6-month Libor 1.94625% 12-month Libor 2.06% Find the following: 3-month Libor 3 months forward 6-month Libor 6 months forward 3-month Libor 6 months forward 3-month Libor 9 months forward ASSUMPTION Assume 30/360 day count and even quarterly periods Thanks guys.
Need advice on finding forward spot rates
CC BY-SA 2.5
0
2011-02-23T15:36:54.537
2011-02-24T04:04:31.870
2020-06-17T08:33:06.707
-1
318
[ "futures", "interest-rates" ]
591
2
null
590
4
null
If you are looking for a good explanation with an example, then you may want to consider reading Jim Harper (aka Bionic Turtle) and his article on [forward rates and spot rates](http://www.bionicturtle.com/how-to/article/forward_rates_and_spot_rates/). You can find an Excel spreadsheet for the given example there as well.
null
CC BY-SA 2.5
null
2011-02-23T17:35:31.270
2011-02-23T17:35:31.270
null
null
15
null
592
2
null
584
1
null
Seems like you are interested in the forecast error: [http://en.wikipedia.org/wiki/Forecasting](http://en.wikipedia.org/wiki/Forecasting) With regard to the distribution, I would also look at the mean forecast error. A good model will have a mean error of 0, since it is not biased upward or downward. -Ralph Winters
null
CC BY-SA 2.5
null
2011-02-23T20:04:04.523
2011-02-23T20:04:04.523
null
null
null
null
593
1
621
null
12
798
As of right now, the price of Brent Crude is $\$$111.59/bbl and the price of WTI Crude is $\$$98.36/bbl. I'm well aware that futures markets don't set the spot price for oil, but actual supply/demand does. And, that I don't have access to enough refinery/supplier data to figure out the supply/demand balance for WTI, or the supply/demand balance for Brent. However, common sense says this mismatch in price will eventually close. Is anyone making any kind of bet on this situation? If so, could you share a general description of how you're controlling risk? Edit (03/01/2011): For future reference, below is a graph of the spread: ![enter image description here](https://i.stack.imgur.com/lZfsj.jpg)
Price of Brent versus West Texas Intermediate
CC BY-SA 2.5
null
2011-02-23T21:12:23.220
2018-01-26T22:02:18.510
2018-01-26T22:02:18.510
20795
392
[ "risk", "futures", "commodities" ]
594
1
null
null
17
1236
I am wondering how UBS hedges its exposure to its ETN XVIX. Unless I am grossly overestimating the trading costs, executing the strategy they describe in their prospectus with futures would be quite expensive, especially since they would incur some slippage (back months are not super liquid). Any ideas on how they do it? Do they have some way to reduce the amount of trades they have to execute in order to track the indices they base their strategy on (SP indices)?
How does UBS hedge its exposure to XVIX ETN?
CC BY-SA 3.0
null
2011-02-23T21:24:11.223
2012-09-20T18:08:23.297
2012-09-20T16:16:10.277
35
290
[ "vix", "hedging" ]
595
2
null
593
1
null
A significant portion of the price difference between different types of oil futures has to do with their sulfur content (heavy, light, sweet). If supply and demand for each are in equilibrium, they should not have the same price. Both Brent and WTI are light, but Brent is not as light as WTI (Brent has more sulfur). More sulfur means more loss in refining, which means that the equilibrium price of the heavier Brent is higher than that of WTI. The fact that the prices are in contango suggests a demand imbalance in Brent, perhaps produced by supply concerns (OPEC basket) stemming from the political crisis in Libya. Libyan oil (OPEC basket) is not technically Brent, but a lack of supply there can contribute to a demand surge in Brent.
null
CC BY-SA 2.5
null
2011-02-23T21:42:04.503
2011-02-26T18:16:16.573
2011-02-26T18:16:16.573
310
214
null
596
2
null
594
6
null
The key with VIX-based ETNs, or any exotic ETF/ETN in general, is the tracking error. Compared to spot VIX (or short spot VIX), even considering the long-short term-structure features that XVIX has, the tracking error is going to be nontrivial. Even leveraged equity basket ETFs have terrible tracking error; the exotics are even worse. Just compare a return series over a month or more. A lot of times, even a few days will make the tracking-error issue clear. In VIX products, one of the reasons is that there isn't a perfect hedge for them, so traders use S&P 500 options as a proxy. This lowers the liquidity burden, but increases the tracking error.
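For reference (my addition), a minimal sketch of the comparison suggested above: the annualized tracking error between an ETN's daily returns and its benchmark index's daily returns, with both input vectors hypothetical.

```
# Minimal sketch: annualized tracking error between two daily return series.
tracking_error <- function(r_etn, r_bench, periods_per_year = 252) {
  sd(r_etn - r_bench) * sqrt(periods_per_year)
}
```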
null
CC BY-SA 2.5
null
2011-02-23T21:50:18.077
2011-03-08T03:36:47.177
2011-03-08T03:36:47.177
131
214
null
597
2
null
590
2
null
Another good resource is "Investment Science" by David Luenberger (chapter 4). I'm actually doing "school homework" on the subject right now. I'm guessing you'll find this book good for future assignments as well.
null
CC BY-SA 2.5
null
2011-02-24T04:04:31.870
2011-02-24T04:04:31.870
null
null
475
null
598
1
null
null
9
875
Recently I have been looking at pairs-trading strategies from a cointegration perspective, as described in chapter 5 of Carol Alexander's [Market Risk Analysis volume 2](http://rads.stackoverflow.com/amzn/click/0470998016). As in most quantitative finance texts, the science is well explained, but the description of applications is a bit on the light side. Theoretically it's pretty straightforward and easy to run the tests for a given time period to see whether a certain pair is likely to be cointegrated or not. To apply the theory to an actual pairs trade, however, I would like to add the time dimension to my parameters. If I simply model the spread as $residuals = y - \alpha - \beta x$, the first thing I'd like to know is the mean and variance of the residuals. In order to set up my bid/ask limits I must have a mean and some measure of the variance. In an ideal case the two would be stable, but how can I quantify this and incorporate the information into the model? Also, how can I make this analysis structural, rather than having to depend on subjective eye-balling of the data? Could anyone point me in the right direction here? My guess is that I should take a look at regime-switching models. But since that topic is unknown to me, I'd appreciate it very much if someone could give some pointers so I can avoid the worst pitfalls. EDIT: Perhaps I wasn't clear enough, but my question is not how to do the tests; it's how I can quantify the stability of the parameters, i.e., the mean and variance of the spread.
How do I incorporate time-variability in a pair trading framework?
CC BY-SA 2.5
null
2011-02-24T14:15:50.677
2011-07-24T02:09:22.413
2011-02-25T04:12:34.597
478
478
[ "time-series", "cointegration" ]
599
2
null
598
3
null
If you're using R and would like a few examples, here's Paul Teetor's website: [http://quanttrader.info/public/](http://quanttrader.info/public/) And, Ernie Chan's website: [http://epchan.blogspot.com/](http://epchan.blogspot.com/) Energy pairs: [http://statistik.ets.kit.edu/download/doc_secure1/ptem342v144techreport.pdf](http://statistik.ets.kit.edu/download/doc_secure1/ptem342v144techreport.pdf)
null
CC BY-SA 2.5
null
2011-02-24T19:16:03.060
2011-02-24T19:16:03.060
null
null
392
null
600
2
null
598
2
null
I assume your 'x' and 'y' are prices, so 'residuals' is also in currency units. If so, then I would make a vector of the returns of 'residuals'. This gives you some (reasonably) independent observations on the process. Then you could compute some rolling means and rolling standard deviations to look for patterns that might help you or trip you up.
null
CC BY-SA 2.5
null
2011-02-24T19:18:15.130
2011-02-24T19:18:15.130
null
null
249
null
601
1
null
null
9
1505
A convertible bond denominated in USD is issued by an Indian company (with equity traded in INR). The bond will be repaid in USD and if converted into equity in the company, the conversion price will be based on a pre-determined fixed USD/INR rate. When valuing the option embedded in the bond, should we use the USD risk-free rate or INR risk-free rate?
Which risk-free rate to use to price a bond issued in one currency but convertible into equity in another?
CC BY-SA 3.0
0
2011-02-25T07:28:48.990
2011-12-06T21:53:01.033
2011-12-06T21:53:01.033
1106
null
[ "option-pricing", "interest-rates", "embedded-options", "convertible-bond" ]
602
1
null
null
14
875
On a security by security basis, I want to be able to quantify the level of HFT activity (and later institutional & retail activity). Is it higher than it normally is? How much so? What would you say is the best way to measure? Number of quotes? # of quotes relative to volume? Relative to # of trades? Spread volatility? Reversals after block prints?
What is the most effective way of determining & measuring the level of HFT activity in a stock in (close to) real time?
CC BY-SA 2.5
null
2011-02-25T13:35:20.350
2011-09-08T15:27:48.157
2011-09-08T15:27:48.157
1355
483
[ "high-frequency", "limit-order-book" ]
603
1
611
null
23
25284
The answers to [this question on forecast assessment](https://quant.stackexchange.com/questions/20/what-type-of-analysis-is-appropriate-for-assessing-the-performance-time-series-fo) suggest that if the sequence of residuals from the forecast is not properly independent, then the model is missing something and further changes should be made to remove the correlation. That does make sense to me, and it suggests that we should be able to do better than a simple GARCH(1,1) model. However, in almost all the literature on the subject this issue is never discussed, and the fact that forecasts produce serially correlated residuals is taken as a fact of life. Indeed, people have developed methods for accounting for both serial and contemporaneous correlation when comparing different forecast models. So why is this the case? If the GARCH(1,1) model does have such problems, why is it still considered a standard approach for modeling volatility?
Why are GARCH models used to forecast volatility if residuals are often correlated?
CC BY-SA 2.5
null
2011-02-25T14:36:51.097
2019-10-09T16:10:56.450
2017-04-13T12:46:22.953
-1
61
[ "volatility", "time-series", "forecasting", "best-practices" ]
604
2
null
406
6
null
Thanks gappy for your precise response. However, the question of this autocorrelation is much more important than an academic discussion of which portfolio performance ratio is best. Autocorrelation distorts max drawdown calculations, raising the question of whether the (positive) autocorrelation will continue in the future, producing large drawdowns, or whether it will subside to normally low levels. [Incidentally, I have never seen negative autocorrelation in real-world monthly returns of publicly traded assets.] For example, take two MLPs: the well-known, large-cap KMP (a pipeline operator) and NRGY (a mid-cap retail propane distributor). On post-Lehman data from 2/2009 to 2/2011, KMP's monthly returns are not autocorrelated, while NRGY's are highly autocorrelated. The two Calmar ratios are: KMP = 0.1304 (std. err. = 0.009); NRGY = 0.1472 (std. err. = 0.25), i.e., the risk-adjusted returns of the two assets are statistically equivalent. But if NRGY's autocorrelation is expected to subside, then its past max drawdown is overstated and it will be a better risk-adjusted investment than KMP in the future. I've done some research and have been able to calculate the theoretical max drawdowns for two models: no autocorrelation (the much more difficult calculation) and complete autocorrelation ($\rho = 1$), for a driftless, normally distributed volatility model. Irrespective of the size of the returns and of the volatility, the ratio of the $\rho = 1$ max drawdown to the $\rho = 0$ max drawdown is 4.35, a large difference! In other words, if period (e.g., monthly) returns are autocorrelated, you can expect a future max drawdown of about 4.35 times that of a comparable non-autocorrelated return series over the same horizon. Autocorrelation of returns can appear in thinly traded assets, hedge funds, preferred stocks, etc.; in common stocks it occurs in high-momentum names. In all cases of autocorrelation, beware: the max drawdowns will be large. There is an easy test to determine whether the returns are autocorrelated: the Ljung-Box test (please correct my spelling of the names if it is wrong, Gappy). I have a simple R script to calculate the Ljung-Box statistic if anyone is interested.
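For reference, the Ljung-Box test is available in base R via `Box.test`; a minimal sketch, assuming `ret` holds the monthly returns:

```
# Ljung-Box test for autocorrelation in monthly returns
set.seed(42)
ret <- rnorm(24)                     # placeholder for 24 monthly returns

Box.test(ret, lag = 6, type = "Ljung-Box")
# A small p-value indicates autocorrelated returns, in which case the
# historical max drawdown likely understates future drawdowns.
```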
null
CC BY-SA 3.0
null
2011-02-25T14:40:01.157
2011-09-25T21:04:45.910
2011-09-25T21:04:45.910
1106
345
null
605
2
null
602
5
null
A simple way to do this with the TAQ (trade and quote) database is to measure the amount of time between a quote update and a trade inside that quote. The shorter that time, the higher the probability that HFT is present.
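A rough R sketch of that measurement, assuming `quotes` and `trades` are sorted vectors of timestamps in seconds (a hypothetical layout, not the actual TAQ schema):

```
# Time from the most recent quote update to each trade
set.seed(1)
quotes <- sort(runif(1000, 0, 3600))          # placeholder quote timestamps (seconds)
trades <- sort(runif(200, 0, 3600))           # placeholder trade timestamps (seconds)

last_quote <- findInterval(trades, quotes)    # index of latest quote at/before each trade
ok      <- last_quote > 0                     # drop trades before the first quote
latency <- trades[ok] - quotes[last_quote[ok]]

summary(latency)   # very small typical latencies hint at more HFT-style activity
```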
null
CC BY-SA 2.5
null
2011-02-25T15:57:13.907
2011-02-25T15:57:13.907
null
null
214
null
606
2
null
601
2
null
The answer is that it does not matter. Choose one currency as the numeraire and stick to it. This follows from covered interest rate parity (the FX/interest-rate carry arbitrage relationship). If that relationship doesn't hold, skip the bond and lock in the arbitrage on the interest rate differential embedded in the USD/INR exchange rate.
null
CC BY-SA 2.5
null
2011-02-25T16:44:54.603
2011-02-25T16:44:54.603
null
null
214
null
607
2
null
602
8
null
You won't know who made the trade, so you'll need to look at the quotes. Specifically, you should look to see if there are a lot of cancellations in the full order book. That will tell you if there's higher "churn" for a particular stock, since HFTs often have low fill ratios (<1% for some shops). But you'll need to control for volatility, since wild market swings in general will cause market makers to pull their quotes.
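As a rough sketch of such a "churn" measure, assuming you have per-minute message counts (the column names and numbers below are made up):

```
# Cancel-to-trade "churn" per minute, expressed as a z-score vs. the day so far
set.seed(1)
msgs <- data.frame(
  cancels = rpois(390, 450),    # placeholder per-minute cancellation counts
  trades  = rpois(390, 40)      # placeholder per-minute trade counts
)

churn <- msgs$cancels / pmax(msgs$trades, 1)
z     <- (churn - mean(churn)) / sd(churn)
tail(z)                         # large positive values = unusually heavy cancellation
```

In practice you would also condition on volatility, as noted above, since quotes get pulled in fast markets for reasons that have nothing to do with HFT share.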
null
CC BY-SA 2.5
null
2011-02-25T16:51:07.627
2011-02-25T16:51:07.627
null
null
35
null
608
1
null
null
9
830
I think I've found an arbitrage opportunity. Right now, I can do this (first via CME, second via SAXO) : BUY CADUSD AMERICAN PUT 10200 STRIKE EXPIRING 16 MAR 2011 FOR 53 pips USD SELL USDCAD EUROPEAN CALL 0.9805 STRIKE EXPIRING 16 MAR 2011 FOR 68 pips CAD The first trade costs me 530 USD, the second one gives me 680 CAD, for a net profit of 163.52 USD (assuming USDCAD at 0.9805). Now: - If USDCAD decreases (meaning CADUSD increases), both options expire worthless. - If USDCAD increases (meaning CADUSD decreases), it appears that my long put gains value faster than my short call (just barely, due to the currency difference). Is there something I'm not seeing here? Of course, the prices I'm seeing might not be tradeable, and there are some minor expiration date issues, but, assuming I could do this, is it arbitrage?
USDCAD options vs CADUSD options arbitrage?
CC BY-SA 2.5
null
2011-02-25T18:58:25.430
2011-02-26T16:23:42.877
null
null
null
[ "arbitrage" ]
609
2
null
601
3
null
Use the USD rate. The actual tricky bit is in the volatility. Normally for a cross currency bond you would use the volatility of the foreign shares as denominated in US currency. However, the fixed FX rate in this case means that the correct volatility to use is the volatility in INR.
null
CC BY-SA 2.5
null
2011-02-25T19:01:36.257
2011-02-25T19:01:36.257
null
null
254
null
610
2
null
603
11
null
GARCH(1,1) is a "standard approach for modeling volatility" mainly in academic literature. Most of us in the real world don't use it. Volatility forecasting tends to come more from looking at more-liquid comparables for future market volatility than from fitting fancy retrospective models. As for ignoring the dependence of residuals, well, folks are probably considering the problem to be less nettlesome than problems introduced by trying to fit complex models with too many parameters.
null
CC BY-SA 2.5
null
2011-02-25T19:08:14.190
2011-02-25T19:08:14.190
null
null
254
null
611
2
null
603
15
null
One of the reasons the ARCH family of models is used is that you only need price data to generate the model. These data exist back to the 1800s, so ARCH is great for looking at volatility over very long periods. I don't know that I'd say that the ARCH model has a lot of problems -- it solved the problem of earlier models not allowing volatility to vary over time or with the level of the underlying process. [It allowed Robert Engle to put to rest Milton Friedman's idea that inflation uncertainty varied with inflation level and won Engle the Nobel Prize](http://nobelprize.org/nobel_prizes/economics/laureates/2003/engle-lecture.pdf). However, I agree with Brian that there are probably better volatility forecasts. ARCH models are necessarily based on the past, not on expectations. I think I would look first at implied volatility from options. Using implied volatility from options will also allow you to forecast higher moments like skew and kurtosis.
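As a rough illustration of backing implied volatility out of an option price, here is a minimal Black-Scholes inversion in base R (European call, no dividends; a sketch only):

```
# Black-Scholes call price and implied volatility via root finding
bs_call <- function(S, K, r, T, sigma) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  S * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
}

implied_vol <- function(price, S, K, r, T) {
  uniroot(function(s) bs_call(S, K, r, T, s) - price,
          interval = c(1e-4, 5))$root
}

implied_vol(price = 10.45, S = 100, K = 100, r = 0.05, T = 1)   # about 0.20
```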
null
CC BY-SA 2.5
null
2011-02-25T22:00:32.437
2011-02-25T22:00:32.437
null
null
106
null
612
2
null
530
6
null
DSP and time series analysis are the same thing: DSP uses engineering "lingo" and time series analysis uses mathematical "lingo", but the models are quite similar. Ehlers's cyber cycle indicator is an ARMA(3,2). Ehlers has some unique ideas, for example: what is the meaning of the phase of a random variable?
null
CC BY-SA 2.5
null
2011-02-26T05:04:37.713
2011-02-26T05:04:37.713
null
null
null
null
613
1
619
null
31
37993
I was wondering what is best practice for representing elements in a time series, especially with large amounts of data. The focus/context is in a back testing engine and comparing multiple series. It seems there are two options: 1) Using an integer index, or 2) Using a date-based index At the moment I am using dates, but this impacts on performance & memory usage in that I am using a hash table rather than an array, and it requires some overhead in iteration (either forward or backwards) as I have to determine the next/previous valid date before I can access it. However, it does let me aggregate data on the fly (e.g. building the ohlc for the previous week when looking at daily bars) and most importantly for me allows me to compare different series with certainty I am looking at the same date/time. If I am looking at an equity issue relative to a broader index, and say the broader index is missing a few bars for whatever reason, using an integer indexed array would mean I'm looking at future data for the broad index vs present data for the given security. I don't see how you could handle these situations unless you're using date/times. Using integer indexes would be a lot easier code wise, so I was just wondering what others are doing or if there is best practice with this.
What is the best data structure/implementation for representing a time series?
CC BY-SA 3.0
null
2011-02-26T05:55:05.937
2022-09-24T05:43:15.960
2011-08-23T01:26:13.320
1106
489
[ "data", "time-series", "market-data" ]
614
1
null
null
4
715
I make money selling covered calls on FX spot options, and some of my friends want to buy in to this without having to trade their own accounts. One method is for each of them to get an account, and have me trade it, but that doesn't scale well: I really don't want to be trading 10 accounts. The other method is to let them "buy in" to my account. My question: how do I determine what "portion" of my account someone owns, assuming many people (including myself) deposit and withdraw money? Is there a better way of doing this? What if I charge my friends a small fee to trade? I'm guessing the answer is well known, since hedge funds/mutual funds have to deal with this all the time.
Hedgefund-like behavior for covered call selling account?
CC BY-SA 2.5
null
2011-02-26T14:25:55.197
2011-03-30T18:51:56.940
null
null
null
[ "options" ]
615
2
null
614
1
null
Treat it like a hedge fund (limited partnership): think of your account as a corporation. Assign a number of shares to your account's net liquidating value (equity). Let your friends buy a given number of shares at a given price. Take your fee out and treat it as a cost to the fund that lowers the equity. Repeat this process as you take in new money, or when you distribute profits/return funds.
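A minimal R sketch of that share-based ("unitized") accounting, just to make the arithmetic concrete (all numbers and the fee are made up):

```
# Unitized account: deposits buy shares at the current NAV per share
equity <- 10000; shares <- 10000      # start: you own 10,000 shares at $1.00

nav <- function(equity, shares) equity / shares

deposit    <- 2500                    # a friend deposits at the current share price
new_shares <- deposit / nav(equity, shares)
equity     <- equity + deposit
shares     <- shares + new_shares

equity <- equity * 1.04               # trading gains change equity, not share counts
equity <- equity - 50                 # a flat fee charged to the fund

nav(equity, shares)                   # everyone's per-share value
new_shares * nav(equity, shares)      # the friend's current stake
```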
null
CC BY-SA 2.5
null
2011-02-26T16:03:32.667
2011-02-26T16:03:32.667
null
null
214
null
616
2
null
14
4
null
Quoting prices in delta makes it easier for a trader to delta hedge their portfolio. (The trader knows the delta they are trying to add or cut, so a price quoted in delta gives them the contract quantity they need to trade quickly, without needing a calculator, let alone a pricing model.)
null
CC BY-SA 2.5
null
2011-02-26T16:08:41.727
2011-02-26T16:08:41.727
null
null
214
null
617
2
null
613
8
null
It's usually more efficient to have time series objects located sequentially in contiguous memory. A hashtable doesn't provide this. As good as it is from a complexity standpoint, it's not faster than a fixed array when accessing items in an [i+1] or [i-lag] kind of operation, which is typical in time series code. (For the most part you can estimate the array size needed before you start your operation, so array-resizing operations can be optimized out.)
null
CC BY-SA 2.5
null
2011-02-26T16:19:58.780
2011-02-26T16:19:58.780
null
null
214
null
618
2
null
608
1
null
The difference in the expiration dates is enough to explain the source of what appears to be arbitrage. Given that there is a difference, it's not arbitrage (riskless profit) because there is risk. However, that doesn't mean that you can't make money doing this trade. It just means that you might lose as well (it's not riskless).
null
CC BY-SA 2.5
null
2011-02-26T16:23:42.877
2011-02-26T16:23:42.877
null
null
214
null
619
2
null
613
26
null
Representing time series (esp. tick data) using elaborate data structures may be not the best idea. You may want to try to use two arrays of the same length to store your time series. The first array stores values (e.g. price) and the second array stores time. Note that the second series is monotonically increasing (or at least non-decreasing), i.e. it's sorted. This property enables you to search it using the binary search algorithm. Once you get an index of a time of interest in the second array you also have the index of the relevant entry in the first array. If you wrap the two arrays and the search algorithm e.g. in a class you will have the whole implementation complexity hidden behind a simple interface.
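A minimal R sketch of the two-array idea, using `findInterval` for the binary search (illustrative only):

```
# Two parallel vectors: sorted times and their values
set.seed(1)
times  <- as.numeric(as.POSIXct("2011-01-01")) + sort(runif(1e5, 0, 86400 * 30))
values <- cumsum(rnorm(length(times)))

# Binary search: the last observation at or before time t
value_asof <- function(t) {
  i <- findInterval(as.numeric(t), times)
  if (i == 0) NA else values[i]
}

value_asof(as.POSIXct("2011-01-15 12:00:00"))
```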
null
CC BY-SA 3.0
null
2011-02-27T10:44:07.397
2016-12-02T01:33:56.303
2016-12-02T01:33:56.303
-1
493
null
620
1
null
null
21
5908
I was trying to build an options trading/optimization system, but it often gets more inaccurate as it scans through options far from the money because of the volatility skew. That is because I did not price in the skew, or the jump premium. I am wondering if there is a popular formula that takes the "degree of skew" and either the strike price or the Delta as inputs, and gives me the skew premium in terms of IV as output. Thank you very much.
Is there a popular curve fitting formula of options skew vs strike price or vs Delta?
CC BY-SA 2.5
null
2011-02-28T07:51:08.480
2017-09-12T21:39:39.793
null
null
496
[ "options", "volatility-skew" ]
621
2
null
593
5
null
If I were you I would be very cautious about playing this mean reversion, for several reasons. 1/ You never know when this spread is going to close, and the contracts on which you entered the trade may have expired by then, so you would have to roll. In fact the arbitrage can close without any opportunity to capture it because of the roll yield. I advise you to play directly on the back end of the curve if you can find contracts that are liquid enough. 2/ The cointegration between daily close prices of WTI and Brent is strong, but the mean-reverting process may be very slow to move. I would not play this "statistical" arbitrage. At high frequency, OK, but with daily prices... is it really statistically meaningful? Below is the price of a basket which is long WTI and short Brent for the same maturity. Do you see any mean reversion? ![enter image description here](https://i.stack.imgur.com/6m62T.jpg)
null
CC BY-SA 2.5
null
2011-02-28T10:34:32.773
2011-03-04T01:59:12.327
2011-03-04T01:59:12.327
134
134
null
622
2
null
620
17
null
There have been many studies of the shape of volatility skew. There are two simple empirical approaches that have proven most popular: - Spline (or piecewise polynomial) fits of observable implied vols - Parabolic parametric fits. and there is one other approach that is fairly simple yet reliable: - Edgeworth expansions Other empirical approaches also exist, of course, but quickly become unwieldy mathematically, computationally, and for calibration. The overall process is as follows: - Choose a function, such as a parabola, that you will use to represent skew and code it into the computer - Choose some input volatilities you trust - Use them in a fitting scheme to calibrate the parameters (tilt and curvature) of your skew. That is, you find parameters such that your chosen function does the best possible job of reproducing your input volatilities. - Use your fitted function to estimate volatilities anywhere on the skew curve. Sometimes, if data is particularly bad, you can copy tilt and curvature parameters from one fitted curve onto another curve where you have very little input data available. Exercise caution when you do this! The fits are typically constrained such that the parabola (or other function) cannot dip below zero for any "reasonable" strike. This is hard to achieve with linear fits, which is one major reason why parabolas are more popular. Fits also often have "cutoffs" where volatility is assumed to be constant for strikes above (or below) certain extreme values. For all such fits, one usually uses existing mathematical libraries and routines (many of which are simple copies of the serviceable code found in Numerical Recipes). You can get spline libraries, and you can fit parabolas using any linear algebra library that has a routine for ordinary least squares. See [this site](http://mathworld.wolfram.com/LeastSquaresFittingPolynomial.html) for more information. Edgeworth expansions are in the family of skew treatments that consider skew as a symptom of non-lognormal returns, and try to represent a non-lognormal distribution in some concise way. A decent place to start learning about them is Rubinstein's paper at [this location](http://www.haas.berkeley.edu/groups/finance/WP/rpf275.pdf). The real trick for any of these fits is choosing a weighting scheme. Typically, you will prefer to weight some input volatilities more than others. For example, most people believe the out-of-the-money put and call prices contain the best information. Others concentrate on weighting the near-the-money options, often by weighting according to the gamma.
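As a rough illustration of the parabolic parametric fit, here is a minimal R sketch (the strikes, vols, and the choice of log-moneyness as the fitting variable are illustrative; weights and cutoffs are omitted):

```
# Fit a parabola in log-moneyness to observed implied vols
spot   <- 100
strike <- c(80, 90, 95, 100, 105, 110, 120)
iv     <- c(0.32, 0.27, 0.25, 0.23, 0.22, 0.215, 0.22)   # made-up smile

m   <- log(strike / spot)               # log-moneyness
fit <- lm(iv ~ m + I(m^2))              # level, tilt, curvature

# Interpolate/extrapolate the skew at any strike (apply cutoffs in practice)
predict(fit, newdata = data.frame(m = log(97 / spot)))
```

A weighted fit (e.g., by gamma, or concentrating on out-of-the-money quotes) is a one-line change via the `weights` argument of `lm`.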
null
CC BY-SA 3.0
null
2011-02-28T15:51:49.670
2015-07-06T13:17:33.333
2015-07-06T13:17:33.333
254
254
null
623
2
null
406
3
null
Have you considered a Monte Carlo simulation on your returns? Then you could look at the distribution of Maximum Drawdowns.
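A minimal sketch of one such simulation in base R, resampling the observed returns (note that plain i.i.d. resampling removes any autocorrelation, which, as discussed elsewhere in this thread, can understate drawdowns; a block bootstrap is one way around that):

```
# Bootstrap the return series and look at the distribution of max drawdowns
set.seed(1)
ret <- rnorm(250, mean = 0.0004, sd = 0.01)   # placeholder daily returns

max_dd <- function(r) {
  equity <- cumprod(1 + r)
  max(1 - equity / cummax(equity))            # worst peak-to-trough loss
}

sims <- replicate(5000, max_dd(sample(ret, replace = TRUE)))
quantile(sims, c(0.5, 0.95, 0.99))
```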
null
CC BY-SA 3.0
null
2011-02-28T19:12:48.520
2011-09-25T21:02:52.323
2011-09-25T21:02:52.323
1106
520
null
624
1
null
null
6
765
This is probably not the best place to ask this question, but it seems the most appropriate one on Stack Exchange (there doesn't seem to be an economics site). Anyway, my question, a serious one and not politically motivated, is about "big oil". Does anyone know why we generally see "big oil" post record profits in times of crisis? The only ways profits can increase for a business are: demand increases, product price increases, or efficiency increases. I think that is pretty much all, and everything else will fall into one of those. Now, when oil prices skyrocket (like they have done in the past and are doing now), demand surely decreases? Maybe this is not true because of the high demand for oil? No matter what the product costs the oil companies, if they kept the same profit per barrel there is no way they could have "record profits", no matter what it costs us. So if they always make $\$50$ per barrel, it doesn't matter whether it costs them $\$10$ or $\$100$ to pull it out of the ground; they will make the same profit (since they will pass those costs on to the consumer anyway). An efficiency increase is really the only thing that could technically lead to increased profit without increasing demand or the price to the consumer. Usually, though, this costs money to implement, so it must initially be offset. In any case we shouldn't see a price increase to consumers, which we do in these times of "record profit". So, for record profits, how can they not be increasing how much they make off each barrel if demand is reduced and efficiency is about the same? What does this say about big oil? Are they really not "gouging" us? Basically, if we jump from $\$2.50$ to $\$5.00$ at the pump and they make record profits, doesn't that directly imply they are stacking the deck? How can big oil reasonably make "record profits" always at the same time that oil is "hard to get"? Isn't the whole point of supply and demand that as cost increases it is balanced out by a decrease in demand, so the net stays the same (a sort of zero-sum game)? E.g., selling 1 product for $\$1000$ is the same as selling $1000$ for $\$1$ each (the profit is the same in both cases)? Not true in the real world, because you're probably going to have to provide additional support for the $1000$ customers that you don't for the $1$.
Supply and Demand of Oil
CC BY-SA 2.5
null
2011-02-28T21:14:10.787
2011-03-02T01:22:26.353
2011-02-28T21:22:47.897
106
501
[ "economics" ]
625
2
null
624
5
null
This occurs because the [price elasticity of demand](http://en.wikipedia.org/wiki/Price_elasticity_of_demand) for oil is near zero, which is to say the demand curve is nearly vertical. This is partly a limited-rate-of-production story (i.e., in the short term the rate of oil production is fixed), but it's mostly a limited-substitutes story. Our cars run on gas, as do the trucks that move goods around the nation and the ships that move goods internationally. If I go to the store and the price of Coke rises, then I can buy a Pepsi; if the price of Pepsi rises I can buy an RC; and so on. But if I go to the gas station and the price of gas rises, I can't fill my car with some near substitute, despite being promised a Mr. Fusion for my DeLorean. True, we can change our habits, combine trips, carpool, etc., but these are at the margins and it takes time to change habits. So oil companies make money during these times because people are willing to pay a premium to secure access to this good (via futures contracts) that everyone needs.
null
CC BY-SA 2.5
null
2011-02-28T21:34:16.550
2011-02-28T21:34:16.550
null
null
106
null
626
2
null
624
4
null
> demand surely decreases? Maybe this is not true because the high demand for oil? Demand for oil, like grains, is very inelastic. People have a very hard time changing their energy consumption habits when price shocks happen. Some of this is unwillingness to change, but a lot is that there are very few alternatives. Most people in the US live too far from work to walk or ride a bike; public transportation is non-existent in most parts of the US. Staple grains, like rice in Asia or wheat in the US, also have extremely inelastic demand curves. When people grow up with a culture of "it isn't a meal unless it has ...." then those people are more likely to riot than change what they eat (see the rice riots in 2008, and the current one in Vietnam). We have huge corn (maize) subsidies in the US to prevent recurrences of the riots and protest marches (in the early 1970s) of farmers and housewives (while that administration had great experience dealing with rioting hippies, they could not figure out how to cope with "good" people rioting). > Efficiency increases is really the only one that could technically lead to an increased profit without increasing demand or product cost to consumer. Usually this costs money to implement though so must be initially offset. The turnover in the US automobile fleet is close to 10 years. Very few people can afford to purchase a new car when fuel prices change drastically, and it would take a very long time for the automakers to switch to new fuels, for fuel stations to switch to new fuels, and for consumers to buy new cars. Further compounding this issue are the emotional attachments people have with their cars (or the status implied by certain makes and models), which is why most people get fleeced purchasing cars - the salespeople see them coming and know just what buttons to push. The Otto cycle (the thermodynamic cycle used by gasoline engines) is about as efficient as it can be made. Any large improvements in energy consumption for autos won't come from new engines; the only practical improvements are in weight reduction and speed reduction. The failure of the national 55 mph standard shows that we're unwilling as a nation to make the speed reductions needed, and the huge popularity of SUVs shows we won't make the weight reductions needed.
null
CC BY-SA 2.5
null
2011-03-01T00:37:22.360
2011-03-01T00:37:22.360
null
null
126
null
627
2
null
624
6
null
The current jump in price is because one or more suppliers are dropping out. That shifts the supply curve to the left such that for the same demand for oil, the new price is now further up the steeper part of the curve (see the plots below). The demand for oil will react to that new price, but it takes time. Sorry that I don't have a more recent supply curve, however below is an old oil supply curve from 2007/2009 data. The numbers aren't exactly the same, but the idea still holds. From left to right are the suppliers cost, including internal/political cost, plus profit (the gray curve). As you can see, Saudi Arabia's costs are the lowest while the cost for "Marginal Suppliers Level 2" are the highest. When a supplier like Libya drops out, his amount of crude (about 1.XXX million bbl/day) drops out, shifting part of the curve to the left by 1.XXX million bbl/day (the blue curve). If current demand is above that particular supplier such that he is currently supplying oil, when he drops out, price is pushed upward (on the graph, from about 100 USD/bbl to 120 USD/bbl). All remaining suppliers (including "big oil") now make 20 USD/bbl more for their oil. As a result, their profits go up. The top plot is the complete supply curve. The bottom plot is the same curve, zoomed in to show the price change. ![enter image description here](https://i.stack.imgur.com/Oj8js.jpg) ![enter image description here](https://i.stack.imgur.com/c3f5C.jpg) Edit (03/01/2011)===================================================== Archival, quoting your statements: 1) ".....Since US oil is higher priced they gain the most?....." Not true. Explained below. 2) ".....Effectively the "big oil" has the "big market" and hence increases in price will effect them more than Saudi Arabia? If this is true then the big oil companies still have control of their profit and could surely reduce their profit to compensate for the increased in price and keep there total profit about the same? Since Lybia is a short term thing it would pass. I'm not saying a business has to do such a thing but it can....." Not true, Explained below. 3) "....Suppose "big oil" were to keep there net profit the same this year as, say, last to reduce the price at the pump....." Go back and look at various "big oil" Net Income Statements. Not every year is a profitable year. If "big oil" becomes unprofitable, we won't be driving what we're currently driving. 4) "....How much will it actually reduce it? Would it be drastically different(say 50c or more) or due to the huge market(millions of buyers) it would just be a few cents?...." Can't be done. If I (you) was a shareholder, I'd (you'd) sue them into the ground. Archival, you need to pay attention to how supply and demand works and forget about "fairness theories" and "they should do this.....theories" . If you honestly think "big oil" is making so much money, then couldn't you buy the stock and reap that "big oil money" for yourself? The deeper you get into the numbers, the more you'll realize why it is the way that it is. I added the following Oil Supply Table to help explain what is going on in the graphs. This table, when plotted, is the supply curve above. Key Point: On the spot market EVERYONE gets the same selling price. A barrel of WTI oil is a barrel of WTI oil (or Brent, or whatever grade). The Low Cost suppliers get the same price as the High Cost suppliers. Column I is the name of the country/group, sorted by Cost (the Cost is all production costs plus "enough" profit to sell the oil). 
The first guy in the list has the lowest production Costs. That Cost shows up as Price in column L. This Price is the MINIMUM price that particular supplier/group would be willing to accept for his oil. Obviously, if the current price is higher, they'll take it, but if it's lower, they'll drop out. Column J tells you how much oil he has for sale, at that price or higher. This means that the Saudis will sell 10248 Thousand bbl/day for a Price of 28.23 USD/bbl or higher (any lower and they're out). The next guy in the list is Iran at a Price of 28.24 USD/bbl. He'll sell up to 4.0 million bbl/day at his Price (28.24 USD/bbl), bringing the cumulative amount of oil available to 14.3 million bbl/day (column K). So, if demand is between 10.2 and 14.3 million bbl/day, the price would be 28.24 USD/bbl. That's for EVERYONE that is supplying oil, both Saudi Arabia and Iran (again, a barrel of oil is a barrel of oil). In the above graphs, I started off with Libya still providing oil (the gray curve in both graphs). In the second graph, I chose 83 million bbl/day as the example Total Demand (the red line in the graph). When you look at column K of the table, ALL SUPPLIERS up to and including the "United States (offshore 26% of total)" will be selling their oil at the going price. That price is somewhere between 93.57 USD/bbl and 115.66 USD/bbl (the graph is drawn with a linear interpolation between prices). From the second graph, on the gray curve, it looks like 100 USD/bbl is the going price. So, before Libya drops out, to supply 83 million bbl/day of oil, the market (ALL SUPPLIERS up to the required demand) requires about 100 USD/bbl. Whatever "extra" (above "enough") profit they make is the difference in the market price and their Cost. EVERYONE that sells oil into that market gets 100 USD/bbl. Now, Libya drops out (everything else is the same). Column N shows what happens to the Cumulative Production when Libya drops out. Everything is the same above the pink line. The cumulative supply available in both column K and N up to Algeria is 78.8 million bbl/day. However, since demand is 83 million bbl/day, more suppliers are required to fulfill demand. With Libya out, you have to go all the way up to "Marginal Suppliers (Level 1)" to fulfill demand. Their Cost is 145 USD/bbl. From the second graph, on the blue curve, due to interpolation, the going price is now about 120 USD/bbl. So, after Libya drops out, to supply 83 million bbl/day of oil, the market (ALL SUPPLIERS up to the required demand) requires about 120 USD/bbl. Since this is now 20 USD/bbl higher than before, the profits of ALL SUPPLIERS that are selling oil will jump by 20 USD/bbl. Notice that no one manipulated demand or supply, and that the SAME supply curve was used both times. The only thing that changed was, Libya dropped out. Now, go back and think about how you would implement ANY kind of "fairness theory" or "they should do this....theory". The supply curve is the supply curve. It depends on the Costs of the suppliers and how many suppliers are available to provide oil. It's that simple. ![enter image description here](https://i.stack.imgur.com/VM6rb.jpg)
null
CC BY-SA 2.5
null
2011-03-01T01:03:35.327
2011-03-02T01:22:26.353
2011-03-02T01:22:26.353
392
392
null
628
1
632
null
8
496
What are the standard methods to price american call/put options on illiquid underlyings?
illiquid american options pricing
CC BY-SA 2.5
null
2011-03-01T08:57:01.243
2011-12-08T04:56:23.207
2011-03-02T02:44:48.593
134
134
[ "option-pricing", "liquidity" ]
629
1
633
null
16
1877
If you have a bunch of different econometric data (e.g. indexes, FX, commodities, interest rates...) you can try to find a formula to see if there is any relationship in the data - e.g. to forecast it from this discovered pattern. What I am asking here is a little bit different: is there another way, in the sense that you can search for a formula f() such that it represents a trading strategy, where certain indicators determine when to go long or short (or any derivative combinations)? The idea is that the formula itself lives in an n-dimensional space of indicators/trading strategies and tries to survive as best it can. This must be a standard procedure for multi-agent systems simulating artificial stock markets. Alas, I am unable to find a simple approach to do just that...
How to encode trading strategies mathematically
CC BY-SA 2.5
null
2011-03-01T09:44:48.563
2018-01-03T17:32:46.490
null
null
12
[ "trading-patterns", "data-mining" ]
631
1
null
null
26
4644
What are the most used/mature/promising commercial solutions today which handle backtesting/automated trading needs? I'm talking about vertical product suites like [QuantFactory](http://www.quanthouse.com/?q=node/19) or [Deltix](http://www.deltixlab.com/) which address market data capture, historical data, CEP, backtesting, trading, etc. (I would be more specifically interested in the backtesting part, though.) - What are the big players here? - Do you have any feedback (pros/cons) about these solutions? - What is the cost of integrating such a solution into an existing ecosystem? - Are they customizable enough to handle very specific needs? I know that sounds like a lot of different questions, but I'm trying to get a general feeling about these tools (are they popular, useful, who uses them, for what, etc.)
Main backtesting & trading solutions: QuantFactory, Deltix, etc.
CC BY-SA 2.5
null
2011-03-01T12:59:05.893
2012-11-19T21:07:32.823
2011-03-06T00:07:45.147
274
328
[ "backtesting", "algorithmic-trading", "software", "trading-systems" ]
632
2
null
628
4
null
For a non-listed company you usually have a poor idea of the share value. If you do not know the underlying price it is impossible to accurately estimate the option price. So, for cases like this, people typically analyze the options on a risk/reward basis, using similar scenario analyses to those used to analyze the cashflows, assets and other components of the firm valuation.
null
CC BY-SA 2.5
null
2011-03-01T16:42:30.190
2011-03-01T16:42:30.190
null
null
254
null
633
2
null
629
12
null
Yes: you use an implementation of each signal and then use a statistical package like SAS to generate a factor model for you. It generates a mathematical formula, with coefficients and signals (variables), and even tells you the efficacy ($R^2$). However, you quickly find yourself exposed to data-snooping bias by choosing this approach, similar to the results outlined in this paper: [http://www.eco.sdu.edu.cn/jrtzx/uploadfile/pdf/empiricalfinance/10.pdf](http://www.eco.sdu.edu.cn/jrtzx/uploadfile/pdf/empiricalfinance/10.pdf) Data-snooping bias is why people stress the economic reasoning for their strategies over their historic statistical efficacy, which often fails to replicate going forward.
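A toy R illustration of that data-snooping point (pure noise, no real signals): with enough candidate indicators, the in-sample fit looks respectable purely by chance.

```
# In-sample R^2 inflates as you add random, meaningless "signals"
set.seed(1)
n <- 250                                  # observations (e.g., days)
y <- rnorm(n)                             # "returns" with no structure at all
X <- matrix(rnorm(n * 50), ncol = 50)     # 50 random "indicators"

fit <- lm(y ~ X)
summary(fit)$r.squared                    # clearly above zero, by chance alone
```

Out of sample such a model has no predictive power, which is why the economic reasoning matters.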
null
CC BY-SA 2.5
null
2011-03-01T20:06:21.190
2011-03-01T20:06:21.190
null
null
214
null
634
2
null
631
8
null
Some LinkedIn groups are particularly well suited for posting these questions. Your question has already been asked in "Automated Trading Strategies" at this URL: [Seeking input on QuantFactory, Deltix and 4thStory: Professional end-to-end Automated Trading Solutions](http://www.linkedin.com/groupItem?view=&gid=1813979&type=member&item=43835799&qid=80b45454-3b0b-461e-8cf9-6f3f6ed31b4c&goback=.gmp_1813979). Feel free to let us know the state of your research.
null
CC BY-SA 2.5
null
2011-03-01T22:04:17.663
2011-03-02T00:30:14.173
2011-03-02T00:30:14.173
35
509
null
635
2
null
530
7
null
The DSP techniques you're referring to are great for repeating signals, but not suitable for random signals (like price movements). While there are some techniques suitable for picking weak signals out of large noise environments (GPS is one that comes to mind), those techniques depend on knowing what the signal looks like, and if you knew what the signal looked like, you'd own the market.
null
CC BY-SA 2.5
null
2011-03-02T02:57:31.037
2011-03-02T02:57:31.037
null
null
126
null
636
1
1561
null
21
10914
Could somebody advise me on where to find good literature on the justification or motivation for using support and resistance lines - and also lines of maximums and minimums. In finance, it is sometimes difficult to find the relevant literature because even if a book does not claim to be able to make you rich, the content of the book can still be bad. Any good literature on support/resistance would be nice.
Is there any good research on support and resistance?
CC BY-SA 3.0
null
2011-03-02T10:38:42.150
2014-06-13T07:06:21.643
2011-08-08T04:42:10.743
1106
464
[ "strategy", "research", "technicals" ]
637
2
null
594
3
null
I've been looking around for the holdings of this device, and so far I can't find it listed anywhere. I'll keep looking, but unless I find something to the contrary, it looks like UBS is your credit risk (if they go down, this thing goes down). And, remembering Lehman Bros, that possibility is something to think about.
null
CC BY-SA 2.5
null
2011-03-02T16:36:05.810
2011-03-02T16:36:05.810
null
null
392
null
638
2
null
636
18
null
Below, I see a lot of support and resistance. Here's the code: ``` x <- cumsum(rnorm(1000)) plot(x, type="l", main="Support and Resistance") ``` ![enter image description here](https://i.stack.imgur.com/ZIr74.jpg) Edit (03/03/2011) ================================================ Gortaur, I put my answer here to avoid filling up the comment area. Your question 1) "......I was not asking for the "garbage" literature, I can find it in net by myself........" The reason that I posted that article is because it, and a few others, are consistently referenced as "credible proof" that support and resistance, as predicted by certain groups, is a profitable strategy. As far as I can tell, "credible" comes from the fact that the author worked at FRBNY, not her results. "Proof" comes from those that accept her results, not from repeating her results. If you pick at that article long enough, the term "garbage" might just come to mind. Don't get me wrong. Unlike others, I don't think the author was trying to mislead anyone. I just think that she didn't chew on her methods well enough. Your question 2) ".......the fact that S&R appear in random walks and the human's eye try to catch it everywhere do not prove the fact that these notions are meaningless in real life where prices are NOT random walks and random walks are just model to simplify dynamics and don't care about the game theory part of the question (because the system is too huge)........" If you run that (or similar) code over and over, and count the plots without "support" and/or "resistance", you might not get to 1. If it's that close to 100% of the time, how can you tell the difference between S&R that is random and S&R that isn't? And, if both the random series and the real series "bounce" off of some arbitrary lines, does it really have to mean something (more importantly, in the random case, you know it doesn't mean anything)? My point is really simple. By the time you devise a test that means something, and then run real data through it, you might find that you can't tell when random data has been passed through it. If so, then you're back to square one. You don't have to believe me, try it yourself. As a place to start, go to .pdf page 8 in that article and look at the test (Calculating Artificial S&R Levels). Draw what is described, for a few days worth of data. Then ask yourself, could you come up with some S&R lines that will beat her test? If so, does her test mean anything? And, if you use your method of beating her test as your test, can it provide a way for you to differentiate random versus non-random results? Edit (03/04/2011) ========================================== Gortaur, from your comment below, "....If you just use logics - then what are you trying to say? A = {for the random data there is smth which looks like S&R}. B = {S&R can be used to analyze random data}. A->B = false -> what is looking like S&R in random data is not S&R (at least like a tool for analysis). On the other hand, you are trying to use this fact to prove that there is no S&R (like a tool for analysis) for the non-random data. I don't think that it is right....." I'm not sure that I understand your questions, but here goes. By using S&R levels, aren't you trying to react to something that the market is actually doing, something meaningful? For instance....a bunch of people/money want to buy at support and sell at resistance? The idea is that something meaningful and non-random is happening at areas of S&R. 
The problem is, a random process will consistently generate lots and lots of S&R levels, and you can be 100% sure that those S&R levels mean absolutely nothing. Think about it, how can a random process NOT turn and go the other way? You can calculate the odds. So, when you find an S&R level in real data, do you have a way to identify that this is a "random" versus "non-random" S&R level? How about a way to calculate the odds that this is a random versus non-random S&R level? If you can't get past this issue then the rest of your analysis will at least partially (maybe completely) be based on meaningless random events. Obviously, a lot of people have decided that they can't differentiate random versus non-random S&R levels with price alone, so they add volume, seasonality, sun-spots, or whatever. Lots of ink has been wasted on this subject. The bottom line is, whatever tool you develop to identify and use S&R levels, when it's all finished, run the above random "x" through it and see if your answers are any better or worse. If the performance isn't statistically different when you run random data through it, is the tool doing anything worthwhile?
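To make that last suggestion concrete, here is a small illustrative R sketch (all thresholds are arbitrary) that counts how often local turning points in a pure random walk happen to occur near round-number levels, i.e., how easily random data produces apparent "bounces":

```
# How often does a pure random walk "bounce" near round-number levels?
set.seed(1)
bounces_near_levels <- function(n = 1000, band = 0.1) {
  x      <- cumsum(rnorm(n))
  lvl    <- round(x)                                   # nearest "round" level
  near   <- abs(x - lvl) < band                        # within a band of a level
  turned <- c(FALSE, diff(sign(diff(x))) != 0, FALSE)  # local turning points
  sum(near & turned)                                   # turning points near a level
}

summary(replicate(1000, bounces_near_levels()))
```

Whatever S&R rule you prefer, the same exercise applies: feed it random walks and compare the results to what you get on real data.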
null
CC BY-SA 2.5
null
2011-03-02T18:37:16.033
2011-03-04T19:07:36.103
2011-03-04T19:07:36.103
392
392
null