What causes Python's float\_repr\_style to use legacy? On nearly every system, Python can give you a human-readable, short representation of a floating-point number, rather than the 17-digit machine-precision form: ``` Python 3.3.0 (default, Dec 20 2014, 13:28:01) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> 0.1 0.1 >>> import sys; sys.float_repr_style 'short' ``` On an [ARM926EJ-S](http://www.arm.com/products/processors/classic/arm9/arm926.php), you don't get the short representation: ``` Python 3.3.0 (default, Jun 3 2014, 12:11:19) [GCC 4.7.3] on linux Type "help", "copyright", "credits" or "license" for more information. >>> 0.1 0.10000000000000001 >>> import sys; sys.float_repr_style 'legacy' ``` [Python 2.7](https://docs.python.org/dev/whatsnew/2.7.html#other-language-changes) apparently added this short representation to repr(), **for most systems**: > > Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and deserializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion. > > > Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits. > > > The rounding library responsible for this improvement works on Windows and on Unix platforms using the gcc, icc, or suncc compilers. **There may be a small number of platforms where correct operation of this code cannot be guaranteed, so the code is not used on such systems**. 
You can find out which code is being used by checking sys.float\_repr\_style, which will be short if the new code is in use and legacy if it isn’t. > > > Implemented by Eric Smith and Mark Dickinson, using David Gay’s [`dtoa.c`](https://github.com/python/cpython/blob/0eb135ca873dde7308564f4c2e6dd95d7300f331/Python/dtoa.c) library; [issue 7117](https://bugs.python.org/issue7117). > > > They say some platforms can't guarantee correct operation (of [`dtoa.c`](https://github.com/python/cpython/blob/0eb135ca873dde7308564f4c2e6dd95d7300f331/Python/dtoa.c) I assume), but don't say which platform limitations cause this. What is it about the ARM926EJ-S that means the short float repr() can't be used?
Short answer: it's likely not a limitation of the platform, but a limitation of Python's build machinery: it doesn't have a universal way to set 53-bit precision for floating-point computations. For more detail, take a look at the [`Include/pyport.h`](https://hg.python.org/cpython/file/293d9964cf6e/Include/pyport.h) file in the Python source distribution. Here's an excerpt: ``` /* If we can't guarantee 53-bit precision, don't use the code in Python/dtoa.c, but fall back to standard code. This means that repr of a float will be long (17 sig digits). Realistically, there are two things that could go wrong: (1) doubles aren't IEEE 754 doubles, or (2) we're on x86 with the rounding precision set to 64-bits (extended precision), and we don't know how to change the rounding precision. */ #if !defined(DOUBLE_IS_LITTLE_ENDIAN_IEEE754) && \ !defined(DOUBLE_IS_BIG_ENDIAN_IEEE754) && \ !defined(DOUBLE_IS_ARM_MIXED_ENDIAN_IEEE754) #define PY_NO_SHORT_FLOAT_REPR #endif /* double rounding is symptomatic of use of extended precision on x86. If we're seeing double rounding, and we don't have any mechanism available for changing the FPU rounding precision, then don't use Python/dtoa.c. */ #if defined(X87_DOUBLE_ROUNDING) && !defined(HAVE_PY_SET_53BIT_PRECISION) #define PY_NO_SHORT_FLOAT_REPR #endif ``` Essentially, there are two things that can go wrong. One is that the Python configuration fails to identify the floating-point format of a C double. That format is almost always IEEE 754 binary64, but sometimes the config script fails to figure that out. That's the first `#if` preprocessor check in the snippet above. Look at the `pyconfig.h` file generated at compile time, and see if at least one of the `DOUBLE_IS_...` macros is `#define`d. Alternatively, try this at a Python prompt: ``` >>> float.__getformat__('double') 'IEEE, little-endian' ``` If you see something like the above, this part should be okay. 
If you see something like `'unknown'`, then Python hasn't managed to identify the floating-point format. The second thing that can go wrong is that we do have IEEE 754 binary64 format doubles, but Python's build machinery can't figure out how to ensure 53-bit precision for floating-point computations for this platform. The `dtoa.c` source requires that we're able to do all floating-point operations (whether implemented in hardware or software) at a precision of 53 bits. That's particularly a problem on Intel processors that are using the x87 floating-point unit for double-precision computations (as opposed to the newer SSE2 instructions): the default precision of the x87 is 64-bits, and using it for double-precision computations with that default precision setting leads to [double rounding](http://en.wikipedia.org/wiki/Rounding#Double_rounding), which breaks the `dtoa.c` assumptions. So at config time, the build machinery runs a check to see (1) whether double rounding is a potential problem, and (2) if so, whether there's a way to put the FPU into 53-bit precision. So now you want to look at `pyconfig.h` for the `X87_DOUBLE_ROUNDING` and `HAVE_PY_SET_53BIT_PRECISION` macros. So it could be either of the above. If I had to guess, I'd guess that on that platform, double rounding is being detected as a problem, and it's not known how to fix it. The solution in that case is to adapt `pyport.h` to define the `_Py_SET_53BIT_PRECISION_*` macros in whatever platform-specific way works to get that 53-bit precision mode, and then to define `HAVE_PY_SET_53BIT_PRECISION`.
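Both checks can be run from a Python prompt in one go. Here is a small diagnostic sketch (the expected strings in the comments are examples from a typical x86-64 build, not guarantees):

```python
import sys

# Which repr code path this interpreter uses: 'short' or 'legacy'.
print(sys.float_repr_style)

# Whether Python identified the C double format at build time.
# A healthy build reports e.g. 'IEEE, little-endian'; 'unknown' means
# the configure script failed to detect the format.
print(float.__getformat__('double'))

# With the short repr this prints 0.1; with the legacy repr,
# 0.10000000000000001.
print(repr(0.1))
```

If the double format is detected but the style is still `'legacy'`, the x87 double-rounding check is the remaining suspect.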
Why won't parallelized code write to an Excel spreadsheet? Writing many worksheets in an `Excel` spreadsheet can take a while. Parallelizing it would be helpful. This code works well: it makes an `Excel` spreadsheet pop up on the screen with four worksheets named `Sheet1`, `1`, `2`, and `3`. ``` open Microsoft.Office.Interop.Excel open FSharp.Collections.ParallelSeq let backtestWorksheets = [1..3] let app = new ApplicationClass(Visible = true) let workbook = app.Workbooks.Add(XlWBATemplate.xlWBATWorksheet) let writeInfoSheet (worksheet: Worksheet) : unit = let foo i = let si = string i worksheet.Range("A" + si, "A" + si).Value2 <- "Hello " + si List.iter foo [1..10] let wfm = [1, writeInfoSheet; 2, writeInfoSheet; 3, writeInfoSheet] |> Map.ofList let adder (workbook : Workbook) (i : int) : unit = let sheet = workbook.Worksheets.Add() :?> Worksheet sheet.Name <- string i wfm.[i] sheet List.iter (adder workbook) backtestWorksheets //PSeq.iter (adder workbook) backtestWorksheets [<EntryPoint>] let main argv = printfn "%A" argv 0 // return an integer exit code ``` However, replacing the line starting with `List.iter` with the commented line just below it makes a spreadsheet with the same four worksheets pop up, but all worksheets are blank. So my question is: why can't code parallelized with PSeq write to Excel? Remark: Originally I had a different problem. Maybe because the worksheets in my application are heavier, when I try to run code similar to the above with `PSeq` I get an exception that says ``` Unhandled Exception: System.TypeInitializationException: The type initializer for '<StartupCode$Fractal13>.$Program' threw an exception. ---> System.AggregateException: One or more errors occurred. ---> System.Runtime.InteropServices.COMException: The message filter indicated that the application is busy. (Exception from HRESULT: 0x8001010A (RPC_E_SERVERCALL_RETRYLATER)) ``` This does not happen with `List.iter` replacing `PSeq.iter`. 
I was not able to replicate this exception in a simple enough context to be a proper SO question, but I would still be interested in any suggestions for dealing with it.
It looks like the `Microsoft.Office.Interop.Excel` code was never designed to be called from multiple threads at once. [Here's a question someone asked](https://social.msdn.microsoft.com/Forums/office/en-US/f0a6dfd1-74a0-4ac0-b806-28dccbe5442b/can-i-update-an-excel-workbook-from-multiple-threads?forum=exceldev) in the MS Office forums about doing an update in multiple threads (in C#). I'll quote the relevant parts of that answer here: > > Using multi-threading to search in multiple worksheets ends up with using the heart of Excel – the Excel.Application object, which means threads need to be queued to run one-at a time, depriving you of the desired performance improvement for the application. > > > [...] > > > All of this is because the Office object model isn't thread safe. > > > It looks like you're stuck with using a non-parallel design if you're calling anything in the `Microsoft.Office.Interop` namespace. **Edit:** [Aaron M. Eshbach](https://stackoverflow.com/users/1748071/aaron-m-eshbach) had a great suggestion in the comments: do all the background work on multiple threads, and use a `MailboxProcessor` to do the actual updates to the spreadsheet. The MailboxProcessor's message queue will automatically serialize the update operations for you, with no extra work required on your part.
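The MailboxProcessor pattern — many producer threads, one consumer that owns the non-thread-safe resource — is language-agnostic. Here is a rough Python analogue (the dict standing in for the workbook is hypothetical; only the single writer thread ever touches it):

```python
import queue
import threading

# Hypothetical stand-in for the Excel workbook: only the writer
# thread below ever reads or writes it.
workbook = {}

updates = queue.Queue()

def writer():
    # Single consumer: applies updates one at a time, so access to the
    # "workbook" is serialized automatically by the queue.
    while True:
        item = updates.get()
        if item is None:  # sentinel: shut down
            break
        sheet, cell, value = item
        workbook.setdefault(sheet, {})[cell] = value

writer_thread = threading.Thread(target=writer)
writer_thread.start()

def worker(sheet):
    # Producers run in parallel; they never touch the workbook directly,
    # they only enqueue update messages.
    for i in range(1, 11):
        updates.put((sheet, f"A{i}", f"Hello {i}"))

workers = [threading.Thread(target=worker, args=(str(n),)) for n in (1, 2, 3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

updates.put(None)  # all producers done; tell the writer to finish
writer_thread.join()

print(len(workbook))         # 3 sheets
print(workbook["1"]["A10"])  # Hello 10
```

In the F# version, the `MailboxProcessor` plays the role of the queue plus writer thread, and the COM interop calls replace the dict updates.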
Long expression to sum data in a multi-dimensional model I am porting a linear optimization model for power plants from [GAMS](http://gams.com/) to [Pyomo](https://software.sandia.gov/trac/coopr/wiki/Pyomo). Models in both frameworks are a collection of sets (either elementary or tuple sets), parameters (fixed values, defined over sets), variables (unknowns, defined over sets, value to be determined by optimization) and equations (defining relationships between variables and parameters). In the following example, I am asking for ideas on how to make the inequality more readable: ``` def res_stock_total_rule(m, co, co_type): if co in m.co_stock: return sum(m.e_pro_in[(tm,)+ p] for tm in m.tm for p in m.pro_tuples if p[1] == co) + \ sum(m.e_pro_out[(tm,)+ p] for tm in m.tm for p in m.pro_tuples if p[2] == co) + \ sum(m.e_sto_out[(tm,)+ s] for tm in m.tm for s in m.sto_tuples if s[1] == co) - \ sum(m.e_sto_in[(tm,)+ s] for tm in m.tm for s in m.sto_tuples if s[1] == co) <= \ m.commodity.loc[co, co_type]['max'] else: return Constraint.Skip ``` **Context:** - `m` is a model object, which contains all of the above elements (sets, params, variables, equations) as attributes. - `m.e_pro_in` for example is a 4-dimensional variable defined over the tuple set *(time, process name, input commodity, output commodity)*. - `m.tm` is a set of timesteps *t = {1, 2, ...}*, `m.co_stock` the set of stock commodities, for which this rule will apply only (otherwise, no Constraint is generated via Skip). - `m.pro_tuples` is a set of all valid (i.e. realisable) tuples *(process name, input commodity, output commodity)*. - `m.commodity` is a Pandas DataFrame that effectively acts as a model parameter. **My question now is this:** Can you give me some hints on how to improve the readability of this fragment? 
The combination of tuple concatenation, two nested list comprehensions with conditional clause, Pandas DataFrame indexing, and a multiline expression with line breaks all make it less than easy to read for someone who might just be learning Python while using this model.
First of all, use helper functions or explicit for loops. E.g. you're looping over `m.tm` four times; this can be done in a single loop. It might need more lines of code, but it will get much more readable. E.g. ``` def res_stock_total_rule(m, co, co_type): if co not in m.co_stock: return Constraint.Skip val = 0 for tm in m.tm: # one single loop instead of four for p in m.pro_tuples: if p[1] == co: val += m.e_pro_in[(tm,) + p] if p[2] == co: val += m.e_pro_out[(tm,) + p] for s in m.sto_tuples: # one single loop instead of two if s[1] == co: val += m.e_sto_out[(tm,) + s] - m.e_sto_in[(tm,) + s] return val <= m.commodity.loc[co, co_type]['max'] ```
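To convince yourself that the single-loop version adds up the same terms as the four generator expressions, here is a toy, Pyomo-free sketch with made-up data (plain dicts stand in for the model variables; all names and values are invented for illustration):

```python
# Made-up model data: process tuples are
# (process name, input commodity, output commodity),
# storage tuples are (storage name, commodity).
tm = [1, 2]
pro_tuples = [("pp1", "coal", "elec"), ("pp2", "gas", "elec")]
sto_tuples = [("battery", "elec")]

# Plain dicts stand in for the Pyomo variables; values are arbitrary.
e_pro_in  = {(t,) + p: 1.0 for t in tm for p in pro_tuples}
e_pro_out = {(t,) + p: 2.0 for t in tm for p in pro_tuples}
e_sto_out = {(t,) + s: 0.5 for t in tm for s in sto_tuples}
e_sto_in  = {(t,) + s: 0.2 for t in tm for s in sto_tuples}

def total(co):
    # Same accumulation as the answer's single-loop rule body.
    val = 0.0
    for t in tm:
        for p in pro_tuples:
            if p[1] == co:
                val += e_pro_in[(t,) + p]
            if p[2] == co:
                val += e_pro_out[(t,) + p]
        for s in sto_tuples:
            if s[1] == co:
                val += e_sto_out[(t,) + s] - e_sto_in[(t,) + s]
    return val

# Per timestep: 2.0 + 2.0 (outputs) + 0.5 - 0.2 (storage) = 4.3,
# so over two timesteps the total is roughly 8.6.
print(total("elec"))
```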
Is flash storage affected by magnetic fields? I have several SD cards and USB flash drives that are often laying around my work space. Do I have to be careful to keep magnets away from them? (This paranoia is showing my age - I remember needing to make sure magnets never got near my precious 3.5 inch floppies...)
No, unless the magnet is **REALLY** strong (see the quote below). There is not enough magnetic material in them. > > "A magnet powerful enough to disturb the electrons in flash would be powerful enough to suck the iron out of your blood cells" > > > If a magnet that strong were around, you should not be working where you do. It is also a big myth that normal magnets can ruin hard drives. > > The same goes for hard drives. The only magnets powerful enough to scrub data from a drive platter are laboratory degaussers or those used by government agencies to wipe bits off media. > > > But those floppies, don't let them near your magnets. --- Source: [Busting the Biggest PC Myths | PCWorld](http://www.pcworld.com/article/116572/busting_the_biggest_pc_myths.html)
Is there a technical issue with initializing a JavaScript variable to an empty string instead of null? I'm working with an API call using axios and getting back some JSON in a response. I have a variable that will hold the result of one of the JSON object's values, in this case a string: ``` let first_name = ""; //later on.. first_name = response.data.firstName; ``` I had my hand slapped because I had initialized `first_name` to an empty string instead of `null` and I'm not sure why-- the person doing the code review muttered something about best practices and didn't really answer me. My question-- if I'm checking for an empty string instead of `null` when using `first_name` later on, does it matter what I initialized it to? Is there a JavaScript best practice or optimization I'm missing out on by setting the variable to an empty string? [edit] Some good discussion in the comments about *how* I'm using `first_name` later on in the code. Let me elaborate. I don't want to do a code dump, so let me put it this way: after `first_name` has been assigned I check to make sure it's neither an empty string nor null before using it. What I'm interested in here is whether what I did is *wrong* or inefficient in JavaScript terms, or 'worse' than assigning it as a null. I assigned it as a string as a mnemonic that it should be a string value if all goes well.
> > Is there a technical issue with initializing a JavaScript variable to an empty string instead of null? > > > That depends on the context, but there is no immediate technical issue. From a purely technical perspective, it doesn't matter whether we check `first_name === null`, `first_name === ""`, `first_name === undefined` or just `!first_name`. Note that all of this is very specific to JavaScript, as many other languages either don't have `Null` or use a completely different model to indicate the absence of values. # `null` versus string Let us analyse the issue a little bit further to understand the reasoning of your reviewer. ## `null` is not a string In the end `first_name` is the result of some computation that returns a string. If that string *can* be the empty string `""`, it's reasonable to use `null`, as `null !== ""`. We can check the absence of a valid value without additional `first_name_set` flags or similar. If we use `null`, then we never pull our initial value from the domain of valid values, which can be a boon in debugging, error handling and sanity checks. ## `""` might not ever be valid If, however, `first_name` will never be the empty string, the empty string *might* itself be a fine marker for invalid data. ## Best practices must be explained But this explanation for `null` is something one can come up with. The reasoning of your reviewer, on the other hand, is locked in their mindset. If it is best practice according to them, it might be "best practice" throughout the company. That doesn't necessarily mean that it's also best practice in the real world, as some weird misconceptions can have a long tradition in companies with a lot of self-taught programmers. *Your* best practice as a reviewee would be to question their reasoning. A good code review improves the reviewed code, a better code review teaches better practices, but the best code reviews improve the overall code base and teach *both* sides. 
# Remark on initialization That being said both variants employ additional pressure on the developer: we're forced to check whether our value is valid. A better solution (if possible) is to get rid of the invalid initialization and *only* initialize with a correct value. `async` and `await` can enable us to do so in callback situations.
Replacing missing values in a data frame with the row mean I have a data frame with entries running from 0 to 5 where 0 means a missing entry. I want to replace the missing entries with the mean of each row. The mean should be calculated only over the non-missing values. How do I do that? For example, the row: ``` [1] 1 2 4 0 3 ``` Will become: ``` [1] 1.0 2.0 4.0 2.5 3.0 ```
``` set.seed(42) m <- matrix(sample(0:5,15,TRUE),ncol=5) # [,1] [,2] [,3] [,4] [,5] # [1,] 5 4 4 4 5 # [2,] 5 3 0 2 1 # [3,] 1 3 3 4 2 t(apply(m,1,function(x) {x[x==0] <- mean(x[x!=0]); x})) # [,1] [,2] [,3] [,4] [,5] # [1,] 5 4 4.00 4 5 # [2,] 5 3 2.75 2 1 # [3,] 1 3 3.00 4 2 ```
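For comparison only (not part of the R answer), the same replace-zeros-with-the-row-mean idea can be sketched in Python with NumPy:

```python
import numpy as np

def fill_row_means(m):
    # For each row, replace zeros with the mean of the non-zero entries,
    # mirroring the R apply() over rows.
    m = m.astype(float).copy()
    for row in m:
        mask = row == 0
        if mask.any() and (~mask).any():
            row[mask] = row[~mask].mean()
    return m

m = np.array([[1, 2, 4, 0, 3]])
print(fill_row_means(m))  # the 0 becomes 2.5, the mean of 1, 2, 4, 3
```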
scala code snippet I am new to scala. Could anyone please explain the following piece of code to me? ``` val l: Either[String, Int] = Left("flower") val r: Either[String, Int] = Right(12) l.left.map(_.size): Either[Int, Int] // Left(6) r.left.map(_.size): Either[Int, Int] // Right(12) ``` I have a few questions regarding line 3 and line 4: 1. What does \_.size stand for in the context? 2. What does r.left returns? 3. What does r.left.map returns? 4. What does r.left.map(\_.size): Either[Int, Int] mean? Thanks!
The first line defines a variable `l` of type `Either[String, Int]`. The `Either` (abstract) class is used to denote when something can hold one of two possible values. The `Left` and `Right` classes subclass `Either`, hence the valid assignment. A convention is to use `Left` to denote some sort of failure (e.g. a `String` describing what went wrong), and `Right` to denote some sort of successful computation/value. With this info, the second line is fairly self explanatory as well. The 3rd and 4th line use projections on the `l` and `r` variables and perform a `map`. The `left` and `right` methods return projections: if you project `left` on a `Left` value, you can then `map` over the inside value. If you project `left` on a `Right` value, the `map` does nothing and you get the `Right` back un-altered; likewise for `right` on a `Left`. So on the 3rd line, since `l` is a `Left` and you `left` project it, you apply the `map` over the `Left("flower")` value giving you `Left(6)` since "flower" has length 6. On the 4th line, `r` is a `Right` so it does nothing. For your questions: 1. `_.size` is the function passed to the `map` method.. the `map` method "unpacks" the value inside the `Left` (it does nothing to the `Right` on line 4 since it's a left projection) and calls `.size` on it. 2. `r.left` returns a left projection on the `Right(12)` value. Mapping over this does nothing, as seen in line 4 of your code. 3. The same old `Right(12)` value. 4. Since it is `Right(12)` and of type `Right[_, Int]`, it subclasses `Either[Int, Int]`, so it's valid. It is just there to show the type I believe. If you go to the REPL and type something like `val foo = Right(1234)`, you get a type `Right[Nothing, Int]` - this "works" because the `Nothing` type subclasses all types. 
On a related note, once you start feeling comfortable with that and/or Scala, I recommend checking out Scalaz (a library for Scala that provides functional programming things like type classes, monads, and all that good stuff) and the `Validation` class which is pretty similar to `Either`.
How to set seekbar min and max value I have a seekbar and am trying to set the value from 60 to 180 for one and 40 to 190 for the second one, in steps of 1. ``` sb1 = (SeekBar) findViewById(R.id.progresss); sb1.setOnSeekBarChangeListener(new OnSeekBarChangeListener() { @Override public void onStopTrackingTouch(SeekBar seekBar) { // TODO Auto-generated method stub //int inVal = Integer.parseInt(String.valueOf(seekBar.getProgress())); //inVal =+ 70; //Toast.makeText(getApplicationContext(), String.valueOf(inVal),Toast.LENGTH_LONG).show(); } @Override public void onStartTrackingTouch(SeekBar seekBar) { // TODO Auto-generated method stub } @Override public void onProgressChanged(SeekBar seekBar, int progress,boolean fromUser) { // TODO Auto-generated method stub progress =+ 70; Toast.makeText(getApplicationContext(), String.valueOf(progress),Toast.LENGTH_LONG).show(); } }); ``` This is not working. Any idea how to fix it?
You cannot set the `min` value of a SeekBar (always 0) and you cannot set the `step` value of a SeekBar (always 1). To set the value from **60** to **180** with a step of **1**: ``` int step = 1; int max = 180; int min = 60; // Ex : // If you want values from 3 to 5 with a step of 0.1 (3, 3.1, 3.2, ..., 5) // this means that you have 21 possible values in the seekbar. // So the range of the seek bar will be [0 ; (5-3)/0.1 = 20]. seekbar.setMax( (max - min) / step ); seekbar.setOnSeekBarChangeListener( new OnSeekBarChangeListener() { @Override public void onStopTrackingTouch(SeekBar seekBar) {} @Override public void onStartTrackingTouch(SeekBar seekBar) {} @Override public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) { // Ex : // And finally when you want to retrieve the value in the range you // wanted in the first place -> [3-5] // // if progress = 13 -> value = 3 + (13 * 0.1) = 4.3 double value = min + (progress * step); } } ); ``` I put another example within the code so that you understand the math.
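The offset arithmetic is easy to sanity-check outside Android; here is a quick Python sketch of the same mapping (function names are made up for illustration):

```python
def to_seekbar_max(min_val, max_val, step):
    # The SeekBar itself always runs from 0 to (max - min) / step.
    return (max_val - min_val) // step

def from_progress(min_val, step, progress):
    # Map the raw progress back into the desired range.
    return min_val + progress * step

print(to_seekbar_max(60, 180, 1))  # 120: the SeekBar range is [0, 120]
print(from_progress(60, 1, 0))     # 60  (the desired minimum)
print(from_progress(60, 1, 120))   # 180 (the desired maximum)
```

The second seekbar (40 to 190) works the same way: `setMax((190 - 40) / 1)` and then `40 + progress` in `onProgressChanged`.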
How does Javascript Self-Defending work and how does it manage to enter an infinite loop upon beautifying? There is a website "obfuscator.io", which obfuscates Javascript code. One of its functions is "Self-Defending". It turns a simple `console.log()` line into this: ``` var _0x2a3a06=function(){var _0x409993=!![];return function(_0xe0f537,_0x527a96){var _0x430fdb=_0x409993?function(){if(_0x527a96){var _0x154d06=_0x527a96['apply'](_0xe0f537,arguments);_0x527a96=null;return _0x154d06;}}:function(){};_0x409993=![];return _0x430fdb;};}();var _0x165132=_0x2a3a06(this,function(){var _0x46b23c=function(){var _0x4c0e23=_0x46b23c['constructor']('return\x20/\x22\x20+\x20this\x20+\x20\x22/')()['constructor']('^([^\x20]+(\x20+[^\x20]+)+)+[^\x20]}');return!_0x4c0e23['test'](_0x165132);};return _0x46b23c();});_0x165132();console['log'](); ``` The code does work in Webkit Console, but when you beautify it using an application like "beautifier.io" or "de4js" and run it in the same console again, the code enters an infinite loop, essentially breaking the code. How does this work? Does it have something to do with the way beautifiers work, or with the way Javascript interpretes code?
Running the code through a beautifier as you did and then applying some basic variable renaming and un-escaping yields the following code: ``` var makeRun = function() { var firstMakeRun = true; return function(global, callback) { var run = firstMakeRun ? function() { if (callback) { var result = callback['apply'](global, arguments); callback = null; return result; } } : function() {}; firstMakeRun = false; return run; }; }(); var run = makeRun(this, function() { var fluff = function() { var regex = fluff['constructor']('return /" + this + "/')()['constructor']('^([^ ]+( +[^ ]+)+)+[^ ]}'); return !regex['test'](run); }; return fluff(); }); run(); console['log']() ``` The important part is where it tests the regex `/^([^ ]+( +[^ ]+)+)+[^ ]}/` against the `run` function itself, doing an implicit [`run.toString()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/toString). Now where is the infinite loop? There is none, but that regular expression, applied to a string that contains a lot of spaces, does exhibit [catastrophic backtracking](https://www.regular-expressions.info/catastrophic.html). Try running the code indented with tabs instead of spaces, and it'll work just fine - the regex matches as long as your `run` function doesn't contain multiple spaces after each other, and no space in front of the closing `}`.
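Catastrophic backtracking is easy to reproduce in any backtracking regex engine. Here is a minimal Python sketch using the classic `(a+)+b` pattern — deliberately simpler than the obfuscator's regex, and with short inputs, so the blow-up stays visible but small:

```python
import re
import time

# '(a+)+b' against a string of a's with no 'b' forces the engine to try
# every way of splitting the a's between the nested quantifiers:
# roughly 2^n attempts before it can report failure.
pattern = re.compile(r'(a+)+b')

assert pattern.match('aaab')  # with a 'b' present, it matches instantly

for n in (10, 16, 20):
    start = time.perf_counter()
    assert pattern.match('a' * n + 'c') is None  # can never match
    elapsed = time.perf_counter() - start
    print(n, f"{elapsed:.4f}s")  # time roughly doubles per extra 'a'
```

The obfuscator's `^([^ ]+( +[^ ]+)+)+[^ ]}` behaves the same way once the beautifier inserts runs of spaces: each extra space multiplies the number of partitions the engine must try before giving up.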
How to get columnar minimums in a list of arrays? I have an array of doubles stored in a list as shown below. I want to get the minimum from each of the columns. I know I can somehow loop through the list and get the minimums, but is there a simpler way to do the same using `List.Min()` for each of the columns? ``` List<double[]> BoxCoords = new List<double[]>(); <!-----------------Console output of BoxCoords List------------------------------!> 8.03698858873275 | -1622.419367705 | 180.603950687759 //List Entry 1 8.03698858855283 | -1622.41905090503 | -220.203952098008 //List Entry 2 -7.70512234523364 | -1665.73116372802 | -220.204298594721 //List Entry 3 ```
Assuming your columns are defined by the ordinals of the wrapped arrays, and not by the values in each row, here's a method using the [`index` overload](https://msdn.microsoft.com/en-us/library/vstudio/bb534869(v=vs.100).aspx) of `Select`, to unpivot the data (with `SelectMany`), and then re-grouping the data on the indexes, after which applying `Min` is a formality: ``` var columnMins = BoxCoords .SelectMany(bc => bc.Select((v, idx) => new {Idx = idx, Val = v})) .GroupBy(up => up.Idx) .Select(grp => grp.Min(x => x.Val)); ``` You can add `.ToArray()` at the end if you need this back as an array of `double` > > [-7.70512234523364, -1665.73116372802, -220.204298594721] > > > **Demo [.Net Fiddle](https://dotnetfiddle.net/WQVR3t)**
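For comparison, the transpose-then-aggregate idea is a one-liner in Python, since `zip(*rows)` unpivots a list of rows into columns (a sketch, separate from the C# answer):

```python
box_coords = [
    [8.03698858873275, -1622.419367705, 180.603950687759],
    [8.03698858855283, -1622.41905090503, -220.203952098008],
    [-7.70512234523364, -1665.73116372802, -220.204298594721],
]

# zip(*box_coords) yields one tuple per column; min() is applied to each.
column_mins = [min(col) for col in zip(*box_coords)]
print(column_mins)
# [-7.70512234523364, -1665.73116372802, -220.204298594721]
```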
Identify Cell after Tap Location in UICollectionView I have attached a UILongPressGestureRecognizer to a Collection View with the aim of the user holding a finger on a cell and being asked if they want to remove it. The problem I have is that I can not get the indexPath like I would with a table cell (using indexPathForRowAtPoint). Is there an equivalent? If not, I have thought about adding the gesture to each cell, but then how do I identify which gesture fired (I can not see a "tag" for a gesture)? E.g. can I send indexPath.item in the selector? Or some other method?
You get the same thing on `UICollectionView`. According to the [documentation](http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UICollectionView_class/Reference/Reference.html): > > indexPathForItemAtPoint: Returns the index path of the item at the > specified point in the collection view. > > > - (NSIndexPath \*)indexPathForItemAtPoint:(CGPoint)point Parameters point A point in the collection view’s coordinate system. Return Value > The index path of the item at the specified point or nil if no item > was found at the specified point. > > > Discussion This method relies on the layout information provided by > the associated layout object to determine which item contains the > point. > > > Availability Available in iOS 6.0 and later. Declared In > UICollectionView.h > > >
Are statements under curly braces loaded first? I know static contents are loaded first in memory, but why is 'IT' printed before 'CT' when I haven't marked it as static? ``` class Person { Person() { System.out.print(" CP"); } static { System.out.print("SP"); } } class Teacher extends Person { Teacher() { System.out.print(" CT"); } { System.out.print(" IT"); } } public class StaticTest { public static void main(String[] args) { Person p = new Teacher(); } } ```
Initializer blocks such as `{System.out.print(" IT");}` are executed before the constructor. Actually, they are copied to the beginning of each constructor. > > Initializing Instance Members > > > Normally, you would put code to initialize an instance variable in a > constructor. There are two alternatives to using a constructor to > initialize instance variables: **initializer blocks** and final methods. > > > **Initializer blocks for instance variables look just like static > initializer blocks, but without the static keyword**: > > > { > > // whatever code is needed for initialization goes here > > } > > > The Java compiler copies initializer blocks into every constructor. > Therefore, this approach can be used to share a block of code between > multiple constructors. > > > ([Source](https://docs.oracle.com/javase/tutorial/java/javaOO/initial.html)) And to be more exact, here's the initialization order as described in the [JLS](https://docs.oracle.com/javase/specs/jls/se7/html/jls-12.html) : > > 1. Assign the arguments for the constructor to newly created parameter variables for this constructor invocation. > 2. If this constructor begins with an explicit constructor invocation (§8.8.7.1) of another constructor in the same class (using this), then evaluate the arguments and process that constructor invocation recursively using these same five steps. If that constructor invocation completes abruptly, then this procedure completes abruptly for the same reason; otherwise, continue with step 5. > 3. This constructor does not begin with an explicit constructor invocation of another constructor in the same class (using this). If this constructor is for a class other than Object, then this constructor will begin with an explicit or implicit invocation of a superclass constructor (using super). Evaluate the arguments and process that superclass constructor invocation recursively using these same five steps. 
If that constructor invocation completes abruptly, then this procedure completes abruptly for the same reason. Otherwise, continue with step 4. > 4. Execute the instance initializers and instance variable initializers for this class, assigning the values of instance variable initializers to the corresponding instance variables, in the left-to-right order in which they appear textually in the source code for the class. If execution of any of these initializers results in an exception, then no further initializers are processed and this procedure completes abruptly with that same exception. Otherwise, continue with step 5. > 5. Execute the rest of the body of this constructor. If that execution completes abruptly, then this procedure completes abruptly for the same reason. Otherwise, this procedure completes normally. > > > Notice that the instance initializers are executed in step 4, prior to the body of the constructor (step 5).
How to override @INC settings in httpd.conf on OSX How can I set where Perl looks for modules in Apache httpd.conf file on OSX? I've installed several modules via CPAN, which were installed successfully in ``` /opt/local/lib/perl5/site_perl/5.8.9 ``` I can verify this via `perldoc perllocal` If I run `perl -V` on the command line, I get (among other dirs): ``` @INC: /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level /opt/local/lib/perl5/site_perl/5.8.9 ``` When I run a perl script as CGI via Apache, however, I get errors that the modules I'm `use`ing can not be found. The list of dirs being included in @INC do not match my local perl configuration. ``` [error] [client 127.0.0.1] Can't locate Spreadsheet/ParseExcel.pm in @INC ( @INC contains: /Library/Perl/Updates/5.8.8 /System/Library/Perl/5.8.8/darwin-thread-multi-2level /System/Library/Perl/5.8.8 /Library/Perl/5.8.8/darwin-thread-multi-2level /Library/Perl/5.8.8 /Library/Perl /Network/Library/Perl/5.8.8/darwin-thread-multi-2level ... ``` How is @INC getting set when running perl as CGI on OSX - and how do I override it?
The initial value of `@INC` is hardcoded when `perl` is built, but it can be modified in a number of ways. The most convenient here are ``` SetEnv PERL5LIB ... ``` from within the Apache configuration, or using ``` use lib qw( ... ); ``` from within the Perl script. That said, it's not safe to use modules installed using Perl 5.8.9 with Perl 5.8.8 (although the other way around is safe). Even worse, one appears to be a threaded Perl and the other one isn't. Modifying `@INC` is simply not going to work. You need to install the module using the same `perl` as the one you intend to use to run the script, or you must run the script using the same `perl` as the one used to install the module.
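For completeness, assuming the version mismatch is resolved (i.e. the script runs under the same 5.8.9 `perl` that installed the modules), the Apache-side override could look like this. This is only a sketch: the directory path is the stock OSX CGI directory and may differ on your machine, and `mod_env` must be enabled. The module path is the one from the question.

```apache
# In httpd.conf, inside the block that serves your CGI scripts
<Directory "/Library/WebServer/CGI-Executables">
    SetEnv PERL5LIB /opt/local/lib/perl5/site_perl/5.8.9
</Directory>
```

`PERL5LIB` directories are prepended to `@INC`, so the script's `perl` will search them before its built-in paths.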
Google BigQuery IF/While Loop ``` DECLARE var1 INT64; SET var1=(select * from abc.xyz); {SOME OTHER OPERATIONS} ``` ERROR: variable quota exceeded. To solve this issue, I want to run a batch process where the count is taken from the abc.xyz table and only 50000 records are processed in the first batch, with the result stored in a temp table. In the next iteration the loop processes another 50000 and adds them to the temp table. How can this be done in Google BigQuery?
The error was because the [limit size for a variable is 1MB](https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting#declare). Regarding the loop to process batches. You can use the following SQL as a reference: ``` DECLARE offset_ INT64 DEFAULT 1; -- OFFSET starts in 1 BASED on ROW NUMBER () DECLARE limit_ INT64 DEFAULT 500; -- Size of the chunks to be processed DECLARE size_ INT64 DEFAULT 7000; -- Size of the data (used for the condition in the WHILE loop) -- Table to be processed. I'm creating this new temporary table to use it as an example CREATE TEMPORARY TABLE IF NOT EXISTS data_numbered AS ( SELECT *, ROW_NUMBER() OVER() row_number FROM (SELECT * FROM `bigquery-public-data.stackoverflow.users` LIMIT 7000) ); -- WHILE loop WHILE offset_ < size_ DO IF offset_ = 1 THEN -- OPTIONAL, create the temporary table in the first iteration CREATE OR REPLACE TEMPORARY TABLE temp_table AS ( SELECT * FROM data_numbered WHERE row_number BETWEEN offset_ AND offset_ + limit_ - 1 -- Use offset and limit to control the chunks of data ); ELSE -- This is the same query as above. -- Each iteration will fill the temporary table -- Iteration -- 501 - 1000 -- 1001 - 1500 -- ... INSERT INTO temp_table ( SELECT * FROM data_numbered WHERE row_number BETWEEN offset_ AND offset_ + limit_ - 1 -- -1 because BETWEEN is inclusive, so it helps to avoid duplicated values in the edges ); END IF; -- Adjust the offset_ variable SET offset_ = offset_ + limit_; END WHILE; ``` One of the challenges while making this loop is that you cannot use variables in the LIMIT and the OFFSET clause. So, I used ROW\_NUMBER() to create a column that I could use to process with the WHERE clause: ``` WHERE row_number BETWEEN offset_ AND offset_ + limit_ ``` If you want to read more about ROW\_NUMBER() I recommend to check [this SO answer](https://stackoverflow.com/a/16534965/7517757). 
Finally, if you want to use this approach, consider that there are some caveats, like scripting being a [Beta feature](https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting#top_of_page), and possible [quota issues](https://cloud.google.com/bigquery/quotas#standard_tables) depending on how often you insert data into your temporary table. Also, since the query changes in each iteration, none of it is cached the first time you run it, so the bytes\_processed will be roughly number\_of\_iterations \* the byte size of the table.
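The chunk arithmetic in the loop above (inclusive BETWEEN, hence the `- 1`) can be sanity-checked outside BigQuery. Here is a small Python sketch of the same offset/limit bookkeeping, just to show the ranges neither overlap nor leave gaps:

```python
def chunk_bounds(size, limit):
    """Yield inclusive (start, end) row-number ranges, mirroring the
    WHILE loop: offset_ starts at 1 and grows by limit_ each pass."""
    offset = 1
    while offset < size:
        # BETWEEN is inclusive on both ends, so subtract 1 to avoid
        # touching the boundary row twice
        yield offset, offset + limit - 1
        offset += limit

bounds = list(chunk_bounds(size=7000, limit=500))
# First chunks: (1, 500), (501, 1000), ...; last chunk: (6501, 7000)
```

With `size=7000` and `limit=500` this produces 14 chunks covering every row number exactly once, matching the iteration comments in the SQL above.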
Navigate to a new page without putting current page on back stack? In a Windows Phone 7 application I have a CurrentPage which, on a special event, navigates to a new page using the NavigationService: ``` NavigationService.Navigate(new Uri("/NewPage.xaml", UriKind.Relative)); ``` Now when the user clicks back on the NewPage, I want the app to skip the CurrentPage and go directly to the MainPage of the app. I tried to use NavigationService.RemoveBackEntry, but this removes the MainPage instead of the CurrentPage. How do I navigate to a new page without putting the current one on the back stack?
When navigating to NewPage.xaml, pass along a parameter so you know when to remove the previous page from the backstack. You can do this as such: When navigating from **CurrentPage.xaml** to **NewPage.xaml**, pass along a **parameter** ``` bool remove = true; String removeParam = remove ? bool.TrueString : bool.FalseString; NavigationService.Navigate(new Uri("/NewPage.xaml?removePrevious="+removeParam , UriKind.Relative)); ``` In the **OnNavigatedTo** event of **NewPage.xaml**, check whether to remove the previous page or not. ``` bool remove = false; if (NavigationContext.QueryString.ContainsKey("removePrevious")) { remove = ((string)NavigationContext.QueryString["removePrevious"]).Equals(bool.TrueString); NavigationContext.QueryString.Remove("removePrevious"); } if(remove) { NavigationService.RemoveBackEntry(); } ``` This way, you can decide on **CurrentPage.xaml** whether you want to remove it from the backstack.
Cannot read property 'weight' of undefined in d3.force implementation I've been stuck on this problem for a while now and have no idea what to do even based on existing answers. I keep getting this error on the last JSON entry of a uniform response. ``` ...{"paperCount": 1, "PMIDs": [20626970], "authorA": 79, "authorB": 80}, {"paperCount": 1, "PMIDs": [20492581], "authorA": 81, "authorB": 82}, {"paperCount": 1, "PMIDs": [20492581], "authorA": 81, "authorB": 83}, {"paperCount": 1, "PMIDs": [20492581], "authorA": 81, "authorB": 84}, {"paperCount": 1, "PMIDs": [20492581], "authorA": 82, "authorB": 83}, {"paperCount": 1, "PMIDs": [20492581], "authorA": 82, "authorB": 84}, {"paperCount": 1, "PMIDs": [20492581], "authorA": 83, "authorB": 84}]... ``` The code is below to extract that information. It keeps failing in the for loop... ``` $.getJSON('/papers.txt', function(response){ var w = 1280, h = 900, fill = d3.scale.category10(), nodes = d3.range(1000).map(Object), links = d3.range(1000).map(Object); var index = 0; var mouseX = 0, mouseY = 0; for (var j = 0; j < response.length; j++){ links[j] = {source: nodes[response[j].authorA], target: nodes[response[j].authorB], value: response[j].paperCount}; } ``` Any help would be appreciated. Thanks!
## Introduction First of all, I suggest that you take a long look at the [documentation](https://github.com/mbostock/d3/wiki/Force-Layout), as everything I am going to say in this answer is written there. Well, first, in the code you provide you don't use the force layout at all, you don't declare the nodes with it, you don't include the nodes in the svg... So basically, from what I see, there is no doubt that it throws an error. You should take a look at [this example](http://bl.ocks.org/mbostock/4062045) to see how to create a minimal force directed graph. I would also advise that the next time you have a question, you build a jsFiddle so we can work on it and see where the problem is, because with the code you provide it is not possible to see exactly where it went wrong, as you don't use d3.js in a conventional manner. So, let's go back to the basics. ## Define the graph First, before loading the data, we need to define the general properties of the graph: ``` var width = 960, height = 500; var color = d3.scale.category20(); /* Here we define that we will use the force layout and we add some parameters to set up the basic layout of the graph */ var force = d3.layout.force() .charge(-120) .linkDistance(30) .size([width, height]); /* Here we include the graph in the page */ var svg = d3.select("body").append("svg") .attr("width", width) .attr("height", height); ``` However, for now the graph has no nodes. We need to load the data. ## Load the data To load the data, d3 provides a helper, so let's use it instead of jQuery's; no need to mix in another library. For information, the json loader needs a .json file, so don't use .txt files even if they contain json formatted data. ``` d3.json("papers.json", function(error, data) { // do whatever you need to do with the json data, for this example : create the graph } ``` All that I will do later should be included inside these braces.
## Format the data From what I see here your data seems to be only a list of links, I will just consider that your nodes are just points from 0 to 99. ``` var my_nodes = d3.range(100).map(function(d){return {"name": d};}); var my_links = my_data.map(function(d){return {"source": d.authorA, "target": d.authorB, "value":d.paperCount, "origin": d};}); ``` ## Build a force directed graph To build the force directed graph out of my data, ``` force // I use the previously defined variable force to add the data. .nodes(my_nodes) .links(my_links) .start() //start the simulation ``` Now, the force layout simulation is on, it knows that there is data, but still, nothing is added to the svg. We will now add everything: ``` var link = svg.selectAll(".link") .data(my_links) .enter().append("line") .attr("class", "link") var node = svg.selectAll(".node") .data(my_nodes) .enter().append("circle") // If you wonder why enter() is used, see the documentation .attr("class", "node") .attr("r", 5) .style("fill", function(d) { return "red"; }) .call(force.drag); ``` For the class attributes, see [example](https://github.com/mbostock/d3/wiki/Force-Layout). ## Declare the tick() function Finally, you have to declare the tick function that will be called on each iteration of the force layout function ``` force.on("tick", function() { link.attr("x1", function(d) { return d.source.x; }) .attr("y1", function(d) { return d.source.y; }) .attr("x2", function(d) { return d.target.x; }) .attr("y2", function(d) { return d.target.y; }); node.attr("cx", function(d) { return d.x; }) .attr("cy", function(d) { return d.y; }); }); ``` ## Conclusion I just translated the basic example for your data to show it works: <http://bl.ocks.org/ChrisJamesC/5028030> --- ## Or... 
Maybe the main problem you had is that you didn't understand that the `$.getJSON()` function is *asynchronous*, so you should not just build the `links` vector in it; you should also use it only in this function, because otherwise it won't be loaded yet. --- ## Or... Maybe your error just comes from the fact that you should name the json file `/papers.json` instead of `/papers.txt`.
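To make the asynchrony point concrete, here is a minimal Node-style sketch; `loadJson` is a hypothetical stand-in for `$.getJSON`, not jQuery itself:

```javascript
// Hypothetical stand-in for $.getJSON: delivers data asynchronously
let links = [];

function loadJson(cb) {
  setTimeout(() => cb([{ authorA: 0, authorB: 1, paperCount: 1 }]), 0);
}

loadJson(function (response) {
  // Correct place: the data has arrived, so build the links here
  links = response.map(d => ({ source: d.authorA, target: d.authorB, value: d.paperCount }));
});

// At this point the callback has NOT run yet, so links is still empty;
// any code that needs the data must live inside the callback instead
console.log(links.length); // 0
```

This is exactly why the force-layout setup belongs inside the callback: code placed after the call runs before the data exists.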
Signature length not correct when calling PGPOnePassSignature.verify We’re using a Java library called [`license3j`](https://github.com/verhas/License3j) for license management. The library uses asymmetric encryption and relies itself on Bouncycastle. We create a license file using a simple `gpg` command and verify the license within our software using our public key. So far everything has worked fine. BUT: out of 1,000 generated licenses, there is a very small fraction which cannot be verified correctly, although they are in fact valid (approximately 5/1000). What happens in this case: when the license is to be verified in [`com.verhas.licensor.License.setLicenseEncoded(InputStream)`](https://github.com/verhas/License3j/blob/master/src/main/java/com/verhas/licensor/License.java#L732), the [`org.bouncycastle.openpgp.PGPOnePassSignature.verify(PGPSignature)`](https://github.com/verhas/License3j/blob/master/src/main/java/com/verhas/licensor/License.java#L775) call throws the following exception: `org.bouncycastle.openpgp.PGPRuntimeOperationException: unable to verify signature: Signature length not correct: got 511 but was expecting 512` Sounds rather obscure to me, having only basic cryptography knowledge. Spending hours googling gave me the clue that there is something about "leading zeros". So, in the given example, obviously a leading zero was stripped away somewhere (where?), and the lengths of the signature data to compare do not match. Makes sense. Now, I’ve no clue where the issue might be located. Is it during **creation** of the license file? Essentially, we’re just doing the following: ``` gpg --armor --local-user=name.of.software --sign ``` Which will give us the license file. Or does the error happen during **verification**? Do I have to modify any Bouncycastle configuration to correctly address the leading zero issue?
Their [FAQ](http://www.bouncycastle.org/wiki/display/JA1/Frequently+Asked+Questions#FrequentlyAskedQuestions-4.WhenIencryptsomethingwithRSAIamlosingleadingzerobytesoffmydata,whyareyouguysshippingsuchabrokenimplementation?) gives some hints, but the `License3j` [source](https://github.com/verhas/License3j/blob/master/src/main/java/com/verhas/licensor/License.java#L732) obviously never makes use of any `Cipher` instance, so I’m totally lost on how to integrate this into the given API. I’m aware that this is a very special problem with a library which is obviously not very well known. Thus, I appreciate any little feedback or input.
Looks like it is a bug in Bouncycastle, only encountered in versions of Java after 1.6: Bouncycastle has always **created** the data wrong, but Java became more strict in 1.7 about the data it will accept during **verification**. Bouncycastle is failing to pad the signature to the right length when it serializes it to the file; if the integer has enough leading zeros, then its byte representation will be smaller. Java versions 1.7 and above expect RSA signature bytes to be the same length as the key. Bouncycastle converts the RSA signature byte array (returned from Java's RSA JCE provider) into an integer, and it discards information about its length. [Line 263 of the `PGPSignatureGenerator`](https://github.com/bcgit/bc-java/blob/master/pg/src/main/java/org/bouncycastle/openpgp/PGPSignatureGenerator.java#L263) shows where the RSA signature bytes are returned from JCE and converted into an integer. This integer is eventually written to the output stream using [`MPInteger#encode`](https://github.com/bcgit/bc-java/blob/master/pg/src/main/java/org/bouncycastle/bcpg/MPInteger.java#L42), which just uses the bit length of the underlying BigInteger to determine how much data to write. [This answer](https://stackoverflow.com/a/40343731/1544715) describes more about why you see this approximately one in every 200 cases and how the version of Java plays a role.
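The length mismatch is easy to reproduce with plain big-integer round-tripping. This Python snippet illustrates the general problem, not Bouncycastle's actual code:

```python
# A 4-byte "signature" whose most significant byte happens to be zero
sig = bytes([0x00, 0x5A, 0xFF, 0xFF])

# Converting the bytes to an integer discards the length information...
as_int = int.from_bytes(sig, "big")

# ...so sizing the output by bit length silently drops the leading zero
naive = as_int.to_bytes((as_int.bit_length() + 7) // 8, "big")
print(len(naive))  # 3 -- "got 3 but was expecting 4", same shape as the error

# The fix is to pad back out to the fixed signature length
padded = as_int.to_bytes(len(sig), "big")
print(len(padded))  # 4
```

Roughly one signature in 256 starts with a zero byte, which is the order of magnitude of the failure rate seen in the question.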
EC2 issue using PV-Grub and custom kernel: error 21 selected disk does not exist Over on EC2 I am trying to set up AMIs for PLD-Linux that use the new features that allow running custom kernels from inside the AMI using PV-Grub. I've set up custom AMIs many times before but can't get the kernel thing to go. Basically I have an ebs disk with a grub menu file and a custom xenU kernel on it. Registering this and launching it as an instance starts pv-grub from the xen host, and pv-grub finds my menu file and tries to boot the config found there. However, no matter how I configure the grub menu items or the disk partitions, grub always throws the error "Error 21: Selected disk does not exist". I have tried many permutations of the following: - Raw volume as disk with filesystem - 1 partition on volume with filesystem in that - hd0 and hd00 AKI's (hd0 finds the menu file for the raw disk version, hd00 for the one partition version) - using no root command from the grub menu - using root(hd0) - using root(hd0,0) - using various rootnoverify()'s I am using ext3 in all cases. I have no system/device map files to get in the way. Any incorrect combination of partition layout and hd0/hd00 AKI's does lead to a different error, but if I use the right one and it finds the menu file, it invariably throws the error above. What am I missing? ([Cross posted from the EC2 Forum](http://developer.amazonwebservices.com/connect/thread.jspa?threadID=52387&tstart=0) because there was no answer there; I will take care of making sure the solution is also cross posted)
This is embarrassing but after two days of bashing my head on this, I think I solved it myself. My grub menu file had commands like `root(hd0,0)` instead of `root (hd0,0)`. **The space is essential**, and all my attempts left it out! I found this out by discovering that leaving the root line off and specifying the full path like `kernel (hd0,0)/boot/vmlinuz....` worked. The lack of space did not cause any kind of parsing error, but it seems to have just been ignored. As a result it really didn't matter what values I gave it, it was just going on defaults anyway, hence the same result with the line left out entirely. I still don't have a booting system, but I'm past the problem with grub now and onto kernel woes. Hopefully anybody else who runs into the same problem will find this solution helpful!
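For reference, a menu file that pv-grub parses correctly looks roughly like this for the one-partition/hd00 layout. The title, kernel filename, and root device are placeholders for whatever your image actually contains; note the space after `root` and after `kernel`:

```
default 0
timeout 1

title PLD Linux (xenU)
    root (hd0,0)
    kernel /boot/vmlinuz-xenU root=/dev/sda1 ro
```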
Create array of date pairs for next 10 days I wrote the below function which successfully creates pairs of dates in the format I need. However, I'm fairly new to Python and feel there must be a way to make this easier to read. Can this code be improved? ``` def getNextTenDays(): dates = [] arrive = datetime.datetime.now() for n in range(1, 11): arrive += datetime.timedelta(days=1) next_day = arrive + datetime.timedelta(days=1) arrive_s = datetime.datetime.strftime(arrive, '%m/%d/%Y') next_day_s = datetime.datetime.strftime(next_day, '%m/%d/%Y') dates.append({'arrive': arrive_s, 'depart': next_day_s}) return dates ```
Rather than continuing to append to arrive, just use the `n` offset from 'today'. You can also call `strftime` on a `datetime` object directly, like so: ``` def getNextTenDays(): dates = [] today = datetime.datetime.now() for n in range(1, 11): arrive = today + datetime.timedelta(days=n) next_day = today + datetime.timedelta(days=n+1) arrive_s = arrive.strftime('%m/%d/%Y') next_day_s = next_day.strftime('%m/%d/%Y') dates.append({'arrive': arrive_s, 'depart': next_day_s}) return dates ``` If you are just going to return the strings and throw away the actual datetimes, you can do that in one go like so: ``` def getNextTenDays(): dates = [] today = datetime.datetime.now() for n in range(1, 11): dates.append({ 'arrive': (today + datetime.timedelta(days=n)).strftime('%m/%d/%Y'), 'depart': (today + datetime.timedelta(days=n+1)).strftime('%m/%d/%Y'), }) return dates ``` From here it's a simple step to making it a list comprehension: ``` def getNextTenDays(): today = datetime.datetime.now() return [ { 'arrive': (today + datetime.timedelta(days=n)).strftime('%m/%d/%Y'), 'depart': (today + datetime.timedelta(days=n+1)).strftime('%m/%d/%Y'), } for n in range(1,11) ] ```
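Going one step further, taking the reference date as an optional parameter makes the function deterministic and therefore testable. The `start` parameter and the snake_case name below are my own additions, not part of the original code:

```python
import datetime

def get_next_ten_days(start=None):
    # start defaults to "now", but tests can pin it to a known date
    start = start or datetime.datetime.now()
    fmt = '%m/%d/%Y'
    return [
        {
            'arrive': (start + datetime.timedelta(days=n)).strftime(fmt),
            'depart': (start + datetime.timedelta(days=n + 1)).strftime(fmt),
        }
        for n in range(1, 11)
    ]

dates = get_next_ten_days(datetime.datetime(2020, 1, 1))
print(dates[0])  # {'arrive': '01/02/2020', 'depart': '01/03/2020'}
```

Calling it with no argument keeps the original behavior of starting from the current date.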
Instance initializer and \*this\* keyword Trying to compile this piece of code ``` public class Main { public static void main(String args[]) { new Main(); } { System.out.println(x); } //Error here int x=1; } ``` produces a `cannot reference a field before it is defined` error. But if I change the initializer row to ``` { System.out.println(this.x); } ``` it works like a charm, printing the default int value *0*. This is a bit confusing to me; why does `this` make the difference? Shouldn't it be redundant in this case? Can anyone explain to me what happens behind the scenes to make it clear how it really works? PS: I know that declaring `x` before the initializer would make it work too.
I will try to explain this at the compiler level. Say you have a method like: ``` int x; x = 1; System.out.println(x); ``` The compilation will succeed, and the execution as well. If you change the method into this: ``` System.out.println(x); int x; x = 1; ``` It will not even compile, just as with your given example. The compiler copies the code of the `{ }` initializer into the constructor, and also the `x=1` initialization. As you said, it works if you set the `x=1` before the `{ }` initializer. ``` public class MainC { public static void main(String args[]) { new MainC(); } int x=1; { System.out.println(x); } } ``` See the following Java bytecode: ``` public MainC(); descriptor: ()V flags: ACC_PUBLIC Code: stack=2, locals=1, args_size=1 0: aload_0 1: invokespecial #1 // Method java/lang/Object."<init>":()V 4: aload_0 5: iconst_1 6: putfield #2 // Field x:I 9: getstatic #3 // Field java/lang/System.out:Ljava/io/PrintStream; 12: aload_0 13: getfield #2 // Field x:I 16: invokevirtual #4 // Method java/io/PrintStream.println:(I)V 19: return LineNumberTable: line 1: 0 line 7: 4 line 9: 9 line 10: 19 ``` The field `x` is declared and gets the value `1` before it is used in the `System.out.println` call. So why doesn't it work if you set it after the `{ }`? For the same reason you can't use the code of my second example: the field is declared after the usage, which makes no sense. So why does it work with the `this` keyword?
Let's look at some code: ``` public class Main { public static void main(String args[]) { new Main(); } { System.out.println(this.x); } //Error here int x=1; } ``` The corresponding Java bytecode for the constructor: ``` public Main(); descriptor: ()V flags: ACC_PUBLIC Code: stack=2, locals=1, args_size=1 0: aload_0 1: invokespecial #1 // Method java/lang/Object."<init>":()V 4: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream; 7: aload_0 8: getfield #3 // Field x:I 11: invokevirtual #4 // Method java/io/PrintStream.println:(I)V 14: aload_0 15: iconst_1 16: putfield #3 // Field x:I 19: return LineNumberTable: line 1: 0 line 7: 4 line 9: 14 ``` So what happens here? Simply put, the `this` keyword loads the Main object reference onto the stack. After that, the field x can be accessed, so the `System.out.println` call can be executed successfully.
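The two bytecode listings can also be confirmed from plain Java. This small sketch (class and field names are mine) records what the initializer block actually sees:

```java
public class InitOrder {
    int seen;

    // Instance and field initializers run in textual order, so this block
    // executes before `x = 1` below; `this.x` still holds the default 0 here.
    // Qualifying the access with `this` sidesteps the forward-reference check.
    { seen = this.x; }

    int x = 1;

    public static void main(String[] args) {
        InitOrder o = new InitOrder();
        System.out.println(o.seen + " " + o.x); // prints "0 1"
    }
}
```

The compile-time error in the question is a purely syntactic rule about simple names; the runtime semantics, as the bytecode shows, simply read whatever the field holds at that point.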
Material ui wrapper - reuse typescript types for a given prop First of all, sorry if this was answered before; I couldn't find it. For various reasons, in the app I am working on right now, we have a wrapper for some material-ui components. For instance I have a MyCompanyButton which is a wrapper for the material-ui Button component. It is really simple, something like this: ``` const MyCompanyButton: React.FC<MyCompanyButtonProps> = (props): React.ReactElement => { const { label, color, type } = props; return ( <Button color={color} type={type}> {label} </Button> ); }; ``` The problem is with the types for the props. My first attempt was to define them like this: ``` type MyCompanyButtonProps = { label: string; color: string; type: string; }; ``` But when I try to use my component like this: ``` <MyCompanyButton color="primary" type="submit" label="Send" /> ``` I get the following error: **Type 'string' is not assignable to type '"default" | "inherit" | "primary" | "secondary" | undefined'. TS2769** I understand the error but I do not know how to fix it. I want to reuse the material-ui types, but I can't find them. I know in current versions they are built into the core package, but I do not understand how to find the "color" type and apply it to my color prop. What I did so far was take the type definition from the error and apply it to my prop: color: "default" | "inherit" | "primary" | "secondary" | undefined; However, I think this is not the way to do it, because if the type changes in the library I need to change it here too. I hope someone can help me. Thanks!
Since you're doing this to extend the `Button` component as `label` is not part of the MUI Button props, you can extend your `MyCompanyButtonProps` type definition with the MUI ButtonProps type like this: ``` import { Button, ButtonProps } from '@material-ui/core'; interface MyCompanyButtonProps extends ButtonProps { label: string; } export const MyCompanyButton: React.FC<MyCompanyButtonProps> = (props): React.ReactElement => { const { label, color, type } = props; return ( <Button color={color} type={type}> {label} </Button> ); }; ``` In general with the Material UI library, the component props can be imported from the same place as the component, with `Props` following the component name.
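If you only need one or two prop types rather than the whole set, TypeScript's indexed access types also work. The `ButtonProps` stand-in below is inlined so the sketch is self-contained, but in practice you would import it from `@material-ui/core` as above:

```typescript
// Stand-in for the imported MUI ButtonProps (illustration only)
type ButtonProps = {
  color?: 'default' | 'inherit' | 'primary' | 'secondary';
  type?: 'button' | 'submit' | 'reset';
};

// Indexed access reuses a single prop's union without copying it by hand,
// so it tracks library updates automatically
type MyCompanyButtonProps = {
  label: string;
  color?: ButtonProps['color'];
  type?: ButtonProps['type'];
};

const props: MyCompanyButtonProps = { label: 'Send', color: 'primary', type: 'submit' };
console.log(props.label); // Send
```

With this, `<MyCompanyButton color="primary" ... />` type-checks, and a string outside the union is still rejected at compile time.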
How does undo/redo basically work on iPhone OS? My app doesn't use Core Data yet. Is it true that I *must* use Core Data for undo/redo? And: how does the user do the undo/redo? I've never seen it in action, and never ever used it. I don't know how I would do it if I wanted to. There's no undo/redo button anywhere, yet they say the OS has undo/redo. So how does the user trigger it?
iPhone OS 3.0 brought over the concept of NSUndoManager from the Mac, which is what enables undo on the iPhone. NSUndoManager maintains a stack of NSInvocations which are the opposite actions to any edits or other changes you make. For example, ``` - (void)observeValueForKeyPath:(NSString*)keyPath ofObject:(id)object change:(NSDictionary*)change context:(void*)context { NSUndoManager *undo = [self undoManager]; // Grab the old value of the key id oldValue = [change objectForKey:NSKeyValueChangeOldKey]; // Add edit item to the undo stack [[undo prepareWithInvocationTarget:self] changeKeyPath:keyPath ofObject:object toValue:oldValue]; // Set the undo action name in the menu [undo setActionName:@"Edit"]; } ``` can be used to observe changes in properties, creating inverse NSInvocations that will undo edits to those properties. Core Data is not needed for undo, but it makes it much, much easier. It handles the creation of these undo actions for you every time you edit your data model, including complex actions like a cascading delete down a hierarchy of managed objects. On the iPhone, to enable undo / redo, you need to set up a few things. First, NSManagedObjectContexts on the iPhone don't have an undo manager by default, so you need to create one: ``` NSUndoManager *contextUndoManager = [[NSUndoManager alloc] init]; [contextUndoManager setLevelsOfUndo:10]; [managedObjectContext setUndoManager:contextUndoManager]; [contextUndoManager release]; ``` This code would typically go right after where you would have created your NSManagedObjectContext. Once an undo manager is provided for your context, you need to enable the default gesture for undo on the iPhone, a shake of the device. 
To let your application handle this gesture automatically, place the following code within the `-applicationDidFinishLaunching:` method in your application delegate: ``` application.applicationSupportsShakeToEdit = YES; ``` Finally, you will need to set up each view controller that will be capable of handling the shake gesture for undo. These view controllers will need to report back the undo manager to use for that controller by overriding the `-undoManager` method: ``` - (NSUndoManager *)undoManager; { return [[[MyDatabaseController sharedDatabaseController] scratchpadContext] undoManager]; } ``` The view controllers will also need to be able to become the first responder to handle gestures, so the following method is needed: ``` - (BOOL)canBecomeFirstResponder { return YES; } ``` The view controller will need to become the first responder when it appears onscreen. This can be done by calling `[self becomeFirstResponder]` in `-loadView` or `-viewDidLoad`, but I have found that view controllers which appear onscreen immediately after launch need to have this message delayed a bit in order for it to work: ``` [self performSelector:@selector(becomeFirstResponder) withObject:nil afterDelay:0.3]; ``` With all this in place, you should get automatic undo and redo support courtesy of Core Data, with a nice animated menu.
How does glmnet handle overdispersion? I have a question about how to model text over count data, in particular how I could use the `lasso` technique to reduce features. Say I have N online articles and the count of pageviews for each article. I've extracted 1-grams and 2-grams for each article and I want to run a regression over the 1,2-grams. Since the features (1,2-grams) far outnumber the observations, the lasso would be a nice method to reduce the number of features. Also, I've found `glmnet` is really handy for running a lasso analysis. However, the pageview counts are overdispersed (variance > mean), but `glmnet` doesn't offer `quasipoisson` (explicitly) or `negative binomial`, only `poisson`, for count data. The solution I've thought of is to `log transform` the count data (a commonly used method among social scientists) and make the response variable roughly follow a normal distribution. As such, I could possibly model the data with the gaussian family using `glmnet`. So my question is: is it appropriate to do so? Or shall I just use poisson for `glmnet` in case `glmnet` handles `quasipoisson`? Or are there other R packages that handle this situation? Thank you very much!
*Short answer* **Overdispersion does not matter when estimating a vector of regression coefficients for the conditional mean in a quasi/poisson model! You will be fine if you forget about the overdispersion here, use glmnet with the poisson family and just focus on whether your cross-validated prediction error is low.** The qualification follows below. --- *Poisson, Quasi-Poisson and estimating functions:* I say the above because overdispersion (OD) in a poisson or quasi-poisson model influences everything to do with the dispersion (or variance or scale or heterogeneity or spread or whatever you want to call it) and as such has an effect on the standard errors and confidence intervals **but leaves the estimates for the conditional mean of $y$ (called $\mu$) untouched. This particularly applies to linear decompositions of the mean, like $x^\top\beta$**. This comes from the fact that the estimating equations for the coefficients of the conditional mean are practically the same for both poisson and quasi-poisson models. Quasi-poisson specifies the variance function in terms of the mean and an additional parameter (say $\theta$) as $Var(y)=\theta\mu$ (with $\theta=1$ for Poisson), but the $\theta$ does not turn out to be relevant when optimizing the estimating equation. Thus the $\theta$ plays no role in estimating the $\beta$ when conditional mean and variance are proportional. **Therefore the point estimates $\hat{\beta}$ are identical for the quasi- and poisson models!** Let me illustrate with an example (notice that one needs to scroll to see the whole code and output): ``` > library(MASS) > data(quine) > modp <- glm(Days~Age+Sex+Eth+Lrn, data=quine, family="poisson") > modqp <- glm(Days~Age+Sex+Eth+Lrn, data=quine, family="quasipoisson") > summary(modp) Call: glm(formula = Days ~ Age + Sex + Eth + Lrn, family = "poisson", data = quine) Deviance Residuals: Min 1Q Median 3Q Max -6.808 -3.065 -1.119 1.819 9.909 Coefficients: Estimate Std.
Error z value Pr(>|z|) (Intercept) 2.71538 0.06468 41.980 < 2e-16 *** AgeF1 -0.33390 0.07009 -4.764 1.90e-06 *** AgeF2 0.25783 0.06242 4.131 3.62e-05 *** AgeF3 0.42769 0.06769 6.319 2.64e-10 *** SexM 0.16160 0.04253 3.799 0.000145 *** EthN -0.53360 0.04188 -12.740 < 2e-16 *** LrnSL 0.34894 0.05204 6.705 2.02e-11 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 2073.5 on 145 degrees of freedom Residual deviance: 1696.7 on 139 degrees of freedom AIC: 2299.2 Number of Fisher Scoring iterations: 5 > summary(modqp) Call: glm(formula = Days ~ Age + Sex + Eth + Lrn, family = "quasipoisson", data = quine) Deviance Residuals: Min 1Q Median 3Q Max -6.808 -3.065 -1.119 1.819 9.909 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 2.7154 0.2347 11.569 < 2e-16 *** AgeF1 -0.3339 0.2543 -1.313 0.191413 AgeF2 0.2578 0.2265 1.138 0.256938 AgeF3 0.4277 0.2456 1.741 0.083831 . SexM 0.1616 0.1543 1.047 0.296914 EthN -0.5336 0.1520 -3.511 0.000602 *** LrnSL 0.3489 0.1888 1.848 0.066760 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for quasipoisson family taken to be 13.16691) Null deviance: 2073.5 on 145 degrees of freedom Residual deviance: 1696.7 on 139 degrees of freedom AIC: NA Number of Fisher Scoring iterations: 5 ``` As you can see even though we have strong overdispersion of 12.21 in this data set (by `deviance(modp)/modp$df.residual`) the regression coefficients (point estimates) do not change at all. But notice how the standard errors change. *The question of the effect of overdispersion in penalized poisson models* Penalized models are mostly used for prediction and variable selection and not (yet) for inference. So people who use these models are interested in the regression parameters for the conditional mean, just shrunk towards zero. 
If the penalization is the same, the estimating equations for the conditional mean derived from the penalized (quasi-)likelihood also do not depend on $\theta$ and therefore **overdispersion does not matter for the estimates of $\beta$ in a model of the type:**

$g(\mu)=x^\top\beta + f(\beta)$

**as $\beta$ is estimated the same way for any variance function of the form $\theta \mu$, so again for all models where conditional mean and variance are proportional.** This is just like in unpenalized poisson/quasipoisson models.

If you don't want to take this at face value and avoid the math, you can find empirical support in the fact that in `glmnet`, if you set the regularization parameter towards 0 (and thus $f(\beta)=0$) you end up pretty much where the poisson and quasipoisson models land (see the last column below where lambda is 0.005).

```
> library(glmnet)
> y <- quine[,5]
> x <- model.matrix(~Age+Sex+Eth+Lrn,quine)
> modl <- glmnet(y=y,x=x, lambda=c(0.05,0.02,0.01,0.005), family="poisson")
> coefficients(modl)
8 x 4 sparse Matrix of class "dgCMatrix"
                    s0         s1         s2         s3
(Intercept)  2.7320435  2.7221245  2.7188884  2.7172098
(Intercept)  .          .          .          .
AgeF1       -0.3325689 -0.3335226 -0.3339580 -0.3340520
AgeF2        0.2496120  0.2544253  0.2559408  0.2567880
AgeF3        0.4079635  0.4197509  0.4236024  0.4255759
SexM         0.1530040  0.1581563  0.1598595  0.1607162
EthN        -0.5275619 -0.5311830 -0.5323936 -0.5329969
LrnSL        0.3336885  0.3428815  0.3459650  0.3474745
```

So what does OD do to penalized regression models? As you may know, there is still some debate about the proper way to calculate standard errors for penalized models (see e.g., [here](https://stats.stackexchange.com/questions/91462/standard-errors-for-lasso-prediction-using-r)) and `glmnet` is not outputting any anyway, probably for that reason. It may very well be that the OD would influence the inference part of the model, just as it does in the non-penalized case, but unless some consensus regarding inference in this case is reached, we won't know.
As an aside, one can leave all this messiness behind if one is willing to adopt a Bayesian view where penalized models are just standard models with a specific prior.
Is there a shorter/simpler version of the for loop to do anything x times?

Usually we do something like a for or while loop with a counter:

```
for (int i = 0; i < 10; i++)
{
    list.Add(GetRandomItem());
}
```

but sometimes you mix up the boundaries. You could use a while loop instead, but if you make a mistake this loop is infinite...

In Perl for example I would use the more obvious

```
for(1..10){
    list->add(getRandomItem());
}
```

Is there something like `doitXtimes(10){...}`?
Well you can easily write your own extension method:

```
public static void Times(this int count, Action action)
{
    for (int i = 0; i < count; i++)
    {
        action();
    }
}
```

Then you can write:

```
10.Times(() => list.Add(GetRandomItem()));
```

I'm not sure I'd actually suggest that you *do* that, but it's an option. I don't believe there's anything like that in the framework, although you can use `Enumerable.Range` or `Enumerable.Repeat` to create a lazy sequence of an appropriate length, which can be useful in some situations.

---

As of C# 6, you can still access a static method conveniently without creating an extension method, using a `using static` directive to import it. For example:

```
// Normally in a namespace, of course.
public class LoopUtilities
{
    public static void Repeat(int count, Action action)
    {
        for (int i = 0; i < count; i++)
        {
            action();
        }
    }
}
```

Then when you want to use it:

```
using static LoopUtilities;

// Class declaration etc, then:
Repeat(5, () => Console.WriteLine("Hello."));
```
Entity Framework and some general doubts about the optimistic concurrency exception

I have some doubts about the optimistic concurrency exception.

For example, I retrieve some data from the database, I modify some records and then submit the changes. If someone updates the information in those records between my request and my update, I get an optimistic concurrency exception. The classic concurrency problem.

My first doubt is the following. To decide whether the information has changed, EF retrieves the data from the database and compares the original data that I obtained with the data retrieved from the database. If differences exist, then the optimistic concurrency exception is thrown. When I catch the optimistic concurrency exception, I decide if the client wins or the store wins. In this step, does EF retrieve the information again, or does it use the data from the first retrieval? Because if it retrieves the data again, it would be inefficient.

The second doubt is how to control the optimistic concurrency exception. In the catch block of code, I decide if the client wins or the store wins. If the client wins, then I call SaveChanges again. But between the time that I decide that the client wins and the SaveChanges, another user could change the data, so I get an optimistic concurrency exception again. In theory, it could be an infinite loop.

Would it be a good idea to use a transaction (scope) to ensure that the client updates the information in the database? Another solution could be to use a loop to try N times to update the data, and if it is not possible, exit and tell the user.

Would the transaction be a good idea? Does it consume a lot of database resources? Although the transaction blocks the database for a moment, it ensures that the update operation finishes. The loop of N tries calls the database up to N times, and perhaps that could need more resources.

Thanks.

Daimroc.

EDIT: I forgot to ask.
Is it possible to set the context to use client wins by default, instead of waiting for the concurrency exception?
> My first doubt is the following. EF to decide if the information is
> changed or not, retrieves the data from the database ...

It doesn't retrieve any additional data from the database. It takes the original values of your entity used for concurrency handling and uses them in the where condition of the update command. The update command is followed by selecting the number of modified rows. If the number is 0, it either means that the record doesn't exist or that somebody has changed it.

> The second doubt is how to control the optimistic concurrency exception.

You simply call `Refresh` and `SaveChanges`. You can repeat the pattern a few times if needed. If you have such a highly concurrent application that multiple threads are fighting to update the same record within fractions of a second, you most probably need to architect your data storage in a different way.

> Would it be a good idea to use a transaction (scope) to ensure that the client updates the information in the database?

`SaveChanges` always uses a database transaction. TransactionScope will not add any additional value unless you want to use a transaction over multiple calls to `SaveChanges`, a distributed transaction, or to change, for example, the isolation level of the transaction.

> Is it possible to set the context to use client wins by default instead
> of waiting for the concurrency exception?

It is set by default. Simply don't mark any of your properties with `ConcurrencyMode.Fixed` and you will have no concurrency handling = client wins.
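The bounded-retry idea above (refresh, then call `SaveChanges` again, giving up after N attempts instead of looping forever) is language-agnostic. Here is a rough sketch — in Python rather than C#, with all names hypothetical — of that pattern:

```python
class ConcurrencyError(Exception):
    """Stands in for EF's OptimisticConcurrencyException (illustrative only)."""


def save_with_retry(save, refresh, max_attempts=3):
    """Try save(); on a concurrency conflict, refresh() and retry.

    Gives up after max_attempts so a highly contended record cannot
    trap us in an infinite loop. Returns True on success, False if
    every attempt conflicted.
    """
    for _ in range(max_attempts):
        try:
            save()
            return True
        except ConcurrencyError:
            # Reload current DB values; with "client wins" our edits
            # are kept and reapplied on the next attempt.
            refresh()
    return False
```

The point of the sketch is only the shape of the control flow: a fixed attempt budget with a refresh between attempts, and an explicit failure result to surface to the user.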
vim custom syntax highlighting

I'm wanting to create custom syntax highlighting in vim for a task-list. Task items begin with a hyphen. Two types of task items are relevant: (a) items without an '@done' tag, and (b) items with an @done tag. (a) and (b) need to be highlighted differently.

I'm using [taskpaper](https://github.com/davidoc/taskpaper.vim/blob/master/syntax/taskpaper.vim), which works fine, but the issue is, I'm trying to make this work for task items that span multiple lines. For example:

```
- Regular item (works)
- Completed item @done (works)
- Multi-line item. This item
  continues on to the line below. (doesn't work)
- Multi-line completed item. This item
  continues on to the line below. (doesn't work). @done
```

The highlighting file at [taskpaper](https://github.com/davidoc/taskpaper.vim/blob/master/syntax/taskpaper.vim) works for the first two, but not for the second two.

As a workaround hack, I tried this for the last case above:

```
syn region multLineDoneItem start="{" end="}" fold
HiLink multLineDoneItem NonText
```

But now, I'm forced to mark multi-line done items with braces like so:

```
- {Multi-line completed item. This item
  continues on to the line below. (workaround works).}
```

I've already searched stackexchange and elsewhere. I would appreciate any help! :)
You could try using the `\ze` regex atom in the end part of your syntax region. This would allow you to match everything up to but not including the next task.

I haven't looked at how you do matching but something like this might work.

```
syn region multiLineItem start="^-" end="\(\s*\n\)\+\ze^-" fold
syn region multiLineDoneItem start="^-" end="@done\s*\n\(\s*\n\)*\ze^-" fold

HiLink multiLineItem Normal
HiLink multiLineDoneItem NonText
```

I haven't tested this at all but I think it, or something like it, should work.

If you wish to take indentation into account, the `\z` regex atom will allow you to keep matching lines with the same indent.

**UPDATE:**

Try this:

```
syn match multilineItem "^-\_.\{-}\ze\(\n-\|\%$\)" fold
syn match multilineDoneItem "^-\(\%(\_^-\)\@!\_.\)\{-}@done\s*\n\ze" fold

command -nargs=+ HiLink highlight default link <args>
HiLink multilineItem Normal
HiLink multilineDoneItem NonText
delcommand HiLink
```

Oh, also this should work for all four cases and not just the multi-line items.
FFmpeg: concat multiple videos, some with audio, some without

I'm trying to concatenate 5 videos where the first and last have no audio track. I have tried the following command:

```
ffmpeg -i 1-copyright/copyright2018640x480.mp4 -i 2-openingtitle/EOTIntroFINAL640x480.mp4 -i 3-videos/yelling.mp4 -i 4-endtitle/EOTOutroFINAL640x480.mp4 -i 5-learnabout/Niambi640.mp4 -filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:a:0] [4:v:0] [4:a:0] concat=n=5:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output_video.mp4
```

and I get the output error:

```
Stream specifier ':a:0' in filtergraph description [0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:a:0] [4:v:0] [4:a:0] concat=n=5:v=1:a=1 [v] [a] matches no streams.
```

I know the first and last videos have no audio, but I don't know how to write the statement to ignore the audio track in those videos. I have tried removing the [0:a:0] but that just throws another error:

```
Stream specifier ':v:0' in filtergraph description [0:v:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:a:0] [4:v:0] [4:a:0] concat=n=5:v=1:a=1 [v] [a] matches no streams.
```

It doesn't make sense and I'm kinda lost.
If you're concatenating audio as well, then all video inputs must be paired with an audio stream. If the file itself doesn't have any audio, then a dummy silent track can be used.

Use

```
ffmpeg -i 1-Video.mp4 -i 2-openingtitle/EOTIntroFINAL640x480.mp4 -i 3-videos/yelling.mp4 -i 4-endtitle/EOTOutroFINAL640x480.mp4 -i 5-learnabout/Niambi640.mp4 -f lavfi -t 0.1 -i anullsrc -filter_complex "[0:v:0][5:a][1:v:0][1:a:0][2:v:0][2:a:0][3:v:0][3:a:0][4:v:0][5:a] concat=n=5:v=1:a=1 [v][a]" -map "[v]" -map "[a]" output_video.mp4
```
Is there any dump() like function that returns a string?

I like Swift's dump() function, like this:

```
class MyClass {
    let a = "Hello"
    let b = "Bye!"
    init() {}
}
let myClass = MyClass()
dump(myClass)

// Printed out these lines to Xcode's console
/*
▿ MyClass #0
  - a: Hello
  - b: Bye!
*/
```

But dump() doesn't return a string. It just prints out to the console, and returns the 1st parameter itself.

```
public func dump<T>(x: T, name: String? = default, indent: Int = default, maxDepth: Int = default, maxItems: Int = default) -> T
```

Is there any dump() like function that returns a string?
From: <https://github.com/apple/swift/blob/master/stdlib/public/core/OutputStream.swift>

```
/// You can send the output of the standard library's `print(_:to:)` and
/// `dump(_:to:)` functions to an instance of a type that conforms to the
/// `TextOutputStream` protocol instead of to standard output. Swift's
/// `String` type conforms to `TextOutputStream` already, so you can capture
/// the output from `print(_:to:)` and `dump(_:to:)` in a string instead of
/// logging it to standard output.
```

Example:

```
let myClass = MyClass()
var myClassDumped = String()

dump(myClass, to: &myClassDumped)

// myClassDumped now contains the desired content. Nothing is printed to STDOUT.
```
How does std::endl not use any brackets if it is a function?

The question is pretty much in the title. According to [C++ Reference](http://www.cplusplus.com/reference/iostream/manipulators/endl/), `std::endl` is actually a function. Looking at its declaration in `<iostream>`, this can be verified. However, when you use `std::endl`, you don't use `std::endl()`. Instead, you use:

```
std::cout << "Foo" << std::endl;
```

In fact, if you use `std::endl()`, the compiler demands more parameters, as noted on the link above.

Would someone care to explain this? What is so special about `std::endl`? Can we implement functions that do not require any brackets when calling too?
`std::endl` is a function template declared (27.7.3.8):

```
template <class charT, class traits>
basic_ostream<charT,traits>& endl(basic_ostream<charT,traits>& os);
```

The reason that you can "stream" it to `std::cout` is that the `basic_ostream` class template has a member declared:

```
basic_ostream<charT,traits>& operator<< (
    basic_ostream<charT,traits>& (*pf)(basic_ostream<charT,traits>&) );
```

which is defined to have the effect of returning `pf(*this)` (27.7.3.6.3).

`std::endl` without parentheses refers to a set of overloaded functions - all possible specializations of the function template - but used in a context where a function pointer of one particular type is acceptable (i.e. as an argument to `operator<<`), the correct specialization can be unambiguously deduced.
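As an aside, the trick — handing the function object itself to `operator<<`, which then calls it on the stream — can be imitated in any language with first-class functions. A toy Python analogy (purely illustrative, not C++):

```python
class Stream:
    """Toy stream whose << mimics basic_ostream::operator<<."""

    def __init__(self):
        self.buffer = ""

    def __lshift__(self, item):
        # If given a callable (a "manipulator" like endl), call it on the
        # stream instead of writing it -- just as the C++ overload taking
        # a function pointer returns pf(*this).
        if callable(item):
            return item(self)
        self.buffer += str(item)
        return self


def endl(stream):
    """Naming endl without parentheses passes the function itself."""
    stream.buffer += "\n"
    return stream


out = Stream()
out << "Foo" << endl  # endl is not called here; __lshift__ calls it
```

The chaining works because both `__lshift__` and `endl` return the stream, exactly as the C++ overloads return `basic_ostream&`.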
css/javascript: start faded out by default, then fade in

I'm trying to make a button that causes a green check image to fade in then fade out again. It mostly works, but how do I make the check start out in the faded out position when the page loads? I tried to put `opacity: 0;` in its css, assuming that the fadeIn function changes the opacity, but then it doesn't show up at all.

```
function green_check(check) {
  //fade in half a sec, hold 2 sec, fade out in 3 sec
  $(check).fadeIn(500, function() {
    $(this).delay(2000).fadeOut(3000);
  });
}
```

```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<button type="button" onclick="green_check(getElementById('check'));">Show Check</button>
<img class='five_sec_check' width="50" height="50" id="check" src='http://cliparts.co/cliparts/qcB/Bex/qcBBexbc5.png' />
```

Is there some other transparency property that fadeIn/fadeOut uses that I can set in the css before those are called? Or maybe prevent the opacity in the css from overriding the fadeIn function?

Thanks
I'd use display:none in css to hide on page load:

```
#check {
  display: none;
}
```

```
function green_check(check) {
  //fade in half a sec, hold 2 sec, fade out in 3 sec
  $(check).fadeIn(500, function () {
    $(this).delay(2000).fadeOut(3000);
  });
}
```

```
#check {
  display: none;
  width: 16px;
  height: 16px;
}
```

```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<button type="button" onclick="green_check(getElementById('check'));">Show Check</button>
<img class='five_sec_check' id="check" src='http://neil.computer/s/check.png'/>
```
Python logging does not log when used inside a Pytest fixture

I have a Pytest + Selenium project and I would like to use the logging module. However, when I set up logging in `conftest.py` like this

```
@pytest.fixture(params=["chrome"], scope="class")
def init_driver(request):
    start = datetime.now()
    logging.basicConfig(filename='.\\test.log', level=logging.INFO)
    if request.param == "chrome":
        options = ChromeOptions()
        options.add_argument("--start-maximized")
        web_driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
    if request.param == "firefox":
        web_driver = webdriver.Firefox(GeckoDriverManager().install())
    request.cls.driver = web_driver
    yield
    end = datetime.now()
    logging.info(f"{end}: --- DURATION: {end - start}")
    web_driver.close()
```

it looks like `test.log` is not created at all and there are no error messages or other indications something went wrong. How can I make this work?
Two facts first:

1. `logging.basicConfig()` only has an effect if no logging configuration was done before invoking it (the target logger has no handlers registered).
2. `pytest` registers custom handlers to the root logger to be able to capture log records emitted in your code, so you can test whether your program's logging behaviour is correct.

This means that calling `logging.basicConfig(filename='.\\test.log', level=logging.INFO)` in a fixture will do nothing, since the test run has already started and the root logger has handlers attached by `pytest`. You thus have two options:

1. Disable the builtin `logging` plugin completely. This will stop log record capturing - if you have tests where you are analyzing emitted logs (e.g. using the [`caplog`](https://docs.pytest.org/en/latest/reference/logging.html#caplog-fixture) fixture), those will stop working. Invocation:

   ```
   $ pytest -p no:logging ...
   ```

   You can persist the flag in `pyproject.toml` so it is applied automatically:

   ```
   [tool.pytest.ini_options]
   addopts = "-p no:logging"
   ```

   Or in `pytest.ini`:

   ```
   [pytest]
   addopts = -p no:logging
   ```

2. Configure and use [live logging](https://docs.pytest.org/en/stable/logging.html#live-logs). The configuration in `pyproject.toml`, equivalent to your `logging.basicConfig()` call:

   ```
   [tool.pytest.ini_options]
   log_file = "test.log"
   log_file_level = "INFO"
   ```

   In `pytest.ini`:

   ```
   [pytest]
   log_file = test.log
   log_file_level = INFO
   ```
Better way of forming this if statement?

I'm currently learning java from a book, and a project was to output the days and month name of a month after taking in the number of the month. I was wondering if there is any better way to set up my if statement rather than what I have already done.

PS: ConsoleReader is just an included class to easily take input from the user's console.

```
public class Project13 {
    public static void main(String[] args) {
        ConsoleReader console = new ConsoleReader(System.in);

        System.out.println("Enter a month you would like to evaluate (by number):");
        int month = console.readInt();

        int days = 0;
        String monthout = "Month";
        String out = "Yes";

        if(month == 1){
            days = 31;
            monthout = "January";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 2){
            System.out.println("Is it a leap year? Yes or No:");
            String leap = console.readLine();
            if(leap.equalsIgnoreCase("yes")){
                days = 29;
                monthout = "February";
                out = "There are " + days + " days in " + monthout;
            }else if(leap.equalsIgnoreCase("no")){
                days = 28;
                monthout = "February";
                out = "There are " + days + " days in " + monthout;
            }else{
                out = "Something went wrong, please try again";
            }
        }else if(month == 3){
            days = 31;
            monthout = "March";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 4){
            days = 30;
            monthout = "April";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 5){
            days = 31;
            monthout = "May";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 6){
            days = 30;
            monthout = "June";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 7){
            days = 31;
            monthout = "July";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 8){
            days = 31;
            monthout = "August";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 9){
            days = 30;
            monthout = "September";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 10){
            days = 31;
            monthout = "October";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 11){
            days = 30;
            monthout = "November";
            out = "There are " + days + " days in " + monthout;
        }else if(month == 12){
            days = 31;
            monthout = "December";
            out = "There are " + days + " days in " + monthout;
        }else if(month > 12){
            out = "Your month input was not valid. Please try again.";
        }

        System.out.println(out);
    }
}
```
You can replace almost the entire `if` statement with a pair of arrays like this:

```
int dayCount[] = new int[] {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
String monthName[] = new String[] {"January", "February", ...};
```

With these two arrays in hand, you can do this:

```
// February is the only month that needs special handling
if (month == 2) {
    // Do your special handling of leap year etc...
} else if (month >= 1 && month <= 12) {
    // All other valid months go here. Since Java arrays are zero-based,
    // we subtract 1 from the month number
    days = dayCount[month-1];
    monthout = monthName[month-1];
} else {
    // Put handling of invalid month here
}
out = "There are " + days + " days in " + monthout;
```
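The same lookup-table idea translates directly to other languages. As a point of comparison, here it is sketched in Python (`days_in_month` is a hypothetical helper name, not part of the exercise):

```python
MONTH_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
MONTH_NAMES = ["January", "February", "March", "April", "May", "June",
               "July", "August", "September", "October", "November", "December"]


def days_in_month(month, leap_year=False):
    """Return (name, day count) for a 1-based month number."""
    if not 1 <= month <= 12:
        raise ValueError("month must be between 1 and 12")
    if month == 2 and leap_year:  # February is the only special case
        return MONTH_NAMES[1], 29
    return MONTH_NAMES[month - 1], MONTH_DAYS[month - 1]
```

Twelve nearly identical branches collapse into two parallel tables plus one special case, which is the whole point of the array-based rewrite.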
What is the best way to have a generic Comparer?

I have a lot of comparer classes where the class being compared is simply checking the name property of the object and doing a string compare. For example:

```
public class ExerciseSorter : IComparer<Exercise>
{
    public int Compare(Exercise x, Exercise y)
    {
        return String.Compare(x.Name, y.Name);
    }
}

public class CarSorter : IComparer<Car>
{
    public int Compare(Car x, Car y)
    {
        return String.Compare(x.Name, y.Name);
    }
}
```

What is the best way to make this code generic so I don't need to write redundant code over and over again?
I use one like this:

```
public class AnonymousComparer<T> : IComparer<T>
{
    private Comparison<T> comparison;

    public AnonymousComparer(Comparison<T> comparison)
    {
        if (comparison == null)
            throw new ArgumentNullException("comparison");
        this.comparison = comparison;
    }

    public int Compare(T x, T y)
    {
        return comparison(x, y);
    }
}
```

Usage:

```
var carComparer = new AnonymousComparer<Car>((x, y) => x.Name.CompareTo(y.Name));
```

If you're doing a straight property compare and the property type implements `IComparable` (for example an `int` or `string`), then I also have this class which is a bit more terse to use:

```
public class PropertyComparer<T, TProp> : IComparer<T>
    where TProp : IComparable
{
    private Func<T, TProp> func;

    public PropertyComparer(Func<T, TProp> func)
    {
        if (func == null)
            throw new ArgumentNullException("func");
        this.func = func;
    }

    public int Compare(T x, T y)
    {
        TProp px = func(x);
        TProp py = func(y);
        return px.CompareTo(py);
    }
}
```

Usage of this one is:

```
var carComparer = new PropertyComparer<Car, string>(c => c.Name);
```
Highlight text in TextBox/Label/RichTextBox using C#

Good night, I would like to know how I can highlight a part of the text contained in a TextBox, Label (preferably) or RichTextBox. For example, given the string "This is a test.", I would like the control to show "This is a *test*.". Is there any easy way I can do it?

Thank you very much.
```
RichTextBox r = new RichTextBox();
r.Text = "This is a test";
r.Select(r.Text.IndexOf("test"), "test".Length);
r.SelectionFont = new Font(r.Font, FontStyle.Italic);
r.SelectionStart = r.Text.Length;
r.SelectionLength = 0;
```

Something to that effect will work and remove the selection.

**EDIT**

It should be relatively easy to encapsulate in your own method. You could even make an extension:

```
public static class Extensions
{
    public static void StyleText(this RichTextBox me, string text, FontStyle style)
    {
        int curPos = me.SelectionStart;
        int selectLen = me.SelectionLength;
        int len = text.Length;

        int i = me.Text.IndexOf(text);
        while (i >= 0)
        {
            me.Select(i, len);
            me.SelectionFont = new Font(me.SelectionFont, style);
            i = me.Text.IndexOf(text, i + len);
        }

        me.SelectionStart = curPos;
        me.SelectionLength = selectLen;
    }
}
```

and then use it like:

```
richTextBox1.Text = "This is a test";
richTextBox1.StyleText("test", FontStyle.Italic);
```
Luxon.js ISO 8601 parsing

I'm using Luxon.js to handle dates and I'm trying to parse some ISO string which comes from the server. It has this format

```
2019-04-04T12:12:07+03:00
```

and I'm using Luxon's method - fromISO - which should parse this str

```
DateTime.fromISO("2019-04-04T12:12:07+03:00", "dd LLLL yyyy")
```

I expect to see ***04 April 2019 - 15:12*** in the output but it returns ***04 April 2019 - 12:12*** somehow and I can't understand why it happens. Am I doing something wrong? Because when I'm trying to use this ISO string **2019-04-04T12:12:07.756Z** it works like a charm. But I don't understand why the previous one isn't working and what I should do to make it work?

I'll appreciate any help!

P.S. To be honest, I have some gaps in my knowledge about ISO time, so sorry if the question is stupid.
`2019-04-04T12:12:07+03:00` says "in a zone with an offset of +3, it is 12:12". It isn't telling Luxon to add 3 hours; it's providing the context in which the time was expressed. For example, because zones whose offset is +3 have local times 3 hours ahead of UTC (that's what it means), then in England it's 9:12. The point is that it's a specific time on the planet, and the +3 is just telling you what rule was used in turning that time into a string.

Luxon takes in all that information and figures out what time it really is, which in its conception of time is the number of milliseconds that have transpired since the beginning of 1970 in UTC. It can then translate that into any zone's local time. Of course, if your computer is in a zone whose offset is +3, then that local time is 12:12:

```
-> TZ=Europe/Moscow node
> DateTime = require("luxon").DateTime
[Function: DateTime]
> var dt = DateTime.fromISO("2019-04-04T12:12:07+03:00")
undefined
> dt.toLocaleString(DateTime.TIME_24_SIMPLE)
'12:12'
```

My computer is actually on the US east coast, so I see a very different local time:

```
-> node
> DateTime = require("luxon").DateTime
[Function: DateTime]
> var dt = DateTime.fromISO("2019-04-04T12:12:07+03:00")
undefined
> dt.toLocaleString(DateTime.TIME_24_SIMPLE)
'05:12'
```

That's because my offset is -4, which means the local time here is 7 hours earlier than the +3 in the original string.

With my computer in any zone, I could of course tell Luxon to express the date in any other zone:

```
> dt.toUTC().toLocaleString(DateTime.TIME_24_SIMPLE)
'09:12'
> dt.setZone("Europe/Moscow").toLocaleString(DateTime.TIME_24_SIMPLE)
'12:12'
> dt.setZone("Asia/Tokyo").toLocaleString(DateTime.TIME_24_SIMPLE)
'18:12'
```
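This behaviour isn't specific to Luxon — any ISO 8601 parser treats the offset as context, not as an instruction to shift the time. For instance, Python's standard library handles the exact same string the same way:

```python
from datetime import datetime, timezone

# Parse the string from the question; the +03:00 offset is part of the
# parsed value, recording the zone in which 12:12 was the wall-clock time.
dt = datetime.fromisoformat("2019-04-04T12:12:07+03:00")

# It is one specific instant; converting to UTC just re-expresses it.
utc = dt.astimezone(timezone.utc)
print(utc.strftime("%H:%M"))  # 09:12 -- the same instant, in UTC
```

Note that `dt` and `utc` compare as equal: aware datetimes (like Luxon's DateTime) are compared by the instant they denote, not by their wall-clock rendering.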
Use of new and delete in C++

I need help in understanding when I shall use the following options

```
char *a = new char();
```

and

```
char *a = new char[sizeof(int)+1];
```

and how the respective memory freeing calls should be made?
The first one allocates a single char. You delete it with:

```
delete a;
```

The second one allocates an array of chars. The length you have chosen is a little strange. You deallocate it with:

```
delete[] a;
```

Now... I hope you don't think you can put a stringified number in the second `a` (something like `"123456"`), because you'll need many more bytes. Let's say at least 12 if an `int` is 32 bits. There is a funny formula to calculate the minimum length necessary. It's an approximation of the log10: <https://stackoverflow.com/a/2338397/613130>

To be clear, on my machine `sizeof(int) == 4`, but in an `int` I can put `-2147483648`, which is 10 digits plus the `-`, so 11 characters (plus the zero terminator).
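If you want to sanity-check that digit-count formula, here it is worked through in Python (illustrative only — the buffer in question is C++). For a signed integer with $n$ magnitude bits, the digit count is $\lfloor n \log_{10} 2 \rfloor + 1$, plus one byte for the sign and one for the terminator:

```python
import math


def decimal_buffer_size(value_bits):
    """Chars needed to print a signed integer with `value_bits` magnitude
    bits: floor(value_bits * log10(2)) + 1 digits, plus sign and NUL."""
    digits = math.floor(value_bits * math.log10(2)) + 1
    return digits + 2


# A 32-bit signed int has 31 magnitude bits -> 10 digits, 12 bytes total,
# matching "-2147483648" (11 chars) plus the zero terminator.
```

So the `sizeof(int)+1` (= 5) buffer in the question is far too small for a stringified `int`; 12 is the safe size for 32 bits, 21 for 64 bits.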
How to change the default browser of the cordova browser platform?

I don't have chrome installed and I mainly use other browsers for development (opera, yandex etc). But the command:

> cordova run browser

uses chrome by default, so it fails with "The system can not find the file chrome.". Can I change which browser cordova uses?
The only way to change the default Chrome browser is using the [`--target` option](https://cordova.apache.org/docs/en/latest/reference/cordova-cli/index.html#cordova-run-command).

As you can see, [Chrome is the default browser for the `run` command](https://github.com/apache/cordova-browser/blob/master/bin/templates/project/cordova/run#L40). Internally, the [cordovaServe.launchBrowser function](https://github.com/apache/cordova-browser/blob/master/bin/templates/project/cordova/run#L51) is called with cli arguments. This function [is defined in the cordova-serve/serve.js file](https://github.com/apache/cordova-browser/blob/master/node_modules/cordova-serve/serve.js#L53) and you can find its body in the [cordova-serve/src/browser.js file](https://github.com/apache/cordova-browser/blob/master/node_modules/cordova-serve/src/browser.js#L32) where you can find the complete list of supported browsers for each platform:

```
var browsers = {
    'win32': {
        'ie': 'iexplore',
        'chrome': 'chrome --user-data-dir=%TEMP%\\' + dataDir,
        'safari': 'safari',
        'opera': 'opera',
        'firefox': 'firefox',
        'edge': 'microsoft-edge'
    },
    'darwin': {
        'chrome': '"Google Chrome" --args' + chromeArgs,
        'safari': 'safari',
        'firefox': 'firefox',
        'opera': 'opera'
    },
    'linux' : {
        'chrome': 'google-chrome' + chromeArgs,
        'chromium': 'chromium-browser' + chromeArgs,
        'firefox': 'firefox',
        'opera': 'opera'
    }
};
```

I hope that this answer will help you to learn a bit more about cordova and the way it works.
How to have the same text in multiple places be changed or refactored at once?

I have a document with a variable being referred to several times on different pages. This is the same variable and so its value is the same on each page. If I need to change this variable I would like to be able to change it only in one place and have this change all the other instances of this variable. Similar to when you refactor a variable in code.

I can do a find and replace, but the problem is when I have several variables with the same value: the find and replace will not differentiate between them, meaning I would need to click through every instance manually and decide whether to replace that instance or not.

Is there a way to insert text or mark it in some way so that the document knows that this piece of text is the same as other instances and that changing one changes all others? This is similar to how text in the headers and footers works, where changing one header changes all of them, except this would be in the body of the document.
A standard way to handle repeated text is to use bookmarks in conjunction with `REF` fields. Each bit of repeated text is stored in a bookmark, and each repetition of that text is inserted via a `REF` field that is linked to the bookmark. So it doesn't matter if two bookmarks happen to store identical looking text at a given time.

1. Select the text that needs to be repeated, and insert a bookmark (on the **Insert** tab, in the **Links** group, click **Bookmark**).
2. At each place where you need the text repeated, insert a `REF` field (on the **Insert** tab, in the **Text** group, click **Quick Parts** > **Field**, select **Ref**, and then select the name of the bookmark in the list).
3. After you change the bookmarked text, select the whole document, and then press `F9` to update the `REF` fields.

As came up in [another recent question](https://superuser.com/questions/1205791/create-reference-to-frequently-changed-text), it helps if you make bookmarks visible if they aren't (click **File** > **Options** > **Advanced**, and then, under **Show document content**, select **Show bookmarks**). Otherwise, when you change the text, some of the new text might end up outside the bookmark, and your changes won't be propagated. When the bookmarks are visible, you can see whether all the new text is correctly inside the bookmark.
Class 'MongoDB\Client' not found, mongodb extension installed

I tried to create a new mongo connection executing the following code

```
$m = new MongoDB\Client();
```

and I got this error:

> Fatal error: Class 'MongoDB\Client' not found

I think I have properly installed the MongoDB extension (copied php_mongodb.dll to the ext folder and updated php.ini with *extension=php_mongodb.dll*). The following code confirms it is loaded:

```
echo extension_loaded("mongodb") ? "loaded\n" : "not loaded\n";
```

I still receive the same error. Here is [phpinfo()](https://i.stack.imgur.com/n4H1W.png)

I appreciate all your help. Thank you!
If you are using the latest MongoDB extension for PHP, [`MongoDB\Driver\Manager`](http://php.net/manual/en/class.mongodb-driver-manager.php) is the main entry point to the extension. Here is sample code to retrieve data using the latest extension. Let's say you have a `testColl` collection in `testDb`. Then you can retrieve data using the [`MongoDB\Driver\Query`](http://php.net/manual/en/class.mongodb-driver-query.php) class of the extension. ``` // Manager class $manager = new MongoDB\Driver\Manager("mongodb://localhost:27017"); // Query class $query = new MongoDB\Driver\Query(array('age' => 30)); // The output of executeQuery will be an object of the MongoDB\Driver\Cursor class $cursor = $manager->executeQuery('testDb.testColl', $query); // Convert the cursor to an array and print the result print_r($cursor->toArray()); ``` Output: ``` Array ( [0] => stdClass Object ( [_id] => MongoDB\BSON\ObjectID Object ( [oid] => 5848f1394cea9483b430d5d2 ) [name] => XXXX [age] => 30 ) ) ```
os.listdir() not printing out all files I've got a bunch of files and a few folders. I'm trying to append the zips to a list so I can extract those files in other part of the code. It never finds the zips. ``` for file in os.listdir(path): print(file) if file.split(".")[1] == 'zip': reg_zips.append(file) ``` The path is fine or it wouldn't print out anything. It picks up the same files each time but will not pick up any others. It picks up about 1/5th of the files in the directory. At a complete loss. I've made sure that some weird race condition with the file availability isn't the problem by putting a time.sleep(3) in the code. Didn't solve it.
It's possible your files have more than one period in them. Try using `str.endswith`: ``` reg_zips = [] for file in os.listdir(path): if file.endswith('.zip'): reg_zips.append(file) ``` Another good idea (thanks, Jean-François Fabre!) is to use [`os.path.splitext`](https://docs.python.org/3/library/os.path.html#os.path.splitext), which handles the extension quite nicely: ``` if os.path.splitext(file)[-1] == '.zip': ... ``` An even better solution: I recommend the [`glob.glob`](https://docs.python.org/2/library/glob.html#glob.glob) function (note that it returns paths relative to the pattern, not bare file names): ``` import glob import os reg_zips = glob.glob(os.path.join(path, '*.zip')) ```
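To make the failure mode concrete, here is a small sketch (file names invented for illustration) comparing the naive first-token split with the two safer checks:

```python
import os

names = ["archive.zip", "backup.2017.zip", "notes.txt", "no_extension"]

# Naive split: inspects the token after the FIRST dot, so
# "backup.2017.zip" yields "2017" and the file is missed.
naive = [n for n in names if "." in n and n.split(".")[1] == "zip"]

# endswith and splitext both look at the real extension.
by_endswith = [n for n in names if n.endswith(".zip")]
by_splitext = [n for n in names if os.path.splitext(n)[-1] == ".zip"]

print(naive)        # ['archive.zip']
print(by_endswith)  # ['archive.zip', 'backup.2017.zip']
print(by_splitext)  # ['archive.zip', 'backup.2017.zip']
```

Also note that the original `file.split(".")[1]` raises `IndexError` on names with no dot at all, which is another reason to prefer `endswith` or `splitext`.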
Contending with JS "used before defined" and Titanium Developer I have a lengthy JavaScript file that passes JSLint except for "used before it was defined" errors. I used normal function declarations, as in... ``` function whatever() {do something;} ``` as opposed to... ``` var whatever = function(){do something;}; ``` and consistent with [Steve Harrison's reply](https://stackoverflow.com/questions/806163/jslint-using-a-function-before-its-defined-error) to an earlier post... > > Assuming you declare all your > functions with the function keyword, I > think it becomes a programming-style > question. Personally, I prefer to > structure my functions in a way that > seems logical and makes the code as > readable as possible. For example, > like you, I'd put an init function at > the top, because it's where everything > starts from. > > > ... I like seeing the functions in an order that makes sense to me. The script functions perfectly in the several browsers tested (e.g., FireFox, Safari, Mobile Safari, Fennec, IE, Chrome, Midori, etc.). Here's the problem: I want to use the script inside of an iPhone app being built with Titanium but its compiler stops with "used before defined" errors. How can I get around this? This might be a stupid question but also... If functions need to be called in a particular order, how does one resolve the matter of a called function needing to call back to the function that originally called it? For instance... ``` function buildMenu(){ Display a list of five menu items, each of which calls a function to build the associated screen. } function screen1() { Display the screen associated with menu item #1. } ``` If the functions need to be declared in order, `function screen1` would need to precede `function buildMenu`. But what if, under certain conditions, screen1 cannot be built and hence wants to redisplay the menu (i.e., calling a function that is technically not yet declared)? Oh yeah... 
one more question: Are there websites or programs that automatically re-sequence the functions rather than requiring the programmer to do it manually?
No, EM's answer is NOT the right solution. Try running this JavaScript: ``` (function () { foo(); // right var foo = function () { console.log("wrong"); }; foo(); // wrong function foo() { console.log("right"); } foo(); // wrong }()); ``` This is because the interpreter first reads the function declaration and creates the name `foo` as a function that prints "right"; it then reads the `var` statement and finds that there is already a name `foo`, so it skips initializing the variable to `undefined`, as would normally happen. Then it processes the code line by line, which includes an assignment to `foo`. The function declaration does not get reprocessed. Maybe this will behave differently in Titanium, but try this in Firebug and you'll get what I got. A better solution is: ``` var screen1, buildMenu; screen1 = function () { buildMenu(); }; buildMenu = function () { screen1(); }; ``` This will also pass JSLint, and produce the correct behavior.
Select Database names and extended properties in SQL Server I have three databases, starting with "MD\_" that I have added in SQL Server 2012. Each of them has an extended property NAME = "DESCRIPTION" What I like to have returned is a result set of the database names and the "DESCRIPTION" value. Selecting the database names are easy enough but I could use some help with joining in the extended property. ``` BEGIN SELECT A.NAME FROM sys.databases A Where LEFT(A.NAME, 3) = 'MD_' END ``` Results: ``` NAME DESCRIPTION MD_1 Initial MD_2 Secondary MD_3 Final ``` Any help would be greatly appreciated! Kind regards
The link in the comments helped me get here but sys.extended\_properties is a per database view. So the properties for each database are contained in the database. This worked though. ``` CREATE TABLE #EP (DatabaseName varchar(255), PropertyName varchar(max), PropertyValue varchar(max)) EXEC sp_msforeachdb 'INSERT INTO #EP SELECT ''?'' AS DatabaseName, CAST(name AS varchar), CAST(Value AS varchar) FROM [?].sys.extended_properties WHERE class=0' ``` And if you want all the databases and just properties where they exist. ``` SELECT db.Name, #EP.PropertyName, #EP.PropertyValue FROM sys.databases db LEFT OUTER JOIN #EP ON db.name = #EP.DatabaseName ```
How to call a class function inside document.addEventListener() When I load a page I am calling `addEventListener` to initialize `deviceReady`, inside that `addEventListener` I want to call a function which is inside a class. See the following example: Example class ``` var HomePageModel = function(){ this.initModule = function(){ //doing some process; }; }; ``` I want to call the above `initModule` function in `addEventListener`. Like ``` document.addEventListener("deviceready", HomePageModel.initModule, false); ``` Is it possible to call class function inside a `eventlistener` which is outside of a class?
You need an instance first: ``` document.addEventListener("deviceready", new HomePageModel().initModule, false); ``` Note that if your `initModule` method uses `this`, it won't work, because `this` will be set to the element that triggered the event. To avoid that, you can create a new function bound to a fixed value of `this`: ``` var model = new HomePageModel(); var boundFn = model.initModule.bind(model); document.addEventListener("deviceready", boundFn, false); ``` Finally, you can always use a wrapper function as the event handler, and call your module initializer from there (not sure why you haven't considered that): ``` document.addEventListener("deviceready", function(){ new HomePageModel().initModule(); }, false); ```
Python multiprocess/multithreading to speed up file copying I have a program which copies large numbers of files from one location to another - I'm talking 100,000+ files (I'm copying 314g in image sequences at this moment). They're both on huge, VERY fast network storage RAID'd in the extreme. I'm using shutil to copy the files over sequentially and it is taking some time, so I'm trying to find the best way to optimize this. I've noticed some software I use effectively multi-threads reading files off of the network with huge gains in load times so I'd like to try doing this in Python. I have no experience with programming multithreading/multiprocessing - does this seem like the right area to proceed? If so what's the best way to do this? I've looked around a few other SO posts regarding threading file copying in Python and they all seemed to say that you get no speed gain, but I do not think this will be the case considering my hardware. I'm nowhere near my IO cap at the moment and resources are sitting around 1% (I have 40 cores and 64g of RAM locally). **EDIT** Been getting some up-votes on this question (now a few years old) so I thought I'd point out one more thing to speed up file copies. In addition to the fact that you can easily 8x-10x copy speeds using some of the answers below (seriously!) I have also since found that `shutil.copy2` is excruciatingly slow for no good reason. Yes, even in Python 3+. It is beyond the scope of this question so I won't dive into it here (it's also highly OS and hardware/network dependent), beyond just mentioning that by tweaking the copy buffer size in the `copy2` function you can increase copy speeds by yet another factor of 10! (however note that you will start running into bandwidth limits and the gains are not linear when multi-threading AND tweaking buffer sizes. At some point it *does* flat line).
UPDATE: I never did get Gevent working (first answer) because I couldn't install the module without an internet connection, which I don't have on my workstation. However I was able to decrease file copy times by 8 just using the built in threading with python (which I have since learned how to use) and I wanted to post it up as an additional answer for anyone interested! Here's my code below, and it is probably important to note that my 8x copy time will most likely differ from environment to environment due to your hardware/network set-up. ``` import Queue, threading, os, time import shutil fileQueue = Queue.Queue() destPath = 'path/to/cop' class ThreadedCopy: totalFiles = 0 copyCount = 0 lock = threading.Lock() def __init__(self): with open("filelist.txt", "r") as txt: #txt with a file per line fileList = txt.read().splitlines() if not os.path.exists(destPath): os.mkdir(destPath) self.totalFiles = len(fileList) print str(self.totalFiles) + " files to copy." self.threadWorkerCopy(fileList) def CopyWorker(self): while True: fileName = fileQueue.get() shutil.copy(fileName, destPath) fileQueue.task_done() with self.lock: self.copyCount += 1 percent = (self.copyCount * 100) / self.totalFiles print str(percent) + " percent copied." def threadWorkerCopy(self, fileNameList): for i in range(16): t = threading.Thread(target=self.CopyWorker) t.daemon = True t.start() for fileName in fileNameList: fileQueue.put(fileName) fileQueue.join() ThreadedCopy() ```
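For anyone on Python 3, the same idea can be sketched with `concurrent.futures` instead of hand-rolled worker threads. This is an illustrative rewrite of the approach above (the function name and defaults are my own), not the exact code that produced the 8x figure, and the right worker count depends entirely on your storage and network:

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def threaded_copy(file_list, dest_path, workers=16):
    """Copy every file in file_list into dest_path using a thread pool.

    shutil.copy releases the GIL during the underlying I/O, so the
    threads genuinely overlap when the bottleneck is the storage,
    not the CPU.
    """
    os.makedirs(dest_path, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() blocks until every copy has finished.
        list(pool.map(lambda f: shutil.copy(f, dest_path), file_list))
```

As with the threaded version above, benchmark a few worker counts on your own hardware before settling on one.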
How do I convert a string to a symbol for use as a key in the Lisp "assoc" function? I have this association-list in Common Lisp: ``` (defvar base-list (list (cons 'a 0) (cons 2 'c))) ``` I have to call `assoc` when my argument is of type `string`. For the pair `(A . 0)` I have to convert "a" to a symbol, and for the pair `(2 . C)` I have to convert "2" to a symbol. How can I do that? This should work like this: ``` CL-USER 28 : 1 > (assoc (convert-string-to-symbol "a") base-list) (A . 0) CL-USER 28 : 1 > (assoc (convert-number-to-symbol "2") base-list) (2 . C) ``` I tried using `intern` but got `NIL`: ``` CL-USER 29 : 1 > (assoc (intern "a") base-list) NIL ```
You were close with [`intern`](http://www.lispworks.com/documentation/HyperSpec/Body/f_intern.htm); you just had the case wrong. Try this: ``` > (assoc (intern "A") base-list) (A . 0) ``` Note that here the name-as-string is capitalized. Alternately, you could use [`find-symbol`](http://www.lispworks.com/documentation/HyperSpec/Body/f_find_s.htm#find-symbol) to look for an existing symbol by name: ``` > (assoc (find-symbol "A") base-list) (A . 0) ``` The key here is that when you wrote your original `defvar` form, the *reader* read the string "a" and—by virtue of the current [*readtable case*](http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_r.htm#readtable_case)—converted the symbol name to be uppercase. Symbols with names of different case are not equal. It just so happens that at read time the reader is projecting what you wrote (lowercase) to something else (uppercase). You can inspect the current case conversion policy for the current reader using the [`readtable-case`](http://www.lispworks.com/documentation/HyperSpec/Body/f_rdtabl.htm) function: ``` > (readtable-case *readtable*) :UPCASE ``` To learn more about how the readtable case and the reader interact, see the discussion in [section 23.1.2 of the Hyperspec.](http://www.lispworks.com/documentation/HyperSpec/Body/23_ab.htm)
Java: What's the difference between ActionEvent and ItemEvent on a JRadioButton? They're both raised once after the mouse button is released and both can have all the information available on the JRadioButton right? Is there any difference?
[`ItemListener`](http://download-llnw.oracle.com/javase/6/docs/api/java/awt/event/ItemListener.html)s are notified whenever the state of the button changes, whether through a user interacting with the button or programmatically (via the [`setSelected`](http://download-llnw.oracle.com/javase/6/docs/api/javax/swing/AbstractButton.html#setSelected%28boolean%29) method). [`ActionListener`](http://download.oracle.com/javase/6/docs/api/java/awt/event/ActionListener.html)s, on the other hand, will be called when a user interacts with the button (but this can be simulated programmatically via the [`doClick`](http://download-llnw.oracle.com/javase/6/docs/api/javax/swing/AbstractButton.html#doClick%28%29) method). Note that a user interacting with the button, such as clicking it or hitting the space bar, will also change the state of the button and raise an item event as well as an action event. Generally, you will want to define either one or the other; don't listen for both action events and item events on the same button.
How can I create a "progressively matching" regex? I need a regex that matches a string as the user types it. It's a bit hard to explain, so let me show what I mean: It should match this string: `"XXXX units"`, where XXXX is any number. But it should also match any substring from the beginning of that string, so: ``` "123" "123 u" "123 uni" ``` should also match. But of course, this should not match: ``` "123 xx" ``` It seems so simple, but I can't quite figure it out. This is the closest I've got: ``` ^\d+ ?u?n?i?t?s? ``` ...but that unfortunately also matches strings like "123us". Can anyone help? It's javascript, so I may be a bit limited by missing look behind/ahead features...
Just add some `()`: ``` /^\d+( (u(n(i(t(s)?)?)?)?)?)?$/ ``` Testing: ``` /^\d+( (u(n(i(t(s)?)?)?)?)?)?$/.test("123 units") -> true /^\d+( (u(n(i(t(s)?)?)?)?)?)?$/.test("123 un") -> true /^\d+( (u(n(i(t(s)?)?)?)?)?)?$/.test("123 ui") -> false /^\d+( (u(n(i(t(s)?)?)?)?)?)?$/.test("12") -> true /^\d+( (u(n(i(t(s)?)?)?)?)?)?$/.test("123 xx") -> false ```
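The nesting is mechanical, so if the literal suffix ever changes you can generate the pattern rather than hand-write the parentheses. A Python sketch (the helper name is made up for illustration):

```python
import re

def progressive(prefix_pattern, literal):
    """Build a regex matching prefix_pattern followed by any leading
    substring of literal, including the empty string."""
    # Wrap each character in an optional group, innermost first:
    # "abc" becomes (a(b(c)?)?)?
    tail = ""
    for ch in reversed(literal):
        tail = "({}{})?".format(re.escape(ch), tail)
    return "^" + prefix_pattern + tail + "$"

pattern = re.compile(progressive(r"\d+", " units"))
print(bool(pattern.match("123 uni")))  # True
print(bool(pattern.match("123 xx")))   # False
```

The same trick works for any "match as the user types it" suffix, since each character only becomes mandatory once the next one appears.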
Typical time of execution for elementary functions It is well-known that the processor instruction for multiplication takes several times more time than addition, division is even worse (UPD: which is not true any more, see below). What about more complex operations like exponent? How difficult are they? **Motivation**. I am interested because it would help in algorithm design to estimate performance-critical parts of algorithms on early stage. Suppose I want to apply a set of filters to an image. One of them operates on 3×3 neighborhood of each pixel, sums them and takes atan. Another one sums more neighbouring pixels, but does not use complicated functions. Which one would execute longer? So, ideally I want to have approximate relative times of elementary operations execution, like multiplication typically takes 5 times more time than addition, exponent is about 100 multiplications. Of course, it is a deal of orders of magnitude, not the exact values. I understand that it depends on the hardware and on the arguments, so let's say we measure average time (in some sense) for floating-point operations on modern x86/x64. For operations that are not implemented in hardware, I am interested in typical running time for C++ standard libraries. Have you seen any sources when such thing was analyzed? Does this question makes sense at all? Or no rules of thumb like this could be applied in practice?
First off, let's be clear. This: > > It is well-known that processor instruction for multiplication takes > several times more time than addition > > > is no longer true in general. It hasn't been true for many, many years, and needs to stop being repeated. On most common architectures, integer multiplies are a couple cycles and integer adds are single-cycle; floating-point adds and multiplies tend to have nearly equal timing characteristics (typically around 4-6 cycles latency, with single-cycle throughput). Now, to your actual question: it varies with both the architecture and the implementation. On a recent architecture, with a well written math library, simple elementary functions like `exp` and `log` usually require a few tens of cycles (20-50 cycles is a reasonable back-of-the-envelope figure). With a lower-quality library, you will sometimes see these operations require a few hundred cycles. For more complicated functions, like `pow`, typical timings range from high tens into the hundreds of cycles.
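If you would rather measure than rely on rules of thumb, a throwaway micro-benchmark gives ballpark relative costs. The sketch below uses Python's `timeit`, so interpreter and call overhead inflate every absolute number considerably; only the relative ordering is meaningful, and a C-level benchmark would be needed for actual cycle counts:

```python
import math
import timeit

# Each entry is one elementary operation to time on plain floats.
ops = {
    "add": "x + y",
    "mul": "x * y",
    "div": "x / y",
    "exp": "math.exp(x)",
    "atan": "math.atan(x)",
    "pow": "math.pow(x, y)",
}

setup = "import math; x = 1.2345; y = 6.789"
for name, stmt in ops.items():
    t = timeit.timeit(stmt, setup=setup, number=100000)
    print("{:5s} {:8.4f}s per 100k evaluations".format(name, t))
```

On most machines the transcendental functions land within a small factor of the arithmetic once call overhead is accounted for, which is consistent with the "tens of cycles" figure above.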
How to attach a SQL Server database from the command line Is it possible to enter a command line command (like in a batch file) to attach a detached database to SQL Server, in stead of opening the management studio and doing it in there?
you need to use: [sqlcmd Utility](http://msdn.microsoft.com/en-us/library/ms162773.aspx) > > The sqlcmd utility lets you enter > Transact-SQL statements, system > procedures, and script files at the > command prompt, in Query Editor in > SQLCMD mode, in a Windows script file > or in an operating system (Cmd.exe) > job step of a SQL Server Agent job. > This utility uses OLE DB to execute > Transact-SQL batches. > > > Then use [CREATE DATABASE (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms176061.aspx) to do the attach and [sp\_detach\_db (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms188031.aspx) to do the detach. The [sp\_attach\_db (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms179877.aspx) is going to be removed in a future version of Microsoft SQL Server.
How to use a DefaultDict with a lambda expression to make the default changeable? `defaultdict`s are useful when you want a dictionary that can create new keys on the fly, with a callable used to define the default value. For example, using `str` to make an empty string the default: ``` >>> food = defaultdict(str) >>> food['apple'] '' ``` You can also use a lambda to make an expression the default value. ``` >>> food = defaultdict(lambda: "No food") >>> food['apple'] 'No food' ``` However, you can't pass any parameters to this lambda function; that causes an error when it is called, since `defaultdict` invokes the factory with no arguments. ``` >>> food = defaultdict(lambda x: "{} food".format(x)) >>> food['apple'] Traceback (most recent call last): File "<pyshell#9>", line 1, in <module> food['apple'] TypeError: <lambda>() takes exactly 1 argument (0 given) ``` Even if you try to supply the parameter ``` >>> food['apple'](12) Traceback (most recent call last): File "<pyshell#9>", line 1, in <module> food['apple'] TypeError: <lambda>() takes exactly 1 argument (0 given) ``` How could these lambda functions be responsive rather than rigid expressions?
Using a variable in the expression can actually circumvent this somewhat. ``` >>> from collections import defaultdict >>> baseLevel = 0 >>> food = defaultdict(lambda: baseLevel) >>> food['banana'] 0 >>> baseLevel += 10 >>> food['apple'] 10 >>> food['banana'] 0 ``` The default lambda expression is tied to a variable that can change without affecting the other keys its already created. This is particularly useful when it can be tied to other functions that only evaluate when a non existant key is being accessed. ``` >>> joinTime = defaultdict(lambda: time.time()) >>> joinTime['Steven'] 1432137137.774 >>> joinTime['Catherine'] 1432137144.704 >>> for customer in joinTime: print customer, joinTime[customer] Catherine 1432137144.7 Steven 1432137137.77 ```
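If the default really must depend on the missing key itself, along the lines of the failing `lambda x: "{} food".format(x)` attempt in the question, `defaultdict` cannot help, since the factory is always called with no arguments. A plain `dict` subclass overriding `__missing__` can, because `__missing__` receives the key. A sketch (Python 3; the class name is invented for illustration):

```python
class KeyedDefaultDict(dict):
    """dict whose default value is computed from the missing key.

    defaultdict's factory takes no arguments, but __missing__
    receives the key, so the default can depend on it.
    """
    def __init__(self, default_factory):
        super().__init__()
        self.default_factory = default_factory

    def __missing__(self, key):
        value = self.default_factory(key)
        self[key] = value  # store it, like defaultdict does
        return value

food = KeyedDefaultDict(lambda key: "{} food".format(key))
print(food["apple"])  # apple food
```

This keeps the same store-on-first-access behavior as `defaultdict`, so later lookups of the same key return the cached value.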
Facebook Like button moves and becomes 1000px high after going back to a page We're seeing a weird behavior with the facebook like button. Use case is as follows: 1. Load a page, facebook like button is in the header. 2. Click a link, go to a new page, facebook like button is on that page too. 3. Click back button. 4. Like button on the previous page has now moved ~100px to the right, taken on `position: absolute` and is now 1000px high, with a transparent element that covers all elements lower than the FB like button on the page. 5. Click forward. Second page has the same bug. Here's some of the other weird behaviors we're seeing with it: - There's a span inside the FB div that just has a height and width added to the element style. It starts out as 200px x 1000px. Occasionally it will suddenly jump to 0px x 0px -- so far I see no rhyme or reason as to why. - Some of our pages have a lot of FB like buttons. They all exhibit the same behavior. - On iPad, the FB button takes on `display: block` after coming back to a page.
Apparently this is a known issue. <https://developers.facebook.com/x/bugs/663421210369743/> We've fixed this using the following: ``` .fb-like { display: inline-block !important; vertical-align: middle !important; margin-right: 1.5em; line-height: 30px; width: 81px !important; height: 22px !important; position: relative; z-index: 1002; } .fb-like > span { vertical-align: top !important; min-width: 81px !important; min-height: 22px !important; } .fb-like > span > iframe { min-width: 81px !important; min-height: 22px !important; } ```
How to properly build a JSON web token in Python I am trying to build a JSON web token in Python Django using the [pyjwt library](https://github.com/jpadilla/pyjwt), referencing a suggestion by **Madhu\_Kiran\_K** from this stackoverflow [link](https://stackoverflow.com/questions/38635092/how-to-create-json-web-token-to-user-login-in-django-rest-framework) In the code below, I am trying to generate a JSON Web Token that stores the user's ID and email and has an expiry date set to 2 days into the future. I can successfully encode and decode the token to get all the required user information and the token expiration time. **My requirements now:** 1.) How do I validate and ensure that the token sent by the client is still valid and hence not tampered with? 2.) How do I check the expiration time and print a message? Here is what I have done to that effect ``` #check if the token were tampered if content['user_id'] =='' or content['email'] == '': print("Invalid Token") else: print("Token is still valid") #check if token has expired after 2 days if content['exp'] > dt: print("Token is still active") else: print("Token expired. Get new one") ``` The expiration-time check fails with the error message **not supported between instances of int and datetime.datetime** (because the decoded `exp` claim is an int timestamp, while `dt` is a `datetime`). What is the proper way to validate?
Thanks **below is the full code** ``` #!/usr/bin/python import sys import re import json from datetime import datetime, timedelta import jwt #https://www.guru99.com/date-time-and-datetime-classes-in-python.html #timedelta(days=365, hours=8, minutes=15) #'exp': int(dt.strftime('%s')) dt = datetime.now() + timedelta(days=2) encoded_token = jwt.encode({'user_id': "abc", 'email': "nancy@gmail.com", 'exp': dt }, 'MySECRET goes here', algorithm='HS256') print(encoded_token) #decode above token decode_token=jwt.decode(encoded_token, 'MySECRET goes here', algorithms=['HS256']) content = decode_token print(content) print(content['user_id']) print('json token successfully retrieved') if content['user_id'] =='' or content['email'] == '': print("Invalid Token") else: print("Token is still valid") #check if token has expired after 2 days if content['exp'] > dt: print("Token is still active") else: print("Token expired. Get new one") ```
You don't have to manually check the token validity or the `exp` deadline, the `jwt.decode()` function validates *both for you*. For example, `jwt.decode()` will raise an [`jwt.ExpiredSignatureError` exception](https://pyjwt.readthedocs.io/en/latest/api.html#jwt.exceptions.ExpiredSignatureError) if the token is too old, so just catch that explicitly: ``` try: decode_token = jwt.decode(encoded_token, 'MySECRET goes here', algorithms=['HS256']) print("Token is still valid and active") except jwt.ExpiredSignatureError: print("Token expired. Get new one") except jwt.InvalidTokenError: print("Invalid Token") ``` The [`jwt.InvalidTokenError` exception](https://pyjwt.readthedocs.io/en/latest/api.html#jwt.exceptions.InvalidTokenError) is the base exception, catching that covers all the possible ways that token validation can fail. If you want to catch any subclasses such as `jwt.ExpiredSignatureError`, put them in `except ...:` blocks before catching `InvalidTokenError`. See the [usage examples documentation](https://pyjwt.readthedocs.io/en/latest/usage.html), especially the [*Expiration Time Claim* section](https://pyjwt.readthedocs.io/en/latest/usage.html#expiration-time-claim-exp): > > Expiration time is automatically verified in `jwt.decode()` and raises `jwt.ExpiredSignatureError` if the expiration time is in the past[.] > > >
git push --all --tags: incompatible When I try to push all branches and tags to a remote, git emits the following error: ``` # git push origin --all --tags fatal: --all and --tags are incompatible ``` However, this works: ``` # git push origin refs/heads refs/tags Everything up-to-date ``` --- Questions: 1. Why git names push-all-branches `--all` but not `--branches` or `--heads`? `git push origin --all` only pushes branches, not all refs. What's the philosophy behind such naming? Does this mean tags are really 2nd-class citizens in a Git repo? 2. **Why git doesn't allow the use of both `--all` and `--tags`?** --- PS. I know there's a `--follow-tags` option. I know pushing all tags is not recommended by some people, but this thread is not about that. --- `man git-push`: > > --all > > > Push all branches (i.e. refs under refs/heads/); cannot be used with other <refspec>. > > > --tags > > > All refs under refs/tags are pushed, in addition to refspecs explicitly listed on the command line. > > >
The message "`--all and --tags are incompatible`" comes from [`builtin/push.c#cmd_push()`](https://github.com/git/git/blob/12039e008f9a4e3394f3f94f8ea897785cb09448/builtin/push.c#L604) This was introduced by **[Marek Zawirski](https://github.com/zawir)** in [commit b259f09](https://github.com/git/git/commit/b259f09b181c6650253f2ab60f5375d3ff8e3872) in August 2008 (Git v1.6.1-rc1): > > ## Make push more verbose about illegal combination of options > > > It may be unclear that `--all`, `--mirror`, `--tags` and/or explicit refspecs > are illegal combinations for `git push`. > > > Git was silently failing in these cases, while we can complaint more properly about it. > > > In 2008, Marek was [implementing `git push` in JGit](https://marc.info/?l=git&m=121460442905215&w=2), and [proposed that patch mentioned above](https://marc.info/?l=git&m=121890953115497&w=2), adding: > > I forgot about this one, it was reported long time ago. > > > It seems that it may be really unclear what's going on with git failing on `$ git push --tags --all` and similar, as it is implementation related perhaps. > > > While [it is possible to configure a remote](https://marc.info/?l=git&m=121021757220073&w=2) with: ``` [remote "origin"] push = refs/heads/* push = refs/tags/* ``` [Jeff King](https://github.com/peff) [discovered a bug](https://marc.info/?l=git&m=121022017823071&w=2) (kind of deadlock) which is probably why this patch exists. > > The sender does a "tellme-more" and then waits for a line back. > > The receiver gets the tellme-more, but never says anything else, presumably > because he doesn't have that commit (because master is ahead of any > tags). > > > In short, pushing branches and tags separately seems easier to support than pushing them together. See more with "[Push git commits & tags simultaneously](https://stackoverflow.com/a/3745250/6309)", with `git push --follow-tags`, or `git config --global push.followTags true`.
Dodging the inaccuracy of a floating point number I totally understand the problems associated with floating points, but I have seen a very interesting behavior that I can't explain. ``` float x = 1028.25478; long int y = 102825478; float z = y/(float)100000.0; printf("x = %f ", x); printf("z = %f",z); ``` The output is: > > x = 1028.254761 z = 1028.254780 > > > Now if floating numbers failed to represent that specific random value (1028.25478) when I assigned that to variable x. Why isn't it the same in case of variable z? P.S. I'm using pellesC IDE to test the code (C11 compiler).
I am pretty sure that what happens here is that the latter floating point variable is elided and instead kept in a double-precision register, and then passed as-is as an argument to `printf`. The compiler then believes that it is safe to pass this number at double precision after default argument promotions. I managed to produce a *similar* result using GCC 7.2.0, with these switches: ``` -Wall -Werror -ffast-math -m32 -funsafe-math-optimizations -fexcess-precision=fast -O3 ``` The output is ``` x = 1028.254761 z = 1028.254800 ``` The number is slightly different there^. The [description for `-fexcess-precision=fast`](http://man7.org/linux/man-pages/man1/gcc.1.html) says: > > `-fexcess-precision=style` > > > This option allows further control over excess precision on > machines where floating-point operations occur in a format with > more precision or range than the IEEE standard and interchange > floating-point types. By default, `-fexcess-precision=fast` is in > effect; this means that operations may be carried out in a wider > precision than the types specified in the source if that would > result in faster code, and it is unpredictable when rounding to > the types specified in the source code takes place. When > compiling C, if `-fexcess-precision=standard` is specified then > excess precision follows the rules specified in ISO C99; in > particular, both casts and assignments cause values to be rounded > to their semantic types (whereas `-ffloat-store` only affects > assignments). This option [`-fexcess-precision=standard`] is enabled by default for C if a > strict conformance option such as `-std=c99` is used. `-ffast-math` > enables `-fexcess-precision=fast` by default regardless of whether > a strict conformance option is used. > > > This behaviour isn't C11-compliant.
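The two printed values can be reproduced without a C compiler. Python floats are double precision, but the `struct` module can force a value through genuine IEEE 754 single precision, which mirrors what the assignment to `float x` does in the C snippet, while the division happens at (at least) double precision:

```python
import struct

def to_float32(x):
    """Round a double to the nearest IEEE 754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# The literal squeezed into a float, like `float x = 1028.25478;`
print("{:f}".format(to_float32(1028.25478)))            # 1028.254761

# The division carried out in double precision; 100000.0 is exactly
# representable in float32, so casting it loses nothing.
print("{:f}".format(102825478 / to_float32(100000.0)))  # 1028.254780
```

So `x` differs because 1028.25478 is not representable in 24 bits of mantissa, while `z` stays "exact looking" because the quotient was computed and printed at a wider precision.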
Trying to draw lines with JPanel I am trying to draw lines using `JPanel` and I have hit somewhat of a wall. I can get two sides down, but once it comes to subtracting from the x coord it all goes wrong. ![lines i'm trying to draw](https://i.stack.imgur.com/swZXA.png) ``` package GUIstuff; import java.awt.Graphics; import javax.swing.JPanel; public class DrawPanel extends JPanel{ public void paintComponent (Graphics g){ super.paintComponent(g); int width = getWidth(); int height = getHeight(); int drawCounter = 0; // counters for all the while statements int drawCounter2 = 0; int drawCounter3 = 0; int drawCounter4 = 0; int x1 = 0; // coords change with the while statements int x2 = 0; int y1 = 0; int y2 = 0; while (drawCounter <= 15){ // counter y2 = 250; g.drawLine(x1, y1, x2, y2); x2 = x2 + 15; y1 = y1 + 15; drawCounter++; } int u1 = 0; int u2 = 0; int v1 = 0; int v2 = 0; while (drawCounter2 <= 15){ u2 = 250; g.drawLine(u1, v1, u2, v2); u1 = u1 + 15; v2 = v2 + 15; drawCounter2++; } int a1 = 0; int a2 = 0; int b1 = 0; int b2 = 0; while (drawCounter3 <= 15){ a2 = 250; g.drawLine(a1, b1, a2, b2); b1 = b1 + 15; a2 = a2 - 15; drawCounter3++; } } } ``` Here is my runner class ``` package GUIstuff; import javax.swing.JFrame; public class DrawPanelTest { public static void main (String args[]){ DrawPanel panel = new DrawPanel(); JFrame application = new JFrame(); application.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); application.add(panel); application.setSize (250, 250); application.setVisible(true); } } ``` I have the lines in the bottom left and the upper right, but when I try to subtract from x I just get lines going across the whole box.
When doing custom painting you should override the `getPreferredSize()` method so the panel can be displayed at its preferred size. When you draw the lines, two variables are the same and two variables differ. Use the width/height variable when appropriate instead of hardcoding a number. In the example below I did the left and bottom sides. The bottom side shows how to subtract. I'll let you figure out the pattern for the other two sides. Also, I made the panel a little more dynamic so it will be easy to configure the number of lines you want painted and the gap between the lines. ``` import java.awt.*; import javax.swing.*; public class DrawSSCCE extends JPanel { private int lines; private int lineGap; public DrawSSCCE(int lines, int lineGap) { this.lines = lines; this.lineGap = lineGap; } @Override public Dimension getPreferredSize() { int size = lines * lineGap; return new Dimension(size, size); } @Override public void paintComponent(Graphics g) { int width = getWidth(); int height = getHeight(); // Draw lines starting from left to bottom int x = lineGap; int y = 0; for (int i = 0; i < lines; i++) { g.drawLine(0, y, x, height); x += lineGap; y += lineGap; } // Draw lines starting from bottom to right x = 0; y = height - lineGap; for (int i = 0; i < lines; i++) { g.drawLine(x, height, width, y); x += lineGap; y -= lineGap; } // Draw lines starting from right to top // Draw lines starting from top to left } private static void createAndShowUI() { JFrame frame = new JFrame("DrawSSCCE"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.add( new DrawSSCCE(15, 15) ); frame.pack(); frame.setLocationByPlatform( true ); frame.setVisible( true ); } public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { createAndShowUI(); } }); } } ```
Are Java's public fields just a tragic historical design flaw at this point? It seems to be Java orthodoxy at this point that one should basically never use public fields for object state. (I don't necessarily agree, but that's not relevant to my question.) Given that, would it be right to say that from where we are today, it's clear that Java's public fields were a mistake/flaw of the language design? Or is there a rational argument that they're a useful and important part of the language, even today? Thanks! **Update:** I know about the more elegant approaches, such as in C#, Python, Groovy, etc. I'm not directly looking for those examples. I'm really just wondering if there's still someone deep in a bunker, muttering about how wonderful public fields really are, and how the masses are all just sheep, etc. **Update 2:** Clearly static final public fields are the standard way to create public constants. I was referring more to using public fields for object state (even immutable state). I'm thinking that it does seem like a design flaw that one should use public fields for constants, but not for state… a language's rules should be enforced naturally, by syntax, not by guidelines.
I like them, as long as the field is **final** and is only used internally in the application, not exposed in an API for other applications. This keeps your code shorter and more readable. You should not expose public fields in an API because by exposing public fields, you also expose the implementation. If you expose it as a `getXXX()` method instead, you could change the implementation without changing the API interface. E.g. you could change the implementation to get the value from a remote service, but the applications that use the API don't need to know this. This is a viable design for `public final` fields in **immutable** classes. From *Effective Java*: > > **Item 14: In public classes, use accessor methods, not public fields** > > > ...if a class is package-private or is a private nested class, there is nothing inherently wrong with exposing its data fields. This approach generates less clutter than the accessor-method approach. > > > While it's never a good idea for a public class to expose fields directly, it is less harmful if the fields are immutable. > > > See also [Why shouldn't I use immutable POJOs instead of JavaBeans?](https://stackoverflow.com/questions/3511120/why-shouldnt-i-use-immutable-pojos-instead-of-javabeans)
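A minimal sketch of the style described above — `public final` fields on a small immutable type (the `Point` class here is made up for illustration):

```java
// Illustration only: an immutable value type exposing public final fields.
// Because the fields are final and primitive, nothing about the object can
// change after construction, which is what makes this style defensible.
public class Point {
    public final int x;
    public final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x + "," + p.y); // direct field access, no getters
    }
}
```

Whether you would still wrap this in accessors for a published API is exactly the trade-off discussed above.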
Can I throw an exception in JavaScript that stops JavaScript execution? I am trying to simulate a problem where a script that is loaded from an external URL stops execution of any more scripts on my site. I tried to simulate such a problem by calling a function that does not exist. I can see the error in Firebug, but different scripts on the page are still executed. Are there different kinds of errors in JavaScript? If yes: what kind of error stops script execution? I only need this answer for Firefox. **EDIT** This question is easy to misunderstand, but Rob W got it: I need to throw an exception and that exception needs to stop further script execution.
> > Answer to the title: **No** > > Answer to "Are there different kinds of errors in JavaScript?": Yes, see MDN: [Error](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Error) > > Syntax errors will prevent a whole script block from being executed, > other errors (TypeErrors, reference errors) will only stop the execution after the occurrence of the error. > > > Different `<script>` blocks are executed separately. You cannot prevent the second block from execution by throwing an error in the first block (Demo: <http://jsfiddle.net/WJCEN/>). ``` <script>Example: Syntax error in this script.</script> <script>console.log('Still executed.')</script> ``` Also, if an error is caught using `try-catch` (demo: <http://jsfiddle.net/WJCEN/1/>), then the error will not even stop the execution of a whole block. ``` try {throw 'Code';}catch(e){} console.log('Still executed'); ``` --- There is no general one-size-fits-all method to stop all code from running. For individual scripts, you can use some tricks to prevent the code from running further. ## Example 1 ([demo](http://jsfiddle.net/gaA4t/)): Temporarily overwrite a method ``` 1: <script>window._alert = alert;alert=null;</script> 2: <script>alert('Some code!');confirm('Not executing');</script> 3: <script>alert=_alert;delete window._alert;alert('Done!');</script> ``` This method is based on the fact that script 2 expects `alert` to be a function. We have rewritten `alert` as a non-function property (script 1). Script 2 throws a `TypeError`, and the second block is skipped. We restore the original values in script 3. ## Example 2 ([demo](http://jsfiddle.net/gaA4t/1/)): Define a constant method, which cannot be overwritten. ``` 4. <script>Object.defineProperty(window, 'test',{value:null});</script> 5. 
<script>var test=function(){alert('Test');};test();alert('What?');</script> ``` This method relies on the [`Object.defineProperty`](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/defineProperty) method to effectively define a constant value. In strict mode, the `var test` declaration would throw a TypeError: "test is read-only". When strict mode is not enabled, a TypeError will be thrown at `test()`: "test is not a function" (because we defined `test` to be constant, in script 4). Note: The last method does not work correctly with function declarations (see [bug #115452](http://code.google.com/p/chromium/issues/detail?id=115452), Chrome 17).
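The caught-error case is easy to verify in any engine, including Node — a small sketch of the behaviour described above:

```javascript
// A caught error aborts the rest of its try block, but execution resumes
// after the catch clause: later statements still run.
const log = [];
try {
  log.push('before');
  null.someMethod();          // TypeError thrown here
  log.push('unreachable');    // never executed
} catch (e) {
  log.push('caught: ' + e.name);
}
log.push('after');
console.log(log.join(' | ')); // before | caught: TypeError | after
```

Only the statements between the throw site and the end of the `try` block are skipped; everything after the `try-catch` runs normally.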
Matching IDs in two datasets I have two sets of data, comprising pre and post data. Respondents have unique IDs, and I want to create a subset which includes only those who responded to both surveys. Example dataset: ``` pre.data <- data.frame(ID = c(1:10), Y = sample(c("yes", "no"), 10, replace = TRUE), Survey = 1) post.data <- data.frame(ID = c(1:3,6:10), Y = sample(c("yes", "no"), 8, replace = TRUE), Survey = 2) all.data <- rbind(pre.data, post.data) ``` I have the following function: ``` match <- function(dat1, dat2, dat3){ #dat1 is whole dataset (both stitched together) #dat2 is pre dataset #dat3 is post dataset selectedRows <- (dat1$ID %in% dat2$ID & dat1$ID %in% dat3$ID) matchdata <- dat1[selectedRows,] return(matchdata) } prepost.match.data <- match(all.data, pre.data, post.data) ``` I think there must be a better way than this function of doing the same thing, but I cannot think how. How I have done it seems a bit messy. I mean, it works - it does what I want it to, but I can't help thinking there's a better way. My apologies if this has already been asked in a similar way, but I was unable to find it - in which case please do point me towards a relevant answer.
**Note:** Arun posted the same answer in a comment a bit earlier than me. You can use `intersect` like this: ``` all.data[all.data$ID %in% intersect(pre.data$ID, post.data$ID),] ``` Which gives: ``` ID Y Survey 1 1 yes 1 2 2 no 1 3 3 no 1 6 6 yes 1 7 7 yes 1 8 8 yes 1 9 9 no 1 10 10 yes 1 11 1 no 2 12 2 yes 2 13 3 no 2 14 6 no 2 15 7 yes 2 16 8 yes 2 17 9 no 2 18 10 yes 2 ```
What kind of controls should I use in Windows Phone 7 to make a "star" control I show some items from the database in a list, and I want the user to be able to mark some of them as favorites. The best way would be to show a star icon for the user to click on, which would then turn into a slightly different star to indicate that the item is now a favorite. What controls should I use for those stars? Could I bind them to some boolean property of the item?
You can also use vector graphics to achieve this without using png icons. I created this style a while a go, basically it is for `CheckBox` but I think it also works for `ToggleButton` by simply changing the `TargetType` from `CheckBox` to `ToggleButton`. By setting the `IsChecked` of either `CheckBox` or `ToggleButton` to `True`, the star will be filled with the accent color of the phone. **The Style** ``` <Style x:Key="StarCheckBoxStyle" TargetType="CheckBox"> <Setter Property="Background" Value="{StaticResource PhoneAccentBrush}" /> <Setter Property="FontFamily" Value="{StaticResource PhoneFontFamilySemiBold}" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="CheckBox"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused" /> <VisualState x:Name="Unfocused" /> </VisualStateGroup> <VisualStateGroup x:Name="CheckStates"> <VisualStateGroup.Transitions> <VisualTransition GeneratedDuration="00:00:00.2000000" /> </VisualStateGroup.Transitions> <VisualState x:Name="Checked"> <Storyboard> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Duration="00:00:00.0010000" Storyboard.TargetName="check" Storyboard.TargetProperty="(UIElement.Opacity)"> <EasingDoubleKeyFrame KeyTime="00:00:00" Value="1" /> </DoubleAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="Indeterminate" /> <VisualState x:Name="Unchecked" /> </VisualStateGroup> <VisualStateGroup x:Name="ValidationStates"> <VisualState x:Name="Valid" /> <VisualState x:Name="InvalidUnfocused" /> <VisualState x:Name="InvalidFocused" /> </VisualStateGroup> <VisualStateGroup x:Name="CommonStates"> <VisualStateGroup.Transitions> <VisualTransition GeneratedDuration="00:00:00.2000000" /> </VisualStateGroup.Transitions> <VisualState x:Name="MouseOver" /> <VisualState x:Name="Pressed" /> <VisualState 
x:Name="Disabled" /> <VisualState x:Name="Normal" /> </VisualStateGroup> </VisualStateManager.VisualStateGroups> <Path x:Name="check" Stretch="Fill" Height="48" Width="48" UseLayoutRounding="False" Data="M16.000002,0 L19.77688,12.223213 L32,12.222913 L22.111122,19.776972 L25.888546,32 L16.000002,24.445454 L6.1114569,32 L9.8888807,19.776972 L8.574415E-09,12.222913 L12.223121,12.223213 z" Opacity="0" Fill="{TemplateBinding Background}" Grid.Column="1" HorizontalAlignment="Center" VerticalAlignment="Center" /> <Path x:Name="stroke" Stretch="Fill" Stroke="{TemplateBinding Background}" Height="48" Width="48" UseLayoutRounding="False" Data="M16.000002,0 L19.77688,12.223213 L32,12.222913 L22.111122,19.776972 L25.888546,32 L16.000002,24.445454 L6.1114569,32 L9.8888807,19.776972 L8.574415E-09,12.222913 L12.223121,12.223213 z" Grid.Column="1" HorizontalAlignment="Center" VerticalAlignment="Center" /> <ContentPresenter VerticalAlignment="{TemplateBinding VerticalContentAlignment}" Margin="0,0,8,0" /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> ``` **Apply the style** ``` <CheckBox Content="unchecked state" Style="{StaticResource StarCheckBoxStyle}" /> <CheckBox IsChecked="True" Content="checked state" Style="{StaticResource StarCheckBoxStyle}" /> ``` **How they look** ![enter image description here](https://i.stack.imgur.com/pKCRL.png)
swift UIActivityIndicatorView while NSURLConnection I know how to animate the UIActivityIndicatorView. I know how to make a connection with `NSURLConnection.sendSynchronousRequest`. But I don't know how to animate the UIActivityIndicatorView WHILE making a connection with `NSURLConnection.sendSynchronousRequest`. Thanks
Don't use `sendSynchronousRequest` from the main thread (because it will block whatever thread you run it from). You could use `sendAsynchronousRequest`, or, given that `NSURLConnection` is deprecated, you should really use `NSURLSession`, and then your attempt to use `UIActivityIndicatorView` should work fine. Note that the data task must be started with `resume()`. For example, in Swift 3: ``` let indicator = UIActivityIndicatorView(activityIndicatorStyle: .gray) indicator.center = view.center view.addSubview(indicator) indicator.startAnimating() URLSession.shared.dataTask(with: request) { data, response, error in defer { DispatchQueue.main.async { indicator.stopAnimating() } } // use `data`, `response`, and `error` here }.resume() // but not here, because the above runs asynchronously ``` Or, in Swift 2: ``` let indicator = UIActivityIndicatorView(activityIndicatorStyle: .Gray) indicator.center = view.center view.addSubview(indicator) indicator.startAnimating() NSURLSession.sharedSession().dataTaskWithRequest(request) { data, response, error in defer { dispatch_async(dispatch_get_main_queue()) { indicator.stopAnimating() } } // use `data`, `response`, and `error` here }.resume() // but not here, because the above runs asynchronously ```
Sublime Text 2 tail -f in Windows I was wondering if Sublime Text 2 can "tail -f" a file like in Linux; I'm using Windows, btw. I want to see the Apache error log file in XAMPP. Notepad++ has it; is there a plugin that can do it in Sublime Text 2?
I don't know if it's possible or not, but I usually use **[Baretail](http://www.baremetalsoft.com/baretail/)**, which handles tailing pretty well. The thing I really love about Baretail is the regexes you can define to highlight parts of your log file. For example, you can give - each line with "error" in it a red background with bold white text, - each warning message an orange background with italic white text, - etc ... ![enter image description here](https://i.stack.imgur.com/0OdLJ.gif) Otherwise, it seems [you are not the only one](http://sublimetext.userecho.com/topic/98212-a-tail-f-function-would-be-soooo-awesome/) who has requested this kind of feature (you should upvote the request on *userecho*).
how to generate such an image in Mathematica I am thinking of processing an image to generate ![the following effect](https://i.stack.imgur.com/R3xte.jpg) in Mathematica, given its powerful image processing capabilities. Could anyone give some idea as to how to do this? Thanks a lot.
Here's one version, using textures. It of course doesn't act as a real lens, just repeats portions of the image in an overlapping fashion. ``` t = CurrentImage[]; (* square off the image to avoid distortion *) t = ImageCrop[t, {240,240}]; n = 20; Graphics[{Texture[t], Table[ Polygon[ Table[h*{Sqrt[3]/2, 0} + (g - h)*{Sqrt[3]/4, 3/4} + {Sin[t], Cos[t]}, {t, 0., 2*Pi - Pi/3, Pi/3} ], VertexTextureCoordinates -> Transpose[{ Rescale[ (1/4)*Sqrt[3]*(g - h) + (Sqrt[3]*h)/2., {-n/2, n/2}, {0, 1} ] + {0, Sqrt[3]/2, Sqrt[3]/2, 0, -(Sqrt[3]/2), -(Sqrt[3]/2)}/(n/2), Rescale[ (3.*(g - h))/4, {-n/2, n/2}, {0, 1} ] + {1, 1/2, -(1/2), -1, -(1/2), 1/2}/(n/2) }] ], {h, -n, n, 2}, {g, -n, n, 2} ] }, PlotRange -> n/2 - 1 ] ``` Here's the above code applied to the [standard test image](http://en.wikipedia.org/wiki/Standard_test_image) ([Lena](http://en.wikipedia.org/wiki/Lenna)) ![enter image description here](https://i.stack.imgur.com/o2Wzr.png)
How do I measure a time interval in C? I would like to measure time in C, and I am having a tough time figuring it out. All I want is something like this: - start a timer - run a method - stop the timer - report the time taken (at least to microsecond accuracy) Any help would be appreciated. (I am compiling on Windows using MinGW)
High resolution timers that provide a resolution of 1 microsecond are system-specific, so you will have to use different methods to achieve this on different OS platforms. You may be interested in checking out Song Ho Ahn's "High Resolution Timer" article, which implements a cross-platform C++ timer class based on the functions described below. --- **Windows** The Windows API provides extremely high resolution timer functions: `QueryPerformanceCounter()`, which returns the current elapsed ticks, and `QueryPerformanceFrequency()`, which returns the number of ticks per second. Example: ``` #include <stdio.h> #include <windows.h> // for Windows APIs int main(void) { LARGE_INTEGER frequency; // ticks per second LARGE_INTEGER t1, t2; // ticks double elapsedTime; // get ticks per second QueryPerformanceFrequency(&frequency); // start timer QueryPerformanceCounter(&t1); // do something // ... // stop timer QueryPerformanceCounter(&t2); // compute and print the elapsed time in millisec elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart; printf("%f ms.\n", elapsedTime); } ``` **Linux, Unix, and Mac** For Unix or Linux based systems, you can use `gettimeofday()`. This function is declared in "sys/time.h". Example: ``` #include <stdio.h> #include <sys/time.h> // for gettimeofday() int main(void) { struct timeval t1, t2; double elapsedTime; // start timer gettimeofday(&t1, NULL); // do something // ... // stop timer gettimeofday(&t2, NULL); // compute and print the elapsed time in millisec elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("%f ms.\n", elapsedTime); } ```
Replacing automatically created ApplicationDbContext I should preface this with a disclaimer saying I'm new to ASP.NET development and don't really understand database contexts, despite spending the last hour reading documentation. When I built my ASP.NET MVC 5 application I chose to have individual user account authentication. Visual Studio created a file called `IdentityModels.cs`, and within there it defined an `ApplicationUser` class and an `ApplicationDbContext` class. I've done some development work, and my CRUD controllers use `ApplicationDbContext` to talk to the database, by having this private field on every controller: ``` private ApplicationDbContext db = new ApplicationDbContext(); ``` In the controller actions I then do things like: ``` return View(db.Trains.ToList()); ``` I want to tidy this database context stuff up, but I need to understand it first. My main questions are: 1. Is it okay to use just one database context for my entire application, like I'm doing now? 2. Can I replace the `ApplicationDbContext` class defined in `IdentityModels.cs` with my own? 3. The `ApplicationDbContext` class derives from `IdentityDbContext<ApplicationUser>`, does that mean I should have separate database contexts for my user authentication stuff provided by Visual Studio, and my own code? I think my end goal is to use my own database context, called `DatabaseContext`, which is then used in a base controller that all of my controllers inherit from. Then I only have one place where the database context is instantiated, instead of within every controller. Who knows, I may be thinking the wrong way about this. Everybody seems to have their own preferred way of dealing with this. Thank you!
> > Is it okay to use just one database context for my entire application, like I'm doing now? > > > - If you decided that you are going to have access to the DB directly from the UI layer (which is a separate discussion), then it is OK, since `ApplicationDbContext` is a private field of your controller and controllers are created and disposed per request - `ApplicationDbContext` will be created and disposed per request. > > Can I replace the ApplicationDbContext class defined in IdentityModels.cs with my own? > > > - You definitely can do it. It is used to create a `UserStore` which receives a `DbContext` as a constructor argument, so this `var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(new CustomDbContext("connection")));` will work. You still have to make sure that `ApplicationUser` is an entity in your custom context. Of course you can override/replace `ApplicationUser` as well. > > The ApplicationDbContext class derives from > IdentityDbContext, does that mean I should have > separate database contexts for my user authentication stuff provided > by Visual Studio, and my own code? > > > By default Asp.Net Identity generates a new database for you and `ApplicationDbContext` is configured to work with this database. You can store your authentication-related entities in any other database as well; you just need to make sure that all the related tables are there. You can also extend this database to include other tables that you are using in your application, so you could use the same context all over. **P.S:** `ApplicationDbContext` doesn't have to implement `IdentityDbContext<ApplicationUser>`; extending a default `DbContext` works as well (if you already generated a DB you will have to update it/use code migrations for the following to work): ``` public class ApplicationDbContext : DbContext { public ApplicationDbContext() : base("DefaultConnection") { } public static ApplicationDbContext Create() { return new ApplicationDbContext(); } } ```
Why can't I filter my WHERE clause on a CASE statement column in SQL Server? In my select statement there is a CASE WHEN THEN ELSE END AS statement that I am not able to filter on in my WHERE clause. I do not see why this would be an issue; could someone shed some light? ``` SELECT CASE WHEN m.Country IN ('CANADA', 'UNITED STATES', 'USA', 'MEXICO') THEN 'NA' WHEN m.Country IN ('BRAZIL') THEN 'JD2' WHEN m.Country IN ('NZ', 'NEW ZEALAND', 'AUSTRALIA', 'AUSTRALASIA') THEN 'ANZ' ELSE 'Unknown' END AS DerivedRegion, m.ID, m.[Account Name], m.[Display Name], m.[Last Name], m.[First Name] FROM dbo.Users AS m WHERE DerivedRegion = 'Unknown' ``` The WHERE clause gives me the error "Invalid column name 'DerivedRegion'". Why?
`WHERE` is processed before `SELECT`. It doesn't know what `DerviedRegion` is at that point. I'd recommend using a `NOT IN` in this case to exclude the list of countries. However, if you really want to use your CASE you could do something like this ``` SELECT * FROM ( SELECT CASE WHEN m.Country IN ('CANADA', 'UNITED STATES', 'USA', 'MEXICO') THEN 'NA' WHEN m.Country IN ('BRAZIL') THEN 'JD2' WHEN m.Country IN ('NZ', 'NEW ZEALAND', 'AUSTRALIA', 'AUSTRALASIA') THEN 'ANZ' ELSE 'Unknown' END AS DerivedRegion, m.ID, m.[Account Name], m.[Display Name], m.[Last Name], m.[First Name] FROM dbo.Users AS m ) AS x WHERE x.DerivedRegion = 'Unknown' ``` Check out [MSDN](http://msdn.microsoft.com/en-us/library/ms189499.aspx) and scroll down to **Logical Processing Order of the SELECT statement** to see the order in which a query is processed.
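Since the wrapped form above is plain SQL, it can be sanity-checked against any engine. Here is a quick self-contained check using SQLite from the Python standard library (the table and sample data are invented for the demo; SQL Server behaves the same way for the subquery form):

```python
# Build a toy Users table and run the derived-table version of the query:
# the alias DerivedRegion is visible to the outer WHERE clause.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (ID INTEGER, Country TEXT)")
conn.executemany("INSERT INTO Users VALUES (?, ?)",
                 [(1, "CANADA"), (2, "BRAZIL"), (3, "FRANCE"), (4, "SPAIN")])

rows = conn.execute("""
    SELECT * FROM (
        SELECT CASE
                 WHEN Country IN ('CANADA', 'UNITED STATES', 'USA', 'MEXICO') THEN 'NA'
                 WHEN Country IN ('BRAZIL') THEN 'JD2'
                 WHEN Country IN ('NZ', 'NEW ZEALAND', 'AUSTRALIA', 'AUSTRALASIA') THEN 'ANZ'
                 ELSE 'Unknown'
               END AS DerivedRegion,
               ID
        FROM Users
    ) AS x
    WHERE x.DerivedRegion = 'Unknown'
""").fetchall()

print(sorted(rows))  # [('Unknown', 3), ('Unknown', 4)]
```

Only the rows whose country fell through to the `ELSE` branch survive the outer filter.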
Adding type constraints to the context of instance declarations in Haskell I am trying to represent weighted edges. I eventually want OutE to be an instance of Eq and Ord, with the constraint that etype is an instance of Eq and Ord. Assume I have the following file as temp.hs: ``` data (Ord etype)=> OutE vtype etype = OutE {destVertex:: vtype, edgeValue::etype} applyFunBy accessor ordfun = (\x y -> (ordfun (accessor x) (accessor y))) instance Eq (OutE vtype etype) where --(==) :: Ord etype => (OutE vtype etype) -> (OutE vtype etype) -> Bool --(/=) :: Ord etype => (OutE vtype etype) -> (OutE vtype etype) -> Bool (==) = applyFunBy edgeValue (==) (/=) = applyFunBy edgeValue (/=) ``` When I load this in GHCi, I get the following errors: ``` temp.hs:10:19: Could not deduce (Ord etype) from the context (Eq (OutE vtype etype)) arising from a use of `edgeValue' at temp.hs:10:19-27 Possible fix: add (Ord etype) to the context of the instance declaration In the first argument of `applyFunBy', namely `edgeValue' In the expression: applyFunBy edgeValue (==) In the definition of `==': == = applyFunBy edgeValue (==) temp.hs:11:19: Could not deduce (Ord etype) from the context (Eq (OutE vtype etype)) arising from a use of `edgeValue' at temp.hs:11:19-27 Possible fix: add (Ord etype) to the context of the instance declaration In the first argument of `applyFunBy', namely `edgeValue' In the expression: applyFunBy edgeValue (/=) In the definition of `/=': /= = applyFunBy edgeValue (/=) Failed, modules loaded: none. ``` If I include the lines for the type signatures for (==) and (/=), I get: ``` temp.hs:6:1: Misplaced type signature: == :: (Ord etype) => (OutE vtype etype) -> (OutE vtype etype) -> Bool temp.hs:7:1: Misplaced type signature: /= :: (Ord etype) => (OutE vtype etype) -> (OutE vtype etype) -> Bool ```
You limited `etype` to be `Ord` in the definition of `OutE`: ``` data (Ord etype) => OutE vtype etype = ... ``` But in the `Eq` instance, you're actually trying to define the instance for *any* `etype` unrestrictedly. ``` instance Eq (OutE vtype etype) where ``` Of course this doesn't work since `OutE` itself is just defined for `Ord etype`s, thus you'll have to add the typeclass constraint to the instance definition as well. ``` instance (Ord etype) => Eq (OutE vtype etype) where ``` Note that one definition of either `==` or `/=` is sufficient for the typeclass to work. --- Note that it's often easier and therefore considered better style not to have typeclass constraints on `data`-types, but just on instances/methods that actually require the functionality of the typeclass. In many cases, one does not need the constraint and just ends up with unnecessarily clumsy type signatures. Take e.g. some ordered map type `Ord key => Map key value`. What if we just want to list all keys? Or get the number of elements? We don't need the keys to be `Ord` for these, so why not just leave the map unrestricted with simple ``` getKeys :: Map key value -> [key] getLength :: Map key value -> Int ``` and just add the typeclass when we really need it in a function like ``` insert :: Ord key => key -> value -> Map key value ```
Why do my p-values differ between logistic regression output, chi-squared test, and the confidence interval for the OR? I have built a logistic regression where the outcome variable is being cured after receiving treatment (`Cure` vs. `No Cure`). All patients in this study received treatment. I am interested in seeing if having diabetes is associated with this outcome. In R my logistic regression output looks as follows: ``` Call: glm(formula = Cure ~ Diabetes, family = binomial(link = "logit"), data = All_patients) ... Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.2735 0.1306 9.749 <2e-16 *** Diabetes -0.5597 0.2813 -1.990 0.0466 * ... Null deviance: 456.55 on 415 degrees of freedom Residual deviance: 452.75 on 414 degrees of freedom (2 observations deleted due to missingness) AIC: 456.75 ``` However, the confidence interval for the odds ratio **includes 1**: ``` OR 2.5 % 97.5 % (Intercept) 3.5733333 2.7822031 4.646366 Diabetes 0.5713619 0.3316513 1.003167 ``` When I do a chi-squared test on these data I get the following: ``` data: check X-squared = 3.4397, df = 1, p-value = 0.06365 ``` If you'd like to calculate it on your own the distribution of diabetes in the cured and uncured groups are as follows: ``` Diabetic cure rate: 49 / 73 (67%) Non-diabetic cure rate: 268 / 343 (78%) ``` My question is: Why don't the p-values and the confidence interval including 1 agree?
With generalized linear models, there are three different types of statistical tests that can be run. These are: Wald tests, likelihood ratio tests, and score tests. The excellent UCLA statistics help site has a discussion of them [here](https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faqhow-are-the-likelihood-ratio-wald-and-lagrange-multiplier-score-tests-different-andor-similar/). The following figure (copied from their site) helps to illustrate them: ![enter image description here](https://i.stack.imgur.com/BP0cD.gif) 1. The [Wald test](http://en.wikipedia.org/wiki/Wald_test) assumes that the likelihood is normally distributed, and on that basis, uses the degree of curvature to estimate the standard error. Then, the parameter estimate divided by the SE yields a $z$-score. This holds under large $N$, but isn't quite true with smaller $N$s. It is hard to say when your $N$ is large enough for this property to hold, so this test can be slightly risky. 2. [Likelihood ratio tests](http://en.wikipedia.org/wiki/Likelihood-ratio_test) look at the ratio of the likelihoods (or difference in log likelihoods) at its maximum and at the null. This is often considered the best test. 3. The [score test](http://en.wikipedia.org/wiki/Score_test) is based on the slope of the likelihood at the null value. This is typically less powerful, but there are times when the full likelihood cannot be computed and so this is a nice fallback option. The tests that come with `summary.glm()` are Wald tests. You don't say how you got your confidence intervals, but I assume you used `confint()`, which in turn calls `profile()`. More specifically, those confidence intervals are calculated by profiling the likelihood (which is a better approach than multiplying the SE by $1.96$). That is, they are analogous to the likelihood ratio test, not the Wald test. The $\chi^2$-test, in turn, is a score test. 
As your $N$ becomes indefinitely large, the three different $p$'s should converge on the same value, but they can differ slightly when you don't have infinite data. It is worth noting that the (Wald) $p$-value in your initial output is just barely significant, and there is little real difference between just over and just under $\alpha=.05$; that line isn't 'magic' ([quote](https://stats.stackexchange.com/a/783/7290)). Given that the two more reliable tests are just over $.05$, I would say that your data are not quite 'significant' by conventional criteria. Below I profile the coefficients on the scale of the linear predictor and run the likelihood ratio test explicitly (via `anova.glm()`). I get the same results as you:

```
library(MASS)
x = matrix(c(343-268,268,73-49,49), nrow=2, byrow=T);  x
#      [,1] [,2]
# [1,]   75  268
# [2,]   24   49
D = factor(c("N","Diabetes"), levels=c("N","Diabetes"))
m = glm(x~D, family=binomial)
summary(m)
# ...
# Coefficients:
#             Estimate Std. Error z value Pr(>|z|)    
# (Intercept)  -1.2735     0.1306  -9.749   <2e-16 ***
# DDiabetes     0.5597     0.2813   1.990   0.0466 *  
# ...
confint(m)
# Waiting for profiling to be done...
#                    2.5 %    97.5 %
# (Intercept) -1.536085360 -1.023243
# DDiabetes   -0.003161693  1.103671
anova(m, test="LRT")
# ...
#      Df Deviance Resid. Df Resid. Dev Pr(>Chi)  
# NULL                     1     3.7997           
# D     1   3.7997         0     0.0000  0.05126 .
chisq.test(x)
#   Pearson's Chi-squared test with Yates' continuity correction
# 
# X-squared = 3.4397, df = 1, p-value = 0.06365
```

---

As @JWilliman pointed out in a comment (now deleted), in `R`, you can also get a score-based p-value using `anova.glm(model, test="Rao")`. In the example below, note that the p-value isn't quite the same as in the chi-squared test above, because by default, `R`'s `chisq.test()` applies a continuity correction. If we change that setting, the p-values match:

```
anova(m, test="Rao")
# ...
#      Df Deviance Resid. Df Resid. Dev   Rao Pr(>Chi)  
# NULL                     1     3.7997                  
# D     1   3.7997         0     0.0000 4.024  0.04486 *
chisq.test(x, correct=FALSE)
#   Pearson's Chi-squared test
# 
# data:  x
# X-squared = 4.024, df = 1, p-value = 0.04486
```
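For readers working outside `R`, the score (Pearson) and likelihood ratio (deviance) tests for this 2x2 table can also be sketched in Python with `scipy`; this is only a sketch, with the counts copied from the example above, and `lambda_="log-likelihood"` selecting the G/deviance statistic:

```python
# Reproduce the score and likelihood-ratio tests for the 2x2 table above.
import numpy as np
from scipy import stats

table = np.array([[75, 268],
                  [24,  49]])

# Score test: Pearson chi-squared WITHOUT the continuity correction,
# matching chisq.test(x, correct=FALSE) in R.
chi2, p_score, dof, expected = stats.chi2_contingency(table, correction=False)

# Likelihood ratio test: the G-statistic 2 * sum(O * log(O/E)), i.e. the
# deviance reported by anova(m, test="LRT") in R.
g, p_lrt, _, _ = stats.chi2_contingency(table, correction=False,
                                        lambda_="log-likelihood")

print(round(chi2, 3), round(p_score, 5))  # ~4.024, ~0.04486
print(round(g, 4), round(p_lrt, 5))       # ~3.7997, ~0.05126
```

The two p-values straddle $.05$ exactly as in the `R` output, which is the point of the answer: same data, different test, slightly different $p$.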
Is anyone really using Active Directory Federation Services? Is this a technology worth investing in? More generally, who is successfully using WIF/ADFS/SSO on the Windows platform, and is it worth implementing, and what is the likeliness it will be a lasting technology? On the surface, from reading a few [whitepapers (PDF)](http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBUQFjAA&url=http%3A%2F%2Fdownload.microsoft.com%2Fdownload%2F7%2FD%2F0%2F7D0B5166-6A8A-418A-ADDD-95EE9B046994%2FWindowsIdentityFoundationWhitepaperForDevelopers-RTW.pdf&rct=j&q=microsoft%20wif%20white%20paper&ei=7HSUTeudPIeI0QGr8qDoCw&usg=AFQjCNFt6MmG6pjRM7GwMv1AkJqVCQY85w&sig2=KDfVJ1_ztVJfpnBu1x12lw&cad=rja), [articles](http://msdn.microsoft.com/en-us/magazine/ee335707.aspx) and [books](http://msdn.microsoft.com/en-us/library/ff423674.aspx) on the subject, this seems like the perfect solution -- especially for a company that has an internal web site that exposes some level of functionality to external users and partners as well (or plans to in the future). But it sounds almost *too* perfect. And most of the information I have comes from Microsoft themselves. I guess my specific questions are: - Is this a lasting technology and worth investing in (and specifically for a smaller sized (<50 ppl) company)? - Are there any major companies out there that are actively using this? - How likely is it that a partner would be willing to setup an STS if we wanted someone else to provide authentication for their company as a trusted issuer? Is there going to be a lot of push-back here? - Is this going to end up being a configuration nightmare? - Are there any other pitfalls to look out for when deciding whether to implement this?
As more applications are moved to the cloud and to online services, you will see ADFS and other federated identity technologies increase in usage. Organizations with investments in Active Directory will likely move to this solution due to the low cost of ownership.

Is this a lasting technology and worth investing in (and specifically for a smaller sized (<50 ppl) company)?

- If you plan on either providing hosted services to other companies or taking advantage of them yourselves, ADFS provides a fairly painless way to take advantage of your current security infrastructure.
- If properly implemented, it should be fairly simple to replace one federation product with another.

Are there any major companies out there that are actively using this?

- I'm only familiar with a government organization I've worked on, but I'm sure there are others. The nature of federated identity makes it difficult to externally identify who is using it.

How likely is it that a partner would be willing to setup an STS if we wanted someone else to provide authentication for their company as a trusted issuer? Is there going to be a lot of push-back here? Is this going to end up being a configuration nightmare?

- Configuration is the most difficult part of ADFS. However, once you have the trust relationships built and policies created, configuration will be hands off.
- Other companies will either have the infrastructure in place to support ADFS or won't. Even .NET applications require configuration changes to support ADFS, and more than likely will require code changes to fully support the federated identity model. If your partners have this in place, it is likely they'll happily trust your STS.
- Ask what your partners have in place; they may already have the infrastructure or be planning it today.

Are there any other pitfalls to look out for when deciding whether to implement this?

- The most difficult problem I ran into was changing application developer practices.
- Applications either need to be designed around federation or will need to be retrofitted with it.
- You can't log out of an ADFS application without logging out of all ADFS applications.
- When a federated session expires, you must send the user back to the federation service for a new ticket. This could cause loss of POST data if not handled properly.
understanding some flags in find-command in linux I wanted to move only files in Linux and I found the answer [here](https://superuser.com/questions/65635/how-to-move-only-files-in-unix/65732#65732) as the following:

```
find . -maxdepth 1 -type f -exec mv {} destination_path \;
```

which worked for me perfectly.

***But my questions are***:

1. What is the meaning of `\;` in this command?
2. Are the curly brackets `{}` in this command equivalent to `*`?
The `find` command assumes that everything between the `-exec` option and a `;` character comprises the command that you want to execute on each search result. Because `;` is a reserved character in the `bash` shell, you have to escape it using `\`, otherwise bash will interpret it. See `man bash` for details. The curly brackets `{}` are a placeholder for each search result of find. The following is from the man-page of `find`: ``` -exec command ; Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a `\') or quoted to protect them from expansion by the shell. ```
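To see both pieces in action, here is a small sketch (the paths under `/tmp` are made up for the demonstration): `{}` expands to each file found, and the escaped `\;` marks the end of the `mv` command:

```shell
# set up a scratch directory with two files and an (ignored) subdirectory
mkdir -p /tmp/find_demo/src/subdir /tmp/find_demo/dst
touch /tmp/find_demo/src/a.txt /tmp/find_demo/src/b.txt

# {} is replaced by each file name found; \; terminates the -exec command.
# -maxdepth 1 -type f means: this level only, regular files only.
find /tmp/find_demo/src -maxdepth 1 -type f -exec mv {} /tmp/find_demo/dst \;

ls /tmp/find_demo/dst    # a.txt and b.txt; the subdirectory was not moved
```

As an aside, `find` also accepts `+` instead of `\;` as the terminator (e.g. `-exec mv -t /tmp/find_demo/dst {} +` with GNU `mv`), which batches many file names into a single invocation of the command rather than running it once per file.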
android font size of tabs I asked this question a while ago, but I got no solutions :( My problem is that I have an Android app with a tab activity, where I have to set the font size of my tabs, but I don't know how. In my activity I set my tabs programmatically:

```
TabLayout tabLayout = (TabLayout) findViewById(R.id.tab_layout);
tabLayout.addTab(tabLayout.newTab().setText("MY TAB 1"));
tabLayout.addTab(tabLayout.newTab().setText("MY TAB 2"));
tabLayout.addTab(tabLayout.newTab().setText("MY TAB 3"));
tabLayout.setTabGravity(TabLayout.GRAVITY_FILL);
```

The problem is that the last 1-2 letters get cut off, so I have to set the font size smaller, but how? I hope anyone can help me.
Add the following styles in styles.xml:

```
<style name="MyTabLayout" parent="Base.Widget.Design.TabLayout">
    <item name="tabTextAppearance">@style/MyTabTextAppearance</item>
</style>

<style name="MyTabTextAppearance" parent="TextAppearance.AppCompat.Button">
    <item name="android:textSize">18sp</item>
    <item name="android:textColor">@android:color/white</item>
    <item name="textAllCaps">true</item>
</style>
```

And in your TabLayout, set the style like below:

```
<android.support.design.widget.TabLayout
    style="@style/MyTabLayout"
    android:layout_width="width"
    android:layout_height="height"/>
```
Save the file explorer panel in vim session When I have *netrw* and some files open in *vim*, only the file panels are displayed when I restore a session saved with `:mksession`. Is it possible to restore the `netrw` file browser panel from a saved session?
A saved Vim session contains the list of open buffers, window layout, and changed options. The *netrw* plugin (like many others) uses *scratch buffers* to display its user interface in a Vim window. These contents are not persisted, but instead generated and updated by the plugin. There's no mechanism for Vim to recognize these and ask the plugin to restore on session load. Vim just sees that the netrw buffer isn't persisted on disk (i.e. `:set buftype=nofile`), and then skips it. Some plugins like [session.vim plugin](http://www.vim.org/scripts/script.php?script_id=3150) have functionality to handle special buffers; I don't think it can restore *netrw* buffers, though. What you *could* easily do is hooking into the `SessionLoadPost` autocmd event, and reopen *netrw* then.
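If a rough approximation is good enough, you can hook that event yourself from your `vimrc`. This is only a sketch: it reopens netrw on the directory of the current file (via netrw's `:Explore` command), not necessarily the exact directory you had been browsing, and it replaces the buffer in the current window:

```vim
" Sketch: after a session finishes loading, open a netrw browser again.
augroup RestoreNetrwAfterSession
  autocmd!
  autocmd SessionLoadPost * Explore
augroup END
```

Use `:Vexplore` or `:Sexplore` in place of `:Explore` if you would rather open the browser in a new split instead of reusing the current window.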
How to mock AngularFire 2 service in unit test? I'm trying to set up unit tests for a sample Angular 2 app using AngularFire 2 auth, the component is fairly simple: ``` import { Component } from '@angular/core'; import { AngularFire, AuthProviders } from 'angularfire2'; @Component({ moduleId: module.id, selector: 'app-root', templateUrl: 'app.component.html', styleUrls: ['app.component.css'] }) export class AppComponent { isLoggedIn: boolean; constructor(public af: AngularFire) { this.af.auth.subscribe(auth => { if (auth) { this.isLoggedIn = true; } else { this.isLoggedIn = false; } }); } loginWithFacebook() { this.af.auth.login({ provider: AuthProviders.Facebook }); } logout() { this.af.auth.logout(); } } ``` All I'm doing is wrapping around the `login` and `logout` methods in AngularFire so I was thinking about using a mock to check if the methods were called but I'm not sure where to start, I tried doing the following in my spec file: ``` import { provide } from '@angular/core'; import { AngularFire } from 'angularfire2'; import { beforeEach, beforeEachProviders, describe, xdescribe, expect, it, xit, async, inject } from '@angular/core/testing'; import { AppComponent } from './app.component'; spyOn(AngularFire, 'auth'); beforeEachProviders(() => [ AppComponent, AngularFire ]); describe('App Component', () => { it('should create the app', inject([AppComponent], (app: AppComponent) => { expect(app).toBeTruthy(); }) ); it('should log user in', inject([AppComponent], (app: AppComponent) => { expect(app.fb.auth.login).toHaveBeenCalled(); }) ); it('should log user out', inject([AppComponent], (app: AppComponent) => { expect(app.fb.auth.logout).toHaveBeenCalled(); }) ); }); ``` However I'm not sure how to mock the `login` and `logout` methods since they're part of the `auth` property, is there a way to mock `auth` and also the returning `login` and `logout` methods?
In this snippet:

```
beforeEach(() => addProviders([
  AppComponent,
  AngularFire
]));
```

You set (or [override](https://angular.io/docs/ts/latest/api/core/testing/beforeEachProviders-function.html)) the providers that will be used in your test. That being said, you can create a different class, a mock if you will, and, using the `{ provide: originalClass, useClass: fakeClass }` notation, provide it instead of the `AngularFire` actual class. Something like this:

```
class AngularFireAuthMock extends AngularFireAuth {            // added this class
  public login() { ... }
  public logout() { ... }
}

class AngularFireMock extends AngularFire {                    // added this class
  public auth: AngularFireAuthMock;
}

beforeEach(() => addProviders([
  AppComponent,
  { provide: AngularFire, useClass: AngularFireMock }          // changed this line
]));
```

And the `AngularFire`s in your tests will be `AngularFireMock`s.
Why doesn't the `href` attribute on the `<td>` element work? I've set up a [fiddle](http://jsfiddle.net/7aHDR/4/) with a table. You see, I'm trying to make a table where the user will hover and click the td to show an id. Check the fiddle out and you'll understand. Now, when the user hovers Parent4, you may notice there's space on the table where there's no text and the user can't click it, so the Child4 won't appear.... Now, is there any way I can make the space where there's no text clickable so it shows up child4? I tried

```
<td ahref="#child4">
```

but it didn't work...

////EDIT As it's a bit confusing... I need you to see the fiddle. You can see the cell for Parent4 is bigger than the text. So when the user hovers on the cell I want the text to change color and the cell to change color too + if the user clicks the cell, Child4 won't appear because a cell is unclickable. So my question: how can I make the cell clickable to display Child4?

UPD: I didn't update the fiddle, but it's now up to date.
The `href` attribute is designed for [anchor elements (`<a/>`)](http://www.w3.org/TR/html5/text-level-semantics.html#the-a-element). "ahref" as you've put it should be `<a href="">`. `a` is an element of its own, not an HTML attribute, and `href` is an attribute it accepts. To make the text of a `td` clickable you can simply put an anchor within it:

```
<td>
    <a href="#child4">My clickable text</a>
</td>
```

**Edit:** To fix this now that the question has been updated, simply add in the following CSS:

```
td a {
    display:block;
    width:100%;
}
```

What this does is display the anchor tag as a block, allowing us to adjust the width, and then set the width to 100%, allowing it to fill the remaining space.

[**Working JSFiddle**](http://jsfiddle.net/7aHDR/5/).
Does the `is` operator use a \_\_magic\_\_ method in Python? The [is](http://docs.python.org/2/reference/expressions.html#is) operator is used to test for identity. I was wondering if the `is` operator and `id()` function call any `__magic__` method, the way `==` calls `__eq__`. I had some fun checking out `__hash__`:

```
import random

class Foo(object):
    def __hash__(self):
        return random.randint(0, 2 ** 32)

a = Foo()
b = {}
for i in range(5000):
    b[a] = i
```

Think about dict `b` and the value of `b[a]`

> Every subsequent lookup of `d[a]` is either a `KeyError` or a random integer.

But as the [docs on the special methods](http://docs.python.org/2/reference/datamodel.html#specialnames) state

> [*the default implementation of*] x.`__hash__`() returns id(x).

So there *is* a relation between the two, but just the other way around. I've seen many [questions](https://stackoverflow.com/q/13650293/383793) on `is` and `id` here, and the [answers](https://stackoverflow.com/a/2038341/383793) have helped many [confused](https://stackoverflow.com/q/13062135/383793) minds, but I couldn't find an answer to this one.
No, `is` is a straight pointer comparison, and `id` just returns the address of the object cast to a `long`. From [`ceval.c`](http://hg.python.org/cpython/file/bd8afb90ebf2/Python/ceval.c#l4423): ``` case PyCmp_IS: res = (v == w); break; case PyCmp_IS_NOT: res = (v != w); break; ``` `v` and `w` here are simply `PyObject *`. From [`bltinmodule.c`](http://hg.python.org/cpython/file/bd8afb90ebf2/Python/bltinmodule.c#l962): ``` static PyObject * builtin_id(PyObject *self, PyObject *v) { return PyLong_FromVoidPtr(v); } PyDoc_STRVAR(id_doc, "id(object) -> integer\n\ \n\ Return the identity of an object. This is guaranteed to be unique among\n\ simultaneously existing objects. (Hint: it's the object's memory address.)"); ```
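You can see that no hook is consulted by comparing `is` with `==` on a class whose `__eq__` always claims equality; this is a small illustrative sketch:

```python
# `==` dispatches to __eq__, but `is` never calls any __magic__ method:
# it is a raw identity comparison done inside the interpreter loop.
class AlwaysEqual:
    def __eq__(self, other):
        return True   # lie: claim equality with everything

a = AlwaysEqual()
b = AlwaysEqual()

print(a == b)          # True  -- __eq__ was called
print(a is b)          # False -- identity check; __eq__ was ignored
print(a is a)          # True
print(id(a) == id(b))  # False -- two distinct objects, two addresses
```

If `is` consulted `__eq__` (or any other special method), the second line would print `True`; since it compares the underlying object pointers directly, no Python-level code can change its result.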
MFMailComposeViewController image orientation I'm developing a Universal app and I'm coding for iOS6. I'm using the imagePickerController to take a photo and then I am sending it as an attachment using MFMailComposeViewController. All of that is working. My problem is that when I shoot a picture in portrait mode, it is displayed by the MFMailComposeViewController in landscape mode. Also, when it arrives at the destination E-Mail address, it is displayed in landscape mode. If I shoot the picture in landscape mode, it is displayed by the MFMailComposeViewController in landscape mode and when it arrives at the destination E-Mail address, it is displayed in landscape mode. So that's all OK. I have the same issue on both of my test devices; an iPhone5 and an iPad2. How can I make a picture shot in portrait mode arrive at the E-Mail destination in portrait mode? Here's how I am adding the image to the E-Mail: ``` if ( [MFMailComposeViewController canSendMail] ) { MFMailComposeViewController * mailVC = [MFMailComposeViewController new]; NSArray * aAddr = [NSArray arrayWithObjects: gAddr, nil]; NSData * imageAsNSData = UIImagePNGRepresentation( gImag ); [mailVC setMailComposeDelegate: self]; [mailVC setToRecipients: aAddr]; [mailVC setSubject: gSubj]; [mailVC addAttachmentData: imageAsNSData mimeType: @"image/png" fileName: @"myPhoto.png"]; [mailVC setMessageBody: @"Blah blah" isHTML: NO]; [self presentViewController: mailVC animated: YES completion: nil]; } else { NSLog( @"Device is unable to send email in its current state." ); } ```
I've spent a number of hours working on this issue and I am now clear about what's going on and how to fix it. To repeat, the problem I ran into is: when I shoot an image in portrait mode on the camera using the imagePickerController and pass that image to the MFMailComposeViewController and e-mail it, it arrives at the destination E-Mail address and is displayed there incorrectly in landscape mode. However, if I shoot the picture in landscape mode and then send it, it is displayed at the E-Mail's destination correctly in landscape mode.

**So, how can I make a picture shot in portrait mode arrive at the E-Mail destination in portrait mode? That was my original question.**

Here's the code, as I showed it in the original question, except that I am now sending the image as a JPEG rather than a PNG, but this has made no difference. This is how I'm capturing the image with the imagePickerController and placing it into a global called gImag:

```
gImag = (UIImage *)[info valueForKey: UIImagePickerControllerOriginalImage];
[[self imageView] setImage: gImag];           // send image to screen
[self imagePickerControllerRelease: picker];  // free the picker object
```

And this is how I E-Mail it using the MFMailComposeViewController:

```
if ( [MFMailComposeViewController canSendMail] )
{
    MFMailComposeViewController * mailVC = [MFMailComposeViewController new];
    NSArray * aAddr = [NSArray arrayWithObjects: gAddr, nil];
    NSData * imageAsNSData = UIImageJPEGRepresentation( gImag, 0.9f );

    [mailVC setMailComposeDelegate: self];
    [mailVC setToRecipients: aAddr];
    [mailVC setSubject: gSubj];
    [mailVC addAttachmentData: imageAsNSData mimeType: @"image/jpg" fileName: @"myPhoto.jpg"];
    [mailVC setMessageBody: @"Blah blah" isHTML: NO];
    [self presentViewController: mailVC animated: YES completion: nil];
}
else
    NSLog( @"Device is unable to send email in its current state."
);
```

When I describe how to solve this, I'm going to focus on working with an iPhone 5 and using its main camera, which shoots at 3264x2448, just to keep things simple. This problem does, however, affect other devices and resolutions.

*The key to unlocking this problem is* to realize that when you shoot an image, in either portrait or landscape, the iPhone always stores the UIImage the same way: as 3264 wide and 2448 high. A UIImage has a property which describes its orientation at the time the image was captured and you can get it like this:

```
UIImageOrientation orient = image.imageOrientation;
```

Note that the orientation property does *not* describe how the data in the UIImage is physically configured (as 3264w x 2448h); it only describes its orientation at the time the image was captured. The property's use is to tell software that is about to display the image how to rotate it so it will appear correctly.

If you capture an image in portrait mode, image.imageOrientation will return UIImageOrientationRight. This tells displaying software that it needs to rotate the image 90 degrees so it will display correctly. Be clear that this 'rotation' does not affect the underlying storage of the UIImage, which remains 3264w x 2448h.

If you capture an image in landscape mode, image.imageOrientation will return UIImageOrientationUp. UIImageOrientationUp tells displaying software that the image is fine to display as it is; no rotation is necessary. Again, the underlying storage of the UIImage is 3264w x 2448h.

Once you are clear about the difference between how the data is physically stored vs. how the orientation property is used to describe its orientation at the time it was captured, things begin to get clearer. I created a few lines of debugging code to 'see' all of this.
Here's the imagePickerController code again with the debugging code added:

```
gImag = (PIMG)[info valueForKey: UIImagePickerControllerOriginalImage];

UIImageOrientation orient = gImag.imageOrientation;
CGFloat width  = CGImageGetWidth(gImag.CGImage);
CGFloat height = CGImageGetHeight(gImag.CGImage);

[[self imageView] setImage: gImag];           // send image to screen
[self imagePickerControllerRelease: picker];  // free the picker object
```

If we shoot portrait, gImag arrives with UIImageOrientationRight and width = 3264 and height = 2448. If we shoot landscape, gImag arrives with UIImageOrientationUp and width = 3264 and height = 2448.

If we continue on to the E-Mailing MFMailComposeViewController code, I've added the debugging code in there as well:

```
if ( [MFMailComposeViewController canSendMail] )
{
    MFMailComposeViewController * mailVC = [MFMailComposeViewController new];
    NSArray * aAddr = [NSArray arrayWithObjects: gAddr, nil];

    UIImageOrientation orient = gImag.imageOrientation;
    CGFloat width  = CGImageGetWidth(gImag.CGImage);
    CGFloat height = CGImageGetHeight(gImag.CGImage);

    NSData * imageAsNSData = UIImageJPEGRepresentation( gImag, 0.9f );

    [mailVC setMailComposeDelegate: self];
    [mailVC setToRecipients: aAddr];
    [mailVC setSubject: gSubj];
    [mailVC addAttachmentData: imageAsNSData mimeType: @"image/jpg" fileName: @"myPhoto.jpg"];
    [mailVC setMessageBody: @"Blah blah" isHTML: NO];
    [self presentViewController: mailVC animated: YES completion: nil];
}
else
    NSLog( @"Device is unable to send email in its current state." );
```

Nothing amazing to see here. We get exactly the same values as we did back in the imagePickerController code.
**Let's see exactly how the problem manifests:**

To begin, the camera takes a portrait shot and it is displayed correctly in portrait mode by the line:

```
[[self imageView] setImage: gImag];  // send image to screen
```

It is displayed correctly because this line of code sees the orientation property and rotates the image appropriately (while NOT touching the underlying storage at 3264x2448).

Flow-of-control goes to the E-Mailer code now and the orientation property is still present in gImag, so when the MFMailComposeViewController code displays the image in the outgoing E-Mail, it is correctly oriented. The physical image is still stored as 3264x2448.

The E-Mail is sent and, on the receiving end, knowledge of the orientation property has been lost, so the receiving software displays the image as it is physically laid out as 3264x2448, i.e. landscape.

In debugging this, I ran into an additional confusion. And that is that the orientation property can be stripped from the UIImage if you make a copy of it incorrectly.
This code shows the problem:

```
if ( [MFMailComposeViewController canSendMail] )
{
    MFMailComposeViewController * mailVC = [MFMailComposeViewController new];
    NSArray * aAddr = [NSArray arrayWithObjects: gAddr, nil];

    UIImageOrientation orient = gImag.imageOrientation;
    CGFloat width  = CGImageGetWidth(gImag.CGImage);
    CGFloat height = CGImageGetHeight(gImag.CGImage);

    UIImage * tmp = [[UIImage alloc] initWithCGImage: gImag.CGImage];

    orient = tmp.imageOrientation;
    width  = CGImageGetWidth(tmp.CGImage);
    height = CGImageGetHeight(tmp.CGImage);

    NSData * imageAsNSData = UIImageJPEGRepresentation( tmp, 0.9f );

    [mailVC setMailComposeDelegate: self];
    [mailVC setToRecipients: aAddr];
    [mailVC setSubject: gSubj];
    [mailVC addAttachmentData: imageAsNSData mimeType: @"image/jpg" fileName: @"myPhoto.jpg"];
    [mailVC setMessageBody: @"Blah blah" isHTML: NO];
    [self presentViewController: mailVC animated: YES completion: nil];
}
else
    NSLog( @"Device is unable to send email in its current state." );
```

When we look at the debugging data for the new UIImage tmp, we get UIImageOrientationUp and width = 3264 and height = 2448. The orientation property was stripped and the default orientation is Up. If you do not know the stripping is going on, it can really confuse things.

If I run this code, I now get the following results:

Things are unchanged in the imagePickerController code; the image is captured as before.

Flow-of-control goes on to the E-Mailer code, but now the orientation property has been stripped from the tmp image, so when the MFMailComposeViewController code displays the tmp image in the outgoing E-Mail, it is shown in landscape mode (because the default orientation is UIImageOrientationUp, so there is no rotation done of the 3264x2448 image).

The E-Mail is sent and, on the receiving end, knowledge of the orientation property is missing as well, so the receiving software displays the image as it is physically laid out as 3264x2448, i.e. landscape.
This 'stripping' of the orientation property when making copies of the UIImage can be avoided, if one is aware it is going on, by using the following code to make the UIImage copy:

```
UIImage * tmp = [UIImage imageWithCGImage: gImag.CGImage
                                    scale: gImag.scale
                              orientation: gImag.imageOrientation];
```

That would allow you to avoid losing the orientation property along the way, but it would still not deal with its loss on the far end when you e-mail the image.

**There's a better way than all this messing about and worrying about the orientation property.**

I found some code [*here*](http://blog.logichigh.com/2008/06/05/uiimage-fix/) that I've integrated into my program. This code will physically rotate the underlying stored image according to its orientation property. For a UIImage with an orientation of UIImageOrientationRight, it will physically rotate the UIImage so it ends up as 2448x3264, and it will strip away the orientation property so it is seen thereafter as the default UIImageOrientationUp. For a UIImage with an orientation of UIImageOrientationUp, it does nothing. It lets the sleeping landscape dogs lie.

If you do this, then I think (based on what I've seen so far) that the orientation property of the UIImage is superfluous thereafter. So long as it remains missing/stripped or set to UIImageOrientationUp, your images should be displayed correctly at each step along the way and on the distant end when the images embedded in your E-Mail are displayed.

Everything I've discussed in this answer, I have personally single-stepped and watched happen.
**So, here's my final code that works:**

```
gImag = (PIMG)[info valueForKey: UIImagePickerControllerOriginalImage];
[[self imageView] setImage: gImag];           // send image to screen
[self imagePickerControllerRelease: picker];  // free the picker object
```

and

```
if ( [MFMailComposeViewController canSendMail] )
{
    MFMailComposeViewController * mailVC = [MFMailComposeViewController new];
    NSArray * aAddr = [NSArray arrayWithObjects: gAddr, nil];

    //...let's not touch the original UIImage
    UIImage * tmpImag = [UIImage imageWithCGImage: gImag.CGImage
                                            scale: gImag.scale
                                      orientation: gImag.imageOrientation];

    //...do physical rotation, if needed
    PIMG ImgOut = [gU scaleAndRotateImage: tmpImag];

    //...note orientation is UIImageOrientationUp now
    NSData * imageAsNSData = UIImageJPEGRepresentation( ImgOut, 0.9f );

    [mailVC setMailComposeDelegate: self];
    [mailVC setToRecipients: aAddr];
    [mailVC setSubject: gSubj];
    [mailVC addAttachmentData: imageAsNSData mimeType: @"image/jpg" fileName: @"myPhoto.jpg"];
    [mailVC setMessageBody: @"Blah blah" isHTML: NO];
    [self presentViewController: mailVC animated: YES completion: nil];
}
else
    NSLog( @"Device is unable to send email in its current state."
);
```

And, finally, here's the code I grabbed from [here](http://blog.logichigh.com/2008/06/05/uiimage-fix/) that does the physical rotation, if necessary:

```
- (UIImage *) scaleAndRotateImage: (UIImage *) imageIn //...thx: http://blog.logichigh.com/2008/06/05/uiimage-fix/
{
    int kMaxResolution = 3264; // Or whatever

    CGImageRef imgRef = imageIn.CGImage;
    CGFloat width  = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    CGAffineTransform transform = CGAffineTransformIdentity;
    CGRect bounds = CGRectMake( 0, 0, width, height );

    if ( width > kMaxResolution || height > kMaxResolution )
    {
        CGFloat ratio = width/height;
        if (ratio > 1)
        {
            bounds.size.width  = kMaxResolution;
            bounds.size.height = bounds.size.width / ratio;
        }
        else
        {
            bounds.size.height = kMaxResolution;
            bounds.size.width  = bounds.size.height * ratio;
        }
    }

    CGFloat scaleRatio = bounds.size.width / width;
    CGSize  imageSize  = CGSizeMake( CGImageGetWidth(imgRef), CGImageGetHeight(imgRef) );
    UIImageOrientation orient = imageIn.imageOrientation;
    CGFloat boundHeight;

    switch(orient)
    {
        case UIImageOrientationUp:                //EXIF = 1
            transform = CGAffineTransformIdentity;
            break;

        case UIImageOrientationUpMirrored:        //EXIF = 2
            transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            break;

        case UIImageOrientationDown:              //EXIF = 3
            transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;

        case UIImageOrientationDownMirrored:      //EXIF = 4
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
            transform = CGAffineTransformScale(transform, 1.0, -1.0);
            break;

        case UIImageOrientationLeftMirrored:      //EXIF = 5
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
            break;

        case UIImageOrientationLeft:              //EXIF = 6
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
            break;

        case UIImageOrientationRightMirrored:     //EXIF = 7
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeScale(-1.0, 1.0);
            transform = CGAffineTransformRotate(transform, M_PI / 2.0);
            break;

        case UIImageOrientationRight:             //EXIF = 8
            boundHeight = bounds.size.height;
            bounds.size.height = bounds.size.width;
            bounds.size.width = boundHeight;
            transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            transform = CGAffineTransformRotate(transform, M_PI / 2.0);
            break;

        default:
            [NSException raise: NSInternalInconsistencyException
                        format: @"Invalid image orientation"];
    }

    UIGraphicsBeginImageContext( bounds.size );
    CGContextRef context = UIGraphicsGetCurrentContext();

    if ( orient == UIImageOrientationRight || orient == UIImageOrientationLeft )
    {
        CGContextScaleCTM(context, -scaleRatio, scaleRatio);
        CGContextTranslateCTM(context, -height, 0);
    }
    else
    {
        CGContextScaleCTM(context, scaleRatio, -scaleRatio);
        CGContextTranslateCTM(context, 0, -height);
    }

    CGContextConcatCTM( context, transform );
    CGContextDrawImage( UIGraphicsGetCurrentContext(), CGRectMake( 0, 0, width, height ), imgRef );

    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return( imageCopy );
}
```

Cheers, from New Zealand.
iPhone: Fade transition between two RootViewControllers `Obj-C` or `MonoTouch C#` answers are fine. The initial UIWindow's RootViewController is a simple login screen. ``` window.RootViewController = loginScreen; ``` After login, I set the Root to the main app ``` window.RootViewController = theAppScreen; ``` How do I Fade-transition between the two RootViewControllers in this instance?
I might suggest a different approach that will get you your animation. Just go to the `theAppScreen` controller *first*, and if you need the user to log in, have it do the `presentViewController` to get to the `loginScreen` (you don't have to animate this step if you want it to look like it went directly to the login screen). That way, when you've successfully logged in, the loginScreen can just `dismissViewControllerAnimated` and you've got your animation back to the main `theAppScreen`. (Obviously, if you want the fade effect, don't forget to set the controller's `modalTransitionStyle` to `UIModalTransitionStyleCrossDissolve`.)

If you're dead set on changing your `rootViewController`, the only way I can think of doing it (and I don't like it) would be to do something like:

```
MainAppViewController *controller = [[MainAppViewController alloc] initWithNibName:@"MainAppViewController" bundle:nil];

// animate the modal presentation
controller.modalTransitionStyle = UIModalTransitionStyleCrossDissolve;
[self.window.rootViewController presentViewController:controller animated:YES completion:^{

    // and then get rid of it as a modal
    [controller dismissViewControllerAnimated:NO completion:nil];

    // and set it as your rootview controller
    self.window.rootViewController = controller;
}];
```

The first technique seems much cleaner to me.
Why is 1.0f in C code represented as 1065353216 in the generated assembly? In C I have this code block: ``` if(x==1){ a[j][i]=1; } else{ a[j][i]=0; } ``` `a` is a matrix of float values, if I try to see the compiled assembly of this code in nasm syntax the line `a[j][i]=0;` assignment, was coded in this way ``` dword [rsi+rdi], 0 ``` but the line `a[j][i]=1;` assignment, was coded in this way ``` dword [rsi+rdi], 1065353216 ``` How can **1065353216** represent a **1.0f**??
Because **1065353216** is the unsigned 32-bit integer representation of the 32-bit floating point value 1.0. More specifically, 1.0 as a 32-bit float becomes: ``` 0....... ........ ........ ........ sign bit (zero is positive) .0111111 1....... ........ ........ exponent (127, which means zero) ........ .0000000 00000000 00000000 mantissa (zero, no correction needed) ___________________________________ 00111111 10000000 00000000 00000000 result ``` So the end result is 2^0 × (1 + 0), which is 1 × 1, which is 1. You can use [binaryconvert.com](http://www.binaryconvert.com/) or [this useful converter](http://www.h-schmidt.net/FloatConverter/IEEE754.html) to see other values. As to why 127 suddenly means zero in the exponent: it's actually a pretty clever trick called [exponent bias](http://en.wikipedia.org/wiki/Exponent_bias) that makes it easier to compare floating-point values. Try out the converter with wildly different values (10, 100, 1000...) and you'll see the exponent increases as well. Sorting is also the reason the sign bit is the first bit stored.
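The mapping can be checked in a couple of lines of Python, used here only as a convenient calculator — the compiler produces the same bits, since both sides use IEEE 754 single precision:

```python
import struct

def float_bits(value):
    """Return the unsigned 32-bit integer that shares the bit pattern
    of `value` stored as an IEEE 754 single-precision float."""
    packed = struct.pack('<f', value)       # 4 little-endian bytes
    return struct.unpack('<I', packed)[0]   # reinterpret them as a uint32

print(float_bits(1.0))  # 1065353216, i.e. 0x3F800000
```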
How can private member variables in C++ be simulated in C? Can private member variables (as in C++) be implemented in C and how can this can be achieved? I thought about static global variables limiting the scope of the variable just to the file where it is defined, but then how would one access it from other files? Is there a way to implement private member variables in C?
You need to create an opaque type for this, two structs which start with the same fields and then you can add private fields to one of them and never let the user access the one with the private fields. Then provide accessor functions to modify/read the private fields in the struct. An example is better than a thousand words **struct.c** ``` #include <stdlib.h> struct Private { int public; int private; }; struct Public { int public; }; struct Public *new_public(int private, int public) { struct Private *instance; instance = malloc(sizeof(*instance)); if (instance == NULL) return NULL; instance->private = private; instance->public = public; return (struct Public *)instance; } int public_get_private(struct Public *public) { if (public == NULL) return -1; /* or some invalid value */ return ((struct Private *)public)->private; } void public_set_private(struct Public *public, int value) { if (public == NULL) return; ((struct Private *)public)->private = value; } ``` **struct.h** ``` #ifndef __STRUCT_H__ #define __STRUCT_H__ struct Public { int public; }; typedef struct Public Public; Public *new_public(int private, int public); int public_get_private(Public *instance); void public_set_private(Public *instance, int value); /* you can add more fields to the structures and more access functions */ #endif ``` **main.c** ``` #include "struct.h" #include <stdlib.h> #include <stdio.h> int main() { Public *instance; instance = new_public(1, 2); if (instance == NULL) return -1; printf("%d\n", instance->public); printf("%d\n", public_get_private(instance)); free(instance); return 0; } ``` 1. `new_public()` acts as a constructor. 2. You can get the value at `private` by means of the function `public_get_private()` which behaves as a member function. 3. You can't access the `private` member from the `Public` struct. 
Of course you can do stupid things like ``` Public *public = malloc(sizeof(*public)); ``` but that's what I find so sweet about C, you are free to do whatever you want, just don't do things wrong. I also think this is nicer than `private` members, because both the definition and the implementation details are hidden from the library user.
Get variables in Sphinx templates I can't figure out how to get variables into Sphinx documents via a template. I am certainly making an elementary mistake, but while there is lots of documentation for using Jinja/Flask templates for web service and some documentation for [Sphinx using it](http://sphinx-doc.org/templating.html), I am having trouble doing the following. Maybe it's not possible and I'm barking up the wrong tree, but then this is fairly different from how variables work in general in web (as opposed to doc) templates? I am working within a much larger project. Suppose in my project's `conf.py` I make a variable, say ``` LANGS = ['en', 'de', 'cn'] ``` I know that this works because if I do the docbuild (we have a custom docbuild but I don't think it does anything really crazy other than a customized logger and eating a bunch of 'chatter') with the following line in `conf.py` ``` print len(LANGS) ``` it shows up during the docbuild. But now of course I want to access this variable in my template. As far as I can tell, we override `index.rst` with `templates/index.html`, which inherits from the basic layout.html for Sphinx. And if I do ``` <p>We have {{ LANGS|len }} languages</p> ``` I get > > We have 0 languages > > > Now, this is weird, because sometimes I can cause an error in the build by referring to variables not defined (though not consistently), so that somehow it 'knows' that the variable is defined but thinks it has length zero. Or does a "null" variable have length zero automatically? **How do I get this variable defined - or is it not possible?** What I *want* to do is then do something for each language in the list (make an outside link, in particular), but I figure there is no point in trying `{% for %}/{% endfor %}` or whatever if I can't get this working. Maybe Sphinx implements only a subset of Jinja? Anyway, please help!
There are at least two ways to pass variables to a template: 1. Via [`html_context`](http://sphinx-doc.org/config.html#confval-html_context): > > A dictionary of values to pass into the template engine’s context for all pages. Single values can also be put in this dictionary using the -A command-line option of sphinx-build. > > > Example: ``` # conf.py: html_context = {'num_langs': len(LANGS)} <!-- template: --> <p>We have {{ num_langs }} languages</p> ``` 2. Via the [`html_theme_options`](http://sphinx-doc.org/config.html#confval-html_theme_options). This requires adding an option to `theme.conf` (you can create a theme by inheriting from a standard one): ``` [options] num_langs = 1 ``` Then you can set `num_langs` in `conf.py` via `html_theme_options`: ``` html_theme_options = {'num_langs': len(LANGS)} ``` and use it in a template: ``` <p>We have {{ theme_num_langs }} languages</p> ```
C# Win Api DDE connection multithread I have an implementation of a DDE client using the Win API in C#. Everything works fine when I call `DdeInitializeW` and `DdeConnect` in a single thread. Specifically, these are the wrapper definitions: ``` [DllImport("user32.dll")] protected static extern int DdeInitializeW(ref int id, DDECallback cb, int afcmd, int ulres); [DllImport("user32.dll")] static extern IntPtr DdeConnect( int idInst, // instance identifier IntPtr hszService, // handle to service name string IntPtr hszTopic, // handle to topic name string IntPtr pCC // context data ); ``` If I call `DdeInitializeW` and `DdeConnect` in different threads, `DdeConnect` returns a null pointer. Also, if I call both of them (establish the DDE connection) in one thread, I can't use this DDE channel in another thread (I'm getting an `INVALIDPARAMETER` DDE error). As I said, everything works without problems in a single thread.
The behaviour you describe is expected. DDE is tied to the individual thread. This is because DDE (which is generally considered a legacy technology) works internally by passing windows messages, and windows handles (`HWND`) have thread affinity. - You must call `DdeInitializeW` from the same thread you call `DdeConnect`. - That thread must pump messages (so it cannot be a thread-pool thread). - You will get callbacks/replies on that same thread also. In other words you need to do your DDE from a thread which calls `Application.Run`, or which frequently calls `Application.DoEvents` at moments when it is appropriate for the events to be sent or received. You can use DDE from more than one thread, but each must call `DdeInitializeW` and replies will always be received on the thread the request was sent from.
Flask admin remember form value In my application, I have Users and Posts as models. Each post has a foreign key to a username. When I create a ModelView on top of my Posts model I can create posts as specific users in the admin interface as seen in the screenshot below [![enter image description here](https://i.stack.imgur.com/DdBkJ.png)](https://i.stack.imgur.com/DdBkJ.png) After I have added a post and click "Save and Add Another", the "User" reverts back to "user1". How can I make the form remember the previous value "user2"? My reserach has led me to believe it can be done by modifying on\_model\_change and on\_form\_prefill, and saving the previous value in the flask session, but it seems to be overengineering such a simple task. There must be a simpler way. My code can be seen below ``` from flask import Flask from flask_sqlalchemy import SQLAlchemy import flask_admin from flask_admin.contrib import sqla app = Flask(__name__) db = SQLAlchemy() admin = flask_admin.Admin(name='Test') class Users(db.Model): """ Contains users of the database """ user_id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(64), index=True, unique=True, nullable=False) def __str__(self): return self.username class Posts(db.Model): """ Contains users of the database """ post_id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(11), db.ForeignKey(Users.username), nullable=False) post = db.Column(db.String(256)) user = db.relation(Users, backref='user') def build_sample_db(): db.drop_all() db.create_all() data = {'user1': 'post1', 'user1': 'post2', 'user2': 'post1'} for user, post in data.items(): u = Users(username=user) p = Posts(username=user, post=post) db.session.add(u) db.session.add(p) db.session.commit() class MyModelView(sqla.ModelView): pass if __name__ == '__main__': app.config['SECRET_KEY'] = '123456790' app.config['DATABASE_FILE'] = 'sample_db.sqlite' app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database' 
app.config['SQLALCHEMY_ECHO'] = True db.init_app(app) admin.init_app(app) admin.add_view(MyModelView(Posts, db.session)) with app.app_context(): build_sample_db() # Start app app.run(debug=True) ```
I have come across this situation before and I solved it using two functions. It's pretty easy and small. ``` @expose('/edit/', methods=('GET', 'POST')) def edit_view(self): #write your logic to populate the value into html self._template_args["arg_name"] = stored_value # in your html find this value to populate it as you need ``` The above function will let you populate the values in HTML when the user tries to edit any value. This can be populated using the stored value. And below is a function that helps you save the value from the previous edit. Within this `class MyModelView(sqla.ModelView):` you need to add these two functions. ``` def on_model_change(self, form, model, is_created): stored_value = model.user # this is your user name stored # get the value of the column from your model and save it ``` This is a two-step operation that's pretty small and does not need a lot of time. I have added just a skeleton/pseudo code for now.
Compare dictionary content with Object I have the following object ``` class LidarPropertiesField(object): osversion = '' lidarname = '' lat = 0.0 longit = 0.0 alt = 0.0 pitch = 0.0 yaw = 0.0 roll = 0.0 home_el = 0.0 home_az = 0.0 gps = 0 vad = 0 ppi = 0 rhi = 0 flex_traj = 0 focuse = 0 type = 0 range_no = 0 hard_target = 0 dbid = 0 ``` Also I have a dictionary with the same fields, is it possible to compare the object fields with the dictionary fields in a for loop?
Assuming the `dict` is called `d`, this will check if `LidarPropertiesField` has the same values as `d` for all keys in `d`: ``` for k, v in d.iteritems(): if getattr(LidarPropertiesField, k) != v: # difference found; note, an exception will be raised # if LidarPropertiesField has no attribute k ``` Alternatively, you can convert the class to a `dict` with something like ``` dict((k, v) for k, v in LidarPropertiesField.__dict__.iteritems() if not k.startswith('_')) ``` and compare with `==`. Note the skipping over all class attributes that start with `_` to avoid `__doc__`, `__dict__`, `__module__` and `__weakref__`.
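For Python 3 (where `dict.iteritems` is gone), a minimal self-contained sketch of the same comparison — the trimmed class and the `matches` helper are illustrative stand-ins, not part of the original code:

```python
# Trimmed stand-in for the LidarPropertiesField class from the question.
class LidarPropertiesField:
    osversion = ''
    lat = 0.0
    gps = 0

def matches(cls, d):
    """True if every key in d names an attribute of cls with an equal value."""
    sentinel = object()  # distinct default so a missing attribute never matches
    return all(getattr(cls, k, sentinel) == v for k, v in d.items())

print(matches(LidarPropertiesField, {'lat': 0.0, 'gps': 0}))  # True
print(matches(LidarPropertiesField, {'lat': 1.0}))            # False
```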
C#: Need one of my classes to trigger an event in another class to update a text box Total n00b to C# and events although I have been programming for a while. I have a class containing a text box. This class creates an instance of a communication manager class that is receiving frames from the Serial Port. I have this all working fine. Every time a frame is received and its data extracted, I want a method to run in my class with the text box in order to append this frame data to the text box. So, without posting all of my code I have my form class... ``` public partial class Form1 : Form { CommManager comm; public Form1() { InitializeComponent(); comm = new CommManager(); } private void updateTextBox() { //get new values and update textbox } . . . ``` and I have my CommManager class ``` class CommManager { //here we manage the comms, recieve the data and parse the frame } ``` SO... essentially, when I parse that frame, I need the updateTextBox method from the form class to run. I'm guessing this is possible with events but I can't seem to get it to work. I tried adding an event handler in the form class after creating the instance of CommManager as below... ``` comm = new CommManager(); comm.framePopulated += new EventHandler(updateTextBox); ``` ...but I must be doing this wrong as the compiler doesn't like it... Any ideas?!
Your code should look something like: ``` public class CommManager { public delegate void FramePopulatedHandler(object sender, EventArgs e); public event FramePopulatedHandler FramePopulated; public void MethodThatPopulatesTheFrame() { if (FramePopulated != null) FramePopulated(this, EventArgs.Empty); } // The rest of your code here. } public partial class Form1 : Form { CommManager comm; public Form1() { InitializeComponent(); comm = new CommManager(); comm.FramePopulated += comm_FramePopulatedHandler; } private void updateTextBox() { //get new values and update textbox } private void comm_FramePopulatedHandler(object sender, EventArgs e) { updateTextBox(); } } ``` And here's a link to the .NET Event Naming Guidelines mentioned in the comments: [MSDN - Event Naming Guidelines](http://msdn.microsoft.com/en-us/library/h0eyck3s(v=VS.71).aspx)
The action of UIBarbuttonItem on UIToolBar not called I am having trouble as the action of UIBarbuttonItem on UIToolBar is not be called. In the following code, although the `doneBtn` on `toolBar` is tapped, the action `doneBtnAction:` is not be called. Do you have any idea to fix it? ``` - (void)viewDidLoad { UIPickerView *pickerView = [[UIPickerView alloc] init]; UIToolbar *toolBar = [[UIToolbar alloc] initWithFrame:CGRectMake(0, -44, 320, 44)]; UIBarButtonItem *doneBtn = [[UIBarButtonItem alloc] initWithTitle:@"Done" style:UIBarButtonItemStyleDone target:self action:@selector(doneBtnAction:)]; UIBarButtonItem *flex = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:nil action:nil]; toolBar.items = @[flex, doneBtn]; [pickerView addSubview:toolBar]; UITextField *textField = [[UITextField alloc] init]; textField.inputView = pickerView; } - (void)doneBtnAction:(UIBarButtonItem *)sender { NSLog(@"%@", sender); } ```
Don't add the toolbar as a subview of the picker view, especially with a negative y origin (No touches reach the toolbar because the taps are clipped to the picker view's frame). Instead, make the toolbar the `inputAccessoryView` of the text field. ``` textField.inputAccessoryView = toolBar; ``` Complete code: ``` - (void)viewDidLoad { UIPickerView *pickerView = [[UIPickerView alloc] init]; UIToolbar *toolBar = [[UIToolbar alloc] initWithFrame:CGRectMake(0, 0, 320, 44)]; UIBarButtonItem *doneBtn = [[UIBarButtonItem alloc] initWithTitle:@"Done" style:UIBarButtonItemStyleDone target:self action:@selector(doneBtnAction:)]; UIBarButtonItem *flex = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:nil action:nil]; toolBar.items = @[flex, doneBtn]; UITextField *textField = [[UITextField alloc] init]; textField.inputView = pickerView; textField.inputAccessoryView = toolBar; } ``` One other note - Why not use the standard system Done type for the bar button item?
Printing out Visual Studio PowerShell variables Is there a simple way to print out the VS variables like: ``` $(SolutionDir) $(TargetDir) $(MSBuildProjectDir) $(PacketOutputDir) etc ``` I know I could look up what each one does, and deduce their value from there, but is there something I can type in the Package Manager Console or something to just get the current value of it? Or an easy way to output them to the Output window? Trying to configure my nuget.targets file and I think I'm putting something in the wrong place, so finding these values on the fly would be very helpful
As far as I know, those are not variables that are directly exposed to PowerShell - I think Visual Studio substitutes them on the fly when needed (like the pre and post build steps). They are referred to as macros in the UI as well. You can view their values a couple of different ways. You may have some luck with the [NuGetPSVariables](http://www.nuget.org/packages/NuGetPSVariables) package - it contains scripts that log the values for the variables in the init, install and uninstall nuget scripts (if you are building a nuget package). The uninstall script definitely expects that the script start with a params declaration to capture the `$installPath, $toolsPath, $package and $project` variables. You can also view the values of those macros by looking at the project properties, build tab and clicking on the Edit Pre-Build button. At the bottom of the dialog box that opens it will show you the current values and all the available variables: ![Visual Studio Pre/Post Build Macros Dialog](https://i.stack.imgur.com/lZnKZ.png) This page has quite a bit of information on how to package things up. Hopefully it can shed some additional light on what might be missing: <http://docs.nuget.org/docs/creating-packages/creating-and-publishing-a-package>
DTS Package sql job I've never used SQL jobs or DTS. What does.. ``` N'DTSRun /~Z0xB6A63469D036090E592289E9BB069DBBB66DA8B3F00BB110255F2D64DD7346100CB714C2190C187070FAAF5BA84AB86B45D9EDCF423D9EE4FD4440C56ED3BB66F3337538BBC796E1A2AA15E8A78ED222DED7F01A55CF9AB025F8CB97E387FB129E9C77A6602918F64627B07CD005ED09569E30567F5AF1991346894A13CB2D7083C04A03AD842CF0C18665 ' ``` mean? Where do I find the code to change this routine?
This is an encrypted `DTSRun` command. With `DTSRun` it is also possible to run an encrypted command line (as in your case). You have to decrypt the command to be able to locate your package. The steps below decrypt it: 1. Copy the DTSRun command line from the job step (including the very long encrypted string!) 2. Open a Windows Command Line Window and paste the command into it. 3. Add /!X and /!C to the end of the DTSRun command. (/!X says do not run and /!C says copy results into clipboard) 4. Execute the command. (You should see DTSRun: Loading … and DTSRun: Executing …. as is displayed below and the cleartext parameters should now be in your paste buffer) More info about the `DTSRun` utility: ![enter image description here](https://i.stack.imgur.com/GDXSb.jpg) More info at the link below: [Decrypting the encrypted DTSRun /~Z command-line parameters](http://bisqlserver.rdacorp.com/2008/08/decrypting-encrypted-dtsrun-z-command.html)
Does anyone know if Heroku is still with Facebook? I created a Facebook app and now there is no "host your app in Heroku" option. I wanted to check the default Python application for the Facebook app. So I'm not sure whether I can no longer create apps on Heroku, whether Heroku is no longer with Facebook, or maybe it is a bug. Does someone know what is happening here? Thank you
The formal Facebook integration is [no longer available](https://devcenter.heroku.com/articles/configuring-your-facebook-app-as-a-canvas-page). However, independent of whether or not the button to create a Heroku app is on Facebook or not, [you can always create an app on Heroku that integrates with Facebook](https://devcenter.heroku.com/articles/facebook). You'll just have to manually perform the administrative and configuration steps that were previously automated. You mentioned wanting the default Python app? You can find the various Heroku/Facebook app templates in GitHub: - [Python Facebook app](https://github.com/heroku/facebook-template-python) - [Ruby Facebook app](https://github.com/heroku/facebook-template-ruby) - [PHP Facebook app](https://github.com/heroku/facebook-template-php) - [Node.js Facebook app](https://github.com/heroku/facebook-template-nodejs)
Insufficient definition of replicate I have a question that I think is rather tricky. > > The standard prelude contains the function > > > > ``` > replicate :: Int -> a -> [a] > > ``` > > The following might seem like a reasonable definition for it > > > > ``` > replicate n x = take n [x,x,..] > > ``` > > But it is actually not sufficient. Why not? > > > I know that the `replicate` function is defined as: ``` replicate :: Int -> a -> [a] replicate n x = take n (repeat x) ``` And `repeat` is defined as: ``` repeat :: a -> [a] repeat x = xs where xs = x:xs ``` Is the definition insufficient (from the question) because it uses an infinite list?
First of all there is a small syntax error in the question, it should be: ``` replicate n x = take n [x,x..] -- ^ no comma ``` but let's not be picky. Now when you use range syntax (i.e. `x..`), then `x` should be of a type that is an instance of `Enum`. Indeed: ``` Prelude> :t \n x -> take n [x,x..] \n x -> take n [x,x..] :: Enum a => Int -> a -> [a] ``` You can argue that `x,x..` will only generate `x`, but the Haskell compiler does not know that at compile time. So the type in `replicate` (in the question) is **too specific**: it implies a type constraint - `Enum a` - that is actually *not* necessary. Your own definition on the other hand is perfectly fine. Haskell has no problem with infinite lists since it uses lazy evaluation. Furthermore because you define `xs` with `xs` as tail, you actually constructed a circular linked list which also is better in terms of memory usage.
Ways to Install and uninstall programs in deb format What are ways to install/uninstall programs in Ubuntu. I will mention only **.deb** since it is the official package format and for example installing a .sh or .run program can typically be as simple as (In latest Ubuntu) `./program.sh` or `sh program.run`. Also note that am talking about HOW to install packages IN Ubuntu, not talking about downloading a new installer (Like some Super Cow Software Center Duke Nukem version from sourceforge or something). Only tools that already come in the repositories (In any of the 4 standard ones, main, universe, restricted or multiverse). Take into consideration all GUI and terminal ways of doing it, with basic format of doing the installation. With this in mind I am talking for example about Software Center, synaptic, apt-get, aptitude, dpkg, etc... a way to show all tools that are available in Ubuntu to install deb packages. NOTE - Forgot, this should be Community Wiki since it will have all ways a user can install/uninstall something. Like a friendly guide on doing that. UPDATED question to remove the confusion some are having about sh/run files.
Ways to Install & Uninstall .deb packages in Ubuntu: # SOFTWARE CENTER This is the default GUI installer/uninstaller for Ubuntu to search, install and remove applications. ![enter image description here](https://i.stack.imgur.com/cKIQS.png) # SYNAPTIC This was the old way which can still be used by installing the `synaptic` package. It works similarly to Software Center with a few differences like installing multiple packages at the same time and other details. ![enter image description here](https://i.stack.imgur.com/5drRb.png) # APT-GET & APT-CACHE These are the default terminal ways in Ubuntu to search, install, update and remove applications. ![Searching & Installing](https://i.stack.imgur.com/RNoct.png) Search & Install ![enter image description here](https://i.stack.imgur.com/MSzWT.png) Remove an application ![enter image description here](https://i.stack.imgur.com/sgpl1.png) Remove an application COMPLETELY (Including config files) ![enter image description here](https://i.stack.imgur.com/R4VrG.png) Update Repositories # APTITUDE This was the old terminal way of searching, installing, updating and removing applications. Can be used by installing the `aptitude` package. ![enter image description here](https://i.stack.imgur.com/xh1hX.png) Search & Install ![enter image description here](https://i.stack.imgur.com/xICES.png) Remove an application ![enter image description here](https://i.stack.imgur.com/AHSba.png) Remove an application COMPLETELY (Including config files) ![enter image description here](https://i.stack.imgur.com/CoCCX.png) Update Repositories # DPKG ![enter image description here](https://i.stack.imgur.com/ACTLY.png) Installing a package ![enter image description here](https://i.stack.imgur.com/mzZZP.png) Removing a package ![enter image description here](https://i.stack.imgur.com/opNHa.png) Purging a package There are other cases like **dselect**, **PackageKit** and **GDebi**. 
For Kubuntu you have alternatives like Kubuntu's **Adept Manager**, newer **KPackageKit** or **Muon**. As an alternative to DEB packages you have **alien** which can install RPM packages (Not to be confused with Revolutions Per Minute since that would make Fedora really fast ;) )
What will GroupBy in LINQ do if it has two parameters? I am trying to understand the code below: ``` Color32[] colors = mesh.colors32; IEnumerable<IGrouping<byte, int>> hierarchyMap = colors.Select((color, index) => new { color, index }).GroupBy(c => c.color.g, c => c.index); ``` I used google and only found some tutorials for GroupBy(xxx)(only one parameter inside brackets), which means xxx is the key for the group. What if there were two parameters inside the brackets?
Technically, the accepted answer is trying to group by using two keys. It doesn't explain what if there are two parameters inside the `GroupBy` bracket. If there are two parameters inside the bracket, it will group the elements of a sequence according to a specified key selector function and projects the elements for each group by using a specified function. Let say we have an `Employee` class ``` public class Employee { public string Name { get; set; } public int Age { get; set; } } ``` And then the code logic is below. ``` var employees = new List<Employee> { new Employee { Name="Dave", Age=25 }, new Employee { Name="John", Age=23 }, new Employee { Name="Michael", Age=30 }, new Employee { Name="Bobby", Age=30 }, new Employee { Name="Tom", Age=25 }, new Employee { Name="Jane", Age=21 } }; var query = employees.GroupBy(employee => employee.Age, employee => employee.Name); foreach (IGrouping<int, string> employeeGroup in query) { Console.WriteLine(employeeGroup.Key); foreach (string name in employeeGroup) { Console.WriteLine($"=> {name}"); } } ``` The output will be: ``` 25 => Dave => Tom 23 => John 30 => Michael => Bobby 21 => Jane ``` Reference from [MSDN](https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable.groupby?view=netframework-4.8#System_Linq_Enumerable_GroupBy__3_System_Collections_Generic_IEnumerable___0__System_Func___0___1__System_Func___0___2__)
Android PagedListAdapter with filterable I have a action bar menu item with search action. I have a recyclerview which binds data from pagedlistadapter where it loads pagedlist. I need to filter the adapter data white the searchview text changes as we do in other adapters I have done something like this, but it does not work. **Edits:** This is my complete code I use for adapter ``` public class ProductsRvAdapter extends PagedListAdapter<Product, ProductsRvAdapter.ViewHolder> implements Filterable { private Context context; public ProductsRvAdapter(Context context) { super(DIFF_CALLBACK); this.context = context; } private static DiffUtil.ItemCallback<Product> DIFF_CALLBACK = new DiffUtil.ItemCallback<Product>() { @Override public boolean areItemsTheSame(Product oldProduct, Product newProduct) { return oldProduct.getId() == newProduct.getId(); } @Override public boolean areContentsTheSame(Product oldProduct, @NonNull Product newProduct) { return oldProduct.equals(newProduct); } }; @NonNull @Override public ViewHolder onCreateViewHolder(@NonNull ViewGroup viewGroup, int i) { return new ViewHolder(LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.row_item_product, viewGroup, false)); } @Override public void onBindViewHolder(@NonNull ViewHolder viewHolder, int i) { Product product = getItem(i); if (product == null) { viewHolder.clear(); } else { viewHolder.bind(product); } } public class ViewHolder extends RecyclerView.ViewHolder { //declare views public ViewHolder(@NonNull View itemView) { super(itemView); //initialize views } public void bind(Product product) { //bind data to views } public void clear() { //clear views data } } @Override public Filter getFilter() { return new Filter() { @Override protected FilterResults performFiltering(CharSequence charSequence) { String query = charSequence.toString(); PagedList<Product> filtered = null; if (query.isEmpty()) { filtered = getCurrentList(); } else { for (Product product : itemList) { if 
(product.getProductName().toLowerCase().contains(query.toLowerCase())) { filtered.add(product); } } } FilterResults results = new FilterResults(); results.count = filtered.size(); results.values = filtered; return results; } @Override protected void publishResults(CharSequence charSequence, FilterResults results) { submitList((PagedList<Product>) results.values); } }; } } ```
The thing is, if you apply a filter to a PagedList you will get a plain List, and then in ``` submitList((PagedList<Product>) results.values); ``` it will crash because the types are different. The behaviour you want is possible, but it works differently with a "paged recycler". In a simple RecyclerView you load and "store" the data in it, so you can filter it. If the user changes page or scrolls down you just add more data and can filter it again (but you also have to fetch and add to the adapter any data from the server with the filter applied, in case it does not exist in the adapter yet). With a paged list, in my opinion, if you need to filter it, you should add the filter in your DataSource and create a new PagedList with this filtered data source (the filter applied to the API or database call in the loadInitial and loadRange methods). Then when the list scrolls down (or loads a new page) you will load items from the filtered pages. Hope it helps
How to keep all intermediate property variables in a deeply nested destructure Given this: ``` var metadata = { title: 'Scratchpad', translations: [ { title: 'JavaScript-Umgebung' } ] }; const output = ({ title: englishTitle, translations: [{ title: localeTitle }] }) => ( console.log(englishTitle), // "Scratchpad" console.log(localeTitle) // "JavaScript-Umgebung" ) output(metadata); ``` Now `translations` is undefined: ``` console.log(JSON.stringify(translations)) // ❌ "undefined" ``` *Question*: Is there a way to create a variable `translations` within the params destructure expressions (e.g. within the function param parenthesis)
You can destructure `translations` as well as its inner members: ``` var metadata = {
  title: 'Scratchpad',
  translations: [
    { title: 'JavaScript-Umgebung' }
  ]
};

const output = ({
  title: englishTitle,
  translations, // get translations
  translations: [{ title: localeTitle }]
}) => (
  console.log(englishTitle), // "Scratchpad"
  console.log(localeTitle),  // "JavaScript-Umgebung"
  console.log(translations)  // [{ title: 'JavaScript-Umgebung' }]
)

output(metadata); ```
CRM 2011: Global JavaScript and button in status bar I'm not new to CRM 2011, but I'm facing one big problem... I found a solution on the net that implements a scoring/ranking system in CRM. I was completely confused when I saw a star in the top status bar, above the ribbon button bar, next to the username in the right corner of the screen. ![enter image description here](https://i.stack.imgur.com/rLp3a.png) When I click this button, a div opens with some information about users and the scores they have. 1. Where can I put a JavaScript function (jQuery, for example) that can be executed globally? How do I call that function, and what event should I catch? I need this button/function to be active on all pages in CRM, like this one. 2. What is the id of that place in the top bar? I need it to put this button there from my script.
The CRM solution you appear to be talking about is this: <http://www.wave-access.com/Public_en/ms_crm_gamification_product.aspx> This is obviously unsupported. However, they achieve it by adding a dummy button to the ribbon, specifically the Jewel Menu. This button's command is linked to a JS function in a web resource. The button is always hidden, but the JS file is always loaded. It should be noted that your JS is loaded into Main.aspx (the root document). From there it's a matter of injecting HTML elements or JavaScript into the desired frame (Nav or Content). Here is the RibbonDiffXml to add to a solution: ``` <RibbonDiffXml>
  <CustomActions>
    <CustomAction Id="Dummy.CustomAction" Location="Mscrm.Jewel.Controls1._children" Sequence="41">
      <CommandUIDefinition>
        <Button Id="Dummy" Command="Dummy.Command" Sequence="50" ToolTipTitle="$LocLabels:Dummy.LabelText" LabelText="$LocLabels:Dummy.LabelText" ToolTipDescription="$LocLabels:Dummy.Description" TemplateAlias="isv" />
      </CommandUIDefinition>
    </CustomAction>
  </CustomActions>
  <Templates>
    <RibbonTemplates Id="Mscrm.Templates"></RibbonTemplates>
  </Templates>
  <CommandDefinitions>
    <CommandDefinition Id="Dummy.Command">
      <EnableRules />
      <DisplayRules>
        <DisplayRule Id="Dummy.Command.DisplayRule.PageRule" />
      </DisplayRules>
      <Actions>
        <JavaScriptFunction Library="$webresource:MyGlobal.js" FunctionName="Anything" />
      </Actions>
    </CommandDefinition>
  </CommandDefinitions>
  <RuleDefinitions>
    <TabDisplayRules />
    <DisplayRules>
      <DisplayRule Id="Dummy.Command.DisplayRule.PageRule">
        <PageRule Address="aaaa" />
      </DisplayRule>
    </DisplayRules>
    <EnableRules />
  </RuleDefinitions>
  <LocLabels>
    <LocLabel Id="Dummy.Description">
      <Titles>
        <Title languagecode="1033" description="Description" />
      </Titles>
    </LocLabel>
    <LocLabel Id="Dummy.LabelText">
      <Titles>
        <Title languagecode="1033" description="Description" />
      </Titles>
    </LocLabel>
  </LocLabels>
</RibbonDiffXml> ``` This goes in the root ImportExportXml element of the customizations.xml. You may also need to add Application Ribbons as a solution component via the UI.
Set the scope parameter for Microsoft Graph I have registered an app with `apps.dev.microsoft.com` and set its permissions to this: ![Screen shot from apps.dev.microsoft.com panel](https://i.stack.imgur.com/lqiTx.png) I am getting a token to use the Microsoft Graph API like this: <https://developer.microsoft.com/en-us/graph/docs/concepts/auth_v2_service#4-get-an-access-token> I set the scope to: `scope=https://graph.microsoft.com/.default` I get back a token without the `Directory.ReadWrite.All` permission. How can I modify the request to get back a token with the permissions specified in the `apps.dev.microsoft.com` portal? Should I change the scope parameter? I tried with `graph.microsoft.com/directory.readwrite.all` as suggested online, without any luck.
To use `schemaExtensions` you need the `Directory.AccessAsUser.All` scope. The problem you have here, however, is that you're using the `client_credentials` grant (aka "App-Only Authentication"), which only supports Application Permissions (of which `Directory.AccessAsUser.All` isn't one). In order to use any of the Delegated Permissions, you need to authenticate a user rather than just the application. Delegated permissions are just that: permissions a user has delegated (aka transferred/assigned) to your application for a period of time. There are a couple of options here. If the only issue you're running into is with `schemaExtensions` and you're not selling a commercial solution (an ISV), you most likely don't need to bake this into your application in the first place. Instead, try using [Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) to create them. *Please note that you will need to execute Admin Consent for Graph Explorer before this will work with your tenant.* Another option is to support both `client_credentials` and `code` grants in the application. If you're an ISV selling a commercial solution, this is likely the best bet. The cleanest way to execute this is to have a "setup" app that is executed by an Administrator. The Admin authenticates themselves using a `code` grant and you create all of the needed extensions. The bonus of this process is that it provides a clean and logical place to kick off the Admin Consent for the `client_credentials` grant you're going to need for the production application. One thing to note is that you can use the same App ID for [Auth Code](https://www.rfc-editor.org/rfc/rfc6749#section-1.3.1), [Implicit](https://www.rfc-editor.org/rfc/rfc6749#section-1.3.2), and [Client Credential](https://www.rfc-editor.org/rfc/rfc6749#section-1.3.4) grants. Depending on the grant you select, it will use either the Delegated or Application permissions you defined.
This makes the above scenario pretty straightforward. The registration UX unfortunately doesn't do a great job of surfacing how delegated vs. application permissions are applied, leading many users to assume both sets are always being used. In reality, only one set is ever applicable, depending on the grant type in use. The one exception to this is Admin Consent, which consents to all of the permissions requested (i.e. you don't need separate Delegated and Application consent flows).
Correlation matrix of grouped variables in dplyr I have a grouped data frame (using `dplyr`) with 50 numeric columns, which are split into groups using one of the columns. I want to calculate a matrix of correlation between all non-grouping columns and one particular column. An example with the `mtcars` dataset: ``` data(mtcars)
cor(mtcars[,2:11], mtcars[,2]) ``` returns a list of correlations between miles per gallon and the other variables. Let's say, however, that I wish to calculate this same correlation for each group of cylinders, e.g.: ``` library(dplyr)
mtcars <- mtcars %>% group_by(cyl) ``` How would I do this? I am thinking something like ``` mtcars %>%
  group_by(cyl) %>%
  summarise_each(funs(cor(...))) ``` But I do not know what to put in the `...`, as I don't know how to specify a column in the `dplyr` chain. **Related**: [Linear model and dplyr - a better solution?](https://stackoverflow.com/questions/26765426/linear-model-and-dplyr-a-better-solution/26765951#26765951) has an answer which is very similar to @akrun's answer. Also, over on Cross Validated, <https://stats.stackexchange.com/questions/4040/r-compute-correlation-by-group> has other solutions using packages which are not `dplyr`.
We could use `do`. ``` library(dplyr)
mtcars %>%
  group_by(cyl) %>%
  do(data.frame(Cor = t(cor(.[, 3:11], .[, 3]))))
# A tibble: 3 x 10
# Groups:   cyl [3]
#    cyl Cor.disp  Cor.hp Cor.drat Cor.wt Cor.qsec Cor.vs Cor.am Cor.gear Cor.carb
#  <dbl>    <dbl>   <dbl>    <dbl>  <dbl>    <dbl>  <dbl>  <dbl>    <dbl>    <dbl>
#1     4     1.00   0.435  -0.500   0.857    0.328 -0.187 -0.734  -0.0679   0.490
#2     6     1.00  -0.514  -0.831   0.473    0.789  0.637 -0.637  -0.899   -0.942
#3     8     1      0.118  -0.0922  0.755    0.195     NA -0.169  -0.169    0.0615 ``` **NOTE:** the `t` part was contributed by @Alex --- Or use `group_modify`: ``` mtcars %>%
  select(-mpg) %>%
  group_by(cyl) %>%
  group_modify(.f = ~ as.data.frame(t(cor(select(.x, everything()), .x[['disp']]))))
# A tibble: 3 x 10
# Groups:   cyl [3]
#    cyl  disp      hp    drat    wt  qsec     vs     am   gear    carb
#  <dbl> <dbl>   <dbl>   <dbl> <dbl> <dbl>  <dbl>  <dbl>  <dbl>   <dbl>
#1     4  1.00   0.435 -0.500  0.857 0.328 -0.187 -0.734 -0.0679  0.490
#2     6  1.00  -0.514 -0.831  0.473 0.789  0.637 -0.637 -0.899  -0.942
#3     8  1      0.118 -0.0922 0.755 0.195     NA -0.169 -0.169   0.0615 ``` --- Or another option is `summarise` with `across`. Create a new column 'disp1' as a copy of 'disp', then group by 'cyl' and get the `cor` of the columns 'disp' to 'carb' with 'disp1': ``` mtcars %>%
  mutate(disp1 = disp) %>%
  group_by(cyl) %>%
  summarise(across(disp:carb, ~ cor(., disp1)))
# A tibble: 3 x 10
#    cyl  disp      hp    drat    wt  qsec     vs     am   gear    carb
#* <dbl> <dbl>   <dbl>   <dbl> <dbl> <dbl>  <dbl>  <dbl>  <dbl>   <dbl>
#1     4  1.00   0.435 -0.500  0.857 0.328 -0.187 -0.734 -0.0679  0.490
#2     6  1.00  -0.514 -0.831  0.473 0.789  0.637 -0.637 -0.899  -0.942
#3     8  1      0.118 -0.0922 0.755 0.195     NA -0.169 -0.169   0.0615 ``` Or ``` library(data.table)
d1 <- copy(mtcars)
setnames(setDT(d1)[, as.list(cor(.SD, .SD[[1]])), cyl, .SDcols = 3:11],
         names(d1)[2:11])[] ```
Transposing data with R I have the following dataset: ``` prod   month    count
sub12  2012-06    566
sub1   2012-06  10239
sub6   2012-06    524
sub12  2012-07   2360
sub1   2012-07  13853
sub6   2012-07   2352
sub12  2012-08   3950
sub1   2012-08  14738
sub6   2012-08   4104 ``` I'm trying to get this: ``` prod   2012-06  2012-07
sub12      566     2360
sub1     10239    13853
sub6       524     2352 ``` I've run the following command lines: ``` aperm(table(data))
data.frame(table(data)) ``` but I'm getting the wrong results. Where am I wrong, please?
Search for "reshape long wide". ``` data <- read.table(text = "prod month count
sub12 2012-06 566
sub1 2012-06 10239
sub6 2012-06 524
sub12 2012-07 2360
sub1 2012-07 13853
sub6 2012-07 2352
sub12 2012-08 3950
sub1 2012-08 14738
sub6 2012-08 4104", header = TRUE)

library(reshape2)
dcast(prod ~ month, data = data)
#    prod 2012-06 2012-07 2012-08
# 1  sub1   10239   13853   14738
# 2 sub12     566    2360    3950
# 3  sub6     524    2352    4104 ```
I can't get preg\_match to test if the entire string matches the regex I'm using this regular expression to test if a username is valid: `[A-Za-z0-9 _]{3,12}` when I test it for matches in a text editor with the string `test'ing`, it highlights 'test' and 'ing', but when I use the following code in PHP: `if(!preg_match('/[A-Za-z0-9 _]{3,12}/', $content)` where $content is `test'ing` and it should return FALSE, it still returns true. Is there something wrong with my regular expression? I need: - Minimum length 3, max 12 `{3,12}` - No spaces/underscores in front or after the string, and no spaces/underscores in a row anywhere (I'm using additional checks for this because I'm not very good with regex) - Only alphanumerics, spaces and underscores allowed `[A-Za-z0-9 _]` Thanks in advance...
You're missing the anchors in the regular expression, so the regex can comfortably match 3 characters in the character class *anywhere* in the string. This is not what you want. You want to check if your regex matches against the **entire string**. For that, you need to include the anchors (`^` and `$`). ``` if(!preg_match('/^[A-Za-z0-9 _]{3,12}$/', $content) ^ ^ ``` `^` asserts the position at the beginning of the string and `$` asserts position at the end of the string. It's important to note that these meta characters do not actually consume characters. They're zero-width assertions. Further reading: - [Regex Anchors](http://www.regular-expressions.info/anchors.html) on [regular-expressions.info](http://regular-expressions.info) - [The Stack Overflow Regular Expressions FAQ](https://stackoverflow.com/q/22937618/1438393)
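The same anchors behave identically in JavaScript's regex engine; here is a quick runnable sketch of the difference (the test strings are made up for illustration):

```javascript
const unanchored = /[A-Za-z0-9 _]{3,12}/;
const anchored = /^[A-Za-z0-9 _]{3,12}$/;

// Without anchors, "test'ing" passes because "test" matches somewhere inside.
console.log(unanchored.test("test'ing")); // true

// With anchors, the whole string must consist of allowed characters.
console.log(anchored.test("test'ing"));   // false (apostrophe is not allowed)
console.log(anchored.test("valid_name")); // true  (10 allowed characters)
console.log(anchored.test("ab"));         // false (shorter than 3)
```

The additional rules (no leading/trailing or consecutive spaces/underscores) would still need the separate checks you mentioned, or a more elaborate pattern.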
Clojure function evaluation I was searching for a Clojure logarithm function and concluded there really isn't one for v1.3+. What took me so long to invoke `Math/log` was this: ``` user=> Math/log
CompilerException java.lang.RuntimeException: Unable to find static field: log in class java.lang.Math, compiling:(NO_SOURCE_PATH:0:0) ``` I tried variations like `(use 'java.lang.Math)` and so forth. It wasn't until I tried an example that I found it's there after all: ``` user=> (Math/log 10)
2.302585092994046 ``` This must be by design, but what is the reason?
A static method imported from Java is not a Clojure function or any other kind of Clojure object. The compiler reports any misfit as a missing field, even where the context is looking for a function. All of these ... ``` Math/log
(Math/log)
(map Math/log (range 1 5)) ``` ... produce such errors. For a corresponding Clojure function: - `inc` returns the function; - `(inc)` reports `... wrong number of args(0) ...`; - `(map inc (range 1 5))` returns `(2 3 4 5)`. You can wrap a Java method as a Clojure one: ``` (defn log [x] (Math/log x)) ``` ... getting the expected result: ``` (map log (range 1 5))
;(0.0 0.6931471805599453 1.0986122886681098 1.3862943611198906) ``` The Clojure [Java Interop](http://clojure.org/java_interop#Java%20Interop-The%20Dot%20special%20form-(memfn%20method-name%20arg-names*)) page recommends `(map #(Math/log %) (range 1 5))` ... in such cases.
How are multiple requests to one file on a server dealt with? This is a bit of a theoretical question coming from someone very inexperienced in programming and servers, but here goes. If I have a PHP file stored on a server, what happens if multiple people start accessing this script at the same time? Do multiple requests stack, or can they be processed in parallel? What should I look up to better understand how servers work? I hope this makes sense!
The webserver *(Apache, for example)* is generally able to deal with several requests at the same time *(the default being 200 or 400 for Apache)*. If the requests correspond to read-only situations, there should be no problem at all: several processes can read the same file at the same time; and if that file is a PHP script, several processes can execute it at the same time. If your script is querying a database, there should not be too many problems: databases are made to deal with concurrency situations *(even if reads scale better than writes, which may have to be stacked if they modify the same data)*. If your script is trying to write to a file, you should put some locking mechanism in place *(using [`flock`](http://www.php.net/flock), for example)* to avoid conflicts; as a consequence, each process will wait until no other process is writing to the file before writing itself.
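To make the locking point concrete, here is a small shell sketch (assuming a Linux system with util-linux's `flock(1)` available) that serializes five concurrent writers on one file; this is the same idea PHP's `flock()` implements inside a script:

```shell
#!/bin/sh
out=$(mktemp)   # the shared file being written to
lock=$(mktemp)  # the lock file guarding it

# Five concurrent "requests"; flock -x makes each one wait for an
# exclusive lock on fd 9 before appending its line.
for i in 1 2 3 4 5; do
  (
    flock -x 9
    echo "writer $i done" >> "$out"
  ) 9>"$lock" &
done
wait

wc -l < "$out"  # all five lines land intact, one per writer
```

Without the lock, concurrent appends from a real multi-process server could interleave or clobber each other's updates (especially for read-modify-write patterns like counters).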
getting .class for List in java I have the following function (EDIT: changed `int` to the class `Long`): ``` protected <T> ApiResponse<T> getApiResponse(Object obj, Class<T> clasx) ``` How can I pass the class of `List<Long>` as the second argument? Can I pass `List<Long>.class`? That is not working.
[Type erasure](http://docs.oracle.com/javase/tutorial/java/generics/erasure.html) means that you can only pass `List.class`. From the linked documentation: > > When a generic type is instantiated, the compiler translates those types by a technique called type erasure — a process where the compiler removes all information related to type parameters and type arguments within a class or method. Type erasure enables Java applications that use generics to maintain binary compatibility with Java libraries and applications that were created before generics. > > > For instance, `Box<String>` is translated to type `Box`, which is called the raw type — a raw type is a generic class or interface name without any type arguments. This means that you can't find out what type of Object a generic class is using at runtime. The following operations are not possible: > > > ``` public class MyClass<E> {
    public static void myMethod(Object item) {
        if (item instanceof E) { ... }   // Compiler error
        E item2 = new E();               // Compiler error
        E[] iArray = new E[10];          // Compiler error
        E obj = (E) new Object();        // Unchecked cast warning
    }
} ``` > > The operations marked above are meaningless at runtime because the compiler removes all information about the actual type argument (represented by the type parameter `E`) at compile time. > > >
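Since the element type is erased, a common workaround (sketched below with hypothetical names) is to pass the raw `List.class` together with the element class as a second token and carry the pair around yourself:

```java
import java.util.List;

public class ErasureDemo {

    // Hypothetical helper: the raw container class and the element class
    // travel as two separate tokens, since List<Long>.class does not exist.
    static <T> String describe(Class<? extends List> listClass, Class<T> elementClass) {
        return listClass.getSimpleName() + "<" + elementClass.getSimpleName() + ">";
    }

    public static void main(String[] args) {
        // All the runtime ever sees is the raw List.class,
        // so the element type must be supplied explicitly.
        System.out.println(describe(List.class, Long.class)); // prints List<Long>
    }
}
```

Libraries such as Gson (`TypeToken`) and Jackson (`TypeReference`) formalize the same trick by capturing the generic type in an anonymous subclass, which is worth looking at if the API you are calling supports one of them.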