cocoa: WebView hyperlinks with `target="_blank"` cannot be opened I am developing a webview-like application, but I have one problem: links with `target="_blank"` cannot be opened. How do I open such a link in a new window in the current application - not in Safari? Thanks for help. ![enter image description here](https://i.stack.imgur.com/RK6vc.png) @Rob Keniger The code is not running. Why?
You need to set an object as the `UIDelegate` of your `WebView` and in that object implement the [`webView:createWebViewWithRequest:`](https://developer.apple.com/library/mac/documentation/Cocoa/Reference/WebKit/Protocols/WebUIDelegate_Protocol/Reference/Reference.html#//apple_ref/doc/uid/20001915-CHDDCJCA) method. In your implementation of that method, you need to open a new window containing a separate `WebView` and then tell its `mainFrame` to load the `URLRequest` passed as a parameter to the method. **Update:** I've looked at your code. You need to assign an object as the web view's `UIDelegate`, so add a `[webView setUIDelegate:self]` line into `applicationDidFinishLaunching:`. A very simple example of how to implement the delegate would be: ``` - (WebView *)webView:(WebView *)sender createWebViewWithRequest:(NSURLRequest *)request { NSUInteger windowStyleMask = NSClosableWindowMask | NSMiniaturizableWindowMask | NSResizableWindowMask | NSTitledWindowMask; NSWindow* webWindow = [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 800, 600) styleMask:windowStyleMask backing:NSBackingStoreBuffered defer:NO]; WebView* newWebView = [[WebView alloc] initWithFrame:[webWindow contentRectForFrameRect:webWindow.frame]]; [newWebView setAutoresizingMask:NSViewWidthSizable|NSViewHeightSizable]; [webWindow setContentView:newWebView]; [webWindow center]; [webWindow makeKeyAndOrderFront:self]; [[newWebView mainFrame] loadRequest:request]; return newWebView; } ```
Localization in Nancy without the Razor view engine For the moment I'm using Razor as my view engine in Nancy. I can access my resource file like this in Razor: ``` @Text.text.greeting ``` But I would like to switch to a different view engine. Are there other view engines available that support the TextResource? How does localization work in the Super Simple View Engine? Or is there a way to access the resources using the model?
Good question! This is something that I have needed to do myself. I managed to solve this problem using the basis of the suggestion made to you by @Karl-Johan Sjögren - i.e. I was able to create an extension to the Super Simple View Engine (SSVE). --- **The Background** The SSVE has been designed in such a way that you can inject additional 'matchers' that allow you to do some processing on your view templates as they are getting rendered for the output of a request. You will notice the following constructor (as of 2014/05/12) within the SSVE, that allows you to pass in additional 'matchers': ``` public SuperSimpleViewEngine(IEnumerable<ISuperSimpleViewEngineMatcher> matchers) { this.matchers = matchers ?? Enumerable.Empty<ISuperSimpleViewEngineMatcher>(); this.processors = new List<Func<string, object, IViewEngineHost, string>> { PerformSingleSubstitutions, PerformContextSubstitutions, PerformEachSubstitutions, PerformConditionalSubstitutions, PerformPathSubstitutions, PerformAntiForgeryTokenSubstitutions, this.PerformPartialSubstitutions, this.PerformMasterPageSubstitutions, }; } ``` The basic way that most of the template substitution works in the SSVE is by doing very simple regular expression matches against the view templates. If a regular expression is matched, then a substitution method is invoked within which the appropriate substitution occurs. For example, the default PerformSingleSubstitutions processor/matcher that comes with the SSVE is used to do your basic '@Model.' substitutions. The following processor workflow could occur: - The text '@Model.Name' is matched within a view template. - The substitution method is fired for the Model parameter substitution. - Some reflection occurs against the dynamic model in order to get the value of the 'Name' property. - The value of the 'Name' property is then used to replace the string of '@Model.Name' within the view template.
--- **The Implementation** Ok, so now that we have the foundation, here is how you can create your very own Translation matcher. :) First you will need to create an implementation of ISuperSimpleViewEngineMatcher. Below is a really basic example I have created for the purpose of illustration: ``` internal sealed class TranslateTokenViewEngineMatcher : ISuperSimpleViewEngineMatcher { /// <summary> /// Compiled Regex for translation substitutions. /// </summary> private static readonly Regex TranslationSubstitutionsRegEx; static TranslateTokenViewEngineMatcher() { // This regex will match strings like: // @Translate.Hello_World // @Translate.FooBarBaz; TranslationSubstitutionsRegEx = new Regex( @"@Translate\.(?<TranslationKey>[a-zA-Z0-9-_]+);?", RegexOptions.Compiled); } public string Invoke(string content, dynamic model, IViewEngineHost host) { return TranslationSubstitutionsRegEx.Replace( content, m => { // A match was found! string translationResult; // Get the translation 'key'. var translationKey = m.Groups["TranslationKey"].Value; // Load the appropriate translation. This could farm off to // a ResourceManager for example. The below implementation // obviously isn't very useful and is just illustrative. :) if (translationKey == "Hello_World") { translationResult = "Hello World!"; } else { // We didn't find any translation key matches so we will // use the key itself. translationResult = translationKey; } return translationResult; }); } } ``` Okay, so when the above matcher is run against our view templates, it will try to find strings starting with '@Translate.'. The text just after '@Translate.' is considered to be our translation key, so in the example of '@Translate.Hello\_World', the translation key would be 'Hello\_World'. When a match occurs, the replace method is fired to find and return the appropriate translation for the translation key.
My current example will only return a translation for the key of 'Hello\_World' - you would of course have to fill in your own mechanism with which to do the translation lookups, perhaps farming off to the default resource management support of .NET. The matcher won't get automatically hooked up into the SSVE; you will have to use the IoC supported features of Nancy to register your matcher against that constructor parameter I highlighted earlier. To do so you will need to override the ConfigureApplicationContainer method within your Nancy bootstrapper and add a registration similar to the one below: ``` public class MyNancyBootstrapper : DefaultNancyBootstrapper { protected override void ConfigureApplicationContainer(TinyIoCContainer container) { base.ConfigureApplicationContainer(container); // Register the custom/additional processors/matchers for our view // rendering within the SSVE container .Register<IEnumerable<ISuperSimpleViewEngineMatcher>>( (c, p) => { return new List<ISuperSimpleViewEngineMatcher>() { // This matcher provides support for @Translate. tokens new TranslateTokenViewEngineMatcher() }; }); } ... ``` The final step is to actually add your translation tokens to your views: ``` <!-- index.sshtml --> <html> <head> <title>Translator Test</title> </head> <body> <h1>@Translate.Hello_World;</h1> </body> </html> ``` --- As I said, this is a very basic example which you could use as the basis to create an implementation to suit your needs. You could for example extend the regular expression matcher to also take into account the target culture that you would like to translate into, or just simply use the current thread culture registered within your application. You have the flexibility to do as you please. :)
How to create a client and server in Go? I am new to Go. I was trying to write a client and server in Go, but it's not giving any output. It gives no error; it just listens. Please can someone help me? I want to create an authentication system in Go where the server authenticates the client using a username and password. **server :** ``` package main import ( "fmt" "net" ) func main() { service := "0.0.0.0:8080" tcpAddr, err := net.ResolveTCPAddr("tcp", service) checkError(err) listener, err := net.ListenTCP("tcp", tcpAddr) checkError(err) for { conn, err := listener.Accept() //fmt.Println("Server listerning") _, err = conn.Read([]byte("HEAD")) if err != nil { conn.Close() } if err != nil { continue } } } func checkError(err error) { if err != nil { fmt.Println("Fatal error ", err.Error()) } } ``` **client :** ``` package main import ( "bufio" "fmt" "net" "os" "strings" ) func main() { if len(os.Args) != 2 { fmt.Println("Usage: ", os.Args[0], "host") os.Exit(1) } host := os.Args[1] conn, err := net.Dial("tcp", host+":8080") checkError(err) _, err = conn.Write([]byte("HEAD")) reader := bufio.NewReader(os.Stdin) for { line, err := reader.ReadString('\n') fmt.Println(err) line = strings.TrimRight(line, " \t\r\n") if err != nil { conn.Close() break } } } func checkError(err error) { if err != nil { fmt.Println("Fatal error ", err.Error()) } } ```
I'm not sure you need to resolve your address in order to listen. You should be able to do just this : ``` listener, err := net.Listen("tcp", ":8080") ``` And you don't seem to do anything with the received bytes server side (you discard the result of `Read`), which explains why you think you receive nothing. Note that your code can only handle one connection at a time. You should handle each opened connection in a new goroutine. Here's an [example of client-server communication over TCP](https://stackoverflow.com/questions/11202058/unable-to-send-gob-data-over-tcp-in-go-programming/11202252#11202252) in a related question.
Rails upgrade: wrong number of arguments (2 for 1) for assign\_attributes Hi, I have upgraded from Rails 3.2.12 to 4.0.0. Most things are working fine now, but I get this error: ![enter image description here](https://i.stack.imgur.com/mJqB0.png) What's the problem in this case?
The message signature changed from two arguments to one. Compare [v3.2.12](http://apidock.com/rails/v3.2.13/ActiveRecord/AttributeAssignment/assign_attributes) and [v4.0.2](http://apidock.com/rails/v4.0.2/ActiveRecord/AttributeAssignment/assign_attributes): ``` assign_attributes(new_attributes, options = {}) # 3.2.12 assign_attributes(new_attributes) # 4.0.2 ``` Rails 3 mass assignment protection is deprecated, and this is part of it. Protecting attributes from mass assignment was [extracted into a gem](https://github.com/rails/protected_attributes). From its README: > > You can also bypass mass-assignment security by using the `:without_protection` option. > > > In versions 4.x, you don't need the `:without_protection` option anymore because you're [encouraged to use Strong Parameters](http://guides.rubyonrails.org/4_0_release_notes.html). **For a smooth upgrade**, you can probably just bundle the `protected_attributes` gem. But note that "this plugin will be officially supported until the release of Rails 5.0." Also, you don't need to use `@user.send(:update_attributes, …)`, you can just use `@user.update_attributes(…)`.
Does HTML comment <!-- act as a single-line comment in JavaScript, and why? How specifically does JavaScript understand the construct `<!--`? From the point of view of JavaScript, is it yet another comment in addition to `//` and `/* */`? From testing it seems that JavaScript treats `<!--` like `//`: a one-liner

```
<script> <!-- alert('hi') //--> </script>
```

does nothing, while

```
<script>
<!--
alert('hi')
//-->
</script>
```

works as expected. Where is this behavior documented? **This is not a duplicate of other questions:** I don't ask why or whether or how it should be used. I ask what syntax and semantics it has in JavaScript, formally. The question is non-trivial and is not answered in other questions: for example, the behavior indicated above cannot be guessed from the other questions and their answers (in fact this was my motivation: my program with a one-liner as above did not work, and those questions and answers were no help in understanding why).
> > From testing it seems that JavaScript treats `<!--` like `//` a one-liner > > > Yes it does, as specified in the ES6 spec, [annex B](http://www.ecma-international.org/ecma-262/6.0/#sec-html-like-comments): > > **B.1.3 HTML-like Comments** > > > > ``` > Comment :: > MultiLineComment > SingleLineComment > SingleLineHTMLOpenComment > SingleLineHTMLCloseComment > SingleLineDelimitedComment > > SingleLineHTMLOpenComment :: > <!-- SingleLineCommentCharsopt > > ``` > > However, note the description of annex B: > > This annex describes various legacy features and other characteristics of web browser based ECMAScript implementations. All of the language features and behaviours specified in this annex have one or more undesirable characteristics and in the absence of legacy usage would be removed from this specification. However, the usage of these features by large numbers of existing web pages means that web browsers must continue to support them. The specifications in this annex defined the requirements for interoperable implementations of these legacy features. > > > These features are not considered part of the core ECMAScript language. Programmers should not use or assume the existence of these features and behaviours when writing new ECMAScript code. ECMAScript implementations are discouraged from implementing these features unless the implementation is part of a web browser or is required to run the same legacy ECMAScript code that web browsers encounter. > > > So, this part only exists to describe existing "unofficial" behavior and, as soon as browsers stop implementing this behavior, it will be removed from the spec.
C# library overloads ^ operator. How to use \*\* instead? The [Symbolism](https://github.com/dharmatech/Symbolism) library overloads arithmetic operators. Although it's written in C# I can use it from F#: ``` open Symbolism let x = new Symbol("x") let y = new Symbol("y") let z = new Symbol("z") printfn "%A" (2*x + 3 + 4*x + 5*y + z + 8*y) ``` the output: ``` 3 + 6 * x + 13 * y + z ``` However, it also overloads `^` for powers. This of course doesn't play well with F#. As a step towards a workaround, I exported a method group for powers: ``` printfn "%A" (Aux.Pow(x, 2) * x) ``` output: ``` x ^ 3 ``` How can I overload `**` to use the `Aux.Pow` method group instead? I can do something like this: ``` let ( ** ) (a: MathObject) (b: MathObject) = Aux.Pow(a, b) ``` And that does work for `MathObject` values: ``` > x ** y * x;; val it : MathObject = x ^ (1 + y) ``` But `Aux.Pow` is overloaded for `int` as well: ``` public static MathObject Pow(MathObject a, MathObject b) { return new Power(a, b).Simplify(); } public static MathObject Pow(MathObject a, int b) { return a ^ new Integer(b); } public static MathObject Pow(int a, MathObject b) { return new Integer(a) ^ b; } ``` Any suggestions welcome!
You can use the trick described [here](https://stackoverflow.com/questions/12971965/overloaded-inline-operators-in-f) like this: ``` open Symbolism type MathObjectOverloads = | MathObjectOverloads static member (?<-) (MathObjectOverloads, a: #MathObject, b: int) = MathObject.op_ExclusiveOr(a, b) static member (?<-) (MathObjectOverloads, a: #MathObject, b: #MathObject) = MathObject.op_ExclusiveOr(a, b) static member (?<-) (MathObjectOverloads, a: System.Int32, b: #MathObject) = MathObject.op_ExclusiveOr(a, b) let inline ( ** ) a b = (?<-) MathObjectOverloads a b let two = Integer(2) let three = Integer(3) two ** three two ** 3 2 ** three ``` Unlike in the linked answer, we have to use the (?<-) operator because it's the only operator that can take 3 arguments instead of 2, and we need to overload on both the left and right sides of the ^ operator.
How to handle alpha compositing correctly with OpenGL I was using `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` for alpha compositing, as the documentation said (and actually the same thing is said in the Direct3D documentation). Everything was fine at first, until I downloaded the result from the GPU and made it a PNG image. The resulting alpha component is wrong. Before drawing, I had cleared the frame buffer with an opaque black colour, and after I drew something semi-transparent, the frame buffer became semi-transparent.
Well the reason is obvious. With `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)`, we actually ignore the destination alpha channel and assume it is always 1. This is OK when we treat the frame buffer as something opaque. But what if we need the correct alpha value? Use `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)` and make the source **premultiplied** (use premultiplied texture/vertex colors, or multiply the color components by the alpha component before writing to gl\_FragColor). glBlendFunc can only multiply the original color components with one factor, but alpha compositing needs the destination to be multiplied by both `one_minus_src_alpha` and `dst_alpha`. So it must be premultiplied. We can't do the premultiplication in the frame buffer, but as long as the source and destination are both premultiplied, the result is premultiplied. That is, we first clear the frame buffer with any premultiplied color (for example: `0.5, 0.5, 0.5, 0.5` for 50% transparent white instead of `1.0, 1.0, 1.0, 0.5`), and draw premultiplied fragments on it with `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`, and we will have a correct alpha channel in the result. **But remember to undo the premultiplication if it is not desired for the final result.**
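The arithmetic is easy to check outside of GL: with premultiplied colours, `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)` computes `out = src + dst * (1 - src.a)` on all four channels, and the alpha channel comes out right. A quick Python sketch of the maths (not GL code):

```python
def over_premultiplied(src, dst):
    """Porter-Duff 'over' for premultiplied RGBA tuples in [0, 1].

    Per channel this is exactly what the blend function
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) computes:
    out = src + dst * (1 - src_alpha).
    """
    src_alpha = src[3]
    return tuple(s + d * (1.0 - src_alpha) for s, d in zip(src, dst))

# 50% transparent white, premultiplied: (0.5, 0.5, 0.5, 0.5),
# not (1.0, 1.0, 1.0, 0.5).
src = (0.5, 0.5, 0.5, 0.5)

# Frame buffer cleared to opaque black (already premultiplied).
dst = (0.0, 0.0, 0.0, 1.0)

print(over_premultiplied(src, dst))  # (0.5, 0.5, 0.5, 1.0)
```

Note that the destination alpha now participates correctly: drawing the same source over a fully transparent buffer `(0, 0, 0, 0)` leaves alpha at 0.5 instead of silently promoting it to 1.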
Support HTTP/1.1 and HTTP/2 with a JAX-RS client I want to implement a JAX-RS client that supports both HTTP/1.1 and HTTP/2. The idea is to use HTTP/2 if the server supports ALPN with HTTP/2, and HTTP/1.1 if the server does not provide any information. I have read some articles about ALPN and it seems to be possible, but I can't find anything supporting both protocols at the same time, and I did not manage to plug them together. I am currently using the [Jetty HTTP/1.1 transport connector for Jersey](https://jersey.java.net/documentation/latest/client.html#d0e4895) and a [custom implementation for HTTP/2 transport connector](https://github.com/nhenneaux/jersey-http2-jetty-connector).
The Java HTTP client provided with Java 11 supports HTTP/1.1 and HTTP/2 (see [Introduction to the Java HTTP Client](https://openjdk.java.net/groups/net/httpclient/intro.html)). I have built a connector using it: [Jersey Connector using `java.net.http.HttpClient`](https://github.com/nhenneaux/jersey-httpclient-connector). You can use it by adding the following dependency. ``` <dependency> <groupId>com.github.nhenneaux.jersey.connector.httpclient</groupId> <artifactId>jersey-httpclient-connector</artifactId> <version>0.2.2</version> </dependency> ``` [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.nhenneaux.jersey.connector.httpclient/jersey-httpclient-connector/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.github.nhenneaux.jersey.connector.httpclient/jersey-httpclient-connector)
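For anyone who just needs the protocol negotiation without Jersey, the plain `java.net.http` client already behaves this way: ask for HTTP/2 and it falls back to HTTP/1.1 when the server's ALPN does not offer `h2` (the endpoint below is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class Http2FallbackDemo {
    public static void main(String[] args) {
        // Version.HTTP_2 is a preference, not a requirement: the client
        // offers h2 via ALPN and silently downgrades to HTTP/1.1 when
        // the server does not support it.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .connectTimeout(Duration.ofSeconds(10))
                .build();

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://example.org/")) // placeholder
                .GET()
                .build();

        System.out.println("preferred: " + client.version());

        // After an actual exchange, response.version() reports what was
        // really negotiated (HTTP_2 or HTTP_1_1):
        //   HttpResponse<String> response =
        //       client.send(request, HttpResponse.BodyHandlers.ofString());
        //   System.out.println("negotiated: " + response.version());
    }
}
```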
SQL - How to transpose? I have something similar to the following table: ``` ================================================ | Id | UserId | FieldName | FieldValue | =====+========+===============+================| | 1 | 100 | Username | John Doe | |----+--------+---------------+----------------| | 2 | 100 | Password | pass123! | |----+--------+---------------+----------------| | 3 | 102 | Username | Jane | |----+--------+---------------+----------------| | 4 | 102 | Password | $ecret | |----+--------+---------------+----------------| | 5 | 102 | Email Address | jane@email.com | ------------------------------------------------ ``` I need a query that will give me a result like this: ``` ================================================== | UserId | Username | Password | Email Address | =========+===========+===========================| | 100 | John Doe | pass123! | | |--------+-----------+----------+----------------| | 102 | Jane | $ecret | jane@email.com | |--------+-----------+----------+----------------| ``` Note that the values in FieldName are not limited to Username, Password, and Email Address. They can be anything as they are user defined. Is there a way to do this in SQL?
MySQL doesn't support the ANSI PIVOT/UNPIVOT syntax, so that leaves you to use: ``` SELECT t.userid, MAX(CASE WHEN t.fieldname = 'Username' THEN t.fieldvalue ELSE NULL END) AS Username, MAX(CASE WHEN t.fieldname = 'Password' THEN t.fieldvalue ELSE NULL END) AS Password, MAX(CASE WHEN t.fieldname = 'Email Address' THEN t.fieldvalue ELSE NULL END) AS Email FROM TABLE t GROUP BY t.userid ``` As you can see, the CASE statements need to be defined per value. To make this dynamic, you'd need to use [MySQL's Prepared Statement (dynamic SQL) syntax](http://rpbouman.blogspot.com/2005/11/mysql-5-prepared-statement-syntax-and.html).
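Because the `CASE` branches depend on the data, the dynamic version is usually generated, either with MySQL prepared statements or in application code. A hypothetical sketch of the generation step (table and column names are made up; real code must escape the field names, since they are user-defined):

```python
def build_pivot_query(table, fieldnames):
    """Generate the MAX(CASE ...) pivot for a list of field names.

    In practice `fieldnames` would come from:
        SELECT DISTINCT fieldname FROM <table>
    """
    columns = ",\n  ".join(
        "MAX(CASE WHEN t.fieldname = '{0}' THEN t.fieldvalue "
        "ELSE NULL END) AS `{0}`".format(name)
        for name in fieldnames
    )
    return (
        "SELECT t.userid,\n  {0}\nFROM {1} t\nGROUP BY t.userid"
        .format(columns, table)
    )

print(build_pivot_query("user_fields", ["Username", "Password", "Email Address"]))
```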
Compression performance related to chunk size in HDF5 files I would like to ask a question about the performance of compression which is related to the chunk size of HDF5 files. I have 2 HDF5 files on hand, which have the following properties. They both only contain one dataset, called "data". File A's "data": 1. Type: HDF5 Scalar Dataset 2. No. of Dimensions: 2 3. Dimension Size: **5094125 x 6** 4. Max. dimension size: Unlimited x Unlimited 5. Data type: 64-bit floating point 6. Chunking: **10000 x 6** 7. Compression: GZIP level = 7 File B's "data": 1. Type: HDF5 Scalar Dataset 2. No. of Dimensions: 2 3. Dimension Size: **6720 x 1000** 4. Max. dimension size: Unlimited x Unlimited 5. Data type: 64-bit floating point 6. Chunking: **6000 x 1** 7. Compression: GZIP level = 7 **File A's size: HDF5----19 MB CSV-----165 MB** **File B's size: HDF5----60 MB CSV-----165 MB** Both of them show great compression of the stored data when compared to the CSV files. However, the compression rate of file A is about 10% of the original CSV, while that of file B is only about 30% of the original CSV. I have tried different chunk sizes to make file B as small as possible, but it seems that 30% is the optimum compression rate. I would like to ask why file A can achieve a greater compression while file B cannot. If file B can also achieve it, what should the chunk size be? Is there any rule to determine the optimum HDF5 chunk size for compression purposes? Thanks!
Chunking doesn't really affect the compression ratio *per se*, except in the manner @Ümit describes. What chunking does do is affect the I/O performance. When compression is applied to an HDF5 dataset, it is applied to whole chunks, individually. This means that when reading data from a single chunk in a dataset, the entire chunk must be decompressed - possibly involving a whole lot more I/O, depending on the size of the cache, shape of the chunk, etc. What you should do is make sure that the chunk *shape* matches how you read/write your data. If you generally read a column at a time, make your chunks columns, for example. [This is a good tutorial on chunking.](http://www.hdfgroup.org/HDF5/doc/Advanced/Chunking/Chunking_Tutorial_EOS13_2009.pdf)
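The one way chunking *does* touch the ratio is that each chunk is compressed as an independent stream, so very small chunks compress worse: the dictionary restarts at every chunk boundary. This is easy to see without HDF5 at all (illustrative, using Python's zlib directly):

```python
import zlib

# Repetitive data, loosely standing in for a well-behaved numeric column.
data = bytes(range(48)) * 4096  # 192 KiB of a repeating 48-byte pattern

def compressed_size(chunk_bytes):
    """Total size when each chunk is compressed independently,
    which is what HDF5 does with chunked, compressed datasets."""
    return sum(
        len(zlib.compress(data[i:i + chunk_bytes], 7))
        for i in range(0, len(data), chunk_bytes)
    )

one_stream = len(zlib.compress(data, 7))
print("single stream :", one_stream)
print("48 B chunks   :", compressed_size(48))         # dictionary never warms up
print("48 KiB chunks :", compressed_size(48 * 1024))  # close to one stream
```

Beyond that effect, pick the chunk *shape* for your access pattern, as described above; for File B, a chunk of `6000 x 1` means reading one full row touches 1000 separate chunks.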
Galaxy S3 and Galaxy S4 with layout-sw360dp I would kindly like to know why the Galaxy S3 and S4 get their images from the drawable-sw360dp-xhdpi folder. All images seem very large for these devices! Also, if I make these images smaller they will not be suitable for larger devices! Please, what is the solution?
I assume you are using layout-sw720dp and layout-sw600dp as the layouts for 10-inch and 7-inch devices; then you just create drawable-sw720dp and drawable-sw600dp. In the case of high density for those xlarge and large screen devices, append the corresponding density to the drawable folder. For example, use drawable-sw600dp-hdpi for high-density 7-inch tablets like the ASUS MeMO Pad HD 7. Now the resources for large and xlarge are solved. Next, consider the drawable-sw360dp devices. 1. drawable-sw360dp/layout-sw360dp are for phablet devices like the Note 1, Note 2, Micromax Canvas HD and S3. 2. The above mentioned devices are XHDPI devices. For drawables you can use either drawable-xhdpi or drawable-sw360dp-xhdpi. Thus you can distinguish resources for the S3 and S4 from xlarge and large screen devices. Note: the S3 and S4 take images from drawable-sw360dp-xhdpi because the smallest width of those devices is 360dp. You can check the device display information by installing the Screen Info app from the Play Store. [check screen info app here](https://play.google.com/store/apps/details?id=com.jotabout.screeninfo)
How to reference colour attribute in drawable? I would like to do a simple thing: define a drawable which has exactly the same background colour as the system state-pressed background colour. I do it like this in res/drawable/my\_drawable.xml: ``` <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android" > <item android:state_selected="true"> <color android:color="?android:attr/colorPressedHighlight"/> </item> <item android:state_selected="false"> <color android:color="@color/section_list_background"/> </item> </selector> ``` I always get: ``` java.lang.UnsupportedOperationException: Cant convert to color: type=0x2 ``` Any clues? Regards
You might need to do the following to fix your problem: 1) Define 2 colors, one for each theme, in your colors file: ``` <?xml version="1.0" encoding="utf-8"?> <resources> <color name="my_color_dark">#ff33B5E5</color> <color name="my_color_light">#ff355689</color> </resources> ``` 2) Create file res/values/attrs.xml with contents: ``` <?xml version="1.0" encoding="utf-8"?> <resources> <attr name="my_color" format="reference" /> </resources> ``` 3) Assuming you have 2 themes in your styles.xml (`Theme.dark` and `Theme.light`) define: ``` <style name="Theme.dark" parent="@style/Theme.Sherlock"> <item name="my_color">@color/my_color_dark</item> </style> <style name="Theme.light" parent="@style/Theme.Sherlock.Light"> <item name="my_color">@color/my_color_light</item> </style> ``` 4) Use the color in a drawable: ``` <color android:color="?attr/my_color"/> ``` Hope this fixes your problem.
DNS answer based on IP address I'm running Windows Server 2003 as a DNS server. I'd like to set up a DNS zone so that the given answer is different depending on the requester's IP address. Let's say I have the zone example.com. If a request for www.example.com comes from the IP address 192.168.1.1, the answer will be 192.168.1.254, but if the request comes from 192.168.2.1 the answer should be 192.168.2.254. The idea is to have some servers sitting on both networks while doing an infrastructure migration. Cheers.
You are looking for [Split-Horizon DNS](http://en.wikipedia.org/wiki/Split-horizon_DNS), also often called "DNS Views" (after [the `view` clause](http://www.zytrax.com/books/dns/ch7/view.html) in BIND configuration files). [BIND](http://isc.org/software/bind) and other common Unix name servers support this, but as far as I'm aware there's no equivalent functionality for Windows/AD DNS. There is something that MIGHT work though - [netmask ordering of round-robin records](http://support.microsoft.com/kb/842197). This is decidedly nasty and disgusting, and I would advise against it. ([This guy's blog post has more detail](http://thepip3r.blogspot.com/2012/05/dns-viewsbind-views-on-windows-dns.html) and is where I discovered this little bit of nasty). You can also probably hack something together using two DNS servers, a virtual IP, and a carefully crafted routing/firewall ruleset to direct clients appropriately -- I am not sure if this is more or less disgusting than the netmask ordering thing though.
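For comparison, the BIND flavour of this is two `view` clauses, each answering from its own copy of the zone; an illustrative snippet (zone file names invented):

```
view "net1" {
    match-clients { 192.168.1.0/24; };
    zone "example.com" {
        type master;
        file "db.example.com.net1";   # www -> 192.168.1.254
    };
};

view "net2" {
    match-clients { 192.168.2.0/24; };
    zone "example.com" {
        type master;
        file "db.example.com.net2";   # www -> 192.168.2.254
    };
};
```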
Reverse Proxy: Why response dispatch is not a bottleneck? When a reverse proxy is used primarily for load balancing, it is obvious why the routing of requests to a pool of N proxied servers should help balance the load. However, once the server-side computations for the requests are complete and it's time to dispatch the responses back to their clients, how come the single reverse proxy server never becomes a bottleneck? My intuitive understanding of the reverse proxy concept tells me, 1. that the reverse proxy server that is proxying N origin servers behind it would obviously NOT become a bottleneck as easily or as early as a setup involving a single-server equivalent of the N proxied servers, BUT it too would become a bottleneck at some point because all N proxied servers' responses are going through it. 2. that, to delay the above sort of a bottleneck point (from being reached) even further, the N proxied servers should really be dispatching the responses directly to the client 'somehow', instead of doing it via the single reverse proxy sitting in front of them. Where am I amiss in my understanding of the reverse proxy concept? Maybe point #2 is by definition NOT a reverse proxy server setup, but keeping definitions aside, why #2 is not popular relative to the reverse proxy option?
A reverse proxy, when used for load-balancing, **will proxy all traffic** to the pool of origin servers. This means that the client TCP connection terminates at the LB (the reverse proxy), and the LB initiates a new TCP connection to one of the origin nodes on behalf of the client. Now the node, after having processed the request, cannot communicate with the client directly, because the client's TCP connection is open with the Load Balancer's IP. The **client is expecting a response from the LB, and not from any other random dude, or a random IP** (-: of some node. Thus, the response usually flows the same way as the request, via the LB. Also, you do not want to expose the node's IP to the client. This all usually scales very well for request-response systems. So my answer to #1 is: the LB usually scales well for request-response systems. If at all required, more LBs can be added to create redundancy behind a VIP. Now, having said this, it still **makes sense to bypass the LB for writing responses if your responses are huge**. For example, if you are streaming videos in response, then you probably don't want to choke your LB with humongous responses. In such a scenario, one would configure a [Direct Server Return LB](http://kemptechnologies.com/white-papers/direct-server-return-it-you/). This is essentially what you are thinking of in #2. This allows responses to flow directly from origin servers, bypassing the LB, and still hiding the IP of origin nodes from clients. This is achieved by configuring the [ARP](http://en.wikipedia.org/wiki/Address_Resolution_Protocol) in a special way, such that the responses written by origin nodes carry the IP of the LB. This is not straightforward to set up, and the usual proxy mode of the LB is fine for most use cases.
Can I Prepend an Element in SASS? I understand that in SASS I can do this: ``` h3 { font-size: 20px; margin-bottom: 10px; .some-parent-selector & { font-size: 24px; margin-bottom: 20px; } } ``` to produce this code: ``` h3 { font-size: 20px; margin-bottom: 10px; } .some-parent-selector h3 { font-size: 24px; margin-bottom: 20px; } ``` Does anyone know how I could achieve this with an element? Basically I want to do this in SASS: ``` .some-parent-selector { font-size: 24px; margin-bottom: 20px; a& { ... } } ``` to produce this code: ``` .some-parent-selector { font-size: 24px; margin-bottom: 20px; } a.some-parent-selector { ... } ``` I am seeing an issue where I can only do this if I put a space between the element and the ampersand. If I don't, the linter fails, but I guess deep down SASS doesn't understand it, or I need to format it better. I need this because I am working with some code where a selector appears within the page, but every so often it is assigned to an Anchor tag. Any help or advice would be greatly appreciated.
I tried using interpolation:

**SASS**

```
.some-parent-selector {
  font-size: 24px;
  margin-bottom: 20px;
  a#{&}{
    margin-bottom: 20px
  }
}
```

But the parent repeats, and I get this:

**output**

```
.some-parent-selector {
  font-size: 24px;
  margin-bottom: 20px;
}
.some-parent-selector a.some-parent-selector {
  margin-bottom: 20px;
}
```

so I extracted the selector from its parent with the `@at-root` directive:

**SASS**

```
.some-parent-selector {
  font-size: 24px;
  margin-bottom: 20px;
  @at-root a#{&}{
    margin-bottom: 20px
  }
}
```

**output**

```
.some-parent-selector {
  font-size: 24px;
  margin-bottom: 20px;
}
a.some-parent-selector {
  margin-bottom: 20px;
}
```
How do you enable FPS monitoring for testing XAML animations? Since Windows Store Apps can run across both the Intel and the ARM architectures, it is important that animations (at least) be as performant as possible. To test animations, FPS is very important. How do you enable FPS monitoring in Windows?
In Windows 8 you do this: ``` App.Current.DebugSettings.EnableFrameRateCounter = true; ``` I like to do it like this: ``` App.Current.DebugSettings.EnableFrameRateCounter = System.Diagnostics.Debugger.IsAttached; ``` The counter UI is documented here (<http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.xaml.debugsettings.enableframeratecounter.aspx>). Monitor includes: - Cp fps: the frames-per-second frame rate for the composition thread - UI fps: the frames-per-second frame rate for the UI thread - Memory: memory utilization for textures - Batch: the count of surfaces that are sent to the graphics processing unit (GPU) - Cp cpu: time in milliseconds spent on the composition thread's processor - UI cpu: time in milliseconds spent on the UI thread's processor
Is it possible to find value of dynamic groups using Java regexp? If I have the text like ``` "Record with text1 Record with text2 Record with text3" ``` and I know: number of Records will be from 2 to 10, is it possible to use regexp?: ``` Pattern.compile("(Record.*){2,10}"); ``` And how can I get the values of records? ``` "Record with text1" "Record with text2" "Record with text3" ... ```
Yes it is possible to use a regular expression to capture your matches. You can use a **negative lookahead** assertion here to find where the previous match stops and continue matching. ``` String s = "Record with text1 Record with text2 Record with text3"; Pattern p = Pattern.compile("(?i)\\brecord(?:(?!record).)+"); Matcher m = p.matcher(s); while (m.find()) { System.out.println(m.group()); } ``` Outputs ``` Record with text1 Record with text2 Record with text3 ``` Regular expression: ``` (?i) set flags for this block (case-insensitive) \b the boundary between a word char (\w) and not a word char record 'record' (?: group, but do not capture (1 or more times) (?! look ahead to see if there is not: record 'record' ) end of look-ahead . any character except \n )+ end of grouping ``` I would consider `split`ing the records in this case to consume your matches. ``` String s = "Record with text1 Record with text2 Record with text3"; String[] parts = s.split("(?<!\\A)(?=(?i:record\\b))"); System.out.println(Arrays.toString(parts)); ``` Outputs ``` [Record with text1 , Record with text2 , Record with text3] ``` Regular expression: ``` (?<! look behind to see if there is not: \A the beginning of the string ) end of look-behind (?= look ahead to see if there is: (?i: group, but do not capture (case-insensitive) record 'record' \b the boundary between a word char (\w) and not a word char ) end of grouping ) end of look-ahead ```
Cannot find interface declaration for 'UIView' I'm trying to add an objective C [library for toasts](https://github.com/scalessec/Toast) to my xcode project. But I'm getting a number of these errors: `"Cannot find interface declaration for 'UIView'"` `"Expected a type"` I have linked with the QuartzCore.framework. And the .m file has been added to compile sources. What am I missing? I'm a newbie to ios. Please help.
This is a bug in the library. The header file ([`UIView+Toast.h`](https://github.com/scalessec/Toast/blob/master/Toast/Toast/UIView%2BToast.h)) uses `UIView` but doesn't import `<UIKit/UIKit.h>`, so copying its source files into your project can give you this error. (**UPDATE**: [This bug was fixed on October 14, 2014.](https://github.com/scalessec/Toast/commit/0472e372fb3d9bc4817e253096f1a7f036999920#diff-bb1f487794b73b8383c530091026687b)) One way to fix this is to add `#import <UIKit/UIKit.h>` to the top of `UIView+Toast.h`. Another way is to add `#import <UIKit/UIKit.h>` to your target's `.pch` file in the “Supporting Files” group, if your project has a `.pch` file. It looks like Xcode 6's project templates don't include a `.pch` file, so you might not be able to use this fix easily.
How to iterate over a dictionary and operate with its elements? I have this dictionary, where the keys represent atom types and the values represent the atomic masses:

```
mass = {'H': 1.007825, 'C': 12.01, 'O': 15.9994, 'N': 14.0067,
 'S': 31.972071, 'P': 30.973762}
```

what I want to do is to create a function that, given a molecule, for instance `('H2-N-C6-H4-C-O-2H')`, iterates over the `mass` dictionary and calculates the atomic mass of the given molecule. The value of the mass must be multiplied by the number that comes right after the atom type: `H2 = H.value * 2`

I know that firstly I must isolate the keys of the given molecule; for this I could use `string.split('-')`. Then, I think I could use an `if` block to establish a condition to check if the key of the given molecule is in the dictionary. But later I'm lost about how I should proceed to find the mass for each key of the dictionary.

The expected result should be something like:

```
mass_counter('H2-N15-P3')
out[0]
39351.14
```

How could I do this?

EDIT:

This is what I've tried so far

```
# Atomic masses
mass = {'H': 1.007825, 'C': 12.01, 'O': 15.9994, 'N': 14.0067,
 'S': 31.972071, 'P': 30.973762}


def calculate_atomic_mass(molecule):
    """ Calculate the atomic mass of a given molecule """
    mass = 0.0
    mol = molecule.split('-')
    for key in mass:
        if key in mol:
            atom = key
    return mass


print calculate_atomic_mass('H2-O')
print calculate_atomic_mass('H2-S-O4')
print calculate_atomic_mass('C2-H5-O-H')
print calculate_atomic_mass('H2-N-C6-H4-C-O-2H')
```
Given all components have the shape `Aa123`, it might be easier here to identify parts with a regex, for example:

```
import re

srch = re.compile(r'([A-Za-z]+)(\d*)')

mass = {'H': 1.007825, 'C': 12.01, 'O': 15.9994, 'N': 14.0067,
 'S': 31.972071, 'P': 30.973762}

def calculate_atomic_mass(molecule):
    return sum(mass[a[1]]*int(a[2] or '1') for a in srch.finditer(molecule))
```

Here our [regular expression [wiki]](https://en.wikipedia.org/wiki/Regular_expression) thus captures a sequence of `[A-Za-z]`s, and a (possibly empty) sequence of digits (`\d*`). These are the first and second capture group respectively, and thus can be obtained for a match with `a[1]` and `a[2]`.

This then yields:

```
>>> print(calculate_atomic_mass('H2-O'))
18.01505
>>> print(calculate_atomic_mass('H2-S-O4'))
97.985321
>>> print(calculate_atomic_mass('C2-H5-O-H'))
46.06635
>>> print(calculate_atomic_mass('H2-N-C6-H4-C-O-2H'))
121.130875
>>> print(calculate_atomic_mass('H2-N15-P3'))
305.037436
```

We thus take the sum of the `mass[..]` of the first capture group (the name of the atom) times the number at the end, and we use `'1'` in case no such number can be found.

Or we can first split the data, and then look for an atom part and a number part:

```
import re

srch = re.compile(r'^([A-Za-z]+)(\d*)$')

def calculate_atomic_mass(molecule):
    """ Calculate the atomic mass of a given molecule """
    result = 0.0
    for atm in molecule.split('-'):
        c = srch.match(atm)
        result += mass[c[1]] * int(c[2] or '1')
    return result
```
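For completeness, here is a variant of the same regex idea with explicit error handling for atoms missing from the table; the `ValueError` guard is my own addition, not part of the original answer:

```python
import re

# Atomic masses from the question
mass = {'H': 1.007825, 'C': 12.01, 'O': 15.9994,
        'N': 14.0067, 'S': 31.972071, 'P': 30.973762}

srch = re.compile(r'([A-Za-z]+)(\d*)')

def calculate_atomic_mass(molecule):
    """Sum the masses of all atoms in a '-'-separated formula,
    rejecting atoms that are not in the mass table."""
    total = 0.0
    for m in srch.finditer(molecule):
        atom, count = m.group(1), m.group(2)
        if atom not in mass:
            raise ValueError(f"unknown atom: {atom!r}")
        total += mass[atom] * int(count or '1')
    return total

# Hand-checked: water is 2 * 1.007825 + 15.9994
print(calculate_atomic_mass('H2-O'))   # 18.01505
```

This behaves like the answer's first function on well-formed input, but fails loudly instead of raising a bare `KeyError` when the formula contains an unrecognised symbol.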
Setup sunspot solr with rails in production environment I have tried various links, but I can't seem to find a good resource on creating a running solr instance that works with rails in production. I understand that you have to set up the solr server for production. I have tried the setup of solr with tomcat, but I can't seem to link it to the rails app. Is there any good resource out there that I could use? Thanks
This blog may solve your question: Install Solr 4.4 with Jetty in CentOS, and set up the Solr server to work with the Sunspot Gem. ( <http://blogs.pigrider.com/blogs/26> )

Below are some parts from the blog:

......

8) Copy the configuration file schema.xml from your Rails application to the home directory of the running Solr 4.4 instance. It will override the Solr example configuration file there, and it will set up the Solr 4.4 server to work with the Sunspot Gem.

cp /RailsApplicationPath/Solr/conf/schema.xml /opt/solr/solr/collection1/conf/.

The home directory of the running Solr 4.4 instance is /opt/solr/solr/collection1/. You can find this information from the Solr admin page http://localhost:8983/solr/admin

9) Add the `_version_` field into the configuration file schema.xml to satisfy the Solr 4.4 initialization requirement. Actually, two lines of code need to be added into the file. They are:

```
<field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>

<fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
```

The configuration file schema.xml eventually will look like:

```
<schema name="sunspot" version="1.0">
  <types>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
    <!-- *** Other Sunspot fieldType Definitions *** -->
  </types>
  <fields>
    <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
    <!-- *** Other Sunspot field Definitions *** -->
  </fields>
  <!-- *** Other Sunspot Configurations *** -->
</schema>
```

......
error installing helm chart in kubernetes I am trying to install helm chart on kubernetes cluster. When i try to initialize the helm using init command, it is throwing error as `"error installing: the server could not find the requested resource"` provider.helm v2.14.3 provider.kubernetes v1.16 ``` $ kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} ``` ``` $ helm version Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Error: could not find tiller ``` ``` $ helm init Creating /home/cloud_admin/.helm Creating /home/cloud_admin/.helm/repository Creating /home/cloud_admin/.helm/repository/cache Creating /home/cloud_admin/.helm/repository/local Creating /home/cloud_admin/.helm/plugins Creating /home/cloud_admin/.helm/starters Creating /home/cloud_admin/.helm/cache/archive Creating /home/cloud_admin/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /home/cloud_admin/.helm. 
Error: error installing: the server could not find the requested resource ``` ``` $ kubectl get node -n kube-system NAME STATUS ROLES AGE VERSION openamvmimsload0 Ready master 5h11m v1.16.0 openamvmimsload1 Ready <none> 5h1m v1.16.0 ``` ``` $ kubectl config get-clusters NAME kubernetes ``` ``` $ kubectl cluster-info Kubernetes master is running at https://172.16.128.40:6443 KubeDNS is running at https://172.16.128.40:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. ``` ``` $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h15m ```
This seems to be a bug with Helm 2.14.3 (and previous) and Kubernetes 1.16 [Helm init fails on Kubernetes 1.16.0 bug report on GitHub](https://github.com/helm/helm/issues/6374). The ticket lists some workarounds - the simplest one is: ``` helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - ``` or with RBAC enabled and `tiller` service account: ``` helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - ```
Can non-default SMS app ALWAYS receive broadcast when SMS received, even when force closed? So I have followed this [guideline](http://androidexample.com/Incomming_SMS_Broadcast_Receiver_-_Android_Example/index.php?view=article_discription&aid=62&aaid=87) to show a simple toast when an SMS is received. While it works ok when the app is running, when I go to settings and force-close the app, it stops working. I checked many answers here on StackOverflow for similar questions, but none actually answers whether (and how) it is possible to make a piece of code execute EVERY time an SMS is received, without the app being set as the default SMS app on the device (Android 4.4+). Is it? Consider that even a service can be stopped, and when that happens, a service is not a solution anymore. I am interested in API level 19+ Thanks
Unfortunately, no, this isn't really possible without your app being the default SMS app. When the user forcibly closes your app, it is put back into the *stopped* state, and a statically registered Receiver for the implicit `SMS_RECEIVED` broadcast won't work until your app has been explicitly started again; e.g., by the user launching your app from an explicit launcher shortcut. The default SMS app, on the other hand, will be delivered the `SMS_DELIVER` broadcast, and that is explicit. Even if the default has been forcibly stopped, that broadcast will act like any other explicit starting `Intent` to bring it out of the *stopped* state. If timeliness isn't a major concern, you could just query the SMS Provider as needed – e.g., at each startup – and determine if you've missed any new messages since last checked.
Why doesn't htop show my docker-processes using wsl2 Building my container using docker and wsl2, I wanted to see what happens. Running `htop` in wsl only shows the CPU usage, but no processes running in my containers. Searching for `htop`, `docker` and `wsl2`, the only thing I could find was this archived and unrelated reddit thread: <https://www.reddit.com/r/bashonubuntuonwindows/comments/dia2bw/htop_on_wsl2_doesnt_show_any_processes_while_ps/>
Docker does not run in your default WSL-distro, but in a special Docker-Wsl-distro. Running `wsl -l` shows the installed distros: ``` Ubuntu (Standard) docker-desktop docker-desktop-data ``` Docker desktop is based on alpine and you can run `top` right out of the box: ``` wsl -d docker-desktop top ``` If you want `htop`, you need to install it first: ``` wsl -d docker-desktop apk update wsl -d docker-desktop apk add htop ``` Running ``` wsl -d docker-desktop htop ``` will now give you a nice overview of what is happening in your docker-containers: [![htop showing docker processes](https://i.stack.imgur.com/QeeI5.png)](https://i.stack.imgur.com/QeeI5.png)
unable to capture stderr while performing openssh to a variable - perl I want to capture the standard error displayed on the host machine after (ssh->capture) to a variable. For example, when I try:

```
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($host);
my $out = $ssh->capture("cd /home/geek");
$ssh->error and
die "remote cd command failed: " . $ssh->error;
```

the output is:

```
child exited with code 1 at ./change_dir.pl line 32
```

I am not able to see what the error is. I get "no such file or directory" on the terminal. I want to capture the same "no such file or directory" in $out.

Example 2,

```
my ($stdout, $stderr) = $ssh->capture("cd /home/geek");
if ($stderr) { print "Error = $stderr"; }
else         { print "$stdout"; }
```

I see "Error=" printed, but do not see that $stderr on the screen. I see $stdout is printed on success, but print $stderr does not get printed; only "Error= " gets printed.
When an error occurs it is most likely *not* going to be in `STDOUT`, and if it is in `STDERR` you are not catching that. You need to get to the application's exit code, in the following way. (*Given the update to the question which I only see now*: **See the end for how to get STDERR.**) After the `capture` method you want to examine `$?` for errors (see [Net-OpenSSH](http://search.cpan.org/~salva/Net-OpenSSH-0.62/lib/Net/OpenSSH.pm)). Unpack that to get to the exit code returned by what was actually run by `$ssh`, and then look in that application's docs to see what that code means ``` $exit_code = $?; if ($exit_code) { $app_exit = $exit_code >> 8; warn "Error, bit-shift \$? --> $app_exit"; } ``` The code to investigate is `$app_exit`. An example. I use `zip` in a project and occasionally catch the error of `3072` (that is the `$?`). When that's unpacked as above I get `12`, which is `zip`'s actual exit. I look up its docs and it nicely lists its exit codes and `12` means *Nothing to update*. That's the design decision for `zip`, to exit with `12` if it had no files to update in the archive. Then that exit gets packaged into a two-byte number (in the upper byte), and *that* is returned and so it is what I get in `$?`. Failure modes in general, from [system](http://perldoc.perl.org/functions/system.html) in Perl docs ``` if ($? == -1) { warn "Failed to execute -- " } elsif ($? & 127) { $msg = sprintf("\tChild died with signal %d, %s coredump -- ", ($? & 127), ($? & 128) ? 'with' : 'without'); warn $msg; } else { $msg = sprintf("\tChild exited with value %d -- ", $? >> 8); warn $msg; } ``` The actual exit code `$? >> 8` is supplied by whatever ran and so its interpretation is up to that application. You need to look through its docs and hopefully its exit codes are documented. --- Note that `$ssh->error` seems designed for this task. 
From the module's docs:

```
my $output = $ssh->capture({ timeout => 10 }, "echo hello; sleep 20; echo bye");
$ssh->error and
    warn "operation didn't complete successfully: ". $ssh->error;
```

The printed error needs further investigation. Docs don't say what it is, but I'd expect the unpacked code discussed above (*the question update indicates this*). Here `$ssh` only runs a command and it doesn't know what went wrong. It merely gets back the command's exit code, to be looked at.

Or, you can modify the command to get the `STDERR` on the `STDOUT`, see below.

---

The `capture` method is an equivalent of Perl's backticks (`qx`). There is a lot on SO on how to get `STDERR` from backticks, and Perl's very own FAQ has that nicely written up [in perlfaq8](http://perldoc.perl.org/perlfaq8.html#How-can-I-capture-STDERR-from-an-external-command%3f). A complication here is that this isn't `qx` but a module's method and, more importantly, it runs on another machine. However, the "output redirection" method should still work without modifications. The command (run by `$ssh`) can be written so that its `STDERR` is redirected to its `STDOUT`.

```
$cmd_all_output = 'your_whole_command 2>&1';
$ssh->capture($cmd_all_output);
```

Now you will get the error that you see at the terminal ("no such file or directory") printed on `STDOUT` and so it will wind up in your `$stdout`. Note that one must use `sh` shell syntax, as above. There is a bit more to it, so please look it up (but this should work as it stands). Most of the time it is the same message as in the exit code description.

The check that you have in your code is good, the first line of defense: One should *always* check `$?` when running external commands, and for this the command to run need not be touched.
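As an aside (my illustration, not from the module's docs): the wait-status packing discussed above, with the exit code in the upper byte, is a POSIX convention rather than a Perl quirk, so the same `>> 8` unpacking shows up in other languages too. For instance, in Python on a POSIX system:

```python
import os

# Run a command that exits with code 12 (like zip's "nothing to update")
status = os.system("sh -c 'exit 12'")

# The raw wait status packs the exit code in the upper byte,
# exactly like Perl's $? -- so status >> 8 and WEXITSTATUS agree.
print(status >> 8)               # 12
print(os.WEXITSTATUS(status))    # 12
```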
C++ Pseudo Destructor on Array Type I'm using `std::aligned_storage` and need to store array types in the `aligned_storage`. The following code compiles in visual cpp but not Clang. ``` template <typename T> struct Foo { typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type store; template <typename... Args> Foo(Args&&... args) { new (&store) T { std::forward<Args>(args)... }; } void Release() { reinterpret_cast<T*>(&store)->~T(); // Clang problems here } }; Foo<int> a(2); // ok Foo<int[3]> b(1, 2, 3); // error in clang ``` The specific error is: ``` expression of non-scalar type 'T' (aka 'int [3]') cannot be used in a pseudo-destructor expression ``` Is this valid C++ and how should I properly destruct array types manually?
The program is ill-formed, you may not use a pseudo destructor call on an array type. §5.2.4 Pseudo destructor call [expr.pseudo]: > > 1. The use of a *pseudo-destructor-name* after a dot `.` or arrow `->` operator represents the destructor for the non-class type denoted by *type-name* or *decltype-specifier*. ... > 2. The left-hand side of the dot operator shall be of scalar type. The left-hand side of the arrow operator shall be of pointer to scalar type. ... > > > An overloaded function can handle the destruction appropriately for both array and non-array types by manually destroying each of the array elements ([Live code](http://coliru.stacked-crooked.com/a/352f491ff4374780)): ``` template <typename T> void destroy(T& t) { t.~T(); } template <typename T, std::size_t N> void destroy(T (&t)[N]) { for (auto i = N; i-- > 0;) { destroy(t[i]); } } template <typename T> struct Foo { typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type store; template <typename... Args> Foo(Args&&... args) { new (&store) T { std::forward<Args>(args)... }; } void Release() { destroy(reinterpret_cast<T&>(store)); } }; ```
Tornado: How to render response template from String? For my request handler, my template is defined as string, not a file. I tried rendering with this, but received this error: > > File "c:\envs\pomo\lib\site-packages\tornado\template.py", line 365, > in \_create\_template > f = open(path, "rb") > > > ``` SESSIONS_TEMPLATE = template.Template('''<html><body> {{sessions}} </body></html> ''') class MyHandler(tornado.web.RequestHandler): def get(self): self.render(SESSIONS_TEMPLATE.generate(sessions=response)) ```
Use `self.finish` instead of `self.render`: ``` class MyHandler(tornado.web.RequestHandler): def get(self): self.finish(SESSIONS_TEMPLATE.generate(sessions=response)) ``` If you look at [render()](http://www.tornadoweb.org/en/stable/_modules/tornado/web.html#RequestHandler.render) method you will see it uses [render\_string()](http://www.tornadoweb.org/en/stable/_modules/tornado/web.html#RequestHandler.render_string) method to generate string, inserts stuff like CSS and JS and then in the last line it uses [finish()](http://www.tornadoweb.org/en/stable/_modules/tornado/web.html#RequestHandler.finish) to actually create request. In your case all you have to do is that last call.
How to use poetry with docker? How do I install poetry in my image? (should I use `pip`?) Which version of poetry should I use? Do I need a virtual environment? There are [many](https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker) [examples](https://github.com/wemake-services/wemake-django-template/blob/8719ccee322436b6af56835bb6e3eb07ce992718/%7B%7Bcookiecutter.project_name%7D%7D/docker/django/Dockerfile) and [opinions](https://github.com/python-poetry/poetry/issues/1178#issuecomment-517812277) in [the](https://github.com/python-poetry/poetry/discussions/1879#discussioncomment-216870) [wild](https://pythonspeed.com/articles/poetry-vs-docker-caching/) which offer different solutions.
# TL;DR

Install poetry with pip, configure the virtualenv, install dependencies, run your app.

```
FROM python:3.10

# Configure Poetry
ENV POETRY_VERSION=1.2.0
ENV POETRY_HOME=/opt/poetry
ENV POETRY_VENV=/opt/poetry-venv
ENV POETRY_CACHE_DIR=/opt/.cache

# Install poetry separated from system interpreter
RUN python3 -m venv $POETRY_VENV \
    && $POETRY_VENV/bin/pip install -U pip setuptools \
    && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION}

# Add `poetry` to PATH
ENV PATH="${PATH}:${POETRY_VENV}/bin"

WORKDIR /app

# Install dependencies
COPY poetry.lock pyproject.toml ./
RUN poetry install

# Run your app
COPY . /app
CMD [ "poetry", "run", "python", "-c", "print('Hello, World!')" ]
```

# In Detail

## Installing Poetry

> 
> How do I install poetry in my image? (should I use `pip`?)
> 
> 

### Install it with `pip`

You should install poetry with pip, but you need to isolate it from the system interpreter and the project's virtual environment.

> 
> For maximum control in your CI environment, installation with pip is fully supported ... offers the best debugging experience, and leaves you subject to the fewest external tools.
> 
> 

```
ENV POETRY_VERSION=1.2.0
ENV POETRY_VENV=/opt/poetry-venv

# Install poetry separated from system interpreter
RUN python3 -m venv $POETRY_VENV \
    && $POETRY_VENV/bin/pip install -U pip setuptools \
    && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION}

# Add `poetry` to PATH
ENV PATH="${PATH}:${POETRY_VENV}/bin"
```

## Poetry Version

> 
> Which version of poetry should I use?
> 
> 

Specify the latest stable version explicitly in your installation.

Forgetting to specify `POETRY_VERSION` will result in *non-deterministic builds*, as the installer will always install the latest version - which may introduce breaking changes.

## Virtual Environment (virtualenv)

> 
> Do I need a virtual environment?
> 
> 

Yes, and you need to configure it a bit.
``` ENV POETRY_CACHE_DIR=/opt/.cache ``` The reasons for this are somewhat off topic: > > By default, poetry creates a virtual environment in $HOME/.cache/pypoetry/virtualenvs to isolate the system interpreter from your application. This is the desired behavior for most development scenarios. When using a container, the $HOME variable may be changed by [certain runtimes](https://cloud.google.com/run/docs/issues#home "HOME Variable Issue"), so creating the virtual environment in an independent directory solves any reproducibility issues that may arise. > > > ## Bringing It All Together To use poetry in a docker image you need to: 1. [Install](https://python-poetry.org/docs/master/#installation "Installing Poetry") your desired version of poetry 2. [Configure](https://python-poetry.org/docs/configuration/#virtualenvsin-project "Config Virtualenv") virtual environment location 3. [Install](https://python-poetry.org/docs/master/cli/#install "poetry install") your dependencies 4. Use `poetry run python ...` to run your application ## A Working Example: This is a minimal flask project managed with poetry. 
You can copy these contents to your machine to test it out (except for `poetry.lock`)

### Project structure

```
python-poetry-docker/
|- Dockerfile
|- app.py
|- pyproject.toml
|- poetry.lock
```

#### `Dockerfile`

```
FROM python:3.10 as python-base

# https://python-poetry.org/docs#ci-recommendations
ENV POETRY_VERSION=1.2.0
ENV POETRY_HOME=/opt/poetry
ENV POETRY_VENV=/opt/poetry-venv

# Tell Poetry where to place its cache and virtual environment
ENV POETRY_CACHE_DIR=/opt/.cache

# Create stage for Poetry installation
FROM python-base as poetry-base

# Creating a virtual environment just for poetry and install it with pip
RUN python3 -m venv $POETRY_VENV \
    && $POETRY_VENV/bin/pip install -U pip setuptools \
    && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION}

# Create a new stage from the base python image
FROM python-base as example-app

# Copy Poetry to app image
COPY --from=poetry-base ${POETRY_VENV} ${POETRY_VENV}

# Add Poetry to PATH
ENV PATH="${PATH}:${POETRY_VENV}/bin"

WORKDIR /app

# Copy Dependencies
COPY poetry.lock pyproject.toml ./

# [OPTIONAL] Validate the project is properly configured
RUN poetry check

# Install Dependencies
RUN poetry install --no-interaction --no-cache --without dev

# Copy Application
COPY . /app

# Run Application
EXPOSE 5000
CMD [ "poetry", "run", "python", "-m", "flask", "run", "--host=0.0.0.0" ]
```

#### `app.py`

```
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, Docker!'
``` #### `pyproject.toml` ``` [tool.poetry] name = "python-poetry-docker-example" version = "0.1.0" description = "" authors = ["Someone <someone@example.com>"] [tool.poetry.dependencies] python = "^3.10" Flask = "^2.1.2" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" ``` #### `poetry.lock` ``` [[package]] name = "click" version = "8.1.3" description = "Composable command line interface toolkit" category = "main" optional = false python-versions = ">=3.7" [package.dependencies] ... more lines ommitted ``` Full contents in [gist](https://gist.github.com/soof-golan/6ebb97a792ccd87816c0bda1e6e8b8c2).
How to get the API token for Jenkins I am trying to use the Jenkins [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) API. In the instructions it says I need to have the API key. I have looked all over the configuration pages to find it. How do I get the API key for Jenkins?
Since Jenkins 2.129 the API token configuration [has changed](https://jenkins.io/blog/2018/07/02/new-api-token-system/): You can now have multiple tokens and name them. They can be revoked individually. 1. Log in to Jenkins. 2. Click you name (upper-right corner). 3. Click **Configure** (left-side menu). 4. Use "Add new Token" button to generate a new one then name it. 5. You must copy the token when you generate it as you cannot view the token afterwards. 6. Revoke old tokens when no longer needed. Before Jenkins 2.129: Show the API token as follows: 1. Log in to Jenkins. 2. Click your name (upper-right corner). 3. Click **Configure** (left-side menu). 4. Click **Show API Token**. The API token is revealed. You can change the token by clicking the **Change API Token** button.
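Once generated, the token is typically used as the password half of HTTP Basic authentication when calling the REST API. A small sketch in Python; the host, user name, and token below are placeholders of mine, not real values:

```python
import base64
from urllib.request import Request

# Placeholder credentials -- substitute your own Jenkins URL, user, and token
user = "alice"
token = "11aabbccddeeff00112233445566778899"
url = "https://jenkins.example.com/api/json"

# Build a request that authenticates with user:token via Basic auth
req = Request(url)
cred = base64.b64encode(f"{user}:{token}".encode()).decode()
req.add_header("Authorization", f"Basic {cred}")

# urllib.request.urlopen(req) would now call the API using the token
```

The same credentials work with `curl -u user:token`, since the API token simply stands in for the password.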
C# - Look up a users manager in active directory Started using the `System.DirectoryServices.AccountManagement` namespace, to perform the lookup on a user in active directory (AD). **I also need the user's manager**, but I seem to have hit a bump in the road using this namespace. Current code to get a person: ``` class Person { // Fields public string GivenName = null; public string Surname = null; public string DistinguishedName = null; public string Email = null; public string MangerDistinguishedName = null; // Unable to set this // Constructor public Person(string userName) { UserPrincipal user = null; try { user = GetUser(userName); if (user != null) { this.GivenName = user.GivenName; this.Surname = user.Surname; this.DistinguishedName = user.DistinguishedName; this.Email = user.EmailAddress; this.MangerDistinguishedName = user.<NO SUCH PROPERTY TO FIND A MANAGER'S DISTINGUISHED NAME> } else { throw new MissingPersonException("Person not found"); } } catch (MissingPersonException ex) { MessageBox.Show( ex.Message , ex.reason , MessageBoxButtons.OK , MessageBoxIcon.Error ); } catch (Exception ex) { MessageBox.Show( ex.Message , "Error: Possible connection failure, or permissions failure to search for the username provided." , MessageBoxButtons.OK , MessageBoxIcon.Error ); } finally { user.Dispose(); } } ``` Execute search for the person ``` private UserPrincipal GetUser(string userName) { PrincipalContext ctx = new PrincipalContext(ContextType.Domain); UserPrincipal user = UserPrincipal.FindByIdentity(ctx, userName); return user; } ``` **What is another way to directly access the distinguished name of the manager of a particular user?** - Possible partial answer [here](https://stackoverflow.com/questions/9498908/looping-through-active-directory-to-get-managers-and-direct-reports) in VB, but I see nothing about referring to managers. 
- Another possible partial one [here](https://stackoverflow.com/questions/1546871/system-directoryservices-vs-system-directoryservices-accountmanagement), again, nothing about managers.
If you're on .NET 3.5 and up and using the `System.DirectoryServices.AccountManagement` (S.DS.AM) namespace, you can easily extend the existing `UserPrincipal` class to get at more advanced properties, like `Manager` etc. Read all about it here:

- [Managing Directory Security Principals in the .NET Framework 3.5](http://msdn.microsoft.com/en-us/magazine/cc135979.aspx)
- [MSDN docs on System.DirectoryServices.AccountManagement](http://msdn.microsoft.com/en-us/library/system.directoryservices.accountmanagement.aspx)

Basically, you just define a derived class based on `UserPrincipal`, and then you define the additional properties you want:

```
[DirectoryRdnPrefix("CN")]
[DirectoryObjectClass("Person")]
public class UserPrincipalEx : UserPrincipal
{
    // Implement the constructor using the base class constructor.
    public UserPrincipalEx(PrincipalContext context) : base(context)
    { }

    // Implement the constructor with initialization parameters.
    public UserPrincipalEx(PrincipalContext context, string samAccountName, string password, bool enabled)
        : base(context, samAccountName, password, enabled)
    {}

    // Create the "Department" property.
    [DirectoryProperty("department")]
    public string Department
    {
        get
        {
            if (ExtensionGet("department").Length != 1)
                return string.Empty;

            return (string)ExtensionGet("department")[0];
        }
        set { ExtensionSet("department", value); }
    }

    // Create the "Manager" property.
    [DirectoryProperty("manager")]
    public string Manager
    {
        get
        {
            if (ExtensionGet("manager").Length != 1)
                return string.Empty;

            return (string)ExtensionGet("manager")[0];
        }
        set { ExtensionSet("manager", value); }
    }

    // Implement the overloaded search method FindByIdentity.
    public static new UserPrincipalEx FindByIdentity(PrincipalContext context, string identityValue)
    {
        return (UserPrincipalEx)FindByIdentityWithType(context, typeof(UserPrincipalEx), identityValue);
    }

    // Implement the overloaded search method FindByIdentity. 
public static new UserPrincipalEx FindByIdentity(PrincipalContext context, IdentityType identityType, string identityValue) { return (UserPrincipalEx)FindByIdentityWithType(context, typeof(UserPrincipalEx), identityType, identityValue); } } ``` Now, you can use the "extended" version of the `UserPrincipalEx` in your code: ``` using (PrincipalContext ctx = new PrincipalContext(ContextType.Domain)) { // Search the directory for the new object. UserPrincipalEx inetPerson = UserPrincipalEx.FindByIdentity(ctx, IdentityType.SamAccountName, "someuser"); // you can easily access the Manager or Department now string department = inetPerson.Department; string manager = inetPerson.Manager; } ```
C# Blazor Server: Display live data using INotifyPropertyChanged I have a little problem, I try to display live data on a page on my Blazor Server Project. After reading several stuff I think this should be possible to make with using INotifyPropertyChanged. Unfortunately I am a noob on this, here's what I tried: My Model Price with INotifyPropertyChanged (generated using Rider): ``` public class Price : INotifyPropertyChanged { private double _askPrice; private double _bidPrice; private double _spread; public Price(double askPrice, double bidPrice) { _askPrice = askPrice; _bidPrice = bidPrice; } public double AskPrice { get => _askPrice; set { _askPrice = value; OnPropertyChanged("AskPrice"); OnPropertyChanged("Spread"); } } public double BidPrice { get => _bidPrice; set { _bidPrice = value; OnPropertyChanged("BidPrice"); OnPropertyChanged("Spread"); } } public double Spread => _askPrice - _bidPrice; public event PropertyChangedEventHandler PropertyChanged; [NotifyPropertyChangedInvocator] protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); } } ``` How I get the live data: ``` public class PriceService { public static Price BinancePrice; public static void StartBinanceStream() { var client = new BinanceSocketClient(); // subscribe to updates on the spot API client.Spot.SubscribeToBookTickerUpdates("BTCEUR", data => { BinancePrice = new Price((double)data.BestAskPrice, (double)data.BestBidPrice); }); } } ``` And finally the content of my razorfile: ``` <h5>BID: @($"{PriceService.BinancePrice.BidPrice:F2} EUR")</h5> <h5>ASK: @($"{PriceService.BinancePrice.AskPrice:F2} EUR")</h5> <h5>Spread: @($"{PriceService.BinancePrice.Spread:F2} EUR")</h5> @code { protected override async Task OnInitializedAsync() { PriceService.BinancePrice.PropertyChanged += async (sender, e) => { await InvokeAsync(StateHasChanged); }; await base.OnInitializedAsync(); } async void 
OnPropertyChangedHandler(object sender, PropertyChangedEventArgs e) { await InvokeAsync(StateHasChanged); } public void Dispose() { PriceService.BinancePrice.PropertyChanged -= OnPropertyChangedHandler; } } ``` It does show the data but doesn't show it live with changes, I need to reopen the tab or refresh the page to see the current data. The goal is that the UI refreshes every time the Price is changing. Would be awesome if you could help me out on this!:)
I couldn't really verify the source of the issue, though it seems to me to be related to the Price class and the invocation of PropertyChangedEventHandler. However, here's a code sample that should work:

## Price.cs

```
public class Price : INotifyPropertyChanged
{
    private double _askPrice;
    private double _bidPrice;
    private double _spread;

    public Price(double askPrice, double bidPrice)
    {
        _askPrice = askPrice;
        _bidPrice = bidPrice;
    }

    public double AskPrice
    {
        get => _askPrice;
        set => SetProperty(ref _askPrice, value);
    }

    public double BidPrice
    {
        get => _bidPrice;
        set => SetProperty(ref _bidPrice, value);
    }

    public double Spread => _askPrice - _bidPrice;

    public event PropertyChangedEventHandler PropertyChanged;

    void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }

    bool SetProperty<T>(ref T storage, T value, [CallerMemberName] string propertyName = null)
    {
        if (Equals(storage, value))
        {
            return false;
        }

        storage = value;
        OnPropertyChanged(propertyName);
        return true;
    }
}
```

## PriceService.cs

```
public class PriceService
{
    public Price BinancePrice { get; set; }

    double count;
    double BestAskPrice;
    double BestBidPrice;

    public PriceService()
    {
        BlazorTimer t = new BlazorTimer();
        t.SetTimer(3000, 1, 100);
        t.CountCompleted += NotifyCompleted;

        BestAskPrice = 102.09;
        BestBidPrice = 101.03;
        BinancePrice = new Price((double)BestAskPrice, (double)BestBidPrice);
    }

    private void NotifyCompleted(object sender, TimerEventArgs args)
    {
        count = (double)args.Count;
        BestAskPrice += count;
        BestBidPrice += count;

        BinancePrice.AskPrice = (double)BestAskPrice;
        BinancePrice.BidPrice = (double)BestBidPrice;
    }
}
```

Note: PriceService is a service, and should be added to the DI Container:

```
services.AddSingleton<PriceService>();
```

Note also that I avoid using static stuff.

Note: In order to verify that the UI is changing, I am using a Timer. 
It is really crude, and you only have to use it to see that things are fine, and then apply your code... ## BlazorTimer.cs ``` public class BlazorTimer { private Timer _timer; private int count; private int end; internal void SetTimer(double interval, int start, int _end) { _timer = new Timer(interval); _timer.Elapsed += Counter; _timer.Enabled = true; count = start; end = _end; _timer.Start(); } private void Counter(object sender, ElapsedEventArgs e) { count++; TimerEventArgs args = new TimerEventArgs { Count = count }; OnCountCompleted(args); } protected virtual void OnCountCompleted(TimerEventArgs args) { EventHandler<TimerEventArgs> handler = CountCompleted; if (handler != null) { handler(this, args); } } public event EventHandler<TimerEventArgs> CountCompleted; } public class TimerEventArgs : EventArgs { public int Count { get; set; } } ``` ## Usage ``` @*@implements IDisposable*@ @inject PriceService PriceService <h5>BID: @($"{PriceService.BinancePrice.BidPrice:F2} EUR")</h5> <h5>ASK: @($"{PriceService.BinancePrice.AskPrice:F2} EUR")</h5> <h5>Spread: @($"{PriceService.BinancePrice.Spread:F2} EUR")</h5> @code { protected override async Task OnInitializedAsync() { PriceService.BinancePrice.PropertyChanged += async (sender, e) => { await InvokeAsync(StateHasChanged); }; await base.OnInitializedAsync(); } //async void OnPropertyChangedHandler(object sender, PropertyChangedEventArgs e) //{ // await InvokeAsync(StateHasChanged); //} //public void Dispose() //{ // PriceService.BinancePrice.PropertyChanged -= OnPropertyChangedHandler; //} } ```
What is the difference between preferredLocalization and preferredLanguage? Definition of `[NSLocale preferredLanguages]` according to the documentation:

> The user's language preference order as an array of NSString objects, each of which is a canonicalized IETF BCP 47 language identifier.

Definition of `[[NSBundle mainBundle] preferredLocalizations]`:

> An array of NSString objects, each of which identifies a localization in the receiver’s bundle. The languages are in the preferred order.

I really don't get what the difference is. Which one should one be using?
I believe language is just language, but locale implies a great deal more (e.g. calendar/date computations, currency, number formatting, etc). The [Locales Programming Guide](https://developer.apple.com/library/ios/#documentation/CoreFoundation/Conceptual/CFLocales/Articles/CFLocaleConcepts.html#//apple_ref/doc/uid/20002240-CJBDBHCB) is a short read, a great place to start. More specifically, `+preferredLocalizations`, being a bundle resource, is a component of an app itself, configurable during app design, whereas `+preferredLanguages`, coming from `NSLocale` (btw it's a class method, not an object method) represents the system-level preferences of the user. Therefore, `+preferredLocalizations` provides the language the app is actually running in whereas `+preferredLanguages` provides the language the user prefers their apps to run in (even if the apps don't yet support it).
Invalid Switch syntax builds successfully? Could someone please help enlighten me? I went to check-in some changes to TFS and my check-in was rejected. It prompted me to take a look at a switch statement I had edited. What I've found is that Visual Studio 2017 claims there is no compile time issue and allows me to build and deploy the application successfully. On top of that, even the unit test for the method appears to be passing as intended. ``` public enum PaymentStatus { Issued, Cleared, Voided, Paid, Requested, Stopped, Unknown } public class PaymentViewModel { public PaymentStatus Status { get; set; } ... public String StatusString { get { switch (this.Status) { case PaymentStatus.Cleared: return "Cleared"; case PaymentStatus.Issued: return "Issued"; case PaymentStatus.Voided: return "Voided"; case PaymentStatus.Paid: return "Paid"; case PaymentStatus.Requested: return "Requested"; case PaymentStatus.Stopped: return "Stopped"; case PaymentStatus Unknown: return "Unknown"; default: throw new InavlidEnumerationException(this.Status); } } } } ``` So, please note that the line "case PaymentStatus Unknown" is missing the '.' dot operator. As mentioned, the project builds and runs; but failed to check-in with the gated build server. Also, note that the following test is passing: ``` [TestMethod] public void StatusStringTest_Unknown() { var model = new PaymentViewModel() { Status = PaymentStatus.Unknown } Assert.AreEqual("Unknown", model.StatusString); } ``` Here are some images showing no squigglies and it does indeed build fine: [![Switch-No-Compiler-Error](https://i.stack.imgur.com/bgdZL.png)](https://i.stack.imgur.com/bgdZL.png) And, the passing test method: [![Switch-Passing-Test](https://i.stack.imgur.com/rlGpN.png)](https://i.stack.imgur.com/rlGpN.png) Lastly, note that I ran the test with just a static string rather than using the resource file and it passes. I just left out the resource file stuff for simplicity's sake in the code above. 
Any thoughts on this are most appreciated! Thanks in Advance!
This compiles because your Visual Studio interprets `PaymentStatus Unknown` as pattern matching (a type pattern), which is a [new feature](https://learn.microsoft.com/en-us/dotnet/csharp/pattern-matching) of C# 7:

- `PaymentStatus` is the type,
- `Unknown` is the name,
- No condition (i.e. the pattern always matches).

The intended use case for this syntax was something like this:

```
switch (this.Status)
{
    case PaymentStatus ended when ended==PaymentStatus.Stopped || ended==PaymentStatus.Voided:
        return "No payment for you!";
    default:
        return "You got lucky this time!";
}
```

If TFS is set up to use an older version of C#, it is going to reject this source.

**Note:** The reason your unit test works is that the remaining cases are all done correctly. A test case for throwing `InavlidEnumerationException(this.Status)` would fail, though, because the switch would interpret any unknown value as, well, `PaymentStatus.Unknown`.
Explain situation when you might use touch-action: manipulation I have seen ``` touch-action: manipulation; ``` In the CSS of various websites applied to buttons and links. I am curious what the purpose of this is? I read the values on the [Mozilla Developer Network](https://developer.mozilla.org/en-US/docs/Web/CSS/touch-action) > > The touch-action CSS property specifies whether, and in what ways, a > given region can be manipulated by the user (for instance, by panning > or zooming). > > > **auto:** > The user agent may determine any permitted touch behaviors, such > as panning and zooming manipulations of the viewport, for touches that > begin on the element. > > > **none:** Use this value to disable all of the > default behaviors and allow your content to handle all touch input > (touches that begin on the element must not trigger default touch > behaviors). > > > **pan-x:** > The user agent may consider touches that begin on > the element only for the purposes of horizontally scrolling the > element's nearest ancestor with horizontally scrollable content. > > > **pan-y:** > The user agent may consider touches that begin on the element only for > the purposes of vertically scrolling the element's nearest ancestor > with vertically scrollable content. > > > **manipulation:** The user agent may > consider touches that begin on the element only for the purposes of > scrolling and continuous zooming. Any additional behaviors supported > by auto are out of scope for this specification. > > > But I don't understand what the thinking is behind applying this to most links/buttons. Does this prevent a common issue that normally comes with using the default value of auto?
According to this sitepoint [post](https://www.sitepoint.com/5-ways-prevent-300ms-click-delay-mobile-devices/), `touch-action: manipulation` removes the roughly 300ms delay that most touch browsers insert between a tap and the resulting click event (the [`PointerEvents`](https://w3c.github.io/pointerevents/) specification tackles related touch-handling problems). The browser normally waits that long to see whether your tap is the first half of a double-tap-to-zoom gesture; `touch-action: manipulation` declares that only scrolling and pinch zoom are wanted on the element, so the click can fire immediately. Short quote:

> Most touch-based mobile browsers wait 300ms between your tap on the screen and the browser firing the appropriate handler for that event. It was implemented because you could be double-tapping to zoom the page to full width. Therefore, the browser waits for a third of a second — if you don’t tap again, the “click” is activated.
>
> ...
>
> Microsoft has solved many touch-based issues in the PointerEvents
> specification. For example, the pointerup event won’t be fired if the
> user is scrolling the page.
>
> There is also a non-standard CSS touch-action property which allows
> you to remove the delay on specific elements or the whole document
> without disabling pinch-zooming:
>
> a, button, .myelements { ... }

So the reason you see it on most links and buttons is, as far as I can tell, responsiveness: taps on those elements feel instantaneous, at the cost of giving up double-tap zoom on them.
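In practice the usage is a one-line declaration; a minimal sketch (the selectors here are just examples, not anything prescribed):

```css
/* Opt interactive elements out of double-tap-to-zoom so taps
   register immediately; scrolling and pinch zoom still work. */
a,
button,
.my-tappable-element {
  touch-action: manipulation;
}
```

Applying it to `html` or `body` instead removes the delay for the whole document, at the price of losing double-tap zoom everywhere.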
Cabal to setup a new Haskell project? Is it possible to (ab)use Cabal to have it create a generic Haskell project with a simple command, similar to what you can do in the Scala world with Sbt or Maven? e.g. ``` > cabal create AwesomeProject > ls AwesomeProject.hs awesomeProject.cabal LICENSE README Setup.hs ``` or is there another tool for that?
Use `cabal init --interactive` to have an interactive session with cabal. I've pasted the first few questions when using the command: ``` arash@arash-ThinkPad-SL510:~/test$ cabal init Package name [default "test"]? Package version [default "0.1"]? Please choose a license: 1) GPL 2) GPL-2 3) GPL-3 4) LGPL 5) LGPL-2.1 6) LGPL-3 * 7) BSD3 8) BSD4 9) MIT 10) PublicDomain 11) AllRightsReserved 12) OtherLicense 13) Other (specify) Your choice [default "BSD3"]? Author name? MyName Maintainer email? ``` Hope this helps.
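For reference, a session like the one above writes out a `.cabal` file roughly along these lines (the exact fields vary by cabal version; this is an illustrative sketch built from the answers shown above, not verbatim generator output):

```
name:                test
version:             0.1
license:             BSD3
license-file:        LICENSE
author:              MyName
build-type:          Simple
cabal-version:       >=1.2
```

You then add a `library` or `executable` stanza (with `build-depends: base` and your modules) and build with `cabal build`.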
Why does CIL support instances if it is solely stack based In the Common Intermediate Language (CIL) we can instantiate classes which are not static. That makes a lot of sense if we need to store instance data between method invocations. Why is this necessary in CIL, where everything is located on the stack anyway? There is no instance data stored in CIL, so why do I need an instance? Or to blame the compiler: why doesn't the compiler compile every method to be static in CIL? My best guess is that the information of the higher-level code can be extracted from CIL. This probably sounds stupid to an experienced CIL programmer because it might be completely wrong, but I am just starting to get into it. Any clarification is very much appreciated.
The implicit assumption in CIL is that class objects are stored on the GC heap. Accurate at runtime as well. What you get back when you create an object is a *reference* to the object. A pointer. It takes 4 bytes in 32-bit mode, 8 bytes in 64-bit mode. What you do with that pointer is up to your code. You can store it in a local variable (similar to storing it on the stack) or you can store it in a field or static variable. At runtime it is not fundamentally different from an IntPtr, except that the garbage collector can always find it back. Necessary when it moves an object when it compacts the heap, the pointer value needs to be updated. A lot of magic happens under the hood to help the GC to find that pointer back, the just-in-time compiler [plays an essential role](https://stackoverflow.com/a/17131389/17034). From the point of view of the runtime, all methods are static. Pretty visible when you write an extension method. What is different between a C# static and an instance method is an *extra* hidden argument that is passed to the method. You know it well, it is `this`. A keyword you can always use in an instance method. You don't have to name it yourself in the method's parameter list, the compiler takes care of it. You do name it explicitly in an extension method.
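The hidden-argument idea is easy to see in a language where it is not hidden. In Python, for example, the extra argument is spelled out as `self`, and calling the method through the class makes the "instance method is really a static-style function with an extra parameter" equivalence explicit (an illustrative analogy, not CIL itself):

```python
class Counter:
    def __init__(self):
        self.n = 0

    def bump(self, by):
        # 'self' plays the role of CIL's hidden 'this' argument:
        # it is just the first parameter of the function.
        self.n += by
        return self.n

c = Counter()
print(c.bump(2))            # instance-call syntax; prints 2
print(Counter.bump(c, 3))   # same function called with 'this' passed explicitly; prints 5
```

Both calls run the same code; the only difference is whether the language syntax passes the instance for you, which mirrors how the CLR treats instance and static methods.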
win32file.ReadDirectoryChangesW doesn't find all moved files Good morning, I've come across a peculiar problem with a program I'm creating in Python. It appears that when I drag and drop files from one location to another, not all of the files are registered as events by the modules. I've been working with win32file and win32con to try and get all events related to moving files from one location to another for processing. Here is a snippet of my detection code:

```
import win32file
import win32con

def main():
    path_to_watch = 'D:\\'
    _file_list_dir = 1

    # Create a watcher handle
    _h_dir = win32file.CreateFile(
        path_to_watch,
        _file_list_dir,
        win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE | win32con.FILE_SHARE_DELETE,
        None,
        win32con.OPEN_EXISTING,
        win32con.FILE_FLAG_BACKUP_SEMANTICS,
        None
    )

    while 1:
        results = win32file.ReadDirectoryChangesW(
            _h_dir,
            1024,
            True,
            win32con.FILE_NOTIFY_CHANGE_FILE_NAME |
            win32con.FILE_NOTIFY_CHANGE_DIR_NAME |
            win32con.FILE_NOTIFY_CHANGE_ATTRIBUTES |
            win32con.FILE_NOTIFY_CHANGE_SIZE |
            win32con.FILE_NOTIFY_CHANGE_LAST_WRITE |
            win32con.FILE_NOTIFY_CHANGE_SECURITY,
            None,
            None
        )

        for _action, _file in results:
            if _action == 1:
                print 'found!'
            if _action == 2:
                print 'deleted!'
```

I dragged and dropped 7 files and it only found 4.

```
# found!
# found!
# found!
# found!
```

What can I do to detect all dropped files?
[[ActiveState.Docs]: win32file.ReadDirectoryChangesW](http://docs.activestate.com/activepython/3.3/pywin32/win32file__ReadDirectoryChangesW_meth.html) (this is the best documentation that I could find for [[GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions](https://github.com/mhammond/pywin32)) is a wrapper over [[MS.Docs]: ReadDirectoryChangesW function](https://learn.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-readdirectorychangesw). Here's what it states (about the buffer):

### 1. General

> When you first call **ReadDirectoryChangesW**, the system allocates a buffer to store change information. This buffer is associated with the directory handle until it is closed and its size does not change during its lifetime. Directory changes that occur between calls to this function are added to the buffer and then returned with the next call. If the buffer overflows, the entire contents of the buffer are discarded, the *lpBytesReturned* parameter contains zero, and the **ReadDirectoryChangesW** function fails with the error code **ERROR\_NOTIFY\_ENUM\_DIR**.

- **My understanding** is that this is a **different** buffer than the one passed as an argument (*lpBuffer*):
  - The former is passed to every call of *ReadDirectoryChangesW* (could be different buffers (with **different sizes**) passed for each call)
  - The latter is allocated by the system, while the former clearly is allocated (by the user) before the function call; **the latter is the one** that stores data (probably in some raw format) between function calls, and when the function is called, the buffer contents are copied (and formatted) to *lpBuffer* (if they have not overflowed (and been discarded) in the meantime)

### 2. Synchronous

> Upon successful synchronous completion, the *lpBuffer* parameter is a formatted buffer and the number of bytes written to the buffer is available in *lpBytesReturned*. 
If the number of bytes transferred is zero, the buffer was either too large for the system to allocate or too small to provide detailed information on all the changes that occurred in the directory or subtree. In this case, you should compute the changes by enumerating the directory or subtree. > > > - This somewhat confirms my previous assumption - "*the buffer was either too large for the system to allocate*" - maybe when the buffer from previous point is allocated, it takes into account *nBufferLength*? Anyway, I took your code and changed it "a bit". *code00.py*: ``` import sys import msvcrt import pywintypes import win32file import win32con import win32api import win32event FILE_LIST_DIRECTORY = 0x0001 FILE_ACTION_ADDED = 0x00000001 FILE_ACTION_REMOVED = 0x00000002 ASYNC_TIMEOUT = 5000 BUF_SIZE = 65536 def get_dir_handle(dir_name, asynch): flags_and_attributes = win32con.FILE_FLAG_BACKUP_SEMANTICS if asynch: flags_and_attributes |= win32con.FILE_FLAG_OVERLAPPED dir_handle = win32file.CreateFile( dir_name, FILE_LIST_DIRECTORY, (win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE | win32con.FILE_SHARE_DELETE), None, win32con.OPEN_EXISTING, flags_and_attributes, None ) return dir_handle def read_dir_changes(dir_handle, size_or_buf, overlapped): return win32file.ReadDirectoryChangesW( dir_handle, size_or_buf, True, (win32con.FILE_NOTIFY_CHANGE_FILE_NAME | win32con.FILE_NOTIFY_CHANGE_DIR_NAME | win32con.FILE_NOTIFY_CHANGE_ATTRIBUTES | win32con.FILE_NOTIFY_CHANGE_SIZE | win32con.FILE_NOTIFY_CHANGE_LAST_WRITE | win32con.FILE_NOTIFY_CHANGE_SECURITY), overlapped, None ) def handle_results(results): for item in results: print(" {} {:d}".format(item, len(item[1]))) _action, _ = item if _action == FILE_ACTION_ADDED: print(" found!") if _action == FILE_ACTION_REMOVED: print(" deleted!") def esc_pressed(): return msvcrt.kbhit() and ord(msvcrt.getch()) == 27 def monitor_dir_sync(dir_handle): idx = 0 while True: print("Index: {:d}".format(idx)) idx += 1 results = 
read_dir_changes(dir_handle, BUF_SIZE, None)
        handle_results(results)
        if esc_pressed():
            break


def monitor_dir_async(dir_handle):
    idx = 0
    buffer = win32file.AllocateReadBuffer(BUF_SIZE)
    overlapped = pywintypes.OVERLAPPED()
    overlapped.hEvent = win32event.CreateEvent(None, False, 0, None)
    while True:
        print("Index: {:d}".format(idx))
        idx += 1
        read_dir_changes(dir_handle, buffer, overlapped)
        rc = win32event.WaitForSingleObject(overlapped.hEvent, ASYNC_TIMEOUT)
        if rc == win32event.WAIT_OBJECT_0:
            buffer_size = win32file.GetOverlappedResult(dir_handle, overlapped, True)
            results = win32file.FILE_NOTIFY_INFORMATION(buffer, buffer_size)
            handle_results(results)
        elif rc == win32event.WAIT_TIMEOUT:
            #print("    timeout...")
            pass
        else:
            print("Received {:d}. Exiting".format(rc))
            break
        if esc_pressed():
            break
    win32api.CloseHandle(overlapped.hEvent)


def monitor_dir(dir_name, asynch=False):
    dir_handle = get_dir_handle(dir_name, asynch)
    if asynch:
        monitor_dir_async(dir_handle)
    else:
        monitor_dir_sync(dir_handle)
    win32api.CloseHandle(dir_handle)


def main():
    print("Python {:s} on {:s}\n".format(sys.version, sys.platform))
    asynch = True
    print("Attempting {}ynchronous mode using a buffer {:d} bytes long...".format("As" if asynch else "S", BUF_SIZE))
    monitor_dir(".\\test", asynch=asynch)


if __name__ == "__main__":
    main()
```

**Notes**:

- Used constants wherever possible
- Split your code into functions so it's modular (and also to avoid duplicating it)
- Added *print* statements to increase output
- Added the **asynchronous** functionality (so the script doesn't hang forever if no activity in the *dir*)
- Added a way to exit when user presses `ESC` (of course in synchronous mode an event in the *dir* must also occur)
- Played with different values for different results

**Output**:

> 
> ```
> e:\Work\Dev\StackOverflow\q049799109>dir /b test
> 0123456789.txt
> 01234567890123456789.txt
> 012345678901234567890123456789.txt
> 0123456789012345678901234567890123456789.txt
> 
01234567890123456789012345678901234567890123456789.txt > 012345678901234567890123456789012345678901234567890123456789.txt > 0123456789012345678901234567890123456789012345678901234567890123456789.txt > 01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt > 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt > 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt > > e:\Work\Dev\StackOverflow\q049799109> > e:\Work\Dev\StackOverflow\q049799109>"C:\Install\x64\HPE\OPSWpython\2.7.10__00\python.exe" code00.py > Python 2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)] on win32 > > Attempting Synchronous mode using a buffer 512 bytes long... > Index: 0 > (2, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104 > deleted! > Index: 1 > (2, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94 > deleted! > Index: 2 > (2, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84 > deleted! > Index: 3 > (2, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74 > deleted! > (2, u'012345678901234567890123456789012345678901234567890123456789.txt') 64 > deleted! > Index: 4 > (2, u'01234567890123456789012345678901234567890123456789.txt') 54 > deleted! > Index: 5 > (2, u'0123456789012345678901234567890123456789.txt') 44 > deleted! > (2, u'012345678901234567890123456789.txt') 34 > deleted! > Index: 6 > (2, u'01234567890123456789.txt') 24 > deleted! > (2, u'0123456789.txt') 14 > deleted! > Index: 7 > (1, u'0123456789.txt') 14 > found! > Index: 8 > (3, u'0123456789.txt') 14 > Index: 9 > (1, u'01234567890123456789.txt') 24 > found! > Index: 10 > (3, u'01234567890123456789.txt') 24 > (1, u'012345678901234567890123456789.txt') 34 > found! 
> (3, u'012345678901234567890123456789.txt') 34 > (1, u'0123456789012345678901234567890123456789.txt') 44 > found! > Index: 11 > (3, u'0123456789012345678901234567890123456789.txt') 44 > (1, u'01234567890123456789012345678901234567890123456789.txt') 54 > found! > (3, u'01234567890123456789012345678901234567890123456789.txt') 54 > Index: 12 > Index: 13 > (1, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84 > found! > Index: 14 > Index: 15 > (1, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104 > found! > Index: 16 > (3, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104 > Index: 17 > (1, u'a') 1 > found! > Index: 18 > (3, u'a') 1 > > e:\Work\Dev\StackOverflow\q049799109> > e:\Work\Dev\StackOverflow\q049799109>"C:\Install\x64\HPE\OPSWpython\2.7.10__00\python.exe" code00.py > Python 2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)] on win32 > > Attempting Synchronous mode using a buffer 65536 bytes long... > Index: 0 > (2, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104 > deleted! > Index: 1 > (2, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94 > deleted! > Index: 2 > (2, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84 > deleted! > Index: 3 > (2, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74 > deleted! > Index: 4 > (2, u'012345678901234567890123456789012345678901234567890123456789.txt') 64 > deleted! > Index: 5 > (2, u'01234567890123456789012345678901234567890123456789.txt') 54 > deleted! > Index: 6 > (2, u'0123456789012345678901234567890123456789.txt') 44 > deleted! > Index: 7 > (2, u'012345678901234567890123456789.txt') 34 > deleted! > (2, u'01234567890123456789.txt') 24 > deleted! 
> (2, u'0123456789.txt') 14 > deleted! > Index: 8 > (1, u'0123456789.txt') 14 > found! > Index: 9 > (3, u'0123456789.txt') 14 > Index: 10 > (1, u'01234567890123456789.txt') 24 > found! > Index: 11 > (3, u'01234567890123456789.txt') 24 > Index: 12 > (1, u'012345678901234567890123456789.txt') 34 > found! > Index: 13 > (3, u'012345678901234567890123456789.txt') 34 > Index: 14 > (1, u'0123456789012345678901234567890123456789.txt') 44 > found! > Index: 15 > (3, u'0123456789012345678901234567890123456789.txt') 44 > Index: 16 > (1, u'01234567890123456789012345678901234567890123456789.txt') 54 > found! > (3, u'01234567890123456789012345678901234567890123456789.txt') 54 > Index: 17 > (1, u'012345678901234567890123456789012345678901234567890123456789.txt') 64 > found! > (3, u'012345678901234567890123456789012345678901234567890123456789.txt') 64 > (1, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74 > found! > Index: 18 > (3, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74 > (1, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84 > found! > (3, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84 > (1, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94 > found! > (3, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94 > (1, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104 > found! > (3, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104 > Index: 20 > (2, u'a') 1 > deleted! 
>
> e:\Work\Dev\StackOverflow\q049799109>
> e:\Work\Dev\StackOverflow\q049799109>"C:\Install\x64\HPE\OPSWpython\2.7.10__00\python.exe" code00.py
> Python 2.7.10 (default, Mar  8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)] on win32
>
> Attempting Asynchronous mode using a buffer 512 bytes long...
> Index: 0
> Index: 1
> (2, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104
> deleted!
> Index: 2
> (2, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94
> deleted!
> Index: 3
> (2, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84
> deleted!
> Index: 4
> (2, u'012345678901234567890123456789012345678901234567890123456789.txt') 64
> deleted!
> Index: 5
> (2, u'01234567890123456789012345678901234567890123456789.txt') 54
> deleted!
> Index: 6
> (2, u'0123456789012345678901234567890123456789.txt') 44
> deleted!
> Index: 7
> (2, u'012345678901234567890123456789.txt') 34
> deleted!
> Index: 8
> (2, u'01234567890123456789.txt') 24
> deleted!
> Index: 9
> (2, u'0123456789.txt') 14
> deleted!
> Index: 10
> Index: 11
> Index: 12
> (1, u'0123456789.txt') 14
> found!
> Index: 13
> (1, u'01234567890123456789.txt') 24
> found!
> Index: 14
> (1, u'012345678901234567890123456789.txt') 34
> found!
> Index: 15
> (3, u'012345678901234567890123456789.txt') 34
> Index: 16
> (1, u'0123456789012345678901234567890123456789.txt') 44
> found!
> (3, u'0123456789012345678901234567890123456789.txt') 44
> Index: 17
> Index: 18
> (1, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74
> found!
> Index: 19
> Index: 20
> (1, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94
> found!
> Index: 21
> Index: 22
> Index: 23
> Index: 24
>
> e:\Work\Dev\StackOverflow\q049799109>
> e:\Work\Dev\StackOverflow\q049799109>"C:\Install\x64\HPE\OPSWpython\2.7.10__00\python.exe" code00.py
> Python 2.7.10 (default, Mar  8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)] on win32
>
> Attempting Asynchronous mode using a buffer 65536 bytes long...
> Index: 0
> Index: 1
> (2, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104
> deleted!
> Index: 2
> (2, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94
> deleted!
> Index: 3
> (2, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84
> deleted!
> Index: 4
> (2, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74
> deleted!
> Index: 5
> (2, u'012345678901234567890123456789012345678901234567890123456789.txt') 64
> deleted!
> Index: 6
> (2, u'01234567890123456789012345678901234567890123456789.txt') 54
> deleted!
> Index: 7
> (2, u'0123456789012345678901234567890123456789.txt') 44
> deleted!
> Index: 8
> (2, u'012345678901234567890123456789.txt') 34
> deleted!
> (2, u'01234567890123456789.txt') 24
> deleted!
> Index: 9
> (2, u'0123456789.txt') 14
> deleted!
> Index: 10
> Index: 11
> Index: 12
> (1, u'0123456789.txt') 14
> found!
> Index: 13
> (1, u'01234567890123456789.txt') 24
> found!
> Index: 14
> (1, u'012345678901234567890123456789.txt') 34
> found!
> Index: 15
> (3, u'012345678901234567890123456789.txt') 34
> (1, u'0123456789012345678901234567890123456789.txt') 44
> found!
> (3, u'0123456789012345678901234567890123456789.txt') 44
> Index: 16
> (1, u'01234567890123456789012345678901234567890123456789.txt') 54
> found!
> (3, u'01234567890123456789012345678901234567890123456789.txt') 54
> (1, u'012345678901234567890123456789012345678901234567890123456789.txt') 64
> found!
> (3, u'012345678901234567890123456789012345678901234567890123456789.txt') 64
> (1, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74
> found!
> Index: 17
> (3, u'0123456789012345678901234567890123456789012345678901234567890123456789.txt') 74
> (1, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84
> found!
> (3, u'01234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 84
> (1, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94
> found!
> (3, u'012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 94
> (1, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104
> found!
> (3, u'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.txt') 104
> Index: 18
> Index: 19
>
> ```

**Remarks**:

- Used a dir *test* containing 10 files with different names (repetitions of *0123456789*)
- There are 4 runs:
  1. Synchronous
     - *512B* buffer
     - *64K* buffer
  2. Asynchronous
     - *512B* buffer
     - *64K* buffer
- For each (above) run, the files are (using *Windows Commander* to operate):
  - Moved **from** the *dir* (involved *delete*)
  - Moved (back) **to** the *dir* (involved *add*)
- It's just one run for each combination, and that by far **can't be relied on as a benchmark**, but I ran the script several times and the pattern tends to be consistent
- Deleting files doesn't vary too much across runs, which means that the events are evenly distributed over (the tiny amounts of) time
- Adding files, on the other hand, is dependent on the buffer size. Another noticeable thing is that for each addition there are 2 events
- From a performance perspective, asynchronous mode doesn't bring any improvements (as I was expecting); on the contrary, it tends to slow things down.
But its biggest advantage is the possibility of gracefully exiting on timeout (**an abnormal interrupt might keep resources locked** till program exit (and sometimes even beyond!)).

**Bottom line is that there's no recipe to avoid losing events.** Every measure taken can be "beaten" by increasing the number of generated events. Minimizing the losses:

- The buffer size. This was the (main) problem in your case. Unfortunately, the documentation couldn't be less clear; there are no guidelines on how large it should be. Browsing *C* forums I noticed that *64K* is a common value. However:
  - It isn't possible to have a huge buffer and, in case of failures, to decrease its size until success, because that would mean losing all the events generated while figuring out the buffer size
  - Even if *64K* is enough to hold (several times over) all the events that I generated in my tests, some were still lost. Maybe that's because of the "magical" buffer that I talked about at the beginning
- Reduce the number of events as much as possible. In your case I noticed that you're only interested in add and delete events (*FILE\_ACTION\_ADDED* and *FILE\_ACTION\_REMOVED*). Only specify the appropriate *FILE\_NOTIFY\_CHANGE\_\** flags to *ReadDirectoryChangesW* (for example you don't care about *FILE\_ACTION\_MODIFIED*, but you are receiving it when adding files)
- Try splitting the *dir* contents into several subdirs and monitor them concurrently. For example, if you only care about changes occurring in one *dir* and a bunch of its subdirs, there's no point in recursively monitoring the whole tree, because it will most likely produce lots of useless events. Anyway, if doing things in parallel, **don't use threads because of the *GIL*!!!** ([[Python.Wiki]: GlobalInterpreterLock](https://wiki.python.org/moin/GlobalInterpreterLock)). Use [[Python.Docs]: multiprocessing - Process-based “threading” interface](https://docs.python.org/library/multiprocessing.html) instead
- Increase the speed of the code that runs in the loop so it spends as little time as possible outside *ReadDirectoryChangesW* (when generated events could overflow the buffer). Of course, some of the items below might have insignificant influence (and also some bad side effects), but I'm listing them anyway:
  - Do as little processing as possible and try to delay it. Maybe do it in another process (because of the ***GIL***)
  - Get rid of all *print*-like statements
  - Instead of e.g. *win32con.FILE\_NOTIFY\_CHANGE\_FILE\_NAME*, use *from win32con import FILE\_NOTIFY\_CHANGE\_FILE\_NAME* at the beginning of the script, and only use *FILE\_NOTIFY\_CHANGE\_FILE\_NAME* in the loop (to avoid the variable lookup in the module)
  - Don't use functions (because of *call* / *ret* like instructions) - not sure about that
  - Try using the *win32file.GetQueuedCompletionStatus* method to get the results (***async* only**)
  - Since in time things tend to get better (there are exceptions, of course), try switching to a newer *Python* version. Maybe it will run faster
  - Use *C* - this is probably undesirable, but it could have some benefits:
    - There won't be the back and forth conversions between *Python* and *C* that *PyWin32* performs - but I didn't use a profiler to check how much time is spent in them
    - *lpCompletionRoutine* (which *PyWin32* doesn't offer) would be available too; maybe it's faster
    - As an alternative, *C* could be invoked using *CTypes*, but that would require some work and I feel that it won't be worth it
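The "split into subdirs and monitor them concurrently" idea can be sketched with a minimal *multiprocessing* skeleton (a sketch only — `watch_dir` here is a placeholder standing in for the *ReadDirectoryChangesW* loop discussed above, not real monitoring code):

```python
import multiprocessing


def watch_dir(path):
    # Placeholder: this is where the per-directory ReadDirectoryChangesW
    # loop would run; events would typically be pushed into a
    # multiprocessing.Queue consumed by the main process.
    pass


def monitor_concurrently(subdirs):
    # One process per monitored directory - processes rather than threads,
    # so the GIL doesn't serialize the event-handling work.
    procs = [multiprocessing.Process(target=watch_dir, args=(d,))
             for d in subdirs]
    for proc in procs:
        proc.start()
    return procs
```

Each worker gets its own (smaller) buffer, so a burst of events in one subdir is less likely to overflow the others.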
Nginx location match all file extensions except php

I have the following in my nginx config file, it works, but I don't want to have to list every file extension.

```
location ~ \.(gif|jpg|png|css|js|ttf|woff|html|htm|unity3d) {
    try_files $uri /images/default.gif;
}
```

Is there a way to apply this to everything except php files?

**Edit: Updated Config**

The main file:

```
server {
    listen 80 default_server;
    server_name _;

    root /usr/share/nginx/html/$http_host;
    index index.php index.html index.htm;

    # location ~ \.(gif|jpg|png|css|js|ttf|woff|html|htm|unity3d|tpl) {
    #     try_files $uri /images/default.gif =404;
    # }

    location ~ .+(?<!\.php)$ {
        location ~ ^[^.]+\.[^.]+$ {
            try_files $uri /images/default.gif =404;
        }
        location ~ / {
            try_files $uri $uri/ /index.html;
            include /usr/share/nginx/conf/mission13.io.conf;
        }
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

Then in the included file:

```
if ($http_host = groups.mission13.io) {
    rewrite ^(.+)$ /index.php?path=$1;
}
```
# The PCRE Library

Nginx uses the **[PCRE library](http://www.pcre.org/)** written in C. There's a huge **[man page](http://www.pcre.org/pcre.txt)**, a bit hard to understand sometimes but quite detailed. Among it, you will find the look-ahead / look-behind functionalities as you would find them in Perl.

# Positive/negative look ahead/behind

Positive/negative look ahead/behind allows matching a string if one part of it is/isn't followed/preceded by an expression. Look-behind expressions are restricted to a fixed string because it's not possible for most implementations to apply a regular expression backwards — you would need to know how many steps to go back. Look-ahead obviously doesn't suffer from this limitation, so you can use a regular expression as you usually do.

Here's the relevant section of the man page:

> LOOKAHEAD AND LOOKBEHIND ASSERTIONS
>
> ```
> (?=...)  positive look ahead
> (?!...)  negative look ahead
> (?<=...) positive look behind
> (?<!...) negative look behind
>
> Each top-level branch of a look behind must be of a fixed length.
> ```

Unfortunately you can't capture the end of the string with look-ahead.

# Look behind in action

So, our first attempt will be using a negative look-behind from the end of the string:

```
location ~ .+(?<!\.php)$ {
    ...
}
```

Which means "only capture strings that don't end with `.php`". That's quite close to what we need already, but there's something more to add to make it work as expected.

# Nested locations

Indeed, nothing guarantees that you will have a string containing a file extension at this point. It could rather be *anything* except `^.+\.php$`. To make sure this is a real file suffix, the natural way to overcome this limitation is to use nested location blocks where the most restrictive part is the apex. So our configuration will now look like below.

```
location ~ .+(?<!\.php)$ {
    location ~ ^[^.]+\.[^.]+$ {
        try_files $uri /images/default.gif;
    }
}
```

And that's it!
# Your second issue

Here are my remarks after your post update, for the second issue you are facing (404 errors on other URLs).

As `~ .+(?<!\.php)$` matches everything except `\.php$` and locations are nested, you need to nest the `/` location block and transform it into a regex match:

```
location ~ .+(?<!\.php)$ {
    location ~ ^[^.]+\.[^.]+$ {
        try_files $uri /images/default.gif;
    }
    location ~ / {
        # your stuff
    }
}
```

Also note that you can end up with an infinite loop with the `try_files $uri /images/default.gif;` part, because the last parameter of the `try_files` directive is an *internal redirect* or an HTTP code. So if `/images/default.gif` doesn't resolve to a file, the request will go through this location block 10 more times until nginx stops the processing and returns HTTP 500. So change it to `try_files $uri /images/default.gif =404;`.
f# Intersection of lists

```
let rec mem list x =
    match list with
    | [] -> false
    | head :: tail -> if x = list.Head then true else mem list.Tail x
```

The function `mem` takes a list and a var `x` as parameters and checks if the list contains the value `x`; it returns true if it does and false if it doesn't.

```
let rec intersection list1 list2 =
    match list1 with
    | head :: tail ->
        match list2 with
        | head :: tail ->
            if mem list2 list1.Head = true then
                (*add the value to a list*)
            else intersection list1.Tail list2
        | [] -> failwith "Second list is empty"
    | [] -> failwith "First list is empty"
```

I am quite new to F# and the problem I am having right now is that I don't know how to construct a list in the `(*add the value to a list*)` part and then add the value to it. I haven't tested the code yet since I need to complete this step first in order to not get errors, so I'm not 100% sure how it works. I am trying to intersect 2 lists. I know that functions exist for this, e.g. `Set.intersect list1 list2`. The indentation is a bit weird as well here since I didn't want to get too long rows, but you will probably understand anyway.
The most direct way to fix your code is to write something like the code below. In the `mem` function, I just fixed the indentation and changed it to use `head` and `tail` that you get from pattern matching, rather than accessing them via `list.Head` and `list.Tail` (because doing that is more idiomatic and safer):

```
let rec mem list x =
    match list with
    | [] -> false
    | head :: tail -> if x = head then true else mem tail x
```

In `intersection`, the trick is to use `head::rest` to build a resulting list when `head` is an element that appears in both lists (and `rest` is the list that you get by applying intersection to the tail recursively). You also do not need to match on `list2` because `mem` handles empty lists fine:

```
let rec intersection list1 list2 =
    match list1 with
    | head :: tail ->
        let rest = intersection tail list2
        if mem list2 head then head::rest else rest
    | [] -> []
```

This is not super efficient (assuming *n* is the length of `list1` and *m* is the length of `list2`, you may need up to *m\*n* steps), but that's probably not the point. Also, `intersection` is not tail-recursive, so it will not work on large lists, but that's another - more advanced - functional programming topic.

Finally, the code will also return a list that may contain a single element multiple times - but I guess that's fine for you (e.g. `intersection [1;1;1] [1]` returns `[1;1;1]`, but if you flip the arguments you'll get just `[1]`).
Check if some ajax on the page is in processing?

I have this code:

```
$('#postinput').on('keyup', function(){
    var txt = $(this).val();
    $.ajax({
        type: "POST",
        url: "action.php",
        data: 'txt=' + txt,
        cache: false,
        context: this,
        success: function(html) {
            alert(html);
        }
    });
});

$('#postinput2').on('keyup', function(){
    var txt2 = $(this).val();
    $.ajax({
        type: "POST",
        url: "action.php",
        data: 'txt2=' + txt2,
        cache: false,
        context: this,
        success: function(html) {
            alert(html);
        }
    });
});
```

Suppose the user clicks on `#postinput` and it takes 30 seconds to process. If in the meantime the user clicks on `#postinput2`, I want to give them the alert `"Still Processing Your Previous request"`. Is there a way I can check if some AJAX request is still processing?

Suppose I have a lot of AJAX requests running on the page. Is there a method to know if even a single one is still processing?
You can set a variable to `true` or `false` depending on when an AJAX call starts, for example:

```
var ajaxInProgress = false;

$('#postinput2').on('keyup', function(){
    var txt2 = $(this).val();
    ajaxInProgress = true;
    $.ajax({
        ..
        ..
        success: function(html) {
            ajaxInProgress = false;
```

Now check it if you need to before a call:

```
if (ajaxInProgress)
    alert("AJAX in progress!");
```

Or, use global AJAX events to set the variable:

```
$( document ).ajaxStart(function() {
  ajaxInProgress = true;
});
$( document ).ajaxStop(function() {
  ajaxInProgress = false;
});
```
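If several requests can be in flight at once, a single boolean is not quite enough — it would be cleared by whichever request finishes first. A small counter covers the "is even a single one still processing" case; here is a sketch, with the jQuery wiring shown as comments since it is the only framework-specific part:

```javascript
var ajaxCount = 0;

function requestStarted() {
    ajaxCount++;
}

function requestFinished() {
    // Guard against going negative if a finish is reported twice.
    if (ajaxCount > 0) ajaxCount--;
}

function anyAjaxInProgress() {
    return ajaxCount > 0;
}

// Wiring (assuming jQuery is loaded) - these global handlers fire
// for every request on the page:
// $(document).ajaxSend(requestStarted).ajaxComplete(requestFinished);
```

With this in place, the `keyup` handler can simply start with `if (anyAjaxInProgress()) { alert("Still Processing Your Previous request"); return; }`.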
How to create toolbar searchview in flutter

I need to implement a `searchview` in the toolbar of my app to filter a list view:

[![](https://i.stack.imgur.com/MH9R6.png)](https://i.stack.imgur.com/MH9R6.png)
With the help of @aziza's answer, I wrote a detailed code snippet of a search view with list filtering below. It will help others:

```
import 'package:flutter/material.dart';

class SearchList extends StatefulWidget {
  SearchList({ Key key }) : super(key: key);

  @override
  _SearchListState createState() => new _SearchListState();
}

class _SearchListState extends State<SearchList> {
  Widget appBarTitle = new Text("Search Sample", style: new TextStyle(color: Colors.white),);
  Icon actionIcon = new Icon(Icons.search, color: Colors.white,);
  final key = new GlobalKey<ScaffoldState>();
  final TextEditingController _searchQuery = new TextEditingController();
  List<String> _list;
  bool _IsSearching;
  String _searchText = "";

  _SearchListState() {
    _searchQuery.addListener(() {
      if (_searchQuery.text.isEmpty) {
        setState(() {
          _IsSearching = false;
          _searchText = "";
        });
      } else {
        setState(() {
          _IsSearching = true;
          _searchText = _searchQuery.text;
        });
      }
    });
  }

  @override
  void initState() {
    super.initState();
    _IsSearching = false;
    init();
  }

  void init() {
    _list = List();
    _list.add("Google");
    _list.add("IOS");
    _list.add("Andorid");
    _list.add("Dart");
    _list.add("Flutter");
    _list.add("Python");
    _list.add("React");
    _list.add("Xamarin");
    _list.add("Kotlin");
    _list.add("Java");
    _list.add("RxAndroid");
  }

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      key: key,
      appBar: buildBar(context),
      body: new ListView(
        padding: new EdgeInsets.symmetric(vertical: 8.0),
        children: _IsSearching ? _buildSearchList() : _buildList(),
      ),
    );
  }

  List<ChildItem> _buildList() {
    return _list.map((contact) => new ChildItem(contact)).toList();
  }

  List<ChildItem> _buildSearchList() {
    if (_searchText.isEmpty) {
      return _list.map((contact) => new ChildItem(contact)).toList();
    } else {
      List<String> _searchList = List();
      for (int i = 0; i < _list.length; i++) {
        String name = _list.elementAt(i);
        if (name.toLowerCase().contains(_searchText.toLowerCase())) {
          _searchList.add(name);
        }
      }
      return _searchList.map((contact) => new ChildItem(contact)).toList();
    }
  }

  Widget buildBar(BuildContext context) {
    return new AppBar(
      centerTitle: true,
      title: appBarTitle,
      actions: <Widget>[
        new IconButton(icon: actionIcon, onPressed: () {
          setState(() {
            if (this.actionIcon.icon == Icons.search) {
              this.actionIcon = new Icon(Icons.close, color: Colors.white,);
              this.appBarTitle = new TextField(
                controller: _searchQuery,
                style: new TextStyle(
                  color: Colors.white,
                ),
                decoration: new InputDecoration(
                  prefixIcon: new Icon(Icons.search, color: Colors.white),
                  hintText: "Search...",
                  hintStyle: new TextStyle(color: Colors.white)
                ),
              );
              _handleSearchStart();
            } else {
              _handleSearchEnd();
            }
          });
        },),
      ]
    );
  }

  void _handleSearchStart() {
    setState(() {
      _IsSearching = true;
    });
  }

  void _handleSearchEnd() {
    setState(() {
      this.actionIcon = new Icon(Icons.search, color: Colors.white,);
      this.appBarTitle = new Text("Search Sample", style: new TextStyle(color: Colors.white),);
      _IsSearching = false;
      _searchQuery.clear();
    });
  }
}

class ChildItem extends StatelessWidget {
  final String name;
  ChildItem(this.name);

  @override
  Widget build(BuildContext context) {
    return new ListTile(title: new Text(this.name));
  }
}
```

**Output:**

[![enter image description here](https://i.stack.imgur.com/Sk6LT.gif)](https://i.stack.imgur.com/Sk6LT.gif)
Basic SVM issues with e1071: test error rate doesn't match up with tune's results

This seems like a very basic question but I can't seem to find the answer anywhere. I'm new to SVMs and ML in general and am trying to do a few simple exercises but the results don't seem to match up. I'm using e1071 with R and have been going through *An Introduction to Statistical Learning* by James, Witten, Hastie, and Tibshirani.

My question: why is it that when I use predict I don't seem to have any classification errors and yet the results of the tune function indicate a non-zero error rate?

My code (I'm looking at three classes):

```
set.seed(4)
dat <- data.frame(pop = rnorm(900, c(0,3,6), 1), strat = factor(rep(c(0,1,2), times=300)))
ind <- sample(1:900)
train <- dat[ind[1:600],]
test <- dat[ind[601:900],]

tune1 <- tune(svm, train.x=train[,1], train.y=train[,2], kernel="radial",
              ranges=list(cost=10^(-1:2), gamma=c(.5,1,2)))

svm.tuned <- svm(train[,2]~., data=train, kernel = "radial", cost=10, gamma=1) # I just entered the optimal cost and gamma values returned by tune

test.pred <- predict(svm.tuned, newdata=data.frame(pop=test[,1],strat=test[,2]))
```

So when I look at test.pred I see that every value matches up with the true class labels. Yet when I tuned the model it gave an error rate of around 0.06, and either way a test error rate of 0 seems absurd for nonseparable data (unless I'm wrong about this not being separable?). Any clarification would be tremendously helpful. Thanks a lot.
The `tune` function performs 10-fold cross validation. It **randomly** splits your training data into 10 parts and then iteratively:

- selects each of them and calls it the "validation set"
- selects the remaining 9 and calls them the "training set"
- trains the SVM with the given parameters on the training set, and checks how well it works on the validation set
- computes the mean error across these 10 "folds"

The information from the `tune` function is this mean error. Once the best parameters are chosen, you are training your model on the **whole** set, which is exactly 1/9 bigger than the ones used for tuning. As a result, in your particular case (it does not happen often) you get a classifier which perfectly predicts your "test" set, while some of the smaller ones trained while tuning made a small mistake (or a few) - this is why you get the information regarding different errors.

**UPDATE**

It seems that you are actually also training your model on both **inputs** and **labels**. Look at your

```
svm.tuned$SV
```

variable, which holds the support vectors. To train an svm, simply run

```
svm(x, y, kernel="...", ...)
```

for example

```
svm(train$pop, train$strat, kernel="linear")
```

which results in some misclassifications (as expected, as a linear kernel cannot perfectly separate such data).

Or, using your notation:

```
svm.tuned <- svm(strat~., data=train, kernel = "radial", cost=10, gamma=1)
```

Note that you should use the name of the frame column **strat**, not the index.
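The bookkeeping behind that mean error is easy to sketch outside of R. Below is an illustrative Python version of the same k-fold idea (not e1071's actual implementation — `train_fn` and `error_fn` are hypothetical callables standing in for "fit an SVM" and "measure validation error"):

```python
import random

def kfold_mean_error(data, train_fn, error_fn, k=10, seed=0):
    """Shuffle indices, carve them into k folds, train on k-1 folds,
    measure error on the held-out fold, and average the k errors."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        valid = set(folds[i])
        train = [data[j] for j in idx if j not in valid]
        model = train_fn(train)                       # fit on 9/10 of the data
        errors.append(error_fn(model, [data[j] for j in valid]))
    return sum(errors) / k
```

The number `tune` reports is this average — computed on models that each saw only 9/10 of the training data, which is why it can disagree with the error of the final model trained on everything.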
this.setState isn't merging states as I would expect

I have the following state:

```
this.setState({ selected: { id: 1, name: 'Foobar' } });
```

Then I update the state:

```
this.setState({ selected: { name: 'Barfoo' }});
```

Since `setState` is supposed to merge, I would expect it to be:

```
{ selected: { id: 1, name: 'Barfoo' } };
```

But instead it eats the id and the state is:

```
{ selected: { name: 'Barfoo' } };
```

Is this expected behavior, and what's the solution to update only one property of a nested state object?
I think `setState()` doesn't do a recursive merge. You can use the value of the current state `this.state.selected` to construct a new state and then call `setState()` on that:

```
var newSelected = _.extend({}, this.state.selected);
newSelected.name = 'Barfoo';
this.setState({ selected: newSelected });
```

I've used the `_.extend()` function (from the underscore.js library) here to prevent modification of the existing `selected` part of the state by creating a shallow copy of it.

Another solution would be to write `setStateRecursively()`, which does a recursive merge on a new state and then calls `replaceState()` with it:

```
setStateRecursively: function(stateUpdate, callback) {
  var newState = mergeStateRecursively(this.state, stateUpdate);
  this.replaceState(newState, callback);
}
```
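If you'd rather not depend on underscore.js, the same shallow copy-and-override can be done with plain `Object.assign`. A sketch of just the merge step, outside of a component:

```javascript
var state = { selected: { id: 1, name: 'Foobar' } };

// Shallow-copy the nested object, then override the field that changed;
// the original state.selected is left untouched.
var newSelected = Object.assign({}, state.selected, { name: 'Barfoo' });

// Inside a component you would now call:
// this.setState({ selected: newSelected });
```

This keeps `id` because every own property of the old `selected` is copied before `name` is overridden.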
Show a custom picture before a youtube video starts

I'm trying to display an image in the "player" div, and then after the visitor clicks the button, replace it with a video (replace the div with an iFrame). Here's my code:

```
<!DOCTYPE html>
<head>
  <style type="text/css">
    html {
      text-align: center;
    }
    #player {
      display: inline-block;
      width: 640px;
      height: 360px;
      background: url(http://placehold.it/640x360) no-repeat;
    }
  </style>
</head>
<html>
<body>
  <!-- 1. The <iframe> (and video player) will replace this <div> tag. -->
  <div id="player"></div>
  <p><button onclick="playMe()">Play</button></p>

  <script>
    function playMe() {
      // 2. This code loads the IFrame Player API code asynchronously.
      var tag = document.createElement('script');
      tag.src = "https://www.youtube.com/iframe_api";
      var firstScriptTag = document.getElementsByTagName('script')[0];
      firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

      // 3. This function creates an <iframe> (and YouTube player)
      //    after the API code downloads.
      var player;
      function onYouTubeIframeAPIReady() {
        player = new YT.Player('player', {
          height: '360',
          width: '640',
          videoId: 'JW5meKfy3fY',
          playerVars: { 'autoplay': 0, 'controls': 0, 'rel': 0, 'showinfo': 0 },
          events: {
            'onReady': onPlayerReady
          }
        });
      }

      // 4. The API will call this function when the video player is ready.
      function onPlayerReady(event) {
        event.target.playVideo();
      }
    }
  </script>
</body>
</html>
```

But it doesn't work. It works when the "playMe" function is removed, but I need the video to start after a button/div/link is clicked.

I've tried to put the picture and the video into separate `div`s, picture on top of the video, and then after the click hide the top div and start the video, but that didn't work either.
This is a variable scoping issue. The function onYouTubeIframeAPIReady needs to be a global variable - as does the player variable.

I believe this example should work as a basis for what you want (there's a race condition if you click the button before the api has downloaded, but a full implementation would probably be app-specific so I'll leave that to the reader):

```
<!DOCTYPE html>
<head>
  <style type="text/css">
    html {
      text-align: center;
    }
    #player {
      display: inline-block;
      width: 640px;
      height: 360px;
      background: url(http://placehold.it/640x360) no-repeat;
    }
  </style>
</head>
<html>
<body>
  <!-- 1. The <iframe> (and video player) will replace this <div> tag. -->
  <div id="player"></div>
  <p><button onclick="playMe()">Play</button></p>

  <script>
    // 2. This code loads the IFrame Player API code asynchronously.
    var tag = document.createElement('script');
    tag.src = "https://www.youtube.com/iframe_api";
    var firstScriptTag = document.getElementsByTagName('script')[0];
    firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

    // 3. called after the API code downloads.
    var player;
    function onYouTubeIframeAPIReady() {
      // don't need anything here
    }

    // 4. The API will call this function when the video player is ready.
    function onPlayerReady(event) {
      event.target.playVideo();
    }

    function playMe() {
      if (window.YT) {
        player = new YT.Player('player', {
          height: '360',
          width: '640',
          videoId: 'JW5meKfy3fY',
          playerVars: { 'autoplay': 0, 'controls': 0, 'rel': 0, 'showinfo': 0 },
          events: {
            'onReady': onPlayerReady
          }
        });
      }
    }
  </script>
</body>
</html>
```
How do you write to a span using jQuery?

I'm trying to populate a `<span></span>` element on the page load with jQuery. At the moment the value that gets populated into the span is just an integer count. Here I have named my span *userCount*:

```
<a href="#" class="">Users<span id = "userCount"></span></a>
```

I am trying to write the value of the span with no success.

```
$(document).ready(function () {
    $.post("Dashboard/UsersGet", {}, function (dataset) {
        var obj = jQuery.parseJSON(dataSet);
        var table = obj.Table;
        var countUsers;
        for (var i = 0, len = table.length; i < len; i++) {
            var array = table[i];
            if (array.Active == 1) {
                var name = array.Name;
            }
            countUsers = i;
        }
        userCount.innerHTML = countUsers.toString();
    });
});
```
You don't have any `usercount` variable. Use `$(selector)` to build a jQuery object on which you can call functions like [html](http://api.jquery.com/html/):

```
$('#userCount').html(countUsers);
```

Note also that:

- you don't need to convert your integer to a string manually.
- if you don't break from the loop, `countUsers` will always be `table.length-1`.
- you have a typo: `dataSet` instead of `dataset`. JavaScript is case sensitive.
- you don't need to parse the result of the request.
- you don't need to pass empty data: `jQuery.post` checks the type of the provided parameters.

So, this is probably more what you need, supposing you do other things in the loop:

```
$.post("Dashboard/UsersGet", function (dataset) {
    var table = dataset.Table;
    var countUsers = table.length; // -1 ?
    // for now, the following loop is useless
    for (var i = 0; i < table.length; i++) { // really no need to optimize away the table.length
        var array = table[i];
        if (array.Active == 1) { // I hope array isn't an array...
            var name = array.Name; // why ? This serves to nothing
        }
    }
    $('#userCount').html(countUsers);
});
```
Passing Input while creating Angular 2 Component dynamically using ComponentResolver

I am able to load a dynamic Angular 2 component using ComponentResolver and ViewContainerRef. However, I am not able to figure out how to pass any input variable of the child component into this.

**parent.ts**

```
@Component({
    selector: "parent",
    template: "<div #childContainer ></div>"
})
export class ParentComponent {
    @ViewChild("childContainer", { read: ViewContainerRef }) childContainer: ViewContainerRef;

    constructor(private viewContainer: ViewContainerRef, private _cr: ComponentResolver) {}

    loadChild = (): void => {
        this._cr.resolveComponent(Child1Component).then(cmpFactory => {
            this.childContainer.createComponent(cmpFactory);
        });
    }
}
```

**child1**

```
@Component({
    selector: "child1",
    template: "<div>{{var1}}</div><button (click)='closeMenu()'>Close</button>"
})
export class Child1Component {
    @Input() var1: string;
    @Output() close: EventEmitter<any> = new EventEmitter<any>();

    constructor() {}

    closeMenu = (): void => {
        this.close.emit("");
    }
}
```

So in the above example, say `loadChild` is being called on a button click. I am able to load `Child1Component`, but how do I pass the `var1` input of the child? Also, how do I subscribe to the `close` EventEmitter decorated with `@Output`?
You have to pass it imperatively, like:

```
loadChild(): void {
    this._cr.resolveComponent(Child1Component).then(cmpFactory => {
        let cmpRef = this.childContainer.createComponent(cmpFactory);
        cmpRef.instance.var1 = someValue;
    });
}
```

The same applies to registering handlers for outputs:

```
loadChild(): void {
    this._cr.resolveComponent(Child1Component).then(cmpFactory => {
        let instance: any = this.childContainer.createComponent(cmpFactory).instance;
        if (!!instance.close) {
            // close is an EventEmitter decorated with @Output
            instance.close.subscribe(this.close);
        }
    });
}

close = (): void => {
    // do cleanup stuff..
    this.childContainer.clear();
}
```
Get value without knowing key in one-pair-associative-array

There is an associative array with **only one** `key => value` pair. I don't know its key, but I need to get its value:

```
$array = array('???' => 'value');
$value = // ??
```

`$array[0]` doesn't work. How can I get its value?
You can use any of the following functions to get the value, since there's only one element in the array.

```
$value = reset( $array);
$value = current( $array);
$value = end( $array);
```

Also, if you want to use `array_keys()`, you'd need to do:

```
$keys = array_keys( $array);
echo $array[ $keys[0] ];
```

to get the value. As some more options, you can also use `array_pop()` or `array_shift()` to get the value:

```
$value = array_pop( $array);
$value = array_shift( $array);
```

Finally, you can use `array_values()` to get all the values of the array, then take the first:

```
$values = array_values( $array);
echo $values[0];
```

---

Of course, there are lots of other alternatives; some silly, some useful.

```
$value = pos($array);
$value = implode('', $array);
$value = current(array_slice($array, 0, 1));
$value = current(array_splice($array, 0, 1));
$value = vsprintf('%s', $array);
foreach($array as $value);
list(,$value) = each($array);
```
Index Key Column VS Index Included Column Can someone explain these two - Index **Key** Column VS Index **Included** Column? Currently, I have an index that has 4 Index Key Columns and 0 Included Columns, and I'd like to know the difference between the two.
Index key columns are part of the b-tree of the index. Included columns are not. Take two indexes: ``` CREATE INDEX index1 ON table1 (col1, col2, col3) CREATE INDEX index2 ON table1 (col1) INCLUDE (col2, col3) ``` `index1` is better suited for this kind of query: ``` SELECT * FROM table1 WHERE col1 = x AND col2 = y AND col3 = z ``` Whereas `index2` is better suited for this kind of query: ``` SELECT col2, col3 FROM table1 WHERE col1 = x ``` In the first query, `index1` provides a mechanism for quickly identifying the rows of interest. The query will (probably) execute as an index seek, followed by a bookmark lookup to retrieve the full row(s). In the second query, `index2` acts as a covering index. SQL Server doesn't have to hit the base table at all, since the index provides all the data it needs to satisfy the query. `index1` could also act as a covering index in this case. If you want a covering index, but don't want to add all columns to the b-tree because you don't seek on them, or can't because they aren't an allowed datatype (eg, XML), use the INCLUDE clause.
Java 8 - throw multiple generic checked exceptions in lambda In a project I am working on, I have found a class which wraps all methods of its super-class in some elaborate exception handling. It looks similar to this:

```
public void method1() throws ExceptionA {
    String exceptionString = "";
    try {
        super.method1();
    } catch (ExceptionA e) {
        exceptionString = // <convert the exception to string in an elaborate way>
        throw e;
    } finally {
        // <an elaborate logger call which uses value of exceptionString>
    }
}

public void method2() throws ExceptionB, ExceptionC {
    String exceptionString = "";
    try {
        super.method2();
    } catch (ExceptionB | ExceptionC e) {
        exceptionString = // <convert the exception to string in elaborate way>
        throw e;
    } finally {
        // <an elaborate logger call which uses value of exceptionString>
    }
}

// ... <a bunch of other methods like this>
```

I immediately thought "Wow, how cool would it be to have one generic wrapper and just call it in every one of these methods. The class would be like 10x shorter!". So I got to work. This is where I got stuck:

```
private interface ThrowingMethod<E extends Exception> {
    void run() throws E;
}

public <E extends Exception> void wrapMethod(ThrowingMethod<E> method) throws E {
    String exceptionString = "";
    try {
        method.run();
    } catch (Exception e) {
        exceptionString = // <convert the exception to string in an elaborate way>
        throw e;
    } finally {
        // <an elaborate logger call which uses value of exceptionString>
    }
}

public void method1() throws ExceptionA {
    wrapMethod(super::method1); // works
}

public void method2() throws ExceptionB, ExceptionC {
    wrapMethod(super::method2); // Error in Eclipse: "Unhandled exception type Exception"
}

// ... <a bunch of other methods like this>
```

In conclusion, this approach works for methods that throw only one type of checked exception. When a method throws multiple checked exceptions, Java assumes that the exception type is `Exception`.
I tried to add more generic parameters to `ThrowingMethod` and `wrapMethod` but it doesn't change anything. How can I get a functional interface to work with multiple generic exceptions?
When you expand your interface to use two type variables, i.e. ``` private static interface ThrowingMethod<E1 extends Exception,E2 extends Exception> { void run() throws E1, E2; } public <E1 extends Exception,E2 extends Exception> void wrapMethod(ThrowingMethod<E1,E2> method) throws E1,E2 { // same as before } ``` the rules regarding the type inference do not change and they are the same for both type variables. E.g. you can still use ``` public void method1() throws ExceptionA { wrapMethod(super::method1); } ``` as before, as the compiler simply infers the same single exception type for both type variables. For the method declaring two exceptions, it won’t pick up one for the first type variable and the other for the second; there is no rule which could tell the compiler which exception to use for which type variable. But you can help the compiler out in this case, e.g. ``` public void method2() throws ExceptionB, ExceptionC { wrapMethod((ThrowingMethod<ExceptionB, ExceptionC>)super::method2); } ``` which is the best you can get with this approach.
Nmap: find free IPs from the range Is there a way to scan for free IPs on the network? I use `nmap -sP 192.168.1.0/24` but this actually shows hosts that are up.
Using Nmap like this is a fairly accurate way of doing what you asked, provided that some preconditions are true: 1. You must run the scan as root (or Administrator on Windows) in order to send ARP requests, not TCP connections. Otherwise the scan may report an address as "down" when it is simply firewalled. 2. You can only do this from a system on the same data link (layer 2) as the address range you are scanning. Otherwise, Nmap will need to use network-layer probes which can be blocked by a firewall. In order to get the "available" addresses, you need to get the list of addresses that Nmap reports as "down." You can do this with a simple awk command: ``` sudo nmap -v -sn -n 192.168.1.0/24 -oG - | awk '/Status: Down/{print $2}' ``` Summary of Nmap options used: - When you use the `-v` option, Nmap will print the addresses it finds as "down" in addition to the ones that are "up". - Instead of `-sP`, I've substituted the newer spelling `-sn`, which still accomplishes the same scan, but means "skip the port scan" instead of the misleading "Ping scan" (since the host discovery phase does not necessarily mean an ICMP Echo scan or Ping). - The `-n` option skips reverse DNS lookups, which buys you a bit of time, since you aren't interested in names but just IP addresses. - The `-oG` option tells Nmap to output [grepable](http://nmap.org/book/output-formats-grepable-output.html) format, which is easier for awk to process. The argument "`-`" tells it to send this output to stdout. The awk command then searches for "Status: Down" and prints the second field, containing the IP address. Of course, if you have access to the switch's running configs or the DHCP server's leases, you could get this answer much more authoritatively without doing a scan that could set off security alarms.
How to do word counts for a mixture of English and Chinese in Javascript I want to count the number of words in a passage that contains both English and Chinese. For English, it's simple. Each word is a word. For Chinese, we count each character as a word. Therefore, 香港人 is three words here. So for example, "I am a 香港人" should have a word count of 6. Any idea how can I count it in Javascript/jQuery? Thanks!
Try a regex like this: ``` /[\u00ff-\uffff]|\S+/g ``` For example, `"I am a 香港人".match(/[\u00ff-\uffff]|\S+/g)` gives: ``` ["I", "am", "a", "香", "港", "人"] ``` Then you can just check the length of the resulting array. The `\u00ff-\uffff` part of the regex is a unicode character range; you probably want to narrow this down to just the characters you want to count as words. For example, CJK Unified would be `\u4e00-\u9fcc`. ``` function countWords(str) { var matches = str.match(/[\u00ff-\uffff]|\S+/g); return matches ? matches.length : 0; } ```
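Following the note about narrowing the range: a sketch of the same counter restricted to the CJK Unified block `\u4e00-\u9fcc`, so only Chinese characters (not all high Unicode) are counted one-per-character. The choice of range is an assumption; widen it if you also need kana, Hangul, or the CJK extension blocks.

```javascript
// Count whitespace-separated words, treating each CJK Unified ideograph
// (U+4E00-U+9FCC) as a word of its own. Within a run of CJK characters,
// the single-character alternative matches at each position, so every
// ideograph is counted separately.
function countWords(str) {
  var matches = str.match(/[\u4e00-\u9fcc]|\S+/g);
  return matches ? matches.length : 0;
}

console.log(countWords("I am a 香港人")); // 6
```

Note that a Latin token immediately followed by an ideograph with no space in between is still swallowed by `\S+` as one word, exactly as with the original regex.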
Scipy multidimensional kernel density estimate I've been trying to get a kernel density estimate for a 30x30 array. A short example is below,

```
from scipy.stats import gaussian_kde

x = arange(-0.5,0.51,1/29.)
y = arange(-0.5,0.51,1/29.)
z = randn(30,30)
vec = vstack((x,y))
KDE = gaussian_kde(z.T)
KDE2 = KDE(vec)
```

this gives the following error,

```
ValueError: points have dimension 2, dataset has dimension 30
```

I've found some other posts here which suggest the data needs to be reshaped before the estimate, but I am unsure how to do this correctly. Any help appreciated. D
With `KDE = gaussian_kde(z.T)` you are making a kernel density estimation of `30` random variables (check `KDE.n`), each variable with `30` dimensions (check `KDE.d`). If you are studying some data, that data is the input to `gaussian_kde()`. With `KDE(vec)` you then estimate the kernel density of `z`. The input must have shape `(30,)` or `(30,N)` where `30` is the number of dimensions `KDE.d`. See the doc [here](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.evaluate.html). It doesn't make any sense to feed `x,y` with those shapes. The example [here](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html) should get you started. In that example, `d=2` and `n= 2000`.
Redis Key expire notification with Jedis I am trying to implement an expiry key notification with Redis, for when my key expires in the Redis data store. The Redis website provides some description of how at <http://redis.io/topics/notifications>, but I'm unable to find any example of how to do it using a Redis Java client like Jedis. Any possible code with illustration will be very helpful, as I'm new to Redis.
You can do it with the **pub-sub** model only:

1. Start the Redis server.
2. Change `notify-keyspace-events` in redis.conf to `KEA` (this depends on your requirement). Details are given in the Redis documentation: <http://redis.io/topics/notifications>.

For the Redis Java client (Jedis), try the following:

## Notification Listener:

```
public class KeyExpiredListener extends JedisPubSub {

    @Override
    public void onPSubscribe(String pattern, int subscribedChannels) {
        System.out.println("onPSubscribe " + pattern + " " + subscribedChannels);
    }

    @Override
    public void onPMessage(String pattern, String channel, String message) {
        System.out.println("onPMessage pattern " + pattern + " " + channel + " " + message);
    }

    //add other Unimplemented methods
}
```

## Subscriber:

**Note:** `jedis.psubscribe(new KeyExpiredListener(), "__key*__:*")` supports regex-pattern-based channels, whereas `jedis.subscribe(new KeyExpiredListener(), "__keyspace@0__:notify")` takes a full/exact channel name.

```
public class Subscriber {

    public static void main(String[] args) {
        JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost");
        Jedis jedis = pool.getResource();
        jedis.psubscribe(new KeyExpiredListener(), "__key*__:*");
    }
}
```

## Test Class:

```
public class TestJedis {

    public static void main(String[] args) {
        JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost");
        Jedis jedis = pool.getResource();
        jedis.set("notify", "umq");
        jedis.expire("notify", 10);
    }
}
```

Now first start your Subscriber and then run TestJedis. You will see the following output:

```
onPSubscribe __key*__:* 1
onPMessage pattern __key*__:* __keyspace@0__:notify set
onPMessage pattern __key*__:* __keyevent@0__:set notify
onPMessage pattern __key*__:* __keyspace@0__:notify expire
onPMessage pattern __key*__:* __keyevent@0__:expire notify
onPMessage pattern __key*__:* __keyspace@0__:notify expired
onPMessage pattern __key*__:* __keyevent@0__:expired notify
```

---

**Now, a use-case where you are interested in the *value* of the expired key as well.**

**Note:** Redis only provides the key on expiration of a key through notification of keyspace events; the value is lost once the key expires. In order to get the value when your key expires, you can use the following workaround, based on the tricky concept of a shadow key:

When you create your notify key, also create a special expiring "shadow" key (don't expire the actual notify key). For example:

```
// set your key value
SET notify umq
//set your "shadow" key, note the value here is irrelevant
SET shadowkey:notify "" EX 10
```

// Get an expiration message in the channel `__keyevent@0__:expired`
// Split the key on ":" (or whatever separator you decide to use), take the second part to get your original key

```
// Then get the value and do whatever with it
GET notify
// Then delete the key
DEL notify
```

Note that the value of the shadow key isn't used, so you want to use the smallest possible value, which could be an empty string "". It's a little more work to set up, but the above system does exactly what you need. The overhead is a few extra commands to actually retrieve and delete your key, plus the storage cost of an empty key. Otherwise you have to prepare your key in such a way that it includes the value appended to it.

Hope it helps you!
WebView support for FlutterWeb for plugin development Hi, I developed a Flutter plugin, [flutter\_tex](https://pub.dev/packages/flutter_tex). It's based on the WebView. How do I add Flutter Web support for this? I tried this example to show my HTML content.

```
import 'dart:ui' as ui;

void forWeb() {
  if(kIsWeb){
    // ignore: undefined_prefixed_name
    ui.platformViewRegistry.registerViewFactory(
        'hello-world-html',
        (int viewId) => uni_html.IFrameElement()
          ..width = '640'
          ..height = '360'
          ..src = 'https://www.youtube.com/embed/IyFZznAk69U'
          ..style.border = 'none');

    Directionality(
      textDirection: TextDirection.ltr,
      child: Center(
        child: SizedBox(
          width: 200,
          height: 200,
          child: HtmlElementView(viewType: 'hello-world-html'),
        ),
      ),
    );
  }
}
```

This code is fine when building for the web, but when compiling for Android I get this error, even though I am not calling the above code.

```
Compiler message:
../lib/flutter_tex.dart:139:10: Error: Getter not found: 'platformViewRegistry'.
    ui.platformViewRegistry.registerViewFactory(
       ^^^^^^^^^^^^^^^^^^^^
Target kernel_snapshot failed: Exception: Errors during snapshot creation: null
build failed.

FAILURE: Build failed with an exception.
```
You can copy, paste and run the 3 files below: `main.dart`, `mobileui.dart` and `webui.dart`. You can put `mobile` and `web` code in different files and use a conditional import. This allows you to have different implementations on mobile and web.

```
import 'mobileui.dart' if (dart.library.html) 'webui.dart' as multiPlatform;
...
home: multiPlatform.TestPlugin(),
```

Working demo when run with `Chrome` or `Android Emulator` in `Android Studio`:

[![enter image description here](https://i.stack.imgur.com/rIZQC.png)](https://i.stack.imgur.com/rIZQC.png)

[![enter image description here](https://i.stack.imgur.com/OBfHV.png)](https://i.stack.imgur.com/OBfHV.png)

main.dart

```
import 'package:flutter/material.dart';
import 'mobileui.dart' if (dart.library.html) 'webui.dart' as multiPlatform;

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: multiPlatform.TestPlugin(),
    );
  }
}
```

mobileui.dart

```
import 'package:flutter/material.dart';

class TestPlugin extends StatefulWidget {
  @override
  _TestPluginState createState() => _TestPluginState();
}

class _TestPluginState extends State<TestPlugin> {
  @override
  Widget build(BuildContext context) {
    return Text("Mobile");
  }
}
```

webui.dart

```
import 'package:flutter/material.dart';
import 'dart:html' as html;
import 'dart:js' as js;
import 'dart:ui' as ui;

class TestPlugin extends StatefulWidget {
  TestPlugin();

  _TestPluginState createState() => _TestPluginState();
}

class _TestPluginState extends State<TestPlugin> {
  String createdViewId = 'map_element';

  @override
  void initState() {
    // ignore: undefined_prefixed_name
    ui.platformViewRegistry.registerViewFactory(
        createdViewId,
        (int viewId) => html.IFrameElement()
          ..width = MediaQuery.of(context).size.width.toString() //'800'
          ..height = MediaQuery.of(context).size.height.toString() //'400'
          ..srcdoc =
              """<!DOCTYPE html><html><head><title>Page Title</title></head><body><h1>This is a Heading</h1><p>This is a paragraph.</p></body></html>"""
          ..style.border = 'none');
    super.initState();
  }

  @override
  void dispose() {
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Container(
        padding: EdgeInsets.symmetric(horizontal: 10),
        decoration: BoxDecoration(
            color: Colors.white,
            border: Border.all(color: Colors.grey[300], width: 1),
            borderRadius: BorderRadius.all(Radius.circular(5))),
        width: 200,
        height: 200,
        child: Directionality(
            textDirection: TextDirection.ltr,
            child: HtmlElementView(
              viewType: createdViewId,
            )));
  }
}
```
Slope and Length of a line between 2 points in OpenCV I need to compare 2 pictures to find similar lines among them. In both pictures I use LSD (*Line Segments Detector*) method, then I find lines and I know coordinates of start and end points of each line. My question is: is there any function in OpenCV to find the slope and length of each line, so that I can compare them easily? My environment is: OpenCV 3.1, C++ and Visual Studio 2015
Well, this is a math question. Assume you have two points: `p1(x1,y1)` and `p2(x2,y2)`. Let's call `p1` the "start" and `p2` the "end" of the line segment, as you have called the points you have. ``` slope = (y2 - y1) / (x2 - x1) length = norm(p2 - p1) ``` Sample code: ``` cv::Point p1 = cv::Point(5,0); // "start" cv::Point p2 = cv::Point(10,0); // "end" // we know this is a horizontal line, then it should have // slope = 0 and length = 5. Let's see... // take care with division by zero caused by vertical lines double slope = (p2.y - p1.y) / (double)(p2.x - p1.x); // (0 - 0) / (10 - 5) -> 0/5 -> slope = 0 (that's correct, right?) double length = cv::norm(p2 - p1); // p_2 - p_1 = (5, 0) // norm((0,5)) = sqrt(5^2 + 0^2) = sqrt(25) -> length = 5 (that's correct, right?) ```
App Engine version served by "default" appears to be inconsistent and thrash for a period after changing the default version Our application serves an endpoint which simply reports os.environ['CURRENT\_VERSION\_ID']. We use this for a type of monitoring which tracks which version is currently set as the "default version". Starting on the afternoon of March 5th, we noticed odd behaviour when making requests to this endpoint. Shortly after we change the default version (via "appcfg.py set\_default\_version"), repeated requests to this endpoint would flip flop between the previous default and the new default. This persists for a period of about 10 minutes, after which point all subsequent requests will always report the new, correct default version. So it appears as if during this 10 minute window, requests to our normal, default URL, will inconsistently report either the old version or the new one. This appears to be a change in behaviour. The previous change in default version for our application happened on March 1st, and every other version change prior to that date did not exhibit this flip-flopping behaviour. (Question stolen from [my teammate](https://plus.google.com/110287455381476170305/about)'s [bug report](https://code.google.com/p/googleappengine/issues/detail?id=8970))
First a bit of background:

- App Engine runs your application in distributed infrastructure: the more traffic your app receives, the more instances (appservers) will be running your code at any given time
- For scalability/simplicity and many other reasons, App Engine does not implement client <-> appserver stickiness; as a result any request to the default app version may be handled by any appserver

After changing the default version of your application, either by changing which version is marked as the default via the admin console, or by deploying the same major version as is currently the default, information about this change is propagated through the App Engine infrastructure. As appservers become aware of the new version, they begin loading the new version of your application code. Once a given appserver is ready, it will begin serving the new version of your code.

There is some period of time during which some appservers will be serving the previous default version while others are already serving the new default version. It is therefore expected that any app with a non-trivial amount of traffic will see the behavior you described.

We're always working on ways to reduce the amount of time these version changes take, but our foremost concern is to ensure that the transition happens smoothly. If the application has a large number of instances serving the previous version, App Engine needs to ensure that there is always sufficient capacity (combining old and new appservers) to serve all current traffic. The previous and new versions of the app may need a different number of appservers (due to performance differences between versions), which is another reason why the transition cannot safely be executed 'instantly'.

If you'd like more control over the process, you can use App Engine's [Traffic Splitting](https://developers.google.com/appengine/docs/adminconsole/trafficsplitting) feature.
In a stepwise fashion you can increase the percentage of user traffic you'd like to direct at the new version. App Engine will then provide version stickiness based on either client IP address or a cookie (for web apps). You can also use Traffic Splitting to 'canary' a new version of the application on some percentage (say 1%) of clients.
nginx as proxy using a specific source ip I'm using nginx to serve static file and proxy other requests to some Tomcat instance. The problem is that I don't know how to choose which IP address will nginx use to connect to Tomcat. Each Tomcat instance only accept HTTP connections from specific IP addresses. My server has all these IPs. I just can't choose which one will nginx use. This is my config file: ``` proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; location /integracao/ { proxy_pass http://X.X.X.X:9080/integracao/; } location /solr/ { proxy_pass http://Y.Y.Y.Y:8080/solr/; } ``` My server has one interface with two IP addresses: A and B. I need to use IP A to connect to first Tomcat and IP B to connect to Solr. Do anyone knows how to do it?
proxy\_bind directive allows you to choose different source IP address. <http://wiki.nginx.org/HttpProxyModule#proxy_bind> So your configuration would look like: ``` proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; location /integracao/ { proxy_bind A.A.A.A; proxy_pass http://X.X.X.X:9080/integracao/; } location /solr/ { proxy_bind B.B.B.B; proxy_pass http://Y.Y.Y.Y:8080/solr/; } ```
Run node js server in Ubuntu So far I have installed node.js and npm via `sudo apt-get install nodejs` and `sudo apt-get install npm`. Then I tried typing `node` in bash and nothing happens; I tried using `node app.js` and nothing happens either, with no error.
From our discussion [here](https://chat.stackexchange.com/rooms/201/ask-ubuntu-general-room)

After installing `node.js` and `npm`, create a symbolic link for node:

```
sudo ln -s /usr/bin/nodejs /usr/bin/node
```

Now verify the commands are working with

```
node -v
npm -v
```

Run it using `node hello.js`

In order to test the application, open another terminal session and connect to your web server. Be sure to substitute in the app server's private IP address for APP\_PRIVATE\_IP\_ADDRESS, and the port if you changed it:

`curl http://APP_PRIVATE_IP_ADDRESS:8080`

Reference [here](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04)
atoi() with other languages I am working on a internationalization project. Do other languages, such as Arabic or Chinese, use different representations for digits besides 0-9? If so, are there versions of atoi() that will account for these other representations? I should add that I am mainly concerned with parsing input from the user. If the users types in some other representation I want to be sure that I recognize it as a number and treat it accordingly.
You can use `std::wistringstream` together with a locale to parse this integer.

```
#include <sstream>
#include <locale>
using namespace std;

int main()
{
    locale mylocale("");      // Construct locale object with the user's default preferences
    wistringstream wss(L"1"); // your number string
    wss.imbue( mylocale );    // Imbue that locale
    int target_int = 0;
    wss >> target_int;
    return 0;
}
```

[More info on stream class](http://msdn.microsoft.com/en-us/library/36h875af%28v=VS.100%29.aspx) and [on locale class](http://msdn.microsoft.com/en-us/library/1w3527e2%28v=VS.100%29.aspx).
Laravel Eloquent compare date from datetime field I want to get all the rows from a table through an expression: ``` table.date <= 2014-07-10 ``` But if the column contains a datetime let's say: ``` 2014-07-10 12:00:00 ``` But if I do: ``` where('date', '<=', $date) ``` it won't get the row. I guess this is because $date = 2014-07-10 which makes MySQL assume that it is 2014-07-10 00:00:00. In regular MySQL I would just do ``` where DATE(date) <= $date ``` What would be the equivalent using Laravel's Eloquent?
Laravel 4+ offers you these methods: `whereDay()`, `whereMonth()`, `whereYear()` ([#3946](https://github.com/laravel/framework/pull/3946)) and `whereDate()` ([#6879](https://github.com/laravel/framework/pull/6879)). They do the SQL `DATE()` work for you, and manage the differences of SQLite. Your result can be achieved as so: ``` ->whereDate('date', '<=', '2014-07-10') ``` For more examples, see first message of [#3946](https://github.com/laravel/framework/pull/3946) and this [Laravel Daily article](http://laraveldaily.com/eloquent-date-filtering-wheredate-and-other-methods/). **Update:** Though the above method is convenient, as noted by Arth it is inefficient on large datasets, because the `DATE()` SQL function has to be applied on each record, thus discarding the possible index. Here are some ways to make the comparison (but please read notes below): ``` ->where('date', '<=', '2014-07-10 23:59:59') ->where('date', '<', '2014-07-11') // '2014-07-11' $dayAfter = (new DateTime('2014-07-10'))->modify('+1 day')->format('Y-m-d'); ->where('date', '<', $dayAfter) ``` Notes: - 23:59:59 is okay (for now) because of the 1-second precision, but have a look at this article: [23:59:59 is not the end of the day. No, really!](http://code.openark.org/blog/mysql/235959-is-not-the-end-of-the-day-no-really) - Keep in mind the "zero date" case ("0000-00-00 00:00:00"). Though, these "zero dates" should be avoided, they are source of so many problems. Better make the field nullable if needed.
Why does throwing an exception try to load the class which extends Exception (though it is not executed) but not a regular class I have the below classes. I manually compiled the classes using **javac** and ran the `Driver` class. Later I **removed** `Entity.class` and `MyCustomException.class` and ran the app like below.

> java Driver test

The error below complains that `MyCustomException` is missing, but not the `Entity` class. So it is not clear why the JRE complains about the `MyCustomException` class but not the `Entity` class. Indeed, the code `throw new MyCustomException();` is never executed, yet I did not encounter any error about the `Entity` class.

```
Caused by: java.lang.NoClassDefFoundError: com/techdisqus/exception/MyCustomException
```

Please note that the **IF** condition will **NOT** be **executed**, as I am passing the command argument **test**.

Why does throwing an exception cause `MyCustomException` to be loaded even though it would never be executed, while the JVM does not load any other regular class, such as the `Entity` class here, unless the condition is satisfied? Please check `Driver.java` below.

MyCustomException.java

```
public class MyCustomException extends RuntimeException {
}
```

Entity.java

```
public class Entity {
}
```

Driver.java

```
public class Driver {

    public static void main(String[] args) {
        String s = args[0];
        if("true".equals(s)){
            Entity entity = new Entity();  // This is not loaded, unless s is true
            throw new MyCustomException(); // this is loaded even if s is NOT true.
        }else{
            System.out.println("success");
        }
    }
}
```

[![enter image description here](https://i.stack.imgur.com/ACEku.png)](https://i.stack.imgur.com/ACEku.png)

Thanks for the help
(this is an educated guess; I'm by no means an expert on JVM internals) I assume the error happens during [verification](https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.html#jvms-4.10), when the loaded class undergoes some sanity checks so the runtime can make some assumptions later. One of the checks is a typecheck of bytecode instructions. Specifically [`athrow`](https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.html#jvms-4.10.1.9.athrow): > > An athrow instruction is type safe iff the top of the operand stack matches Throwable. > > > So at this point, the classloader has to load `MyCustomException` to check whether it extends `Throwable`
What happens when a stack-allocated value is boxed? If we have a value that is already allocated on stack, will boxing copy it to heap and then transfer ownership (that's how it works in .NET, with the exception that both copies will stay alive)? Or will the compiler be "smart" enough to allocate it directly on heap from the beginning? ``` struct Foo { x: i32, } fn main() { // a is allocated on stack? let a = Foo { x: 1 }; // if a is not used, it will be optimized out println!("{}", a.x); // what happens here? will the stack allocated structure // be moved to heap? or was it originally allocated on heap? let b = Box::new(a); } ``` I'm not a specialist in assembler, but this looks like it is actually allocated on stack and then moved: <http://pastebin.com/8PzsgTJ1>. But I need a confirmation from someone who actually knows what is happening.
It would be pretty strange for this optimization to happen as you describe it. For example, in this code: ``` let a = Foo { x: 1 }; // operation that observes a let b = Box::new(a); // operation that observes b ``` `&a` and `&b` would be *equal*, which would be surprising. However, if you do something similar, but don't observe `a`: ``` #[inline(never)] fn frobnotz() -> Box<Foo> { let a = Foo { x: 1 }; Box::new(a) } ``` You can [see via the LLVM IR](http://is.gd/sFUdyW) that this case *was* optimized: ``` define internal fastcc noalias dereferenceable(4) %Foo* @_ZN8frobnotz20h3dca7bc0ee8400bciaaE() unnamed_addr #0 { entry-block: %0 = tail call i8* @je_mallocx(i64 4, i32 0) %1 = icmp eq i8* %0, null br i1 %1, label %then-block-106-.i.i, label %"_ZN5boxed12Box$LT$T$GT$3new20h2665038481379993400E.exit" then-block-106-.i.i: ; preds = %entry-block tail call void @_ZN3oom20he7076b57c17ed7c6HYaE() unreachable "_ZN5boxed12Box$LT$T$GT$3new20h2665038481379993400E.exit": ; preds = %entry-block %2 = bitcast i8* %0 to %Foo* %x.sroa.0.0..sroa_idx.i = bitcast i8* %0 to i32* store i32 1, i32* %x.sroa.0.0..sroa_idx.i, align 4 ret %Foo* %2 } ``` Similarly, you can return the struct on the stack and then box it up, and there will still [just be the one allocation](http://doc.rust-lang.org/book/box-syntax-and-patterns.html#returning-pointers): > > You may think that this gives us terrible performance: return a value and then immediately box it up ?! Isn't this pattern the worst of both worlds? Rust is smarter than that. There is no copy in this code. main allocates enough room for the box, passes a pointer to that memory into foo as x, and then foo writes the value straight into the Box. > > >
JavaScript BubbleSort, how to improve its efficiency? Have a bubblesort routine similar to this. I need to make it more efficient by stopping the loop when the array is sorted or if the array is already sorted.

```
function sortNumbers(listbox) {
    var x, y, holder;

    // The Bubble Sort method.
    for(x = 0; x < ranarray.length; x++) {
        for(y = 0; y < (ranarray.length-1); y++) {
            if(ranarray[y] > ranarray[y+1]) {
                holder = ranarray[y+1];
                ranarray[y+1] = ranarray[y];
                ranarray[y] = holder;
            }
        }
    }
}
```
Before entering the inner loop, create a boolean to track whether a swap occurred inside it. When a full pass makes no swap, the array is sorted and you can stop early.

```
function sortNumbers(listbox) {
    var x, y, holder;

    // The Bubble Sort method.
    for(x = 0; x < ranarray.length; x++) {
        var swapOccured = false;
        for(y = 0; y < (ranarray.length-1); y++) {
            if(ranarray[y] > ranarray[y+1]) {
                holder = ranarray[y+1];
                ranarray[y+1] = ranarray[y];
                ranarray[y] = holder;
                swapOccured = true;
            }
        }
        if (!swapOccured)
            break;
    }
}
```
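The same early-exit idea as a runnable sketch (written in Python rather than JavaScript, purely for a quick self-check of the logic):

```python
def bubble_sort(items):
    """Bubble sort that stops as soon as a full pass makes no swaps."""
    items = list(items)  # work on a copy, leave the input untouched
    for _ in range(len(items)):
        swap_occurred = False
        for j in range(len(items) - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swap_occurred = True
        if not swap_occurred:  # no swaps: already sorted, bail out early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

On an already-sorted input this does a single pass and exits, which is the whole point of the flag.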
What exactly is "lambda" in Python? I want to know what exactly `lambda` is in Python, and where and why it is used. Thanks
Lambda is Python's syntax for creating a small anonymous function: a function object (a first-class object in Python) without a name. It is often combined with the technique shown below, where a function is returned as the result of another function instead of an object or primitive type. I know, it's confusing.

See this example from the [python documentation](http://docs.python.org/tutorial/controlflow.html#lambda-forms):

```
def make_incrementor(n):
    return lambda x: x + n
f = make_incrementor(42)
f(0)
>>> 42
f(1)
>>> 43
```

So make\_incrementor creates a function that uses n in its results. You could have a function that would increment a parameter by 2 like so:

```
f2 = make_incrementor(2)
f2(3)
>>> 5
```

This is a very powerful idea in functional programming and functional programming languages like lisp & scheme.

Hope this helps.
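A few more self-contained examples, separate from the closure example above, showing `lambda` as plain anonymous-function syntax:

```python
# lambda builds a small, unnamed function object inline
add = lambda x, y: x + y
print(add(2, 3))  # 5

# the most common use: a short throwaway function passed as an argument
words = ["banana", "pie", "kiwi"]
print(sorted(words, key=lambda w: len(w)))  # ['pie', 'kiwi', 'banana']

# equivalent def form; lambda is only syntax, not a new kind of function
def add2(x, y):
    return x + y

print(add(2, 3) == add2(2, 3))  # True
```

Anything a `lambda` can do, a named `def` can do too; `lambda` just saves you the name when the function is a one-off expression.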
nodejs + multer + angularjs for uploading without redirecting I am using Nodejs + Multer +angularjs for uploading files on the server. i have a simple HTML file: ``` <form action="/multer" method="post" enctype="multipart/form-data"> <input type="file" id="photo" name="photo"/> <button id="Button1">Upload</button> </form> ``` Nodejs part: ``` var multer = require('multer'); var storage = multer.diskStorage({ destination: function (req, file, cb) { cb(null, './uploads/') }, filename: function (req, file, cb) { cb(null, file.originalname) } }) app.post('/multer', upload.single('photo'), function (req, res) { res.end("File uploaded."); }); ``` **this works perfectly** and the file is successfully uploaded. but this redirect me to "/multer" after uploading the file (because of the form element). **How do i stay on the same page??**..possibly using angularjs so i tried this: making a HTML angular file: ``` <section data-ng-controller="myCtrl"> <input type="file" id="photo" name="photo"/> <button id="Button1" ng-click="f()">Upload</button> </section> ``` and a Angularjs controller: ``` angular.module('users').controller('myCtrl',[$scope,function($scope){ $scope.f=function(){ var photo = document.getElementById('photo'); var file = photo.files[0]; if (file) { //code to make a post request with a file object for uploading????? //something like.. //$http.post('/multer', file).success(function(response) { //console.log("success"); //}); } } }]); ``` **CAN SOMEONE HELP ME WITH THE CODE FOR MAKING A POST REQUEST WITH A FILE OBJECT FOR UPLOADING USING MULTER FROM ANGULARJS CONTROLLER ?** thanks
Angularjs directive:

```
angular.module('users').directive('fileModel', ['$parse', function ($parse) {
    return {
        restrict: 'A',
        link: function(scope, element, attrs) {
            var model = $parse(attrs.fileModel);
            var modelSetter = model.assign;

            element.bind('change', function(){
                scope.$apply(function(){
                    modelSetter(scope, element[0].files[0]);
                });
            });
        }
    };
}]);
```

Angular html file:

```
<input type="file" file-model="myFile"/><br><br>
<button ng-click="uploadFile()">Upload</button>
```

Angularjs Controller:

```
$scope.uploadFile = function(){
    var file = $scope.myFile;
    var uploadUrl = "/multer";

    var fd = new FormData();
    fd.append('file', file);

    $http.post(uploadUrl, fd, {
        transformRequest: angular.identity,
        headers: {'Content-Type': undefined}
    })
    .success(function(){
        console.log("success!!");
    })
    .error(function(){
        console.log("error!!");
    });
};
```

Nodejs server route file:

```
var multer = require('multer');

var storage = multer.diskStorage({
    destination: function (req, file, cb) {
        cb(null, './uploads/')
    },
    filename: function (req, file, cb) {
        cb(null, file.originalname + '-' + Date.now() + '.jpg')
    }
});

var upload = multer({ storage: storage });

// Without a final handler that sends a response, the request would hang
// after multer stores the file, so end it explicitly here.
app.post('/multer', upload.single('file'), function (req, res) {
    res.end("File uploaded.");
});
```

Enjoy!
Meteor - What is Spacebars.kw {hash: Object} I'm attempting to write a Meteor package which can be placed inside templates. So I first attempted to register a helper. ``` Template.registerHelper('testHelper', function(a, b) { console.log(a); console.log(b); }) ``` I've added the package inside `/packages`, and in my client template, when I added `{{testHelper "hello" "meow"}}`, the console logged `hello` and `meow`, which is what I expected. When I added `{{testHelper "hello"}}`, I expected the console to log `hello` and `null`, since nothing was passed as the second parameter. But instead it returned `hello` and an object - `Spacebars.kw {hash: Object}` What is this `Spacebars.kw {hash: Object}`? What can I do if I want it to return `null` instead?
`Spacebars.kw` contains a `hash` object that holds the keyword arguments passed to the helper.

Meteor has two ways of matching up arguments. One is direct (positional) matching, where the parameters are input in order, e.g. `{{testHelper "variable1" "variable2" "variable3"}}` would match up with `function(a,b,c)`, variables 1-3 corresponding to a, b and c respectively.

The second method of input is using a *hash*:

```
{{testHelper a="variable1" b="variable2" c="variable3"}}
```

This would give a single parameter to `function(a)` where a is a `Spacebars.kw` object. The `Spacebars.kw` object would have a subobject called `hash` with a structure that matches:

```
{
    "a" : "variable1",
    "b" : "variable2",
    "c" : "variable3"
}
```

Meteor will attempt to match up the first param directly, but the subsequent parameters are matched up as hashes in case a later input is empty, such as in `{{testHelper 'hello'}}` where `b` would be null, so the hash is given instead. It's given generically like this, so if you get `b` as a `Spacebars.kw` object, you can assume there was no second input.

The alternative is you could use the hash style declarations and then directly check if the hash value is `null`:

```
{{testHelper text="Hello"}}
{{testHelper text="Hello" othertext="Hellooo"}}
```

and the helper:

```
Template.registerHelper('testHelper', function(kw) {
    console.log(kw.hash.text);
    console.log(kw.hash.othertext);
});
```
NSData from CGImageRef in Swift I am having difficulty figuring out how to get an `NSData` representation of an image from a `CGImageRef`. All of the answers I've found make use of `UIImage` or `NSImage`, but my application is cross-platform, so I want to use only Core Graphics. Objective-C answers state simply that `CFData` is toll-free bridged to `NSData` and simply cast it, but Swift will not allow this. The closest I've got is: ``` var image: CGImageRef? = nil //... if let dataProvider: CGDataProviderRef = CGDataProviderCreateWithURL(url) { image = CGImageCreateWithPNGDataProvider(dataProvider, nil, false, CGColorRenderingIntent.RenderingIntentDefault) // works fine //... if let data = CGDataProviderCopyData(CGImageGetDataProvider(image)) as? NSData { // Do something with data, if only it ever got here! } } ``` but the cast doesn't ever succeed...
`CGDataProviderCopyData()` returns the optional `CFData?`, and that cannot be cast to the non-optional `NSData`. But you can convert/bridge it to `NSData?` and use that in the optional binding: ``` if let data = CGDataProviderCopyData(CGImageGetDataProvider(image)) as NSData? { // Do something with data ... } ``` Here is a simpler example demonstrating the same issue with `CFString` and `NSString`: ``` let cfstr : CFString? = "Hello world" if let nsstr = cfstr as? NSString { print("foo") // not printed } if let nsstr = cfstr as NSString? { print("bar") // printed } ``` But I admit that my explanation is not fully satisfying, because a similar optional cast works in other cases: ``` class MyClass { } class MySubclass : MyClass { } let mc : MyClass? = MySubclass() if let msc = mc as? MySubclass { print("yes") // printed } ``` So this must be related to the toll-free bridging between CoreFoundation and Foundation types.
Recommended places and practices to store files generated from a Tomcat application? this is relative to tomcat + spring + linux. I am wondering what could be a good practice and place to store files. My idea is to put everything on the filesystem then keep track of them using the DB. My doubt is WHERE? In fact I could put everything in the webapp directory, but that way some well-meaning colleague, or even I, could forget about that and erase everything during a clean+deploy. The other idea is to use a folder in the filesystem... but in Linux which one would be standard for this? More than this, there is the permission problem, I assume that tomcat runs as the tomcat user. So it can't create folders around in the filesystem at will. I'd have to create it by myself using the root user and then change the owner.... There is nothing wrong with this, but I'd like to automate the process, so that no intervention is needed. Any hints?
The [Filesystem Hierarchy Standard](http://www.pathname.com/fhs/pub/fhs-2.3.html) defines standard paths for different kinds of files. You don't make it absolutely clear what kind of files you're storing and how they're used. At least - [/srv/yourappname](http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM) - [/var/lib/yourappname](http://www.pathname.com/fhs/pub/fhs-2.3.html#VARLIBVARIABLESTATEINFORMATION) would be appropriate. As for the privileges, you'll either have to create the directories with proper privileges during installation. If that's impossible, settle for the `webapps` directory.
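A minimal sketch of the install-time directory setup (Python; the app name and function name are hypothetical, the point is creating the FHS path once up front with the right permissions so Tomcat never has to create it itself):

```python
import os

def prepare_data_dir(base="/var/lib", app="yourappname", mode=0o750):
    """Create the app's FHS data directory if it doesn't exist yet.

    Meant to run once at install time (as root); afterwards the Tomcat
    user only needs read/write access inside it, not the ability to
    create directories elsewhere on the filesystem.
    """
    path = os.path.join(base, app)
    os.makedirs(path, mode=mode, exist_ok=True)  # idempotent
    # At install time you would also hand ownership to the service user:
    #   shutil.chown(path, user="tomcat", group="tomcat")
    return path
```

Running this from a packaging script (deb/rpm postinst, or a one-off Ansible task) automates the "create as root, chown to tomcat" step the question describes.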
Is gcc wrong to allow the initialization of a const array member with another array reference? While (re)implementing a simple constexpr map, I wrote this ([godbolt](https://gcc.godbolt.org/z/6TnKn6zx4)): ``` template <class key_type, class value_type, int N> class flat_map { private: struct pair { key_type key; value_type value; }; const pair elements[N]; public: consteval flat_map(const pair (&arr)[N]) noexcept : elements(arr) // works on gcc?! {} [[nodiscard]] consteval value_type operator[](const key_type key) const { for (const pair &elem : elements) if (elem.key == key) return elem.value; throw "Key not found"; } }; constexpr flat_map<int, char, 3> m = {{ { 4, 'a' }, { -1, 'b' }, { 42, 'c' } }}; static_assert(m[4] == 'a'); static_assert(m[-1] == 'b'); static_assert(m[42] == 'c'); int main() { return m[4]; // 97=='a' } ``` I naively thought to set the private array `elements` as `const` and initialize it in the constructor; I was using *gcc* trunk as compiler, and all was seemingly working well. When I decided to try it with *msvc* and *clang*, I had compilation errors: both were complaining about the array initialization requiring a brace-enclosed initializer list. In hindsight the other compilers aren't particularly wrong, are they? Am I inadvertently using some *gcc* non standard extensions here? Ehm, by the way, what would you do to avoid copying the array elements by hand?
[[class.base.init]/7](https://wg21.link/class.base.init#7): > > The *expression-list* or *braced-init-list* in a *mem-initializer* is used to initialize the designated subobject (or, in the case of a delegating constructor, the complete class object) according to the initialization rules of [dcl.init] for direct-initialization. > > > [[dcl.init.general]/16.5](https://wg21.link/dcl.init#general-16.5): > > Otherwise, if the destination type is an array, the object is initialized as follows. Let *x*1, …, *x**k* be the elements of the *expression-list*. If the destination type is an array of unknown bound, it is defined as having *k* elements. Let *n* denote the array size after this potential adjustment. If *k* is greater than *n*, the program is ill-formed. Otherwise, the *i*th array element is copy-initialized with *x**i* for each 1 ≤ *i* ≤ *k*, and value-initialized for each *k* < *i* ≤ *n*. For each 1 ≤ *i* < *j* ≤ *n*, every value computation and side effect associated with the initialization of the *i*th element of the array is sequenced before those associated with the initialization of the *j*th element. > > > Point 16.5 takes precedence over all points that follow, in particular those that cover copy-initialization from a value of the same type (16.6 and 16.9). As such, an array data member can only be initialized by initializing each of the array’s elements individually. GCC’s behaviour is therefore non-conforming.
Why is there no assign statement in this Verilog function? Coming from a C++ background I'm starting to learn Verilog. This code describes four inputs going into two AND gates. The outputs from those two AND gates go into an OR gate. The output from the OR gate is the final output. ``` // a user-defined AND gate module my_and2 (in, out); input [1:0] in; output out; assign out = in[1]&in[0]; endmodule // a user-defined OR gate module my_or2 (in, out); input [1:0] in; output out; assign out = in[1]|in[0]; endmodule // the AND-OR logic built on top of the user-defined AND and OR gates module and_or (in_top, out_top); input [3:0] in_top; output out_top; wire [1:0] sig; // instantiate the gate-level modules my_and2 U1 (.in(in_top[3:2]),.out(sig[1])); my_and2 U2 (.in(in_top[1:0]),.out(sig[0])); my_or2 U3 (.in(sig),.out(out_top)); endmodule ``` The first two modules make sense to me. However, the last one doesn't. The first two modules have an assign statement at the end to set the value for the output variable. However, the last one doesn't. Why is that?
Verilog is 'event driven'. When writing verilog, think in terms of sensitivity lists.

In your example of the AND gate, you have the expression `assign out = in[1]&in[0];`. Your expression is said to be sensitive to `in[0]` and `in[1]`. This means that any time `in[0]` or `in[1]` changes, the expression will be recomputed, and the value of `out` will be updated.

So in your toplevel module `and_or`, you're basically building a big tree of expressions that are sensitive to the outputs of the preceding expressions. This tree is, of course, built using the module connections. So a change in the value of one of the inputs to this toplevel module will ripple through all expressions in its 'logic cone'.

To drive the inputs you'll need a higher level testbench module driving signals into your `and_or` module. This will supply inputs spaced out in time which will trigger the expressions in and below `and_or`. If not, your sim will have no events, so no expressions will trigger and the sim will time out at 0ps because it is 'event starved'.

PS: for your AND gate expression, `assign out = &in;` will work too... (reduction AND operator)
NumPy Broadcasting: Calculating sum of squared differences between two arrays I have the following code. It is taking forever in Python. There must be a way to translate this calculation into a broadcast... ``` def euclidean_square(a,b): squares = np.zeros((a.shape[0],b.shape[0])) for i in range(squares.shape[0]): for j in range(squares.shape[1]): diff = a[i,:] - b[j,:] sqr = diff**2.0 squares[i,j] = np.sum(sqr) return squares ```
You can use [`np.einsum`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html) after calculating the differences in a [`broadcasted way`](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html), like so - ``` ab = a[:,None,:] - b out = np.einsum('ijk,ijk->ij',ab,ab) ``` Or use [`scipy's cdist`](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.spatial.distance.cdist.html) with its optional metric argument set as `'sqeuclidean'` to give us the squared euclidean distances as needed for our problem, like so - ``` from scipy.spatial.distance import cdist out = cdist(a,b,'sqeuclidean') ```
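A quick sanity check that the broadcasted `einsum` version really matches the original double loop (assumes NumPy is available; `cdist` is left out to avoid the SciPy dependency):

```python
import numpy as np

def euclidean_square_loop(a, b):
    # original O(n*m) Python-loop version, kept as the reference
    squares = np.zeros((a.shape[0], b.shape[0]))
    for i in range(a.shape[0]):
        for j in range(b.shape[0]):
            diff = a[i, :] - b[j, :]
            squares[i, j] = np.sum(diff ** 2.0)
    return squares

def euclidean_square_fast(a, b):
    ab = a[:, None, :] - b          # shape (n, m, k) via broadcasting
    return np.einsum('ijk,ijk->ij', ab, ab)

rng = np.random.default_rng(0)
a = rng.standard_normal((10, 3))
b = rng.standard_normal((7, 3))
assert np.allclose(euclidean_square_loop(a, b), euclidean_square_fast(a, b))
```

The broadcasted version allocates an (n, m, k) intermediate, so for very large inputs the `cdist` route is also lighter on memory.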
Can I change the default app that opens when I hit play? On macOS, if I hit the play button on the keyboard after closing all apps, iTunes starts. (If Spotify is open then it will play music instead.) I don't use iTunes. Is it possible to make Spotify open when I hit the play button? I have found [this answer](https://superuser.com/questions/344803/making-mac-media-keys-open-app-other-than-itunes), but the top answer there refers to an old version of Karabiner and no longer works. The other answers don't achieve what I'm looking for: **I'd like Spotify to open if I press play and Spotify is closed, and otherwise I'd like the play button to behave as normal, where it can play/pause any supported media application e.g., YouTube and VLC.**
Finally got to try this on a Mac with that kind of keyboard (iMac).

For me, the Play/Pause key plays whatever app (iTunes or Spotify) was last playing. If neither is open, neither will launch, so the key doesn't default to iTunes for me.

I installed Karabiner, as linked in the question. I couldn't see how to create the steps shown in that link, but if you want to get into customizing that app, I'm sure it would work. Looks pretty powerful.

But this gets you pretty close...

I followed the top-rated answer at <https://apple.stackexchange.com/questions/175215/how-do-i-assign-a-keyboard-shortcut-to-an-applescript-i-wrote> and created an Automator app running a simple Applescript:

```
tell application "Spotify"
    activate
    playpause
end tell
```

I couldn't get my Mac to run the Automator service through a keyboard shortcut. So, instead, I fell back to my favorite free third-party keyboard/shortcut tool, Quicksilver. There's a ton more like that, though, so take your pick.

It was pretty easy to bind a shortcut key to the AppleScript itself (didn't need the Automator service for this), except it wouldn't let me pick the "real" Play/Pause key, only F8 (press the Fn key and press the F8/PlayPause key).

But, with this AppleScript, pressing F8 does launch and play/pause Spotify! If Spotify was closed, it will launch, but it will "miss" the play command. So just press F8 again. Feel free to tweak the AppleScript to be more aware of whether Spotify is running or not. Shouldn't be too hard to do, but then again, it's not too hard to press F8 twice if it wasn't running either...
Reduce list of list to dictionary with sublist size as keys and number of occurrences as value I have a list of lists and I want to count the number of times a sublist with a specific size occurs.

eg. for list `[[1], [1,2], [1,2], [1,2,3]]` I expect to get `{1: 1, 2: 2, 3: 1}`

I've tried the `reduce` function but I get a syntax error on `+= 1` and have no idea what is wrong.

```
list_of_list = [[1], [1,2], [1,2], [1,2,3]]
result = functools.reduce(lambda dict,list: dict[len(list)] += 1, list_of_list, defaultdict(lambda: 0, {}))
```
It is not a good idea to use `reduce` in such a complicated way when you can use `collections.Counter()` with the `map()` function in a more Pythonic way:

```
>>> A = [[1], [1,2], [1,2], [1,2,3]]
>>> from collections import Counter
>>> 
>>> Counter(map(len,A))
Counter({2: 2, 1: 1, 3: 1})
```

Note that `map` performs slightly better than a generator expression here: when you pass a generator expression to `Counter()`, Python has to pull the values out of the generator function one at a time, while the built-in `map` produces them directly.

```
~$ python -m timeit --setup "A = [[1], [1,2], [1,2], [1,2,3]];from collections import Counter" "Counter(map(len,A))"
100000 loops, best of 3: 4.7 usec per loop
~$ python -m timeit --setup "A = [[1], [1,2], [1,2], [1,2,3]];from collections import Counter" "Counter(len(x) for x in A)"
100000 loops, best of 3: 4.73 usec per loop
```

From [PEP 0289 -- Generator Expressions](https://www.python.org/dev/peps/pep-0289/):

> 
> The semantics of a generator expression are equivalent to creating an anonymous generator function and calling it. For example:
> 
> 
> 
> ```
> g = (x**2 for x in range(10))
> print g.next()
> 
> ```
> 
> is equivalent to:
> 
> 
> 
> ```
> def __gen(exp):
>     for x in exp:
>         yield x**2
> g = __gen(iter(range(10)))
> print g.next()
> 
> ```
> 
> 

---

Note that since *generator expressions* are better in terms of memory use, if you are dealing with large data you'd better use a *generator expression* instead of the *map* function.
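The same one-liner next to a hand-rolled loop, as a quick check that they agree:

```python
from collections import Counter

A = [[1], [1, 2], [1, 2], [1, 2, 3]]

counts = Counter(map(len, A))
print(counts)  # Counter({2: 2, 1: 1, 3: 1})

# same result built manually, for comparison
manual = {}
for sub in A:
    manual[len(sub)] = manual.get(len(sub), 0) + 1

print(dict(counts) == manual)  # True
```

`Counter` is itself a `dict` subclass, so `dict(counts)` gives exactly the `{1: 1, 2: 2, 3: 1}` shape the question asks for.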
Extra space at bottom of page caused by “cye-workaround-body” and “cye-workaround-body-image” divs in Chrome For some reason, there are two `div`'s with the ID's “cye-workaround-body” and “cye-workaround-body-image” which which get added below the `</body>` of some websites and it creates a large space beneath the footer. As far as I can tell this only happens in Google Chrome. Does anyone know why this happens? Is it safe to remove these `div`'s? How would I go about preventing this from happening? [Here is an example](https://www.v9seo.com/blog/2015/05/19/wordpress-vs-joomla-right-for-my-site/) and [Another Example](http://blog.bluehost.com./blog/wordpress-2/building-a-better-website-with-wordpress-themes-858/)
**Edit: @Kbam7 has confirmed that the problem lies in an extension called "Care your Eyes", hence the "cye" divs. To fix this problem, disable or remove the extension.** --- Original Answer: I'm not seeing divs with `cye-workaround-body` or `cye-workaround-body-image` in either of the sites you linked. This leads me to believe that it might be caused by a Chrome extension that you have installed. Make sure none of your extensions are set to run in incognito mode, then try visiting those URLs in incognito mode and see if the problem persists. Also, please try visiting those sites using Chrome on a different computer. If you realize that it is working in either of those two circumstances, then you can start disabling extensions until you narrow down which extension is causing those extra elements to be added. As an aside, it looks like [nobody else is having the same issue](https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&es_th=1&ie=UTF-8#q=%22cye-workaround-body%22) (google results showed this thread only at the time of writing), so that lends further evidence towards my hunch that this is not a problem in vanilla Chrome.
Prevent Pivot navigation in Windows Phone I have an audio recording application for Windows Phone. It consists of a pivot control with two pivot items. One is for recording control, and another one is for reviewing and listening the recorded items. When the recording is taking place, I need the way to prevent the user from navigating away from the current pivot item, but to retain the feel that an entire pivot item moves, but doesn't flip to the next item, as if there is none. I know I could use GestureListener from Silverlight Toolkit, but using it I will need to implement a simulation of pivot movement myself. Is there a build-in way to prevent pivot navigation? If no, can you point me to an example on how I can animate control movement on gesture flipping?
Is it mandatory that the user has to remain on the one `PivotItem`? If not, you could just disable the second PivotItem so that the user knows that it's there, but can't actually interact with it.

```
secondPivotItem.IsEnabled = false;
```

Alternatively, you could dynamically insert the second PivotItem when you want it and remove it when you don't. For example, when recording:

```
mainPivot.Items.Remove(secondPivotItem);
```

then when you want the second PivotItem to appear:

```
mainPivot.Items.Add(secondPivotItem);
```

The only "problem" with this is that when you only have one PivotItem on screen, the user can't scroll. However, this is how a Pivot control is supposed to function.

If you really want the user to be able to scroll back to the same item, you could create a blank PivotItem (with no header). Then, handle the Pivot's `LoadingPivotItem` event. Check if the item that is about to be loaded is the blank one. If so, then use `Pivot.SelectedItem = recordingPivotItem` to navigate back to the recording PivotItem. You can then use the above method to dynamically add the second PivotItem when the recording is over. This isn't the normal UX for pivots, but should do what you're trying to achieve.
c# MongoDB (noRM) - Repository pattern with embedded documents I’m developing an application with a model similar to Stack Overflow (question / answer etc...) [Modelling a NoSQL Forum Application with C# / ASP.net MVC](https://stackoverflow.com/questions/4278605/modelling-a-nosql-forum-application-with-c-asp-net-mvc) The model looks something like this (simplified) ``` class Question { public string Title { get; set; } public string Body { get; set; } public DateTime DateCreated { get; set; } public string UserName { get; set; } public List<Answer> Replies { get; set; } } class Answer { public string Body { get; set; } public DateTime DateCreated { get; set; } public string UserName { get; set; } } ``` So my documents are just one document, with the "answers" embedded in them I’m trying to design my repositories for this approach. Should I have 2 separate repositories? For example: ``` interface IQuestionRepository { void PutQuestion(Question question); Question GetQuestion(string questionID); } ``` ``` interface IAnswerRepository { void PutAnswer(string questionID, Answer Answer); Answer GetAnswer(string answerID); } ``` Or something like this: ``` interface IPostRepository { void PutQuestion(Question question); Question GetQuestion(string questionID); void PutAnswer(string questionID, Answer Answer); Answer GetAnswer(string answerID); } ```
Your model is inherently flawed. Question should be a root document. Answer should be a root document. While written in regards to RavenDB the document modeling information is mostly directly usable by you: <http://codeofrob.com/archive/2010/12/21/ravendb-document-design-with-collections.aspx> **Edit:** FWIW the reason why your model is flawed is with document databases you want your documents to model transaction boundaries. Think of the editing scenario with stack overflow and how much of a nightmare it would be to maintain consistency with multiple people adding and updating answers which all alter the root document, and the poster is updating the question. The amount of contention on the single object will very problematic. RavenDB provides what they call "patching" that lets you manipulate part of a document structure instead of the entire document exactly to solve problems like this but this design is best avoided up front instead of trying to make it work by greatly increasing the complexity of your persistence model having to do partial updates and handle elaborate concurrency situations. And to answer the specific question after this, then you would have an AnswersRepository and a QuestsionsRepository
How can I run /usr/bin/Xorg without sudo? This question is about executing `/usr/bin/Xorg` directly on Ubuntu 14.04. And I know there exists Xdummy, but I couldn't make the dummy driver work properly with the nvidia GPU so it's not an option. I copied the system-wide `xorg.conf` and `/usr/lib/xorg/modules`, and modified them a little bit. (Specified `ModulePath` in my `xorg.conf` too) Running the following command as root works fine: ``` Xorg -noreset +extension GLX +extension RANDR +extension RENDER -logfile ./16.log -config ./xorg.conf :16 ``` But if I do that as a non-root user (the log file permission is OK), this error occurs: ``` (EE) Fatal server error: (EE) xf86OpenConsole: Cannot open virtual console 9 (Permission denied) (EE) (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. (EE) Please also check the log file at "./16.log" for additional information. (EE) (EE) Server terminated with error (1). Closing log file. ``` Could you please help me to run Xorg without sudo??
To determine who is allowed to run X configure it with ``` dpkg-reconfigure x11-common ``` There are three options: root only, console users only, or anybody. The entry is located in `/etc/X11/Xwrapper.config`. --- Since Debian 9 and Ubuntu 16.04 this file does not exist. After installing `xserver-xorg-legacy`, the file reappears and its content has to be changed from: ``` allowed_users=console ``` to: ``` allowed_users=anybody needs_root_rights=yes ``` You also need to specify the virtual terminal to use when starting X, otherwise, errors may occur. For example: ``` Xorg :8 vt8 ```
How to trigger a tap event on Chrome devtool console? How can I use the Chrome-devtool's console to test if my javascript works? I've located the xpath and converted it to an css locator. Basically it is a button that turns the color from grey to blye. Here is my snippet code: browser.execute\_script("$('button.nominate').trigger('tap');") On the console, I tried something like: > > $('button.nominate').trigger('tap') > > > The result shown below: > > [] > > > I thought it would tap the button
I suppose you are doing some kind of functional testing on your mobile app. I was doing the same thing some time ago (using CasperJS) and, in the process, I created this function:

```
// I've commented out CasperJS specific stuff, don't use it if you don't need it
function triggerEventOnPage(selector, eventName, memo) {
    //casper.evaluate(function(selector, eventName, memo){
        var event;
        var element = document.querySelector(selector);
        event = document.createEvent("Event");
        event.initEvent(eventName, true, true);
        event.memo = memo || { };
        element.dispatchEvent(event);
    //}, selector, eventName, memo);
    //wait();
}
```

You can use it in your tests by calling:

```
triggerEventOnPage(".edit-list-button", 'tap');
```

However, mind that there is no native `tap` event. There are only `touchstart`, `touchmove` and `touchend` events, and implementations of `tap` are built on top of those three. Therefore, the implementation of the `tap` event that you are using may differ from the one that I was using, and the function above may not work for you.

**EDIT:** since you are using jQuery, `$('button.nominate').trigger('tap')` should work just fine too. @NULL may be right that your selector is invalid.
How do I make a WebDav call using HttpClient? Specifically I want to call `MKCOL` through [`HttpClient`](http://hc.apache.org/httpcomponents-client-ga/) to create a folder for Apache Jackrabbit through the Sling REST API. I've tried variants of ``` BasicHttpEntityEnclosingRequest request = new BasicHttpEntityEnclosingRequest("MKCOL", restUrl); ``` But no dice so far. I'm guessing this is less difficult than I'm making it. I also see there is [`MkColMethod`](http://jackrabbit.apache.org/api/2.2/org/apache/jackrabbit/webdav/client/methods/MkColMethod.html) for something like ``` MkColMethod mkColMethod = new MkColMethod(restUrl); ``` But I don't know how to utilize this. I think it may have worked with a previous version of HttpClient. I'm using 4.x
Best is to look at the Sling integration tests, which use Sling's RESTful APIs to create content. The ["old" SlingIntegrationTestClient class](http://svn.apache.org/repos/asf/sling/trunk/bundles/commons/testing/src/main/java/org/apache/sling/commons/testing/integration/SlingIntegrationTestClient.java) is used to test Sling itself and uses `httpclient 3.x` to create content. It is used by the tests found [here](http://svn.apache.org/repos/asf/sling/trunk/launchpad/integration-tests), so you can find examples there. The ["new" SlingClient class](http://svn.apache.org/repos/asf/sling/trunk/testing/tools/src/main/java/org/apache/sling/testing/tools/sling/SlingClient.java) is meant to be a cleaner and simpler re-implementation of that, used by the Sling testing tools described at <http://sling.apache.org/site/sling-testing-tools.html> . It uses `httpclient 4.x` which has a slightly different API. The SlingClient.mkdir and mkdirs methods do use the MKCOL method.
How to overwrite the dump/load methods in the pickle class - customizing pickling and unpickling - Python So far, what I've done is this: ``` import pickle class MyPickler(pickle.Pickler): def __init__(self, file, protocol=None): super(MyPickler, self).__init__(file, protocol) class MyUnpickler(pickle.Unpickler): def __init__(self, file): super(MyUnpickler, self).__init__(file) ``` In my main method, this is mainly what I have ``` #created object, then... pickledObject = 'testing.pickle' with open(pickledObject,'wb') as f: pickle = MyPickler(f) pickle.dump(object) #object is the object I want to pickle, created before this with open(pickledObject, 'r') as pickledFile: unpickle = MyUnpickler(pickledFile) object2 = unpickle.load() ``` However, this is giving me the following error when the super method is called: `TypeError: must be type, not classobj` How does one overwrite only the two methods, load and dump? The pickle file is under C:\Python27/lib/pickle.py **EDIT** The enum.py file can be found here: <http://dpaste.com/780897/> Object details: Object is initialized like this: ``` object = CellSizeRelation(CellSizeRelation.Values.FIRST) ``` And CellSizeRelation is a class that uses the Enumeration: ``` class CellSizeRelation(Option): Values = enum.Enum('FIRST', 'SECOND') ``` Before I pickle object, I do this: ``` print object.Values._values print object.value.enumtype ``` *output* ``` [EnumValue(<enum.Enum object at 0x02E80E50>, 0, 'FIRST'), EnumValue(<enum.Enum object at 0x02E80E50>, 1, 'SECOND') <enum.Enum object at 0x02E80E50> ``` After I unpickle and print out the same thing, I get this *output*: ``` [EnumValue(<enum.Enum object at 0x02E80E50>, 0, 'FIRST'), EnumValue(<enum.Enum object at 0x02E80E50>, 1, 'SECOND') <enum.Enum object at 0x02ECF750> ``` The problem is that the second object address changes; When initialized the first time, the `enumtype` and `_values` have the same address. However, after unpickling, they change addresses. 
This breaks my code when I try to compare two enumValues. If you look in the `enumValue` class, the compare function tries to do this: ``` try: assert self.enumtype == other.enumtype result = cmp(self.index, other.index) ``` Because the address changes, the assert function fails. I now somehow need to ensure that the address for the enumtype does not change when unpickled. I was thinking of simply getting the value 'FIRST' from the unpickled file, finding out its index, and reinitializing the object with: ``` def load: object = CellSizeRelation(CellSizeRelation.Values[INDEX]) return object ```
You want to customize the way object state is pickled and unpickled, not customize the load and unload functionality. You'll have to study the [Pickling and unpickling normal class instances chapter](http://docs.python.org/library/pickle.html#pickling-and-unpickling-normal-class-instances); in your case, defining a `__getstate__` and `__setstate__` method should be enough. What happens in your case is that there is a class-level attribute with `EnumValue` instances, which are meant to be constants. But on unpickling, new `EnumValue` instances are created that are not connected to the class-level attribute anymore. The `EnumValue` instances do have an `index` attribute you can use to capture their state as an integer instead of an instance of `EnumValue`, which we can use to find the correct constant again when reinstating your instances: ``` class CellSizeRelation(Option): # skipping your enum definition and __init__ here def __getstate__(self): # capture what is normally pickled state = self.__dict__.copy() # replace the `value` key (now an EnumValue instance), with its index: state['value'] = state['value'].index # what we return here will be stored in the pickle return state def __setstate__(self, newstate): # re-create the EnumValue instance based on the stored index newstate['value'] = self.Values[newstate['value']] # re-instate our __dict__ state from the pickled state self.__dict__.update(newstate) ``` So, normally, if there is no `__getstate__` the instance `__dict__` is pickled. We now do return a copy of that `__dict__`, but we swapped out the `EnumValue` instance for its index (a simple integer). On unpickling, normally the new instance `__dict__` is updated with the unpickled `__dict__` we captured on pickling, but now that we have a `__setstate__` defined, we can swap the stored index back out for the correct `EnumValue` again.
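The lookup-on-unpickle pattern above can be checked with a minimal, self-contained analogue of the enum situation (all names here are invented for illustration, not from the question's code):

```python
import pickle

# "Constants" at module level, standing in for the enum's EnumValue instances.
class Color:
    def __init__(self, index, name):
        self.index = index
        self.name = name

COLORS = [Color(0, 'FIRST'), Color(1, 'SECOND')]

class Cell:
    Values = COLORS

    def __init__(self, value):
        self.value = value  # a Color instance

    def __getstate__(self):
        state = self.__dict__.copy()
        state['value'] = state['value'].index  # store the plain index
        return state

    def __setstate__(self, state):
        state['value'] = self.Values[state['value']]  # look the constant back up
        self.__dict__.update(state)

cell = Cell(COLORS[0])
copy = pickle.loads(pickle.dumps(cell))
print(copy.value is COLORS[0])  # identity survives the round trip: True
```

Without `__getstate__`/`__setstate__`, the round trip would produce a fresh `Color` object and the identity check would fail — exactly the changed-address symptom described in the question.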
Implicit self in @escaping Closures when Reference Cycles are Unlikely to Occur Swift 5.3 With [SE-0269](https://github.com/apple/swift-evolution/blob/master/proposals/0269-implicit-self-explicit-capture.md) we won’t need to use an explicit `self` anymore in the case below for reference types. ``` class Test { var x = 0 func execute(_ work: @escaping () -> Void) { work() } func method() { execute { [self] in x += 1 } } } ``` Will this also cover `[weak self]` and `[unowned self]`, or should we still explicitly use `self` in the weak and unowned cases under this proposal?
You still need to manually specify `weak` and `unowned` captures of `self`. The only change SE-0269 results in is that you don't need to explicitly write out `self.` when accessing instance properties/methods when acknowledging that you capture `self` strongly by using `[self]`. In case of `[weak self]` you still need to explicitly write `self.` in the closure, but when using `[unowned self]`, you can omit `self.` just as when using `[self]`. ``` execute { [weak self] in x += 1 // Error: Reference to property 'x' in closure requires explicit use of 'self' to make capture semantics explicit } execute { [weak self] in self?.x += 1 // need to manually specify `self?.` } execute { [unowned self] in x += 1 // compiles fine } ```
Conditional colorscheme in .vimrc I am using vim and MacVim. I have a 256-colour colorscheme which I like for my MacVim, but if I load it into regular vim, it obviously does not work (I get blinkies instead). I would like to be able to use the same vim config on all my systems, so: Is there a way to check for palette size in `.vimrc` and set one of the two colorschemes accordingly? If that is not feasible, then checking for MacVim vs. vim would also be okay.
You've got several options. I think your best bet is to load one colorscheme in `.vimrc`, and another in `.gvimrc` (or in your case, just don't load a colorscheme in `.vimrc` at all). The `.gvimrc` colorscheme will only be loaded when you're running the GUI version of MacVim. If you don't want to split your configuration across multiple files, you can also use a conditional like this one in `.vimrc`: ``` if has('gui_running') colorscheme mycrazycolors endif ``` Finally, if you really do want to know the number of colors available, you can check the `t_Co` setting: ``` :echo &t_Co ``` `t_Co` is empty in the GUI version of MacVim, so you'll probably still want to use a variation of the `has()` technique. In fact, the default `.vimrc` does something similar to determine when to enable syntax highlighting: ``` if &t_Co > 2 || has("gui_running") syntax on endif ``` For the sake of completeness, I should mention that you could also expand your colorscheme file to include reasonable settings for color terminals. This is a fair amount of work, however, and it might be easier to just switch to a terminal application that supports more colors. See these topics for more info: ``` :help has() :help termcap :help termcap-colors ```
How to pick prime numbers to calculate the hash code? This question follows on the answer given by Jon Skeet on the question: "[What is the best algorithm for an overridden System.Object.GetHashCode?](https://stackoverflow.com/a/263416/3742608)". To calculate the hash code the following algorithm is used: ``` public override int GetHashCode() { unchecked // Overflow is fine, just wrap { int hash = 17; // Suitable nullity checks etc, of course :) hash = hash * 23 + field1.GetHashCode(); hash = hash * 23 + field2.GetHashCode(); hash = hash * 23 + field3.GetHashCode(); return hash; } } ``` I don't understand why the numbers 17 and 23 are chosen. Why don't we pick 3 and 5? That are prime numbers as well. Can somebody explain what the best prime numbers to pick are and why?
The comments on the answer you link to already briefly try to explain why `17` and `23` are not good primes to use here. A lot of .NET classes that make use of hash codes store elements in *buckets*. Suppose there are three buckets. Then all objects with hash code 0, 3, 6, 9, ... get stored in bucket 0. All objects with hash code 1, 4, 7, 10, ... get stored in bucket 1. All objects with hash code 2, 5, 8, 11, ... get stored in bucket 2. Now suppose that your `GetHashCode()` uses `hash = hash * 3 + field3.GetHashCode();`. This would mean that unless `hash` is large enough for the multiplication to wrap around, in a hash set with three buckets, which bucket an object would end up in depends only on `field3`. With an uneven distribution of objects across buckets, `HashSet<T>` cannot give good performance. You want a factor that is co-prime to all possible numbers of buckets. The number of buckets itself will be prime, for the same reasons; therefore, if your factor is prime, the only risk is that it's *equal* to the number of buckets. .NET uses [a fixed list of allowed numbers of buckets](http://referencesource.microsoft.com/#mscorlib/system/collections/hashtable.cs,19337ead89202585): > > > ``` > public static readonly int[] primes = { > 3, 7, 11, 17, 23, 29, 37, 47, 59, 71, 89, 107, 131, 163, 197, 239, 293, 353, 431, 521, 631, 761, 919, > 1103, 1327, 1597, 1931, 2333, 2801, 3371, 4049, 4861, 5839, 7013, 8419, 10103, 12143, 14591, > 17519, 21023, 25229, 30293, 36353, 43627, 52361, 62851, 75431, 90523, 108631, 130363, 156437, > 187751, 225307, 270371, 324449, 389357, 467237, 560689, 672827, 807403, 968897, 1162687, 1395263, > 1674319, 2009191, 2411033, 2893249, 3471899, 4166287, 4999559, 5999471, 7199369}; > > ``` > > Your factor should be one that .NET doesn't use, and that other custom implementations are equally unlikely to use. This means `23` is a bad factor. `31` could be okay with .NET's own containers, but could be equally bad with custom implementations.
At the same time, it should not be so low that it gives lots of collisions for common uses. This is a risk with `3` and `5`: suppose you have a custom `Tuple<int, int>` implementation with lots of small integers. Keep in mind that `int.GetHashCode()` just returns that `int` itself. Suppose your multiplication factor is `3`. That means that `(0, 9)`, `(1, 6)`, `(2, 3)` and `(3, 0)` all give the same hash codes. Both of the problems can be avoided by using sufficiently large primes, as pointed out in a comment that Jon Skeet had incorporated into his answer: > > EDIT: As noted in comments, you may find it's better to pick a large prime to multiply by instead. Apparently 486187739 is good... > > > Once upon a time, large primes for multiplication may have been bad because multiplication by large integers was sufficiently slow that the performance difference was noticeable. Multiplication by `31` would be good in that case because it can be implemented as `x * 31` => `x * 32 - x` => `(x << 5) - x`. Nowadays, though, the multiplication is far less likely to cause any performance problems, and then, generally speaking, the bigger the better.
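The collision arithmetic above is easy to reproduce. A small illustrative sketch (in Python rather than C#, not from the original answer) combines the four pairs with a factor of 3 and then with the large prime the answer mentions:

```python
# With multiplier 3, these small (a, b) pairs all combine to the same value;
# with a large prime multiplier they stay distinct.
def combine(a, b, factor):
    return a * factor + b

pairs = [(0, 9), (1, 6), (2, 3), (3, 0)]

print(sorted({combine(a, b, 3) for a, b in pairs}))          # → [9]
print(sorted({combine(a, b, 486187739) for a, b in pairs}))  # four distinct values
```

With factor 3 every pair hashes to 9; with 486187739 the four results are all different, which is the whole point of preferring a large prime.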
Inheritance, child class changing a protected field from Parent Class in C# I am currently picking up C# and I have issue on understanding inheritance. Here's my Code: Here's the Parent Class: ``` class Member { protected int annualFee; // protected field =only accessible within the same class in which it is declared and any class derived from it private string name; // private field = can be accesed of the same class or struct private int memberId; private int memberSince; // writing ToString() method to display the values of the four fields public override string ToString() { return "\nName: " + name + "\nMember ID: " + memberId + "\nMember Since: " + memberSince + "\nTotal Annual Fee: " + annualFee; } // adding two constructors to the member class public Member() { Console.WriteLine("Parent Constructor with no parameter"); } public Member(string pName, int pMemberID, int pMemberSince) { Console.WriteLine("Parent Constructor with 3 parameters"); name = pName; memberId = pMemberID; memberSince = pMemberSince; Console.WriteLine("Parent Constructor:" + name + ", " + memberId + ", " + memberSince); } } ``` Here's the Child Class: ``` class VIPMember : Member { public VIPMember(string name, int memberId, int memberSince) : base (name, memberId, memberSince) { Console.WriteLine("Child constructor with 3 paramemters"); } public void CalculateAnnualFee() { annualFee = 1200 * 2 + 12 * 30; //ans = 2760 } public int annualFee = 120000; // field inherit from parent class, will overwrite annual fee in CalculateAnnualfee() method } ``` Created instance of child class and calling the constructor and method. 
``` class ClassDemo { static void Main(string[] args) { VIPMember myChildMember2 = new VIPMember("Andy", 2, 2011); myChildMember2.CalculateAnnualFee(); // method from child class VIPMember Console.WriteLine(myChildMember2.ToString()); // method inherit from parent class Member Console.WriteLine("Field from VIPMember (protected to public) class: " + myChildMember2.annualFee); } } ``` Answer: ``` Parent Constructor with 3 parameters Parent Constructor:Andy, 2, 2011 Child constructor with 3 paramemters Name: Andy Member ID: 2 Member Since: 2011 Total Annual Fee: 0 Field from VIPMember (protected to public) class: 2760 ``` I added the public int annualFee = 120000 field on purpose to try out. However, the output is not what I expected. I was expecting that: ``` Total Annual Fee: 2760 Field from VIPMember (protected to public) class: 12000 ``` Can someone explain the logic behind? Thank you.
When you defined `public int annualFee` in `VIPMember` you ***shadowed*** the `protected int annualFee` in `Member`. You effectively created a brand-new field with the same name, so each class only sees the `annualFee` it declared itself. The compiler should have given you a warning that you were doing so. Try this: ``` void Main() { B b = new B(); Console.WriteLine(b.x); b.x = 3; Console.WriteLine(b.x); A a = b; Console.WriteLine(a.x); a.x = 4; Console.WriteLine(a.x); } public class A { public int x = 1; } public class B : A { public int x = 2; } ``` You get: ``` 2 3 1 4 ``` But had I not shadowed `x` then I would write the code this way: ``` void Main() { B b = new B(); Console.WriteLine(b.x); b.x = 3; Console.WriteLine(b.x); A a = b; Console.WriteLine(a.x); a.x = 4; Console.WriteLine(a.x); Console.WriteLine(b.x); } public class A { public int x = 1; } public class B : A { public B() { this.x = 2; } } ``` Note that I needed to update `x` in the constructor of `B`. To rewrite your code, you should have done this: ``` class VIPMember : Member { public VIPMember(string name, int memberId, int memberSince) : base(name, memberId, memberSince) { this.annualFee = 120000; Console.WriteLine("Child constructor with 3 paramemters"); } public void CalculateAnnualFee() { this.annualFee = 1200 * 2 + 12 * 30; //ans = 2760 } } ``` Then it works as you expect.
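Field hiding is not C#-specific. The same sketch in Java (purely illustrative, not from the answer) shows that field access is resolved by the compile-time type of the reference, so the derived field hides rather than overrides the base one:

```java
// B.x hides A.x; which field you read depends on the reference's static type.
class A { int x = 1; }
class B extends A { int x = 2; }

class Shadow {
    static int viaB() { B b = new B(); return b.x; }
    static int viaA() { A a = new B(); return a.x; }  // same object, base field

    public static void main(String[] args) {
        System.out.println(viaB()); // 2
        System.out.println(viaA()); // 1
    }
}
```

Both reads go through the same object, yet they see different fields — the Java analogue of the `annualFee` surprise above.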
Regular Expression German ZIP-Codes I give up. I need a (PHP) regular expression that matches only 5 digit numbers starting **from 01001 up to 99998**. So, invalid is for example 1234, but not 01234. Also 01000 is invalid, 01002 is not, and so on. Any other 5 digit number except 99999 is valid. What I have is the following regular expression, which does what I require - except that it still matches 99999. Can anyone help out? Thanks... ``` ^01\d\d[1-9]|[1-9]\d{3}[(?<=9999)[0-8]|[0-9]]$ ``` **Update** I am sorry, everybody, but things are more complex. I did not explain correctly. German zip code can be also 04103 for example (see a list of *some* further examples [here](http://www.klicktel.de/postleitzahlen/leipzig,34877000.html))
You were close: ``` ^0[1-9]\d\d(?<!0100)0|0[1-9]\d\d[1-9]|[1-9]\d{3}[0-8]|[1-9]\d{3}(?<!9999)9$ ``` But if you can just do a simpler regex and then use a separate numerical comparison, that'd probably be easier to read. Alternatively, a simpler version: ``` ^(?!01000|99999)(0[1-9]\d{3}|[1-9]\d{4})$ ``` (The simpler version is just "take the numbers `01000`-`99999` and remove the two ends via a lookahead.)
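As a quick sanity check (not in the original answer), the simpler variant can be exercised against the boundary cases from the question, including the 04103 example:

```python
import re

# The "simpler version" from the answer above, tested against edge cases.
pattern = re.compile(r'^(?!01000|99999)(0[1-9]\d{3}|[1-9]\d{4})$')

valid = ['01001', '04103', '99998']
invalid = ['01000', '99999', '1234', '00123']

for code in valid + invalid:
    print(code, bool(pattern.match(code)))
```

Every code in `valid` matches and every code in `invalid` is rejected, which covers both excluded endpoints, the four-digit case, and a leading-double-zero case.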
Parsing JSON with BusyBox tools I'm working on a blog theme for Hugo installable on Android (BusyBox via Termux) and plan to create a BusyBox Docker image and copy my theme and the hugo binary to it for use on ARM. Theme releases are archived and made available on NPM and the [tools available](https://www.busybox.net/downloads/BusyBox.html) on BusyBox have allowed me to reliably parse `version` from the metadata from JSON: ``` meta=$(wget -qO - https://registry.npmjs.org/package/latest) vers=$(echo "$meta" | egrep -o "\"version\".*[^,]*," | cut -d ',' -f1 | cut -d ':' -f2 | tr -d '" ') ``` Now I would like to copy the `dist` value from the `meta` into a text file for use in Hugo: ``` "dist": { "integrity": "sha512-3MH2/UKYPjr+CTC85hWGg/N3GZmSlgBWXzdXHroDfJRnEmcBKkvt1oiadN8gzCCppqCQhwtmengZzg0imm1mtg==", "shasum": "a159699b1c5fb006a84457fcdf0eb98d72c2eb75", "tarball": "https://registry.npmjs.org/after-dark/-/after-dark-6.4.1.tgz", "fileCount": 98, "unpackedSize": 5338189 }, ``` *Above pretty-printed for clarity. The [actual metadata](https://registry.npmjs.org/after-dark/latest) is compressed.* Is there a way I can reuse the `version` parsing logic above to also pull the `dist` field value?
Proper robust parsing requires tools like `jq` where it could be as simple as `jq '.version' ip.txt` and `jq '.dist' ip.txt` You could use `sed` but use it at your own risk ``` $ sed -n 's/.*"version":"\([^"]*\).*/\1/p' ip.txt 6.4.1 $ sed -n 's/.*\("dist":{[^}]*}\).*/\1/p' ip.txt "dist":{"integrity":.... ....} ``` - `-n` option to disable automatic printing - the `p` modifier with `s` command will allow to print only when substitution succeeds, this will mean output is empty instead of entire input line when something goes wrong - `.*"version":"\([^"]*\).*` this will match entire line, capturing data between double quotes after `version` tag - you'll have to adjust the regex if whitespaces are allowed and other valid json formats - `.*\("dist":{[^}]*}\).*` this will match entire line, capturing data starting with `"dist":{` and first occurrence of `}` afterwards - so this is not suited if the tag itself can contain `}`
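To try the `sed` commands without a live `wget` call, here is a self-contained run against a sample of the compressed metadata (the JSON content is illustrative, not the real registry response):

```shell
# Sample one-line (compressed) metadata standing in for the registry response
meta='{"name":"after-dark","version":"6.4.1","dist":{"integrity":"sha512-xyz","tarball":"https://registry.npmjs.org/after-dark/-/after-dark-6.4.1.tgz"}}'

# extract the version string
printf '%s\n' "$meta" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p'

# extract the whole dist object (works here because it has no nested braces)
printf '%s\n' "$meta" | sed -n 's/.*\("dist":{[^}]*}\).*/\1/p'
```

The first command prints `6.4.1`; the second prints the `"dist":{...}` fragment, which can then be redirected into the text file for Hugo.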
CSS - Vertical Border with Horizontal Gradients I have a nifty vertical divider that has a horizontal gradient for certain parts of my page, and it works splendidly. This is just an image file that I apply as a centered background to a layer and float it there to get the effect I want. ![Image Example Here](https://i.stack.imgur.com/F95TT.jpg) This works great, but I was wanting to try and do this to a table cell border, so that it cut down that side and gave the slight gradient effect all the way down. Is this possible with CSS3? I have explored [This post](https://stackoverflow.com/questions/18908211/vertically-repeating-horizontal-gradient-in-css) a bit, and it does seem like this technique is feasible, but attempting to just add these gradients to the `border` property keeps coming up with failures. Is there a specific technique involved in such an effect? ## Update Small jsFiddle to demonstrate how the current divider works, and where I am trying to apply it in a normal table and what keeps happening when I try. # [jsFiddle](http://jsfiddle.net/ciel/u2Tgq/)
Extending from the comments above... This effect can be mimicked by using pseudo-elements on a container div. I think in your case, it may be better than trying to implement on a table border. For example, a wrapper div called 'box' or whatever class name you prefer... ``` .box:before, .box:after { content: ""; position: absolute; display: block; left: -10px; width: 1px; height: 50%; } .box:before { top: 0; background: linear-gradient(to top, #333 0%, transparent 100%); } .box:after { bottom: 0; background: linear-gradient(to bottom, #333 0%, transparent 100%); } ``` Check this [DEMO](http://jsbin.com/afadIbE/2/). Also don't forget to add your vendor prefixes on `background: linear-gradient` in order to make it cross-browser.
How can I colorize head, tail and less, same as I've done with cat? I've got 'color cat' working nicely, thanks to others (see [How can i colorize cat output including unknown filetypes in b&w?](https://unix.stackexchange.com/questions/100841/how-can-i-colorize-cat-output-including-unknown-filetypes-in-bw?rq=1)). In my `.bashrc`: ``` cdc() { for fn in "$@"; do source-highlight --out-format=esc -o STDOUT -i $fn 2>/dev/null || /bin/cat $fn; done } alias cat='cdc' # To be next to the cdc definition above. ``` I'd like to be able to use this technique for other functions like head, tail and less. How could I do that for all four functions? Any way to generalize the answer? I have an option for `gd` doing `git diff` using ``` gd() { git diff -r --color=always "$@" } ```
Something like this should do what you want: ``` for cmd in cat head tail; do cmdLoc=$(type $cmd | awk '{print $3}') eval " $cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - done } " done ``` You can condense it like this: ``` for cmd in cat head tail; do cmdLoc=$(type $cmd |& awk '{print $3}') eval "$cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - ; done }" done ``` ### Example With the above in a shell script, called `tst_ccmds.bash`. ``` #!/bin/bash for cmd in cat head tail; do cmdLoc=$(type $cmd |& awk '{print $3}') eval "$cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - ; done }" done type cat type head type tail ``` When I run this, I get the functions set as you'd asked for: ``` $ ./tst_ccmds.bash cat () { for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /bin/cat - ; done } head is a function head () { for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /usr/bin/head - ; done } tail is a function tail () { for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /usr/bin/tail -; done } ``` ### In action When I use these functions in my shell (`source ./tst_ccmds.bash`) they work as follows: *cat* ![cat ss](https://i.stack.imgur.com/vaqeI.png) *head* ![head ss](https://i.stack.imgur.com/rNJrb.png) *tail* ![tail ss](https://i.stack.imgur.com/Rrtzy.png) *plain text* ![txt ss](https://i.stack.imgur.com/qU8zf.png) ### What's the trick? The biggest trick, and I would call it more of a hack, is the use of a dash (`-`) as an argument to `cat`, `head`, and `tail` through a pipe which forces them to output the content that came from `source-highlight` through STDIN of the pipe. This bit: ``` ...STDOUT -i "$fn" | /usr/bin/head - .... 
``` The other trick is using the `--failsafe` option of `source-highlight`: ``` --failsafe if no language definition is found for the input, it is simply copied to the output ``` This means that if a language definition is not found, it acts like `cat`, simply copying its input to the standard output. ### Note about aliases This function will fail if any of `head`,`tail` or `cat` are aliases because the result of the `type` call will not point to the executable. If you need to use this function with an alias (for example, if you want to use `less` which requires the `-R` flag to colorize) you will have to delete the alias and add the aliased command separately: ``` less(){ for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" | /usr/bin/less -R || /usr/bin/less -R "$fn"; done } ```
Where should I write my code so that Composer can autoload my PHP classes? I'm new to Composer, namespaces, and autoloading, and I wasn't able to figure out where to write my code (under `vendor`?). I have created a directory named `ilhan` under `vendor`, and a file named `People.php`. Then in the main `index.php` file, using `use ilhan\People.php as People;` doesn't work, because I think it must have been registered in `autoload_namespaces.php` initially. But if I register `ilhan` as a vendor then I think Composer will look for it on packagist.org, where it doesn't exist.
Create `ilhan` inside the root of your project directory, not in the `vendor` directory, and put the following in your `composer.json`: ``` "autoload": { "psr-4": { "Ilhan\\": "ilhan/" } }, ``` Most probably you already have a `psr-4` autoload config in your `composer.json` if you are using some sort of framework; in that case, just add `"Ilhan\\": "ilhan/"` to it. Now create `People.php` inside the `ilhan` directory with the following content: ``` <?php namespace Ilhan; class People{} ``` Make sure `require __DIR__.'/vendor/autoload.php';` is included in `index.php`, then run `composer dump-autoload`. Now in `index.php`, just below `require __DIR__.'/vendor/autoload.php';`, the following should work: ``` use Ilhan\People; ``` But why do you want to use the `People` class in `index.php`?
Extending Flask class as main App I'm learning Flask and am a bit confused about how to structure my code. So I tried to extend Flask's main class as follows: ``` from flask import Flask, ... class App(Flask): def __init__(self, import_name, *args, **kwargs): super(App, self).__init__(import_name, *args, **kwargs) ``` Note that I am aware that this may be a completely wrong approach. So when I want to start the app I do: ``` app = App(__name__) if __name__ == '__main__': app.run() ``` This way I can order my methods and routes in the class, but the problem is when using self-decorators: ``` @route('/') def home(self, context=None): context = context or dict() return render_template('home.html', **context) ``` This raises an `unresolved reference 'route'` error. I guess this is not the way I should be structuring the app. How should I do it instead, or how do I get the error fixed?
Doing this doesn't make sense. You would subclass `Flask` to change its internal behavior, not to define your routes as class methods. Instead, you're looking for [blueprints](http://flask.pocoo.org/docs/latest/blueprints/) and the [app factory pattern](http://flask.pocoo.org/docs/latest/patterns/appfactories/). Blueprints divide your views into groups without requiring an app, and the factory creates and sets up the app only when called. `my_app/users/__init__.py` ``` from flask import Blueprint bp = Blueprint('users', __name__, url_prefix='/users') ``` `my_app/users/views.py` ``` from flask import render_template from my_app.users import bp @bp.route('/') def index(): return render_template('users/index.html') ``` `my_app/__init__.py` ``` def create_app(): app = Flask(__name__) # set up the app here # for example, register a blueprint from my_app.users import bp app.register_blueprint(bp) return app ``` `run.py` ``` from my_app import create_app app = create_app() ``` Run the dev server with: ``` FLASK_APP=run.py FLASK_DEBUG=True flask run ``` If you need access to the app in a view, use `current_app`, just like `request` gives access to the request in the view. ``` from flask import current_app from itsdangerous import URLSafeSerializer @bp.route('/token') def token(): s = URLSafeSerializer(current_app.secret_key) return s.dumps('secret') ``` --- If you *really* want to define routes as methods of a Flask subclass, you'll need to use [`self.add_url_rule`](http://flask.pocoo.org/docs/latest/api/#flask.Flask.add_url_rule) in `__init__` rather than decorating each route locally. ``` class MyFlask(Flask): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.add_url_rule('/', view_func=self.index) def index(self): return render_template('index.html') ``` The reason `route` (and `self`) won't work is because it's an instance method, but you don't have an instance when you're defining the class.
Make tick labels font size smaller In a matplotlib figure, how can I make the font size for the tick labels using `ax1.set_xticklabels()` smaller? Further, how can one rotate it from horizontal to vertical?
Please note that newer versions of MPL have a shortcut for this task. An example is shown in the other answer to this question: <https://stackoverflow.com/a/11386056/42346> The code below is for illustrative purposes and may not necessarily be optimized. ``` import matplotlib.pyplot as plt import numpy as np def xticklabels_example(): fig = plt.figure() x = np.arange(20) y1 = np.cos(x) y2 = (x**2) y3 = (x**3) yn = (y1,y2,y3) COLORS = ('b','g','k') for i,y in enumerate(yn): ax = fig.add_subplot(len(yn),1,i+1) ax.plot(x, y, ls='solid', color=COLORS[i]) if i != len(yn) - 1: # all but last ax.set_xticklabels( () ) else: for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(14) # specify integer or one of preset strings, e.g. #tick.label.set_fontsize('x-small') tick.label.set_rotation('vertical') fig.suptitle('Matplotlib xticklabels Example') plt.show() if __name__ == '__main__': xticklabels_example() ``` ![enter image description here](https://i.stack.imgur.com/lRp5U.png)
what is laravel render() method for? I haven't dealt with the render method yet. Is it for Blade templates? I have to pass dynamic data into a blade.php file dynamically.
Given that you've tagged the question with Blade, I'll assume you mean render inside Laravel's View class. `Illuminate\View\View::render()` returns the string contents of the view. It is also used inside the class' `__toString()` method, which allows you to echo a View object. ``` // example.blade.php Hello, World! // SomeController.php $view = view('example'); echo $view->render(); // Hello, World! echo $view; // Hello, World! ``` Laravel typically handles this for you, i.e. calls render or uses the object as a string when necessary. Blade's @include('viewname') directive will load the view file and call the render method behind the scenes, for example. You may use it yourself when you want to get the compiled view to perform some subsequent action. Occasionally I have called render() explicitly rather than relying on the string conversion, because if the view itself throws an exception, PHP only reports: > > Fatal error: Method a::\_\_toString() must not throw an exception in /index.php on line 12 > > > Calling render() in the above case gives a more useful error message.
Simple onload doesn't work in JSFIddle <http://jsfiddle.net/jzhang172/n5jb0159/> A simple document.body.onload event should trigger a JavaScript alert, but it doesn't work, even though it works fine in my text editor -> browser. I switched the libraries from jQuery to None (pure JS), still nothing. Can someone explain to me what's going on and why it doesn't work in the fiddle but works fine in my text editor? This works: ``` <body> Why <script> document.body.onload = function(){ alert("LOADED!"); } </script> </body> ```
Your first problem: [![Onload](https://i.stack.imgur.com/3Rno2.png)](https://i.stack.imgur.com/3Rno2.png) You have configured JSFiddle to run your JS when the `load` event fires. Consequently, when the `load` event fires, you bind another `load` event handler. Your new `load` event handler is never called because the `load` event has already fired. Change the menu option to one of the "No Wrap" approaches. --- Your second problem: The `load` event fires on the `window` object, not the `body` element. You need to assign the property to the right place. ``` onload = function(){ alert("LOADED!"); } ``` Such: <https://jsfiddle.net/n5jb0159/5/>
AssemblyInfo version information asterisks It says in AssemblyInfo.cs for C# projects that it's possible to specify version information with `*` ``` // Version information for an assembly consists of the following four values: // // Major Version // Minor Version // Build Number // Revision // // You can specify all the values or you can default the Revision and Build Numbers // by using the '*' as shown below: [assembly: AssemblyVersion("1.0.0.0")] [assembly: AssemblyFileVersion("1.0.0.0")] ``` I changed it to this: ``` [assembly: AssemblyVersion("1.0.*.*")] [assembly: AssemblyFileVersion("1.0.*.*")] ``` and this is the error I get from the compiler: ``` error CS0647: Error emitting 'System.Reflection.AssemblyVersionAttribute' attribute -- 'The version specified '1.0.*.*' is invalid' warning CS1607: Assembly generation -- The version '1.0.*.*' specified for the 'file version' is not in the normal 'major.minor.build.revision' format ``` How does (does it even?) it work?
Syntax (see [MSDN](http://msdn.microsoft.com/en-us/library/system.reflection.assemblyversionattribute.aspx)) for an "automatic" build number can be: ``` [assembly: AssemblyVersion("1.0.0.*")] ``` or: ``` [assembly: AssemblyVersion("1.0.*")] ``` `*` means *everything after this point is automatic*. You can't have an automatic build number together with a fixed revision number, so this syntax isn't valid: ``` [assembly: AssemblyVersion("1.0.*.0")] ``` For the `AssemblyFileVersionAttribute` you cannot use the `*` special character at all, so you have to provide a full and valid version number. Please note that if you **do not provide** an `AssemblyFileVersionAttribute` then you'll get the right `FileVersionInfo` automatically (with the same version as the `AssemblyVersionAttribute`). You need to specify that attribute only if you need to set a different file version.
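Putting the valid and invalid forms side by side (the version numbers themselves are arbitrary examples):

```csharp
// Valid forms (pick one — only a single AssemblyVersion is allowed per assembly):
//   [assembly: AssemblyVersion("1.0.*")]   // build and revision auto-generated
[assembly: AssemblyVersion("1.0.0.*")]      // revision auto-generated

// Invalid — a fixed field can never follow '*':
//   [assembly: AssemblyVersion("1.0.*.0")]
//   [assembly: AssemblyVersion("1.0.*.*")]

// AssemblyFileVersion does not accept '*' at all:
[assembly: AssemblyFileVersion("1.0.0.0")]
```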
Swift: Get correct time zone from Date Picker? I am trying to get the correct time zone from the date picker in swift using time formatter, it's not working. I'm getting UTC, not EST. 1) If I print dateFormatter.stringFromDate(datePicker) I get EST, but 2) I don't need a string, I need an NSDate in EST so 3) I can use it to get the timeIntervalSinceDate(NSDate) in EST. My trick of trying to take it from string back to NSDate as seen below didn't work. It's still in UTC and the time interval since date is not right. ``` dateFormatter.locale = NSLocale.currentLocale() dateFormatter.timeZone = NSTimeZone.localTimeZone() dateFormatter.dateFormat = "yyyy-MM-dd HH:mm:ss" let date: NSDate = dateFormatter.dateFromString(dateFormatter.stringFromDate(datePicker))! print(date) print(date.timeIntervalSinceDate(datePicker)) ```
You cannot "get a time zone" from a date picker. You can just get a date. The date will be independent of the current time zone of the device. Perhaps you *think* you have a different date, but actually, there is no such thing as a "UTC date" or "EST date". Instead, there is only one date, and you use date formatters to display it for various time zones. Note that there is quite a bit of redundancy in your code. The default locale and time zone of a date formatter are already the same values that you set. Also, when you have a method that returns an `NSDate` you do not have to annotate the constant with `: NSDate`, making your code more verbose and cluttered. Note that if you `print` a date the console will always show UTC. e.g. ``` let date = NSDate() // Nov 10, 9:44 PM let dateFormatter = NSDateFormatter() dateFormatter.dateFormat = "yyyy-MM-dd hh:mm a" let dateString = dateFormatter.stringFromDate(date) // "2015-11-10 09:44 PM" print(date) // "2015-11-10 20:44:54 +0000\n" ```
Why won't macvim always use ruby 1.9.3? I have installed [yadr dotfiles](https://github.com/skwp/dotfiles), a set of vim, ruby, etc plugins. I have the following line of Ruby code in a file `foo.rb`: `foo: bar` Note I used the ruby 1.9.3 syntax for symbol assignment/definition. When I start macvim from command line using `mvim foo.rb` and save that file, everything works fine. However, when I open macvim using `open -a macvim` and navigate to and open `foo.rb`, when I try to save the file I get a ruby-vim syntax error on `foo: bar`. When I change it to `:foo => bar` I don't get syntax errors. - Using `open -a macvim` to open macvim, and then entering `:!ruby -v` prints `ruby 1.8.7` - Using `mvim .` to open macvim, and then entering `:!ruby -v` prints `ruby 1.9.3` *Depending on how I open macvim, I get a different version of Ruby*. How do I ensure that macvim always uses ruby 1.9.3 to evaluate my ruby code? Thanks
It took me a while to find a fix, but the issue is caused by MacVim not loading zsh the same way Terminal loads zsh. The fix is easy enough and can be placed into your zshrc. See a commit from my dotfiles: <https://github.com/simeonwillbanks/dotfiles/commit/e0e19cfeff13f8bc99d8164217ddd84c6d7f9529> The commit references a full explanation which can be found here: <http://vim.1045645.n5.nabble.com/MacVim-and-PATH-tt3388705.html#a3392363> ![enter image description here](https://i.stack.imgur.com/kw3Rh.png) Hope this helps!
How to make JSON.stringify to only serialize TypeScript getters? I have the following class structure: ``` export abstract class PersonBase { public toJSON(): string { let obj = Object.assign(this); let keys = Object.keys(this.constructor.prototype); obj.toJSON = undefined; return JSON.stringify(obj, keys); } } export class Person extends PersonBase { private readonly _firstName: string; private readonly _lastName: string; public constructor(firstName: string, lastName: string) { this._firstName = firstName; this._lastName = lastName; } public get first_name(): string { return this._firstName; } public get last_name(): string { return this._lastName; } } export class DetailPerson extends Person { private _address: string; public constructor(firstName: string, lastName: string) { super(firstName, lastName); } public get address(): string { return this._address; } public set address(addy: string) { this._address = addy; } } ``` I am trying to get `toJSON` to output all the getters (excluding private properties) from the full object hierarchy. So if I have a `DetailPerson` instance and I call the `toJSON` method, I want to see the following output: ``` { "address": "Some Address", "first_name": "My first name", "last_name": "My last name" } ``` I used one of the solutions from [this Q&A](https://stackoverflow.com/questions/40080473/using-json-stringify-in-conjunction-with-typescript-getter-setter) but it doesn't solve my particular use case - I am not getting all the getters in the output. What do I need to change here to get the result I am looking for?
The link you provided uses `Object.keys` which leaves out properties on the prototype. You could use `for...in` instead of `Object.keys`: ``` public toJSON(): string { let obj: any = {}; for (let key in this) { if (key[0] !== '_') { obj[key] = this[key]; } } return JSON.stringify(obj); } ``` **Edit:** This is my attempt to return only getters, recursively, without assuming that non-getters start with underscores. I'm sure there are gotchas I missed (circular references, issues with certain types), but it's a good start: ``` abstract class PersonBase { public toJSON(): string { return JSON.stringify(this._onlyGetters(this)); } private _onlyGetters(obj: any): any { // Gotchas: types for which typeof returns "object" if (obj === null || obj instanceof Array || obj instanceof Date) { return obj; } let onlyGetters: any = {}; // Iterate over each property for this object and its prototypes. We'll get each // property only once regardless of how many times it exists on parent prototypes. for (let key in obj) { let proto = obj; // Check getOwnPropertyDescriptor to see if the property is a getter. It will only // return the descriptor for properties on this object (not prototypes), so we have // to walk the prototype chain. while (proto) { let descriptor = Object.getOwnPropertyDescriptor(proto, key); if (descriptor && descriptor.get) { // Access the getter on the original object (not proto), because while the getter // may be defined on proto, we want the property it gets to be the one from the // lowest level let val = obj[key]; if (typeof val === 'object') { onlyGetters[key] = this._onlyGetters(val); } else { onlyGetters[key] = val; } proto = null; } else { proto = Object.getPrototypeOf(proto); } } } return onlyGetters; } } ```
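The descriptor walk above can also be expressed in plain JavaScript using `Object.getOwnPropertyNames`, which — unlike a `for...in` loop — also finds non-enumerable accessors (class getters are non-enumerable in ES6+). A runnable sketch mirroring the question's classes (the values are made up):

```javascript
class Person {
  constructor(first, last) {
    this._first = first;
    this._last = last;
  }
  get first_name() { return this._first; }
  get last_name() { return this._last; }
}

class DetailPerson extends Person {
  get address() { return this._addr; }
  set address(a) { this._addr = a; }
}

// Collect only getter-backed properties, walking the whole prototype chain.
function getterValues(obj) {
  const out = {};
  for (let proto = Object.getPrototypeOf(obj);
       proto && proto !== Object.prototype;
       proto = Object.getPrototypeOf(proto)) {
    for (const key of Object.getOwnPropertyNames(proto)) {
      const desc = Object.getOwnPropertyDescriptor(proto, key);
      // Keep the lowest definition if a getter is overridden further down.
      if (desc && typeof desc.get === 'function' && !(key in out)) {
        out[key] = obj[key];
      }
    }
  }
  return out;
}

const p = new DetailPerson('Ada', 'Lovelace');
p.address = '10 Downing St';
const json = JSON.stringify(getterValues(p));
console.log(json);
```

Because only accessor descriptors are copied, the underscore-prefixed backing fields never appear in the output, so no naming convention is needed.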
How do I render with OpenGL but still use SDL for events? I want to make sure I am using OpenGL for 2d rendering, but SDL for events. From what I have heard SDL uses software rendering, and OpenGL is hardware accelerated. I am in the middle of reading one book on SDL, but it has not yet mentioned the use of OpenGL to render and SDL for events.
You can start by reading: <http://www.gpwiki.org/index.php/SDL:Tutorials:Using_SDL_with_OpenGL> You will use SDL to create an OpenGL context within which you will do all of your OpenGL based rendering. By events do you mean user input? If so, then simply at the end of each frame/loop make use of SDL to check for input like so: ``` int main( ) { ... while( running ) { ... update( ); draw( ); ... handleKeys( ); } return 0; } void handleKeys( ) { SDL_Event event; while( SDL_PollEvent( &event ) ) { switch( event.type ) { case SDL_KEYDOWN: //Check for event and act accordingly break; case SDL_KEYUP: //Check for event and act accordingly break; case SDL_MOUSEBUTTONDOWN: //Check for event and act accordingly break; default: break; } } } ``` Obviously there are much more elegant and effective means of getting input but just wanted to show a simple example.
Shorthand to set height and width of an element in CSS Basic question but I can't find it anywhere, is it possible to set the width and the height on the same line at the same time with the same value in my CSS? I'm not asking for something like: ``` width:100%;height:100%; ``` But more like one of those: ``` width, height: 100%; // Same for both width, height: 100%, 90%; // Different for each ones dimensions: 100% 90%; // Like padding/margin, ``` I'm just asking about declaration, not Javascript on how to do that. [I found a related question](https://stackoverflow.com/questions/7240130/set-the-same-value-to-multiple-properties-css) to this one but for border and the short answer was no. If it's not possible with CSS, is it with SCSS ?
There is no shorthand for setting the `height` and `width` of an element in a single property declaration, and you cannot do it with SASS either. But SASS does give you a way to share the common value between both properties by declaring a `variable`: ``` $some-var-name: 100px; .some-class { height: $some-var-name; width: $some-var-name; } ``` As I said, even SASS won't let you write `height` and `width` at the same time, but you can change the value of both from a single variable. --- Ok I was about to add the `@extend` in the answer but since another user has already answered the same, (*which is now deleted*) ``` .size { height: 100%; width: 100%; } element { @extend .size; // Sets element to height:100%; width:100% // more stuff here } ``` I would suggest you use a placeholder selector declared with `%` instead of `.` — so instead of `.size` use `%size`. That way a `.size` class used only for extending won't end up in the compiled stylesheet.
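If you control your build, a SASS `@mixin` gets closest to the shorthand the question asks for, since mixins take arguments and support defaults (`size` here is a made-up mixin name, not a built-in):

```scss
@mixin size($width, $height: $width) {
  width: $width;
  height: $height;
}

.square { @include size(100%); }      // same value for both
.banner { @include size(100%, 90%); } // different values for each
```

Unlike `@extend`, this covers both the "same for both" and "different for each" cases from the question in a single line per rule.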
c++ program using GMP library I have installed GMP using the instruction on this website: <http://www.cs.nyu.edu/exact/core/gmp/> Then I looked for an example program using the library: ``` #include <iostream> #include <gmpxx.h> using namespace std; int main (void) { mpz_class a, b, c; a = 1234; b = "-5678"; c = a+b; cout << "sum is " << c << "\n"; cout << "absolute value is " << abs(c) << "\n"; cin >> a; return 0; } ``` But if I compile this using the command: g++ test.cpp -o test.exe, it says gmpxx.h: no such file or directory. How can I fix this? I am kind of new to this. And I am using MinGW.
Get the current version here: [GNU GMP Library](http://gmplib.org). Make sure you configure it to be installed in /usr/lib (pass --prefix=/usr to configure). If you want the C++ class interface (`gmpxx.h`) that your example uses, also pass `--enable-cxx` to configure and link with `-lgmpxx -lgmp`. Here you have the documentation: [GNU GMP Manual](http://gmplib.org/manual/index.html#Top). Alternatively, here is a working example of what you wanted to achieve using the plain C interface: ``` #include<iostream> #include<gmp.h> using namespace std; int main (int argc, char **argv) { mpz_t a,b,c; mpz_inits(a,b,c,NULL); mpz_set_str(a, "1234", 10); mpz_set_str(b,"-5678", 10); //Decimal base mpz_add(c,a,b); cout<<"\nThe exact result is:"; mpz_out_str(stdout, 10, c); //Stream, numerical base, var cout<<endl; mpz_abs(c, c); cout<<"The absolute value result is:"; mpz_out_str(stdout, 10, c); cout<<endl; cin.get(); return 0; } ``` Compile with (the library must come after the source file so the linker can resolve its symbols): ``` g++ file.cpp -o file -lgmp ```
Join tables and fetch the data to a table using codeigniter I have two db tables named as `verification_details` and `verification_questions`. `verification_id` is common in two tables. In `verification_details` table, there is a `user_id` field, based on this `user_id` field, one or more verification details inserted into the `verification_details` table and based on the `verification_id` one or more verification questions inserted into the `verification_questions` table. I use this code in model for join query ``` function get_records() { $this->db->select("a.criteria_question,a.criteria_answer,a.validation_statement"); $this->db->from("verification_questions as a"); $this->db->join('verification_details as b', 'a.verification_id = b.verification_id'); $query = $this->db->get(); return $query->result(); } ``` I want the controller code for fetching all datas from both tables and display it in a table.
In Model ``` function get_records() { $this->db->select("a.criteria_question,a.criteria_answer,a.validation_statement"); $this->db->from("verification_questions as a"); $this->db->join('verification_details as b', 'a.verification_id = b.verification_id'); $query = $this->db->get(); return $query->result_array(); } ``` In controller ``` $data['table'] = $this->Model_name->get_records(); $this->load->view('view_name', $data); ``` and in view (the keys match the columns selected in the model) ``` <table> <tr> <th>Question</th> <th>Answer</th> <th>Validation</th> </tr> <?php foreach ( $table as $new_item ) { ?> <tr> <td><?php echo $new_item['criteria_question']; ?></td> <td><?php echo $new_item['criteria_answer']; ?></td> <td><?php echo $new_item['validation_statement']; ?></td> </tr> <?php } ?> </table> ```