Unable to get logback-spring.xml property file using Spring Cloud Config and Discovery

I'm using the [Discovery first bootstrap](http://cloud.spring.io/spring-cloud-static/spring-cloud-config/1.3.4.RELEASE/single/spring-cloud-config.html#discovery-first-bootstrap) feature with Consul as a Discovery Server. The URL to the Config Server is located during start-up, and I was able to get `application.properties`. I also need to get the `logback-spring.xml` configuration from the Config Server, and I don't know how. What should I specify in the `logging.config={???}logback-spring.xml` property so as not to hardcode the URL to the Config Server?

Before the Consul integration I was using a URL formed according to [Serving Plain text documentation](http://cloud.spring.io/spring-cloud-static/spring-cloud-config/1.4.3.RELEASE/single/spring-cloud-config.html#_serving_plain_text), with the Config Server URL hardcoded in properties, and it worked fine, but now we want to avoid this. From what I debugged, there is no usage of the Discovery client while the logging system is reinitialized in `PropertySourceBootstrapConfiguration`.
I used [Customizing Bootstrap Configuration](http://projects.spring.io/spring-cloud/spring-cloud.html#_customizing_the_bootstrap_configuration) to resolve my issue in a 'custom' way, because I didn't find a solution in the documentation or source code.

Add a new file `src/main/resources/META-INF/spring.factories` and register a custom bootstrap configuration there:

`org.springframework.cloud.bootstrap.BootstrapConfiguration=sample.custom.CustomPropertySourceLocator`

In `CustomPropertySourceLocator`, create a property that points to the Config Server URL (looked up via discovery):

```
@Configuration
public class CustomPropertySourceLocator implements PropertySourceLocator {

    private final String configServiceName;
    private final DiscoveryClient discoveryClient;

    public CustomPropertySourceLocator(
            @Value("${spring.cloud.config.discovery.service-id}") String configServiceName,
            DiscoveryClient discoveryClient) {
        this.configServiceName = configServiceName;
        this.discoveryClient = discoveryClient;
    }

    @Override
    public PropertySource<?> locate(Environment environment) {
        List<ServiceInstance> instances = this.discoveryClient.getInstances(this.configServiceName);
        ServiceInstance serviceInstance = instances.get(0);
        return new MapPropertySource("customProperty",
                Collections.singletonMap("configserver.discovered.uri",
                        serviceInstance.getUri()));
    }
}
```

In the code above we created a custom property source with a single property, `configserver.discovered.uri`. We can use this property in our code (using `@Value`) or in other property files (even if they are located in the config-server storage):

`logging.config=${configserver.discovered.uri}/<path to the text file>/logback-spring.xml`

where `<path to the text file>` should be formed according to the [Serving Plain Text Documentation](https://cloud.spring.io/spring-cloud-config/multi/multi__serving_plain_text.html) and the way you configured your config-server.
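For reference, here is a minimal sketch of how the pieces could fit together in `bootstrap.properties`. The service id, application name, profile, and label here are illustrative assumptions; the plain-text path follows the `/{application}/{profile}/{label}/{path}` convention from the Serving Plain Text docs:

```
spring.application.name=my-service
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=config-server
# resolved through the custom property source above; path segments are hypothetical
logging.config=${configserver.discovered.uri}/my-service/default/master/logback-spring.xml
```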
How can Continuous Delivery work in practice?

Continuous Delivery sounds good, but my years of software development experience suggest that in practice it can't work.

(Edit: To make it clear, I always have lots of tests running automatically. My question is about how to get the confidence to deliver on each checkin, which I understand is the full form of CD. The alternative is not year-long cycles. It is iterations every week (which some might consider still CD if done correctly), two weeks, or a month, including an old-fashioned QA at the end of each one, supplementing the automated tests.)

- Full test coverage is impossible. You have to put in lots of time -- and time is money -- for every little thing. This is valuable, but the time could be spent contributing to quality in other ways.
- Some things are hard to test automatically. E.g. GUI. Even Selenium won't tell you if your GUI is wonky. Database access is hard to test without bulky fixtures, and even that won't cover weird corner cases in your data storage. Likewise security and many other things. Only business-layer code is effectively unit-testable.
- Even in the business layer, most code out there is not simple functions whose arguments and return values can be easily isolated for test purposes. You can spend lots of time building mock objects, which might not correspond to the real implementations.
- Integration/functional tests supplement unit tests, but these take a lot of time to run because they usually involve reinitializing the entire system on each test. (If you don't reinitialize, the test environment is inconsistent.)
- Refactoring or any other changes break lots of tests. You spend lots of time fixing them. If it's a matter of validating meaningful spec changes, that's fine, but often tests break because of meaningless low-level implementation details, not stuff that really provides important information. Often the tweaking is focused on reworking the internals of the test, not on truly checking the functionality that is being tested.
- Field reports on bugs cannot easily be matched with the precise micro-version of the code.
> my years of software development experience suggest that in practice it can't work.

Have you tried it? Dave and I wrote the book based on many collective years of experience, both of ourselves and of other senior people in ThoughtWorks, actually doing the things we discuss. Nothing in the book is speculative. Everything we discuss has been tried and tested even on large, distributed projects. But we don't suggest you take it on faith. Of course you should try it yourself, and please write up what you find works and what doesn't, including the relevant context, so that others can learn from your experiences.

Continuous Delivery has a big focus on automated testing. We spend about 1/3 of the book talking about it. We do this because the alternative - manual testing - is expensive and error-prone, and actually not a great way to build high quality software (as Deming said, "Cease dependence on mass inspection to achieve quality. Improve the process and build quality into the product in the first place").

> Full test coverage is impossible. You have to put in lots of time -- and time is money -- for every little thing. This is valuable, but the time could be spent contributing to quality in other ways.

Of course full test coverage is impossible, but what's the alternative: zero test coverage? There is a trade-off. Somewhere in between is the correct answer for your project. We find that in general you should expect to spend about 50% of your time creating or maintaining automated tests. That might sound expensive until you consider the cost of comprehensive manual testing, and of fixing the bugs that get out to users.

> Some things are hard to test automatically. E.g. GUI. Even Selenium won't tell you if your GUI is wonky.

Of course. Check out Brian Marick's test quadrant. You still need to perform exploratory testing and usability testing manually. But that's what you should be using your expensive and valuable human beings for - not regression testing. The key is that you need to put a deployment pipeline in place so that you only bother running expensive manual validations against builds that have passed a comprehensive suite of automated tests. Thus you both reduce the amount of money you spend on manual testing, and the number of bugs that ever make it to manual test or production (by which time they are *very* expensive to fix). Automated testing done right is *much* cheaper over the lifecycle of the product, but of course it's a capital expenditure that amortizes itself over time.

> Database access is hard to test without bulky fixtures, and even that won't cover weird corner cases in your data storage. Likewise security and many other things. Only business-layer code is effectively unit-testable.

Database access is tested implicitly by your end-to-end scenario based functional acceptance tests. Security will require a combination of automated and manual testing - automated penetration testing and static analysis to find (e.g.) buffer overruns.

> Even in the business layer, most code out there is not simple functions whose arguments and return values can be easily isolated for test purposes. You can spend lots of time building mock objects, which might not correspond to the real implementations.

Of course automated tests are expensive if you build your software and your tests badly.
I highly recommend checking out the book "Growing Object-Oriented Software, Guided by Tests" to understand how to do it right so that your tests and code are maintainable over time.

> Integration/functional tests supplement unit tests, but these take a lot of time to run because they usually involve reinitializing the entire system on each test. (If you don't reinitialize, the test environment is inconsistent.)

One of the products I used to work on has a suite of 3,500 end-to-end acceptance tests that takes 18h to run. We run it in parallel on a grid of 70 boxes and get feedback in 45m. Still longer than ideal really, which is why we run it as the second stage in the pipeline after the unit tests have run in a few minutes, so we don't waste our resources on a build that we don't have some basic level of confidence in.

> Refactoring or any other changes break lots of tests. You spend lots of time fixing them. If it's a matter of validating meaningful spec changes, that's fine, but often tests break because of meaningless low-level implementation details, not stuff that really provides important information. Often the tweaking is focused on reworking the internals of the test, not on truly checking the functionality that is being tested.

If your code and tests are well encapsulated and loosely coupled, refactoring will not break lots of tests. We describe in our book how to do the same thing for functional tests too. If your acceptance tests break, that's a sign that you're missing one or more unit tests, so part of CD involves constantly improving your test coverage to try and find bugs earlier in the delivery process, where the tests are more fine-grained and the bugs are cheaper to fix.

> Field reports on bugs cannot easily be matched with the precise micro-version of the code.

If you're testing and releasing more frequently (part of the point of CD) then it *is* relatively straightforward to identify the change that caused the bug. The whole point of CD is to optimize the feedback cycle so you can identify bugs as soon as possible after they are checked in to version control - and indeed, preferably before they're checked in (which is why we run the build and unit tests before check-in).
Strange behaviour of string length function w.r.t. null character

I have this code:

```
std::string str("ashish");
str.append("\0\0");
printf("%d", str.length());
```

It is printing 6, but if I have this code:

```
std::string str("ashish");
str.append("\0\0", 2);
printf("%d", str.length());
```

it is printing 8! Why?
It's because `str.append("\0\0")` uses the null character to determine the end of the string, so `"\0\0"` has length zero. The other overload, `str.append("\0\0", 2)`, just takes the length you give it, so it appends two characters.

From the standard:

> ```
> basic_string& append(const charT* s, size_type n);
> ```
>
> 7 *Requires:* `s` points to an array of at least `n` elements of `charT`.
>
> 8 *Throws:* `length_error` if `size() + n > max_size()`.
>
> 9 *Effects:* The function replaces the string controlled by `*this` with a string of length `size() + n` whose first `size()` elements are a copy of the original string controlled by `*this` and whose remaining elements are a copy of the initial `n` elements of `s`.
>
> 10 *Returns:* `*this`.
>
> ```
> basic_string& append(const charT* s);
> ```
>
> 11 *Requires:* `s` points to an array of at least `traits::length(s) + 1` elements of `charT`.
>
> 12 *Effects:* Calls `append(s, traits::length(s))`.
>
> 13 *Returns:* `*this`.
>
> *— [string::append] 21.4.6.2 p7-13*
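As an aside, `%d` is not the right `printf` specifier for `size_t`; use `%zu` (or `std::cout`). Here is a minimal sketch of the usual ways to append data that contains embedded null characters:

```
#include <iostream>
#include <string>

int main() {
    std::string str("ashish");

    str.append("\0\0", 2);               // explicit length: both nulls are appended
    str.append(std::string("\0\0", 2));  // same idea, via a string built with a length

    using namespace std::string_literals;
    str += "\0\0"s;                      // C++14: the ""s literal keeps embedded nulls

    std::cout << str.length() << '\n';   // prints 12
}
```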
Validate Anti forgery key not working with ajax post

[![added](https://i.stack.imgur.com/9e32h.png)](https://i.stack.imgur.com/9e32h.png)

I have tried to use the validate-antiforgery token with an AJAX POST request, but the response is "no root element found". If I remove the antiforgery token, it works perfectly. Here is my code.

JavaScript:

```
function Save() {
    let GroupName = GetElementValue("GroupName");
    let GroupId = GetElementValue("GroupId");

    var Group = {
        __RequestVerificationToken: gettoken(),
        GroupId: "1",
        GroupName: "My Group Name"
    };

    if (IsFormValid("GroupForm")) {
        AjaxPost("/Groups/AddGroup", Group).done(function () {
            GetGroups();
        });
    }
}

function gettoken() {
    var token = '@Html.AntiForgeryToken()';
    token = $(token).val();
    return token;
}

function AjaxPost(url, data) {
    return $.ajax({
        type: "post",
        contentType: "application/json;charset=utf-8",
        dataType: "json",
        responseType: "json",
        url: url,
        data: JSON.stringify(data)
    });
}
```

I have also tried this:

```
$.ajax({
    type: "POST",
    url: "/Groups/AddGroup",
    data: {
        __RequestVerificationToken: gettoken(),
        GroupId: 1,
        GroupName: "please work"
    },
    dataType: 'json',
    contentType: 'application/x-www-form-urlencoded; charset=utf-8',
});
```

Here is the backend:

```
[HttpPost]
[ValidateAntiForgeryToken]
public void AddGroup([FromBody] GroupView Group)
{
    if (Group.GroupName.Trim().Length > 0)
    {
        bool existed = _context.Groups.Any(x => x.GroupName.ToLower().TrimEnd().Equals(Group.GroupName.ToLower().TrimEnd()));
        if (!existed)
        {
            Groups group = new Groups() { GroupName = Group.GroupName };
            _context.Groups.AddAsync(group);
            _context.SaveChanges();
            int? groupId = group.GroupId;
        }
    }
}
```

[![Here is the parameter passing perfectly](https://i.stack.imgur.com/ovNrU.png)](https://i.stack.imgur.com/ovNrU.png)

And here is my class GroupView:

```
public class GroupView
{
    public string GroupId { get; set; }
    public string GroupName { get; set; }
}
```

I want to use the method where I send the verification token with my data normally. How can I make it work? Any help!
In ASP.NET Core you can pass the antiforgery token either via the form or via headers, so I can suggest two solutions.

**Solution 1. Headers**

In order to let the framework read the token from headers you need to configure `AntiforgeryOptions` and set `HeaderName` to a non-`null` value. Add this code to `Startup.cs`:

```
//or if you omit this configuration
//HeaderName will be "RequestVerificationToken" by default
services.AddAntiforgery(options =>
{
    options.HeaderName = "X-CSRF-TOKEN"; //may be any other valid header name
});
```

And pass the antiforgery token in `AJAX`:

```
function Save() {
    //..
    //no need to set token value in group object
    var Group = {
        GroupId: "1",
        GroupName: "My Group Name"
    };
    //..
}

function AjaxPost(url, data) {
    return $.ajax({
        type: "post",
        contentType: "application/json;charset=utf-8",
        dataType: "json",
        responseType: "json",
        headers: { "X-CSRF-TOKEN": gettoken() },
        url: url,
        data: JSON.stringify(data)
    });
}
```

**Solution 2. Form**

You have already tried to pass the token via the form, but it didn't work. Why? The reason is that the default implementation of `IAntiforgeryTokenStore` (used for reading tokens from the request) cannot read the antiforgery token from JSON; it reads it as form data. If you want to make it work, don't `stringify` the request data and remove the `contentType` property from the `$.ajax` call. jQuery will set an appropriate content type and serialize the data for you:

```
//all other original code is unchanged, group needs to contain a token
function AjaxPost(url, data) {
    return $.ajax({
        type: "post",
        dataType: "json",
        responseType: "json",
        url: url,
        data: data
    });
}
```

Also you need to remove the `[FromBody]` attribute from the action parameter to let the model binder properly bind the model in this case:

```
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult AddGroup(GroupView group)
```
Providing postgres windows system permission for copy (windows 8)

I'm looking to copy CSV files using pgAdmin III. Very new to this. When I run a `COPY` command from the query builder, I'm getting the following error:

```
ERROR: could not open file "C:\\Users\\Nick\\Documents\\CDR\\csv1.csv" for reading: Permission denied
SQL state: 42501
```

I've found this mentioned in a few other places ([here](https://stackoverflow.com/questions/9263408/postgresql-inconsistent-copy-permissions-errors), [here](https://stackoverflow.com/questions/14083311/permission-denied-when-trying-to-import-a-csv-file-from-pgadmin), and [here](https://stackoverflow.com/questions/21026536/import-data-from-excel-to-postgresql), for example), and the general fix is to add "postgres" to the file permissions (people also advise moving the CSV to the public folder, but this causes problems for other reasons). But when I try to add postgres to the people with permissions in Windows 8, when I check the name I get "the object postgres cannot be found" error. If I add "Everyone" it works, but for obvious reasons I don't want to leave an important folder with "Everyone" access.

Can anyone please advise on how to give postgres permissions in Windows 8? Thanks!
Recent versions of PostgreSQL for Windows don't use the `postgres` OS account; they use a `NetworkService` system account instead. This is specified in the properties of the PostgreSQL service in Windows. That's presumably the reason for the `the object postgres cannot be found` error.

Changing the permissions of the file is not really needed anyway. Recent versions of pgAdmin (1.16+) are able to feed COPY contents from the client to the server without having the server open the file. Right-click on a table name inside the object browser and check out the menu called `Import`. Internally this uses the `COPY FROM STDIN` variant.

If that's not satisfying, there's also the option of using the `psql.exe` command line tool and its `\copy` command. This command has the same functionality and syntax as the SQL `COPY` command, except that it streams the file from client to server instead of having the server open it itself. If you're CLI-oriented, make it your premium choice; it's easier than pgAdmin.
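For illustration, a `\copy` invocation might look like the line below. The table name `cdr` and the options are assumptions to match the question's file; note that psql accepts forward slashes in Windows paths:

```
\copy cdr FROM 'C:/Users/Nick/Documents/CDR/csv1.csv' WITH (FORMAT csv, HEADER)
```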
Ridge regression to minimize RMSE instead of MSE

Cross-posted from [my identical question on math.stackexchange](https://math.stackexchange.com/questions/2893190/ridge-regression-to-minimize-rmse-instead-of-mse):

Given a matrix $X$ and a vector $\vec{y}$, ordinary least squares (OLS) regression tries to find $\vec{c}$ such that $\left\| X \vec{c} - \vec{y} \right\|\_2^2$ is minimal. (If we assume that $\left\| \vec{v}\right\|\_2^2=\vec{v} \cdot \vec{v}$.) Ridge regression tries to find $\vec{c}$ such that $\left\| X \vec{c} - \vec{y} \right\|\_2^2 + \left\| \Gamma \vec{c} \right\|\_2^2 $ is minimal.

However, I have an application where I need to minimize not the sum of squared errors, but the square root of this sum. Naturally, the square root is an increasing function, so this minimum will be at the same location, so OLS regression will still give the same result. But will ridge regression?

On the one hand, I don't see how minimizing $\left\| X \vec{c} - \vec{y} \right\|\_2^2 + \left\| \Gamma \vec{c} \right\|\_2^2 $ will necessarily result in the same $\vec{c}$ as minimizing $\sqrt{ \left\| X \vec{c} - \vec{y} \right\|\_2^2 } + \left\| \Gamma \vec{c} \right\|\_2^2 $. On the other hand, I've read (though never seen shown) that minimizing $\left\| X \vec{c} - \vec{y} \right\|\_2^2 + \left\| \Gamma \vec{c} \right\|\_2^2 $ (ridge regression) is identical to minimizing $\left\| X \vec{c} - \vec{y} \right\|\_2^2$ under the constraint that $ \left\|\Gamma \vec{c}\right\|\_2^2 < t$, where $t$ is some parameter. And if this is the case, then it should result in the same solution as minimizing $\sqrt{ \left\| X \vec{c} - \vec{y} \right\|\_2^2}$ under the same constraint.
Minimizing

$$ \left\| X \vec{c} - \vec{y} \right\|\_2^2 + \left\| \Gamma \vec{c} \right\|\_2^2 $$

and minimizing

$$ \sqrt{\left\| X \vec{c} - \vec{y} \right\|\_2^2} + \left\| \Gamma \vec{c} \right\|\_2^2 $$

do not *directly* relate to minimizing ${\left\| X \vec{c} - \vec{y} \right\|\_2^2}$ or $\sqrt{\left\| X \vec{c} - \vec{y} \right\|\_2^2}$ under the constraint $\left\|\Gamma\vec{c}\right\|\_2^2 < t$. There needs to be a conversion between $t$ and $\Gamma$, and it will be *different* for the two different cost functions. Thus the minimization of MSE and RMSE with the same penalty term defined by $\Gamma$ relates to constrained minimizations with *different* constraints $t$.

Note that for every solution $\vec{c}$ to minimizing the MSE with penalty term $\Gamma\_1$ there will be a penalty term $\Gamma\_2$ that results in the same solution $\vec{c}$ when minimizing the penalized RMSE. So for many practical purposes you can use any methods/software that solve the penalized MSE problem; you only need to use a different cost function when, for instance, performing cross-validation to select the ideal $\Gamma$.
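To sketch why that correspondence holds (a step the answer leaves implicit; assume a differentiable optimum with a nonzero residual), write $f(\vec{c}) = \left\| X \vec{c} - \vec{y} \right\|\_2^2$ and compare the stationarity conditions of the two penalized problems:

$$ \nabla f + 2\Gamma\_1^T \Gamma\_1 \vec{c} = 0 \qquad \text{(penalized MSE)} $$

$$ \frac{\nabla f}{2\sqrt{f}} + 2\Gamma\_2^T \Gamma\_2 \vec{c} = 0 \;\Longleftrightarrow\; \nabla f + 4\sqrt{f(\vec{c})}\, \Gamma\_2^T \Gamma\_2 \vec{c} = 0 \qquad \text{(penalized RMSE)} $$

The two conditions coincide exactly when $\Gamma\_1^T \Gamma\_1 = 2\sqrt{f(\vec{c}^{opt})}\, \Gamma\_2^T \Gamma\_2$: the penalties differ by a scale factor that depends on the optimal residual, which is why the same $\Gamma$ corresponds to *different* constraint levels $t$ in the two problems.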
Why are slice objects not hashable in python

Why are slice objects in python not hashable:

```
>>> s = slice(0, 10)
>>> hash(s)
TypeError                                 Traceback (most recent call last)
<ipython-input-10-bdf9773a0874> in <module>()
----> 1 hash(s)

TypeError: unhashable type
```

They seem to be immutable:

```
>>> s.start = 5
TypeError                                 Traceback (most recent call last)
<ipython-input-11-6710992d7b6d> in <module>()
----> 1 s.start = 5

TypeError: readonly attribute
```

For context, I'd like to make a dictionary that maps python ints or slice objects to some values, something like this:

```
class Foo:
    def __init__(self):
        self.cache = {}

    def __getitem__(self, idx):
        if idx in self.cache:
            return self.cache[idx]
        else:
            r = random.random()
            self.cache[idx] = r
            return r
```

As a workaround I need to special-case slices:

```
class Foo:
    def __init__(self):
        self.cache = {}

    def __getitem__(self, idx):
        if isinstance(idx, slice):
            idx = ("slice", idx.start, idx.stop, idx.step)
        if idx in self.cache:
            return self.cache[idx]
        else:
            r = random.random()
            self.cache[idx] = r
            return r
```

This isn't a big deal, I'd just like to know if there is some reasoning behind it.
From the [Python bug tracker](https://bugs.python.org/issue1733184):

> Patch [#408326](https://bugs.python.org/issue408326) was designed to make assignment to d[:] an error where d is a dictionary. See discussion starting at <http://mail.python.org/pipermail/python-list/2001-March/072078.html>.

Slices were specifically made unhashable so you'd get an error if you tried to slice-assign to a dict.

Unfortunately, it looks like mailing list archive links are unstable. The link in the quote is dead, and the [alternate link I suggested using](https://mail.python.org/pipermail/python-list/2001-March/033254.html) died too. The best I can point you to is the archive link for [that entire month of messages](https://mail.python.org/pipermail/python-list/2001-March/thread.html); you can Ctrl-F for `{` to find the relevant ones (and a few false positives).
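If it helps, the question's workaround can be factored into a small key-normalization helper. Also note that, if I remember the changelog correctly, CPython 3.12 finally made slice objects hashable, so on new enough interpreters the workaround becomes unnecessary:

```
def normalize_key(idx):
    """Map a slice to a hashable surrogate tuple; pass other keys through."""
    if isinstance(idx, slice):
        return ("slice", idx.start, idx.stop, idx.step)
    return idx

cache = {}
cache[normalize_key(slice(0, 10))] = "value"   # works on any Python version
# On Python 3.12+ (if memory serves), hash(slice(0, 10)) itself no longer raises.
```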
Event Listener in Google Charts API

I'm busy using [Google Charts](http://code.google.com/apis/chart/interactive/docs/gallery/table.html#Events) in one of my projects to display data in a table. Everything is working great, except that I need to see what line a user selected once they click a button. This would obviously be done with JavaScript, but I've been struggling for days now to no avail. Below I've pasted code for a simple example of the table, and the JavaScript function that I want to use (that doesn't work).

```
<html>
  <head>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
    <script type='text/javascript'>
      google.load('visualization', '1', {packages:['table']});
      google.setOnLoadCallback(drawTable);

      var table = "";

      function drawTable() {
        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Name');
        data.addColumn('number', 'Salary');
        data.addColumn('boolean', 'Full Time Employee');
        data.addRows(4);
        data.setCell(0, 0, 'Mike');
        data.setCell(0, 1, 10000, '$10,000');
        data.setCell(0, 2, true);
        data.setCell(1, 0, 'Jim');
        data.setCell(1, 1, 8000, '$8,000');
        data.setCell(1, 2, false);
        data.setCell(2, 0, 'Alice');
        data.setCell(2, 1, 12500, '$12,500');
        data.setCell(2, 2, true);
        data.setCell(3, 0, 'Bob');
        data.setCell(3, 1, 7000, '$7,000');
        data.setCell(3, 2, true);

        table = new google.visualization.Table(document.getElementById('table_div'));
        table.draw(data, {showRowNumber: true});
      }

      function selectionHandler() {
        selectedData = table.getSelection();
        row = selectedData[0].row;
        item = table.getValue(row, 0);
        alert("You selected :" + item);
      }
    </script>
  </head>
  <body>
    <div id='table_div'></div>
    <input type="button" value="Select" onClick="selectionHandler()">
  </body>
</html>
```

Thanks in advance for anyone taking the time to look at this. I've honestly tried my best with this; hope someone out there can help me out a bit.
Here is a working version. There was no listener for the select event, and you mixed up `data` and `table` for the `getValue` call.

```
<html>
  <head>
    <script src='https://www.google.com/jsapi'></script>
    <script>
      google.load('visualization', '1', {packages:['table']});
      google.setOnLoadCallback(drawTable);

      var table, data;

      function drawTable() {
        data = new google.visualization.DataTable();
        data.addColumn('string', 'Name');
        data.addColumn('number', 'Salary');
        data.addColumn('boolean', 'Full Time Employee');
        data.addRows(4);
        data.setCell(0, 0, 'Mike');
        data.setCell(0, 1, 10000, '$10,000');
        data.setCell(0, 2, true);
        data.setCell(1, 0, 'Jim');
        data.setCell(1, 1, 8000, '$8,000');
        data.setCell(1, 2, false);
        data.setCell(2, 0, 'Alice');
        data.setCell(2, 1, 12500, '$12,500');
        data.setCell(2, 2, true);
        data.setCell(3, 0, 'Bob');
        data.setCell(3, 1, 7000, '$7,000');
        data.setCell(3, 2, true);

        table = new google.visualization.Table(document.getElementById('table_div'));
        table.draw(data, {showRowNumber: true});

        //add the listener
        google.visualization.events.addListener(table, 'select', selectionHandler);
      }

      function selectionHandler() {
        var selectedData = table.getSelection(), row, item;
        row = selectedData[0].row;
        item = data.getValue(row, 0);
        alert("You selected :" + item);
      }
    </script>
  </head>
  <body>
    <div id='table_div'></div>
    <input type="button" value="Select" onClick="selectionHandler()">
  </body>
</html>
```
"Unreachable code detected" in MSIL or Native code Does compiler compile "Unreachable Codes" in MSIL or Native code in run time?
The question is a bit unclear but I'll take a shot at it.

First off, Adam's answer is correct insofar as there is a difference in the IL that the compiler emits based on whether the "optimize" switch is on or off. The compiler is much more aggressive about removing unreachable code with the optimize switch on.

There are two kinds of unreachable code that are relevant. First there is *de jure* unreachable code; that is, code that the C# language specification calls out as *unreachable*. Second, there is *de facto* unreachable code; that is, code that the C# specification does not call out as unreachable, but nevertheless cannot be reached. Of the latter kind of unreachable code, there is code that is *known to the optimizer* to be unreachable, and there is code *not known to the optimizer* to be unreachable.

The compiler typically always removes *de jure* unreachable code, but only removes *de facto* unreachable code if the optimizer is turned on.

Here's an example of each:

```
int x = 123;
int y = 0;
if (false) Console.WriteLine(1);
if (x * 0 != 0) Console.WriteLine(2);
if (x * y != 0) Console.WriteLine(3);
```

All three Console.WriteLines are unreachable. The first is *de jure* unreachable; the C# compiler states that this code must be treated as unreachable for the purposes of definite assignment checking.

The second two are *de jure* reachable but *de facto* unreachable. They must be checked for definite assignment errors, but the optimizer is permitted to remove them. Of the two, the optimizer detects the (2) case but not the (3) case. The optimizer knows that an integer multiplied by zero is always zero, and that therefore the condition is always false, so it removes the entire statement.

In the (3) case the optimizer does not track the possible values assigned to y and determine that y is always zero at the point of the multiplication. Even though you and I know that the consequence is unreachable, the optimizer does not know that.

The bit about definite assignment checking goes like this: if you have an unreachable statement then all local variables are considered to be assigned in that statement, and all assignments are considered to not happen:

```
int z;
if (false) z = 123;
Console.WriteLine(z); // Error

if (false) Console.WriteLine(z); // Legal
```

The first usage is illegal because z has not been definitely assigned when it is used. The second usage is *not* illegal because the code isn't even reachable; z can't be used before it is assigned because control never gets there!

C# 2 had some bugs where it confused the two kinds of reachability. In C# 2 you could do this:

```
int x = 123;
int z;
if (x * 0 != 0) Console.WriteLine(z);
```

And the compiler would not complain, even though *de jure* the call to Console.WriteLine is reachable. I fixed that in C# 3.0 and we took the breaking change.

Note that we reserve the right to change up how the unreachable code detector and code generator work at any time; we might decide to always emit the unreachable code or never emit it or whatever.
When should I use MySQL compressed protocol?

I've learned that MySQL can compress communication between servers and clients.

> Compression is used if both client and server support zlib compression, and the client requests compression.
>
> (from [MySQL Forge Wiki](http://forge.mysql.com/wiki/MySQL%5FInternals%5FClientServer%5FProtocol#Compression))

The most obvious pros and cons are:

- pros: Reduced payload size
- cons: Increased computation time

So, is compressed protocol something I should enable whenever I can afford servers with adequate specs? Are there other factors I should consider?
Performance benefits are going to be largely dependent on the size of the result sets that you are sending, in addition to the network bandwidth and latency between the database server and its clients. The larger the result sets, the larger the latency, or the less bandwidth, the more likely you will see the benefit of compression.

Your maximum level of service is limited to the smallest bottleneck. So, you need to analyze where you're currently at regarding network and CPU resources. The most optimized database server utilizes 100% of its CPU 100% of the time; otherwise you're wasting computing resources by having a processor that's sitting there not doing anything. Of course, you don't want it at 101%, so your target range is well below 100%. Yet, my point is that if you have a lot of headroom before you reach a CPU bottleneck, and the result sets are a significant size, and the network is a factor, then turn compression on. CPU cycles are cheap, especially unused ones (you do pay for electricity and cooling).

If you pay for bandwidth, trading CPU usage for bandwidth is easily justified, and even if you're not anywhere near reaching the bandwidth bottleneck, that faster speed, and higher level of service, is worth something.

Don't forget that the client must also expend CPU cycles to decompress the data. Not a major issue, but still a factor. In general, today's CPUs are faster than today's networks.
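For reference, compression has to be requested by the client. A quick sketch of two common ways to do that; the CLI `--compress` flag is standard, while the Connector/Python argument name is from memory, so verify it against your driver's documentation:

```
# command-line client: request zlib compression for this session
mysql --compress -h db.example.com -u app -p

# MySQL Connector/Python: pass compress=True when connecting
#   cnx = mysql.connector.connect(host="db.example.com", user="app",
#                                 password="secret", compress=True)
```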
How do I use Reference Parameters in C++?

I am trying to understand how to use reference parameters. There are several examples in my text, but they are too complicated for me to understand why and how to use them. How and why would you want to use a reference? What would happen if you didn't make the parameter a reference, but instead left the `&` off? For example, what's the difference between these functions:

```
int doSomething(int& a, int& b);
int doSomething(int a, int b);
```

I understand that reference variables are used in order to change a formal->reference, which then allows a two-way exchange of parameters. However, that is the extent of my knowledge, and a more concrete example would be of much help.
Think of a reference as [an alias](https://isocpp.org/wiki/faq/references). When you invoke something on a reference, you're really invoking it on the object to which the reference refers.

```
int i;
int& j = i; // j is an alias to i

j = 5; // same as i = 5
```

When it comes to functions, consider:

```
void foo(int i)
{
    i = 5;
}
```

Above, `int i` is a value and the argument passed is passed *by value*. That means if we say:

```
int x = 2;
foo(x);
```

`i` will be a *copy* of `x`. Thus setting `i` to 5 has no effect on `x`, because it's the copy of `x` being changed. However, if we make `i` a reference:

```
void foo(int& i) // i is an alias for a variable
{
    i = 5;
}
```

Then saying `foo(x)` no longer makes a copy of `x`; `i` *is* `x`. So if we say `foo(x)`, inside the function `i = 5;` is exactly the same as `x = 5;`, and `x` changes.

Hopefully that clarifies a bit.

---

Why is this important? When you program, you *never* want to copy and paste code. You want to make a function that does one task, and does it well. Whenever that task needs to be performed, you use that function.

So let's say we want to swap two variables. That looks something like this:

```
int x, y;

// swap:
int temp = x; // store the value of x
x = y;        // make x equal to y
y = temp;     // make y equal to the old value of x
```

Okay, great. We want to make this a function, because `swap(x, y);` is much easier to read. So, let's try this:

```
void swap(int x, int y)
{
    int temp = x;
    x = y;
    y = temp;
}
```

This won't work! The problem is that this is swapping *copies* of two variables. That is:

```
int a, b;
swap(a, b); // hm, x and y are copies of a and b...a and b remain unchanged
```

In C, where references do not exist, the solution was to pass the address of these variables; that is, use pointers\*:

```
void swap(int* x, int* y)
{
    int temp = *x;
    *x = *y;
    *y = temp;
}

int a, b;
swap(&a, &b);
```

This works well. However, it's a bit clumsy to use, and actually a bit unsafe. `swap(nullptr, nullptr)` swaps two nothings and dereferences null pointers...undefined behavior! Fixable with some checks:

```
void swap(int* x, int* y)
{
    if (x == nullptr || y == nullptr)
        return; // one is null; this is a meaningless operation

    int temp = *x;
    *x = *y;
    *y = temp;
}
```

But look how clumsy our code has gotten. C++ introduces references to solve this problem. If we can just alias a variable, we get the code we were looking for:

```
void swap(int& x, int& y)
{
    int temp = x;
    x = y;
    y = temp;
}

int a, b;
swap(a, b); // inside, x and y are really a and b
```

Both easy to use and safe. (We can't accidentally pass in a null; there are no null references.) This works because the swap happening inside the function is really happening on the variables being aliased outside the function.

(Note, never write a `swap` function. :) One already exists in the header `<algorithm>`, and it's templated to work with any type.)

---

Another use is to remove the copy that happens when you call a function. Consider we have a data type that's very big. Copying this object takes a lot of time, and we'd like to avoid that:

```
struct big_data
{
    char data[9999999];
}; // big!

void do_something(big_data data);

big_data d;
do_something(d); // ouch, making a copy of all that data :<
```

However, all we really need is an alias to the variable, so let's indicate that.
(Again, back in C we'd pass the address of our big data type, solving the copying problem but introducing clumsiness.)

```
void do_something(big_data& data);

big_data d;
do_something(d); // no copies at all! data aliases d within the function
```

This is why you'll hear it said that you should pass things by reference all the time, unless they are primitive types. (Because internally passing an alias is probably done with a pointer, like in C. For small objects it's just faster to make the copy than to worry about pointers.)

Keep in mind you should be const-correct. This means if your function doesn't modify the parameter, mark it as `const`. If `do_something` above only looked at but didn't change `data`, we'd mark it as `const`:

```
void do_something(const big_data& data); // alias a big_data, and don't change it
```

We avoid the copy *and* we say "hey, we won't be modifying this." This has other side effects (with things like temporary variables), but you shouldn't worry about that now. In contrast, our `swap` function cannot take `const` references, because we are indeed modifying the aliases.

Hope this clarifies some more.

---

\*Rough pointers tutorial: A pointer is a variable that holds the address of another variable. For example:

```
int i;  // normal int
int* p; // points to an integer (is not an integer!)

p = &i; // &i means "address of i". p is pointing to i
*p = 2; // *p means "dereference p". that is, this goes to the int
        // pointed to by p (i), and sets it to 2.
```

So, if you've seen the pointer-version swap function: we pass the address of the variables we want to swap, and then we do the swap, dereferencing to get and set values.
Make ESLint apply rules to only certain file name patterns

Is it possible to configure ESLint in a way that it only applies rules to files whose names match a certain pattern?

I know it's possible to have separate rule sheets in directories, but in this case the structure is the following:

```
app
|
|- module1
|  |- module1.js
|  |- module1.spec.js
|
|- module2
|  |- module2.js
|  |- module2.spec.js
```

And I want project-wide rules that would only apply to the \*.spec.js files. I'd expect to have a setting like

```
"include-pattern": "*.spec.js"
```

in the .eslintrc, or any other similar way to specify which filenames should be considered for specific rules.
**Updated answer:**

Yes, now it's possible. In your example it could look like:

```
{
  "rules": {
    "quotes": [2, "double"]
  },
  "overrides": [
    {
      "files": ["app/module1/*.js"],
      "excludedFiles": "*.spec.js",
      "rules": {
        "quotes": [2, "single"]
      }
    }
  ]
}
```

You can read more on the [ESLint doc page](https://eslint.org/docs/user-guide/configuring#example-configuration).

**Old answer:**

Currently this is not supported, but we are actively working on this feature for the `2.0.0` version. To track progress: <https://github.com/eslint/eslint/issues/3611>
KQL, time difference between separate rows in same table

I have a `Sessions` table:

```
Sessions

|Timespan|Name |No|
|12:00:00|Start|1 |
|12:01:00|End  |2 |
|12:02:00|Start|3 |
|12:04:00|Start|4 |
|12:04:30|Error|5 |
```

I need to extract from it the duration of each session using KQL (but if you could give me a suggestion how I can do it with some other query language, it would also be very helpful). But if the next row after a `Start` is also a `Start`, it means the session was abandoned and we should ignore it.

Expected result:

```
|Duration|SessionNo|
|00:01:00| 1       |
|00:00:30| 4       |
```
You can try something like this:

```
Sessions
| order by No asc
| extend nextName = next(Name), nextTimestamp = next(timestamp)
| where Name == "Start" and nextName != "Start"
| project Duration = nextTimestamp - timestamp, No
```

When using the `order by` operator, you get a [serialized row set](https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/windowsfunctions#serialized-row-set), on which you can then use operators such as [`next`](https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/nextfunction) and [`prev`](https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/prevfunction). Basically you are seeking rows with `Name == "Start"` whose following row is not another `Start` (the abandoned-session rule), so this is what I did. You can find this query running at the [Kusto Samples open database](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA02OQQ6CUAxE955iZAUJHoEdazZwgQpNwOT/T9oSNfHwVgR01cm08zotq04p6umFJAMLrk80CaQ93OKHcRwQfTYUGNUq848uylV3U2A1CvO+s90oPH8fWRjfaIWsNRLLQP/I8+H7/Szpxr2hXoTMa23Q35MLDn7pPd8iaXz7vgAAAA==).
Entity Framework: many-to-many relationship tables

I have a News entity and I get the news based on its NewsID. Now I defined a new entity, Group, and I want to get the news based on their Group ID. I also defined a GroupNews table to relate the two tables:

![enter image description here](https://i.stack.imgur.com/fbAwS.png)

In the News model I have:

```
public virtual ICollection<GroupNews> RelatedGroupID { get; set; }
```

So I assumed that I defined the GroupNews table values and I can use it in NewsService. Now let's look at NewsService:

```
Expression<Func<News, bool>> constraint = null;
if (user_id > 0 && project_id > 0)
{
    constraint = e => (e.CreatorID == user_id && e.RelatedProjectTags.Any(p => p.ProjectID == project_id));
}
else if (user_id > 0)
{
    constraint = e => (e.CreatorID == user_id);
}
else if (project_id > 0)
{
    constraint = e => (e.RelatedProjectTags.Any(p => p.ProjectID == project_id));
}
else
{
    constraint = null;
}

IEnumerable<News> result_list = null;
if (constraint != null)
{
    result_list = newsRepository.GetMany(constraint).OrderByDescending(e => e.CreatedOn).Skip(offset);
}
else
{
    result_list = newsRepository.GetAll().OrderByDescending(e => e.CreatedOn).Skip(offset);
}

if (count > 0)
{
    result_list = result_list.Take(count);
}
return result_list.ToList<News>();
```

I added this branch to define a constraint based on GroupID:

```
else if (groupId > 0)
{
    constraint = e => (e.RelatedGroupID.Any(n => n.GroupID == groupId));
}
```

It seems wrong and gives me this error:

> {"Invalid object name 'dbo.GroupNewsNews'."}
1. You do not need the GroupNewsID column in the GroupNews table. Drop this column and create a composite key from GroupID and NewsID. In the News entity you need to define the property:

```
public virtual ICollection<Group> Groups { get; set; }
```

In the default constructor for this entity you need to initialize the property (needed for lazy loading):

```
Groups = new List<Group>();
```

Make similar changes to the Group entity.

2. In GroupMap.cs you need to define:

```
this.HasMany(t => t.News)
    .WithMany(t => t.Groups)
    .Map(m =>
    {
        m.ToTable("GroupNews");
        m.MapLeftKey("GroupID");
        m.MapRightKey("NewsID");
    });
```

3. Write tests for NewsRepository and GroupRepository.
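With the relationship remapped this way, the extra branch from the question would query through the new navigation property instead of the join entity. A sketch in the question's own terms (the property and key names are assumed from the models above):

```
else if (groupId > 0)
{
    // Groups is the ICollection<Group> navigation defined above
    constraint = e => e.Groups.Any(g => g.GroupID == groupId);
}
```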
Get stock quotes of NSE and BSE using web-service and parse it using json

I want to show all stock prices of the NSE and BSE on a simple HTML page. I learned from Google that I can call an existing web service which will give all the information in JSON form, and then I have to parse that JSON. Now I want someone to provide me a link by which I can call the web service, and let me know how I can call that web service using jQuery and how I can parse the output JSON data. If anyone can give me sample code, that would be best. Thank you so much for helping me. :)
If you want to fetch the data from the NSE web service, below is the link:

<http://www.nseindia.com/live_market/dynaContent/live_watch/get_quote/ajaxGetQuoteJSON.jsp?symbol=dhfl>

But I'm not sure whether you can use this link for non-commercial as well as commercial purposes.

As the site has been updated and the old site is no longer working, updating the answer with new links.

Equity links:

<https://www.nseindia.com/api/quote-equity?symbol=RELIANCE>

<https://www.nseindia.com/api/quote-equity?symbol=RELIANCE&section=trade_info>

Derivatives link:

<https://www.nseindia.com/api/quote-derivative?symbol=RELIANCE>

**I'm not sure whether you can use this link or not for non-commercial as well as commercial purposes.**
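Purely as a shape-of-the-code sketch of the jQuery side the question asked about: be aware that NSE is known to reject requests that don't look like a browser session, and a page served from another origin will also hit the browser's CORS restrictions, so treat this as illustrative only:

```
$.getJSON("https://www.nseindia.com/api/quote-equity", { symbol: "RELIANCE" })
    .done(function (data) {
        // the payload structure is undocumented; inspect it in the console first
        console.log(data);
    })
    .fail(function (xhr) {
        console.log("request failed: " + xhr.status);
    });
```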
Are you allowed to use a method inside the init method?

I just started learning about OOP in Python 3 and we made this little class today:

```
class Square:
    def __init__(self, side):
        self.side = side

    def show_perimeter(self):
        print(self.side * 4)

test_square = Square(10)
test_square.show_perimeter()
>> 40
```

Now I'm thinking if it's possible to get the value of the perimeter while creating the object, something like...

```
class Square:
    def __init__(self, side):
        self.side = side
        self.perimeter = get_perimeter(self)

    def get_perimeter(self):
        return self.side * 4

test_square = Square(10)
print(test_square.perimeter)
```

Is this something you're able to do? If so, is it good practice? **If not, what would be the best way to get the perimeter by just using the side?**
There is nothing special about the `__init__` method other than that it is invoked automatically as part of the construction of an object (`__new__` creates the object, and `__init__` initializes it). Therefore, you can do whatever you need to do within it. However, you need to make sure that you don't inadvertently do things that would cause operations on a partially initialized object. The use of `@property` below can solve most of these edge cases.

The only difference here is that it is better form to call `self.method()` rather than `method(self)` in most cases:

```
class Square:
    def __init__(self, side):
        self.side = side
        self.perimeter = self.get_perimeter()

    def get_perimeter(self):
        return self.side * 4

test_square = Square(10)
print(test_square.perimeter)
```

However, I'd like to point out that a property might be better in this case:

```
class Square():
    def __init__(self, side):
        self.side = side

    @property
    def perimeter(self):
        return self.side * 4

x = Square(10)
print(x.perimeter)
>>> 40
```

In this case, the `@property` decorator converts the `perimeter` method to a property which can be accessed just like any other attribute, but it is calculated at the time it is asked for.
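A small illustration of the practical difference between the two versions: the attribute computed in `__init__` is frozen at construction time, while the property tracks later changes to `side`:

```
x = Square(10)      # the @property version from above
x.side = 25
print(x.perimeter)  # 100 -- recomputed on every access

# With the __init__ version, x.perimeter would still be 40 after
# changing x.side, because it was computed once at construction.
```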
Animating the drawing of a line

I'm trying to animate the drawing of a line in the following way:

## .h

```
CAShapeLayer *rootLayer;
CAShapeLayer *lineLayer;
CGMutablePathRef path;
```

## .m

```
path = CGPathCreateMutable();
CGPathMoveToPoint(path, nil, self.frame.size.width/2-100, 260);
CGPathAddLineToPoint(path, nil, self.frame.size.width/2+100.0, 260);
CGPathCloseSubpath(path);

self.rootLayer = [CALayer layer];
rootLayer.frame = self.bounds;
[self.layer addSublayer:rootLayer];

self.lineLayer = [CAShapeLayer layer];
[lineLayer setPath:path];
[lineLayer setFillColor:[UIColor redColor].CGColor];
[lineLayer setStrokeColor:[UIColor blueColor].CGColor];
[lineLayer setLineWidth:1.5];
[lineLayer setFillRule:kCAFillRuleNonZero];
[rootLayer addSublayer:lineLayer];

[self performSelector:@selector(startTotalLine) withObject:nil afterDelay:1.5];

- (void)startTotalLine {
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"animatePath"];
    [animation setDuration:3.5];
    animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
    [animation setAutoreverses:NO];
    [animation setFromValue:(id)path];
    [animation setToValue:(id)path];
    [lineLayer addAnimation:animation forKey:@"animatePath"];
}
```

The line is drawn before the `startTotalLine` method is invoked. Also, the `startTotalLine` method doesn't affect the line. I want it to animate the drawing of the line from right to left.
I think the easiest way to do what you want is to present some UIView that is 1.5 pixels high and animate its width. Ask me if I'm not clear.

I think your code doesn't work because your variable `path` is not a layer property. Read the [manuals](http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Reference/CABasicAnimation_class/Introduction/Introduction.html#//apple_ref/occ/cl/CABasicAnimation):

> CABasicAnimation provides basic, single-keyframe animation capabilities for a layer property.

And you do something strange here:

```
[animation setFromValue:(id)path];
[animation setToValue:(id)path];
```

EDIT: I stumbled upon an [article](http://tumbljack.com/post/179975074/complex-interpolation-with-cashapelayer-free), and understood what you were trying to achieve! Now I think the reason you failed is that you can only animate a path that doesn't change its number of points. And now I think you can create a path-line with two points. At first they are at the same place, and the other path is the line you want to end up with. Now animate from the first path to the second. I think it should work, but I'm not sure.

EDIT: Definitely! You need this guy's code. [Git link](http://github.com/joericioppo/Shape_04/tree/master).
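For what it's worth, a simpler route on a `CAShapeLayer` is to animate its `strokeEnd` property, which Core Animation interpolates along the path for you. A minimal sketch against the question's `lineLayer` (to draw right-to-left, just build the path starting from the right-hand point):

```
CABasicAnimation *draw = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
draw.duration = 3.5;
draw.fromValue = [NSNumber numberWithFloat:0.0f];
draw.toValue = [NSNumber numberWithFloat:1.0f];
lineLayer.strokeEnd = 1.0; // set the model value so the line stays drawn afterwards
[lineLayer addAnimation:draw forKey:@"drawLine"];
```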
Are Program Managers still being used?

I recently read an article about [Functional Specifications](https://www.joelonsoftware.com/2000/10/04/painless-functional-specifications-part-3-but-how/) by [Joel Spolsky](https://en.wikipedia.org/wiki/Joel_Spolsky) (if his name sounds familiar, it's likely because the good Mr. Spolsky is one of the StackExchange founders). In the article Joel mentions how Microsoft used Program Managers as the persons who manage the software design during development (okay, that might be a bit oversimplified; there is a more complete description in the [article](https://www.joelonsoftware.com/2000/10/04/painless-functional-specifications-part-3-but-how/)). However, the article is quite old, so my question is this:

> Are Program Managers still a thing? And if not, then what is the more modern way of managing software design?
**Programme managers in general?**

[Programme manager](https://www.wrike.com/blog/program-manager-vs-project-manager/) is a job function which is not specific to software. The term "programme" in it is not directly related to "programming" or a software artifact. "Programme" is a term that means a long-term endeavor, be it in politics (e.g. a new healthcare programme) or in industry (e.g. the Airbus A340 programme).

In the software industry, a programme often corresponds to a family of products, and [programme management](https://en.wikipedia.org/wiki/Program_management) is hence a part of product management. A programme manager is then [responsible](https://www.pmi.org/learning/library/roles-responsibilities-skills-program-management-6799) for managing and coordinating all the many interdependent projects aiming to realize and contribute to the long-term objective and the larger picture (e.g. the different software components and their releases).

So yes, programme managers are still being used!

**Programme managers according to your article**

Nevertheless, there's an ambiguity in the article that you quote:

- A programme manager (according to the definition I give above) will make sure that the overall vision and objectives are clear, and that the projects remain aligned with them. But in large programmes, he/she will in principle not be personally involved in the writing of functional specifications. He/she might not even be involved in the approval of these specs. He/she "just" has to ensure that the many projects are able to get it right.
- But your article describes a small team and a "programme manager" who'd write the specs. This seems not to be the "programme manager" as generally understood. It seems to be more a traditional project organisation, where the [project manager cumulates the role of lead analyst](http://work.chron.com/can-systems-analyst-project-manager-well-25743.html). This approach is also still in use, but I think that "project manager" is the name that most of us would give to such a role.

However, the trend is to go more for agile approaches, with "[product owners](https://www.mountaingoatsoftware.com/agile/scrum/roles/product-owner)" replacing the traditional "project managers" in a less hierarchical way. The specs are then worked out collaboratively in the form of user stories, the product owner playing the role of business advocate to validate the stories and the priorities.
Pressing "Return" in a HTML-Form with multiple Submit-Buttons Let's imagine a HTML-form with with two submit buttons. one of them is positioned in the upper half of the form and does something less important. the other button is the actual submit button, which saves the entered data. this button is positioned at the end of the form. the two buttons will trigger different action-urls. experienced users like to submit their forms by pressing "enter" or "return" instead of clicking on the according button. unfortunately, the browser will look for the first submit-button of the current form and use this to execute the form-submit. since in my form the second button is the actual submit-button, i need to tell the browser to use this particular button (or the action-url that is associated with it). i don't link javascript listeners, which are looking for key pressed or something like that. so i'm looking for a better approach to this problem. however, javascript or jquery solutions (without keypressed-listerner) are welcome. thank you very much for your help in advance.
You could, theoretically at least, have three `submit` buttons in your form.

Button two is the existing 'less-important' button (from halfway down the form); button three is the existing 'actual-submit' button from your existing form. Button one should be hidden (using CSS `display: none` or `visibility: hidden`) and should perform exactly the same function as your current 'actual-submit.' I think it'll still be the first button to be found by the browser, regardless of its visibility.

```
<form method="post" action="whatever.php" enctype="form/multipart">
  <fieldset id="first">
    <label>...<input />
    <label>...<input />
    <label>...<input />
    <input type="submit" value="submit" style="visibility: hidden;" /> <!-- or "display: none" -->
    <input class="less_important" type="submit" value="submit" />
  </fieldset>

  <fieldset id="second">
    <label>...<input />
    <label>...<input />
    <label>...<input />
    <input type="submit" value="submit" class="actual_submit" />
  </fieldset>
</form>
```

---

**Edited in response to comments:**

> I thought hidden buttons were also disabled by default? [md5sum]

A valid point, but I made the mistake of testing only in Firefox (3.5.7, Ubuntu 9.10) before posting, in which the technique worked1, for both. The complete XHTML file is pasted below; it forms the basis of my testing subsequent to these comments.

```
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>3button form</title>
  <link rel="stylesheet" type="text/css" href="css/stylesheet.css" />
  <script type="text/javascript" src="js/jquery.js"></script>
  <script type="text/javascript">
    $(document).ready(
      function() {
        $('input[type="submit"]').click(
          function(e){
            alert("button " + $(this).attr("name"));
          }
        );
      }
    );
  </script>
</head>

<body>
  <form method="post" action="whatever.php" enctype="form/multipart">
    <fieldset id="first">
      <label>...<input />
      <label>...<input />
      <label>...<input />
      <input name="one" type="submit" value="submit" style="display:none;" /> <!-- or "visibility: hidden" -->
      <input name="two" class="less_important" type="submit" value="submit" />
    </fieldset>

    <fieldset id="second">
      <label>...<input />
      <label>...<input />
      <label>...<input />
      <input name="three" type="submit" value="submit" class="actual_submit" />
    </fieldset>
  </form>
</body>
</html>
```

> `display: none` should prevent a button from being an active part of the form (included in the result set, and eligible for default-button-ness); `visibility: hidden` should not. However both of these cases are got wrong by some browsers. The normal way to have an invisible first submit button is to `position: absolute;` it and move it way off the page (eg. with left: -4000px). This is ugly but reliable. It's also a good idea to change its tabindex so it doesn't interfere in the expected form tabbing order.

There are a few points I have to raise to this comment. In order:

1. "The normal way..." I was unaware that there was a normal way, and presented this option as a possibility to achieve an aim, in the full knowledge that there were/are almost certainly any number of better ways, particularly given that I don't see a good reason for multiple submit buttons on the same form.
2. Given the latter sentence of the above point, I'd like to make it clear that I don't advocate doing this. At all.
   It feels like an ugly, and non-semantic, hack to have more than one submit button, with -in the OP's instance- one button apparently *not being a submit button*.
3. The notion of `position: absolute; left: -4000px;` had occurred to me, but it seemed to effect much the same as `visibility: hidden;`, and I have an innate dislike of `position: absolute;` for whatever reason...so I went with the option that was less objectionable to me at the time of writing... =)

I appreciate your comment about the tabindex, though; that was something I never gave any thought to, at all.

I'm sorry if I sound somewhat snippy; it's late, I'm tired...yadda-yadda. I've been testing in various browsers since my return home, and it seems that Firefox 3.5+ gives the same behaviour -reporting 'button one'- on both Windows XP and Ubuntu 9.10, while all WebKit browsers (Midori, Epiphany, Safari and Chrome) fail and report 'button two.' So it's definitely a fail-worthy idea to `display: none;` the submit button, whereas the `visibility: hidden` version at least works.

---

1. By which I mean that hitting 'enter' triggered the form-submit event, or the click event of the first submit button of the form, regardless of whether that first submit was `display: none;` or `visibility: hidden`. Please be aware that my jQuery skills are limited, so the tests employed may well be insufficient and non-representative (I ran only one at a time to try and prevent conflicts occurring in execution; both are presented, one clearly commented out).
How to use jinja2 and its i18n extension (using babel) outside flask
How can I use jinja2 with babel outside a flask application? Supposing that I have a locale dir which is populated using the pybabel command, I want to load the translation files and translate my template files.
I found the solution. Here's how you can use jinja2/babel without flask integration.

## Preconditions

Preconditions are described just to complete the example; all of them can have other values or names.

You use a message domain named "html" for messages (the domain is an arbitrary name; the default is "messages"). There is a directory "i18n" with translated and compiled messages (e.g. with a file `i18n/cs/LC_MESSAGES/html.mo`). You prefer to render your templates using the "cs" or "en" locale. The templates are located in the directory `templates`, and there exists a jinja2 template named `stack.html` there, so there exists a file `templates/stack.html`.

## Code sample

```
from jinja2 import Environment, FileSystemLoader
from babel.support import Translations

locale_dir = "i18n"
msgdomain = "html"
list_of_desired_locales = ["cs", "en"]

loader = FileSystemLoader("templates")
extensions = ['jinja2.ext.i18n', 'jinja2.ext.autoescape', 'jinja2.ext.with_']

# Pass the domain explicitly, otherwise babel falls back to the default "messages" domain
translations = Translations.load(locale_dir, list_of_desired_locales, msgdomain)

env = Environment(extensions=extensions, loader=loader) # add any other env options if needed
env.install_gettext_translations(translations)

template = env.get_template("stack.html")
rendered_template = template.render()
```

The `rendered_template` contains the rendered HTML content now, probably in the "cs" locale.
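For completeness, a minimal `templates/stack.html` exercising the i18n extension might look like this (the message strings are illustrative, not from the original answer; both the `{% trans %}` tag and the `_()` callable are provided by `jinja2.ext.i18n` once `install_gettext_translations` has run):

```
<p>{% trans %}Hello, world!{% endtrans %}</p>
<p>{{ _("Goodbye") }}</p>
```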
Using grunt server, how can I redirect all requests to root url? I am building my first [Angular.js](http://angularjs.org/) application and I'm using [Yeoman](http://yeoman.io/). Yeoman uses Grunt to allow you to run a node.js connect server with the command 'grunt server'. I'm running my angular application in html5 mode. According to the angular docs, this requires a modification of the server to redirect all requests to the root of the application (index.html), since angular apps are single page ajax applications. > > "Using [html5] mode requires URL rewriting on server side, basically you have to rewrite all your links to entry point of your application (e.g. index.html)" > > > The problem that I'm trying to solve is detailed in [this](https://stackoverflow.com/questions/16569841/angularjs-html5-mode-reloading-the-page-gives-wrong-get-request) question. How can I modify my grunt server to redirect all page requests to the index.html page?
First, using your command line, navigate to the directory with your gruntfile. Type this in the CLI:

```
npm install --save-dev connect-modrewrite
```

At the top of your grunt file put this:

```
var modRewrite = require('connect-modrewrite');
```

For the next part, you only need to add [modRewrite](https://www.npmjs.com/package/connect-modrewrite) into your connect middleware:

```
modRewrite(['!\\.html|\\.js|\\.svg|\\.css|\\.png$ /index.html [L]']),
```

Here is an example of what my "connect" looks like inside my Gruntfile.js. You don't need to worry about my lrSnippet and my ssIncludes. The main thing you need is to just get the modRewrite in.

```
connect: {
    options: {
        port: 9000,
        // Change this to '0.0.0.0' to access the server from outside.
        hostname: '0.0.0.0',
    },
    livereload: {
        options: {
            middleware: function (connect) {
                return [
                    modRewrite(['!\\.html|\\.js|\\.svg|\\.css|\\.png$ /index.html [L]']),
                    lrSnippet,
                    ssInclude(yeomanConfig.app),
                    mountFolder(connect, '.tmp'),
                    mountFolder(connect, yeomanConfig.app)
                ];
            }
        }
    },
    test: {
        options: {
            middleware: function (connect) {
                return [
                    mountFolder(connect, '.tmp'),
                    mountFolder(connect, 'test')
                ];
            }
        }
    },
    dist: {
        options: {
            middleware: function (connect) {
                return [
                    mountFolder(connect, yeomanConfig.dist)
                ];
            }
        }
    }
},
```
how to start an animation on the end of another animation?
I am trying to have a dot fade from red to white and then from white to red. This is what I have thus far:

```
<circle id="test" fill="#ED1C24" cx="96.881" cy="91.953" r="26.485"/>
<animate id="testies" attributeName="fill" from="#ED1C24" to="#fff" xlink:href="#test" dur="2s" fill="freeze" />
<animate attributeName="fill" from="" to="#ED1C24" xlink:href="#test" begin="" dur="2s" fill="freeze" />
```

I want the second animation to begin when the first one ends. I know this is possible, I just can't figure it out.
Using your example, here's how: ``` <circle id="test" fill="#ED1C24" cx="96.881" cy="91.953" r="26.485"/> <animate id="testies" attributeName="fill" from="#ED1C24" to="#fff" xlink:href="#test" dur="2s" fill="freeze" /> <animate attributeName="fill" from="" to="#ED1C24" xlink:href="#test" begin="testies.end" dur="2s" fill="freeze" /> ``` or as an equivalent alternative without the `xlink:href` syntax: ``` <circle id="test" fill="#ED1C24" cx="96.881" cy="91.953" r="26.485"> <animate id="testies" attributeName="fill" from="#ED1C24" to="#fff" dur="2s" fill="freeze" /> <animate attributeName="fill" from="" to="#ED1C24" begin="testies.end" dur="2s" fill="freeze" /> </circle> ``` So, basically just add the id of the element you want to trigger the other animation from and add a ".end" suffix. You can also specify ".begin" to trigger on the beginning of an animation, or add a time offset, e.g `begin="someId.end+2s"`. It's also possible to use events to trigger animations, the syntax is similar: id followed by a dot and then the name of the event and optionally a time offset. See the list of events that are required in SVG 1.1 [here](http://www.w3.org/TR/SVG11/interact.html#SVGEvents) (the leftmost column labeled "Event name" is what applies here). If you're not scared of specifications see the [full syntax of the begin attribute](http://www.w3.org/TR/SVG11/animate.html#BeginValueListSyntax) for all the details.
Sequelize: adding paranoid configuration to an existing table
I created a table without the paranoid option, and I now want to change that table's definition to use paranoid. I don't want to re-create the database since it's already in production. How can I do that using migrations? Should I use addColumn with deletedAt and just add the paranoid definition to the model, or is there a better way?
I added the deletedAt field using a migration like this:

```
"use strict";

module.exports = {
  up: function(migration, DataTypes, done) {
    // add altering commands here, calling 'done' when finished
    migration.addColumn(
      'mytablename',
      'deletedAt',
      {
        type: DataTypes.DATE,
        allowNull: true,
        validate: {
        }
      }
    );
    done();
  },

  down: function(migration, DataTypes, done) {
    // add reverting commands here, calling 'done' when finished
    migration.removeColumn('mytablename', 'deletedAt');
    done();
  }
};
```

And added the configuration:

```
paranoid: true,
```

to my model. It seems to work. Does anyone have a better solution?
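For reference, the `paranoid` flag belongs in the options object of the model definition. A minimal sketch (the model name is taken from the migration above; the attribute list is elided):

```
module.exports = function(sequelize, DataTypes) {
  return sequelize.define('mytablename', {
    // ... existing attributes ...
  }, {
    timestamps: true, // paranoid mode requires timestamps to be enabled
    paranoid: true    // destroy() now sets deletedAt instead of removing the row
  });
};
```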
How can I throw CHECKED exceptions from inside Java 8 lambdas/streams? How can I throw CHECKED exceptions from inside Java 8 lambda, used in a stream for example? In other words, I want to make code like this compile: ``` public List<Class> getClasses() throws ClassNotFoundException { List<Class> classes = Stream.of("java.lang.Object", "java.lang.Integer", "java.lang.String") .map(className -> Class.forName(className)) .collect(Collectors.toList()); return classes; } ``` This code does not compile, since the `Class.forName()` method above throws `ClassNotFoundException`, which is checked. Please note I do NOT want to wrap the checked exception inside a runtime exception and throw the wrapped unchecked exception instead. **I want to throw the checked exception itself**, and without adding ugly `try`/`catches` to the stream.
This `LambdaExceptionUtil` helper class lets you use any checked exceptions in Java streams, like this:

```
Stream.of("java.lang.Object", "java.lang.Integer", "java.lang.String")
      .map(rethrowFunction(Class::forName))
      .collect(Collectors.toList());
```

Note `Class::forName` throws `ClassNotFoundException`, which is **checked**. The stream itself also throws `ClassNotFoundException`, and NOT some wrapping unchecked exception.

```
public final class LambdaExceptionUtil {

    @FunctionalInterface
    public interface Consumer_WithExceptions<T, E extends Exception> {
        void accept(T t) throws E;
    }

    @FunctionalInterface
    public interface BiConsumer_WithExceptions<T, U, E extends Exception> {
        void accept(T t, U u) throws E;
    }

    @FunctionalInterface
    public interface Function_WithExceptions<T, R, E extends Exception> {
        R apply(T t) throws E;
    }

    @FunctionalInterface
    public interface Supplier_WithExceptions<T, E extends Exception> {
        T get() throws E;
    }

    @FunctionalInterface
    public interface Runnable_WithExceptions<E extends Exception> {
        void run() throws E;
    }

    /** .forEach(rethrowConsumer(name -> System.out.println(Class.forName(name)))); or .forEach(rethrowConsumer(ClassNameUtil::println)); */
    public static <T, E extends Exception> Consumer<T> rethrowConsumer(Consumer_WithExceptions<T, E> consumer) throws E {
        return t -> {
            try { consumer.accept(t); }
            catch (Exception exception) { throwAsUnchecked(exception); }
        };
    }

    public static <T, U, E extends Exception> BiConsumer<T, U> rethrowBiConsumer(BiConsumer_WithExceptions<T, U, E> biConsumer) throws E {
        return (t, u) -> {
            try { biConsumer.accept(t, u); }
            catch (Exception exception) { throwAsUnchecked(exception); }
        };
    }

    /** .map(rethrowFunction(name -> Class.forName(name))) or .map(rethrowFunction(Class::forName)) */
    public static <T, R, E extends Exception> Function<T, R> rethrowFunction(Function_WithExceptions<T, R, E> function) throws E {
        return t -> {
            try { return function.apply(t); }
            catch (Exception exception) { throwAsUnchecked(exception); return null; }
        };
    }

    /** rethrowSupplier(() -> new StringJoiner(new String(new byte[]{77, 97, 114, 107}, "UTF-8"))), */
    public static <T, E extends Exception> Supplier<T> rethrowSupplier(Supplier_WithExceptions<T, E> function) throws E {
        return () -> {
            try { return function.get(); }
            catch (Exception exception) { throwAsUnchecked(exception); return null; }
        };
    }

    /** uncheck(() -> Class.forName("xxx")); */
    public static <E extends Exception> void uncheck(Runnable_WithExceptions<E> t) {
        try { t.run(); }
        catch (Exception exception) { throwAsUnchecked(exception); }
    }

    /** uncheck(() -> Class.forName("xxx")); */
    public static <R, E extends Exception> R uncheck(Supplier_WithExceptions<R, E> supplier) {
        try { return supplier.get(); }
        catch (Exception exception) { throwAsUnchecked(exception); return null; }
    }

    /** uncheck(Class::forName, "xxx"); */
    public static <T, R, E extends Exception> R uncheck(Function_WithExceptions<T, R, E> function, T t) {
        try { return function.apply(t); }
        catch (Exception exception) { throwAsUnchecked(exception); return null; }
    }

    @SuppressWarnings("unchecked")
    private static <E extends Throwable> void throwAsUnchecked(Exception exception) throws E {
        throw (E) exception;
    }

}
```

Many other examples on how to use it (after statically importing `LambdaExceptionUtil`):

```
@Test
public void test_Consumer_with_checked_exceptions() throws ClassNotFoundException {
    Stream.of("java.lang.Object", "java.lang.Integer", "java.lang.String")
          .forEach(rethrowConsumer(className -> System.out.println(Class.forName(className))));

    Stream.of("java.lang.Object", "java.lang.Integer", "java.lang.String")
          .forEach(rethrowConsumer(System.out::println));
}

@Test
public void test_Function_with_checked_exceptions() throws ClassNotFoundException {
    List<Class> classes1
          = Stream.of("Object", "Integer", "String")
                  .map(rethrowFunction(className -> Class.forName("java.lang." + className)))
                  .collect(Collectors.toList());

    List<Class> classes2
          = Stream.of("java.lang.Object", "java.lang.Integer", "java.lang.String")
                  .map(rethrowFunction(Class::forName))
                  .collect(Collectors.toList());
}

@Test
public void test_Supplier_with_checked_exceptions() throws ClassNotFoundException {
    Collector.of(
          rethrowSupplier(() -> new StringJoiner(new String(new byte[]{77, 97, 114, 107}, "UTF-8"))),
          StringJoiner::add, StringJoiner::merge, StringJoiner::toString);
}

@Test
public void test_uncheck_exception_thrown_by_method() {
    Class clazz1 = uncheck(() -> Class.forName("java.lang.String"));
    Class clazz2 = uncheck(Class::forName, "java.lang.String");
}

@Test (expected = ClassNotFoundException.class)
public void test_if_correct_exception_is_still_thrown_by_method() {
    Class clazz3 = uncheck(Class::forName, "INVALID");
}
```

**UPDATE as of Nov 2015**

The code has been improved with the help of @PaoloC, [please check his answer below and upvote it](https://stackoverflow.com/questions/27644361/how-can-i-throw-checked-exceptions-from-inside-java-8-streams/30974991#:%7E:text=You%20can!-,Extending%20%40marcg%20%27s,-UtilException%20and%20adding). He helped solve the last problem: now the compiler will ask you to add throws clauses and everything is as if you could throw checked exceptions natively on Java 8 streams.

#### Note 1

The `rethrow` methods of the `LambdaExceptionUtil` class above may be used without fear, and are **OK to use in any situation**.

#### Note 2

The `uncheck` methods of the `LambdaExceptionUtil` class above are bonus methods, and may be safely removed from the class if you don't want to use them. If you do use them, do so with care, and not before understanding the following use cases, advantages/disadvantages and limitations:

- You may use the `uncheck` methods if you are calling a method which literally can never throw the exception that it declares. For example: `new String(byteArr, "UTF-8")` throws `UnsupportedEncodingException`, but UTF-8 is guaranteed by the Java spec to always be present. Here, the throws declaration is a nuisance and any solution to silence it with minimal boilerplate is welcome:

```
String text = uncheck(() -> new String(byteArr, "UTF-8"));
```

- You may use the `uncheck` methods if you are implementing a strict interface where you don't have the option of adding a throws declaration, and yet throwing an exception is entirely appropriate. Wrapping an exception just to gain the privilege of throwing it results in a stacktrace with spurious exceptions which contribute no information about what actually went wrong. A good example is `Runnable.run()`, which does not throw any checked exceptions.

- In any case, if you decide to use the `uncheck` methods, be aware of these two consequences of throwing CHECKED exceptions without a throws clause:

1. The calling code won't be able to catch it by name (if you try, the compiler will say: "Exception is never thrown in body of corresponding try statement"). It will bubble up and probably be caught in the main program loop by some `catch Exception` or `catch Throwable`, which may be what you want anyway.
2. It violates the principle of least surprise: it will no longer be enough to catch `RuntimeException` to be able to guarantee catching all possible exceptions. For this reason, I believe this should not be done in framework code, but only in business code that you completely control.

### References

- <http://www.philandstuff.com/2012/04/28/sneakily-throwing-checked-exceptions.html>
- <http://www.mail-archive.com/javaposse@googlegroups.com/msg05984.html>
- Project Lombok annotation: @SneakyThrows
- Brian Goetz's opinion (against) here: [How can I throw CHECKED exceptions from inside Java 8 lambdas/streams?](https://stackoverflow.com/questions/27644361/how-can-i-throw-checked-exceptions-from-inside-java-8-streams)
- <https://softwareengineering.stackexchange.com/questions/225931/workaround-for-java-checked-exceptions?newreg=ddf0dd15e8174af8ba52e091cf85688e>
Remove the last character using strrchr and substr in PHP
A user may enter abc.com/ instead of abc.com, so I want to do validation using `strrchr`. This works but looks strange:

```
if (strrchr($url, "/") == "/") {
    $url = substr($url, 0, -1);
    echo $url;
}
```

Is there a better way of doing this?
Yes - using the optional second argument to [`trim()` or `rtrim()`](http://php.net/manual/en/function.rtrim.php) you can specify a character list to trim off the end of a string:

```
$url = rtrim($url, '/');
```

If the trailing `/` is present, it will be stripped and the string returned without it. If not present, the string will be returned in its original form.

```
// URL with trailing /
$url1 = 'example.com/';
echo rtrim($url1, '/');
// example.com

// URL without trailing /
$url2 = 'example.com';
echo rtrim($url2, '/');
// example.com

// URL with many trailing /
$url3 = 'example.com/////';
echo rtrim($url3, '/');
// example.com
```

Add additional characters into the string with `'/'` if you want to strip whitespace as well, as in `rtrim($url, ' /')`.

If you merely want to test if the last character of the URL was `/`, you may do so with `substr()`:

```
if (substr($url, -1) === '/') {
    // true
}
```
Does file ".bash\_history" always record every command I ever issue? I'd like to be able to look through my command history (all the way back to the beginning of the user). Is there any guarantee that file *.bash\_history* will continue to be appended to? If there is a limit where the file will start to be truncated (hopefully from the beginning), is there a way to remove that limit?
There are a number of environment variables that control how history works in Bash. The relevant excerpt from the bash manpage follows:

```
HISTCONTROL
       A colon-separated list of values controlling how commands are saved on the history list.  If the list of values includes ignorespace, lines
       which begin with a space character are not saved in the history list.  A value of ignoredups causes lines matching the previous history entry
       to not be saved.  A value of ignoreboth is shorthand for ignorespace and ignoredups.  A value of erasedups causes all previous lines matching
       the current line to be removed from the history list before that line is saved.  Any value not in the above list is ignored.  If HISTCONTROL
       is unset, or does not include a valid value, all lines read by the shell parser are saved on the history list, subject to the value of
       HISTIGNORE.  The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the
       value of HISTCONTROL.

HISTFILE
       The name of the file in which command history is saved (see HISTORY below).  The default value is ~/.bash_history.  If unset, the command
       history is not saved when an interactive shell exits.

HISTFILESIZE
       The maximum number of lines contained in the history file.  When this variable is assigned a value, the history file is truncated, if
       necessary, by removing the oldest entries, to contain no more than that number of lines.  The default value is 500.  The history file is also
       truncated to this size after writing it when an interactive shell exits.

HISTIGNORE
       A colon-separated list of patterns used to decide which command lines should be saved on the history list.  Each pattern is anchored at the
       beginning of the line and must match the complete line (no implicit `*' is appended).  Each pattern is tested against the line after the
       checks specified by HISTCONTROL are applied.  In addition to the normal shell pattern matching characters, `&' matches the previous history
       line.  `&' may be escaped using a backslash; the backslash is removed before attempting a match.  The second and subsequent lines of a
       multi-line compound command are not tested, and are added to the history regardless of the value of HISTIGNORE.

HISTSIZE
       The number of commands to remember in the command history (see HISTORY below).  The default value is 500.
```

To answer your questions directly: No, there isn't a guarantee, since history can be disabled, some commands may not be stored (e.g. starting with a white space) and there may be a limit imposed on the history size.

As for the history size limitation: if you unset `HISTSIZE` and `HISTFILESIZE`:

```
unset HISTSIZE
unset HISTFILESIZE
```

you'll prevent the shell from truncating your history file. However, if you have an instance of a shell running that has these two variables set, it will truncate your history while exiting, so the solution is quite brittle. In case you absolutely must maintain long term shell history, you should not rely on shell and copy the files regularly (e.g., using a [cron](https://en.wikipedia.org/wiki/Cron) job) to a safe location.

History truncation always removes the oldest entries first as stated in the [man page](https://en.wikipedia.org/wiki/Man_page) excerpt above.
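If you go the archival route, a minimal sketch of such a cron entry (the destination directory is an assumption and must already exist) that snapshots the history file daily at 03:00 could be:

```
# m h dom mon dow  command  (note: % must be escaped as \% inside a crontab)
0 3 * * * cp ~/.bash_history ~/history-archive/bash_history.$(date +\%F)
```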
How do I check that a string contains only digits and / in python?
I am trying to check that a string contains only / and digits, to use as a form of validation, however I cannot find any way to do both. At the moment I have this:

```
if Variable.isdigit() == False:
```

This works for the digits, but I have not found a way to also check for the slashes.
There are many options, as shown here. A nice one would be list comprehensions. Let's consider two strings, one that satisfies the criteria and another that doesn't:

```
>>> match = "123/456/"
>>> no_match = "123a456/"
```

We can check whether a single character matches by using `isdigit()` and a comparison:

```
>>> match[0].isdigit() or match[0] == '/'
True
```

But we want to know if all chars match. We can get a list of results by using [list comprehensions](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):

```
>>> [c.isdigit() or c == '/' for c in match]
[True, True, True, True, True, True, True, True]
>>> [c.isdigit() or c == '/' for c in no_match]
[True, True, True, False, True, True, True, True]
```

Note that the list of the non-matching string has `False` at the same position as the `'a'` char. Since we want *all* chars to match, we can use the [`all()` function](https://stackoverflow.com/questions/19389490/how-pythons-any-and-all-functions-work). It expects an iterable of values; if at least one of them is false, then it returns false:

```
>>> all([c.isdigit() or c == '/' for c in match])
True
>>> all([c.isdigit() or c == '/' for c in no_match])
False
```

## Bonus points

### Put it in a function

You would do better to put it in a function:

```
>>> def digit_or_slash(s):
...     return all([c.isdigit() or c == '/' for c in s])
...
>>> digit_or_slash(match)
True
>>> digit_or_slash(no_match)
False
```

### Generator expressions

[Generator expressions](https://docs.python.org/2/reference/expressions.html#generator-expressions) tend to be more efficient:

```
>>> def digit_or_slash(s):
...     return all(c.isdigit() or c == '/' for c in s)
...
```

But in your case it is probably negligible anyway.

### What about `in`?

I would prefer to use the `in` operator, as below:

```
>>> def digit_or_slash(s):
...     return all(c in "0123456789/" for c in s)
```

Note that this is only one of the options. Sadly, your problem fails this [Zen of Python recommendation](https://www.python.org/dev/peps/pep-0020/) (`>>> import this`):

> There should be one- and preferably only one -obvious way to do it.

But that's OK; now you can choose whatever you prefer :)
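If you end up preferring regular expressions over the comprehension (a different technique than the ones above, shown only for contrast), `re.fullmatch` (available since Python 3.4) does the whole check in one call:

```
import re

def digit_or_slash(s):
    # True only if s is non-empty and every character is a digit or '/'
    return bool(re.fullmatch(r'[0-9/]+', s))
```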
Python split words before ':' character but not time structure
I'm trying to split words before the `:` character using regex. However, I don't want to split time structures, like 15:46:00 hrs. All the information I have is inside a `for` loop, and comes from scraping a PDF file. So, using this code:

```
for item in result:
    for key in keyHeaders:
        if key in item.encode('utf-8'):
            item = item.replace(key, '')
            if ':' in item:
                item = item.replace(':', ':\n')
```

Output:

```
15:
46:
00
State:
NY
Phone:
x-xxx-xxx
```

Using regex or non-regex, how can I split specifically words but not numbers joined by the `:` character? I tried this, but nothing happens. In fact, it doesn't split anything.

```
for item in result:
    for key in keyHeaders:
        if key in item.encode('utf-8'):
            item = item.replace(key, '')
            lines = re.compile(r'(?<!\\d\\d):(?!\\d\\d)')  # expect split words before ':'
            if item == re.findall(lines, item):
                item = item.replace(':', ':\n')
```

output:

```
15:46:00
State:NY
Phone:x-xxx-xxx
```

Thanks for your support!
You have two issues with your code. First, you used a raw string but still doubled the backslashes in '\\d'; change those to '\d'. The other issue is that you're comparing the entire item with the value returned by re.findall. If your regexp were correct, re.findall would have returned only a ':' for the items that are not dates, so you should either compare with ':' or just check that anything was returned.

Your regexp is also overly complex for a relatively simple match. I would use something like:

```
if not re.findall(r'\d\d:\d\d:\d\d', item):
    item = item.replace(':', ':\n')
```

There are also likely simpler ways to do the whole job with re.sub or re.split, but this should get you over your current hurdle.
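For the record, re.sub can do the replacement in one step. A sketch under the same "don't touch hh:mm:ss" requirement (the lookarounds skip any colon with a digit on either side, so something like "Time:15" would also be left alone):

```
import re

for item in result:
    # Insert the newline only after a ':' that has no digit on either side
    item = re.sub(r'(?<!\d):(?!\d)', ':\n', item)
```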
Send Base64 image from Android with JSON to php webservice, decode, save to SQL
Like the description says, I am taking a photo in Android. It is compressed and added to a `byte[]`, then base64-encoded. It is sent with **JSON** to my webservice where it is "supposed" to be decoded and saved in a **SQL table row**. I can save the encoded string in a separate row so I know it's getting there. Can anyone look at this and show me where I am going wrong? Sorry for the lengthy code; I don't want to miss something in case someone offers help!

**ANDROID SIDE**

```
@Override
protected String doInBackground(String... args) {
    // TODO Auto-generated method stub
    // Check for success tag
    int success;

    stream = new ByteArrayOutputStream();
    picture.compress(Bitmap.CompressFormat.JPEG, 50, stream);
    image = stream.toByteArray();
    String ba1 = Base64.encodeToString(image, Base64.DEFAULT);

    SharedPreferences sp = PreferenceManager.getDefaultSharedPreferences(MainScreen.this);
    String post_username = sp.getString("username", "anon");
    try {
        ArrayList<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("username", post_username));
        params.add(new BasicNameValuePair("picture", ba1));

        JSONObject json = jsonParser.makeHttpRequest(POST_COMMENT_URL, "POST", params);

        success = json.getInt(TAG_SUCCESS);
        if (success == 1) {
            Log.d("Picture Added!", json.toString());
            //finish();
            return json.getString(TAG_MESSAGE);
        } else {
            Log.d("Upload Failure!", json.getString(TAG_MESSAGE));
            return json.getString(TAG_MESSAGE);
        }
    } catch (JSONException e) {
        e.printStackTrace();
    }
    return null;
}

protected void onPostExecute(String file_url) {
    // dismiss the dialog once product deleted
    pDialog.dismiss();
    if (file_url != null) {
        Toast.makeText(MainScreen.this, file_url, Toast.LENGTH_LONG)
                .show();
    }
}
```

**PHP SIDE**

```
<?php
require("config.inc.php");
if (!empty($_POST)) {
    $user = $_POST['username'];
    $data = $_POST['picture'];
    $data = base64_decode($data);
    $im = imagecreatefromstring($data);
    header('Content-Type: image/jpeg', true);
    ob_start();
    imagejpeg($im);
    $imagevariable = ob_get_contents();
    ob_end_clean();

    $query = "INSERT INTO pictures ( username, photo, rawdata ) VALUES ( :user, :photo, :raw ) ";
    $query_params = array(
        ':user' => $user,
        ':photo' => $imagevariable,
        ':raw' => $_POST['picture']
    );
    try {
        $stmt = $db->prepare($query);
        $result = $stmt->execute($query_params);
    } catch (PDOException $ex) {
        $response["success"] = 0;
        $response["message"] = "Database Error. Couldn't add post!";
        die(json_encode($response));
    }
    $response["success"] = 1;
    $response["message"] = "Picture Successfully Added!";
    echo json_encode($response);
} else {
}
?>
```
I wanted to post my solution in case anyone else is having trouble with this. I always come to S.O. for answers, so now it's my turn to help someone out.

I was having problems with out-of-memory errors using the bitmaps, so I changed it to a multipart post that uploads the picture as a file together with a string (the user's name here), but you could add any strings you need. The first part is the Android side and below that is the PHP for the database. The picture is written to a file in the upload directory with PHP's move_uploaded_file() method, and the database stores the path to that picture. I searched for two days to piece it together, all from Stack Overflow posts.

**ANDROID**

```
public void onClick(View v) {
    if (v.getId() == R.id.capture_btn) {
        try {
            Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
            startActivityForResult(intent, CAMERA_IMAGE_CAPTURE);
        } catch (ActivityNotFoundException anfe) {
            String errorMessage = "Whoops - your device doesn't support capturing images!";
            Toast toast = Toast.makeText(this, errorMessage, Toast.LENGTH_SHORT);
            toast.show();
        }
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == CAMERA_IMAGE_CAPTURE && resultCode == Activity.RESULT_OK) {
        getLastImageId();
        new PostPicture().execute();
    }
}

private int getLastImageId() {
    // TODO Auto-generated method stub
    final String[] imageColumns = { MediaStore.Images.Media._ID, MediaStore.Images.Media.DATA };
    final String imageOrderBy = MediaStore.Images.Media._ID + " DESC";
    Cursor imageCursor = managedQuery(
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI, imageColumns,
            null, null, imageOrderBy);

    if (imageCursor.moveToFirst()) {
        int id = imageCursor.getInt(imageCursor
                .getColumnIndexOrThrow(MediaStore.Images.Media._ID));
        fullPath = imageCursor.getString(imageCursor
                .getColumnIndex(MediaStore.Images.Media.DATA));
        Log.d("pff", "getLastImageId: :id " + id);
        Log.d("pff", "getLastImageId: :path " + fullPath);
        return id;
    } else {
        return 0;
    }
}

class PostPicture extends AsyncTask<String, String, String> {

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
        pDialog = new ProgressDialog(MainScreen.this);
        pDialog.setMessage("Uploading Picture");
        pDialog.setIndeterminate(false);
        pDialog.setCancelable(true);
        pDialog.show();
    }

    @Override
    protected String doInBackground(String... args) {
        // TODO Auto-generated method stub
        // Check for success tag
        HttpClient client = new DefaultHttpClient();
        HttpPost post = new HttpPost("http://www.your-php-page.php");
        try {
            MultipartEntity entity = new MultipartEntity(
                    HttpMultipartMode.BROWSER_COMPATIBLE);

            File file = new File(fullPath);
            cbFile = new FileBody(file, "image/jpeg");

            Log.d("sending picture", "guest name is " + guest_name);
            Log.d("Sending picture", "guest code is " + guest_code);
            entity.addPart("name", new StringBody(guest_name, Charset.forName("UTF-8")));
            entity.addPart("code", new StringBody(guest_code, Charset.forName("UTF-8")));
            entity.addPart("picture", cbFile);
            post.setEntity(entity);

            HttpResponse response1 = client.execute(post);
            HttpEntity resEntity = response1.getEntity();
            String Response = EntityUtils.toString(resEntity);
            Log.d("Response", Response);
        } catch (IOException e) {
            Log.e("asdf", e.getMessage(), e);
        }
        return null;
    }

    protected void onPostExecute(String file_url) {
        // dismiss the dialog once product deleted
        pDialog.dismiss();
        if (file_url != null) {
            Toast.makeText(MainScreen.this, file_url, Toast.LENGTH_LONG)
                    .show();
        }
    }
}
```

And this is the **PHP**.
Also note that I am including my database login page (config.inc.php). You could put your DB password and login details inline here instead, but I chose not to.

```
<?php
require("config.inc.php");
if (!empty($_POST)) {
    if (empty($_POST['name'])) {
        $response["success"] = 0;
        $response["message"] = "Did not receive a name";
        die(json_encode($response));
    } else {
        $name = $_POST['name'];
    }
    if (empty($_FILES['picture'])) {
        $response["success"] = 0;
        $response["message"] = "Did not receive a picture";
        die(json_encode($response));
    } else {
        $file = $_FILES['picture'];
    }

    $target_path = "uploads/whatever-you-want-it-to-be/"; // It could be any string value above

    /* Add the original filename to our target path.
    Result is "uploads/filename.extension" */
    $target_path = $target_path . basename($_FILES['picture']['name']);
    if (move_uploaded_file($_FILES['picture']['tmp_name'], $target_path)) {
        echo "The file " . basename($_FILES['picture']['name']) . " has been uploaded";
    } else {
        $response["success"] = 0;
        $response["message"] = "Database Error. Couldn't upload file.";
        die(json_encode($response));
    }

    $directory = $target_path; // the path stored in the database (the original post never set this variable)

    $query = "INSERT INTO name-of-table ( directory, name, photo ) VALUES ( :directory, :name, :photo ) ";
    $query_params = array(
        ':directory' => $directory,
        ':name' => $name,
        ':photo' => $_FILES['picture']['name']
    );
    try {
        $stmt = $db->prepare($query);
        $result = $stmt->execute($query_params);
    } catch (PDOException $ex) {
        $response["success"] = 0;
        $response["message"] = "Database Error. Couldn't add path to picture";
        die(json_encode($response));
    }
    $response["success"] = 1;
    $response["message"] = "Picture Successfully Added!";
    die(json_encode($response));
} else {
    $response["success"] = 0;
    $response["message"] = "You have entered an incorrect code. Please try again.";
    die(json_encode($response));
}
?>
```
Interpreting a list of free monads vs. interpreting a free monad of a list
I'm learning functional programming and have some (maybe obvious, but not for me :) ) questions about monads. Every monad is an applicative functor. An applicative functor, in turn, can be defined as a higher-kinded type as follows (`pure` method omitted):

```
trait ApplicativeFunctor[F[_]]{
  def ap[A, B](fa: F[A])(f: F[A => B]): F[B]
}
```

As far as I understand, this typeclass means that we can take two values of `F[A]`, `F[B]` and a function `(A, B) => C` and construct `F[C]`. This property enables us to construct the list reversing function:

```
def reverseApList[F[_]: ApplicativeFunctor, A](lst: List[F[A]]): F[List[A]]
```

Suppose we have

```
trait SomeType[A]
```

Now consider

```
type MyFree[A] = Free[SomeType, A]

val nt: NaturalTransformation[SomeType, Id]
val lst: List[MyFree[Int]]
```

***QUESTION:*** Why are `lst.map(_.foldMap(nt))` and `reverseApList(lst).foldMap(nt)` the same? Does it follow from the applicative functor laws, or is there another reason? Can you please explain?
It follows from the laws of Traversable functors.

First, realize that `_.foldMap(nt)` is itself a natural transformation from `MyFree` to `Id`. Moreover, by the very definition of what it means to be a free monad, it is required to be a *monad homomorphism*¹ (for any `nt`).

Let's start from your

```
reverseApList(lst).foldMap(nt)
```

which can also be written as

```
lst.sequence.foldMap(nt)
```

Now we are going to apply the [naturality law of Traversable functors](https://github.com/scalaz/scalaz/blob/fdbf1e3696b03929c39f3cf688460d2169116999/core/src/main/scala/scalaz/Traverse.scala#L175-L186), with `_.foldMap(nt)` as the natural transformation `nat`. For it to be applicable, our natural transformation has to be an *applicative homomorphism*, which is expressed by [the two extra conditions](https://github.com/scalaz/scalaz/blob/fdbf1e3696b03929c39f3cf688460d2169116999/core/src/main/scala/scalaz/Traverse.scala#L177-L178). But we already know that our natural transformation is a monad homomorphism, which is even stronger (preserves more structure) than an applicative homomorphism. We may therefore proceed to apply this law and obtain

```
lst.map(_.foldMap(nt)).sequence : Id[List[Int]]
```

Now using just the laws in the linked scalaz file it is provable (although in a roundabout way) that this last `sequence` through `Id` is actually a no-op. We get

```
lst.map(_.foldMap(nt)) : List[Id[Int]]
```

which is what we wanted to show.

---

¹ A natural transformation `h: M ~> N` is a monad homomorphism if it preserves the monadic structure, i.e. if it satisfies

- for any `a: A`: `h(Monad[M].point[A](a)) = Monad[N].point[A](a)`
- for any `ma: M[A]` and `f: A => M[B]`: `h(ma.flatMap(f)) = h(ma).flatMap(a => h(f(a)))`
SonarQube Technical Debt management with Quality Gate
Configuring a custom Quality Gate, the default SonarQube Way has been taken as an initial reference and further adjusted and customized (adding further checks). Our current quality gate looks as follows (old version vs current version):

```
Blocker issues: error threshold at 0
Complexity/class: error threshold at 12
Complexity/file: error threshold at 12
Complexity/function error threshold at 2
Coverage error threshold at 100 >> changed to 65
Critical issues error threshold at 0
Duplicated lines (%) error threshold at 5
Info issues error threshold at 10
Major issues error threshold at 50
Minor issues error threshold at 100
Overall coverage error threshold at 100 >> changed to 65
Public documented API (%) error threshold at 50
Skipped Unit tests error threshold at 0
Technical Debts error threshold at 10d >> change to (?? < 10)
Unit test errors error threshold at 0
Unit test failures error threshold at 0
```

The main point is about the Technical Debt days, which should be tightened from 10 to something smaller, given that other checks have been relaxed (complexity and coverage). This is indeed reasonable: relaxing some rules leaves more margin for controlled technical debt, and hence calls for a shorter threshold for the number of accumulated days of uncontrolled technical debt. However, the overall quality gate should somehow (mathematically?) follow a certain proportion.

**Question**: how to calculate the most appropriate technical debt threshold given the relaxations above?

From an [old article](http://www.sonarqube.org/evaluate-your-technical-debt-with-sonar/) (2009, hence most probably not applicable any longer) the following formula can be deduced:

```
TechDebt = (cost_to_fix_one_block * duplicated_blocks) + \
           (cost_to_fix_one_violation * mandatory_violations) + \
           (cost_to_comment_one_API * public_undocumented_api) + \
           (cost_to_cover_one_of_complexity * uncovered_complexity_by_tests) + \
           (cost_to_split_a_method * function_complexity_distribution) + \
           (cost_to_split_a_class * class_complexity_distribution)
```

Note: `\` added for readability.

However, there are too many unknown variables to make a proper calculation, and it does not cover all of the quality gate items above (again, it's an old reference). Other more recent [sources](http://docs.sonarqube.org/display/SONAR/Metric+Definitions) explain the items concerned in detail, but not how to adjust values **in a proportionate manner**.

The `sonar.technicalDebt.developmentCost` (*Admin* / *Configuration* / *Technical Debt*) has a default value of **30** minutes, which means 1 LOC (the cost to develop 1 line of code) = 30 minutes, but this is still not at the granularity level of the variables above, nor useful in this case.
A Quality Gate is made up of a set of conditions. Your list of conditions is far longer than the one in the default quality gate; most of the conditions you list aren't in the default quality gate. It looks instead as though you've edited the default thresholds of a number of rules.

And in a sense, you're talking about apples and oranges. A Technical Debt threshold can be included in a Quality Gate, but by default is not. Instead, the Technical Debt Ratio on New Code is included in the default QG.

But the concept of the Technical Debt Ratio does have bearing on your question. If you set a hard threshold on technical debt in a quality gate, small projects will have an easier time passing the QG than large projects. If you instead use the Technical Debt Ratio or the Technical Debt Ratio on New Code (recommended), then you're setting your quality gate on a ratio of technical debt to code base size, so every project has the same chance of passing or failing. The formula is this:

> Remediation cost / (Cost to develop 1 line of code \* Number of lines of code)

with an estimated line development cost of 30 min. This value is editable, BTW: *Administration > Technical Debt > Development cost*.

The default quality gate includes a Technical Debt Ratio on New Code error threshold of 5%.
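To make the ratio concrete, a quick worked example with hypothetical numbers and the default 30-minute line cost:

```
development cost     = 10,000 lines x 30 min/line = 300,000 min
remediation cost     = 15,000 min of technical debt
technical debt ratio = 15,000 / 300,000 = 5%  (right at the default error threshold)
```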
jQuery Detect browser IE9 and below and throw up a modal to upgrade
I want to be able to detect IE9 or less using jQuery (or a better method, if there is one). If the browser version is IE9 or less, I then want to load a modal with the option to upgrade to Chrome, FF etc. I have read that $.browser doesn't work anymore, so I'm just wondering what the best way is to achieve what I want:

```
$(document).ready(function(){
    /* Get browser */
    $.browser.chrome = /chrome/.test(navigator.userAgent.toLowerCase());

    /* Detect Chrome */
    if($.browser.chrome){
        /* Do something for Chrome at this point */
        alert("You are using Chrome!");
        /* Finally, if it is Chrome then jQuery thinks it's Safari so we have to tell it isn't */
        $.browser.safari = false;
    }

    /* Detect Safari */
    if($.browser.safari){
        /* Do something for Safari */
        alert("You are using Safari!");
    }
});
```
Don't use jQuery for detecting IE versions. Use conditional comments instead. Why? Well think about why you want to target these old versions in the first place. It is probably because they don't support certain JS/CSS features that you need. So do you really want to maintain your own JS code that you are sure will run in these older versions? If you go that route then you need to start thinking about whether the detection code you write will work in IE6 or 5 or 4... painful! Instead try the following: 1. Add your modal/banner element to your HTML. 2. In your main css file, hide this element using display: none. This ensures that recent versions of IE and non-IE browsers will not see it. 3. Create an IE only css or js file that will reveal this element. 4. Include this file inside conditional comments that target the IE versions that you want. The example below uses a simple reveal using CSS but you can easily replace this with JS if you prefer. Just don't make it too complicated or it could easily break in early IE versions. ``` #index.html <html> <head> <link rel="stylesheet" type="text/css" href="main.css"> <!--[if lte IE 9]> <link rel="stylesheet" type="text/css" href="ie-only.css"> <![endif]--> </head> <body> <div class="ie-only">Go upgrade your browser!</div> </body> </html> #main.css .ie-only { display: none } #ie-only.css .ie-only { display: block } ``` Here is a useful [conditional comments reference](http://www.quirksmode.org/css/condcom.html).
How to show multiple markers on MapFragment in Google Map API v2? I am using `Google Map API v2` in my application to show Maps. I have followed all the steps, that is to be followed to enable Google Map in my application. ``` public class PinLocationOnMapView extends FragmentActivity { private double mLatitude = 0.0, mLongitude = 0.0; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); SupportMapFragment fragment = SupportMapFragment.newInstance(); getSupportFragmentManager().beginTransaction() .add(android.R.id.content, fragment).commit(); } } ``` If I use this code, it shows me map, but if I provide my latitude/longitude values, map tiles does not load, and I see only white tiles. Following is the code written in onCreate() of above class: ``` if (getIntent().getExtras() != null) { final Bundle bundle = getIntent().getBundleExtra("LOCATION"); mLatitude = bundle.getDouble("LATITUDE"); mLongitude = bundle.getDouble("LONGITUDE"); } else { finish(); } GoogleMapOptions options = new GoogleMapOptions(); LatLng latLng = new LatLng(mLatitude, mLongitude); CameraPosition cameraPosition;// = new CameraPosition(latLng, 0, 0, 0); cameraPosition = CameraPosition.fromLatLngZoom(latLng, (float) 14.0); options.mapType(GoogleMap.MAP_TYPE_SATELLITE).camera(cameraPosition) .zoomControlsEnabled(true).zoomGesturesEnabled(true); SupportMapFragment fragment = SupportMapFragment.newInstance(options); getSupportFragmentManager().beginTransaction() .add(android.R.id.content, fragment).commit(); ``` Also, I have a list of lat/long values. I want to show them on `MapFragment`, **how to show multiple markers on `MapFragment`**? I tried with `MapView` and `ItemizedOverlay`, but it didn't work for me. I believe I have correctly created the `SHA1` key to get the `API` key, because if that was wrong, I could not see map using `MapFragment` as well, but I can see that if I don't pass the lat/log value.
I do it like this to show car positions on the map with markers of different colors: ``` private void addMarkersToMap() { mMap.clear(); for (int i = 0; i < Cars.size(); i++) { LatLng ll = new LatLng(Cars.get(i).getPos().getLat(), Cars.get(i).getPos().getLon()); BitmapDescriptor bitmapMarker; switch (Cars.get(i).getState()) { case 0: bitmapMarker = BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE_RED); Log.i(TAG, "RED"); break; case 1: bitmapMarker = BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE_GREEN); Log.i(TAG, "GREEN"); break; case 2: bitmapMarker = BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE_ORANGE); Log.i(TAG, "ORANGE"); break; default: bitmapMarker = BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE_RED); Log.i(TAG, "DEFAULT"); break; } mMarkers.add(mMap.addMarker(new MarkerOptions().position(ll).title(Cars.get(i).getName()) .snippet(getStateString(Cars.get(i).getState())).icon(bitmapMarker))); Log.i(TAG,"Car number "+i+" was added " +mMarkers.get(mMarkers.size()-1).getId()); } } } ``` Cars is an `ArrayList` of custom objects and mMarkers is an `ArrayList` of markers. Note : You can show map in fragment like this: ``` private GoogleMap mMap; .... private void setUpMapIfNeeded() { // Do a null check to confirm that we have not already instantiated the // map. if (mMap == null) { // Try to obtain the map from the SupportMapFragment. mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map)).getMap(); // Check if we were successful in obtaining the map. if (mMap != null) { setUpMap(); } } } private void setUpMap() { // Hide the zoom controls as the button panel will cover it. mMap.getUiSettings().setZoomControlsEnabled(false); // Add lots of markers to the map. addMarkersToMap(); // Setting an info window adapter allows us to change the both the // contents and look of the // info window. mMap.setInfoWindowAdapter(new CustomInfoWindowAdapter()); // Set listeners for marker events. See the bottom of this class for // their behavior. mMap.setOnMarkerClickListener(this); mMap.setOnInfoWindowClickListener(this); mMap.setOnMarkerDragListener(this); // Pan to see all markers in view. // Cannot zoom to bounds until the map has a size. final View mapView = getSupportFragmentManager().findFragmentById(R.id.map).getView(); if (mapView.getViewTreeObserver().isAlive()) { mapView.getViewTreeObserver().addOnGlobalLayoutListener(new OnGlobalLayoutListener() { @SuppressLint("NewApi") // We check which build version we are using. @Override public void onGlobalLayout() { LatLngBounds.Builder bld = new LatLngBounds.Builder(); for (int i = 0; i < mAvailableCars.size(); i++) { LatLng ll = new LatLng(Cars.get(i).getPos().getLat(), Cars.get(i).getPos().getLon()); bld.include(ll); } LatLngBounds bounds = bld.build(); mMap.moveCamera(CameraUpdateFactory.newLatLngBounds(bounds, 70)); mapView.getViewTreeObserver().removeGlobalOnLayoutListener(this); } }); } } ``` And just call `setUpMapIfNeeded()` in `onCreate()`
Restful PATCH on collection to update sorting parameter in bulk We have a big list ("collection") with a number of entities ("items"). This is all managed via a RESTful interface. The items are manually sortable via an `order` property on the item. When queried, the database lists all items in a collection based on the order. Now we want to expose this mechanism to users where they can update the complete sorting of all items in one call. The database does not allow the same `order` for the same `collection_id` (unique `collection_id` + `order`), so you can't (and definitely shouldn't) update all items one by one. I thought of a PATCH request but not on the resource, so ``` PATCH /collections/123/items/ ``` With a body like ``` [ {'id': 1, 'order': 3}, {'id': 2, 'order': 1}, {'id': 3, 'order': 2} ] ``` However, how do you handle errors for this bulk-type of request? How do you send a response when some update succeeded partially? Is it allowed to PATCH a collection instead of a resource? If this is the wrong line of thought, what is a better approach?
First, answering your questions in the last paragraph: 1. How to handle errors in bulk requests depends a lot on the request. In your case, I think partial success should not be allowed and you should rollback the whole operation and return an error, as the only reason for a failure is someone dealing with an outdated representation. When you are creating or deleting resources in bulk, for instance, it's fine to accept a partial success. 2. You can handle errors in bulk requests by using the `207 Multi-Status` HTTP code. It's a WebDAV code, but it's pretty standard by now. The response should be a document with detailed HTTP status codes and messages for each item. 3. A collection is a resource too, so there's nothing inherently wrong in using `PATCH` with a collection, but... A `PATCH` request should have some sort of diff format as payload, determining the state you want to transition from, and the final state. I wouldn't go with `PATCH` for doing what you want unless you're willing to use a more standardized format. You might want to check the [json-patch](https://www.rfc-editor.org/rfc/rfc6902#section-4.3) format, create a diff between the current and the desired order and see if you like the format. For instance, in your case it would be something like: ``` [{"path": "/0/order", "value": 1, "op": "test"}, {"path": "/0/order", "value": 2, "op": "replace"}, {"path": "/1/order", "value": 2, "op": "test"}, {"path": "/1/order", "value": 3, "op": "replace"}, {"path": "/2/order", "value": 3, "op": "test"}, {"path": "/2/order", "value": 1, "op": "replace"}] ``` Of course, if the client doesn't care about the current order, he can remove the `test` operations. You might also add a precondition with the `If-Unmodified-Since` or `If-Match` header instead. You probably noticed how the format above is completely generic and has no direct coupling to the resource you're changing. You can implement the above in a generic way and reuse that to implement `PATCH` wherever you need it in your API. Anyway, your case is simple enough that I would do it by having another resource with the order in a simple, flat format. Something like this, returning a list of ids in the current order: ``` GET /collections/123/items/ordering [1, 2, 3] ``` And you can change the order by: ``` PUT /collections/123/items/ordering [2, 3, 1] ```
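To make point 2 concrete, a `207 Multi-Status` body for this bulk update could look like the sketch below (the JSON envelope is illustrative; RFC 4918 only standardizes an XML multistatus format for WebDAV):

```
{
  "items": [
    { "id": 1, "status": 200, "message": "order updated" },
    { "id": 2, "status": 409, "message": "conflicting order value" },
    { "id": 3, "status": 200, "message": "order updated" }
  ]
}
```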
What determines the internal order of a Python set? When I read about sets in Python, it is always mentioned that they return items in an arbitrary/random order. What exactly causes this "randomness"?
It's not so much that they are random, but that since the order is undefined, you should make no assumptions about it unless you specifically order the items.

There are a variety of python interpreters. [There are a lot in fact](https://wiki.python.org/moin/PythonImplementations). When you add items to a set, the Python interpreter is responsible for keeping track of them. Each interpreter could keep track of them in different ways. The underlying storage mechanism could change from one version to the next of a single interpreter. It could change storage strategies based on the size of the set. It could change storage strategies based on the processor architecture, based on what day of the week it is, basically any criteria the interpreter developers decide is important.

So the advice to treat the order as random is to form the habit of not depending on non-required behavior in your programs. Sure, chances are good that the order of a set will look stable and predictable, **until suddenly it isn't** and your program explodes. If you always mentally treat the order as arbitrary/random, you'll never be caught by surprise, because you'll always explicitly sort into whatever ordering is required for the task at hand.
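A quick illustration (output from one CPython build; it may differ on other interpreters or versions, which is exactly the point):

```
>>> s = set()
>>> for x in [3, 1, 2, 8]:
...     s.add(x)
...
>>> s                # iteration order follows the hash table, not insertion
{8, 1, 2, 3}
>>> sorted(s)        # if you need an order, ask for one explicitly
[1, 2, 3, 8]
```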
angular-ui-grid: how to highlight row on mouseover?
I want to highlight an entire row of angular-ui-grid when the mouse is on top of it. I have tried to modify the rowTemplate by adding ng-mouseover and ng-mouseleave callbacks which set the background-color. Here is my rowTemplate--I expect the first three lines to change the entire row color to red. But only the cell currently under the mouse gets changed.

```
<div ng-mouseover="rowStyle={'background-color': 'red'}; grid.appScope.onRowHover(this);" ng-mouseleave="rowStyle={}" ng-style="rowStyle" ng-repeat="(colRenderIndex, col) in colContainer.renderedColumns track by col.uid" ui-grid-one-bind-id-grid="rowRenderIndex + '-' + col.uid + '-cell'" class="ui-grid-cell" ng-class="{ 'ui-grid-row-header-cell': col.isRowHeader }" role="{{col.isRowHeader ? 'rowheader' : 'gridcell'}}" ui-grid-cell>
</div>
```

How do I highlight an entire row on mouseover events?

Another approach that I tried: using some callback function to do this--like the appScope.onRowHover in the example. Here, I have the scope of the row being hovered on. How do I go about styling that row within the onRowHover function? Thanks!

[Link to Plunkr](http://plnkr.co/edit/dwjOEn70fTkyVlQdMfYT?p=preview). I expect mousing over a grid cell to turn the entire row red.
Here it is. I made a change in the row template.

<http://plnkr.co/edit/gKqt8JEo2FukS3URRLJ5?p=preview>

RowTemplate:

```
<div ng-mouseover="rowStyle={'background-color': 'red'}; grid.appScope.onRowHover(this);" ng-mouseleave="rowStyle={}">
    <div ng-style="rowStyle" ng-repeat="(colRenderIndex, col) in colContainer.renderedColumns track by col.uid" ui-grid-one-bind-id-grid="rowRenderIndex + '-' + col.uid + '-cell'" class="ui-grid-cell" ng-class="{ 'ui-grid-row-header-cell': col.isRowHeader }" role="{{col.isRowHeader ? 'rowheader' : 'gridcell'}}" ui-grid-cell>
    </div>
</div>
```
Remove prefix letter from column variables
All my column names start with '`m`'. Example: `mIncome, mAge`. I want to remove the prefix. So far, I have tried the following:

```
df %>% rename_all(~stringr::str_replace_all(.,"m",""))
```

This removes every '`m`' from the column names. I just need it removed from the start. Any suggestions?
We need to specify the location. The `^` matches the start of the string (or here, the column name). So, if we use `^m`, it will only match 'm' at the beginning or start of the string and not elsewhere.

```
library(dplyr)
library(stringr)
df %>%
    rename_all(~stringr::str_replace(.,"^m",""))
#  ba Mbgeg gmba cfor
#1  1     2    4    6
#2  2     3    5    7
#3  3     4    6    8
```

Also, if the case should be ignored, wrap with `regex` and specify `ignore_case = TRUE`:

```
df %>%
    rename_all(~ stringr::str_replace(., regex("^m", ignore_case = TRUE), ""))
#  ba bgeg gmba cfor
#1  1    2    4    6
#2  2    3    5    7
#3  3    4    6    8
```

Another option is a word boundary (`\\bm`), but this could match the beginning of words where there are multi-word column names.

NOTE: `str_replace_all` is used when we want to replace multiple occurrences of the `pattern`. Here, we just need to replace the first instance, and for that `str_replace` is enough.

### data

```
df <- data.frame(mba = 1:3, Mbgeg = 2:4, gmba = 4:6, cfor = 6:8)
```
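For contrast (not part of the original answer), the same rename needs no packages in base R:

```
names(df) <- sub("^m", "", names(df))
```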
Aliases not available when using sudo I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
Add the following line to your `~/.bashrc`:

```
alias sudo='sudo '
```

From the [bash manual](http://www.gnu.org/software/bash/manual/bashref.html#Aliases):

> Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The shell maintains a list of aliases that may be set and unset with the alias and unalias builtin commands.
>
> **The first word of each simple command, if unquoted, is checked to see if it has an alias**. If so, that word is replaced by the text of the alias. The characters ‘/’, ‘$’, ‘`’, ‘=’ and any of the shell metacharacters or quoting characters listed above may not appear in an alias name. The replacement text may contain any valid shell input, including shell metacharacters. The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to "ls -F", for instance, and Bash does not try to recursively expand the replacement text. **If the last character of the alias value is a space or tab character, then the next command word following the alias is also checked for alias expansion**.

(Emphasis mine). Bash only checks the first word of a command for an alias; any words after that are not checked. That means in a command like `sudo ll`, only the first word (i.e. `sudo`) is checked by bash for an alias; `ll` is ignored. We can tell bash to check the next word after the alias (i.e. `sudo`) by adding a space to the end of the alias value.
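With that alias in place (and a new shell, or your `~/.bashrc` re-sourced), the failing command from the question should now expand both words; using the question's own example:

```
danny@kaon:~$ sudo ll -d /
drwxr-xr-x 23 root root 4096 2011-01-06 20:29 //
```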
How can I run commands in a running container in AWS ECS using Fargate

If I am running a container in AWS ECS using EC2, then I can access the running container and execute any command in it, i.e.

`docker exec -it <containerid> <command>`

How can I run commands in a running container, or access the container, in AWS ECS using Fargate?
**Update (16 March, 2021):** AWS [announced](https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-execute-commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/) a new feature called **ECS Exec** which provides the ability to exec into a running container on Fargate, or even into one running on EC2. This feature makes use of AWS Systems Manager (SSM) to establish a secure channel between the client and the target container. This detailed blog [post](https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/) from Amazon describes how to use the feature, along with all the prerequisites and configuration steps (a usage sketch is at the bottom of this answer).

**Original Answer:**

With Fargate you don't get access to the underlying infrastructure, so `docker exec` doesn't seem possible. The documentation doesn't mention this explicitly, but it's covered in this [Deep Dive into AWS Fargate presentation](https://www.slideshare.net/AmazonWebServices/deep-dive-into-aws-fargate-88065990) by Amazon, on slide 19:

> Some caveats: can’t exec into the container, or access the underlying host (this is also a good thing)

There's also some discussion about it on this [open issue](https://github.com/aws/amazon-ecs-cli/issues/143) in the ECS CLI GitHub project.

You could try to run an SSH server inside a container to get access, but I haven't tried it or come across anyone doing this. It also doesn't seem like a good approach, so you are limited there.
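Once ECS Exec is enabled, usage from the AWS CLI looks like the sketch below (cluster, task and container names are placeholders; the task must have been launched with `--enable-execute-command`, the task role needs the SSM permissions described in the blog post, and the Session Manager plugin must be installed on your machine):

```
aws ecs execute-command \
    --cluster my-cluster \
    --task <task-id> \
    --container my-container \
    --interactive \
    --command "/bin/sh"
```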
Controlling the FormsAuthentication createPersistentCookie expiration

In an ASP.NET MVC2 app, we have the standard login action...

```
if (ValidateUser(model.Email, model.Password))
{
    FormsAuthentication.SetAuthCookie(model.Email, model.RememberMe);
    ...
```

where the second parameter to SetAuthCookie is `createPersistentCookie`, with the following documentation:

```
createPersistentCookie
Type: System.Boolean
true to create a persistent cookie (one that is saved across browser sessions); otherwise, false.
```

We would like to have the persistent cookie expire after 2 weeks (i.e., a user could return to the site within 2 weeks and not be required to re-authenticate; after that time they would be asked to log in again). How do we set the expiration for the persistent cookie?
Can you not do this?

```
<system.web>
    <authentication mode="Forms">
      <forms timeout="20160"/>
    </authentication>
</system.web>
```

The timeout is in minutes. This timeout value applies whether or not you are creating a persistent cookie. It simply says that if you don't explicitly terminate the cookie (FormsAuthentication.SignOut), it will automatically expire after the given time period.

In other words, if you do:

```
FormsAuthentication.SetAuthCookie(someMembershipName, false);
```

the cookie will expire when:

- The user closes the browser, or
- The timeout is reached.

As opposed to:

```
FormsAuthentication.SetAuthCookie(someMembershipName, true);
```

which will result in the cookie only expiring when the timeout is reached.

HTH

**EDIT**: Taken from [MSDN](http://msdn.microsoft.com/en-us/library/1d3t3c61.aspx), the **timeout** attribute is described as follows:

> Specifies the time, in integer minutes, after which the cookie expires. If the SlidingExpiration attribute is true, the timeout attribute is a sliding value, expiring at the specified number of minutes after the time that the last request was received. To prevent compromised performance, and to avoid multiple browser warnings for users who have cookie warnings turned on, the cookie is updated when more than half of the specified time has elapsed. This might cause a loss of precision. The default is "30" (30 minutes).
>
> Note: Under ASP.NET V1.1 persistent cookies do not time out, regardless of the setting of the timeout attribute. **However, as of ASP.NET V2.0, persistent cookies do time out according to the timeout attribute**.

In other words, this expiration setting handles the Forms Authentication cookie only. The Forms Authentication cookie is a client-side cookie; it has nothing to do with any server-side session state you may have (e.g. a Shopping Cart). That Session is expired with the following setting:

```
<sessionState mode="InProc" cookieless="false" timeout="20" />
```
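If you ever need the persistent cookie's lifetime to differ from the `timeout` in web.config, a common alternative is to skip `SetAuthCookie` and issue the ticket yourself. A minimal sketch (variable names follow the question; the two-week figure is the questioner's requirement):

```
// Inside the login action, instead of FormsAuthentication.SetAuthCookie(...)
var ticket = new FormsAuthenticationTicket(
    1,                        // version
    model.Email,              // user name
    DateTime.Now,             // issue time
    DateTime.Now.AddDays(14), // expiration: two weeks from now
    true,                     // persistent
    String.Empty);            // user data

var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                            FormsAuthentication.Encrypt(ticket))
{
    Expires = ticket.Expiration // tell the browser to keep it for 2 weeks
};
Response.Cookies.Add(cookie);
```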
Application is crashing on MFMailComposeViewController object in iOS 7

I am creating

```
MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
```

but picker is nil and the application is crashing with the error: Terminating app due to uncaught exception `'NSInvalidArgumentException'`, reason: `'Application tried to present a nil modal view controller on target'`. It works fine in the simulator but crashes on a device. How can I use `MFMailComposeViewController` with iOS 7?
You should check whether MFMailComposeViewController is able to send mail before creating and presenting it (for example, the user may not have any mail account configured on the iOS device). So in your case, for Objective-C:

```
if ([MFMailComposeViewController canSendMail]) {
    MFMailComposeViewController *myMailCompose = [[MFMailComposeViewController alloc] init];
    myMailCompose.mailComposeDelegate = self;
    [myMailCompose setSubject:@"Subject"];
    [myMailCompose setMessageBody:@"message" isHTML:NO];
    [self presentViewController:myMailCompose animated:YES completion:nil];
} else {
    // unable to send mail, notify your users somehow
}
```

Swift 3:

```
if MFMailComposeViewController.canSendMail() {
    let myMailCompose = MFMailComposeViewController()
    myMailCompose.mailComposeDelegate = self
    myMailCompose.setSubject("Subject")
    myMailCompose.setMessageBody("message", isHTML: false)
    self.present(myMailCompose, animated: true, completion: nil)
} else {
    // unable to send mail, notify your users somehow
}
```
Why does the database get deleted when an apk has been reinstalled?

I have a problem with an app I developed that runs on Honeycomb. When I reinstall the apk, its database gets deleted. This did not happen until about a week ago. Why is this happening now? What can cause the db to be deleted, and how can I prevent it?
If your database is stored in the application's internal storage, `/data/data/<package_name>/databases/`, then **when your application is uninstalled from the device, all directories under your application package are removed, and your database with them**. To prevent this, put a **copy of the database** in the application's `/assets` directory; the first time your application runs, copy the database from assets to the internal storage path. That way you can restore it whenever the application is re-installed. You can also put your database on the `/sdcard`, but the user can delete it there too.

**EDIT:** [Using your own SQLite database in Android applications](http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/) and [How to ship an Android application with a database?](https://stackoverflow.com/questions/513084/how-to-ship-an-android-application-with-a-database)

Thanks...
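A minimal sketch of that first-run copy step (`DB_NAME` is a placeholder for the file you ship in `/assets`):

```
import android.content.Context;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class DatabaseCopier {
    private static final String DB_NAME = "mydb.sqlite"; // placeholder name

    // Copy the prepackaged database from assets into the app's database
    // directory, but only if it wasn't copied on a previous run.
    public static void copyFromAssets(Context context) throws IOException {
        File dbFile = context.getDatabasePath(DB_NAME);
        if (dbFile.exists()) return;     // already present, nothing to do
        dbFile.getParentFile().mkdirs(); // databases/ may not exist yet

        InputStream in = context.getAssets().open(DB_NAME);
        OutputStream out = new FileOutputStream(dbFile);
        byte[] buffer = new byte[4096];
        int length;
        while ((length = in.read(buffer)) > 0) {
            out.write(buffer, 0, length);
        }
        out.flush();
        out.close();
        in.close();
    }
}
```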
Exception during list comprehension. Are intermediate results kept anywhere?

When using try-except around a for loop, the iterations completed before the exception obviously keep their effect:

```
a = [1, 2, 3, 'text', 5]

b = []
try:
    for k in range(len(a)):
        b.append(a[k] + 4)
except:
    print('Error!')
print(b)
```

This results in:

```
Error!
[5, 6, 7]
```

However, the same is not true for list comprehensions:

```
c = []
try:
    c = [a[k] + 4 for k in range(len(a))]
except:
    print('Error!')
print(c)
```

And the result is:

```
Error!
[]
```

Is the intermediate list, built before the exception occurred, kept anywhere? Is it accessible?
The list comprehension intermediate results are kept on an internal CPython stack, and are not accessible from the Python expressions that are part of the list comprehension. Note that Python executes the `[.....]` **first**, which produces a list object, and only **then** assigns that result to the name `c`. If an exception occurs within the `[....]` expression, the expression is terminated and exception handling kicks in instead. Your `print(c)` expression thus can only ever show the *previous* object that `c` was bound to, which here is an empty list object. It could have been anything else: ``` >>> c = 'Anything else' >>> try: ... c = [2 // i for i in (1, 0)] ... except ZeroDivisionError: ... pass ... >>> c 'Anything else' ``` In your first example, no new list object is produced. You instead manipulate (using `b.append()`) an *existing* list object, which is why you can see what all successful `b.append()` calls have done to it.
How do I serialize a Python dictionary into a string, and then back to a dictionary? How do I serialize a Python dictionary into a string, and then back to a dictionary? The dictionary will have lists and other dictionaries inside it.
It depends on what you're wanting to use it for. If you're just trying to save it, you should use [`pickle`](https://docs.python.org/3/library/pickle.html) (or, if you’re using CPython 2.x, [`cPickle`](https://docs.python.org/2/library/pickle.html#module-cPickle), which is faster). ``` >>> import pickle >>> pickle.dumps({'foo': 'bar'}) b'\x80\x03}q\x00X\x03\x00\x00\x00fooq\x01X\x03\x00\x00\x00barq\x02s.' >>> pickle.loads(_) {'foo': 'bar'} ``` If you want it to be readable, you could use [`json`](https://docs.python.org/3/library/json.html): ``` >>> import json >>> json.dumps({'foo': 'bar'}) '{"foo": "bar"}' >>> json.loads(_) {'foo': 'bar'} ``` `json` is, however, very limited in what it will support, while `pickle` can be used for arbitrary objects (if it doesn't work automatically, the class can define `__getstate__` to specify precisely how it should be pickled). ``` >>> pickle.dumps(object()) b'\x80\x03cbuiltins\nobject\nq\x00)\x81q\x01.' >>> json.dumps(object()) Traceback (most recent call last): ... TypeError: <object object at 0x7fa0348230c0> is not JSON serializable ```
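To illustrate that last point, a minimal sketch (the `Connection` class and its unpicklable `socket` field are made up): `__getstate__` chooses what gets pickled, and `__setstate__` rebuilds the rest on load.

```
import pickle

class Connection:
    def __init__(self, host):
        self.host = host
        self.socket = object()  # stand-in for an unpicklable resource

    def __getstate__(self):
        # Drop the unpicklable part; keep only what we can restore from.
        state = self.__dict__.copy()
        del state['socket']
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.socket = object()  # re-create the resource on unpickling

c = pickle.loads(pickle.dumps(Connection('example.com')))
print(c.host)  # -> example.com
```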
What does adding the @Stateful or @Stateless annotations actually do? I'm just getting to grips with Java EE. I know that adding `@Stateful` or `@Stateless` annotations to a class will make it an EJB bean. But what is actually happening in the background once I do that? I see the following listed on Wikipedia in relation to EJBs. - Transaction processing - Integration with the persistence services offered by the Java Persistence API (JPA) - Concurrency control - Eventing using Java Message Service and Java EE Connector Architecture - Asynchronous method invocation   1. When I mark a class as an EJB do items listed above get 'taken care of' in the background? An entirely different code path is followed that goes through each of the above once I make a class an EJB, is that what's happening? 2. I see that using CDI I have the option of injecting EJB beans as oppposed to CDI beans. In that case should I always be using EJB beans instead of CDI beans as EJB beans are more powerful than CDI beans?
See [this answer](https://stackoverflow.com/questions/13487987/where-to-use-ejb-3-1-and-cdi/13504763#13504763) for some insight on both questions. The highlights to focus on in that answer are that: - EJBs and CDI beans are proxied components, the object you get is a fake, the real object is hidden and this is how services are added: caller->proxy->services->realObject - CDI and EJB are effectively the same as such, mix them freely. Which you use depends on what you're attempting to do. I tend to use CDI unless I need one of the items listed in that answer. Then I just upgrade or add a new bean. Note, one thing I did miss in that answer was the entire `@MessageDriven` concept. ### MessageDriven Beans It's very interesting you put JMS / Connector on the same line as that is exactly how they are implemented. Message-Driven Beans (MDBs) should actually be called "Connector-Driven Beans" as all communication and lifecycle of an MDB is actually tied to the Connector Architecture specification and has nothing to do with JMS directly -- JMS is just the only Connector people ever see. [There's a great deal of potential there](http://blog.dblevins.com/2010/10/ejbnext-connectorbean-api-jax-rs-and.html). Hopefully we'll see some improvements in Java EE 7.
SwiftUI view not animating when bound to @Published var if action isn't completed immediately

I have a SwiftUI view that is swapping out certain controls depending on state. I'm trying to use MVVM, so most/all of my logic has been pushed off to a view model. I have found that when doing a complex action that modifies a `@Published var` on the view model, the `View` will not animate. Here's an example where a 1.0s timer in the view model simulates other work being done before changing the `@Published var` value:

```
struct ContentView: View {
    @State var showCircle = true
    @ObservedObject var viewModel = ViewModel()

    var body: some View {
        VStack {
            VStack {
                if showCircle {
                    Circle().frame(width: 100, height: 100)
                }
                Button(action: {
                    withAnimation {
                        self.showCircle.toggle()
                    }
                }) {
                    Text("With State Variable")
                }
            }
            VStack {
                if viewModel.showCircle {
                    Circle().frame(width: 100, height: 100)
                }
                Button(action: {
                    withAnimation {
                        self.viewModel.toggle()
                    }
                }) {
                    Text("With ViewModel Observation")
                }
            }
        }
    }
}

class ViewModel: ObservableObject {
    @Published var showCircle = true

    public func toggle() {
        // Do some amount of work here. The Timer is just to simulate work being done that may not complete immediately.
        Timer.scheduledTimer(withTimeInterval: 1.0, repeats: false) { [weak self] _ in
            self?.showCircle.toggle()
        }
    }
}
```
In the view-model workflow your `withAnimation` does nothing, because no state is changed at that point (it is just a function call); only a timer is scheduled. So you would need it more like:

```
Button(action: {
    self.viewModel.toggle() // removed from here
}) {
    Text("With ViewModel Observation")
}

...

Timer.scheduledTimer(withTimeInterval: 1.0, repeats: false) { [weak self] _ in
    withAnimation { // << added here
        self?.showCircle.toggle()
    }
}
```

However, I would recommend rethinking the view design... like:

```
VStack {
    if showCircle2 { // same declaration as showCircle
        Circle().frame(width: 100, height: 100)
    }
    Button(action: {
        self.viewModel.toggle()
    }) {
        Text("With ViewModel Observation")
    }
    .onReceive(viewModel.$showCircle) { value in
        withAnimation {
            self.showCircle2 = value
        }
    }
}
```

Tested with Xcode 11.2 / iOS 13.2
A better way for a Python 'for' loop

We all know that the common way of executing a statement a certain number of times in Python is to use a `for` loop. The general way of doing this is:

```
# I am assuming the iterated list is redundant.
# Just the number of executions matters.
for _ in range(count):
    pass
```

I believe nobody will argue that the code above is the common implementation; however, there is another option: using the speed of Python list creation by multiplying references.

```
# Uncommon way.
for _ in [0] * count:
    pass
```

There is also the old `while` way:

```
i = 0
while i < count:
    i += 1
```

I tested the execution times of these approaches. Here is the code:

```
import timeit

repeat = 10
total = 10

setup = """
count = 100000
"""

test1 = """
for _ in range(count):
    pass
"""

test2 = """
for _ in [0] * count:
    pass
"""

test3 = """
i = 0
while i < count:
    i += 1
"""

print(min(timeit.Timer(test1, setup=setup).repeat(repeat, total)))
print(min(timeit.Timer(test2, setup=setup).repeat(repeat, total)))
print(min(timeit.Timer(test3, setup=setup).repeat(repeat, total)))

# Results
0.02238852552017738
0.011760978361696095
0.06971727824807639
```

I would not have raised the subject if the difference were small; however, the second method is roughly twice as fast. Why doesn't Python encourage such usage if it is that much more efficient? Is there a better way?

The test was done with **Windows 10** and **Python 3.6**.

Following @Tim Peters' suggestion:

```
.
.
.
test4 = """
for _ in itertools.repeat(None, count):
    pass
"""

print(min(timeit.Timer(test1, setup=setup).repeat(repeat, total)))
print(min(timeit.Timer(test2, setup=setup).repeat(repeat, total)))
print(min(timeit.Timer(test3, setup=setup).repeat(repeat, total)))
print(min(timeit.Timer(test4, setup=setup).repeat(repeat, total)))

# Gives
0.02306803115612352
0.013021619340942758
0.06400113461638746
0.008105080015739174
```

This offers a much better way, and pretty much answers my question. But why is this faster than `range`, since both are lazy? Is it because the value never changes?
Using

```
for _ in itertools.repeat(None, count):
    do something
```

is the non-obvious way of getting the best of all worlds: tiny constant space requirement, and no new objects created per iteration. (A loop over `range`, by contrast, has to produce a distinct Python integer object on every iteration, which is where its extra cost comes from.)

Under the covers, the C code for `repeat` uses a native C integer type (not a Python integer object!) to keep track of the count remaining. For that reason, the count needs to fit in the platform C `ssize_t` type, which is generally at most `2**31 - 1` on a 32-bit box, and here on a 64-bit box:

```
>>> itertools.repeat(None, 2**63)
Traceback (most recent call last):
    ...
OverflowError: Python int too large to convert to C ssize_t
>>> itertools.repeat(None, 2**63-1)
repeat(None, 9223372036854775807)
```

Which is plenty big for my loops ;-)
Returning a private collection using a getter method in Java I have a number of Java classes that use private sets or lists internally. I want to be able to return these sets/lists using a get...List() method. The alternatives I am considering: 1. return a reference to the internal object 2. construct a new set/list and fill it up (this seems bad practice?) 3. use `Collections.unmodifiableList(partitions);` Which of these is the most common / best way to solve this issue?
There are many aspects to consider here. As others have already pointed out, the final decision depends on what your intention is, but some general statements regarding the three options:

**1. return a reference to the internal object**

This can cause problems. You can hardly ever guarantee a consistent state when you are doing this. The caller might obtain the list, and then do nasty things:

```
List<Element> list = object.getList();
list.clear();
list.add(null);
...
```

Maybe not with a malicious *intention*, but accidentally, because he *assumed* that it was safe/allowed to do this.

---

**2. construct a new set/list and fill it up (this seems bad practice?)**

This is not a "bad practice" in general. In any case, it's by far the safest solution in terms of API design. The only caveat is that there might be a performance penalty, depending on several factors. E.g. how many elements are contained in the list, and how the returned list is used. Some (questionable?) patterns like this one

```
for (int i=0; i<object.getList().size(); i++)
{
    Element element = object.getList().get(i);
    ...
}
```

might become prohibitively expensive (although one could argue that in this particular case it is the fault of the *user* who implemented it like that, the *general* issue remains valid).

---

**3. use Collections.unmodifiableList(partitions);**

This is what I personally use rather often. It's safe in the sense of API design, and involves only a negligible overhead compared to copying the list. However, it's important for the caller to know whether this list may change after he has obtained a reference to it. This leads to...

---

## The most important recommendation:

**Document** what the method is doing! Don't write a comment like this

```
/**
 * Returns the list of elements.
 *
 * @return The list of elements.
 */
public List<Element> getList() { ... }
```

Instead, specify what you can guarantee about the list. For example

```
/**
 * Returns a copy of the list of elements...
 */
```

or

```
/**
 * Returns an unmodifiable view on the list of elements...
 */
```

---

Personally, I'm always torn between the two options that one has for this sort of documentation:

- Make clear *what* the method is doing and *how* it may be used
- Don't expose or overspecify implementation details

So for example, I frequently write documentation like this:

```
/**
 * Returns an unmodifiable view on the list of elements.
 * Changes in this object will be visible in the returned list.
 */
```

The second sentence is a clear and *binding* statement about the behavior. It's **important** for the caller to know that. For a concurrent application (and most applications *are* concurrent in one way or the other), this means that the caller *has* to assume that the list may change concurrently after he has obtained the reference, which may lead to a `ConcurrentModificationException` if the change happens while he is iterating over the list.

However, such detailed specifications limit the possibilities for changing the implementation afterwards. If you later decide to return a copy of the internal list, then the behavior will change in an incompatible way. So sometimes I also explicitly specify that the behavior is not specified:

```
/**
 * Returns an unmodifiable list of elements. It is unspecified whether
 * changes in this object will be visible in the returned list. If you
 * want to be informed about changes, you may attach a listener to this
 * object using this-and-that method...
 */
```

These questions are mainly important when you intend to create a public API. Once you have implemented it one way or another, people will **rely** on that behavior. So, coming back to the first point: it always depends on what you want to achieve.
Google maps v3 draggable marker

I'm new to Google Maps, and I'm trying to learn it.

```
marker = new google.maps.Marker(
{
    map: map,
    draggable: true,
    animation: google.maps.Animation.DROP,
    position: results[0].geometry.location
});
```

This is my marker. When I initialise its position I know the place name (for example: XY street, New York), but because of the draggable option the position changes. My question is: how can I get the new place name, and what event handler do I need?
Finally I found the answer: ``` marker = new google.maps.Marker( { map:map, draggable:true, animation: google.maps.Animation.DROP, position: results[0].geometry.location }); google.maps.event.addListener(marker, 'dragend', function() { geocodePosition(marker.getPosition()); }); function geocodePosition(pos) { geocoder = new google.maps.Geocoder(); geocoder.geocode ({ latLng: pos }, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { $("#mapSearchInput").val(results[0].formatted_address); $("#mapErrorMsg").hide(100); } else { $("#mapErrorMsg").html('Cannot determine address at this location.'+status).show(100); } } ); } ``` Source [The Wizz.art](https://www.thewizz.art/2022/01/03/how-to-get-the-location-name-when-the-marker-is-dragged-and-dropped-on-google-maps/)
UITextView strange animation glitch on paste action (iOS 11)

I'm facing a very strange bug. When I paste anything inside a `UITextView`, I get a surprising animation glitch. To reproduce it I've just created a blank `.xcodeproj`, added a `UITextView` to the `ViewController` via the storyboard, and ran the application.

The only similar problem I've found is <https://twitter.com/twostraws/status/972914692195790849>, and it says that it's a bug of `UIKit` in iOS 11. But there are lots of applications on my iPhone with a `UITextView` that work correctly on iOS 11.

You can see the bug in the video here – <https://twitter.com/twostraws/status/972914692195790849>

Any suggestions or help would be appreciated.

What I tried:

- Tried a new clear project with minimal changes;
- Disabled all the autocorrection types;
- Removed constraints;
- Tried on several iPhones with different versions – 11.2.5 and 11.4.2.

The original project is attached. It's made with `Swift 4.1` and `Xcode 9.4 (9F1027a)` <https://ufile.io/fzyj8>
I checked some other applications on my iPhone, like `Todoist`, and found the same bug there. But I've also found a solution. I hope Apple will urgently fix this bug.

You can implement `UITextPasteDelegate` and disable the animation on the paste action. This API is only available on iOS 11+, but it seems that the bug is also reproduced only on iOS 11.

```
class ViewController: UIViewController {

    @IBOutlet weak var textView: UITextView!

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.pasteDelegate = self
    }
}

extension ViewController: UITextPasteDelegate {

    func textPasteConfigurationSupporting(_ textPasteConfigurationSupporting: UITextPasteConfigurationSupporting, shouldAnimatePasteOf attributedString: NSAttributedString, to textRange: UITextRange) -> Bool {
        return false
    }
}
```
Data truncating after 255 characters when inserting into a DataSet from Excel, but no issue when populating a DataTable

I am trying to insert data from an Excel file into a DataSet using ADO.NET. Below is the procedure adopted:

1. First, all Excel data are loaded into the DataSet using

   > connString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=mydb.xlsx;Extended Properties=\"Excel 12.0;HDR=Yes;IMEX=2\"";

2. When populating the DataSet, only the first 255 characters of a cell are inserted. (I couldn't find where it is being truncated; there is no truncating code in our source.)

But when the same connection is used to fill a DataTable, no such problem occurs. Please help me to overcome this problem. Thanks in advance.
The problem is that the ACE driver is inferring a TEXT data type for the column when you're populating the data set. Text columns are limited to 255 characters. You need to force it to use the MEMO data type. When you're filling the database, presumably, the database column is of a type that supports more than 255 characters, and the driver picks that up. In addition to the methods discussed in [this thread on the problem](http://social.msdn.microsoft.com/Forums/pl-PL/adodotnetdataproviders/thread/6dba5b86-1ad6-46c5-8428-556753b32ce4), you can force it to use the memo data type by inserting a dummy row of data and inserting 256 or more characters in the cell for that column. (That thread concerns Jet but should also be applicable to ACE.) Your best alternative, perhaps, is to use some other library to read the file and populate the DataSet without Jet/ACE.
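For reference, one of the approaches discussed in that thread is a registry tweak: the driver guesses each column's type by scanning only the first few rows (8 by default), controlled by the `TypeGuessRows` value. The exact key depends on your Office version; for ACE 14, as an illustration:

```
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel\TypeGuessRows
```

Setting it to `0` makes the driver scan up to 16,384 rows before deciding, so a cell longer than 255 characters anywhere in that range should flip the column to MEMO without needing the dummy row at the top.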
Reverse integer bitwise without using a loop

I want to write a program which reverses the bits of an integer, e.g. 11000101 to 10100011.

I know how to solve this using a loop, but I came across solutions that do it using byte shifts:

```
num>>4|num<<4
```

I don't understand how this works. Can somebody help me with this?
That's not reversing the bits, it's swapping the nybbles (4-bit units). In other words, it will turn: ``` 1100 0101 (abcd efgh) ``` into: ``` 0101 1100 (efgh abcd) ``` and it will do so only if the data type is actually 8 bits (otherwise `num << 4` places some bits left of the eight rightmost ones. A safer way to do it is to ensure all other bits are cleared before shifting: ``` ((num & 0xf0) >> 4) | ((num & 0x0f) << 4) ``` For a precis on how bitwise operators work, see [this excellent answer](https://stackoverflow.com/questions/1746613/bitwise-operation-and-usage/1746642#1746642). The equivalent expression for a full bit reversal, `hgfe dcba`, is the rather monstrous: ``` ((num & 0x01) << 7) | ((num & 0x02) << 5) | ((num & 0x04) << 3) | ((num & 0x08) << 1) | ((num & 0x10) >> 1) | ((num & 0x20) >> 3) | ((num & 0x40) >> 5) | ((num & 0x80) >> 7) ``` which extracts and shifts each of the eight bits. There are also optimisations that can handle groups of non-contiguous bits in one operation, such as: ``` num = ((num & 0xf0) >> 4) | ((num & 0x0f) << 4) // abcdefgh -> efghabcd num = ((num & 0xcc) >> 2) | ((num & 0x33) << 2) // efghabcd -> ghefcdab num = ((num & 0xaa) >> 1) | ((num & 0x55) << 1) // ghefcdab -> hgfedcba ``` These work by grabbing the non-contiguous bits and moving them left or right, with the mask values showing which bits get affected: ``` 0xf0, 0x0f -> 1111-0000, 0000-1111, shift by 4 0xcc, 0x33 -> 1100-1100, 0011-0011, shift by 2 0xaa, 0x55 -> 1010-1010, 0101-0101, shift by 1 ``` The first bit mask in each line extracts the bits to shift right, the second grabs the bits to shift left. The two results are then recombined. To take the second one as an example, say you have the bits `abcdefgh` beforehand and you evaluate the expression `((num & 0xcc) >> 2) | ((num & 0x33) << 2)`: ``` (num&0xcc)>>2 (num&0x33)<<2 ------------- ------------- abcdefgh abcdefgh 11001100 00110011 'and' with mask -------- -------- ab00ef00 00cd00gh 00ab00ef cd00gh00 shift right/left \ / 00ab00ef cd00gh00 'or' them together -------- cdabghef ``` Hence you can see how the actions of bit extraction, shifting and recombination allow you to reverse the order of sections within the value: ``` ab cd ef gh \ / \ / X X / \ / \ cd ab gh ef ``` I suggest you try a similar experiment with the third operation `num = ((num & 0xaa) >> 1) | ((num & 0x55) << 1)`, you'll see it also acts as expected, reversing individual bits in each group of two.
Need permission for Windows client to access Linux NFS

I have a Linux NFS server; its /etc/exports is like below:

```
/opt/nfs 10.8.0.0/20(no_root_squash, rw, sync)
```

I can read/write files from other Linux machines. However, I only have read permission on the Windows client. What I did on the Windows Server 2012 R2 box was to install 'Services for NFS' and use the following command to mount it. Can somebody point out what is wrong? Thanks!

```
mount \\10.8.0.2\opt\nfs X:
```

EDIT: I have tried to use `mount -u:user -p:password \\...` with a user I created identically on both the Linux and Windows sides; it still doesn't work. Here is the Windows message:

You need permission to perform this action
You require permission from S-1-1-0 to make changes to this file
Here is a trick I found to set the default UID and GID of the Windows client to match the UID and GID of the nfs share. Here is a link to the full article [Windows 7: Client for NFS and User Name Mapping without AD, SUA](https://support.dbssolutions.com/support/solutions/articles/1649-windows-7-client-for-nfs-and-user-name-mapping-without-ad-sua) and here are the basic steps. 1) Run `regedit` on the Windows machine and locate `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default` 2) Add two DWORD values: `AnonymousUid` and `AnonymousGid` 3) Set these values to the UID and GID of the owner of the shared linux directory. 4) Restart the Client for NFS service or reboot the computer. `*.reg` file example for quick adding: ``` Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default] "AnonymousUid"=dword:000003e8 "AnonymousGid"=dword:000003e8 ```
How do I pass a JavaScript variable to JSP and JSTL?

How do I pass a JavaScript variable to JSP and JSTL?

```
<script>
    var name = "john";
    <jsp:setProperty name="emp" property="firstName" value=" "/> // How do I set the javascript variable (name) value here?
    <c:set var="firstName" value=""/> // How do I set the javascript variable (name) value here?
</script>
```
You need to send it as a request parameter. One of the ways is populating a hidden input field. ``` <script>document.getElementById('firstName').value = 'john';</script> <input type="hidden" id="firstName" name="firstName"> ``` This way you can get it in the server side as request parameter when the form is been submitted. ``` <jsp:setProperty name="emp" property="firstName" value="${param.firstName}" /> ``` An alternative way is using Ajax, but that's a completely new story/answer at its own. ### See also: - [Communication between Java/JSP/JSF and JavaScript](http://balusc.blogspot.com/2009/05/javajspjsf-and-javascript.html) - [Your previous question regarding the subject](https://stackoverflow.com/questions/3288032/how-do-i-assign-hidden-value-to-jstl-variable) - [Your other previous question regarding the subject](https://stackoverflow.com/questions/3287114/how-to-set-the-jstl-variable-value-in-javascript) If you can't seem to find your previously asked questions back, head to your [user profile](https://stackoverflow.com/users/91485/thomman)!
Python: load data on first instance of class

I have a Python class that will need to reference a large data set. I need to create thousands of instances of the class, so I don't want to load the data set every time. It would be straightforward to put the data in another class that has to be created first and passed to the other one as an argument:

```
class Dataset():
    def __init__(self, filename):
        # load dataset...

class Class_using_dataset():
    def __init__(self, ds):
        # use the dataset and do other stuff

ds = Dataset('file.csv')
c1 = Class_using_dataset(ds)
c2 = Class_using_dataset(ds)
# etc...
```

But I don't want my user to have to deal with the dataset, since it's always the same one, if I can just do it in the background. Is there a pythonic/canonical way to load the data into a global namespace when I create the first instance of my class? I'm hoping for something like:

```
class Class_using_dataset():
    def __init__(self):
        if dataset doesn't exist:
            load dataset into global namespace
        use dataset
```
You can either load the dataset into a class variable at the time the `Class_using_dataset` class is parsed, or when the user creates the first instance of the class.

The first strategy simply requires you to move the line loading the dataset into the class itself:

```
class Dataset():
    def __init__(self, filename):
        # load dataset...

class Class_using_dataset():
    ds = Dataset('file.csv')

    def __init__(self):
        # use the dataset and do other stuff

# `Class_using_dataset.ds` already has the loaded dataset
c1 = Class_using_dataset()
c2 = Class_using_dataset()
```

For the second, assign `None` to the class variable, and only load the dataset in the `__init__` method if `ds` is `None`:

```
class Dataset():
    def __init__(self, filename):
        # load dataset...

class Class_using_dataset():
    ds = None

    def __init__(self):
        if Class_using_dataset.ds is None:
            Class_using_dataset.ds = Dataset('file.csv')
        # use the dataset and do other stuff

# `Class_using_dataset.ds` is `None`
c1 = Class_using_dataset()
# Now the dataset is loaded
c2 = Class_using_dataset()
```
Draw Sphere on TImage control of Delphi

I want to draw a sphere like this:

![enter image description here](https://i.stack.imgur.com/yvHKu.png)

The code below generates a circle's vertices and draws a circle on a TImage, but I want it for a sphere:

```
for i := 0 to 360 do
begin
  //Find value of X and Y
  pntCordXY.X := Radius * Cos(DegToRad(i));
  pntCordXY.Y := Radius * Sin(DegToRad(i));
  if i = 0 then
    image1.Canvas.MoveTo(Round(pntCordXY.X), Round(pntCordXY.Y))
  else
    image1.Canvas.LineTo(Round(pntCordXY.X), Round(pntCordXY.Y));
end;
```
This turned out to be a fun exercise; nice question! At first, you ask specifically for drawing such a sphere on a `TImage`, but that component is supposed to be used for showing graphics. Sure, it has a canvas on which can be drawn, but hereunder I use a `TPaintBox` which is the preferred component for own painting. Because, you will have to paint this yourself. Entirely. ### Ingredients needed: - Some math for calculating the 3D points on a sphere, for rotating the globe around multiple axes, and maybe for converting the 3D points to the 2D screen coordinate system. The basics are: ``` type TPoint3D = record X: Double; Y: Double; Z: Double; end; function Sphere(Phi, Lambda: Double): TPoint3D; begin Result.X := Cos(Phi) * Sin(Lambda); Result.Y := Sin(Phi); Result.Z := Cos(Phi) * Cos(Lambda); end; function RotateAroundX(const P: TPoint3D; Alfa: Double): TPoint3D; begin Result.X := P.X; Result.Y := P.Y * Cos(Alfa) + P.Z * Sin(Alfa); Result.Z := P.Y * -Sin(Alfa) + P.Z * Cos(Alfa); end; function RotateAroundY(const P: TPoint3D; Beta: Double): TPoint3D; begin Result.X := P.X * Cos(Beta) + P.Z * Sin(Beta); Result.Y := P.Y; Result.Z := P.X * -Sin(Beta) + P.Z * Cos(Beta); end; ``` - Some globe-variables to work with: ``` var Alfa: Integer; //Rotation around X axis Beta: Integer; //Rotation around Y axis C: TPoint; //Center R: Integer; //Radius Phi: Integer; //Angle relative to XY plane Lambda: Integer; //Angle around Z axis (from pole to pole) P: TPoint3D; //2D projection of a 3D point on the sphere's surface ``` - Code to calculate all points of the latitude circles: ``` for Phi := -8 to 8 do for Lambda := 0 to 360 do begin P := Sphere(DegToRad(Phi * 10), DegToRad(Lambda)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); end; ``` - Code to calculate all points of the longitude meridians: ``` for Lambda := 0 to 17 do for Phi := 0 to 360 do begin P := Sphere(DegToRad(Phi), DegToRad(Lambda * 10)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); end; ``` These points can be used to draw lines or curves on the paint box. The Z value of these points are not used for drawing, but they are helpful to decide whether the point lies on the back or front side of the globe. - Logic and aids. Before all points, lines or curves in front of the globe can be drawn, the ones in the back of globe have to be drawn first, in order to preserve *depth*. - A drawing framework or drawing library. Delphi is by default equipped with standard Windows GDI, available via the `Canvas` property of the paint box. Another possibility is GDI+ which is more advanced and can be more efficient. Especially considering anti-aliassing. These are the two frameworks I worked with, but there are also others. For example: OpenGL, which converts 3D objects to 2D automatically and is capable of adding 3D surfaces, lights, materials, shaders, and many more features. - A testing application, which is added at the bottom of this question. - A double buffering technique to get the paint work flicker-free. I chose a separate bitmap object on which everything is drawn, prior to painting that bitmap on the paint box. The demo program also demonstrates the performance without it (routine: `GDIMultipleColorsDirect`). ### Setup: Drop a paint box on your form, and set its `Align` property to `alClient`, add a timer component for simulation, add form event handlers for `OnCreate`, `OnDestroy`, `OnKeyPress`, and `OnResize`, and add an event handler for `PaintBox1.OnPaint`. 
``` object Form1: TForm1 Left = 497 Top = 394 Width = 450 Height = 450 Caption = 'Sphere' Color = clWhite Font.Charset = DEFAULT_CHARSET Font.Color = clWindowText Font.Height = -11 Font.Name = 'MS Sans Serif' Font.Style = [] OldCreateOrder = False OnCreate = FormCreate OnDestroy = FormDestroy OnKeyPress = FormKeyPress OnResize = FormResize PixelsPerInch = 96 TextHeight = 13 object PaintBox1: TPaintBox Left = 0 Top = 0 Width = 434 Height = 414 Align = alClient OnPaint = PaintBox1Paint end object Timer1: TTimer Interval = 25 OnTimer = Timer1Timer Left = 7 Top = 7 end end ``` ### First attempt: With default GDI, I draw lines from every point to every next point. To add a feeling of depth (perspective), I gave the lines in front a greater width. Also, I gradually let the colors of the lines overflow from dark to light (routine: `GDIMultipleColors`). ![Sphere 1](https://i.stack.imgur.com/DxnIy.png) ### Second attempt: Nice, but all pixels are so hard! Let's try doing some anti-aliassing ourselfs... ;) Furthermore, I reduced the color count to two: dark in front, light in the back. This in order to get rid of all separate line segments: now every circle and meridian is devided into two polylines. I used a third color in between for the anti-aliassing effect (routine: `GDIThreeColors`). ![Sphere 2](https://i.stack.imgur.com/XA8qA.png) ### GDI+ to the rescue: This anti-aliassing isn't most charming. To get really smooth paint work, let's convert the code to GDI+ style. For Delphi 2009 and up, the library is available [from here](http://www.bilsen.com/gdiplus/index.shtml). For older Delphi versions, the library is available [from here](http://www.progdigy.com/?page_id=7). In GDI+, drawing works a bit differently. Create a `TGPGraphics` object and attach it to a device context with its constructor. Subsequently, drawing operations on the object are translated by the API and will be output to the destination context, the bitmap in this case (routine: `GDIPlusDualLinewidths`). ![Sphere 3](https://i.stack.imgur.com/Le1tK.png) ### Can it even better? Well, that's quite someting already. But this globe is made up out of polylines with just two different line widths. Let's add some in between. The count of segments in each circle or meridian is controlled by the `Precision` constant (routine: `GDIPlusMultipleLinewidths`). ![enter image description here](https://i.stack.imgur.com/utjYr.png) ### Sample application: Press a key to cycle through the above mentioned routines. ``` unit Globe; interface uses Windows, SysUtils, Classes, Graphics, Controls, Forms, ExtCtrls, Math, GDIPAPI, GDIPOBJ; type TForm1 = class(TForm) PaintBox1: TPaintBox; Timer1: TTimer; procedure FormCreate(Sender: TObject); procedure FormDestroy(Sender: TObject); procedure FormResize(Sender: TObject); procedure Timer1Timer(Sender: TObject); procedure FormKeyPress(Sender: TObject; var Key: Char); procedure PaintBox1Paint(Sender: TObject); private FBmp: TBitmap; FPen: TGPPen; procedure GDIMultipleColorsDirect; procedure GDIMultipleColors; procedure GDIThreeColors; procedure GDIPlusDualLinewidths; procedure GDIPlusMultipleLinewidths; public A: Integer; //Alfa, rotation round X axis B: Integer; //Beta, rotation round Y axis C: TPoint; //Center R: Integer; //Radius end; var Form1: TForm1; implementation {$R *.DFM} const LineColorFore = $00552B00; LineColorMiddle = $00AA957F; LineColorBack = $00FFDFBF; BackColor = clWhite; LineWidthFore = 4.5; LineWidthBack = 1.5; Precision = 10; //Should be even! 
type TCycle = 0..Precision - 1; TPoint3D = record X: Double; Y: Double; Z: Double; end; function Sphere(Phi, Lambda: Double): TPoint3D; begin Result.X := Cos(Phi) * Sin(Lambda); Result.Y := Sin(Phi); Result.Z := Cos(Phi) * Cos(Lambda); end; function RotateAroundX(const P: TPoint3D; Alfa: Double): TPoint3D; begin Result.X := P.X; Result.Y := P.Y * Cos(Alfa) + P.Z * Sin(Alfa); Result.Z := P.Y * -Sin(Alfa) + P.Z * Cos(Alfa); end; function RotateAroundY(const P: TPoint3D; Beta: Double): TPoint3D; begin Result.X := P.X * Cos(Beta) + P.Z * Sin(Beta); Result.Y := P.Y; Result.Z := P.X * -Sin(Beta) + P.Z * Cos(Beta); end; { TForm1 } procedure TForm1.FormCreate(Sender: TObject); begin Brush.Style := bsClear; //This is múch cheaper then DoubleBuffered := True FBmp := TBitmap.Create; FPen := TGPPen.Create(ColorRefToARGB(ColorToRGB(clBlack))); A := 35; B := 25; end; procedure TForm1.FormDestroy(Sender: TObject); begin FPen.Free; FBmp.Free; end; procedure TForm1.FormResize(Sender: TObject); begin C.X := PaintBox1.ClientWidth div 2; C.Y := PaintBox1.ClientHeight div 2; R := Min(C.X, C.Y) - 10; FBmp.Width := PaintBox1.ClientWidth; FBmp.Height := PaintBox1.ClientHeight; end; procedure TForm1.Timer1Timer(Sender: TObject); begin A := A + 2; B := B + 1; PaintBox1.Invalidate; end; procedure TForm1.FormKeyPress(Sender: TObject; var Key: Char); begin Tag := Tag + 1; PaintBox1.Invalidate; end; procedure TForm1.PaintBox1Paint(Sender: TObject); begin case Tag mod 5 of 0: GDIMultipleColorsDirect; 1: GDIMultipleColors; 2: GDIThreeColors; 3: GDIPlusDualLinewidths; 4: GDIPlusMultipleLinewidths; end; end; procedure TForm1.GDIPlusMultipleLinewidths; var Lines: array of TPointFDynArray; PointCount: Integer; LineCount: Integer; Drawing: TGPGraphics; Alfa: Double; Beta: Double; Cycle: TCycle; Phi: Integer; Lambda: Integer; P: TPoint3D; Filter: TCycle; PrevFilter: TCycle; I: Integer; procedure ResetLines; begin SetLength(Lines, 0); LineCount := 0; PointCount := 0; end; procedure FinishLastLine; begin if PointCount < 2 then Dec(LineCount) else SetLength(Lines[LineCount - 1], PointCount); end; procedure NewLine; begin if LineCount > 0 then FinishLastLine; SetLength(Lines, LineCount + 1); SetLength(Lines[LineCount], 361); Inc(LineCount); PointCount := 0; end; procedure AddPoint(X, Y: Single); begin Lines[LineCount - 1][PointCount] := MakePoint(X, Y); Inc(PointCount); end; function CycleFromZ(Z: Single): TCycle; begin Result := Round((Z + 1) / 2 * High(TCycle)); end; function CycleToLineWidth(ACycle: TCycle): Single; begin Result := LineWidthBack + (LineWidthFore - LineWidthBack) * (ACycle / High(TCycle)); end; function CycleToLineColor(ACycle: TCycle): TGPColor; begin if ACycle <= (High(TCycle) div 2) then Result := ColorRefToARGB(ColorToRGB(LineColorBack)) else Result := ColorRefToARGB(ColorToRGB(LineColorFore)); end; begin Drawing := TGPGraphics.Create(FBmp.Canvas.Handle); try Drawing.Clear(ColorRefToARGB(ColorToRGB(clWhite))); Drawing.SetSmoothingMode(SmoothingModeAntiAlias); Alfa := DegToRad(A); Beta := DegToRad(B); for Cycle := Low(TCycle) to High(TCycle) do begin ResetLines; //Latitude for Phi := -8 to 8 do begin NewLine; PrevFilter := 0; for Lambda := 0 to 360 do begin P := Sphere(DegToRad(Phi * 10), DegToRad(Lambda)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); Filter := CycleFromZ(P.Z); if Filter <> PrevFilter then begin AddPoint(C.X + P.X * R, C.Y + P.Y * R); NewLine; end; if Filter = Cycle then AddPoint(C.X + P.X * R, C.Y + P.Y * R); PrevFilter := Filter; end; end; //Longitude for Lambda := 0 to 17 do 
begin NewLine; PrevFilter := 0; for Phi := 0 to 360 do begin P := Sphere(DegToRad(Phi), DegToRad(Lambda * 10)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); Filter := CycleFromZ(P.Z); if Filter <> PrevFilter then begin AddPoint(C.X + P.X * R, C.Y + P.Y * R); NewLine; end; if Filter = Cycle then AddPoint(C.X + P.X * R, C.Y + P.Y * R); PrevFilter := Filter; end; end; FinishLastLine; FPen.SetColor(CycleToLineColor(Cycle)); FPen.SetWidth(CycleToLineWidth(Cycle)); for I := 0 to LineCount - 1 do Drawing.DrawLines(FPen, PGPPointF(@(Lines[I][0])), Length(Lines[I])); if Cycle = (High(TCycle) div 2 + 1) then Drawing.DrawEllipse(FPen, C.X - R, C.Y - R, 2 * R, 2 * R); end; finally Drawing.Free; end; PaintBox1.Canvas.Draw(0, 0, FBmp); end; procedure TForm1.GDIPlusDualLinewidths; const LineColors: array[Boolean] of TColor = (LineColorFore, LineColorBack); LineWidths: array[Boolean] of Single = (LineWidthFore, LineWidthBack); BackColor = clWhite; var Lines: array of TPointFDynArray; PointCount: Integer; LineCount: Integer; Drawing: TGPGraphics; Alfa: Double; Beta: Double; Phi: Integer; Lambda: Integer; BackSide: Boolean; P: TPoint3D; PrevZ: Double; I: Integer; procedure ResetLines; begin SetLength(Lines, 0); LineCount := 0; PointCount := 0; end; procedure FinishLastLine; begin if PointCount < 2 then Dec(LineCount) else SetLength(Lines[LineCount - 1], PointCount); end; procedure NewLine; begin if LineCount > 0 then FinishLastLine; SetLength(Lines, LineCount + 1); SetLength(Lines[LineCount], 361); Inc(LineCount); PointCount := 0; end; procedure AddPoint(X, Y: Single); begin Lines[LineCount - 1][PointCount] := MakePoint(X, Y); Inc(PointCount); end; begin Drawing := TGPGraphics.Create(FBmp.Canvas.Handle); try Drawing.Clear(ColorRefToARGB(ColorToRGB(clWhite))); Drawing.SetSmoothingMode(SmoothingModeAntiAlias); Alfa := DegToRad(A); Beta := DegToRad(B); for BackSide := True downto False do begin ResetLines; //Latitude for Phi := -8 to 8 do begin NewLine; PrevZ := 0; for Lambda := 0 to 360 do begin P := Sphere(DegToRad(Phi * 10), DegToRad(Lambda)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if Sign(P.Z) <> Sign(PrevZ) then NewLine; if (BackSide and (P.Z < 0)) or (not BackSide and (P.Z >= 0)) then AddPoint(C.X + P.X * R, C.Y + P.Y * R); PrevZ := P.Z; end; end; //Longitude for Lambda := 0 to 17 do begin NewLine; PrevZ := 0; for Phi := 0 to 360 do begin P := Sphere(DegToRad(Phi), DegToRad(Lambda * 10)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if Sign(P.Z) <> Sign(PrevZ) then NewLine; if (BackSide and (P.Z < 0)) or (not BackSide and (P.Z >= 0)) then AddPoint(C.X + P.X * R, C.Y + P.Y * R); PrevZ := P.Z; end; end; FinishLastLine; FPen.SetColor(ColorRefToARGB(ColorToRGB(LineColors[BackSide]))); FPen.SetWidth(LineWidths[BackSide]); for I := 0 to LineCount - 1 do Drawing.DrawLines(FPen, PGPPointF(@(Lines[I][0])), Length(Lines[I])); end; Drawing.DrawEllipse(FPen, C.X - R, C.Y - R, 2 * R, 2 * R); finally Drawing.Free; end; PaintBox1.Canvas.Draw(0, 0, FBmp); end; procedure TForm1.GDIThreeColors; const LineColors: array[TValueSign] of TColor = (LineColorBack, LineColorMiddle, LineColorFore); LineWidths: array[TValueSign] of Integer = (2, 4, 2); var Lines: array of array of TPoint; PointCount: Integer; LineCount: Integer; Alfa: Double; Beta: Double; Phi: Integer; Lambda: Integer; BackSide: Boolean; P: TPoint3D; PrevZ: Double; I: TValueSign; J: Integer; procedure ResetLines; begin SetLength(Lines, 0); LineCount := 0; PointCount := 0; end; procedure FinishLastLine; begin if 
PointCount < 2 then Dec(LineCount) else SetLength(Lines[LineCount - 1], PointCount); end; procedure NewLine; begin if LineCount > 0 then FinishLastLine; SetLength(Lines, LineCount + 1); SetLength(Lines[LineCount], 361); Inc(LineCount); PointCount := 0; end; procedure AddPoint(APoint: TPoint); overload; var Last: TPoint; begin if PointCount > 0 then begin Last := Lines[LineCount - 1][PointCount - 1]; if (APoint.X = Last.X) and (APoint.Y = Last.Y) then Exit; end; Lines[LineCount - 1][PointCount] := APoint; Inc(PointCount); end; procedure AddPoint(X, Y: Integer); overload; begin AddPoint(Point(X, Y)); end; begin FBmp.Canvas.Brush.Color := BackColor; FBmp.Canvas.FillRect(Rect(0, 0, FBmp.Width, FBmp.Height)); Alfa := DegToRad(A); Beta := DegToRad(B); for BackSide := True downto False do begin ResetLines; //Latitude for Phi := -8 to 8 do begin NewLine; PrevZ := 0; for Lambda := 0 to 360 do begin P := Sphere(DegToRad(Phi * 10), DegToRad(Lambda)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if Sign(P.Z) <> Sign(PrevZ) then NewLine; if (BackSide and (P.Z < 0)) or (not BackSide and (P.Z >= 0)) then AddPoint(Round(C.X + P.X * R), Round(C.Y + P.Y * R)); PrevZ := P.Z; end; end; //Longitude for Lambda := 0 to 17 do begin NewLine; PrevZ := 0; for Phi := 0 to 360 do begin P := Sphere(DegToRad(Phi), DegToRad(Lambda * 10)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if Sign(P.Z) <> Sign(PrevZ) then NewLine; if (BackSide and (P.Z < 0)) or (not BackSide and (P.Z >= 0)) then AddPoint(Round(C.X + P.X * R), Round(C.Y + P.Y * R)); PrevZ := P.Z; end; end; FinishLastLine; if BackSide then begin FBmp.Canvas.Pen.Color := LineColors[-1]; FBmp.Canvas.Pen.Width := LineWidths[-1]; for J := 0 to LineCount - 1 do FBmp.Canvas.Polyline(Lines[J]); end else for I := 0 to 1 do begin FBmp.Canvas.Pen.Color := LineColors[I]; FBmp.Canvas.Pen.Width := LineWidths[I]; for J := 0 to LineCount - 1 do FBmp.Canvas.Polyline(Lines[J]) end end; FBmp.Canvas.Brush.Style := bsClear; FBmp.Canvas.Ellipse(C.X - R, C.Y - R, C.X + R, C.Y + R); PaintBox1.Canvas.Draw(0, 0, FBmp); end; procedure TForm1.GDIMultipleColors; var Alfa: Double; Beta: Double; Phi: Integer; Lambda: Integer; P: TPoint3D; Backside: Boolean; function ColorFromZ(Z: Single): TColorRef; var R: Integer; G: Integer; B: Integer; begin Z := (Z + 1) / 2; R := GetRValue(LineColorFore) - GetRValue(LineColorBack); R := GetRValue(LineColorBack) + Round(Z * R); G := GetGValue(LineColorFore) - GetGValue(LineColorBack); G := GetGValue(LineColorBack) + Round(Z * G); B := GetBValue(LineColorFore) - GetBValue(LineColorBack); B := GetBValue(LineColorBack) + Round(Z * B); Result := RGB(R, G, B); end; begin FBmp.Canvas.Pen.Width := 2; FBmp.Canvas.Brush.Color := BackColor; FBmp.Canvas.FillRect(PaintBox1.ClientRect); Alfa := DegToRad(A); Beta := DegToRad(B); for Backside := True downto False do begin if not BackSide then FBmp.Canvas.Pen.Width := 3; //Latitude for Phi := -8 to 8 do for Lambda := 0 to 360 do begin P := Sphere(DegToRad(Phi * 10), DegToRad(Lambda)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if (Lambda = 0) or (Backside and (P.Z >= 0)) or (not Backside and (P.Z < 0)) then FBmp.Canvas.MoveTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)) else begin FBmp.Canvas.Pen.Color := ColorFromZ(P.Z); FBmp.Canvas.LineTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)); end; end; //Longitude for Lambda := 0 to 17 do for Phi := 0 to 360 do begin P := Sphere(DegToRad(Phi), DegToRad(Lambda * 10)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if (Phi = 0) or 
(Backside and (P.Z >= 0)) or (not Backside and (P.Z < 0)) then FBmp.Canvas.MoveTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)) else begin FBmp.Canvas.Pen.Color := ColorFromZ(P.Z); FBmp.Canvas.LineTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)); end; end; end; PaintBox1.Canvas.Draw(0, 0, FBmp); end; procedure TForm1.GDIMultipleColorsDirect; var Alfa: Double; Beta: Double; Phi: Integer; Lambda: Integer; P: TPoint3D; Backside: Boolean; function ColorFromZ(Z: Single): TColorRef; var R: Integer; G: Integer; B: Integer; begin Z := (Z + 1) / 2; R := GetRValue(LineColorFore) - GetRValue(LineColorBack); R := GetRValue(LineColorBack) + Round(Z * R); G := GetGValue(LineColorFore) - GetGValue(LineColorBack); G := GetGValue(LineColorBack) + Round(Z * G); B := GetBValue(LineColorFore) - GetBValue(LineColorBack); B := GetBValue(LineColorBack) + Round(Z * B); Result := RGB(R, G, B); end; begin PaintBox1.Canvas.Pen.Width := 2; PaintBox1.Canvas.Brush.Color := BackColor; PaintBox1.Canvas.FillRect(PaintBox1.ClientRect); Alfa := DegToRad(A); Beta := DegToRad(B); for Backside := True downto False do begin if not BackSide then PaintBox1.Canvas.Pen.Width := 3; //Latitude for Phi := -8 to 8 do for Lambda := 0 to 360 do begin P := Sphere(DegToRad(Phi * 10), DegToRad(Lambda)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if (Lambda = 0) or (Backside and (P.Z >= 0)) or (not Backside and (P.Z < 0)) then PaintBox1.Canvas.MoveTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)) else begin PaintBox1.Canvas.Pen.Color := ColorFromZ(P.Z); PaintBox1.Canvas.LineTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)); end; end; //Longitude for Lambda := 0 to 17 do for Phi := 0 to 360 do begin P := Sphere(DegToRad(Phi), DegToRad(Lambda * 10)); P := RotateAroundX(P, Alfa); P := RotateAroundY(P, Beta); if (Phi = 0) or (Backside and (P.Z >= 0)) or (not Backside and (P.Z < 0)) then PaintBox1.Canvas.MoveTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)) else begin PaintBox1.Canvas.Pen.Color := ColorFromZ(P.Z); PaintBox1.Canvas.LineTo(C.X + Round(P.X * R), C.Y + Round(P.Y * R)); end; end; end; end; end. ``` (With thanks to bummi's [comment](https://stackoverflow.com/users/1699210/bummi).)
Wordpress: get\_attached\_media('image') sorted by title

I want to get all the images attached to a specific post. This works just fine with:

```
$media = get_attached_media('image');
```

What I need now is to sort these images by their title. I can already produce a list of the titles that are in the array:

```
for ($i = 0; $i < count($media); $i++) {
    echo get_the_title(array_keys($media)[$i]);
}
```

I have no idea how to sort this by title. Can anyone help?
It would be better to fetch the attachments already ordered instead of ordering the result array, right? This would save you code, headaches and processing.

If you look at the WP Codex, [`get_attached_media()`](http://codex.wordpress.org/Function_Reference/get_attached_media) calls [`get_children()`](http://codex.wordpress.org/Function_Reference/get_children), which calls [`get_posts()`](http://codex.wordpress.org/Template_Tags/get_posts) (yeah, that escalated quickly). In WordPress, attachments (and pretty much everything else) are a `post` in essence.

Having all that in mind, this should fetch you the **list of images attached to a post ordered by title**:

```
$media = get_posts(array(
    'post_parent'    => get_the_ID(),
    'post_type'      => 'attachment',
    'post_mime_type' => 'image',
    'numberposts'    => -1, // get all of them, not just the default 5
    'orderby'        => 'title',
    'order'          => 'ASC'
));
```

---

**Edit:** As pointed out by [ViszinisA](https://stackoverflow.com/users/656976/viszinisa) and [Pieter Goosen](https://stackoverflow.com/users/1908141/pieter-goosen), I changed the call to `get_posts()` directly. There was no point in calling `get_children()`.

**Note:** The `'post_parent'` parameter is needed, so I added it using `get_the_ID()` as its value. Keep in mind that you **need to be within the loop** for `get_the_ID()` to retrieve the **current post ID**. When using it outside the loop, you should change this parameter's value according to the context.
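To output the images in that order you could then loop over the result; for example (the image size name is just an illustration):

```
foreach ( $media as $attachment ) {
    echo wp_get_attachment_image( $attachment->ID, 'thumbnail' );
}
```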
Chrome Extension: How to insert CSS conditioned on variable from Chrome storage using insert CSS? I'm somewhat new to Chrome Extensions, so I think I may just be missing something small, but I'm trying to allow users to load an additional page of CSS and a page of JS depending on their options (it's basically a different theme). I'm storing the variable that should trigger the load as `tm` Right now, in the JS page, I have: ``` var Theme; function getVars() { chrome.storage.sync.get({ tm: "", }, function(items) { Theme = items.tm; changeTheme(); }); } ``` Which triggers the `changeTheme` function, and—depending on the value of `Theme`—runs the correct JS functions. The trouble is that I also need a CSS sheet to load, but I can't figure this part out. Right now, in that same JS file, in the `changeTheme` function, I'm also calling: ``` chrome.tabs.insertCSS({file: "starWars.css"}); ``` This gives me the error: ``` Error in response to storage.get: TypeError: Cannot read property 'insertCSS' of undefined at changeTheme (chrome-extension://ipicalmjapbogbfpaddglpachiijdffn/starWars.js:96:14) at Object.callback (chrome-extension://ipicalmjapbogbfpaddglpachiijdffn/starWars.js:7:5) at getVars (chrome-extension://ipicalmjapbogbfpaddglpachiijdffn/starWars.js:3:23) at chrome-extension://ipicalmjapbogbfpaddglpachiijdffn/starWars.js:10:1 ``` My manifest version is `2`, unlike the other StackOverflow posts I've found on this. You can't use background in a version 2 manifest, and I need to stay in this version for other reasons. I do have `"tabs"` and `"activeTab"` in my permissions, as well as the page that the extension is for. Can anyone point me in the right direction from here? Glad to provide more information if needed.
1. You **can** use the background page/script with Manifest v2. The [documentation](https://developer.chrome.com/extensions/manifestVersion) says:

   > The background\_page property has been replaced with a `background` property that contains either a `scripts` or `page` property. Details are available in the [Event Pages](https://developer.chrome.com/extensions/event_pages) documentation.

2. You **can** use `chrome.tabs.insertCSS` from a popup/background script.
3. You **can't** use `chrome.tabs.insertCSS` from a content script.

So if you want to inject the CSS file from your content script you have two options:

1. [send a message](https://developer.chrome.com/extensions/messaging) to the background script and insert the CSS file in the message listener (a minimal sketch of this follows below)
2. declare the CSS file in manifest.json under the ["web\_accessible\_resources"](https://developer.chrome.com/extensions/manifest/web_accessible_resources) key and simply add a `<link>` element that points to that CSS file:

```
document.head.insertAdjacentHTML('beforeend',
    '<link rel="stylesheet" type="text/css" href="' +
    chrome.runtime.getURL("starWars.css") + '">'
);
```
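For option 1, here is a minimal sketch of the message round trip (the `"applyTheme"` action name and the file name are just examples):

```
// content script
chrome.runtime.sendMessage({ action: "applyTheme", file: "starWars.css" });

// background script
chrome.runtime.onMessage.addListener(function (message, sender) {
  if (message.action === "applyTheme") {
    // sender.tab identifies the tab the content script is running in
    chrome.tabs.insertCSS(sender.tab.id, { file: message.file });
  }
});
```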
De-interleave an array in place? Let's say I have an array of interlaced data, such as 1a2b3c4d5e, and I want to de-interlace it into an array that looks like 12345abcde, in place (without a temporary buffer). What would be the fastest way of doing this? What I have so far is this

```
template<typename T>
void deinterlace(T* arr, int length){
  if(length<=1) return;

  int i = 1;
  for(i = 1; i*2<length; i++){
    //swap i with i*2
    T temp = arr[i];
    arr[i] = arr[i*2];
    arr[i*2] = temp;
  }
  deinterlace(arr+i, length-i);
}
```

which unfortunately doesn't work with arrays that are not a power of 2 in size.

**edit**: this algo fails at larger powers of 2 anyway, so I guess I'm at square one again

**edit 2**: I have found an nlogn algorithm for this, given either an O(n) array rotate function or an initial size which is a power of 2. It works like so:

1a2b3c4d5e6f7g, "chunk size" = 1 initially; split into groups of chunk size * 4

1a2b 3c4d 5e6f 7g

swap the inner 2 chunks of each group

12ab 34cd 56ef 7g

repeat with chunk size = chunk size * 2

12ab34cd 56ef7g (read: 56 ef 7 g) -> 1234abcd 567efg

1234abcd567efg -> 1234567abcdefg

```
template<typename T>
void deinterlace(T* arr, int length, int group_ct = 1){
  if(group_ct*2 >= length) return;

  for(int i = 0; i<length; i+=group_ct*4){
    int rot_count = group_ct;
    int i1 = i + group_ct;
    int i2 = i+group_ct*4 - group_ct;
    if(i2+group_ct > length){
      i2 = i1 + (length-i1)/2+group_ct/2;
    }
    rotate(arr, i1, i2, group_ct);
  }
  deinterlace(arr, length, group_ct * 2);
}
```

**edit 3** I guess the correct term is deinterleave, not deinterlace
This is essentially a matrix transposition problem. Your array

```
[1 a]
[2 b]
[3 c]
[4 d]
```

is equivalent to `1, a, 2, b, 3, c, 4, d` if represented as a vector (by reading rows first). The transpose of this matrix is:

```
[1 2 3 4]
[a b c d]
```

which is equivalent to `1, 2, 3, 4, a, b, c, d`. There is a [wikipedia page](http://en.wikipedia.org/wiki/In-place_matrix_transposition) that deals with in-place matrix transposition for the general cases. I guess the algorithm for the non-square matrix would be directly applicable.

---

There is a slow (not sure if O(n^2) or worse, and it is late) algorithm that you can use. The idea is to rotate, in place, the sub-array from position `i` to position `2*i`. For example:

```
START: 1a2b3c4d5e6f
1(a2)... -> 1(2a)...
12(ab3)... -> 12(3ab)...
123(abc4)... -> 123(4abc)...
1234(abcd5)... -> 1234(5abcd)...
12345(abcde6)... -> 12345(6abcde)..
123456(abcdef) -> DONE
```

The first member of the array is index 0. At step 1, you select the sub-array `a[1:2]`, and rotate it right (all members go to the next location, and the last one goes to the start). Next step, you select `a[2:4]`, and rotate that, etc. Make sure you don't rotate the last sub-array `a[n/2:n]`.

---

And a final option, if you do not need to do bulk operations for performance (such as `memcpy`), is to provide an accessor function, and transform the index instead of moving any bytes. Such a function is almost trivial to write: if the index is less than `max/2`, return the entry at `2*index`, otherwise, return the entry at `2*(index-max/2)+1`.
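For what it's worth, here is a sketch of that slow rotation algorithm in C++, using `std::rotate` for the right-rotation (it assumes an even-length array holding n/2 interleaved pairs):

```
#include <algorithm>  // std::rotate
#include <cstddef>

// Rotation-based de-interleave: each step right-rotates the sub-array
// [i, 2*i] by one, pulling the next "number" into place. Quadratic in
// element moves, but O(1) extra space.
template <typename T>
void deinterleave_slow(T* arr, std::size_t n) {
    for (std::size_t i = 1; i + 1 <= n / 2; ++i) {
        // std::rotate(first, middle, last) makes `middle` the new first
        // element; choosing middle = last - 1 is a right-rotation by one.
        std::rotate(arr + i, arr + 2 * i, arr + 2 * i + 1);
    }
}
```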
How to make C# Switch Statement use IgnoreCase If I have a switch-case statement where the object in the switch is a string, is it possible to do an ignore-case compare? I have, for instance:

```
string s = "house";
switch (s)
{
  case "houSe": s = "window";
}
```

Will `s` get the value "window"? How do I override the switch-case statement so it will compare the strings ignoring case?
As you seem to be aware, lowercasing two strings and comparing them is not the same as doing an ignore-case comparison. There are lots of reasons for this. For example, the Unicode standard allows text with diacritics to be encoded multiple ways. Some characters include both the base character and the diacritic in a single code point. These characters may also be represented as the base character followed by a combining diacritic character. These two representations are equal for all purposes, and the culture-aware string comparisons in the .NET Framework will correctly identify them as equal, with either the CurrentCulture or the InvariantCulture (with or without IgnoreCase). An ordinal comparison, on the other hand, will incorrectly regard them as unequal.

Unfortunately, `switch` doesn't do anything but an ordinal comparison. An ordinal comparison is fine for certain kinds of applications, like parsing an ASCII file with rigidly defined codes, but ordinal string comparison is wrong for most other uses.

What I have done in the past to get the correct behavior is just mock up my own switch statement. There are lots of ways to do this. One way would be to create a `List<T>` of pairs of case strings and delegates. The list can be searched using the proper string comparison. When the match is found then the associated delegate may be invoked.

Another option is to do the obvious chain of `if` statements. This usually turns out to be not as bad as it sounds, since the structure is very regular.

The great thing about this is that there isn't really any performance penalty in mocking up your own switch functionality when comparing against strings. The system isn't going to make an O(1) jump table the way it can with integers, so it's going to be comparing each string one at a time anyway.

If there are many cases to be compared, and performance is an issue, then the `List<T>` option described above could be replaced with a sorted dictionary or hash table. Then the performance may potentially match or exceed the switch statement option.

Here is an example of the list of delegates:

```
delegate void CustomSwitchDestination();
List<KeyValuePair<string, CustomSwitchDestination>> customSwitchList;
CustomSwitchDestination defaultSwitchDestination = new CustomSwitchDestination(NoMatchFound);
void CustomSwitch(string value)
{
    foreach (var switchOption in customSwitchList)
        if (switchOption.Key.Equals(value, StringComparison.InvariantCultureIgnoreCase))
        {
            switchOption.Value.Invoke();
            return;
        }
    defaultSwitchDestination.Invoke();
}
```

Of course, you will probably want to add some standard parameters and possibly a return type to the CustomSwitchDestination delegate. And you'll want to make better names!

If the behavior of each of your cases is not amenable to delegate invocation in this manner, such as if different parameters are necessary, then you’re stuck with chained `if` statements. I’ve also done this a few times.

```
if (s.Equals("house", StringComparison.InvariantCultureIgnoreCase))
{
    s = "window";
}
else if (s.Equals("business", StringComparison.InvariantCultureIgnoreCase))
{
    s = "really big window";
}
else if (s.Equals("school", StringComparison.InvariantCultureIgnoreCase))
{
    s = "broken window";
}
```
Retrieving HTML5 video duration separately from the file I am working on building a HTML5 video player with a custom interface, but I am having some problems getting the video duration information to display.

My HTML is simple:

```
<video id="video" poster="image.jpg" controls>
    <source src="video_path.mp4" type="video/mp4" />
    <source src="video_path.ogv" type="video/ogg" />
</video>
<ul class="controls">
    <li class="time"><p><span id="timer">0</span> of <span id="duration">0</span></p></li>
</ul>
```

And the javascript I am using to get and insert the duration is

```
var duration = $('#duration').get(0);
var vid_duration = Math.round(video.duration);
duration.firstChild.nodeValue = vid_duration;
```

The problem is nothing happens. I know the video file has the duration data because if I just use the default controls, it displays fine. But the real strange thing is if I put alert(duration) in my code like so

```
alert(duration);
var vid_duration = Math.round(video.duration);
duration.firstChild.nodeValue = vid_duration;
```

then it works fine (minus the annoying alert that pops up). Any ideas what is happening here or how I can fix it?

UPDATE: Ok, so although I haven't solved this problem exactly, I did figure out a workaround that handles my biggest concern... the user experience. First, the video doesn't begin loading until after the viewer hits the play button, so I am assuming that the duration information wasn't available to be pulled (I don't know how to fix this particular issue... although I assume that it would involve just loading the video metadata separately from the video, but I don't even know if that is possible).

So to get around the fact that there is no duration data, I decided to hide the duration info (and actually the entire control) completely until you hit play.

That said, if anyone knows how to load the video metadata separately from the video file, please share. I think that should completely solve this problem.
The issue is in WebKit browsers; the video metadata is loaded after the video, so it is not available when the JS runs. You need to query the readyState attribute; this has a series of values from 0 to 4, letting you know what state the video is in; when the metadata has loaded you'll get a value of 1.

So you need to do something like:

```
var t = window.setInterval(function(){
    if (video.readyState > 0) {
        var duration = $('#duration').get(0);
        var vid_duration = Math.round(video.duration);
        duration.firstChild.nodeValue = vid_duration;
        clearInterval(t);
    }
},500);
```

I haven't tested that code, but it (or something like it) should work. There's more information about media element attributes on [developer.mozilla.org](https://developer.mozilla.org/En/NsIDOMHTMLMediaElement).
Docker 'latest' Tagging and Pushing I have a docker image generated from my build process. I am looking to tag the latest build image with the build id tag and 'latest'. I see two ways to do this.

First approach - (Add multiple tags and push once)

```
docker tag <id> <user>/<image>:build_id
docker tag <id> <user>/<image>:latest
docker push <user>/<image>
```

Second - Tag individually and push

```
docker tag <id> <user>/<image>:build_id
docker push <user>/<image>:build_id
docker tag <id> <user>/<image>:latest
docker push <user>/<image>:latest
```

The docker documentation says if there is an image in the registry with a specific tag already, then a docker push with a new image with the same tag would overwrite the earlier image.

1. Are both First and Second Option specified above functionally the same?
2. Is there any preferred way/Best practice?
(About the original version of the question, which used `docker push` without arguments)

`docker push` will not work unless you provide a repository name.

```
$ docker push

"docker push" requires exactly 1 argument.
See 'docker push --help'.

Usage:  docker push [OPTIONS] NAME[:TAG] [flags]

Push an image or a repository to a registry
```

That means you need to push with a repository name. And you can either provide a TAG or not. If you do not provide a TAG, you are pushing all images for that repository.

---

In the *first approach*, you are pushing all images under the `<user>/<image>` repository.

In the *second approach*, you are pushing images one by one.

***Answers to the questions***

> 1. Are both First and Second Option specified above functionally the same?

Both First and Second Option specified above are functionally the same (in your case). If you do not provide a TAG, you are pushing all images for that repository. In your case

```
$ docker push <user>/<image>
```

will push both the `build_id` and `latest` tags.

> 2. Is there any preferred way/Best practice?

I think the second option is better and preferred, because you may not want to push all images. In that case, you can choose which image you want to push by following the second approach.
How to crop biggest rectangle out of an image I have a few images of pages on a table. I would like to crop the pages out of the image. Generally, the page will be the biggest rectangle in the image; however, all four sides of the rectangle might not be visible in some cases.

I am doing the following but not getting the desired results:

```
import cv2
import numpy as np

im = cv2.imread('images/img5.jpg')
gray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,0)
_,contours,_ = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt=contours[max_index]
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
cv2.imshow("Show",im)
cv2.imwrite("images/img5_rect.jpg", im)
cv2.waitKey(0)
```

Below are a few examples:

**1st Example**: I can find the rectangle in this image; however, I would like the remaining part of the wood to be cropped out as well.

[![enter image description here](https://i.stack.imgur.com/OBF2r.jpg)](https://i.stack.imgur.com/OBF2r.jpg)

[![enter image description here](https://i.stack.imgur.com/hx8vL.jpg)](https://i.stack.imgur.com/hx8vL.jpg)

**2nd Example**: Not finding the correct dimensions of the rectangle in this image.

[![enter image description here](https://i.stack.imgur.com/z8TQm.jpg)](https://i.stack.imgur.com/z8TQm.jpg)

[![enter image description here](https://i.stack.imgur.com/uWx6n.jpg)](https://i.stack.imgur.com/uWx6n.jpg)

**3rd Example**: Not able to find the correct dimensions in this image either.

[![enter image description here](https://i.stack.imgur.com/ofWJG.jpg)](https://i.stack.imgur.com/ofWJG.jpg)

[![enter image description here](https://i.stack.imgur.com/AaJ5r.jpg)](https://i.stack.imgur.com/AaJ5r.jpg)

**4th Example**: Same with this one as well.

[![enter image description here](https://i.stack.imgur.com/dSIqG.jpg)](https://i.stack.imgur.com/dSIqG.jpg)

[![enter image description here](https://i.stack.imgur.com/VginS.jpg)](https://i.stack.imgur.com/VginS.jpg)
As I have previously done something similar, I have experimented with Hough transforms, but they were much harder to get right for my case than using contours. I have the following suggestions to help you get started:

1. Generally paper (edges, at least) is white, so you may have better luck by going to a colorspace like YUV which better separates luminosity:

```
image_yuv = cv2.cvtColor(image,cv2.COLOR_BGR2YUV)
image_y = np.zeros(image_yuv.shape[0:2],np.uint8)
image_y[:,:] = image_yuv[:,:,0]
```

2. The text on the paper is a problem. Use a blurring effect, to (hopefully) remove these high frequency noises. You may also use morphological operations like dilation as well.

```
image_blurred = cv2.GaussianBlur(image_y,(3,3),0)
```

3. You may try to apply a canny edge-detector, rather than a simple threshold. Not necessarily, but it may help you:

```
edges = cv2.Canny(image_blurred,100,300,apertureSize = 3)
```

4. Then find the contours. In my case I only used the extreme outer contours. You may use the CHAIN\_APPROX\_SIMPLE flag to compress the contour

```
contours,hierarchy = cv2.findContours(edges,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
```

5. Now you should have a bunch of contours. Time to find the right ones. For each contour `cnt`, first find the convex hull, then use `approxPolyDP` to simplify the contour as much as possible.

```
hull = cv2.convexHull(cnt)
simplified_cnt = cv2.approxPolyDP(hull,0.001*cv2.arcLength(hull,True),True)
```

6. Now we should use this simplified contour to find the enclosing quadrilateral. You may experiment with lots of rules you come up with. The simplest method is picking the four longest segments of the contour, and then creating the enclosing quadrilateral by intersecting these four lines. Based on your case, you can find these lines based on the contrast the line makes, the angle they make and similar things.

7. Now you have a bunch of quadrilaterals. You can now perform a two-step method to find your required quadrilateral. First you remove those that are probably wrong. For example, one angle of the quadrilateral is more than 175 degrees. Then you can pick the one with the biggest area as the final result. You can see the orange contour as one of the results I got at this point:

[![All Contours](https://i.stack.imgur.com/xLGJw.jpg)](https://i.stack.imgur.com/xLGJw.jpg)

8. The final step, after finding (hopefully) the right quadrilateral, is transforming back to a rectangle. For this you can use `findHomography` to come up with a transformation matrix.

```
(H,mask) = cv2.findHomography(cnt.astype('single'),np.array([[[0., 0.]],[[2150., 0.]],[[2150., 2800.]],[[0.,2800.]]],dtype=np.single))
```

The numbers assume projecting to letter paper. You may come up with better and more clever numbers to use. You also need to reorder the contour points to match the order of coordinates of the letter paper. Then you call `warpPerspective` to create the final image:

```
final_image = cv2.warpPerspective(image,H,(2150, 2800))
```

This warping should result in something like the following (from my results before):

[![Warping](https://i.stack.imgur.com/cU57Y.jpg)](https://i.stack.imgur.com/cU57Y.jpg)

I hope this helps you to find an appropriate approach in your case.
How do I get one component "below" another in React? Here is my React code. I have two components: BlackBox and SomeText. I want the BlackBox to be full screen and the SomeText to be below it. I would assume that two divs in a row would render in order (either next to each other or below one another), but BlackBox in this example is completely on top of SomeText.

```
class BlackBox extends React.Component {
    render() {
        return (<div id="blackbox"></div>);
    }
}

class SomeText extends React.Component {
    render() {
        return(<div id="sometext">Hello</div>);
    }
}

class App extends React.Component {
  render() {
    return(
        <div>
        <BlackBox/>
        <SomeText/>
        </div>
    );
  }
}
```

Here is the CSS code to make the BlackBox full screen:

```
#blackbox {
  background-color: #000000;
  height: 100%;
  width: 100%;
  left: 0;
  top: 0;
  position: absolute;
}
```

Why does the BlackBox cover the SomeText?
> Why does the BlackBox cover the SomeText?

Because it is absolutely positioned. And, from what I can assume, `SomeText` is not.

Make `#blackbox` `position: relative`. You will most likely run into a problem of making it full height. It's easy to solve but requires some styling for other HTML elements on the page. Take a look at this for example: <https://stackoverflow.com/a/4550198/7492660>

Or just do this:

```
#block1{
  position: absolute;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  background: #000;
  z-index: 0;
}
#block2{
  position: absolute;
  left: 0;
  right: 0;
  bottom: 0;
  background: #ff4444;
  padding: 3px 8px;
  color: #fff;
  z-index: 1;
}
```

```
<div id="block1"></div>
<div id="block2">Some Text</div>
```
Can C++ streambuf methods throw exceptions? I'm trying to find a method to get the number of characters read or written to a stream that is reliable even if there is an error and the read/write ends short. I was doing something like this:

```
return stream.rdbuf()->sputn(buffer, buffer_size);
```

but if the streambuf implementation of `overflow` is permitted to throw exceptions this won't work. Is it? I've not been able to find it documented anywhere.
`basic_streambuf::overflow` is allowed to throw an exception upon failure as documented in *27.6.3.4.5/6*, and sadly there's no way to ensure at *compile time* that the function won't ever throw an exception.

Seems like you are out of luck, and the only way to be 100% sure that `overflow` won't throw an exception is to write your own `streambuf` that just doesn't do that upon failure.

---

> [*27.6.3.4.5/2-3*] **`int_type overflow(int_type c = traits::eof())`**
>
> ...
>
> [*27.6.3.4.5/5*]
>
> **Requires**: Every overriding definition of this virtual function shall obey the following constraints:
>
> 1) The effect of consuming a character on the associated output sequence is specified.
>
> 2) Let r be the number of characters in the pending sequence not consumed. If r is non-zero then pbase() and pptr() shall be set so that: pptr() - pbase() == r and the r characters starting at pbase() are the associated output stream. In case r is zero (all characters of the pending sequence have been consumed) then either pbase() is set to NULL, or pbase() and pptr() are both set to the same non-NULL value.
>
> 3) The function may fail if either appending some character to the associated output stream fails or if it is unable to establish pbase() and pptr() according to the above rules.
>
> [*27.6.3.4.5/6*]
>
> **Returns**: `traits::eof ()` or **throws an exception if the function fails**
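If you go that route, here is a minimal sketch of such a wrapper (an illustration only, not standard-mandated behavior: it forwards characters to an inner `streambuf`, counts what was actually accepted, and converts any exception into `eof()`):

```
#include <streambuf>
#include <cstddef>

class counting_buf : public std::streambuf {
public:
    explicit counting_buf(std::streambuf* sink) : sink_(sink), count_(0) {}
    std::size_t count() const { return count_; }

protected:
    int_type overflow(int_type ch) override {
        if (traits_type::eq_int_type(ch, traits_type::eof()))
            return traits_type::not_eof(ch);
        try {
            if (traits_type::eq_int_type(
                    sink_->sputc(traits_type::to_char_type(ch)),
                    traits_type::eof()))
                return traits_type::eof();   // failure, but no exception
        } catch (...) {
            return traits_type::eof();       // swallow the sink's exception
        }
        ++count_;
        return ch;
    }

private:
    std::streambuf* sink_;
    std::size_t count_;
};
```

Because this buffer sets up no put area, every character written through `sputn` funnels through `overflow`, so after `buf.sputn(buffer, buffer_size)` a call to `buf.count()` tells you how many characters actually made it out, regardless of how the wrapped buffer failed.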
To reorganize code, what to choose between library and service? I want to reorganize a large application with a lot of code duplication into multiple components. Plus, some code is also duplicated across other applications. The common set of functionality that can be taken out of the main application is clearly defined. Now, do I write a library or do I write a service for this functionality, so that all such applications continue to work and there is only one code base (of common functionality) to maintain?
There's a good reason why every heavily-used, broadly-developed Linux application is structured as an executable (*e.g.*, `/usr/bin/blarf`) that handles the command line input and output, and a library (*e.g.*, `/usr/lib/libblarf.so`). The idea is that the **real work** happens inside library functions or methods that can be called by any program that wants to perform the operation, and that the command-line interface is just one of possibly many ways to ask for that to happen. For example, the [Subversion](http://en.wikipedia.org/wiki/Subversion) version control system has `libsvn`, which is used by both the `svn` command and the various GUIs.

When you say "*do I write a library or do I write a service*", I assume by "library", you mean something like `libsvn`, and that by "service", you mean something like a [REST](http://en.wikipedia.org/wiki/REST) interface. I'd say, start by refactoring the library out of your main program, and then build a service that uses the library if you feel the need.
Why is this JavaScript map NOT an Infinite Loop? I'm learning JavaScript. I wrote this code to learn the map function. But then I got confused as to why this is not mapping over the array continuously, since each map iteration pushes a new element onto the array. Shouldn't it keep visiting the newly pushed elements as it maps over them?

**Why does the map function only run for the original three elements and not for the newly pushed ones?**

I tried to debug it in the node environment and the `arr` variable goes into a **closure**. I know what a closure is, but I'm not able to understand what is going on here.

```
let array = [1, 2, 3];

array.map((element) => {
    array.push(10);
    console.log(element);
});
```

I expect that the output should be `1,2,3,10,10,10,10,10,10,10,10......10` But the actual output is only `1,2,3`.
To quote from MDN: > > The range of elements processed by map is set before the first invocation of callback. Elements which are appended to the array after the call to map begins will not be visited by callback. If existing elements of the array are changed, their value as passed to callback will be the value at the time map visits them. > > > <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map#Description> So, it behaves like that because that is how it is designed. And it is designed that way, amongst other reasons, to prevent infinite loops! `map` is a function from the Functional Programming world, where immutability is an important principle. According to this principle, if you call `map` on an input (and other variables don't change) you will always get **exactly** the same result. Allowing modification of the input breaks immutability.
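You can see both halves of that rule in a small sketch (plain JavaScript; the values are arbitrary): appended elements are never visited, but a change to a not-yet-visited element *is* observed:

```
let array = [1, 2, 3];

let result = array.map((element) => {
  array.push(10);  // appended: never visited by this map
  array[2] = 30;   // mutated before being visited: the new value is seen
  return element;
});

console.log(result);       // [1, 2, 30]
console.log(array.length); // 6 -- the pushes still happened
```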
How to fix Error CS1738 Named argument specifications must appear after all fixed arguments have been specified ``` public static Mutex CreateMutex(){ MutexAccessRule rule = new MutexAccessRule(new SecurityIdentifier(WellKnownSidType.WorldSid, null), MutexRights.FullControl, AccessControlType.Allow); MutexSecurity mutexSecurity = new MutexSecurity(); mutexSecurity.AddAccessRule(rule); bool createdNew; return new Mutex(initiallyOwned: false,"Global\\E475CED9-78C4-44FC-A2A2-45E515A2436", out createdNew,mutexSecurity); } ``` > > Error CS1738 Named argument specifications must appear after all fixed arguments have been specified > > >
So citing the C# docs:

> Named arguments, when used with positional arguments, are valid as long
>
> - as they're not followed by any positional arguments

This is why you are running into the compile error. When using C# 7.2 and later it says:

> Named arguments, when used with positional arguments, are valid as long
>
> - as they're used in the correct position.

---

*For more information see: [Named and Optional Arguments](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/named-and-optional-arguments)*

---

So if we take a look at the [constructor](https://learn.microsoft.com/en-us/dotnet/api/system.threading.mutex.-ctor?view=netframework-4.8#System_Threading_Mutex__ctor_System_Boolean_System_String_System_Boolean__System_Security_AccessControl_MutexSecurity_):

```
public Mutex (bool initiallyOwned, string name, out bool createdNew, System.Security.AccessControl.MutexSecurity mutexSecurity);
```

We will see that in your case the arguments are in the right order. And if you switch to C# 7.2 or later your code will compile. But for lower versions, you have two options:

1. remove the argument naming for `initiallyOwned`
2. use named arguments for all arguments, like:

```
return new Mutex(initiallyOwned: false,
                 name: "Global\\E475CED9-78C4-44FC-A2A2-45E515A2436",
                 createdNew: out createdNew,
                 mutexSecurity: mutexSecurity);
```

When using the second option the position of the named arguments does not matter anymore.
How do I avoid multiple t-tests (or is it ok to use multiple t tests here?) I've searched around CV for a bit, and have not found a good answer as to what test to use instead of multiple t-tests in my particular situation. I am interested in comparing mean size (of fish) between two habitat types for 20 different species. For each species, I want to know if fish are larger on one habitat type vs the other. I do not care about the relationship between species (I know that some species are bigger than others), and so ANOVA has not seemed appropriate. It seems that I could do 20 t-tests to see if some species are larger in one habitat over the other, or perhaps there is a simpler (or more statistically sound) approach? If I were to go with t-tests, would this be a case of multiple comparisons, even though the species are independent of one another?
There are two approaches to handling data of this nature: fixed effects and mixed effects. The T-test is basically a linear regression model with the size of the fish as an outcome variable and the lake from which the fish was drawn as the indicator of interest (a 0/1 variable). We have to assume the fish you sampled are representative of the population of fish in either pond, even after accounting for species! The species of the fish is a substantial source of heterogeneity in this case, since some species are always very large (but may be larger in a healthier pond), and some are always very small (but still may be larger in a healthier pond). By conditioning on the species of fish using fixed or mixed effects, you are ensuring that you are comparing "apples to apples" and can detect small intraspecies differences that are otherwise washed out by the overall larger variability due to species. A fixed effect, that is a 0/1 indicator for each species of fish in the sample excluding some "referent" group, is a good approach if you have many (perhaps 10 or more) of each species in the sample. If there are a large number of species relative to the overall sample (for instance, 120 fish sampled among 50 different species), a random intercept model for species can handle this variability.
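To make the two approaches concrete, here is a sketch in Python using `statsmodels` (the `fish` data frame and its `size`/`habitat`/`species` columns are hypothetical, and `habitat` is assumed to be coded 0/1):

```
import statsmodels.formula.api as smf

# Fixed effects: one 0/1 dummy per species (minus a referent group)
fixed = smf.ols("size ~ habitat + C(species)", data=fish).fit()

# Mixed effects: a random intercept for each species
mixed = smf.mixedlm("size ~ habitat", data=fish, groups=fish["species"]).fit()

# In both cases the coefficient on `habitat` is the intraspecies
# habitat effect, after conditioning on species
print(fixed.params["habitat"])
print(mixed.params["habitat"])
```

The equivalent R formulas would be `lm(size ~ habitat + species)` and `lmer(size ~ habitat + (1 | species))`.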
Best Practice for Errors in RESTful API What are the best practices for returning HTTP status codes in a RESTful API? I am using Laravel 4 for my PHP framework.

In the case of an error, should I use

```
return Response::json('User Exists', 401);
```

or **include a flag for `success`**

```
return Response::json([
    'success' => false,
    'data' => 'User Exists'],
    401
);
```

or **use 200 instead of 4xx, relying on `success` to determine if there is an error**

```
return Response::json([
    'success' => false,
    'data' => 'User Exists'],
    200
);
```

And in the case of success, when there is no need to return any data, do you still return anything?

**PHP API Code**

```
public function getCheckUniqueEmail() {
    // Check if Email already exist in table 'users'
    $uniqueEmail = checkIfEmailExists();

    // Return JSON Response
    if($uniqueEmail) {
        // Validation fail (user exists)
        return Response::json('User Exists', 401);
    } else {
        // Validation success
        // - Return anything?
    }
}
```
When you look at the [list of available HTTP status codes](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html), you will at some point realize that there are plenty of them, but used alone they cannot really explain an error by themselves.

So to answer your question, there are two parts. One is: How can your API communicate the reasons for an error and add useful information that the user of the API (which in most cases is another developer) can read and act upon? You should add as much information as possible, both machine readable and human readable.

The other part: How can HTTP status codes help distinguish certain error (and success) states?

This latter part is actually harder than one might think. There are the obvious cases where 404 is used to tell "not found". And 500 for any errors that are server-side.

I wouldn't use status 401, unless I really want to allow the operation to succeed if there are HTTP authentication credentials present. 401 usually triggers a dialog box in the browser, which is bad.

In case of a resource being unique and already existing, status "409 Conflict" seems appropriate. And if creating a user succeeds, status "201 Created" sounds like a good idea, too.

Note that there are a lot more status codes, some of them related to extensions of the HTTP protocol (like DAV), some completely unstandardized (like status "420 Enhance your calm" from the Twitter API). Have a look at <http://en.wikipedia.org/wiki/List_of_HTTP_status_codes> to see what has been used so far, and decide whether you want to use something appropriate for your error cases.

From my experience, it is easy to simply pick a status code and use it, but it is hard to do so consistently and in accordance with existing standards. I wouldn't however stop here just because others might complain. :) Doing RESTful interfaces right is a hard task in itself, but the more interfaces exist, the more experience has been gathered.

*Edit:* Regarding versioning: It is considered bad practice to put a version tag into the URL, like so: `example.com/api/v1/stuff`. It will work, but it isn't nice.

But the first thing is: How does your client specify which kind of representation he wants to get, i.e. how can he decide to either get JSON or XML? Answer: With the `Accept` header. He could send `Accept: application/json` for JSON and `Accept: application/xml` for XML. He might even accept multiple types, and it is for the server to decide then what to return.

Unless the server is designed to answer with more than one representation of the resource (JSON or XML, client-selected), there really isn't much choice for the client. But it still is a good thing to have the client send at least "application/json" as his only choice and then return a header `Content-type: application/json` in response. This way, both sides make themselves clear about what they expect the other side to see the content like.

Now for the versions. If you put the version into the URL, you effectively create different resources (v1 and v2), but in reality you have only one resource (= URL) with different methods to access it. Creating a new version of an API must take place when there is a breaking change in the parameters of a request and/or the representation in the response which is incompatible with the current version.

So when you create an API that uses JSON, you do not deal with generic JSON. You deal with a concrete JSON structure that is somehow unique to your API.
You can and probably should indicate this in the `Content-type` sent by the server. The "vendor" extension is there for this: `Content-type: application/vnd.IAMVENDOR.MYAPI+json` will tell the world that the basic data structure is application/json, but it is your company and your API that really tell which structure to expect. And that's exactly where the version for the API request fits in: `application/vnd.IAMVENDOR.MYAPI-v1+json`.

So instead of putting the version in the URL, you expect the client to send an `Accept: application/vnd.IAMVENDOR.MYAPI-v1+json` header, and you respond with `Content-type: application/vnd.IAMVENDOR.MYAPI-v1+json` as well. This really changes nothing for the first version, but let's see how things develop when version 2 comes into play.

The URL approach will create a completely unrelated set of new resources. The client will wonder if `example.com/api/v2/stuff` is the same resource as `example.com/api/v1/stuff`. The client might have created some resources with the v1 API and he stored the URLs for this stuff. How is he supposed to upgrade all these resources to v2? The resources really have not changed, they are the same, the only thing that changed is that they look different in v2.

Yes, the server might notify the client by sending a redirect to the v2 URL. But a redirect does not signal that the client also has to upgrade the client part of the API.

When using an accept header with the version, the URL of the resource is the same for all versions. The client decides to request the resource with either version 1 or 2, and the server might be so kind and still answer version 1 requests with version 1 responses, but all version 2 requests with the new and shiny version 2 responses.

If the server is unable to answer a version 1 request, it can tell the client by sending HTTP status "406 Not Acceptable" (The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.) The client can send the accept header with both versions included, which enables the server to respond with the one it likes most, i.e. a smart client might implement versions 1 and 2, send both in the accept header, and wait for the server to upgrade from version 1 to 2. The server will tell in every response whether it is version 1 or 2, and the client can act accordingly - he does not need to know the exact date of the server version upgrade.

To sum it up: For a very basic API with limited, maybe internal, usage even having a version might be overkill. But you never know if this will be true a year from now. It is always a very good idea to include a version number in an API. And the best spot for this is inside the MIME type that your API is about to use. Checking for the single existing version should be trivial, but you have the option of transparently upgrading later, without confusing existing clients.
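To make the negotiation concrete, here is a sketch of one request/response pair, using the made-up vendor type from above:

```
GET /api/stuff HTTP/1.1
Host: example.com
Accept: application/vnd.IAMVENDOR.MYAPI-v1+json

HTTP/1.1 200 OK
Content-type: application/vnd.IAMVENDOR.MYAPI-v1+json

{"stuff": "..."}
```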
Chrome not honoring border radius on children I'm having a bit of trouble getting Chrome to honor the border radius on child elements. Here's the setup:

```
<div class='wrapper'>
  <img id='sosumi' src='http://images.apple.com/safari/images/overview_hero.jpg' />
</div>
```

If the wrapper is a positioned element (e.g. position: relative) and has a border-radius, then the border radius will not be applied to the img content. It doesn't have to be an image, either; any content that fills the background will do.

Here's a reduced example page that shows off the problem. View it in Safari, Mobile Safari, Firefox, or IE and the corners of the image will be clipped to the round corner. Viewed in Chrome, the image overflows the corner (despite the overflow:hidden css) and looks ugly.

Have a look: <https://dl.dropbox.com/u/433436/no-rounding/index.html>

The question: Is there some workaround for this that's not too insane? Does anyone know why this affects one WebKit-based browser and not others? Perhaps a fix is coming in a Chrome update soon?
You need to remove the `position: relative`.

If you *really* need position relative then you can double-wrap your element:

HTML:

```
<div class="outer">
    <div class="wrapper">
        <div class="inside">
        </div>
    </div>
</div>
```

CSS:

```
.outer {
    position: relative;
}
.wrapper {
    width: 100px;
    height: 100px;
    overflow: hidden;
    border: 3px solid red;
    border-radius: 20px;
}
.inside {
    width: 100px;
    height: 100px;
    background-color: #333;
}
```

<http://jsfiddle.net/eprRj/>

See these related questions:

- [Forcing child to obey parent's curved borders in CSS](https://stackoverflow.com/questions/3714862/forcing-child-to-obey-parents-curved-borders-in-css)
- [CSS Border radius not trimming image on Webkit](https://stackoverflow.com/questions/8705249/css-border-radius-not-trimming-image-on-webkit)
- [How to make CSS3 rounded corners hide overflow in Chrome/Opera](https://stackoverflow.com/questions/5736503/how-to-make-css3-rounded-corners-hide-overflow-in-chrome-opera)
Adding a Google +1 button in Android App I was just wondering if there was any way to add a Google +1 button inside my Android app. I have seen a +1 on the Android Market, so I would think there would be some way to do this.
With the Google+ platform for Android, you are now able to integrate a native +1 button in your Android app. 1) You first need to [initialize](https://developers.google.com/+/mobile/android/#initialize_the_plusclient) the `PlusClient` object in your Activity. 2) Include the PlusOneButton in your layout: ``` <com.google.android.gms.plus.PlusOneButton xmlns:plus="http://schemas.android.com/apk/lib/com.google.android.gms.plus" android:id="@+id/plus_one_button" android:layout_width="wrap_content" android:layout_height="wrap_content" plus:size="standard" plus:annotation="inline" /> ``` 3) Assign the PlusOneButton to a member variable in your Activity.onCreate handler. ``` @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mPlusClient = new PlusClient(this, this, this); mPlusOneButton = (PlusOneButton) findViewById(R.id.plus_one_button); } ``` 4) Refresh the PlusOneButton's state each time the activity receives focus in your Activity.onResume handler. ``` protected void onResume() { super.onResume(); // Refresh the state of the +1 button each time the activity receives focus. mPlusOneButton.initialize(mPlusClient, URL); } ``` For more information, see <https://developers.google.com/+/mobile/android/#recommend_content_with_the_1_button>
node.js async.each callback, how do I know when it's done? I am trying to wrap my head around Node.js and some async operations. In the following code, I fetch some RSS feeds and store the articles found if I haven't stored them before. The code works and stores new articles. However, I'm not sure how to alter this so that I know when all the articles are done being parsed. For example, the callback on each `async.eachLimit` gets called each time the limit is met (after 10 articles or 5 feeds). So how do I know when they're done? ``` var FeedParser = require('feedparser'); var request = require('request'); var mysql = require('mysql'); var async = require('async'); var connection = mysql.createConnection({ host : 'localhost', user : 'someuser', password : 'somepass', database : 'somedb' }); connection.connect(); connection.query('SELECT * FROM rssfeed', function(err, rows, fields) { if(err == null){ async.eachLimit(rows, 5, parseFeed, function(err) { if(! err) { //horray } else { console.log(err); } }); } }); function parseFeed(feed, callback) { var articles = []; request(feed.link) .pipe(new FeedParser()) .on('error', function(error) { callback(error); }) .on('meta', function(meta) { }) .on('readable', function() { var stream = this, item; item = stream.read(); if(item != null) { item.rssfeed_id = feed.id; articles.push(item); } }) .on('finish', function() { async.eachLimit(articles, 10, parseArticle, function(err) { if(! err) { console.log('article each callback'); } else { callback(error); } }); }); callback(); } function parseArticle(item, callback) { if(item.hasOwnProperty('rssfeed_id') && item.hasOwnProperty('guid') && item.hasOwnProperty('link') && item.hasOwnProperty('title')){ connection.query('SELECT * FROM rssarticle WHERE rssfeed_id = ? AND guid = ?', [item.rssfeed_id, item.guid], function(err, rows, fields) { if(rows.length == 0){ connection.query('INSERT INTO rssarticle SET ?', { rssfeed_id: item.rssfeed_id, link: item.link, title: item.title, description: item.description, publish_date: item.pubDate, guid: item.guid }, function(err, result){ if(err != null){ console.log(err); } }); } }); } callback(); } ```
For one, you're preemptively calling your callbacks way too early. ``` function parseFeed(feed, callback) { request .streamStuff() .streamStuff() .streamStuff(); callback(); } ``` You shouldn't be calling `callback` before you're done. Otherwise your "finished" method will be called but your async code will actually still be running. So instead of doing: ``` .on('finish', function() { async.eachLimit(articles, 10, parseArticle, function(err) { if(! err) { console.log('article each callback'); } else { callback(error); } }); }); callback(); ``` Just do ``` .on('finish', function() { async.eachLimit(articles, 10, parseArticle, function(err) { if(! err) { // assuming this is a stub and really ends up doing `callback();` console.log('article each callback'); } else { callback(error); } }); }); ```
What is the Android/Java equivalent of the iOS call [NSDate timeIntervalSinceNow]? Here is some sample code from iOS: ``` NSDate *startDateX = [NSDate date]; // Do a bunch of stuff NSLog(@"Time difference: %f", -[startDateX timeIntervalSinceNow]); ``` The output looks like this: ``` Time difference: 15.009682 ``` What is the equivalent way to do this in Android/Java?
You can use [`System.currentTimeMillis()`](http://developer.android.com/reference/java/lang/System.html#currentTimeMillis%28%29) to calculate milliseconds:

```
long start = System.currentTimeMillis();
// Do stuff
long difference = System.currentTimeMillis() - start;
Log.v("Time difference:", String.valueOf(difference));
```

If you want something more precise use [`System.nanoTime()`](http://developer.android.com/reference/java/lang/System.html#nanoTime%28%29). If you want seconds, simply divide the difference by 1000.

There is the DateUtils class (`android.text.format.DateUtils`) that can calculate relative time differences like "5 mins ago"; a minimal sketch of that follows below. Also [Joda Time](http://joda-time.sourceforge.net/) is a great 3rd party library if you plan to do a lot of work with time.
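For example (the exact wording of the output depends on the device locale):

```
import android.text.format.DateUtils;

long eventTime = System.currentTimeMillis() - 15 * 1000; // 15 seconds ago
CharSequence relative = DateUtils.getRelativeTimeSpanString(
        eventTime,                      // time of the event
        System.currentTimeMillis(),     // "now"
        DateUtils.SECOND_IN_MILLIS);    // minimum resolution
// relative is something like "15 seconds ago"
```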
Draw a rectangle in Golang? I want to draw a mailing label with some rectangles, barcodes, and then finally generate a PNG/PDF file. Is there a better way to draw a shape in Go other than doing it with primitives, pixel by pixel?
The standard Go library does not provide primitive drawing or painting capabilities. What it provides is models for colors ([`image/color`](http://golang.org/pkg/image/color/) package) and an [`Image`](http://golang.org/pkg/image/#Image) interface with several implementations ([`image`](http://golang.org/pkg/image/) package). The blog post [**The Go Image package**](http://blog.golang.org/go-image-package) is a good introduction to this.

It also provides the capability to combine images (e.g. draw them on each other) with different operations in the [`image/draw`](http://golang.org/pkg/image/draw/) package. This can be used for a lot more than it sounds at first. There is a nice blog article about the `image/draw` package which showcases some of its potential: [**The Go image/draw package**](http://blog.golang.org/go-imagedraw-package)

Another example is the open-source game [Gopher's Labyrinth](https://github.com/gophergala/golab) (*disclosure: I'm the author*) which has a graphical interface and uses nothing but the standard Go library to assemble its view.

![Gopher's Labyrinth Screenshot](https://i.stack.imgur.com/cIyhe.png)

It's open source; check out its sources to see how it is done. It has a scrollable game view with moving images/animations in it.

The standard library also supports reading and writing common image formats like [GIF](http://golang.org/pkg/image/gif/), [JPEG](http://golang.org/pkg/image/jpeg/), [PNG](http://golang.org/pkg/image/png/), and support for other formats is available out of the box: [BMP](http://godoc.org/golang.org/x/image/bmp), [RIFF](http://godoc.org/golang.org/x/image/riff), [TIFF](http://godoc.org/golang.org/x/image/tiff) and even [WEBP](http://godoc.org/golang.org/x/image/webp) (only a reader/decoder).

Although support is not given by the standard library, it is fairly easy to draw lines and rectangles on an image. Given an `img` image which supports changing a pixel with a method: `Set(x, y int, c color.Color)` (for example [`image.RGBA`](http://golang.org/pkg/image/#RGBA) is perfect for us) and a `col` of type [`color.Color`](http://golang.org/pkg/image/color/#Color):

```
// HLine draws a horizontal line
func HLine(x1, y, x2 int) {
    for ; x1 <= x2; x1++ {
        img.Set(x1, y, col)
    }
}

// VLine draws a vertical line
func VLine(x, y1, y2 int) {
    for ; y1 <= y2; y1++ {
        img.Set(x, y1, col)
    }
}

// Rect draws a rectangle utilizing HLine() and VLine()
func Rect(x1, y1, x2, y2 int) {
    HLine(x1, y1, x2)
    HLine(x1, y2, x2)
    VLine(x1, y1, y2)
    VLine(x2, y1, y2)
}
```

Using these simple functions, here is a runnable example program which draws a line and a rectangle and saves the image into a `.png` file:

```
package main

import (
    "image"
    "image/color"
    "image/png"
    "os"
)

var img = image.NewRGBA(image.Rect(0, 0, 100, 100))
var col color.Color

func main() {
    col = color.RGBA{255, 0, 0, 255} // Red
    HLine(10, 20, 80)
    col = color.RGBA{0, 255, 0, 255} // Green
    Rect(10, 10, 80, 50)

    f, err := os.Create("draw.png")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    png.Encode(f, img)
}
```

If you want to draw texts, you can use the [Go implementation of FreeType](https://github.com/golang/freetype).
Also check out this question for a simple introduction to drawing strings on images: [How to add a simple text label to an image in Go?](https://stackoverflow.com/questions/38299930/how-do-add-a-text-label-to-an-image-in-go) If you want advanced and more complex drawing capabilities, there are also [many external libraries](https://github.com/golang/go/wiki/Projects#graphics-and-audio) available, for example: <https://github.com/llgcode/draw2d> <https://github.com/fogleman/gg>
Is it possible to extend Wordpress XMLRPC interface from a plugin? Is it possible to create a plugin that, when active, would add a new "function" to the XMLRPC interface and handle its calling?
In short, yes. You can add a function as either a plug-in or in your theme's functions.php file that handles XMLRPC calls. You'll need the following sections: ``` function xml_add_method( $methods ) { $methods['myClient.myMethod'] = 'my_method_callback'; return $methods; } add_filter( 'xmlrpc_methods', 'xml_add_method'); ``` This function adds your method call to the built-in XMLRPC method handler. When someone makes a request to <http://yoursite.com/xmlrpc.php> with this method, all parameters will be sent to the `my_method_callback()` function: ``` function my_method_callback( $args ) { // Do Something // Return Something } ``` I use this system to handle error reporting with my plug-ins. When one of my plug-ins malfunctions on a client's website, it reports the malfunction by posting data to <http://www.mywordpressinstallation.com/xmlrpc.php>. On my site, I have a plug-in that stores this information in a database so I can review it later and fix bugs.
algorithm to find the nearby friends? I have a python program sitting on the server side managing user location information; each friend has a pair of (longitude, latitude). Given a (longitude, latitude) point, how can I find the nearby (say, within 5KM) friends efficiently? I have 10K users online...

Thanks.

Bin
New Answer:

I would store lat and long in separate columns. Place indexes on them. Then when you want to find the nearby friends of a particular user, just do something like

```
select field1, field1, ..., fieldn from users
where user_lat > this_lat - phi
and user_lat < this_lat + phi
and user_lon > this_lon - omega
and user_lon < this_lon + omega
```

where `phi` and `omega` are the degrees of latitude and longitude that correspond to your desired distance. These will vary depending on where on the globe you are, but there are established equations for figuring them out (a sketch follows below). There's also the possibility that your database can do those calculations for you.

---

Old answer:

I would look at [quadtrees](http://en.wikipedia.org/wiki/Quadtree) and [kd-trees](http://en.wikipedia.org/wiki/Kd-tree). Kd-trees would be the canonical solution here, I believe.
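A rough sketch of those equations in Python (an equirectangular approximation using ~111.32 km per degree of latitude; fine for a 5 km bounding box away from the poles, not for precise geodesy):

```
import math

def box_half_widths(lat_deg, distance_km):
    # ~111.32 km per degree of latitude; a degree of longitude shrinks
    # with the cosine of the latitude
    km_per_deg = 111.32
    phi = distance_km / km_per_deg
    omega = distance_km / (km_per_deg * math.cos(math.radians(lat_deg)))
    return phi, omega

phi, omega = box_half_widths(40.0, 5.0)  # a user near 40°N, 5 km radius
```

Note that the bounding box slightly over-selects (its corners are farther than 5 km), so if exactness matters you can filter the returned rows with a proper great-circle (haversine) distance check.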
Should I be concerned about the Linux.Darlloz worm? As an Ubuntu user, any tips on how Linux.Darlloz could affect Ubuntu and ways to prevent it?
If you don't have php-cgi installed, or if you have updated it recently: nothing.

> Linux.Darlloz is a worm that spreads to vulnerable systems by exploiting the PHP 'php-cgi'.

Ubuntu already **released fixes to prevent the spread more than a year ago**:

- <http://people.canonical.com/~ubuntu-security/cve/2012/CVE-2012-1823.html>
- <http://people.canonical.com/~ubuntu-security/cve/2012/CVE-2012-2311.html>
- <http://people.canonical.com/~ubuntu-security/cve/2012/CVE-2012-2335.html>
- <http://people.canonical.com/~ubuntu-security/cve/2012/CVE-2012-2336.html>
Can you convert an R raw vector representing an RDS file back into an R object without a round trip to disk? I have an RDS file that is uploaded and then downloaded via `curl::curl_fetch_memory()` (via `httr`) - this gives me a raw vector in R. Is there a way to read that raw vector representing the RDS file to return the original R object? Or does it always have to be written to disk first?

I have a setup similar to below:

```
saveRDS(mtcars, file = "obj.rds")

# upload the obj.rds file
...

# download it again via httr::write_memory()
...

obj
# [1] 1f 8b 08 00 00 00 00 00 00 03 ad 56 4f 4c 1c 55 18 1f ca 02 bb ec b2 5d
# ...

is.raw(obj)
#[1] TRUE
```

It seems `readRDS()` should be used to uncompress it, but it takes a connection object and I don't know how to make a connection object from an R raw vector - `rawConnection()` looked promising but gave:

```
rawConnection(obj)
#A connection with
#description "obj"
#class "rawConnection"
#mode "r"
#text "binary"
#opened "opened"
#can read "yes"
#can write "no"

readRDS(rawConnection(obj))
#Error in readRDS(rawConnection(obj)) : unknown input format
```

Looking through `readRDS` it looks like it uses `gzlib()` underneath, but I couldn't get that to work with the raw vector object.

If it's downloaded via `httr::write_disk()` -> `curl::curl_fetch_disk()` -> `readRDS()` then it's all good, but this is a round trip to disk and I wondered if it could be optimised for big files.
By default, RDS file streams are gzipped. To read a raw connection you need to manually wrap it into a [`gzcon`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/gzcon.html): ``` con = rawConnection(obj) result = readRDS(gzcon(con)) ``` This works even when the stream *isn’t* gzipped. But unfortunately it fails if a different supported compression method (e.g. `'bzip2'`) was used to create the RDS file. Unfortunately R doesn’t seem to have a `gzcon` equivalent for bzip2 or xz. For those formats, you can manually decompress the data and use `unserialize` instead of `readRDS`: ``` result = unserialize(memDecompress(obj)) ``` This works for any data produced by `saveRDS`. It might fail for RDS objects manually created via `memCompress(serialize(…, NULL))`, because `memCompress` is not guaranteed to write a complete compression header that allows detecting the compression method.
Merge element of nested List in unique list c# I have a nested List as in the example: ``` List<List<int>> myList = new List<List<int>>(); myList.Add(new List<int> { 2, 7, 3 }); myList.Add(new List<int> { 4, 6}); myList.Add(new List<int> { 2, 5, 1 }); myList.Add(new List<int> { 7, 0, 2 }); myList.Add(new List<int> { 4, 9 }); ``` I want to merge all the lists having at least an element in common so that the output will be a `List<List<int>>` with elements: ``` List<int> 2, 7, 3, 5, 1, 0 List<int> 4,6,9 ``` Thank you
You can use [`HashSet`](https://msdn.microsoft.com/en-us/library/bb359438(v=vs.110).aspx) for a solution, though I am sure efficiency can be improved: ``` public static void Main(string[] args) { List<List<int>> myList = new List<List<int>>(); myList.Add(new List<int> { 2, 7, 3 }); myList.Add(new List<int> { 4, 6}); myList.Add(new List<int> { 2, 5, 1 }); myList.Add(new List<int> { 7, 0, 2 }); myList.Add(new List<int> { 4, 9 }); var result = FindCommonSets(myList); } static List<HashSet<T>> FindCommonSets<T>(IEnumerable<IEnumerable<T>> data) { List<HashSet<T>> sets = new List<HashSet<T>>(); bool anyModified = false; foreach (var list in data) { //find a set which already overlaps this list. var set = sets.FirstOrDefault(s => s.Overlaps(list)); if (set != null) { //if we find one, dump all the elements of this list into the set. set.UnionWith(list); anyModified = true; } else { //if not, add a new set based on this list. sets.Add(new HashSet<T>(list)); } } if (anyModified) { //run the whole thing again with the new data if anything was changed in this iteration. return FindCommonSets(sets); } return sets; } ``` edit: Changed to a recursive implementation following the issues brought up in the comments.
qsort not sorting the content correctly, kruskal's algorithm I am trying to sort an array of structs using `qsort`, but it's not properly sorting the content. The structure node consists of the starting vertex, the ending vertex, and the cost of reaching from vertex 'a' to vertex 'b'. I am writing the code for Kruskal's algorithm ``` #include <stdio.h> #include <stdlib.h> int v, e; typedef struct node { int a; int b; int cost; } node; int compare(const void *a, const void *b) { const node *x = *(node **)a; const node *y = *(node **)b; return (x->cost > y->cost) ? 1 : 0; } int main() { scanf("%d %d", &v, &e); int i; node *arr[e]; for (i = 0; i < e; i++) { int a, b, cost; scanf("%d %d %d", &a, &b, &cost); arr[i] = (node *)malloc(sizeof(node)); arr[i]->a = a; arr[i]->b = b; arr[i]->cost = cost; } qsort(arr, e, sizeof(node *), compare); printf("\n\n"); for (int i = 0; i < e; i++) { printf("%d %d %d\n", arr[i]->a, arr[i]->b, arr[i]->cost); } return 0; } ``` Input: ``` 9 14 2 5 4 7 8 7 0 1 4 1 7 11 0 7 8 7 6 1 6 5 2 5 4 10 3 5 14 3 4 9 2 3 7 1 2 8 2 8 2 8 6 6 ``` Output: ``` 2 5 4 2 8 2 0 1 4 8 6 6 6 5 2 7 6 1 7 8 7 2 3 7 1 2 8 0 7 8 3 4 9 5 4 10 1 7 11 3 5 14 ``` The first few rows are not sorted properly as per the output. Please help me out.
The comparison function must return a negative, zero, or positive value, depending on whether the first compared element is greater than, equal to, or less than the second compared element (for descending sorting). So this function definition

```
int compare(const void* a,const void* b){
    const node* x = *(node**)a;
    const node* y = *(node**)b;
    return (x->cost > y->cost)?1:0;
}
```

is incorrect. Instead you could write it the following way (provided that you are going to sort the array in descending order):

```
int compare(const void* a,const void* b){
    const node* x = *(node**)a;
    const node* y = *(node**)b;
    return ( x->cost < y->cost ) - ( y->cost < x->cost );
}
```

If you want to sort the array in ascending order then the comparison function can look like

```
int compare(const void* a,const void* b){
    const node* x = *(node**)a;
    const node* y = *(node**)b;
    return ( y->cost < x->cost ) - ( x->cost < y->cost );
}
```

Here is a demonstration program.

```
#include <stdio.h>
#include <stdlib.h>

typedef struct node{
    int a;
    int b;
    int cost;
}node;

int ascending(const void* a,const void* b){
    const node* x = *(node**)a;
    const node* y = *(node**)b;
    return ( y->cost < x->cost ) - ( x->cost < y->cost );
}

int descending(const void* a,const void* b){
    const node* x = *(node**)a;
    const node* y = *(node**)b;
    return ( x->cost < y->cost ) - ( y->cost < x->cost );
}

int main(void)
{
    node * arr[] =
    {
        malloc( sizeof( node ) ),
        malloc( sizeof( node ) ),
        malloc( sizeof( node ) )
    };
    const size_t N = sizeof( arr ) / sizeof( *arr );

    arr[0]->a = 2; arr[0]->b = 5; arr[0]->cost = 4;
    arr[1]->a = 7; arr[1]->b = 8; arr[1]->cost = 7;
    arr[2]->a = 0; arr[2]->b = 1; arr[2]->cost = 4;

    for ( size_t i = 0; i < N; i++ )
    {
        printf( "%d %d %d\n", arr[i]->a, arr[i]->b, arr[i]->cost );
    }
    putchar( '\n' );

    qsort( arr, N, sizeof( *arr ), ascending );

    for ( size_t i = 0; i < N; i++ )
    {
        printf( "%d %d %d\n", arr[i]->a, arr[i]->b, arr[i]->cost );
    }
    putchar( '\n' );

    qsort( arr, N, sizeof( *arr ), descending );

    for ( size_t i = 0; i < N; i++ )
    {
        printf( "%d %d %d\n", arr[i]->a, arr[i]->b, arr[i]->cost );
    }
    putchar( '\n' );

    for ( size_t i = 0; i < N; i++ )
    {
        free( arr[i] );
    }
}
```

The program output is

```
2 5 4
7 8 7
0 1 4

2 5 4
0 1 4
7 8 7

7 8 7
2 5 4
0 1 4
```
Finding out the name to `:require` in a namespace I am following this tutorial: <https://practicalli.github.io/blog/posts/web-scraping-with-clojure-hacking-hacker-news/> and I have had a hard time dealing with the `:require` part of the `ns` macro. This tutorial shows how to parse HTML and pull out information from it with a library called enlive, and to use it, I first had to put ``` ... :dependencies [[org.clojure/clojure "1.10.1"] [enlive "1.1.6"]] ... ``` in my `project.clj`, and require the library in `core.clj` as the following: ``` (ns myproject.core (:require [net.cgrand.enlive-html :as html]) (:gen-class)) ``` I spent so much time finding out the name `net.cgrand.enlive-html`, since it was different from the package's name itself (which is just `enlive`), and I couldn't find it through any of the `lein` commands (I eventually found it out by googling). How can I easily find out what name to `require`?
### Practical approach

If your editor/IDE helps with auto-completion and docs, that might be a first route. Other than that, libraries usually have some read-me online, where they show off what they do (what to require, how to use it).

### Strict approach

If you really have nothing to go on about a library, you will find the downloaded library in your `~/.m2/repository` directory. Note that deps without the "group/artifact" naming convention just repeat the artifact name as the group; next comes the version. So you can find your library's JAR file here: `.m2/repository/enlive/enlive/1.1.6/enlive-1.1.6.jar`.

JAR files are just ZIP files. Inside the JAR you will usually find the source files of the library, and the directory structure reflects the package structure. E.g. one file there is `net/cgrand/enlive_html.clj` (note the use of `_` instead of `-`; this is due to name munging for the JVM). You can then require that namespace in your REPL and explore it with `doc` or `dir` (see the sketch below), or open the file to see the docs and source code in one chunk.
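A minimal REPL session illustrating that workflow (this assumes enlive is on the classpath; `html-resource` is just one public var you would discover this way):

```
;; the require path mirrors net/cgrand/enlive_html.clj inside the JAR
(require '[net.cgrand.enlive-html :as html])

;; clojure.repl ships with helpers for exploring a namespace
(require '[clojure.repl :refer [doc dir]])

(dir net.cgrand.enlive-html)               ; list all public vars
(doc net.cgrand.enlive-html/html-resource) ; docstring for a single var
```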
Change size of Custom Excel Ribbon dropdown [![enter image description here](https://i.stack.imgur.com/7NR8Q.png)](https://i.stack.imgur.com/7NR8Q.png) I have this dropdown in the ribbon containing all the visible sheets in the workbook. Users can select a sheet in there to jump to it. It's important because there are a ton of sheets in this workbook. Unfortunately when the name of the sheet is long, it doesn't show completely. **I'd like to make it wider.** I used the CustomUI Editor for Microsoft Office to create it using my not-very-fluent XML skills. Here is part of the code: ``` <customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui" onLoad="InitS3Ribbon"> <ribbon> <tabs> <tab id="s3Tab" label="S3 Menu"> <group id="grGeneral" label="General"> <dropDown id="navigation" label="Navigation" getItemCount="GetNavigateItemCount" getItemLabel="GetNavigateLabel" onAction="MenuNavigate" getSelectedItemIndex="SetNavigateIndex" showLabel="true" /> <button id="bShowHideSheet" imageMso="PivotPlusMinusButtonsShowHide" label="Show/Hide sheets" size="normal" onAction="MenuShowHideSheets" /> <button id="bPreviousPage" imageMso="LeftArrow2" label="Previous sheet" size="large" onAction="MenuPreviousSheet" /> <button id="bNextPage" imageMso="RightArrow2" label="Next sheet" size="large" onAction="MenuNextSheet" /> </group> ``` I found [this resource](https://blogs.msdn.microsoft.com/vsto/2008/12/19/setting-the-width-of-a-drop-down-combo-box-or-edit-box-in-the-ribbon-designer-norm-estabrook/) saying that it can be changed with the [SizeString](https://msdn.microsoft.com/en-us/library/microsoft.office.tools.ribbon.ribbondropdown.sizestring.aspx) property but I'm not even sure how or where to include that in my code. Looks to me like it's supposed to be in the VBA section? I'm not sure I understand and I'd like guidance. I'm not sure whether to edit XML or VBA right now and how.
The official XML spec can be found here: <https://msdn.microsoft.com/en-us/library/cc313070(v=office.12).aspx> It looks like sizeString can be used directly as an attribute to your dropdown XML tag. So something like: ``` <dropDown id="navigation" label="Navigation" sizeString="MY_MAX_LENGTH_STRING" getItemCount="GetNavigateItemCount" getItemLabel="GetNavigateLabel" onAction="MenuNavigate" getSelectedItemIndex="SetNavigateIndex" showLabel="true" /> ``` You'll just have to know what the longest string you'll encounter will be, then put that in as `MY_MAX_LENGTH_STRING`. Considering you're using the dropdown to hold sheetnames, which are capped at 31 characters, you could probably use that length as a starting point.
sum same column across multiple files using awk? I want to add the 3rd column of these files such that the new file will have the same 2nd col and the sum of the 3rd cols. I tried something like this:

```
$ cat freqdat044.dat | awk '{n=$3; getline <"freqdat046.dat";print $2" " n+$3}' > freqtrial1.dat
```

The file names:

```
freqdat044.dat
freqdat045.dat
freqdat046.dat
freqdat047.dat
freqdat049.dat
freqdat050.dat
```

The output file should contain only `$2` and a new column formed from the summation of the 3rd columns.
``` awk '{x[$2] += $3} END {for(y in x) print y,x[y]}' freqdat044.dat freqdat045.dat freqdat046.dat freqdat047.dat freqdat049.dat freqdat050.dat ``` This does not necessarily print lines as they appear in the first file. If you want to preserve that sorting, then you have to save that ordering somewhere: ``` awk 'FNR==NR {keys[FNR]=$2; cnt=FNR} {x[$2] += $3} END {for(i=1; i<=cnt; ++i) print keys[i],x[keys[i]]}' freqdat044.dat freqdat045.dat freqdat046.dat freqdat047.dat freqdat049.dat freqdat050.dat ```
How to extend FastAPI docs with another service's Swagger docs? I decided to make a micro-services gateway in Python's FastAPI framework. My authorization service is written in Django, and its Swagger docs are already generated by the `drf-yasg` package. I was wondering if there is a way to somehow import the auth service's schema into the gateway. I can serve the schema in `json` format via http and access it from the gateway. The question is how to integrate FastAPI's docs with a raw Swagger schema file.
According to [docs](https://fastapi.tiangolo.com/advanced/extending-openapi/#modify-the-openapi-schema) you can modify the openAPI json. Example: ``` from fastapi import FastAPI from fastapi.openapi.utils import get_openapi app = FastAPI() @app.get("/items/") async def read_items(): return [{"name": "Foo"}] def custom_openapi(): if app.openapi_schema: return app.openapi_schema openapi_schema = get_openapi( title="Custom title", version="2.5.0", description="This is a very custom OpenAPI schema", routes=app.routes, ) openapi_schema["paths"]["/api/auth"] = { "post": { "requestBody": {"content": {"application/json": {}}, "required": True}, "tags": ["Auth"] } } app.openapi_schema = openapi_schema return app.openapi_schema app.openapi = custom_openapi ``` Result: [![resulting swagger ui](https://i.stack.imgur.com/5MITs.png)](https://i.stack.imgur.com/5MITs.png)
Build an input dialog box? I understand that there is no default input dialog in Silverlight for Windows Phone 7, but I need one for my project. I want it to have the same Metro look as the default MessageBox class. What's the easiest way to do this? Can I extend the MessageBox class and add some kind of text field to it? Or should I perhaps use a popup or child window? Please help me out on this one, guys :) Stack Overflow has been a great asset and has helped me a lot when I get stuck in my projects!
You could use [InputPrompt](http://coding4fun.codeplex.com/wikipage?title=InputPrompt) from the [Coding4Fun Toolkit](http://coding4fun.codeplex.com/):

```
InputPrompt prompt = new InputPrompt();
prompt.Title = "Here Is A Title";
prompt.Message = "Specify a unique message:";

// subscribe before showing so the completion isn't missed
prompt.Completed += (pResult, sResult) =>
{
};

prompt.Show();
```

Or you could use the `CustomMessageBox` from [WPToolkit](http://phone.codeplex.com/):

```
TextBox textBox = new TextBox();

CustomMessageBox box = new CustomMessageBox()
{
    Caption = "Your Caption Here",
    Message = "Enter a unique message",
    LeftButtonContent = "ok",
    RightButtonContent = "cancel",
    Content = textBox
};

box.Dismissed += (s, boxEventArgs) =>
{
    // read textBox.Text here; boxEventArgs tells you which button was tapped
};

box.Show();
```

Both are great options and at the end of the day it will be a matter of preference as to which one to use for your specific case.
How to remove whitespace from right end of NSString? This removes white space from both ends of a string: ``` NSString *newString = [oldString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; ``` How do I remove white space from just the right end of a string? UPDATE: The code from the [accepted answer](https://stackoverflow.com/questions/5689288/how-to-remove-whitespace-from-right-end-of-nsstring/5691567#5691567) is now part of [SSToolkit](https://github.com/samsoffes/sstoolkit/blob/master/SSToolkit/NSString+SSToolkitAdditions.m#L288-318). Yay!
Building on the answers by @Regexident & @Max, I came up with the following methods:

```
@implementation NSString (SSToolkitAdditions)

#pragma mark Trimming Methods

- (NSString *)stringByTrimmingLeadingCharactersInSet:(NSCharacterSet *)characterSet {
    NSRange rangeOfFirstWantedCharacter = [self rangeOfCharacterFromSet:[characterSet invertedSet]];
    if (rangeOfFirstWantedCharacter.location == NSNotFound) {
        return @"";
    }
    return [self substringFromIndex:rangeOfFirstWantedCharacter.location];
}

- (NSString *)stringByTrimmingLeadingWhitespaceAndNewlineCharacters {
    return [self stringByTrimmingLeadingCharactersInSet:
            [NSCharacterSet whitespaceAndNewlineCharacterSet]];
}

- (NSString *)stringByTrimmingTrailingCharactersInSet:(NSCharacterSet *)characterSet {
    NSRange rangeOfLastWantedCharacter = [self rangeOfCharacterFromSet:[characterSet invertedSet]
                                                               options:NSBackwardsSearch];
    if (rangeOfLastWantedCharacter.location == NSNotFound) {
        return @"";
    }
    return [self substringToIndex:rangeOfLastWantedCharacter.location+1]; // non-inclusive
}

- (NSString *)stringByTrimmingTrailingWhitespaceAndNewlineCharacters {
    return [self stringByTrimmingTrailingCharactersInSet:
            [NSCharacterSet whitespaceAndNewlineCharacterSet]];
}

@end
```

And here are the GHUnit tests, which all pass of course (note that the trailing-trim test now actually calls the trailing methods; the original copy had two leftover calls to the leading methods):

```
@interface StringCategoryTest : GHTestCase
@end

@implementation StringCategoryTest

- (void)testStringByTrimmingLeadingCharactersInSet {
    NSCharacterSet *letterCharSet = [NSCharacterSet letterCharacterSet];
    GHAssertEqualObjects([@"zip90210zip" stringByTrimmingLeadingCharactersInSet:letterCharSet], @"90210zip", nil);
}

- (void)testStringByTrimmingLeadingWhitespaceAndNewlineCharacters {
    GHAssertEqualObjects([@"" stringByTrimmingLeadingWhitespaceAndNewlineCharacters], @"", nil);
    GHAssertEqualObjects([@"\n \n " stringByTrimmingLeadingWhitespaceAndNewlineCharacters], @"", nil);
    GHAssertEqualObjects([@"\n hello \n" stringByTrimmingLeadingWhitespaceAndNewlineCharacters], @"hello \n", nil);
}

- (void)testStringByTrimmingTrailingCharactersInSet {
    NSCharacterSet *letterCharSet = [NSCharacterSet letterCharacterSet];
    GHAssertEqualObjects([@"zip90210zip" stringByTrimmingTrailingCharactersInSet:letterCharSet], @"zip90210", nil);
}

- (void)testStringByTrimmingTrailingWhitespaceAndNewlineCharacters {
    GHAssertEqualObjects([@"" stringByTrimmingTrailingWhitespaceAndNewlineCharacters], @"", nil);
    GHAssertEqualObjects([@"\n \n " stringByTrimmingTrailingWhitespaceAndNewlineCharacters], @"", nil);
    GHAssertEqualObjects([@"\n hello \n" stringByTrimmingTrailingWhitespaceAndNewlineCharacters], @"\n hello", nil);
}

@end
```

I submitted a GitHub pull request to [SSToolkit](http://sstoolk.it/) with these methods added.
How to map and group array of objects I'm building an application with Node.js, Express, Postgres and Sequelize. I get a response that looks like this: ``` [ { "id": 101, "type": 0, "bookings": [ { "date": "2019-04-15T02:00:00.000Z" } ] }, { "id": 102, "type": 4, "bookings": [ { "date": "2019-04-17T02:00:00.000Z" } ] }, { "id": 103, "type": 0, "bookings": [ { "date": "2019-04-15T02:00:00.000Z" } ] }, { "id": 104, "type": 0, "bookings": [ { "date": "2019-04-17T02:00:00.000Z" } ] } ] ``` I want to group all the events that happen on the same date. I tried ``` _.forEach(response, function(value) { _.groupBy(value, value.bookings[0].date) }) ``` but it doesn't work. How can I map and group an array? Eventually I want to have an object (or array) that looks something like this: ``` { 2019-04-15: [ { id: 101, type: 0 }, { id: 103, type: 0} ], 2019-04-17: [ { id: 102, type: 4 }, { id: 104, type: 0} ] } ```
You can use [**reduce**](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce): each pass destructures `bookings` away from the rest of the object, builds a date-only key from the first booking's date, and pushes the remainder onto the array stored under that key.

```
let data = [{"id": 101,"type": 0,"bookings": [{"date": "2019-04-15T02:00:00.000Z"}]},{"id": 102,"type": 4,"bookings": [{"date": "2019-04-17T02:00:00.000Z"}]},{"id": 103,"type": 0,"bookings": [{"date": "2019-04-15T02:00:00.000Z"}]},{"id": 104,"type": 0,"bookings": [{"date": "2019-04-17T02:00:00.000Z"}]}]

let op = data.reduce((op,{bookings,...rest}) => {
  let key = bookings[0].date.split('T',1)[0]
  op[key] = op[key] || []
  op[key].push(rest)
  return op
},{})

console.log(op)
```
How to list all currently targeted hosts in an Ansible play I am running an Ansible play and would like to list all the hosts targeted by it. The Ansible docs [mention that this is possible](http://docs.ansible.com/playbooks_delegation.html#run-once), but their method doesn't seem to work with a complex targeted group (targeting like `hosts: web_servers:&data_center_primary`). I'm sure this is doable, but can't seem to find any further documentation on it. Is there a var with all the currently targeted hosts?
You are looking for the `play_hosts` variable:

```
---
- hosts: all

  tasks:
  - name: Create a group of all hosts by app_type
    group_by: key={{app_type}}

  - debug: msg="groups={{groups}}"
    run_once: true

- hosts: web:&some_other_group
  tasks:
  - debug: msg="play_hosts={{play_hosts}}"
    run_once: true
```

would result in

```
TASK: [Create a group of all hosts by app_type] *******************************
changed: [web1] => {"changed": true, "groups": {"web": ["web1", "web2"], "load_balancer": ["web3"]}}

TASK: [debug msg="play_hosts={{play_hosts}}"] *********************************
ok: [web1] => {
    "msg": "play_hosts=['web1']"
}
```

inventory:

```
[proxy]
web1 app_type=web
web2 app_type=web
web3 app_type=load_balancer

[some_other_group]
web1
web3
```
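Note that newer Ansible releases deprecate `play_hosts` in favor of `ansible_play_hosts`; a sketch of the same idea on a recent version (the group names are just placeholders):

```
- hosts: web_servers:&data_center_primary
  tasks:
  - debug: msg="targets={{ ansible_play_hosts }}"
    run_once: true
```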
Difference between re.findall() and re.finditer() when using groups in regex? Consider the following string ``` text2 = ''' Mr. Schafer Mr Smith Ms Davis Mrs. Robinson Mr. T ''' ``` I want regex to match the complete name, as in 'Mr. Schafer' for example Using finditer(): ``` matches = re.finditer(r'(Mr|Ms|Mrs)\.?\s[A-Z]\w*', text2) for match in matches: print(match) ``` Results: ``` <_sre.SRE_Match object; span=(1, 12), match='Mr. Schafer'> <_sre.SRE_Match object; span=(13, 21), match='Mr Smith'> <_sre.SRE_Match object; span=(22, 30), match='Ms Davis'> <_sre.SRE_Match object; span=(31, 44), match='Mrs. Robinson'> <_sre.SRE_Match object; span=(45, 50), match='Mr. T'> ``` finditer() gives me the results I want, but not in a list. But when I use findall(): ``` re.findall(r'(Mr|Ms|Mrs)\.?\s[A-Z]\w*', text2) ``` Results: ``` ['Mr', 'Mr', 'Ms', 'Mrs', 'Mr'] ``` **Why is this? How can I get the result I want using findall()** **I want this result:** ``` ['Mr. Schafer', 'Mr Smith', 'Ms Davis', 'Mrs. Robinson', 'Mr. T'] ```
The list returned by `re.findall` contains:

- the text of each match, if the regex has no captures
- the text of the capture in each match, if the regex has exactly one capture
- a tuple of substrings corresponding to each capture, if the regex has more than one capture.

A capture is a part of the regular expression surrounded by parentheses, unless you use `(?:...)`; the `?:` in this context tells Python's regex library to not consider the parentheses as defining a capture. (It's still used for grouping of course.)

So the simplest (and probably fastest) solution is to make sure the regex has no captures, by using `(?:...)` to surround the title rather than just `(...)`:

```
>>> re.findall(r'(?:Mr|Ms|Mrs)\.?\s[A-Z]\w*', text2)
['Mr. Schafer', 'Mr Smith', 'Ms Davis', 'Mrs. Robinson', 'Mr. T']
```

You could also explicitly capture the complete name:

```
>>> re.findall(r'((?:Mr|Ms|Mrs)\.?\s[A-Z]\w*)', text2)
['Mr. Schafer', 'Mr Smith', 'Ms Davis', 'Mrs. Robinson', 'Mr. T']
```

There's not much point doing that in this case, but the "one capture" form can be useful if you want part of the pattern to not show up in the output.

Finally, you might want both the honorific and the surname in a tuple:

```
>>> re.findall(r'(?:(Mr|Ms|Mrs)\.?\s([A-Z]\w*))', text2)
[('Mr', 'Schafer'), ('Mr', 'Smith'), ('Ms', 'Davis'), ('Mrs', 'Robinson'), ('Mr', 'T')]
```
Are JavaScript template literals guaranteed to call toString()? ``` const num = 42 const str = `My number is ${num}` ``` In this code what guarantee do I have about the conversion of `num` to a `string` ? Is it guaranteed to just call its `toString()` method or could the conversion be done in another way ?
Untagged templates use the ECMAScript `ToString()` abstract operation. The logic of template literal evaluation is spread over several sections, which makes it difficult to follow, so I'll just post a link to it: <https://tc39.es/ecma262/#sec-template-literals-runtime-semantics-evaluation>

[`ToString(argument)`](https://tc39.es/ecma262/#sec-tostring) uses a table instead of algorithmic steps, so I'll write out some pseudocode here:

```
switch (Type(argument)) {
  case 'Undefined':
    return 'undefined';
  case 'Null':
    return 'null';
  case 'Boolean':
    return argument ? 'true' : 'false';
  case 'Number':
    return Number::toString(argument);
  case 'String':
    return argument;
  case 'Symbol':
    throw new TypeError();
  case 'BigInt':
    return BigInt::toString(argument);
  case 'Object':
    return ToString(ToPrimitive(argument, 'string'));
}
```

As you can see, no js execution happens at all for primitive values; the engine internally creates a string representation. For objects, we go into the `ToPrimitive()` algorithm.

[`ToPrimitive(input, PreferredType)`](https://tc39.es/ecma262/#sec-toprimitive) will try to get the `Symbol.toPrimitive` method from `input`, and if it's present, call it with the given `PreferredType` hint. If `input` does not have a `Symbol.toPrimitive` property, it falls back to `OrdinaryToPrimitive`.

[`OrdinaryToPrimitive(O, hint)`](https://tc39.es/ecma262/#sec-ordinarytoprimitive) will try to call the `toString` and `valueOf` methods. If `hint` is `'string'`, it will try to call the `toString` method first; otherwise it will try the `valueOf` method first. If either of those methods is present and doesn't return an object, its return value will be used. If neither is present, or they both return objects, a TypeError will be thrown.

So to answer your original question, converting `42` will not call any other methods. The engine will internally create a string representation (`'42'`), and use that.
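A quick sketch of those rules in action (the object and its methods here are made up for illustration):

```
const obj = {
  toString() { return 'from toString'; },
  valueOf() { return 'from valueOf'; },
};

// Untagged templates go through ToString(), whose 'string' hint prefers toString():
console.log(`value: ${obj}`); // "value: from toString"

// Primitives never run user code; the engine builds the string internally:
console.log(`num: ${42}`); // "num: 42"

// Symbols are the one primitive that throws:
// `sym: ${Symbol('x')}` // TypeError: Cannot convert a Symbol value to a string
```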
Visualise distances between texts I'm working on a research project for school. I've written some text mining software that analyzes legal texts in a collection and spits out a score that indicates how similar they are. I ran the program to compare each text with every other text, and I have data like this (although with many more points): ``` codeofhammurabi.txt crete.txt 0.570737 codeofhammurabi.txt iraqi.txt 1.13475 codeofhammurabi.txt magnacarta.txt 0.945746 codeofhammurabi.txt us.txt 1.25546 crete.txt iraqi.txt 0.329545 crete.txt magnacarta.txt 0.589786 crete.txt us.txt 0.491903 iraqi.txt magnacarta.txt 0.834488 iraqi.txt us.txt 1.37718 magnacarta.txt us.txt 1.09582 ``` Now I need to plot them on a graph. I can easily invert the scores so that a small value now indicates texts that are similar and a large value indicates texts that are dissimilar: the value can be the distance between points on a graph representing the texts. ``` codeofhammurabi.txt crete.txt 1.75212 codeofhammurabi.txt iraqi.txt 0.8812 codeofhammurabi.txt magnacarta.txt 1.0573 codeofhammurabi.txt us.txt 0.7965 crete.txt iraqi.txt 3.0344 crete.txt magnacarta.txt 1.6955 crete.txt us.txt 2.0329 iraqi.txt magnacarta.txt 1.1983 iraqi.txt us.txt 0.7261 magnacarta.txt us.txt 0.9125 ``` SHORT VERSION: Those values directly above are distances between points on a scatter plot (1.75212 is the distance between the codeofhammurabi point and the crete point). I can imagine a big system of equations with circles representing the distances between points. What's the best way to make this graph? I have MATLAB, R, Excel, and access to pretty much any software I might need. If you can even point me in a direction, I'll be infinitely grateful.
Your data are really distances (of some form) in the multivariate space spanned by the corpus of words contained in the documents. Dissimilarity data such as these are often ordinated to provide the best *k*-d mapping of the dissimilarities. Principal coordinates analysis and non-metric multidimensional scaling are two such methods. I would suggest you plot the results of applying one or the other of these methods to your data. I provide examples of both below. First, load in the data you supplied (without labels at this stage) ``` con <- textConnection("1.75212 0.8812 1.0573 0.7965 3.0344 1.6955 2.0329 1.1983 0.7261 0.9125 ") vec <- scan(con) close(con) ``` What you effectively have is the following distance matrix: ``` mat <- matrix(ncol = 5, nrow = 5) mat[lower.tri(mat)] <- vec colnames(mat) <- rownames(mat) <- c("codeofhammurabi","crete","iraqi","magnacarta","us") > mat codeofhammurabi crete iraqi magnacarta us codeofhammurabi NA NA NA NA NA crete 1.75212 NA NA NA NA iraqi 0.88120 3.0344 NA NA NA magnacarta 1.05730 1.6955 1.1983 NA NA us 0.79650 2.0329 0.7261 0.9125 NA ``` R, in general, needs a dissimilarity object of class `"dist"`. We could use `as.dist(mat)` now to get such an object, or we could skip creating `mat` and go straight to the `"dist"` object like this: ``` class(vec) <- "dist" attr(vec, "Labels") <- c("codeofhammurabi","crete","iraqi","magnacarta","us") attr(vec, "Size") <- 5 attr(vec, "Diag") <- FALSE attr(vec, "Upper") <- FALSE > vec codeofhammurabi crete iraqi magnacarta crete 1.75212 iraqi 0.88120 3.03440 magnacarta 1.05730 1.69550 1.19830 us 0.79650 2.03290 0.72610 0.91250 ``` Now we have an object of the right type we can ordinate it. R has many packages and functions for doing this (see the [Multivariate](http://cran.r-project.org/web/views/Multivariate.html) or [Environmetrics](http://cran.r-project.org/web/views/Environmetrics.html) Task Views on CRAN), but I'll use the **vegan** package as I am somewhat familiar with it... ``` require("vegan") ``` ## Principal coordinates First I illustrate how to do principal coordinates analysis on your data using **vegan**. ``` pco <- capscale(vec ~ 1, add = TRUE) pco > pco Call: capscale(formula = vec ~ 1, add = TRUE) Inertia Rank Total 10.42 Unconstrained 10.42 3 Inertia is squared Unknown distance (euclidified) Eigenvalues for unconstrained axes: MDS1 MDS2 MDS3 7.648 1.672 1.098 Constant added to distances: 0.7667353 ``` The first PCO axis is by far the most important at explaining the between text differences, as exhibited by the Eigenvalues. An ordination plot can now be produced by plotting the Eigenvectors of the PCO, using the `plot` method ``` plot(pco) ``` which produces ![enter image description here](https://i.stack.imgur.com/jpAMR.png) ## Non-metric multidimensional scaling A non-metric multidimensional scaling (nMDS) does not attempt to find a low dimensional representation of the original distances in an Euclidean space. Instead it tries to find a mapping in *k* dimensions that best preserves the **rank** ordering of the distances between observations. There is no closed-form solution to this problem (unlike the PCO applied above) and an iterative algorithm is required to provide a solution. Random starts are advised to assure yourself that the algorithm hasn't converged to a sub-optimal, locally optimal solution. Vegan's `metaMDS` function incorporates these features and more besides. If you want plain old nMDS, then see `isoMDS` in package **MASS**. 
``` set.seed(42) sol <- metaMDS(vec) > sol Call: metaMDS(comm = vec) global Multidimensional Scaling using monoMDS Data: vec Distance: user supplied Dimensions: 2 Stress: 0 Stress type 1, weak ties No convergent solutions - best solution after 20 tries Scaling: centring, PC rotation Species: scores missing ``` With this small data set we can essentially represent the rank ordering of the dissimilarities perfectly (hence the warning, not shown). A plot can be achieved using the `plot` method ``` plot(sol, type = "text", display = "sites") ``` which produces ![enter image description here](https://i.stack.imgur.com/DFD6z.png) In both cases the distance on the plot between samples is the best 2-d approximation of their dissimilarity. In the case of the PCO plot, it is a 2-d approximation of the real dissimilarity (3 dimensions are needed to represent all of the dissimilarities fully), whereas in the nMDS plot, the distance between samples on the plot reflects the rank dissimilarity not the actual dissimilarity between observations. But essentially distances on the plot represent the computed dissimilarities. Texts that are close together are most similar, texts located far apart on the plot are the most dissimilar to one another.
C# changing a value with extension method If I have the following extension methods: ``` internal static class Extensions { public static void Increase(this uint value) { value += 1; } public static void Decrease(this uint value) { if (value > 0) value -= 1; } } ``` Why does this not result in a changing `i` to 1? ``` uint i = 0; i.Increase(); ```
The parameter is being passed by value, that's all. There's nothing special about extension methods on this front. It's equivalent to: ``` Extensions.Increase(i); ``` You'd need the method to have the first parameter passed by reference (with `ref`) for that to have any effect... and that's prohibited for extension methods anyway. So while you could write a method allowing you to call it as: ``` Extensions.Increase(ref i); ``` you won't be able to make that an extension method. An alternative is to make the method *return* a value, at which point you could have: ``` i = i.Increase(); ``` If you're not entirely clear on pass-by-reference vs pass-by-value semantics, you might want to read my [article on the topic](http://pobox.com/~skeet/csharp/parameters.html).
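A minimal sketch of that value-returning style (the names here are just illustrative):

```
internal static class Extensions
{
    // Returns a new value instead of mutating the by-value copy.
    public static uint Increase(this uint value)
    {
        return value + 1;
    }

    public static uint Decrease(this uint value)
    {
        return value > 0 ? value - 1 : value;
    }
}

// usage:
// uint i = 0;
// i = i.Increase(); // i is now 1
```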
What happens under the hood of vector::push\_back memory wise? My question is regarding the effect of `vector::push_back`, I know it adds an element in the end of the vector but what happens underneath the hood? IIRC memory objects are allocated in a sequential manner, so my question is whether `vector::push_back` simply allocates more memory immediately after the vector, and if so what happens if there is not enough free memory in that location? Or perhaps a pointer is added in the "end" to cause the vector to "hop" to the location it continues? Or is it simply reallocated through copying it to another location that has enough space and the old copy gets discarded? Or maybe something else?
If there is enough space already allocated, the object is copy-constructed from the argument in place. When there is not enough memory, the vector will grow its internal data buffer following some kind of geometric progression (each time the new size will be `k*old_size` with `k > 1`[1]) and all objects present in the original buffer will then be *moved* to the new buffer. After the operation completes the old buffer will be released to the system. In the previous sentence *move* is not used in the technical *move-constructor*/ *move-assignment* sense; the objects could be *moved* or *copied* or any equivalent operation. [1] Growing by a factor `k > 1` ensures that the amortized cost of `push_back` is constant. The actual constant varies from one implementation to another (Dinkumware uses 1.5, gcc uses 2). The amortized cost means that even if every so often one `push_back` is highly expensive (`O(N)` on the size of the vector at the time), those cases happen rarely enough that the cost of all operations over the whole set of insertions is linear in the number of insertions, and thus each insertion *averages* a constant cost.
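You can watch this growth happen by printing `capacity()` as elements are pushed; a small sketch (the exact capacities printed depend on your standard library implementation):

```
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    std::size_t last_capacity = v.capacity();

    for (int i = 0; i < 1000; ++i)
    {
        v.push_back(i);
        if (v.capacity() != last_capacity) // a reallocation just happened
        {
            last_capacity = v.capacity();
            std::cout << "size " << v.size()
                      << " -> capacity " << last_capacity << '\n';
        }
    }
}
```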
How curdate() works in comparisons in MySQL I'm trying to retrieve entries in my DB using curdate() and I think there is some underlying logic that I'm unaware of. For instance: ``` SELECT * FROM foo WHERE event='bar' AND created_at >= SUBDATE(CURDATE(), 90) AND created_at <= CURDATE() ``` This call returns with the previous 90 days but it does not have any of the entries from today. As I understand it curdate() is only YYYY-MM-DD or YYYYMMDD and completely ignores the time of day, we save that for curtime() and now(). I suspect that time is being included somewhere in here because when I make a similar call in Rails and pass the date as a DateTime.beginning\_of\_day it works whereas any other way it will not include today up to a certain hour. I've checked out a few sources, including the MySQL reference manual, and haven't come up with any real answers. Is it possible that some element of the current time is being included in the query or is there some other business going on behind closed doors?
If your `created_at` column is a full `datetime` type, then when comparing to a date, it will take the beginning of that date. So, comparing to `2011-01-01` is akin to comparing to `2011-01-01 00:00:00`. That's why you're not getting anything back for `CURDATE()` (unless the `created_at` timestamp is **exactly** midnight on that date). Example: ``` select curdate(); -- 2011-01-31 select cast(curdate() as datetime); -- 2011-01-31 00:00:00 select if('2011-01-31' <= curdate(), 'yep', 'not this time'); -- yep select if('2011-01-31 00:00:01' <= curdate(), 'yep', 'not this time'); -- not this time select if('2011-01-31 01:00:00' <= adddate(curdate(), interval 1 day), 'yep', 'not this time'); -- yep ```
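If the goal is to include all of today's rows, a common fix is to compare against the start of the next day instead of `CURDATE()` itself; a sketch using the query from the question:

```
SELECT *
FROM foo
WHERE event = 'bar'
AND created_at >= SUBDATE(CURDATE(), 90)
AND created_at < ADDDATE(CURDATE(), INTERVAL 1 DAY) -- midnight tomorrow, exclusive
```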
Scala object extends nothing I came across a piece of code in our codebase, which looked invalid to me, but compiled successfully and worked.

```
object Main extends {
  def main(args: Array[String]): Unit = {
    print("Hello World")
  }
}
```

> 
> Hello World
> 
> 

Can someone explain to me what happens here? Does the `Main` object extend an anonymous `class`/`trait` here?
If we decompile the code using `scalac -Xprint:typer`, we see that `Main` extends `AnyRef`:

```
scalac -Xprint:typer Main.scala

[[syntax trees at end of typer]] // Main.scala
package com.yuvalitzchakov {
  object Main extends scala.AnyRef {
    def <init>(): com.yuvalitzchakov.Main.type = {
      Main.super.<init>();
      ()
    };
    def main(args: Array[String]): Unit = scala.Predef.print("Hello World")
  }
}
```

This is also documented in the [Scala specification](https://www.scala-lang.org/files/archive/spec/2.11/05-classes-and-objects.html#object-definitions) under object/class definition:

> 
> An object definition defines a
> single object of a new class. Its most general form is `object m
> extends t`. Here, `m` is the name of the object to be defined, and `t` is a
> template of the form
> 
> 
> `sc with mt1 with … with mtn { stats }` which defines the base classes,
> behavior and initial state of m. **The extends clause `extends sc with
> mt1 with … with mtn` can be omitted, in which case extends scala.AnyRef
> is assumed.**
> 
> 

This syntax is also valid for [*early initializers*](https://stackoverflow.com/questions/4712468/in-scala-what-is-an-early-initializer):

```
abstract class X {
  val name: String
  val size = name.size
}

class Y extends {
  val name = "class Y"
} with X
```
Risk involved in converting basic to dynamic disk in Windows Server 2003 (and R2) How much risk is involved in converting a basic disk into a dynamic disk in Windows Server 2003 R1 and R2? I just expanded my vDisk in VMWare but since the disk is basic it will not let me expand the volume in Windows. Is there potential for data loss, OS corruption, etc? Or is this a relatively safe operation? Edit: This image is backed up nightly.
Stop! You don't need to convert the disk to a "Dynamic Disk" to expand the volume! Just boot a Windows 7 or Windows Server 2008 R2 setup ISO on that VM. Once you've booted the setup media open a command-prompt (Shift-F10) and use the `DISKPART` tool to `EXTEND` the volume. From the `DISKPART` prompt you'd do a `list disk` to list the disks in the machine, a `select disk #` (where # is the ordinal for the disk containing the volume you want to extend), a `list partition` to list the partitions on that disk, and a `select partition #` (where # is the ordinal for the volume you want to extend). After that, enter the command `extend` and the partition will be extended to fill the entire free space on the disk. This is a pretty safe operation (I've never had a problem and I've done it a *lot*) but, even so, you really should have a backup before you proceed with this operation. Better safe than sorry.
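The `DISKPART` session looks roughly like this (the disk and partition ordinals are examples; substitute the ones shown by your own `list` output):

```
DISKPART> list disk
DISKPART> select disk 0
DISKPART> list partition
DISKPART> select partition 1
DISKPART> extend
```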
How to document default None/null in OpenAPI/Swagger using FastAPI? Using a ORM, I want to do a POST request letting some fields with a `null` value, which will be translated in the database for the default value specified there. The problem is that OpenAPI (Swagger) **docs**, ignores the default `None` and still prompts a `UUID` by default. ``` from fastapi import FastAPI from pydantic import BaseModel from typing import Optional from uuid import UUID import uvicorn class Table(BaseModel): # ID: Optional[UUID] # the docs show a example UUID, ok ID: Optional[UUID] = None # the docs still shows a uuid, when it should show a null or valid None value. app = FastAPI() @app.post("/table/", response_model=Table) def create_table(table: Table): # here we call to sqlalchey orm etc. return 'nothing important, the important thing is in the docs' if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000) ``` In the OpenAPI schema example (request body) which is at the **docs** we find: ``` { "ID": "3fa85f64-5717-4562-b3fc-2c963f66afa6" } ``` This is not ok, because I specified that the default value is `None`,so I expected this instead: ``` { "ID": null, # null is the equivalent of None here } ``` Which will pass a `null` to the `ID` and finally will be parsed in the db to the default value (that is a new generated `UUID`).
When you declare `Optional` parameters, users shouldn't have to include those parameters in their request specified with `null` or `None` (in Python), in order to be `None`. The default value of the parameters will be `None`, unless the user specifies some other value when sending the request. Hence, all you have to do is to declare a custom `example` for the Pydantic model using `Config` and `schema_extra`, as described in the [documentation](https://fastapi.tiangolo.com/tutorial/schema-extra-example/) and as shown below. The below example will create an empty (i.e., `{}`) request body in OpenAPI (Swagger UI), which can be successfully submitted (as `ID` is the only attribute of the model and is optional). ``` class Table(BaseModel): ID: Optional[UUID] = None class Config: schema_extra = { "example": { } } @app.post("/table/", response_model=Table) def create_table(table: Table): return table ``` If the `Table` model included some other *required* attributes, you could add `example` values for those, as demonstrated below: ``` class Table(BaseModel): ID: Optional[UUID] = None some_attr: str class Config: schema_extra = { "example": { "some_attr": "Foo" } } ``` If you would like to **keep the auto-generated examples** for the rest of the attributes **except the one for the `ID` attribute**, you could use the below to remove `ID` from the model's properties in the generated schema (inspired by [Schema customization](https://pydantic-docs.helpmanual.io/usage/schema/#schema-customization)): ``` class Table(BaseModel): ID: Optional[UUID] = None some_attr: str some_attr2: float some_attr3: bool class Config: @staticmethod def schema_extra(schema: Dict[str, Any], model: Type['Table']) -> None: del schema.get('properties')['ID'] ``` Also, if you would like to add custom `example` to some of the attributes, you could use `Field()` (as described [here](https://fastapi.tiangolo.com/tutorial/schema-extra-example/#field-additional-arguments)); for example, `some_attr: str = Field(example="Foo")`. Another possible solution would be to modify the generated OpenAPI schema, as described in Solution 3 of [this answer](https://stackoverflow.com/a/71536816/17865804). Though, the above solution is likely more suited to this case. ### Note `ID: Optional[UUID] = None` is the same as `ID: UUID = None`. As previously documented in FastAPI website (see [this answer](https://stackoverflow.com/a/71272615/17865804)): > > The Optional in `Optional[str]` is not used by FastAPI, but will allow > your editor to give you better support and detect errors. > > > Since then, FastAPI has revised their [documentation](https://fastapi.tiangolo.com/it/tutorial/query-params-str-validations/) with the following: > > The Union in `Union[str, None]` will allow your editor to give you > better support and detect errors. > > > Hence, `ID: Union[UUID, None] = None` is the **same** as `ID: Optional[UUID] = None` and `ID: UUID = None`. In **Python 3.10+**, one could also use `ID: UUID| None = None` (see [here](https://fastapi.tiangolo.com/python-types/#using-union-or-optional)). As per [FastAPI documentation](https://fastapi.tiangolo.com/it/tutorial/query-params-str-validations/#use-query-as-the-default-value) (see `Info` section in the link provided): > > Have in mind that the **most important part to make a parameter optional** > is the part: > > > > ``` > = None > > ``` > > or the: > > > > ``` > = Query(default=None) > > ``` > > as it will use that `None` as the default value, and that way make the > parameter not required. 
> > > The `Union[str, None]` part allows your editor to provide better > support, but it is not what tells FastAPI that this parameter is not > required. > > >
Objective C: I'm trying to save an NSDictionary that contains primitives and objects, but an error occurred once I tried adding the custom object So I have this custom object I made called 'Box'; it's throwing the error for that one. Inside of my NSDictionary is...

```
box = [[Box alloc] initWithHeight:20 height:40];

wrap.dict = [NSMutableDictionary dictionaryWithObjectsAndKeys:numberInt, kInt,
             numberShort, kShort,
             numberLong, kLong,
             numberFloat, kFloat,
             numberDouble, kDouble,
             numberBool, kBool,
             nsString, kString,
             nsDate, kDate,
             box, @"keybox", nil];
```

Notice the box object that I created up top and added in the NSDictionary at the very end. Before, when the box wasn't in there, everything worked fine, but when I added the custom object 'Box', it couldn't save anymore.

```
- (void) saveDataToDisk:(NSMutableDictionary *)dictionaryForDisk {
    NSMutableData *data = [NSMutableData data];
    NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc]initForWritingWithMutableData:data];

    //Encode
    [archiver setOutputFormat:NSPropertyListBinaryFormat_v1_0];
    [archiver encodeObject:dictionaryForDisk forKey:@"Dictionary_Key"];
    [archiver finishEncoding];
    [data writeToFile:[self pathForDataFile] atomically:YES];
}

- (NSString *) pathForDataFile {
    NSArray* documentDir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask, YES);
    NSString* path = nil;

    if (documentDir) {
        path = [documentDir objectAtIndex:0];
    }

    return [NSString stringWithFormat:@"%@/%@", path, @"data.bin"];
}
```

This was the error

```
-[Box encodeWithCoder:]: unrecognized selector sent to instance 0x4e30d60
2011-11-11 11:56:43.430 nike[1186:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[Box encodeWithCoder:]: unrecognized selector sent to instance 0x4e30d60'
*** Call stack at first throw:
```

Any help would be greatly appreciated!!!!
The exception tells you that it's trying to send `-encodeWithCoder:` to your `Box` instance. If you look up the [documentation](http://developer.apple.com/library/ios/documentation/Cocoa/Reference/Foundation/Protocols/NSCoding_Protocol/Reference/Reference.html#//apple_ref/doc/uid/20000294-BAJJGGCE), you can see that this belongs to the [`NSCoding`](http://developer.apple.com/library/ios/#documentation/Cocoa/Reference/Foundation/Protocols/NSCoding_Protocol/Reference/Reference.html) protocol. `NSCoding` is used by `NSKeyedArchiver` to encode objects. `NSDictionary`'s implementation of `NSCoding` requires that all keys and objects inside it also conform to `NSCoding`. In your case, all the previous values stored in your dictionary conform to `NSCoding`, but your `Box` instance doesn't. You need to implement `-initWithCoder:` and `-encodeWithCoder:` on your `Box` class and then declare that your class conforms to `NSCoding` by modifying the declaration to look like ``` @interface Box : Superclass <NSCoding> ```
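A minimal sketch of those two methods for `Box`; the question doesn't show `Box`'s actual ivars, so the `width`/`height` properties and keys below are hypothetical:

```
@implementation Box

- (void)encodeWithCoder:(NSCoder *)coder {
    [coder encodeInt:self.width forKey:@"width"];
    [coder encodeInt:self.height forKey:@"height"];
}

- (id)initWithCoder:(NSCoder *)coder {
    self = [super init];
    if (self) {
        self.width = [coder decodeIntForKey:@"width"];
        self.height = [coder decodeIntForKey:@"height"];
    }
    return self;
}

@end
```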
Node.js - Monitoring a database for changes I am using a node.js server to create a 'close to real-time' socket between my web app and a database. Currently I am using MySQL which I am polling every second in node to check if there are any changes to the table (based on a timestamp.) I was wondering if there any specific techniques to doing something like this with MySQL? At the moment, I am just running a SQL query and using setTimeout before the next poll. I know it's more common to use a NoSQL database in instances like this but I'm not really comfortable with the technology and I'd much rather use SQL. Does anyone have any experience or tips for monitoring a SQL database with node?
I wouldn't personally use a polling mechanism for this. I think this is a pretty good use case for a pub/sub MQ as a component on top of the database that allows consumers to subscribe to specific channels for change events on entities that they care about. Ex:

1. Someone requests that a model be changed
2. Model broadcasts change event
3. Queue up change to be persisted in the database
4. Fire off a change set on a specific channel in a message queue for distribution to all interested parties

You can use a very simple in-process pub/sub mechanism for this type of thing using node's EventEmitter (a sketch follows at the end of this answer), and as you need to scale, have durability requirements, or need a cross-language MQ, you can go to a technology like rabbitmq, zeromq, etc.

I've started to implement something very lightweight to do just this in one of my applications: <https://github.com/jmoyers/mettle/blob/master/src/pubsub.coffee>

It boils down to something like:

```
pubsub.sub('users.*', function(updates){
  // Interested party handles updates for user objects
});
```

That way you aren't putting stupid polling pressure on your database. In fact, change distribution is completely independent of writing to your database.

Josh
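As a rough sketch of that in-process approach with node's built-in EventEmitter (the channel name and payload shape are made up):

```
var EventEmitter = require('events').EventEmitter;
var pubsub = new EventEmitter();

// interested party subscribes to a channel for user changes
pubsub.on('users.update', function (changes) {
  // push changes out over the socket to the web app
});

// after a write to MySQL succeeds, broadcast the change set
pubsub.emit('users.update', { id: 42, name: 'new name' });
```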