Mysqli not inserting any data

I just changed from mysql to mysqli, and now it won't insert any data (it inserts empty fields). It worked fine before I changed the connection to mysqli. Any idea why it does that?

```
<?php
$mysqli = new mysqli("localhost", "root", "", "opdracht");

/* check connection */
if ($mysqli->connect_errno) {
    printf("Connect failed: %s\n", $mysqli->connect_error);
    exit();
}

if($_POST['action'] == 'button'){ // if the button was pressed, then insert
    if(filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL) && preg_match('#\.[a-z]{2,6}$#i', $_POST['email'])){
        $voornaam = mysqli_real_escape_string($_POST['voornaam']);
        $achternaam = mysqli_real_escape_string($_POST['achternaam']);
        $email = mysqli_real_escape_string($_POST['email']);
        $telefoonnummer = mysqli_real_escape_string($_POST['telefoonnummer']);

        $query = mysqli_query($mysqli,"insert into `form` (`id`,`voornaam`, `achternaam`, `email`, `telefoonnummer`)
            values ('','".$voornaam."', '".$achternaam."', '".$email."', '".$telefoonnummer."')") or die(mysqli_error($mysqli));

        $subject = 'Een email van '.$voornaam.'';
        $headers = 'MIME-Version: 1.0' ."\r\n";
        $headers .= "Content-type: text/html; charset=utf-8\r\n"; // mail formatting

        $content = '<html><header><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> </header><img src="http://www.tekstbureaudevulpen.nl/wp-content/uploads/2010/11/Webparking_logo.png"/><body>';
        $content .= '<p>Je hebt een mail van:</p>';
        $content .= $voornaam;
        $content .= ' ';
        $content .= $achternaam;
        $content .= '<p>'.$telefoonnummer.'</p>';
        $content .= '</body></html>';

        mail($email, $subject, $content, $headers); // sends the mail
        error_log(E_ALL);
    }
}
?>
```
Because now string escaping also requires the connection handle. All of the following need to be updated:

```
$voornaam = mysqli_real_escape_string($_POST['voornaam']);
$achternaam = mysqli_real_escape_string($_POST['achternaam']);
$email = mysqli_real_escape_string($_POST['email']);
$telefoonnummer = mysqli_real_escape_string($_POST['telefoonnummer']);
```

to this format:

```
$voornaam = mysqli_real_escape_string($mysqli,$_POST['voornaam']);
                                      ^
```

But you shouldn't still be hand-escaping strings for queries even after moving to a new API; why not use prepared statements? They are much safer and come with a lot more benefits.
In Scala, how does one determine the import path from the dependency name?

While adding a dependency to a Play 2.2 application written in Scala, it has occurred to me that I have no idea how the import path is actually defined or where to find it. For example, I added this dependency to my Build.scala file, like so:

```
val appDependencies = Seq(
  "nl.rhinofly" %% "play-s3" % "4.0.0",
  ...
)
```

Looking at this, I would assume that the import would be `nl.rhinofly.play_s3`; and when I look in my cache, that seems to confirm this thinking:

```
/home/immauser/.ivy2/cache/nl.rhinofly/play-s3_2.10
```

However, this import errors on compile with "nl not found":

```
import nl.rhinofly.play_s3._
```

Yet this works:

```
import fly.play.s3._
```

My question is: given just the dependency and the material in the cache, how would one go about determining that the correct import path is `fly.play.s3._`? Where does one look to find this data?
**TL;DR**: you *cannot* determine the import path from the dependency group and name: the two are unrelated. Any correlation between the two is the result of a *convention* for naming Jars, but it isn't enforced in any way, isn't always adhered to, and certainly cannot be assumed.

**Details**:

1. First, this behavior has nothing to do with Play, and isn't even specific to Scala; it's inherited from the way Java class names work and the way Maven repositories name jar dependencies.
2. The *"import path"* is actually the "fully-qualified" class name, which is simply the name of the class preceded by the folders that contain it *within the jar* (any of the jars in your classpath), separated by dots. The *jar name* has nothing to do with this.
3. Any jar can contain any class, and two different jars can contain instances of two classes with the same fully-qualified name (not a desirable situation).

What you *can* do is inspect the contents of a given jar, either within any modern IDE (e.g. IntelliJ, Eclipse) or using the command line `jar tf`, e.g.:

```
$ jar tf ~/.ivy2/cache/org.mockito/mockito-core/jars/mockito-core-1.9.5.jar
```

The result would be a list of all files in the jar, for example:

```
...
org/mockito/Answers.class
org/mockito/Answers.java
org/mockito/ArgumentCaptor.class
org/mockito/ArgumentCaptor.java
org/mockito/ArgumentMatcher.class
org/mockito/ArgumentMatcher.java
...
```

The `*.class` files can technically be imported (although some of these classes might be private or package-protected; only the public ones can be used).
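If the `jar` tool isn't available, the same listing can be obtained with any zip reader, since a jar is just a zip archive whose internal folder structure *is* the package structure. A minimal sketch in Python (the function name is my own):

```python
import zipfile

def list_importable_classes(jar_path):
    """Return the fully-qualified names of the .class files in a jar.

    A jar is an ordinary zip archive, so the folder layout inside it
    is exactly the package path you import from.
    """
    with zipfile.ZipFile(jar_path) as jar:
        return [
            name[:-len(".class")].replace("/", ".")
            for name in jar.namelist()
            if name.endswith(".class") and "$" not in name  # skip inner classes
        ]
```

Run over the mockito jar from the example above, this would yield names like `org.mockito.ArgumentCaptor`, i.e. exactly the paths you can import.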
How to find the total number of increasing sub-sequences of a certain length with a Binary Indexed Tree (BIT)

How can I find the total number of increasing sub-sequences of a certain length with a Binary Indexed Tree (BIT)? This is actually a problem from the [Spoj Online Judge](http://www.spoj.com/problems/INCSEQ/).

**Example**

Suppose I have the array `1,2,2,10`. The increasing sub-sequences of length 3 are the ones at index positions `1,2,4` and `1,3,4` (both with values `1,2,10`). So, the answer is `2`.
Let:

```
dp[i, j] = number of increasing subsequences of length j that end at i
```

An easy solution is in `O(n^2 * k)`:

```
for i = 1 to n do
  dp[i, 1] = 1

for i = 1 to n do
  for j = 1 to i - 1 do
    if array[i] > array[j]
      for p = 2 to k do
        dp[i, p] += dp[j, p - 1]
```

The answer is `dp[1, k] + dp[2, k] + ... + dp[n, k]`.

Now, this works, but it is inefficient for your given constraints, since `n` can go up to `10000`. `k` is small enough, so we should try to find a way to get rid of an `n`.

Let's try another approach. We also have `S`, the upper bound on the values in our array. Let's try to find an algorithm in relation to this.

```
dp[i, j] = same as before
num[i] = how many subsequences that end with i (element, not index this time)
         have a certain length

for i = 1 to n do
  dp[i, 1] = 1

for p = 2 to k do // for each length this time
  num = {0}
  for i = 2 to n do // note: dp[1, p > 1] = 0
    // how many that end with the previous element
    // have length p - 1
    num[ array[i - 1] ] += dp[i - 1, p - 1]

    // append the current element to all those smaller than it
    // that end an increasing subsequence of length p - 1,
    // creating an increasing subsequence of length p
    for j = 1 to array[i] - 1 do
      dp[i, p] += num[j]
```

This has complexity `O(n * k * S)`, but we can reduce it to `O(n * k * log S)` quite easily. All we need is a data structure that lets us efficiently sum and update elements in a range: [segment trees](http://en.wikipedia.org/wiki/Segment_tree), [binary indexed trees](http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees) etc.
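To make that last step concrete, here is a sketch of the `O(n * k * log S)` version in Python, with one binary indexed tree per subsequence length playing the role of the `num` array (names are my own; the SPOJ problem additionally asks for the count modulo 5000000, which is omitted here):

```python
def count_increasing_subsequences(arr, k):
    """Count strictly increasing subsequences of length k.

    trees[p] is a Fenwick (binary indexed) tree indexed by value,
    storing the number of increasing subsequences of length p seen
    so far that end at each value.
    """
    S = max(arr)                                   # upper bound on values
    trees = [[0] * (S + 1) for _ in range(k + 1)]  # trees[p] for length p

    def update(tree, i, delta):                    # point update at value i
        while i <= S:
            tree[i] += delta
            i += i & (-i)

    def query(tree, i):                            # prefix sum over [1, i]
        total = 0
        while i > 0:
            total += tree[i]
            i -= i & (-i)
        return total

    answer = 0
    for x in arr:
        for p in range(k, 0, -1):  # longer lengths first
            # Subsequences of length p ending at this element: extend
            # every length p-1 subsequence ending at a strictly smaller value.
            count = 1 if p == 1 else query(trees[p - 1], x - 1)
            if p == k:
                answer += count
            elif count:
                update(trees[p], x, count)
    return answer
```

For the example array, `count_increasing_subsequences([1, 2, 2, 10], 3)` returns `2`, matching the expected answer.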
Gradle task dependency order

I have a problem with a custom Gradle task: I would like to copy my Android jar library and rename it after a 'clean build' has been executed. Here is how I defined it:

```
task('CreateJar', type: Copy, dependsOn: [':mylibmodule:clean', ':mylibmodule:build']){
    doLast {
        from('build/intermediates/bundles/release/')
        into('libs')
        include('classes.jar')
        rename('classes.jar', 'MyLib.jar')
    }
}
```

The problem is that in the Gradle log, the 'clean' is done after the 'build' task, so the lib is never copied to the destination folder:

```
...
:mylibmodule:testReleaseUnitTest
:mylibmodule:test
:mylibmodule:check
:mylibmodule:build
:mylibmodule:clean
:mylibmodule:CreateJar NO-SOURCE
```

I have also tried to change the order of tasks in the `dependsOn: []`, but it does not change anything... Does anyone have any idea where my mistake is? Thanks in advance.
The `dependsOn` list does not impose any ordering guarantees. Usually what is listed first is executed first if there are no other relations that actually do impose ordering guarantees. (One example would be if `clean` depended on `build`: then it wouldn't matter how you define it in that `dependsOn` attribute, because `build` would always run before `clean`. That this is not the case here is clear to me; I mention it in parentheses just to clarify what I mean.)

Why `build` ends up running before `clean` here, I cannot say without seeing the complete build script; from what you posted it is not determinable.

Maybe what you are after is `clean.shouldRunAfter build` or `clean.mustRunAfter build`, which define an ordering constraint without adding a dependency. So you can run each task alone, but if both are run, then their order is defined as you specified it. The difference between the two is only relevant when parallelizing task execution: with "should run after" the tasks could still run in parallel, iirc, while "must run after" does not allow that.
the use of MPI\_Init()

I encountered a question about the use of **MPI\_Init()**. I want to initialize the random number array "**randv**" only on the root processor with the code in the context below. To see if my goal is fulfilled, I have the program print out the array "**randv**" by placing a do loop immediately after the line "**call RANDOM\_NUMBER(randv)**". However, what is shown on the outcome screen is the random number array repeated 8 times (given the number of processors is 8). My question is: why are the processors other than the root one initialized before **call MPI\_Init()**? If all the processors are awake and have the same random number array before invoking **MPI\_Init**, why bother to place **call MPI\_Init()** for initialization at all? Thanks.

Lee

Here is the example I use:

```
program main
  include 'mpif.h'
  integer :: i
  integer :: ierr
  integer :: irank
  integer :: nrow, ncol
  real, dimension(:,:), allocatable :: randv
  nrow = 4
  ncol = 2
  allocate(randv(nrow,ncol))
  call RANDOM_SEED
  call RANDOM_NUMBER(randv)
  do i = 1, nrow
    write(*,'(2(f5.2,x))') randv(i,:)
  enddo
  call MPI_Init ( ierr )
  allocate(row_list(ncol), col_list(nrow))
  call MPI_Comm_rank ( MPI_COMM_WORLD, irank, ierr )
  if( irank == 0 )then
    do i = 1, nrow
      write(*,'(2(f5.2,x))') randv(i,:)
    enddo
  endif
  call MPI_Finalize ( ierr )
  deallocate( randv )
end program
```
I think you misunderstand how MPI works. The program you wrote is executed *by every process*. `MPI_Init` initializes the MPI environment so that those processes can interact. After initialization every process is uniquely identified by its *rank*. You have to make sure that, based on these ranks, each process works on different portions of your data, or performs different tasks.

Typically, you should run `MPI_Init` before anything else in your program. Using `MPI_Comm_rank` you can obtain the ID of the current process (its *rank*). The first process always has the rank `0`. Therefore, if you want to run parts of the code on the "master" process only, you can test for `irank == 0`:

```
program main
  include 'mpif.h'
  integer :: i
  integer :: ierr
  integer :: irank
  integer :: nrow, ncol
  real, dimension(:,:), allocatable :: randv

  ! Initialize MPI
  call MPI_Init ( ierr )
  ! Get process ID
  call MPI_Comm_rank ( MPI_COMM_WORLD, irank, ierr )

  ! Executed on all processes
  nrow = 4
  ncol = 2
  allocate(randv(nrow,ncol))

  ! Only executed on the master process
  if ( irank == 0 ) then
    call RANDOM_SEED
    call RANDOM_NUMBER(randv)
    do i = 1, nrow
      write(*,'(2(f5.2,x))') randv(i,:)
    enddo
  endif

  ! Executed on all processes
  allocate(row_list(ncol), col_list(nrow))

  ! Only executed on the master process
  if ( irank == 0 ) then
    do i = 1, nrow
      write(*,'(2(f5.2,x))') randv(i,:)
    enddo
  endif

  deallocate( randv )

  ! Finalize MPI, should always be executed last
  call MPI_Finalize ( ierr )
end program
```
efficient serverside autocomplete

First of all, I know:

> Premature optimization is the root of all evil

But I think a badly implemented autocomplete can really blow up your site. I would like to know if there are any libraries out there that can do autocomplete efficiently (server-side), preferably fitting into RAM (for best performance). So no browser-side JavaScript autocomplete (YUI/jQuery/Dojo); I think there are enough topics about that on Stack Overflow. But I could not find a good thread about this on Stack Overflow (maybe I did not look hard enough).

For example, autocompleting names:

```
names:[alfred, miathe, .., ..]
```

What I can think of:

- Simple SQL, for example: `SELECT name FROM users WHERE name LIKE 'al%'`. I think this implementation will blow up with a lot of simultaneous users or a large data set, but maybe I am wrong, so numbers (of what could be handled) would be cool.
- Using something like Solr terms, for example: `http://localhost:8983/solr/terms?terms.fl=name&terms.sort=index&terms.prefix=al&wt=json&omitHeader=true`. I don't know the performance of this, so users with big sites please tell me.
- Maybe something like an in-memory [Redis trie](https://stackoverflow.com/questions/1958005/redis-autocomplete/), which I also haven't tested the performance of.
- I also read in this thread about how to implement this in [Java](https://stackoverflow.com/questions/971052/autocomplete-server-side-implementation) (Lucene and a library created by shilad).

What I would like to hear are implementations used by real sites and numbers for how well they handle load, preferably with:

- A link to the implementation or code.
- Numbers to which you know it can scale.
- It would be nice if it could be accessed over HTTP or sockets.

Many thanks, Alfred
## Optimising for Auto-complete

Unfortunately, the resolution of this issue will depend heavily on the data you are hoping to query.

LIKE queries will not put too much strain on your database, as long as you spend time using `EXPLAIN` or the profiler to show you how the query optimiser plans to perform your query. Some basics to keep in mind:

- Indexes: Ensure that you have indexes set up. (Yes, in many cases LIKE does use the indexes. There is an excellent article on the topic at myitforum: [SQL Performance - Indexes and the LIKE clause](http://myitforum.com/cs2/blogs/jnelson/archive/2007/11/16/108354.aspx).)
- Joins: Ensure your JOINs are in place and are optimized by the query planner. SQL Server Profiler can help with this. Look out for full index or full table scans.

## Auto-complete sub-sets

Auto-complete queries are a special case, in that they usually work on ever-decreasing subsets:

- `'name' LIKE 'a%'` (may return 10000 records)
- `'name' LIKE 'al%'` (may return 500 records)
- `'name' LIKE 'ala%'` (may return 75 records)
- `'name' LIKE 'alan%'` (may return 20 records)

If you return the entire result set for query 1, then there is no need to hit the database again for the following result sets, as they are subsets of your original query. Depending on your data, this may open a further opportunity for optimisation.
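The subset idea in the last paragraph can be sketched in a few lines. Here is an illustrative Python version (the function names and the in-memory stand-in for the database query are my own), where each longer prefix is answered by filtering the cached result of a shorter one instead of re-querying:

```python
def make_autocomplete(all_names):
    """Answer prefix queries, reusing the result of the longest
    previously-answered shorter prefix instead of re-querying."""
    cache = {}  # prefix -> list of matching names

    def complete(prefix):
        # Find the longest already-answered prefix of this prefix.
        base = all_names  # stands in for the initial database query
        for i in range(len(prefix) - 1, 0, -1):
            if prefix[:i] in cache:
                base = cache[prefix[:i]]
                break
        # Filter the (small) cached subset rather than the full set.
        result = [name for name in base if name.startswith(prefix)]
        cache[prefix] = result
        return result

    return complete
```

As the user types "a", "al", "ala", each keystroke only scans the previous keystroke's result set, which is exactly the ever-decreasing-subset property described above.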
How do concatenation and indexing differ for cells and arrays in MATLAB?

I am a little confused about the usage of cells and arrays in MATLAB and would like some clarification on a few points. Here are my observations:

1. An array can dynamically adjust its own memory to allow for a dynamic number of elements, while cells seem to not act in the same way:

   ```
   a=[]; a=[a 1];
   b={}; b={b 1};
   ```

2. Several elements can be retrieved from cells, but it doesn't seem like they can be from arrays:

   ```
   a={'1' '2'};
   figure; plot(...); hold on; plot(...);
   legend(a{1:2});

   b=['1' '2'];
   figure; plot(...); hold on; plot(...);
   legend(b(1:2)); %# b(1:2) is an array, not its elements, so it is wrong with legend.
   ```

Are these correct? What are some other differences in usage between cells and arrays?
[Cell arrays](https://www.mathworks.com/help/matlab/cell-arrays.html) can be a little tricky since you can use the `[]`, `()`, *and* `{}` syntaxes in various ways for [creating](https://www.mathworks.com/help/matlab/matlab_prog/create-a-cell-array.html), [concatenating](https://www.mathworks.com/help/matlab/matlab_prog/combine-cell-arrays.html), and [indexing](https://www.mathworks.com/help/matlab/matlab_prog/access-data-in-a-cell-array.html) them, although they each do different things. Addressing your two points:

1. To grow a cell array, you can use one of the following syntaxes:

   ```
   b = [b {1}];     % Make a cell with 1 in it, and append it to the existing
                    %   cell array b using []
   b = {b{:} 1};    % Get the contents of the cell array as a comma-separated
                    %   list, then regroup them into a cell array along with a
                    %   new value 1
   b{end+1} = 1;    % Append a new cell to the end of b using {}
   b(end+1) = {1};  % Append a new cell to the end of b using ()
   ```

2. When you index a cell array with `()`, it returns a subset of cells in a cell array. When you index a cell array with `{}`, it returns a [comma-separated list](https://www.mathworks.com/help/matlab/matlab_prog/comma-separated-lists.html) of the cell contents. For example:

   ```
   b = {1 2 3 4 5};  % A 1-by-5 cell array
   c = b(2:4);       % A 1-by-3 cell array, equivalent to {2 3 4}
   d = [b{2:4}];     % A 1-by-3 numeric array, equivalent to [2 3 4]
   ```

   For `d`, the `{}` syntax extracts the contents of cells 2, 3, and 4 as a comma-separated list, then uses `[]` to collect these values into a numeric array. Therefore, `b{2:4}` is equivalent to writing `b{2}, b{3}, b{4}`, or `2, 3, 4`.

With respect to your call to [`legend`](https://www.mathworks.com/help/matlab/ref/legend.html), the syntax `legend(a{1:2})` is equivalent to `legend(a{1}, a{2})`, or `legend('1', '2')`. Thus *two* arguments (two separate characters) are passed to `legend`. The syntax `legend(b(1:2))` passes a single argument, which is the 1-by-2 string `'12'`.
Prevent leak after Ctrl+C when using read and malloc in a loop

I have to do a project creating a shell in C. There will be a while loop waiting for characters to read on input. I think it's better to store the characters we read in a `malloc`'d pointer than in a fixed-size buffer (array of characters), because we don't know how many characters will be sent. This way, I can use `realloc` on the pointer to get a larger size if needed.

I've noticed with valgrind that when the program has finished reading characters and is waiting for new ones, if I press Ctrl+C there will be a memory leak. The only solution I've found to prevent this is to free the pointer after each command sent. Is this a good idea, or is there a better way to do this?

For information, I'm reading characters into a buffer `buf` and then concatenating the string onto the pointer `str`. Here is the code:

```
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define READ_SIZE 1024

int main()
{
  char buf[READ_SIZE];
  char *str;
  int ret;
  int size;
  int max;

  str = NULL;
  size = 0;
  max = READ_SIZE;
  write(1, "$> ", 3);
  while((ret = read(0, buf, READ_SIZE)))
    {
      if (str == NULL)
        {
          if ((str = malloc(READ_SIZE + 1)) == NULL)
            return EXIT_FAILURE;
          strncpy(str, buf, ret);
          str[ret] = '\0';
        }
      else
        str = strncat(str, buf, ret);
      if (strncmp(&buf[ret - 1], "\n", 1) == 0)
        {
          str[size + ret - 1] = '\0';
          printf("%s\n", str);
          size = 0;
          max = READ_SIZE;
          free(str);
          str = NULL;
          write(1, "$> ", 3);
        }
      else if (size + ret == max)
        {
          max *= 2;
          size += READ_SIZE;
          if ((str = realloc(str, max + 1)) == NULL)
            return EXIT_FAILURE;
        }
      else
        size += READ_SIZE;
    }
  free(str);
  return EXIT_SUCCESS;
}
```
A few things to say here:

1. If it's important to you that your program exits cleanly on Ctrl+C, then you must add a [signal handler](http://en.wikipedia.org/wiki/C_signal_handling) to do cleanup instead of exiting immediately on Ctrl+C. Note that your solution works imperfectly: if you hit Ctrl+C in the split-second between your alloc and dealloc, you would still show a leak.
2. It's arguably unimportant for leaks to be cleaned up on program exit, since the OS cleans up any remaining memory in this case. The argument against this is that in software engineering, your program may become a library to another program in the future, and its leaks would then become dangerous. Right now your program leaks on any SIGINT; it's up to you whether part of the interface of your program is to guarantee that memory will be cleaned up on SIGINT even if the process continues running. I personally can't really imagine this being important.
3. You definitely can't prevent leaks on a Ctrl+Z, so don't stress out too much over perfection.

In other words: I don't think it's important, but I did my best to explain why it sort of might be, and you can solve it for Ctrl+C if you insist.
How to get distinct values from an array of classes?

I've found many examples of how to get the distinct values from an array such as this:

```
[1, 2, 2, 3, 4, 5, 6]
```

and then I can do `Array(Set(myArray))`; however, this doesn't work when the array value is a class. What I need is to get a list of distinct values from a property of a class that is in an array, such as this class:

```
class Alert: Decodable {
    var id: Int?
    var category: String?
    var category_id: Int?
    var summary: String?
    var title: String?
    var publish_date: String?
}
```

I'd like to get a list of "category" from my array. I know I can probably just loop through the array and append to a new array if the value doesn't exist. I'm not sure if this is the most effective way, or if there is a more swifty way of doing things. I found I can do:

```
let categories = self.items!.map { $0.category }
```

which will get a list of the categories, but I'm not sure how to make them distinct. I tried `Array(Set(`, but that doesn't work. I get the following error:

> Generic parameter 'Element' could not be inferred

I'm relatively new to Swift, so I know what I want to do, I just can't figure it out syntactically. Any guidance will be appreciated.
`Set` requires your elements to be `Hashable`. That means they have to be `Equatable` and provide a hash value. If your aim is to create a list of unique categories, then you should start by creating an array of categories:

```
let categories = (self.items ?? []).compactMap { $0.category }
```

Then you can simply get the unique categories using `Set`:

```
let uniqueCategories = Array(Set(categories))
```

However, note that `Set` does not keep items ordered. If you also need to keep the `categories` in order, then I would recommend a simple `Array` extension:

```
extension Array where Element: Hashable {
    func distinct() -> Array<Element> {
        var set = Set<Element>()
        return filter {
            guard !set.contains($0) else { return false }
            set.insert($0)
            return true
        }
    }
}
```

or simplified using the `Set.insert` return value ([Leo Dabus](https://stackoverflow.com/users/2303865/leo-dabus)'s idea):

```
extension Array where Element: Hashable {
    func distinct() -> Array<Element> {
        var set = Set<Element>()
        return filter { set.insert($0).inserted }
    }
}
```

and then

```
let uniqueCategories = (self.items ?? [])
    .compactMap { $0.category }
    .distinct()
```
Understanding maximum likelihood estimation

I am told:

> the method of maximum likelihood says we should use the model that assigns the greatest probability to the data we have observed; formally, the maximum likelihood estimator is found by solving
> $$\hat{\theta} = \arg\max_{\theta} \, p(x|\theta)$$
> where $p(x|\theta)$ is called the likelihood function.

Am I correct reading $p(x|\theta)$ in English as "the probability of the data given the parameters"? I am confused because we at first seem to be told that MLE is about the probability of the parameters given the data.

[Update] I am still confused about whether the likelihood function returns a probability or a probability density, because Wikipedia says:

> The [likelihood](https://en.wikipedia.org/wiki/Likelihood_function) function (often simply called the likelihood) represents the probability of random variable realizations conditional on particular values of the statistical parameters

I am a programmer. When I write a function in code I expect it to return a value of a known type. I want to understand the type that the likelihood function returns. If the type can be either probability or probability density, why does Wikipedia not make that clear at the start?
Parameters are *not* random variables but fixed unknowns (at least in the likelihood approach to inference) calibrating the distribution of the observables/observations. Outside a [Bayesian setup](https://amzn.to/3mjvp4o), it is thus incorrect to talk of a *probability distribution* on the parameters. The MLE is the numerical value of the parameters that makes the actual observations the most likely to have occurred:
$$p(x|\hat\theta) = \max_\theta p(x|\theta)$$

The quantity $p(x|\theta)$ in the above thus reads as

1. the probability of observing the realisation $x$ of the random variable $X$ when the parameter value (indexing its distribution) is equal to $\theta$, in a discrete setting, and
2. the density of $X$ at the value $x$ when the parameter value is equal to $\theta$, in a continuous setting.

(I would even avoid the term *given*, since this could be interpreted as a conditional probability or density, which does not make sense if $\theta$ is not a random variable, i.e., outside a [Bayesian framework](https://amzn.to/3mjvp4o).)

To quote the very originator of the notion of *likelihood*, R.A. Fisher, in [‘On the mathematical foundations of theoretical statistics’](https://royalsocietypublishing.org/doi/abs/10.1098/rsta.1922.0009) (1922):

> I suggest that we may speak without confusion of the likelihood of one value of p being thrice the likelihood of another (…) likelihood is not here used loosely as a synonym of probability, but simply to express the relative frequencies with which such values of the hypothetical quantity p would in fact yield the observed sample.

Note the stress made in the [discussed Wikipedia page](https://en.wikipedia.org/wiki/Likelihood_function):

> The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation $x$, but not with the parameter $\theta$.

which reinforces the point that the likelihood function is not a probability density function (for its argument $\theta$) and can take numerical values above $1$ (or any other bound).
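A tiny numerical illustration of these points (my own toy example, not from the original answer): for ten Bernoulli trials with seven successes, the likelihood $L(\theta) = \theta^7(1-\theta)^3$ is a function of $\theta$ that peaks at the sample proportion, and its value there is a (small) probability of the observed data, not a probability statement about $\theta$:

```python
# Toy example: MLE for a Bernoulli parameter by grid search.
# Data: 10 trials, 7 successes. Likelihood L(theta) = theta^7 * (1 - theta)^3.
heads, n = 7, 10

def likelihood(theta):
    return theta**heads * (1 - theta)**(n - heads)

# Evaluate on a fine grid over (0, 1) and take the argmax.
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=likelihood)

print(theta_hat)  # 0.7, the sample proportion of successes
# Note: L(theta_hat) is about 0.0022 -- the probability of this exact
# data sequence under theta_hat, not a probability for theta itself,
# and L does not integrate to 1 over theta.
```

In the continuous case the same code would return densities rather than probabilities, which is exactly the type ambiguity the question asks about.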
Checking consistency of multiple arguments using Mockito

I'm using Mockito to mock a class that has a method that looks something like this:

```
setFoo(int offset, float[] floats)
```

I want to be able to verify that the values in the array (`floats`) are equal (within a given tolerance) to the values in an array of expected values. The catch is that I want to check the contents of `floats` starting at the position specified by `offset`. For the purposes of the test I don't know/care what the offset is, as long as it points at the values I'm expecting. I also don't care what the rest of the array contains; I only care about the values starting at the supplied offset. How do I do this?
While a partial mock isn't a bad idea, you might find your code easier to follow if you use an [ArgumentCaptor](http://docs.mockito.googlecode.com/hg/org/mockito/ArgumentCaptor.html) instead to get the values after the fact. It's a special argument matcher that keeps track of the value it matches.

```
// initialized with MockitoAnnotations.initMocks();
@Captor ArgumentCaptor<Integer> offsetCaptor;
@Captor ArgumentCaptor<float[]> floatsCaptor;
@Mock Bar bar;

@Test public void valuesShouldBeCloseEnough() {
  Sut sut = new Sut(bar);
  sut.doSomething();
  verify(bar).setFoo(offsetCaptor.capture(), floatsCaptor.capture());

  // check values with assertValuesAreCloseEnough, declared elsewhere
  assertValuesAreCloseEnough(offsetCaptor.getValue(), floatsCaptor.getValue());
}
```
What is the role of the undefined exception handler (\_\_und\_svc) in kprobes?

I tried to build kprobes as a loadable kernel module. I am able to run the samples available in the `samples/kprobes/` folder from the kernel tree.

If we configure kprobes in the kernel (`CONFIG_KPROBES`), then the `svc_entry` macro will be expanded with 64 bytes in the `__und_svc()` handler.

**Reference:** <http://lxr.free-electrons.com/source/arch/arm/kernel/entry-armv.S?a=arm#L245>

My aim is to make kprobes a kernel module without touching the kernel side, so the kernel is compiled without enabling CONFIG\_KPROBES, and `svc_entry` will be expanded with 0 in [\_\_und\_svc()](http://lxr.free-electrons.com/source/arch/arm/kernel/entry-armv.S?a=arm#L251).

I would like to clear up these doubts:

1. If the undefined instruction exception is handled by kprobes (because only kprobes created it), then why is `__und_svc()` invoked? What is the role of the `__und_svc()` handler with respect to kprobes?
2. If the 64 bytes of memory are compulsory, how can they be reserved without recompiling the kernel, i.e. how to do it dynamically?

Please share your knowledge.
You may not get responses as your understanding of things is not very good, and it will take some time for anyone on the *linux-arm-kernel* list to respond. Read [kprobes.txt](https://www.kernel.org/doc/Documentation/kprobes.txt) and study the ARM architecture in detail.

> If the undefined instruction exception is handled by kprobes (because only kprobes created it), then why is `__und_svc()` invoked? What is the role of the `__und_svc()` handler with respect to kprobes?

On the ARM, mode `0b11011` is the *undefined instruction mode*. The flow when an *undefined instruction* happens is:

1. lr\_und = pc of undef instruction + 4
2. SPSR\_und = CPSR of the mode where the instruction occurred.
3. Change mode to ARM with interrupts disabled.
4. PC = vector base + 4.

The main vector table of step four is located at [`__vectors_start`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/entry-armv.S#n1130) and this just branches to [`vector_und`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/entry-armv.S#n1080). The code is a macro called [`vector_stub`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/entry-armv.S#n949), which makes a decision to call either `__und_svc` or `__und_usr`. The stack is the 4/8k page that is reserved per process. It is the kernel page which contains both the task structure and the kernel stack.

**kprobe** works by placing *undefined instructions* at the code addresses that you wish to probe. I.e., it involves the *undefined instruction handler*. This should be pretty obvious. It calls two routines, [`call_fpe`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/entry-armv.S#n495) or [`do_undefinstr()`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/traps.c#n397). You are interested in the 2nd case, which gets the opcode and calls `call_undef_hook()`. Add a hook with [register\_undef\_hook()](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/traps.c#n363); you can see this in [`arch_init_kprobes()`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/kprobes.c#n608). The main callback [`kprobe_handler`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/kprobes.c#n203) is called with a `struct pt_regs *regs`, which happens to be the extra memory reserved in `__und_svc`. Notice for instance [`kretprobe_trampoline()`](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/kernel/kprobes.c#n370), which is playing tricks with the stack that it is currently executing with.

> If the 64 bytes of memory are compulsory, how can they be reserved without recompiling the kernel, i.e. how to do it dynamically?

No, it is not compulsory. You can use a different mechanism, but you may have to modify the *kprobes* code. Most likely you will have to limit functionality. It is also possible to completely re-write the stack frame and reserve the extra 64 bytes after the fact. It is **not an allocation** as in `kmalloc()`; it is just adding/subtracting a number from the supervisor stack pointer. I would guess that the code re-writes the return address from the *undefined handler* to execute in the context (ISR, bottom half/thread IRQ, work\_queue, kernel task) of the *kprobed* address. But there are probably additional issues you haven't yet encountered. If `arch_init_kprobes()` is never called, then you can just always do the reservation in `__und_svc`; it just eats 64 bytes of stack, which makes it more likely that the kernel stack will overflow. I.e., change:

```
__und_svc:
        @ Always reserve 64 bytes, even if kprobe is not active.
        svc_entry 64
```

`arch_init_kprobes()` is what actually installs the feature.
pandas.read\_csv moves column names over one I am using the ALL.zip file located [here](http://www.fec.gov/disclosurep/PDownload.do). My goal is to create a pandas DataFrame with it. However, if I run `data=pd.read_csv(foo.csv)` the column names do not match up. The first column has no name, then the second column is labeled with the first, and the last column is a Series of NaN.

So I tried

```
colnames=[list of colnames]
data=pd.read_csv(foo.csv, names=colnames, header=False)
```

which gave me the exact same thing, so I ran

```
data=pd.read_csv(foo.csv, names=colnames)
```

which lined the colnames up perfectly, but treated the CSV's own column names (the first line of the file) as the first row of data. So I ran

```
data=data[1:]
```

which did the trick. So I found a workaround without solving the actual problem. I looked at the [read\_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) documentation and found it a bit overwhelming, and could not figure out a way to fix this using only pd.read\_csv.

What was the fundamental problem (I am assuming it is either user error or a problem with the file)? Is there a way to fix it with one of the options of read\_csv?

Here are the first 2 rows from the csv file

```
cmte_id,cand_id,cand_nm,contbr_nm,contbr_city,contbr_st,contbr_zip,contbr_employer,contbr_occupation,contb_receipt_amt,contb_receipt_dt,receipt_desc,memo_cd,memo_text,form_tp,file_num,tran_id,election_tp
C00458844,"P60006723","Rubio, Marco","HEFFERNAN, MICHAEL","APO","AE","090960009","INFORMATION REQUESTED PER BEST EFFORTS","INFORMATION REQUESTED PER BEST EFFORTS",210,27-JUN-15,"","","","SA17A","1015697","SA17.796904","P2016",
```
It's not the column names that you're having a problem with, it's the index. Each data row in this file ends with a trailing comma (visible in your sample), so every row has one more field than the header has names, and pandas assumes the extra leading column is meant to be the index, shifting every name one position over. Passing `index_col=False` disables that assumption:

```
import pandas as pd
df = pd.read_csv('P00000001-ALL.csv', index_col=False, low_memory=False)
print(df.head(1))

     cmte_id    cand_id       cand_nm           contbr_nm contbr_city  \
0  C00458844  P60006723  Rubio, Marco  HEFFERNAN, MICHAEL         APO

  contbr_st contbr_zip                         contbr_employer  \
0        AE  090960009  INFORMATION REQUESTED PER BEST EFFORTS

                        contbr_occupation  contb_receipt_amt contb_receipt_dt  \
0  INFORMATION REQUESTED PER BEST EFFORTS                210        27-JUN-15

  receipt_desc memo_cd memo_text form_tp file_num      tran_id election_tp
0          NaN     NaN       NaN   SA17A  1015697  SA17.796904       P2016
```

The `low_memory=False` is because column 6 has mixed data types.
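The shift is easy to reproduce. The rows of the FEC file end with a trailing comma, giving one more field than the header has names; the tiny three-column file below is made up purely for illustration:

```python
import io

import pandas as pd

# Each data row ends with a trailing comma, i.e. one more field than the
# header has names (just like the FEC file).
csv_text = "a,b,c\n1,2,3,\n4,5,6,\n"

# Default parsing spends the extra field on an implicit index, shifting
# every column name one position to the left.
shifted = pd.read_csv(io.StringIO(csv_text))

# index_col=False tells pandas the rows merely have trailing delimiters.
aligned = pd.read_csv(io.StringIO(csv_text), index_col=False)

print(shifted["a"].tolist())  # [2, 5] -- these values really belong under "b"
print(aligned["a"].tolist())  # [1, 4]
```

With the real file, the same `index_col=False` argument applies unchanged.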
Spark 2.0.0 reading json data with variable schema I am trying to process a month's worth of website traffic, which is stored in an S3 bucket as json (one json object per line/website traffic hit). The amount of data is big enough that I can't ask Spark to infer the schema (OOM errors). If I specify the schema it loads fine, obviously. But the issue is that the fields contained in each json object differ, so even if I build a schema using one day's worth of traffic, the monthly schema will be different (more fields) and so my Spark job fails.

So I'm curious to understand how others deal with this issue. I can, for example, use a traditional RDD mapreduce job to extract the fields I'm interested in, export, and then load everything into a dataframe. But this is slow and seems a bit self-defeating.

I've found a [similar question here](https://stackoverflow.com/questions/35995785/dataframes-reading-json-files-with-changing-schema) but no relevant info for me.

Thanks.
If you know the fields you're interested in just provide a subset of schema. JSON reader can gracefully ignore unexpected fields. Let's say your data looks like this: ``` import json import tempfile object = {"foo": {"bar": {"x": 1, "y": 1}, "baz": [1, 2, 3]}} _, f = tempfile.mkstemp() with open(f, "w") as fw: json.dump(object, fw) ``` and you're interested only in `foo.bar.x` and `foo.bar.z` (non-existent): ``` from pyspark.sql.types import StructType schema = StructType.fromJson({'fields': [{'metadata': {}, 'name': 'foo', 'nullable': True, 'type': {'fields': [ {'metadata': {}, 'name': 'bar', 'nullable': True, 'type': {'fields': [ {'metadata': {}, 'name': 'x', 'nullable': True, 'type': 'long'}, {'metadata': {}, 'name': 'z', 'nullable': True, 'type': 'double'}], 'type': 'struct'}}], 'type': 'struct'}}], 'type': 'struct'}) df = spark.read.schema(schema).json(f) df.show() ## +----------+ ## | foo| ## +----------+ ## |[[1,null]]| ## +----------+ df.printSchema() ## root ## |-- foo: struct (nullable = true) ## | |-- bar: struct (nullable = true) ## | | |-- x: long (nullable = true) ## | | |-- z: double (nullable = true) ``` You can also reduce sampling ratio for schema inference to improve overall performance.
Is id="nodeName" reserved in html5? I'm using: ``` <span id="nodeName"></span> ``` in my html, then let jquery do: ``` $("#nodeName").html("someString"); ``` Then, the console says that: ``` Uncaught TypeError: Object #<HTMLSpanElement> has no method 'toLowerCase' ``` After I change the id, it works all right. So, is there any reserved id?
It is failing in the following function:

```
acceptData: function( elem ) {
    // Do not set data on non-element because it will not be cleared (#8335).
    if ( elem.nodeType && elem.nodeType !== 1 && elem.nodeType !== 9 ) {
        return false;
    }

    var noData = elem.nodeName && jQuery.noData[ elem.nodeName.toLowerCase() ];

    // nodes accept data unless otherwise specified; rejection can be conditional
    return !noData || noData !== true && elem.getAttribute("classid") === noData;
}
```

Specifically on the call `elem.nodeName.toLowerCase()`, when `elem === window`. Because elements with an `id` are also exposed as named properties of `window`, your `<span id="nodeName">` makes `window.nodeName` refer to the span element itself rather than a string, which is why `toLowerCase` is missing. This gets called when you include jQuery in the page, even if you never select that element in your Javascript. The reason is that jQuery does a check to see which elements can handle `data-attributes` once jQuery is ready. During that check it calls the acceptData function on the `window` element.

This happens in recent versions of jQuery 1.x, from version 1.8.0 up to and including the latest 1.10.1. The bug seems to have been introduced by the following change in jQuery 1.8:

> $(element).data("events"): In version 1.6, jQuery separated its internal data from the user's data to prevent name collisions. However, some people were using the internal undocumented "events" data structure so we made it possible to still retrieve that via .data(). This is now removed in 1.8, but you can still get to the events data for debugging purposes via $.\_data(element, "events"). Note that this is not a supported public interface; the actual data structures may change incompatibly from version to version.

The window is passed into `jQuery._data` as `cur` on line 2939 in [version 1.8.0](http://code.jquery.com/jquery-1.8.0.js), in order to check for the internal "events" data on the window object. This happens when jQuery triggers the `$(window).ready` (not `$(document).ready`) event. Since `window` is not a DOM node, `acceptData` shouldn't be called on it at all.
``` handle = ( jQuery._data( cur, "events" ) || {} )[ event.type ] && jQuery._data( cur, "handle" ); ``` [I created a bug report](http://bugs.jquery.com/ticket/14074)
Correlation between numeric and logical variable gives (intended) error? Example data.

```
require(data.table)
dt <- data.table(rnorm(10), rnorm(10) < 0.5)
```

Computing the correlation between numeric and logical variables gives an error.

```
cor(dt)
#Error in cor(dt) : 'x' must be numeric
```

But the error goes away when converting to a data frame.

```
cor(data.frame(dt))
#           V1         V2
#V1  1.0000000 -0.1631356
#V2 -0.1631356  1.0000000
```

Is this intended behaviour for `data.table`?
`cor` tests whether its `x` or `y` arguments are `data.frame`s (using `is.data.frame`, for which a `data.table` also returns TRUE) and then coerces the argument to a matrix

```
if (is.data.frame(x)) 
    x <- as.matrix(x)
```

The issue appears to be the different ways `as.matrix.data.table` and `as.matrix.data.frame` handle the example data

```
as.matrix(dt)
```

returns a character matrix - this would appear to be a bug in `data.table`

`as.matrix.data.table` and `as.matrix.data.frame` appear to share similar coercion code that nonetheless dispatches differently

```
# data.table:::as.matrix.data.table
else if (non.numeric) {
    for (j in seq_len(p)) {
        if (is.character(X[[j]])) 
            next
        xj <- X[[j]]
        miss <- is.na(xj)
        xj <- if (length(levels(xj))) 
            as.vector(xj)
        else format(xj)
        is.na(xj) <- miss
        X[[j]] <- xj
    }
}

## base::as.matrix.data.frame
else if (non.numeric) {
    for (j in pseq) {
        if (is.character(X[[j]])) 
            next
        xj <- X[[j]]
        miss <- is.na(xj)
        xj <- if (length(levels(xj))) 
            as.vector(xj)
        else format(xj)
        is.na(xj) <- miss
        X[[j]] <- xj
    }
}
```

Currently the `data.table` version coerces the logical column to a character.
IEnumerable to a CSV file I am getting the result of a LINQ query as a var of type `IEnumerable<T>`, and I want a CSV file to be created from that result.

I am getting the result from the following query

```
var r = from table in myDataTable.AsEnumerable()
        orderby table.Field<string>(para1)
        group table by new { Name = table[para1], Y = table[para2] }
        into ResultTable
        select new
        {
            Name = ResultTable.Key,
            Count = ResultTable.Count()
        };
```
Check this ``` public static class LinqToCSV { public static string ToCsv<T>(this IEnumerable<T> items) where T : class { var csvBuilder = new StringBuilder(); var properties = typeof(T).GetProperties(); foreach (T item in items) { string line = string.Join(",",properties.Select(p => p.GetValue(item, null).ToCsvValue()).ToArray()); csvBuilder.AppendLine(line); } return csvBuilder.ToString(); } private static string ToCsvValue<T>(this T item) { if(item == null) return "\"\""; if (item is string) { return string.Format("\"{0}\"", item.ToString().Replace("\"", "\\\"")); } double dummy; if (double.TryParse(item.ToString(), out dummy)) { return string.Format("{0}", item); } return string.Format("\"{0}\"", item); } } ``` Full code at : [Scott Hanselman's ComputerZen Blog - From Linq To CSV](http://www.hanselman.com/blog/default.aspx?date=2010-02-04)
Uncaught TypeError: Cannot read property 'props' of undefined in reactJS I'm trying out an example in reactJS where I'm displaying a list of records and delete them.Refresh the list after each delete. I saw many questions raised by this error heading, but nothing worked with me. ``` var CommentBox = React.createClass({ loadCommentsFromServer: function () { $.ajax({ url: this.props.listUrl, dataType: 'json', success: function (data) { this.setState({data: data}); }.bind(this), error: function (xhr, status, err) { console.error(this.props.url, status, err.toString()); }.bind(this), cache: false }); }, getInitialState: function () { return {data: []}; }, componentDidMount: function () { this.loadCommentsFromServer(); }, onClickingDelete: function() { alert('Upper layer on click delete called'); this.loadCommentsFromServer(); }, render: function () { return ( <div className="commentBox"> <h1>Static web page</h1> <CommentList data={this.state.data} onClickingDelete={this.onClickingDelete} /> </div> ); } }); var CommentList = React.createClass({ render: function() { if (this.props.data != '') { var commentNodes = this.props.data.content.map(function (comment) { return ( <div className="comment" key={comment.id}> <h2 className="commentpageName"> {comment.pageName} </h2> <p> {comment.pageContent} </p> <DeleteButton deleteUrl={'/delete/' + comment.id} onClickingDelete={this.props.onClickingDelete}/> </div> ); }); } return ( <div className="commentList"> {commentNodes} </div> ); } }); var DeleteButton = React.createClass({ onClickingDelete: function() { $.ajax({ url: this.props.deleteUrl, type: 'delete', success: function () { alert('Successfully deleted'); this.props.onClickingDelete(); }.bind(this), error: function (xhr, status, err) { console.error(this.props.deleteUrl, status, err.toString()); }.bind(this), cache: false }); }, render: function() { return ( <button name={this.props.deleteUrl} onClick={this.onClickingDelete}>Delete</button> ); } }); 
ReactDOM.render(<CommentBox listUrl="/list/"/>, document.getElementById('content')); ``` On page load, I'm getting > > "Uncaught TypeError: Cannot read property 'props' of undefined" > > > in the CommentList's line call to DeleteButton. Wondering how "this" can go undefined
When you use array.map, **this** inside the callback is undefined and undergoes the normal resolution for **this** in javascript. In strict mode, it'll stay undefined.

- Non-strict mode: **this** resolves to Window

```
> [1].map(function(test) {console.log(this)});
> [object Window]
```

- Strict mode: **this** resolves to undefined

```
> 'use strict'; [1].map(function(test) {console.log(this)});
> undefined
```

There are different ways to bind it to the current object.

- Using map's thisArg parameter

```
this.props.data.content.map(function(comment) {
    ...
}, this);
```

- Using bind (note that `bind` goes on the callback function itself, not on the array returned by `map`)

```
this.props.data.content.map(function(comment) {
    ...
}.bind(this));
```

- Using an arrow function

```
this.props.data.content.map(comment => {
    ...
});
```

- Using a variable

```
var that = this;
this.props.data.content.map(function(comment) {
    ...
    onClickingDelete={that.props.onClickingDelete}
    ...
});
```

Sources:

> If a thisArg parameter is provided to map, it will be passed to callback when invoked, for use as its this value. Otherwise, the value undefined will be passed for use as its this value.
> <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map>

> In strict mode, the value of this remains at whatever it's set to when entering the execution context. If it's not defined, it remains undefined.
> <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this>
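Outside React, the `thisArg` variant can be demonstrated in a few lines of plain strict-mode JavaScript (the `handler` object and its `prefix` property are invented for the demo):

```javascript
"use strict";

const handler = {
  prefix: "item: ",
  renderAll(items) {
    // Without the second argument, `this` would be undefined inside the
    // callback here (strict mode) and `this.prefix` would throw.
    return items.map(function (item) {
      return this.prefix + item;
    }, this);
  },
};

console.log(handler.renderAll(["a", "b"])); // [ 'item: a', 'item: b' ]
```

Dropping the trailing `, this` turns this example into exactly the `Cannot read property ... of undefined` error from the question.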
Is iOS guaranteed to be little-endian? It appears that ARM processors can be configured as big-endian or little-endian. However, according to the interwebs, ARM processors are "almost always" configured as little-endian. Is it guaranteed that iOS will run an ARM processor in little-endian mode? Is there a compile-time flag which I could check, via `#if` or anything else? Although there are functions in Foundation to handle different byte orderings, it seems that one could save some trouble about that if one could be sure that the byte ordering was always the same.
*At the time of this writing*, iOS runs the ARMs in *little-endian* mode.

However, for architectures supporting both endiannesses, it's considered good practice to handle both cases without making any assumptions about how the higher-layer software/firmware runs the hardware. The reasons are future code changes that affect endianness, or architectural changes that result in a fixed-endianness mode.

Apple has changed CPU architecture multiple times; that alone should be a hint. And the fact that today's microprocessor and microcontroller market is being actively driven forward with new products and developments makes this more than a good practice: it's almost a must. Software and hardware vendors in the mobile/smart-appliance sector are known to change their CPU architectures with regularity.

Plus, and more importantly, proper handling of multiple byte orderings will lead you to a robust, solid and future-proof solution.
O(n) algorithm to find a “publisher” in a social network I was asked a question about how to find a "publisher" in a social network. Suppose the (simplified) social network only has a "following" relationship between two users, and one cannot follow himself. Then we define a "publisher" as a user who is followed by ALL other users but does not follow anyone.

More specifically, given such a social network graph in the format of an adjacency matrix, say an NxN bool matrix, where cell[i,j] indicates whether or not user i follows user j, how do we find the publisher?

What I can see is that at most one publisher can exist (it's easy to prove: since the publisher is followed by everyone else, everyone else follows at least one user, so none of them can be the publisher).

I did come up with a naive solution: first scan column by column; if there is an all-true column j (except for cell[j,j], of course), then scan row[j] to make sure it's all false. Obviously, the performance is O(n^2) for the naive algorithm, because we scan the whole matrix.

However, I was told that there is an O(n) solution. I am kind of stuck at O(n). Any hints?
If your data is presented as an adjacency matrix, then you can proceed as follows. Start by checking entry (1,2) in the matrix. If 1 follows 2 then 1 is not the publisher, and if 1 does not follow 2 then 2 is not the publisher. Remove whoever is not the publisher (1 or 2) and let X be the remaining node. Then check entry (X,3) in the matrix. Similarly you will get that either X is not the publisher or 3 is not the publisher. Remove whoever is not the publisher and then add node 4 and repeat. After you have repeated this process with all n nodes, you will be left with one candidate for the publisher. Then you can check the row and column for the candidate to verify that it is a true publisher. Total running time is O(n) for the entire algorithm, even though the adjacency matrix has size n^2.
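The elimination-plus-verification passes above can be sketched as follows (Python used for brevity, 0-indexed; `follows[i][j]` plays the role of `cell[i,j]` from the question):

```python
def find_publisher(follows):
    """Return the index of the publisher, or None if there is none.

    follows[i][j] is True when user i follows user j.
    """
    n = len(follows)
    candidate = 0
    # Elimination pass: each matrix lookup rules out exactly one user,
    # so n - 1 lookups leave a single candidate.
    for other in range(1, n):
        if follows[candidate][other]:
            # candidate follows someone, so candidate is not the publisher.
            candidate = other
        # Otherwise `other` is not followed by candidate, so `other`
        # cannot be the publisher; keep the current candidate.

    # Verification pass: candidate must follow nobody and be followed by all.
    for i in range(n):
        if i != candidate and (follows[candidate][i] or not follows[i][candidate]):
            return None
    return candidate


matrix = [
    [False, True,  True],
    [True,  False, True],
    [False, False, False],  # user 2 follows nobody and is followed by everyone
]
print(find_publisher(matrix))  # 2
```

Both passes inspect O(n) cells in total, even though the matrix has n^2 entries.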
Mysql Regular Expression search with no repeating characters I have a database table with words from a dictionary. Now I want to select words for an anagram. For example if I give the string `SEPIAN` it should fetch values like `apes`, `pain`, `pains`, `pies`, `pines`, `sepia`, etc. For this I used the query ``` SELECT * FROM words WHERE word REGEXP '^[SEPIAN]{1,6}$' ``` But this query returns words like `anna`, `essen` which have repeated characters not in the supplied string. Eg. `anna` has two `n`'s but there is only one `n` in the search string `SEPIAN`. How can I write my regular expression to achieve this? Also if there are repeated characters in my search string at that time the repeated characters should reflect in the result.
Since MySQL does not support back-referencing capturing groups, the typical solution of `(\w).*\1` will not work. This means that any solution given will need to enumerate all possible doubles. Furthermore, as far as I can tell back-references are not valid in look-aheads or look-behinds, and look-aheads and look-behinds are not supported in MySQL. However, you can split this into two expressions, and use the following query: ``` SELECT * FROM words WHERE word REGEXP '^[SEPIAN]{1,6}$' AND NOT word REGEXP 'S.*?S|E.*?E|P.*?P|I.*?I|A.*?A|N.*?N' ``` Not very pretty, but it works and it should be fairly efficient as well. --- To support a set limit of repeated characters, use the following pattern for your secondary expression: ``` A(.*?A){X,} ``` Where `A` is your character and `X` is the number of times it's allowed. So if you're adding another `N` to your string `SEPIANN` (for a total of 2 `N`s), your query would become: ``` SELECT * FROM words WHERE word REGEXP '^[SEPIAN]{1,7}$' AND NOT word REGEXP 'S.*?S|E.*?E|P.*?P|I.*?I|A.*?A|N(.*?N){2}' ```
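Since the second expression grows with the rack of letters, it can also be generated mechanically. A sketch of such a generator in Python (the helper name and the alphabetical ordering are my own choices; the per-letter pattern shapes are the ones shown above):

```python
from collections import Counter

def excess_repeat_pattern(rack):
    """Build the second REGEXP: it matches any word that uses some
    letter more often than the rack allows."""
    parts = []
    for ch, allowed in sorted(Counter(rack.upper()).items()):
        if allowed == 1:
            parts.append(f"{ch}.*?{ch}")
        else:
            # One occurrence plus `allowed` more exceeds the limit.
            parts.append(f"{ch}(.*?{ch}){{{allowed}}}")
    return "|".join(parts)

print(excess_repeat_pattern("SEPIAN"))
# A.*?A|E.*?E|I.*?I|N.*?N|P.*?P|S.*?S
print(excess_repeat_pattern("SEPIANN"))
# A.*?A|E.*?E|I.*?I|N(.*?N){2}|P.*?P|S.*?S
```

The generated string can then be interpolated into the `AND NOT word REGEXP '...'` clause of the query.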
Is there a way to merge textures in Unity in a shader? I'm trying to animate a 2D game object using several overlaid layers of images in one mesh renderer. I've got several layers of different textures, each is an image with a transparent background. I've found a way to programmatically create a rectangular mesh and layer the materials within it with UV mapping. Unfortunately Unity now has to render each of these material layers separately, despite the fact that they are all within one mesh. This results in a very inefficient number of draw calls. I can see that each material now has it's own shader as well. Will I need to edit all of my images into one gigantic image outside of Unity and display portions of them using UV mapping in a single material within the mesh? Or is there some way to achieve this with a shader?
Create your own Shader (Code or ShaderGraph). Shaders can render multiple Textures (layers). You can blend by alpha/transprency however you like. Edit: Example in ShaderGraph: [![enter image description here](https://i.stack.imgur.com/86quC.png)](https://i.stack.imgur.com/86quC.png) ShaderGraph Code: <https://pastebin.com/a8ubgxRP> ``` application/vnd.unity.graphview.elements { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.CopyPasteGraph", "m_ObjectId": "82a0e513542e4106ae94a0ba8a6ec750", "m_Edges": [ { "m_OutputSlot": { "m_Node": { "m_Id": "a58e7e104e604e0b9e2961da5510e2bf" }, "m_SlotId": 0 }, "m_InputSlot": { "m_Node": { "m_Id": "5e1616a6e87c470f8e3520b520b86bea" }, "m_SlotId": 1 } }, { "m_OutputSlot": { "m_Node": { "m_Id": "5e1616a6e87c470f8e3520b520b86bea" }, "m_SlotId": 0 }, "m_InputSlot": { "m_Node": { "m_Id": "b9bc9d71a2354d1ebadb93dab18e7223" }, "m_SlotId": 0 } }, { "m_OutputSlot": { "m_Node": { "m_Id": "e5668fa7ac4e42fdaa32049802bd78b2" }, "m_SlotId": 0 }, "m_InputSlot": { "m_Node": { "m_Id": "a275a2c058614973b0efd13817919cc6" }, "m_SlotId": 1 } }, { "m_OutputSlot": { "m_Node": { "m_Id": "a275a2c058614973b0efd13817919cc6" }, "m_SlotId": 0 }, "m_InputSlot": { "m_Node": { "m_Id": "b9bc9d71a2354d1ebadb93dab18e7223" }, "m_SlotId": 1 } }, { "m_OutputSlot": { "m_Node": { "m_Id": "a275a2c058614973b0efd13817919cc6" }, "m_SlotId": 7 }, "m_InputSlot": { "m_Node": { "m_Id": "b9bc9d71a2354d1ebadb93dab18e7223" }, "m_SlotId": 3 } } ], "m_Nodes": [ { "m_Id": "5e1616a6e87c470f8e3520b520b86bea" }, { "m_Id": "a275a2c058614973b0efd13817919cc6" }, { "m_Id": "e5668fa7ac4e42fdaa32049802bd78b2" }, { "m_Id": "b9bc9d71a2354d1ebadb93dab18e7223" }, { "m_Id": "a58e7e104e604e0b9e2961da5510e2bf" } ], "m_Groups": [], "m_StickyNotes": [], "m_Inputs": [], "m_MetaProperties": [], "m_MetaPropertyIds": [], "m_MetaKeywords": [], "m_MetaKeywordIds": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "09a8291e17ab424e8149728df0325ac7", "m_Id": 
5, "m_DisplayName": "G", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "G", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "09adddb00a6a45ba89f015d46cc7e777", "m_Id": 5, "m_DisplayName": "G", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "G", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector4MaterialSlot", "m_ObjectId": "254965f8ab4b456789bf3701f105035f", "m_Id": 0, "m_DisplayName": "RGBA", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "RGBA", "m_StageCapability": 2, "m_Value": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "394d8ab6d6254442a22a1d2a1d393090", "m_Id": 4, "m_DisplayName": "R", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "R", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.SampleTexture2DNode", "m_ObjectId": "5e1616a6e87c470f8e3520b520b86bea", "m_Group": { "m_Id": "" }, "m_Name": "Sample Texture 2D", "m_DrawState": { "m_Expanded": true, "m_Position": { "serializedVersion": "2", "x": -1229.6002197265625, "y": -552.0, "width": 208.0000762939453, "height": 433.6000061035156 } }, "m_Slots": [ { "m_Id": "7f1789f703db4759a2c2cbbdcee37bb7" }, { "m_Id": "fdbf06e354c64748bc51ee1f630dec56" }, { "m_Id": "09a8291e17ab424e8149728df0325ac7" }, { "m_Id": "e3d48a7642ee490d9e52742319616c2a" }, { "m_Id": "6ea82df60cc64a08a6d75d13343fa8b3" }, { "m_Id": "fcc275cddbd44f6491e5c6752757e36f" }, { "m_Id": "6f90ea7d44914d0985b264b8fdd91f89" }, { "m_Id": "f9c9786a2bcd475d9ad02b23429c97f6" } ], "synonyms": [], "m_Precision": 0, "m_PreviewExpanded": true, 
"m_PreviewMode": 0, "m_CustomColors": { "m_SerializableColors": [] }, "m_TextureType": 0, "m_NormalMapSpace": 0 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.DynamicVectorMaterialSlot", "m_ObjectId": "5f0e671f32cf47cda2dbaf52f3591216", "m_Id": 2, "m_DisplayName": "Out", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "Out", "m_StageCapability": 3, "m_Value": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 } } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Texture2DMaterialSlot", "m_ObjectId": "69f238b9d6ec46009f9ad1e06532c289", "m_Id": 0, "m_DisplayName": "Out", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "Out", "m_StageCapability": 3, "m_BareResource": false } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "6a74085b28a4494f9928fbde7bb14ab4", "m_Id": 6, "m_DisplayName": "B", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "B", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "6ea82df60cc64a08a6d75d13343fa8b3", "m_Id": 7, "m_DisplayName": "A", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "A", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.UVMaterialSlot", "m_ObjectId": "6f90ea7d44914d0985b264b8fdd91f89", "m_Id": 2, "m_DisplayName": "UV", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "UV", "m_StageCapability": 3, "m_Value": { "x": 0.0, "y": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0 }, "m_Labels": [], "m_Channel": 0 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.DynamicVectorMaterialSlot", "m_ObjectId": "7d86c10d04514f1db34f1c03b5e0d07a", "m_Id": 0, "m_DisplayName": "Base", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "Base", "m_StageCapability": 3, "m_Value": { 
"x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 } } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector4MaterialSlot", "m_ObjectId": "7f1789f703db4759a2c2cbbdcee37bb7", "m_Id": 0, "m_DisplayName": "RGBA", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "RGBA", "m_StageCapability": 2, "m_Value": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Texture2DMaterialSlot", "m_ObjectId": "9cf09bc550a4468faa6908e33854182c", "m_Id": 0, "m_DisplayName": "Out", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "Out", "m_StageCapability": 3, "m_BareResource": false } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.UVMaterialSlot", "m_ObjectId": "9f7246d6f2f84e50a35232a424585f47", "m_Id": 2, "m_DisplayName": "UV", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "UV", "m_StageCapability": 3, "m_Value": { "x": 0.0, "y": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0 }, "m_Labels": [], "m_Channel": 0 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.SampleTexture2DNode", "m_ObjectId": "a275a2c058614973b0efd13817919cc6", "m_Group": { "m_Id": "" }, "m_Name": "Sample Texture 2D", "m_DrawState": { "m_Expanded": true, "m_Position": { "serializedVersion": "2", "x": -1236.8001708984375, "y": -95.99998474121094, "width": 208.00001525878907, "height": 433.6000061035156 } }, "m_Slots": [ { "m_Id": "254965f8ab4b456789bf3701f105035f" }, { "m_Id": "394d8ab6d6254442a22a1d2a1d393090" }, { "m_Id": "09adddb00a6a45ba89f015d46cc7e777" }, { "m_Id": "6a74085b28a4494f9928fbde7bb14ab4" }, { "m_Id": "de7a0347276243dca2e037c26fdd8b82" }, { "m_Id": "f18732f84c8f40849bb8eb7fc6d31fb6" }, { "m_Id": "9f7246d6f2f84e50a35232a424585f47" }, { "m_Id": "f08fa5b4eede46afad513c1f5cb4539e" } ], "synonyms": [], "m_Precision": 0, "m_PreviewExpanded": true, "m_PreviewMode": 0, "m_CustomColors": 
{ "m_SerializableColors": [] }, "m_TextureType": 0, "m_NormalMapSpace": 0 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Texture2DAssetNode", "m_ObjectId": "a58e7e104e604e0b9e2961da5510e2bf", "m_Group": { "m_Id": "" }, "m_Name": "Texture 2D Asset", "m_DrawState": { "m_Expanded": true, "m_Position": { "serializedVersion": "2", "x": -1484.8001708984375, "y": -556.7999877929688, "width": 145.5999755859375, "height": 105.59998321533203 } }, "m_Slots": [ { "m_Id": "9cf09bc550a4468faa6908e33854182c" } ], "synonyms": [], "m_Precision": 0, "m_PreviewExpanded": true, "m_PreviewMode": 0, "m_CustomColors": { "m_SerializableColors": [] }, "m_Texture": { "m_SerializedTexture": "{\"texture\":{\"instanceID\":0}}", "m_Guid": "" } } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.DynamicVectorMaterialSlot", "m_ObjectId": "aee1f01a0a34437095d6a66540ce346f", "m_Id": 1, "m_DisplayName": "Blend", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "Blend", "m_StageCapability": 3, "m_Value": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 }, "m_DefaultValue": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 0.0 } } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.BlendNode", "m_ObjectId": "b9bc9d71a2354d1ebadb93dab18e7223", "m_Group": { "m_Id": "" }, "m_Name": "Blend", "m_DrawState": { "m_Expanded": true, "m_Position": { "serializedVersion": "2", "x": -852.800048828125, "y": -306.3999938964844, "width": 208.00001525878907, "height": 360.0 } }, "m_Slots": [ { "m_Id": "7d86c10d04514f1db34f1c03b5e0d07a" }, { "m_Id": "aee1f01a0a34437095d6a66540ce346f" }, { "m_Id": "c868501ed3c14206b2d478b0ceb706bf" }, { "m_Id": "5f0e671f32cf47cda2dbaf52f3591216" } ], "synonyms": [], "m_Precision": 0, "m_PreviewExpanded": true, "m_PreviewMode": 0, "m_CustomColors": { "m_SerializableColors": [] }, "m_BlendMode": 21 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "c868501ed3c14206b2d478b0ceb706bf", "m_Id": 3, "m_DisplayName": "Opacity", "m_SlotType": 0, 
"m_Hidden": false, "m_ShaderOutputName": "Opacity", "m_StageCapability": 3, "m_Value": 1.0, "m_DefaultValue": 1.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "de7a0347276243dca2e037c26fdd8b82", "m_Id": 7, "m_DisplayName": "A", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "A", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "e3d48a7642ee490d9e52742319616c2a", "m_Id": 6, "m_DisplayName": "B", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "B", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Texture2DAssetNode", "m_ObjectId": "e5668fa7ac4e42fdaa32049802bd78b2", "m_Group": { "m_Id": "" }, "m_Name": "Texture 2D Asset", "m_DrawState": { "m_Expanded": true, "m_Position": { "serializedVersion": "2", "x": -1501.6002197265625, "y": -96.79998779296875, "width": 145.60009765625, "height": 105.5999984741211 } }, "m_Slots": [ { "m_Id": "69f238b9d6ec46009f9ad1e06532c289" } ], "synonyms": [], "m_Precision": 0, "m_PreviewExpanded": true, "m_PreviewMode": 0, "m_CustomColors": { "m_SerializableColors": [] }, "m_Texture": { "m_SerializedTexture": "{\"texture\":{\"instanceID\":0}}", "m_Guid": "" } } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.SamplerStateMaterialSlot", "m_ObjectId": "f08fa5b4eede46afad513c1f5cb4539e", "m_Id": 3, "m_DisplayName": "Sampler", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "Sampler", "m_StageCapability": 3, "m_BareResource": false } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Texture2DInputMaterialSlot", "m_ObjectId": "f18732f84c8f40849bb8eb7fc6d31fb6", "m_Id": 1, "m_DisplayName": "Texture", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "Texture", "m_StageCapability": 3, "m_BareResource": false, "m_Texture": { 
"m_SerializedTexture": "{\"texture\":{\"instanceID\":0}}", "m_Guid": "" }, "m_DefaultType": 0 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.SamplerStateMaterialSlot", "m_ObjectId": "f9c9786a2bcd475d9ad02b23429c97f6", "m_Id": 3, "m_DisplayName": "Sampler", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "Sampler", "m_StageCapability": 3, "m_BareResource": false } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Texture2DInputMaterialSlot", "m_ObjectId": "fcc275cddbd44f6491e5c6752757e36f", "m_Id": 1, "m_DisplayName": "Texture", "m_SlotType": 0, "m_Hidden": false, "m_ShaderOutputName": "Texture", "m_StageCapability": 3, "m_BareResource": false, "m_Texture": { "m_SerializedTexture": "{\"texture\":{\"instanceID\":0}}", "m_Guid": "" }, "m_DefaultType": 0 } { "m_SGVersion": 0, "m_Type": "UnityEditor.ShaderGraph.Vector1MaterialSlot", "m_ObjectId": "fdbf06e354c64748bc51ee1f630dec56", "m_Id": 4, "m_DisplayName": "R", "m_SlotType": 1, "m_Hidden": false, "m_ShaderOutputName": "R", "m_StageCapability": 2, "m_Value": 0.0, "m_DefaultValue": 0.0, "m_Labels": [] } ```
initializer-string for array of chars is too long C I'm working on a program that accepts input and outputs a numerical value corresponding to the input. I get the error on the char part. I don't understand why it would have an error like that when there are only 27 characters in the array that has a size of 27? ``` int main () { char greek[27] = "ABGDE#ZYHIKLMNXOPQRSTUFC$W3"; } ```
You need room for one more character, i.e. `[28]`, to hold the trailing `'\0'` that makes it a valid string. Take a look at [C Programming Notes: Chapter 8: Strings](http://www.eskimo.com/~scs/cclass/notes/sx8.html): > > Strings in C are represented by arrays of characters. The end of the > string is marked with a special character, the null character, which > is simply the character with the value 0. (The null character has no > relation except in name to the null pointer. In the ASCII character > set, the null character is named NUL.) The null or string-terminating > character is represented by another character escape sequence, \0. > > > And as pointed out by Jim Balter and Jayesh, when you provide initial values, you can omit the array size (the compiler uses the number of initializers, including the terminator, as the array size). ``` char greek[] = "ABGDE#ZYHIKLMNXOPQRSTUFC$W3"; ```
PHP - passing variable from one object to another troubles I've recently started to work with OO PHP. As a training practice I'm trying to write some simple classes. I have trouble passing a variable from one class to another. Is it even possible? ``` class group { public $array = array(); public function person($name,$surname) { $this->person = new person($name,$surname); } public function __destruct() { print_r($this->array); } } class person { public function __construct($name,$surname) { $this->name = $name; $this->surname = $surname; } } $A = new group(); $A->person("John","Doe"); ``` What I want to achieve here is to pass a person as another member of the group (by simply putting it in the group's array) for further modifications and sorting. I've been googling around but found nothing. Please forgive me if it's a dumb one. ;)
I'm not sure I totally understand but I think you want: ``` Class group { public $members=array(); public function person($name,$surname) { $this->members[]=new person($name,$surname); //Creates a new person object and adds it to the internal array. } /*...*/ } ``` A better alternative (separation of intent) would be: ``` Class group { public $members=array(); public function addPerson(person $p) { $this->members[]=$p; //Avoids this function needing to know how to construct a person object // which means you can change the constructor, or add other properties // to the person object before passing it to this group. } /*...*/ } ```
How does the React virtual DOM differ from Ajax in functionality? I just started to learn React JS. I learnt how React applies changes on render, and how the virtual DOM helps it do so. I'm a dev who used Ajax previously on projects. I understood the benefits and efficiency of using React. But while learning I understood that the React virtual DOM is used to update only the objects which have changed. If I am not mistaken, the same thing is achieved by Ajax. Could someone clarify both concepts?
Let us separate the context into four items; 1. Internet 2. Communication 3. Content (HTML, JS and CSS) 4. Browser It is common knowledge that; - we need a browser to view content. - browsers communicate via the HTTP protocol over the internet It is highly likely you have heard of the Document Object Model (DOM); however there is another aspect known as the Browser Object Model (BOM). It is the BOM which; 1. takes the URL from the address bar or the anchor link you click on, then fetches the concerned content and forwards it to the DOM for interpretation. 2. Once the new content is received by the DOM, it interprets the code and draws a meaningful visual representation. 3. This interpretation cycle is denoted by various window states and events, i.e. pop, document ready, on page show, on page hide, on load, on unload, on resize, on focus, on click etc. Some diagrams for better understanding; please take some time to glance at them. [Window, BOM, DOM and Javascript relationship](https://i.stack.imgur.com/XUU3n.png) [Document Object Model Node Tree](https://i.stack.imgur.com/TNP3L.png) [Browser Object Model](https://i.stack.imgur.com/wlR1r.png) [Relationship of JavaScript with JavaScript coding standards, DOM and BOM](https://i.stack.imgur.com/3J9HX.png) The scheme of things works as follows; 1. URL fetched - pop event 2. Initial HTML code received - document ready state 3. HTML code interpreted, kept in memory associated with that particular window as the DOM tree state - window rendered event 4. Visual representation, as the result of rendering, pushed to screen - on page show event 5. DOM tree drawn on screen pixel by pixel - window painted event 6. Meanwhile all other associated content is being fetched, e.g. CSS, JS, background images etc. 7. Associated content is fully received - on load event 8. On completion of the on load event, newly received CSS & JS are compared with the state kept in associated memory as in step 3. 9. If there are differences, or new instructions, start over from step 3. 
In a nutshell; - we are rendering and painting the window twice for a single collective page. - We do not notice it much these days because browser memory has improved; otherwise, in the good old days, a white flash or flicker was the visible impact of such states and events. - The matter only becomes worse if we need to hop from one page to another. First the browser must call the pop event, then page hide, then the on before unload event, then start all over from step 2, doing everything twice. - To really drive this home in a developer's brain: DOM standards aka HTML standards are maintained by the W3C, JavaScript standards are maintained by ECMA, and browsers stipulate which of those standards they choose to include in their BOM. Thus while a tag might work in all other browsers it may not work in IE, or while an arrow function may work in modern browsers it may not work in slightly older versions. That is why we use polyfills, and precisely why various libraries and frameworks exist, including jQuery. **What does Ajax intend to achieve?** Once the browser enters the document ready state, the BOM instructs the navigator functionality to initiate a number of HTTP connections (maybe up to 6 at a time) in order to download the various images, CSS and JS files embedded in the page. This all then goes on in the background. Developers hate IE due to its notorious ways of not implementing the latest DOM or JS features in its BOM. However, if it was not for IE we would not have AJAX as we know it. In 1996 IE introduced the **iframe** feature, which could download external content into that particular area of the page. Any subsequent HTTP request would be repainted in that particular area of the screen rather than repainting the whole page. The novel idea was simply to use, for the iframe, the same scheme of things one uses for downloading images. Obviously it had to be implemented at the BOM level. So from IE5 onward the iframe tag became a standard feature. Within a year the iframe was introduced as a standard in the HTML 4.0 DOM standard, and all the browsers implemented the technique in their BOM by default. 
By 1999 the novel idea of communicating behind the scenes after the document ready state led to a further "what if?" notion. What if, using JavaScript, we could fetch any content like an iframe does and update any given area on the screen? Again IE was the one to come up with the solution, known as the **XMLHTTP ActiveX control**. Shortly afterwards Mozilla, Safari, Opera and other browsers implemented it as the **XMLHttpRequest JavaScript object**. Again it is a BOM feature, and probably why you had to write this code. ``` function Xhr(){ /* returns cross-browser XMLHttpRequest, or null if unable */ try { return new XMLHttpRequest(); }catch(e){} try { return new ActiveXObject("Msxml3.XMLHTTP"); }catch(e){} try { return new ActiveXObject("Msxml2.XMLHTTP.6.0"); }catch(e){} try { return new ActiveXObject("Msxml2.XMLHTTP.3.0"); }catch(e){} try { return new ActiveXObject("Msxml2.XMLHTTP"); }catch(e){} try { return new ActiveXObject("Microsoft.XMLHTTP"); }catch(e){} return null; } ``` Thank you my dear lord for giving John Resig the courage to develop jQuery, which made AJAX as easy as a whistle. **What did we achieve from AJAX?** **PROS** - We conducted communication through a document-level navigator instance via JS programming (see diagram 3, the Browser Object Model), rather than a window-level navigator event (see diagram 1, the Window, BOM, DOM and Javascript relationship). - After re-rendering of the DOM tree, repainting only the part of the DOM tree in concern rather than redrawing the whole DOM tree aka repainting the whole page. Hence no white flicker. - Post re-rendering we could traverse and manipulate the new content as if it was part of the original content. - We could trigger up to six of these navigator instances that would run asynchronously, following the above pattern independent of each other. **CONS** - Since there is no window-level navigator event, the BOM does not call the history function. Hence no pop state, thus most likely no cache and definitely no URL change in the address bar. 
- To avail of any of these missing functionalities one would have to implement them programmatically. The default is no action. **What does React intend to achieve?** Since the advent of AJAX, scalable web applications began to emerge. This is in contrast to the web page. Single Page Application (SPA) is just another name for the same context. SPA as a concept soon led to AJAX-based web component development, e.g. AJAX form submission, autocomplete and that colour-changing message bell and whatnot. Referring back to the DOM: every time we receive the initial HTML code via the window pop event, the DOM parser goes through the code from the top line towards the bottom line. Once past a line it will not go back up again. On each pass of a line it parses the line and adds it to the DOM tree then being rendered, which will be painted later. Example as follows; ``` <!DOCTYPE html> <html> <body> <h1>My First Heading</h1> <p>My first paragraph.</p> </body> </html> ``` The parser reads the first tag to determine what standard to follow; then once it comes to the body tag it applies the default properties of that element; then the h1 tag applies default properties if none are declared explicitly. Default properties are defined by the W3C in the HTML 5 standards. Now the DOM tree rendered in memory has to be painted onto the screen. This is where the fun begins. Referring back to the 4th diagram, the relationship of JavaScript with ECMA, DOM and BOM: the BOM calls on the JS engine to do the Picasso work on screen. This, in short, is called; - Hard DOM - the one which is received initially via the pop event - Virtual DOM - the one which is parsed and comprises the DOM tree in memory - Hard DOM - again, the one which will be painted onto the screen from the virtual DOM - Shadow DOM - certain elements of the DOM tree have fixed behaviour and properties which, once implemented, are called the shadow DOM of that particular DOM tree element. 
Shadow DOM example ``` <input type="text" name="FirstName" value="Mickey"></input> ``` has the shadow DOM applied as follows; ``` <input type="text" name="FirstName" value="Mickey"> #shadow-root (user-agent) <div>Mickey</div> </input> ``` Aha, so an input actually is an editable div after all. Now you know what the **contenteditable="true"** property does on a div tag. So the scheme of things is as follows; 1. HTML code is received 2. HTML is parsed into the virtual DOM 3. In parallel the shadow DOM is applied on each of the concerned tree items 4. Both virtual and shadow DOM are kept in memory 5. Both of them get painted onto the screen collectively as the flattened hard DOM tree. We have easy traversal access to the hard DOM and virtual DOM, but accessing the shadow DOM is a whole new ball game. So we leave it at that. This is how it pans out in the following diagram; [HTML parsing to paint journey](https://i.stack.imgur.com/UtRxD.png) At a later stage the JS engine runs a batch of commands and creates an object identifier for each item rendered in the virtual DOM tree. Below is just an example if you were to create it yourself. ``` var btn = document.createElement("BUTTON"); btn.innerHTML = "CLICK ME"; document.body.appendChild(btn); ``` Once the create element function is called, the virtual DOM has another element rendered into it. You can then do whatever you wish with it. However, until you call append child it never enters the territory of the hard, flattened DOM. Then again, you can do whatever you wish with it after it enters the hard DOM. **So what's the point of all this virtual and hard DOM?** Let's explore. In the normal scheme of things, if I were to traverse the hard DOM I would have to query the current document to run through its whole DOM tree and find the element we are looking for. This is what jQuery does; the result is called an HTML collection. 
Example below; ``` var btn = document.createElement("BUTTON"); btn.innerHTML = "CLICK ME"; document.body.appendChild(btn); // manipulate the created element the hard DOM way, but first find it document.querySelector("BUTTON").innerHTML = "OPPS ME"; ``` Suppose there are 5 buttons; which button will the above query manipulate? Just the first one. To manipulate only the 3rd one, I need to first generate the HTML collection into an array, then run through that array to find the reference for my desired element, then manipulate it. Example below; ``` document.querySelectorAll("BUTTON")[2].innerHTML = "OPPS ME"; ``` That is a lot of legwork just to change the text of one button. If I wish to do it the virtual way, all I have to do is call this code; ``` btn.innerHTML = "OPPS ME"; ``` React intends to do it the virtual way. - It dictates there will be no initial hard DOM HTML code. Just JS in the body tag, if at all. - It runs a batch of JS create element functions to generate the DOM tree and creates a reference to each newly created element. - Once the whole tree is generated it pushes it for painting. - In doing so it cuts the legwork of parsing, then rendering, then painting, then re-parsing, then re-rendering, then repainting. - When in future something needs to be updated, it calls on the reference and does the magic work as we did above. - The browser's BOM by default always keeps the virtual DOM in sync with the hard DOM and the hard DOM with the virtual DOM. If one thing changes in either of them it is reflected in the other as a mirror depiction. - What's more, if you keep a record of what you changed and on what reference, you can go back and forth implementing and re-implementing the changes as if they were cached in history. - The BOM already does that on form inputs (`Ctrl`+`Z`, `Ctrl`+`Y`). Only in React you are doing it through JS on all of the elements. This, my friend, is the very basis of state management. 
**What did we achieve from React?** **PROS** - Saved time and headache in parsing, rendering, painting, finding and manipulating the DOM tree. - In doing so, only repainting the part of the DOM tree in concern rather than redrawing the whole DOM tree aka repainting the whole page. Hence no white flicker, similar to AJAX. - One can perform pre-render calculations on data beforehand, and only act if conditions are met. - You control the state management. **CONS** 1. No pop state, thus one has to engage in programming to avail of such features. 2. One has to build a routing system, or a simple F5 sends you back to step one as if nothing ever existed. 3. Heavy reliance on JS, almost forgoing the brilliant potential the DOM environment can offer; CSS etc. are just step siblings. 4. The SPA notion is almost a decade old; it has since been realised that it does not fix all problems. Modern mobile environments require more. 5. The end script is bloated. **What does React not care about?** React does not care how the new data comes and where from. Be it Ajax, be it user fed, all it does is manage the state of the data, e.g. the old button text was "so and so", the new data is "so and so", should the change be made or not. If the change is made, keep a record of it. Both React and Ajax intend to leech on BOM processes, and initiate functions in light of progressive enhancement and user experience. One does it by manipulating how communication is made; the other by how the data from that communication is rendered in the end. Thus both are essentially part of the SPA concept; they may complement each other but function independently in two different domains.
How to make an On/Off button in Ionic I need to put a button in Ionic that stays activated after it's been pressed, and only deactivates after it's been pressed again. There is a similar question but it only applies to icon buttons. ([How to adding navigation bar button having ionic-on/ ionic-off functionality](https://stackoverflow.com/questions/29841253/how-to-adding-navigation-bar-button-having-ionic-on-ionic-off-functionality)). **EDIT** I can't use a toggle button; it is required to be a regular looking button (in this case an Ionic button-outline) that stays active when pressed. Here is some code: ``` <div class="botones"> <div class="row"> <div class="col"> <button class="button button-outline button-block button-positive"> Promociones </button> </div> <div class="col"> <button class="button button-outline button-block button-energized" > Aprobados </button> </div> <div class="col"> <button class="button button-outline button-block button-assertive" > Alerta </button> </div> <div class="col"> <button class="button button-outline button-block button-balanced" > ATM </button> </div> </div> </div> ``` As you can see, it is a simple horizontal array of buttons. They are supposed to act as filters for a message inbox, so they have to stay pressed (one at a time at most) while the filter is active. Like a tab but not quite. The main question is: how can I access the activated state of the button so I can maintain it activated?
You may find toggle button useful. Here's the [official link](https://ionicframework.com/docs/v3/api/components/toggle/Toggle/) to this example: ``` <ion-toggle ng-model="airplaneMode" toggle-class="toggle-calm">Airplane Mode</ion-toggle> ``` **edit:** Here's a demo from [Codepen](http://codepen.io/anon/pen/doOdqj) and the code pasted here: ``` angular.module('mySuperApp', ['ionic']) .controller('MyCtrl',function($scope) { }); ``` ``` <html ng-app="mySuperApp"> <head> <meta charset="utf-8"> <title> Toggle button </title> <meta name="viewport" content="initial-scale=1, maximum-scale=1, user-scalable=no"> <link href="//code.ionicframework.com/nightly/css/ionic.css" rel="stylesheet"> <script src="//code.ionicframework.com/nightly/js/ionic.bundle.js"></script> </head> <body class="padding" ng-controller="MyCtrl"> <button class="button button-primary" ng-model="button" ng-click="button.clicked=!button.clicked" ng-class="button.clicked?'button-positive':'button-energized'"> Confirm </button> </body> </html> ```
React: using React component inside of dangerouslySetInnerHTML I have a question about using React. As you can see from the title, I'm wondering if it is possible to use a React component (that is created by `React.createClass`) inside of the `dangerouslySetInnerHTML` property. I have tried, but React just prints the code without transformation, like this: ``` const MySubComponent = React.createClass({ render() { return (<p>MySubComponent</p>); } }); ... let myStringHTML; myStringHTML += "<ul>"; myStringHTML += " <li>"; myStringHTML += " <MySubComponent />"; myStringHTML += " </li>"; myStringHTML += "</ul>"; const MyComponent = React.createClass({ render() { return ( <div dangerouslySetInnerHTML={{__html:myStringHTML}}></div> ); } }); ``` I expected ``` <ul> <li> <p>MySubComponent</p> </li> </ul> ``` but the code is just the same as the original string, which means React didn't transform `MySubComponent`. Is there a way to solve this problem? The above example is simple, but my actual code is quite complicated. Any help would be very much appreciated ;)
`dangerouslySetInnerHTML` expects a JS object with a `__html` property which should be valid HTML markup. Instead you are providing `<MySubComponent />` there and expecting it to render that component's HTML. React won't process `<MySubComponent />` there. `dangerouslySetInnerHTML`, as the name suggests, should be avoided. Moreover, what you are trying to accomplish here can easily be done with React directly. ``` const MySubComponent = React.createClass({ render() { return (<li><p>MySubComponent {this.props.index}</p></li>); } }); const MyComponent = React.createClass({ render() { let subComponentList = []; [1,2,3,4,5].forEach(function(i){ subComponentList.push(<MySubComponent key={i} index={i} />); }); return ( <ul>{subComponentList}</ul> ); } }); ReactDOM.render(<MyComponent />, mountNode); ```
Swift predicate that returns matches on a Core Data transformable string array I have a Core Data entity which has an attribute called `likedMovies` which is a String array `[String]`. This is configured as a Transformable attribute. I populate a list of IDs inside this attribute, so for example: User entity: `name = Duncan | likedMovies = [524, 558, 721]` `name = Saskia | likedMovies = [143, 558, 653]` `name = Marek | likedMovies = [655, 323, 112]` I would like to write a predicate which returns all `User` entities where the `likedMovies` array contains a given value. I have tried the predicate below but it doesn't work - for the id 558, instead of returning `[Duncan, Saskia]` I get no entities returned. `fetchRequest.predicate = NSPredicate(format: "%@ IN %@", "558", "likedMovies")` Thank you in advance for your help. [![enter image description here](https://i.stack.imgur.com/6rcaX.png)](https://i.stack.imgur.com/6rcaX.png)
There are two issues: first, your predicate format is wrong. You cannot use %@ as placeholder for attribute names; you must instead use %K: ``` fetchRequest.predicate = NSPredicate(format: "%@ IN %K", "558", "likedMovies") ``` But the second problem is that transformable attributes cannot be used in predicates as part of a fetch request (for a SQLite store). One option is to use the predicate to filter the User objects in memory (*after* fetching them). Another option is to represent the array of IDs as a string (with suitable separator), rather than an array of strings. You can then use the `CONTAINS` string operator to create the predicate. But adding and removing IDs to the `likedMovies` will become a rather unwieldy process of string manipulation. Another option is to replace the transformable attribute with a to-many relationship. I recommend the last of these.
How do I wrap a callback using JavaScript Overlay types (GWT)? In the [Display Object](http://easeljs.com/docs/DisplayObject.html) class, I have everything wrapped except the events. I can't figure out the pattern and really need an example. In JavaScript, you create a callback for the object like this: ``` displayObject.onPress = function(event) { $wnd.alert("object pressed"); } ``` I have wrapped the [Mouse Event](http://easeljs.com/docs/MouseEvent.html) parameter: ``` public class MouseEventImpl extends JavaScriptObject { protected MouseEventImpl() {} public static native MouseEventImpl create(String type, int stageX, int stageY, DisplayObjectImpl target, JavaScriptObject nativeEvent) /*-{ return new $wnd.MouseEvent(type, stageX, stageY, target, nativeEvent); }-*/; ...other methods excluded... } public class MouseEvent { private MouseEventImpl impl; public MouseEvent(String type, int stageX, int stageY, DisplayObject target, JavaScriptObject nativeEvent) { this.impl = MouseEventImpl.create(type, stageX, stageY, target.getOverlay(), nativeEvent); } ...other methods excluded... } ``` The display object uses the same overlay pattern. How am I able to write a callback in Java and pass it through to the JSO? Please provide an example if you can. :)
Pre-post edit: I wrote up this answer without actually trying to see *why* you were trying to do this thing, under the assumption that you were using some non-browser event, which is already wrapped up pretty nicely, and if you want more data from the `NativeEvent` instance, you can write JSNI methods in your own classes to get access to it, or further subclass `NativeEvent` to add more methods and `.cast()` to your class. Add the handler to a widget using the `Widget.addDomHandler` method and the appropriate `MouseEvent` subclass to get the type instance. --- In JavaScript, callbacks are just functions that will be invoked when something happens. Unless specifically specified where they are passed in, they will generally be called on the global context, not on a specific object instance. ``` var callback = function() { alert("callback called!"); }; // later, in something that consumes that callback: callback(); ``` To invoke a function on an instance (i.e. make it a method invocation), one can wrap that invocation in a function that doesn't need an instance: ``` var obj = {}; obj.name = "Me"; obj.whenSomethingHappens = function() { alert("my name is " + this.name); }; // wont work, so commented out: //var callback = obj.whenSomethingHappens; // instead we wrap in a function // this is making a closure (by closing over obj) var callback = function() { obj.whenSomethingHappens(); }; // later, in something that consumes that callback: callback(); ``` In Java, one cannot refer specifically to a method (without reflection), but only to object instances. The easiest way to build a simple callback is to implement an interface, and the code that takes the callback takes an instance of the interface, and invokes the defined method. GWT declares a `Command` interface for zero-arg functions, and a generic `Callback<T,F>` interface for cases that may pass or fail, with one generic arg for each option. 
Most event handlers in GWT just define one method, with specific data passed into that method. We need to use all of this knowledge to pass Java instances, with a function to call, into JavaScript, and make sure they are invoked on the right instance. This example is a function that takes a `Callback` instance and using JSNI wraps a call to it JS. ``` // a callback that has a string for either success or failure public native void addCallback(Callback<String, String> callback) /*-{ var callbackFunc = function() { // obviously the params could come from the function params callback.@com.google.gwt.core.client.Callback::onSuccess(Ljava/lang/String;)("success!"); }; doSomethingWith(callbackFunc);//something that takes (and presumably calls) the callback }-*/; ``` One last piece - to let GWT's error handling and scheduling work correctly, it is important to wrap the call back to java in $entry - that last line should really be ``` doSomething($entry(callbackFunc)); ```
ASP.NET MVC error : Object being referred to is not locked by any Client Whenever we try to update a user using an ASP.net MVC4 website, we get this error. Please help me find the cause. This was working previously without any issue. `ErrorCode<ERRCA0012>:SubStatus<ES0001>`:Object being referred to is not locked by any client. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: `Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0012>:SubStatus<ES0001>:`Object being referred to is not locked by any client. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: ``` [DataCacheException: ErrorCode<ERRCA0012>:SubStatus<ES0001>:Object being referred to is not locked by any client.] 
Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ErrStatus errStatus, Guid trackingId, Exception responseException, Byte[][] payload, EndpointID destination) +551 Microsoft.ApplicationServer.Caching.SocketClientProtocol.ExecuteApi(IVelocityRequestPacket request, IMonitoringListener listener) +287 Microsoft.ApplicationServer.Caching.SocketClientProtocol.PutAndUnlock(String key, Object value, DataCacheLockHandle lockHandle, TimeSpan timeout, DataCacheTag[] tags, String region, IMonitoringListener listener) +360 Microsoft.ApplicationServer.Caching.DataCache.InternalPutAndUnlock(String key, Object value, DataCacheLockHandle lockHandle, TimeSpan timeout, DataCacheTag[] tags, String region, IMonitoringListener listener) +216 Microsoft.ApplicationServer.Caching.<>c__DisplayClass9d.<PutAndUnlock>b__9c() +160 Microsoft.ApplicationServer.Caching.DataCache.PutAndUnlock(String key, Object value, DataCacheLockHandle lockHandle, TimeSpan timeout) +276 Microsoft.Web.DistributedCache.<>c__DisplayClass1c.<PutAndUnlock>b__1b() +52 Microsoft.Web.DistributedCache.<>c__DisplayClass31`1.<PerformCacheOperation>b__30() +19 Microsoft.Web.DistributedCache.DataCacheRetryWrapper.PerformCacheOperation(Action action) +208 Microsoft.Web.DistributedCache.DataCacheForwarderBase.PerformCacheOperation(Func`1 func) +167 Microsoft.Web.DistributedCache.DataCacheForwarderBase.PutAndUnlock(String key, Object value, DataCacheLockHandle lockHandle, TimeSpan timeout) +162 System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs) +929 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +270 ```
Please check if your requests are not timing out. We also faced exact same issue in our project and we found this blog post really helpful in finding why we were running into this issue: <http://www.anujvarma.com/session-storage-app-fabric-cache/>. One thing we initially did was increased the request timeout to 5 minutes (300 seconds) in web.config as mentioned in the blog post. However we only did it for the action where we knew we would encounter the timeout issue. To change the timeout for all actions, you could do something like below which was mentioned in the post: ``` <system.web> <httpRuntime executionTimeout = "300"/> </system.web> ``` Or you could do is increase the timeout for specific action: ``` <location path="Controller/Action"> <system.web> <httpRuntime executionTimeout="300"/> </system.web> </location> ``` IMHO, 2nd option is more preferable over the 1st one because you're targeting specific actions instead of your entire site. However as mentioned in my comments as well, this is not really foolproof. Better alternative would be to optimize the code first so that your requests are not taking long time. In our case, we had a bad retry policy for storage operations and we fixed the issue by modifying that retry policy.
Where is the fork() on the fork bomb :(){ :|: & };:? **Warning: Running this command in most shells will result in a broken system that will need a forced shutdown to fix** I understand the recursive function `:(){ :|: & };:` and what it does. But I don't know where is the fork system call. I'm not sure, but I suspect in the pipe `|`.
As a result of the pipe in `x | y`, a subshell is created to contain the pipeline as part of the foreground process group. This continues to create subshells (via `fork()`) indefinitely, thus creating a fork bomb. ``` $ for (( i=0; i<3; i++ )); do > echo "$BASHPID" > done 16907 16907 16907 $ for (( i=0; i<3; i++ )); do > echo "$BASHPID" | cat > done 17195 17197 17199 ``` The fork does not actually occur until the code is run, however, which is the final invocation of `:` in your code. To disassemble how the fork bomb works: - `:()` - define a new function called `:` - `{ :|: & }` - a function definition that recursively pipes the calling function into another instance of the calling function in the background - `:` - call the fork bomb function This tends to not be too memory intensive, but it will suck up PIDs and consume CPU cycles.
How do I maintain image size when using a colorbar? I'm trying to plot two versions of the same image side by side. When I plot the figure without the color bar for one of the images, it seems to have the right sizes: ![without the color bar](https://i.stack.imgur.com/xfzWT.png) But when I add a color bar to the image in the left, it scales the image down somehow: ![scales the image down](https://i.stack.imgur.com/IdhF8.png) Here's the code where I have commented out the lines for color bar: ``` def plot_amaps(self, anisotropy_map, parallel): timepoint = self.t * self.timestep amap_directory = self.directory + "amaps/" fig = plt.figure(facecolor='w', dpi=180) ax1 = fig.add_subplot(121) fig.subplots_adjust(top=0.85) ax1.grid(False) txt = "Mean(r) = %.3f SD(r)= %.3f t=%dmin" txt = txt %(self.mean, self.sd, timepoint) ax1.set_title(txt) amap = ax1.imshow(anisotropy_map, cmap="jet", clim = self.clim) #divider = make_axes_locatable(ax1) #cax = divider.append_axes('right', size='5%', pad=0.05) #fig.colorbar(amap, cax=cax) ax2 = fig.add_subplot(122) ax2.set_title("Intensity image", fontsize=10) ax2.imshow(parallel, cmap="gray") ax2.grid(False) ax1.axis('off') ax2.axis('off') if self.save is True: self.make_plot_dir(amap_directory) name = self.cell + "_time_"+str(timepoint) plt.savefig(amap_directory+name+self.saveformat, bbox_inches='tight') else: plt.show() plt.close('all') ``` What am I doing wrong, and how can I make sure that the two images are of the same size?
When using

```
divider = make_axes_locatable(ax1)
cax = divider.append_axes('right', size='5%', pad=0.05)
```

you explicitly ask for a new axes that takes 5% of the width away from `ax1`, which is why the left image shrinks. So if you don't want that, you should not create the axes for the colorbar using `make_axes_locatable`. Instead you can simply create an axes at any point on the figure using

```
cax = fig.add_axes([left, bottom, width, height])
```

where `left, bottom, width, height` are in figure units ranging from 0 to 1. Then add the colorbar to it.

If you want the colorbar in the middle, you could first make some space using

```
plt.subplots_adjust(wspace=0.3)
```

Of course you would have to experiment a bit with the numbers.
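Putting this together for the question's two-image layout (a minimal sketch with random stand-in data; the `[left, bottom, width, height]` numbers are guesses you would tune by eye):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(facecolor="w", dpi=180)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
plt.subplots_adjust(wspace=0.3)  # leave some room between the images

amap = ax1.imshow(np.random.rand(20, 20), cmap="jet")
ax2.imshow(np.random.rand(20, 20), cmap="gray")
ax1.axis("off")
ax2.axis("off")

# manually placed colorbar axes: it does not steal width from ax1,
# so both images keep the same size
cax = fig.add_axes([0.47, 0.25, 0.02, 0.5])
fig.colorbar(amap, cax=cax)

fig.savefig("amap.png", bbox_inches="tight")
```

Because `cax` is created independently of the subplots, `ax1` and `ax2` keep identical geometry.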
Format entire column of Excel table as String I'm working on an Excel add-in using the JavaScript APIs to build add-ins in Excel Office 365. For Excel tables, how can I format an entire column to hold strings only by default? I need the numbers to appear as 012 instead of 12 in all the cells of a particular column.
You can set the number formatting to text first (set to `@`, which is Excel's format for saying that something is a text), and then set the values. Here's a Range example (table would be very similar -- just get the range of the column) ``` try { await Excel.run(async (context) => { const sheet = context.workbook.worksheets.getActiveWorksheet(); const range = sheet.getRange("A1:A3"); range.numberFormat = <any>"@"; range.values = [ ["012"], ["013"], ["014"] ]; await context.sync(); }); } catch (error) { OfficeHelpers.Utilities.log(error); } ``` The above example is in TypeScript -- but there too, it should be trivial to convert to plain ES5 JS, if you prefer. You can try this snippet live in literally five clicks in the new Script Lab (<https://aka.ms/getscriptlab>). Simply install the Script Lab add-in (free), then choose "Import" in the navigation menu, and use the following GIST URL: <https://gist.github.com/Zlatkovsky/741c3313825df988ac9840289c012440>
How can I test PHP skills in a interview? My company needs to hire a PHP developer, but nobody has PHP knowledge in my company and we find difficult to test for PHP skills. If it were a C/Java developer I would ask him to write a quick implementation of the Game of Life, but PHP is a completely different language. I saw this test with interest: <http://vladalexa.com/scripts/php/test/test_php_skill.html> Anyone else has more suggestions?
**Code** - Ask the candidate to write code - Ask the candidate to read code If you do ask the candidate to write code make sure that: - The code is non trivial but small - You allow access to the manual and the internet If you do ask the candidate to read code make sure that: - The code has some trivial errors - The code has some non trivial errors - The code works fine, but it can be easily optimized You can use three or more different pieces of code, start from the simpler one and only advance to the next if you see that the candidate copes with ease. Throw in some recursion, to spice things up. **Resources** Ask for a detailed list of PHP resources the candidate uses. Books, blogs, forums, magazines, etc. That's how my current employers found out about [StackOverflow](https://stackoverflow.com/). If the candidate mentions [StackOverflow](https://stackoverflow.com/) or Programmers, you should NOT ask or try to find out their username. If they wanted to advertise their reputation they would have included a [Careers 2.0](http://careers.stackoverflow.com/) link on their resume. **Frameworks** Every PHP developer should know of the most popular PHP frameworks: - [Laravel](https://laravel.com/) - [Zend Framework](http://framework.zend.com/) - [CodeIgniter](http://codeigniter.com/) - [Symfony](http://www.symfony-project.org/) - [CakePHP](http://cakephp.org/) - [Yii](http://www.yiiframework.com/) and be fluent in at least one of them. You can have a few code samples ready for each one and ask the candidate to read and explain them, after they tell you which one they are more familiar with. **Debugging & Profiling** I've always felt that PHP developers are lacking debugging and profiling skills (perhaps only the PHP developers I've worked with). If during the discussion you find out that the candidate actively uses [xdebug](http://xdebug.org/), don't bother with the rest of the interview and just hire them. ;) **Input sanitization** This is important. 
You can start with a discussion on why it's important and then ask for the most common methods to achieve it. This [discussion](https://stackoverflow.com/questions/129677/whats-the-best-method-for-sanitizing-user-input-with-php) will help you decide what to ask. Some hints:

- [mysqli\_real\_escape\_string](http://www.php.net/manual/en/mysqli.real-escape-string.php) is good
- [magic quotes](http://php.net/manual/en/security.magicquotes.php) are bad

**PHP snafus**

You can find a lot of PHP snafus in this [excellent discussion](https://stackoverflow.com/questions/1995113/strangest-language-feature). If you are interviewing for a senior position you should definitely ask about some of those. Some examples:

PHP's handling of numeric values in strings:

```
"01a4" != "001a4" // true
"01e4" == "001e4" // also true
```

[Valid PHP code](http://www.jansch.nl/2007/03/09/systemoutprint-in-php/):

```
System.out.print("hello");
```

In PHP, a string is as good as a function pointer:

```
$x = "foo";
function foo(){ echo "wtf"; }
$x(); # "wtf"
```

**Unit testing**

Need I say more?

**Conclusion**

A good PHP developer should combine a variety of skills & talents:

- A good understanding of HTTP
- A good understanding of Apache configuration (even if you use a different web server at your company)
- At least a basic understanding of JavaScript
- A great understanding of HTML / CSS

The list goes on and on. Make sure you tailor the interview to the specific needs of the job opening; you don't want to hire just a good developer, but a good developer who is great at what you immediately need them to do.
How can I read http request body in netcore 3 more than once? I have a netcore 3 API application that logs the incoming request and then passes it on to the controller action. My code looks like this: ``` public RequestLoggingHandler(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, RequestLoggingRequirement requirement) { try { var httpContext = _httpContextAccessor.HttpContext; var request = httpContext.Request; _repository = (ICleanupOrderRepository)httpContext.RequestServices.GetService(typeof(ICleanupOrderRepository)); _cache = (IMemoryCache)httpContext.RequestServices.GetService(typeof(IMemoryCache)); httpContext.Items["requestId"] = SaveRequest(request); context.Succeed(requirement); return Task.CompletedTask; } catch (Exception ex) { throw ex; } } private int SaveRequest(HttpRequest request) { try { // Allows using several time the stream in ASP.Net Core var buffer = new byte[Convert.ToInt32(request.ContentLength)]; request.Body.ReadAsync(buffer, 0, buffer.Length); var requestContent = Encoding.UTF8.GetString(buffer); var requestId = _repository.SaveRawHandlerRequest($"{request.Scheme} {request.Host}{request.Path} {request.QueryString} {requestContent}"); return requestId; } catch (Exception ex) { throw ex; } } ``` However when this request is passed on to the Controller the request body is null. Previously in Core2.x you could do ``` request.EnableRewind(); ``` My understanding is this is now replaced with ``` httpContext.Request.EnableBuffering(); ``` However, even with ``` httpContext.Request.EnableBuffering(); ``` the request body is still null once the request body is read. How can I get around this ?
It is a known [issue](https://github.com/aspnet/AspNetCore/issues/14396) on GitHub. A temporary workaround is to pull out the body right after the call to `EnableBuffering`, then rewind the stream to 0 without disposing it:

```
public class RequestLoggingHandler : AuthorizationHandler<RequestLoggingRequirement>
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public RequestLoggingHandler(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    protected override async Task HandleRequirementAsync(AuthorizationHandlerContext context, RequestLoggingRequirement requirement)
    {
        var httpContext = _httpContextAccessor.HttpContext;
        var request = httpContext.Request;

        request.EnableBuffering();

        httpContext.Items["requestId"] = await SaveRequestAsync(request);
        context.Succeed(requirement);
    }

    private async Task<int> SaveRequestAsync(HttpRequest request)
    {
        // Read the buffered body; the read is awaited so the buffer
        // is actually filled before it is decoded
        var buffer = new byte[Convert.ToInt32(request.ContentLength)];
        await request.Body.ReadAsync(buffer, 0, buffer.Length);
        var requestContent = Encoding.UTF8.GetString(buffer);

        var requestId = _repository.SaveRawHandlerRequest($"{request.Scheme} {request.Host}{request.Path} {request.QueryString} {requestContent}");

        request.Body.Position = 0; // rewind the stream to 0 so later readers (model binding) see the full body
        return requestId;
    }
}
```
PHP: Iterating through array? - I want a function that searches through my array, and returns all the children to a specific node. What is the most appropriate way to do this? Will recursion be necessary in this case? I have previously constructed a few quite complex functions that iterates with or without the help of recursion through multi-dimensional arrays and re-arranging them, but this problem makes me completely stuck and I can't just get my head around it... Here's my array: ``` Array ( [1] => Array ( [id] => 1 [parent] => 0 ) [2] => Array ( [id] => 2 [parent] => 1 ) [3] => Array ( [id] => 3 [parent] => 2 ) ) ``` UPDATE: The output which I want to get. Sorry for the bad example, but I'll blame it on lack of knowledge on how to format the stuff I need to do :) ``` function getAllChildren($id) { // Psuedocode return $array; } getAllChildren(1); // Outputs the following: Array ( [2] => Array ( [id] => 2 [parent] => 1 ) [3] => Array ( [id] => 3 [parent] => 2 ) ) ```
``` $nodes = array( 1 => array ( 'id' => 1, 'parent' => 0 ), 2 => array ( 'id' => 2, 'parent' => 1 ), 3 => array ( 'id' => 3, 'parent' => 2 ) ); function searchItem($needle,$haystack) { $nodes = array(); foreach ($haystack as $key => $item) { if ($item['parent'] == $needle) { $nodes[$key] = $item; $nodes = $nodes + searchItem($item['id'],$haystack); } } return $nodes; } $result = searchItem('1',$nodes); echo '<pre>'; var_dump($result); echo '</pre>'; ``` Non-recursive version of the searchItem() function: ``` function searchItem($needle,$haystack) { $nodes = array(); foreach ($haystack as $key => $item) { if (($item['parent'] == $needle) || array_key_exists($item['parent'],$nodes)) { $nodes[$key] = $item; } } return $nodes; } ``` (assumes ordering of the parents/children, so a child node isn't included in the array unless the parent is already there)
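For readers more comfortable outside PHP, the same recursive traversal sketched in Python (hypothetical data mirroring the question's array):

```python
# hypothetical data mirroring the question's PHP array
nodes = {
    1: {"id": 1, "parent": 0},
    2: {"id": 2, "parent": 1},
    3: {"id": 3, "parent": 2},
}

def search_item(needle, haystack):
    """Recursively collect every node whose parent chain leads back to `needle`."""
    found = {}
    for key, item in haystack.items():
        if item["parent"] == needle:
            found[key] = item
            # pick up this node's own children as well
            found.update(search_item(item["id"], haystack))
    return found

print(search_item(1, nodes))
```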
Class not binding to ui-view before ng-enter To achieve different animations between states - I'm applying a class to the ui-view depending on the current state: ``` <div ng-class="currentClass.current.name" ui-view></div> ``` This works, however - **initially** when the view enters, it appears ng-enter is applied before the class in `currentClass.current.name` has been bound, so my animation gets ignored: ``` .slide-right.ng-enter { z-index: 100; left: -100%; background-color: green; } ``` I can apply animations to `ng-leave` - because the `currentClass.current.name` has been applied prior to this. See [plunker code for example](http://plnkr.co/edit/3QjBzRu5VjOzK7czn71e?p=preview). Any ideas?
> > CSS classes managed by ng-class conflict with ALL ui-view animations. > > > So, the issue is in this line ``` <section ui-view ng-class="stateClass"></section> ``` Unfortunately, ng-class is not compatible with ui-view. Here's the complete [issue](https://github.com/angular-ui/ui-router/issues/866). So you can use `class="{{stateClass}}"` with ui-view. ``` <section ui-view class="{{stateClass}}"></section> ``` Here is working [plunker](http://plnkr.co/edit/ymSGZdHZXSvoXTTDmFnr?p=preview) Cheers!
Is an app's core data database deleted when an app is updated from a App Store? I have an app on the App Store which is now onto its second version. The app uses Core Data to store information that I do not want to be lost when an upgrade of the app is installed. My question is if the user has version 1.0 installed on their iPad and has data stored in their core database, does this database get deleted when the version 1.1 update is downloaded and installed?
That depends entirely on you. When you set up a Core Data stack, you can point your NSPersistentStoreCoordinator at a particular file anywhere in your app's writable folders that you want. Where you place that file determines whether it gets migrated during an app update. A common choice is to stick your database file in the user's Documents directory, which will cause iOS to copy it along when installing an update to your application. Then, on launch, you're responsible for dealing with that database as you see fit (updating data in it, migrating your schema, etc.). Placing the file elsewhere - inside a temporary directory, for example - may cause it to be lost during an update. See the [File System Programming Guide](http://developer.apple.com/library/ios/#documentation/FileManagement/Conceptual/FileSystemProgrammingGUide/AccessingFilesandDirectories/AccessingFilesandDirectories.html#//apple_ref/doc/uid/TP40010672-CH3-SW3) and [Core Data Model Versioning and Data Migration](https://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/CoreDataVersioning/Articles/Introduction.html) for more information.
Firebase: Should I add GoogleService-Info.plist to .gitignore? I'm using Firebase for an iOS project that I want to open source. Should I add GoogleService-Info.plist to .gitignore before I share the project on GitHub? I know it contains my API key, client ID, etc., which might not be safe to expose.
While it's not the end of the world if you commit `GoogleService-Info.plist` (similarly, on Android, `google-services.json`), you're better off leaving it out for one big reason: **You're making it clear that others who build your code that they should be setting up their own Firebase project to host its configuration and data (because your project simply won't build with that file missing).** Furthermore, if your project has world-writable resources in it, you can expect that people who get a hold of your app and config so easily will inadvertently start writing data into it, which is probably not what you want. Open source projects that require a Firebase setup should ideally have it stated in the readme that anyone who wants to work with this app needs to go through the process of setting up Firebase. The data in these config files is not exactly private - they can be extracted fairly easily from an IPA or Android APK. It's just probably not in your best interest to share them as part of your app's code.
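If you do keep these files out of version control, the `.gitignore` entries are just the two default file names (paths may differ if you've moved the files inside your project):

```
# Firebase configuration (each contributor supplies their own)
GoogleService-Info.plist
google-services.json
```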
Set button on next page to focus like the button clicked by the user I was wondering if there's someway to set focus to the same button or div clicked in the previous page, on the new page opened by the user. Currently I have a header that contains a top navigation bar that is the same on every page, but what I would like to do is when the user clicks one of the navigation button and goes to the next page, that navigation button stays highlighted (or on focus) in the next page. Is there a way to accomplish this? Here is the current code for the navigation bar: ``` div.scrollmenu { background-color: #333; overflow: auto; white-space: nowrap; } div.scrollmenu a { display: inline-block; color: white; text-align: center; padding: 14px; text-decoration: none; } div.scrollmenu a:hover { background-color: #777; } div.scrollmenu a:focus { background-color: #777; } ``` ``` <div class="scrollmenu"> <a tabindex="1" href="#home">Home</a> <a tabindex="2" href="#news">News</a> <a tabindex="3" href="#contact">Contact</a> <a tabindex="4" href="#about">About</a> <a tabindex="5" href="#support">Support</a> <a tabindex="6" href="#blog">Blog</a> <a tabindex="7" href="#tools">Tools</a> <a tabindex="8" href="#base">Base</a> <a tabindex="9" href="#custom">Custom</a> <a tabindex="10" href="#more">More</a> <a tabindex="11" href="#logo">Logo</a> <a tabindex="12" href="#friends">Friends</a> <a tabindex="13" href="#partners">Partners</a> <a tabindex="14" href="#people">People</a> <a tabindex="15" href="#work">Work</a> </div> ``` The issue with this is that the page refreshes the header navigation when the user is redirected to the next page, so if the user clicks on one of the menu tabs, its hightlighted on the current page but then the user is navigated to the next page and its no longer highlighted or focused. Basically I want it so when the user clicks the navigation tab, it stays highlighted on the current page and the next page they are navigated to. 
The use of JavaScript, jQuery, or even just regular HTML and CSS is fine with me. Any help is always appreciated.
The best way is to create an `active` class and, whenever the hash changes, update which links carry it. Check out the example below:

```
// Code goes here
$(function(){
  // watch for changes in the url and update the nav accordingly
  $(window).hashchange(function(){
    updateNav(location.hash);
  });

  // highlight the right link on the initial page load as well
  updateNav(location.hash);

  function updateNav(hash){
    $("[data-active]").each(function (index, item){
      // get the active class name
      var activeClass = $(this).attr("data-active");
      // remove it from each link
      $(this).removeClass(activeClass);
      // get the href
      var href = $(this).attr("href");
      // compare and re-add the class if the hash matches the href
      if(href == hash){
        $(this).addClass(activeClass);
      }
    });
  }
});
```

```
.active {
  color:red;
}
```

```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<nav>
  <a data-active="active" href="#/home">Home</a>
  <a data-active="active" href="#/about">About</a>
  <a data-active="active" href="#/page2">Page 2</a>
</nav>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-hashchange/1.3/jquery.ba-hashchange.js"></script>
<script src="script.js"></script>
```
Spring boot + Thymeleaf + webjars Bootstrap 4 I'm trying to add bootstrap to my Spring Boot project using Thymeleaf. index.html ``` <!DOCTYPE html > <html lang="en" xmlns:th="http://www.thymeleaf.org"> <head th:replace="fragments/header :: header"> </head> <body> <div th:replace="@{'views/' + ${content}} :: ${content}"></div> <div lang="en" th:replace="fragments/footer :: footer"> </div> </body> </html> ``` footer.html ``` <div xmlns:th="http://www.thymeleaf.org" th:fragment="footer" > <script th:src="@{/webjars/jquery/3.3.1/jquery.min.js}"></script> <script th:src="@{/webjars/popper.js/1.12.9-1/umd/popper.min.js}"></script> <script th:src="@{/webjars/bootstrap/4.0.0-1/js/bootstrap.min.js}" ></script> </div> ``` header.html ``` <head xmlns:th="http://www.thymeleaf.org" th:fragment="header" > <meta charset="UTF-8"> <title>Модуль планирования ПД</title> <link th:rel="stylesheet" th:href="@{webjars/bootstrap/4.0.0-1/css/bootstrap.min.css} "/> </head> ``` MvcConfig.java ``` @Configuration public class MvcConfig implements WebMvcConfigurer { @Override public void addResourceHandlers(ResourceHandlerRegistry registry) { registry .addResourceHandler("/webjars/**") .addResourceLocations("/webjars/") .resourceChain(false); } } ``` Styles doesn't applied to page and throwing error: [![Syntax error: expected expression, got '<'](https://i.stack.imgur.com/bPXzR.png)](https://i.stack.imgur.com/bPXzR.png) in jquery, bootstrap, popper. How I can solve this problem? Thanks.
To make things easier I used webjars-locator:

```
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>bootstrap</artifactId>
    <version>4.0.0-2</version>
</dependency>
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>webjars-locator</artifactId>
    <version>0.30</version>
</dependency>
```

With the locator on the classpath, the version numbers can be dropped from the URLs. At the end of my page I put:

```
<script th:src="@{/webjars/jquery/jquery.min.js}"></script>
<script th:src="@{/webjars/popper.js/umd/popper.min.js}"></script>
<script th:src="@{/webjars/bootstrap/js/bootstrap.min.js}"></script>
```

You also need to set up this `WebMvcConfigurer` class:

```
@Configuration
public class SpringMVCConfiguration implements WebMvcConfigurer {

    private static final String[] CLASSPATH_RESOURCE_LOCATIONS = {
            "classpath:/META-INF/resources/", "classpath:/resources/",
            "classpath:/static/", "classpath:/public/"};

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/**")
                .addResourceLocations(CLASSPATH_RESOURCE_LOCATIONS);

        registry.addResourceHandler("/webjars/**")
                .addResourceLocations("/webjars/").resourceChain(false);
    }
}
```

No other Spring Boot setup was needed.
How to define multiple functions changing internal values? I think my question should be more clearly understood by this short code: ``` fs = [] for k in range(0, 10): def f(x): return x + 2*k fs.append(f) fs[0](1) # expecting 1 but 19(=1+2*9) ``` How do I instead make `f` return what I want? Please note that `f` cannot receive `k` as an argument. (What I'm actually trying to do is prepare [multiple constraint functions that are fed to scipy.optimize.minimize](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#constrained-minimization-of-multivariate-scalar-functions-minimize))
The *typical* way to fix this is to do something like: ``` def f(x, k=k): return x + 2*k ``` For the most part, this shouldn't affect your "f cannot receive k as an argument" condition because it isn't a required argument. --- A related, but different approach would be to define `f` out of the loop. ``` def f(k, x): return x + 2*k ``` Then in the loop use [functools.partial](https://docs.python.org/2/library/functools.html#functools.partial). ``` import functools fs = [] for k in range(10): fs.append(functools.partial(f, k)) ``` In this approach, your function won't accept a value for `k` even if you try to pass one.
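Applied to the loop from the question, both fixes give the expected per-`k` behavior (a quick check):

```python
import functools

# fix 1: bind the current k as a default argument
fs = []
for k in range(10):
    def f(x, k=k):
        return x + 2 * k
    fs.append(f)

print(fs[0](1))  # 1, not 19 -- each closure kept its own k

# fix 2: partial application of a two-argument function
def g(k, x):
    return x + 2 * k

gs = [functools.partial(g, k) for k in range(10)]
print(gs[9](1))  # 19
```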
Template operator linker error I have a linker error I've reduced to a simple example. The build output is: > > debug/main.o: In function `main': > > C:\Users\Dani\Documents\Projects\Test1/main.cpp:5: > undefined reference to`log& > log::operator<< (char > const (&) [6])' > > collect2: ld returned > 1 exit status > > > It looks like the linker ignores the definition in log.cpp. I also cant put the definition in log.h because I include the file alot of times and it complains about redefinitions. main.cpp: ``` #include "log.h" int main() { log() << "hello"; return 0; } ``` log.h: ``` #ifndef LOG_H #define LOG_H class log { public: log(); template<typename T> log &operator <<(T &t); }; #endif // LOG_H ``` log.cpp: ``` #include "log.h" #include <iostream> log::log() { } template<typename T> log &log::operator <<(T &t) { std::cout << t << std::endl; return *this; } ```
I guess this is your first use of templates, so I'll try to be didactic. You can think of template as some kind of type-aware macros. This type awareness is of course not to be neglected, since it grants type safety for free. This does mean however than template functions or classes are NOT functions or classes: they are model that will be used to generate functions or classes. For example: ``` template <class T> void foo(T t) { std::cout << t << "\n"; } ``` This is a template function, it allows me to define something once and apply it to many different types. ``` int i; foo(i); // [1] ``` This causes the instantiation of the `template`. Basically, it means that a function is created according to the model, but replacing all occurrences of `T` by `int`. ``` double d; foo(d); // Another instantiation, this time with `T` replaced by `double` foo(d); // foo<double>() already exists, it's reused ``` Now, this idea of model is very important. If the definition of the model is not present in the header file, then the compiler does not know how to define the method. So, you have 2 solutions here: 1. Define it in the header 2. Explicitly instantiate it The 2 have different uses. (1) is the classic way. It's easier because you don't restrict the user to a subset of the types. However it does mean that the user depends on the implementation (change it, she recompiles, and you need to pull the dependencies in the header) (2) is less often used. For full compliance with the standard it requires: - That you declare the specialization in the header (`template <> void foo<int>();`) so as to let the compiler know it exists - That you fully define it in one of the translation units linked The main advantage is that, like classic functions, you isolate the client from the implementation. `gcc` is quite lenient because you can forgo the declaration and it should work. I should also note that it is possible to define a method twice, with different implementations. 
This is of course an error, as it is in direct violation of the ODR: One Definition Rule. However most linkers don't report it because it's quite common to have one implementation per object, and they just pick the first and assume the others will be equivalent (it's a special rule for templates). So if you do want to use explicit instantiation, take great care to only define it once.
Java class not recognized within package My friend and I are using GitHub to collaborate on a project, and I just downloaded a package he had. He wrote it in NetBeans and I'm using it in Eclipse. Four of the classes in the package have the regular icon, a white page with a blue J. But three others have a white page, but there's an outline of a blue J instead of a filled J. The four regular classes all expand into class and then method/property trees, but the three odd classes don't expand at all in the Package Explorer. When I try to reference one of the odd classes in a regular one, i.e. ``` List<Reminder> list = new ArrayList<Reminder>(); ``` It puts a red underline under the class `Reminder` and when I hover over it with my cursor, it tells me to add an import statement, but when I click on where it says that it doesn't add the import statement. When I try to type in the import statement myself, i.e. ``` import MobiTech.PlaceSaver.Reminder; ``` It says the import can not be resolved. The syntax used for declaring the class seems to be correct: ``` public class Reminder { public Location location; public String message; //Reminder radius in meters double radius = 1.0; public Reminder() { } public Reminder(Location l, String m) { message = m; location = l; } public Reminder(Location l, String m, int r) { message = m; location = l; radius = r; } ``` I don't see what's going on, any ideas?
You should take a look at [this](http://help.eclipse.org/indigo/index.jsp?topic=/org.eclipse.jdt.doc.user/reference/ref-icons.htm). It seems the "outline of the blue J" is the second one on that list, which means that Java file is not on a build path. Follow [this](http://www.informit.com/articles/article.aspx?p=367962) guide to add them to your project's build path. Edit: or [this](http://www.stanford.edu/class/cs108/JavaTools/eclipse-guide/) one, that one's nice and simple. Go down to "2. Adding existing files to the project". Edit 2: As the OP said in the comment below, to add the existing code to the project, right click on it (the package or directory) in the package explorer, then select "Build Path -> Include".
A Java bounds-checking optimization example I have read that some of the JVMs out there can optimize code execution by removing bounds checking. What I am trying to figure out is what coding technique will work better. In method **example1** below would the JVM ever figure it out and eliminate the bounds checking of the **source[index]** reference? Is **example2** the better code practice? It would seem so, but in some algorithms inside a loop the index being out of bounds is a normal condition. So you don't want to be generating tons of Exception objects inside that loop. ``` public void example1(int [] source, int index) { if (index >= 0 && index < source.length) System.out.println("Value is " + source[index]); else System.out.println("Out of range: " + index); } public void example2(int [] source, int index) { try { System.out.println("Value is " + source[index]); } catch (IndexOutOfBoundsException exp) { System.out.println("Out of range: " + index); } } ``` These code fragments are only representational. I am aware that in these examples the bounds-checking will hardly matter to performance. However I am working on an embedded protocol application where the redundant bounds checking will add up.
To your first question, in example1 the bounds check can *theoretically* be eliminated. I'd expect the best modern JIT compilers to do this (e.g. perhaps via common sub-expression elimination in the bounds check when source[index] is expanded). As usual this will be implementation dependent so you can't rely on it. OTOH even if the bounds check isn't eliminated the difference will be trivial - you're hitting the already cached memory location for source.length and doing a couple of integer compares so the overhead is tiny. example2 is not good practice - you are hitting an exception but then catching it and continuing as if nothing happened. Unless you are watching stdout closely you might completely miss the fact that there is a bug in your code. There are basically two common "good" possibilities depending on what you consider to be a valid input for "index": 1. An **out-of-bounds index value is expected** and is considered valid input. In which case you should test and handle it explicitly as in example1. You shouldn't need to throw any sort of exception in this case. 2. An **out-of-bounds index is unexpected** (and is therefore a bug in the calling code). Your code should raise an exception here. If you like you can catch and re-throw the exception with your own message but you could also just let the IndexOutOfBounds exception propagate. Don't worry about the performance impact of this exception handling - you have just discovered a bug and therefore you want the program to fail as quickly and "loudly" as it can.....
unclear about foldl type definition ``` foldl :: (a -> b -> a) -> a -> [b] -> a foldl step zero (x:xs) = foldl step (step zero x) xs foldl _ zero [] = zero ``` I don't quite understand why does (a -> b -> **a**) return **a**, also (a -> b -> a) -> a -> [b] -> **a** return **a**. I would think it should be like: (a -> b -> **c**) -> a -> [b] -> **c**. Can someone explain that to me based on the example below. Thanks! ``` foldl (+) 0 (1:2:3:[]) foldl (+) (0 + 1) (2:3:[]) foldl (+) ((0 + 1) + 2) (3:[]) foldl (+) (((0 + 1) + 2) + 3) ([]) foldl (+) (((0 + 1) + 2) + 3) ```
`a` represents the type of the accumulator value, and `b` represents the type of each element in the input. `(a -> b -> a)` is a function that takes an accumulator value, some item in the list, and returns a new accumulator value that can be passed onto the next step. The type of the initial value must be `a` so that the first invocation of the function can receive an accumulator value. The accumulator function must take an `a` and return an `a` so that an accumulator value can be passed to each step of the fold. The final value of the fold must necessarily be an `a`, because that is the type of the accumulator that will be returned by the final call the fold function. `(a -> b -> c) -> a -> [b] -> c` cannot represent a fold, because the folding function does not take a `c`. The input and output of the folding function *must* be the same type so the accumulator can be passed to the next fold step. Let's see an example of what would happen if the fold function returned a `c`. ``` f :: Integer -> Integer -> Bool -- this is a valid (a -> b -> c) f acc n = (acc + n) > 0 ``` Pretend we're using a dynamic language and we try to fold with `f`. What happens at runtime? ``` foldl f 0 [1] ~ (0 + 1) > 0 == True :: Bool foldl f 0 [1, 2] ~ ((0 + 1) > 0) + 2) > 0 == error - no instance of (+) for Bool \_________/ \ | \ Bool + Integer ``` You can see that you can't make a fold work if you try to accumulate with incompatible types.
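The same constraint shows up in Python's `functools.reduce`, which is a left fold: the function's first argument and its return value must both play the accumulator role (a loose analogue of `(a -> b -> a)`, without Haskell's static checking):

```python
from functools import reduce

# (a -> b -> a): int accumulator in, int accumulator out -- folds fine
total = reduce(lambda acc, n: acc + n, [1, 2, 3], 0)
print(total)  # 6

# an (a -> b -> c)-style function: returns a str, not an int accumulator
def bad(acc, n):
    return str(acc + n)

print(reduce(bad, [1], 0))  # prints 1 (a string) -- survives only because there was one step
try:
    reduce(bad, [1, 2], 0)  # second step tries '1' + 2
except TypeError:
    print("accumulator type changed mid-fold")
```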
Python pandas cumsum() reset after hitting max

I have a pandas DataFrame with timedeltas, plus a cumulative sum of those deltas in a separate column, expressed in milliseconds. An example is provided below:

```
Transaction_ID  Time          TimeDelta     CumSum[ms]
1               00:00:04.500  00:00:00.000  000
2               00:00:04.600  00:00:00.100  100
3               00:00:04.762  00:00:00.162  262
4               00:00:05.543  00:00:00.781  1043
5               00:00:09.567  00:00:04.024  5067
6               00:00:10.654  00:00:01.087  6154
7               00:00:14.300  00:00:03.646  9800
8               00:00:14.532  00:00:00.232  10032
9               00:00:16.500  00:00:01.968  12000
10              00:00:17.543  00:00:01.043  13043
```

I would like to be able to provide a maximum value for CumSum[ms] after which the cumulative sum would start over again at 0. For example, if the maximum value was 3000 in the above example, the results would look like so:

```
Transaction_ID  Time          TimeDelta     CumSum[ms]
1               00:00:04.500  00:00:00.000  000
2               00:00:04.600  00:00:00.100  100
3               00:00:04.762  00:00:00.162  262
4               00:00:05.543  00:00:00.781  1043
5               00:00:09.567  00:00:04.024  0
6               00:00:10.654  00:00:01.087  1087
7               00:00:14.300  00:00:03.646  0
8               00:00:14.532  00:00:00.232  232
9               00:00:16.500  00:00:01.968  2200
10              00:00:17.543  00:00:01.043  0
```

I have explored using the modulo operator, but am only successful in resetting back to zero when the resulting cumsum is equal to the limit provided (i.e. cumsum[ms] of 500 % 500 equals zero).

Thanks in advance for any thoughts you may have, and please let me know if I can provide any more information.
Here's an example of how you might do this by iterating over each row in the dataframe. I created new data for the example for simplicity:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({'TimeDelta': np.random.normal(900, 60, size=100)})
print(df.head())

    TimeDelta
0  971.021295
1  734.359861
2  867.000397
3  992.166539
4  853.281131
```

So let's do an accumulator loop with your desired 3000 max:

```
maxvalue = 3000
lastvalue = 0
newcum = []
for row in df.iterrows():
    thisvalue = row[1]['TimeDelta'] + lastvalue
    if thisvalue > maxvalue:
        thisvalue = 0
    newcum.append(thisvalue)
    lastvalue = thisvalue
```

Then put the `newcum` list into the dataframe:

```
df['newcum'] = newcum
print(df.head())

    TimeDelta       newcum
0  801.977678   801.977678
1  893.296429  1695.274107
2  935.303566  2630.577673
3  850.719497     0.000000
4  951.554206   951.554206
```
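The explicit loop above can also be packed into a small helper. This is a sketch (the function name is mine, and it assumes Python 3.8+ for `accumulate`'s `initial` argument) that applies the same reset-on-overflow rule to a plain list of millisecond deltas, e.g. `df['TimeDelta'].tolist()`:

```python
from itertools import accumulate

def capped_cumsum(deltas, max_value):
    """Running sum of deltas that resets to 0 whenever it would exceed max_value."""
    step = lambda acc, d: 0 if acc + d > max_value else acc + d
    return list(accumulate(deltas, step, initial=0))[1:]  # drop the 0 seed

# Deltas (in ms) taken from the question's example, with the 3000 ms cap:
deltas = [0, 100, 162, 781, 4024, 1087, 3646, 232, 1968, 1043]
print(capped_cumsum(deltas, 3000))
# [0, 100, 262, 1043, 0, 1087, 0, 232, 2200, 0]
```

The output reproduces the question's expected CumSum[ms] column. Note the recurrence is inherently sequential (each value depends on the previous one after resets), which is why a plain vectorized `cumsum()` plus modulo can't express it directly.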
Android ListView Custom Adapter ImageButton

This may not be the correct approach; if there is a better way, please tell me. I've created a Custom Adapter class, and in my getView method I inflate the view I want to use:

```
public View getView(int position, View convertView, ViewGroup parent) {
    View v = mInflater.inflate(R.layout.wherelayout, null);
    if (convertView != null) {
        v = convertView;
    }

    HashMap<String, Object> whereHash = (HashMap<String, Object>) this.getItem(position);
    if (whereHash != null) {
        TextView whereId = (TextView) v.findViewById(R.id.tvWhere);
        TextView whereDetails = (TextView) v.findViewById(R.id.tvWhereDetails);
        ImageButton ibDelWhere = (ImageButton) v.findViewById(R.id.ibDelWhere);

        whereId.setText((CharSequence) whereHash.get("where"));
        whereDetails.setText((CharSequence) whereHash.get("details"));

        if (ibDelWhere != null) {
            ibDelWhere.setId(position);
            ibDelWhere.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View v) {
                    //do stuff when clicked
                }
            });
        }
    }
    return v;
}
```

The view consists of 2 TextViews aligned to the left & an ImageButton aligned to the right. I want to be able to delete the item from the ListView when the button is clicked.
the layout is like this -

```
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="horizontal"
    android:clickable="true">

    <TextView
        android:layout_height="wrap_content"
        android:layout_width="wrap_content"
        android:textSize="25sp"
        android:id="@+id/tvWhere"
        android:textColor="#00FF00"
        android:text="TextView"
        android:gravity="top|left"
        android:layout_alignParentTop="true"
        android:layout_alignParentLeft="true"></TextView>

    <TextView
        android:layout_height="wrap_content"
        android:layout_width="wrap_content"
        android:id="@+id/tvWhereDetails"
        android:textColor="#0000FF"
        android:text="TextView"
        android:textSize="18sp"
        android:layout_below="@+id/tvWhere"
        android:gravity="bottom|left"
        android:layout_alignParentLeft="true"></TextView>

    <ImageButton
        android:layout_height="wrap_content"
        android:layout_width="wrap_content"
        android:src="@drawable/eraser"
        android:id="@+id/ibDelWhere"
        android:layout_alignParentRight="true"
        android:layout_alignParentTop="true"></ImageButton>
</RelativeLayout>
```

The problem is that when the ImageButton is in the layout, I can click it & the onClick() fires as expected, but I can't click the actual list item itself, i.e. click on the TextView items to fire the ListView.onItemClick that was assigned to it already. If I remove the ImageButton from the layout, then the ListView.onItemClick event fires when I click the item.

Is there any way I can enable clicking both the ListView item & the button within the layout? Thanks guys & gals.
You have to set the imagebutton as non `focusable` and non `focusableInTouchMode` (clickable is ok). Please note, as opposed to other views, you can't do that in xml, because `android:focusable` gets overwritten in `ImageButton`'s constructor. To be more precise, that's one of the few differences between `ImageView` and `ImageButton`. See for yourself, this is the **complete** source of `ImageButton`:

```
@RemoteView
public class ImageButton extends ImageView {
    public ImageButton(Context context) {
        this(context, null);
    }

    public ImageButton(Context context, AttributeSet attrs) {
        this(context, attrs, com.android.internal.R.attr.imageButtonStyle);
    }

    public ImageButton(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        setFocusable(true);
    }

    @Override
    protected boolean onSetAlpha(int alpha) {
        return false;
    }
}
```

To solve, just call `setFocusable(false)` from java. Or use an `ImageView` :)

```
myImageButton.setFocusable(false);
```

Hope it helps.
Clarification on the meaning of the parameter of a synchronized block

I would like to know if this expression is correct and if it means this: I put a write lock on the field status, and then I change it. If not, I would like to know the meaning of the parameter, because I always see *this*.

```
public class Example {
    private int status;

    public Example(int status){
        this.status = status;
    }

    public void setStatus(int newStatus){
        synchronized(this.status){
            this.status = newStatus;
        }
    }
}
```
There are several things wrong with this code:

1. You cannot `synchronize` on a primitive. You could change it to an `Integer` but see below.
2. Synchronizing on a non-final object is not a good idea. You could make it `final`.
3. Changing a field while it is being `synchronized` on is going to break in some very obscure ways. And now that it is `final` it will not be allowed. Probably better to synchronize on another field.
4. You should also provide a get method for completeness.

With all of these issues fixed your code looks something like this:

```
public class Example {

    private final Object statusLock = new Object();
    private Integer status;

    public Example(Integer status) {
        this.status = status;
    }

    public void setStatus(Integer newStatus) {
        synchronized (statusLock) {
            status = newStatus;
        }
    }

    public Integer getStatus() {
        return status;
    }
}
```

Now - with this code - the answer to your question is *kind of*. What happens here is that you lock **all** access to the `status` field **through the set method** from any other thread while you change its value.

Notice that I do not synchronise in the get method. If I did then the above statement would change.
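The same "dedicated lock object" idea translates directly to other languages. Here is a minimal Python sketch of the pattern (illustrative only, not part of the Java answer above): the lock is a separate, never-reassigned object, so it plays the role of the `final` `statusLock` field.

```python
import threading

class Example:
    # Guard the field with a dedicated lock object rather than
    # "synchronizing" on the mutable field itself.
    def __init__(self, status):
        self._lock = threading.Lock()
        self._status = status

    def set_status(self, new_status):
        with self._lock:
            self._status = new_status

    def get_status(self):
        with self._lock:
            return self._status

ex = Example(1)
ex.set_status(2)
print(ex.get_status())  # 2
```

As in the Java version, the key point is that the object used for locking never changes identity, so all threads contend on the same lock.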
Can't build cross compiler

I can't seem to build Rust as a cross-compiler, either on Windows with MSYS2 or on a fresh install of Debian Wheezy. The error is the same for both. I run this configure:

```
./configure --target=arm-unknown-linux-gnueabihf,x86_64-pc-windows-gnu
```

make works, but then make install fails with:

```
[...]
prepare: tmp/dist/rustc-1.0.0-dev-x86_64-pc-windows-gnu-image/bin/rustlib/x86_64-pc-windows-gnu/lib/rustdoc-*.dll
prepare: tmp/dist/rustc-1.0.0-dev-x86_64-pc-windows-gnu-image/bin/rustlib/x86_64-pc-windows-gnu/lib/fmt_macros-*.dll
prepare: tmp/dist/rustc-1.0.0-dev-x86_64-pc-windows-gnu-image/bin/rustlib/x86_64-pc-windows-gnu/lib/libmorestack.a
prepare: tmp/dist/rustc-1.0.0-dev-x86_64-pc-windows-gnu-image/bin/rustlib/x86_64-pc-windows-gnu/lib/libcompiler-rt.a
compile: arm-unknown-linux-gnueabihf/rt/arch/arm/morestack.o
make[1]: arm-linux-gnueabihf-gcc: Command not found
/home/Sandro/rust/mk/rt.mk:94: recipe for target 'arm-unknown-linux-gnueabihf/rt/arch/arm/morestack.o' failed
make[1]: *** [arm-unknown-linux-gnueabihf/rt/arch/arm/morestack.o] Error 127
make[1]: Leaving directory '/home/Sandro/rust'
/home/Sandro/rust/mk/install.mk:22: recipe for target 'install' failed
make: *** [install] Error 2
```

Everything builds fine if I don't specify a cross architecture. Am I missing some special configure flag to make this work?
The error message says that make did not find the `arm-linux-gnueabihf-gcc` binary, which is supposed to be a C compiler producing ARM code. That means that you probably don't have any ARM C cross-compilation toolchain installed.

I know Ubuntu has packages for cross compilers (gcc-arm-linux-gnueabihf in 14.04), so Debian may have the same packages. You can also find fully packaged ARM C cross-compilers for Windows and Linux on the [Linaro](https://www.linaro.org/) website. If you are building for the Raspberry Pi, you can also find toolchains to build for Raspbian and Archlinux on <https://github.com/raspberrypi/tools>.

Here is an example under Linux with a Linaro toolchain (should be distribution-agnostic for the host):

```
$ wget http://releases.linaro.org/14.11/components/toolchain/binaries/arm-linux-gnueabihf/gcc-linaro-4.9-2014.11-x86_64_arm-linux-gnueabihf.tar.xz
$ tar -xf gcc-linaro-4.9-2014.11-x86_64_arm-linux-gnueabihf.tar.xz
$ export PATH=$PATH:$PWD/gcc-linaro-4.9-2014.11-x86_64_arm-linux-gnueabihf/bin
$ cd <your_configured_rustc_build_directory>
$ make
```

You can then use the cross compiler with the following line. You can provide the full path to the arm-linux-gnueabihf-gcc binary if you don't want to put it in your PATH.

```
rustc --target=arm-unknown-linux-gnueabihf -C linker=arm-linux-gnueabihf-gcc hello.rs
```

If you are using Cargo, you can specify the linker to use for each target in the `.cargo/config` with this option:

```
[target.arm-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
```
Why does removing a border from a div break my layout?

I have a site with a sticky footer and I want to have a background image at the top of my contentContainer. With a border it's fine, but if I remove the border ("thin blue solid") from #content, the whole thing is pushed down and my background image is no longer right at the top...

```
<div id="mainContain">
    <div id="header"></div>
    <div id="contentContain">
        <div id="content">
            <h2>Hog</h2>
        </div>
    </div>
    <div id="footer"></div>
</div>
```

My CSS is:

```
html, body {
    padding: 0;
    margin: 0;
    height: 100%;
}

#mainContain {
    min-height: 100%;
    position: relative;
}

#header {
    width: 100%;
    height: 180px;
    background-color: #111111;
}

#contentContain {
    background-image: url('../img/shadow-bg.png');
    background-position: center top;
    background-repeat: no-repeat;
    padding-bottom: 45px; /* This value is the height of your footer */
}

#content {
    margin: auto;
    border: thin blue solid;
}

#footer {
    position: absolute;
    width: 100%;
    bottom: 0;
    height: 45px; /* This value is the height of your footer */
    background-color: #111111;
}
```
You are probably seeing the results of collapsing margins. You can prevent margins from collapsing by either adding a border or some padding to the top of your block element, or by creating a new block formatting context by setting `overflow: auto`, for example:

```
#content {
    margin: auto;
    border: thin transparent solid; /* option 1 */
    padding-top: 1px;               /* option 2 */
    overflow: auto;                 /* option 3 */
}
```

**Which Option to Use**

If you use the first two options, you are adding an extra 1px of height to the top of your block element by virtue of either the width of the border or the padding. The third option using `overflow` will not affect the height of the element.

The first two options will probably be backwards compatible to all but the most primitive browsers. Anything that supports CSS2.1 will work as expected. As for `overflow`, it is widely supported all the way back to IE4 with a few minor exceptions. See <https://developer.mozilla.org/en-US/docs/Web/CSS/overflow> for details.
JavaFX Pass MouseEvents through Transparent Node to Children

In the java doc it says for [`setMouseTransparent`](http://docs.oracle.com/javafx/2/api/javafx/scene/Node.html#setMouseTransparent%28boolean%29) that it affects all children as well as the parent. How can it be made so that only the parent's transparent areas (visible through to other nodes below, but not responding to mouse events) are transparent to mouse events, so that the nodes below may receive them?

This happens when stacking two XYCharts in the same pane. Only the last one added can receive events.
Set [`pickOnBounds`](http://docs.oracle.com/javafx/2/api/javafx/scene/Node.html#pickOnBoundsProperty) for the relevant nodes to `false`; then clicking on transparent areas in a node won't register a click with that node.

> Defines how the picking computation is done for this node when triggered by a MouseEvent or a contains function call. If pickOnBounds is true, then picking is computed by intersecting with the bounds of this node, else picking is computed by intersecting with the geometric shape of this node.

**Sample Output**

This sample is actually far more complicated than is necessary to demonstrate the `pickOnBounds` function - but I just did something this complicated so that it shows what happens "when stacking two `XYCharts` in the same pane" as mentioned in the poster's question.

In the sample below two line charts are stacked on top of each other and the mouse is moved over the data line in one chart, which has a glow function attached to its mouseenter event. The mouse is then moved off of the first line chart data and the glow is removed from it. The mouse is then placed over the second line chart data of an underlying stacked chart and the glow is added to that line chart in the underlying stacked chart.

This sample was developed using [Java8](https://jdk8.java.net/download.html) and the coloring and behaviour described is what I experienced running the program on Mac OS X and Java 8b91.

![mouseoverline1](https://i.stack.imgur.com/u8vzS.png) ![mouseoverline2](https://i.stack.imgur.com/NSGB2.png)

**Sample Code**

The code below is just for demonstrating that `pickOnBounds` does work for allowing you to pass mouse events through transparent regions stacked on top of opaque node shapes.
It is not a recommended code practice to follow for styling lines in charts (you are better off using style sheets than lookups for that), and it is also not necessary to use a line chart stack to get multiple series on a single chart - it was only necessary or simpler to do these things to demonstrate the pick-on-bounds concept for this answer. Note the recursive call to set the `pickOnBounds` property for the charts after the charts have been shown on a stage and all of their requisite nodes created.

Sample code is an adaptation of [JavaFX 2 XYChart.Series and setOnMouseEntered](https://stackoverflow.com/questions/10212147/javafx-2-xychart-series-and-setonmouseentered):

```
import javafx.application.Application;
import javafx.collections.*;
import javafx.event.EventHandler;
import javafx.scene.*;
import javafx.scene.chart.*;
import javafx.scene.effect.Glow;
import javafx.scene.input.MouseEvent;
import javafx.scene.layout.StackPane;
import javafx.scene.shape.Path;
import javafx.stage.Stage;

public class LineChartSample extends Application {
    @SuppressWarnings("unchecked")
    @Override
    public void start(Stage stage) {
        // initialize data
        ObservableList<XYChart.Data> data = FXCollections.observableArrayList(
            new XYChart.Data(1, 23),  new XYChart.Data(2, 14),  new XYChart.Data(3, 15),
            new XYChart.Data(4, 24),  new XYChart.Data(5, 34),  new XYChart.Data(6, 36),
            new XYChart.Data(7, 22),  new XYChart.Data(8, 45),  new XYChart.Data(9, 43),
            new XYChart.Data(10, 17), new XYChart.Data(11, 29), new XYChart.Data(12, 25)
        );
        ObservableList<XYChart.Data> reversedData = FXCollections.observableArrayList(
            new XYChart.Data(1, 25),  new XYChart.Data(2, 29),  new XYChart.Data(3, 17),
            new XYChart.Data(4, 43),  new XYChart.Data(5, 45),  new XYChart.Data(6, 22),
            new XYChart.Data(7, 36),  new XYChart.Data(8, 34),  new XYChart.Data(9, 24),
            new XYChart.Data(10, 15), new XYChart.Data(11, 14), new XYChart.Data(12, 23)
        );

        // create charts
        final LineChart<Number, Number> lineChart = createChart(data);
        final LineChart<Number, Number> reverseLineChart = createChart(reversedData);

        StackPane layout = new StackPane();
        layout.getChildren().setAll(
            lineChart,
            reverseLineChart
        );

        // show the scene.
        Scene scene = new Scene(layout, 800, 600);
        stage.setScene(scene);
        stage.show();

        // make one line chart line green so it is easy to see which is which.
        reverseLineChart.lookup(".default-color0.chart-series-line").setStyle("-fx-stroke: forestgreen;");

        // turn off pick on bounds for the charts so that clicks only register when you click on shapes.
        turnOffPickOnBoundsFor(lineChart);
        turnOffPickOnBoundsFor(reverseLineChart);

        // add a glow when you mouse over the lines in the line chart so that you can see that they are chosen.
        addGlowOnMouseOverData(lineChart);
        addGlowOnMouseOverData(reverseLineChart);
    }

    @SuppressWarnings("unchecked")
    private void turnOffPickOnBoundsFor(Node n) {
        n.setPickOnBounds(false);
        if (n instanceof Parent) {
            for (Node c : ((Parent) n).getChildrenUnmodifiable()) {
                turnOffPickOnBoundsFor(c);
            }
        }
    }

    private void addGlowOnMouseOverData(LineChart<Number, Number> lineChart) {
        // make the first series in the chart glow when you mouse over it.
        Node n = lineChart.lookup(".chart-series-line.series0");
        if (n != null && n instanceof Path) {
            final Path path = (Path) n;
            final Glow glow = new Glow(.8);
            path.setEffect(null);
            path.setOnMouseEntered(new EventHandler<MouseEvent>() {
                @Override
                public void handle(MouseEvent e) {
                    path.setEffect(glow);
                }
            });
            path.setOnMouseExited(new EventHandler<MouseEvent>() {
                @Override
                public void handle(MouseEvent e) {
                    path.setEffect(null);
                }
            });
        }
    }

    private LineChart<Number, Number> createChart(ObservableList<XYChart.Data> data) {
        final NumberAxis xAxis = new NumberAxis();
        final NumberAxis yAxis = new NumberAxis();
        xAxis.setLabel("Number of Month");
        final LineChart<Number, Number> lineChart = new LineChart<>(xAxis, yAxis);
        lineChart.setTitle("Stock Monitoring, 2010");
        XYChart.Series series = new XYChart.Series(data);
        series.setName("My portfolio");
        lineChart.getData().add(series);
        lineChart.setCreateSymbols(false);
        lineChart.setLegendVisible(false);
        return lineChart;
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```
Using autofs to mount under each user's home directory

*(crosspost from SF, where I wasn't getting much joy)*

I have a CentOS 6.2 box up and running and have configured autofs to automount Windows shares under a /mydomain folder, using various howtos on the internet. Specifically, I have three files:

## /etc/auto.master

```
# ...
/mydomain /etc/auto.mydomain --timeout=60
# ...
```

## /etc/auto.mydomain

```
* -fstype=autofs,-DSERVER=& file:/etc/auto.mydomain.sub
```

## /etc/auto.mydomain.sub

```
* -fstype=cifs,uid=${UID},gid=${EUID},credentials=${HOME}/.smb/mydomain ://${SERVER}/&
```

This works and allows each user to specify their own credentials in a file under their home directory. However, the mounts they create are then available to everyone, with the original user's credentials, until the timeout is reached. This is less than ideal, so I've been looking at trying to do one of the following:

1. Configure autofs so that the mounts are local to each user but under the same path, so they can each simultaneously access `/mydomain/server1` with their own credentials
2. Configure autofs so that the mount points are under each user's home folder, so they can each simultaneously access `~/mydomain/server1` with their own credentials
3. Configure autofs so that the mounts are under a user-named folder, so they can simultaneously access `/mydomain/$USER/server1` with their own credentials (but I would also need to ensure that `/mydomain/$USER` is 0700 to the given $USER)

So far, I can't see any way of doing #1, but for #2 or #3, I've tried changing the entry in /etc/auto.master so that the key is either `${HOME}/mydomain` or `/mydomain/${USER}`, but neither has worked (the first showed no matching entry in `/var/log/messages` and the second did not appear to do the variable substitution). Am I missing something obvious?
(PS: Bonus props if you can provide a way to avoid the need for a plain-text credentials file -- maybe a straight prompt for username/domain/password, or maybe even some kerberos magic?)

(PPS: I have looked briefly at [smbnetfs](http://sourceforge.net/projects/smbnetfs/), but I couldn't get it to configure/make -- it asks for `fuse >= 2.6` even though I have v2.8.3 according to `fusermount --version` -- and I couldn't find a released version for `yum install`)

(PPPS: I also briefly looked at the supplied `/etc/auto.smb` but it looked like it would suffer the same sharing issues?)
I've done a lot of work with `autofs` and mounting a variety of different types of resources using it. You can check out the [man page](http://linux.die.net/man/5/autofs) for `autofs`, which does answer some of your questions if you can keep straight that when they're referring to `$USER` in the documentation, they're referring to the user that's running the `autofs` daemon.

These are the variables that you get by default:

```
Variable Substitution
    The following special variables will be substituted in the key and
    location fields of an automounter map if prefixed with $ as customary
    from shell scripts (Curly braces can be used to separate the field name):

    ARCH      Architecture (uname -m)
    CPU       Processor Type
    HOST      Hostname (uname -n)
    OSNAME    Operating System (uname -s)
    OSREL     Release of OS (uname -r)
    OSVERS    Version of OS (uname -v)

    autofs provides additional variables that are set based on the user
    requesting the mount:

    USER      The user login name
    UID       The user login ID
    GROUP     The user group name
    GID       The user group ID
    HOME      The user home directory
    HOST      Hostname (uname -n)

    Additional entries can be defined with the -Dvariable=Value map-option
    to automount(8).
```

You'd probably be tempted to use the `-DUSER=$USER`, but this will only set `$USER` inside the `autofs` map file to the user that started the `autofs` daemon. The daemon is usually owned by a user such as `root` or a `chrooted` user specifically setup for `autofs`.

**NOTE #1:** a `autofs` file is comprised of a `key` and a `value`. The variables are only allowed for use within the `value` portion of an entry.

**NOTE #2:** If the `-D=...` switch does not override a built-in variable then `$USER` or `$UID` would contain the value of the person's `$USER` & `$UID` that is accessing the mount.

### Limiting access to the CIFS share

Regarding your question of how to limit access to a `CIFS` mount, I don't see a way to accomplish this with `autofs`.
The credentials used to mount a `CIFS` share are used throughout the duration that the share is mounted. In effect, `autofs`, running its daemon `automount` as say `root`, is "equivalent" to the credentials of the `CIFS` user.

This isn't what I would consider typical behavior for `autofs` and is a by-product of using `mount.cifs`. Typical `autofs` behavior would respect the permissions on the other end of the mount, whereas with `mount.cifs` it does not.

### Bottom-line

I think you're out of luck accomplishing your setup using autofs. I think you're going to have to use `fuse` if you truly want each user to be accessing `CIFS` shares using their own credentials.
Minimum of sum of absolute values

Problem statement: There are 3 arrays A, B, C all filled with positive integers, and all the three arrays are of the same size.

Find min(|a-b| + |b-c| + |c-a|) where a is in A, b is in B, c is in C.

---

I worked on the problem the whole weekend. A friend told me that it can be done in linear time. I don't see how that could be possible. How would you do it?
Well, I think I can do it in O(n log n). I can only do O(n) if the arrays are initially sorted.

First, observe that you can permute `a`, `b`, `c` however you like without changing the value of the expression. So let `x` be the smallest of `a`, `b`, `c`; let `y` be the middle of the three; and let `z` be the maximum. Then note that the expression just equals `2*(z-x)`. (Edit: This is easy to see... Once you have the three numbers in order, `x < y < z`, the sum is just `(y-x) + (z-y) + (z-x)`, which equals `2*(z-x)`.)

Thus, all we are really trying to do is find three numbers such that the outer two are as close together as possible, with the other number "sandwiched" between them.

So start by sorting all three arrays in O(n log n). Maintain an index into each array; call these `i`, `j`, and `k`. Initialize all three to zero. Whichever index points to the smallest value, increment that index. That is, if `A[i]` is smaller than `B[j]` and `C[k]`, increment `i`; if `B[j]` is smallest, increment `j`; if `C[k]` is smallest, increment `k`. Repeat, keeping track of `|A[i]-B[j]| + |B[j]-C[k]| + |C[k]-A[i]|` the whole time. The smallest value you observe during this march is your answer. (When the smallest of the three is at the end of its array, stop, because you are done.)

At each step, you add one to exactly one index; but you can only do this `n` times for each array before hitting the end. So this is at most `3*n` steps, which is O(n), which is less than O(n log n), meaning the total time is O(n log n). (Or just O(n) if you can assume the arrays are sorted.)

Sketch of a proof that this works: Suppose `A[I]`, `B[J]`, `C[K]` are the `a`, `b`, `c` that form the actual answer; i.e., they have the minimum `|a-b|+|b-c|+|c-a|`. Suppose further that `a` > `b` > `c`; the proof for the other cases is symmetric.

Lemma: During our march, we do not increment `j` past `J` until after we increment `k` past `K`.
Proof: We always increment the index of the smallest element, and when `k <= K`, `B[J] > C[k]`. So when `j=J` and `k <= K`, `B[j]` is not the smallest element, so we do not increment `j`.

Now suppose we increment `k` past `K` before `i` reaches `I`. What do things look like just before we perform that increment? Well, `C[k]` is the smallest of the three at that moment, because we are about to increment `k`. `A[i]` is less than or equal to `A[I]`, because `i < I` and `A` is sorted. Finally, `j <= J` because `k <= K` (by our Lemma), so `B[j]` is also less than `A[I]`. Taken together, this means our sum-of-abs-diff at this moment is no greater than `2*(a-c)`, which is a contradiction.

Thus, we do not increment `k` past `K` until `i` reaches `I`. Therefore, at some point during our march `i=I` and `k=K`. By our Lemma, at this point `j` is less than or equal to `J`. So at this point, either `B[j]` is less than the other two and `j` will get incremented; or `B[j]` is between the other two and our sum is just `2*(A[i]-C[k])`, which is the right answer.

This proof is sloppy; in particular, it fails to explicitly account for the case where one or more of `a`, `b`, `c` are equal. But I think that detail can be worked out pretty easily.
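The marching algorithm described above is short to implement. Here is a sketch in Python (the function name is mine); it sorts first, then advances the pointer at the smallest current value:

```python
def min_abs_sum(A, B, C):
    # O(n log n): sort, then march three pointers, always advancing
    # the one pointing at the smallest current value.
    A, B, C = sorted(A), sorted(B), sorted(C)
    i = j = k = 0
    best = float('inf')
    while i < len(A) and j < len(B) and k < len(C):
        a, b, c = A[i], B[j], C[k]
        best = min(best, abs(a - b) + abs(b - c) + abs(c - a))
        # advance the index of the smallest element
        if a <= b and a <= c:
            i += 1
        elif b <= a and b <= c:
            j += 1
        else:
            k += 1
    return best

print(min_abs_sum([1, 4, 10], [2, 15, 20], [10, 12, 14]))  # 10
```

On the sample input the minimum is achieved by (a, b, c) = (10, 15, 12) or (10, 15, 14), giving 2*(15-10) = 10, matching the 2*(z-x) observation.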
get the offset of a tuple element

I have written the following code to get the offset of a tuple element:

```
template<size_t Idx, class T>
constexpr size_t tuple_element_offset() {
    return static_cast<size_t>(
        reinterpret_cast<char*>(&std::get<Idx>(*reinterpret_cast<T*>(0))) -
        reinterpret_cast<char*>(0));
}
```

This is actually similar to the implementation of the **offsetof** macro. It looks ugly, but compiles and works fine on gcc-4.6:

```
typedef std::tuple<int,char,long> mytuple;
mytuple var = std::make_tuple(4,'c',1000);

char * ptr = reinterpret_cast<char*>(&var);
long * pt = reinterpret_cast<long*>(ptr + tuple_element_offset<2,mytuple>());
std::cout << *pt << std::endl;
```

prints "1000". I don't know too much about constexpr, so my questions are:

1. Is it legal C++?
2. More important, why am I allowed to call std::get (which is non constexpr) inside a constexpr function? As far as I understand constexpr, the compiler is forced to evaluate the result of the expression at compile time, so no null dereference can occur in practice.
> Is it legal C++?

If by "legal" you mean "well-formed," then, yes. If by "legal" you mean "valid and will work on any compiler and Standard Library implementation," then, no, because `std::tuple` is not POD.

> Why am I allowed to call `std::get` (which is not `constexpr`) inside a `constexpr` function?

Basically, a `constexpr` function doesn't necessarily have to consist of just a constant expression. If you tried to use your `tuple_element_offset()` function in a constant expression, you'd get a compilation error. The idea is that a function might be usable in a constant expression in some circumstances but not in others, so there isn't a restriction that a `constexpr` function must always be usable in a constant expression (since there isn't such a restriction, it's also possible that a particular `constexpr` function might never be usable in a constant expression, as is the case with your function).

The C++0x draft has a good example (from 5.19/2):

```
constexpr const int* addr(const int& ir) { return &ir; } // OK

// OK: (const int*)&(const int&)x is an address constant expression
static const int x = 5;
constexpr const int* xp = addr(x);

// Error, initializer for constexpr variable not a constant expression;
// (const int*)&(const int&)5 is not a constant expression because it takes
// the address of a temporary
constexpr const int* tp = addr(5);
```
Efficient string concatenations with DOM

I form DOM nodes as strings and append them to the DOM tree like below using jQuery:

```
var dom = '<div><div style="display: inline-block">first name</div>' +
          '<div style="display: inline-block">last name</div></div>';
$("#contacts").append(dom);
```

The above code is a small sample. In most of my cases `dom` will hold big strings. When I recently read about JS performance tutorials, I saw this [post](https://developers.google.com/speed/articles/optimizing-javascript). It mentioned that this way of string concatenation is not a good practice. It mentioned the use of `.join()` instead of concatenation. That seems like an old post, so which one is efficient these days?
Strings are [immutable](https://stackoverflow.com/questions/51185/are-javascript-strings-immutable-do-i-need-a-string-builder-in-javascript). It means that when you allocate memory while creating a string, you are not able to reallocate it. So, this code:

```
var a = 'a';
a = a + 'bc';
```

will allocate a new block of memory for the 'a' variable, instead of reusing the already allocated one. When you do something like `'a' + 'b' + 'c' + 'd'`, each concatenation allocates a new block of memory for every temporary string.

When you use something like this:

```
var a = 'a';
a = [a, 'b', 'c'].join('');
```

the `join` function calculates the memory for the result string and allocates it only once for the complete string. As shown in the post that you mentioned, this is easier to handle and avoids memory leaks in the JS interpreter (it seems that IE6 and IE7 just do not handle garbage collection for the first variant correctly).

If you are interested in the speed of string concatenation for different browsers you can try and view it [here](http://jsperf.com/string-concatenation).

UPD: Not advocating `join`, just trying to explain why it was assumed an optimized variant. As seen in the jsPerf tests, new browsers optimize string concatenation, so it is faster.
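The same immutability argument applies in other languages too. Python strings are immutable as well, and its idiomatic builder is `str.join` over a list of parts; a minimal sketch (the markup here is just illustrative):

```python
# Building a string from parts in one allocation with join,
# instead of repeated += which creates intermediate strings.
parts = ['<div>', 'first name', '</div>']
html = ''.join(parts)
print(html)  # <div>first name</div>
```

The principle is identical: collect the pieces first, then join once, rather than growing an immutable string one concatenation at a time.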
Why can't I use += operator on a list iterator? I have an iterator from a `std::list<std::string>`, but when I attempt to advance it using `+=`, I get a compilation error. The code is:

```
#include <list>
#include <iostream>
#include <string>

int main()
{
    std::list<std::string> x;
    x.push_front("British");
    x.push_back("character");
    x.push_front("Coding is unco");
    x.push_back("Society");
    x.push_back("City Hole");

    auto iter = x.begin();
    iter += 3;
    //std::advance(iter, 3);
    x.erase(iter);

    for (auto &e: x) {
        std::cout << e << "\n";
    }
}
```

If I compile this using `clang++ -std=c++11 -o li li.cpp`, I get:

```
li.cpp:13:10: error: no viable overloaded '+='
    iter += 3;
    ~~~~ ^  ~
1 error generated.
```

Why can't I use `+=` with this iterator?
The iterator for [std::list](http://en.cppreference.com/w/cpp/container/list#Member_types) is [BidirectionalIterator](https://en.cppreference.com/w/cpp/named_req/BidirectionalIterator), which doesn't support `operator+=` like [RandomAccessIterator](https://en.cppreference.com/w/cpp/named_req/RandomAccessIterator). You can use `operator++`, which is supported by [InputIterator](https://en.cppreference.com/w/cpp/named_req/InputIterator) (including `BidirectionalIterator`), something like ``` ++iter; ++iter; ++iter; ``` But it's ugly. The best way is as you commented, to use [std::advance](http://en.cppreference.com/w/cpp/iterator/advance) (or [std::next](http://en.cppreference.com/w/cpp/iterator/next) (since C++11)) instead, which could be used with [InputIterator](https://en.cppreference.com/w/cpp/named_req/InputIterator) (including `BidirectionalIterator`), and also takes advantage of the features supported by `RandomAccessIterator`. (emphasis mine) > > Complexity > > > Linear. > > > However, if `InputIt` additionally meets the requirements of > `RandomAccessIterator`, **complexity is constant**. > > > So you can just use it without considering about the category of the iterator, `std::advance` will do the best choice for you. e.g. ``` std::advance(iter, 3); ``` or ``` iter = std::next(iter, 3); ```
JavaScript Promises : Deep nested context with bind(this) Because I'm using a prototype that has functions calling other functions in the same prototype, I have to refer to those methods using `this`.

The problem `this` created: **Because of that, I have to preserve a context to use `this`, which has me forming very ugly `.bind(this)` walls.**

Here is a simplified example I made for laughs.

```
Killmyself.prototype.fireLeMissles = function () {
    return new Promise(function(resolve,reject) {
        this.anotherFunction(param).then(function(result) {
            someList.forEach(function(item) {
                this.fireLeMissles().then(function(anotherResult){
                    promiseList.push(anotherResult)
                })
            },this);
            Promise.all(promiseList).then(function(promiseItem){
                childPlacesIds.forEach(function(childPlaceId) {
                    //Do Other Stuff
                },this);
            });
            resolve(result);
        }.bind(this).catch(function(err){
            console.log("Yea, life sucks sometimes.")
        }));
    }.bind(this));
}

Killmyself.prototype.another = function(){
    //Other stuff
}
```

You can see because of calls to functions in the same prototype such as `this.anotherFunction(`... and `this.fireLeMissles(`... I had to do deep preservation of context, which now (in my much larger version of this) is making this code hard to work with.

**Question:** Is this a "man up and get used to the harder aspects of JavaScript" thing - or do you seasoned developers see simple ways that deep binding like this could have been avoided?
If you are using **ES6**, you can benefit from [arrow functions](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/Arrow_functions), which preserve the context. ``` var counter = function () { this.count = 0; setInterval( () => { // arrow function console.log(this.count++); // context is preserved }, 1000) } var counter = new counter(); ``` So, your code would become something like: ``` Killmyself.prototype.fireLeMissles = function() { return new Promise((resolve, reject) => { this.anotherFunction(param).then(result => { someList.forEach(item => { this.fireLeMissles().then(anotherResult => { promiseList.push(anotherResult) }); }); Promise.all(promiseList).then(promiseItem => { childPlacesIds.forEach(childPlaceId => { //Do Other Stuff }); }); resolve(result); }).catch(err => { console.log("Yea, life sucks sometimes.") }); }); } ``` For **ES5**, you can just either use `.bind` just like the way you did or you can assign `this` to something else in the function with the desired context, then use that variable inside the inner functions. ``` Killmyself.prototype.fireLeMissles = function() { var self = this; /// use `self` instead of `this` from now on. return new Promise(function(resolve, reject) { self.anotherFunction(param).then(function(result) { someList.forEach(function(item) { self.fireLeMissles().then(function(anotherResult) { promiseList.push(anotherResult) }) }); Promise.all(promiseList).then(function(promiseItem) { childPlacesIds.forEach(function(childPlaceId) { //Do Other Stuff }); }); resolve(result); }).catch(function(err) { console.log("Yea, life sucks sometimes.") }); }); } ```
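As a minimal, runnable illustration of the difference (using a made-up `Launcher` prototype, not your actual code):

```javascript
// Hypothetical prototype showing how arrow functions capture the
// enclosing `this`, while plain callbacks need .bind(this).
function Launcher(name) {
  this.name = name;
}

Launcher.prototype.tagAll = function (items) {
  // Arrow function: `this` is still the Launcher instance.
  return items.map(item => this.name + ":" + item);
};

Launcher.prototype.tagAllES5 = function (items) {
  // Plain function: `this` would be lost without .bind(this).
  return items.map(function (item) {
    return this.name + ":" + item;
  }.bind(this));
};

const l = new Launcher("x");
console.log(l.tagAll(["a", "b"]));    // [ 'x:a', 'x:b' ]
console.log(l.tagAllES5(["a", "b"])); // [ 'x:a', 'x:b' ]
```

Both versions produce the same result, but the arrow-function version needs no explicit binding at any nesting depth, which is exactly what removes the `.bind(this)` walls.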
Improving a method to guarantee uniqueness I have just written something to guarantee a unique URL when creating pages within an old legacy system (sounds odd doing this retrospectively but we've just changed how the routing works). I don't often end up using `while` and I think this may be why I feel it can be improved. ``` private string GetUniqueUrl(Page parentPage, Page newPage) { var isNotDuplicatateUrl = false; var version = 0; var baseUrl = parentPage != null ? parentPage.FullUrl.Trim() + "/" : string.Empty; while (isNotDuplicatateUrl == false) { var friendlyUrl = version > 0 ? (newPage.FriendlyURLName + "-" + version) : newPage.FriendlyURLName; var fullUrl = baseUrl + friendlyUrl; var duplicateUrlPage = Site.Pages.FirstOrDefault(x => x.FullUrl != null && string.Equals(x.FullUrl.Trim(), fullUrl.Trim(), StringComparison.CurrentCultureIgnoreCase)); if (duplicateUrlPage == null) { isNotDuplicatateUrl = true; } else { version++; } } return newPage.FriendlyURLName + "-" + version; } ``` NB: I know `Page.FriendlyURLName` should be `Page.FriendlyUrlName` but I'm only looking for improvements within the scope of this method.
Having a loop to check each version of the URL could become a problem if there end up being a lot of URLs. There may be a better way to do it using a hash, or some other means of persisting data, and retrieving the most recently used version. Still, even with your loop code, there's a way to do it in a simpler fashion; using a helper method to check for a duplicate URL is a start.....

```
private bool PageFound(string baseUrl, string friendlyUrl)
{
    var fullUrl = baseUrl + friendlyUrl;
    var duplicateUrlPage = Site.Pages.FirstOrDefault(
          x => x.FullUrl != null && 
          string.Equals(x.FullUrl.Trim(), fullUrl.Trim(),
                        StringComparison.CurrentCultureIgnoreCase));
    return duplicateUrlPage != null;
}

private string GetUniqueUrl(Page parentPage, Page newPage)
{
    var version = 0;
    var baseUrl = parentPage != null ? parentPage.FullUrl.Trim() + "/" : string.Empty;

    var friendlyUrl = newPage.FriendlyURLName;
    while (PageFound(baseUrl, friendlyUrl))
    {
        version++;
        friendlyUrl = newPage.FriendlyURLName + "-" + version;
    }
    return newPage.FriendlyURLName + "-" + version;
}
```

There are two things to pay attention to here.

1. Note how the first time the duplicate is checked, there's no 'version' on the URL? The version only gets added for version >=1. This is 'managed' by doing the first URL construction outside the loop, and the rest inside the loop. This saves the conditional on the `version == 0`
2. I believe there's a bug in your code (which I have reproduced in my suggestion). If you search for the page ***without*** the version (version = 0) and you do not find one, you return the URL with the version anyway (you append "-0"). Are you sure this is what you want to do? If it is not what you want, you can just return `friendlyUrl` instead of `newPage.FriendlyURLName + "-" + version`

Using a helper function to check for the duplicate is a good way to put a code block into a conditional clause for a loop. In essence, by creating a function, we have also created a powerful while loop check.
The 'problem' with your code is that it was trying to do condition-loop logic inside the execution block, instead of inside the loop clause.
Clean up & style characters from text I am getting text from a feed that has a lot of characters like:

```
Insignia&amp;#153; 2.0 Stereo Computer Speaker System (2-Piece) - Black

4th-Generation Apple&#174; iPod&#174; touch
```

Is there an easy way to get rid of these, or do I have to anticipate which characters I want to delete and use the delete method to remove them?

Also, when I try to remove

```
&amp;
```

with

```
str.delete("&amp;")
```

It leaves behind "amp;"

Is there a better way to delete this type of character? Do I need to re-encode the text?
[String#delete](http://ruby-doc.org/docs/ProgrammingRuby/html/ref_c_string.html#String.delete) is certainly not what you want, as it works on characters, not the string as a whole. Try ``` str.gsub /&amp;/, "" ``` You may also want to try replacing the `&amp;` with a literal ampersand, such as: ``` str.gsub /&amp;/, "&" ``` If this is closer to what you really want, you may get the best results unescaping the HTML string. If so try this: ``` CGI::unescapeHTML(str) ``` Details of the unescapeHTML method are [here](http://rubydoc.info/stdlib/cgi/1.9.2/CGI#unescapeHTML-class_method).
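For example, running `unescapeHTML` over a string like the ones in your feed (sample string assumed):

```ruby
require 'cgi'

str = "4th-Generation Apple&#174; iPod&#174; touch &amp; more"
clean = CGI.unescapeHTML(str)
puts clean  # => 4th-Generation Apple® iPod® touch & more
```

Note that a doubly-escaped value such as `&amp;#153;` needs two passes: the first pass turns it into `&#153;`, and only a second pass decodes the numeric character reference.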
Scrolling div items into view on keypress I have the following scrolling div, which I want to scroll when pressing the up and down arrow keys. The problem is that when the selection gets out of view, I want to scroll the div accordingly. I added a [focused] attribute to the div element only for demonstration purposes; I was hoping I could databind to a focused attribute via Angular2.

```
<div class="scrolling-div">
   <div *ngFor="let item of Items; let i = index" [focused]="i === currentIndex">
      {{item.title}}
   </div>
</div>
```

Once I detect an arrow keypress the currentIndex is incremented or decremented. The problem is I want to set the focus to the current item such that the div gets scrolled accordingly. Is there a way to bind each item to the focus such that it gets scrolled into view? Maybe there is a different solution to this problem.
That's an example of a `focused` directive that will do the work: ``` import {Directive, Input, Renderer, ElementRef} from '@angular/core' @Directive({ selector: '[focused]' }) export class FocusedDirective { @Input() set focused(value: boolean){ if(value){ this.renderer.invokeElementMethod(this.elementRef.nativeElement, 'scrollIntoViewIfNeeded'); } } constructor(private elementRef: ElementRef, private renderer: Renderer){} } ``` If you want it to scroll to the top (and not just become visible), replace `scrollIntoViewIfNeeded` with `scrollIntoView`. Here is a [plnkr](http://plnkr.co/edit/gSFdUy0eplAuKEPLLGNH?p=preview) of a very basic working example. Note that I didn't do range checking and the like, just showed basic functionality
AX 2012R2: Lookup query takes too long, lookup never opens I have an AX2012R2 CU6 (build&client 6.2.1000.1437, kernel 6.2.1000.5268) with the following problem:

On AP>Journals>Invoices>Invoice Journal>lines (form LedgerJournalTransVendInvoice), when I select Vendor as *Account type* and then activate the lookup on the *Account* field, AX freezes for a couple of minutes and when it recovers, the lookup is closed/never opened. This happens every time with account type Vendor; other account types work just fine.

I debugged this to LedgerJournalEngine.accountNumLookup() --> VendTable.lookupVendor line

`formSegmentedEntryControl.performFormLookup(formRun);`

The above process takes up the time.

Any ideas before I hire an exorcist?
There is a known KB for this for R3, look for it on [Lifecycle services](https://lcs.dynamics.com) > > KB 3086961 Performance issue of VendorLookup on the volume data, > during the GFM Bugbash 6/11 took over 30 minutes > > > Even though the fix is for R3 it should be easy to backport as the changes are described as > > The root cause seemed to be the DirPartyLookupGridView, which had > around 14 joins on views and tables. This view is used in many places > and hence seemed to have grown quite a lot over time. > > > The changes in the hotfix remove the view and add only the required > datasources - dirpartytable and logisticsaddress to the > VendTableLookup form. > > > The custtableLookup is not using the view and using custom datasource > joins instead, so no changes there. > > > Try implementing that change and see what happens. I'm not sure this will fix your issue as in your execution plan the only operation that seems really expensive is the sort operator which needs to spill to tempdb (you might need more memory to solve that) but the changes in the datasource could have the effect of removing the sort operator from the execution plan as the data may be sorted by an index.
Is it possible to add more than mt-5 to my top margin? I would like to align some elements on my website, I've already used mt-5 in my HTML, but I need a little more margin. How can I do this?
There is no higher value in Bootstrap but you could make your own classes. The documentation says ["You can add more sizes by adding entries to the $spacers Sass map variable."](https://getbootstrap.com/docs/4.0/utilities/spacing/), but if you don't want to recompile the SASS you can make you own class like this: ``` .mt-6 { margin-top: 4rem; // or the value you want } ``` The Bootstrap classes already use `rem` for the margin classes so it is better to stay with the same unit. The existing classes use these values (it could help you decide what you want to use for your value): - `.mt-5` is set to `3rem` so this is 1rem unit higher. - `.mt-4` is `1.5rem` (so you might want to use `1.5rem` higher for your class) - `.mt-3` is `1rem` - `.mt-2` is `0.5rem` - `.mt-1` is `0.25rem` If you use it consistently on the top of all elements in the row, it will not affect the responsiveness. But if you use it left or right also, then it can affect responsiveness because it affects the width.
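If you need the extra margin only from a certain breakpoint up, you can mirror Bootstrap's responsive naming as well (the `768px` value below is stock Bootstrap 4's `md` breakpoint; adjust it if your theme overrides the grid breakpoints):

```css
/* Applies from the md breakpoint (768px in stock Bootstrap 4) upward */
@media (min-width: 768px) {
  .mt-md-6 {
    margin-top: 4rem; /* same value as the custom .mt-6 class */
  }
}
```

This keeps your custom class consistent with the built-in `.mt-md-*` utilities.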
R find last weekday of month How do I find the last weekday (e.g., Wednesday) of the month using R? In the code below, I calculate the month, day of the month, week of the month, and weekday. There are 5 Wednesdays in January 2014, but only 4 Wednesdays in February 2014, so I cannot use max(week of the month) as a filter. Any help is appreciated although I prefer to use the base R functions. ``` DF <- data.frame(DATE = seq(as.Date("2014-01-01"), as.Date("2014-12-31"), "day")) DF$MONTH <- as.numeric(format(DF$DATE, "%m")) DF$DAY_OF_MONTH <- as.numeric(format(DF$DATE, "%d")) DF$WEEK_OF_MONTH <- ceiling(as.numeric(format(DF$DATE, "%d")) / 7) DF$WEEKDAY <- format(DF$DATE, "%A") DF ```
I think this is what you're after: ``` DF$last_weekday_o_month <- ave( weekdays(DF$DATE), months(DF$DATE), FUN = function(x) tail(x[ !(x %in% c("Saturday","Sunday")) ], 1) ) ``` To find the particular date that is the last weekday.... ``` DF$last_weekdaydate_o_month <- ave( DF$DATE, months(DF$DATE), FUN = function(x) tail(x[ !(weekdays(x) %in% c("Saturday","Sunday")) ], 1) ) ``` the result looks like... ``` DATE last_weekday_o_month last_weekdaydate_o_month 1 2014-01-01 Friday 2014-01-31 2 2014-01-02 Friday 2014-01-31 3 2014-01-03 Friday 2014-01-31 4 2014-01-04 Friday 2014-01-31 5 2014-01-05 Friday 2014-01-31 6 2014-01-06 Friday 2014-01-31 ... 360 2014-12-26 Wednesday 2014-12-31 361 2014-12-27 Wednesday 2014-12-31 362 2014-12-28 Wednesday 2014-12-31 363 2014-12-29 Wednesday 2014-12-31 364 2014-12-30 Wednesday 2014-12-31 365 2014-12-31 Wednesday 2014-12-31 ``` If you did this first, of course you could compute `last_weekday_o_month` as `weekdays(last_weekdaydate_o_month)`. --- With a couple packages, this can be done more elegantly/readably, as suggested by @RichardScriven: ``` library(data.table) setDT(DF)[, last_weekdaydate_o_month := last(DATE[!chron::is.weekend(DATE)]) , by = month(DATE)] ``` which gives ``` DATE last_weekdaydate_o_month 1: 2014-01-01 2014-01-31 2: 2014-01-02 2014-01-31 3: 2014-01-03 2014-01-31 4: 2014-01-04 2014-01-31 5: 2014-01-05 2014-01-31 --- 361: 2014-12-27 2014-12-31 362: 2014-12-28 2014-12-31 363: 2014-12-29 2014-12-31 364: 2014-12-30 2014-12-31 365: 2014-12-31 2014-12-31 ```
Matrix Transpose in Golang I have written a simple program in Go to transpose a matrix. Assume the input is always correct. I also read the article [An Efficient Matrix Transpose in CUDA C/C++](https://devblogs.nvidia.com/efficient-matrix-transpose-cuda-cc/). So I am keen to know how I can use goroutines, or another more efficient approach, to solve this problem in an idiomatic Go way.

```
package main

import (
	"fmt"
)

func main() {
	a := [][]int{{1, 1, 1, 1}, {2, 2, 2, 2}, {3, 3, 3, 3}, {4, 4, 4, 4}}
	r := transpose(a)
	fmt.Println(r)
}

func transpose(a [][]int) [][]int {
	newArr := make([][]int, len(a))
	for i := 0; i < len(a); i++ {
		for j := 0; j < len(a[0]); j++ {
			newArr[j] = append(newArr[j], a[i][j])
		}
	}
	return newArr
}
```
In Go, measure performance. Run benchmarks using the Go `testing` package. For example, ``` $ go test transpose_test.go -bench=. -benchmem BenchmarkTranspose-4 2471407 473 ns/op 320 B/op 13 allocs/op BenchmarkTransposeOpt-4 9023720 136 ns/op 224 B/op 2 allocs/op $ ``` As you can see, minimizing allocations is important. Efficient memory cache usage probably helps too. `transpose_test.go`: ``` package main import "testing" func transpose(a [][]int) [][]int { newArr := make([][]int, len(a)) for i := 0; i < len(a); i++ { for j := 0; j < len(a[0]); j++ { newArr[j] = append(newArr[j], a[i][j]) } } return newArr } func BenchmarkTranspose(b *testing.B) { a := [][]int{{1, 1, 1, 1}, {2, 2, 2, 2}, {3, 3, 3, 3}, {4, 4, 4, 4}} b.ResetTimer() for N := 0; N < b.N; N++ { _ = transpose(a) } } func NewMatrix(d2, d1 int) [][]int { a := make([]int, d2*d1) m := make([][]int, d2) lo, hi := 0, d1 for i := range m { m[i] = a[lo:hi:hi] lo, hi = hi, hi+d1 } return m } func transposeOpt(a [][]int) [][]int { b := NewMatrix(len(a[0]), len(a)) for i := 0; i < len(b); i++ { c := b[i] for j := 0; j < len(c); j++ { c[j] = a[j][i] } } return b } func BenchmarkTransposeOpt(b *testing.B) { a := [][]int{{1, 1, 1, 1}, {2, 2, 2, 2}, {3, 3, 3, 3}, {4, 4, 4, 4}} b.ResetTimer() for N := 0; N < b.N; N++ { _ = transposeOpt(a) } } ``` --- Goroutines have overhead. For a small task (4 x 4 matrix), the overhead may outweigh any gains. Let's look at a 1920 x 1080 matrix (the size of an FHD display). For this type of problem, we examine the optimized transpose function (`transposeOpt`) and see if it can be subdivided into smaller, concurrent pieces. For example, by row (`transposeRow`), or the number of available CPUs (`transposeCPU`). ``` $ go test goroutine_test.go -bench=. 
-benchmem
BenchmarkTranspose-4          37   31848320 ns/op   63354443 B/op   15121 allocs/op
BenchmarkTransposeOpt-4      202    5921065 ns/op   16616065 B/op       2 allocs/op
BenchmarkTransposeRow-4      229    5307156 ns/op   16616159 B/op       3 allocs/op
BenchmarkTransposeCPU-4      360    3347992 ns/op   16616083 B/op       3 allocs/op
$
```

A row is still a small task. Twice the number of CPUs amortizes the goroutine overhead over a number of rows. By any measure -- CPU, memory, allocations -- `transposeCPU` is considerably more efficient than the original transpose for a 1920 x 1080 matrix.

```
func NewMatrix(d2, d1 int) [][]int {
	a := make([]int, d2*d1)
	m := make([][]int, d2)
	lo, hi := 0, d1
	for i := range m {
		m[i] = a[lo:hi:hi]
		lo, hi = hi, hi+d1
	}
	return m
}

var numCPU = runtime.NumCPU()

func transposeCPU(a [][]int) [][]int {
	b := NewMatrix(len(a[0]), len(a))
	var wg sync.WaitGroup
	n := 2 * numCPU
	stride := (len(b) + n - 1) / n
	for lo := 0; lo < len(b); lo += stride {
		hi := lo + stride
		if hi > len(b) {
			hi = len(b)
		}
		wg.Add(1)
		// each goroutine fills rows [lo, hi) of the result
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				c := b[i]
				for j := 0; j < len(c); j++ {
					c[j] = a[j][i]
				}
			}
		}(lo, hi)
	}
	wg.Wait()
	return b
}
```

However, for a small, 4 x 4 matrix, the goroutine overhead outweighs any gains.

```
BenchmarkTranspose-4         2570755    463 ns/op   320 B/op   13 allocs/op
BenchmarkTransposeOpt-4      8241715    145 ns/op   224 B/op    2 allocs/op
BenchmarkTransposeRow-4       908217   1318 ns/op   240 B/op    3 allocs/op
BenchmarkTransposeCPU-4       881936   1330 ns/op   240 B/op    3 allocs/op
```

As always, when we are exploiting concurrency, we use the Go race detector to check for data races. The overhead to check for data races is considerable. Therefore, we discard any benchmark results.

```
$ go test goroutine_test.go -bench=. -benchmem -race
```

By design, there are no data races.
`goroutine_test.go`:

```
package main

import (
	"runtime"
	"sync"
	"testing"
)

func transpose(a [][]int) [][]int {
	newArr := make([][]int, len(a))
	for i := 0; i < len(a); i++ {
		for j := 0; j < len(a[0]); j++ {
			newArr[j] = append(newArr[j], a[i][j])
		}
	}
	return newArr
}

func BenchmarkTranspose(b *testing.B) {
	for N := 0; N < b.N; N++ {
		_ = transpose(a)
	}
}

func NewMatrix(d2, d1 int) [][]int {
	a := make([]int, d2*d1)
	m := make([][]int, d2)
	lo, hi := 0, d1
	for i := range m {
		m[i] = a[lo:hi:hi]
		lo, hi = hi, hi+d1
	}
	return m
}

func transposeOpt(a [][]int) [][]int {
	b := NewMatrix(len(a[0]), len(a))
	for i := 0; i < len(b); i++ {
		c := b[i]
		for j := 0; j < len(c); j++ {
			c[j] = a[j][i]
		}
	}
	return b
}

func BenchmarkTransposeOpt(b *testing.B) {
	for N := 0; N < b.N; N++ {
		_ = transposeOpt(a)
	}
}

func transposeRow(a [][]int) [][]int {
	b := NewMatrix(len(a[0]), len(a))
	var wg sync.WaitGroup
	for i := 0; i < len(b); i++ {
		wg.Add(1)
		c := b[i]
		go func(c []int, i int) {
			defer wg.Done()
			for j := 0; j < len(c); j++ {
				c[j] = a[j][i]
			}
		}(c, i)
	}
	wg.Wait()
	return b
}

func BenchmarkTransposeRow(b *testing.B) {
	for N := 0; N < b.N; N++ {
		_ = transposeRow(a)
	}
}

var numCPU = runtime.NumCPU()

func transposeCPU(a [][]int) [][]int {
	b := NewMatrix(len(a[0]), len(a))
	var wg sync.WaitGroup
	n := 2 * numCPU
	stride := (len(b) + n - 1) / n
	for lo := 0; lo < len(b); lo += stride {
		hi := lo + stride
		if hi > len(b) {
			hi = len(b)
		}
		wg.Add(1)
		// each goroutine fills rows [lo, hi) of the result
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				c := b[i]
				for j := 0; j < len(c); j++ {
					c[j] = a[j][i]
				}
			}
		}(lo, hi)
	}
	wg.Wait()
	return b
}

func BenchmarkTransposeCPU(b *testing.B) {
	b.ResetTimer()
	for N := 0; N < b.N; N++ {
		_ = transposeCPU(a)
	}
}

var a = func() [][]int {
	b := NewMatrix(1920, 1080)
	for i := range b {
		for j := range b[0] {
			b[i][j] = i<<16 + j
		}
	}
	return b
}()
```

---

You might want to look at `gonum`, the Go numeric module. It's open source.
For matrices: > > package mat > > > > ``` > import "gonum.org/v1/gonum/mat" > > ``` > > Package mat provides implementations of float64 and complex128 matrix > structures and linear algebra operations on them. > > >
NSUserDefaults of different application I am currently developing System Pane and my app have some configuration settings saved to User Defaults: ``` NSUserDefaults *userDefault=[NSUserDefaults standardUserDefaults]; NSData *encodedObject = [NSKeyedArchiver archivedDataWithRootObject:listOfStuff]; [userDefault setObject:encodedObject forKey:@"myStuff"]; [userDefault synchronize]; ``` Can anyone tell me if and how a different application can read settings that have been saved in above System Pane? Thank you.
The way to read someone's preferences is very simple and straightforward:

```
NSUserDefaults *defaults = [[NSUserDefaults alloc] init];
[defaults addSuiteNamed:@"com.apple.systempreferences.plist"];
NSLog(@"DefaultExposeTab is: %@", [defaults stringForKey:@"DefaultExposeTab"]);
```

Make sure to initialize NSUserDefaults the following way:

> [[NSUserDefaults alloc] init];

then you can add the desired preference list, in our case I would like to read System Preferences:

> [defaults addSuiteNamed:@"com.apple.systempreferences.plist"];

and finally get the value for whichever key you want, in this example:

> "DefaultExposeTab"

The above example works like a charm. Please remember it will only work for the current user. Thanks.

**P.S: Please note - the above example will NOT work for a sandboxed application.**
snprintf() does not work for modifying a string in-place

```
char symbols[16] = "";
int index = 0;

while (1) {
    if (index % 2)
        snprintf(symbols, sizeof symbols, "a%s", symbols);
    else
        snprintf(symbols, sizeof symbols, "b%s", symbols);

    index++;
    printf("%s\n", symbols);
}
```

This is how the output looks: a => bb => aaa => bbbb

I want the output to look like: a => ba => aba => baba
This is undefined behaviour. From the C99 standard section *7.19.6.5 The snprintf function*: > > The snprintf function is equivalent to fprintf, except that the output is written into > an array (specified by argument s) rather than to a stream. If n is zero, nothing is written, > and s may be a null pointer. Otherwise, output characters beyond the n-1st are > discarded rather than being written to the array, and a null character is written at the end > of the characters actually written into the array. **If copying takes place between objects > that overlap, the behavior is undefined.** > > > You will need to make a copy of `symbols` for use as an argument in the `snprintf()` calls: ``` char symbols[16] = ""; char symbols_copy[16]; int index = 0; while (index++ < 15) { memcpy(symbols_copy, symbols, sizeof(symbols)); if (index % 2) snprintf(symbols, sizeof symbols, "a%s", symbols_copy); else snprintf(symbols, sizeof symbols, "b%s", symbols_copy); printf("%s\n", symbols); } ``` See demo <http://ideone.com/GvnW7D> .
Django Autocomplete Light List Object has no Attribute Queryset *Please look at my edits at the end of my code as well.* I'm attempting to implement django-autocomplete-light (dal 3.2.10) for a single field. Following the tutorial, I turn up this error: 'list' object has no attribute 'queryset'. I have seen this question: [django-autocomplete-light error = 'list' object has no attribute 'queryset'](https://stackoverflow.com/questions/40822373/django-autocomplete-light-error-list-object-has-no-attribute-queryset). It did not resolve my issue. Why is this error occurring? What can I do to combat this? I don't think this is the entire problem, but I don't see any js files show up in the browser inspector. I thought including the code in Edit #3 would cause something to show up. I have two models: ``` class Entity(models.Model): entity = models.CharField(primary_key=True, max_length=12) entityDescription = models.CharField(max_length=200) def __str__(self): return self.entityDescription class Action(models.Model): entity = models.ForeignKey(Entity, on_delete=models.CASCADE, db_column='entity') entityDescription = models.CharField(max_length=200) action = models.CharField(max_length=50) def __str__(self): return '%s' % self.entity ``` I have a model form and formset. I am also using crispy-forms to render the formset: ``` class ActionForm(ModelForm): class Meta: model = Action fields = '__all__' widgets = { 'entityDescription': autocomplete.ModelSelect2(url='eda') } ActionFormSet = modelformset_factory(Action, extra=1, exclude=(), form=ActionForm) ``` I have a view: ``` class EntityDescriptionAutocomplete(autocomplete.Select2QuerySetView): def get_queryset(self): qs = Entity.objects.all() if self.q: qs = qs.filter(entityDescription__istartswith=self.q) return qs ``` I have a urls.py: ``` urlpatterns = [ url( r'^eda/$', views.EntityDescriptionAutocomplete.as_view(), name='eda', ), ] ``` Thank you for any insight you all might have. **Edit:** I changed... 
``` widgets = { 'entityDescription': autocomplete.ModelSelect2(url='eda'), } ``` ...to... ``` widgets = { 'entityDescription': autocomplete.Select2(url='eda'), } ``` ...this allowed my page to render, but the autocomplete field is an empty dropdown. Why is it empty, and why is it not an autocomplete box? **Edit #2:** I removed the widget setting in the meta class and instead overrode the field directly: ``` class ActionForm(ModelForm): entityDescription = ModelChoiceField( queryset=Entity.objects.all(), widget=autocomplete.ModelSelect2(url='eda') ) class Meta: model = Action fields = '__all__' ``` This still returns an empty dropdown (not an autocomplete box), but it now has Django's `-------` signifier instead of absolutely nothing. **Edit #3:** I added this to my template: ``` {% block footer %} <script type="text/javascript" src="/static/collected/admin/js/vendor/jquery/jquery.js"></script> {{ form.media }} {% endblock %} ``` Nothing changed.
Your `ActionForm`, and `EntityDescriptionAutocomplete` look fine. **Is your autocomplete view returning results?** You should be able to browse to the autocomplete view's URL (in your case `/eda`) and see JSON results from the query. Is the set non-empty? ``` {"pagination": {"more": true}, "results": [{"text": "foo", "id": "1" ... ^^^^^^^^^^^^^^^^^^^^^^^^ ``` **Does your rendered HTML include the libraries?** You should see the following in your HTML source, where you put the `{{ form.media }}` tag: ``` <link href="/static/autocomplete_light/vendor/select2/dist/css/select2.css" type="text/css" media="all" rel="stylesheet" /> <link href="/static/autocomplete_light/select2.css" type="text/css" media="all" rel="stylesheet" /> <script type="text/javascript" src="/static/autocomplete_light/jquery.init.js"></script> <script type="text/javascript" src="/static/autocomplete_light/autocomplete.init.js"></script> <script type="text/javascript" src="/static/autocomplete_light/vendor/select2/dist/js/select2.full.js"></script> <script type="text/javascript" src="/static/autocomplete_light/select2.js"></script> ``` If not, **Are you referring to the correct form variable?** The `form` in `{{ form.media }}` should refer to whatever name you have given to your form in the context being passed to your page template.
Android 12 Splash Screen API - Increasing SplashScreen Duration I am learning Android's new SplashScreen API introduced with Android 12. I have so far gotten it to work on my Emulator and Google Pixel 4A, but I want to increase its duration. In my Splash Screen I do not want a fancy animation, I just want a static drawable. I know, I know (sigh) some of you might be thinking, that I should not increase the duration and I know there are several good arguments in favor of not doing so. However, for me the duration of a splash screen with a non animated drawable is so brief (less than a second), I think it raises an accessibility concern, especially so since it cannot be disabled (ironically). Simply, the organization behind the product or its brand/product identity cannot be properly absorbed or recognized by a new user at that size and in that time, rendering the new splash screen redundant. I see the property windowSplashScreenAnimationDuration in the theme for the splash screen (shown below), but this has no effect on the duration presumably because I am not animating. ``` <style name="Theme.App.starting" parent="Theme.SplashScreen"> <!--Set the splash screen background, animated icon, and animation duration.--> <item name="windowSplashScreenBackground">@color/gold</item> <!-- Use windowSplashScreenAnimatedIcon to add either a drawable or an animated drawable. One of these is required--> <item name="windowSplashScreenAnimatedIcon">@drawable/accessibility_today</item> <item name="windowSplashScreenAnimationDuration">300</item> <!--# Required for--> <!--# animated icons--> <!--Set the theme of the activity that directly follows your splash screen--> <item name="postSplashScreenTheme">@style/Theme.MyActivity</item> <item name="android:windowSplashScreenBrandingImage">@drawable/wculogo</item> </style> ``` Is there a straightforward way to extend the duration of a non animated splash screen?
As I was writing this question and almost ready to post it, I stumbled on the method setKeepOnScreenCondition (below) that belongs to the splashScreen that we must install in the onCreate of our main activity. I thought it seemed wasteful not to post this, given there are no other posts on this topic and no such similar answers to other related questions (as of Jan 2022).

```
SplashScreen splashScreen = SplashScreen.installSplashScreen(this);
splashScreen.setKeepOnScreenCondition(....);
```

Upon inspecting it I found this method receives an instance of the **splashScreen.KeepOnScreenCondition()** interface for which the implementation must supply the following method signature implementation:

```
public boolean shouldKeepOnScreen()
```

It seems this method will be called by the splash screen, which stays visible until the method returns false. This is where the light bulb moment I so love about programming occurred. What if I use a boolean initialised as true, and set it to false after a delay? That hunch turned out to work. Here is my solution. It seems to work and I thought it would be useful to others. Presumably instead of using a Handler for a delay, one could also use this to set the boolean after some process had completed.

```
package com.example.mystuff.myactivity;

import androidx.appcompat.app.AppCompatActivity;
import androidx.core.splashscreen.SplashScreen;
import android.os.Bundle;
import android.os.Handler;

public class MainActivity extends AppCompatActivity {
    private boolean keep = true;
    private final int DELAY = 1250;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // Handle the splash screen transition.
        SplashScreen splashScreen = SplashScreen.installSplashScreen(this);
        super.onCreate(savedInstanceState);

        // Keep returning true from shouldKeepOnScreen until ready to begin.
        splashScreen.setKeepOnScreenCondition(new SplashScreen.KeepOnScreenCondition() {
            @Override
            public boolean shouldKeepOnScreen() {
                return keep;
            }
        });

        Handler handler = new Handler();
        handler.postDelayed(runner, DELAY);
    }

    /** Will cause a second process to run on the main thread. */
    private final Runnable runner = new Runnable() {
        @Override
        public void run() {
            keep = false;
        }
    };
}
```

If you are into Java lambdas, an even nicer and more compact solution is as follows:

```
package com.example.mystuff.myactivity;

import androidx.appcompat.app.AppCompatActivity;
import androidx.core.splashscreen.SplashScreen;
import android.os.Bundle;
import android.os.Handler;

public class MainActivity extends AppCompatActivity {

    private boolean keep = true;
    private final int DELAY = 1250;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // Handle the splash screen transition.
        SplashScreen splashScreen = SplashScreen.installSplashScreen(this);
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Keep returning true from shouldKeepOnScreen until ready to begin.
        splashScreen.setKeepOnScreenCondition(() -> keep);

        Handler handler = new Handler();
        handler.postDelayed(() -> keep = false, DELAY);
    }
}
```

If you have comments or feedback (besides telling me I should not increase the duration of the splash screen), or a better way, please do comment or respond with additional answers.
Don't display subviews outside of UIBezierPath I have one UIView which is drawn with: ``` - (void)drawRect:(CGRect)rect { UIBezierPath *bezierPath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, self.widthOfLine, self.heightOfLine)]; [bezierPath fill]; } ``` and whose frame is centered within the iOS window. I want to place a bunch of smaller (5px by 5px) views within this view randomly, which I am doing fine with arc4random() function and drawing the views. However, I can't seem to figure out how to cut off all views outside the bezierPathWithOvalInRect, it will just display them everywhere in the first UIView's frame. Anyone know what to do for this? edit: For clarity, I have defined another view as: ``` - (void)drawRect:(CGRect)rect { UIBezierPath *drawPath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(self.xPosition, self.yPosition, self.width, self.width)]; [[UIColor whiteColor] setFill]; [drawPath fill]; } ``` and I am adding a large number of these to the first view as subviews, but I want them to not be displayed outside the oval bezier path. Is there a way to a) only add them within the oval bezier path as opposed to within the entire frame, or b) a way to clip all that are outside the oval from the view?
In order to clip all subviews to a particular UIBezierPath, you'll want to set the mask property of the view's layer. If you set layer.mask to a CAShapeLayer containing that path, then all subviews will be cropped to that path. I use the following code to do just that in my app:

```
// create your view that'll hold all the subviews that
// need to be clipped to the path
CGRect anyRect = ...;
UIView* clippedView = [[UIView alloc] initWithFrame:anyRect];
clippedView.clipsToBounds = YES;

// now create the shape layer that we'll use to clip
CAShapeLayer* maskingLayer = [CAShapeLayer layer];
[maskingLayer setPath:bezierPath.CGPath];
maskingLayer.frame = clippedView.bounds;

// set the mask property of the layer, and this will
// make sure that all subviews are only visible inside
// this path
clippedView.layer.mask = maskingLayer;

// now any subviews you add won't show outside of that path
UIView* anySubview = ...;
[clippedView addSubview:anySubview];
```
jquery.html() strange behavior with form I have the following HTML:

```
<div class="copy_me_text">
    <div>
        <input type="text" name="name" />
        <input type="hidden" name="id" />
    </div>
</div>
<div class="copy_me_hidden">
    <div>
        <input type="hidden" name="name" />
        <input type="hidden" name="id" />
    </div>
</div>
```

And the following JS code:

```
var $cloned_text = $('.copy_me_text').clone();
$cloned_text.find('input[name="name"]').val("SOMETHING");
$cloned_text.find('input[name="id"]').val("SOMETHING");
console.log($cloned_text.html());

var $cloned_hidden = $('.copy_me_hidden').clone();
$cloned_hidden.find('input[name="name"]').val("SOMETHING");
$cloned_hidden.find('input[name="id"]').val("SOMETHING");
console.log($cloned_hidden.html());
```

And the output is strange to me:

```
<div>
    <input name="name" type="text">
    <input value="SOMETHING" name="id" type="hidden">
</div>

<div>
    <input value="SOMETHING" name="name" type="hidden">
    <input value="SOMETHING" name="id" type="hidden">
</div>
```

I also created a jsFiddle [example](http://jsfiddle.net/xRjPm/1/). Is this correct behavior? I don't understand why the value of the `input type="text"` is not returned by the `.html()` function.
This is not a strange jQuery behavior, it's a strange DOM effect. [`jQuery.val()`](http://api.jquery.com/val/) does nothing more than set the `value` property of the `<input>` element. By "property", I mean the DOM *property* and not the node *attribute* - see [.prop() vs .attr()](https://stackoverflow.com/questions/5874652/prop-vs-attr) for the difference.

The [`.html()`](http://api.jquery.com/html/) method, which returns the [`innerHTML`](https://developer.mozilla.org/en-US/docs/DOM/element.innerHTML) serialisation of the DOM, is expected to show only attributes of the elements - their properties are irrelevant. This is the default behaviour, and when you want input values serialized you need to explicitly set them as attributes - `$input.attr("value", $input.prop("value"))`.

So why did simple `val()` work on the hidden input elements? The reason is the HTML specification. There are [reflecting IDL attributes](http://www.w3.org/TR/html5/infrastructure.html#reflecting-content-attributes-in-idl-attributes), where the DOM property is coupled with the node attribute, but the `value` attribute is none of those. Yet, the [value IDL attribute](http://www.w3.org/TR/html5/forms.html#dom-input-value) has special modes, in which it reacts differently. To cite the spec:

> The attribute is in one of the following modes, which define its behavior:
>
> **value**
>
> > On getting, it must return the [current value](http://www.w3.org/TR/html5/forms.html#concept-fe-value) of the element. On setting, it must set the [element's value](http://www.w3.org/TR/html5/forms.html#concept-fe-value) to the new value, set the element's dirty value [… and do a lot of other stuff].
>
> **default**
>
> > On getting, if the element has a [value attribute](http://www.w3.org/TR/html5/forms.html#attr-input-value), it must return that attribute's value; otherwise, it must return the empty string. On setting, it must set the element's [value attribute](http://www.w3.org/TR/html5/forms.html#attr-input-value) to the new value.
>
> ["default/on" and "filename" modes]

Spot the difference? And now, let's have a look at the [states](http://www.w3.org/TR/html5/forms.html#states-of-the-type-attribute) of the [type attribute](http://www.w3.org/TR/html5/forms.html#attr-input-type). And really, if we check the *Bookkeeping details* sections, we can notice that in the [`hidden` state](http://www.w3.org/TR/html5/forms.html#hidden-state-(type=hidden)),

> The `value` IDL attribute applies to this element and is in mode **default**

while in all other (textual) states the mode is "value".

---

## TL;DR:

(Only) On `<input type="hidden">` elements, setting the `value` DOM property (`input.value = …`, `$input.val(…)`, `$input.prop("value", …)`) also sets the `value` attribute and makes it serializable for `innerHTML`/`.html()`.
Maximum characters that can be stuffed into raw\_input() in Python For an InterviewStreet challenge, we have to be able to accomodate for a 10,000 character String input from the keyboard, but when I copy/paste a 10k long word into my local testing, it cuts off at a thousand or so. What's the official limit in Python? And is there a way to change this? Thanks guys Here's the challenge by-the-by: <http://www.interviewstreet.com/recruit/challenges/solve/view/4e1491425cf10/4edb8abd7cacd>
Are you sure of the fact that your *10k* long word doesn't contain newlines? --- > > **[raw\_input](http://docs.python.org/library/functions.html#raw_input)([prompt])** > > > If the prompt argument is present, it is written to standard output without a trailing newline. The function then reads a line from input, converts it to a string (stripping a trailing newline), and returns that. When EOF is read, EOFError is raised. > > > ... > > > If the [readline](http://docs.python.org/library/readline.html#module-readline) module was loaded, then raw\_input() will use it to provide elaborate line editing and history features. > > > There is no maximum limit (in python) of the buffer returned by `raw_input`, and as I tested some big length of input to *stdin* I could not reproduce your result. I tried to search the web for information regarding this but came up with nothing that would help me answer your question. **my tests** ``` :/tmp% python -c 'print "A"*1000000' | python -c 'print len (raw_input ())'; 1000000 :/tmp% python -c 'print "A"*210012300' | python -c 'print len (raw_input ())'; 210012300 :/tmp% python -c 'print "A"*100+"\n"+"B"*100' | python -c 'print len (raw_input ())'; 100 ```
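For what it's worth, the truncation described in the question is exactly what a stray newline in the pasted text would produce. The small `first_line` helper below is hypothetical (not part of the question's code); it just mimics on a plain string what `raw_input` does when it hits a newline:

```python
# raw_input() stops at the first newline; everything after it stays in the
# input buffer for the next read. This helper mimics that behaviour on a
# plain string, so the effect can be seen without touching stdin.
def first_line(pasted):
    return pasted.split("\n", 1)[0]

word = "A" * 10000
print(len(first_line(word)))    # 10000 -- no built-in length limit is hit

broken = "A" * 1000 + "\n" + "B" * 9000
print(len(first_line(broken)))  # 1000 -- the "read" stops at the newline
```

So if a pasted "word" cuts off at a thousand characters or so, checking it for embedded line breaks is the first thing to try.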
Why does my Arduino serial port gives me semi-random numbers? I have a basic problem with my Arduino Uno. My example code gets a number over Serial port and should print it back. ``` int incomingByte = 0; void setup() { Serial.begin(9600); Serial.println("Hello World"); } void loop() { if (Serial.available() > 0) { // read the incoming byte: incomingByte = Serial.read(); // say what you got: Serial.print("I received: "); Serial.println(incomingByte, DEC); } } ``` When I send 0, I receive 48. ``` 0->48 1->49 2->50 3->51 a->97 b->98 A->65 ``` So why doesn't it send the same numbers back to me?
In your program the output is the [ASCII equivalent](http://ee.hawaii.edu/~tep/EE160/Book/chap4/subsection2.1.1.1.html) of the input that the Arduino receives. The ASCII equivalent of 0 is 48, 1 is 49, a is 97, A is 65 and so on. The reason is that you are storing your input in the `incomingByte` variable (`incomingByte = Serial.read();`), but you declared `incomingByte` as an `int`. When a character is assigned to an integer variable, its corresponding ASCII value is stored in the [integer variable](https://www.programiz.com/c-programming/examples/ASCII-value-character). So if you want to print the character that you send to the Arduino, change `int incomingByte = 0;` to `char incomingByte;`.
Getting async/await to work with superagent I'm missing something in my syntax here but not sure what: I'm trying to nock this. But I get `expected undefined to equal 'pXVCJ9.eyJpYXQ'` **Test.js** ``` describe('User API', () => { let email, password, requestBody beforeEach(() => { email = 'someperson@gmail.com' password = 'password' requestBody = { session: '05833a20-4035', token: 'pXVCJ9.eyJpYXQ' } nock('https://someurl/') .get('users/sessions') .reply(200, requestBody) }) it('returns user session for login', async () => { const data = await UserApi.login(email, password) expect(data.token).to.equal(requestBody.token) expect(data.session).to.equal(requestBody.session) }) }) ``` **UserApi.js** ``` import request from 'superagent' export const endpoint = 'https://someurl/' const login = (email, password) => async () => { try { return await request .get(`${endpoint}/users/sessions`) .send({ email: email, password: password }) .set('Accept', 'application/json') } catch (err) { //todo: implement this } } export default { login } ```
There's a few issues: ``` const login = (email, password) => async () => { ... } ``` This makes `login` a function that returns an async function. It should be an async function itself, so this: ``` const login = async (email, password) => { ... } ``` Next, in your `nock` setup, you're not prefixing the path with a `/`, so `nock` won't be able to match the request. Easy fix: ``` nock('https://someurl') .get('/users/sessions') .reply(...); ``` Lastly, `superagent` will return the response data as `response.body` or, in your case, `data.body`. So the assertions should look like this: ``` expect(data.body.token).to.equal(requestBody.token) expect(data.body.session).to.equal(requestBody.session) ```
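To see why the first fix matters, compare the two shapes in isolation. The snippet below uses toy functions only (no superagent, no network, the `token` value is made up): the curried version returns a function, so awaiting it never yields response data, which is exactly why `data.token` came back `undefined`.

```javascript
// Original shape: a function that RETURNS an async function.
// Calling it hands you back a function, not a promise of data.
const loginCurried = (email, password) => async () => ({ token: 'abc' });

// Fixed shape: an async function itself, which returns a promise of data.
const loginFixed = async (email, password) => ({ token: 'abc' });

async function demo() {
  const a = await loginCurried('me@example.com', 'pw');
  console.log(typeof a);  // 'function' -- the inner function, never called
  console.log(a.token);   // undefined

  const b = await loginFixed('me@example.com', 'pw');
  console.log(b.token);   // 'abc'
}

demo();
```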
jQuery link for bootstrap I have noticed that the bootstrap provides a jQuery link that you can use to run the JavaScript files. What I would like to know is if it matters what jQuery link you use. Currently, the code provided by the bootstrap site is: ``` <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script> ``` Would it hurt if I used these links below to replace the above link: ``` <script src="http://code.jquery.com/jquery-1.11.2.min.js"></script> <script src="http://code.jquery.com/jquery-migrate-1.2.1.min.js"></script> ```
It will not create any issue, because the jQuery library version is the same: as per your links, both files are the same version, just loaded from a different server. The main thing is that you use the jQuery version suggested by Bootstrap:

> jQuery (necessary for Bootstrap's JavaScript plugins).

You can also download the files to your local project folder and use them from there; it will work fine.

[jQuery Migrate Plugin](https://jquery.com/download/#jquery-migrate-plugin)

```
<script src="http://code.jquery.com/jquery-migrate-1.2.1.min.js"></script>
```

> The jQuery Migrate Plugin simplifies the transition from older versions of jQuery. The plugin restores deprecated features and behaviors so that older code will still run properly on jQuery 1.9 and later.
How to sort a list [MVar a] using the a values? How can I sort a [MVar a] list, using the a inside each MVar as the element to compare in the sort? E.g.:

```
sortList :: [MVar Int] -> [MVar Int]
```

I cannot think of a way without breaking other threads.

**Update:** I need to sort the list because I want to implement a reference-counted MVar and always return the one with the least references. Something like:

```
getLeastUsed :: [MVar Int] -> MVar Int
getLeastUsed = head . sortList
```

And in the thread I want to increase the 'Int'.

**Update:** I was told in the answers that the right signature needs IO, because of MVar.
First of all, your type signature is impossible; reading an `MVar` is not referentially transparent (as should hopefully be obvious--that's what they're *for*!). This has two consequences: - Your sort function must return an `IO` action - The list will be sorted according to the values seen when each `MVar` was read; not only may it be invalid by the time you use the list, it may change halfway through such that the first value is out of date before you read the last value. The former is unavoidable, and assuming the latter is acceptable for your purposes, you can do essentially what @hammar demonstrated. However, given that the sorting will be quickly out of date, and that you seem to be mostly interested in the least element, you might find something like this more directly useful since there's little use to sorting otherwise: ``` import Control.Applicative import Data.List import Data.Ord leastUsed :: [MVar Int] -> IO (MVar Int) leastUsed vars = fst . minimumBy (comparing snd) . zip vars <$> mapM readMVar vars ```
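If you do still want the whole list sorted rather than just the least-used element, the same pattern works with `sortBy`, under the same caveat that the ordering is only a snapshot of the values that happened to be read:

```haskell
import Control.Concurrent.MVar
import Data.List (sortBy)
import Data.Ord (comparing)

-- Sort the MVars by the Int each one currently holds. The result is only
-- a snapshot: another thread may change the values right afterwards.
sortByValue :: [MVar Int] -> IO [MVar Int]
sortByValue vars =
  map fst . sortBy (comparing snd) . zip vars <$> mapM readMVar vars
```

This reads every `MVar` once, pairs each with its value, sorts on the value, and then discards the values again.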
Borrowed value does not live long enough requiring static lifetime I'm getting this error with a sample code at this [rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=54dbdbd06309bdc3ea52e956ea0fe6dc) ``` Compiling playground v0.0.1 (/playground) error[E0597]: `text` does not live long enough --> src/main.rs:34:38 | 34 | let item = NotLongEnough { text: &text }; | ^^^^^ borrowed value does not live long enough 35 | let mut wrapper = Container { buf: Vec::new() }; 36 | wrapper.add(Box::new(item)); | -------------- cast requires that `text` is borrowed for `'static` ... 40 | } | - `text` dropped here while still borrowed error: aborting due to previous error For more information about this error, try `rustc --explain E0597`. error: could not compile `playground` To learn more, run the command again with --verbose. ``` The contents are: ``` trait TestTrait { fn get_text(&self) -> &str; } #[derive(Copy, Clone)] struct NotLongEnough<'a> { text: &'a str, } impl<'a> TestTrait for NotLongEnough<'a> { fn get_text(&self) -> &str { self.text } } struct Container { buf: Vec<Box<dyn TestTrait>>, } impl Container { pub fn add(&mut self, item: Box<dyn TestTrait>) { self.buf.push(item); } pub fn output(&self) { for item in &self.buf { println!("{}", item.get_text()); } } } fn main() -> Result<(), Box<dyn std::error::Error>> { let text = "test".to_owned(); let item = NotLongEnough { text: &text }; let mut wrapper = Container { buf: Vec::new() }; wrapper.add(Box::new(item)); wrapper.output(); Ok(()) } ``` I have no clue why `cast requires that text is borrowed for 'static` Could someone please help me with this? I've no idea what I've done wrong.
[TLDR: Fixed version](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1f01ad6df3f54e35cd2b9d7da4129df9) The problem is in your `Container` definition: ``` struct Container { buf: Vec<Box<dyn TestTrait>>, } ``` The statement `dyn TestTrait` is equivalent to `dyn TestTrait + 'static`, meaning that your trait objects must not contain any references with lifetime less than `'static`. In order to fix the problem, you have to replace that trait bound with a less strict one: ``` struct Container<'a> { buf: Vec<Box<dyn TestTrait + 'a>>, } ``` Now instead of `'static`, the container requires `'a`. And you have to apply that change to the implementation as well: ``` pub fn add(&mut self, item: Box<dyn TestTrait + 'a>) { // notice the new trait-bound self.buf.push(item); } ``` Relevant resources: - [Trait bounds](https://doc.rust-lang.org/rust-by-example/scope/lifetime/lifetime_bounds.html)
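For completeness, here is a sketch of the question's example with just those two changes applied and nothing else altered. Note that the `impl` block has to carry the lifetime parameter as well:

```rust
trait TestTrait {
    fn get_text(&self) -> &str;
}

struct NotLongEnough<'a> {
    text: &'a str,
}

impl<'a> TestTrait for NotLongEnough<'a> {
    fn get_text(&self) -> &str {
        self.text
    }
}

// The container is now generic over the lifetime of its items,
// so the boxed trait objects no longer need to satisfy 'static.
struct Container<'a> {
    buf: Vec<Box<dyn TestTrait + 'a>>,
}

impl<'a> Container<'a> {
    pub fn add(&mut self, item: Box<dyn TestTrait + 'a>) {
        self.buf.push(item);
    }

    pub fn output(&self) {
        for item in &self.buf {
            println!("{}", item.get_text());
        }
    }
}

fn main() {
    let text = "test".to_owned();
    let item = NotLongEnough { text: &text };
    let mut wrapper = Container { buf: Vec::new() };
    wrapper.add(Box::new(item));
    wrapper.output(); // prints "test"
}
```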
TabControl with Close and Add Button I'm trying to make a tab control have a "x" (close button) and "+" (new tab button). I found a solution to add an [`x button`](https://stackoverflow.com/questions/3183352/close-button-in-tabcontrol); the tab looks like this now:

[![enter image description here](https://i.stack.imgur.com/xNiZp.png)](https://i.stack.imgur.com/xNiZp.png)

But I want to add a `+` where that black circle is right now. I have no idea how. I tried drawing on the `Paint` event of the last tab, like this:

```
var p = tabs.TabPages[tabs.TabCount - 1];
p.Paint += new PaintEventHandler(tab_OnDrawPage);

private void tab_OnDrawPage(object sender, PaintEventArgs e)
{
    // e.ClipRectangle.
    e.Graphics.DrawString("+", new Font("verdana", 10, FontStyle.Bold), Brushes.Black,
        e.ClipRectangle.X + 10, e.ClipRectangle.Y + 10);
}
```

But it didn't draw anything. I guess it has to do with the positions I passed to the `DrawString()` call, but I don't know the proper ones to use. I used +10 to draw it away from the last tab. How do I fix that? I haven't done any custom drawing myself; I'm learning it.
As an option you can add an extra tab which shows an add icon ![Add](https://i.stack.imgur.com/CP4Y1.png) and check when the user clicks on that tab, then insert a new [`TabPage`](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.tabpage?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) before it. You can also prevent selecting that extra tab simply by using the [`Selecting`](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.tabcontrol.selecting?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) event of the [`TabControl`](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.tabcontrol?WT.mc_id=DT-MVP-5003235&view=netframework-4.8). This way the last tab acts only like an add button for you, like IE and Chrome.

![Tab with close and add button](https://i.stack.imgur.com/OkUVL.png)

**Implementation Details**

We will use an owner-drawn tab to show close icons on each tab and an add icon on the last tab. We use [`DrawItem`](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.tabcontrol.drawitem?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) to draw the close and add icons, `MouseDown` to handle clicks on the close and add buttons, `Selecting` to prevent selecting the last tab, and `HandleCreated` to adjust the tab width. You can see all implementation settings and code below.

**Initialization**

Set padding and [`DrawMode`](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.tabcontrol.drawmode?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) and assign event handlers for the `DrawItem`, `MouseDown`, `Selecting` and `HandleCreated` events.
```
this.tabControl1.Padding = new Point(12, 4);
this.tabControl1.DrawMode = TabDrawMode.OwnerDrawFixed;
this.tabControl1.DrawItem += tabControl1_DrawItem;
this.tabControl1.MouseDown += tabControl1_MouseDown;
this.tabControl1.Selecting += tabControl1_Selecting;
this.tabControl1.HandleCreated += tabControl1_HandleCreated;
```

**Handle click on close button and add button**

You can handle the `MouseDown` or `MouseClick` event and check if the last tab rectangle contains the mouse-clicked point, then insert a tab before the last tab. Otherwise, check if one of the close buttons contains the clicked location, then close the tab whose close button was clicked:

```
private void tabControl1_MouseDown(object sender, MouseEventArgs e)
{
    var lastIndex = this.tabControl1.TabCount - 1;
    if (this.tabControl1.GetTabRect(lastIndex).Contains(e.Location))
    {
        this.tabControl1.TabPages.Insert(lastIndex, "New Tab");
        this.tabControl1.SelectedIndex = lastIndex;
    }
    else
    {
        for (var i = 0; i < this.tabControl1.TabPages.Count; i++)
        {
            var tabRect = this.tabControl1.GetTabRect(i);
            tabRect.Inflate(-2, -2);
            var closeImage = Properties.Resources.DeleteButton_Image;
            var imageRect = new Rectangle(
                (tabRect.Right - closeImage.Width),
                tabRect.Top + (tabRect.Height - closeImage.Height) / 2,
                closeImage.Width,
                closeImage.Height);
            if (imageRect.Contains(e.Location))
            {
                this.tabControl1.TabPages.RemoveAt(i);
                break;
            }
        }
    }
}
```

**Prevent selecting the last tab**

To prevent selecting the last tab, you can handle the `Selecting` event of the control and, if the tab being selected is the last tab, cancel the event:

```
private void tabControl1_Selecting(object sender, TabControlCancelEventArgs e)
{
    if (e.TabPageIndex == this.tabControl1.TabCount - 1)
        e.Cancel = true;
}
```

**Draw Close Button and Add Button**

To draw the close button and add button, you can handle the `DrawItem` event. I used these icons for the add ![Add](https://i.stack.imgur.com/CP4Y1.png) and close ![Close](https://i.stack.imgur.com/IICDM.png) buttons.
```
private void tabControl1_DrawItem(object sender, DrawItemEventArgs e)
{
    var tabPage = this.tabControl1.TabPages[e.Index];
    var tabRect = this.tabControl1.GetTabRect(e.Index);
    tabRect.Inflate(-2, -2);
    if (e.Index == this.tabControl1.TabCount - 1)
    {
        var addImage = Properties.Resources.AddButton_Image;
        e.Graphics.DrawImage(addImage,
            tabRect.Left + (tabRect.Width - addImage.Width) / 2,
            tabRect.Top + (tabRect.Height - addImage.Height) / 2);
    }
    else
    {
        var closeImage = Properties.Resources.DeleteButton_Image;
        e.Graphics.DrawImage(closeImage,
            (tabRect.Right - closeImage.Width),
            tabRect.Top + (tabRect.Height - closeImage.Height) / 2);
        TextRenderer.DrawText(e.Graphics, tabPage.Text, tabPage.Font,
            tabRect, tabPage.ForeColor, TextFormatFlags.Left);
    }
}
```

**Adjust Tab width**

To adjust the tab width and let the last tab have a smaller width, you can handle the `HandleCreated` event and send a [`TCM_SETMINTABWIDTH`](https://learn.microsoft.com/en-us/windows/win32/controls/tcm-setmintabwidth?WT.mc_id=DT-MVP-5003235) message to the control, specifying the minimum size allowed for the tab width:

```
[DllImport("user32.dll")]
private static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wp, IntPtr lp);
private const int TCM_SETMINTABWIDTH = 0x1300 + 49;

private void tabControl1_HandleCreated(object sender, EventArgs e)
{
    SendMessage(this.tabControl1.Handle, TCM_SETMINTABWIDTH, IntPtr.Zero, (IntPtr)16);
}
```

**Download**

You can download the code or clone the repository here:

- [r-aghaei/TabControlWithCloseButtonAndAddButton](https://github.com/r-aghaei/TabControlWithCloseButtonAndAddButton)
Retrofit2 java.lang.NoClassDefFoundError: okhttp3/Call$Factory in JAVA

> I am not developing an Android application; I'm just writing some Java code to use the Imgur API services.

```
public interface ImgurAPI {
    String server = "https://api.imgur.com";
    String BASE64 = "base64";

    @POST("/3/upload")
    void postImage(
        @Header("Authorization") String auth,
        @Query("title") String title,
        @Query("description") String description,
        @Query("type") String type,
        @Body String base64Image,
        Callback<ImageResponse> cb
    );
}
```

Main:

```
public static void main(String[] args) {
    try {
        Retrofit retrofit = new Retrofit.Builder()
            .baseUrl(ImgurAPI.server)
            .build();

        ImgurAPI myAPI = retrofit.create(ImgurAPI.class);
        String base64Image = new ImageReader(PATH).getBase64String();
        myAPI.postImage(AUTH, "Hi", "Test", ImgurAPI.BASE64, base64Image, new MyCallBack());
    } catch (Exception err) {
        err.printStackTrace();
    }
}
```

and the exception thrown:

```
Exception in thread "main" java.lang.NoClassDefFoundError: okhttp3/Call$Factory
    at Main.main(Main.java:14)
Caused by: java.lang.ClassNotFoundException: okhttp3.Call$Factory
```

I found lots of solutions for Android, so I'm wondering if Retrofit is usable from plain Java. Thanks.
I solved it. If you are writing plain Java and you downloaded the Retrofit2 jar, it might not include certain libraries that are bundled automatically in an Android Studio project, so you have to download them manually:

1. OkHttp3 [OkHttp3 3.0.0 Jar download](http://central.maven.org/maven2/com/squareup/okhttp3/okhttp/3.0.0-RC1/okhttp-3.0.0-RC1.jar)
2. Okio [Okio 1.6.0 Jar download](http://central.maven.org/maven2/com/squareup/okio/okio/1.6.0/okio-1.6.0.jar)
3. Retrofit-converter gson [Retrofit converter gson-2 beta3 Jar download](http://central.maven.org/maven2/com/squareup/retrofit2/converter-gson/2.0.0-beta3/converter-gson-2.0.0-beta3.jar) (If you want to convert other types of data, just download the other converter jars from [Retrofit](https://github.com/square/retrofit/tree/master/retrofit-converters).)
4. Gson [Gson 2.2.3 Jar download](http://www.java2s.com/Code/JarDownload/gson/gson-2.2.3.jar.zip)

Import the jar files and it should work. Make sure you have all of them: [![enter image description here](https://i.stack.imgur.com/mLfa0.png)](https://i.stack.imgur.com/mLfa0.png)
Add font-awesome icon to a series of buttons I know I can specifically add a Font Awesome icon to a button like this:

```
<a href="#" class="btn btn-default">Some text <i class="fa fa-chevron-right"></i></a>
```

But is there a way I can create a CSS class so that the icon is automatically inserted for all buttons?
This is with Glyphicon and FontAwesome. Add your unicode (found in the CSS file) in the content and create a class name that is appropriate for your situation or just add it to the .btn or .btn-default, but change the css to reflect that: **FontAwesome: <http://jsbin.com/seguf/2/edit> \*** **Glyphicon: <http://jsbin.com/seguf/1/edit>** HTML ``` <a href="#" class="btn btn-default btn-plus">Some text</a> ``` <http://jsbin.com/seguf/2/edit> CSS ``` .btn-arrow:after { font-family: 'FontAwesome'; content: '\f054'; padding-left: 5px; position: relative; font-size: 90%; } .btn-plus:after { font-family: 'Glyphicons Halflings'; content: '\2b'; padding-left: 5px; position: relative; top: 1px; font-size: 90%; } ```
Use jquery datepicker in wordpress I want the datepicker to show in a form in my WordPress template page, but it doesn't work. This is the code I have in the child theme's functions.php:

```
function modify_jquery() {
    if (!is_admin()) {
        // comment out the next two lines to load the local copy of jQuery
        wp_deregister_script('jquery');
        wp_register_script('jquery', 'http://ajax.googleapis.com/ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js', false, '2.1.1');
        wp_enqueue_script('jquery');
    }
}
add_action('init', 'modify_jquery');

function load_jquery_ui_google_cdn() {
    global $wp_scripts;
    wp_enqueue_script('jquery-ui-core');
    wp_enqueue_script('jquery-ui-slider');
    // get the jquery ui object
    $queryui = $wp_scripts->query('jquery-ui-core');
    // load the jquery ui theme
    $urlui = "https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.8/jquery-ui.min.js";
    wp_enqueue_style('jquery-ui-smoothness', $urlui, false, null);
}
add_action('wp_enqueue_scripts', 'load_jquery_ui_google_cdn');
```

Then I have this in page-mypage.php:

```
<script>
$(function() {
    $( "#datepicker" ).datepicker();
});
</script>

...other code...

Date: <input type="text" id="datepicker">

...other code...
</form>
```

But it doesn't show. Could you help me find what's wrong?
The code you're using to load jQuery is invalid and you aren't loading datepicker (jQuery UI Datepicker) at all. I've posted a few answers regarding the correct way to use jQuery in WordPress so I'll provide a working example and then a link if you'd like to read more. Replace the code you've inserted into your child theme functions.php with: ``` /** * Load jQuery datepicker. * * By using the correct hook you don't need to check `is_admin()` first. * If jQuery hasn't already been loaded it will be when we request the * datepicker script. */ function wpse_enqueue_datepicker() { // Load the datepicker script (pre-registered in WordPress). wp_enqueue_script( 'jquery-ui-datepicker' ); // You need styling for the datepicker. For simplicity I've linked to the jQuery UI CSS on a CDN. wp_register_style( 'jquery-ui', 'https://code.jquery.com/ui/1.12.1/themes/smoothness/jquery-ui.css' ); wp_enqueue_style( 'jquery-ui' ); } add_action( 'wp_enqueue_scripts', 'wpse_enqueue_datepicker' ); ``` Finally replace your usage with: ``` <script> jQuery(document).ready(function($) { $("#datepicker").datepicker(); }); </script> ``` [jquery requires the word Jquery instead of $](https://stackoverflow.com/questions/25320795/jquery-requires-the-word-jquery-instead-of/25320826#25320826)
Is there a performance degradation when we ALWAYS use nullable value types instead of value types? Is there a performance degradation when we ALWAYS use nullable value types instead of value types?
As [Mitch Wheat](https://stackoverflow.com/users/16076/mitch-wheat) pointed out above, no, you should not worry about this. I'm going to give you the short answer reason now, and later I'm going to help you discover more about what you're asking:

> Write your code to be correct. Profile *after* writing so that you find the points that are causing you grief.

When you have code that constantly uses Nullable and you have performance reasons and you profile it and you can't find the problem yourself, *then* come ask us how to make it faster. But no, the overhead of using Nullable is for all intents and purposes not degrading.

# Discover more about what you're asking:

- [Performance surprise with "as" and nullable types](https://stackoverflow.com/questions/1583050/performance-surprise-with-as-and-nullable-types)
- [Why shouldn't I always use nullable types in C#](https://stackoverflow.com/questions/830592/why-shouldnt-i-always-use-nullable-types-in-c)
- [C# Performance gain returning a Nullable Type from a SqlDataReader](https://stackoverflow.com/questions/3253943/c-performance-gain-returning-a-nullable-type-from-a-sqldatareader)
- [Alternatives to nullable types in C#](https://stackoverflow.com/questions/876632/alternatives-to-nullable-types-in-c)
- [Casting (int?)null vs. new int?() - Which is better?](https://stackoverflow.com/questions/2243625/casting-intnull-vs-new-int-which-is-better)
- [Boxing / Unboxing Nullable Types - Why this implementation?](https://stackoverflow.com/questions/1387597/boxing-unboxing-nullable-types-why-this-implementation)

Now, having read all those pages, I hope you feel more enlightened.
switch statement javascript I have searched quite awhile for this, but still I am none-the-wiser. I am trying to do a simple calculator for a website which calculates prices based on type of material. I can't seem to get it working. It keeps stepping straight to the default case of the switch statement below. ``` <div class="form-group"> <div class="btn-group" data-toggle="buttons" class="col-lg-3 col-lg-offset-10"> <label class="btn btn-default **active**"> <input type="radio" name="inputWalls" id="inputWalls" value="block" checked>Block</label> <label class="btn btn-default"> <input type="radio" name="inputWalls" id="inputWalls" value="brick">Brick </label> <label class="btn btn-default"> <input type="radio" name="inputWalls" id="inputWalls" value="stone">Stone </label> ``` JAVASCRIPT ``` function quote(){ var blockPrice = 0; var brickPrice = 0; var stonePrice = 0; var myWidth = parseFloat(document.getElementById("widthInput").value); var myHeight = parseFloat(document.getElementById("heightInput").value); var radioButtons = document.getElementsByName("inputWalls"); var totalArea = myWidth + myHeight; switch(radioButtons){ case "block" : blockPrice = totalArea * 1.90; alert(blockPrice); break; case "brick" : { brickPrice = totalArea * 3.80; alert(brickPrice); break; } case "stone" : { stonePrice = totalArea * 4.90; alert(stonePrice); break; } default : alert('you have not selected material'); } } ```
First warning flag: You really shouldn't have more than one thing with the same ID. Once you have a unique ID for each option, you can then just check the `checked` property of each element. ``` <label class="btn btn-default **active**"> <input type="radio" name="inputWalls" id="inputWalls_block" value="block" checked>Block</label> <label class="btn btn-default"> <input type="radio" name="inputWalls" id="inputWalls_brick" value="brick">Brick </label> ``` followed by: ``` document.getElementById("inputWalls_block").checked ``` Obviously, you could wrap this up so you don't have to have loads of duplicate code: ``` function getCheckedSection() { var sections=["block", "brick", "stone"]; for ( var i=0;i< sections.length; i++ ) { if ( document.getElementById("inputWalls_"+sections[i]).checked ) { return sections[i]; } } } ``` Of course, this being JavaScript and with you interacting with the DOM, you can save yourself a lot of time and effort using JQuery, but if you're trying to figure out how these things work without a supporting library, that is useful knowledge to have. You can find a lot more helpful stuff about this in [the answers to this question.](https://stackoverflow.com/questions/1423777/javascript-how-can-i-check-whether-a-radio-button-is-selected)
Multi Select Box only shows last selected value I have a multi-select box which allows me to select multiple values and then add them to a second select box when I click on an arrow. Now this part works fine and I can select multiple values. However when I pass the values of the second select box to variable, to a PHP page the variable only shows the value of the last item in the list and not all of them. Javascript Code ``` <script type="text/javascript"> $().ready(function() { $('#add').click(function() { return !$('#select1 option:selected').remove().appendTo('#select2'); }); $('#remove').click(function() { return !$('#select2 option:selected').remove().appendTo('#select1'); }); }); </script> ``` HTML CODE ``` <form action="insert_email_visitor_list1.php" method="get" name="form1" target="_self" id="form1"> <select multiple id="select1" class="mulitiselect" name="select1"> <option value="1">Date/Time</option> <option value="2">Company</option> <option value="3">Location</option> <option value="4">Nof Pages / Visit</option> <option value="5">Traffic Source</option> <option value="6">Search Term</option> <option value="7">Report</option> <option value="8">Classification</option> <option value="9">Owner</option> <option value="10">Phone</option> <option value="11">Town</option> <option value="12">City</option> <option value="12">Country</option> <option value="12">Hostname</option> </select> <a href="#" id="add"><img src="images/add-arrow.png" alt="" /></a> selected Columns <a href="#" id="remove"><img src="images/remove-arrow.png" alt="" /></a> <select multiple id="select2" name="select2" class="mulitiselect"></select> <div class="bottom-img"><img src="images/popup-bottomimg.png" alt="" /></div> <button> <img src="images/info-icon.png" alt="" /> </button> <input type="submit" name="send" id="send" value="Submit" /> </form> ``` PHP CODE ``` $select2 = $_GET['select2']; echo "$select2"; ``` Basically I am just after some advice as to if i am doing this the correct 
way? Thanks
The field you pass to PHP must be named with a `[]`, e.g. `select2[]`. Without that, PHP will treat the passed-in values as a single value, and only the LAST value passed in would ever be put into $\_GET. Using `[]` on the name tells PHP to treat it as a multi-valued field and it will create an array in $\_GET, e.g.

```
$_GET['select2'][0] => 'first option selected';
$_GET['select2'][1] => 'second option selected';
```

etc... Remember that the `multiple` parameter is a purely client-side thing, and PHP has no way of knowing that it was a multi-select. It will simply see the equivalent of

```
select2=foo&select2=bar&select2=baz
```

arrive in the POST/GET, and select the last value passed in.
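To see what the browser actually submits, here is a small Python sketch (standard library only; the query string is illustrative) that parses the repeated parameter the way `parse_qs` does, and then mimics PHP's last-value-wins rule for field names without `[]`:

```python
from urllib.parse import parse_qs

# What a multi-select named "select2" (no []) submits: the name repeats.
query_string = "select2=foo&select2=bar&select2=baz"

# parse_qs keeps every repeated value in a list per field name.
parsed = parse_qs(query_string)
all_values = parsed["select2"]

# PHP, seeing a plain name without [], keeps only the LAST repeated value.
php_style = {name: values[-1] for name, values in parsed.items()}
last_value = php_style["select2"]

print(all_values)   # ['foo', 'bar', 'baz']
print(last_value)   # 'baz'
```

Naming the field `select2[]` is what makes PHP behave like `parse_qs` here and hand you the full list of selections.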
generating multiple graphs on Web Browser using Matplotlib and mpld3

I am plotting two graphs. I am trying to plot multiple matplotlib graphs on my web browser using the mpld3 library. I am successful in plotting the first graph with the help of the `mpld3.show()` function, but the other graph is not being loaded. Can anyone help me out on how to get both graphs in the browser? I am sure it is a single line of code that will solve the issue.

```
import matplotlib.pyplot as plt, mpld3
x = [1,2,3]
y = [2,3,4]

#firstgraph
plt.xlabel("xlabel 1")
plt.ylabel("ylabel 1")
plt.title("Plot 1")
plt.legend()
plt.bar(x,y, label = 'label for bar', color = 'b')
mpld3.show()

#secondgraph
x = [1,2,3]
y = [5,3,1]

plt.xlabel("xlabel 2")
plt.ylabel("ylabel 2")
plt.title("Plot 2")
plt.bar(x,y, color = 'r')
mpld3.show()
```
As with `plt.show()` the execution of the script will stop while the output is served to the browser. You may press `Ctrl`+`C` to stop the server, such that the code continues with the second figure. The second figure will then be shown in a new browser tab. On the other hand you may also serve both figures simultaneously to the browser by creating their html representation individually and joining the html to be served. ``` import matplotlib.pyplot as plt import mpld3 from mpld3._server import serve #firstgraph x = [1,2,3] y = [2,3,4] fig1 = plt.figure() plt.xlabel("xlabel 1") plt.ylabel("ylabel 1") plt.title("Plot 1") plt.legend() plt.bar(x,y, label = 'label for bar', color = 'b') #secondgraph x = [1,2,3] y = [5,3,1] fig2 =plt.figure() plt.xlabel("xlabel 2") plt.ylabel("ylabel 2") plt.title("Plot 2") plt.bar(x,y, color = 'r') # create html for both graphs html1 = mpld3.fig_to_html(fig1) html2 = mpld3.fig_to_html(fig2) # serve joined html to browser serve(html1+html2) ```
Does Swift have any native concurrency and multi-threading support?

I'm writing a Swift client to communicate with a server (written in C) on an embedded system. It's not iOS/OSX related as I'm using the recently released Ubuntu version. Does Swift have any native support for concurrency? I'm aware that Apple discourages developers from using threads and encourages handing tasks over to dispatch queues via GCD. The issue is that GCD seems to be only on Darwin (and NSThread is a part of Cocoa). For example, C++11 and Java have threads and concurrency as a part of their standard libraries. I understand that platform-specific stuff like POSIX on Unix could be used under some sort of C wrapper, but for me that really ruins the point of using Swift in the first place (clean, easy to understand code etc.).
## 2021 came and... Starting with Swift 5.5, [more options are available](https://docs.swift.org/swift-book/LanguageGuide/Concurrency.html) like async/await programming models and actors. There is still no direct manipulation of threads, and this is (as of today) a design choice. > > If you’ve written concurrent code before, you might be used to working with threads. The concurrency model in Swift is built on top of threads, but you don’t interact with them directly. An asynchronous function in Swift can give up the thread that it’s running on, which lets another asynchronous function run on that thread while the first function is blocked. > > > ## Original 2015 answer Quoting from [Swift's GitHub](https://github.com/apple/swift-evolution), there's a readme for "evolutions" : > > **Concurrency**: Swift 3.0 relies entirely on platform concurrency primitives (libdispatch, Foundation, pthreads, etc.) for concurrency. Language support for concurrency is an often-requested and potentially high-value feature, but is too large to be in scope for Swift 3.0. > > > I guess this means no language-level "primitives" for threading are in the pipeline for the foreseeable future.
Why are other folder paths also added to system PATH with SetX and not only the specified folder path?

I have a batch file which I am calling from C++ using `system("name.bat")`. In that batch file I am trying to read the value of a registry key. Calling the batch file from C++ causes `set KEY_NAME=HKEY_LOCAL_MACHINE\stuff` to fail. However, when I directly run the batch file (double clicking it), it runs fine. Not sure what I am doing wrong.

Batch file:

```
set KEY_NAME=HKEY_LOCAL_MACHINE\SOFTWARE\Ansoft\Designer\2014.0\Desktop
set VALUE_NAME=InstallationDirectory

REG QUERY %KEY_NAME% /v %VALUE_NAME%
```

C++ file:

```
int main(void)
{
    system("CALL C:\\HFSS\\setup_vars.bat");
    return 0;
}
```

---

UPDATE 1: I found out that the key is actually in the 64-bit registry, and I was building my C++ solution as a 32-bit. Once I fixed that, it found the registry key fine. Now I am having an issue adding that path to my **PATH** variable. Instead of creating a system variable, it is creating a user variable **PATH** and adding it there. Running from command line works. Code:

```
set KEY_NAME=HKLM\SOFTWARE\Ansoft\Designer\2014.0\Desktop\
set VALUE_NAME=InstallationDirectory

FOR /F "usebackq skip=1 tokens=1,2*" %%A IN (`REG QUERY %KEY_NAME% /v %VALUE_NAME%`) DO (
    set ValueName=%%A
    set ValueType=%%B
    set ValueValue=%%C
)

if defined ValueName (
    @echo Value Value = %ValueValue%
) else (
    @echo %KEY_NAME%\%VALUE_NAME% not found.
)

:: Set PATH Variable
set path_str=%PATH%
set addPath=%ValueValue%;
echo %addPath%
echo %ValueValue%
echo %PATH%| find /i "%addPath%">NUL
if NOT ERRORLEVEL 1 (
    SETX PATH "%PATH%
) else (
    SETX PATH "%PATH%;%addPath%;" /M
)
```

---

UPDATE 2: I moved the placement of the option /M and it is now adding to the right **PATH** variable. However, when I am doing this, it is adding the **PATH** more than once (3 times) and then it is also adding a path to a visual studio amd64 folder. I'm not sure why that is happening.
The Windows kernel library function [CreateProcess](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessw) creates for the new process a copy of the entire environment table of the process that starts it. Therefore, on start of your C++ application, your application gets the environment table including **PATH** from the parent process, which is *Windows Explorer* or, in your case, *Visual Studio*. And this **PATH** is copied for `cmd.exe` on start of the batch file.

Taking the entire process tree into account, from the Windows desktop down to the batch file, several copies of **PATH** have been made, and some processes may have appended something to their local copy of **PATH**, as *Visual Studio* has done, or even removed paths from **PATH**.

What you do now with `SETX PATH "%PATH%` is append this local copy of **PATH**, already modified by the parent processes in the process tree, completely to the system **PATH** without checking for duplicate paths.

Much better would be to throw away all code using the local copy of **PATH** and instead read the value of the system **PATH**, check whether the path you want to add is already in the system **PATH**, and append it to the system **PATH** using `setx` only if it is not. And this should be done without expanding the environment variables in the system **PATH**, like `%SystemRoot%\System32` to `C:\Windows\System32`.

---

Here is the batch code required for your task, tested on Windows XP SP3 x86, Windows 7 SP1 x64 and Windows 11 22H2.
```
@echo off
setlocal EnableExtensions DisableDelayedExpansion
set "KeyName=HKLM\SOFTWARE\Ansoft\Designer\2014.0\Desktop"
set "ValueName=InstallationDirectory"

for /F "skip=2 tokens=1,2*" %%G in ('%SystemRoot%\System32\reg.exe query "%KeyName%" /v "%ValueName%" 2^>nul') do (
    if /I "%%G" == "%ValueName%" (
        set "PathToAdd=%%I"
        if defined PathToAdd goto GetSystemPath
    )
)
echo Error: Could not find non-empty value "%ValueName%" under key
echo %KeyName%
echo(
endlocal
pause
exit /B

:GetSystemPath
for /F "skip=2 tokens=1,2*" %%G in ('%SystemRoot%\System32\reg.exe query "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /v "Path" 2^>nul') do (
    if /I "%%G" == "Path" (
        set "SystemPath=%%I"
        if defined SystemPath goto CheckPath
    )
)
echo Error: System environment variable PATH not found with a non-empty value.
echo(
endlocal
pause
exit /B

:CheckPath
setlocal EnableDelayedExpansion

rem The folder path to add must contain \ (backslash) as directory
rem separator and not / (slash) and should not end with a backslash.
set "PathToAdd=!PathToAdd:/=\!"
if "!PathToAdd:~-1!" == "\" set "PathToAdd=!PathToAdd:~0,-1!"

if "!SystemPath:~-1!" == ";" (set "Separator=") else set "Separator=;"
set "PathCheck=!SystemPath!%Separator%"

rem Do nothing if the folder path to add, without or with a backslash
rem at the end and with a semicolon appended for the entire folder path
rem check, is already in the system PATH value. This code does not work
rem if the path to add contains an equal sign, which is fortunately
rem very rare.
if not "!PathCheck:%PathToAdd%;=!" == "!PathCheck!" goto EndBatch
if not "!PathCheck:%PathToAdd%\;=!" == "!PathCheck!" goto EndBatch

set "PathToSet=!SystemPath!%Separator%!PathToAdd!"
set "UseSetx=1"
if not "!PathToSet:~1024,1!" == "" set "UseSetx="
if not exist %SystemRoot%\System32\setx.exe set "UseSetx="

if defined UseSetx (
    %SystemRoot%\System32\setx.exe Path "!PathToSet!" /M >nul
) else (
    set "ValueType=REG_EXPAND_SZ"
    if "!PathToSet:%%=!" == "!PathToSet!" set "ValueType=REG_SZ"
    %SystemRoot%\System32\reg.exe ADD "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /f /v Path /t !ValueType! /d "!PathToSet!" >nul
)

:EndBatch
endlocal
endlocal
```

The batch code above uses a simple case-insensitive string substitution and a case-sensitive string comparison to check if the folder path to append is already present in the system **PATH**. This works only if it is well known how the folder path was added before and the user has not modified this folder path in **PATH** in the meantime. For a safer method of checking if **PATH** contains a folder path see the answer on [How to check if directory exists in %PATH%?](https://stackoverflow.com/a/8046515/1012053) written by [Dave Benham](https://stackoverflow.com/users/1012053/dbenham).

**Note 1:** Command `setx` is by default not available on Windows XP.

**Note 2:** Command `setx` truncates values longer than 1024 characters to 1024 characters. For that reason the batch file uses command `reg` to replace the **system PATH** in the Windows registry if either `setx` is not available or the new path value is too long for `setx`. The disadvantage of using `reg` is that the [WM\_SETTINGCHANGE](https://learn.microsoft.com/en-us/windows/win32/winmsg/wm-settingchange) message is not sent to all top-level windows informing Windows Explorer running as Windows desktop and other applications about this change of a system environment variable. So the user must restart Windows, which is in any case best done after changing persistently stored Windows system environment variables.

The batch script was tested with **PATH** currently containing a folder path with an exclamation mark and a folder path enclosed in double quotes, which is necessary only if the folder path contains a semicolon.
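The heart of the script is the duplicate check; outside batch syntax the same idea reads more plainly. Here is a short Python sketch (the function and its normalization rules are illustrative, not part of the batch code):

```python
def append_path(current: str, new_dir: str) -> str:
    """Append new_dir to a semicolon-separated PATH value unless it is
    already present (case-insensitive, ignoring a trailing backslash)."""
    parts = [p for p in current.split(";") if p]
    present = {p.rstrip("\\").lower() for p in parts}
    if new_dir.rstrip("\\").lower() in present:
        return current  # already there, avoid piling up duplicates
    return ";".join(parts + [new_dir])

unchanged = append_path(r"C:\Windows;C:\Tools", "C:\\Tools\\")
extended = append_path(r"C:\Windows;C:\Tools", r"C:\Ansoft")
print(unchanged)  # C:\Windows;C:\Tools
print(extended)   # C:\Windows;C:\Tools;C:\Ansoft
```

The point is the same as in the batch code: normalize first, check for membership, and only then append.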
To understand the commands used and how they work, open a [command prompt](https://www.howtogeek.com/235101/10-ways-to-open-the-command-prompt-in-windows-10/) window, execute there the following commands, and read the displayed help pages for each command, entirely and carefully. - `echo /?` - `endlocal /?` - `exit /?` - `for /?` - `goto /?` - `if /?` - `pause /?` - `reg /?` - `reg add /?` - `reg query /?` - `set /?` - `setlocal /?` - `setx /?`
How do I restore /usr/bin/env after overwriting it with start-tor-browser file

I was installing Tor and wanted to access it directly from the terminal, so I was trying to copy `start-tor-browser` to `/usr/bin`. But by mistake, I replaced the `/usr/bin/env` file with the `start-tor-browser` file. What should I do now?
`/usr/bin/env` is provided by the [`coreutils`](https://packages.ubuntu.com/xenial/coreutils) package. [karel's way using a single command](https://askubuntu.com/a/976033/22949) will likely work, but I suggest replacing `/usr/bin/env` with a symbolic link to `/bin/busybox` first, in case a removal or installation script attempts to use `env` (which is generally assumed to be present). First move the wrong file you put there aside, or delete it if you know you don't need that file. This renames it from `env` to `env.old`: ``` sudo mv /usr/bin/env{,.old} ``` Then make `/usr/bin/env` a symbolic link to `/bin/busybox`. When run with the name `env`, [`busybox`](http://manpages.ubuntu.com/manpages/xenial/en/man1/busybox.1.html) will behave as the [`env`](http://manpages.ubuntu.com/manpages/xenial/en/man1/env.1.html) command: ``` sudo ln -s /bin/busybox /usr/bin/env ``` Then perform the reinstallation. The symbolic link you have just created will be used if necessary, will have no ill effect if it wasn't needed, and will be replaced automatically with the proper `env` executable installed from the `coreutils` package: ``` sudo apt --reinstall install coreutils ``` In general, [if you need to know what package provides a file](https://askubuntu.com/questions/481/how-do-i-find-the-package-that-provides-a-file), you can run `dpkg -S */path/to/file*` (in this case, `dpkg -S /usr/bin/env`), which works so long as the package is installed even if the file itself has been damaged or deleted. Or you can use the *Search the contents of packages* section of [Ubuntu Packages Search](https://packages.ubuntu.com/), which does not require that you use the full path; you would just select your Ubuntu release and type in `env`.
jpa custom connection pool I've successfully integrated hibernate in my web app. I was happy with my `persistence.xml` configuration ``` <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0"> <persistence-unit name="PU"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <properties> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider" /> <property name="hibernate.hbm2ddl.auto" value="validate" /> <property name="hibernate.dialect" value="org.hibernate.dialect.SQLiteDialect" /> <property name="hibernate.show_sql" value="false" /> <property name="hibernate.format_sql" value="true" /> <property name="hibernate.connection.url" value="jdbc:sqlite:/tmp/database.db" /> <property name="hibernate.connection.driver_class" value="org.sqlite.JDBC" /> </properties> </persistence-unit> </persistence> ``` Then I decided to use **HikariCp** connection pooler after reading [this](https://docs.jboss.org/hibernate/orm/4.1/devguide/en-US/html/ch01.html#d5e60) > > The built-in connection pool is not intended for production environments > > > With [this](https://github.com/brettwooldridge/HikariCP/wiki/Hibernate4) example I managed to make it work partially with the new `persistence.xml` ``` <persistence-unit name="PU"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <properties> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider" /> <property name="hibernate.hbm2ddl.auto" value="validate" /> <property name="hibernate.dialect" value="org.hibernate.dialect.SQLiteDialect" /> <property name="hibernate.show_sql" value="true" /> <property name="hibernate.format_sql" value="true" /> <property name="hibernate.connection.provider_class" value="com.zaxxer.hikari.hibernate.HikariConnectionProvider" /> <property name="hibernate.hikari.minimumPoolSize" value="20" /> <!-- <property name="hibernate.hikari.maximumPoolSize" value="100" /> --> <property 
name="hibernate.hikari.idleTimeout" value="30000" />
        <property name="hibernate.hikari.dataSourceClassName" value="org.sqlite.SQLiteDataSource" />
        <property name="hibernate.hikari.dataSource.url" value="jdbc:sqlite:/tmp/database.db" />
        <!-- <property name="hibernate.hikari.dataSource.user" value="" />
        <property name="hibernate.hikari.dataSource.password" value="" /> -->
    </properties>
</persistence-unit>
```

But I get an error if I try to set *minimumPoolSize*, *maximumPoolSize*, *user* and *password*. If I comment them out everything works great.

> 
> org.hibernate.HibernateException: java.lang.RuntimeException: java.beans.IntrospectionException: Method not found: setMinimumPoolSize
> 

How can I configure JPA to use Hibernate with a HikariCP pool? I prefer not to scatter Hibernate-specific stuff in my code as I want to keep the ORM layer abstract. I found a lot of confusing materials and got more questions than answers. How are persistence.xml, hibernate.properties and hibernate.cfg.xml related to each other? What is JNDI and how to use it? And what is [this](http://renren.io/questions/4878609/hikaricp-lots-of-database-connections) bean configuration?
Sorry for the original question. After more research I found the solution. This is working `persistence.xml`. I think `user` and `password` can't be set in sqlite. *minimumPoolSize* -> *minimumIdle* ``` <properties> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider" /> <property name="hibernate.hbm2ddl.auto" value="validate" /> <property name="hibernate.dialect" value="org.hibernate.dialect.SQLiteDialect" /> <property name="hibernate.show_sql" value="false" /> <property name="hibernate.format_sql" value="true" /> <property name="hibernate.connection.provider_class" value="com.zaxxer.hikari.hibernate.HikariConnectionProvider" /> <property name="hibernate.hikari.minimumIdle" value="20" /> <property name="hibernate.hikari.maximumPoolSize" value="100" /> <property name="hibernate.hikari.idleTimeout" value="30000" /> <property name="hibernate.hikari.dataSourceClassName" value="org.sqlite.SQLiteDataSource" /> <property name="hibernate.hikari.dataSource.url" value="jdbc:sqlite:/tmp/database.db" /> </properties> ``` As @neil and @zeus suggested here is another configuration using JNDI ``` <properties> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider" /> <property name="hibernate.hbm2ddl.auto" value="validate" /> <property name="hibernate.dialect" value="org.hibernate.dialect.SQLiteDialect" /> <property name="hibernate.show_sql" value="true" /> <property name="hibernate.format_sql" value="true" /> <property name="hibernate.connection.datasource" value="java:comp/env/jdbc/SQLiteHikari"/> </properties> ``` src->main->webapp->META-INF->context.xml ``` <Context antiJARLocking="true" path="/nbs"> <Resource name="jdbc/SQLiteHikari" auth="Container" factory="com.zaxxer.hikari.HikariJNDIFactory" type="javax.sql.DataSource" minimumIdle="20" maximumPoolSize="100" connectionTimeout="300000" dataSourceClassName="org.sqlite.SQLiteDataSource" dataSource.url="jdbc:sqlite:/tmp/database.db" /> </Context> ```
How to use custom classes with Apache Spark (pyspark)?

I have written a class implementing a classifier in python. I would like to use Apache Spark to parallelize classification of a huge number of datapoints using this classifier.

1. I'm set up using Amazon EC2 on a cluster with 10 slaves, based off an ami that comes with python's Anaconda distribution on it. The ami lets me use IPython Notebook remotely.
2. I've defined the class BoTree in a file called BoTree.py on the master in the folder /root/anaconda/lib/python2.7/ which is where all my python modules are
3. I've checked that I can import and use BoTree.py when running command line spark from the master (I just have to start by writing import BoTree and my class BoTree becomes available)
4. I've used spark's /root/spark-ec2/copy-dir.sh script to copy the /python2.7/ directory across my cluster.
5. I've ssh-ed into one of the slaves and tried running ipython there, and was able to import BoTree, so I think the module has been sent across the cluster successfully (I can also see the BoTree.py file in the .../python2.7/ folder)
6. On the master I've checked I can pickle and unpickle a BoTree instance using cPickle, which I understand is pyspark's serializer.

**However**, when I do the following:

```
import BoTree
bo_tree = BoTree.train(data)
rdd = sc.parallelize(keyed_training_points) #create rdd of 10 (integer, (float, float) tuples
rdd = rdd.mapValues(lambda point, bt = bo_tree: bt.classify(point[0], point[1]))
out = rdd.collect()
```

Spark fails with the error (just the relevant bit I think):

```
  File "/root/spark/python/pyspark/worker.py", line 90, in main
    command = pickleSer.loads(command.value)
  File "/root/spark/python/pyspark/serializers.py", line 405, in loads
    return cPickle.loads(obj)
ImportError: No module named BoroughTree
```

Can anyone help me? Somewhat desperate... Thanks
Probably the simplest solution is to use `pyFiles` argument when you create `SparkContext` ``` from pyspark import SparkContext sc = SparkContext(master, app_name, pyFiles=['/path/to/BoTree.py']) ``` Every file placed there will be shipped to workers and added to `PYTHONPATH`. If you're working in an interactive mode you have to stop an existing context using `sc.stop()` before you create a new one. Also make sure that Spark worker is actually using Anaconda distribution and not a default Python interpreter. Based on your description it is most likely the problem. To set `PYSPARK_PYTHON` you can use `conf/spark-env.sh` files. On a side note copying file to `lib` is a rather messy solution. If you want to avoid pushing files using `pyFiles` I would recommend creating either plain Python package or Conda package and a proper installation. This way you can easily keep track of what is installed, remove unnecessary packages and avoid some hard to debug problems.
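To see why shipping the module matters at all: `pickle` serializes class instances by reference (module name plus class name), so whatever unpickles them must be able to import that module. The stdlib-only sketch below (no Spark required; `botree_demo` is a made-up module name) reproduces the same failure mode:

```python
import pickle
import sys
import types

# Build a throwaway module holding a class, to stand in for BoTree.py.
mod = types.ModuleType("botree_demo")
exec("class BoTree:\n    def classify(self, x, y):\n        return x + y",
     mod.__dict__)
sys.modules["botree_demo"] = mod

# Pickling an instance stores only a reference to "botree_demo.BoTree",
# not the class body itself.
blob = pickle.dumps(mod.BoTree())

# Drop the module, as if unpickling on a worker that lacks BoTree.py.
del sys.modules["botree_demo"]
try:
    pickle.loads(blob)
    failed_module = None
except ImportError as exc:  # ModuleNotFoundError on Python 3
    failed_module = exc.name

print("unpickle failed for module:", failed_module)
```

This is what happens on the executors: the pickled command arrives, but importing the defining module fails because the file is not importable there, hence `pyFiles` (or `sc.addPyFile`) as the fix.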
Problems with authentication with Laravel 5

I am trying to login a user with Laravel 5. Here is my controller:

```
public function postLogin(LoginRequest $request){
    $remember = ($request->has('remember'))? true : false;

    $auth = Auth::attempt([
        'email'=>$request->email,
        'password'=>$request->password,
        'active'=>true
    ],$remember);

    if($auth){
        return redirect()->intended('/home');
    }
    else{
        return redirect()->route('login')->with('fail','user not identified');
    }
}
```

When I enter wrong credentials, everything works fine, but when I enter the right ones, I get this error message:

```
ErrorException in EloquentUserProvider.php line 110:
Argument 1 passed to Illuminate\Auth\EloquentUserProvider::validateCredentials() must be an instance of Illuminate\Contracts\Auth\Authenticatable, instance of App\Models\User given, called in C:\xampp\htdocs\Projects\Pedagogia\Admin.pedagogia\vendor\laravel\framework\src\Illuminate\Auth\Guard.php on line 390 and defined
```

I don't see what I did wrong.
> 
> Argument 1 passed
> to Illuminate\Auth\EloquentUserProvider::validateCredentials() **must be
> an instance of Illuminate\Contracts\Auth\Authenticatable, instance of
> App\Models\User given.**
> 

The `validateCredentials()` method of the `Illuminate\Auth\EloquentUserProvider` class expects an instance of `Illuminate\Contracts\Auth\Authenticatable`, but you are passing it an instance of `App\Models\User`.

To put it simply, your user model needs to implement the `Illuminate\Contracts\Auth\Authenticatable` interface to work with Laravel's authentication scaffolding. Your `App\Models\User` model should look like this:

```
use Illuminate\Contracts\Auth\Authenticatable;
use Illuminate\Auth\Authenticatable as AuthenticatableTrait;

class User extends \Eloquent implements Authenticatable
{
    use AuthenticatableTrait;
}
```
Automating PDF generation What would be a solid tool to use for generating PDF reports? Particularly, we are interested in creating interactive PDFs that have video, like the example found [here](http://blogs.adobe.com/pdfdevjunkie/2008/09/best_practices_video_in_acroba.html). Right now we are using Python and [reportlab](http://www.reportlab.com/) to generate PDFs, but have not explored the library completely (mostly because the license pricing is a little prohibitive) We have been looking at the Adobe's [SDK](http://www.adobe.com/devnet/acrobat/overview.html) and [iText](http://itextpdf.com/) libraries but it's hard to say what the capabilities of either are. The ability to generate a document from a template PDF would be a plus. Any pointers or comments will be appreciated. Thanks,
Recently, I needed to create PDF reports for a Django application; a ReportLab license was available, but I ended up choosing LaTeX. The benefit of this approach is that we could use [Django templates](https://docs.djangoproject.com/en/dev/ref/templates/) to generate the LaTeX source, and not get bogged down writing lots of code for the many reports we needed to create. Plus, we could take advantage of the relatively much more concise LaTeX syntax (which does have its quirks and is not suitable for every purpose).

[This snippet](http://djangosnippets.org/snippets/102/) provides a general overview of the approach. I found it necessary to make some changes, which I have provided at the end of this answer. The main addition is detection for `Rerun LaTeX` messages, which indicates an additional pass is required.

Usage is as simple as:

```
def my_view(request):
    pdf_stream = process_latex(
        'latex_template.tex',
        context=RequestContext(request, {'context_obj': context_obj})
    )
    return HttpResponse(pdf_stream, content_type='application/pdf')
```

It is possible to embed videos in LaTeX generated PDFs, however I do not have any experience with it. [Here](http://pages.uoregon.edu/noeckel/PDFmovie.html) is a top [Google result](http://www.google.com/search?q=latex+video+pdf).

This solution does require spawning a new process (`pdflatex`), so if you want a pure Python solution keep looking.

```
import os
from subprocess import Popen, PIPE
from tempfile import NamedTemporaryFile
from django.template import loader, Context


class LaTeXException(Exception):
    pass


def process_latex(template, context={}, type='pdf', outfile=None):
    """
    Processes a template as a LaTeX source file.
    Output is either returned or stored in outfile.
    At the moment only pdf output is supported.
""" t = loader.get_template(template) c = Context(context) r = t.render(c) tex = NamedTemporaryFile() tex.write(r) tex.flush() base = tex.name names = dict((x, '%s.%s' % (base, x)) for x in ( 'log', 'aux', 'pdf', 'dvi', 'png')) output = names[type] stdout = None if type == 'pdf' or type == 'dvi': stdout = pdflatex(base, type) elif type == 'png': stdout = pdflatex(base, 'dvi') out, err = Popen( ['dvipng', '-bg', '-transparent', names['dvi'], '-o', names['png']], cwd=os.path.dirname(base), stdout=PIPE, stderr=PIPE ).communicate() os.remove(names['log']) os.remove(names['aux']) # pdflatex appears to ALWAYS return 1, never returning 0 on success, at # least on the version installed from the Ubuntu apt repository. # so instead of relying on the return code to determine if it failed, # check if it successfully created the pdf on disk. if not os.path.exists(output): details = '*** pdflatex output: ***\n%s\n*** LaTeX source: ***\n%s' % ( stdout, r) raise LaTeXException(details) if not outfile: o = file(output).read() os.remove(output) return o else: os.rename(output, outfile) def pdflatex(file, type='pdf'): out, err = Popen( ['pdflatex', '-interaction=nonstopmode', '-output-format', type, file], cwd=os.path.dirname(file), stdout=PIPE, stderr=PIPE ).communicate() # If the output tells us to rerun, do it by recursing over ourself. if 'Rerun LaTeX.' in out: return pdflatex(file, type) else: return out ```
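If you want a feel for the render-then-compile flow without pulling in Django at all, the templating half can be sketched with just the standard library (the document body and placeholder names below are made up); the resulting string would then be written to a temporary `.tex` file and handed to `pdflatex`, much like `process_latex()` does:

```python
from string import Template

# A tiny illustrative LaTeX document with $-style placeholders.
latex_template = Template(r"""\documentclass{article}
\begin{document}
Report for $customer, total: $total
\end{document}
""")

# Render the LaTeX source; this string is what pdflatex would compile.
source = latex_template.substitute(customer="ACME", total="42.00")
print(source)
```

Django templates earn their keep once you need loops, filters, or template inheritance across many reports; for a one-off document this keeps the dependency surface minimal.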
Does Malloc only use the heap if requested memory space is large? Whenever you study the memory allocation of processes you usually see it outlined like this: ![enter image description here](https://i.stack.imgur.com/wn0oN.jpg) So far so good. But then you have the sbrk() system call which allows the program to change the upper limit of its **data section**, and it can also be used to simply check where that limit is with sbrk(0). Using that function I found the following patterns: **Pattern 1 - Small malloc** I run the following program on my Linux machine: ``` #include <stdio.h> #include <stdlib.h> #include <unistd.h> int globalVar; int main(){ int localVar; int *ptr; printf("localVar address (i.e., stack) = %p\n",&localVar); printf("globalVar address (i.e., data section) = %p\n",&globalVar); printf("Limit of data section = %p\n",sbrk(0)); ptr = malloc(sizeof(int)*1000); printf("ptr address (should be on stack)= %p\n",&ptr); printf("ptr points to: %p\n",ptr); printf("Limit of data section after malloc= %p\n",sbrk(0)); return 0; } ``` And the output is the following: ``` localVar address (i.e., stack) = 0xbfe34058 globalVar address (i.e., data section) = 0x804a024 Limit of data section = 0x91d9000 ptr address (should be on stack)= 0xbfe3405c ptr points to: 0x91d9008 Limit of data section after malloc= 0x91fa000 ``` As you can see the allocated memory region was right above the old data section limit, and after the malloc that limit was pushed upward, so the allocated region is actually inside the new data section. **Question 1**: Does this mean that small mallocs will allocate memory in the data section and not use the heap at all? 
**Pattern 2 - Big Malloc**

If you increase the requested memory size on line 15:

```
ptr = malloc(sizeof(int)*100000);
```

you will now see the following output:

```
localVar address (i.e., stack) = 0xbf93ba68
globalVar address (i.e., data section) = 0x804a024
Limit of data section = 0x8b16000
ptr address (should be on stack)= 0xbf93ba6c
ptr points to: 0xb750b008
Limit of data section after malloc= 0x8b16000
```

As you can see, here the limit of the data section has not changed; instead the allocated memory region is in the middle of the gap section, between the data section and the stack.

**Question 2**: Is the large malloc actually using the heap here?

**Question 3**: Any explanation for this behavior? I find it a bit insecure, because in the first example (small malloc), even after you free the allocated memory you'll still be able to use the pointer and access that memory without getting a seg fault, as it will be inside your data section, and this could lead to hard-to-detect bugs.

**Update with Specs**: Ubuntu 12.04, 32-bit, gcc version 4.6.3, Linux kernel 3.2.0-54-generic-pae.

Update 2: Rodrigo's answer below solved this mystery. [This Wikipedia link](http://en.wikipedia.org/wiki/Data_segment) also helped.
First of all, the only way to be absolutely sure of what happens is to read the source code of `malloc`. Or even better, step through it with the debugger. But anyway, here is my understanding of these things:

1. The system call `sbrk()` is used to increase the size of the data section, all right. Usually, you will not call it directly; it will be called by the *implementation* of `malloc()` to increase the memory available for the heap.
2. The function `malloc()` does not allocate memory from the OS. It just splits the data section into pieces and assigns these pieces to whoever needs them. You use `free()` to mark a piece as unused and available for reassignment.
3. Point 2 is an oversimplification. At least in the glibc implementation (the C library used alongside GCC on Linux), `malloc()` allocates big blocks using `mmap()` with private, non-file-backed options. Thus, these blocks are outside of the data segment. Naturally, calling `free()` on such a block will call `munmap()`. What exactly counts as a *big block* depends on many details. See `man mallopt` for the gory details.

From that, you can guess what happens when you access freed memory:

1. If the block was *small*, the memory will still be there, so if you read it, nothing will happen. If you write to it, you may corrupt the internal heap structures, or the block may have been reused and you may corrupt some unrelated structure.
2. If the block was big, the memory has been unmapped, so any access will result in a segmentation fault, unless in the interim another big block happens to be allocated (or another thread calls `mmap()`) and the same address range happens to be reused, which is improbable.

**Clarification**

The term *data section* is used with two different meanings, depending on the context.

1. The `.data` section of the executable (linker point of view). It may also include `.bss` or even `.rdata`.
For the OS that means nothing: it just maps pieces of the program into memory with little regard for what they contain, other than the flags (read-only, executable...).
2. The *heap*, that block of memory that every process has, that is not read from the executable, and that can be grown using `sbrk()`.

You can see the difference with the following command, which prints the memory layout of a simple program (`cat`):

```
$ cat /proc/self/maps
08048000-08053000 r-xp 00000000 00:0f 1821106 /usr/bin/cat
08053000-08054000 r--p 0000a000 00:0f 1821106 /usr/bin/cat
08054000-08055000 rw-p 0000b000 00:0f 1821106 /usr/bin/cat
09152000-09173000 rw-p 00000000 00:00 0 [heap]
b73df000-b75a5000 r--p 00000000 00:0f 2241249 /usr/lib/locale/locale-archive
b75a5000-b75a6000 rw-p 00000000 00:00 0 
b75a6000-b774f000 r-xp 00000000 00:0f 2240939 /usr/lib/libc-2.18.so
b774f000-b7750000 ---p 001a9000 00:0f 2240939 /usr/lib/libc-2.18.so
b7750000-b7752000 r--p 001a9000 00:0f 2240939 /usr/lib/libc-2.18.so
b7752000-b7753000 rw-p 001ab000 00:0f 2240939 /usr/lib/libc-2.18.so
b7753000-b7756000 rw-p 00000000 00:00 0 
b7781000-b7782000 rw-p 00000000 00:00 0 
b7782000-b7783000 r-xp 00000000 00:00 0 [vdso]
b7783000-b77a3000 r-xp 00000000 00:0f 2240927 /usr/lib/ld-2.18.so
b77a3000-b77a4000 r--p 0001f000 00:0f 2240927 /usr/lib/ld-2.18.so
b77a4000-b77a5000 rw-p 00020000 00:0f 2240927 /usr/lib/ld-2.18.so
bfba0000-bfbc1000 rw-p 00000000 00:00 0 [stack]
```

The first line is the executable code (`.text` section). The second line is the read-only data (`.rdata` section) and some other read-only sections. The third line is the `.data` + `.bss` and some other writable sections.

The fourth line is the heap!

The next lines, those with a name, are memory-mapped files or shared objects. Those without a name are probably big malloc'ed blocks of memory (or private anonymous mmaps; the two are impossible to distinguish).

The last line is the stack!
Halving a list into sublists using pattern matching

I am trying to get this output for a given input:

```
> halve [1,2,3,4,5,6]
([1,2,3],[4,5,6])
```

I have solved this problem using this approach:

```
halve xs = ((take s xs), (drop s xs))
    where s = (length xs) `div` 2
```

I am a beginner in Haskell, and I want to learn how to solve this question using pattern matching. Thanks
You can make use of a variant of the [*hare and tortoise* algorithm](https://en.wikipedia.org/wiki/Cycle_detection#Floyd's_Tortoise_and_Hare). This algorithm basically runs over the list with two iterators: the *hare* taking two hops at a time, and the *tortoise* taking one hop at a time. When the hare reaches the end of the list, we know that the tortoise is halfway, and we can thus split the list in half: the list seen so far is the first half, and the list still to enumerate over is the second half.

An algorithm thus looks like:

```
half :: [a] -> ([a], [a])
half h = go h h
  where go (_:(_:hs)) (t:ts) = (..., ...)
          where (a, b) = go ...
        go _ (t:ts) = (..., ...)
        go _ [] = (..., ...)
```

with the `...` parts still to fill in.
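If you want to check your work after attempting the blanks yourself, one possible completion of the sketch (a spoiler, and only one of several valid ways to fill it in) is:

```haskell
half :: [a] -> ([a], [a])
half h = go h h
  where
    -- the hare takes two hops, the tortoise one; the tortoise's element
    -- belongs to the first half as long as the hare can still move
    go (_:_:hs) (t:ts) = (t : a, b)
      where (a, b) = go hs ts
    go _ (t:ts) = ([], t : ts)  -- hare ran out: the rest is the second half
    go _ []     = ([], [])
```

With this definition, `half [1,2,3,4,5,6]` yields `([1,2,3],[4,5,6])`; for odd-length lists the middle element lands in the second half, matching the `take`/`drop` version from the question.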
IBrokers: queueing up an order using R I'm trying to do a simple order through the Interactive Brokers API using R. For example, I'm trying to buy 1 share of IBM and short 1 share of MSFT. I cannot find any documentation on how to do this step in R. Does anyone have familiarity working in R with the TWS API? Thank you!
There is a pretty good package on CRAN called IBrokers, which wraps the C++ API for IB. You can find it on CRAN: <http://cran.r-project.org/web/packages/IBrokers/index.html>

Have a look at the vignettes for a good reference on how to get data and set orders:

About the general setup and receiving data: <http://cran.r-project.org/web/packages/IBrokers/vignettes/IBrokers.pdf>

Also, I can really recommend the cheat sheet: <http://cran.r-project.org/web/packages/IBrokers/vignettes/IBrokersREFCARD.pdf>

--

So, to place an order, use the *placeOrder* function, supplying it with the connection object (setting that up is described in the general setup vignette linked above):

```
placeOrder(twsconn = tws,
           Contract = twsSTK("IBM"),
           Order = twsOrder(reqIds(tws), "BUY", 1, "MKT"))
placeOrder(twsconn = tws,
           Contract = twsSTK("MSFT"),
           Order = twsOrder(reqIds(tws), "SELL", 1, "MKT"))
```

Here, both are market orders. I hope that gives you a starting point.
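Note that the snippets above assume an already-open connection object `tws`. A minimal setup (a sketch, assuming TWS is running locally with API connections enabled on its default port) looks like:

```r
library(IBrokers)

# TWS listens on port 7496 by default; IB Gateway uses 4001
tws <- twsConnect()

# ... placeOrder() calls go here ...

twsDisconnect(tws)
```

The connection details (enabling the API, trusted IPs, the port number) are configured inside TWS itself, as described in the IBrokers vignette.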
php Exception.getMessage() always returns nothing When I catch an exception in php and try to output some details, getMessage() invariably returns nothing. If I do a var\_dump(), I see the message that I would like to display. What am I doing wrong? ``` try { ... } catch (Exception $e) { echo "<p>Exception: " . $e->getMessage() . "</p>"; return; } ``` if I do var\_dump($e) I get the following output: > > object(ETWSException)#735 (10) { ["errorCode":protected]=> int(401) > ["errorMessage":protected]=> string(226) "HTTP/1.1 401 Unauthorized > Date: Fri, 21 Aug 2015 18:26:30 GMT Server: Apache WWW-Authenticate: > OAuth realm=<https://etws.etrade.com/,oauth_problem=token_expired> > Content-Length: 995 Content-Type: text/html;charset=utf-8 " > ["httpCode":protected]=> NULL ["message":protected]=> string(0) "" > ["string":"Exception":private]=> string(0) "" ["code":protected]=> > int(0) ["file":protected]=>[snip!] > > > I would think that getMessage() should display the contents of errorMessage. Well I tried $e->getErrorMessage() and *that* displays the expected message. Searching google for php exception getErrorMessage does not seem to show anything useful (all pages only seem to mention getMessage, not getErrorMessage). What gives?
The e-trade exception class is a mess. It implements its own constructor and does not set the correct values for the standard `Exception`. It expects you to use `$e->getErrorMessage()` to get the message. ``` <?php /** * E*TRADE PHP SDK * * @package PHP-SDK * @version 1.1 * @copyright Copyright (c) 2012 E*TRADE FINANCIAL Corp. * */ class ETWSException extends Exception { protected $errorCode; protected $errorMessage; protected $httpCode; /** * Constructor ETWSException * */ public function __construct($errorMessage, $errorCode = null, $httpCode = null, Exception $previous = null) { $this->errorMessage = $errorMessage; $this->errorCode = $errorCode; $this->httpCode = $httpCode; } /** * Gets the value of the errorCode property. * * @return * possible object is * {@link Integer } * */ public function getErrorCode() { return $this->errorCode; } /** * Gets the value of the errorMessage property. * * @return * possible object is * {@link String } * */ public function getErrorMessage() { return $this->errorMessage; } /** * Gets the value of the httpStatusCode property. * * @return * possible object is * {@link String } * */ public function getHttpCode() { return $this->httpCode; } } ?> ```
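Because the SDK's constructor never calls `parent::__construct()`, the base `Exception` message stays empty. If you are willing to patch the class (a hedged sketch, since it means editing third-party SDK code), forwarding the message to the parent constructor makes the standard `$e->getMessage()` work as well:

```php
<?php
// Replacement constructor for ETWSException: populate the base Exception
// so getMessage()/getCode()/getPrevious() behave as expected.
public function __construct($errorMessage, $errorCode = null, $httpCode = null, Exception $previous = null)
{
    parent::__construct((string) $errorMessage, (int) $errorCode, $previous);

    $this->errorMessage = $errorMessage;
    $this->errorCode = $errorCode;
    $this->httpCode = $httpCode;
}
```

If you'd rather not touch the SDK, stick with `$e->getErrorMessage()` as shown above.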
Does new char actually guarantee aligned memory for a class type? Is allocating a buffer via `new char[sizeof(T)]` guaranteed to allocate memory which is properly aligned for the type `T`, where all members of `T` has their natural, implementation defined, alignment (that is, you have not used the `alignas` keyword to modify their alignment). I have seen this guarantee made in a few answers around here but I'm not entirely clear how the standard arrives at this guarantee. 5.3.4-10 of the standard gives the basic requirement: essentially `new char[]` must be aligned to `max_align_t`. What I'm missing is the bit which says `alignof(T)` will always be a valid alignment with a maximum value of `max_align_t`. I mean, it seems obvious, but must the resulting alignment of a structure be at most `max_align_t`? Even point 3.11-3 says extended alignments may be supported, so may the compiler decide on its own a class is an over-aligned type?
> 
> What I'm missing is the bit which says `alignof(T)` will always be a valid alignment with a maximum value of `max_align_t`. I mean, it seems obvious, but must the resulting alignment of a structure be at most `max_align_t` ? Even point 3.11-3 says extended alignments may be supported, so may the compiler decide on its own a class is an over-aligned type ?
> 
> 

As noted by Mankarse, the best quote I could get is from **[basic.align]/3**:

> 
> A type having an extended alignment requirement is an over-aligned type. [ Note:
> every over-aligned type is or contains a class type to which extended alignment applies (possibly through a non-static data member). —end note ]
> 
> 

which seems to imply that extended alignment must be explicitly requested (and then propagates) but cannot arise spontaneously. Still, I would have preferred a clearer mention; the intent is obvious to a compiler-writer, and any other behavior would be insane.
Moving /home with LVM I currently have Ubuntu 16.04 Server installed on a machine with a small 96-GB SSD. That's no longer enough space for all of the users on the server, so I'd like to add a 1-TB HDD and move the `/home` folder to a new partition on this 1-TB drive explicitly. Originally, I had planned on doing this using the instructions [here](https://help.ubuntu.com/community/Partitioning/Home/Moving). However, on closer review of the system, I discovered that LVM is enabled: ``` Filesystem Size Used Avail Use% Mounted on udev 63G 0 63G 0% /dev tmpfs 13G 9.5M 13G 1% /run /dev/mapper/ubuntu--vg-root 98G 76G 18G 82% / tmpfs 63G 0 63G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 63G 0 63G 0% /sys/fs/cgroup /dev/sda2 473M 111M 338M 25% /boot /dev/sda1 512M 136K 512M 1% /boot/efi tmpfs 13G 0 13G 0% /run/user/1001 ``` Based on what I've read so far, I need to add the new drive to the `ubuntu-vg` volume group (as in steps 2-3 of [this answer](https://askubuntu.com/a/459176/424812)). But I'm unsure of how to proceed after this -- should I continue with [steps 4-5](https://askubuntu.com/a/459176/424812)? Or is there another way of explicitly moving just `/home` to the 1-TB HDD and leaving `/` on the SSD?
There are quite a few ways to do what you want, but the way I'd recommend is:

1. Partition the new disk. It can be one big LVM partition or you can set aside one or more non-LVM partitions for other purposes. I sometimes create multiple LVM partitions (aka physical volumes, or PVs) so that I can remove one or more of them from the LVM configuration in the future, should the need arise. Note that I do *not* recommend putting PV data structures directly on the disk without partitions, as shown in the LVM answer you reference. Although this is legal, it can lead to confusion because hard disks are usually partitioned.
2. Prepare the PV(s) for use by LVM with the `pvcreate` command, as in `sudo pvcreate /dev/sdb1`.
3. Add the new PV(s) to your existing volume group (VG) with `vgextend`, as in `sudo vgextend ubuntu-vg /dev/sdb1`. (The VG is named `ubuntu-vg`; the doubled hyphen you see in `/dev/mapper/ubuntu--vg-root` is device-mapper's escaping of the hyphen in that name.)
4. Type `sudo pvdisplay` to show statistics on your PVs, including their sizes.
5. Create a new logical volume (LV) with `lvcreate`, as in `sudo lvcreate -L 900G -n home ubuntu-vg /dev/sdb1`. Note that I've specified a syntax that enables you to set a size (via `-L`) and the PV on which the LV will be created (`/dev/sdb1`).
6. Type `sudo lvdisplay` to verify that you've created an LV that's an appropriate size. If not, you can resize it with `lvresize` or delete it with `lvremove` and try again.
7. Create a filesystem on your LV, as in `sudo mkfs -t ext4 /dev/mapper/ubuntu--vg-home`.
8. Mount the new LV somewhere convenient, as in `sudo mount /dev/mapper/ubuntu--vg-home /mnt`.
9. Copy your `/home` directory with `tar`, `cp`, `rsync`, or whatever tool you prefer.
10. Edit `/etc/fstab` to mount the new LV at `/home`.
11. Rename your current `/home` to some other name (say, `/home-orig`) and create a new empty `/home` directory to serve as a mount point.
12. Reboot and hope it works.
13. If it all looks good, delete the old `/home-orig` directory.
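For step 10, the new line in `/etc/fstab` might look like this (assuming the mapper name from the `mkfs` step above; verify yours with `ls /dev/mapper` first):

```
/dev/mapper/ubuntu--vg-home  /home  ext4  defaults  0  2
```

The final `2` tells `fsck` to check this filesystem at boot, after the root filesystem (which uses `1`).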
Steps 8-11 are very similar to equivalent steps in the procedure outlined in [the wiki page you reference,](https://help.ubuntu.com/community/Partitioning/Home/Moving) so I've gone light on the details for those steps. Note that, although I've done this sort of thing myself many times, the example commands I've shown are based on my reading of the relevant man pages, so I may have missed some detail. I recommend you review the man pages and, if you get an error, adjust once you figure out what's wrong. Also, I find Ubuntu's default LVM naming confusing, so I may have misinterpreted that detail, and you may need to adjust. Once you're done with this, you'll have unused space in your PV on your SSD. You can expand your root (`/`) LV into this space, expand the `/home` LV to span both disks, create another LV for some special purpose, etc. A variant of this procedure might be to leave your current `/home` LV where it is, create a new LV on the new disk, and begin using the new LV as spillover space. You note that your computer is a multi-user server, so this might be awkward, but there might be reasons to do this -- for instance, if just one or two users are consuming the lion's share of the disk space, you could move just their home directories to the new space or give them directories on the new space (with mount points or symbolic links to make access easier) and instruct them to move their big files to the new space. This approach would have the advantage of not negatively impacting other users, since the new hard disk storage space is likely to be noticeably slower than the old SSD storage space.
How to pass 2D array (matrix) in a function in C? I need to do this to persist operations on the matrix as well. Does that mean that it needs to be passed by reference? Will this suffice? `void operate_on_matrix(char matrix[][20]);`
C does not really have multi-dimensional arrays, but there are several ways to simulate them. The way to pass such arrays to a function depends on the way used to simulate the multiple dimensions: 1) Use an array of arrays. This can only be used if your array bounds are fully determined at compile time, or if your compiler supports [VLA's](http://en.wikipedia.org/wiki/Variable-length_array): ``` #define ROWS 4 #define COLS 5 void func(int array[ROWS][COLS]) { int i, j; for (i=0; i<ROWS; i++) { for (j=0; j<COLS; j++) { array[i][j] = i*j; } } } void func_vla(int rows, int cols, int array[rows][cols]) { int i, j; for (i=0; i<rows; i++) { for (j=0; j<cols; j++) { array[i][j] = i*j; } } } int main() { int x[ROWS][COLS]; func(x); func_vla(ROWS, COLS, x); } ``` 2) Use a (dynamically allocated) array of pointers to (dynamically allocated) arrays. This is used mostly when the array bounds are not known until runtime. ``` void func(int** array, int rows, int cols) { int i, j; for (i=0; i<rows; i++) { for (j=0; j<cols; j++) { array[i][j] = i*j; } } } int main() { int rows, cols, i; int **x; /* obtain values for rows & cols */ /* allocate the array */ x = malloc(rows * sizeof *x); for (i=0; i<rows; i++) { x[i] = malloc(cols * sizeof *x[i]); } /* use the array */ func(x, rows, cols); /* deallocate the array */ for (i=0; i<rows; i++) { free(x[i]); } free(x); } ``` 3) Use a 1-dimensional array and fixup the indices. This can be used with both statically allocated (fixed-size) and dynamically allocated arrays: ``` void func(int* array, int rows, int cols) { int i, j; for (i=0; i<rows; i++) { for (j=0; j<cols; j++) { array[i*cols+j]=i*j; } } } int main() { int rows, cols; int *x; /* obtain values for rows & cols */ /* allocate the array */ x = malloc(rows * cols * sizeof *x); /* use the array */ func(x, rows, cols); /* deallocate the array */ free(x); } ``` 4) Use a dynamically allocated VLA. 
One advantage of this over option 2 is that there is a single memory allocation; another is that less memory is needed because the array of pointers is not required. ``` #include <stdio.h> #include <stdlib.h> #include <time.h> extern void func_vla(int rows, int cols, int array[rows][cols]); extern void get_rows_cols(int *rows, int *cols); extern void dump_array(const char *tag, int rows, int cols, int array[rows][cols]); void func_vla(int rows, int cols, int array[rows][cols]) { for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { array[i][j] = (i + 1) * (j + 1); } } } int main(void) { int rows, cols; get_rows_cols(&rows, &cols); int (*array)[cols] = malloc(rows * cols * sizeof(array[0][0])); /* error check omitted */ func_vla(rows, cols, array); dump_array("After initialization", rows, cols, array); free(array); return 0; } void dump_array(const char *tag, int rows, int cols, int array[rows][cols]) { printf("%s (%dx%d):\n", tag, rows, cols); for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) printf("%4d", array[i][j]); putchar('\n'); } } void get_rows_cols(int *rows, int *cols) { srand(time(0)); // Only acceptable because it is called once *rows = 5 + rand() % 10; *cols = 3 + rand() % 12; } ``` (See [`srand()` — why call it only once?](https://stackoverflow.com/questions/7343833/srand-why-call-it-only-once/).)
Set JSON CamelCase per Web API request

Web API uses the Json.Net formatter to serialise its JSON responses, which allows you to customise the format of the generated JSON very easily for the entire application at startup using:

```
config.Formatters.JsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver(); 
```

This allows you to resolve the issue between C# syntax preferring PascalCase and javascript-based clients preferring camelCase.

However, setting this globally on the API, without taking into consideration who the client request is actually coming from, seems to assume that an API will only have one type of client, and that whatever you set for your API is just the way it has to be.

With multiple client types for my APIs (javascript, iOS, Android, C#), I'm looking for a way to set the Json.Net SerializerSettings **per request** such that the client can request their preferred format by some means (perhaps a custom header or queryString param) to override the default.

What would be the best way to set per-request Json.Net SerializerSettings in Web API?
With a bit of help from Rick Strahl's [blog post](https://weblog.west-wind.com/posts/2012/Apr/02/Creating-a-JSONP-Formatter-for-ASPNET-Web-API) on creating a JSONP media type formatter, I have come up with a solution that allows the API to dynamically switch from camelCase to PascalCase based on the client request. Create a MediaTypeFormatter that derives from the default JsonMediaTypeFormatter and overrides the GetPerRequestFormatterInstance method. This is where you can implement your logic to set your serializer settings based on the request. ``` public class JsonPropertyCaseFormatter : JsonMediaTypeFormatter { private readonly JsonSerializerSettings globalSerializerSettings; public JsonPropertyCaseFormatter(JsonSerializerSettings globalSerializerSettings) { this.globalSerializerSettings = globalSerializerSettings; SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); } public override MediaTypeFormatter GetPerRequestFormatterInstance( Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonMediaTypeFormatter { SerializerSettings = globalSerializerSettings }; IEnumerable<string> values; var result = request.Headers.TryGetValues("X-JsonResponseCase", out values) ? values.First() : "Pascal"; formatter.SerializerSettings.ContractResolver = result.Equals("Camel", StringComparison.InvariantCultureIgnoreCase) ? new CamelCasePropertyNamesContractResolver() : new DefaultContractResolver(); return formatter; } } ``` Note that I take a JsonSerializerSettings argument as a constructor param so that we can continue to use WebApiConfig to set up whatever other json settings we want to use and have them still applied here. 
To then register this formatter, in your WebApiConfig: ``` config.Formatters.JsonFormatter.SerializerSettings.Converters.Add(new StringEnumConverter()); config.Formatters.JsonFormatter.SerializerSettings.NullValueHandling = NullValueHandling.Ignore; config.Formatters.JsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Local; config.Formatters.Insert(0, new JsonPropertyCaseFormatter(config.Formatters.JsonFormatter.SerializerSettings)); ``` Now requests that have a header value of `X-JsonResponseCase: Camel` will receive camel case property names in the response. Obviously you could change that logic to use any header or query string param you like.