Columns: qid (int64), question (string), date (string), metadata (sequence), response_j (string), response_k (string)
31,347,739
I am wondering how to access routes in nested resources in Rails. I added categories and it has broken my application. Routes:

```
resources :categories do
  resources :posts do
    resources :comments
  end
end
```

It errors out in this:

```
<% @posts.each_with_index do |post, index| %>
  <%= link_to post do %> # originally, this used to work but now it says 'undefined method "post_path"'
    <li class="post-title"><%= truncate post.title, length: 50 %></li>
    <li class="post-content"><%= truncate post.content, length: 400 %></li>
    <li><span class="post-comments"><%= post.comments.count %> comments</span></li>
  <% end %>
```

My root is set to posts#index and has a listing of posts. However, the route nesting (originally I didn't have categories) has broken the routes. Running rake routes I get in part this:

```
      category_posts GET    /categories/:category_id/posts(.:format)          posts#index
                     POST   /categories/:category_id/posts(.:format)          posts#create
   new_category_post GET    /categories/:category_id/posts/new(.:format)      posts#new
  edit_category_post GET    /categories/:category_id/posts/:id/edit(.:format) posts#edit
       category_post GET    /categories/:category_id/posts/:id(.:format)      posts#show
                     PATCH  /categories/:category_id/posts/:id(.:format)      posts#update
                     PUT    /categories/:category_id/posts/:id(.:format)      posts#update
                     DELETE /categories/:category_id/posts/:id(.:format)      posts#destroy
```

How can I adjust routes to compensate for nesting? Or is there a better way to nest?
2015/07/10
[ "https://Stackoverflow.com/questions/31347739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3162553/" ]
You'd need to define another block that nests `posts` under the `category` resource:

```
# Existing
resources :posts do
  resources :comments
end

# Additional block
resources :categories do
  resources :posts do
    resources :comments
  end
end
```

With the additional block, your existing routes remain unchanged, so existing routes don't break! It is always better to restrict the routes to only what you need/use. For example, if you only need `/categories/:category_id/posts`, i.e. just the `index` method on the `posts_controller`, then your route will be updated to:

```
resources :categories do
  resources :posts, only: [ :index ]
end
```

This way you have more control over your routes and greater maintainability of the application.
vee's answer is correct if you want to use the nested routes and still preserve the original URLs. This is recommended if your site is live and changing the URLs would affect links to your indexed pages. Otherwise, you should point your links to the nested resource, like this:

```
<%= link_to post.title, [post.category, post] %>
```

See: [Rails - link\_to, routes and nested resources](https://stackoverflow.com/questions/1548009/rails-link-to-routes-and-nested-resources)
1,528,830
Given $\lim\_{n\to\infty} a\_n = a \neq 0$. I need to prove that $\lim\limits\_{n\to\infty}\frac{a\_{n+1}}{a\_n}=1$. I know that if $\lim\_{n\to\infty} a\_n = a$ then also $\lim\_{n\to\infty} a\_{n+1} = a$, which should give the limit $1$, but how do I prove it?
2015/11/14
[ "https://math.stackexchange.com/questions/1528830", "https://math.stackexchange.com", "https://math.stackexchange.com/users/290034/" ]
For any $k\in\mathbb{N}\setminus\{0,1\}$, $\lim\_{n\to +\infty} a\_n = a\neq 0$ implies that for any $n$ big enough we have (assuming WLOG that $a>0$; otherwise replace $a\_n$ with $-a\_n$): $$ \left(1-\frac{1}{k}\right) a \leq a\_n \leq \left(1+\frac{1}{k}\right) a, $$ which implies: $$ \frac{k-1}{k+1}\leq \frac{a\_{n+1}}{a\_n}\leq \frac{k+1}{k-1}.$$ By letting $k\to +\infty$ the claim follows.
We assume that $a>0$. Then, for any given $\epsilon>0$, there exists a positive integer $N$ such that whenever $n>N$, $$a-a\epsilon/4 <a\_n<a+a\epsilon/4$$ and $$a-a\epsilon/4 <a\_{n+1}<a+a\epsilon/4,$$ so that $|a\_{n+1}-a\_n|\le a\epsilon/2$. Now, if we also take $n$ so large that $a/2\le a\_n\le 3a/2$, we can write $$\left|\frac{a\_{n+1}}{a\_n}-1\right|=\frac{|a\_{n+1}-a\_n|}{|a\_n|}\le \frac{a\epsilon/2}{a/2}=\epsilon$$ and we are done!
1,810,079
What would you say if a developer wanted to implement a sql2008 dev environment, but we were still forced to use a sql2000 test and sql2000 production environment? Would there be anything wrong with using sql2008 on a dev server? Of course you'd need to know what functionality you couldn't use, so you didn't have problems migrating your work from the sql2008 servers to sql2000.
2009/11/27
[ "https://Stackoverflow.com/questions/1810079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9266/" ]
I'd strongly avoid developing on a different local version than the dev/qa/prod environments. Most of the time nothing will happen, but when it does it can take forever to track down the issue. ***Not only that, you may never be able to replicate it locally since you have a different environment.***
Using basic SQL features you'll do OK. I have no idea why you use this environment, but it is best to keep DEV, QA and Production as similar as possible, to avoid surprises when going to production. I think that SQL 2000 uses OLEDB while with SQL 2008 you can use the ADO.NET provider, and there might be many more differences that you might bump into. So the **best advice is NOT TO DO SO.**
2,362,212
> 
> Prove that the second derivative of $x^{4/3}$ does not exist at zero?
> 
> 

a/ As $f(x)= x^{4/3}$ we have $f'(x)=\frac{4}{3} x^{\frac{1}{3}}$ and $f''(x)=\frac{4}{9} x^{-\frac{2}{3}}$. Both $f$ and $f'$ are defined at zero but $f''$ is not defined at zero. Therefore the second derivative does not exist at 0. Would this be sufficient?

b/ How do you prove this using $f''(0)=\lim\limits\_{h \rightarrow 0} \frac{f'(0+h)-f'(0)}{h}$? I came up with $$f''(0) =\lim\limits\_{h \rightarrow 0} \frac {\frac{4}{3} h^\frac{1}{3} } {h} =\lim\limits\_{h \rightarrow 0} \frac{4}{3} h^{-\frac{2}{3}} $$ That is incorrect. I do not know what I am doing wrong here.

c/ Since the second derivative does not exist, does this mean we cannot write a Taylor polynomial of $f(x)=x^{4/3}$ of order 2? What would the justification be?
2017/07/18
[ "https://math.stackexchange.com/questions/2362212", "https://math.stackexchange.com", "https://math.stackexchange.com/users/430020/" ]
The reason, I think, is that the first two equations serve as "steps" for the third one. And so it is not as important how they got the equation as the equation itself. It's like: find $z$ such that $z = x + y$ where $x=5, y= 6$. $$x=5$$ $$y = 6$$ $$z = x + y = 11\text{ (1.2)}$$ $z$ is the most important here, not how we got it; and so in your case, the bound is the most important, rather than the thinking that went behind it. This is my guess. I'm not sure tbh.
From a LaTeX and physics point of view -- and this depends on the style imposed by the journal or other authority: * Number (almost) all display maths. The "almost" refers to the occasional separate equation often preceding a numbered equation (so you can refer to such steps as "in the derivation of Eq. 1.1"). * Use display maths for anything you want to refer to, plus anything that needs it for clarity. * Use inline maths when display isn't *really* needed (this is a common journal requirement in physics). I know there are mathematics fields in which this will work nicely; I know there are others in which it won't.
20,420,429
I would like to know what the difference is between ``` window.setTimeout(myFancyFunciton, 1000); ``` and ``` setTimeout(myFancyFunciton, 1000); ``` Both seem to do the exact same thing. When should you use one or the other?
2013/12/06
[ "https://Stackoverflow.com/questions/20420429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3073240/" ]
From <https://developer.mozilla.org/en-US/docs/Web/API/Window>

> 
> The window object represents the window itself.
> 
> 

So, all global variables and functions that you call are enclosed inside the object `window`. However, you can omit the object reference every time you call a function or a variable. Why? Think about a page with 2 or more frames. Every frame has its own `window`. You can access a variable inside a frame from another frame simply by accessing the `window` object of the target. This is valid for every variable or function declared as global... and it's valid too for native functions, like `setTimeout`.

So why do we sometimes need to write `window.setTimeout` explicitly? Simply, if you are inside a scope and you use the same name as a native function, you can choose which function to use. For example:

```
function myF() {
    function setTimeout(callback, seconds) {
        // call the native setTimeout function
        return window.setTimeout(callback, seconds * 1000);
    }

    // call your own setTimeout function (with seconds instead of milliseconds)
    setTimeout(function() { console.log("hi"); }, 3);
}

myF();
```

Please note that the object `window` exists only in a browser environment. The global object of `Node.js` is `global`, where `window` is not defined.
I faced an issue related to this topic. I tried to make some functionality of my SPA part of the server-side rendering process. I used `setTimeout` to provide some deferred action on the UI. When it runs on the server side (NodeJS) it turns into a deferred action on the server side, with no relation to the client side. That's because the browser `setTimeout` (say `window.setTimeout`) is not the same as the NodeJS `setTimeout`. Apart from the different runtime environments, which prohibit using a single `setTimeout` both for client-side and server-side rendering, the implementations of the [Browser](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout) and [NodeJs](https://nodejs.org/api/timers.html#timers_settimeout_callback_delay_args) `setTimeout` are different; they have different return values... Now I'm looking for some workaround.
3,781,124
I have used Eclipse in the past without problems over a year ago, but I downloaded Helios onto a new computer having Windows Vista and Java 1.6.13 currently. I extract the folder and try to run the Eclipse EXE and get an error stating "Java was started but returned exit code=13". I did some digging and it appears that the config file is requiring Java 1.5 to launch (dosgiRequiredJavaVersion=1.5). Removing this line does not matter. I went and got an archived copy of 1.5.0\_22 and tried to use -vm mypath/java/jre1.5.0\_22/bin in the config file before vmargs and it still does not help. I have also tried making a shortcut and specified the "eclipse -vm mypath" in the target field of properties and still nothing. Here are my config settings which still give the error:

```
-startup
plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.0.v20100503
-product
org.eclipse.epp.package.java.product
--launcher.defaultAction
openFile
--launcher.XXMaxPermSize
256M
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-vm
c:\program files\java\jre1.5.0_22\bin
-vmargs
-Dosgi.requiredJavaVersion=1.5
-Xms40m
-Xmx384m
```

Has anyone solved this problem?
2010/09/23
[ "https://Stackoverflow.com/questions/3781124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/456474/" ]
I had this same problem. I'm on Windows Vista 64, with the 64-bit versions of both Helios and the JDK/JRE 1.6 update 14. I had been using Eclipse Galileo just fine. Upgrading to the JDK/JRE 1.6 update 24 (the latest as of this date) fixed the problem. I didn't have to make any changes to the Helios eclipse.ini file.
I also had the same issue with Eclipse Indigo on 64-bit Windows 7. The error message was "Java was started but returned exit code=1". Updating to latest JRE 1.6 update 29 fixed it.
2,770,523
I'm struggling with the following problem. Let $g(z)=\sum^k\_1 m\_{\alpha}(z-z\_{\alpha})^{-1}$. Show that if $g(z)=0$, then $z\_1,\cdots,z\_k$ cannot all lie on the same side of a straight line through $z$.

**What I did:** The book says that I should use the fact that if $z\_1,\cdots,z\_k$ lie on one side of some straight line through 0, then $z\_1+\cdots+z\_k\neq 0$. But I can't get much further. I think it follows that the $m\_{\alpha}(z-z\_{\alpha})^{-1}$'s are on the same side of a line through zero; I know that the next step is probably going to be something related to extending this thought to the inverses of these numbers, and the last one is likely going to be translating $z-z\_\alpha$ to $z$.
2018/05/07
[ "https://math.stackexchange.com/questions/2770523", "https://math.stackexchange.com", "https://math.stackexchange.com/users/474703/" ]
No - any two coprime odd numbers (e.g any two primes $\ne 2$) provide a counterexample.
Far from true. $3+5=8,\ 8+7=15$
3,179,513
How can I append text to every cell in a column in Excel? I need to add a comma (",") to the end.

**Example:** `email@address.com` turns into `email@address.com,`

**Data Sample:**

```
m2engineers@yahoo.co.in
satishmm_2sptc@yahoo.co.in
threed_precisions@rediffmail.com
workplace_solution@yahoo.co.in
threebworkplace@dataone.in
dtechbng@yahoo.co.in
innovations@yahoo.co.in
sagar@mmm.com
bpsiva@mmm.com
nsrinivasrao@mmm.com
pdilip@mmm.com
vvijaykrishnan@mmm.com
mrdevaraj@mmm.com
b3minvestorhelpdesk@mmm.com
sbshridhar@mmm.com
balaji@mmm.com
schakravarthi@mmm.com
srahul1@mmm.com
khramesh2@mmm.com
avinayak@mmm.com
rockindia@hotmail.com
```
2010/07/05
[ "https://Stackoverflow.com/questions/3179513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365614/" ]
It's a simple "&" function. ``` =cell&"yourtexthere" ``` Example - your cell says Mickey, and you want Mickey Mouse. Mickey is in A2. In B2, type ``` =A2&" Mouse" ``` Then, copy and "paste special" for values. ``` B2 now reads "Mickey Mouse" ```
Pretty simple...you could put all of them in a cell using the concatenate function: ``` =CONCATENATE(A1, ", ", A2, ", ", and so on) ```
24,468,943
I'm working on problem #3 on project euler, and I've run into a problem. It seems that the program is copying *all* the items from `factors` into `prime_factors`, instead of just the prime numbers. I assume this is because my `is_prime` function is not working properly. How can I make the function do what I want? Also, in the code, there is a line that I commented out. Do I need that line, or is it unnecessary? Finally, is the code as a whole sound (other than `is_prime`), or is it faulty? The project euler question is: The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143 ? A link to a previous question of mine on the same topic: <https://stackoverflow.com/questions/24462105/project-euler-3-python?noredirect=1#comment37857323_24462105> thanks ``` import math factors = [] prime_factors = [] def is_prime (x): counter = 0 if x == 1: return False elif x == 2: return True for item in range (2, int(x)): if int(x) % item == 0: return False else: return True number = int(input("Enter a number: ")) start = int(math.sqrt(number)) for item in range(2, start + 1): if number % item == 0: factors.append(item) #factors.append(number/item) do i need this line? for item in factors: if is_prime(item) == True: prime_factors.append(item) print(prime_factors) ```
2014/06/28
[ "https://Stackoverflow.com/questions/24468943", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3784876/" ]
Yes, you need the commented line. (It seems that on that case it's not necessary, but with other numbers the part of your code for getting factors would go wrong). Check these references: [Prime numbers](https://en.wikipedia.org/wiki/Prime_number) [Integer factorization](https://en.wikipedia.org/wiki/Integer_factorization) [Why do we check up to the square root of a prime number to determine if it is prime or not](https://stackoverflow.com/questions/5811151/why-do-we-check-up-to-the-square-root-of-a-prime-number-to-determine-if-it-is-pr) I got a very fast result on my computer with the following code: ``` #!/usr/bin/env python import math def square_root_as_int(x): return int(math.sqrt(x)) def is_prime(number): if number == 1: return False for x in range(2, square_root_as_int(number) + 1): if x == number: next if number % x == 0: return False return True def factors_of_(number): factors = [] for x in range(2, square_root_as_int(number) + 1): if number % x == 0: factors.append(x) factors.append(number/x) return factors factors = factors_of_(600851475143) primes = [] for factor in factors: if is_prime(factor): primes.append(factor) print max(primes) # Bonus: "functional way" print max(filter(lambda x: is_prime(x), factors_of_(600851475143))) ```
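To see concretely why that complementary factor matters, here is a small sketch (my own illustration, not part of the original answer; the number 202 is just a hypothetical example) of a case where the largest prime factor is bigger than the square root and only shows up via `number/x`:

```
import math

# 202 = 2 * 101, and 101 > sqrt(202) ~ 14.2, so a loop that only goes up to
# the square root finds 2 but never reaches 101 directly.
number = 202
limit = int(math.sqrt(number))

without_cofactor = [x for x in range(2, limit + 1) if number % x == 0]

with_cofactor = []
for x in range(2, limit + 1):
    if number % x == 0:
        with_cofactor.append(x)
        with_cofactor.append(number // x)  # the commented-out line in the question

print(without_cofactor)  # [2]      -> the largest prime factor 101 is missed
print(with_cofactor)     # [2, 101] -> 101 is recovered as the cofactor
```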
Use a while loop. `n%i` in the condition simply means `n%i != 0`.

```
i = 2
n = 600851475143
while i*i <= n:
    if n%i:
        i += 1
    else:
        n //= i   # divide out the factor i; what remains at the end is the largest prime factor
print(n)
```
22,624,879
I'm still not sure how to do my migrations with knex. Here is what I have so far. It works on `up`, but `down` gives me FK constraint error even though foreign\_key\_checks = 0. ```js exports.up = function(knex, Promise) { return Promise.all([ knex.raw('SET foreign_key_checks = 0;'), /* CREATE Member table */ knex.schema.createTable('Member', function (table) { table.bigIncrements('id').primary().unsigned(); table.string('email',50); table.string('password'); /* CREATE FKS */ table.bigInteger('ReferralId').unsigned().index(); table.bigInteger('AddressId').unsigned().index().inTable('Address').references('id'); }), /* CREATE Address table */ knex.schema.createTable('Address', function (table) { table.bigIncrements('id').primary().unsigned(); table.index(['city','state','zip']); table.string('city',50).notNullable(); table.string('state',2).notNullable(); table.integer('zip',5).unsigned().notNullable(); }), knex.raw('SET foreign_key_checks = 1;') ]); }; exports.down = function(knex, Promise) { return Promise.all([ knex.raw('SET foreign_key_checks = 0;'), knex.schema.dropTable('Address'), knex.schema.dropTable('Member'), knex.raw('SET foreign_key_checks = 1;') ]); }; ```
2014/03/25
[ "https://Stackoverflow.com/questions/22624879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2132307/" ]
Figured out that it wasn't working because of connection pooling. It would use a different connection to run each migration task, which caused the foreign key checks not to be set properly. Setting

```
pool: {
  max: 1
}
```

in the migration config file fixed this.
Subjectively as the most clean way to do it I would suggest including in your migration file something like: ``` exports.up = function (knex) { return Promise.all([ knex.schema.createTable('users', function (table) { table.increments('id') table.string('username').notNullable() table.string('password').notNullable() table.string('service').notNullable() table.text('cookies') table.enu('status', ['active', 'disabled', 'need_login', 'failed']).defaultTo('need_login').notNullable() table.datetime('last_checked') table.timestamps() }), knex.schema.createTable('products', function (table) { table.increments() table.integer('user_id').unsigned().notNullable() table.string('title') table.decimal('price', 8, 2) table.text('notes') table.enu('status', ['to_publish', 'published', 'hidden', 'not_found']).notNullable() table.timestamps() table.foreign('user_id').references('id').inTable('users') }), knex.schema.createTable('messages', function (table) { table.increments() table.integer('user_id').unsigned().notNullable() table.integer('product_id').unsigned().notNullable() table.boolean('incoming') table.boolean('unread') table.text('text') table.timestamps() table.foreign('user_id').references('id').inTable('users') table.foreign('product_id').references('id').inTable('products') }) ]) } exports.down = function (knex) { return Promise.all([ knex.schema.dropTable('messages'), knex.schema.dropTable('products'), knex.schema.dropTable('users') ]) } ```
7,242,354
For example I have four classes like:

```
class A;

class B {
protected:
    void check(const A &a);
};

class C : public A, public B;

class D : public B;
```

Now I would like to write the check function so that it does nothing if the caller and the parameter are the same:

```
void B::check(const A &a){
    if(*this != a){
        //do something
    }
    else{
        //do nothing
    }
}
```

However this won't compile, as class B doesn't know anything about class C, which will one day call B's function check on itself. It would be easy to cast `this` into A, but that would give an error if one day class D called that check, as it has nothing to do with A. How is such a thing done then?

Edit: I should mention that classes C and D will have an interface for calling that check which is not available outside these classes, but it'll do nothing more than just pass the parameter to the inner function.
2011/08/30
[ "https://Stackoverflow.com/questions/7242354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365619/" ]
There's only one possible escape hatch here. If *both* A and B have a virtual function, then you can `dynamic_cast` both `this` and `&a`. And per 5.2.7/7 "If T is “pointer to cv void,” then the result is a pointer to the most derived object pointed to by v." Therefore, this code works:

```
void B::check(const A &a){
    if (dynamic_cast<void const*>(this) != dynamic_cast<void const*>(&a)) {
        //do something
    }
    ...
```

Nothing else gets you a **pointer to the most derived object** without knowing that type.
You probably want to check whether the instances are the same: ``` void B::check(const A &a){ if(this != &a){ //do something } else{ //do nothing } } ``` Comparing the content of different classes doesn't make much sense to me.
302,896
The following command has been entered:

```
sleep 12h; nuke-russia
```

It was entered on one of Debian's `tty`s. I would like to cancel the sleep without nuking anything. I don't want the nuke command to run even for a single moment. I have the other five terminals available. What can I do?
2011/06/27
[ "https://superuser.com/questions/302896", "https://superuser.com", "https://superuser.com/users/81119/" ]
In future, try using `&&` instead of `;`:

`sleep X && echo bla`

Pressing Ctrl+C will stop the sleep and will not echo bla.

If it's feasible, you could rename the responsible binary. That way bash will not be able to execute `nuke-russia` since it will not exist.

Another solution would be to kill the responsible shell.

EDIT: How to find the PID of the responsible shell (which should be its parent): if the PID of `sleep 12h` is 1234, find the responsible shell with `ps -o ppid= 1234`, then kill it with `kill -9`.
Similar answer to Linker3000's - Log in as root from another terminal, and kill the tty process (On CentOS the process is named login, not sure about Debian).
443,310
I'd like for a subclass of a certain superclass with certain constructor parameters to load an XML file containing information that I'd then like to pass to the superconstructor. Is this impossible to achieve?
2009/01/14
[ "https://Stackoverflow.com/questions/443310", "https://Stackoverflow.com", "https://Stackoverflow.com/users/50844/" ]
How about using a factory method instead? Maybe something like: ``` private MyObject(ComplexData data) { super(data); } public static MyObject createMyObject(String someParameter) { ComplexData data = XMLParser.createData(someParameter); return new MyObject(data); } ```
You can call a static method in the super() call, e.g. ``` public Subclass(String filename) { super(loadFile(filename)); } private static byte[] loadFile(String filename) { // ... } ```
44,641,976
I would like to add the L1 regularizer to the activations output from a ReLU. More generally, how does one add a regularizer **only to a particular layer** in the network? --- **Related material:** * [This similar post](https://stackoverflow.com/questions/42704283/adding-l1-l2-regularization-in-pytorch) refers to adding *L2* regularization, but it appears to add the regularization penalty to *all* layers of the network. * `nn.modules.loss.L1Loss()` seems relevant, but I do not yet understand how to use this. * The legacy module [`L1Penalty`](https://github.com/pytorch/pytorch/blob/master/torch/legacy/nn/L1Penalty.py) seems relevant also, but why has it been deprecated?
2017/06/20
[ "https://Stackoverflow.com/questions/44641976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8126541/" ]
Here is how you do this: * In your Module's forward return final output and layers' output for which you want to apply L1 regularization * `loss` variable will be sum of cross entropy loss of output w.r.t. targets and L1 penalties. Here's an example code ``` import torch from torch.autograd import Variable from torch.nn import functional as F class MLP(torch.nn.Module): def __init__(self): super(MLP, self).__init__() self.linear1 = torch.nn.Linear(128, 32) self.linear2 = torch.nn.Linear(32, 16) self.linear3 = torch.nn.Linear(16, 2) def forward(self, x): layer1_out = F.relu(self.linear1(x)) layer2_out = F.relu(self.linear2(layer1_out)) out = self.linear3(layer2_out) return out, layer1_out, layer2_out batchsize = 4 lambda1, lambda2 = 0.5, 0.01 model = MLP() optimizer = torch.optim.SGD(model.parameters(), lr=1e-4) # usually following code is looped over all batches # but let's just do a dummy batch for brevity inputs = Variable(torch.rand(batchsize, 128)) targets = Variable(torch.ones(batchsize).long()) optimizer.zero_grad() outputs, layer1_out, layer2_out = model(inputs) cross_entropy_loss = F.cross_entropy(outputs, targets) all_linear1_params = torch.cat([x.view(-1) for x in model.linear1.parameters()]) all_linear2_params = torch.cat([x.view(-1) for x in model.linear2.parameters()]) l1_regularization = lambda1 * torch.norm(all_linear1_params, 1) l2_regularization = lambda2 * torch.norm(all_linear2_params, 2) loss = cross_entropy_loss + l1_regularization + l2_regularization loss.backward() optimizer.step() ```
I think the original post wants to regularize the output from the ReLU, so the regularizer should be on the output, not the weights of the network. They are not the same!

* Regularizing the weights with the l1-norm trains a neural network that has sparse weights.
* Regularizing the output of a layer with the l1-norm trains a network that has a sparse output at that particular layer.

Either the above answers (including the accepted one) missed this point, or I am misunderstanding the original post question.
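To make that distinction concrete, here is a minimal sketch (my own addition, not taken from either answer; the layer sizes and `lambda1` value are arbitrary, and it uses current PyTorch APIs without the legacy `Variable` wrapper) that penalizes the ReLU *activations* instead of the weights:

```
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# A tiny stand-in for the MLP above: one hidden ReLU layer is enough to show the idea.
linear1 = torch.nn.Linear(128, 32)
linear2 = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(list(linear1.parameters()) + list(linear2.parameters()), lr=1e-4)

inputs = torch.rand(4, 128)
targets = torch.ones(4, dtype=torch.long)

optimizer.zero_grad()
layer1_out = F.relu(linear1(inputs))   # the activations we want to be sparse
outputs = linear2(layer1_out)

lambda1 = 0.5
cross_entropy_loss = F.cross_entropy(outputs, targets)
l1_activation_penalty = lambda1 * layer1_out.abs().mean()  # L1 on the activations, not the weights

loss = cross_entropy_loss + l1_activation_penalty
loss.backward()
optimizer.step()
```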
634,608
How can I get information on the state of a process (i.e. whether it is a zombie) using C under Linux? After reading the answers so far I want to narrow my question somewhat: I would prefer a pure C solution. After reading the ps source (which reads /proc/) I thought that there should be a better way, and asked here :)
2009/03/11
[ "https://Stackoverflow.com/questions/634608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/76663/" ]
I know only two ways: * Parsing output of the `ps` command * Reading files in /proc/PID, where PID is the process identifier (that's what `ps` does internally)
The pseudo file system /proc describes kernel internal data structures and gives you the opportunity to alter some values directly. Obtaining the state of a particular process can easily be implemented with C I/O functions. The right file to parse is: */proc/{PID}/status*

The command below can be used to obtain processes in the Zombie state.

```
for proc in $(echo /proc/[0-9]*);do if [[ $(sed -n '/^State:\tZ/p' ${proc}/status 2>/dev/null) ]];then basename $proc;fi;done
```
1,194
The awesome relationship movie [*Eternal Sunshine of the Spotless Mind (2004)*](http://www.imdb.com/title/tt0338013/) ends with a scene in which the two main characters are tempted to get back together but are not sure. The woman points out that they both know they will most likely just end up breaking up again. The man responds by saying "Okay" and then the woman also says "Okay." Are they saying they are okay with getting back together despite the probable outcome? Or are they saying they are okay with not getting back together despite their mutual attraction?
2012/02/04
[ "https://movies.stackexchange.com/questions/1194", "https://movies.stackexchange.com", "https://movies.stackexchange.com/users/763/" ]
Joel and Clem seem to get back together at the end, and after that it shows a candid scene of them looping over and over until the final fade to black. The writer intentionally left the movie ambiguous in the first place, so there is no ending set in stone, but I've always thought that, although they rekindle their relationship, they do eventually break up because of their flaws. I don't actually agree with whole "they go back to Lacuna" part, because Mary's discovery and leaving with the documents implies that Lacuna will be hurt by all of their patients' discoveries, which may evoke a decidedly negative reaction as with her. (Of course, I could be wrong, but that's just my view.) The looping scene, in my opinion, symbolizes that they will break up, but also that they will get back together. I've always related this to those on-and-off couples. Maybe somewhere down the line, their flaws will becomes less magnified and a more concrete relationship will abound. Or maybe they are doomed to keep going back to each other, possibly out of true love. They are humans, and being humans, they are prone to repeating something that initially gave them pleasure, regardless of the negative note it will end on. While Joel has the memory erasing process done, he realizes that the good times he had with Clem outdid their breakup; "It's better to have loved and lost, than never to have loved at all." And who's to say that some time couldn't have occupied the gaps between the loops? Maybe they saw different people to help gain new experiences, and were repeatedly drawn to each other afterwards. They didn't seem that old in the movie, so maybe somewhere down the line, when they've completely matured, they might decide to settle down. Orrrr, maybe it symbolizes that memory lives on, longer than the physical experience, and that candid scene was just an enduring addition to his experiences as they moved on from each other. There's so many different ways one can interpret the ending. :D
To me, the message of the film is that we are destined to have people in our lives that make us experience human nature. In this universe in the movie where you have the ability to simply erase someone from your memory, a play on the desires of many a heartbroken lover in reality, this erasure doesn't actually do anyone any good. There are just some people you will always meet, because they deliver a lesson to you that no one else can. So I don't think the question is whether or not they end up together or break up, I think the question is: if you had a chance to erase someone, would you do it? I think the goal of the writer is to make you think twice about your answer. For the record, though, I agree it's very clear that Joel and Clem have been through multiple erasures. The movie is shown to have a cyclical element; "Meet me in Montauk."
321,167
Trying to ssh into a computer I control, I'm getting the familiar message: ``` @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. The fingerprint for the RSA key sent by the remote host is [...]. Please contact your system administrator. Add correct host key in /home/sward/.ssh/known_hosts to get rid of this message. Offending RSA key in /home/sward/.ssh/known_hosts:86 RSA host key for [...] has changed and you have requested strict checking. Host key verification failed. ``` I did indeed change the key. And I read a few dozen postings saying that the way to resolve this problem is by deleting the old key from the `known_hosts` file. But what I would like is to have ssh accept both the old key and the new key. The language in the error message ("`Add correct host key`") suggests that there should be some way to add the correct host key without removing the old one. I have not been able to figure out how to add the new host key without removing the old one. Is this possible, or is the error message just extremely misleading?
2011/10/13
[ "https://serverfault.com/questions/321167", "https://serverfault.com", "https://serverfault.com/users/97740/" ]
Remove the entry from known\_hosts using:

```
ssh-keygen -R *ip_address_or_hostname*
```

This will remove the problematic IP or hostname from the *known\_hosts* file; then try to connect again. From the man pages:

> 
> `-R hostname`
> 
> Removes all keys belonging to hostname from a known\_hosts file. This option is useful to delete hashed hosts (see the -H option
> above).
> 
> 
So many answers, but so many that give up protection by turning off strict host checking totally, destroying unrelated host info, or just forcing the user to interactively accept keys, possibly at a later point when it is unexpected. Here's a simple technique to allow you to leave strict host checking on, but update the key in a controlled way, when you **expect** it to change:

* Remove the old key and update in one command

```
ssh-keygen -R server.example.com && \
  ssh -o StrictHostKeyChecking=no server.example.com echo SSH host key updated.
```
* Repeat with IP address(es) or other host names if you use them.

The advantage of this approach is that it rekeys the server exactly once. Most versions of ssh-keygen seem not to return an error if the server you try to delete doesn't exist in the known hosts file; if this is a problem for you, use the two commands in sequence. This approach also verifies connectivity and emits a nice message for logs in the ssh command (which logs in, updates the host key, outputs *SSH host key updated*, then immediately exits). If your version of ssh-keygen returns a non-zero exit code, and you prefer to handle this without error regardless of prior connection, simply use the two commands in sequence, ignoring any errors on the ssh-keygen command.

If you use this technique, you never need to vary your ssh command, or turn off host checking except during that one ssh command. You can be sure that future ssh sessions will work without conflict or needing to explicitly accept a new key, as long as the ssh command above ran without error.
45,258,500
So I made up this code:

```
sed -n -e '0,/version/{s/.*: *//p}' "$path"
```

$path is the actual path to this file:

```
name: CSaveBackupWorld
main: ru.centurion.savebackupworld.SBW
version: 3.6
author: ASTRO
load: STARTUP
```

I just wanna get the version value, I mean that 3.6, but instead of that I am getting this output:

```
CSaveBackupWorld
ru.centurion.savebackupworld.SBW
3.6
```

That is really strange, because I made a filter only for version, but it shows name and main too. What am I doing wrong?

First problem solved completely! Here is the last one: I have one more file:

```
apply plugin: 'jee'
version = "1.3"
group= "com.centurion.eye"
// http://maven.apache.org/guides/mini/guide-naming-conventions.html
archivesBaseName = "eye"
```

How can I get THAT version? Only 1.3.
2017/07/22
[ "https://Stackoverflow.com/questions/45258500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7834288/" ]
You needlessly added `0,` to your command:

```
sed -n -e '/version/{s/.*: *//p}' "$path"
```

should work just fine. The `0,/version/` range makes the block apply to every line from the start of the file up to the first line matching `version`, which is why the earlier values were printed too.

Alternatively, `awk` might be more readable:

```
awk '/version/{print $2}' "$path"
```
Just ``` sed -n -e '/version/ s/.*: *//p' "$path" ``` will do it
7,879,737
I have a MySQL inventory database. I would like to send an email to a person when the inventory is equal to or less than a preset value. Is this possible on the client side or the server side, using PHP? Any examples? Erik
2011/10/24
[ "https://Stackoverflow.com/questions/7879737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/297791/" ]
If you have the height and you want to change a div, just do the following:

```js
$("#myDiv").css("height", myHeight + "px");
```

Where "myDiv" is the "id" of the needed div. Shai.
```js
$(function() {
  var innerHeight = $('#cell').innerHeight();
  $('#scrollable').height(innerHeight);
});
```

Example: <http://jsfiddle.net/annF4/>
4,622,458
I tried to solve this integral using the substitution $\tan(x/2) =t$ and $\cos x= \frac{1-t^2}{1+t^2}$. But after rewriting everything in terms of $t$, I am not able to integrate further, as the numerator contains a quadratic and the denominator a biquadratic. $$\int\limits\_0^{\pi/2} \frac{1}{(3 + 5 \cos x)^2}\, dx$$
2023/01/20
[ "https://math.stackexchange.com/questions/4622458", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1065508/" ]
Let's use Weierstrass and partial fractions and evaluate the definite integral with FTC II. Using the tangent half-angle (Weierstrass) substitution, our integral becomes $$\int \frac{1}{(3 + 5 \cos x)^2}\ dx = \int \frac{1}{\left(3+5\left(\frac{1-t^2}{1+t^2}\right)\right)^2}\cdot \frac{2dt}{1+t^2} = \frac12\int {t^2 + 1\over(t - 2)^2 (t + 2)^2}dt$$ Perform partial fraction decomposition by setting it up like this $$ {t^2 + 1\over2(t - 2)^2 (t + 2)^2} = \frac{A}{2(t-2)} + \frac{B}{2(t-2)^2} + \frac{C}{2(t+2)} + \frac{D}{2(t+2)^2}$$ Then just multiply by the denominator, match powers, and solve the system of equations for the unknowns. We get $$ {t^2 + 1\over2(t - 2)^2 (t + 2)^2} = \frac{3}{64(t-2)} + \frac{5}{32(t-2)^2} + \frac{-3}{64(t+2)} + \frac{5}{32(t+2)^2}$$ Now integrate termwise (log for the single power, 1/(whatever) for the double power) and simplify to get $$\int \frac{3}{64(t-2)} + \frac{5}{32(t-2)^2} + \frac{-3}{64(t+2)} + \frac{5}{32(t+2)^2} \, dt $$$$= {-5t\over 16(t^2-4)} + {3\over 64}\ln(2-t) -\frac3{64}\ln(t+2)+C$$ The bounds transform as follows: $\tan\left({\frac\pi2\over2}\right)=1$ and the lower one remains $0$, so $${-5t\over 16(t^2-4)} + {3\over 64}\ln(2-t) -\frac3{64}\ln(t+2)\Big|^1\_0 = \frac5{48} - {3\ln(3)\over64}$$
Let's take $\int\_0^{\pi/2} \frac{1}{\left(3+5 \cos\left(x\right)\right)^2}\text{d}x$ and substitute $u = \tan\left(\frac{x}{2}\right)$ and $\text{d}u = \frac{1}{2}\text{d}x\sec^2\left(\frac{x}{2}\right)$ which yields $\cos\left(x\right) = \frac{1 - u^2}{1 + u^2}$ and $\text{d}x = \frac{2\text{d}u}{1 + u^2}$: $$ \begin{align} \int\_0^{\pi/2} \frac{1}{\left(3+5 \cos\left(x\right)\right)^2}\text{d}x &= \int\_{\tan\left(0\right)}^{\tan\left(\pi/4\right)} \frac{2}{\left(1 + u^2\right)\left(\frac{5\left(1-u^2\right)}{1+u^2}+3\right)^2} \text{d}u \\ &= \int\_0^1 \frac{u^2+1}{2\left(u^2-4\right)^2} \text{d}u \end{align} $$ Now decompose $\frac{u^2+1}{2\left(u^2-4\right)^2} = \frac{u^2+1}{2\left(u-2\right)^2\left(u+2\right)^2}$ into its [partial fractions](https://en.wikipedia.org/wiki/Partial_fraction_decomposition): $$ \begin{align} &\frac{u^2+1}{2\left(u-2\right)^2\left(u+2\right)^2} &&= \frac{\theta\_1}{2\left(u-2\right)} + \frac{\theta\_2}{2\left(u-2\right)^2} + \frac{\theta\_3}{2\left(u+2\right)} + \frac{\theta\_4}{2\left(u+2\right)^2} \\ \Leftrightarrow &\; u^2 + 1 &&= \theta\_1\left(u+2\right)^2\left(u-2\right)+\theta\_2\left(u+2\right)^2+\theta\_3\left(u+2\right)\left(u-2\right)^2\\&&&\quad+\theta\_4\left(u-2\right)^2 \\ &&&=\left(\theta\_1+\theta\_3\right)u^3+\left(2\theta\_1+\theta\_2-2\theta\_3+\theta\_4\right)u^2 \\&&&\quad+\left(-4\theta\_1+4\theta\_2-4\theta\_3-4\theta\_4\right)u+\left(-8\theta\_1+4\theta\_2+8\theta\_3+4\theta\_4\right) \end{align} $$ This gives us $4$ equations in $4$ unknowns: $$ \begin{align} 1&=-8\theta\_1+4\theta\_2+8\theta\_3+4\theta\_4 \\ 0&=-4\theta\_1+4\theta\_2-4\theta\_3-4\theta\_4 \\ 1&=2\theta\_1+\theta\_2-2\theta\_3+\theta\_4 \\ 0&=\theta\_1+\theta\_3 \end{align} $$ This system of linear equations can be solved using [Gauss-Jordan elimination](https://en.wikipedia.org/wiki/Gaussian_elimination): $$ \left(\begin{array}{cccc|c} -8 & 4 & 8 & 4 & 1 \\ -4 & 4 & -4 & -4 & 0 \\ 2 & 1 & -2 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 \end{array}\right) \sim \left(\begin{array}{cccc|c} 1 & 0 & 0 & 0 & \frac{3}{32} \\ 0 & 1 & 0 & 0 & \frac{5}{16} \\ 0 & 0 & 1 & 0 & -\frac{3}{32} \\ 0 & 0 & 0 & 1 & \frac{5}{16} \end{array}\right) $$ So the solution to this system is $\theta = \left(\array{\frac{3}{32} \\ \frac{5}{16} \\ -\frac{3}{32} \\ \frac{5}{16}}\right)$ which gives us: $$ \begin{align} \int\_0^1 \frac{u^2+1}{2\left(u^2-4\right)^2} \text{d}u &= \int\_0^1 \frac{3}{64\left(u-2\right)} + \frac{5}{32\left(u-2\right)^2} \\&\qquad - \frac{3}{64\left(u+2\right)}+\frac{5}{32\left(u+2\right)^2} \text{d}u \tag{1}\label{1}\\ &=\frac{3}{64}\int\_0^1 \frac{1}{u-2}\text{d}u + \frac{5}{32}\int\_0^1\frac{1}{\left(u-2\right)^2}\text{d}u \\&\quad-\frac{3}{64}\int\_0^1\frac{1}{u+2}\text{d}u + \frac{5}{32}\int\_0^1\frac{1}{\left(u+2\right)^2}\text{d}u \tag{2}\label{2}\\ &= \left.\frac{3}{64}\log\left(u-2\right)\right\rvert\_0^1 - \left.\frac{5}{32\left(u-2\right)}\right\rvert\_0^1 \\&\quad- \left.\frac{3}{64}\log\left(u+2\right)\right\rvert\_0^1 - \left.\frac{5}{32\left(u+2\right)}\right\rvert\_0^1 \\ &=-\frac{3}{64}\log\left(2\right) + \frac{5}{64} - \frac{3}{64}\log\left(\frac{3}{2}\right) + \frac{5}{192} \\ &= \frac{5}{48} - \frac{3}{64}\log\left(3\right) \\ &\approx 0.0527 \end{align} $$ $\eqref{1}$ Integrate term by term. $\eqref{2}$ Substitute $v\_1 = u - 2, \text{d}v\_1 = \text{d}u$ and $v\_2 = u + 2, \text{d}v\_2 = \text{d}u$.
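As a quick numerical sanity check on that closed form (my own addition; it assumes NumPy and SciPy are installed), quadrature agrees with $\frac{5}{48}-\frac{3}{64}\ln 3\approx 0.0527$:

```
import numpy as np
from scipy.integrate import quad

# Numerically integrate 1 / (3 + 5*cos(x))^2 over [0, pi/2]
numeric, _ = quad(lambda x: 1.0 / (3.0 + 5.0 * np.cos(x))**2, 0.0, np.pi / 2)

# Closed form derived above
closed_form = 5.0 / 48.0 - 3.0 * np.log(3.0) / 64.0

print(numeric, closed_form)   # both ~0.05267
```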
6,730
Let us consider the case: there is an overall bounding rectangle (call this **Rb**) which contains a number of rectangles within it (call this set **SRo**). Now I would like to randomly position a new rectangle with known bounds (call this **Rn**) within the bounds of **Rb** without intersecting any of the rectangles in the set **SRo** My current approach is rather brute force: 1. Generate random x-y coordinates for new rectangle 2. Check Rb contains Rn if it is placed at intended coordinates 3. Check Rn does not intersect any of rectangles inside set SRo Anyone got any better idea?
2010/12/22
[ "https://gamedev.stackexchange.com/questions/6730", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/1179/" ]
This sounds similar to a collision-detection problem. Look at "spatial partitioning" to sub-divide **Rb**, and find the set of open spaces in which **Rn** would fit. Randomly select one of those spaces, and place **Rn** randomly within it.
Depending on the layout of rectangles already inside, if you find that brute-force is taking too many iterations (i.e. that there are too few valid spaces to put a new rectangle after some point in time), an alternative might be to iterate through the entire bounding box, making a list of all valid positions where the new smaller box can go. Then, choose a random element from that list. Worst-case O(x\*y) time, which is at least better than the random worst-case O(infinity).
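For what it's worth, here is a rough sketch of that enumeration idea (my own illustration; it assumes integer coordinates and represents each rectangle as an `(x, y, w, h)` tuple):

```
import random

def overlaps(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles (touching edges don't count)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def random_free_position(rb, sro, rn_size):
    """List every top-left corner where Rn fits inside Rb without hitting SRo, then pick one."""
    rb_x, rb_y, rb_w, rb_h = rb
    rn_w, rn_h = rn_size
    candidates = []
    for x in range(rb_x, rb_x + rb_w - rn_w + 1):
        for y in range(rb_y, rb_y + rb_h - rn_h + 1):
            rn = (x, y, rn_w, rn_h)
            if not any(overlaps(rn, ro) for ro in sro):
                candidates.append((x, y))
    return random.choice(candidates) if candidates else None

# Example: 100x100 bounding box with two existing rectangles, placing a 10x5 one.
print(random_free_position((0, 0, 100, 100), [(10, 10, 30, 30), (60, 50, 20, 20)], (10, 5)))
```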
3,789
I've found a binary copy of a DOS freeware game I was fond of back in the day ([Firefighter](http://dosgamer.com/firefighter/), downloadable as `FIRE.COM`) but the source code is not available. I'd like to re-implement it in C or Python using Unix curses and while I'm fully competent to do that part, I know almost nothing about tools for reverse-engineering and decompiling DOS .COM file binaries. Going by documentation that says .COM is headerless raw machine code, I tried running `objdump -D -b binary -m i386` on it. The result didn’t look valid. `objdump -D -b binary -m i8086 --adjust-vma=0x100` appeared to do the trick as did using NASM with `ndisasm -b 16 -a -o 0x100` and both of those do seem to produce plausible *assembly* listings. Ideally, though, I'm looking for something that decompiles and renders to C in *as-if-it-were-assembler* style with generated variable names and useful annotations on the DOS traps. I understand that such a beast may not exist, but on the chance one does I'm seeking recommendations.
2014/03/04
[ "https://reverseengineering.stackexchange.com/questions/3789", "https://reverseengineering.stackexchange.com", "https://reverseengineering.stackexchange.com/users/4077/" ]
My answer is a little late; newcomer to this site. The Decompiler project was initiated in order to decompile MS-DOS EXE and COM binaries. The project has both a command-line and a GUI tool: <https://sourceforge.net/projects/decompiler/> Use the following command with the command-line tool to decompile COM programs: ``` decompile --default-to ms-dos-com myprog.com ``` In the GUI, use the menu command `File` > `Open as...` to open the COM file and specify a start address like 0800:0100.
No beast exists that will decompile a 16-bit COM program. If you prefer to debug it, look for GRDB from LADSoft, and also have on hand a copy of Ralf Brown's Interrupt List for all the int 21 / int 10 SetCursor / GetCursor calls. Also make sure you rename the downloaded exe to be 8dot3 compatible (name less than 8 characters).

```
GRDB.EXE firefi.com
Get Real Debugger Version 9.6
Copyright (c) 1997-2009 David Lindauer (LADSoft)
GRDB comes with ABSOLUTELY NO WARRANTY, for details type `?g'
This is free software, and you are welcome to redistribute it
under certain conditions; type `?gr' for details
DPMI Start code hooked
History enabled
eax:00000000 ebx:00000000 ecx:00007500 edx:00000000 esi:00000000 edi:00000000
ebp:00000000 esp:0000FFEE eip:00000100 flag:00000202 NV UP EI PL NZ NA PO NC
ds:1F25 es:1F25 fs:1F25 gs:1F25 ss:1F25 cs:1F25
1F25:0100 E9 25 27           jmp 2828
Size: 00007500 ->
```
1,069,451
As the title suggests, I am experiencing issues with my Display in Ubuntu 18.04. I have seen somewhat related posts (like [this](https://askubuntu.com/questions/1053088/ubuntu-18-04-unknown-display) one), but those typically suggest they have several detected displays, where one of them is dubbed 'Unknown Display'. Well, I only have a single display called 'Unknown Display', which means I: * cannot add a second/third/etc screen, * cannot adjust the resolution, * and cannot adjust the brightness I would very much like to be able to do the above, obviously, and I was hoping somebody over here might be able to help me out. I have tried out several things, like removing the current drivers in favor of older ones, but I seem to be missing something crucial every time.. Here are some specifics about my system: * I am running Ubuntu 18.04.1 LTS. * I have a `GP107M [GeForce GTX 1050 Mobile]` graphic card. * I have `nvidia-driver-396` installed. * It is a dual boot system, where I left the original Windows copy intact. Some potentially interesting stats: `lspci | grep VGA` output: ``` steven@stevens-laptop:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation Device 591b (rev 04) 01:00.0 VGA compatible controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] (rev a1 ``` `xrandr` output: ``` steven@stevens-laptop:~$ xrandr xrandr: Failed to get size of gamma for output default Screen 0: minimum 1920 x 1080, current 1920 x 1080, maximum 1920 x 1080 default connected primary 1920x1080+0+0 0mm x 0mm 1920x1080 77.00* ``` `nvidia-settings` output: ``` steven@stevens-laptop:~$ nvidia-settings ERROR: NVIDIA driver is not loaded ERROR: Unable to load info from any available system ``` If any additional info is needed to figure out what's going on, please let me know. And thanks in advance for any help on this! Here is a screenshot of the 'Settings -> Devices -> Displays' window in my Ubuntu by the way: [![display-settings](https://i.stack.imgur.com/QfDzN.jpg)](https://i.stack.imgur.com/QfDzN.jpg)
2018/08/27
[ "https://askubuntu.com/questions/1069451", "https://askubuntu.com", "https://askubuntu.com/users/864725/" ]
I had this problem and removing the monitors.xml config didn't work; there was a problem with the NVIDIA driver. Check by running

```
ubuntu-drivers devices
```

and if that looks good,

```
sudo ubuntu-drivers autoinstall
```

should do the trick.

<https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-18-04-bionic-beaver-linux>
I just had this problem - all of a sudden, when I booted the computer this morning, my Samsung 40" TV had turned into "Unknown Display" and I have no idea why. Then I found this question and tried to debug according to the first answer - so after `lspci` I ran `xrandr` and suddenly boom!, it worked! I still have no idea why, but I suspect it could be a kernel (i.e., a missing kernel update) thing.
30,019
I'm a web developer, but I occasionally step over into doing some mobile app development here and there. A client recently asked me to help them develop a very simple app. The app is essentially just an HTML / jQuery form that the user fills out to reserve my client's service. Having heard a lot about PhoneGap, I decided to try it for the first time, along with jQuery Mobile. Everything worked fine with the PhoneGap build, and after testing both iOS/Android work. Although, part of the form requires users to enter their credit card information, so my client can book them with their own system. The card is never processed through the app; the information is just gathered and sent (via e-mail) to my client, who uses their own payment gateway. Since I used PhoneGap to build the app, I simply used action="https://www.my-website-with-ssl.php" on the form. Like I said, both iOS and Android are sending the e-mails (to my client) fine. But is this even secure?
2013/01/31
[ "https://security.stackexchange.com/questions/30019", "https://security.stackexchange.com", "https://security.stackexchange.com/users/20239/" ]
Sending credit card info via e-mail is very bad and likely a violation of their merchant service agreement. Credit card details must be protected in transit and e-mail leaves it flapping in the breeze. The SSL connection to the server or not for transmitting form data is irrelevant unless the server is somehow encrypting the e-mail to protect it and even then it is really not a very good channel to use compared to storing the information protected in a database (still likely requires PCI-DSS compliance) or simply having it directly relay the user to the payment gateway itself to sidestep compliance issues entirely.
If that form is not embedded in the app and is delivered over an unencrypted channel then it is possible for an attacker to change it in transit to `action=https://evil-site.com/carding.php`. Ensure that all communication is encrypted and user data is being validated thoroughly on the server. To make sure the framework is interpreting the `https` part you can use an interception proxy to check out the live data. Putting `:443` after the domain name could force the connection to only use the secure channel. A different thing for concern is how safely the PhoneGap framework is using SSL. Classic browsers will prompt users for decisions while frameworks will have default behaviors for those special cases. Things to check for: * How PhoneGap deals with **self-signed certificates** that are usually a sign of communication being intercepted. * The same case for **expired certificates**. * In order to check for **revoked certificates** PhoneGap has to take additional actions such as CRL checking and OSCP server queries. * Validating **basic constraints**. Even iOS was vulnerable to this in 2011. * SSL **cipher strength** used.
43,294,488
I just started using MinGW for Windows. When trying to create executable using `g++ a.cpp -o a.exe -std=c++14` for the code below: ``` #include <string> using namespace std; int main() { string x = to_string(123); return 0; } ``` I'm getting following error: ``` C:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../libmingwex.a(vsnprintf.o):(.text+0x0): multiple definition of vsnprintf C:\Users\..\Local\Temp\cc4sJDvK.o:c:/mingw/include/stdio.h:426: first defined here collect2.exe: error: ld returned 1 exit status ``` What is the root cause for this error and how can I make it go away? While I can easily find a replacement for `to_string()` function I'm not sure what is causing this error to occur in the first place.
2017/04/08
[ "https://Stackoverflow.com/questions/43294488", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7837095/" ]
Installing `MinGW` packages `mingw32-libmingwex-*` will link an appropriate version of `vsnprintf` and avoid the linker error.
There are multiple definitions of `vsnprintf` in both `stdio.h` and `libmingwex.a`. I am able to work this around by adding `#define __USE_MINGW_ANSI_STDIO 0` to the start of the code, before any includes, which disables the definition of `vsnprintf` in `stdio.h`.
329,410
I'm fitting poisson GLMM, and I'm quite confused about the need or not to log10 transform my main predictor. Raw values of this main predictor was very spread, from 2e+03 to 6e+06, that's why I thought about log10 transformation. Linearity with response seem to me equal. [![enter image description here](https://i.stack.imgur.com/ga91J.jpg)](https://i.stack.imgur.com/ga91J.jpg) For fitting GLMM I had to scale the predictors (errors without scaling), using: ``` pvars <- c("x1","x1_log10", "x2" ,"x3", "x4", "x5") mydf_sc <- mydf mydf_sc[pvars] <- lapply(mydf[pvars],scale) ``` Plot with the scaled predictor are : [![enter image description here](https://i.stack.imgur.com/nm8wW.jpg)](https://i.stack.imgur.com/nm8wW.jpg) I'm very confused because results of my GLMM are opposite : my main predictor is significant without log10 transform and not significant if I use log10 transform ``` glmm1 <- glmer(count ~ x1+ x2 + x3 + x4 + x5 + (1| x6) +(1|x7)+(1|ID), data=mydf_sc, family="poisson") summary(glmm1) Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod'] Family: poisson ( log ) Formula: count ~ x1 + x2 + x3 + x4 + x5 + (1 | x6) + (1 | x7) + (1 | ID) Data: mydf_sc AIC BIC logLik deviance df.resid 610.8 638.6 -296.4 592.8 152 Scaled residuals: Min 1Q Median 3Q Max -1.9743 -0.6970 -0.2632 0.5131 3.0054 Random effects: Groups Name Variance Std.Dev. ID (Intercept) 0.07861 0.2804 x7 (Intercept) 0.03236 0.1799 x6 (Intercept) 0.78608 0.8866 Number of obs: 161, groups: ID, 161; x7, 8; x6, 2 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.41893 0.64230 2.209 0.0272 * x1 -0.49491 0.12024 -4.116 3.86e-05 *** x2 -0.13887 0.11129 -1.248 0.2121 x3 0.07619 0.09702 0.785 0.4323 x4 -0.08049 0.06327 -1.272 0.2033 x5 -0.09930 0.07945 -1.250 0.2113 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) x1 x2 x3 x4 x1 0.079 x2 -0.034 -0.519 x3 0.041 -0.257 0.514 x4 -0.053 -0.152 -0.003 -0.085 x5 -0.092 -0.125 0.117 0.256 0.297 ``` And with the log10 transform and scaled predictor ``` glmm2 <- glmer(count ~ x1_log10+ x2 + x3 + x4 + x5 + (1| x6) +(1|x7) + (1|ID), data=mydf_sc, family="poisson") summary(glmm2) Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod'] Family: poisson ( log ) Formula: count ~ x1_log10 + x2 + x3 + x4 + x5 + (1 | x6) + (1 | x7) + (1 | ID) Data: mydf_sc AIC BIC logLik deviance df.resid 628.4 656.2 -305.2 610.4 152 Scaled residuals: Min 1Q Median 3Q Max -2.0486 -0.6626 -0.1504 0.4169 2.3551 Random effects: Groups Name Variance Std.Dev. ID (Intercept) 0.11584 0.3403 x7 (Intercept) 0.03584 0.1893 x6 (Intercept) 0.82438 0.9080 Number of obs: 161, groups: ID, 161; x7, 8; x6, 2 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.50363 0.65939 2.280 0.0226 * x1_log10 -0.16203 0.13867 -1.168 0.2426 x2 -0.31247 0.13154 -2.376 0.0175 * x3 -0.05047 0.10111 -0.499 0.6176 x4 -0.12361 0.06499 -1.902 0.0572 . x5 -0.12676 0.08173 -1.551 0.1209 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) x1_l10 x2 x3 x4 x1_log10 0.090 x2 -0.048 -0.663 x3 0.089 0.176 0.223 x4 -0.035 -0.002 -0.086 -0.116 x5 -0.082 -0.014 0.047 0.219 0.285 ``` If I compare fits with AIC, glmm1 is better (i.e lower) , and if I calculate the sum of square residuals glmm1 is better (ie.lower) too. 
I thought to use a log10 transformation because of the spread of the predictor values, but finally since I use scaled predictors, I wonder if it's necessary yet. So, if some of you can explain me what happens (why results are so different) and which analysis is the good one, it would be very very appreciated. Data are here : ``` mydf <- structure(list(count = c(1, 1, 1, 5, 15, 11, 9, 8, 7, 1, 5, 16, 6, 2, 8, 15, 4, 3, 1, 0, 4, 1, 2, 2, 2, 1, 3, 1, 5, 3, 3, 4, 3, 2, 1, 0, 2, 2, 6, 2, 0, 0, 3, 1, 2, 2, 2, 1, 3, 5, 7, 7, 7, 6, 2, 3, 3, 4, 1, 2, 3, 1, 2, 3, 1, 1, 1, 1, 1, 2, 2, 5, 2, 2, 6, 2, 2, 2, 2, 2, 3, 2, 0, 0, 0, 0, 0, 0, 2, 3, 2, 2, 1, 0, 0, 3, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 1, 4, 1, 6, 3, 5, 1, 3, 4, 6, 7, 6, 3, 2, 3, 3, 5, 6, 8, 9, 4, 3, 2, 1, 6, 2, 2, 1, 1, 3, 5, 3, 2, 3, 3, 2, 3, 1, 4, 1, 2, 3, 1, 3, 1), x1 = c(454276.630324255, 15803.1563972592, 15458.2342654783, 79089.1163309219, 433064.92842954, 639609.580040433, 15796.6139883664, 104607.240566262, 3301847.85530658, 3380.36483734805, 6357.74361426188, 78110.710827558, 1529337.73525669, 3474601.85370647, 94724.1554098659, 639609.580040433, 39834.5777550968, 49961.5621483385, 49501.3804401392, 50826.3757249488, 51670.4355390994, 55337.9747884692, 52492.3355531823, 51375.6168345031, 51830.7997135719, 54004.1327091058, 52364.8333586487, 54076.335684573, 52105.8109404304, 52453.8631578501, 35511.3686511835, 35456.7012643244, 33395.0533851741, 35062.9690293352, 31354.2541181611, 31831.853724259, 118596.374688501, 121554.512420281, 191138.31164019, 121100.531704515, 113179.847358967, 137020.588002108, 137085.296834259, 136367.64719088, 136367.64719088, 135610.442532084, 136824.220830818, 136110.128893872, 133403.823145702, 132311.491140916, 128584.592590665, 123079.910041864, 123796.075203802, 124141.510674517, 121886.481343848, 122145.003101152, 13077.9129382755, 124419.09895087, 124419.09895087, 124419.09895087, 124515.585953799, 124515.585953799, 124515.585953799, 124611.257457142, 124611.257457142, 124611.257457142, 124611.257457142, 124419.09895087, 127248.25326102, 127248.25326102, 127248.25326102, 127248.25326102, 127248.25326102, 127248.25326102, 125084.715383792, 116820.543248463, 3312347.83977499, 3307143.68368415, 3339420.73710133, 3339420.73710133, 3489612.02613466, 3787340.40364162, 4044735.09967731, 4332712.49030506, 4410506.3486271, 6738481.68768351, 6829376.07553111, 6753771.27992383, 950841.73646546, 950841.73646546, 230393.74295532, 1283593.72888636, 1419207.9736855, 1491344.05744556, 2013224.87745932, 2023866.97925484, 1925108.17089723, 2661178.20766687, 2922632.22932389, 2972397.52352174, 2973263.36236786, 5087084.6439317, 5062249.54053654, 5049109.16912577, 4874011.01990889, 4865212.37320984, 4844194.80198645, 2946546.02832311, 2646007.37429602, 2678211.41076352, 2018903.43065148, 4123476.19271286, 3164645.53052, 3824227.28626133, 3342110.58530565, 3339420.73710133, 3342110.58530565, 3343192.06281568, 852591.942449119, 2887.67136368804, 2887.67136368804, 2887.67136368804, 5225.19886143861, 2841.08844859385, 2841.08844859385, 2838.0416631723, 2384.70089496048, 2818.29878593123, 2816.21191647018, 2816.21191647018, 2816.21191647018, 2835.9401746766, 2838.0416631723, 2838.0416631723, 2841.08844859385, 2880.08521424055, 2880.08521424055, 2882.21941509514, 2882.21941509514, 2924.40544679865, 2924.40544679865, 3226.70820676332, 3226.70820676332, 3226.70820676332, 3226.70820676332, 3226.70820676332, 3214.82585949069, 3209.8220949141, 2441.3578929725, 2468.63429708923, 2439.58170286854, 
2441.3578929725, 2441.3578929725, 3207.28767252863, 3207.28767252863, 3209.77492390452, 3209.77492390452, 3209.77492390452, 3209.77492390452, 3226.70820676332, 3226.70820676332), x1_log10 = c(5.6573203956694, 4.19874383815735, 4.18915988463051, 4.89811672316093, 5.63655301400862, 5.80591495996403, 4.19856400570374, 5.01956174599534, 6.51875705768416, 3.52896357551324, 3.80330301034245, 4.89271059001513, 6.18450340454985, 6.54090504701398, 4.97646074165915, 5.80591495996403, 4.60026021802215, 4.69863600900148, 4.69461731023005, 4.70608914258146, 4.71324212230491, 4.74302326116043, 4.72009589635918, 4.7107570492143, 4.71458790975622, 4.73242699582451, 4.71903972581057, 4.73300725530994, 4.71688615935781, 4.71977747892335, 4.55036741085983, 4.54969832829418, 4.52368214206241, 4.54484868809727, 4.49629647386532, 4.50286193044554, 5.0740714135068, 5.08477108562158, 5.28134774548676, 5.08314604996246, 5.05376910388449, 5.13678582689321, 5.13699087677218, 5.1347113474973, 5.1347113474973, 5.13229313318477, 5.13616298365657, 5.13389044525503, 5.12516827596201, 5.12159756393332, 5.10918893317819, 5.09018717015312, 5.09270687615138, 5.09391702599875, 5.08595553988135, 5.08687570487486, 4.11653844186966, 5.09488705183641, 5.09488705183641, 5.09488705183641, 5.09522371665322, 5.09522371665322, 5.09522371665322, 5.09555727852436, 5.09555727852436, 5.09555727852436, 5.09555727852436, 5.09488705183641, 5.10465182948183, 5.10465182948183, 5.10465182948183, 5.10465182948183, 5.10465182948183, 5.10465182948183, 5.09720424470521, 5.06751922150016, 6.52013593709916, 6.51945306388122, 6.5236711397163, 6.5236711397163, 6.54277714493193, 6.57833434097488, 6.60689008379363, 6.63675987112144, 6.64448845155199, 6.82856205247856, 6.83438102881605, 6.82954634893355, 5.97810823649622, 5.97810823649622, 5.36247068032029, 6.10842758664355, 6.15204604253689, 6.17357784802218, 6.30389228834369, 6.30618196465291, 6.28445513732707, 6.42507395836007, 6.46577416916377, 6.47310689079747, 6.47323337935542, 6.70646896391141, 6.70434354963434, 6.70321476087896, 6.68788650676914, 6.68710180256944, 6.68522159931963, 6.46931322964654, 6.42259105021112, 6.42784485606065, 6.30511554601734, 6.61526349146535, 6.500325072091, 6.58254369590421, 6.52402081593243, 6.5236711397163, 6.52402081593243, 6.52416132706371, 5.93074122396392, 3.46054776609357, 3.46054776609357, 3.46054776609357, 3.7181028235452, 3.45348475436296, 3.45348475436296, 3.45301876672153, 3.3774339146909, 3.44998703355778, 3.44966533182334, 3.44966533182334, 3.44966533182334, 3.45269706498709, 3.45301876672153, 3.45301876672153, 3.45348475436296, 3.45940533759498, 3.45940533759498, 3.45972703932942, 3.45972703932942, 3.46603758412046, 3.46603758412046, 3.50875969365855, 3.50875969365855, 3.50875969365855, 3.50875969365855, 3.50875969365855, 3.50715745305789, 3.50648096220605, 3.38763144985953, 3.39245675841327, 3.38731536744143, 3.38763144985953, 3.38763144985953, 3.50613791502314, 3.50613791502314, 3.50647457983995, 3.50647457983995, 3.50647457983995, 3.50647457983995, 3.50875969365855, 3.50875969365855 ), x2 = c(1615L, 1500L, 1530L, 1605L, 1300L, 1367L, 1700L, 1450L, 1550L, 1315L, 1375L, 1455L, 1515L, 1585L, 1650L, 1700L, 900L, 910L, 915L, 920L, 925L, 935L, 990L, 995L, 1000L, 1005L, 1010L, 1015L, 1020L, 1025L, 1030L, 1035L, 1040L, 1045L, 1050L, 1055L, 1175L, 1180L, 1185L, 1190L, 1195L, 1200L, 1205L, 1210L, 1215L, 1220L, 1225L, 1230L, 1235L, 1240L, 1245L, 1250L, 1255L, 1260L, 1265L, 1270L, 1295L, 1300L, 1305L, 1310L, 1315L, 1320L, 1325L, 1330L, 1335L, 1360L, 1365L, 1370L, 
1375L, 1380L, 1385L, 1390L, 1395L, 1400L, 1405L, 1410L, 1500L, 1502L, 1505L, 1508L, 1510L, 1512L, 1514L, 1516L, 1518L, 1520L, 1522L, 1524L, 1528L, 1530L, 1532L, 1534L, 1538L, 1540L, 1542L, 1544L, 1546L, 1548L, 1550L, 1552L, 1556L, 1559L, 1602L, 1604L, 1608L, 1612L, 1615L, 1620L, 1633L, 1636L, 1638L, 1640L, 1643L, 1645L, 1648L, 1650L, 1652L, 1654L, 1658L, 810L, 815L, 820L, 825L, 830L, 835L, 840L, 845L, 850L, 855L, 900L, 905L, 910L, 915L, 920L, 925L, 930L, 935L, 940L, 945L, 950L, 955L, 950L, 955L, 1000L, 1005L, 1010L, 1015L, 1020L, 1025L, 1030L, 1035L, 1040L, 1045L, 1050L, 1055L, 1100L, 1105L, 1110L, 1115L, 1130L, 1135L), x3 = c(13.5, 13.5, 13.5, 24, 24, 24, 24, 24, 24, 0, 2, 1, 1, 1, 1, 1, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52), x4 = c(30L, 60L, 30L, 40L, 40L, 20L, 50L, 20L, 10L, 30L, 5L, 25L, 10L, 0L, 15L, 20L, 60L, 60L, 60L, 90L, 20L, 20L, 5L, 20L, 30L, 20L, 30L, 20L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 30L, 5L, 20L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 30L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 5L, 5L, 0L, 0L, 30L, 30L, 40L, 50L, 50L, 40L, 30L, 0L, 0L, 0L, 0L, 20L, 20L, 20L, 0L, 0L, 0L, 0L, 0L, 15L, 15L, 5L, 10L, 10L, 10L, 30L, 50L, 50L, 50L, 50L, 50L, 50L, 50L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 40L, 0L, 0L, 0L, 30L, 30L, 30L, 10L, 10L, 10L, 50L, 50L, 50L, 50L, 50L, 50L, 40L, 40L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 50L, 50L, 50L, 50L, 50L, 50L, 50L, 50L, 50L, 50L, 0L, 0L), x5 = c(40L, 40L, 70L, 60L, 60L, 70L, 50L, 70L, 50L, 70L, 95L, 50L, 90L, 70L, 80L, 70L, 0L, 0L, 0L, 0L, 10L, 20L, 20L, 10L, 40L, 70L, 50L, 60L, 90L, 90L, 90L, 90L, 90L, 90L, 95L, 95L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 40L, 50L, 30L, 5L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 30L, 20L, 30L, 10L, 40L, 20L, 20L, 30L, 30L, 0L, 0L, 0L, 0L, 5L, 5L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 30L, 40L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 10L, 25L, 45L, 60L, 60L, 60L, 20L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 10L, 10L, 10L, 10L, 20L, 20L, 20L, 20L, 20L, 0L, 0L, 0L, 0L, 0L, 0L, 20L, 20L, 50L, 50L, 50L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 10L, 10L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 50L, 50L), x6 = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("Date1", "Date48", "Date49", "Date2", "Date3"), class = "factor"), x7 = structure(c(3L, 4L, 4L, 1L, 3L, 2L, 4L, 2L, 6L, 1L, 
7L, 1L, 6L, 6L, 2L, 2L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L), .Label = c("Site4", "Site6", "Site1", "Site3", "Site7", "Site9", "Site5", "Site10", "Site11", "Site13", "Site12", "Site2", "Site8"), class = "factor"), ID = 1:161), .Names = c("count", "x1", "x1_log10", "x2", "x3", "x4", "x5", "x6", "x7", "ID"), row.names = c(NA, -161L), class = "data.frame") ``` --- Thanks @Florian Hartig , @whuber, and @Elvis for all the element you gave. They were very helpful to understand what happens. As suggested by @Elvis, I fit the model removing the 4 points having count >10 and obtained pvalue = 0.09. ``` ind <- which(mydf_sc$count >10) ind [1] 5 6 12 16 glmm2b <- glmer(count ~ x1_log10+ x2 + x3 + x4 + x5 + + (1| x6) +(1|x7) + (1|ID), + data=mydf_sc[-ind,], family="poisson") summary(glmm2b) Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod'] Family: poisson ( log ) Formula: count ~ x1_log10 + x2 + x3 + x4 + x5 + (1 | x6) + (1 | x7) + (1 | ID) Data: mydf_sc[-ind, ] AIC BIC logLik deviance df.resid 592.7 620.2 -287.4 574.7 148 Scaled residuals: Min 1Q Median 3Q Max -1.8740 -0.7304 -0.1666 0.4929 2.5919 Random effects: Groups Name Variance Std.Dev. ID (Intercept) 0.06340 0.2518 x7 (Intercept) 0.06662 0.2581 x6 (Intercept) 0.51231 0.7158 Number of obs: 157, groups: ID, 157; x7, 8; x6, 2 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.25735 0.54202 2.320 0.0204 * x1_log10 -0.34372 0.20201 -1.702 0.0888 . x2 -0.18029 0.15799 -1.141 0.2538 x3 0.01162 0.13034 0.089 0.9289 x4 -0.12246 0.06382 -1.919 0.0550 . x5 -0.08543 0.08204 -1.041 0.2978 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) x1_l10 x2 x3 x4 x1_log10 0.184 x2 -0.135 -0.726 x3 0.099 -0.135 0.217 x4 -0.055 -0.058 -0.092 -0.050 x5 -0.111 -0.085 0.027 0.257 0.327 ```
2018/02/19
[ "https://stats.stackexchange.com/questions/329410", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/164563/" ]
Let’s play with a simple linear model, even if it is inappropriate it is easier to understand: Predicting `count` with `x1` : ``` > summary( lm(count ~ x1, data=mydf ) ) Call: lm(formula = count ~ x1, data = mydf) Residuals: Min 1Q Median 3Q Max -3.3201 -1.5073 -0.4093 0.6720 12.7067 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.338e+00 2.431e-01 13.735 < 2e-16 *** x1 -5.786e-07 1.254e-07 -4.615 8.06e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.649 on 159 degrees of freedom Multiple R-squared: 0.1181, Adjusted R-squared: 0.1126 F-statistic: 21.3 on 1 and 159 DF, p-value: 8.06e-06 ``` Predicting `count` with `log10(x1)` : ``` summary( lm(count ~ x1_log10, data=mydf ) ) Call: lm(formula = count ~ x1_log10, data = mydf) Residuals: Min 1Q Median 3Q Max -3.0589 -1.6956 -0.7050 0.3267 13.1826 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.8489 0.9614 6.083 8.45e-09 *** x1_log10 -0.6196 0.1882 -3.292 0.00123 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.729 on 159 degrees of freedom Multiple R-squared: 0.06381, Adjusted R-squared: 0.05792 F-statistic: 10.84 on 1 and 159 DF, p-value: 0.001225 ``` So here already you observe the problem. The points with the low `x1` value have an higher count; compressing the scale of `x1` with the log makes this effect of `x1` less significant — just look how the position of the 4 counts above 10 shifts in the range of `x1` or `log10(x1)`... I bet that if you fit your model without these four points, `log10(x1)` still has an effect. I think this is the main reason. You can add to this the presence of other variables: `log10(x1)` adds less information to `x2` to `x5` than does `x1`: ``` > summary( lm(x1 ~ x2 + x3 + x4 + x5 , data=mydf ) ) Call: lm(formula = x1 ~ x2 + x3 + x4 + x5, data = mydf) Residuals: Min 1Q Median 3Q Max -2896309 -780317 -130054 660693 4581044 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -6462170 934691 -6.914 1.14e-10 *** x2 5022 470 10.684 < 2e-16 *** x3 32796 11502 2.851 0.00494 ** x4 4164 4835 0.861 0.39049 x5 -3165 4066 -0.778 0.43749 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1212000 on 156 degrees of freedom Multiple R-squared: 0.4863, Adjusted R-squared: 0.4731 F-statistic: 36.92 on 4 and 156 DF, p-value: < 2.2e-16 ``` Here what's relevant is $R^2 = 0.49$, and... ``` > summary( lm(x1_log10 ~ x2 + x3 + x4 + x5 , data=mydf ) ) Call: lm(formula = x1_log10 ~ x2 + x3 + x4 + x5, data = mydf) Residuals: Min 1Q Median 3Q Max -2.03458 -0.34902 0.03176 0.49380 0.88808 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.9326161 0.4537035 4.260 3.53e-05 *** x2 0.0030988 0.0002282 13.582 < 2e-16 *** x3 -0.0203155 0.0055829 -3.639 0.000372 *** x4 -0.0036238 0.0023471 -1.544 0.124627 x5 -0.0059743 0.0019737 -3.027 0.002891 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.5884 on 156 degrees of freedom Multiple R-squared: 0.7431, Adjusted R-squared: 0.7365 F-statistic: 112.8 on 4 and 156 DF, p-value: < 2.2e-16 ``` ...here $R^2 = 0.74$. So, loosely speaking, a greater part of the effect of `log10(x1)` is absorbed by the other variables.
@Elvis, thanks so much for your interest. In the results I gave, I used an individual random effect which in reality was not needed. So finally the p-value of the x1 partial slope is 0.000786, and the one for log10\_x1 after removing the 4 counts >10 is 0.15191. I agree that the magnitude remains different. Another element given by a colleague is that with x1 the spread of the x1 values is large, which leads to a large SSX value, reduces the standard error of the slope, and finally yields a powerful Wald test. My intention is to use x1 rather than log10\_x1 because 1) linearity seems to me equally good with x1 and x1\_log10, 2) AIC is lower with x1 (619 vs 630), 3) prediction is better with x1, 4) after removing the 4 "outliers" a signal still tends to appear, 5) with x1 the Wald test is more powerful. What do you say?
11,601,076
I'm trying to create a search system with multiple criteria fields. My question is how to handle the empty criteria fields (omitted by the user) when you search by the EXACT word, not just its similar matches (LIKE).

**Code:**

```
$prva = mysql_real_escape_string($_POST["crit1"]);
$druga = mysql_real_escape_string($_POST["crit2"]);
$tretja = mysql_real_escape_string($_POST["crit3"]);
$cetrta = mysql_real_escape_string($_POST["crit4"]);

$query = mysql_query("SELECT pointID
FROM bpoint
WHERE (sName LIKE '%$prva%')
AND (sAddr LIKE '%$druga%')
AND (placeID LIKE '%$tretja%')
AND (sPhone = '$cetrta') ");
```

Someone suggested this, but I don't understand exactly how it works:

```
WHERE (col1=@col1 or @col1 is null)
  and (col2=@col2 or @col2 is null)
  and (col3=@col3 or @col3 is null)
  and .
  .
```

thanks :)
2012/07/22
[ "https://Stackoverflow.com/questions/11601076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1318239/" ]
```
// Collect only the conditions the user actually filled in
$condition = array();

if(isset($_POST['condition1']))
    $condition[] = "name='somename'";
if(isset($_POST['condition2']))
    $condition[] = "address='someaddress'";
// More conditions here

$query = "SELECT pointID FROM bpoint";
if(!empty($condition))
    $query .= " WHERE " . implode(' AND ', $condition);

mysql_query($query);
```
The suggestion is the same as writing the following query:

```
SELECT pointID
FROM bpoint
WHERE (sName = '$prva' or sName is null)
AND (sAddr = '$druga' or sAddr is null)
AND (placeID = '$tretja' or placeID is null)
AND (sPhone = '$cetrta' or sPhone is null)
```

which is basically querying for the EXACT match in your returned data unless the column you are searching is null. This will work quite well - unless you have records where all the searched fields are null, in which case it will return every single one of them.
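For completeness, here is a rough sketch of how the same idea could be written directly against the PHP variables from the question, checking the *user input* rather than the column and treating an empty string as "criterion not provided" (this is an untested illustration, not a drop-in replacement):

```
$query = mysql_query("SELECT pointID
FROM bpoint
WHERE (sName = '$prva' OR '$prva' = '')
AND (sAddr = '$druga' OR '$druga' = '')
AND (placeID = '$tretja' OR '$tretja' = '')
AND (sPhone = '$cetrta' OR '$cetrta' = '')");
```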
54,827,163
I'm trying to include a (.h) header file, which is auto-generated by some compiler, in my code. Below is the code snippet from the auto-generated header file.

```
typedef struct SequenceOfUint8 {   // Line 69
   struct SequenceOfUint8 *next;
   Uint8 value;
} *SequenceOfUint8;                // Line 72
```

If I include this header file in C code (gcc compiler), it compiles fine without any error, but if I try to include this in CPP code, the g++ compiler throws the error mentioned below.

```
In file included from ssme/src/../include/xxxxx.h:39:0,
                 from ssme/src/ssme.cpp:11:
ssme/src/../include/yyyyy.h:72:4: error: conflicting declaration ‘typedef struct SequenceOfUint8* SequenceOfUint8’
    } *SequenceOfUint8;
      ^~~~~~~~~~~~~~~
ssme/src/../include/yyyyy.h:69:16: note: previous declaration as ‘struct SequenceOfUint8’
    typedef struct SequenceOfUint8 {
                ^~~~~~~~~~~~~~~
```

Can someone please tell me how to use this in C++ code (if possible, without changing the auto-generated code)?

PS: I included the header file in the CPP file using an `extern "C" { #include "yyyy.h" }` statement, still no luck.
2019/02/22
[ "https://Stackoverflow.com/questions/54827163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7368517/" ]
You can't use it as is in C++ code. This is another instance where C and C++ being two *different* languages matters. The tag namespace in C is separate, in C++ it isn't. It doesn't even exist in C++ to be precise. Wrapping in `extern "C"` is also not going to make a C++ compiler treat the header as C code. That's not the intended function. The header must be standalone valid C++, which it simply isn't. You will need to write a C wrapper, that exposes a C++ compatible API.
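A minimal sketch of what such a wrapper could look like. The handle type and the `seq_create`/`seq_destroy` functions are invented purely for illustration; the key point is that the generated header is only ever included from a file compiled as C, never from C++:

```c
/* seq_wrapper.h - C++-safe interface; deliberately does NOT include the generated header */
#ifndef SEQ_WRAPPER_H
#define SEQ_WRAPPER_H

#ifdef __cplusplus
extern "C" {
#endif

/* Opaque handle: C++ code never sees the generated struct/typedef */
typedef struct SeqHandle SeqHandle;

SeqHandle *seq_create(void);
void       seq_destroy(SeqHandle *h);

#ifdef __cplusplus
}
#endif

#endif /* SEQ_WRAPPER_H */
```

```c
/* seq_wrapper.c - compiled as C, so it is allowed to include the generated header */
#include "seq_wrapper.h"
#include "yyyyy.h"      /* the auto-generated header */
#include <stdlib.h>

struct SeqHandle {
    SequenceOfUint8 seq;   /* the generated typedef (a pointer type) */
};

SeqHandle *seq_create(void)    { return calloc(1, sizeof(SeqHandle)); }
void seq_destroy(SeqHandle *h) { free(h); }
```

The C++ code then includes only `seq_wrapper.h` and works through the opaque handle.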
*This is more a remark than an answer, but it is not practical to post as a comment.*

Please, if you are in C++, just replace it by

```
struct SequenceOfUint8 {   // Line 69
   SequenceOfUint8 *next;
   Uint8 value;
};
```

else use *typedef* as usual in C, but do not make a *typedef* for `SequenceOfUint8 *`: making a *typedef* to hide a pointer is catastrophic for the readability of the code, and in your case it is even worse because the name is the same.
39,584
While playing around with the Stack Overflow [data-dump](http://www.rdbhost.com/rdbadmin/main.html?r0000000767), I was getting a feel for the schema and data by looking at my own posts and comments, and I happened coincidentally to notice my comment on [this answer](https://stackoverflow.com/questions/1995113/strangest-language-feature/2003277#2003277) which turns out to have been popular. However, I had no idea until I stumbled upon it in the dump. Shouldn't the profile page be a way to find everything that a given user has posted, be it a question, answer, or comment? I recognize that comments are definitely not the primary focus, and I wouldn't want to encourage commenting just for the sake of getting voted up (the comment in question was a drive-by comment that I never gave a second thought to until now). However, it would be nice to have a record of my past comments, or to be able to view comments of other users at a glance. I think, in particular, the comments would be most valuable (or detrimental) to those using Stack Overflow et al as a resume since comments tend to illustrate a user's temperament and civility when dealing with others much more so that questions and answers which are often more carefully worded and edited.
2010/02/16
[ "https://meta.stackexchange.com/questions/39584", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/140463/" ]
You pretty much answered the question yourself: * Comments are lightweight; even as far as "they shouldn't contain information that you care about to persist". * Comment voting even more so -- even though comments often get their votes for being witty, saying something funny, or including the words "freehand circles", the purpose of comment votes is to help a reader quickly identify those comments that are most relevant. * The information is in the data dump, so if you want to revisit your comments, it's easy to do (of course I have done so myself, only to find [this](https://meta.stackexchange.com/questions/36827/data-dump-includes-deleted-comments)) -- but that's what Jeff calls "information pornography". Additionally, * at least the most recent comments *are* shown on the profile page (in the activity tab).
I am able to see all my comments in the activity tab's comments section.

![enter image description here](https://i.stack.imgur.com/vd6YI.png)
106,884
I want to start up iexplore once an hour pointed at a specific url to kick off some processing. It's a Windows 2003 server with Internet Explorer 7 running in enhanced security configuration. The scheduled task is set to not require the user to be logged in to kick this task off.

When I am logged in I can see the iexplore window pop up and disappear again, and I know it has accessed my url (I've set it so I get an email). When I am not logged in I do not get anything triggered from my url - but when I look at the Scheduled Log it says the task ran the program OK with an exit code of 0. I can only therefore assume that iexplore pops up - but does not go to the required url.

Is this something to do with IE's 'enhanced security configuration'? What do I need to do to get it to open IE and actually go to my url?

Clarification: The url points to a .aspx page - so I'd want something that can start up a web page and not fall over when the .aspx page tries to access session info.
2010/01/27
[ "https://serverfault.com/questions/106884", "https://serverfault.com", "https://serverfault.com/users/20733/" ]
I'm wondering why you would want to launch such a behemoth just to access a URL? Never mind...

Here are some things that can help:

* [A set of command-line Windows website tools](http://johnmu.com/web-toolbox-1/)
* [WGET for Windows (win32)](http://users.ugent.be/~bpuype/wget/)
* [cURL](http://curl.haxx.se/)

Those are all command-line utilities that can make your life easier.
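For example — assuming one of these tools is installed and on the `PATH`, and with the URL as a placeholder for your own — the scheduled task could simply run something like this instead of launching IE:

```
curl -s -o NUL "http://yourserver/yourpage.aspx"

rem or, with wget:
wget -q -O NUL "http://yourserver/yourpage.aspx"
```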
I'm looking for an even better way to do this than I'm using right now. Currently I'm using a simple .vbs script to achieve this. ``` url="http://www.mysite.com/updates.asp" Set objHTTP = CreateObject("MSXML2.XMLHTTP") Call objHTTP.Open("GET", url, FALSE) objHTTP.Send ``` In your scheduled task, call the "wscript.exe" program, and specify the full absolute path to the .vbs file as the argument.
35,228,125
Let's say I have a C program that evaluates to either a zero or non zero integer; basically a program that evaluates to a boolean value. I wish to write a shell script that can find out whether the C program evaluates to zero or not. I am currently trying to assign the return value of the C program to a variable in a shell script but seem to be unable to do so. I currently have; ``` #!/bin/sh variable=/path/to/executable input1 ``` I know that assigning values in shell script requires us not to have spaces, but I do not know another way around this, since running this seems to evaluate to an error since the shell interprets `input1` as a command, not an input. Is there a way I can do this? I am also unsure as to how to check the return value of the C program. Should I just use an if statement and check if the C program evaluates to a value equal to zero or not?
2016/02/05
[ "https://Stackoverflow.com/questions/35228125", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4557453/" ]
This is very basic ``` #!/bin/sh variable=`/path/to/executable input1` ``` or ``` #!/bin/sh variable=$(/path/to/executable input1) ``` and to get the return code from the program use ``` echo $? ```
You can assign with backticks or `$(...)` as shown in [iharob](https://stackoverflow.com/users/1983495/iharob)'s [answer](https://stackoverflow.com/a/35228138/3266847). Another way is to interpret a zero return value as success and evaluate that directly (see [manual](https://www.gnu.org/software/bash/manual/bashref.html#Conditional-Constructs)): ``` if /path/to/executable input1; then echo "The return value was 0" else echo "The return value was not 0" fi ``` Testing with a little dummy program that exits with 0 if fed "yes" and exits with 1 else: ``` #!/bin/bash var="$1" if [[ $var == yes ]]; then exit 0 else exit 1 fi ``` Testing: ``` $ if ./executable yes; then echo "Returns 0"; else echo "Doesn't return 0"; fi Returns 0 $ if ./executable no; then echo "Returns 0"; else echo "Doesn't return 0"; fi Doesn't return 0 ``` If not using Bash: `if [ "$var" = "yes" ]; then`
34,670,679
I don't quite know how to explain my problem simply, but I need to modify a `DataFrame` by inserting almost empty rows for a software formatting compatibility problem. Here is an example: I need to change this type of `Dataframe`:

```
df = pd.DataFrame({"line1": [200, 400, 800],
                   "line2": [400, 900, 700],
                   "line3": [800, 700, 966],
                   "name": ["bla", "bloo", "bloom"]})
print df

   line1  line2  line3   name
0    200    400    800    bla
1    400    900    700   bloo
2    800    700    966  bloom
```

To something like this:

```
  line_name  line1 line2 line3
0        ID
1      name
2       bla    200   400   800
3      bloo    400   900   700
4     bloom    800   700   966
```

Of course the real dataframe has many more rows and columns. So I'm looking for a method that can deal with a variable number of columns without having to manually add the "Blank" under the line columns one by one. I tried some `Groupby` methods as well as making 2 dataframes (one with just the `line`, `ID`, `name` structure and another with the actual `names` and `values`) and then merging them, but without success.
2016/01/08
[ "https://Stackoverflow.com/questions/34670679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3943196/" ]
Not sure this is exactly what you want. Based on the example dataframe given, you could try: ``` df = pd.DataFrame({"line1": [200, 400, 800], "line2": [400, 900, 700], "line3": [800, 700, 966], "name": ["bla", "bloo", "bloom"]}) dftemp=pd.DataFrame(columns=df.columns) dftemp.loc[0]=(len(df.columns)-1)*['']+['ID'] dftemp.loc[1]=(len(df.columns)-1)*['']+['name'] dfnew= dftemp.append(df,ignore_index=True) dfnew.rename(columns={'name':'line_name'}, inplace=True) cols = dfnew.columns.tolist() cols = cols[-1:]+cols[:-1] dfnew = dfnew[cols] print(dfnew) Output: line_name line1 line2 line3 0 ID 1 name 2 bla 200 400 800 3 bloo 400 900 700 4 bloom 800 700 966 ```
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"line1": [200, 400, 800],
                   "line2": [400, 900, 700],
                   "line3": [800, 700, 966],
                   "name": ["bla", "bloo", "bloom"]})

# Prepend the two label rows at negative indices, then restore order
df.loc[-1] = [np.nan for i in range(df.shape[1] - 1)] + ['name']
df.loc[-2] = [np.nan for i in range(df.shape[1] - 1)] + ['ID']
df = df.fillna('')
df = df.sort_index()
df = df.reset_index()
df.loc[:, ['name', 'line1', 'line2', 'line3']]
```
1,795,649
Maybe this is a stupid question but it's bugging me. I have a bi-directional one to many relationship of Employee to Vehicles. When I persist an Employee in the database for the first time (i.e. it has no assigned ID) I also want its associated Vehicles to be persisted. This works fine for me at the moment, except that my saved Vehicle entity is not getting the associated Employee mapped automatically, and in the database the employee\_id foreign key column in the Vehicle table is null. My question is, is it possible to have the Vehicle's employee persisted at the same time the Employee itself is being persisted? I realise that the Employee would need to be saved first, then the Vehicle saved afterwards. Can JPA do this automatically for me? Or do I have to do something like the following: ``` Vehicle vehicle1 = new Vehicle(); Set<Vehicle> vehicles = new HashSet<Vehicle>(); vehicles.add(vehicle1); Employee newEmployee = new Employee("matt"); newEmployee.setVehicles(vehicles); Employee savedEmployee = employeeDao.persistOrMerge(newEmployee); vehicle1.setAssociatedEmployee(savedEmployee); vehicleDao.persistOrMerge(vehicle1); ``` Thanks! Edit: As requested, here's my mappings (without all the other methods etc.) ``` @Entity public class Employee { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name="employee_id") private Long id; @OneToMany(mappedBy="associatedEmployee", cascade=CascadeType.ALL) private Set<Vehicle> vehicles; ... } @Entity public class Vehicle { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name="vehicle_id") private Long id; @ManyToOne @JoinColumn(name="employee_id") private Employee associatedEmployee; ... } ``` I just realised I should have had the following method defined on my Employee class: ``` public void addVehicle(Vehicle vehicle) { vehicle.setAssociatedEmployee(this); vehicles.add(vehicle); } ``` Now the code above will look like this: ``` Vehicle vehicle1 = new Vehicle(); Employee newEmployee = new Employee("matt"); newEmployee.addVehicle(vehicle1); Employee savedEmployee = employeeDao.persistOrMerge(newEmployee); ``` Much simpler and cleaner. Thanks for your help everyone!
2009/11/25
[ "https://Stackoverflow.com/questions/1795649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/205971/" ]
You have to set the associatedEmployee on the Vehicle before persisting the Employee. ``` Employee newEmployee = new Employee("matt"); vehicle1.setAssociatedEmployee(newEmployee); vehicles.add(vehicle1); newEmployee.setVehicles(vehicles); Employee savedEmployee = employeeDao.persistOrMerge(newEmployee); ```
One way to do that is to set the cascade option on your "One" side of the relationship:

```
class Employee {
    // ...

    @OneToMany(cascade = {CascadeType.PERSIST})
    private Set<Vehicles> vehicles = new HashSet<Vehicles>();

    // ...
}
```

by this, when you call

```
Employee savedEmployee = employeeDao.persistOrMerge(newEmployee);
```

it will save the vehicles too.
54,451,154
I'm using react-router-dom v4, and had a hard time implementing routers in a way that params is passed. ``` const routing = ( <Router> <div> <Route exact path="/" component={App} /> <Route exact path="/login" component={Login} /> <Route exact path="/app" component={Mainpage} /> <Route exact path="/app/documents" component={Documents} /> <Route exact path="/app/quizzes" component={Quizzes} /> <Route exact path="/app/new" component={Documents} /> </div> </Router> ); const Quizzes = ({match}) => ( <div> <Switch> <Route exact path="/app/quizzes" component={QuizzesInterface} /> <Route exact path="/app/quizzes/:id" component={QuizInterface} /> </Switch> </div> ) const QuizzesInterface = ({match}) => ( <div> <Background /> <Interface status='quizzes' index={0}/> </div> ) const QuizInterface = ({match}) => { return( <div> <Background /> <Interface status='quizzes' index={match.params.id}/> </div> ) } ``` When I test localhost:3000/app/quizzes, the QuizzesInterface loads fine, but when I test localhost:3000/app/quizzes/1, say, the QuizInterface is not reached. What am I doing wrong?
2019/01/30
[ "https://Stackoverflow.com/questions/54451154", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9977407/" ]
Remove return from QuizInterface ``` const QuizInterface = ({match}) => ( <div> <Background /> <Interface status='quizzes' index={match.params.id}/> </div> ) ```
The problem is that everywhere in your App you have used the `exact` keyword, which works alright when you don't have nested Routes, but as soon as you have nested Routes it will cause problems. In your case you have the Quizzes Route configured as

```
<Route exact path="/app/quizzes" component={Quizzes} />
```

Now the Quizzes component has two Routes

```
<Switch>
   <Route exact path="/app/quizzes" component={QuizzesInterface} />
   <Route exact path="/app/quizzes/:id" component={QuizInterface} />
</Switch>
```

So when your url path is `/app/quizzes/1`, it doesn't match `exact path="/app/quizzes"` and hence the `Quizzes` component isn't rendered, which makes it obvious that any nested Routes it contains won't render either. The solution is to not use `exact` for Route paths that have nested Routes, but to use `Switch` and order your Routes correctly

```
const routing = (
  <Router>
      <Switch>
      <Route path="/login" component={Login} />
      <Route path="/app/documents" component={Documents} />
      <Route path="/app/quizzes" component={Quizzes} />
      <Route path="/app/new" component={Documents} />
      <Route path="/app" component={Mainpage} />
      <Route path="/" component={App} />
      </Switch>
  </Router>
);

const Quizzes = ({match}) => (
  <div>
      <Switch>
        <Route path="/app/quizzes/:id" component={QuizInterface} />
        <Route path="/app/quizzes" component={QuizzesInterface} />
      </Switch>
  </div>
)
```

> **P.S.** You can use the `exact` keyword for Routes, but ensure that they don't contain nested Routes
15,313,412
Say I've got this:

```
\item[1]\footnote{«footnote blah blah»}   footnote blah blah.
\item[2]\footnote{«blah blah footnote»}   random text
\item[3]\footnote{«this is not»}   more random text
```

I want to quickly delete \footnote{\*} (that includes \footnote{«footnote blah blah»}, \footnote{«blah blah footnote»} and \footnote{«this is not»}), but not \item[x] or the text that follows on those lines. How is this achieved? Thanks!
2013/03/09
[ "https://Stackoverflow.com/questions/15313412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I think the most accurate way would be to use: ``` :%s/\\footnote{.\{-}}//g ``` This will delete footnote and everything in between the braces. Using `{.*}` will delete everything up to the last brace even if it is outside of the footnote brace.
From normal mode (press `Esc` first), run:

```
:1,$s/\\footnote{.*}//g
```
10,765,363
Which is the best method to open a link using a button?

```
<button type="button" onclick="location='permalink.php'">Permalink</button>

<button type="button" href="index.php">Permalink</button>
```
2012/05/26
[ "https://Stackoverflow.com/questions/10765363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1416553/" ]
The first one should work, but inline scripts are not recommended. You should read about how to attach events using [`addEventListener`](https://developer.mozilla.org/en/DOM/element.addEventListener) for standards compliant browsers and [`attachEvent`](http://msdn.microsoft.com/en-us/library/ie/ms536343%28v=vs.85%29.aspx) for older IE. The second won't work since [buttons don't use the `href` attribute](http://reference.sitepoint.com/html/button).
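A small sketch of what attaching the handler could look like for modern, standards-compliant browsers (the button id and target URL are placeholders made up for the example, and the old-IE `attachEvent` fallback is omitted):

```
<button type="button" id="permalink-btn">Permalink</button>

<script>
  // Attach the click handler unobtrusively instead of using an inline onclick
  document.getElementById('permalink-btn').addEventListener('click', function () {
    window.location.href = 'permalink.php';
  });
</script>
```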
Alternatively, you can do something like this, if the button 'style' is what you're after, and not specifically the `<button>` tag: ``` <form method="link" action="permalink.php"> <input type="submit" value="Permalink"> </form> ``` Another thought is that you could style an anchor to look like a button, then you could use the href attribute.
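For instance, a rough sketch of that last idea — the class name and styles are only illustrative:

```
<a href="permalink.php" class="button-link">Permalink</a>

<style>
  .button-link {
    display: inline-block;
    padding: 4px 12px;
    border: 1px solid #999;
    border-radius: 3px;
    background: #eee;
    color: #000;
    text-decoration: none;
  }
</style>
```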
27,564,995
Well, I'm building an app for both iOS and Android, and both apps need to be synchronized with the cloud MySQL DB. Both apps work offline, so I need all the insert and update functionality. Delete is probably not going to be used, but I'd still like to know. What I'm looking for here is the solution, ideas, or algorithm needed to achieve this. I have kept CreatedOn and LastSync as timestamp columns on each table. Now the problem is: should I always keep checking all the rows and all the columns every time? What I think is that I should keep a ModifiedOn column on all the tables and check that against LastSync for every device ID. What do you guys suggest?
2014/12/19
[ "https://Stackoverflow.com/questions/27564995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1417245/" ]
It's fairly straightforward once you get the hang of variadic template expansions. this code builds a vector of each version of a function with 3 binary compile-time options. The index into the vector is the integer value of considering those options as binary bits: ``` #include <iostream> #include <functional> #include <vector> using namespace std; // the actual function implementation template <bool option_a, bool option_b, bool option_c> void function_with_options() { if(option_a) cout << "option a is enabled, "; else cout << "no option a, "; if(option_b) cout << "option b is enabled, "; else cout << "no option b, "; if(option_c) cout << "option c is enabled, "; else cout << "no option c, "; } // convert an integer into the 3 option bits and return a corresponding function object template<int bits> std::function<void()> make_function_with_options() { return function_with_options<bool(bits & 1), bool(bits & 2), bool(bits & 4)>; } // expand an index sequence of make_function_with_options<int> template<size_t... Is> std::vector<std::function<void()>> make_all_functions_impl(std::index_sequence<Is...>) { auto v = std::vector<std::function<void()>> { make_function_with_options<Is>()... }; return v; } // make an ordered vector of the 8 function variants std::vector<std::function<void()>> make_all_functions() { return make_all_functions_impl(make_index_sequence<8>{}); } // here is my global 'switch' static const std::vector<std::function<void()>> bit_functions = make_all_functions(); int main() { // call each 'switch' option to prove it works for(size_t i = 0 ; i < bit_functions.size() ; ++i) { cout << "case " << i << " "; bit_functions[i](); cout << endl; } return 0; } ``` output: ``` case 0 no option a, no option b, no option c, case 1 option a is enabled, no option b, no option c, case 2 no option a, option b is enabled, no option c, case 3 option a is enabled, option b is enabled, no option c, case 4 no option a, no option b, option c is enabled, case 5 option a is enabled, no option b, option c is enabled, case 6 no option a, option b is enabled, option c is enabled, case 7 option a is enabled, option b is enabled, option c is enabled, ```
Encode each boolean as a bit in a number, then use a switch statement. Something like this to set the bits:

```
int i = 0;
// set bit N-1 when boolN is true (shift the boolean into place;
// using && here would collapse the mask to just 0 or 1)
i |= (boolN ? 1 : 0) << (N - 1);
```
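A small self-contained sketch of the whole pattern for three options (the names `a`, `b`, `c` are made up for illustration):

```cpp
#include <iostream>

void dispatch(bool a, bool b, bool c) {
    // pack the three booleans into bits 0..2
    const int key = (a ? 1 : 0) | (b ? 2 : 0) | (c ? 4 : 0);

    switch (key) {
        case 0: std::cout << "none set\n";   break;
        case 1: std::cout << "only a\n";     break;
        case 2: std::cout << "only b\n";     break;
        case 3: std::cout << "a and b\n";    break;
        case 4: std::cout << "only c\n";     break;
        case 5: std::cout << "a and c\n";    break;
        case 6: std::cout << "b and c\n";    break;
        case 7: std::cout << "a, b and c\n"; break;
    }
}

int main() {
    dispatch(true, false, true);  // prints "a and c"
    return 0;
}
```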
7,234,089
In 2 parts of the C++/CLI code I'm working on, the program needs to run a different executable and have its STDOUT output redirected to a file. Its being attempted in 2 different ways, and only one of the 2 currently works. The program is being run on Windows XP. The first section assembles a long char\* that ends up looking something like this: ``` char* exepath = "start/B /D\\root\\bin \\root\\bin\\process.exe 1>\\root\\logs\\process_STDOUT.txt" ``` Then the code simply calls ``` Status = system(exepath); ``` This works fine: It both runs process.exe and creates the process\_STDOUT.txt file as expected. The second section tries to do the same using a ProcessStartInfo object instead. It successfully starts process.exe, but has not been creating the redirected output .txt file. Here is a simplified version of that code: ``` Process p = gcnew Process(); ProcessStartInfo^ s = gcnew ProcessStartInfo(); s->FileName = "\\root\\bin\\process.exe"; s->WindowStyle = ProcessWindowStyle::Hidden; s->CreateNoWindow = true; s->UseShellExecute = false; s->Arguments = "1>\\root\\logs\\process_STDOUT.txt"; p->StartInfo = s; p->Start(); ``` Am I doing something wrong with this code? And if not, is going back to just calling system(exepath) my only option or are there any others?
2011/08/29
[ "https://Stackoverflow.com/questions/7234089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/867710/" ]
Check out [this](http://www.codeproject.com/KB/cs/ProcessStartDemo.aspx) article. Noteworthy: ``` /* Setup Redirect */ processStartInfo.RedirectStandardError = true; processStartInfo.RedirectStandardInput = true; processStartInfo.RedirectStandardOutput = true; /* Capture Output */ StreamWriter inputWriter = process.StandardInput; StreamReader outputReader = process.StandardOutput; StreamReader errorReader = process.StandardError; process.WaitForExit(); ``` Passing `"1>\\root\\logs\\process_STDOUT.txt"` as an argument is not the same as redirecting
The redirection is not passed as a parameter to the program; the shell interprets it, and the program itself has no idea that its output is being redirected to a file. So you don't want to pass stuff like that as an argument.

However, the `Process` class has `StandardOutput` and `StandardError` stream properties to help you capture output. You simply need to specify (in your `ProcessStartInfo` object) that you want to redirect output.

```
s->RedirectStandardOutput = true;
s->RedirectStandardError = true;
```

Past that point, you'll be able to use [`p->StandardOutput`](http://msdn.microsoft.com/fr-fr/library/system.diagnostics.process.standardoutput%28v=VS.100%29.aspx) and [`p->StandardError`](http://msdn.microsoft.com/fr-fr/library/system.diagnostics.process.standarderror%28v=VS.100%29.aspx) to read the output from those streams.
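A rough, untested sketch of how that could look in the C++/CLI code from the question, writing the captured output to the same log file path the question uses:

```cpp
s->RedirectStandardOutput = true;
s->UseShellExecute = false;            // required for redirection
p->StartInfo = s;
p->Start();

// Read everything the child wrote to STDOUT, then persist it ourselves
String^ output = p->StandardOutput->ReadToEnd();
p->WaitForExit();

System::IO::File::WriteAllText("\\root\\logs\\process_STDOUT.txt", output);
```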
68,159,185
I really liked the toggle switch "Example 3" on this website and they provide the HTML/CSS code with it. Although, when I try it on my webpage it renders as a simple plain checkbox. I have beginners knowledge of HTML/CSS but the code makes sense and I'm not sure what is wrong. This is the code and the website attached on the bottom. Example 3 is what it's supposed to look like. ```css <style> /* Switch Yes No ==========================*/ .switch-yes-no { padding: 0; margin: 15px 0 0; background: #FFF; border-radius: 0; background-image: none; } .switch-yes-no .switch-label { box-shadow: none; background: none; } .switch-yes-no .switch-label:after, .switch-yes-no .switch-label:before { width: 100%; height: 70%; top: 5px; left: 0; text-align: center; padding-top: 10%; box-shadow: inset 0 1px 4px rgba(0, 0, 0, 0.2), inset 0 0 3px rgba(0, 0, 0, 0.1); } .switch-yes-no .switch-label:after { color: #FFFFFF; background: #32CD32; backface-visibility: hidden; transform: rotateY(180deg); } .switch-yes-no .switch-label:before { background: #eceeef; backface-visibility: hidden; } .switch-yes-no .switch-handle { display: none; } .switch-yes-no .switch-input:checked ~ .switch-label { background: #FFF; border-color: #0088cc; } .switch-yes-no .switch-input:checked ~ .switch-label:before { transform: rotateY(180deg) } .switch-yes-no .switch-input:checked ~ .switch-label:after { transform: rotateY(0) } </style> ``` ```html <label class="switch switch-yes-no"> <input class="switch-input" type="checkbox" /> <span class="switch-label" data-on="Yes" data-off="No"></span> <span class="switch-handle"></span> </label> ``` Example 3 on this website in case needed [Example 3](https://www.htmllion.com/css3-toggle-switch-button.html)
2021/06/28
[ "https://Stackoverflow.com/questions/68159185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11305136/" ]
Not an answer to this question, but I think this is more important than the actual solution. Solving interview questions is a great way to learn, but you should try to spend the time to understand why it doesn't work by yourself. As a programmer you'll spend most of your time reading and debugging code, and this skill is acquired by investing a huge amount of time on exactly cases like this one with `a debugger`.

First you need `a debugger`, and depending on your OS you can pick Visual Studio Community edition on Windows, Xcode on Mac, or gdb/lldb on Linux. I would recommend you start with MSVC or Xcode as they are beginner friendly, and learn how to use the Linux tools later - they are a must-have skill for experienced programmers.

Second, you need to run your code `line by line` using the debugger and inspect all `relevant variables` after each line to see if they have the expected value or not. Knowing which variables are the `relevant variables` takes time and experience, but you'll get there if you practice.

Third and final step is to repeat the second step until you `really understand` what the problem was. I'm not talking about `trial and error until it works`, I'm talking about understanding `why` the variables don't have the expected value. Search SO, read the standard, ask very specific questions and so on, but don't let others do the debugging for you, as they will acquire this skill whilst you won't.
As with most list questions, you might need several "iterators" to solve it:

```
class LinkedList {
private:
    // ...

    // Advance n by k nodes; return std::nullopt if the list is shorter than k.
    std::optional<Node*> next(Node* n, std::size_t k) {
        for (std::size_t i = 0; i != k; ++i) {
            if (n == nullptr) {
                return std::nullopt;
            }
            n = n->next;
        }
        return n;
    }

public:
    // ...

    std::optional<Node*> kth_to_last(std::size_t k) {
        auto endNode = next(head, k);
        if (!endNode) {
            return std::nullopt;  // list size < k
        }

        // Walk both pointers until the leading one falls off the end.
        auto res = head;
        while (*endNode != nullptr) {
            *endNode = (*endNode)->next;
            res = res->next;
        }
        return res;
    }
};
```

Note: If you cannot use `std::optional`, `std::pair<bool, Node*>` is a possible replacement.
59,198,428
For example, I want to check if data contains list of well-formed dicts, and this list has length between 1 and 10. ``` from marshmallow import Schema, fields class Record(Schema): id = fields.Integer(required=True) # more fields here, let's omit them schema = Record(many=True) # somehow define that we have constraint on list length # list length should be between 1 and 10 inclusive # validation should fail errors = schema.validate([]) assert errors # length < 1 errors = schema.validate([{"id": i} for i in range(100)]) assert errors # length > 10 # validation should succeed errors = schema.validate([{"id": i} for i in range(5)]) assert not errors ``` Is it possible to define such constraints using marshmallow? --- I need something like this but I would like to avoid additional level of nesting in data: ``` from marshmallow.validate import Length class BatchOfRecords(Schema): records = fields.Nested( Record, required=True, many=True, validate=Length(1, 10) ) ``` **UPD:** So to clarify the question, I would like to validate a list of dicts: ``` [ {"id": 1}, {"id": 2}, ... ] ``` And **not a dict** with a key containing list of dicts: ``` # it works but it introduces extra level of nesting, # I want to avoid it { "records": [ {"id": 1}, {"id": 2}, ... ] } ```
2019/12/05
[ "https://Stackoverflow.com/questions/59198428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10650942/" ]
**EDIT** So it is possible to validate collections using just marshmallow. You can use the `pass_many` kwarg with `pre_load` or `post_load` methods. I did not have success with the `pre_load` but got to work with post. The `pass_many` kwarg will treat the input as a collection so then you can check the length of the collection after loaded. I use the `many` kwarg so that I will only check the length if we are passing a collection of records rather than just an individual record ``` from marshmallow import Schema, fields, ValidationError, post_load class Record(Schema): id = fields.Integer(required=True) name = fields.String(required=True) status = fields.String(required=True) @post_load(pass_many=True) def check_length(self, data, many, **kwargs): if many: if len(data) < 1 or len(data) > 10: raise ValidationError(message=['Record length should be greater than 1 and less than 10.'], field_name='record') ``` **EDIT TEST CASES** ``` from unittest import TestCase from marshmallow import ValidationError from stack_marshmallow import Record class TestStackSchemasNonNested(TestCase): def test_empty_dict(self): with self.assertRaises(ValidationError) as exc: Record(many=True).load([]) self.assertEqual(exc.exception.messages['record'], ['Record length should be greater than 1 and less than 10.']) def test_happy_path(self): user_data = [{"id": "1", "name": "apple", "status": "OK"}, {"id": "2", "name": "apple", "status": 'OK'}] data = Record(many=True).load(user_data) self.assertEqual(len(data), 2) def test_invalid_values_with_valid_values(self): user_data = [{"id": "1", "name": "apple", "status": 'OK'}, {"id": "2"}] with self.assertRaises(ValidationError) as exc: Record(many=True).load(user_data) self.assertEqual(exc.exception.messages[1]['name'], ['Missing data for required field.']) self.assertEqual(exc.exception.messages[1]['status'], ['Missing data for required field.']) def test_too_many(self): user_data = [{"id": "1", "name": "apple", "status": "OK"}, {"id": "2", "name": "apple", "status": 'OK'}, {"id": "3", "name": "apple", "status": 'OK'}, {"id": "4", "name": "apple", "status": 'OK'}, {"id": "5", "name": "apple", "status": 'OK'}, {"id": "6", "name": "apple", "status": 'OK'}, {"id": "7", "name": "apple", "status": 'OK'}, {"id": "8", "name": "apple", "status": 'OK'}, {"id": "9", "name": "apple", "status": 'OK'}, {"id": "10", "name": "apple", "status": 'OK'}, {"id": "11", "name": "apple", "status": 'OK'}, ] with self.assertRaises(ValidationError) as exc: Record(many=True).load(user_data) self.assertEqual(exc.exception.messages['record'], ['Record length should be greater than 1 and less than 10.']) ``` > > **EDIT SOURCES**: <https://marshmallow.readthedocs.io/en/stable/extending.html> > > > You are very close. I added a little more complexity to record because I don't think you will just have one field or else I would just use a List of Integers. I added some unit tests as well so you can see how to test it. 
``` from marshmallow import Schema, fields, validate class Record(Schema): id = fields.Integer(required=True) name = fields.String(required=True) status = fields.String(required=True) class Records(Schema): records = fields.List( fields.Nested(Record), required=True, validate=validate.Length(min=1,max=10) ) ``` **TEST CASES** ``` from unittest import TestCase from marshmallow import ValidationError from stack_marshmallow import Records class TestStackSchemas(TestCase): def setUp(self): self.schema = Records() def test_empty_dict(self): with self.assertRaises(ValidationError) as exc: self.schema.load({}) self.assertEqual(exc.exception.messages['records'], ['Missing data for required field.']) def test_empty_empty_list_in_dict(self): with self.assertRaises(ValidationError) as exc: self.schema.load({"records": []}) self.assertEqual(exc.exception.messages['records'], ['Length must be between 1 and 10.']) def test_missing_fields_in_single_record(self): with self.assertRaises(ValidationError) as exc: self.schema.load({"records": [{"id": 1}]}) self.assertEqual(exc.exception.messages['records'][0]['name'], ['Missing data for required field.']) self.assertEqual(exc.exception.messages['records'][0]['status'], ['Missing data for required field.']) def test_list_too_long_and_invalid_records(self): with self.assertRaises(ValidationError) as exc: self.schema.load({"records": [{"id": 1, "name": "stack", "status": "overflow"}, {"id": 2, "name": "stack", "status": "overflow"}, {"id": 3, "name": "stack", "status": "overflow"}, {"id": 4, "name": "stack", "status": "overflow"}, {"id": 5, "name": "stack", "status": "overflow"}, {"id": 6, "name": "stack", "status": "overflow"}, {"id": 7, "name": "stack", "status": "overflow"}, {"id": 8, "name": "stack", "status": "overflow"}, {"id": 9, "name": "stack", "status": "overflow"}, {"id": 10, "name": "stack", "status": "overflow"}, {"id": 11, "name": "stack", "status": "overflow"}]}) self.assertEqual(exc.exception.messages['records'], ['Length must be between 1 and 10.']) ``` > > Sources: <https://marshmallow.readthedocs.io/en/stable/nesting.html> and > <https://marshmallow.readthedocs.io/en/stable/examples.html> > > >
What I want to do may be achieved using this little library: <https://github.com/and-semakin/marshmallow-toplevel>. ``` pip install marshmallow-toplevel ``` Usage: ``` from marshmallow.validate import Length from marshmallow_toplevel import TopLevelSchema class BatchOfRecords(TopLevelSchema): _toplevel = fields.Nested( Record, required=True, many=True, validate=Length(1, 10) ) ```
66,822,785
I need to create 4 lists of random numbers containing 6 numbers each, and the numbers can't overlap. How can I do it? The code below only generates 1 list; I need 4.

```
import random

list_numbers = [random.randint(8,29) for x in range(6)]
print(list_numbers)
```
2021/03/26
[ "https://Stackoverflow.com/questions/66822785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8237335/" ]
This answer doesn't meet the requirement of no imports, but it is worth mentioning anyway that it can be solved with a couple of imports at the top of main.qml. <https://github.com/jh3010-qt-questions/qml_location/tree/import_solution> ``` import QtQuick 2.12 import QtQuick.Window 2.12 import QtQuick.Controls 2.12 import "qml" import "qml/more" Window { width: 640 height: 480 visible: true title: qsTr("Hello World") Column { MyDeepComponent { } MyDeeperComponent { } } } ```
Another solution is to move main.qml into the qml folder. This allows main.qml to find MyDeepComponent because they are siblings. To find MyDeeperComponent, main.qml can import the "more" directory. This solution in represented in the [all\_in\_one\_solution](https://github.com/jh3010-qt-questions/qml_location/tree/all_in_one_solution) branch. directory structure ``` $ tree qml_location/ qml_location/ ├── main.cpp ├── qml │   ├── MyDeepComponent.qml │   ├── MyDeepComponentForm.ui.qml │   ├── main.qml │   └── more │   ├── MyDeeperComponent.qml │   └── MyDeeperComponentForm.ui.qml ├── qml.qrc ├── qml_location.pro └── qml_location.pro.user ``` main.cpp ``` #include <QGuiApplication> #include <QQmlApplicationEngine> int main(int argc, char *argv[]) { #if QT_VERSION < QT_VERSION_CHECK(6, 0, 0) QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling); #endif QGuiApplication app(argc, argv); QQmlApplicationEngine engine; const QUrl url(QStringLiteral("qrc:/qml/main.qml")); QObject::connect(&engine, &QQmlApplicationEngine::objectCreated, &app, [url](QObject *obj, const QUrl &objUrl) { if (!obj && url == objUrl) QCoreApplication::exit(-1); }, Qt::QueuedConnection); engine.load(url); return app.exec(); } ``` qml.qrc ``` <RCC> <qresource prefix="/"> <file>qml/main.qml</file> <file>qml/MyDeepComponent.qml</file> <file>qml/MyDeepComponentForm.ui.qml</file> <file>qml/more/MyDeeperComponent.qml</file> <file>qml/more/MyDeeperComponentForm.ui.qml</file> </qresource> </RCC> ``` main.qml ``` import QtQuick 2.12 import QtQuick.Window 2.12 import QtQuick.Controls 2.12 import "more" Window { width: 640 height: 480 visible: true title: qsTr("Hello World") Column { MyDeepComponent { } MyDeeperComponent { } } } ```
40,787,241
I've seen an **exercise** in a book, but I cannot figure out the answer:

> Is the following code legal or not? If not, how might you make it
> legal?
>
> `int null = 0, *p = null;`

Of course, the second declaration is not legal; you cannot convert int to int\*. The topic was in the section on `constexpr`.

**GUYS! This is just an exercise about pointers, consts, and constexprs! I think you have to solve it without a cast or nullptr.**
2016/11/24
[ "https://Stackoverflow.com/questions/40787241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417289/" ]
The other answers miss a very simple solution to make it legal: make `null` a pointer:

```
int *null = 0, *p = null;
```

But as noted, the best solution is to not use the `null` variable at all, but to use the standard `nullptr`.
I think you can use `reinterpret_cast<int*>` over the int in order to make this valid.
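If I read this suggestion correctly, the declaration would become something like the following one-liner — note that this merely makes it compile; converting an arbitrary integer to a pointer this way is implementation-defined and arguably not in the spirit of the exercise:

```
int null = 0, *p = reinterpret_cast<int*>(null);
```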
56,508,564
I want to build a combination of numbers and letters separated by - (minus sign), e.g. 1-R-3. The first numbers are in an array called $Points, the letters are stored in an array called $Color, and the last number is in a third array called $Points2;

```
$Points = [1,2,3,4];
$Color = [R,B,V,Y];
$Points = [1,2,3,4];
```

I want the result to be on one row: 1-R-1, 2-B-2 and so on. Now the result outputs as: 1 (minus sign) R (minus sign) 3

```
$Bind = "-";
$foo = $Points[0] . $Bind . $Points[1];
```

I have tried to convert the integer to a string with (String) but it has not worked. Can somebody help me to get the result on one line? I bet I'm missing something easy!

EDIT: The format in the arrays was incorrect since I forgot ->plaintext when doing my web-scraping. /U
2019/06/08
[ "https://Stackoverflow.com/questions/56508564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9860030/" ]
``` $Points = [1,2,3,4]; $Color = ['R','B','V','Y']; foreach ($Points as $point=>$value) { echo $value . '-' . $Color[$point] . '-' . $value . PHP_EOL; } ``` Note that the values in the `$Color` array need to be in quotes to avoid errors.
You can use php `join` function. For example: ```php $results = []; for ($i = 0; $i < count($Points); $i++) { $results[] = join('-', [$Points[$i], $Colors[$i], $Points2[$i]]); } // Now you have your combined values in $results array var_export($results); ```
40,921,128
It seems that MathML works OK in Word with a simple copy and paste for strings such as

```
<math xmlns="http://www.w3.org/1998/Math/MathML"><mfrac><mn>1</mn><mn>2</mn></mfrac></math>
```

But when I try to use `sympy`'s mathml printer:

```
from sympy import S
from sympy.printing.mathml import mathml

my_eqn = S(1) / 2
print(mathml(my_eqn))
```

The output is:

```
<apply><divide/><cn>1</cn><cn>2</cn></apply>
```

And I cannot copy and paste it into Word to make it a Word equation. Could anyone please help?
2016/12/01
[ "https://Stackoverflow.com/questions/40921128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2203311/" ]
Looks like the MathML that works is [presentation MathML](https://en.wikipedia.org/wiki/MathML#Presentation_MathML), whereas SymPy outputs [content MathML](https://en.wikipedia.org/wiki/MathML#Content_MathML). Unsurprisingly, Word is unable to convert from content to presentation, as that requires some degree of mathematical knowledge on the part of the software. SymPy [probably ought to support](https://github.com/sympy/sympy/issues/11893) outputting presentation format, but until that is implemented, you might try to find some other software that can convert between the two (I don't know of any myself, unfortunately).
SymPy [supports presentation MathML now](https://github.com/sympy/sympy/pull/13794). For that you use the arg `printer="presentation"` ```py mathml(expr, printer="presentation") ``` You should put the output inside the `<math>` tag: ```xml <math xmlns = "http://www.w3.org/1998/Math/MathML"> </math> ```
41,360
I am watching a lot of American and English movies on Netflix with the original sound and German subtitles. I've noticed that in the subtitles, when two people are talking, the formal pronouns are always used when the two characters address each other, regardless of their relationship. For example, a conversation in [*Sherlock*](https://en.wikipedia.org/wiki/Sherlock_(TV_series)) between Sherlock and Watson (extremely good friends, one might say): Sherlock says to Watson something along the lines of: *Sie sind […]* or *Sie haben […] gemacht*, while the audio is *You are […]* or *You've done […]*. This happens in all the series and movies I have watched. Another example would be [*The Expanse*](https://en.wikipedia.org/wiki/The_Expanse_(TV_series)) where crew members would talk to each other in formal language. I'm assuming it's not a slip-up, since it's Netflix and it's on such a huge scale, but what's the reasoning?
2018/01/30
[ "https://german.stackexchange.com/questions/41360", "https://german.stackexchange.com", "https://german.stackexchange.com/users/31744/" ]
In the German dubbed version Sherlock and John also use *Sie* when talking to each other. One reason might be that they still have a somewhat professional relationship, which is perhaps reinforced by the use of *Sie*. Especially in a business context in a German-speaking company it is still quite common to use *Sie* instead of *du*, even when talking to colleagues you have known for years. EDIT: Having just watched the first couple of minutes of Black Mirror with German audio and German subs, I noted that the subtitles constantly change between the use of *du* and *Sie* even within a single conversation between two people, while in the audio *Sie* is always used. There is also no obvious reason for the constant change of pronouns, and this can definitely be quite confusing to the viewer. So my conclusion that the German subtitles match the German dub is not always right. It might even be true that Netflix doesn't put much effort into producing the subtitles in some cases; however, this is just a guess.
The form of address in German is part of the language and varies depending on the social relationship. Furthermore, the default way you address your counterpart differs between parts of Germany. So it can be a subtle question which form of address is suitable in which situation. I don't know the other series, but for Sherlock as interpreted in the modern BBC series I can state the following: Sherlock has a particular lack of social interaction. This is called [Asperger syndrome](https://en.wikipedia.org/wiki/Asperger_syndrome). Very intelligent people often cannot interact well on feelings alone. Sometimes they have problems getting close to anybody, even their nearest family members and closest friends. So to keep a formal distance, they might always use the formal form of address simply to avoid closer contact. Even if they are aware it is not appropriate for the situation or relationship, they keep it out of habit. You could also simply say it is Sherlock's arrogance that keeps everyone beneath his level, even Watson. But whether this can be excused by Sherlock's particular "disabilities" is a long, long discussion. ;-) PS: Whether Asperger's applies to Sherlock may be the subject of much discussion and might be seen as controversial; however, it describes his behavior quite well.
9,662,458
I'm integrating Symfony DIC in a Zend Framework application, which is going fine except for parent services. In my DIC config I have a parent service PC\_Service which will be extended by all my services. The problem is that the entity manager is not available (NULL) in the services that extend PC\_Service. When I inject the entity manager via service.stats it is set correctly. ``` ... <service id="pc.service" class="PC_Service" abstract="true"> <call method="setEntityManager"> <argument type="service" id="doctrine.entitymanager" /> </call> </service> ... <service id="service.stats" class="Application_Service_Stats" parent="pc.service" /> ... ``` **PC\_Service** ``` abstract class PC_Service { protected $_em; public function setEntityManager($entityManager) { $this->_em = $entityManager; } } ``` **Application\_Service\_Stats** ``` class Application_Service_Stats extends PC_Service { ... $this->_em should be set here. } ``` I hope someone can tell me what I'm doing wrong.
2012/03/12
[ "https://Stackoverflow.com/questions/9662458", "https://Stackoverflow.com", "https://Stackoverflow.com/users/80840/" ]
Don't know if it's a typo but it should be `doctrine.orm.default_entity_manager` or `doctrine.orm.entity_manager` (alias of the previuos): ``` <service id="pc.service" class="PC_Service" abstract="true"> <call method="setEntityManager"> <argument type="service" id="doctrine.orm.default_entity_manager" /> </call> </service> ```
The solution is to compile the service container near the end of the ZF bootstrap. This process has a step called `ResolveDefinitionTemplatesPass` which patches in the calls from parent services. This is typically done by the [Symfony Kernel](https://github.com/symfony/symfony/blob/master/src/Symfony/Component/HttpKernel/Kernel.php#L673), but of course it isn't present in a ZF integration. ``` protected function _initServiceContainerCompilation() { // Wait for the SC to get built $this->bootstrap('Services'); // Doctrine modifies the SC, so we need to wait for it also $this->bootstrap('Doctrine'); // Compiling the SC allows "ResolveDefinitionTemplatesPass" to run, // allowing services to inherit method calls from parents $sc = $this->getResource('Services'); $sc->compile(); } ```
19,513,094
Complete noob here. Trying this in C++. I am trying to create an array of numbers that are of any base. That is, I want to define a new base and have this array "count" in the new base. For example: base 78. I want to create an array of, let's say, 3 base-78 numbers in such a way that when I add to the first "digit" it overflows to the second when it reaches 78. Something like [1] [1] [77] + 1 = [1] [2] [0]... Is this possible in an efficient way? Can I do this for, let's say, a 20-long array of base-10000? Or will computation time kill me? Thanks.
2013/10/22
[ "https://Stackoverflow.com/questions/19513094", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2527849/" ]
Basically: you can't do this with primitive types like long, int etc. in Java as they're always passed by value. Check out [Oracle's Java tutorials](http://docs.oracle.com/javase/tutorial/java/javaOO/arguments.html "Oracles Java tutorials") for some background. You could work around this problem very simply if you use a custom return-value-containing class like ``` public class EuclidReturnValues { long gcd; long latestA; long latestB; } ``` and change the signature of your method to (assuming you changed your code as well!) ``` public EuclidReturnValues extendedEuclid(long a, long b) ``` **Edit:** It might also be a good idea to nest this class inside your Euclid-algorithm-providing class so it's thematically coherent.
Well, almost. If you want to be able to change the value and have it reflected in the calling code, you can do something like this: ``` class ValueHolder { long value; //getter, setters, etc } ``` And pass this instead of your `long`. Note that this isn't passing by reference, you are just passing the value of the `ValueHolder` reference instead of your `long` value.
9,664,036
I have this piece of code: ``` public static void main(String[] args) { Downoader down = new Downoader(); Downoader down2 = new Downoader(); down.downloadFromConstructedUrl("http:xxxxx", new File("./references/word.txt"), new File("./references/words.txt")); down2.downloadFromConstructedUrl("http:xxxx", new File("./references/word1.txt"), new File("./references/words1.txt")); System.exit(0); } ``` Is it possible to run these two methods: `down.downloadFromConstructedUrl()` and `down2.downloadFromConstructedUrl()` simultaneously? If so, how?
2012/03/12
[ "https://Stackoverflow.com/questions/9664036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1238134/" ]
Instead of using threads directly, you had better use an ExecutorService and run all download tasks through this service. Something like: ``` ExecutorService service = Executors.newCachedThreadPool(); Downloader down = new Downloader("http:xxxxx", new File("./references/word.txt"), new File("./references/words.txt")); Downloader down2 = new Downloader("http:xxxx", new File("./references/word1.txt"), new File("./references/words1.txt")); service.invokeAll(Arrays.asList(down, down2)); ``` Your `Downloader` class must implement the `Callable` interface.
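A minimal sketch of what such a `Callable`-based `Downloader` could look like — the field names and the private download method are illustrative assumptions, matching the constructor-style usage in the snippet above rather than the exact class from the question:

```java
import java.io.File;
import java.util.concurrent.Callable;

// Hypothetical sketch: the fields and the download body are assumptions for illustration.
public class Downloader implements Callable<Void> {
    private final String url;
    private final File wordFile;
    private final File wordsFile;

    public Downloader(String url, File wordFile, File wordsFile) {
        this.url = url;
        this.wordFile = wordFile;
        this.wordsFile = wordsFile;
    }

    @Override
    public Void call() throws Exception {
        // Delegate to the existing download logic from the question.
        downloadFromConstructedUrl(url, wordFile, wordsFile);
        return null;
    }

    private void downloadFromConstructedUrl(String url, File first, File second) {
        // ... the original download implementation goes here ...
    }
}
```

`invokeAll` blocks until both downloads have finished; calling `service.shutdown()` afterwards releases the pool's threads.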
That would be what [concurrency and threading](http://docs.oracle.com/javase/tutorial/essential/concurrency/) is for.
18,560,060
I'm learning C programming in a self-taught fashion. I know that numeric pointer addresses must always be initialized, either statically or dynamically. However, I haven't yet read whether char pointer addresses must also be initialized. For example, would this code be correct, or is a pointer address initialization needed? ``` char *p_message; *p_message = "Pointer"; ```
2013/09/01
[ "https://Stackoverflow.com/questions/18560060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2718426/" ]
I'm not entirely sure what you mean by "numeric pointer" as opposed to "char pointer". In C, a `char` is an integer type, so it is an arithmetic type. In any case, initialization is not required for a pointer, regardless of whether or not it's a pointer to `char`. Your code has the mistake of using `*p_message` instead of `p_message` to set the value of the pointer: ``` *p_message = "Pointer" // Error! ``` This wrong because given that `p_message` is a pointer to `char`, `*p_message` should be a `char`, not an entire string. But as far as the need for initializing a `char` pointer when first declared, it's not a requirement. So this would be fine: ``` char *p_message; p_message = "Pointer"; ``` I'm guessing part of your confusion comes from the fact that this would not be legal: ``` char *p_message; *p_message = 'A'; ``` But then, that has nothing to do with whether or not the pointer was initialized correctly. Even as an initialization, this would fail: ``` char *p_message = 'A'; ``` It is wrong for the same reason that `int *a = 5;` is wrong. So why is that wrong? Why does this work: ``` char *p_message; p_message = "Pointer"; ``` but this fail? ``` char *p_message; *p_message = 'A'; ``` It's because there is no memory allocated for the `'A'`. When you have `p_message = "Pointer"`, you are assigning `p_message` the address of the first character `'P'` of the string literal `"Pointer"`. String literals live in a different memory segment, they are considered immutable, and the memory for them doesn't need to be specifically allocated on the stack or the heap. But `char`s, like `int`s, need to be allocated either on the stack or the heap. Either you need to declare a `char` variable so that there is memory on the stack: ``` char myChar; char *pChar; pChar = &myChar; *pChar = 'A'; ``` Or you need to allocate memory dynamically on the heap: ``` char* pChar; pChar = malloc (1); // or pChar = malloc (sizeof (char)), but sizeof(char) is always 1 *pChar = 'A'; ``` So in one sense `char` pointers are different from `int` or `double` pointers, in that they can be used to point to string literals, for which you don't have to allocate memory on the stack (statically) or heap (dynamically). I think this might have been your actual question, having to do with memory allocation rather than initialization. If you are really asking about initialization and not memory allocation: A pointer variable is no different from any other variable with regard to initialization. Just as an uninitialized `int` variable will have some garbage value before it is initialized, a pointer too will have some garbage value before it is initialized. As you know, you can declare a variable: ``` double someVal; // no initialization, will contain garbage value ``` and later in the code have an assignment that sets its value: ``` someVal = 3.14; ``` Similarly, with a pointer variable, you can have something like this: ``` int ary [] = { 1, 2, 3, 4, 5 }; int *ptr; // no initialization, will contain garbage value ptr = ary; ``` Here, `ptr` is not initialized to anything, but is later assigned the address of the first element of the array. Some might say that it's always good to initialize pointers, at least to `NULL`, because you could inadvertently try to dereference the pointer before it gets assigned any actual (non-garbage) value, and dereferencing a garbage address might cause your program to crash, or worse, might corrupt memory. 
But that's not all that different from the caution to always initialize, say, `int` variables to zero when you declare them. If your code is mistakenly using a variable before setting its value as intended, I'm not sure it matters all that much whether that value is zero, `NULL`, or garbage. --- **Edit**. OP asks in a comment: *You say that "String literals live in a different memory segment, they are considered immutable, and the memory for them doesn't need to be specifically allocated on the stack or the heap", so how does allocation occur?* That's just how the language works. In C, a string literal is an element of the language. The C11 standard specifies in §6.4.5 that when the compiler translates the source code into machine language, it should transform any sequence of characters in double quotes to a static array of `char` (or `wchar_t` if they are wide characters) and append a `NUL` character as the last element of the array. This array is then considered immutable. The standard says: `If the program attempts to modify such an array, the behavior is undefined.` So basically, when you have a statement like: ``` char *p_message = "Pointer"; ``` the standard requires that the double-quoted sequence of characters `"Pointer"` be implemented as a static, immutable, `NUL`-terminated array of `char` somewhere in memory. Typically implementations place such string literals in a read-only area of memory such as the text block (along with program instructions). But this is not required. The exact way in which a given implementation handles memory allocation for this array / `NUL` terminated sequence of `char` / string literal is up to the particular compiler. However, because this array exists somewhere in memory, you can have a pointer to it, so the above statement does work legally. An analogy with function pointers might be useful. Just as the code for a function exists somewhere in memory as a sequence of instructions, and you can have a function pointer that points to that code, but you cannot change the function code itself, so also the string literal exists in memory as a sequence of `char` and you can have a `char` pointer that points to that string, but you cannot change the string literal itself. The C standard specifies this behavior only for string literals, not for character constants like `'A'` or integer constants like `5`. Setting aside memory to hold such constants / non-string literals is the programmer's responsibility. So when the compiler comes across statements like: ``` char *charPtr = 'A'; // illegal! int *intPtr = 5; // illegal! ``` the compiler does not know what to do with them. The programmer has not set aside such memory on the stack or the heap to hold those values. Unlike with string literals, the compiler is not going to set aside any memory for them either. So these statements are illegal. Hopefully this is clearer. If not, please comment again and I'll try to clarify some more.
Initialisation is not *needed*, regardless of what type the pointer points to. The only requirement is that you must not attempt to *use* an uninitialised pointer (that has never been assigned to) for anything. However, for aesthetic and maintenance reasons, one should always initialise where possible (even if that's just to `NULL`).
24,880,971
Im getting this error: ``` D:\Users\JF150696\AppData\Local\Temp\ccrDYwyp.o Source2.cpp:(.text+0xdf): undefined reference to `cv::imread(std::string const&, int)' D:\Users\JF150696\AppData\Local\Temp\ccrDYwyp.o Source2.cpp:(.text+0xdec): undefined reference to `cv::_InputArray::_InputArray(cv::Mat const&)' D:\Users\JF150696\AppData\Local\Temp\ccrDYwyp.o Source2.cpp:(.text+0xe41): undefined reference to `cv::imwrite(std::string const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)' d:\devc\dev-cpp\mingw64\x86_64-w64-mingw32\bin\ld.exe D:\Users\JF150696\AppData\Local\Temp\ccrDYwyp.o: bad reloc address 0x20 in section `.text$_ZSt4sqrtf[__ZSt4sqrtf]' D:\devc\opencv\projekty\test 3 opencv\test 3 opencv\collect2.exe [Error] ld returned 1 exit status ``` What i did in Dev Options: 1. I have added this command line to compiler: -L"C:\opencv\build\x86\vc11\lib" -lopencv\_highgui248 -lopencv\_core248 -lopencv\_imgproc248 -lopencv\_calib3d248 -lopencv\_video248 -lopencv\_features2d248 -lopencv\_ml248 -lopencv\_highgui248 -lopencv\_objdetect248 -lopencv\_contrib248 -lopencv\_legacy248 -lopencv\_flann248 2. This lane to linker options: -static-libgcc -lopencv\_highgui248 -lopencv\_core248 -lopencv\_imgproc248 -lopencv\_calib3d248 -lopencv\_video248 -lopencv\_features2d248 -lopencv\_ml248 -lopencv\_highgui248 -lopencv\_objdetect248 -lopencv\_contrib248 -lopencv\_legacy248 -lopencv\_flann248 3. In directiories i have added: binaries: `D:\devc\opencv\build\x86\vc11\bin` libs: `D:\devc\opencv\build\x86\vc11\lib` headers C: `D:\devc\opencv\build\include\opencv2` `D:\devc\opencv\build\include\opencv` `D:\devc\opencv\build\include` headers C++: same as above 4. I have added opencv path `D:\devc\opencv` to PATH variable 5. My dev C++ version is: 5.7.0, OpenCV: 2.4.8 Anyone know how to fix that? **EDIT** Same problem using CodeBlock
2014/07/22
[ "https://Stackoverflow.com/questions/24880971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804906/" ]
Two approaches spring to mind: ### Treat the four cases separately and then OR them together: 1. start and end are null: any date matches, 2. start is null, so need DoB <= end 3. end is null, so need DoB >= start 4. neither is null, so need between This will lead to a long expression. ### Use [`IsNull`](https://stackoverflow.com/a/24881013/67392): As shown by mehdi lotfi in his [answer](https://stackoverflow.com/a/24881013/67392).
If you are using a nullable datetime type you could use ``` User.DateOfBirth BETWEEN isnull(@startDAte, CAST('1753-01-01' AS datetime)) AND isnull(@endDAte, CAST('9999-12-31' AS datetime)) ``` for datetime2 use ``` User.DateOfBirth BETWEEN isnull(@startDAte, CAST('0001-01-01' AS datetime2)) AND isnull(@endDAte, CAST('9999-12-31' AS datetime2)) ```
162,845
What physical evidence shows that subatomic particles pop in and out of existence?
2015/02/01
[ "https://physics.stackexchange.com/questions/162845", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72173/" ]
My current understanding is that the physical reality of vacuum fluctuations, particle-antiparticle pairs being created and then annihilating, is disputed. [The Casimir effect](http://en.wikipedia.org/wiki/Casimir_effect) is often cited as physical evidence, but there are a few authors who have come to dispute that the Casimir effect is convincing evidence for the reality of vacuum fluctuations, as they argue that the same results can be extracted by treating the effect as a result of retarded van der Waals forces, not vacuum fluctuations. See this paper: <http://arxiv.org/abs/hep-th/0503158> and this summary of the situation: <http://orbi.ulg.ac.be/bitstream/2268/137507/1/238.pdf> Maybe read the summary first; it's easy and quick to read :) As far as I know, aside from the Casimir effect we have no other evidence for the physical reality of vacuum fluctuations. If you want to delve deeper, a good start is the papers cited by the two above.
This phenomenon is called quantum fluctuations, or vacuum energy, and it can be described theoretically by the Heisenberg uncertainty relation with the energy term. One piece of physical evidence for this phenomenon is the Casimir effect: when two uncharged plates are put close to each other they exhibit an attractive force, and this force is explained by quantum fluctuations (subatomic particles popping in and out of existence).
15,539,029
I have been working on an app and recently started making some pretty large changes to it. Basically, the app enables the user to keep track of the water they drink and apply that to the goal they set at registration. One of the features would be a visual rep of their current progress, but I can't figure out how to do it the way I want. I want a glass of water that fills up based on current status (the glass fills to 75% if the user has 75% of their goal, etc.). Any ideas? I've looked it up over and over again, but no luck. **BECAUSE OF CLOSE VOTES:** The question -> How would I use a glass of water as a progress bar? That should be clear enough now, but if not... just let me know.
2013/03/21
[ "https://Stackoverflow.com/questions/15539029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/326544/" ]
Here's a fiddle I made doing exactly what you describe. You should be able to see what's going on! <http://jsfiddle.net/mike_marcacci/XztTN/> Basically, just nest a div.water inside div.glass, position the water at the bottom of the glass, and animate its height with jQuery! ``` $(function(){ $('.water').animate({ height: '75%' }, 1000) }) ```
I do something very similar by absolutely positioning a semi-transparent white PNG over the top of the base image (in your case the glass), and moving it higher as the percentage increases. ``` <div style="z-index: 10; position: relative;"><img src="images/overlay.png" style="position: absolute; top: 75%; left: 0; width: 100%; height: 250px;"></div> <p style="text-align: center; margin: 0; padding: 0;"><img src="glass.png" width="100" height="250"></p> ``` You could also do it with a semi-transparent glass image and a water background image, again moving the water up based on the percentage. That may result in a better effect in this particular instance.
28,628,667
I have a file like below: this is a sample file this file will be used for testing ``` this is a sample file this file will be used for testing ``` I want to count the words using AWK. The expected output is ``` this 2 is 1 a 1 sample 1 file 2 will 1 be 1 used 1 for 1 ``` Below is the AWK I have written, but I am getting some errors: ``` cat anyfile.txt|awk -F" "'{for(i=1;i<=NF;i++) a[$i]++} END {for(k in a) print k,a[k]}' ```
2015/02/20
[ "https://Stackoverflow.com/questions/28628667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3090114/" ]
It works fine for me: ``` awk '{for(i=1;i<=NF;i++) a[$i]++} END {for(k in a) print k,a[k]}' testfile used 1 this 2 be 1 a 1 for 1 testing 1 file 2 will 1 sample 1 is 1 ``` PS: you do not need to set `-F" "`, since its default is any blank. PS2: do not use `cat` with programs that can read data themselves, like `awk`. You can add `sort` after the command to sort the output. ``` awk '{for(i=1;i<=NF;i++) a[$i]++} END {for(k in a) print k,a[k]}' testfile | sort -k 2 -n a 1 be 1 for 1 is 1 sample 1 testing 1 used 1 will 1 file 2 this 2 ```
Here is Perl code which provides similar sorted output to Jotne's awk solution: `perl -ne 'for (split /\s+/, $_){ $w{$_}++ }; END{ for $key (sort keys %w) { print "$key $w{$key}\n"}}' testfile` `$_` is the current line, which is split based on whitespace `/\s+/` Each word is then put into `$_` The `%w` hash stores the number of occurrences of each word After the entire file is processed, the `END{}` block is run The keys of the `%w` hash are sorted alphabetically Each word `$key` and number of occurrences `$w{$key}` is printed
475,611
If I have 20 variables, should I do a pair-wise correlation check before building a classification model with Logistic Reg, Decision Tree or Random Forest ? Multicolinearity is a problem in regression. Is it a problem in classification as well ? Besides classifying, I also need to understand the influence of variables as well. The post [Should one be concerned about multi-collinearity when using non-linear models?](https://stats.stackexchange.com/questions/266267/should-one-be-concerned-about-multi-collinearity-when-using-non-linear-models) has conflicting answers.
2020/07/05
[ "https://stats.stackexchange.com/questions/475611", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/268655/" ]
Instead of focusing only on the distribution function, let's focus on equality in distribution. A finite sequence of random variables $X\_1, \ldots, X\_n$ is exchangeable if for every permutation $\pi$ we have $$ X\_1, \ldots, X\_n =\_d X\_{\pi(1)}, \ldots, X\_{\pi(n)} $$ where $=\_d$ means equality in distribution. Equality in distribution is equivalent to equality of the distribution functions, equality of the probability density functions if the random variables are continuous, and equality of the probability mass functions if the random variables are discrete. Now let's get back to your example: suppose that we have a realization $X\_1, \ldots, X\_5 = 1, 1, 1, 0, 0$. We can compute the joint law of $X\_1, \ldots, X\_5$ by hand, and in particular we have $$ P(X\_1, \ldots, X\_5 = 1, 1, 1, 0, 0) = \frac{3}{5} \frac{2}{4} \frac{1}{3} \frac{2}{2} \frac{1}{1} $$ Looking at the expression above, it is clear that a permutation of the indices will leave the denominators unchanged and only the numerators will permute, leaving the overall joint probability unchanged, which is the definition of exchangeability.
Something that might be helpful in unifying these two views is the Hewitt-Savage(-de Finetti) representation theorem. The theorem says that $X\_1,\dots,X\_n$ are exchangeable precisely when they are independent and identically distributed conditional on some additional information. This is important in Bayesian statistics, because it means that an exchangeable sequence (which seems a reasonable thing to assume) can be modelled as an iid sequence plus a prior (which is convenient mathematically). For binary variables the extra information is just the probability. If $P$ is a random variable between 0 and 1, and $X\_i|P\sim \mathrm{Bern}(P)$, then $X\_i$ are exchangeable, and they are conditionally iid given $P$. That's the de Finetti result. Hewitt and Savage showed this was true generally, not just for binary sequences: a sequence is exchangeable if and only if it's iid conditional on some extra information, in this case the 'tail $\sigma$-field'.
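To make the binary case concrete, one standard way to write de Finetti's representation for an (infinitely extendable) exchangeable binary sequence is $$ P(X\_1 = x\_1, \ldots, X\_n = x\_n) = \int\_0^1 \prod\_{i=1}^{n} p^{x\_i} (1-p)^{1-x\_i} \, \mathrm{d}\mu(p), $$ where $\mu$ is the distribution of the random probability $P$ — exactly the "extra information" conditional on which the $X\_i$ are iid $\mathrm{Bern}(P)$.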
73,086,940
I have the working example for rotating an image on hover [here](https://codepen.io/krakruhahah/pen/zYWzpOW) It uses scale(), rotate() and transition properties when hovering over the parent element to animate the image. And overflow: hidden on the parent element to hide the excess from the image transformation. When I try to replicate the same effect in React I see the image, but the image does not rotate when I hover. But it's all the same? What am I missing here? ``` import React from 'react'; import { Box } from '@mui/material'; import Image from 'mui-image'; const styles = { hoverRotate: { overflow: 'hidden', margin: '8px', minWidth: '240px', maxWidth: '320px', width: '100%', }, 'hoverRotate img': { transition: 'all 0.3s', boxSizing: 'border-box', maxWidth: '100%', }, 'hoverRotate:hover img': { transform: 'scale(1.3) rotate(5deg)', }, }; function Rotate() { return ( <Box style={styles.hoverRotate}> <Image src="https://picsum.photos/id/669/600/800.jpg" /> </Box> ); } export { Rotate }; ```
2022/07/22
[ "https://Stackoverflow.com/questions/73086940", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You either have to change the hash method's return value to `hash(self.text)`, or you can keep its return as `hash(str(self))` and add a `__str__` method whose return value is `self.text`.
You didn’t define a `__str__` or a `__repr__`, and the default repr includes the object’s id.
2,080,347
I have a model that uses a serialized column: ``` class Form < ActiveRecord::Base serialize :options, Hash end ``` Is there a way to make this serialization use JSON instead of YAML?
2010/01/17
[ "https://Stackoverflow.com/questions/2080347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14971/" ]
My requirements didn't need a lot of code re-use at this stage, so my distilled code is a variation on the above answer: ``` require "json/ext" before_save :json_serialize after_save :json_deserialize def json_serialize self.options = self.options.to_json end def json_deserialize self.options = JSON.parse(options) end def after_find json_deserialize end ``` Cheers, quite easy in the end!
Aleran, have you used this method with Rails 3? I've got somewhat the same issue and I was heading towards serialized when I ran into this post by Michael Rykov, but commenting on his blog is not possible, or at least not on that post. To my understanding he is saying that you do not need to define a Settings class; however, when I try this it keeps telling me that Setting is not defined. So I was just wondering if you have used it and what more should have been described? Thanks.
1,207,163
When running a clean install of Ubuntu 17.04 on a fresh Dell XPS 9360 (3200x1800 display) I was soon shown the following screen after the initial boot: [![corrupted screen](https://i.stack.imgur.com/NZd1f.png)](https://i.stack.imgur.com/NZd1f.png) Basically, the displayed image is corrupt and incomprehensible. What can I do to fix this?
2017/05/08
[ "https://superuser.com/questions/1207163", "https://superuser.com", "https://superuser.com/users/69121/" ]
Basically, the allotted default amount of VRAM (video ram) in Virtualbox is not sufficient for the high-resolution display. Simply try increasing it. By increasing it from the default (19MB?) to 128MB the problem went away. [![Video memory](https://i.stack.imgur.com/iP76K.png)](https://i.stack.imgur.com/iP76K.png) This might also be related to the Unity desktop environment, which according to the specs requires a video card of at least 256 MB, a requirement most integrated solutions will fail. In that case, try another desktop environment, such as XFCE (XUbuntu) which does not have a 3D rendered desktop by default.
While trying to install an old version of ubuntu (10.04) on my Ubuntu 19.04 (beta) on Lenovo X1 carbon - I had this issue. I changed the "Graphics Controller" in machine settings to VBOXVGA and it fixed the issue.
35,233,305
I ran `rvm install ruby-2.2.4` and got the following error: ``` $ rvm reinstall ruby-2.2.4 ruby-2.2.4 - #removing src/ruby-2.2.4.. ruby-2.2.4 - #removing rubies/ruby-2.2.4.. Searching for binary rubies, this might take some time. No binary rubies available for: osx/10.9/x86_64/ruby-2.2.4. Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies. Checking requirements for osx. Certificates in '/usr/local/etc/openssl/cert.pem' are already up to date. Requirements installation successful. Installing Ruby from source to: /Users/JAckerman/.rvm/rubies/ruby-2.2.4, this may take a while depending on your cpu(s)... ruby-2.2.4 - #downloading ruby-2.2.4, this may take a while depending on your connection... ruby-2.2.4 - #extracting ruby-2.2.4 to /Users/JAckerman/.rvm/src/ruby-2.2.4.... ruby-2.2.4 - #applying patch /Users/JAckerman/.rvm/patches/ruby/2.2.4/fix_installing_bundled_gems.patch. ruby-2.2.4 - #configuring.......................................................... ruby-2.2.4 - #post-configuration. ruby-2.2.4 - #compiling............................................................. ruby-2.2.4 - #installing............ Error running '__rvm_make install', showing last 15 lines of /Users/JAckerman/.rvm/log/1454705088_ruby-2.2.4/install.log from /Users/JAckerman/.rvm/src/ruby-2.2.4/lib/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/src/ruby-2.2.4/lib/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/src/ruby-2.2.4/lib/rubygems/dependency_installer.rb:3:in `<top (required)>' from /Users/JAckerman/.rvm/src/ruby-2.2.4/lib/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/src/ruby-2.2.4/lib/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/src/ruby-2.2.4/lib/rubygems.rb:556:in `install' from ./tool/rbinstall.rb:722:in `block (2 levels) in <main>' from ./tool/rbinstall.rb:721:in `each' from ./tool/rbinstall.rb:721:in `block in <main>' from ./tool/rbinstall.rb:757:in `call' from ./tool/rbinstall.rb:757:in `block in <main>' from ./tool/rbinstall.rb:754:in `each' from ./tool/rbinstall.rb:754:in `<main>' make: *** [do-install-nodoc] Error 1 ++ return 2 There has been an error while running make install. Halting the installation. ``` I tried these steps, running `rvm reinstall ruby-2.2.4` after each step, to no avail: * `rvm get head` * `rvm get master` * Then all of this: ``` $ rvm gem list error Please note that `rvm gem ...` was removed, try `gem list error` or `rvm all do gem list error` instead. ( see: 'rvm usage' ) JAckerman@Jasons-MacBook-Pro parkme3.1 (staging) $ gem list error /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/yaml.rb:4:in `<top (required)>': It seems your ruby installation is missing psych (for YAML output). To eliminate this warning, please install libyaml and reinstall your ruby. 
/Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require': dlopen(/Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/x86_64-darwin13/psych.bundle, 9): Library not loaded: /usr/local/lib/libyaml-0.2.dylib (LoadError) Referenced from: /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/x86_64-darwin13/psych.bundle Reason: image not found - /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/x86_64-darwin13/psych.bundle from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/psych.rb:1:in `<top (required)>' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/yaml.rb:5:in `<top (required)>' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems.rb:624:in `load_yaml' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/config_file.rb:328:in `load_file' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/config_file.rb:197:in `initialize' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/gem_runner.rb:74:in `new' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/gem_runner.rb:74:in `do_configuration' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/lib/ruby/2.2.0/rubygems/gem_runner.rb:39:in `run' from /Users/JAckerman/.rvm/rubies/ruby-2.2.4/bin/gem:21:in `<main>' JAckerman@Jasons-MacBook-Pro parkme3.1 (staging) $ brew install libyaml Warning: libyaml-0.1.6 already installed JAckerman@Jasons-MacBook-Pro parkme3.1 (staging) $ brew reinstall libyaml ==> Reinstalling libyaml Warning: libyaml-0.1.6 already installed JAckerman@Jasons-MacBook-Pro parkme3.1 (staging) $ brew unlink libyaml Unlinking /usr/local/Cellar/libyaml/0.1.6... 1 symlinks removed JAckerman@Jasons-MacBook-Pro parkme3.1 (staging) $ brew reinstall libyaml ==> Reinstalling libyaml Warning: libyaml-0.1.6 already installed, it's just not linked JAckerman@Jasons-MacBook-Pro parkme3.1 (staging) $ brew link libyaml Linking /usr/local/Cellar/libyaml/0.1.6... 5 symlinks created ``` * Then I re-tried `rvm get head` and `rvm get master`, reinstalling after each.
2016/02/05
[ "https://Stackoverflow.com/questions/35233305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1499260/" ]
[webpack-dev-server](https://github.com/webpack/webpack-dev-server) is just a tiny express server with compile-on-change and hot reload. So, if you already have an express server for your backend API, just merge the `compile on change and hot reload` behaviour into your express server. After taking a look at the `package.json` of [webpack-dev-server](https://github.com/webpack/webpack-dev-server), I found the key is just [webpack-dev-middleware](https://github.com/webpack/webpack-dev-middleware): ``` const express = require('express'); //your original BE server const app = express(); const webpack = require('webpack'); const middleware = require('webpack-dev-middleware'); //webpack hot reloading middleware const compiler = webpack({ .. webpack options .. }); //move your `devServer` config from `webpack.config.js` app.use(middleware(compiler, { // webpack-dev-middleware options })); app.listen(3000, () => console.log('Example app listening on port 3000!')) ``` So, when you run your BE server, it will compile everything using webpack and watch for changes, LOL ~ Also, add [webpack-hot-middleware](https://github.com/glenjamin/webpack-hot-middleware) for the hot-reloading function; see [Hot Module Replacement](https://webpack.js.org/concepts/hot-module-replacement/)
Just faced the same issue and came up with another solution (I found out more information about it later, but here it is). Instead of using the webpack-dev-server, use the `webpack --watch` command so files are compiled again upon changes. Once the files are updated in dist (or any other compiled-files folder) you can set nodemon to run on the dist folder and watch only the dist files. This way it is possible to have the express server running and serving the front-end as you would in a production environment (or kind of) and benefit from the fast reloads. Here's [a link](https://stackoverflow.com/questions/35545093/webpack-watch-and-launching-nodemon) with some solutions for combining webpack watch and nodemon. My scripts section is something like this at the moment (I'm using the run-all solution): ``` "scripts": { "serve": "npm-run-all --parallel serve:webpack serve:nodemon", "serve:webpack": "webpack --progress --colors --watch", "serve:nodemon": "nodemon ./dist/app.js --watch dist" }, ```
35,456,815
Suppose we have a text of size 1 GB, and let's take the following text as a sample: > > Stack Overflow is a privately held website, the flagship site of the Stack Exchange Network, created in 2008 by Jeff Atwood and Joel Spolsky. It was created to be a more open alternative to earlier Q&A sites such as Experts-Exchange. > > > For an input search string of **Iwc** I'm expecting the following result: **It was created**. In the sentence, the **first word begins with I**, the **second word begins with w** and the **third word begins with c**, giving that result. The algorithm should get the result as quickly as possible.
2016/02/17
[ "https://Stackoverflow.com/questions/35456815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3005903/" ]
You ask for the "best" algorithm. I can think of the following definitions of "best": 1. fastest to implement 2. fastest to run once (lowest run-time) given enough memory 3. lowest memory footprint 4. fastest to run for large numbers of queries, so that each additional query requires minimal time 5. a compromise between one or more of the above Each has a different solution. For example, solving for **2)** or **3)**, you can't beat writing a small state machine that reads through the input word by work, keeping track of how far it is along finding a full match. Complexity would be O(totalWordCount) Solving for **4)** is more interesting - you would have to build an efficient index and look things up in that index. A depth-limited [Trie](https://en.wikipedia.org/wiki/Trie) structure would then be optimal, since complexity would be O(queryWordCount), which is << O(totalWordCount). However, building the index is expensive (although linear in O(totalWordCount) for a fixed depth), and would make this a bad approach for **2)**. Finally, the folks at <https://codegolf.stackexchange.com/> will tell you about the joy of minimizing **6)** the amount of characters in your code. --- Expanding on a Trie-based solution: * First build an a trie. Store word initials as node labels, and an `ArrayList<Integer>` associated with each node with offsets to where the words with those initial-sequences can be found within the text. To keep index-size and build-time in check, you will need to limit the trie to a given depth (a depth-5 trie would only allow you to find sequences of initials of length 5 or shorter). With a maximal depth of 5, I estimate that the index size will be similar to your text - so make sure to have plenty of RAM available. * Searching is just a matter of walking trie nodes, starting from the root, until you either do not find the next node you need (initials in that order do not appear in text) or you have walked to the end of your initials. In the second case, the contents of the arraylist are the offsets to the text where the initialed words can be found, in their correct order.
I really don't know anything about algorithms, but for a simple solution I'd use a StringTokenizer delimited with a space; if stringTokenizer.nextToken() returns a string starting with the letter you're looking for, check the result of the next stringTokenizer.nextToken() call, and so on.
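A hedged sketch of that idea — note that a plain chain of nextToken() calls would need extra care to restart after a partial match, so this version first collects the tokens into an array; the method name and the case-insensitive comparison are assumptions:

```java
import java.util.StringTokenizer;

public class InitialScan {
    /** Returns the index of the first word of a matching run of initials, or -1 if none. */
    public static int findInitials(String text, String initials) {
        StringTokenizer st = new StringTokenizer(text, " ");
        String[] words = new String[st.countTokens()];
        for (int i = 0; st.hasMoreTokens(); i++) {
            words[i] = st.nextToken();
        }
        // Slide over the words and compare the initials one by one.
        for (int i = 0; i + initials.length() <= words.length; i++) {
            boolean match = true;
            for (int j = 0; j < initials.length(); j++) {
                if (Character.toLowerCase(words[i + j].charAt(0))
                        != Character.toLowerCase(initials.charAt(j))) {
                    match = false;
                    break;
                }
            }
            if (match) return i;
        }
        return -1;
    }
}
```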
47,916,611
I am working on an Android app where the user accepts or rejects a video call, which is working fine. But now I want to prevent the app from conflicting with others if there is any other call, like WhatsApp or Skype. I have therefore read about the [self-managed ConnectionService](https://developer.android.com/guide/topics/connectivity/telecom/selfManaged.html) and tried to implement it. [Here is the complete doc](https://developer.android.com/reference/android/telecom/ConnectionService.html) From the Activity I first register ``` TelecomManager tm = (TelecomManager) getSystemService(Context.TELECOM_SERVICE); if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.M) { PhoneAccountHandle phoneAccountHandle = new PhoneAccountHandle( new ComponentName(this.getApplicationContext(), MyConService.class), "example"); PhoneAccount phoneAccount = PhoneAccount.builder(phoneAccountHandle, "example") .setCapabilities(PhoneAccount.CAPABILITY_SELF_MANAGED).build(); tm.registerPhoneAccount(phoneAccount); } ``` Then I try to add a new call by doing something like below: ``` PhoneAccountHandle phoneAccountHandle = new PhoneAccountHandle( new ComponentName(this.getApplicationContext(), MyConService.class), "example"); Bundle extras = new Bundle(); Uri uri = Uri.fromParts(PhoneAccount.SCHEME_TEL, "11223344", null); extras.putParcelable(TelecomManager.EXTRA_INCOMING_CALL_ADDRESS, uri); tm.addNewIncomingCall(phoneAccountHandle, extras); ``` The app always crashes on tm.[addNewIncomingCall](https://developer.android.com/reference/android/telecom/TelecomManager.html#addNewIncomingCall(android.telecom.PhoneAccountHandle,%20android.os.Bundle))(phoneAccountHandle, extras) with the following log ``` /AndroidRuntime: FATAL EXCEPTION: main Process: com.liverep.videochat, PID: 27754 java.lang.RuntimeException: Unable to start activity ComponentInfo{com.liverep.videochat/com.liverep.videochat.VideoChat}: java.lang.SecurityException: This PhoneAccountHandle is not enabled for this user! at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2927) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2988) at android.app.ActivityThread.-wrap14(ActivityThread.java) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1631) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:154) at android.app.ActivityThread.main(ActivityThread.java:6682) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1520) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1410) Caused by: java.lang.SecurityException: This PhoneAccountHandle is not enabled for this user!
at android.os.Parcel.readException(Parcel.java:1693) at android.os.Parcel.readException(Parcel.java:1646) at com.android.internal.telecom.ITelecomService$Stub$Proxy.addNewIncomingCall(ITelecomService.java:1450) at android.telecom.TelecomManager.addNewIncomingCall(TelecomManager.java:1225) at com.liverep.videochat.VideoChat.placeIncomingCall(VideoChat.java:792) at com.liverep.videochat.VideoChat.call(VideoChat.java:730) at com.liverep.videochat.VideoChat.registerForPhoneCall(VideoChat.java:716) at com.liverep.videochat.VideoChat.onCreate(VideoChat.java:133) at android.app.Activity.performCreate(Activity.java:6942) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1126) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2880) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2988)  at android.app.ActivityThread.-wrap14(ActivityThread.java)  at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1631)  at android.os.Handler.dispatchMessage(Handler.java:102)  at android.os.Looper.loop(Looper.java:154)  at android.app.ActivityThread.main(ActivityThread.java:6682)  at java.lang.reflect.Method.invoke(Native Method)  at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1520)  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1410)  ```
2017/12/21
[ "https://Stackoverflow.com/questions/47916611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2121089/" ]
You've already noticed that `unlist` gives you the values column-wise: ``` unlist(df[1:3], use.names = FALSE) ## [1] 1 4 7 10 2 5 8 11 3 6 9 12 ``` To get the values row-wise, you can use the `c(t(...))` idiom: ``` c(t(df[1:3])) ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 ``` That would allow you to solve the problem in base R using: ``` as.data.frame(matrix(c(t(df[1:3]), t(df[4:6])), ncol = 3, byrow = TRUE)) ## V1 V2 V3 ## 1 1 2 3 ## 2 4 5 6 ## 3 7 8 9 ## 4 10 11 12 ## 5 13 14 15 ## 6 16 NA NA ## 7 19 20 21 ## 8 22 23 24 ``` --- Generalized as a function, you can try something like: ``` splitter <- function(indf, ncols) { if (ncol(indf) %% ncols != 0) stop("Not the right number of columns to split") inds <- split(sequence(ncol(indf)), c(0, sequence(ncol(indf)-1) %/% ncols)) temp <- unlist(lapply(inds, function(x) c(t(indf[x]))), use.names = FALSE) as.data.frame(matrix(temp, ncol = ncols, byrow = TRUE)) } splitter(df, 3) ``` --- A more flexible "data.table" approach would be something like the following: ``` library(data.table) rbindlist(split.default(as.data.table(df), c(0, sequence(ncol(df)-1) %/% 3)), use.names = FALSE) ## V1 V2 V3 ## 1: 1 2 3 ## 2: 4 5 6 ## 3: 7 8 9 ## 4: 10 11 12 ## 5: 13 14 15 ## 6: 16 NA NA ## 7: 19 20 21 ## 8: 22 23 24 ```
I was surprised that no one mentions `split.default`, which also works on data with more columns: ``` x <- split.default(df, ceiling(seq_along(df) / 3 )) do.call(rbind, lapply(x, setNames, names(x[[1]]))) # V1 V2 V3 # 1.1 1 2 3 # 1.2 4 5 6 # 1.3 7 8 9 # 1.4 10 11 12 # 2.1 13 14 15 # 2.2 16 NA NA # 2.3 19 20 21 # 2.4 22 23 24 ``` Add `make.row.names = FALSE` to get rid of the odd row names: ``` do.call(rbind, c(lapply(x, setNames, names(x[[1]])), list(make.row.names = FALSE))) # V1 V2 V3 # 1 1 2 3 # 2 4 5 6 # 3 7 8 9 # 4 10 11 12 # 5 13 14 15 # 6 16 NA NA # 7 19 20 21 # 8 22 23 24 ```
30,559,602
Googled it, read millions of pieces of advice about how to generate a Google API key and how to use it, but nothing works! 1. enabled Google Maps v2 API 2. created a project with one activity with a Google map (default wizard in Studio) 3. created a key on Google for the debug project, assigned the *SHA1;app package* of the application to the project key on Google 4. API key inserted into google\_maps\_api.xml (AndroidManifest.xml updated automatically) 5. build & run on AVD on which the app is working, map is shown. Then: 1. API already enabled 2. in Android Studio created a keystore for release (keystore.rel.jks) 3. generated fingerprints (keytool -list -v -keystore C:\Dropbox\AndroidKeyStore\keystore.rel.jks) 4. on Google added the new fingerprint to the new release project (*release SHA1;app package*) - key generated AIza... 5. API key inserted into google\_maps\_api.xml (AndroidManifest.xml updated automatically) 6. generate signed app (using keystore.rel.jks) & install on real device ... the app is working, but the map is STILL BLANK; only the zoom buttons and the Google label are shown. Thanks in advance for any advice. Zdenek
2015/05/31
[ "https://Stackoverflow.com/questions/30559602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4953224/" ]
I was tired of trying over and over again; it turns out that the Play Store has something called an **App signing certificate**, and the map works after I copied that SHA1 and pasted it into the Google console for the Android map.
Make sure the map key you entered in the google\_maps\_api.xml file is the same Android key generated by the Google console. You can also try a new key.
1,474,536
I'm trying to retrieve data from a massive form using ASP.NET MVC. Its top object L1 contains properties which are collections of another type L2. However, the type L2 contains some properties which are collections of type L3, and so on. There are probably 5 levels of nested collections. I've seen the approach of binding to Lists in ASP.NET MVC, where the element name has an array substring included in the names of all its property HTML elements, e.g. [0] in the first set, [1] in the second set, etc. However, when we've got nested objects, it's going to be quite tricky / a nightmare to go town[0].council[0].street[0].Name and use that convention to name the HTML elements. Has anyone come across this situation / can anyone see an elegant way to resolve it? Thanks Mickey
2009/09/24
[ "https://Stackoverflow.com/questions/1474536", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The default model binder that ships with ASP.NET MVC is what dictates the form element naming convention you are referring to. If there is another convention that you would like to use to name the form elements, go for it. Then you just have to write a custom model binder that can populate your nested objects based on your convention. There are lots of tutorials out there for creating model binders, here are some: <http://www.singingeels.com/Articles/Model_Binders_in_ASPNET_MVC.aspx> <http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/03/17/a-better-model-binder.aspx>
You may want to consider using LINQ to abstract the nitty gritty for you and allow you to perform joins and whatnot on the lists.
6,478,414
I have a button that applies a filter to a jQuery DataTable: ``` $("#buttonFilter").button().click(function() { if (lboxColor.val() != null) { jqTable.fnFilter($("option:selected", lboxColor).text(), 1); } }); ``` It shows me, for example, 47 rows out of 60. I tried .fnGetData() and fnGetNodes(), but they show me all rows, not the filtered ones. How can I get the 47 rows?
2011/06/25
[ "https://Stackoverflow.com/questions/6478414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/815410/" ]
In current versions [selector-modifier](https://datatables.net/reference/type/selector-modifier) uses a slightly different set of properties. ``` var table = $('#example').DataTable(); var rows = table.rows({"search" : "applied"}); ``` And you can iterate over the cell data like this: ``` table.rows({"search":"applied" }).every( function () { var data = this.data(); }); ``` Some helpful links: * <https://datatables.net/reference/api/rows()> * <https://datatables.net/reference/type/selector-modifier> * <https://datatables.net/reference/api/rows().every()>
Something like this, if it works for you (use a regular function rather than an arrow function so that `this` refers to the current row inside `every()`): ``` var data = [] table.rows({"search":"applied"}).every(function () { data.push(this.data()) }) console.log(data) ```
50,000,756
I have the following code: ``` final TransformerFactory factory = TransformerFactory.newInstance(); factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, ""); ``` The second line works fine in modern JDKs (I tried 1.8) with a default `TransformerFactory`. But when I add `xalan` (version 2.7.2, the most recent one) to classpath, I get the following on that second line: ``` Exception in thread "main" java.lang.IllegalArgumentException: Not supported: http://javax.xml.XMLConstants/property/accessExternalDTD at org.apache.xalan.processor.TransformerFactoryImpl.setAttribute(TransformerFactoryImpl.java:571) at Main.main(Main.java:11) ``` I guess this is because xalan's `TransformerFactory` does not support this attribute. Xalan's implementation gets picked up through `ServiceLoader` mechanism: it is specified in `services/javax.xml.transform.TransfomerFactory` in xalan jar. It is possible to override the `TransformerFactory` implementation using `javax.xml.transform.TransformerFactory` system property or with `$JRE/lib/jaxp.properties` file, or pass class name directly in code. But to do it, I must supply a concrete class name. Right now, it is `com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl`, but it's a bit scary to hardcode it in a system property, as on JDK upgrade they can easily change the class name, and we will just get a runtime error. Is there any way to instruct the `TransformerFactory.newInstance()` to just ignore that xalan-supplied implementation? Or tell it 'just use the system default'. P.S. I cannot just remove `xalan` from classpath because a bunch of other libraries we use depend on it.
2018/04/24
[ "https://Stackoverflow.com/questions/50000756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7637120/" ]
The only thing I could achieve here is to hardcode the JDK default factory and use the normal discovery process as a fallback: ``` TransformerFactory factory; try { //the open jdk implementation allows the disabling of the feature used for XXE factory = TransformerFactory.newInstance("com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl", SecureXmlFactories.class.getClassLoader()); } catch (Exception | TransformerFactoryConfigurationError e) { //this fallback uses the default implementation, which is xalan 2.7.2 here LOGGER.error("Cannot load default TransformerFactory, let's try the usual way", e); //not advisable if you don't want your application to be vulnerable. If needed you can put null here. factory = TransformerFactory.newInstance(); } ``` and then configure it under `try/catch`: ``` // this works everywhere, but it does not disable accessing // external DTDs... still enabling it just in case try { factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); } catch (TransformerConfigurationException e) { LOGGER.error("Cannot enable secure processing", e); } // this does not work in Xalan 2.7.2 try { factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, ""); } catch (Exception e) { LOGGER.error("Cannot disable external DTD access", e); } // this does not work in Xalan 2.7.2 try { factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, ""); } catch (Exception e) { LOGGER.error("Cannot disable external stylesheet access", e); } ``` And monitor the logs to see if/when the default JDK factory class name changes.
Another way to define which TransformerFactory implementation to load is by setting the `javax.xml.transform.TransformerFactory` system property ([docs](https://docs.oracle.com/javase/8/docs/api/javax/xml/transform/TransformerFactory.html#newInstance--)). In Ant you would do something like this inside your target: ```xml <jvmarg value="-Djavax.xml.transform.TransformerFactory=com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl"/> ``` In the above example, I'm setting the one from OpenJDK8 as default
13,642,529
The following works in Windows: ``` mkdir('../my/folder/somewhere/on/the/server', 0777, true); ``` I am talking about [PHP mkdir](http://php.net/manual/en/function.mkdir.php). It works perfectly and creates the subfolders recursively. However, if I run the same command on a Linux server, the folders aren't created. Previously I solved this by breaking up the path and creating each folder one by one. But I don't want to do that, because it should work with the "recursive" flag set to true. Why isn't it working?
2012/11/30
[ "https://Stackoverflow.com/questions/13642529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/146366/" ]
Sorry, but there must be some problem apart from the `mkdir` command itself. This tiny example works as expected and recursively creates the directories for me when executed on Linux: ``` #!/usr/bin/php <?php mkdir ('testdir/testdir2/testdir3',0777,TRUE); ?> ```
These are the things I have discovered:

* Make sure the root path exists
* Make sure the root path is writable
* Don't use `..`, always use the real path

Example:

```
$fixedRoot = __DIR__;
$recursivePath = 'my/folder/somewhere/on/the/server';

if (is_writable($fixedRoot) && is_dir($fixedRoot)) {
    mkdir($fixedRoot . DIRECTORY_SEPARATOR . $recursivePath, 0777, true);
} else {
    trigger_error("cannot write to that path");
}
```
9,169,517
I cant get Spring's JSON support working. In my spring-servlet.xml file i have included following lines: ``` <mvc:annotation-driven/> <context:component-scan base-package="my.packagename.here" /> <context:annotation-config /> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver" p:prefix="/WEB-INF/jsp/" p:suffix=".jsp" /> <bean id="jacksonMessageConverter" class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter"/> ``` I have also downloaded jackson libraries and added them to my eclipse project and also to WEB-INF/lib folder. When sending request to controller with jQuery getJSON method i get following errors: ``` javax.servlet.ServletException: Servlet.init() for servlet dispatcher threw exception java.lang.NoClassDefFoundError: org/codehaus/jackson/JsonProcessingException java.lang.ClassNotFoundException: org.codehaus.jackson.JsonProcessingException ``` What do you think is the problem. I'm guessing it has something to do with my `spring-servlet.xml` file. I can paste entire error log, if you need.
2012/02/07
[ "https://Stackoverflow.com/questions/9169517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1132633/" ]
For Jackson v2 jars, the class to be used for the bean should be

```
<bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
```

For older Jackson versions, `org.springframework.http.converter.json.MappingJacksonHttpMessageConverter` is fine. Make sure the jar files are added to the project library.
Just to complement anshul tiwari's answer, the bean tag should go inside `mvc:annotation-driven`:

```
<mvc:annotation-driven>
    <mvc:message-converters>
        <bean class="org.springframework.http.converter.ResourceHttpMessageConverter" />
        <!-- <bean class="org.springframework.http.converter.xml.Jaxb2RootElementHttpMessageConverter" /> -->
        <bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
        <!-- <bean class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter" /> -->
    </mvc:message-converters>
</mvc:annotation-driven>
```
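If you would rather use Java-based configuration, a rough equivalent is the sketch below (the class name `WebConfig` is made up for illustration, and it assumes Spring 3.1+ with the Jackson 2 jars on the classpath):

```java
import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    // Registering the Jackson 2 converter here plays the same role as the
    // <mvc:message-converters> block above.
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(new MappingJackson2HttpMessageConverter());
    }
}
```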
23,491,088
I have a string like ``` "<Canvas Background="#FF00FFFF" Name="Page_1" Width="1200" Height="900" ><TextBlock Name="PageTitle" /></Canvas><Canvas Background="#FF00FFFF" Name="Page_2" Width="1200" Height="900"><TextBlock Name="PageTitle" /></Canvas>" ``` I want to split this string into an array like ``` [< Canvas Background="#FF00FFFF" Name="Page_1" Width="1200" Height="900" >< TextBlock Name="PageTitle" />< /Canvas>], [< Canvas Background="#FF00FFFF" Name="Page_2" Width="1200" Height="900">< TextBlock Name="PageTitle" />< /Canvas>] ``` But when i use ``` objectsAsStrings = contents.Split(new string[] { "/Canvas><Canvas" }, StringSplitOptions.None); ``` i get the delimeter removed, what i dont want. How do i Split a string BETWEEN "/Canvas" and "< Canvas" ?
2014/05/06
[ "https://Stackoverflow.com/questions/23491088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3351652/" ]
Try this:

```
string mailstring = "<Canvas Background='#FF00FFFF' Name='Page_1' Width='1200' Height='900' ><TextBlock Name='PageTitle' /></Canvas><Canvas Background='#FF00FFFF' Name='Page_2' Width='1200' Height='900'><TextBlock Name='PageTitle' /></Canvas>";
string splitor = "</Canvas>";

string[] substrings = mailstring.Split(new string[] { splitor }, StringSplitOptions.None);
string part1 = substrings[0] + splitor;
string part2 = substrings[1] + splitor;
```
Use `Split` as you did, after that iterate over the array and insert `"<Canvas"` to the beginning of all items besides the first, append `"/Canvas>"` to the end of all items besides the last. ``` contents = "<Canvas Background=\"#FF00FFFF\" Name=\"Page_1\" Width=\"1200\" Height=\"900\" ><TextBlock Name=\"PageTitle\" /></Canvas><Canvas Background=\"#FF00FFFF\" Name=\"Page_2\" Width=\"1200\" Height=\"900\"><TextBlock Name=\"PageTitle\" /></Canvas>"; objectsAsStrings = contents.Split(new string[] { "/Canvas><Canvas" }, StringSplitOptions.None); for(var i = 0; i < objectsAsStrings.Length; i++) { if( 0 < i) objectsAsStrings[i] = "<Canvas" + objectsAsStrings[i]; if( i < objectsAsStrings.Length-1) objectsAsStrings[i] = objectsAsStrings[i] + "/Canvas>"; } ```
15,811,920
Apologies if this has been asked, but the similar questions I found weren't answering my problem exactly. I know how to take a single input and convert it into a string, and or anything hard coded. What I'm trying to do however is take multiple inputs and convert them into a single string. Example: ``` Enter a letter: h Enter a letter: e Enter a letter: l Enter a letter: l Enter a letter: o ``` ...and so on, however long it takes until the user inputs 's' for instance. ``` Enter a letter: s ``` And then the program prints all previous iterations as a single string. ``` Result: hello ``` How do I piece together a string from separate, potentially infinite inputs?
2013/04/04
[ "https://Stackoverflow.com/questions/15811920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2093399/" ]
I realize this post is old, however, this is how I coded the solution in Python 3:

```
mylist = []

while True:
    c = input("Enter a letter:")
    if c == "s":
        result = "\n".join(mylist)
        print('Items entered: \n' + result)
        break
    mylist.append(c)
```
> > How do I piece together a string from separate, potentially infinite inputs? > > > You can do this using ``` ''.join(inputs) ``` where `inputs` is any [iterable](http://docs.python.org/2/glossary.html#term-iterable) of strings.
17,070,740
I need to call a COM function in C++ that returns a reference to a `SAFEARRAY(BSTR)`. According to [this document](http://qt-project.org/doc/qt-4.8/qaxbase.html#details), it should be:

```
QAxObject *object = new QAxObject(...);
QStringList list;
for(int i=0; i<goodSize; i++)
    list << "10.0";
object->dynamicCall("Frequencies(QStringList&)", list);
for(int i=0; i<list.size(); i++)
    qDebug() << list.at(i);
```

but the list elements remain at `10.0`. Am I missing something?

**EDIT** I used Oleview.exe and actually, the function looks like this: `void Frequencies(VARIANT* FrequencyArray);`. But the documentation of the ActiveX server says: `Use a safearray of strings (VT_BSTR) or reals (VT_R8 for double or VT_R4 for float)`.
2013/06/12
[ "https://Stackoverflow.com/questions/17070740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/902025/" ]
Here is how I solved this:

```
public WebDriver driver;

// Overrides the default driver
@Override
public WebDriver getDefaultDriver() {
    System.setProperty("webdriver.chrome.driver", "C:/chromeDriver/chromedriver.exe"); // Set for ChromeDriver
    driver = new ChromeDriver();
    return driver;
}
```
Have an abstract `createDriver()` function that is overridden by each type of driver. Each driver should return a correctly configured driver, which is then stored, and returned when you call `getDefaultDriver();`
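A minimal sketch of that pattern (the class names `BaseTest`, `ChromeTest` and `FirefoxTest`, and the chromedriver path, are made up for illustration; it assumes the Selenium Java bindings are on the classpath):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Base class: stores whatever the subclass creates and hands it back on request.
public abstract class BaseTest {

    private WebDriver driver;

    // Each driver type supplies its own, fully configured instance.
    protected abstract WebDriver createDriver();

    public WebDriver getDefaultDriver() {
        if (driver == null) {
            driver = createDriver();
        }
        return driver;
    }
}

// One subclass per browser/driver type.
class ChromeTest extends BaseTest {
    @Override
    protected WebDriver createDriver() {
        System.setProperty("webdriver.chrome.driver", "C:/chromeDriver/chromedriver.exe");
        return new ChromeDriver();
    }
}

class FirefoxTest extends BaseTest {
    @Override
    protected WebDriver createDriver() {
        return new FirefoxDriver();
    }
}
```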
4,551
If $A,B$ non empty, upper bounded sets and $A+B=\{a+b\mid a\in A, b\in B\}$, how can I prove that $\sup(A+B)=\sup A+\sup B$?
2010/09/13
[ "https://math.stackexchange.com/questions/4551", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1862/" ]
Show that $\sup(A+B)$ is less than or equal to $\sup(A)+\sup(B)$ by showing that the latter is an upper bound for $A+B$. Then show that $\sup(A)+\sup(B)$ is less than or equal to $\sup(A+B)$ by showing that $\sup(A+B)$ is an upper bound for $A+\sup(B)$ and that $\sup(A+\sup(B)) = \sup(A)+\sup(B)$.
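Written out, the two halves look like this. For any $a \in A$ and $b \in B$ we have $a \le \sup(A)$ and $b \le \sup(B)$, so $$a + b \le \sup(A) + \sup(B),$$ which says exactly that $\sup(A)+\sup(B)$ is an upper bound for $A+B$, hence $\sup(A+B) \le \sup(A)+\sup(B)$. For the second half, note that for each fixed $a \in A$, $$a + \sup(B) = \sup(a + B) \le \sup(A+B),$$ so $\sup(A+B)$ is indeed an upper bound for $A + \sup(B)$, and taking the supremum over $a \in A$ gives $\sup(A) + \sup(B) = \sup(A + \sup(B)) \le \sup(A+B)$.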
Another proof for $\sup(A + B) \geq \sup A + \sup B$. --- Proof when $\sup A + \sup B$ is finite: --------------------------------------- Posit $e > 0.$ Then there exists $a \in A$ and $b \in B$ such that $a > \sup A − \frac{e}{2}$ and $b > \sup B − \frac{e}{2}$. Then $a + b \in A + B$. Ergo, $$\color{seagreen}{\sup(A + B)} \geq a + b \color{seagreen}{> \sup A + \sup B - e} \implies \color{seagreen}{ \sup(A + B) > \sup A + \sup B - e }.$$ Since $e > 0$ is arbitrary, $\sup(A + B) \geq \sup A + \sup B$ Proof when $\sup A + \sup B$ is infinite: ----------------------------------------- > > Since the sets here are nonempty, the suprema here are not equal to $-\infty$, so we're not in danger of encountering the undefined sum $-\infty +\infty$. If $\sup A + \sup B = + \infty$, then at least one of the suprema, say $\sup B$, equals $+\infty$. Select some $a\_0 \in A$. Then $$\sup(A+B) \geq \sup(a\_0 + B) = a\_0 + \sup B = + \infty,$$ so $\sup(A+B) \geq \sup(A) + \sup(B)$ holds in this case. ([Source](https://i.stack.imgur.com/Qdk8X.png)) > > >
61,546
Computing power considerations aside, are there any reasons to believe that **increasing the number of folds** in cross-validation leads to better model selection/validation (i.e. that the higher the number of folds the better)? Taking the argument to the extreme, does leave-one-out cross-validation necessarily lead to better models than $K$-fold cross-validation? Some background on this question: I am working on a problem with very few instances (e.g. 10 positives and 10 negatives), and am afraid that my models may not generalize well/would overfit with so little data.
2013/06/12
[ "https://stats.stackexchange.com/questions/61546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2798/" ]
Leave-one-out cross-validation does not generally lead to better performance than K-fold, and is more likely to be *worse*, as it has a relatively high variance (i.e. its value changes more for different samples of data than the value for k-fold cross-validation). This is bad in a model selection criterion as it means the model selection criterion can be optimised in ways that merely exploit the random variation in the particular sample of data, rather than making genuine improvements in performance, i.e. you are more likely to over-fit the model selection criterion. The reason leave-one-out cross-validation is used in practice is that for many models it can be evaluated very cheaply as a by-product of fitting the model. If computational expense is not primarily an issue, a better approach is to perform repeated k-fold cross-validation, where the k-fold cross-validation procedure is repeated with different random partitions into k disjoint subsets each time. This reduces the variance. If you have only 20 patterns, it is very likely that you will experience over-fitting the model selection criterion, which is a much neglected pitfall in statistics and machine learning (shameless plug: see my [paper](http://jmlr.org/papers/v11/cawley10a.html) on the topic). You may be better off choosing a relatively simple model and try not to optimise it very aggressively, or adopt a Bayesian approach and average over all model choices, weighted by their plausibility. IMHO optimisation is the root of all evil in statistics, so it is better not to optimise if you don't have to, and to optimise with caution whenever you do. Note also if you are going to perform model selection, you need to use something like nested cross-validation if you also need a performance estimate (i.e. you need to consider model selection as an integral part of the model fitting procedure and cross-validate that as well).
Choosing the number K folds by considering the learning curve ============================================================= I would like to argue that choosing the appropriate number of $K$ folds depends a lot on the shape and position of the learning curve, mostly due to its impact on the **bias**. This argument, which extends to leave-one-out CV, is largely taken from the book "Elements of Statistical Learning" chapter 7.10, page 243. For discussions on the impact of $K$ on the **variance** see [here](https://stats.stackexchange.com/questions/61783/variance-and-bias-in-cross-validation-why-does-leave-one-out-cv-have-higher-var/357749#357749) > > To summarize, if the learning curve has a considerable slope at the > given training set size, five- or tenfold cross-validation will > overestimate the true prediction error. Whether this bias is a > drawback in practice depends on the objective. On the other hand, > leave-one-out cross-validation has low bias but can have high > variance. > > > An intuitive visualization using a toy example ---------------------------------------------- To understand this argument visually, consider the following toy example where we are fitting a degree 4 polynomial to a noisy sine curve: [![enter image description here](https://i.stack.imgur.com/TKHXp.png)](https://i.stack.imgur.com/TKHXp.png) Intuitively and visually, we expect this model to fare poorly for small datasets due to overfitting. This behaviour is reflected in the learning curve where we plot $1 -$ Mean Square Error vs Training size together with $\pm$ 1 standard deviation. *Note that I chose to plot 1 - MSE here to reproduce the illustration used in ESL page 243* [![enter image description here](https://i.stack.imgur.com/vh27T.png)](https://i.stack.imgur.com/vh27T.png) Discussing the argument ----------------------- The performance of the model improves significantly as the training size increases to 50 observations. Increasing the number further to 200 for example brings only small benefits. Consider the following two cases: 1. If our training set had 200 observations, $5$ fold cross validation would estimate the performance over a training size of 160 which is virtually the same as the performance for training set size 200. Thus cross-validation would not suffer from much bias and increasing $K$ to larger values will not bring much benefit (*left hand plot*) 2. However if the training set had $50$ observations, $5$ fold cross-validation would estimate the performance of the model over training sets of size 40, and from the learning curve this would lead to a biased result. Hence increasing $K$ in this case will tend to reduce the bias. (*right hand plot*). [![enter image description here](https://i.stack.imgur.com/9k3Xb.png)](https://i.stack.imgur.com/9k3Xb.png) [Update] - Comments on the methodology -------------------------------------- You can find the code for this simulation [here](http://nbviewer.jupyter.org/github/xavierbourretsicotte/Data-Portfolio/blob/master/Cross_Validation.ipynb). The approach was the following: 1. Generate 50,000 points from the distribution $sin(x) + \epsilon$ where the true variance of $\epsilon$ is known 2. Iterate $i$ times (e.g. 100 or 200 times). At each iteration, change the dataset by resampling $N$ points from the original distribution 3. For each data set $i$: * Perform K-fold cross validation for one value of $K$ * Store the average Mean Square Error (MSE) across the K-folds 4. 
Once the loop over $i$ is complete, calculate the mean and standard deviation of the MSE across the $i$ datasets for the same value of $K$ 5. Repeat the above steps for all $K$ in range $\{ 5,...,N\}$ all the way to LOOCV An alternative approach is to *not resample* a new data set at each iteration and instead reshuffle the same dataset each time. This seems to give similar results.
1,284,804
Isn't the tuple different structure from $m \times 1$ or $1 \times n$ matrix? Why can you mix them?
2015/05/16
[ "https://math.stackexchange.com/questions/1284804", "https://math.stackexchange.com", "https://math.stackexchange.com/users/114222/" ]
From [Wikipedia](http://en.wikipedia.org/wiki/Tuple), "A tuple is a finite ordered list of elements." So things like $(\clubsuit, \square, 57, \clubsuit)$ are perfectly valid tuples, and we can view this particular tuple as one function $$f: [4] = \{1, 2, 3,4\} \to A,$$ where $A$ is just some set that includes $\{\square, \clubsuit, 57\}$ as a subset. We relate this tuple to $f$ by considering it as a list of outputs of our function: $\big(f(1), f(2), f(3), f(4)\big) = (\clubsuit, \square, 57, \clubsuit)$. So tuples are really the most general. Now, when we consider only functions $$f: [m] = \{1, 2, \ldots, m\} \to \Bbb F$$ that map into some field $\Bbb F$, we get very special tuples (remember: a field is a set of "numbers" in which we can add and multiply, and where all non-zero "numbers" have a multiplicative inverse, like the real numbers $\Bbb R$ or the complex numbers $\Bbb C$). If we write such tuples as $$v = \begin{bmatrix}f(1) \\ f(2) \\ \vdots \\ f(m)\end{bmatrix},$$ we have an $m \times 1$ matrix, or column vector, of the vector space $\Bbb F^m$. If, on the other hand, we write such tuples as $$v = \begin{bmatrix}f(1) & f(2) & \cdots & f(m)\end{bmatrix},$$ we have a $1 \times m$ matrix, or row vector, of the vector space $(\Bbb F^m)^\*$. If we're being pedantic, one difference between a generic tuple and row/column vectors is that vectors always belong to an algebraic structure (a vector space), while this may or may not be true for an arbitrary tuple. We can even distinguish between row and column vectors, algebraically. For example, given $$v = \begin{bmatrix}1 \\ 2\end{bmatrix} \in \Bbb R^2, \quad v^t = \begin{bmatrix}1 & 2\end{bmatrix} \in (\Bbb R^2)^\*,$$ and a $2 \times 2$ matrix such as $M = \begin{bmatrix}1 & 3 \\ -2 & 5\end{bmatrix},$ then $Mv$ makes sense while $vM$ doesn't, and $v^tM$ makes sense, while $Mv^t$ doesn't (and further, $Mv$ is generally not equal to $(v^tM)^t$. To learn more, read about Linear Algebra and vector spaces). So, "as a list of $m$ things", tuples, row vectors, and column vectors are all "the same": we get bijections between all three (and structure-preserving maps between a vector space $\Bbb F^m$ and its dual $(\Bbb F^m)^\*$). And while each fills a separate mathematical role, generally we may gloss over the details and the line gets blurry.
There is a difference because of the definition of those objects (I take $\mathbb R$ as the underlying number system):

* An $m\times 1$ matrix represents a linear function $\mathbb R \rightarrow \mathbb R^m$.
* A $1\times m$ matrix represents a linear function $\mathbb R^m \rightarrow \mathbb R$.
* An $m$-tuple represents a finite sequence of real numbers of length $m$.

But they have a key property in common:

> An $m\times 1$ matrix, a $1\times m$ matrix and an $m$-tuple can be uniquely defined by giving $m$ real numbers.

Or, better said:

> The set of all $m\times 1$ matrices, the set of all $1\times m$ matrices and the set of all $m$-tuples are vector spaces of dimension $m$.

Thus all three vector spaces are isomorphic (because their dimensions are the same). Note that isomorphic means "having the same structure". Thus you can identify an $m\times 1$ matrix with a $1\times m$ matrix and with an $m$-tuple.
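Concretely, the identification is the obvious linear bijection $$(a_1,\dots,a_m) \;\longleftrightarrow\; \begin{bmatrix}a_1 \\ \vdots \\ a_m\end{bmatrix} \;\longleftrightarrow\; \begin{bmatrix}a_1 & \cdots & a_m\end{bmatrix},$$ which matches up the standard bases of the three spaces.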
259,053
We all know that *to eat out* means “to eat away from home, esp in a restaurant” per the [Free Dictionary](http://www.thefreedictionary.com/eat+out). However, I have not heard anyone say *to drink out*. The Internet doesn’t have any information about it. Does > > *to drink out*: to drink away from home, esp. in coffee shop or pub, etc. > > > make any sense?
2015/07/13
[ "https://english.stackexchange.com/questions/259053", "https://english.stackexchange.com", "https://english.stackexchange.com/users/105551/" ]
No, such a term is not used. To convey the meaning that you describe, people say: *let's go out for a drink.* Consequently, simply saying *let's go out* has come to mean the same thing, and so *go out* is equivalent to *eat out*. My answer is a British English perspective.
We all know that the English language, just like every language, is living. If the phrase is not yet recognized in the English language but is very convenient, I guess it is high time we used it even more often. That way it will surely get the recognition it needs.
1,397,428
> > Find the number of real roots of $1+x/1!+x^2/2!+x^3/3! + \ldots + x^6/6! =0$. > > > *Attempts so far:* Used Descartes signs stuff so possible number of real roots is $6,4,2,0$ tried differentiating the equation $4$ times and got an equation with no roots hence proving that above polynomial has $4$ real roots. But using online calculators I get zero real roots. Where am I wrong?
2015/08/14
[ "https://math.stackexchange.com/questions/1397428", "https://math.stackexchange.com", "https://math.stackexchange.com/users/262120/" ]
We can compute the number of real roots using [Sturm's Theorem](https://en.wikipedia.org/wiki/Sturm%27s_theorem). $$ \begin{array}{rll} \text{Sturm Chain}&+\infty&-\infty\\\hline x^6+6x^5+30x^4+120x^3+360x^2+720x+720&+\infty&+\infty\\ 6x^5+30x^4+120x^3+360x^2+720x+720&+\infty&-\infty\\ -5x^4-40x^3-180x^2-480x-600&-\infty&-\infty\\ -48x^3-432x^2-1728x-2880&-\infty&+\infty\\ 45x^2+360x+900&+\infty&+\infty\\ 384x+1920&+\infty&-\infty\\ -225&-225&-225 \end{array} $$ There are $3$ changes of sign at $+\infty$ and $3$ changes of sign at $-\infty$. Thus, there are no real roots.
let $y = 1+x/1!+x^2/2!+x^3/3! + \cdots + x^6/6! .$ it is clear that $y \ge 1$ for all $x \ge 0.$ we will show that $y(a) > 0$ for $a < 0$ and that will prove that $y$ is never zero. pick an $a < 0.$ we have $$y' = y - x^6/6!, \space y(0) = 1.\tag 1$$ rearranging $(1)$ and multiplying by $e^{-x}$ gives $$ (ye^{-x})' = -x^6e^{-x}/6!.$$ integrating the last equation from $a$ to $0$ we get $$1-y(a)e^{-a}=-\int\_a^0 x^6e^{-x}/6!\, dx\to y(a)e^{-a} = 1+\int\_a^0 x^6e^{-x}/6!\, dx > 0$$ therefore $y(a) > 0$ and that concluded the claim that $y > 0$ for all $x.$
8,143,750
I would like to implement a basic rotation animation in iOS, where the view is continuously rotating around its center point. However, for some reason, the rotation's anchor point is always the parent view's origin, and not the rotating view's center. Therefore, the view is rotating around the upper left corner of the screen, even if I manually set the anchor point. Here's what I'm doing: ``` // Add shape layer to view CAShapeLayer *shapeLayer = [[CAShapeLayer alloc] init]; CGRect shapeRect = CGRectMake(0, 0, 100, 100); UIBezierPath *roundedRect = [UIBezierPath bezierPathWithRoundedRect:shapeRect cornerRadius:5]; shapeLayer.path = roundedRect.CGPath; shapeLayer.anchorPoint = CGPointMake(0.5, 0.5); shapeLayer.fillColor = [[UIColor redColor] CGColor]; [self.view.layer addSublayer:shapeLayer]; // Set rotation animation CATransform3D rotationTransform = CATransform3DMakeRotation(M_PI, 0, 0, 1); CABasicAnimation *rotationAnimation = [CABasicAnimation animationWithKeyPath:@"transform"]; rotationAnimation.toValue = [NSValue valueWithCATransform3D:rotationTransform]; rotationAnimation.duration = 1.0f; rotationAnimation.cumulative = YES; rotationAnimation.repeatCount = HUGE_VALF; [shapeLayer addAnimation:rotationAnimation forKey:@"transform"]; ``` Any help would be appreciated.
2011/11/15
[ "https://Stackoverflow.com/questions/8143750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560496/" ]
I also encountered the same problem. Scaling and rotation seems to be using a top-left anchorPoint even after setting it manually to (0.5, 0.5). I fixed it on my code by manually setting the layer's frame or bounds property to have the same size as your path.
Here is how you can rotate around a particular point, without using anchorPoint. I'm not sure what anchorPoint is supposed to do for a rotation animation, but it doesn't seem to be doing what you want. "First" translate so your center of rotation is at the origin. So apply a translation left and up. Then rotate. Then translate back to the desired position. So apply a translation right and down. You do those simultaneously by concatenating the matrices. They go in reverse order. From left to right: the right/down translation, the rotation, the left/up translation.
566,724
I've a fresh installation of Ubuntu 14.10 desktop. I downloaded it yesterday. It has been installed correctly. I've added a repository and now when I enter `sudo apt-get update` it gives following errors: ``` Ign cdrom://Ubuntu 14.10 _Utopic Unicorn_ - Release i386 (20141022.1) utopic InRelease Ign cdrom://Ubuntu 14.10 _Utopic Unicorn_ - Release i386 (20141022.1) utopic/main Translation-en_US Ign cdrom://Ubuntu 14.10 _Utopic Unicorn_ - Release i386 (20141022.1) utopic/main Translation-en Ign cdrom://Ubuntu 14.10 _Utopic Unicorn_ - Release i386 (20141022.1) utopic/restricted Translation-en_US Ign cdrom://Ubuntu 14.10 _Utopic Unicorn_ - Release i386 (20141022.1) utopic/restricted Translation-en Err http://pk.archive.ubuntu.com|security.ubuntu.com utopic InRelease Err http://pk.archive.ubuntu.com|security.ubuntu.com utopic-updates InRelease Err http://pk.archive.ubuntu.com|security.ubuntu.com utopic-backports InRelease Err http://pk.archive.ubuntu.com|security.ubuntu.com utopic Release.gpg Could not resolve 'pk.archive.ubuntu.com|security.ubuntu.com' Err http://pk.archive.ubuntu.com|security.ubuntu.com utopic-updates Release.gpg Could not resolve 'pk.archive.ubuntu.com|security.ubuntu.com' Err http://pk.archive.ubuntu.com|security.ubuntu.com utopic-backports Release.gpg Could not resolve 'pk.archive.ubuntu.com|security.ubuntu.com' Ign http://archive.canonical.com utopic InRelease Ign http://ppa.launchpad.net utopic InRelease Ign http://extras.ubuntu.com utopic InRelease Hit http://archive.canonical.com utopic Release.gpg Ign http://ppa.launchpad.net utopic Release.gpg Hit http://extras.ubuntu.com utopic Release.gpg Ign http://us.archive.ubuntu.com utopic-security InRelease Hit http://archive.canonical.com utopic Release Ign http://ppa.launchpad.net utopic Release Hit http://extras.ubuntu.com utopic Release Ign http://us.archive.ubuntu.com utopic InRelease Hit http://extras.ubuntu.com utopic/main Sources Ign http://us.archive.ubuntu.com utopic-backports InRelease Hit http://extras.ubuntu.com utopic/main i386 Packages Ign http://us.archive.ubuntu.com utopic-proposed InRelease Hit http://archive.canonical.com utopic/partner Sources Ign http://us.archive.ubuntu.com utopic-updates InRelease Hit http://archive.canonical.com utopic/partner i386 Packages Hit http://us.archive.ubuntu.com utopic-security Release.gpg Hit http://archive.canonical.com utopic/partner Translation-en Hit http://us.archive.ubuntu.com utopic Release.gpg Hit http://us.archive.ubuntu.com utopic-backports Release.gpg Hit http://us.archive.ubuntu.com utopic-proposed Release.gpg Hit http://us.archive.ubuntu.com utopic-updates Release.gpg Ign http://extras.ubuntu.com utopic/main Translation-en_US Ign http://extras.ubuntu.com utopic/main Translation-en Hit http://us.archive.ubuntu.com utopic-security Release Hit http://us.archive.ubuntu.com utopic Release Hit http://us.archive.ubuntu.com utopic-backports Release Hit http://us.archive.ubuntu.com utopic-proposed Release Err http://ppa.launchpad.net utopic/main Sources 404 Not Found Hit http://us.archive.ubuntu.com utopic-updates Release Err http://ppa.launchpad.net utopic/main i386 Packages 404 Not Found Ign http://ppa.launchpad.net utopic/main Translation-en_US Ign http://ppa.launchpad.net utopic/main Translation-en Hit http://us.archive.ubuntu.com utopic-security/main Sources Hit http://us.archive.ubuntu.com utopic-security/restricted Sources Hit http://us.archive.ubuntu.com utopic-security/universe Sources Hit http://us.archive.ubuntu.com utopic-security/multiverse Sources Hit 
http://us.archive.ubuntu.com utopic-security/main i386 Packages Hit http://us.archive.ubuntu.com utopic-security/restricted i386 Packages Hit http://us.archive.ubuntu.com utopic-security/universe i386 Packages Hit http://us.archive.ubuntu.com utopic-security/multiverse i386 Packages Hit http://us.archive.ubuntu.com utopic-security/main Translation-en Hit http://us.archive.ubuntu.com utopic-security/multiverse Translation-en Hit http://us.archive.ubuntu.com utopic-security/restricted Translation-en Hit http://us.archive.ubuntu.com utopic-security/universe Translation-en Hit http://us.archive.ubuntu.com utopic/main i386 Packages Hit http://us.archive.ubuntu.com utopic/universe i386 Packages Hit http://us.archive.ubuntu.com utopic/restricted i386 Packages Hit http://us.archive.ubuntu.com utopic/multiverse i386 Packages Hit http://us.archive.ubuntu.com utopic/main Translation-en Hit http://us.archive.ubuntu.com utopic/multiverse Translation-en Hit http://us.archive.ubuntu.com utopic/restricted Translation-en Hit http://us.archive.ubuntu.com utopic/universe Translation-en Hit http://us.archive.ubuntu.com utopic-backports/main i386 Packages Hit http://us.archive.ubuntu.com utopic-backports/universe i386 Packages Hit http://us.archive.ubuntu.com utopic-backports/multiverse i386 Packages Hit http://us.archive.ubuntu.com utopic-backports/restricted i386 Packages Hit http://us.archive.ubuntu.com utopic-backports/main Translation-en Hit http://us.archive.ubuntu.com utopic-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com utopic-backports/restricted Translation-en Hit http://us.archive.ubuntu.com utopic-backports/universe Translation-en Hit http://us.archive.ubuntu.com utopic-proposed/main i386 Packages Hit http://us.archive.ubuntu.com utopic-proposed/universe i386 Packages Hit http://us.archive.ubuntu.com utopic-proposed/multiverse i386 Packages Hit http://us.archive.ubuntu.com utopic-proposed/restricted i386 Packages Hit http://us.archive.ubuntu.com utopic-proposed/main Translation-en Hit http://us.archive.ubuntu.com utopic-proposed/multiverse Translation-en Hit http://us.archive.ubuntu.com utopic-proposed/restricted Translation-en Hit http://us.archive.ubuntu.com utopic-proposed/universe Translation-en Hit http://us.archive.ubuntu.com utopic-updates/main i386 Packages Hit http://us.archive.ubuntu.com utopic-updates/universe i386 Packages Hit http://us.archive.ubuntu.com utopic-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com utopic-updates/restricted i386 Packages Hit http://us.archive.ubuntu.com utopic-updates/main Translation-en Hit http://us.archive.ubuntu.com utopic-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com utopic-updates/restricted Translation-en Hit http://us.archive.ubuntu.com utopic-updates/universe Translation-en Reading package lists... 
Done W: Failed to fetch http://pk.archive.ubuntu.com|security.ubuntu.com/ubuntu/dists/utopic/InRelease W: Failed to fetch http://pk.archive.ubuntu.com|security.ubuntu.com/ubuntu/dists/utopic-updates/InRelease W: Failed to fetch http://pk.archive.ubuntu.com|security.ubuntu.com/ubuntu/dists/utopic-backports/InRelease W: Failed to fetch http://pk.archive.ubuntu.com|security.ubuntu.com/ubuntu/dists/utopic/Release.gpg Could not resolve 'pk.archive.ubuntu.com|security.ubuntu.com' W: Failed to fetch http://pk.archive.ubuntu.com|security.ubuntu.com/ubuntu/dists/utopic-updates/Release.gpg Could not resolve 'pk.archive.ubuntu.com|security.ubuntu.com' W: Failed to fetch http://pk.archive.ubuntu.com|security.ubuntu.com/ubuntu/dists/utopic-backports/Release.gpg Could not resolve 'pk.archive.ubuntu.com|security.ubuntu.com' W: Failed to fetch http://ppa.launchpad.net/plushuang-tw/uget-stable/ubuntu/dists/utopic/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/plushuang-tw/uget-stable/ubuntu/dists/utopic/main/binary-i386/Packages 404 Not Found W: Some index files failed to download. They have been ignored, or old ones used instead. ``` I've read a lot of questions and tried all of them one by one. But its still causing the problem. I've checked all of the options in **Software and Updates** and also tried different servers. It's causing the same problem
2014/12/29
[ "https://askubuntu.com/questions/566724", "https://askubuntu.com", "https://askubuntu.com/users/300619/" ]
I'm not sure if there should be a pipe character `|` in the middle of your ubuntu server name... that's not just how the error message looks is it? Try checking your software repositories, try different mirrors perhaps. This help page should help <https://help.ubuntu.com/community/Repositories/Ubuntu> And I tried connecting to <http://pk.archive.ubuntu.com> and it just hangs, maybe it's just down.
You are most likely behind a physical/network firewall or some other network security device on which an HTTP/HTTPS proxy or certain website categories are blocked, or, less likely, ports 80/443/8080 are blocked.
43,432,388
I am trying to add a variable into a URL, but the server gives me a 400 and I don't receive data:

```
var x = "City"
$.getJSON('http://api.apixu.com/v1/current.json?key=' + x, function(jd) {
```
2017/04/16
[ "https://Stackoverflow.com/questions/43432388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6637384/" ]
You need to get the value in the text field inside the click handler; otherwise, you're just going to have the same value every time the handler runs: ``` var clickBot = document.querySelector("#bot") clickBot.addEventListener("click",function(){ var textValue = document.querySelector("#myText").value; alert(textValue); }) ```
The variable `textValue` is already initialized as `undefined` outside the event listener. You need to reassign that value every time that button is clicked. I would suggest using `textValue` to store the selector and then using `textValue.value` in the alert. ``` var textValue = document.querySelector("#myText"); var clickBot = document.querySelector("#bot"); clickBot.addEventListener("click", function(){ alert(textValue.value); }); ```
59,275,009
I'm using Android Navigation Component with bottom navigation, lint gives a warning about replacing the `<fragment>` tag with `<FragmentContainerView>` but when i replaced, `findNavController` is not working it gives me error about it does not have a NavController set on Fragment ``` <androidx.fragment.app.FragmentContainerView android:id="@+id/nav_host_fragment" android:name="androidx.navigation.fragment.NavHostFragment" android:layout_width="match_parent" android:layout_height="match_parent" app:defaultNavHost="true" app:navGraph="@navigation/mobile_navigation" /> ``` Activity ``` val navController = findNavController(R.id.nav_host_fragment) val appBarConfiguration = AppBarConfiguration( setOf( R.id.navigation_classes, R.id.navigation_schedule, R.id.navigation_settings ) ) setupActionBarWithNavController(navController, appBarConfiguration) navView.setupWithNavController(navController) } ```
2019/12/10
[ "https://Stackoverflow.com/questions/59275009", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7160898/" ]
Replace this line: ```java NavController navController = Navigation.findNavController(this, R.id.nav_host_fragment); ``` with ```java NavController navController = getNavController(); ``` Where `getNavController()` looks like this: ```java // workaround for https://issuetracker.google.com/issues/142847973 @NonNull private NavController getNavController() { Fragment fragment = getSupportFragmentManager().findFragmentById(R.id.nav_host_fragment); if (!(fragment instanceof NavHostFragment)) { throw new IllegalStateException("Activity " + this + " does not have a NavHostFragment"); } return ((NavHostFragment) fragment).getNavController(); } ```
Here is the simple answer:

```
val navHostFragment = supportFragmentManager.findFragmentById(R.id.fragmentContainerView) as NavHostFragment
val navController = navHostFragment.navController
menus.setupWithNavController(navController)
```
44,603,418
I'm trying to make a simple app for some kids. I want to show a photo and to link that photo to a sound. When the kid click on "Play Sound" it should play an audio that represents the image. so far I've done only showing the images, that I've put in Assets.xcassets, but my question is.. How can I link those photos to some audio files and play them by clicking on "Play Sound" Here is my code in ViewController.swift : ``` @IBOutlet weak var viewImage: UIImageView! var images: [UIImage] = [ UIImage(named: "photo1.png")!, UIImage(named: "photo2.png")!, UIImage(named: "photo3.png")!, UIImage(named: "photo4.png")!, UIImage(named: "photo5.png")!, UIImage(named: "photo6.png")!, UIImage(named: "photo7.png")!, UIImage(named: "photo8.png")! ] var currentImagesIndex = 0 @IBAction func nextImage(_ sender: Any) { currentImagesIndex += 1 let numberOfImages = images.count let nextImagesIndex = currentImagesIndex % numberOfImages viewImage.image = images[nextImagesIndex] } ``` So far, when I click on Next Photo it shows me a new photo, and it's working ok. All I want to do for now, it's to link the "photo1-8" to an audio and play it. I'll appreciate any kind of help, Radu
2017/06/17
[ "https://Stackoverflow.com/questions/44603418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8024141/" ]
Please comment out this line:

```
compile 'com.daimajia.swipelayout:library:1.2.0@aar'
```

Try rebuilding again. Since it's a third-party library, there is a chance that it might be causing the trouble.
It means you are using different versions of the support and other Android libraries. For example, you are using support library version 25.0.1 in one module and version 22.2.1 in another, and of course you should add just one of these libraries to your dependencies; when you add a library in one module, you should not add it to the build.gradle file of the second. --- Solution: remove duplicate libraries.
31,643,036
When I try to call install\_github, I get the following error (not just for this package, but for all github packages): ``` > install_github('ramnathv/slidify') Downloading github repo ramnathv/slidify@master Error in curl::curl_fetch_memory(url, handle = handle) : Problem with the SSL CA cert (path? access rights?) ``` But if I use RCurl directly to access github with ssl, I don't get any problem: ``` > x <- getBinaryURL( url='https://github.com/ramnathv/slidify/archive/master.zip', followlocation=1L ) ``` works with no errors, so RCurl can verify the SSL certificate properly and download the archive file. ``` > sessionInfo() R version 3.2.1 (2015-06-18) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Debian GNU/Linux 8 (jessie) locale: [1] LC_CTYPE=en_US.utf8 LC_NUMERIC=C [3] LC_TIME=en_US.utf8 LC_COLLATE=en_US.utf8 [5] LC_MONETARY=en_US.utf8 LC_MESSAGES=en_US.utf8 [7] LC_PAPER=en_US.utf8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_US.utf8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] RCurl_1.95-4.7 bitops_1.0-6 devtools_1.8.0 loaded via a namespace (and not attached): [1] httr_1.0.0 R6_2.1.0 magrittr_1.5 rversions_1.0.2 [5] tools_3.2.1 curl_0.9.1 Rcpp_0.12.0 memoise_0.2.1 [9] xml2_0.1.1 stringi_0.5-5 git2r_0.10.1 stringr_1.0.0 [13] digest_0.6.8 ``` And ``` > curlVersion() $age [1] 3 $version [1] "7.38.0" $vesion_num [1] 468480 $host [1] "x86_64-pc-linux-gnu" $features ipv6 ssl libz ntlm asynchdns spnego 1 4 8 16 128 256 largefile idn tlsauth_srp ntlm_wb 512 1024 16384 32768 $ssl_version [1] "OpenSSL/1.0.1k" $ssl_version_num [1] 0 $libz_version [1] "1.2.8" $protocols [1] "dict" "file" "ftp" "ftps" "gopher" "http" "https" "imap" [9] "imaps" "ldap" "ldaps" "pop3" "pop3s" "rtmp" "rtsp" "scp" [17] "sftp" "smtp" "smtps" "telnet" "tftp" $ares [1] "" $ares_num [1] 0 $libidn [1] "1.29" ``` If I use `httr::set_config( httr::config( ssl_verifypeer = 0L ) )` then I can successfully run `install_github` but I would prefer to actually check ssl certificates. Can anyone offer a solution?
2015/07/26
[ "https://Stackoverflow.com/questions/31643036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2727570/" ]
If the `httr` library is missing, the same error appears. Try:

> install.packages("httr")
I was getting the same error with swirl while submitting assignments for Coursera. Uninstalling and reinstalling swirl itself, curl and httr (which was also missing) did not address the issue. The only thing that actually worked was: install.packages("openssl") Also using Windows 10 and R 3.3.3.
17,790,397
I have some radiobuttons in an app that works with touch. Because the end-user can have thick fingers, I want to make the circle and text in the radiobutton bigger. Problem is, I can only make the text bigger, not the circle in the radiobutton.

```
<RadioButton VerticalAlignment="Center" x:Name="rbtnContainers" Click="SetContainers" FontSize="18">Containers</RadioButton>
```

Using height doesn't work either. It makes the radiobutton bigger, but the circle remains the same. Any hint or answer is appreciated.
2013/07/22
[ "https://Stackoverflow.com/questions/17790397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2515525/" ]
This should work for you. ``` <Viewbox Height="40"> <RadioButton></RadioButton> </Viewbox> ``` another alternative is to write your own ControlTemplate for the RadioButton and change its appearance as you want.
To resize only the circle, one can use `RadioButton` template and change `Width` and `Height` of `BulletChrome`. ``` <ControlTemplate TargetType="RadioButton" x:Key="CustomRadioButtonStyle" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:s="clr-namespace:System;assembly=mscorlib" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mwt="clr-namespace:Microsoft.Windows.Themes;assembly=PresentationFramework.Aero"> <BulletDecorator Background="#00FFFFFF"> <BulletDecorator.Bullet> <mwt:BulletChrome Height="25" Width="25" Background="{TemplateBinding Panel.Background}" BorderBrush="{TemplateBinding Border.BorderBrush}" RenderMouseOver="{TemplateBinding UIElement.IsMouseOver}" RenderPressed="{TemplateBinding ButtonBase.IsPressed}" IsChecked="{TemplateBinding ToggleButton.IsChecked}" IsRound="True" /> </BulletDecorator.Bullet> <ContentPresenter RecognizesAccessKey="True" Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}" ContentStringFormat="{TemplateBinding ContentControl.ContentStringFormat}" Margin="{TemplateBinding Control.Padding}" HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}" /> </BulletDecorator> <ControlTemplate.Triggers> <Trigger Property="ContentControl.HasContent"> <Setter Property="FrameworkElement.FocusVisualStyle"> <Setter.Value> <Style TargetType="IFrameworkInputElement"> <Style.Resources> <ResourceDictionary /> </Style.Resources> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate> <Rectangle Stroke="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}" StrokeThickness="1" StrokeDashArray="1 2" Margin="14,0,0,0" SnapsToDevicePixels="True" /> </ControlTemplate> </Setter.Value> </Setter> </Style> </Setter.Value> </Setter> <Setter Property="Control.Padding"> <Setter.Value> <Thickness>4,0,0,0</Thickness> </Setter.Value> </Setter> <Trigger.Value> <s:Boolean>True</s:Boolean> </Trigger.Value> </Trigger> <Trigger Property="UIElement.IsEnabled"> <Setter Property="TextElement.Foreground"> <Setter.Value> <DynamicResource ResourceKey="{x:Static SystemColors.GrayTextBrushKey}" /> </Setter.Value> </Setter> <Trigger.Value> <s:Boolean>False</s:Boolean> </Trigger.Value> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> ```
13,236,719
I'm trying to compare the days of two NSDate objects. At first I used NSDateComponents to calculate the difference between my two date objects, but it is not doing what I need.

```
NSString *dateStr = @"2012-11-05 15:00:00";
NSDate *date = [dateFormat dateFromString:dateStr];
NSDateComponents *datesDiff = [calendar components: NSDayCalendarUnit fromDate: date toDate: [NSDate date] options: 0];
```

That does not resolve my problem. Indeed, when date1 = 2012-11-05 23:00:00 and date2 = 2012-11-06 10:00:00, the difference between the two dates is 0 days and 11 hours. I am looking for something that allows me to detect day changes; in other words, when date1 = 2012-11-05 23:00:00 and date2 = 2012-11-06 00:01:00, there's a one-day difference. If someone knows the solution for that, I would welcome their suggestion. Thanks in advance
2012/11/05
[ "https://Stackoverflow.com/questions/13236719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1419681/" ]
I think you are looking for this (from the [Date and Time Programming Guide](https://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/DatesAndTimes/Articles/dtCalendricalCalculations.html)): > > Listing 13 Days between two dates, as the number of midnights between > > > ``` @implementation NSCalendar (MySpecialCalculations) -(NSInteger)daysWithinEraFromDate:(NSDate *) startDate toDate:(NSDate *) endDate { NSInteger startDay=[self ordinalityOfUnit:NSDayCalendarUnit inUnit: NSEraCalendarUnit forDate:startDate]; NSInteger endDay=[self ordinalityOfUnit:NSDayCalendarUnit inUnit: NSEraCalendarUnit forDate:endDate]; return endDay-startDay; } @end ``` **EDIT: Swift version:** ``` extension NSCalendar { func daysWithinEraFromDate(startDate: NSDate, endDate: NSDate) -> Int { let startDay = self.ordinalityOfUnit(.Day, inUnit: NSCalendarUnit.Era, forDate: startDate) let endDay = self.ordinalityOfUnit(.Day, inUnit: NSCalendarUnit.Era, forDate: endDate) return endDay - startDay } } ```
``` func daysBetweenDates(startDate: NSDate, endDate: NSDate) -> Int { let calendar = NSCalendar.currentCalendar() let components = calendar.components([.Day], fromDate: startDate, toDate: endDate, options: []) return components.day } ``` or ``` let calendar = NSCalendar.currentCalendar(); let component1 = calendar.component(.Day, fromDate: fromDate) let component2 = calendar.component(.Day, fromDate: toDate) let difference = component1 - component2 ```
69,762,012
I'm using OjbectBox for the first time in my Flutter project. After I wrote the basic code to read and right user data, I'm getting this error: ``` [!] CocoaPods could not find compatible versions for pod "ObjectBox": In Podfile: objectbox_flutter_libs (from `.symlinks/plugins/objectbox_flutter_libs/ios`) was resolved to 0.0.1, which depends on ObjectBox (= 1.6.0) objectbox_sync_flutter_libs (from `.symlinks/plugins/objectbox_sync_flutter_libs/ios`) was resolved to 0.0.1, which depends on ObjectBox (= 1.6.0-sync) ``` I did what's recommended in [this answer](https://stackoverflow.com/questions/64443888/flutter-cocoapodss-specs-repository-is-too-out-of-date-to-satisfy-dependencies#:%7E:text=Go%20to%20/ios,application%3A%20flutter%20run) (first one) but I'm getting the same error when I try to update with `pod install --repo-update`. Any idea how to solve this?
2021/10/28
[ "https://Stackoverflow.com/questions/69762012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9276925/" ]
This happened for me when I added `androidTestImplementation "androidx.compose.ui:ui-test-junit4:$version_compose"`. Referring to the advice in [this posting](https://stackoverflow.com/a/29682960/1229936), I decided to use [`pickFirst`](https://developer.android.com/reference/tools/gradle-api/4.1/com/android/build/api/dsl/PackagingOptions#pickfirsts) as opposed to [`exclude`](https://developer.android.com/reference/tools/gradle-api/4.1/com/android/build/api/dsl/PackagingOptions#excludes). According to the [PackagingOptions documentation](https://developer.android.com/reference/tools/gradle-api/4.1/com/android/build/api/dsl/PackagingOptions), [`pickFirst`](https://developer.android.com/reference/tools/gradle-api/4.1/com/android/build/api/dsl/PackagingOptions#pickfirsts) will allow for the first occurrence of the file to be packaged with the APK, whereas [`exclude`](https://developer.android.com/reference/tools/gradle-api/4.1/com/android/build/api/dsl/PackagingOptions#excludes) would exclude all occurrences of the file. This ended up working for me: ``` android { packagingOptions { pickFirst 'META-INF/AL2.0' pickFirst 'META-INF/LGPL2.1' } } ```
So `exclude` and `pickFirst` are already deprecated in the Kotlin DSL. Put the following in your `build.gradle.kts`:

```
android {
    packagingOptions {
        resources.excludes.apply {
            add("META-INF/LICENSE")
            add("META-INF/*.properties")
            add("META-INF/AL2.0")
            add("META-INF/LGPL2.1")
        }
    }
}
```
65,870,844
I have a react component that renders the answers of a contact form and presents them in the page. eg: ``` const Quote = ({fields}) => { return ( <> <div> Business Type: {fields.businesstype.value} </div> .... </> ) } ``` I need to send the rendered data to the php backend in order to send them as an email. The question: Can I use the same component to get the rendered data (in a variable) and send them to backend (with axios) Thank you for your time.
2021/01/24
[ "https://Stackoverflow.com/questions/65870844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8091607/" ]
AnimatedSwitcher might help here. It animates between widgets automatically whenever the child changes. In your case, inside stream builder, wrap your widget with animated switcher. Check this for more information. [Animated Switcher](https://api.flutter.dev/flutter/widgets/AnimatedSwitcher-class.html) There is a good video explanation of how to use it in the page. For more info, refer to the documentation.
Whenever anything updates, StreamBuilder rebuilds the widget automatically. Internally, StreamBuilder calls `State.setState`. So, in my opinion, if you want to add an animation when the item is deleted, you have to add it in the gesture detection. In your code snippet, you have added a GestureDetector that will remove the item on double tap. So, you can do one thing: when the user double taps, play the animation for some seconds and then call the function to delete the item. After that, StreamBuilder will rebuild itself and display the updated content. Hope this will solve your use case. If you have any further problems, please comment below.
36,656
I have had a theory for a long time that it is the blood in the deer which causes the gamey flavor. Hunters gut the deer soon after a kill, but they don't bleed it or chill it for hours or days. It takes time to haul it from the woods, then drive home and wait until the next day before visiting the processor. It seems that the blood would be the first part of the animal to spoil. > > It is my opinion that the bad reputation of venison’s “gamey” flavor comes from poor processing habits and the serving of meat that is actually rancid or at least borderline. > > > The key to fresh tasting meat it to get it cool and skinned as fast as possible. Leaving it hot or leaving the hide on will cause it to rot quickly and leave your meat tasting quite “pungent” (i.e.: rotten). This is important whether you plan to butcher the animal yourself or are taking it to a pro. If you leave the hide on longer than necessary or don’t cool the meat quickly, it will have a bad flavor. > > > <http://lazyhomesteader.com/2012/08/21/the-gamey-taste-of-game-meat-part-ii/> Is it true that the gamey taste is caused by spoiled blood in the meat?
2013/09/10
[ "https://cooking.stackexchange.com/questions/36656", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/20120/" ]
According to the [University of Minnesota Extension](http://www1.extension.umn.edu/food/food-safety/preserving/meat-fish/gamey-flavor-and-cooking-venison/) (emphasis added): > > ### What causes the wild or gamey taste in venison? > > > Venison refers to the > meat of antlered animals such as deer, moose, elk and caribou. The > 'wild' flavor of venison is directly related to what the animal eats. > Corn fed deer will have a milder flavor than those that eat acorns or > sage. The 'gamey' flavor is more noticeable in the fat. Removing the > fat, connective tissue, silver skin, bone and hair during processing > lessens the 'gamey' taste. *However, undesirable strong flavors are due > to inadequate bleeding, delay in field dressing or failure to cool the > carcass promptly.* > > > So while some gaminess is simply due to the diet of the wild animals, improper dressing or treatment can be a contributing factor.
As a lifelong hunter I must comment on the debate on what causes the gamey taste in venison. It is actually the blood of the animal, if not soaked out properly, that gives venison the gamey taste. I learned from my Mom, as well as generations of hunters before me, that soaking the meat for a few days in ice water only makes for the best tasting venison. Also I need to point out that when deer are in the rut, it is the musk of the male deer that causes the strong odor in the meat. The female deer do not produce this musk and are therefore tastier and require less soaking time to remove the blood from the cuts of meat. When soaking the meat, look for a pinkish to white color of the meat that indicates the meat has purged the blood. Happy hunting!!
22,827,278
Based on [Efficiency of the search algorithm of KMP](http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm#Efficiency_of_the_search_algorithm),I really don't understand why the loop can execute at most 2n times. The following is the pseudocode on wiki ``` algorithm kmp_search: input: an array of characters, S (the text to be searched) an array of characters, W (the word sought) output: an integer (the zero-based position in S at which W is found) define variables: an integer, m ← 0 (the beginning of the current match in S) an integer, i ← 0 (the position of the current character in W) an array of integers, T (the table, computed elsewhere) while m + i < length(S) do if W[i] = S[m + i] then if i = length(W) - 1 then return m let i ← i + 1 else let m ← m + i - T[i] if T[i] > -1 then let i ← T[i] else let i ← 0 (if we reach here, we have searched all of S unsuccessfully) return the length of S ``` I think the while loop executes at most n times, not 2n times. There are two branches in the loop. The first branch increase i but do not increase m. The second branch adds i-T[i] to m and i>T[i], so m will be increased. Thus m+i always increase in the while loop. I think the total time in the loop is at most n, why 2n times?
2014/04/03
[ "https://Stackoverflow.com/questions/22827278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2594180/" ]
`.` is used when accessing members of a struct through an object. `->` is used when accessing members of a struct through a pointer. So, `ptr->a = something;` is correct. Also, you `free` what you allocated. So, instead of `free(ptr->a);` use `free(ptr);`, since you allocated `ptr`, not `ptr->a`.
you have to use -> for both a and b ``` ptr->a = something; ptr->b = 44; ``` To free you can use ``` free (ptr); /* since ptr is what we got from malloc() */ ```
39,749,333
I am using a ViewPager and have 3 Fragments in it. I know that when the ViewPager loads for the first time it loads all the fragments by default , minimum is 3 fragments and viewpager executes all the lifecycle methods of fragments. **Problems I noticed :** * Whenever I swipe the viewpager the selected fragment doesn't call any of its lifecycle methods again. * Hence I cannot access the global variables directly. e.g: if I have a preference initialized in OnCreateView() I am getting NPE when I try to access it from activity by initializing the instance of that fragment and calling a method in that fragment. * Also not even onResume is getting called for any fragment after first loading. **What I want to Know :** * How can I access the views and preferences after I have already initialized them in onCreateView() ? * But for the button which is initialized in onCreateView(), on click of it calls the web-services and works perfectly as I want .How ? *I have stuck with these issues from the past 3 days and googled a lot but not found my answers.* Any Help will be appreciated. **Fragment Code :** ``` @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View v = inflater.inflate(R.layout.activity_dth, container, false); preferences= PreferenceManager.getDefaultSharedPreferences(getActivity()); editor = preferences.edit(); editAmt = (EditText) v.findViewById(R.id.editAmt); browsplan = (Button) v.findViewById(R.id.browsplan); Log.e("DTH onCreateView ",""+page); String fontpath = "fonts/OpenSans-Regular.ttf"; Typeface tf = Typeface.createFromAsset(getActivity().getAssets(), fontpath); editCustomerid.setTypeface(tf); editAmt.setTypeface(tf); token=preferences.getString("Token","-1"); mobileNo=preferences.getString("userMobileNo","0"); browsplan.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { //new DTHOperator(getActivity()).execute(); HERE IT WORKS PROPERLY } }); return v; } @Override public void setUserVisibleHint(boolean isVisibleToUser) { super.setUserVisibleHint(isVisibleToUser); if(isVisibleToUser && isResumed()){ new DTHOperator(getActivity()).execute(); Toast.makeText(getActivity(),"setUserVisibleHint DTH " ,Toast.LENGTH_SHORT).show(); } } public class DTHOperator extends AsyncTask<Context, Integer, String> { Context ctx; DTHOperator(Context ctx) { this.ctx = ctx; } @Override protected String doInBackground(Context... 
params) { List<NameValuePair> telecomdata = new ArrayList<NameValuePair>(); String result; Log.d("Register Activity Token", /*preferences.getString("Token", "")*/token); telecomdata.add(new BasicNameValuePair("mobNo", /*preferences.getString("userMobileNo", "")*/mobileNo)); telecomdata.add(new BasicNameValuePair("requestDate", Utilities.getApiCallTimeFormat())); telecomdata.add(new BasicNameValuePair("reqFrom", "APP")); Log.v("PostParameters", "telecomdata" + telecomdata); if (Connectivity.checkNetwork(ctx)) { result = rechargeUrl.queryRESTurlONline(ctx, "/GetDTHOperatorsService", "post", telecomdata, GenericConstants.PiPay_root); } else { result = GenericConstants.NETWORKNOTFOUND; displayToast("The Internet Connection appears to be offline"); } return result; } @Override protected void onPreExecute() { super.onPreExecute(); MyProgress.show(ctx, "", ""); } @Override protected void onPostExecute(String result) { super.onPostExecute(result); try { if (result != null) { if (result.equalsIgnoreCase(GenericConstants.NETWORKNOTFOUND)) { MyProgress.CancelDialog(); Toast.makeText(ctx,"The Internet Connection appears to be offline",Toast.LENGTH_SHORT).show(); return; } String[] resArr = result.split("delimiter_"); if (!resArr[0].equals("500")) { MyProgress.CancelDialog(); Log.d("DTHOperator Response ::", result); JSONArray jsonArray = null; JSONObject object = new JSONObject(result); jsonArray = object.getJSONArray("data"); for (int i = 0; i < jsonArray.length(); i++) { if (jsonArray.getJSONObject(i).has("opName")) { operator.add(jsonArray.getJSONObject(i).getString("opName")); Log.d("", " Question size " + operator.size()); } } if(MyProgress.isShowingProgress()) MyProgress.CancelDialog(); editAmt.setText("2000"); // NPE HERE Also setting the value } else { MyProgress.CancelDialog(); displayToast("Failure"); } } } catch (Exception e) { e.printStackTrace(); MyProgress.CancelDialog(); }finally { if(MyProgress.isShowingProgress()) MyProgress.CancelDialog(); } } ```
2016/09/28
[ "https://Stackoverflow.com/questions/39749333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5201270/" ]
Use the following override method:

```
@Override
public void setUserVisibleHint(boolean isVisibleToUser) {
    super.setUserVisibleHint(isVisibleToUser);
    if (isVisibleToUser && isResumed()) {
    }
}
```

This gives you a place to run anything when the user is currently viewing the fragment. From the documentation of `setUserVisibleHint`:

> Set a hint to the system about whether this fragment's UI is currently visible to the user. This hint defaults to true and is persistent across fragment instance state save and restore.
>
> An app may set this to false to indicate that the fragment's UI is scrolled out of visibility or is otherwise not directly visible to the user. This may be used by the system to prioritize operations such as fragment lifecycle updates or loader ordering behavior.
>
> @param isVisibleToUser true if this fragment's UI is currently visible to the user (default), false if it is not.
This worked for me in my last project. I like this approach too because I don't have to do anything weird. Essentially, in my adapter for the viewpager I added a method to keep track of the fragment tags & the associated position. ``` public class TabAdapter extends FragmentPagerAdapter { private Map<Integer, String> tagMap = new HashMap<>(2); // matches tab count public TabAdapter(FragmentManager fMgr) { super(fMgr); } // !!!!!!!!!!!!!!!!!!!!!!!!!!! // Capturing the tag that is auto-generated @Override public Object instantiateItem(ViewGroup container, int position) { Fragment frag = (Fragment) super.instantiateItem(container, position); tagMap.put(position, frag.getTag()); return frag; } /** * Return the Fragment associated with a specified position. * * @param position */ @Override public Fragment getItem(int position) { Fragment frag = null; ... return frag; } /** * Return the number of views available. */ @Override public int getCount() { return 2; } @Override public CharSequence getPageTitle(int position) { String title = null; ... return title; } // !!!!!!!!!!!!!!!!!!!!!!!!!!! public String getFragmentTag(int position) { return tagMap.get(position); } } // end of class TabAdapter ``` Then access within my activity (event methods) as: ``` String fragTag = tabAdapter.getFragmentTag(tabPosition); Fragment frag = getSupportFragmentManager().findFragmentByTag(fragTag); ``` Of course this means you have to keep a reference of the adapter in your containing activity. From here my fragments will have simple callbacks and explicit methods to update whatever views I need to update.
67,050,362
I'm going back through Python Crash Course 2nd Edition for about the third time to cement my knowledge, & I've run into something interesting that I haven't actually accounted for in previous runs.

When printing a list's length with the len() function, I realized I am not exactly sure how to do this on a new line in this situation, so I added a separate print function above it that begins a new line itself. This causes the next line to be pushed down yet *another* line; I'm assuming the line that is supposed to have my length on it doesn't want to take the place where the previous print function already is, making 2 spacer lines.

Is there a specific way I can print the length of this list on a new line without utilizing a separate print function? I apologize if this seems silly. My code is as follows:

```
# -- Temporarily sort a list in alphabetical order -- #

colleges = ['Suffolk', 'Westbury', 'Maritime']

print("\nHere is the original list:")
print(*colleges, sep=', ')

print("\nHere is the temporarily sorted list:")
print(*sorted(colleges), sep=', ')

print("\nHere is the original list again:")
print(*colleges, sep=', ')

# -- Sorts list in reverse order, not alphabetically -- #

print("\nHere is a reversed list, not in alphabetical order:")
colleges.reverse()
print(*colleges, sep=', ')

# -- Printing the length of the list -- #

print("\n")
print(len(colleges))
```

I appreciate any help! These fundamentals are important to me.
2021/04/11
[ "https://Stackoverflow.com/questions/67050362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15191975/" ]
The `print()` function automatically ends its output with a newline character (`\n`). So

```
print("\n")
print(len(colleges))
```

amounts to writing

```
\n\nlen(colleges)\n
```

To achieve what you need, you can make use of the extra arguments of `print()`, like so:

```
print('\n', len(colleges))
```
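One caveat with the snippet above: `print()`'s default separator puts a space between the `'\n'` and the number. If that matters, `sep` can be overridden; a minimal example reusing the list from the question:

```python
colleges = ['Suffolk', 'Westbury', 'Maritime']  # list taken from the question
print('\n', len(colleges), sep='')  # one blank line, then 3 on its own line
```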
You don't need the extra `"\n"` print at all here; `print()` automatically moves to a new line for you, because every print statement ends with a newline anyway.
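If the goal is one blank spacer line followed by the length, a single call can do it; a small sketch reusing the question's list:

```python
colleges = ['Suffolk', 'Westbury', 'Maritime']  # list taken from the question
print(f"\n{len(colleges)}")  # one blank line, then the length on its own line
```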
42,443,903
I'm working on a small reusable Component which styles radio buttons and emits the selected values. ``` import { Component, OnInit, Input, Output, EventEmitter } from "@angular/core"; @Component({ moduleId: module.id, selector: 'button-select', template: `<div class="toggle-group"> <div *ngFor="let choice of choices"> <input type="radio" id="{{ groupName + choice }}" name="{{groupName}}" value="{{ choice }}" [checked]="choice === defaultChoice" [(ngModel)]="value" (ngModelChange)="choose($event)" /> <label class="toggle-button" for="{{ groupName + choice }}">{{ choice }}</label> </div> </div>`, styleUrls: [ 'editableField.css', 'buttonSelect.css' ] }) export class ButtonSelectComponent implements OnInit { @Input() choices: string[]; @Input() defaultChoice: string; @Input() groupName: string; @Input() value: string; @Output() valueChosen: EventEmitter<any> = new EventEmitter(); ngOnInit() { this.choose(this.defaultChoice); } private choose(value: string) { this.valueChosen.emit(value); } } ``` The component is implemented like so: ``` <button-select #statusFilter [choices]="['All', 'Active', 'Draft']" [defaultChoice]="'All'" [groupName]="'statusFilter'" (valueChosen)="filterChosen('statusFilter', $event)" </button-select> ``` *Before* adding `[(ngModel)]="value" (ngModelChange)="choose($event)"` to the button-select Component, the `[checked]="choice === defaultChoice"` directive correctly set the `checked` attribute on the relevant `<input />`. *After* adding the `[(ngModel)]`, *only* `ng-reflect-checked="true"` gets set, which prevents the visual styling from showing the default value (since my CSS uses a pseudo-selector). Changing `[(ngModel)]` for `[ngModel]` had no effect. **Why did this happen and how can I fix it?**
2017/02/24
[ "https://Stackoverflow.com/questions/42443903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/114900/" ]
I think you don't need this `[checked]="choice === defaultChoice"`. Try this:

```
<input type="radio"
       id="{{ groupName + choice }}"
       name="{{groupName}}"
       [value]="choice"
       [(ngModel)]="defaultChoice"
       (ngModelChange)="choose($event)" />
```

When `[value]` equals the `[(ngModel)]` value, the radio is selected.
```js export class ConfirmationmodalComponent implements OnInit { client_notification: any = false; candidate_notification: any = false; cancel_associated_session: any = false; constructor( ) {} ngOnInit(): void { } } ``` ```html <div class="form-check form-check-inline"> <input class="form-check-input" type="radio" id="inlineRadio1" name="cancel_associated_session" [(ngModel)]="cancel_associated_session" [value]="true" /> <label class="form-check-label" for="inlineRadio1"> Yes </label> </div> <div class="form-check form-check-inline"> <input class="form-check-input" type="radio" id="inlineRadio2" name="cancel_associated_session" [(ngModel)]="cancel_associated_session" [value]="false" /> <label class="form-check-label" for="inlineRadio2"> No </label> </div> ```
58,212,644
I have a list of numbers (that will change weekly) and I also have a list of ranges (that also change weekly). I need to check whether each number falls within any of the ranges.

E.g. my list of numbers is on the left and my list of ranges is on the right:

```
4 1 3
10 67 99
54 120 122
155
```

So what I need is to return a value if 4 falls between 1-3, then check if it falls between 67-99, and so on. Then return a value if 10 falls between 1-3 or 67-99, etc.

I have tried arrays and VBA, but I'm a noob and I can't find much in the way of examples for this issue. I have had success with the following nested IF:

```
=IF(OR(AND(G2>$L$2,G2<$M$2),AND(G2>$L$3,G2<$M$3),AND(G2>$L$4,G2<$M$4)),G2,"")
=IF(OR(AND(G3>$L$2,G3<$M$2),AND(G3>$L$3,G3<$M$3),AND(G3>$L$4,G3<$M$4)),G3,"")
```

However, once my number of ranges gets above a certain number it says I have too many characters.

Any help would be appreciated.

Regards, Will.
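To pin down the logic I'm after, here it is sketched in Python with made-up sample data mirroring the layout above (this only describes the behaviour; I still need it as a worksheet solution):

```python
# Numbers on the left, (low, high) range pairs on the right; values invented.
numbers = [4, 10, 54, 155]
ranges = [(1, 3), (67, 99), (120, 122)]

for n in numbers:
    # Return the number if it sits strictly inside any range (matching the
    # > and < comparisons in the IF(AND(...)) formulas), otherwise blank.
    print(n if any(low < n < high for low, high in ranges) else "")
```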
2019/10/03
[ "https://Stackoverflow.com/questions/58212644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12157182/" ]
If you want the image to fill the space, use `:cover="true"` on `<v-img>`. If you want the whole image to be visible, you can use the following CSS:

```
.v-image__image{
    background-size:100% 100%;
}
```

This is not the recommended way, because the image's aspect ratio will be distorted.
Using plain HTML appears to be even simpler and better for me; Vuetify just complicates simple things sometimes. Use `v-col`, a `div`, any block element...

```
<v-row
    class="fill-height"
    justify="center"
    align="center"
    style="background-image: url('card.imageUrlHiRes'); background-size: cover;"
>
</v-row>
```
29,986,379
I've been trying to figure-out how can i make the cell fill the width, as you can see in the picture the width between the cells is too big. i am using custom cell with only one imageView. ![enter image description here](https://i.stack.imgur.com/LIV7k.jpg) I tried to customize it from the storyboard but i guess there is no option for that or it should be done programmatically. ![enter image description here](https://i.stack.imgur.com/UHbSq.png) my UICollectionViewController : ``` @IBOutlet var collectionView2: UICollectionView! let recipeImages = ["angry_birds_cake", "creme_brelee", "egg_benedict", "full_breakfast", "green_tea", "ham_and_cheese_panini", "ham_and_egg_sandwich", "hamburger", "instant_noodle_with_egg.jpg", "japanese_noodle_with_pork", "mushroom_risotto", "noodle_with_bbq_pork", "starbucks_coffee", "thai_shrimp_cake", "vegetable_curry", "white_chocolate_donut"] override func viewDidLoad() { super.viewDidLoad() // Uncomment the following line to preserve selection between presentations // self.clearsSelectionOnViewWillAppear = false // Do any additional setup after loading the view. } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } override func numberOfSectionsInCollectionView(collectionView: UICollectionView) -> Int { //#warning Incomplete method implementation -- Return the number of sections return 1 } override func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { //#warning Incomplete method implementation -- Return the number of items in the section return recipeImages.count } override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCellWithReuseIdentifier(reuseIdentifier, forIndexPath: indexPath) as! RecipeCollectionViewCell // Configure the cell cell.recipeImageView.image = UIImage(named: recipeImages[indexPath.row]) return cell } ```
2015/05/01
[ "https://Stackoverflow.com/questions/29986379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3989689/" ]
I have the same requirement, and in my case the solution below worked. I set the **UIImageView**'s top, left, bottom and right constraints to **0** inside the **UICollectionViewCell**:

```
@IBOutlet weak var imagesCollectionView: UICollectionView!

override func viewDidLoad() {
    super.viewDidLoad()

    // flowlayout
    let screenWidth = UIScreen.main.bounds.width
    let layout: UICollectionViewFlowLayout = UICollectionViewFlowLayout()
    layout.sectionInset = UIEdgeInsets(top: 5, left: 5, bottom: 10, right: 0)
    layout.itemSize = CGSize(width: screenWidth/3 - 5, height: screenWidth/3 - 5)
    layout.minimumInteritemSpacing = 5
    layout.minimumLineSpacing = 5
    imagesCollectionView.collectionViewLayout = layout
}
```
The best solution is to change the `itemSize` of your `UICollectionViewFlowLayout` in `viewDidLayoutSubviews`. That is where you can get the most accurate size of the `UICollectionView`. You don't need to call invalidate on your layout; that will be done for you already.
34,952
If you were going to write a chess game engine, what programming paradigm would you use (OOP, procedural, etc.) and why would you choose it? By chess engine, I mean the portion of a program that evaluates the current board and decides the computer's next move.

I'm asking because I thought it might be fun to write a chess engine. Then it occurred to me that I could use it as a project for learning functional programming. Then it occurred to me that some problems aren't well suited to the functional paradigm. Then it occurred to me that this might be good discussion fodder.
2011/01/08
[ "https://softwareengineering.stackexchange.com/questions/34952", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9830/" ]
Evaluation isn't a parallelizable problem as far as I know, but evaluating different chains is, so I would definitely write it to make use of multiple cores and multithreading. Whether you go functional or semi-functional is a matter of taste. Personally I'd go OOP and use the support for functional programming and parallelization that exists in, for instance, C#.

On a side note, if I were to write a chess engine I'd try to make one that really can "think" about chess. Using board evaluation to brute-force all possible combinations has been done to death and extremely well, but there hasn't been much progress afaik in building a more thinking/fuzzy chess engine. That'd be a challenge! :)

Find some games with really tricky positional play and strong moves (they're marked ! or !!) and use them to train and test your engine.
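A rough sketch of the "evaluate different chains in parallel" idea, in Python rather than C#, with a placeholder scoring function standing in for a real subtree search, just to show the shape of farming candidate root moves out to separate cores:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def evaluate_line(move):
    # Placeholder: a real engine would search the subtree reached by `move`
    # (e.g. alpha-beta to some depth) and return its score for the side to move.
    random.seed(move)
    return random.uniform(-1.0, 1.0)

def best_move(candidate_moves):
    # Score each candidate root move in its own process, then pick the best.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(evaluate_line, candidate_moves))
    return max(zip(scores, candidate_moves))[1]

if __name__ == "__main__":
    print(best_move(["e2e4", "d2d4", "g1f3", "c2c4"]))
```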
I ported a simple chess program as a means of learning the Forth language. It turned out to be a great fit to this very imperative problem, and I learned a lot. The open stack allowed me to implement alpha-beta search in a unique fashion which gave me greater insight into the algorithm. One would think that functional programming would be great for chess programs, since the core algorithms (alpha-beta depth-first search, evaluation) are recursive and functionally strict. However, a chess program lives and dies on efficiency and none of the current crop of functional languages have that aim. The top one hundred state-of-the-art engines all use imperative languages (mostly C/C++, then Delphi) in order to have maximum control over memory usage, multi-threading, global state, and code generation. All functional languages use dynamic memory allocation for core data structures, which is death for a chess program. I'd still like to see someone make an attempt to break into the top 100 chess engines using a functional language.
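For readers who haven't seen it, here is a bare-bones sketch of the alpha-beta search mentioned above, written as negamax in Python; the nested-list "game tree" and its leaf scores are invented for the example, and each leaf is scored from the perspective of the side to move at that leaf:

```python
def negamax(node, alpha, beta):
    # Leaves are static evaluations; inner nodes are lists of child positions.
    if isinstance(node, (int, float)):
        return node
    best = float("-inf")
    for child in node:
        # Flip the sign and the (alpha, beta) window for the opponent.
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # beta cutoff: the opponent won't allow this line
            break
    return best

tree = [[3, 5], [-2, 9], [0, [1, -4]]]
print(negamax(tree, float("-inf"), float("inf")))
```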
46,379,115
ESLint: Line 403 exceeds the maximum line length of 120 (max-len) I have a long string, which I built using ES6 template strings, but I want it to be without line breaks: ``` var string = `Let me be the 'throws Exception’ to your 'public static void main (String[] args)’. I will accept whatever you give me my ${love}.` console.log(string); ``` Result: ``` Let me be the 'throws Exception’ to your 'public static void main (String[] args)’. I will accept whatever you give me xxx. ``` My expectation: ``` Let me be the 'throws Exception’ to your 'public static void main (String[] args)’. I will accept whatever you give me xxx. ``` **Requirements**: 1. I cannot disable the eslint rule, as enforcement is necessary. 2. I cannot put the data in a separate file, as the data is dynamic. 3. I cannot concatenate multiple shorter strings, since that is too much work.
2017/09/23
[ "https://Stackoverflow.com/questions/46379115", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7707677/" ]
This is expected behaviour. One of the important problems that template literals solve is [multiline strings](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#Multi-line_strings):

> Any new line characters inserted in the source are part of the template literal.

If the string needs to be processed further, this can be done with other JS features, like regular expressions:

```
var string = `Let me be the 'throws Exception’ to your 'public static void main
(String[] args)’. I will accept whatever you give me.`
  .replace(/[\n\r]+ */g, ' ');
```

[`String.raw`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/raw) is a built-in function to transform template literals, and it's possible to use a tag function to provide custom behaviour for template literals. It should be noted that `String.raw` differs from the default template transformer in how it processes [special characters](https://learn.microsoft.com/en-us/scripting/javascript/advanced/special-characters-javascript#escape-sequences). If they are used in a string, they should be additionally processed with [`unescape-js`](https://github.com/iamakulov/unescape-js/blob/master/src/index.js#L35-L49) or a similar helper function.

```
function singleLine(strsObj, ...values) {
  const strs = strsObj.raw
    .map(str => str.replace(/[\n\r]+ */g, ' '))
    .map(unescapeSpecialChars);
  return String.raw(
    {raw: strs },
    ...values
  );
}

var string = singleLine`Let me be the 'throws Exception’ to your 'public static void
main (String[] args)’. I will accept whatever you give me.`;
```
A good way of achieving this is to join an array of strings:

```
var string = [
  `Let me be the 'throws Exception’ to your 'public static void`,
  `main (String[] args)’. I will accept whatever you give me my ${love}.`
].join(' ');
```
24,341,589
I am using python-docx with Django to generate Word documents. Is there a way to use `add_picture` to add an image from the web rather than from the file system?

In Word, when I select to add a picture, I can just give the URL. I tried to simply do the same and wrote:

```
document.add_picture("http://icdn4.digitaltrends.com/image/microsoft_xp_bliss_desktop_image-650x0.jpg")
```

and got this error:

> IOError: [Errno 22] invalid mode ('rb') or filename:
> '<http://icdn4.digitaltrends.com/image/microsoft_xp_bliss_desktop_image-650x0.jpg>'
2014/06/21
[ "https://Stackoverflow.com/questions/24341589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077102/" ]
Not very elegant, but I found a solution based on the question [here](https://stackoverflow.com/questions/22505922/python-elementtree-xml-ioerror-errno-22-invalid-mode-rb-or-filename). My code now looks like this:

```
import urllib2, StringIO

image_from_url = urllib2.urlopen(url_value)
io_url = StringIO.StringIO()
io_url.write(image_from_url.read())
io_url.seek(0)
document.add_picture(io_url, width=Px(150))
```

and this works fine.
Below is a fresh implementation for Python 3: ``` from io import BytesIO import requests from docx import Document from docx.shared import Inches response = requests.get(your_image_url) # no need to add stream=True # Access the response body as bytes # then convert it to in-memory binary stream using `BytesIO` binary_img = BytesIO(response.content) document = Document() # `add_picture` supports image path or stream, we use stream document.add_picture(binary_img, width=Inches(2)) document.save('demo.docx') ```
32,006,351
I have a [Spring MVC](http://en.wikipedia.org/wiki/Spring_Framework#Model-view-controller_framework) project using [IntelliJ IDEA](http://en.wikipedia.org/wiki/IntelliJ_IDEA) 14 as my IDE (I'm new to IntelliJ IDEA; I used [Eclipse](http://en.wikipedia.org/wiki/Eclipse_%28software%29)). My main Java code is in folder `src/main`, and the unit test code is in `src/test`. While deploying the project, I found that all the unit tests are executed. How can I skip all the unit tests while deploying the project into Tomcat?
2015/08/14
[ "https://Stackoverflow.com/questions/32006351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252933/" ]
In case you are using `Maven`, go to `View` > `Tool Windows` > `Maven Projects` and click on the button shown below (called `Skip Tests Mode`). Essentially it takes the `test` phase out of the lifecycle when you run `package`, the same effect as running `mvn package -DskipTests` from the command line.

[![enter image description here](https://i.stack.imgur.com/NuWtb.png)](https://i.stack.imgur.com/NuWtb.png)
I faced this same issue too, as I only just started using IntelliJ, so I had trouble finding that button in the screenshot shared in the post by `dimitrisli` and in the accepted answer. Let me share my solution for the latest IntelliJ version.

Open IntelliJ, press `Ctrl + Alt + S` to open the Settings window, then apply the same settings as in the screenshot I shared below and you are done. Make sure you tick the checkbox for `Skip Tests`.

My IntelliJ IDE version: `IntelliJ IDEA 2022.2.4 (Community Edition)`

[![enter image description here](https://i.stack.imgur.com/ifLOY.png)](https://i.stack.imgur.com/ifLOY.png)
7,991,303
I have a disclaimer tick box on my login page and I want to make it so that when it's ticked, it stays ticked the next time a person visits the site. The site is JavaScript/jQuery and PHP based; on change of the tick box the value is written and stored in a database and can thus be retrieved, but now I need to associate a user with this is-clicked database value. So I need something like a browser ID, or I need to set a cookie or something. Can anyone advise me as to the best way to go about this?
2011/11/03
[ "https://Stackoverflow.com/questions/7991303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/930584/" ]
The common and more basic way to do this is to write a cookie with the username and another with the hashed password. But I think you should write the cookies with PHP: JS is a client-side language, so you can't hide your salt, for instance.
You need a cookie [plugin](http://plugins.jquery.com/project/Cookie)
24,751,014
When I am installing an extension in my Magento admin, it shows `Connection string is empty`. I don't know why this problem occurs. I cleared the cache, but the problem is still the same. If anyone knows how to fix this, please help me out. Thanks!
2014/07/15
[ "https://Stackoverflow.com/questions/24751014", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3129737/" ]
There are a couple of suggestions here to change the permissions to this:

> sudo chmod 777 -R /var/www/html

which is a really, really bad idea, unless you're seeking to get your site hacked within a few minutes. Have a search for 'Magento permissions' and follow the guide instead.
Type the following command in your terminal: `sudo chmod 777 -R /var/www/html/<magento folder name>` and press Enter. It will ask for your password if one is set for your username. Once it is done, log out of the admin, log back in, and try to install the extension you want.

Edited: Try this for your files:

```
sudo chmod 777 -R /var/www/html
```

and follow the above steps.
74,150,425
I have a list of data:

```
data = ['value', '"""', 'Comment 1st row', 'Comment 2nd row', 'Comment 3rd row', '"""', 'another value']
```

I would like to remove the whole comment from the list, including the docstring markers. And not just once, but every time a comment appears. Could someone help?
2022/10/21
[ "https://Stackoverflow.com/questions/74150425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10155537/" ]
You can use a simple loop with a flag that alternates every time a `"""` is found: ``` data = ['value', '"""', 'Comment 1st row', 'Comment 2nd row', 'Comment 3rd row', '"""', 'another value'] flag = True out = [] for v in data: if v == '"""': flag = not flag # invert the flag's boolean value continue # we have a comment, skip to the next step if flag: # if flag is True, add the item out.append(v) print(out) ``` output: ``` ['value', 'another value'] ``` Example run on `data = data * 5` to mimic multiple comments: ``` ['value', 'another value', 'value', 'another value', 'value', 'another value', 'value', 'another value', 'value', 'another value'] ```
A counter-based approach: if the count of `"""` seen so far is even, append the item (comments are balanced); if it is odd, skip it (the comment hasn't finished yet). Increment the counter whenever a `"""` is found.

```
res = []
c = 0
for char in data:
    if char != '"""':
        if not c % 2:
            res.append(char)
    else:
        c += 1
print(res)
```
30,216,635
What is happening: ------------------ 1. I have a stacktrace from the appstore as below, problem i am facing is that it dosen't show which class has caused this crash. 2. what i can understand is that its causing due to the assets that i have used 3. Only place i am using assets is at the application level to set the font Code: ----- ``` private void setDefaultFont() { try { final Typeface bold = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-Bold.ttf"); final Typeface italic = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-Italic.ttf"); final Typeface boldItalic = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-BoldItalic.ttf"); final Typeface regular = Typeface.createFromAsset(getAssets(),"fonts/OpenSans-Regular.ttf"); Field DEFAULT = Typeface.class.getDeclaredField("DEFAULT"); DEFAULT.setAccessible(true); DEFAULT.set(null, regular); Field DEFAULT_BOLD = Typeface.class.getDeclaredField("DEFAULT_BOLD"); DEFAULT_BOLD.setAccessible(true); DEFAULT_BOLD.set(null, bold); Field sDefaults = Typeface.class.getDeclaredField("sDefaults"); sDefaults.setAccessible(true); sDefaults.set(null, new Typeface[]{ regular, bold, italic, boldItalic }); } catch (NoSuchFieldException e) { // logFontError(e); } catch (IllegalAccessException e) { // logFontError(e); } catch (Throwable e) { //cannot crash app if there is a failure with overriding the default font! // logFontError(e); } } ``` StackTrace from Appstore: ------------------------- ``` java.lang.NullPointerException: Attempt to invoke virtual method 'android.content.res.AssetManager android.content.res.Resources.getAssets()' on a null object reference at android.app.LoadedApk.getAssets(LoadedApk.java:528) at android.app.LoadedApk.makeApplication(LoadedApk.java:584) at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4526) at android.app.ActivityThread.access$1500(ActivityThread.java:151) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1364) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:135) at android.app.ActivityThread.main(ActivityThread.java:5254) at java.lang.reflect.Method.invoke(Native Method) at java.lang.reflect.Method.invoke(Method.java:372) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698) ``` What approach should i need to take to resolve this?? -----------------------------------------------------
2015/05/13
[ "https://Stackoverflow.com/questions/30216635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1083093/" ]
Try to declare the font styles separately (not in the same line), as following, change: ``` final Typeface bold = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-Bold.ttf"); final Typeface italic = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-Italic.ttf"); final Typeface boldItalic = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-BoldItalic.ttf"); final Typeface regular = Typeface.createFromAsset(getAssets(),"fonts/OpenSans-Regular.ttf"); ``` to: ``` final Typeface bold; final Typeface italic; final Typeface boldItalic; final Typeface regular; bold = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-Bold.ttf"); italic = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-Italic.ttf"); boldItalic = Typeface.createFromAsset(getAssets(), "fonts/OpenSans-BoldItalic.ttf"); regular = Typeface.createFromAsset(getAssets(),"fonts/OpenSans-Regular.ttf"); ```
Add the context when calling it. It should be like this:

```
final Typeface bold = Typeface.createFromAsset(getApplicationContext().getAssets(), "fonts/OpenSans-Bold.ttf");
```
16,541,555
I have two arrays as shown below. I need to merge the content of the arrays so that I can get the structure as shown in the third array at last. I have checked array\_merge but can't figure out the way this is possible. Any help appreciated. Thanks. ``` [ ['gross_value' => '100', 'quantity' => '1'], ['gross_value' => '200', 'quantity' => '1'] ] ``` and ``` [ ['item_title_id' => '1', 'order_id' => '4'], ['item_title_id' => '2', 'order_id' => '4'] ]; ``` I should get a merged array like this: ``` [ [ 'gross_value' => '100', 'quantity' => '1', 'item_title_id' => '1', 'order_id' => 4 ], [ 'gross_value' => '200', 'quantity' => '1', 'item_title_id' => '2', 'order_id' => 4 ] ] ```
2013/05/14
[ "https://Stackoverflow.com/questions/16541555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/324860/" ]
Use [array\_merge\_recursive](http://php.net/manual/en/function.array-merge-recursive.php): convert all numeric keys to strings first (i.e. make it an associative array), then

```
$result = array_merge_recursive($ar1, $ar2);
print_r($result);
```

See the [live demo here](http://codepad.org/nBMErHku).
The problem with things like merge recursive is that they don't know when to stop. In some scenarios you want to stop traversing down an array and simply take a given value if it exists. For instance if you have to override a nested config array you might not want the default keys to stick around at a a specific level. here is my solution: ``` public static function merge_lvl2(){ $args = func_get_args(); return static::merge($args, 2); } public static function merge($args, $maxDepth = null, $depth = 1) { $merge = []; foreach($args as $arg) { if (is_array($arg)) { if (is_array($merge)) { if ($maxDepth == $depth) { $arg += $merge; $merge = $arg; } else { $merge = array_merge($merge, $arg); } } else { $merge = $arg; } } } if ($maxDepth !== $depth) { foreach($args as $a) { if (is_array($a)) { foreach($a as $k => $v) { if (isset($merge[$k]) && is_array($merge[$k])) { $merge[$k] = static::merge([$merge[$k], $v], $maxDepth, $depth + 1); } } } } } return $merge; } ``` You can pass as many arrays to merge as you want to. ``` $merged = ClassName::merge_lvl2([..array1..], [..array2..], [..array3..], etc...); ``` It will stop merging at level 2 and accept the last instance of the key as an override instead of a merge. You can also call merge directly with an array of args and setting the max depth. If no max depth is set it will traverse the entire array.