Can CDI dependency injection be optional?
In Spring DI, declaring an autowired field as Optional allows a client to not inject any value into it. Is this possible using Java EE's CDI? I tried Optional and it fails. I want to know whether there is an equivalent mechanism I can use.
Here is what I tried:
```
public class OmeletteMaker implements EggMaker{
public static void main(String[] args){
WeldContainer container = new Weld().initialize();
OmeletteMaker omeletteMaker = container.instance().select(OmeletteMaker.class).get();
}
@Inject
Optional<Vegetable> vegetable;
}
```
I get an error message:
Exception in thread "main" org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [Optional] with qualifiers [@Default] at injection point [[BackedAnnotatedField] @Inject cafeteria.OmeletteMaker.vegetable]
| There are many questions lurking in this seemingly simple question. I'll try to answer them bearing in mind the spirit of the question.
First, as a general rule, if you `@Inject` a `Fred`, that `Fred` cannot be `null` unless `Fred` is in `@Dependent` scope, and even then a producer method or custom bean will have to explicitly be written to return `null`. There are edge cases but in all modern CDI implementations this is a good rule of thumb to bear in mind.
Second, `Optional` isn't special. From the standpoint of CDI, an `Optional` is just another Java object, so see my first statement above. If you have something that produces an `Optional` (like a producer method) then it cannot make a `null` `Optional` (unless, again, the production is defined to be in the `@Dependent` scope—and if you were writing such a method to make `Optional` instances and returning `null` you are definitely going to confuse your users). If you are in control of producing `Optional` instances, then you can make them any way you like.
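For illustration, here is a minimal sketch of such a producer (the `VegetableProducer` class and its lookup method are invented for the example; only the `Vegetable` type comes from the question):
```
import java.util.Optional;
import javax.enterprise.inject.Produces;

public class VegetableProducer {

    // Default (@Dependent) scope: the container hands out whatever this method
    // returns. The Optional itself is never null here; it is simply empty when
    // no Vegetable is available.
    @Produces
    Optional<Vegetable> vegetable() {
        Vegetable v = lookUpVegetableSomehow(); // hypothetical lookup
        return Optional.ofNullable(v);
    }

    private Vegetable lookUpVegetableSomehow() {
        return null; // placeholder: pretend nothing was found
    }
}
```
With something like this in place, `@Inject Optional<Vegetable> vegetable;` from the question becomes a satisfied injection point.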
Third, in case you want to test to see if there is a managed bean or a producer of some kind for a `Fred`, you can, as one of the comments on your question indicates, inject a `Provider<Fred>` or an `Instance<Fred>`. These are "made" by the container automatically: you don't have to write anything special to produce them yourself. A `Provider<Fred>` is an accessor of `Fred` instances and does not attempt to acquire an instance until its `get()` method is called.
An `Instance` is a `Provider` and an `Iterable` of all known `Fred`s and can additionally tell you whether (a) it is "unsatisfied"—there are no producers of `Fred` at all—and (b) it is "resolvable"—i.e. there is exactly one producer of `Fred`.
Fourth, the common idiom in cases where you want to see if something is there is to inject an `Instance` parameterized with the type you want, and then check its [`isResolvable()`](https://jakarta.ee/specifications/cdi/2.0/apidocs/javax/enterprise/inject/Instance.html#isResolvable--) method. If that returns `true`, then you can call its `get()` method and trust that its return value will be non-`null` (assuming the thing it makes is not in `@Dependent` scope).
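Translated to the code in the question, that idiom might look roughly like the sketch below (the `cook()` method body is invented for illustration; `Instance` and `isResolvable()` are the standard CDI APIs discussed above):
```
import javax.enterprise.inject.Instance;
import javax.inject.Inject;

public class OmeletteMaker implements EggMaker {

    // Always injectable, even when no Vegetable bean or producer exists.
    @Inject
    Instance<Vegetable> vegetable;

    void cook() {
        if (vegetable.isResolvable()) {
            // Exactly one Vegetable bean/producer was found; for normal
            // scopes get() will not return null.
            Vegetable v = vegetable.get();
            // ... use v ...
        } else {
            // No Vegetable available (or the resolution is ambiguous):
            // proceed without it.
        }
    }
}
```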
I hope this is helpful!
|
Android Visibility from GONE to VISIBLE doesn't work first time
Hello, I have a problem with an animation I'm trying to make.
I use this library [AndroidViewAnimations](https://github.com/daimajia/AndroidViewAnimations).
Here is my layout xml code:
```
<Button
android:id="@+id/buttonDetails"
style="@style/Button_Details"/>
<LinearLayout
android:id="@+id/linearLayoutDetails"
android:visibility="gone"
style="@style/LinearLayout_Details">
<CheckBox
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/checkbox_clmn_text"
android:checked="true"/>
<CheckBox
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/checkbox_clme_text"
android:checked="true"/>
<CheckBox
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/checkbox_clmn_text"
android:checked="true"/>
<CheckBox
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/checkbox_clmn_text"
android:checked="true"/>
<CheckBox
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/checkbox_clmn_text"
android:checked="true"/>
<CheckBox
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/checkbox_clmn_text"
android:checked="true"/>
</LinearLayout>
```
And here my java code:
```
// Declare Variables
@ViewById
LinearLayout linearLayoutDetails;
@Click
void buttonDetails() {
// Checks Linear Layout Visibility
if (linearLayoutDetails.getVisibility() == View.GONE) {
// Sets linearLayoutDetails Visibility to VISIBLE
linearLayoutDetails.setVisibility(View.VISIBLE);
// Makes Appear Animation
YoYo.with(Techniques.SlideInDown)
.duration(700)
.playOn(linearLayoutDetails);
} else {
linearLayoutDetails.setVisibility(View.GONE);
}
}
```
Now the problem is that the first time I press the Button the animation does not work, but every time after that it works.
I did some research and I found that the problem is that I have the visibility set to gone; if I set it to invisible it works fine from the first time. The thing is that I don't want the visibility to be invisible but gone, because I don't want the LinearLayout to take up space when it's hidden.
Any ideas?
| I solved this just a couple of minutes ago using a ViewTreeObserver. In the example below, the variable `currentMode` is a view that I want to animate in and out. Its default value is "gone" and I was having the EXACT same issue you were having.
Here's how I solved it:
```
ViewTreeObserver vto = currentMode.getViewTreeObserver();
vto.addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
if(currentMode.isShown()) {
YoYo.with(Techniques.SlideInDown).duration(250).playOn(currentMode);
}
}
});
currentMode.setVisibility(mShouldShowMode ? View.VISIBLE : View.GONE);
```
Using the ViewTreeObserver we can watch global changes to the UI and then react. The `View.isShown()` method checks whether the view is visible. If so, I start the animation. This works great for me.
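One caveat to be aware of: `onGlobalLayout()` fires on every layout pass, so the animation above can replay whenever the layout changes while the view is shown. A sketch of the same idea that unregisters the listener after it first triggers (`removeOnGlobalLayoutListener` requires API 16+; on older versions the deprecated `removeGlobalOnLayoutListener` is the equivalent):
```
currentMode.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        if (currentMode.isShown()) {
            // Stop listening so the animation does not replay on later layout passes.
            currentMode.getViewTreeObserver().removeOnGlobalLayoutListener(this);
            YoYo.with(Techniques.SlideInDown).duration(250).playOn(currentMode);
        }
    }
});
currentMode.setVisibility(mShouldShowMode ? View.VISIBLE : View.GONE);
```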
Then for the exit animation you have to attach a listener to the out animation like this:
```
YoYo.with(Techniques.SlideOutUp).duration(250).withListener(new Animator.AnimatorListener() {
@Override
public void onAnimationStart(Animator animation) {
}
@Override
public void onAnimationEnd(Animator animation) {
currentMode.setVisibility(View.GONE);
}
@Override
public void onAnimationCancel(Animator animation) {
}
@Override
public void onAnimationRepeat(Animator animation) {
}
}).playOn(currentMode);
```
|
Which padding is used by javax.crypto.Cipher for RSA
I need to encrypt messages via RSA in order to send them over an unsecured channel, but I'm afraid of the [Padding Oracle Attack](https://en.wikipedia.org/wiki/Padding_oracle_attack). Therefore I have already asked the following questions:
1. [How to verify the integrity of RSA encrypted messages?](https://crypto.stackexchange.com/q/27510/8325)
2. [How to ensure message integrity for RSA ciphers by using javax.crypto.Cipher](https://security.stackexchange.com/q/96943/29820)
As suggested in the first question,
>
> However, since you are using a high level cryptographic library, this is something you shouldn't have to worry about. The writers of that library should have taken care of it.
>
>
>
I shouldn't have to worry about it. As far as I know, the RSA implementation with `PKCS#1 v1.5` padding is vulnerable to the `Padding Oracle Attack`, whereas [OAEP](https://en.wikipedia.org/wiki/Optimal_asymmetric_encryption_padding) isn't (assuming it's implemented correctly).
Hence I want to know which padding implementation is used by `javax.crypto.Cipher` in Java 7.
| Which padding is actually used depends on the chosen or default provider when you instantiate a Cipher without fully qualifying it, like:
```
Cipher.getInstance("RSA")
```
Doing so is a bad practice, because if you switch Java implementations, there might be different defaults and suddenly, you won't be compatible with the old ciphertexts anymore. **Always fully qualify the cipher.**
As I said before, the default will probably (there are many providers, one can't be sure) be PKCS#1 v1.5 padding. If you need another, you would have to specify it. If you want to use OAEP, here is a fully qualified cipher string from [here](https://docs.oracle.com/javase/8/docs/api/index.html?javax/crypto/Cipher.html):
```
Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
```
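For completeness, a minimal sketch of round-tripping a message with that fully qualified OAEP transformation (the key size and plaintext are arbitrary choices here, and provider support for this transformation can vary, so treat it as illustrative):
```
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class OaepExample {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // Encrypt with the public key using the fully qualified transformation.
        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, kp.getPublic());
        byte[] ciphertext = enc.doFinal("secret message".getBytes("UTF-8"));

        // Decrypt with the private key using the exact same transformation string.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, kp.getPrivate());
        byte[] plaintext = dec.doFinal(ciphertext);
        System.out.println(new String(plaintext, "UTF-8"));
    }
}
```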
|
UISearchBar on UITableView strange offset issue
I have a `UITableView` which has a `UISearchBar` subview. This is all on the view of a `UIViewController` along with a few other subviews (labels, text fields and such).
The search bar and content offset of the table are acting quite strangely, but it seems dependent on the order in which these views are added to the main view in the xib. I created a sample project with just my table/search and a label in order to test, and the result is the same. When the table is added *after* the label, everything works fine:
# Setup:
![table added after label](https://i.stack.imgur.com/C9Qld.png)
# Correct and Expected Result:
![Proper expected result](https://i.stack.imgur.com/9vs2l.png)
**However**, if I simply change the order in which my 2 subviews sit on the main view (aka table added *before* the label) then weird things start happening.
# Apparently bad setup:
![Table added before label](https://i.stack.imgur.com/M4HGd.png)
# Weird offset of Search Bar:
![wtf is this?!?!](https://i.stack.imgur.com/GGtc0.png)
I'm not changing anything else whatsoever, so why does Xcode seem to care which order these subviews are added to the main view?? If I scroll up on the "bad" table setup, the search bar disappears immediately at its top edge, but the table's content will continue to scroll up until it reaches the top of the frame that was set in the xib. Scroll back down and the search bar doesn't reappear until the strange lowered location. This is in Xcode 5.1.1, not the new beta. The result is the same with or without Autolayout turned on.
Any idea why this is happening? Is this a bug, or am I missing something? (I didn't post any code because all I'm doing is setting the number of sections, rows, and setting the text on the cell. Not messing with content insets, offset, anything. I load the view from the app delegate as the root of a nav controller)
| This happens because of a `UIViewController` property called `automaticallyAdjustsScrollViewInsets`:
>
> With iOS 7, UIViewControllers have a property called
> automaticallyAdjustsScrollViewInsets, and it defaults to YES. If you
> have a scroll view that is either the root view of your view
> controller (such as with a UITableViewController) or the subview at
> index 0, then that property will adjust both the contentInset and the
> scrollIndicatorInsets. This will allow your scroll view to start its
> content and scroll indicators below the navigation bar (if your view
> controller is in a navigation controller).
>
>
>
From [Big Nerd Ranch](http://www.bignerdranch.com/blog/designing-interfaces-ios-6-ios-7/)
If you are using storyboards, you can change it by selecting the view controller and in the attributes inspector deselect `Adjust scroll view insets`.
Here is its description from [apple documentation](https://developer.apple.com/library/ios/documentation/uikit/reference/UIViewController_Class/Reference/Reference.html#//apple_ref/occ/instp/UIViewController/automaticallyAdjustsScrollViewInsets):
>
> Default value is YES, which allows the view controller to adjust its
> scroll view insets in response to the screen areas consumed by the
> status bar, navigation bar, and toolbar or tab bar. Set to NO if you
> want to manage scroll view inset adjustments yourself, such as when
> there is more than one scroll view in the view hierarchy.
>
>
>
|
Increase the api limit in ggmap's geocode function (in R)
I'm trying to use the `geocode` function from the `ggmap` library in `R` to get coordinates for specific locations. I'm able to use the function fine so far.
The issue I'm running into is that I would like to increase my daily limit from `2,500` to `100,000`. The official Google [documentation](https://developers.google.com/maps/documentation/geocoding/usage-limits)
says that this is readily possible if you enable billing on the project, which I'm happy to do. When you proceed with this process, the `Google Developers Console` gives you a personalized API key.
However, the `geocode` function doesn't have an option to put in this personalized API key. Instead, it asks for the `client` (client ID for business users) and `signature` (signature for business users), which is how [Google Maps API for Work](https://developers.google.com/maps/documentation/business/) customers can access the API. I get that this is also an option, but that seems to be a very narrow use case, since Google Maps API for Work seems to be designed for large enterprise accounts:
>
> Daily quota starting at 100,000 requests per 24 hours, based on annual contractual purchase.
>
>
>
So my question boils down to this: can I use the `geocode` function from the `ggmap` library in `R` to ping the Google Maps Geocoding API?
| I didn't find a way to use the existing `geocode` function (from the `ggmap` library) to answer this question, so I just created a new function to do this myself using the existing `getURL` function (from the `RCurl` library) and the `fromJSON` function (from the `RJSONIO` library).
Write the new function:
```
library(RJSONIO)
library(RCurl)
getGeoData <- function(location){
location <- gsub(' ','+',location)
geo_data <- getURL(paste("https://maps.googleapis.com/maps/api/geocode/json?address=",location,"&key=**[YOUR GOOGLE API KEY HERE]**", sep=""))
raw_data_2 <- fromJSON(geo_data)
return(raw_data_2)
}
```
Test:
`getGeoData("San Francisco")`
This gives you a list with the same data that's almost (but not quite) in the same exact format as the list produced by `geocode("San Francisco")`.
|
Does the implementation of HashMap store key-value pairs in a linked list
I read in a book that when we put elements in a HashMap, they are internally stored in buckets. My questions are:
1. Does HashMap always store key-value pairs in the form of a linked list, or does it use a linked list only when there is a collision?
2. How does it retrieve the object when 2 different objects are stored in the same bucket?
Thanks!
| Lots of details at <http://en.wikipedia.org/wiki/Hash_table>
See also [Internal implementation of java.util.HashMap and HashSet](https://stackoverflow.com/questions/1781868/internal-implementation-of-java-util-hashmap-and-hashset)
And of course you can use the [source](http://www.docjar.com/html/api/java/util/HashMap.java.html), Luke.
Updated: to specifically answer your Q, it stores an Entry, which has a reference to the next item in the bucket (if any). If there is only one item in the bucket then the reference will be null:
```
static class Entry<K,V> implements Map.Entry<K,V> {
final K key;
V value;
Entry<K,V> next;
```
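To address the second sub-question: on a lookup the bucket's chain is walked, comparing each entry's hash and key until a match is found. A simplified sketch in the spirit of the linked source (not the actual JDK code; the real `Entry` also caches the hash, which is omitted in the snippet above):
```
V get(Object key) {
    int hash = hash(key);                               // spread the key's hashCode
    Entry<K,V> e = table[indexFor(hash, table.length)]; // pick the bucket
    while (e != null) {
        if (e.hash == hash && (e.key == key || key.equals(e.key))) {
            return e.value;                             // matching key found
        }
        e = e.next;                                     // collision: follow the chain
    }
    return null;                                        // not present
}
```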
|
Animate splitpane divider
I have a horizontal split pane, and I would like to change the divider position on button click, so that I create a sort of "slide" animation.
The divider would start at 0 (completely left); on my click it would open to 0.2, and when I click again it would go back to 0.
Now I have achieved this; I just use
```
spane.setDividerPositions(0.2);
```
and the divider position changes, but I can't manage to do this slowly. I would really like that slide feeling when changing the divider position.
Could anyone help me? All the examples I found on Google show some DoubleTransition, but that does not exist anymore in Java 8; at least I don't have an import for it.
| You can call `getDividers().get(0)` to get the first divider. It has a `positionProperty()` that you can animate using a timeline:
```
import javafx.animation.KeyFrame;
import javafx.animation.KeyValue;
import javafx.animation.Timeline;
import javafx.application.Application;
import javafx.beans.binding.Bindings;
import javafx.beans.property.BooleanProperty;
import javafx.beans.property.SimpleBooleanProperty;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.SplitPane;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.HBox;
import javafx.scene.layout.Pane;
import javafx.stage.Stage;
import javafx.util.Duration;
public class AnimatedSplitPane extends Application {
@Override
public void start(Stage primaryStage) {
SplitPane splitPane = new SplitPane(new Pane(), new Pane());
splitPane.setDividerPositions(0);
BooleanProperty collapsed = new SimpleBooleanProperty();
collapsed.bind(splitPane.getDividers().get(0).positionProperty().isEqualTo(0, 0.01));
Button button = new Button();
button.textProperty().bind(Bindings.when(collapsed).then("Expand").otherwise("Collapse"));
button.setOnAction(e -> {
double target = collapsed.get() ? 0.2 : 0.0 ;
KeyValue keyValue = new KeyValue(splitPane.getDividers().get(0).positionProperty(), target);
Timeline timeline = new Timeline(new KeyFrame(Duration.millis(500), keyValue));
timeline.play();
});
HBox controls = new HBox(button);
controls.setAlignment(Pos.CENTER);
controls.setPadding(new Insets(5));
BorderPane root = new BorderPane(splitPane);
root.setBottom(controls);
Scene scene = new Scene(root, 600, 600);
primaryStage.setScene(scene);
primaryStage.show();
}
public static void main(String[] args) {
launch(args);
}
}
```
|
Maven dependency graph missing in NetBeans 6.9
NetBeans had a really cool feature that would allow you to view all of a Maven project's dependencies as a graph. Well, I recently upgraded from 6.8 to 6.9, and while all the other Maven stuff works fine, the menu item for the dependency graph has vanished. I couldn't find any information on the NetBeans site. Does anybody know if this feature was removed? Or am I just missing some configuration option?
| According to [Creating an Enterprise Application Using Maven](https://netbeans.org/kb/docs/javaee/maven-entapp.html) (applies to NetBeans 6.9), the dependency graph is supposed to be available:
>
> You can right-click in `pom.xml` and
> choose Show Dependency Graph to see a
> visual representation of the project
> dependencies. You can place your
> cursor over an artifact to display a
> tooltip with the artifact details.
>
>
> [![alt text](https://i.stack.imgur.com/fz74O.png)](https://i.stack.imgur.com/fz74O.png)
>
> (source: [netbeans.org](http://netbeans.org/images_www/articles/68/javaee/mavenentapp/maven-webpomgraph.png))
>
>
>
I can't confirm this as I'm not using NetBeans 6.9 right now but I doubt they removed this nice feature.
|
How is std::atomic::operator= implemented for immutable types?
I've learned that one way to communicate between threads is to share some atomic data structure. For example:
```
struct Point {
int const x, y;
};
std::atomic<Point> some_point_in_shared_memory{Point{0, 0}};
```
Despite `Point::operator=(Point const &)` being deleted, there seems to be [no problem](https://godbolt.org/z/dbd7fx) calling the assignment operator for `std::atomic<Point>` as follows:
```
some_point_in_shared_memory = Point{1, 2};
```
How can this operation be implemented?
One solution I can think of is using `placement new` to construct a new object on top of the old one, but apparently [it is not exception safe](https://stackoverflow.com/questions/7177884/can-i-use-placement-newthis-in-operator). Or is it okay because `Point` is trivially copyable?
| From [cppreference](https://en.cppreference.com/w/cpp/atomic/atomic):
>
> The primary std::atomic template may be instantiated with any
> TriviallyCopyable type T satisfying both CopyConstructible and
> CopyAssignable. The program is ill-formed if any of following values
> is false:
>
>
>
> ```
> std::is_trivially_copyable<T>::value
> std::is_copy_constructible<T>::value
> std::is_move_constructible<T>::value
> std::is_copy_assignable<T>::value
> std::is_move_assignable<T>::value
>
> ```
>
>
Your `T` is not CopyAssignable, and this line
```
some_point_in_shared_memory = Point{1, 2};
```
is ill-formed. There should be a compiler error. Unfortunately I didn't get GCC to emit an error or warning (`-pedantic -Wpedantic -pedantic-errors -Wall -Werror=pedantic` had no effect).
|
data auditing in Cassandra
How to implement auditing for cassandra data?
I am looking for an open source option.
Are there any features of Cassandra that help with auditing?
Can I use triggers to log the records into a table? I followed the [Triggers](https://github.com/apache/cassandra/tree/trunk/examples/triggers) example and was able to get a record inserted into a `triggers_log` table when updates occur on another table.
But I am not sure how to capture the `user/session` details that triggered the update. From the `CQLSH` terminal, I created the `audit_log` and `users` tables:
```
create table AUDIT_LOG (
transaction_id int,
entries map<text, text>, --> to capture the modifications done to the tables
user varchar, //authenticated user
time timestamp,
primary key(transaction_id));
```
```
CREATE TABLE users (
user_id int PRIMARY KEY,
fname text,
lname text
);
```
Define the trigger on users table using `CREATE TRIGGER` syntax from `cqlsh`
Below is my code so far.
```
public class AuditTrigger implements ITrigger {
@Override
public Collection<RowMutation> augment(ByteBuffer key, ColumnFamily update) {
List<RowMutation> mutations = new ArrayList<RowMutation>();
for (Column column : update) {
if (column.value().remaining() > 0) {
RowMutation mutation = new RowMutation("mykeyspace", key);
//What do I need here to capture the updates to users
//table and log the updates into various columns of audit_log
mutations.add(mutation);
}
}
return mutations;
}
}
```
If triggers are not the correct approach (any Spring AOP approach?), please suggest alternatives. I also tried the [Cassandra vs logging activity](https://stackoverflow.com/questions/9604554/cassandra-vs-logging-activity) solution, but it does not print the SQL executed or the authenticated user information.
| Unfortunately, at this time triggers cannot be used, because what you need is the ClientState, which contains the user information and is not passed to triggers.
There are 2 approaches I can think of. (You will need to look at the Cassandra code base to better understand these approaches.)
One approach is AOP, i.e. to write an agent that applies the aspect and to start Cassandra with that agent. The method that needs to be pointcut is QueryProcessor#processStatement. The call to this method has the prepared statement and the QueryState as parameters. From the PreparedStatement you can identify the intention of the user, and QueryState.getClientState will return the ClientState, which is where the user information resides.
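Purely as an illustration of that first approach, an AspectJ-style sketch is shown below. The exact package, class location and signature of `processStatement` differ between Cassandra versions, so the pointcut expression and the argument positions are assumptions you would need to adapt:
```
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class QueryAuditAspect {

    // Assumed pointcut; verify the fully qualified name against your Cassandra version.
    @Around("execution(* org.apache.cassandra.cql3.QueryProcessor.processStatement(..))")
    public Object audit(ProceedingJoinPoint pjp) throws Throwable {
        Object[] args = pjp.getArgs();
        // Per the description above, the arguments include the prepared statement
        // (the user's intent) and the QueryState, whose getClientState() exposes
        // the authenticated user. Log whatever subset of that you need here.
        return pjp.proceed();
    }
}
```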
The other approach involves custom authenticators and authorizers. Configuring this in Cassandra is described here.
<http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/secure_about_native_authenticate_c.html>
You can have a custom authorizer extending the AllowAllAuthorizer (this will disable permission caching). Whenever you get an authorize request on the Authorizer you can log it. The downside of this approach is that you do not know what the user intends to do with the table, only that they are requesting some authorization on it. The Permission is what contains what the user wants to do with the table, but it is not passed on to the authorizer.
If you decide on either of these approaches, feel free to post follow-ups if you need more detail.
|
SASS - for h1-h6 style
I have an SCSS style like this, and I want to use @for from SCSS to write it more efficiently.
So far:
```
@for $i from 1 through 6 {
h#{$i} {
$size: {$i} * 1.4;
@include font-size($size);
}
}
```
*Note: don't mind the calculation of the size, it's just for testing*
but the syntax is not right.
Expected output
```
h1 {
@include font-size(3.2);
}
h2 {
@include font-size(2.4);
}
h3 {
@include font-size(1.8);
}
h4 {
@include font-size(1.6);
}
h5 {
@include font-size(1.3);
}
h6 {
@include font-size(1.2);
}
```
| The main issue is that your headings would increase in size as the heading number gets higher (because you are using `$i` incrementally). You can escape that by inverting the heading number with the formula `7 - $i`.
```
@for $i from 1 through 6 {
h#{7 - $i} {
font-size: $i * 1.4em;
}
}
```
The output here is:
```
h6 { font-size:1.4em }
h5 { font-size:2.8em }
h4 { font-size:4.2em }
h3 { font-size:5.6em }
h2 { font-size:7em }
h1 { font-size:8.4em }
```
Which seems to make sense. The original error you were getting was this:
```
Invalid CSS after "$size:": expected expression (e.g. 1px, bold), was "{$i} * 1.4;"
```
Because you can simply use `$i` as a number without special denotation.
To get the numbers to match with your question, you should actually find a way to calculate them on the fly - and the numbers you have shown above *are not* a pattern, so they are not mathematically controllable. Here's what you could do:
```
@for $i from 1 through 6 {
h#{7 - $i} {
font-size: 3.4em / 6 * $i;
}
}
```
The reason this cannot be computed mathematically like your question desires is: `h1 - h2 = .8em`, `h2 - h3 = .6em`, `h3 - h4 = .2em` => that last one does not fall into line with any particular pattern.
|
Allocating string with malloc
I'm new in programming in C and now I'm studying strings.
My question is: if I allocate a string using `malloc` (as in the code below), is the NULL character automatically inserted at the end of the string?
I found an answer in another question here, and it seems that the NULL character is not automatically included.
But here comes the problem: I know functions like `strlen` don't work if there isn't the NULL character, and in this code I use it and it works. So I think there is `\0` at the end of my string, even if I don't write it anywhere.
What's the answer?
Here's the code:
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char** argv) {
char *stringa1;
int n;
int i;
printf("How many characters in the string? ");
scanf("%d", &n);
stringa1 = (char*) malloc(n*sizeof(char));
printf("Insert the string: ");
scanf("%s", stringa1);
free(stringa1);
return 0;
}
```
| `malloc()` returns a `void*` pointer to a block of memory stored in the heap. Allocating with `malloc()` does not initialize any string, only space waiting to be occupied. To add a null-terminating character, you either have to do this yourself or use a function like `scanf()`, which adds this character for you. Having said this, you need to allocate space for this `\0` character beforehand.
Your `malloc()` call should be this instead:
```
stringa1 = (char*) malloc((n+1)*sizeof(char)); /*+1 for '\0' character */
```
**Note:** You don't need to cast return of malloc. For more information, read [this](https://stackoverflow.com/questions/605845/do-i-cast-the-result-of-malloc).
Another thing to point out is `sizeof(char)` is `1`, so multiplying this in your `malloc()` call is not necessary.
You also need to check if `malloc()` returns `NULL`. This can be done like this:
```
if (stringa1 == NULL) {
/* handle exit */
```
Also, you can only use `strlen()` on a null-terminated string, otherwise this ends up being [undefined behaviour](https://stackoverflow.com/questions/25799336/is-strlen-on-a-string-with-unitialized-values-undefined-behavior).
Once `scanf()` is called, and the `stringa1` contains some characters, you can call `strlen()` on it.
Additionally, checking return of `scanf()` is also a good idea. You can check it like this:
```
if (scanf("%d", &n) != 1) {
/* handle exit */
```
**Your code with these changes:**
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void) {
char *stringa1 = NULL;
size_t n, slen;
printf("How many characters in the string? ");
if (scanf("%zu", &n) != 1) {
printf("Invalid input\n");
exit(EXIT_FAILURE);
}
stringa1 = malloc(n+1);
if (stringa1 == NULL) {
printf("Cannot allocate %zu bytes for string\n", n+1);
exit(EXIT_FAILURE);
}
printf("Insert the string: ");
scanf("%s", stringa1);
slen = strlen(stringa1);
printf("String: %s Length: %zu\n", stringa1, slen);
free(stringa1);
stringa1 = NULL;
return 0;
}
```
|
Apache POI insert image
I am having trouble inserting a picture into an Excel sheet I'm making.
There are a lot of questions about this subject, but I simply cannot figure out what I am doing wrong.
My code runs and shows no errors, but I do not see an image inserted :(
here is the code:
```
InputStream is = new FileInputStream("nasuto_tlo.png");
byte [] bytes = IOUtils.toByteArray(is);
int pictureIndex = wb.addPicture(bytes, Workbook.PICTURE_TYPE_PNG);
is.close();
CreationHelper helper = wb.getCreationHelper();
Drawing drawingPatriarch = sheet.createDrawingPatriarch();
ClientAnchor anchor = helper.createClientAnchor();
anchor.setCol1(2);
anchor.setRow1(3);
Picture pict = drawingPatriarch.createPicture(anchor, pictureIndex);
pict.resize();
try {
FileOutputStream out = new FileOutputStream(root+"/Busotina/Busotina1.xls");
wb.write(out);
out.close();
} catch (Exception e) {
e.printStackTrace();
}
```
| The problem is that your anchor is not correct.
You need to set all 4 values, because the default ones are 0 - but your first column cannot be further right than your second one ;) You'll get a negative extent.
You should get a warning that the file is corrupt when you open the Excel file.
So try
```
anchor.setCol1(2);
anchor.setCol2(3);
anchor.setRow1(3);
anchor.setRow2(4);
```
---
A working example from some code I wrote:
```
// read the image to the stream
final FileInputStream stream =
new FileInputStream( imagePath );
final CreationHelper helper = workbook.getCreationHelper();
final Drawing drawing = sheet.createDrawingPatriarch();
final ClientAnchor anchor = helper.createClientAnchor();
anchor.setAnchorType( ClientAnchor.MOVE_AND_RESIZE );
final int pictureIndex =
workbook.addPicture(IOUtils.toByteArray(stream), Workbook.PICTURE_TYPE_PNG);
anchor.setCol1( 0 );
anchor.setRow1( LOGO_ROW ); // same row is okay
anchor.setRow2( LOGO_ROW );
anchor.setCol2( 1 );
final Picture pict = drawing.createPicture( anchor, pictureIndex );
pict.resize();
```
|
What is /proc/85/root/tty & why am I seeing it written to every 1 second?
I'm watching which files / directories are being written to on my Ubuntu 22.04.1 LTS installation (kernel v. 5.15.0-52-generic).
Suddenly I'm seeing writes to /proc/85/root/tty every 1 second.
I know this may be some specific process (such as firefox) to my machine.
If so, can you tell me how I can determine which process it would be that is writing to this tty?
| `/proc/85` is for PID 85. To find out the process or program name do `ps aux | grep " 85 "`. Example from my computer, but for the similar PID 86:
```
doug@s19:~/idle/teo/util/ping-sweep/6-2$ ps aux | grep " 86 "
root 86 0.0 0.0 0 0 ? S Oct26 0:00 [kdevtmpfs]
doug 13416 0.0 0.0 9040 660 pts/2 S+ 13:44 0:00 grep --color=auto 86
```
The 2nd hit is the grep program itself. So the kernel thread that maintains devtmpfs is what you are observing. I do not know why you see the tty handle being written to every second. On my system it seems to update infrequently, and I haven't been able to isolate why:
```
doug@s19:~$ sudo ls -l /proc/86/root/tty
crw-rw-rw- 1 root tty 5, 0 Oct 28 13:34 /proc/86/root/tty
doug@s19:~$ sudo ls -l /proc/86/root/tty
crw-rw-rw- 1 root tty 5, 0 Oct 28 13:34 /proc/86/root/tty
doug@s19:~$ sudo ls -l /proc/86/root/tty
crw-rw-rw- 1 root tty 5, 0 Oct 28 2022 /proc/86/root/tty
doug@s19:~$ sudo ls -l /proc/86/root/tty
crw-rw-rw- 1 root tty 5, 0 Oct 28 13:58 /proc/86/root/tty
doug@s19:~/idle/teo/util/ping-sweep/6-2$ date
Fri 28 Oct 2022 02:05:07 PM PDT
doug@s19:~$ sudo ls -l /proc/86/root/tty
crw-rw-rw- 1 root tty 5, 0 Oct 28 13:58 /proc/86/root/tty
```
|
Manually call didSelectRowatIndexPath
I am trying to call didSelectRowAtIndexPath programmatically but am having trouble.
```
[self tableView:playListTbl didSelectRowAtIndexPath:indexPath];
```
Gives me the following error:
>
> Use of undeclared identifier 'indexPath'; did you mean 'NSIndexPath'?
>
>
>
Can anybody help me out? Thanks!
EDIT: From the responses below it sounds like I'm going about this the wrong way. How can I get the text of the selected items when a button is pressed (there can be multiple selections)? I need to do this in a function dedicated to the button press.
| You need to pass a valid argument; if you haven't declared `indexPath` in the calling scope then you'll get that error. Try:
```
NSIndexPath *indexPath = [NSIndexPath indexPathForRow:ROW_YOU_WANT_TO_SELECT inSection:SECTION_YOU_WANT_TO_SELECT];
[self tableView:playListTbl didSelectRowAtIndexPath:indexPath];
```
Where `ROW_YOU_WANT...` are to be replaced with the row and section you wish to select.
However, you really shouldn't ever call this directly. Extract the work being done inside `tableView:didSelectRowAtIndexPath:` into separate methods and call those directly.
To address the updated question, you need to use the `indexPathsForSelectedRows` method on `UITableView`. Imagine you were populating the table cell text from an array of arrays of strings, something like this:
```
- (UITableViewCell *)tableView:(UITableView *)tv cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
UITableViewCell *cell = [tv dequeue...];
NSArray *rowsForSection = self.sectionsArray[indexPath.section];
NSString *textForRow = rowsForSection[indexPath.row];
cell.textLabel.text = textForRow;
return cell;
}
```
Then, to get all the selected text, you'd want to do something like:
```
NSArray *selectedIndexPaths = [self.tableView indexPathsForSelectedRows];
NSMutableArray *selectedTexts = [NSMutableArray array];
for (NSIndexPath *indexPath in selectedIndexPaths) {
NSArray *section = self.sectionsArray[indexPath.section];
NSString *text = section[indexPath.row];
[selectedTexts addObject:text];
}
```
`selectedTexts` would at that point contain all selected information. Hopefully that example makes sense.
|
More doubts in bzImage
The description of the `bzImage` in Wikipedia is really confusing me.
![alt text](https://i.stack.imgur.com/JXbyA.png)
The above picture is from Wikipedia, but the line next to it is:
>
> The bzImage file is in a specific
> format: It contains concatenated
> bootsect.o + setup.o + misc.o +
> piggy.o.
>
>
>
I can't find the others (`misc.o` and `piggy.o`) in the image.
I would also like to get more clarity on these object files.
The info on [this post](http://lkml.indiana.edu/hypermail/linux/kernel/9909.3/0625.html) about why we can't boot a `vmlinux` file is also really confusing me.
Another doubt is regarding the `System.map`. How is it linked to the `bzImage`? I know it contains the symbols of `vmlinux` before creating `bzImage`. But then at the time of booting, how does `bzImage` get attached to the `System.map`?
| Till Linux 2.6.22, `bzImage` contained:
- bbootsect ([`bootsect.o`](http://lxr.linux.no/#linux+v2.6.22.19/arch/i386/boot/bootsect.S)):
- bsetup ([`setup.o`](http://lxr.linux.no/#linux+v2.6.22.19/arch/i386/boot/setup.S))
- bvmlinux ([`head.o`](http://lxr.linux.no/#linux+v2.6.22.19/arch/i386/boot/compressed/head.S), [`misc.o`](http://lxr.linux.no/#linux+v2.6.22.19/arch/i386/boot/compressed/misc.c), `piggy.o`)
Linux 2.6.23 merged bbootsect and bsetup into one ([`header.o`](http://lxr.linux.no/#linux+v2.6.37/arch/x86/boot/header.S)).
At boot up, the kernel needs to initialize some sequences (see the header file above) which are only necessary to bring the system into a desired, usable state. At runtime, those sequences are not important anymore (so why include them into the running kernel?).
`System.map` relates to `vmlinux`; `bzImage` is just the compressed container, out of which `vmlinux` gets extracted at boot time (so `bzImage` doesn't really care about `System.map`).
Linux 2.5.39 introduced `CONFIG_KALLSYMS`. If enabled, the kernel keeps its own map of symbols (`/proc/kallsyms`).
`System.map` is primarily used by user space programs like `klogd` and `ksymoops` for debugging purposes.
Where to put `System.map` depends on the user space programs which consult it.
`ksymoops` tries to get the symbol map either from `/proc/ksyms` or `/usr/src/linux/System.map`.
`klogd` searches in `/boot/System.map`, `/System.map` and `/usr/src/linux/System.map`.
Removing `/boot/System.map` generated no problems on a Linux system with kernel 2.6.27.19.
|
Removing file extension using grunt-contrib-connect and grunt-connect-rewrite
I'm trying to remove '.html' from files in my grunt web app.
<http://testing.com/one/> should return index.html from that folder, but if there is no trailing slash (<http://testing.com/one>) it should check for one.html
The grunt-connect-rewrite plugin seems to be working fine with the examples I can find, but removing file extensions from .html files seems to be killing me. The rule here is one similar to what I'd use in an .htaccess file.
```
connect: {
server: {
options: {
port: 9000,
keepalive: true,
base: 'dist',
middleware: function(connect, options) {
return [
rewriteRulesSnippet,
// Serve static files
connect.static(require('path').resolve(options.base))
];
}
},
rules: {
'^(.*)\.html$': '/$1'
}
}
}
```
So the question is, what is the correct rule to use here?
| The answers didn't work for me so I played around with it until I found a solution.
Regex:
```
from: '(^((?!css|html|js|img|fonts|\/$).)*$)',
to: "$1.html"
```
Package versions:
```
"grunt-contrib-watch": "~0.5.3",
"grunt-contrib-connect": "~0.5.0",
"grunt-connect-rewrite": "~0.2.0"
```
Complete working Gruntfile:
```
var rewriteRulesSnippet = require("grunt-connect-rewrite/lib/utils").rewriteRequest;
module.exports = function(grunt) {
grunt.initConfig({
watch: {
html: {
files: "**/*.html"
}
},
connect: {
options: {
port: 9000,
hostname: "127.0.0.1"
},
rules: [{
from: '(^((?!css|html|js|img|fonts|\/$).)*$)',
to: "$1.html"
}],
dev: {
options: {
base: "./",
middleware: function(connect, options) {
return [rewriteRulesSnippet, connect["static"](require("path").resolve(options.base))];
}
}
},
}
});
grunt.loadNpmTasks("grunt-connect-rewrite");
grunt.loadNpmTasks("grunt-contrib-connect");
grunt.loadNpmTasks("grunt-contrib-watch");
grunt.registerTask("default", ["configureRewriteRules", "connect:dev", "watch"]);
};
```
|
How to properly prepare a server for power outages?
I have a personal Debian server set up, that I'm able to access in person and remotely, but normally I'm away from it and it's remote. It's on a battery-backup surge protector, but sometimes that isn't enough to keep the server on through a power outage. The machine automatically starts and boots, but its left at the login screen (in person). What I want to know is, what is the best way to have the server get itself back on and online after it loses power, without leaving it vulnerable in person? I'd like it to not auto-login for the screen in person, so that the machine isn't overly vulnerable in-person while I'm away.
Are there battery backups that signal for hibernation or other safe power down automatically?
How can I have a user sign in automatically in the background?
What would you recommend as a "best practice" for this, considering it's just a home server?
| Make sure your BIOS is set to restart the system after a power failure.
Turn off auto login unless you have a really good reason for it to be enabled. You shouldn't really need to be interactively logged on for any server type programs to function. Especially if you can get into the system remotely there is no reason for it.
For most any program you need running in the background you can either make an initscript (if there isn't one already) or start it from `rc.local` without needing to be logged in.
For example, I have Apache running on my home server. It launches and runs in the background when I boot. I don't have to explicitly start it when my server comes up.
Most battery backups over $50 have a serial port or other mechanism for communicating with a PC. If yours is an APC (a really common brand), you want a package called `apcupsd`. This lets you call scripts when the power goes out, comes back on, and the system is about to shut down because of low battery. Most battery backups support Linux, it's just a matter of going on the manufacturer's website or doing a bit of Googling.
Generally power events are delivered to `init` and then init calls the respective programs. Look at `/etc/inittab` and the comments illustrate this.
You can actually disable console login entirely if you really want to by removing all the "getty" lines in inittab. I don't recommend you do that, though, because if your network fails or you misconfigure `sshd` one day, you are screwed.
|
POST body JSON using Retrofit
I'm trying to POST a JSONObject using the Retrofit library, but when I see the request at the receiving end, the content-length is `0`.
In the RestService interface:
```
@Headers({
"Content-type: application/json"
})
@POST("/api/v1/user/controller")
void registerController(
@Body JSONObject registrationBundle,
@Header("x-company-device-token") String companyDeviceToken,
@Header("x-company-device-guid") String companyDeviceGuid,
Callback<JSONObject> cb);
```
And it gets called with,
```
mRestService.registerController(
registrationBundle,
mApplication.mSession.getCredentials().getDeviceToken(),
mApplication.mSession.getCredentials().getDeviceGuid(),
new Callback<JSONObject>() {
// ...
}
)
```
And I'm certain that the `registrationBundle`, which is a `JSONObject` isn't null or empty (the other fields are certainly fine). At the moment the request is made, it logs out as: `{"zip":19312,"useAccountZip":false,"controllerName":"mine","registrationCode":"GLD94Q"}`.
On the receiving end of the request, I see that the request has `Content-type: application/json` but has `Content-length: 0`.
Is there any reason why sending JSON in the body like this isn't working? Am I missing something simple in using Retrofit?
| By default, you don't need to set any headers if you want a JSON request body. Whenever you test Retrofit code, I recommend setting `.setLogLevel(RestAdapter.LogLevel.FULL)` on your instance of RestAdapter. This will show you the full request headers and body as well as the full response headers and body.
What's occurring is that you are setting the Content-type twice. Then you're passing a JSONObject, which is being passed through the GsonConverter and mangled to look like `{"nameValuePairs":YOURJSONSTRING}` where `YOURJSONSTRING` contains your complete, intended JSON output. For obvious reasons, this won't work well with most REST APIs.
You should skip messing with the Content-type header which is already being set to JSON with UTF-8 by default. Also, don't pass a JSONObject to GSON. Pass a Java object for GSON to convert.
Try this if you're using callbacks:
```
@POST("/api/v1/user/controller")
void registerController(
@Body MyBundleObject registrationBundle,
@Header("x-company-device-token") String companyDeviceToken,
@Header("x-company-device-guid") String companyDeviceGuid,
Callback<ResponseObject> cb);
```
I haven't tested this exact syntax.
Synchronous example:
```
@POST("/api/v1/user/controller")
ResponseObject registerController(
@Body MyBundleObject registrationBundle,
@Header("x-company-device-token") String companyDeviceToken,
@Header("x-company-device-guid") String companyDeviceGuid);
```
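For reference, `MyBundleObject` above stands in for a plain Java class whose fields mirror the JSON you want to send. A minimal sketch, with field names taken from the JSON logged in the question (the class itself is hypothetical; Gson serializes the private fields via reflection):
```
public class MyBundleObject {
    // Field names mirror the JSON keys from the question's log output.
    private int zip;
    private boolean useAccountZip;
    private String controllerName;
    private String registrationCode;

    public MyBundleObject(int zip, boolean useAccountZip,
                          String controllerName, String registrationCode) {
        this.zip = zip;
        this.useAccountZip = useAccountZip;
        this.controllerName = controllerName;
        this.registrationCode = registrationCode;
    }
}
```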
|
unaffix event for Bootstrap affix?
I want to combine the affix plugin with the Bootstrap navbar-fixed-top class. So far I have got it working so that when I scroll past the navbar it gets fixed. But when I scroll back up I want it to go back into its static state again. I have seen some code, I think from older Bootstrap versions, with an `unaffix` event. Why is it gone? Can I create one? Or how can I accomplish what I am trying to do here?
```
navbar_secondary = $( '.navbar-secondary:first' );
navbar_secondary.affix( {
offset: {
top: function () {
return (this.top = navbar_secondary.offset().top )
}
}
} );
navbar_secondary.on( 'affix.bs.affix', function () { // this is actually the wrong event for this. I want this to fire when its *not* affixed
console.log('affix');
navbar_secondary.removeClass( 'navbar-fixed-top' ).addClass( 'navbar-not-fixed' );
} );
navbar_secondary.on( 'affixed.bs.affix', function () {
console.log('affixed');
navbar_secondary.removeClass( 'navbar-not-fixed' ).addClass( 'navbar-fixed-top' );
} );
```
| Figured it out myself. These event names are totally confusing. `affixed-top.bs.affix` is actually the event fired when the element goes back to being not affixed.
```
navbar_secondary = $( '.navbar-secondary:first' );
navbar_secondary.affix( {
offset: {
top: function () {
return (this.top = navbar_secondary.offset().top )
}
}
} );
navbar_secondary.on( 'affixed-top.bs.affix', function () {
console.log('unaff');
navbar_secondary.removeClass( 'navbar-fixed-top' ).addClass( 'navbar-not-fixed' );
} );
navbar_secondary.on( 'affix.bs.affix', function () {
console.log('aff');
navbar_secondary.removeClass( 'navbar-not-fixed' ).addClass( 'navbar-fixed-top' );
} );
```
### Summary
>
> **affix.bs.affix** => before fixed positioning is applied to an element
>
> **affixed.bs.affix** => after fixed positioning is applied to an element
>
> **affix-top.bs.affix** => before a top element returns to its original (non-fixed) position
>
> **affixed-top.bs.affix** => after a top element returns to its original (non-fixed) position
>
> **affix-bottom.bs.affix** => before a bottom element returns to its original (non-fixed) position
>
> **affixed-bottom.bs.affix** => after a bottom element returns to its original (non-fixed) position
>
>
>
|
how to make a function live
I have an enter function which makes it so my form will submit on enter. The only problem is my form does not exist until I click a button that appends it to the body. Is there a way to make my $.Enter function live? Thanks in advance for any suggestions.
```
//submit function
function submit_chatbox(){
alert('yo');
}
$.Enter('#message',submit_chatbox);
jQuery.Enter = function(element,callback) {
jQuery(element).bind('keypress', function(event) {
var code=event.charCode || event.keyCode;
if(code && code == 13) {// if enter is pressed
callback(event.target);
event.preventDefault(); //prevent browser from following the actual href
};
});
};
```
| To make it use [`.live()`](http://api.jquery.com/live/), it would look like this:
```
jQuery.Enter = function(element,callback) {
jQuery(element).live('keypress', function(event) {
var code=event.charCode || event.keyCode;
if(code && code == 13) {// if enter is pressed
callback(event.target);
event.preventDefault(); //prevent browser from following the actual href
};
});
};
```
But... what you have lends it self well to a plugin, like this:
```
jQuery.fn.Enter = function(callback) {
return this.live('keypress', function(event) {
if(event.which == 13) {
callback.call(this, event);
event.preventDefault();
};
});
};
```
Then you'd call it like this:
```
$('#message').Enter(submit_chatbox);
```
|
Whether to use r-square or adjusted r-square with a small sample size that may represent the entire population?
I read online that it is only necessary to use adjusted-$R^2$ when you are working with a sample rather than the entire population.
The data I'm working with is information on a series of live educational seminars. Each datapoint represents a single seminar that was held in the past, and contains various information on that program's characteristics.
In trying to decide whether to use $R^2$ or adjusted-$R^2$, I can see two different sides to the coin.
1. Since my dataset contains every seminar we've held to date, I'm working with the entire population, so I should go with regular old $R^2$.
2. The population of interest is really all *possible* seminars, including those that haven't happened yet, especially since my goal in this model is to better understand the relationship of factors going forward. Therefore I am looking at a sample, and I should use adjusted-$R^2$.
Which logic is correct, and which measure of correlation should I use?
| I think you have two different viewpoints and no correct or incorrect answer, but I would be more inclined to go with 2. Although in 1 you said you have included every seminar held to date, it seems that your universe includes future seminars as well.
But accepting 2 does not settle the issue between R square and adjusted R square. The reason adjusted R square exists in the first place is that if the number of model parameters or covariates is large relative to the sample size, the ordinary R square will tend to overestimate the amount of variation that the model explains. It is the percentage of variance explained by the model for the observed data set, but it overestimates the amount of variation the model will explain on a new data set randomly sampled from the population. The adjusted R square makes an effort to account for this bias. But if the sample size is very large relative to the number of covariates, R square and adjusted R square won't differ much, and choosing adjusted R square is far less important than if the sample size were only slightly larger than the number of parameters estimated in the model.
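For concreteness, one common form of the adjustment (with $n$ the sample size and $p$ the number of predictors) is
$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-p-1},$$
so the penalty relative to the ordinary $R^2$ fades as $n$ grows large compared to $p$.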
So I see the choice of adjusted R square over R square as more a matter of the size of the sample relative to the number of parameters than of whether the sample represents the entire population or just a random piece of it.
|
API Throttle in Laravel 5.2
I was watching [this tutorial](https://laracasts.com/series/whats-new-in-laravel-5-2/episodes/2) about throttling in Laravel 5.2.
It seems that throttle is just used for APIs, but why couldn't it be used for other controller stuff, to avoid people sending the same form 100 times through Postman?
I say that because in Kernel.php, middleware are now clearly divided between web and api: [Kernel.php:Laravel 5.2](https://github.com/laravel/laravel/blob/master/app/Http/Kernel.php)
| You can apply it to web pages as well. Judging from your comments, you're confused as to the new features of Middleware, primarily [Middleware Groups](https://laravel.com/docs/5.2/middleware#middleware-groups).
5.2 brought along with it a way to group Middleware like you would with Route groups before. In 5.1 you would do something like:
```
Route::group(['prefix' => 'api', 'middleware'=>'auth,custom_middleware,permission:edit_permissions'], function() {
Route::post('permissions/{id}/store', ['uses'=>'PermissionController@store']);
});
```
That is still completely valid, but if you wanted to add another Route group with the same middleware, you had to either juggle the organization so they were nested beneath a single Route group that applied those middleware, or copy-paste the middleware, neither of which is very desirable. With 5.2, all you have to do is this:
```
Kernel.php
protected $middlewareGroups = [
'permissions_api' => [
'auth',
'custom_middleware',
'permission:edit_permissions',
]
];
routes.php
Route::group(['middleware' => ['permissions_api']], function () {
Route::post('permissions/{id}/store', ['uses'=>'PermissionController@store']);
});
Route::group(['middleware' => ['permissions_api']], function () {
Route::post('permissions/{id}/update', ['uses'=>'PermissionController@update']);
});
```
So you can group those middleware and apply them in those groups. That's what the `api` and `web` you are seeing are: just the default middleware groups provided by Laravel, which you can modify however you want. `throttle` is available as middleware wherever you may need it. Both of the below are perfectly valid:
```
Route::group(['middleware' => ['throttle:60,1']], function () {
Route::post('permissions/{id}/update', ['uses'=>'PermissionController@update']);
});
```
or
```
protected $middlewareGroups = [
'permissions_api' => [
'auth',
'custom_middleware',
'permission:edit_permissions',
'throttle:60,1'
]
];
```
So `throttle` is just a middleware and can be applied like any other middleware. It is defined in `Kernel.php` as `'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,` and the `60,1` are just middleware parameters (here, 60 allowed attempts per 1 minute), which were added in 5.1.
|
Pointers and References as member variables of const objects
The following code compiles fine. However, I wonder if it is legal C++. More specifically, if I have a const object, am I allowed to modify variables through pointers/references that are members of that object?
```
class Foo {
public:
int* a;
int& b;
Foo(int* _a, int& _b) : a(_a), b(_b) {}
};
int main ( int argc, char* argv[] ) {
int x = 7;
const Foo bar(&x, x);
*bar.a = 3; //Legal?
bar.b = 7; //Legal?
return 0;
}
```
| It's legal, as const-ness of the class means that the class member is constant. `a` is a pointer, so the address the pointer points to is constant, but the value stored at that address need not be.
Hence `bar.a` is effectively an `int * const`, not an `int const *`.
As, after initialization, a reference cannot be made to refer to another entity anyway, it does not matter for `bar.b` whether `bar` is declared `const` or not.
The constant variant of a pointer is a constant pointer, not a pointer to a constant. The constant variant of a reference is a reference, not a reference to a constant.
Small digression: You should be careful with references as members anyway in connection with const-ness, as the following will probably compile
```
struct Y { int m_a; };
struct X {
const Y & m_y;
X (const Y & y) : m_y (y) { }
};
Y y;
y.m_a = 1;
X x (y); // or const X x (y) -- does not matter
// X.m_y.m_a == 1
y.m_a = 2;
// now X.m_y.m_a == 2, although X.m_y is supposed to be const
```
As it is possible to assign a pointer to non-const to a pointer to const, you can build an analogous example with pointers. Remember that `const` only guarantees that YOU will not modify a variable via this very variable; it cannot guarantee that the contents of the variable are not modified at all.
|
Is "Expresssion Register" not supported by the VsCodeVim extension?
I've been learning vim recently and I've been using the vscodevim extension to get the shortcuts in Visual Studio Code. Yesterday I came across [this](http://vimcasts.org/episodes/simple-calculations-with-vims-expression-register/) tutorial which uses the 'Expression Register' to do simple calculations. This worked when using vim from the command line directly but I've had no luck trying to make it work in Visual Studio Code (pressing `<C-r>=` does nothing).
I've looked in the github page of vscode vim but found nothing related to it. There is mention of the '=' register, but nothing related to 'Expression Register'.
>
> CTRL-R {0-9a-z%#:.-="} insert the contents of a register
>
>
>
And also the 'useCtrlKeys' option is set to true in the settings.json so the extension has access to the Ctrl keys.
Am I missing something? Is this feature missing from the extension?
| The answer is no (at least for now). Upon reading the linked [article](http://vimcasts.org/episodes/simple-calculations-with-vims-expression-register/) and `vscodevim`'s [page](https://marketplace.visualstudio.com/items?itemName=vscodevim.vim) in Visual Studio marketplace a little more carefully I've found that it is currently not possible to use the 'expression register' using the vscodevim extension. This is because the expression register uses `Vimscript` to evaluate simple code and `Vimscript` is currently not supported in `vscodevim`.
From the article:
>
> The expression register lets us evaluate a snippet of Vimscript code.
>
>
>
From `vscodevim`'s page in Visual Studio Marketplace:
>
> Vimscript is not supported; therefore, we are not able to load your .vimrc or use .vim plugins.
>
>
>
|
Moving a buffer in vim across instances of vim
Is this possible to do?
Conceptually, a solution should apply across a lot of possible configurations, ranging from two vim instances running in separate virtual terminals in panes in a tmux window, to being in separate terminals on separate machines in separate geographical regions, one or both connected over network (in other words, the vims are hosted by two separate shell processes, which they would already be under tmux anyhow).
The case that prompted me to ponder this:
**I have two tmux panels both with vim open and I want to use the Vim yank/paste to copy across the files.**
But it only works if I've got them both running in the same instance of Vim, so I am forced to either:
1. use tmux's copy/paste feature to get the content over (which is somewhat tedious and finicky), or
2. use the terminal (PuTTY, iTerm2)'s copy/paste feature to get the content over (which is similarly tedious but not subject to network latency, however this only works up to a certain size of text payload to copy at which point this method will not work at all due to the terminal not knowing the contents of the not-currently-visible parts of the file), or
3. lose Vim buffer history/context and possibly shell history/context in reopening the file manually in one of the Vim instances in either a split buffer or tab and then closing the other terminal context (much less tedious than 1 for large payloads but more so with small payloads).
This is a bit of a PITA and could all be avoided if I had the foresight to switch to an appropriate terminal already running vim before opening my files, but the destiny of workflow and habit rarely matches up with what would have been convenient.
So the question is: does there exist a command or the possibility of a straightforwardly-constructed (shell) script that allows me to join buffers across independently running vim instances? I am having a hard time getting Google to answer that adequately.
In the absence of an adequate answer (or if it is determined with reasonable certainty that Vim does not possess the features to accomplish the transfer of buffers across its instances), a good implementation (bindable to keys) for approach 3 above is acceptable.
Meanwhile I'll go back to customizing my vim config further and forcing myself to use as few instances of vim as possible.
| No, Vim can't share a session between multiple instances. This is how it's designed and it doesn't provide any session-sharing facility. Registers, on-the-fly mappings/settings, command history, etc. are local to a Vim session and you can't realistically do anything about that.
---
But your title is a bit misleading: you wrote "buffer" but it looks like you are only after copying/pasting (which involves "registers", not "buffers") from one Vim instance to another. Is that right? If so, why don't you simply get yourself a proper build with clipboard support?
Copying/yanking across instances is as easy as `"+y` in one and `"+p` in another.
Obviously, this won't work if your Vim instances are on different systems. In such a situation, `"+y` in the source Vim and system-provided paste in the destination Vim (possibly with `:set paste`) is the most common solution.
---
If you are on a Mac, install MacVim and move the accompanying `mvim` shell script somewhere in your path. You can use the MacVim executable *in* your terminal with `mvim -v`.
If you are on Linux, install the vim-gnome package from your package manager.
If you are on Windows, install the latest "Vim without Cream".
---
But the whole thing looks like an XY problem to me. Using Vim's built-in `:e[dit]` command efficiently is probably the best solution to what appears to be your underlying problem: editing many files from many different shells.
|
Pass List from actionlink to controller method
In my controller I have this:
```
ViewBag.lstIWantToSend= lstApps.Select(x => x.ID).ToList(); // creates a List<int> and is being populated correctly
```
I want to pass that list to another controller.. so in my view I have:
```
@Html.ActionLink(count, "ActionName", new { lstApps = ViewBag.lstIWantToSend }, null)
```
Method in Controller:
```
public ActionResult ActionName(List<int> lstApps) // lstApps is always null
```
Is there a way to send a list of ints as a route value to a controller method?
| It's not possible directly, but you can do it with `Json`. Say I have a `List<int>`:
```
ViewBag.lstIWantToSend= new List<int> {1, 2, 3, 4};
```
so my view would be something like
```
@Html.ActionLink(count, "ActionName", new { lstApps = Json.Encode(ViewBag.lstIWantToSend) }, null)
```
`Json.Encode` will convert `List<int>` to `json string`
and `ActionName` will be like this
```
public ActionResult ActionName (string lstApps)
{
List<int> result = System.Web.Helpers.Json.Decode<List<int>>(lstApps);
return View();
}
```
`Json.Decode<List<int>>` will convert this `json string` back to `List<int>`
|
Android send mail with attachment from string
I have a HTML string which I want to attach to mail as a file. I could save this string to a file and attach it but I want to do it without saving it to a file. I think it should be possible but I don't know how to do it. This is my code:
```
String html = "<html><body><b><bold</b><u>underline</u></body></html>";
Intent intent = new Intent(Intent.ACTION_SEND, Uri.parse("mailto:"));
intent.setType("text/html");
intent.putExtra(Intent.EXTRA_SUBJECT, "Subject");
intent.putExtra(Intent.EXTRA_TEXT, Html.fromHtml(html));
// this is where I want to create attachment
intent.putExtra(Intent.EXTRA_STREAM, Html.fromHtml(html));
startActivity(Intent.createChooser(intent, "Send Email"));
```
How can I attach string as a file to mail?
|
>
> This code saves you from adding a manifest uses-permission for reading from the external SD card. It creates a temp file in the files directory of your app's private storage, writes the contents of your string into it, and allows read permission so that it can be accessed.
>
>
>
```
String phoneDesc = "content string to send as attachment";
FileOutputStream fos = null;
try {
fos = openFileOutput("tempFile", Context.MODE_WORLD_READABLE);
fos.write(phoneDesc.getBytes(),0,phoneDesc.getBytes().length);
fos.flush();
fos.close();
} catch (IOException ioe) {
ioe.printStackTrace();
}
finally {
if (fos != null)try {fos.close();} catch (IOException ie) {ie.printStackTrace();}
}
File tempFBDataFile = new File(getFilesDir(),"tempFile");
Intent emailClient = new Intent(Intent.ACTION_SENDTO, Uri.parse("mailto:someone@somewhere.com"));
emailClient.putExtra(Intent.EXTRA_SUBJECT, "Sample Subject");
emailClient.putExtra(Intent.EXTRA_TEXT, "Sample mail body content");
emailClient.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(tempFBDataFile));//attachment
Intent emailChooser = Intent.createChooser(emailClient, "select email client");
startActivity(emailChooser);
```
>
> This should be called whenever you don't need the file anymore.
>
>
>
```
File tempData = new File(getFilesDir(),"tempFile");
if (tempData.exists()) {
tempData.delete();
}
```
|
How to NOT overload the std::map::at member function in the case where the two template types are the same?
I have here a bidirectional map that I built from scratch. I defined member functions like `insert`, `count`, `size` and others, and of course a function `at` which returns a reference to the mapped value of the element identified by the given key. As long as type A is not the same as type B everything works fine, but when type A is the same as type B I get an error that I am trying to overload the function `at`, which is correct :( But I can't come up with a method to solve this error. Maybe you can give me an example or tell me what I can do in this case :)
```
template <class A,class B>
class BidirectionalMap
{
public:
void insert(A a,B b)
{
m1.insert(std::pair<A,B> (a,b));
m2.insert(std::pair<B,A> (b,a));
}
BidirectionalMap& operator =(BidirectionalMap &a)
{
m1=a.m1;
m2=a.m2;
return *this;
}
A& at(const A& a)
{
if(m1.find(a)!=m1.end()) return m1.at(a);
else return m2.at(a);
}
const B& at(const A& b) const
{
return m1.at(b);
}
const A& at(const B& a) const
{
return m2.at(a);
}
int size() const
{
return m1.size();
}
int count(const A& a) const
{
return m1.count(a);
}
int count(const B& b) const
{
return m2.count(b);
}
B& operator[](const A& a)
{
return m1[a];
}
A& operator[](const B& b)
{
return m2[b];
}
private:
std::map<A,B> m1;
std::map<B,A> m2;
};
```
And what if I cannot modify this sequence in main()?
```
BidirectionalMap<int, int> f;
f.insert(3, 18);
f.insert(8, 2);
f.insert(7, 5);
f.insert(9, 1);
const BidirectionalMap<int, int> cf = f;
if( f.at(5) == 7 &&
f.count(12) == 0 &&
f.at(8) == 2)
{
yourMark = cf[18] + cf[9];
}
```
| Implement `at` in a CRTP base class.
```
template<class D, class A, class B>
struct crtp_at {
D* self() { return static_cast<D*>(this); }
D const* self() const { return static_cast<D const*>(this); }
const B& at(const A& b) const {
return self()->m1.at(b);
}
const A& at(const B& a) const {
return self()->m2.at(a);
}
B& at(const A& b) {
return self()->m1.at(b);
}
A& at(const B& a) {
return self()->m2.at(a);
}
};
template<class D, class A>
struct crtp_at<D,A,A> {
D* self() { return static_cast<D*>(this); }
D const* self() const { return static_cast<D const*>(this); }
A& at(const A& a) {
if(self()->m1.find(a)!=self()->m1.end()) return self()->m1.at(a);
else return self()->m2.at(a);
}
A const& at(const A& a) const {
if(self()->m1.find(a)!=self()->m1.end()) return self()->m1.at(a);
else return self()->m2.at(a);
}
};
```
Then your class uses the above like:
```
template <class A,class B>
class BidirectionalMap:public crtp_at< BiDirectionalMap<A,B>, A, B >
{
// rest of your code
};
```
however, I would advise actually **blocking** `at` in that case, and any other method where it is not clear which way you are going.
You should have methods that clearly go one way or the other in your code for cases like `short <-> double` anyhow.
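For instance, blocking could simply mean deleting the ambiguous lookups in the `A == A` specialization; a sketch of my own (not part of the code above, and it would of course make the `f.at(...)` calls in the question's `main()` fail to compile):
```
template<class D, class A>
struct crtp_at<D, A, A> {
    // When key and value types coincide, a lookup by value is ambiguous:
    // refuse to compile rather than guess which of the two maps to search.
    A& at(const A&) = delete;
    const A& at(const A&) const = delete;
};
```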
|
mongodb equivalent of SELECT field AS `anothername`
what is the mongodb equivalent of the MySQL query
```
SELECT username AS `consname` FROM `consumer`
```
| As it was mentioned by sammaye, you have to use [$project](http://docs.mongodb.org/manual/reference/operator/aggregation/project/#pipe._S_project) in [aggregation framework](http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/) to rename fields.
So in your case it would be:
```
db.consumer.aggregate([
{ "$project": {
"_id": 0,
"consname": "$username"
}}
])
```
Cool thing is that in 2.6.x version aggregate returns a cursor which means it behaves like find.
You might also take a look at [$rename](http://docs.mongodb.org/manual/reference/operator/update/rename/) operator to permanently change schema.
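For example, a sketch of `$rename` in the shell (note that this permanently rewrites the stored documents, unlike the aggregation above):
```
db.consumer.update(
    { },
    { "$rename": { "username": "consname" } },
    { "multi": true }
)
```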
|
cx\_Oracle in Ubuntu: distutils.errors.DistutilsSetupError: cannot locate an Oracle software installation
I am trying to install the Python dependencies for the Django project sdu.edu.kz. This project uses cx-Oracle.
When i try:
```
./install_python_dependencies.sh install
```
It successfully installs all modules except one: the cx-Oracle module. However, I have installed the cx-Oracle software on my computer.
It prints the error:
```
Collecting cx-oracle==5.2 (from -r requirements/base.txt (line 82))
Using cached cx_Oracle-5.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-RP7c9i/cx-oracle/setup.py", line 170, in <module>
raise DistutilsSetupError("cannot locate an Oracle software " \
distutils.errors.DistutilsSetupError: cannot locate an Oracle software installation
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip build-RP7c9i/cx-oracle/
```
How to locate an Oracle software installation? Any ideas? Please, help
| For Oracle 12.x, cx\_Oracle is not available yet, so we need to download the 11.x version of Instant Client.
Go to <http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html> and accept License Agreement.
# Then download zip files with names as below
- instantclient-sdk-linux.x64-11.2.0.4.0.zip
- instantclient-basic-linux.x64-11.2.0.4.0.zip
# And unzip them with unzip command
```
unzip instantclient-sdk-linux.x64-11.2.0.4.0.zip
unzip instantclient-basic-linux.x64-11.2.0.4.0.zip
```
Both will be extracted into a common directory named "instantclient\_11\_2". Then set it up with the commands below.
```
export ORACLE_HOME=/path-to_this/instantclient_11_2
cd $ORACLE_HOME
ln -s libclntsh.so.11.1 libclntsh.so
```
Open /etc/profile or .bashrc and add the following entries.
```
export ORACLE_HOME=/location/of/your/files/instantclient_11_2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
```
then do `source ~/.bashrc` or `source /etc/profile`
And install these packages
```
sudo apt-get install build-essential unzip python-dev libaio-dev
```
after that use `pip` or `pip3` according to your Python version and then install it
```
pip3 install cx_Oracle
```
and try loading `cx_Oracle` module in Python interpreter.
Hope this helps.
Credit goes here: <https://gist.github.com/kimus/10012910>
Note: I have tried this on my Ubuntu 16.04 installation; it should work for you too.
|
.htaccess - ErrorDocument vs RewriteRule
I want requests for missing files and invalid directories to redirect to a custom 404 error page. I believe this can be accomplished using 2 methods.
```
# redirect invalid requests and missing files to the not-found page
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ http://www.example.com/not-found.html [L]
# same idea but using error document
ErrorDocument 404 /not-found.html
```
Is one option better than the other? When would I want to use one vs the other?
| The rewrite rule will not return the proper status code, meaning that a browser will see your custom error page but the response code will still be **200**, and the browser (or whatever client is talking to the server) will think that the specific bad request is actually a good request.
The `ErrorDocument` statement will show the custom error page but will also return a **404** response code, letting the browser (or whatever client) know that what it requested was not found on the server.
Note that mod\_rewrite has an `R` flag that you can give a response code to:
```
RewriteRule ^(.*)$ /not-found.html [L,R=404]
```
This will return a **404** response code but *it will not display* the `/not-found.html` page. It *will* display the custom error document if you use the `ErrorDocument` statement:
```
RewriteRule ^(.*)$ - [L,R=404]
ErrorDocument 404 /not-found.html
```
The rewrite rule will force a 404 (its target is essentially ignored), and then the `ErrorDocument` makes sure the custom error page is served up along with the proper **404** response.
So you want `ErrorDocument`, pretty much always. The other thing to note is that mod\_rewrite is in a different place in the URI to File mapping pipeline than `ErrorDocument`. If you're relying on mod\_rewrite to determine what should be 404 and what shouldn't, it's likely to be affected by other modules that may come after (like mod\_proxy, for example).
---
To be clear, the `ErrorDocument` is used *after* the URL-to-file mapping pipeline concludes with a "resource not found" (i.e. a **404**), then the custom error document statement, `ErrorDoucment 404`, serves up a custom error page instead of the default one. When you use mod\_rewrite like you do in your question, it completely circumvents the natural way the pipeline arrives at a **404**.
|
What does boost interprocess file\_lock actually do with the target file?
I've done some reading about [`boost::interprocess::file_lock`](http://www.boost.org/doc/libs/1_44_0/doc/html/boost/interprocess/file_lock.html) and it seems to do pretty much what I'm after (support shareable and exclusive locking, and being unlocked if the process crashes or exits).
One thing I'm not sure about though, is what does it *do* to the file? Can I use for example a file of 0 bytes long? Does `boost::interprocess` write anything into it? Or is its presence all the system cares about?
I've been using `boost::interprocess` now for some time to reliably memory map a file and write into it. Now I need to go multiprocess and ensure that reads and writes to this file are protected; `file_lock` does seem the way to go, but I wonder if I now need to add another file to use as a mutex.
Thanks in advance
|
>
> what does it *do* to the file?
>
>
>
Boost does not do anything with the file; it relies on the operating system to get that job done. Support for memory-mapped files is a generic capability of a demand-paged virtual memory operating system, like Windows, Linux, or OSX. Memory is normally backed by the paging file; having it backed by a specific file you select is but a small step. Boost just provides a platform-independent adapter, nothing more.
You'll want to take a look at the relevant OS documentation pages to see what's possible and how it is expected to work when you do something unusual. For Linux and OSX you'll want to look at the `mmap` man pages. For Windows look at `CreatefileMapping`.
>
> file\_lock does seem the way to go
>
>
>
Yes, you almost always need to arbitrate access to the memory mapped file so for example one process will only attempt to read the data when the other process finished writing it. The most suitable synchronization primitive for that is *not* a file\_lock (the OS already locks the file), it is a named mutex. Use, say, boost's [named\_mutex class](http://www.boost.org/doc/libs/1_42_0/doc/html/boost/interprocess/named_mutex.html).
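A minimal sketch of what that arbitration could look like (the mutex name and the function are made up; every process just has to agree on the same name):
```
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

using namespace boost::interprocess;

void update_shared_region(/* mapped_region& region */)
{
    // open_or_create: whichever process gets here first creates the mutex,
    // the others simply open the existing one by name
    named_mutex mutex(open_or_create, "my_mapped_file_mutex");

    scoped_lock<named_mutex> lock(mutex);
    // ... read from / write to the memory-mapped file here ...
}   // the lock is released automatically when it goes out of scope
```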
Do keep in mind that this is a *very* low-level interop mechanism and comes without any conveniences whatsoever. By the time you add all of the required synchronization, you're half-way to what the OS already does with a named pipe or local-loopback socket. If you discover that you have to copy data into the mapped view, not uncommon since it is not easily resizable, then you've lost all benefits.
|
groupby weighted average and sum in pandas dataframe
I have a dataframe:
```
Out[78]:
contract month year buys adjusted_lots price
0 W Z 5 Sell -5 554.85
1 C Z 5 Sell -3 424.50
2 C Z 5 Sell -2 424.00
3 C Z 5 Sell -2 423.75
4 C Z 5 Sell -3 423.50
5 C Z 5 Sell -2 425.50
6 C Z 5 Sell -3 425.25
7 C Z 5 Sell -2 426.00
8 C Z 5 Sell -2 426.75
9 CC U 5 Buy 5 3328.00
10 SB V 5 Buy 5 11.65
11 SB V 5 Buy 5 11.64
12 SB V 5 Buy 2 11.60
```
I need a sum of adjusted\_lots , price which is weighted average , of price and adjusted\_lots , grouped by all the other columns , ie. grouped by (contract, month , year and buys)
Similar solution on R was achieved by following code, using dplyr, however unable to do the same in pandas.
```
> newdf = df %>%
select ( contract , month , year , buys , adjusted_lots , price ) %>%
group_by( contract , month , year , buys) %>%
summarise(qty = sum( adjusted_lots) , avgpx = weighted.mean(x = price , w = adjusted_lots) , comdty = "Comdty" )
> newdf
Source: local data frame [4 x 6]
contract month year comdty qty avgpx
1 C Z 5 Comdty -19 424.8289
2 CC U 5 Comdty 5 3328.0000
3 SB V 5 Comdty 12 11.6375
4 W Z 5 Comdty -5 554.8500
```
is the same possible by groupby or any other solution ?
| **EDIT:** update aggregation so it works with recent version of pandas
To pass multiple functions to a groupby object, you need to pass a tuples with the aggregation functions and the column to which the function applies:
```
# Define a lambda function to compute the weighted mean:
wm = lambda x: np.average(x, weights=df.loc[x.index, "adjusted_lots"])
# Define a dictionary with the functions to apply for a given column:
# the following is deprecated since pandas 0.20:
# f = {'adjusted_lots': ['sum'], 'price': {'weighted_mean' : wm} }
# df.groupby(["contract", "month", "year", "buys"]).agg(f)
# Groupby and aggregate with namedAgg [1]:
df.groupby(["contract", "month", "year", "buys"]).agg(adjusted_lots=("adjusted_lots", "sum"),
price_weighted_mean=("price", wm))
adjusted_lots price_weighted_mean
contract month year buys
C Z 5 Sell -19 424.828947
CC U 5 Buy 5 3328.000000
SB V 5 Buy 12 11.637500
W Z 5 Sell -5 554.850000
```
You can see more here:
- <http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once>
and in a similar question here:
- [Apply multiple functions to multiple groupby columns](https://stackoverflow.com/questions/14529838/apply-multiple-functions-to-multiple-groupby-columns)
[1] : <https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.25.0.html#groupby-aggregation-with-relabeling>
|
Second and Third Distributed Kafka Connector workers failing to work correctly
With a Kafka cluster of 3 and a Zookeeper cluster of the same size, I brought up one distributed connector node. This node ran successfully with a single task. I then brought up a second connector; this seemed to run, as some of the code in the task definitely executed. However, it then didn't seem to stay alive (though with no errors thrown; the not staying alive was observed by a lack of expected activity, while the first connector continued to function correctly). When I call the URL `http://localhost:8083/connectors/mqtt/tasks` on each connector node, it tells me the connector has one task. I would expect this to be two tasks, one for each node/worker. (Currently the worker configuration says `tasks.max = 1`, but I've also tried setting it to 3.)
When I try and bring up a third connector, I get the error:
```
"POST /connectors HTTP/1.1" 500 90 5
(org.apache.kafka.connect.runtime.rest.RestServer:60)
ERROR IO error forwarding REST request:
(org.apache.kafka.connect.runtime.rest.RestServer:241)
java.net.ConnectException: Connection refused
```
Trying to call the connector POST method again from the shell returns the error:
```
{"error_code":500,"message":"IO Error trying to forward REST request:
Connection refused"}
```
I also tried upgrading to Apache Kafka 0.10.1.1 that was released today. I'm still seeing the problems. The connectors are each running on isolated Docker containers defined by a single image. They should be identical.
The problem could be that I'm trying to run the POST request to `http://localhost:8083/connectors` on each worker, when I only need to run it once on a single worker and then the tasks for that connector will automatically distribute to the other workers. If this is the case, how do I get the tasks to distribute? I currently have the max set to three, but only one appears to be running on a single worker.
## Update
I ultimately got things running using essentially the same approach that Yuri suggested. I gave each worker a unique group ID, then gave each connector task the same name. This allowed the three connectors and their single tasks to share a single offset, so that in the case of sink connectors the messages they consumed from Kafka were not duplicated. They are basically running as standalone connectors since the workers have different group ids and thus won't communicate with each other.
If the connector workers have the same group ID, you can't add more than one connector with the same name. If you give the connectors different names, they will have different offsets and consume duplicate messages. If you have three workers in the same group, one connector and three tasks, you would theoretically have an ideal situation where the tasks share an offset and the workers make sure the tasks are always running and well distributed (with each task consuming a unique set of partitions). In practice the connector framework doesn't create more than one task, even with tasks.max set to 3 and when the topic tasks are consuming has 25 partitions.
If anyone knows why I'm seeing this behaviour, please let me know.
| I've encountered with similar issue in the same situation as yours.
1. tasks.max is configured per topic, and the distributed workers automatically decide which nodes handle the topic. So, if you have 3 workers in a cluster and your topic configuration says tasks.max=2, then only 2 of the 3 workers will process the topic. In theory, if one of the workers fails, the 3rd should pick up the workload. But..
2. The distributed connector turned out to be very unreliable: once you add/remove some nodes, the cluster broke down and all workers did nothing but try to choose a leader, and failed. The only way to fix it was to restart the whole cluster, preferably with all workers restarted simultaneously.
I chose another way - I used the standalone worker, and it works like a charm for me, because load distribution is implemented at the Kafka client level: once some worker drops, the cluster re-balances automatically and clients connect to the unoccupied topics.
PS. Maybe this will be useful for you too: the Confluent connector is not tolerant of an invalid payload that does not match the topic's schema. Once the connector gets an invalid message, it silently dies. The only way to find out is to analyze the metrics.
|
django-paypal IPN signals not being received
At the bottom of models.py I have:
```
from paypal.standard.ipn.signals import payment_was_successful, payment_was_flagged
import pay
payment_was_successful.connect(pay.paypal_success)
payment_was_flagged.connect(pay.paypal_flagged)
```
I'm using the Paypal Developer IPN simulator and it returns "IPN sent successfully", but the code in `pay.paypal_success` and `pay.paypal_flagged` isn't being executed.
The `paypal_ipn` table is being populated, however I noticed under `flag_info` every row has:
```
Invalid form. (<ul class="errorlist"><li>payment_date<ul class="errorlist">
<li>Enter a valid date/time.</li></ul></li></ul>)
```
I don't know if this has anything to do with the signals not working.
| I've had the same problem.
Apparently the date format the IPN simulator sends is different from the one the `django-paypal` package accepts.
Head over to `paypal.standard.forms.py` and add the new date format PayPal sends.
```
PAYPAL_DATE_FORMAT = ("%H:%M:%S %b. %d, %Y PST",
"%H:%M:%S %b. %d, %Y PDT",
"%H:%M:%S %d %b %Y PST", # note this
"%H:%M:%S %d %b %Y PDT", # and that
"%H:%M:%S %b %d, %Y PST",
"%H:%M:%S %b %d, %Y PDT",)
```
I don't like this solution, because what if PayPal changes the date string format in the future?
This is actually a caveat of Python's `datetime` object, which does not know how to easily convert strings to actual time objects.
But that works for now.
|
How to remove small components from a graph
I'm new to networkx and could do with some help please.
I have a set of data which I've processed to generate the nodes and edges. There are around 5000 groups of nodes that have more than 2 links within them (up to 10 nodes in the group in total). But the problem is that there are also several thousand pairs of nodes that only have 1 edge between them, i.e node a is linked to node b but neither are linked to any other node.
I want to remove these paired nodes from the chart.
Is there a way to filter these out?
| So our goal is to remove all nodes from components with less than 3 nodes (this includes isolated nodes if they exist).
```
for component in list(nx.connected_components(G)):
if len(component)<3:
for node in component:
G.remove_node(node)
```
A small warning is in order when using `nx.connected_components`. It returns a [generator](https://stackoverflow.com/q/1756096/2966723) of components. If I didn't put `list` around it, it would generate one at a time, and then perform the steps for the given component. Once all that is done, it would generate the next component. But because `G` has been modified, python can't be sure that this behaves well. So it would die (complaining that a dictionary changed size --- the number of nodes in `G` changed). By turning it into a list, the components are all found before it starts the loop. So the graph won't be changing while the components are being found.
|
Elixir list concatenation
So I've been playing with Elixir and am a bit confused about something:
```
iex> [ 1 | [ 2 ] ] # [ 1, 2] (expected)
iex> [ 1 | 2 ]     # [ 1 | 2 ] (huh?)
```
My confusion is in why the second version does what it does. I understand that `2` is not a list, so it can't concatenate the "head" with the "tail", but, in my opinion, it should throw an error when the tail is not a list. I've been trying to think of a use-case for having this behavior but have come empty-handed. If anyone can explain why this is the desired behavior, I'd really appreciate it. Thanks!
| The tail of a list can actually be any term, not just another list. This is sometimes called an "improper list".
The Erlang documentation [gives an example](http://www.erlang.org/documentation/doc-5.8/doc/programming_examples/funs.html) on how to use this to build infinite lists, but it is unlikely that you will encounter this in the wild. The idea is that the tail is in this case not a list, but a function that will return another improper list with the next value and function:
```
defmodule Lazy do
def ints_from(x) do
fn ->
[x | ints_from(x + 1)]
end
end
end
iex> ints = Lazy.ints_from(1)
#Function<0.28768957/0 in Lazy.ints_from/1>
iex> current = ints.()
[1 | #Function<0.28768957/0 in Lazy.ints_from/1>]
iex> hd(current)
1
iex> current = tl(current).()
[2 | #Function<0.28768957/0 in Lazy.ints_from/1>]
iex> hd(current)
2
iex> current = tl(current).()
[3 | #Function<0.28768957/0 in Lazy.ints_from/1>]
iex> hd(current)
3
```
However, we can achieve infinite streams much more easily in Elixir using the `Stream` module:
```
iex> ints = Stream.iterate(1, &(&1+1))
#Function<32.24255661/2 in Stream.unfold/2>
iex> ints |> Enum.take(5)
[1, 2, 3, 4, 5]
```
Another (pseudo) use case of improper lists is with so-called [iodata](http://elixir-lang.org/docs/stable/elixir/IO.html#iodata_to_binary/1) or [chardata](http://elixir-lang.org/docs/stable/elixir/IO.html#chardata_to_string/1) values. These allow you to optimize situations where you need to frequently append to a charlist (single quoted string), due to the fact that charlists are linked lists for which appending is expensive. You normally don't really see improper lists with chardata in the wild either, because we can just use regular lists – but rest assured they *could* be used to build a chardata. If you want to learn more about chardata in general, I recommend [this blog post from The Pug Automatic](http://thepugautomatic.com/2015/12/char-data/).
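For example, `IO.iodata_to_binary/1` happily accepts an improper list whose tail is a binary; a small iex sketch of my own:
```
iex> iodata = ["Hello", ?\s | "world"]
["Hello", 32 | "world"]
iex> IO.iodata_to_binary(iodata)
"Hello world"
```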
|
How to use ORDER BY inside UNION
I want to use ORDER BY in each of the UNION ALL queries, but I can't figure out the right syntax. This is what I want:
```
(
SELECT id, user_id, other_id, name
FROM tablename
WHERE user_id = 123 AND user_in IN (...)
ORDER BY name
)
UNION ALL
(
SELECT id, user_id, other_id, name
FROM tablename
WHERE user_id = 456 AND user_id NOT IN (...)
ORDER BY name
)
```
EDIT:
Just to be clear: I need two ordered lists like this, not one:
1
2
3
1
2
3
4
5
Thank you very much!
| Something like this should work in MySQL:
```
SELECT a.*
FROM (
SELECT ... FROM ... ORDER BY ...
) a
UNION ALL
SELECT b.*
FROM (
SELECT ... FROM ... ORDER BY ...
) b
```
to return rows in the order we'd like them returned; i.e., MySQL seems to honor the `ORDER BY` clauses inside the inline views.
But, without an **`ORDER BY`** clause on the outermost query, the order that the rows are returned is *not* guaranteed.
If we need the rows returned in a particular sequence, we can include an `ORDER BY` on the outermost query. In a lot of use cases, we can just use an `ORDER BY` on the outermost query to satisfy the results.
But when we have a use case where we need all the rows from the first query returned before all the rows from the second query, one option is to include an extra discriminator column in each of the queries. For example, add **`,'a' AS src`** in the first query, **`,'b' AS src`** to the second query.
Then the outermost query could include **`ORDER BY src, name`**, to guarantee the sequence of the results.
---
**FOLLOWUP**
In your original query, the `ORDER BY` in your queries is discarded by the optimizer; since there is no `ORDER BY` applied to the outer query, MySQL is free to return the rows in whatever order it wants.
The "trick" in query in my answer (above) is dependent on behavior that may be specific to some versions of MySQL.
Test case:
populate tables
```
CREATE TABLE foo2 (id INT PRIMARY KEY, role VARCHAR(20)) ENGINE=InnoDB;
CREATE TABLE foo3 (id INT PRIMARY KEY, role VARCHAR(20)) ENGINE=InnoDB;
INSERT INTO foo2 (id, role) VALUES
(1,'sam'),(2,'frodo'),(3,'aragorn'),(4,'pippin'),(5,'gandalf');
INSERT INTO foo3 (id, role) VALUES
(1,'gimli'),(2,'boromir'),(3,'elron'),(4,'merry'),(5,'legolas');
```
query
```
SELECT a.*
FROM ( SELECT s.id, s.role
FROM foo2 s
ORDER BY s.role
) a
UNION ALL
SELECT b.*
FROM ( SELECT t.id, t.role
FROM foo3 t
ORDER BY t.role
) b
```
resultset returned
```
id role
------ ---------
3 aragorn
2 frodo
5 gandalf
4 pippin
1 sam
2 boromir
3 elron
1 gimli
5 legolas
4 merry
```
The rows from `foo2` are returned "in order", followed by the rows from `foo3`, again, "in order".
Note (again) that this behavior is *NOT* guaranteed. (The behavior we observe is a side effect of how MySQL processes inline views (derived tables). This behavior may be different in versions after 5.5.)
If you need the rows returned in a particular order, then specify an **`ORDER BY`** clause for the outermost query. And that ordering will apply to the *entire* resultset.
As I mentioned earlier, if I needed the rows from the first query first, followed by the second query, I would include a "discriminator" column in each query, and then include the "discriminator" column in the ORDER BY clause. I would also do away with the inline views, and do something like this:
```
SELECT s.id, s.role, 's' AS src
FROM foo2 s
UNION ALL
SELECT t.id, t.role, 't' AS src
FROM foo3 t
ORDER BY src, role
```
|
Since "fold" isn't powerful enough to write a tree pretty-printer with indentation, what high-order combinator is?
Given, for example, the following tree data type:
```
data Tree a = Node [Tree a] | Leaf a deriving Show
type Sexp = Tree String
```
How do I express a "pretty" function using a higher-order combinator that prints the tree with proper indentation? For example:
```
sexp =
Node [
Leaf "aaa",
Leaf "bbb",
Node [
Leaf "ccc",
Leaf "ddd",
Node [
Leaf "eee",
Leaf "fff"],
Leaf "ggg",
Leaf "hhh"],
Leaf "jjj",
Leaf "kkk"]
pretty = ????
main = print $ pretty sexp
```
I want the result of that program to be:
```
(aaa
bbb
(ccc
ddd
(eee
fff)
ggg
hhh)
jjj
kkk)
```
Here is an incomplete solution, using a "fold" as the combinator, that doesn't implement the indentation:
```
fold f g (Node children) = f (map (fold f g) children)
fold f g (Leaf terminal) = g terminal
pretty = fold (\ x -> "(" ++ (foldr1 ((++) . (++ " ")) x) ++ ")") show
main = putStrLn $ pretty sexp
```
It is obviously not possible to write the function I want using `fold`, since it forgets the tree structure. So, what is a proper higher-order combinator that is generic enough to allow me to write the function I want, but less powerful than writing a direct recursive function?
| `fold` is strong enough; the trick is that we'll need to instantiate `r` as a reader monad of the current indentation level.
```
fold :: ([r] -> r) -> (a -> r) -> (Tree a -> r)
fold node leaf (Node children) = node (map (fold node leaf) children)
fold node leaf (Leaf terminal) = leaf terminal
pretty :: forall a . Show a => Tree a -> String
pretty tree = fold node leaf tree 0 where
node :: [Int -> String] -> Int -> String
node children level =
let childLines = map ($ level + 1) children
in unlines ([indent level "Node ["] ++ childLines ++ [indent level "]"])
leaf :: a -> Int -> String
leaf a level = indent level (show a)
indent :: Int -> String -> String -- two space indentation
indent n s = replicate (2 * n) ' ' ++ s
```
Take careful note that I pass an extra parameter to the call to `fold`. This is the initial state of indentation and it works because with this specialization of `r`, `fold` returns a function.
|
How to find the changes in records in SQL server
I have a table which holds student info.
```
+==========================================+
| ID | Department | Date |
+==========================================+
| 001 | English | Feb 3 2017 |
| 001 | English | Feb 4 2017 |
| 001 | Science | Mar 1 2017 |
| 001 | Maths | Mar 2 2017 |
| 001 | Maths | Mar 21 2017 |
| 001 | Maths | Apr 2 2017 |
| 001 | English | Apr 7 2017 |
| 002 | Maths | Feb 1 2017 |
| 002 | Maths | Apr 7 2017 |
| 003 | Maths | Apr 3 2017 |
| 003 | Maths | Apr 7 2017 |
| 004 | Science | Feb 1 2017 |
| 004 | Science | Mar 1 2017 |
| 004 | Maths | Apr 7 2017 |
| 004 | English | Apr 9 2017 |
+==========================================+
```
In the above table I need to get the list of student records whenever the student's department preference is changed. There is also a chance that student can change back to the same department again. So for the above sample data, the list of records returned would be
**For student 001**
```
| 001 | English | Feb 4 2017 |
| 001 | Science | Mar 1 2017 |
| 001 | Maths | Apr 2 2017 |
```
**002 and 003 Nothing**
**for 004**
```
| 004 | Science | Mar 1 2017 |
| 004 | Maths | Apr 7 2017 |
```
When I try to apply the logic mentioned [here](https://stackoverflow.com/questions/43270488/how-to-track-changes-in-records-in-sql-server/43270684), partitioning doesn't work because the student can come back to the same department again. Kindly help.
| You could use the `LEAD` window function - available in SQL Server 2012 and later...
```
DECLARE @SampleData AS TABLE
(
Id int,
Department varchar(20),
[Date] date
)
INSERT INTO @SampleData
VALUES (1,'English', 'Feb 3 2017'),(1,'English', 'Feb 4 2017'),(1,'Science', 'Mar 1 2017'),
(1,'Maths', 'Mar 2 2017'),(1,'Maths', 'Mar 3 2017'),(1,'English', 'Mar 7 2017'),
(2,'Maths', 'Feb 3 2017'),(2,'Maths', 'Feb 4 2017'),
(3,'Maths', 'Feb 3 2017'), (3,'Maths', 'Feb 4 2017'),
(4,'Science', 'Feb 1 2017'), (4,'Science', 'Feb 2 2017'), (4,'Maths', 'Feb 3 2017'),(4,'English', 'Feb 4 2017')
;WITH temps AS
(
SELECT sd.*, LEAD(sd.Department, 1) OVER(PARTITION BY id ORDER BY sd.[Date]) AS NextDepartment
FROM @SampleData sd
)
SELECT t.id, t.Department,t.[Date] FROM temps t
WHERE t.Department != t.NextDepartment
```
Demo link: [Rextester](http://rextester.com/SVVA33279)
Reference link: [LEAD - MDSN](https://learn.microsoft.com/en-us/sql/t-sql/functions/lead-transact-sql)
For older version you could use `OUTER APPLY`
```
SELECT sd.*
FROM @SampleData sd
OUTER APPLY
(
SELECT TOP 1 * FROM @SampleData sd2 WHERE sd.Id = sd2.Id AND sd.[Date] < sd2.[Date]
) nextDepartment
WHERE sd.Department != nextDepartment.Department
```
|
Can I mix sessions auth and token auth in one site?
I have a Django application using session auth. I need to add an API part. This API will be used by my app users only (from web browsers as well as mobile devices). I would prefer to use token auth for the API as it seems more robust. I found [rest\_framework\_jwt](https://github.com/GetBlimp/django-rest-framework-jwt), which can handle it.
My question: Can I mix session auth for the web and token auth for the API in one site without problems?
I think of the web app and the API app as two different applications, so I want to separate them in my project, use a different subdomain and a different kind of auth for each. Is it possible to separate auth by subdomain? I would also like to send the token when a user logs in to the web app. Is that a good idea?
| As you see in the [documentation](http://www.django-rest-framework.org/api-guide/authentication), you can configure multiple authentication backends without any problems. DRF will just try each one of the backends until one says "ok".
One thing to keep in mind:
If you (for example) provide an invalid JSON Web Token, then authentication will immediately fail and other backends will not be tried. This is easy to see in the [source of rest\_framework\_jwt](https://github.com/GetBlimp/django-rest-framework-jwt/blob/master/rest_framework_jwt/authentication.py#L29).
```
def authenticate(self, request):
"""
Returns a two-tuple of `User` and token if a valid signature has been
supplied using JWT-based authentication. Otherwise returns `None`.
"""
auth = get_authorization_header(request).split()
if not auth or auth[0].lower() != b'jwt':
return None
if len(auth) == 1:
msg = 'Invalid JWT header. No credentials provided.'
raise exceptions.AuthenticationFailed(msg)
elif len(auth) > 2:
msg = ('Invalid JWT header. Credentials string '
'should not contain spaces.')
raise exceptions.AuthenticationFailed(msg)
try:
payload = jwt_decode_handler(auth[1])
except jwt.ExpiredSignature:
msg = 'Signature has expired.'
raise exceptions.AuthenticationFailed(msg)
except jwt.DecodeError:
msg = 'Error decoding signature.'
raise exceptions.AuthenticationFailed(msg)
user = self.authenticate_credentials(payload)
return (user, auth[1])
```
- `return None` means the backend is saying: "this is not JWT, let the others try"
- `raise exceptions.AuthenticationFailed(msg)` means: "the user tried JWT, but failed it"
To answer the further questions:
- no need for doing this in separate applications (but it's no problem if you want).
- as you can read in ["setting the authentication scheme"](http://www.django-rest-framework.org/api-guide/authentication#setting-the-authentication-scheme) you can define global defaults for authentication backends, but you can also override them per `View` or `ViewSet`.
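A sketch of what that could look like (the JWT class path comes from the `rest_framework_jwt` package discussed above; the view itself is made up):
```
# settings.py - global defaults: try session auth first, then JWT
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
    ),
}

# views.py - per-view override: this endpoint accepts JWT only
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from rest_framework_jwt.authentication import JSONWebTokenAuthentication

class ApiOnlyView(APIView):
    authentication_classes = (JSONWebTokenAuthentication,)
    permission_classes = (IsAuthenticated,)

    def get(self, request):
        return Response({'user': request.user.username})
```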
|
Keep p:dialog open when a validation error occurs after submit
Minimal example dialog:
```
<p:dialog header="Test Dialog"
widgetVar="testDialog">
<h:form>
<p:inputText value="#{mbean.someValue}"/>
<p:commandButton value="Save"
onsuccess="testDialog.hide()"
actionListener="#{mbean.saveMethod}"/>
</h:form>
</p:dialog>
```
What I want to be able to do is have the mbean.saveMethod somehow prevent the dialog from closing if there was some problem and only output a message through growl. This is a case where a validator won't help because there's no way to tell if someValue is valid until a save is submitted to a back end server. Currently I do this using the visible attribute and point it to a boolean field in mbean. That works but it makes the user interface slower because popping up or down the dialog requires hitting the server.
| The `onsuccess` runs if ajax request itself was successful (i.e. there's no network error, uncaught exception, etc), not if action method was successfully invoked.
Given a `<p:dialog widgetVar="yourWidgetVarName">`, you could remove the `onsuccess` and replace it by PrimeFaces `RequestContext#execute()` inside `saveMethod()`:
```
if (success) {
RequestContext.getCurrentInstance().execute("PF('yourWidgetVarName').hide()");
}
```
Note: `PF()` was introduced in PrimeFaces 4.0. In older PrimeFaces versions, you need `yourWidgetVarName.hide()` instead.
If you prefer to not clutter the controller with view-specific scripts, you could use `oncomplete` instead which offers an `args` object which has a boolean `validationFailed` property:
```
<p:commandButton ...
oncomplete="if (args && !args.validationFailed) PF('yourWidgetVarName').hide()" />
```
The `if (args)` check is necessary because it may be absent when an ajax error has occurred, and would thus cause a new JS error when you try to get `validationFailed` from it; the `&amp;&amp;` instead of `&&` is mandatory for the reason explained in [this answer](https://stackoverflow.com/questions/16303779/the-entity-name-must-immediately-follow-the-in-the-entity-reference/16328808#16328808). Refactor if necessary to a JS function which you invoke like `oncomplete="hideDialogOnSuccess(args, 'yourWidgetVarName')"` as shown in [Keep <p:dialog> open when validation has failed](https://stackoverflow.com/questions/14328115/keep-pdialog-open-when-validation-has-failed/14328152#14328152).
If there is however no validation error and the action method is successfully triggered, and you would still like to keep the dialog open because of e.g. an exception in the service method call, then you can manually trigger `validationFailed` to `true` from inside backing bean action method by explicitly invoking [`FacesContext#validationFailed()`](https://docs.oracle.com/javaee/7/api/javax/faces/context/FacesContext.html#validationFailed--). E.g.
```
FacesContext.getCurrentInstance().validationFailed();
```
|
Knockout.js applyBindings with empty data model
Is it possible to call applyBindings when the dataModel of the viewModel is unknown? My problem is that the dataModel structure is only known after an ajax call on the page, and the way I understand knockout.js, the viewModel should be initialized on page load.
The code fails with "nCustomerId is undefined".
How should I handle this? I could wait to call ko.applyBindings() until I know the dataModel structure (which I do after the ajax call), but is that the right way to do it when using knockout.js?
```
function initModel () {
var kunderModel = function () {
var self = this;
self.list = ko.observableArray();
self.selectedItem = ko.observable();
self.newItem = ko.observable();
self.add = function () {
self.selectedItem(newItem(self.newItem));
showInputContainer();
};
self.getList = function () {
var nButikId = jQuery("#butikid").val();
jQuery.ajax({
url: "crm_service.wso/Dan_Butik_Kunder_Tabel/JSON/",
data: { nButikId: nButikId },
success: function (data) {
self.list(data);
},
complete: function () {
connectExt.UIElements().Loading(false);
}
});
}
}
}
_viewModel = new kunderModel();
ko.applyBindings(_viewModel);
jQuery(document).ready(function () {
initModel();
});
<div data-bind="template: { name: 'editTmpl', data: selectedItem }"></div>
<script id="editTmpl" type="text/html">
<div class="opretContainer">
<div class="opretContainerTitle">
<span data-bind="visible: nCustomerId == 0">New</span>
<span data-bind="visible: nCustomerId != 0">Edit</span>
</div>
</div>
</script>
```
| You do not need the **initModel** function. What you should do is initialize the **\_viewModel** directly in the jQuery document-ready callback.
You have to understand that your ViewModel itself is already a function. You can call your **getList** function directly inside your viewmodel when it is initialized.
What I would do:
```
jQuery(document).ready(function () {
_viewModel = new kunderModel();
ko.applyBindings(_viewModel);
});
```
and than inside kunderModel:
```
var kunderModel = function () {
var self = this;
self.list = ko.observableArray();
self.getList = function () {
var nButikId = jQuery("#butikid").val();
jQuery.ajax({ ... });
};
self.getList();
}
```
This way the **getList** method gets invoked at the end of the creation of the ViewModel. (I think of it as a method which is invoked by the "constructor".) Since everything is observable, when the callback is executed, your UI will be automatically updated.
|
Find local minima in an array
Given an array of integers, find the local minima. An element A[i] is defined as a local minimum if A[i-1] > A[i] and A[i] < A[i+1] where i = 1...n-2. In case of boundary elements, the number has to be just smaller than its adjacent number.
I know if there is only one local minimum, then we can solve with modified binary search.
But if it is known that there exist multiple local minima in the array, can it be solved in `O(log n)` time?
| If the array elements are not guaranteed to be distinct, then it's not possible to do this in O(log n) time. The reason for this is the following: suppose that you have an array where all n > 1 values are the same. In this case, none of the elements can be local minima, because no element is less than its neighbors. However, in order to determine that all values are the same, you will have to look at all the array elements, which takes O(n) time. If you use less than O(n) time, you can't necessarily look at all the array elements.
If, on the other hand, the array elements are guaranteed to be distinct, you can solve this in O(log n) time using the following observations:
1. If there is just one element, it's guaranteed to be a local minimum.
2. If there are multiple elements, look at the middle element. If it's a local minimum, you're done. Otherwise, at least one of the elements next to it must be smaller than it. Now, imagine what would happen if you were to start at one of the smaller elements and progressively move toward one of the ends of the array in the direction away from the middle element. At each step, either the next element is smaller than the previous, or it will be bigger. Eventually, you will either hit the end of the array this way, or you will hit a local minimum. Note that this means that you **could** do this to find a local minimum. However, we're not actually going to do that. Instead, we'll use the fact that a local minimum will exist in this half of the array as a justification for throwing away one half of the array. In what remains, we are guaranteed to find a local minimum.
Consequently, you can build up the following recursive algorithm:
1. If there is just one array element, it's a local minimum.
2. If there are two array elements, check each. One must be a local minimum.
3. Otherwise, look at the middle element of the array. If it's a local minimum, return it. Otherwise, at least one adjacent value must be smaller than this one. Recurse in the half of the array containing that smaller element (but not the middle).
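A compact iterative sketch of that algorithm (my own illustration, assuming distinct elements):
```
def local_minimum(a):
    """Return the index of some local minimum of a list of distinct values."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > a[mid + 1]:
            lo = mid + 1   # a smaller neighbor lies to the right: a local minimum is in a[mid+1..hi]
        else:
            hi = mid       # a[mid] < a[mid+1]: a local minimum is in a[lo..mid]
    return lo              # O(log n) iterations
```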
Notice that this has the recurrence relation
>
> T(1) ≤ 1
>
>
> T(2) ≤ 1
>
>
> T(n) ≤ T(n / 2) + 1
>
>
>
Using the Master Theorem, you can show that this algorithm runs in time O(log n), as required.
Hope this helps!
Please also notice that this algorithm only works if edges of the array count as local minima if they are smaller than the adjacent element.
|
Nest API Authorization giving 404
I can't seem to get my Access Token through the Nest API.
I've tried POSTing to the Access Token URL in 3 different ways, but they all give the same result.
I'm using the following code:
```
<body>
<button type = 'button' id = 'connect' class = 'btn btn-default'>Connect To Nest</button>
<div id = 'pinArea'>
<label for = 'pin'>Enter PIN Here: </label><input type = 'text' name = 'pin' id = 'pin'><br />
<button type = 'button' class = 'btn btn-default' id = 'pinSubmit'>Submit</button>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="js/bootstrap.min.js"></script>
<script type = 'text/javascript'>
$(document).ready(function() {
function makeid()
{
var text = "";
var possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
for( var i=0; i < 5; i++ )
text += possible.charAt(Math.floor(Math.random() * possible.length));
return text;
}
$("#connect").click(function() {
var state = makeid();
window.open('https://home.nest.com/login/oauth2?client_id=MYCLIENTID&state='+state+'');
$("#connect").hide();
$("#pinArea").show();
});
$("#pinSubmit").click(function() {
var pin = $("#pin").val();
$.ajax({
url: "https://api.home.nest.com/oauth2/access_token?code="+pin+"&client_id=MYCLIENTID&client_secret=MYCIENTSECRET&grant_type=authorization_code",
//data: {code: pin, client_id: "MYCLIENTID", client_secret: "MMYCLIENTSECRET", grant_type: "authorization_code"},
type: "POST",
success: function(res) {
console.log(res);
},
error: function(e) {
console.log(e);
}
});
});
});
</script>
```
The problem is that the URL is giving the following error in my console, when it should be sending back my Access Token:
```
XMLHttpRequest cannot load https://api.home.nest.com/oauth2/access_token. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://fatslug.ca' is therefore not allowed access.
```
Any ideas on what could be causing this? Am I just doing it completely wrong?!
| The issue is that Nest does not support CORS for their token exchange step. I presume this is intentional, but I'm really not sure.
Instead, Nest would seem to prefer that you build a server and proxy the token exchange through that server. Pretty simple to do.
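A minimal sketch of such a proxy (Node/Express and axios are my own choices here, nothing Nest prescribes):
```
const express = require('express');
const axios = require('axios');
const app = express();

app.post('/nest-token', express.json(), function (req, res) {
  // forward the PIN to Nest from the server, where CORS does not apply
  axios.post('https://api.home.nest.com/oauth2/access_token', null, {
    params: {
      code: req.body.pin,
      client_id: process.env.NEST_CLIENT_ID,
      client_secret: process.env.NEST_CLIENT_SECRET,
      grant_type: 'authorization_code'
    }
  })
  .then(function (r) { res.json(r.data); })   // hand the access token back to the browser
  .catch(function (e) { res.status(500).json({ error: e.message }); });
});

app.listen(3000);
```
The browser would then POST the PIN to `/nest-token` on your own origin instead of calling Nest directly.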
However, if you really want to do the token exchange in the browser (and do NOT do this for anything in production or taking privacy/security seriously), then you can use a service like cors-anywhere.com:
```
"https://cors-anywhere.herokuapp.com/api.home.nest.com/oauth2/access_token?" +
"code="+auth.authorizationCode+"&" +
"client_id="+clientId+"&" +
"client_secret="+clientSecret+"&" +
"grant_type=authorization_code"
```
This will send the request to cors-anywhere, which will provide CORS support to the request and proxy it to Nest.
|
PriorityQueue has objects with the same priority
I'm using a priority queue to sort and use a large number of custom objects. The objects have a "weight" that is their natural ordering. However, different objects that are inserted into the priority queue may have the same "weight". In such cases, I want the priority queue to order them in the same order in which they were put into the queue.
For example, if I add in CustomObjects A,B,C,D in that order, all with the same "weight", than the priority queue should return them in that order as well - even if I poll one or more of the objects before adding in the others.
Here is the CompareTo for my custom object:
```
public int compareTo(CustomObject o) {
int thisWeight = this.weight;
int thatWeight = o.weight;
if(thisWeight < thatWeight){
return -1;
}
else{
return 1;
}
}
```
While I thought that this would maintain that initial order, it doesn't. This occurs when I input A,B,C with weight 1; poll A; and add D,E also with weight 1. Somehow, D and E are sorted after B, but before C.
I am aware that the Iterator for PriorityQueues doesn't return the correct ordering, so I am limited in my ability to look at the ordering - however I can see the order that the elements leave the queue and it clearly doesn't follow the path that I want it to.
Suggestions?
| If you need an ordering according to the insertion order, you need to use an extra element as a timestamp.
That is, on insertion with equal weights, use the `timestamp` to see which element was inserted first.
So `CustomObject` should be something like:
```
class CustomObject {
int weight;
long timestamp;
}
```
And the comparison should be:
```
public int compareTo (CustomObject o) {
int thisWeight = this.weight;
int thatWeight = o.weight;
if (thisWeight != thatWeight) {
return thisWeight - thatWeight;
}
else {
        // timestamp is a long, so use Long.compare instead of int subtraction
        return Long.compare(this.timestamp, o.timestamp);
}
}
```
The smaller `timestamp` means the element was inserted *earlier*, so you keep the insertion order.
You could also use a "logical" time by maintaining a counter that you update on each `add` or `remove`.
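A sketch of that "logical time" variant, wrapping the queue so callers never manage the counter themselves (class and method names are my own):
```
import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicLong;

class StablePriorityQueue<E extends Comparable<E>> {

    // pairs an element with the sequence number it was inserted under
    private static class Entry<E extends Comparable<E>> implements Comparable<Entry<E>> {
        final E value;
        final long seq;
        Entry(E value, long seq) { this.value = value; this.seq = seq; }
        public int compareTo(Entry<E> other) {
            int byWeight = value.compareTo(other.value);
            return byWeight != 0 ? byWeight : Long.compare(seq, other.seq); // FIFO tie-break
        }
    }

    private final AtomicLong counter = new AtomicLong();
    private final PriorityQueue<Entry<E>> queue = new PriorityQueue<>();

    public void add(E element) {
        queue.add(new Entry<>(element, counter.getAndIncrement()));
    }

    public E poll() {
        Entry<E> head = queue.poll();
        return head == null ? null : head.value;
    }
}
```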
|
Extract data from database using ajax in Liferay
I am using the Liferay framework for developing an application.
I have a dropdown box whose values are pulled from the database.
What I want to do: whenever a user selects any Person from the drop-down menu, the information about that Person should be extracted from the database, just for viewing. How should this be done? Should I use ajax or something else?
I do not know how to start:
EDITED:
This is how I have made a call from the JSP. I am not sure if it is the correct approach.
Call from jsp:
```
<!-- Ajax script to pull Employee data from the database -->
<script>
function showEmployeeInfo(empName)
{
var xmlhttp;
if (str=="")
{
document.getElementById("empDetails").innerHTML="";
return;
}
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("empDetails").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","getEmp.java?q="+empName,true);
xmlhttp.send();
}
```
Please note that
xmlhttp.open("GET","getEmp.java?q="+empName,true);
is incorrect, and I didn't know what to put there.
| You should always use a JavaScript library to perform ajax. Why? Because the library takes care of the boiler-plate code and is also cross-browser compliant.
So with Liferay 6.x you can use [alloy-ui](/questions/tagged/alloy-ui "show questions tagged 'alloy-ui'") as it is the default library, or else you can use [jquery](/questions/tagged/jquery "show questions tagged 'jquery'") which is the most popular and easy to use.
It is just that you would need to include jQuery in your portlet explicitly, whereas with Alloy UI you can just use it directly.
There are other libraries but I prefer these as I am comfortable with these two :-)
I would give an example by using Alloy UI (a crash course):
1. Let's understand the simple steps and flow first:
1. Render the JSP
2. Have a `resourceURL` created `<portlet:resourceURL var="ajaxCallResourceURL" />` in the JSP
3. Call a JavaScript function by generating an event through any element, like `onChange`, `onClick`, etc.
4. Use the Alloy `io.request` module to call the `serveResource` method through the `resourceURL`
5. The `serveResource` method returns either HTML text or a JSON list to fill in the drop-down
6. In the `success` method of the `io.request` script, do some JavaScript magic to fill in the drop-down
2. Now let the code flow:
**JSP**
```
<%-- Create the Resource URL --%>
<portlet:resourceURL var="fetchWordsResourceURL" />
<aui:form method="post" name="fm" >
<%-- Calling the javascript function fetchWords() which will make the ajax call --%>
<aui:select name="sourceSelect" id="sourceSelect" label="alphabets" onChange='<%= renderResponse.getNamespace() + "fetchWords();"%>'>
<aui:option label="--" value="--" />
<aui:option label="A" value="a" />
<aui:option label="B" value="b" />
<aui:option label="C" value="c" />
</aui:select>
<%-- The ajax response would populate this drop-down --%>
<aui:select name="targetSelect" id="targetSelect" label="Words with Alphabets">
</aui:select>
</aui:form>
<aui:script>
<%-- This is the javascript function which will be executed onChange of the value of sourceSelect --%>
Liferay.provide(
window,
'<portlet:namespace />fetchWords',
function() {
var A = AUI();
var fetchWordsURL = '<%= fetchWordsResourceURL.toString() %>';
// selecting the sourceSelect drop-down to get the current value
var sourceElement = A.one("#<portlet:namespace />sourceSelect");
// selecting the targetSelect drop-down to populate values
var targetElement = A.one("#<portlet:namespace />targetSelect");
alert("Fetch word for alphabet = " + sourceElement.val());
A.io.request (
// the resource URL to fetch words
fetchWordsURL, {
data: {
// request parameters to be sent to the Server
<portlet:namespace />alphabet: sourceElement.val()
},
dataType: 'json',
on: {
failure: function() {
// if there was some error at the server
alert("Ajax failed!");
},
success: function(event, id, obj) {
// JSON Data recieved from Server
var wordsArray = this.get('responseData');
// crude javascript magic to populate the drop-down
//clear the content of select
targetElement.html("");
for (var j=0; j < wordsArray.length; j++) {
// alert("Alphabet ==> " + wordsArray[j]);
targetElement.append("<option value='" + wordsArray[j] + "'>" + wordsArray[j] + "</option>");
}
}
}
}
);
},
['aui-io']
);
</aui:script>
```
**Portlet class: serveResource method**
```
@Override
public void serveResource(ResourceRequest resourceRequest,
ResourceResponse resourceResponse)
throws IOException, PortletException {
String alphabet = ParamUtil.getString(resourceRequest, "alphabet");
_log.info("Alphabet recieved from ajax request ==> " + alphabet);
// build the JsonArray to be sent back
JSONArray jsonArray = JSONFactoryUtil.createJSONArray();
if("a".equals(alphabet)) {
jsonArray.put("Apple");
jsonArray.put("Ape");
jsonArray.put("Ant");
}
else if("b".equals(alphabet)) {
jsonArray.put("Banana");
jsonArray.put("Ball");
jsonArray.put("Bat");
}
else if("c".equals(alphabet)) {
jsonArray.put("Code");
jsonArray.put("Cat");
jsonArray.put("Camera");
}
_log.info("Json Array populated ==> " + jsonArray.toString());
// set the content Type
resourceResponse.setContentType("text/javascript");
// using printWrite to write to the response
PrintWriter writer = resourceResponse.getWriter();
writer.write(jsonArray.toString());
}
```
That's it! You are ready to code some highly Ajax-enabled applications :-).
|
How to avoid code duplication implementing const and non-const iterators?
I'm implementing a custom container with an STL-like interface. I have to provide a regular iterator and a const iterator. Most of the code for the two versions of the iterators is identical. How can I avoid this duplication?
For example, my container class is `Foo`, and I'm implementating `FooIterator` and `FooConstIterator`. Both of the iterators have to provide methods like `operator++()` which are identical.
My question is similar to [How do I remove code duplication between similar const and non-const member functions?](https://stackoverflow.com/questions/123758/how-do-i-remove-code-duplication-between-similar-const-and-non-const-member-funct), but the answer to that one is specific to const and non-const methods, especially accessors. I don't see how that might generalize to the iterator problem.
Should I have `FooIterator` derive from `FooConstIterator` and extend it with additional non-const methods? That either leads to virtual methods or method hiding, which seem inappropriate here.
Perhaps `FooIterator` should contain a `FooConstIterator`. Although that approach does reduce implementation duplication, it seems to re-introduce a lot of boilerplate method definitions.
Is there a clever template technique for generating the two iterators from a single definition? Or perhaps there's a way to--shudder--use the preprocessor to stamp out these nearly identical classes.
I've tried looking at my local STL implementation to see how it handles this. There are so many helper classes that I'm having trouble grokking the design, but it looks like the functionality is simply duplicated.
In previous projects, my custom container was built on top of a standard STL container, so I didn't have to provide my own iterators. That's not an option in this case.
| **[The best answer was, unfortunately, deleted by a moderator because it was a link-only answer. I understand why link-only answers are discouraged; deleting it, however, has robbed future seekers of very useful information. The link has remained stable for more than seven years and continues to work at the time of this writing.]**
I strongly recommend the original Dr. Dobb's Journal article by Matt Austern entitled ["The Standard Librarian: Defining Iterators and Const Iterators"](http://www.drdobbs.com/the-standard-librarian-defining-iterato/184401331), January 2001. Should that link go bad, now that Dr. Dobb's has ceased operating, it's also available [here](http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Website/articles/CUJ/2001/0101/austern/austern.htm).
To prevent this replacement answer from being deleted, I will summarize the solution.
The idea is to implement the iterator once as a template that takes an extra template parameter, a boolean that says whether or not this is the const version. Anywhere in the implementation where the const and non-const versions differ, you use a template mechanism to select the correct code. Matt Austern's mechanism was called `choose`. It looked like this:
```
template <bool flag, class IsTrue, class IsFalse>
struct choose;
template <class IsTrue, class IsFalse>
struct choose<true, IsTrue, IsFalse> {
typedef IsTrue type;
};
template <class IsTrue, class IsFalse>
struct choose<false, IsTrue, IsFalse> {
typedef IsFalse type;
};
```
If you had separate implementations for const and non-const iterators, then the const implementation would include typedefs like this:
```
typedef const T &reference;
typedef const T *pointer;
```
and the non-const implementation would have:
```
typedef T &reference;
typedef T *pointer;
```
But with `choose`, you can have a single implementation that selects based on the extra template parameter:
```
typedef typename choose<is_const, const T &, T &>::type reference;
typedef typename choose<is_const, const T *, T *>::type pointer;
```
By using the typedefs for the underlying types, all the iterator methods can have an identical implementation. See Matt Austern's [complete example](http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Website/articles/CUJ/2001/0101/austern/list1.htm).
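As a rough sketch (not Austern's exact code) of what the single template can look like, reusing the `choose` template above and assuming a minimal singly linked `node` type:

```
#include <cstddef>
#include <iterator>

// Minimal node type, assumed here only so the sketch is self-contained.
template <typename T>
struct node {
    T value;
    node* next;
};

template <typename T, bool is_const>
class node_iterator {
public:
    typedef std::forward_iterator_tag iterator_category;
    typedef T value_type;
    typedef std::ptrdiff_t difference_type;
    // choose<> (defined above) picks the const or non-const variant from one implementation.
    typedef typename choose<is_const, const T&, T&>::type reference;
    typedef typename choose<is_const, const T*, T*>::type pointer;
    typedef typename choose<is_const, const node<T>*, node<T>*>::type node_pointer;

    explicit node_iterator(node_pointer p = 0) : pos_(p) {}
    // An iterator converts to a const_iterator, but not the other way around.
    node_iterator(const node_iterator<T, false>& other) : pos_(other.pos_) {}

    reference operator*() const { return pos_->value; }
    pointer operator->() const { return &pos_->value; }
    node_iterator& operator++() { pos_ = pos_->next; return *this; }
    bool operator==(const node_iterator& rhs) const { return pos_ == rhs.pos_; }
    bool operator!=(const node_iterator& rhs) const { return pos_ != rhs.pos_; }

private:
    node_pointer pos_;
    friend class node_iterator<T, true>; // lets const_iterator read iterator's pos_
};

// The container then simply exposes:
// typedef node_iterator<T, false> iterator;
// typedef node_iterator<T, true>  const_iterator;
```

The converting constructor lets an `iterator` silently become a `const_iterator`, while the reverse direction is (correctly) not allowed.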
|
Is Google Chart Tool Api is free to use
Is Google Chart API free to use? If so, where can I find its documentation?
| Yes, it is free. As stated on the [official page](http://code.google.com/apis/chart/):
>
> Completely free for all uses:
> commercial, governmental, personal or
> educational.
>
>
>
For documentation follow this [link](http://code.google.com/apis/chart/interactive/docs/).
Edit:
You don't really download it. You include the following in your code:
```
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
```
Then you start using it as shown [here](http://code.google.com/apis/chart/interactive/docs/quick_start.html).
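For reference, that quick start boils down to something like this (the element id, chart type and data here are just an illustration):

```
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
  // Load the Visualization API and the core chart package, then draw when ready.
  google.load('visualization', '1', {packages: ['corechart']});
  google.setOnLoadCallback(drawChart);

  function drawChart() {
    var data = google.visualization.arrayToDataTable([
      ['Task', 'Hours per Day'],
      ['Work', 8],
      ['Sleep', 8],
      ['Other', 8]
    ]);
    var chart = new google.visualization.PieChart(document.getElementById('piechart'));
    chart.draw(data, {title: 'My Daily Activities'});
  }
</script>
<div id="piechart" style="width: 450px; height: 300px;"></div>
```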
If you really want to see what it looks like inside, you can paste this into your browser address bar: <http://www.google.com/jsapi>
|
RecyclerView - get Position inside Activity rather than RecyclerViewAdapter
It is my third day now dealing with the handling of my view clicks. I originally was using `ListView`, then I switched to `RecyclerView`. I have added `android:onClick` attributes to every control in my `row_layout` and I am handling them in my `MainActivity` like this:
```
public void MyMethod(View view) {}
```
In my old `ListView` implementation, I have done `setTag(position)` to be able to get it in `MyMethod` by doing this inside it:
```
Integer.parseInt(view.getTag().toString())
```
This worked nicely without problems. Though now I am dealing with `RecyclerView` and being forced to use the `ViewHolder`, which does not offer a `setTag` method. After searching for 2 hours, I have found that people use `setTag` like this:
```
holder.itemView.setTag(position)
```
This was acceptable. Though when I try to get the value from the `MyMethod` function using the line:
```
Integer.parseInt(view.getTag().toString())
```
The application crashes. I have read about several implementations of onclick handling inside the adapter which work, but I have to use the `MainActivity` because I am using something that is unique to that activity.
TL;DR I want to send the position of the clicked row to my `MainActivity` in a simple manner.
Edit: I apologize for the confusion since my topic was not very thorough. I have a `RecyclerView` and an adapter. The adapter is linked to my `row_layout`. This `row_layout` xml has one root `LinearLayout`. Inside it there is one `TextView`, another `LinearLayout` (which has two `TextViews`) and one `Button` (for simplicity). I do not want to suffer dealing with the clicks on `RecyclerView` like I did with the `ListView`. So, I have decided to add an `android:onClick` for every control, then link the `TextView` and `LinearLayout` to a single method and link the `Button` (and future `Button`s) to their unique methods. What I am missing is that I want to be able to tell the position in each of the receiving methods in my `MainActivity`. If I must link everything that comes from the adapter and goes into the `MainActivity` to a single onclick handler, so be it. Although, how would I tell which control fired the click?
Edit 2: The requested layout
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:onClick="MyMethod"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:orientation="horizontal"
android:weightSum="1">
<TextView
android:id="@+id/letter"
android:onClick="MyMethod"
android:layout_width="60dp"
android:layout_height="fill_parent"
/>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:onClick="MyMethod"
android:layout_width="200dp"
android:layout_height="wrap_content"
android:orientation="vertical">
<TextView
android:id="@+id/firstname"
android:onClick="MyMethod"
android:layout_width="fill_parent"
android:layout_height="17dp" />
<TextView
android:id="@+id/longname"
android:onClick="MyMethod"
android:layout_width="fill_parent"
android:layout_height="wrap_content" />
</LinearLayout>
<Button
android:text="Test"
android:onClick="OtherMethod"
android:layout_width="match_parent"
android:layout_height="fill_parent"
android:id="@+id/process"/>
</LinearLayout>
```
| You can achieve this by creating an interface inside your adapter for an `itemclicklistener` and then you can set `onItemClickListener` from your `MainActivity`.
Somewhere inside your `RecyclerViewAdapter` you would need the following:
```
private onRecyclerViewItemClickListener mItemClickListener;
public void setOnItemClickListener(onRecyclerViewItemClickListener mItemClickListener) {
this.mItemClickListener = mItemClickListener;
}
public interface onRecyclerViewItemClickListener {
void onItemClickListener(View view, int position);
}
```
Then inside your `ViewHolder` (which I've added as an inner class inside my adapter), you would apply the listener to the components you'd like the user to click, like so:
```
class RecyclerViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener {
public ImageView imageview;
RecyclerViewHolder(View view) {
super(view);
this.imageview = (ImageView) view
.findViewById(R.id.image);
imageview.setOnClickListener(this);
}
@Override
public void onClick(View v) {
if (mItemClickListener != null) {
mItemClickListener.onItemClickListener(v, getAdapterPosition());
}
}
}
```
This example shows an `onClickListener` being applied to the image inside a `ViewHolder`.
```
recyclerView.setAdapter(adapter);// set adapter on recyclerview
adapter.notifyDataSetChanged();// Notify the adapter
adapter.setOnItemClickListener(new RecyclerViewAdapter.onRecyclerViewItemClickListener() {
@Override
public void onItemClickListener(View view, int position) {
//perform click logic here (position is passed)
}
});
```
To implement this code, you would `setOnItemClickListener` to your adapter inside `MainActivity` as shown above.
**EDIT**
Because the `View` is getting passed into the `OnItemClickListener`, you can perform a `switch` statement inside the listener to ensure that the right logic is being performed to the right component. All you would need to do is take the logic from the `MyMethod` function and copy and paste it to the component you wish it to be applied to.
Example:
```
recyclerView.setAdapter(adapter);// set adapter on recyclerview
adapter.notifyDataSetChanged();// Notify the adapter
adapter.setOnItemClickListener(new RecyclerViewAdapter.onRecyclerViewItemClickListener() {
@Override
public void onItemClickListener(View view, int position) {
switch (view.getId()) {
case R.id.letter:
//logic for TextView with ID Letter here
break;
case R.id.firstname:
//logic for TextView with ID firstname here
break;
....
//the same can be applied to other components in Row_Layout.xml
}
}
});
```
You would also need to change something inside the `ViewHolder`. Instead of applying the `OnClickListener` to an `ImageView`, you would need to apply it to the whole row, like so:
```
RecyclerViewHolder(View view) {
super(view);
this.imageview = (ImageView) view
.findViewById(R.id.image);
view.setOnClickListener(this);
}
```
**EDIT 2**
Explanation:
So, with every `RecyclerView`. You need three components, The `RecyclerView`, `RecyclerViewAdapter` and the `RecyclerViewHolder`. These are what define the actual components the user sees (`RecyclerView`) and the Items within that View. The Adapter is where everything is pieced together and the Logic is implemented. The ins and outs of these components are nicely explained by Bill Phillips with the article [`RecyclerView` Part 1: Fundamentals For `ListView` Experts](https://www.bignerdranch.com/blog/recyclerview-part-1-fundamentals-for-listview-experts/) over at Big Nerd Ranch.
But to further explain the logic behind the click events, it's basically utilizing an [interface](https://developer.android.com/training/basics/fragments/communicating.html) to pass information from the `RecyclerViewAdapter` to the `RecyclerViewHolder` to your `MainActivity`. So if you follow the life-cycle of the `RecyclerView` adapter, it'll make sense.
The adapter is initialized inside your `MainActivity`, and the adapter's constructor is called with the information being passed. The row layout is then handed to the adapter via the `onCreateViewHolder` method; this tells the adapter how you would like each list item to look. The components in that layout then need to be individually initialized, and that's where the `ViewHolder` comes into play. Just like any other components you would initialize in your `Activities`, you do the same in the `ViewHolder`, and because the `RecyclerViewAdapter` inflates the `ViewHolder` you can happily use them within your adapter, as shown by Zeeshan Shabbir. But for this example you would like multiple components to have various logic applied to each individual one in your `MainActivity` class.
That's where we create the click listener as a global variable (so it can be accessed by both the `ViewHolder` and the `Adapter`). The adapter's job in this case is to ensure the listener exists, by creating an `Interface` you can initialize the listener through.
```
public interface onRecyclerViewItemClickListener {
void onItemClickListener(View view, int position);
}
```
After you've defined the information you would like the interface to hold (e.g. the component and its position), you can then create a function that the adapter will call to apply the logic from your `Activity` (the same way you would call `View.OnClickListener`), but by creating a `setOnItemClickListener` you can customize it.
```
public void setOnItemClickListener(onRecyclerViewItemClickListener mItemClickListener) {
this.mItemClickListener = mItemClickListener;
}
```
This function then needs an `onRecyclerViewItemClickListener` variable passed to it, as seen in your `MainActivity`: `new RecyclerViewAdapter.onRecyclerViewItemClickListener()`. In this case it's the interface you created before, with the method inside that needs to be implemented, hence the
```
@Override
public void onItemClickListener(View view, int position) {
}
```
is called.
All the `ViewHolder` does in this scenario is pass the information (the component itself and the position) into the `onItemClickListener`, with the components attached (inside the `onClick` function), to finalize the actual click functionality.
If you would like me to update the explanation in any way, let me know.
|
Chrome network Timing , how to improve Content Download
I was checking XHR call timings in Chrome DevTools to improve slow requests, but I found out that 99% of the response time is spent on content download, even though the content size is less than 5 KB and the application is running on localhost (working on my local machine, so no network issues).
But when replaying the call using the Replay XHR menu, the content download period drops dramatically from 2.13 s to 2.11 ms (as shown in the screenshots below). Data is not cached at the browser level.
- Example of Call Timing
![Example of Call Timing](https://i.stack.imgur.com/qsbzQ.png)
- Same Example Replayed
![Same Example Replayed](https://i.stack.imgur.com/rnw6c.png)
Can someone explain why the content download timing is slow and how to improve it?
The application is an ASP.NET MVC 5 solution combined with AngularJS.
The Web Server Details:
- Windows Server 2012 R2
- IIS 8
Thank you in advance for your support!
| I can't conclusively tell you the cause of this, but I can offer some variables that you can investigate, which might help you figure out what's going on.
## Caching
I know you said that the data is not getting cached at the browser level, but I'd suggest checking that again. Because the fact that the initial request takes 2s, and then the repeat request only takes 2ms really does sound like caching.
How to check:
1. Go to **Network** panel.
2. Look at **Size** column for the request. If you see `from memory` or `from disk cache`, it was served from the cache.
[![size column](https://i.stack.imgur.com/zDsY7.png)](https://i.stack.imgur.com/zDsY7.png)
## Slow development server or machine
My initial thought was that you're doing more work on your development machine than it can handle. Maybe the server requires more resources than your machine can handle. Maybe you have a lot of other programs running and your memory / CPU is getting maxed.
How to check:
1. Run your app on a more powerful server and see if the pattern persists.
## Frontend app is doing too much work
I'm not sure this last one actually makes sense, but it's worth a check. Perhaps your Angular app is doing a crazy amount of JS work during the initial request, and it's maxing out your CPU. So the entire browser is stalling when you make the initial request.
How to check:
1. Go to **Performance** panel.
2. [Start recording](https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference#record-runtime).
3. Do the action that causes your app to make the initial request.
4. Stop recording.
5. Check the **CPU** chart. If it's completely maxed out, then your app is indeed doing a bunch of work.
[![CPU chart](https://i.stack.imgur.com/RKbZs.png)](https://i.stack.imgur.com/RKbZs.png)
Please leave a comment and let me know if any of these helped.
|
Troubles with NLTK bigram finder
I have a text file labeled "all.txt". It contains a regular English paragraph.
For some reason when I run this code:
```
import nltk
from nltk.collocations import *
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
# change this to read in your data
finder = BigramCollocationFinder.from_words(('all.txt'))
# only bigrams that appear 3+ times
#finder.apply_freq_filter(3)
# return the 10 n-grams with the highest PMI
print finder.nbest(bigram_measures.pmi, 10)
```
I get the following result:
```
[('.', 't'), ('a', 'l'), ('l', '.'), ('t', 'x'), ('x', 't')]
```
What am I doing wrong, since I am only getting letters? I am looking for words not letters!
Here is an example of what is in "all.txt", so you get an idea of what is being processed:
"and it 's not just democrats who oppose this plan . americans across the country have expressed their opposition to this plan .my democratic colleagues and i have a better plan that will strengthen the ethics rules to improve congressional accountability and to make sure that legislation is properly considered . the republican plan fails to close a loophole that allows legislation to be considered before members have read it ."
| The first problem is that you aren't actually reading the file in, you're just passing a string containing the file path to the function, and the second problem is that you need to use a tokenizer, first. To resolve the second problem:
```
from nltk.tokenize import word_tokenize
finder = BigramCollocationFinder.from_words(word_tokenize("This is a test sentence"))
print finder.nbest(bigram_measures.pmi, 10)
```
Yields `[('This', 'is'), ('a', 'test'), ('is', 'a'), ('test', 'sentence')]`
Note that you may want to use a different tokenizer--the tokenize package documentation will explain more about the various options.
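For instance, a regex-based tokenizer that drops punctuation might look like this (the pattern is just an illustration):

```
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')  # keeps word characters only, dropping punctuation
print tokenizer.tokenize("This is a test sentence.")
# ['This', 'is', 'a', 'test', 'sentence']
```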
In the case of the first, you can use something like:
```
with open('all.txt', 'r') as data_file:
    finder = BigramCollocationFinder.from_words(word_tokenize(data_file.read()))
```
|
What do you do to remain productive when working on your own?
I find working in isolation, on a piece of code that won't be seen by anyone else for weeks, draining. I'm looking for ideas to try to keep myself productive and motivated.
What do you do to remain motivated and productive, when given a long term programming task, and working on your own (for example, from home, without any team-mates or coworkers)?
| **Maintain a balance.**
Given something novel (e.g. playing a game, having a beer, etc.), we're able to focus and do that one thing for an extended amount of time. The only way to power through a mundane task (without overdosing on *coffee*) is to **maintain a balance**. *I say 'mundane' because if this were a task you were **really** passionate about, you wouldn't have meandered to site and asked this question.*
Suggestions:
- **Balance.** Work on the long-term project for an hour two and then reward yourself with something you enjoy. Embrace the break from the task. Repeat.
- **Long-term mindset**: thinking about the **awesome work you will be doing after** (this less interesting job) is invigorating.
- **Break your project down into small tasks**. Tasks that will only take a couple of hours to complete. *As you complete each of these small tasks, it'll give you the feeling of progression.*
|
Different probability density transformations due to Jacobian factor
In Bishop's *Pattern Recognition and Machine Learning* I read the following, just after the probability density $p(x\in(a,b))=\int\_a^bp(x)\textrm{d}x$ was introduced:
>
> Under a nonlinear change of variable, a probability density transforms
> differently from a simple function, due to the Jacobian factor. For
> instance, if we consider a change of variables $x = g(y)$, then a
> function $f(x)$ becomes $\tilde{f}(y) = f(g(y))$. Now consider a
> probability density $p\_x(x)$ that corresponds to a density $p\_y(y)$
> with respect to the new variable $y$, where the suffices denote the
> fact that $p\_x(x)$ and $p\_y(y)$ are different densities. Observations
> falling in the range $(x, x + \delta x)$ will, for small values of
> $\delta x$, be transformed into the range $(y, y + \delta y)$ where
> $p\_x(x)\delta x \simeq p\_y(y)\delta y$, and hence $p\_y(y) = p\_x(x) \left|\frac{dx}{dy}\right| = p\_x(g(y)) \, |g'(y)|$.
>
>
>
What is the Jacobian factor and what exactly does everything mean (maybe qualitatively)? Bishop says, that a consequence of this property is that the concept of the maximum of a probability density is dependent on the choice of variable. What does this mean?
To me this comes all a bit out of the blue (considering it's in the introduction chapter). I'd appreciate some hints, thanks!
| I suggest you read [the solution of Question 1.4](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/05/prml-web-sol-2009-09-08.pdf), which provides a good intuition.
In a nutshell, if you have an arbitrary function $ f(x) $ and two variables $x$ and $y$ which are related to each other by the function $x = g(y)$, then you can find the maximum of the function either by directly analyzing $f(x)$: $ \hat{x} = argmax\_x(f(x)) $, or via the transformed function $f(g(y))$: $\hat{y} = argmax\_y(f(g(y)))$. Not surprisingly, $\hat{x}$ and $\hat{y}$ will be related to each other as $\hat{x} = g(\hat{y})$ (here I assumed that $\forall{y}: g^\prime(y)\neq0$).
This is not the case for probability distributions. If you have a probability distribution $p\_x(x)$ and two random variables which are related to each other by $x=g(y)$, then there is no direct relation between $\hat{x} = argmax\_x(p\_x(x))$ and $\hat{y}=argmax\_y(p\_y(y))$. This happens because of the Jacobian factor, a factor that shows how the volume is relatively changed by a function such as $g(\cdot)$.
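To make this concrete, here is a standard example (not from Bishop's text): let $p\_x(x)$ be a Gaussian $\mathcal{N}(x|\mu,\sigma^2)$, whose mode is $\hat{x}=\mu$, and take the change of variables $x = g(y) = \ln y$ (so $y = e^x$). Then $p\_y(y) = p\_x(\ln y)\,|g'(y)| = \frac{1}{y\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln y-\mu)^2}{2\sigma^2}\right)$, which is the log-normal density, whose mode is $\hat{y} = e^{\mu-\sigma^2}$. If a density transformed like an ordinary function, the mode would simply map to $e^{\mu}$ (the value satisfying $\hat{x} = g(\hat{y})$); the extra factor $e^{-\sigma^2}$ comes entirely from the Jacobian $|g'(y)| = 1/y$, which is exactly why the location of the maximum depends on the choice of variable.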
|
How to convert render from three.js to .png file?
How would I convert a render to a .png image?
I've been looking around for awhile but nothing has worked.
| Here is a function I use and [a fiddle](https://jsfiddle.net/2pha/art388yv/) that shows it working.
```
function takeScreenshot() {
// For screenshots to work with WebGL renderer, preserveDrawingBuffer should be set to true.
// open in new window like this
var w = window.open('', '');
w.document.title = "Screenshot";
//w.document.body.style.backgroundColor = "red";
var img = new Image();
img.src = renderer.domElement.toDataURL();
w.document.body.appendChild(img);
// download file like this.
//var a = document.createElement('a');
//a.href = renderer.domElement.toDataURL().replace("image/png", "image/octet-stream");
//a.download = 'canvas.png'
//a.click();
}
```
|
Why is a single backslash shown when using quotes
I always thought that bash treats backslashes the same whether or not they are inside double quotes, but I was wrong:
```
[user@linux ~]$ echo "foo \ "
foo \
[user@linux ~]$ echo foo \ # Space after \
foo
```
So I thought backslashes are always printed when using double quotes, but:
```
[user@linux ~]$ echo "foo \" "
foo "
[user@linux ~]$ echo "foo \\ "
foo \
```
Why is the backslash in the first code line shown?
| Section [3.1.2.3 Double Quotes](https://www.gnu.org/software/bash/manual/bash.html#Double-Quotes) of the [GNU Bash manual](https://www.gnu.org/software/bash/manual/) says:
>
> The backslash retains its special meaning only when followed by one of
> the following characters: ‘`$`’, ‘```’, ‘`"`’, ‘`\`’, or
> `newline`. Within double quotes, backslashes that are followed by one
> of these characters are removed. Backslashes preceding characters
> without a special meaning are left unmodified. A double quote may be
> quoted within double quotes by preceding it with a backslash. If
> enabled, history expansion will be performed unless an ‘`!`’ appearing
> in double quotes is escaped using a backslash. The backslash preceding
> the ‘`!`’ is not removed.
>
>
>
Thus `\` in double quotes is treated differently both from `\` in single quotes and from `\` outside quotes. It is treated literally, except when it is in a position to escape a character that would otherwise have special meaning inside double quotes.
Note that sequences like `\'`, `\?`, and `\*` are treated literally and the backslash is not removed, because `'`, `?` and `*` already have no special meaning when enclosed in double quotes.
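For example, combining these rules in one command (a quick illustration):

```
[user@linux ~]$ echo "\$HOME \\ \' \*"
$HOME \ \' \*
```

Here `\$` and `\\` lose the backslash because `$` and `\` are special inside double quotes, while `\'` and `\*` keep it because `'` and `*` are not.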
|
Git -- Merge then Revert then trying to Merge again
Git can be cryptic sometimes and the documentation out there even more so. My team and I are faced with a very particular situation. For reference, we have a production ready master branch and a develop branch. Our branching strategy is as follows:
- branch off master to create feature branch
- merge feature into develop for integration and functional testing
- submit merge request to master when feature is done
One of the devs accidentally merged develop into his feature before merging it into master, so there was a bunch of unvetted code that went into feature as well. I reverted that merge to master, then the dev fixed his branch to remove all the unvetted code. However, when trying to merge his branch into master again, it threw an error. It seems to be because master already had the feature branch's history so it wouldn't accept it again.
What is the proper way to handle this situation.
P.S. - Trying to wrap my head around the difference between merging and rebasing. I understand that merge pulls in code changes and leaves everything else intact, creating a new history for the commit. However, how does rebase work? My understanding is that it sets one branch to exactly the same state as another. Is this correct?
| This is explained very well in the Git Documentation itself, under [revert a faulty merge](https://github.com/git/git/blob/master/Documentation/howto/revert-a-faulty-merge.txt). Quoting Linus:
>
> Reverting a regular commit just effectively undoes what that commit
> did, and is fairly straightforward. But reverting a merge commit also
> undoes the *data* that the commit changed, but it does absolutely
> nothing to the effects on *history* that the merge had.
>
>
> So the merge will still exist, and it will still be seen as joining
> the two branches together, and future merges will see that merge as
> the last shared state - and the revert that reverted the merge brought
> in will not affect that at all.
>
>
> So a "revert" undoes the data changes, but it's very much *not* an
> "undo" in the sense that it doesn't undo the effects of a commit on
> the repository history.
>
>
> So if you think of "revert" as "undo", then you're going to always
> miss this part of reverts. Yes, it undoes the data, but no, it doesn't
> undo history.
>
>
>
There are two options to get the changes of the branch "again" (both sketched below):
1. Revert the revert
2. Rebase the branch without the faulty code, effectively creating new history, i.e. completely new commits and then merge this new branch (it won't share the same history, so it looks "new" when merging)
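A rough sketch of both options (the commit hash and branch names are placeholders):

```
# Option 1: revert the revert, restoring the data changes of the original merge
git checkout master
git revert <sha-of-the-revert-commit>

# Option 2: rewrite the cleaned-up feature branch so its commits are new, then merge
git checkout feature
git rebase --force-rebase master   # -f creates new commits even if a fast-forward were possible
git checkout master
git merge feature
```

Note that option 2 rewrites the feature branch's history, so it should only be done if nobody else is building on those exact commits.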
|
C# Creating and using Functions
I need help with C# programming; I am new to it and I come from a C background. I have a Console Application like this:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace Add_Function
{
class Program
{
static void Main(string[] args)
{
int a;
int b;
int c;
Console.WriteLine("Enter value of 'a':");
a = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("Enter value of 'b':");
b = Convert.ToInt32(Console.ReadLine());
// why can't I use it this way?
c = Add(a, b);
Console.WriteLine("a + b = {0}", c);
}//END Main
public int Add(int x, int y)
{
int result = x + y;
return result;
}//END Add
}//END Program
}//END Add_Function
```
It gives me this error on the line that I call Add():
>
> An object reference is required for the non-static field, method, or property 'Add\_Function.Program.Add(int, int)'
>
>
>
Can anyone please explain to me why I have this problem. Is this because the architecture of C# is different than C, and the way I call it is wrong? Thanks.
| *Note: in C# the term "function" is often replaced by the term "method". For the sake of this question there is no difference, so I'll just use the term "function".*
The other answers have already given you a quick way to *fix* your problem (just make `Add` a `static` function), but I'd like to explain why.
C# has a fundamentally different design paradigm than C. That paradigm is called **object-oriented programming** (OOP). Explaining all the differences between OOP and functional programming is beyond the scope of this question, but here's the short version as it applies to you.
Writing your program in C, you would have created a function that adds two numbers, and that function would exist independently and be callable from anywhere. In C# most functions don't exist independently; instead, they exist in the context of an object. In your example code, only an instance (an object) of the class `Program` knows how to perform `Add`. Said another way, you have to create an instance of `Program`, and then ask `Program` to perform an `Add` for you.
The solutions that people gave you, using the `static` keyword, route around that design. Using the `static` keyword is kind of like saying, "Hey, this function I'm defining doesn't need any context/state, it can just be called." Since your `Add` function is very simple, this makes sense. As you start diving deeper into OOP, you're going to find that your functions get more complicated and rely on knowing their state/context.
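Applied to the code in the question, the quick fix is just the `static` keyword (method shown here in isolation):

```
// Static, so the static Main method can call it without creating a Program instance.
public static int Add(int x, int y)
{
    int result = x + y;
    return result;
}
```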
My advice: Pick up an OOP book and get ready to switch your brain from functional programming to OOP programming. You're in for a ride.
|
c# ToString() - printing child items
I have a class (e.g. `Foo`) which overrides the ToString method to print its internal state. This class has a collection of `Foo` - its child elements. Children can also have children, etc.
I'm looking for a solution to implement ToString() in such a way that it would indent child elements automatically, e.g.:
```
Parent Foo
Child1 Foo
Child1.1 Foo
Child2 Foo
Child2.1 Foo
Child2.2 Foo
```
| The solution is to use `ToString()` only as the "entry point" that is called on the root of the subtree to output. That `ToString()` method can call a private `ToIndentedString(int)` method that takes the current indentation level as an argument. That method will then return the string representation of the current node at the specified indentation, plus the string representations of all child nodes at the indentation + 1 etc.
```
public override string ToString()
{
    return ToIndentedString(0);
}

private string ToIndentedString(int indentation)
{
    StringBuilder result = new StringBuilder();
    result.Append(' ', indentation);
    result.Append(Name); // the node's own description; "Name" is a placeholder for whatever member your Foo exposes
    result.Append(Environment.NewLine);
    foreach (Foo child in children) {
        result.Append(child.ToIndentedString(indentation + 1));
    }
    return result.ToString();
}
```
|
MVC 3 - access for specific user only
In my web application registered users can add new content and edit it later. I want only the content's author to be able to edit it. Is there any smart way of doing this other than manually writing code in all the action methods that checks if the logged user is the same as the author? Any attribute that I could use for the whole controller?
|
>
> Any attribute that I could use for the whole controller?
>
>
>
Yes, you could extend the `Authorize` attribute with a custom one:
```
public class AuthorizeAuthorAttribute : AuthorizeAttribute
{
protected override bool AuthorizeCore(HttpContextBase httpContext)
{
var isAuthorized = base.AuthorizeCore(httpContext);
if (!isAuthorized)
{
// the user is either not authenticated or
// not in roles => no need to continue any further
return false;
}
// get the currently logged on user
var username = httpContext.User.Identity.Name;
// get the id of the article that he is trying to manipulate
// from the route data (this assumes that the id is passed as a route
// data parameter: /foo/edit/123). If this is not the case and you
// are using query string parameters you could fetch the id using the Request
var id = httpContext.Request.RequestContext.RouteData.Values["id"] as string;
// Now that we have the current user and the id of the article he
// is trying to manipualte all that's left is go ahead and look in
// our database to see if this user is the owner of the article
return IsUserOwnerOfArticle(username, id);
}
private bool IsUserOwnerOfArticle(string username, string articleId)
{
throw new NotImplementedException();
}
}
```
and then:
```
[HttpPost]
[AuthorizeAuthor]
public ActionResult Edit(int id)
{
... perform the edit
}
```
|
Given that p is a pointer is "p > nullptr" well-formed?
Given a pointer `p`:
```
char *p ; // Could be any type
```
assuming `p` is properly initialized is the following well-formed:
```
if (p > 0) // or p > nullptr
```
More generally is it well-formed to use a relational operator when one operand is a pointer and the other is a null pointer constant?
| In C++14 this code is ill-formed but prior to the C++14 this was well-formed code(*but the result is unspecified*), as [defect report 583: Relational pointer comparisons against the null pointer constant](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3714.html#583) notes:
>
> In C, this is ill-formed (cf C99 6.5.8):
>
>
>
> ```
> void f(char* s) {
> if (s < 0) { }
> }
>
> ```
>
> ...but in C++, it's not. Why? Who would ever need to write (s > 0)
> when they could just as well write (s != 0)?
>
>
> This has been in the language since the ARM (and possibly earlier);
> apparently it's because the pointer conversions (4.10 [conv.ptr]) need
> to be performed on both operands whenever one of the operands is of
> pointer type. So it looks like the "null-ptr-to-real-pointer-type"
> conversion is hitching a ride with the other pointer conversions.
>
>
>
In C++14 this was made ill-formed when [N3624](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3624.html) was [applied to the draft C++14 standard](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3692.html), which is a revision of `N3478`. The proposed resolution to `583` notes:
>
> This issue is resolved by the resolution of issue 1512.
>
>
>
and issue `1512` proposed resolution is `N3478`(*N3624 is a revision of N3478*):
>
> The proposed wording is found in document N3478.
>
>
>
**Changes to section 5.9 from C++11 to C++14**
Section `5.9` *Relational operators* changed a lot between the [C++11 draft standard](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf) and the [C++14 draft standard](https://github.com/cplusplus/draft/blob/b7b8ed08ba4c111ad03e13e8524a1b746cb74ec6/papers/N3936.pdf), the following highlights the most relevant differences (*emphasis mine going forward*), from paragraph `1`:
>
> The operands shall have arithmetic, enumeration, or pointer type, **or
> type std::nullptr\_t**.
>
>
>
changes to:
>
> The operands shall have arithmetic, enumeration, or pointer type
>
>
>
So the type [std::nullptr\_t](http://en.cppreference.com/w/cpp/types/nullptr_t) is no longer a valid operand but that still leaves `0` which is a *null pointer constant* and therefore can be converted(*section `4.10`*) to a *pointer type*.
This is covered by paragraph `2` which in C++11 says:
>
> [...]**Pointer conversions** (4.10) and qualification conversions (4.4)
> are performed on pointer operands (**or on a pointer operand and a null
> pointer constant, or on two null pointer constants, at least one of
> which is non-integral**) to bring them to their composite pointer type.
> If one operand is a null pointer constant, the composite pointer type
> is std::nullptr\_t if the other operand is also a null pointer constant
> or, if the other operand is a pointer, the type of the other
> operand.[...]
>
>
>
this explicitly provides an exception for a *null pointer constant* operand, changes to the following in C++14:
>
> The usual arithmetic conversions are performed on operands of
> arithmetic or enumeration type. **If both operands are pointers**, pointer
> conversions (4.10) and qualification conversions (4.4) are performed
> to bring them to their composite pointer type (Clause 5). **After
> conversions, the operands shall have the same type.**
>
>
>
In which there is no case that allows `0` to be converted to a *pointer type*. Both operands must be pointers in order for pointer conversions to be applied and it is required that the operands have the same type after conversions. Which is not satisfied in the case where one operand is a *pointer type* and the other is a *null pointer constant* `0`.
**What if both operands are pointers but one is a null pointer value?**
R Sahu asks, is the following code well-formed?:
```
char* p = "";
char* q = nullptr;
if ( p > q ) {}
```
Yes, in C++14 this code is well formed, both `p` and `q` are pointers but the result of the comparison is unspecified. The defined comparisons for two pointers is set out in paragraph `3` and says:
>
> Comparing pointers to objects is defined as follows:
>
>
> - If two pointers point to different elements of the same array, or to subobjects thereof, the pointer to the element with the higher
> subscript compares greater.
> - If one pointer points to an element of an array, or to a subobject thereof, and another pointer points one past the last element of the
> array, the latter pointer compares greater.
> - If two pointers point to different non-static data members of the same object, or to subobjects of such members, recursively, the
> pointer to the later declared member compares greater provided the two
> members have the same access control (Clause 11) and provided their
> class is not a union.
>
>
>
Null pointers values are not defined here and later on in paragraph `4` it says:
>
> [...]Otherwise, the result of each of the operators is unspecified.
>
>
>
In C++11 it specifically makes the results unspecified in paragraph `3`:
>
> If two pointers p and q of the same type point to different objects
> that are not members of the same object or elements of the same array
> or to different functions, **or if only one of them is null, the results
> of p<q, p>q, p<=q, and p>=q are unspecified.**
>
>
>
|
List all NSPasteBoard names on macOS
Is there a way to get a list of all the NSPasteBoards and their names on the current system?
I'm wondering if there's some function available (even if private API) to achieve this. Thank you!
| No, there's no function to do this, even with private API.
The pboard program (`/usr/libexec/pboard`) runs as a daemon and manages all shared pasteboards. The `NSPasteboard` class talks to the pboard daemon using XPC, so to get a list of all pasteboards, pboard would need to handle some XPC message by responding with a list of pasteboard names.
The pboard program is very simple: it initializes various things (logs, sandbox, dispatch queue, mach service) and then calls `__CFPasteboardStartServicingConnection`, which is actually defined in the CoreFoundation framework. This function ultimately handles each incoming XPC request by calling `_CFHandlePasteboardXPCEvent`.
Looking at `_CFHandlePasteboardXPCEvent` in a disassembler (I used Hopper), we can see the complete list of requests supported by pboard:
```
com.apple.pboard.create
com.apple.pboard.get-counts
com.apple.pboard.barrier
com.apple.pboard.begin-generation
com.apple.pboard.has-entries
com.apple.pboard.register-entries
com.apple.pboard.request-data
com.apple.pboard.refresh-cache
com.apple.pboard.release
com.apple.pboard.unique-promise-file
com.apple.pboard.resolve-all-promises
com.apple.pboard.resolve-pboard-promises
com.apple.pboard.set-data-flags
com.apple.pboard.make-generation-local
```
None of these has an obvious name like `com.apple.pboard.get-pboard-names`. Looking at how they're handled, the first thing done by almost all of them is to get a pasteboard name from the event and look up or create the pasteboard with that name.
The only request that I found that doesn't immediately involve looking up a pasteboard by name is `com.apple.pboard.resolve-all-promises`, which in fact sends the intriguing message `+[_CFPasteboardStore copyAllPasteboards]`. However, it doesn't send the result, or in fact any result, back to the client, as far as I can tell. And that is the only use of the `copyAllPasteboards` selector in CoreFoundation.
So, unless you attach to the pboard daemon with a debugger and poke around, you won't find a list of all existing pasteboards. (Attaching to pboard with a debugger requires first disabling SIP, which requires booting your recovery partition.)
|
Physical simulation of realistic two-legged skeleton
I wish to utilize bullet-physics or a similar physics engine to create a realistic skeleton simulation of a human-like body with two legs. That is, create a simulation of "a body" made of a round mass on top of two "legs", where each leg is made of 3 solid pieces connected through 3 joints, and each joint has some degrees of freedom and a limited movement range in each direction, similar to the human hip, knee and ankle.
I aim for a realistic model, and hence it will 'stand' only if all joints are balanced correctly and it will fall otherwise.
Any directions, suggestions or pointers to existing tutorials or resources are appreciated! This looks like an awful lot of work to do from scratch...
| I'm working on similar code at the moment. My approach is to use the Bullet Physics Rag Doll Demo as a starting point. It has a rag doll with body parts connected by joints.
I'm then using the Bullet Physics Dynamic Control Demo to learn to bend the joints. The challenging part at the moment is setting all the parameters.
I suggest you learn how to create two rigid bodies connected by a constraint and then to activate the constraint motor to bend the joint.
The following is some code that I'm working with to learn how rigid bodies and constraints work in Bullet Physics. The code creates two blocks connected by a hinge constraint. The update function bends the hinge constraint slowly over time.
Now that I've got this I'll be going back to the Rag Doll and adjusting the joints.
```
class Simple
{
private:
btScalar targetAngle;
btCollisionShape* alphaCollisionShape;
btCollisionShape* bravoCollisionShape;
btRigidBody* alphaRigidBody;
btRigidBody* bravoRigidBody;
btHingeConstraint* hingeConstraint;
btDynamicsWorld* dynamicsWorld;
public:
~Simple( void )
{
}
btRigidBody* createRigidBody( btCollisionShape* collisionShape,
btScalar mass,
const btTransform& transform ) const
{
// calculate inertia
btVector3 localInertia( 0.0f, 0.0f, 0.0f );
collisionShape->calculateLocalInertia( mass, localInertia );
// create motion state
btDefaultMotionState* defaultMotionState
= new btDefaultMotionState( transform );
// create rigid body
btRigidBody::btRigidBodyConstructionInfo rigidBodyConstructionInfo(
mass, defaultMotionState, collisionShape, localInertia );
btRigidBody* rigidBody = new btRigidBody( rigidBodyConstructionInfo );
return rigidBody;
}
void Init( btDynamicsWorld* dynamicsWorld )
{
this->targetAngle = 0.0f;
this->dynamicsWorld = dynamicsWorld;
// create collision shapes
const btVector3 alphaBoxHalfExtents( 0.5f, 0.5f, 0.5f );
alphaCollisionShape = new btBoxShape( alphaBoxHalfExtents );
//
const btVector3 bravoBoxHalfExtents( 0.5f, 0.5f, 0.5f );
bravoCollisionShape = new btBoxShape( bravoBoxHalfExtents );
// create alpha rigid body
const btScalar alphaMass = 10.0f;
btTransform alphaTransform;
alphaTransform.setIdentity();
const btVector3 alphaOrigin( 54.0f, 0.5f, 50.0f );
alphaTransform.setOrigin( alphaOrigin );
alphaRigidBody = createRigidBody( alphaCollisionShape, alphaMass, alphaTransform );
dynamicsWorld->addRigidBody( alphaRigidBody );
// create bravo rigid body
const btScalar bravoMass = 1.0f;
btTransform bravoTransform;
bravoTransform.setIdentity();
const btVector3 bravoOrigin( 56.0f, 0.5f, 50.0f );
bravoTransform.setOrigin( bravoOrigin );
bravoRigidBody = createRigidBody( bravoCollisionShape, bravoMass, bravoTransform );
dynamicsWorld->addRigidBody( bravoRigidBody );
// create a constraint
const btVector3 pivotInA( 1.0f, 0.0f, 0.0f );
const btVector3 pivotInB( -1.0f, 0.0f, 0.0f );
btVector3 axisInA( 0.0f, 1.0f, 0.0f );
btVector3 axisInB( 0.0f, 1.0f, 0.0f );
bool useReferenceFrameA = false;
hingeConstraint = new btHingeConstraint(
*alphaRigidBody,
*bravoRigidBody,
pivotInA,
pivotInB,
axisInA,
axisInB,
useReferenceFrameA );
// set constraint limit
const btScalar low = -M_PI;
const btScalar high = M_PI;
hingeConstraint->setLimit( low, high );
// add constraint to the world
const bool isDisableCollisionsBetweenLinkedBodies = false;
dynamicsWorld->addConstraint( hingeConstraint,
isDisableCollisionsBetweenLinkedBodies );
}
void Update( float deltaTime )
{
alphaRigidBody->activate();
bravoRigidBody->activate();
bool isEnableMotor = true;
btScalar maxMotorImpulse = 1.0f; // 1.0f / 8.0f is about the minimum
hingeConstraint->enableMotor( isEnableMotor );
hingeConstraint->setMaxMotorImpulse( maxMotorImpulse );
targetAngle += 0.1f * deltaTime;
hingeConstraint->setMotorTarget( targetAngle, deltaTime );
}
};
```
|
Animated wallpapers for Windows 7?
Is it possible to have an animated wallpaper for a Windows 7 64 bit computer?
| Windows 7 supports video backgrounds and even offers some samples of its own.
UPDATE new (old) info:
DreamScene requires the Ultimate Edition of Windows Vista. I don't run it myself, but I did when I used Vista, and had conflated the memories in my mind. Apologies.
However, you can still install Dreamscene in Windows 7, though it does require registry edits and such: <http://windows7center.com/tutorials/how-to-enable-dreamscene-in-windows-7/>
This solution will allow you to right-click on .mpg and .wmv files (any video file that will play in Windows Media Player) and select "Set as desktop background"; the video will then play on an endless loop.
In my experience with Vista, this did not slow the computer down so long as the computer was capable of handling Aero effects easily.
|
Java hashcode() strings collision
I do not know much about hashcodes. I found this code which prints the collisions.
Can you please tell me what collisions are and how to reduce them?
Why should we use hashcodes?
```
public static int getHash(String str, int limit)
{
int hashCode = Math.abs(str.hashCode()%(limit));
return hashCode;
}
/**
* @param args
*/
public static void main(String[] args)
{
int hashLimit = 10000;
int stringsLimit = 10000;
String[] arr = new String[hashLimit];
List<String> test = new ArrayList<String>();
Random r = new Random(2);
for ( int i = 0 ; i < stringsLimit ; i++ )
{
StringBuffer buf = new StringBuffer("");
for ( int j = 0 ; j < 10 ; j++ )
{
char c = (char)(35+60*r.nextDouble());
buf.append(c);
}
test.add(buf.toString());
//System.out.println(buf.toString());
}
int collisions = 0;
for ( String curStr : test )
{
int hashCode = getHash(curStr,hashLimit);
if ( arr[hashCode] != null && !arr[hashCode].equals(curStr) )
{
System.out.println("collision of ["+arr[hashCode]+"] ("+arr[hashCode].hashCode()+" = "+hashCode+") with ["+curStr+"] ("+curStr.hashCode()+" = "+hashCode+")");
collisions++;
}
else
{
arr[hashCode] = curStr;
}
}
System.out.println("Collisions: "+collisions);
}
```
|
>
> Can you please tell me what collisions are and how to reduce them?
>
>
>
Collisions are when two non-equal objects have the same hash code. They're a fact of life - you need to deal with it.
>
> Why should we use hashcodes?
>
>
>
Because they make it quick to look up values by key, basically. A hash table can use a hash code to very quickly get the set of possible key matches down to a *very small set* (often just one), at which point you need to check for *actual* key equality.
You should *never* assume that two hash codes being equal means the objects they were derived from are equal. Only the reverse is true: assuming a correct implementation, if two objects give *different* hash codes, then they are *not* equal.
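For a quick illustration, here is a well-known pair of strings that collide (this is a property of `String.hashCode()`, nothing specific to your code):

```
public class CollisionDemo {
    public static void main(String[] args) {
        // Two different strings that happen to share the same hash code.
        System.out.println("Aa".hashCode());   // 2112
        System.out.println("BB".hashCode());   // 2112
        System.out.println("Aa".equals("BB")); // false -> a collision, not equality
    }
}
```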
|
count() query taking more than 20 seconds
Table\_a = 7022536 rows
Table\_b (GTT) = 5601 rows
Query:
```
SELECT COUNT (a.ssn_head)
FROM table_a a, table_b b
WHERE b.hoh = a.head AND a.flag = 'Y';
```
takes 20+ seconds to return 17214 records.
Explain plan is:
```
Plan hash value: 1901401324
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | C
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 1
| 1 | SORT AGGREGATE | | 1 | 25 |
|* 2 | HASH JOIN | | 114K| 2801K| 1
| 3 | TABLE ACCESS FULL| table_b | 49188 | 528K|
| 4 | REMOTE | table_a | 7022K| 93M| 1
--------------------------------------------------------------------------------
```
`table_b` (GTT) has no indices on it... I think since the query is going through all of table\_b it will always do a full table scan, right?
`table_a` has index on `head`
What other way is there to make this query run faster?
| Is hoh in table\_b unique? If so, then
```
SELECT COUNT (a.ssn_head)
FROM table_a a, table_b b
WHERE b.hoh = a.head AND a.flag = 'Y';
```
is logically equivalent to
```
SELECT COUNT (a.ssn_head)
FROM table_a a
WHERE a.flag = 'Y'
and a.head in (select hoh FROM table_b);
```
Given that the larger data volume is on the remote server, I'd suggest pushing the query over there with the DRIVING\_SITE hint.
```
SELECT /*+DRIVING_SITE (r) */ COUNT (r.col_a)
FROM owner.table@other r
WHERE r.col_b in (select l.col_c FROM local l);
```
That should work with a synonym instead of table@dblink. But it probably won't work with a view.
|
Construct a custom object whose properties will be enumerated in the order in which they were defined
I have an object that I created in PowerShell to fetch info from AWS.
```
$list = New-Object -TypeName PSObject -Property @{
'name' = ($instance.Tags | Where-Object {$_.Key -eq 'Name'}).Value
'baseAmi' = ""
'patchDate' = ""
'baseName' = ""
'owner' = ($instance.Tags | Where-Object {$_.Key -eq 'Owner'}).Value
'instanceID' = $instance.InstanceID
'imageID' = $instance.ImageId
'env' = ($instance.Tags | Where-Object {$_.Key -eq 'EnvName'}).Value
'instanceState' = $instance.State.Name
}
$baseAmi = Get-EC2Image -ImageId $list.imageID
$list.baseAmi = ($baseAmi.Tags | Where-Object{$_.Key -eq 'BaseAmi'}).Value
$baseAmi = Get-Ec2Image -ImageId $list.baseAmi
$list.patchDate = ($baseAmi.Tags | Where-Object{$_.Key -eq 'PatchDate'}).Value
$list.baseName = ($baseAmi.Tags | Where-Object{$_.Key -eq 'Name'}).Value
```
I would like to output the fields of the object in the following order:
```
baseName,baseAmi,patchDate,name,owner,instanceID,env,instanceState
```
This object is then exported as a CSV. I basically need the CSV headers to be organized in that order when viewing it in Excel.
|
Assuming **PSv3+**, always **use `[pscustomobject] @{ ... }` to create a custom object whose properties should be enumerated in the same order as they were defined.**
```
$customObj = [pscustomobject] @{
name = $null
baseAmi = $null
patchDate = $null
baseName = $null
owner = $null
instanceID = $null
imageID = $null
env = $null
instanceState = $null
}
```
You can verify that the resulting `[pscustomobject]` instance enumerates its properties in definition order as follows:
```
PS> $customObj | Format-List
name :
baseAmi :
patchDate :
baseName :
owner :
instanceID :
imageID :
env :
instanceState :
```
Note that this only works with **hashtable *literals* preceded by `[pscustomobject]`**, which in PowerShell **v3+** is **syntactic sugar** to make PowerShell construct the **custom-object instance with the hashtable entries *in the order specified***, even though, in isolation, the entries of a hashtable literal (`@{ ... }`) are inherently *unordered* (their ordering is an implementation detail).
You can think of `[pscustomobject] @{ ... }` as an implicit shortcut for `[pscustomobject] [ordered] @{ ... }`, where `[ordered] @{ ... }` is PSv3+ syntax for a hashtable with *ordered* entries (keys), whose true type is `[System.Collections.Specialized.OrderedDictionary]`.
---
As for **what you tried**:
Unlike the syntactic sugar discussed above, **combining `New-Object` with a hashtable literal (`@{ ... }`) does *not* guarantee that the resulting object's properties are enumerated in the same order** as the (inherently unordered) input hashtable's entries.
Both `New-Object -TypeName PSObject -Property @{ ... }` and `New-Object -TypeName PCustomSObject -Property @{ ... }` create a `[pscustomobject]` instance, whose properties, due the property definitions having been provided as a *hashtable literal* - in the absence of syntactic sugar - are defined in the unpredictable order in which the hashtable literal's entries are enumerated:
```
> New-Object -TypeName PSCustomObject -Property @{
'name' = $null
'baseAmi' = $null
'patchDate' = $null
'baseName' = $null
'owner' = $null
'instanceID' = $null
'imageID' = $null
'env' = $null
'instanceState' = $null
} | Format-List
imageID :
instanceID :
owner :
env :
patchDate :
name :
baseName :
baseAmi :
instanceState :
```
As you can see, the properties are enumerated in no particular order.
---
You *could* pass an *ordered* hashtable instead (PSv3+), but that amounts to the much more verbose - and less efficient - equivalent of the `[pscustomobject] @{ ... }` syntactic sugar solution above:
```
New-Object -TypeName PSCustomObject -Property ([ordered] @{
'name' = $null
'baseAmi' = $null
'patchDate' = $null
'baseName' = $null
'owner' = $null
'instanceID' = $null
'imageID' = $null
'env' = $null
'instanceState' = $null
}) | Format-List
```
|
Tidy up number counting code
I am hoping someone can help me tidy up my code (for a practical for university). The practical is to use the random number generator to produce 100 integers (all between 1 and 10) and store them in an array. We then need to scan the array and print out how often each number appears. Following this, we needed to create a horizontal bar chart using asterisks to show how often each number appears, before finally printing out which number appeared the most.
My code works and produces the correct results:
```
import java.util.Random;
public class Practical4_Assessed
{
public static void main(String[] args)
{
Random numberGenerator = new Random ();
int[] arrayOfGenerator = new int[100];
int[] countOfArray = new int[10];
int count;
for (int countOfGenerator = 0; countOfGenerator < 100; countOfGenerator++)
{
count = numberGenerator.nextInt(10);
countOfArray[count]++;
arrayOfGenerator[countOfGenerator] = count + 1;
}
int countOfNumbersOnLine = 0;
for (int countOfOutput = 0; countOfOutput < 100; countOfOutput++)
{
if (countOfNumbersOnLine == 10)
{
System.out.println("");
countOfNumbersOnLine = 0;
countOfOutput--;
}
else
{
if (arrayOfGenerator[countOfOutput] == 10)
{
System.out.print(arrayOfGenerator[countOfOutput] + " ");
countOfNumbersOnLine++;
}
else
{
System.out.print(arrayOfGenerator[countOfOutput] + " ");
countOfNumbersOnLine++;
}
}
}
System.out.println("");
System.out.println("");
// This section
for (int countOfNumbers = 0; countOfNumbers < countOfArray.length; countOfNumbers++)
System.out.println("The number " + (countOfNumbers + 1) + " occurs " + countOfArray[countOfNumbers] + " times.");
System.out.println("");
for (int countOfNumbers = 0; countOfNumbers < countOfArray.length; countOfNumbers++)
{
if (countOfNumbers != 9)
System.out.print((countOfNumbers + 1) + " ");
else
System.out.print((countOfNumbers + 1) + " ");
for (int a = 0; a < countOfArray[countOfNumbers]; a++)
{
System.out.print("*");
}
System.out.println("");
}
// To this section
System.out.println("");
int max = 0;
int test = 0;
for (int counter = 0; counter < countOfArray.length; counter++)
{
if (countOfArray[counter] > max)
{
max = countOfArray[counter];
test = counter + 1;
}
}
System.out.println("The number that appears the most is " + test);
}
}
```
However, I know that the section that tells how often a number occurs and the section that prints out the asterisks both start with the same for statement (I have marked them off in the code with comments). Can anyone advise me how I could do this using just one for statement? It seems quite straightforward, and yet I just can't get it to work!
Additionally, I am sure there are plenty of other areas that the code could be improved on. Feel free to suggest any improvements!
| There is much to say about this piece of code. I'll just focus on some parts of it.
**One long method**
There is one long main method, which makes it hard to get an overview of what is happening. Consider breaking the main method into several methods that each do one part of the job. Something like:
```
public static void main(String...args){
int[] randomNumbers = generateNumbers();
String result = format(randomNumbers);
result += formatNumberOfOccurences(randomNumbers);
    result += formatGraph(randomNumbers);
System.out.println(result);
}
// 4 methods omitted....
```
In this way you get smaller chunks that are easier to understand. Each of these methods except the number generator is also easy to unit test, since the outcome is deterministic for a given argument. The code you have now is hard to unit test.
**Printing with System.out**
This is handy in many ways. But this is the main reason you need two loops for the code you have highlighted. Since you want to print the occurrence count first and then the graph you need to do it in separate loops. If you instead store the intermediate result in a local variable then you can do it in one loop. Something like:
```
String occurrencesReport = "";
String graph = "";
for (int countOfNumbers = 0; countOfNumbers < countOfArray.length; countOfNumbers++)
{
    occurrencesReport += "The number " + (countOfNumbers + 1) +
                         " occurs " + countOfArray[countOfNumbers] + " times.\n";
if (countOfNumbers != 9)
graph += (countOfNumbers + 1) + " ";
else
graph += (countOfNumbers + 1) + " ";
for (int a = 0; a < countOfArray[countOfNumbers]; a++)
{
graph += "*";
}
graph += "\n";
}
System.out.println(occurrencesReport);
System.out.println(graph);
```
**Readability example**
This little piece:
```
if (countOfNumbers != 9)
graph += (countOfNumbers + 1) + " ";
else
graph += (countOfNumbers + 1) + " ";
```
can be simplified like so:
```
graph += (countOfNumbers + 1) + " ";
if (countOfNumbers != 9) graph += " ";
```
this makes it easier to understand what is going on. (On a side note - most people dislike the absence of curly braces in this code snippet. If you add a statement to your if-block it will actually end up outside the block if the curly braces are omitted. I tend to write ifs on one line when they are short. But people don't like that either....) This example can be applied to at least one more piece of the code where there is unnecessary repetition.
Better stop now - just some bits and pieces I hope you can use to improve!
|
indent python file (with pydev) in eclipse
I'm a newbie in Eclipse. I want to indent all the lines of my code and format the open file by pressing a shortcut or something like that...
I know about CTRL+SHIFT+F (but it doesn't actually work in PyDev!!)
I've been searching for hours with no success. Is there any way to do that in Eclipse, kind of like CTRL+K,D in Visual Studio, which formats and indents all the source code lines automatically?
| I ... don't think this question makes sense. Indentation is syntax in Python. It doesn't make sense to have your IDE auto-indent your code. If it's not indented properly already, it doesn't work, and the IDE can't know where your indentation blocks begin and end. Take, for example:
```
# Valid Code
for i in range(10):
b = i
for j in range(b):
c = j
# Also Valid Code.
for i in range(10):
b = i
for j in range(b):
c = j
```
There's no possible way that the IDE can know which of those is the correct version, or what your intent is. If you're going to write Python code, you're going to have to learn to manage the indentation. There's no way to avoid it, and expecting the IDE to magically clean it up and still get the desired result out of it is pretty much impossible.
Further example:
```
# Valid Code.
outputData = []
for i in range(100):
outputData.append(str(i))
print ''.join(outputData)
# Again, also valid code, wildly different behavior.
outputData = []
for i in range(100):
outputData.append(str(i))
print ''.join(outputData)
```
The first will produce a list of strings, then print the joined result to the console one time. The second will still produce a list of strings, but prints the cumulative joined result for each iteration of the loop - 100 print statements. Both are 100% syntactically correct. There's no problem with them. Either of them could be what the developer wanted. An IDE can't "know" which is correct. It could very easily, and incorrectly, change the first version into the second. Because the language uses indentation as syntax, there is no way to configure an IDE to perform this kind of formatting for you.
|
Is virtual table creation thread safe?
Please let me begin by saying that I know it is bad practice to call virtual functions from within a constructor/destructor.
However, the behavior in doing so, although it might be confusing or not what the user is expecting, is still well defined.
```
struct Base
{
Base()
{
Foo();
}
virtual ~Base() = default;
virtual void Foo() const
{
std::cout << "Base" << std::endl;
}
};
struct Derived : public Base
{
virtual void Foo() const
{
std::cout << "Derived" << std::endl;
}
};
int main(int argc, char** argv)
{
Base base;
Derived derived;
return 0;
}
Output:
Base
Base
```
Now, back to my real question. What happens if a user calls a virtual function from within the constructor from a different thread? Is there a race condition? Is it undefined?
Or to put it in other words: is the compiler's setting of the vtable thread-safe?
Example:
```
struct Base
{
Base() :
future_(std::async(std::launch::async, [this] { Foo(); }))
{
}
virtual ~Base() = default;
virtual void Foo() const
{
std::cout << "Base" << std::endl;
}
std::future<void> future_;
};
struct Derived : public Base
{
virtual void Foo() const
{
std::cout << "Derived" << std::endl;
}
};
int main(int argc, char** argv)
{
Base base;
Derived derived;
return 0;
}
Output:
?
```
| First off a few excerpts from the standard that are relevant in this context:
[[defns.dynamic.type]](http://eel.is/c++draft/defns.dynamic.type)
>
> type of the most derived object to which the glvalue refers
> [Example: If a pointer `p` whose static type is "pointer to class `B`" is pointing to an object of class `D`, derived from `B`, the dynamic type of the expression `*p` is "`D`". References are treated similarly. — end example]
>
>
>
[[intro.object] 6.7.2.1](http://eel.is/c++draft/intro.object#1)
>
> [..] An object has a type. Some objects are polymorphic; the implementation generates information associated with each such object that makes
> it possible to determine that object's type during program execution.
>
>
>
[[class.cdtor] 11.10.4.4](http://eel.is/c++draft/class.cdtor#4)
>
> Member functions, including virtual functions, can be called during construction or destruction. When a virtual function is called directly or indirectly from a constructor or from a destructor, including during the construction or destruction of the class's non-static data members, and the object to which the call applies is the object (call it x ) under construction or destruction, **the function called is the final overrider in the constructor's or destructor's class** and not one overriding it in a more-derived class. [..]
>
>
>
As you wrote, it is clearly defined how virtual function calls in the constructor/destructor work - they depend on the *dynamic type* of the object, and the dynamic type information associated with the object, and that information *changes* in the course of the execution. It is not relevant what kind of pointer you are using to "look at the object". Consider this example:
```
struct Base {
Base() {
print_type(this);
}
virtual ~Base() = default;
static void print_type(Base* obj) {
std::cout << "obj has type: " << typeid(*obj).name() << std::endl;
}
};
struct Derived : public Base {
Derived() {
print_type(this);
}
};
```
`print_type` always receives a pointer to `Base`, but when you create an instance of `Derived` you will see two lines - one with "Base" and one with "Derived". The dynamic type is set at the very beginning of the constructor so you can call a virtual function as part of the member initialization.
It is not specified *how* or *where* this information is stored, but it is associated with the object itself.
>
> [..] the implementation generates information associated with each such object [..]
>
>
>
In order to change the dynamic type, this information has to be *updated*. This may be some data that is introduced by the compiler, but operations on that data are still covered by the memory model:
[[intro.memory] 6.7.1.3](http://eel.is/c++draft/intro.memory#3)
>
> A memory location is either an object of scalar type or a maximal sequence of adjacent bit-fields all having
> nonzero width. **[ Note: Various features of the language, such as references and virtual functions, might involve additional memory locations that are not accessible to programs but are managed by the implementation. — end note]**
>
>
>
So the information associated with the object is stored and updated in some *memory location*. But that is where data races happen:
[[intro.races]](http://eel.is/c++draft/intro.races)
>
> [..]
>
> Two expression evaluations conflict if one of them modifies a **memory location** and the other one reads or modifies **the same memory location**.
>
> [..]
>
> The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other [..]
>
>
>
The update of the dynamic type is not atomic, and since there is no other synchronization that would enforce a happens-before order, this is a *data race* and therefore UB.
Even if the update *were* to be atomic, you would still have no guarantee about the state of the object as long as the constructor has not finished, so there is no point of making it atomic.
---
**Update**
Conceptually it *feels* like the object takes on different types during construction and destruction. However, it has been pointed out to me by @LanguageLawyer that the *dynamic type* of an object (more precisely of a glvalue that refers to that object) corresponds to the *most derived type*, and this type is clearly defined and does *not* change. [[class.cdtor]](http://eel.is/c++draft/class.cdtor#4) also includes a hint about this detail:
>
> [..] the function called is the final overrider in the constructor's or destructor's class and not one overriding it in a **more-derived class**.
>
>
>
So even though the behavior of virtual function calls and the typeid operator is defined *as if* the object takes on different types, that is actually not the case.
That said, in order to achieve the specified behavior *something* in the state of the object (or at least some information associated with that object) has to be changed. And as pointed out in [[intro.memory]](http://eel.is/c++draft/intro.memory#3), these additional memory locations are indeed subject of the memory model. So I still stand by my initial assessment that this is a data race.
|
iText 7: How can I allow overflow in a Div?
I have a `Div` with a certain height:
```
Div div = new Div();
div.setHeight(100);
```
If, to the `Div`, I add a paragraph with several lines that would occupy an area higher than the `Div`, I receive the following warning:
```
WARN com.itextpdf.layout.renderer.BlockRenderer - Element content was clipped because some height properties are set.
```
And in addition to that, lines of the paragraph are omitted. Even though the paragraph could overflow the `Div`'s bottom border, it ends above the border.
**But despite the warning I do not care and I even need the paragraph to overflow in a hidden manner below the bottom border of the `Div`.**
**How can I achieve such a behavior?**
(The CSS equivalent of the behavior I need can be achieved by setting [`overflow: hidden`](https://www.w3schools.com/cssref/tryit.asp?filename=trycss_overflow) on an HTML `<div>`.)
| You can consider using a custom `DivRenderer` for those DIVs.
A proof-of-concept:
```
public class OverflowHiddenDivRenderer extends DivRenderer {
public OverflowHiddenDivRenderer(Div modelElement) {
super(modelElement);
}
@Override
public Rectangle getOccupiedAreaBBox() {
Rectangle rectangle = super.getOccupiedAreaBBox();
if (height != null) {
if (rectangle.getHeight() > height.getValue()) {
rectangle.moveUp(rectangle.getHeight() - height.getValue()).setHeight(height.getValue());
}
}
return rectangle;
}
@Override
public LayoutResult layout(LayoutContext layoutContext) {
height = getPropertyAsUnitValue(Property.HEIGHT);
deleteProperty(Property.HEIGHT);
LayoutResult layoutResult = super.layout(layoutContext);
LayoutArea layoutArea = layoutResult.getOccupiedArea();
if (layoutArea != null) {
layoutArea.setBBox(getOccupiedAreaBBox());
}
return layoutResult;
}
UnitValue height;
}
```
*([OverflowHiddenDivRenderer](https://github.com/mkl-public/testarea-itext7/blob/master/src/main/java/mkl/testarea/itext7/content/OverflowHiddenDivRenderer.java#L31))*
Using it like this:
```
for (int height = 100; height < 150; height += 5) {
Div div = new Div();
div.setProperty(Property.OVERFLOW_Y, OverflowPropertyValue.HIDDEN);
div.add(new Paragraph(height + " Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet."));
div.setHeight(height);
div.setNextRenderer(new OverflowHiddenDivRenderer(div));
document.add(div);
}
```
*([RenderDivOverflowHidden](https://github.com/mkl-public/testarea-itext7/blob/master/src/test/java/mkl/testarea/itext7/content/RenderDivOverflowHidden.java#L48) test `testOverflowHiddenDivRenderer`)*
for `Document document` you get
[![Screenshot](https://i.stack.imgur.com/8y1W5.png)](https://i.stack.imgur.com/8y1W5.png)
[![enter image description here](https://i.stack.imgur.com/0PsPI.png)](https://i.stack.imgur.com/0PsPI.png)
Beware, even though I've had my hands on iText 7 for quite some time now, this is my first attempt to create a custom `DivRenderer` and I may well have forgotten some special cases. I'm thinking in particular of problems in the context of rotated content (which influences `super.getOccupiedAreaBBox()`) or area breaks (I don't set a next renderer in `OverflowHiddenDivRenderer` with an adapted height).
Some people more proficient in this stuff may come up with some improvements ...
|
Why is this statement producing a linker error with gcc?
I have this extremely trivial piece of C code:
```
static int arr[];
int main(void) {
*arr = 4;
return 0;
}
```
I understand that the first statement is illegal (I've declared a file-scope array with static storage duration and internal linkage but no specified size), but why is it resulting in a linker error?:
```
/usr/bin/ld: /tmp/cch9lPwA.o: in function `main':
unit.c:(.text+0xd): undefined reference to `arr'
collect2: error: ld returned 1 exit status
```
Shouldn't the compiler be able to catch this before the linker?
It is also strange to me that, if I omit the `static` storage class, the compiler simply assumes the array is of length `1` and produces no error beyond that:
```
int arr[];
int main(void) {
*arr = 4;
return 0;
}
```
Results in:
```
unit.c:5:5: warning: array 'arr' assumed to have one element
int arr[];
```
Why does omitting the storage class result in different behavior here and why does the first piece of code produce a linker error? Thanks.
| Empty arrays `static int arr[];` and zero-length arrays `static int arr[0];` were [gcc non-standard extensions](https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html).
The intention of these extensions were to act as a fix for the old "struct hack". Back in the C90 days, people wrote code such as this:
```
typedef struct
{
header stuff;
...
int data[1]; // the "struct hack"
} protocol;
```
where `data` would then be used as if it had variable size beyond the array depending on what's in the header part. Such code was buggy, wrote data to padding bytes and invoked array out-of-bounds undefined behavior in general.
gcc fixed this problem by adding empty/zero arrays as a compiler extension, making the code behave without bugs, although it was no longer portable.
The C standard committee recognized that this gcc feature was useful, so they added *flexible array members* to the C language in 1999. Since then, the gcc feature is to be regarded as obsolete, as using the C standard flexible array member is to prefer.
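For illustration, here is a minimal sketch of my own (not taken from the gcc docs or the standard) of the same kind of struct rewritten with a C99 flexible array member; the struct and function names are made up for the example:
```
#include <stdlib.h>

typedef struct
{
    size_t len;
    int data[];   /* flexible array member - the standard C99 replacement */
} protocol;

protocol *make_protocol(size_t n)
{
    /* allocate the header plus room for n trailing ints */
    protocol *p = malloc(sizeof *p + n * sizeof p->data[0]);
    if (p != NULL)
        p->len = n;
    return p;
}
```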
As recognized by the linked gcc documentation:
>
> Declaring zero-length arrays in other contexts, including as interior members of structure objects or as non-member objects, is discouraged.
>
>
>
And this is what your code does.
Note that gcc with no compiler options passed defaults to `-std=gnu90` (gcc < 5.0) or `-std=gnu11` (gcc >= 5.0). This gives you all the non-standard extensions enabled, so the program compiles but does not link.
If you want standard compliant behavior, you must compile as
```
gcc -std=c11 -pedantic-errors
```
The `-pedantic` flag disables gcc extensions, and the linker error switches to a compiler error as expected. For an empty array as in your case, you get:
>
> error: array size missing in 'arr'
>
>
>
And for a zero-length array you get:
>
> error: ISO C forbids zero-size array 'arr' [-Wpedantic]
>
>
>
---
The reason why `int arr[]` works is that this is a [*tentative definition*](https://stackoverflow.com/questions/3095861/about-tentative-definition) of an array with external linkage (see C17 6.9.2). It is valid C and can be regarded as a forward declaration. It means that elsewhere in the code, the compiler (or rather the linker) should expect to find, for example, `int arr[10]`, which then refers to the same variable. This way, `arr` can be used in the code before the size is known. (I wouldn't recommend using this language feature, as it is a form of "spaghetti programming".)
When you use `static` you block the possibility to have the array size specified elsewhere, by forcing the variable to have internal linkage instead.
|
Invalid cast from 'System.Int32' to 'System.Nullable`1[[System.Int32, mscorlib]]
```
Type t = typeof(int?); //will get this dynamically
object val = 5; //will get this dynamically
object nVal = Convert.ChangeType(val, t);//getting exception here
```
I am getting InvalidCastException in above code. For above I could simply write `int? nVal = val`, but above code is executing dynamically.
I am getting a value (of a non-nullable type like int, float, etc.) wrapped up in an object (here `val`), and I have to save it to another object by casting it to another type (which may or may not be the nullable version of it). When I do, I get:
>
> Invalid cast from 'System.Int32' to 'System.Nullable`1[[System.Int32,
> mscorlib, Version=4.0.0.0, Culture=neutral,
> PublicKeyToken=b77a5c561934e089]]'.
>
>
>
An `int` should be convertible/castable to a `nullable int`, so what is the issue here?
| You have to use `Nullable.GetUnderlyingType` to get the underlying type of a `Nullable`.
This is the method I use to overcome the limitation of `Convert.ChangeType` for `Nullable` types:
```
public static T ChangeType<T>(object value)
{
var t = typeof(T);
if (t.IsGenericType && t.GetGenericTypeDefinition().Equals(typeof(Nullable<>)))
{
if (value == null)
{
return default(T);
}
t = Nullable.GetUnderlyingType(t);
}
return (T)Convert.ChangeType(value, t);
}
```
Non-generic method:
```
public static object ChangeType(object value, Type conversion)
{
var t = conversion;
if (t.IsGenericType && t.GetGenericTypeDefinition().Equals(typeof(Nullable<>)))
{
if (value == null)
{
return null;
}
t = Nullable.GetUnderlyingType(t);
}
return Convert.ChangeType(value, t);
}
```
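A quick usage sketch (my own illustration, not part of the original answer) showing how these helpers handle the scenario from the question, assuming they are in scope:
```
Type t = typeof(int?);   // obtained dynamically in the real code
object val = 5;          // boxed int

// Convert.ChangeType(val, t) would throw InvalidCastException here.
object viaNonGeneric = ChangeType(val, t);   // boxed 5, assignable to int?
int? viaGeneric = ChangeType<int?>(val);     // 5
int? fromNull = ChangeType<int?>(null);      // null, i.e. default(int?)
```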
|
How can I refresh just a Partial View in its View?
What Am I doing wrong guys? This is the idea...
Index view
```
<div class="col-lg-12 col-md-12 col-xs-12">
@Html.Partial("PartialView", Model)
</div>
```
Controller
```
public ActionResult PartialView()
{
return PartialView("PartialView");
}
[HttpPost, ValidateInput(false)]
public ActionResult POSTPartialView(string param1)
{
return PartialView("PartialView");
}
```
PartialView has a Form.
The first time I enter Index, PartialView works, but the second time, after a POST call (coming from the form inside PartialView), only the PartialView gets rendered, outside of Index.
So to fix it, I'm doing the following:
```
[HttpPost, ValidateInput(false)]
public ActionResult POSTPartialView(string param1)
{
return View("Index");
}
```
That works: all of Index is rendered again (with my changes, after the POST). But the whole page is refreshed, so I lose some CSS state (the accordion's collapsed/expanded state, for example).
Should I use Ajax to refresh only the div which contains the PartialView?
Thanks Mates.
EDITED:
```
@using (Html.BeginForm("PartialView", "Controller", FormMethod.Post, new { @class = "form-inline", role = "form" }))
{
<div class="form-group col-lg-3 col-md-3 col-xs-3">
<label for="DATA">DATA:</label>
<input type="text" class="form-control pull-right" name="DATA">
</div>
<button type="submit" class="btn btn-primary pull-right">Get Data</button>
}
```
| Well, I read the solution ([Auto Refresh Partial View](https://www.mindstick.com/Articles/1132/auto-refresh-partial-view-in-asp-dot-net-mvc)). I am posting it here, hoping it clarifies the question:
**index.html**
```
<div class="col-lg-12 col-md-12 col-xs-12" id="divPartial">
@Html.Partial("PartialView", Model)
</div>
```
```
<script type="text/javascript">
$("#buttonForm").click(function(e){
$('#form').submit();
$('#divPartial').load('/PartialController/PartialView');
});
</script>
```
**PartialController**
```
public ActionResult PartialView()
{
// DO YOUR STUFF.
return PartialView("PartialView", model);
}
[HttpPost, ValidateInput(false)]
public EmptyResult POSTPartialView(string param1)
{
// DO YOUR STUFF AFTER SUBMIT.
return new EmptyResult();
}
```
|
Get address using Latitude and Longitude from two columns in DataFrame?
I have a dataframe with the longitude column and latitude column. When I try to get the address using `geolocator.reverse()` I get the error `ValueError: Must be a coordinate pair or Point`
I can't for the life of me insert the lat and long into the reverse function without getting that error. I tried creating a tuple using `list(zip(zips['Store_latitude'], zips['Store_longitude']))` but I get the same error.
Code:
```
import pandas as pd
from geopy.geocoders import Nominatim
from decimal import Decimal
from geopy.point import Point
zips = pd.read_excel("zips.xlsx")
geolocator = Nominatim(user_agent="geoapiExercises")
zips['Store_latitude']= zips['Store_latitude'].astype(str)
zips['Store_longitude'] = zips['Store_longitude'].astype(str)
zips['Location'] = list(zip(zips['Store_latitude'], zips['Store_longitude']))
zips['Address'] = geolocator.reverse(zips['Location'])
```
What my DataFrame looks like
| Store_latitude | Store_longitude |
| --- | --- |
| 34.2262225 | -118.4508349 |
| 34.017667 | -118.149135 |
| I think you might try with a tuple or a `geopy.point.Point` before going to a list to see whether the package works all right.
I tested just now as follows (Python 3.9.13, command line style)
```
import geopy
p = geopy.point.Point(51.4,3.45)
gl = geopy.geocoders.Nominatim(user_agent="my_test") # Without the user_agent it raises a ConfigurationError.
gl.reverse(p)
```
output:
`Location(Vlissingen, Zeeland, Nederland, (51.49433865, 3.415005767601362, 0.0))`
This is as expected.
Maybe you should cast your dataframe['Store_latitude'] and dataframe['Store_longitude'] before/after you convert to a list? Are they perhaps strings?
More information on your dataframe and content would be required to further assist, I think.
Good luck!
EDIT: added information after OP's comments below.
1. When you read your excel file as `zips = pd.read_excel("yourexcel.xlsx")` you will get a pandas dataframe.
The content of the dataframe is two columns (which will be of type Series) and each element will be a numpy.float64 (if your excel has real values as input and not strings!). You can check this using the type() command:
```
>>> type(zips)
<class 'pandas.core.frame.DataFrame'>
>>> type(zips['Lat'])
<class 'pandas.core.series.Series'>
>>> type(zips['Lat'][0])
<class 'numpy.float64'>
```
What you then do is convert these floats (=decimal numbers) to a string (=text) by performing `zips[...] = zips[...].astype(str)`. There is no reason to do that, because your geolocator requires numbers, not text.
2. As shown in the comment by @Derek, you need to iterate over each row and while doing so, you can put the resulting Locations you receive from the geolocator in a new column.
So in the next block, I first create a new (empty) list. Then I iterate over couples of lat, lon by combining your zips['Lat'] and zips['Lon'] using the zip command (so the naming of zips is a bit unlucky if you don't know the zip command; it thus may be confusing you). But don't worry, what it does is just combine the entries of each row into the variables lat and lon. Within the for-each loop, I append the result of the geolocator lookup. Note that the argument of the reverse command is a tuple (lat, lon), so the complete syntax is reverse( (lat, lon) ). Instead of (lat, lon), you could also have created a Point as in my original example. But that is not necessary imo. (note: for brevity I just write 'Lat' and 'Lon' instead of your Store...).
Finally, assign the result list as a new column in your zip pandas dataframe.
```
import geopy as gp
# instantiate a geolocator
gl = gp.geocoders.Nominatim(user_agent="my_test")
locations = []  # create an empty list
# loop over each couple of lat, lon
for lat, lon in zip(zips['Lat'], zips['Lon']):
    locations.append(gl.reverse((lat, lon)))
# add an extra column to your pandas table (address will be the column name)
zips = zips.assign(address=locations)
```
One thing you still may want, is just have the text string instead of the complete geopy.Location() string in your table.
To get that, you write the for loop with this small modification (indexing with [0] to take the first element of the Location object). Note that this won't work if the result of the lookup for a given row is empty (None); then the [0] will raise an error.
```
# loop over each couple of lat, lon
for lat, lon in zip(zips['Lat'], zips['Lon']):
    locations.append(gl.reverse((lat, lon))[0])
```
I hope this gets you going!
|
Unreachable statement: while true vs if true
How should I understand this Java compiler behaviour?
```
while (true) return;
System.out.println("I love Java");
// Err: unreachable statement
if (true) return;
System.out.println("I hate Java");
// OK.
```
Thanks.
**EDIT:**
I found out the point after a few minutes:
In the first case the compiler throws an error because of the infinite loop. In both cases the compiler does not take the code inside the statement's body into account.
**EDIT II:**
What impresses me about javac now is:
```
if (true) return; // Correct
}
while (true) return; // Correct
}
```
It looks like javac knows what is inside both the loop body and the if body,
but when you write another statement after it (as in the first example) you get non-equivalent behaviour (as if javac forgot what is inside the loop/if).
**public static final EDIT III:**
As a result of this answer I may remark (hopefully correctly):
Statements such as `if (arg) { ...; return;}` and `while (arg) { ...; return;}` are equivalent both semantically and in bytecode for Java iff `arg` is a non-constant expression (or an effectively final type). If `arg` is a constant expression, the bytecode (and behaviour) may differ.
**Disclaimer**
This question is not about unreachable statements but about the different handling of logically equivalent statements such as `while (true) return;` and `if (true) return;`.
| There are quite strict rules about when statements are reachable in Java. These rules are designed to be easily evaluated and not to be 100% accurate. They should prevent basic programming errors. To reason about reachability in Java you are restricted to these rules; "common logic" does not apply.
So here are the rules from the Java Language Specification [14.21. Unreachable Statements](https://docs.oracle.com/javase/specs/jls/se8/html/jls-14.html#jls-14.21)
>
> An if-then statement can complete normally iff it is reachable.
>
>
>
So without an else, statements after an if-then are always reachable.
>
> A while statement can complete normally iff at least one of the following is true:
>
>
> - The while statement is reachable and the condition expression is not a constant expression (§15.28) with value true.
> - There is a reachable break statement that exits the while statement.
>
>
>
The condition is a constant expression "true", there is no break. Hence it does **not complete normally**.
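To illustrate the second bullet with a small sketch of my own (not from the question): adding a reachable `break` lets the `while` complete normally, so the statement after it becomes reachable and the code compiles:
```
void demo(boolean flag)  // non-constant condition, so the break below is reachable
{
    while (true)
    {
        if (flag) return;
        break;   // reachable break -> the while statement can complete normally
    }
    System.out.println("I love Java");   // reachable, compiles fine
}
```
Remove the `break` and you are back to the original "unreachable statement" error.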
|
Order of CSS files. Load module's CSS before theme's CSS
How can I change the order of CSS files to load module's CSS **before theme's CSS**? Here are some code examples:
**Theme's CSS file** (loaded on all pages) added in theme's *local.xml*:
```
<default>
<reference name="head">
<action method="addItem">
<type>skin_css</type>
<name>css/theme.css</name>
</action>
</reference>
</default>
```
**Extension's CSS file** (loaded only on category pages) added in module's XML layout file:
```
<catalog_category_layered>
<reference name="head">
<action method="addItem">
<type>skin_css</type>
<name>css/extension.css</name>
</action>
</reference>
</catalog_category_layered>
```
This is the order of loaded CSS files which I get on the category page; the extension's CSS is loaded after the theme's CSS:
>
> - /default/mytheme/css/styles.css
> - /default/mytheme/css/theme.css
> - /default/mytheme/css/**extension.css**
>
>
>
What I'm trying to achieve: extension's CSS is loaded before theme's CSS.
**How can I force this order of CSS files:**
>
> - /default/mytheme/css/styles.css
> - /default/mytheme/css/**extension.css**
> - /default/mytheme/css/theme.css
>
>
>
I've noticed that if I have many extensions installed, CSS of some extensions loads before theme's CSS and CSS of some other extensions loads after theme's CSS. I assume that it has something to do with the order of modules in Magento, but I don't understand how I can affect the order of CSS (or JavaScript) files on the frontend.
| There's two things here that I'll elaborate on first: 1) the order in which layout XML files are loaded and 2) the order which layout *handles* are processed.
1 - **Layout xml files are loaded *in the order in which the extensions are loaded*.** Extensions are loaded alphabetically (as the server reads the files in app/etc/modules), however when there are module dependencies, the module which is depended on by *another* module will be loaded first. Magento actually loops through all these XML files twice in order to achieve this: the first time to read all the extensions, and the second time to load them in order of dependencies; all the remaining ones then get loaded in alphabetical order. **local.xml**, however, is a special case and is always loaded last, so that its instructions take priority over any extension's layout instructions.
Now I know what you're thinking at this point "if local.xml is loaded last, why is the extension's CSS file being loaded afterwards?" Well that's due to the following...
2 - **The order in which *layout handles* are processed.** This is what's getting you in this particular case. Even though local.xml is loaded **after** your extension's layout file, the extension is targeting the 'catalog_category_layered' layout handle, and this layout handle gets processed **after** the 'default' layout handle. It is because of this that you're getting stuck with the extension's CSS file being loaded after your theme's CSS file.
So what's the solution? Quite simple, although somewhat annoying. In your local.xml file, you just need to target this layout handle and first remove your theme's CSS file, then add it back in again.
This should do the trick for you:
```
<catalog_category_layered>
<reference name="head">
<action method="removeItem">
<type>skin_css</type>
<name>css/theme.css</name>
</action>
<action method="addItem">
<type>skin_css</type>
<name>css/theme.css</name>
</action>
</reference>
</catalog_category_layered>
```
Your site will process these instructions after the extension's instructions, within this layout handle. Therefore, your CSS file will be loaded afterwards as well.
|
detecting activity in the visual studio editor
I would like to know if there is a programmatic hook into the Visual Studio editor, so that I can determine whether or not someone is typing in the editor.
Thanks!
| There are a number of ways to do this in Visual Studio. Here are a few different hooks available.
- `IOleCommandTarget`: Key strokes in Visual Studio will eventually present themselves as commands and be routed through this chain
- `KeyProcessor`: For straight WPF keyboard input you can create a MEF `IKeyProcessorProvider` component and let the created `KeyProcessor` handle the input
- `ITextBuffer::Changed`: Listen directly to changes in the underlying buffer to interpret input.
Each of these is fairly complex and adding a full sample for them in an SO question is just not reasonable. But just typing their names into google should get you pointed in the correct direction.
Do note though that keyboard input in Visual Studio is a very complex process. It has to deal with windows messages, old style Visual Studio commands, WPF, MEF hooks, translating accelorators, etc ... It's very messy and trying to tackle all input at any one choke point is fraught with problems. I've done my best to document the current state of Visual Studio keyboard input while working on a plugin and you can find it here
- <https://github.com/jaredpar/VsVim/blob/master/Src/VsVim/KeyboardInputRouting.txt>
It's hardly a complete understanding though.
|
How to enable Location access programmatically in android?
I am working on a map-related Android application and I need to check on the client side whether location access is enabled or not; if location services are not enabled, I want to show a dialog prompt.
How can I enable "Location access" programmatically in Android?
| Use the code below to check. If it is disabled, a dialog box will be shown:
```
public void statusCheck() {
final LocationManager manager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
if (!manager.isProviderEnabled(LocationManager.GPS_PROVIDER)) {
buildAlertMessageNoGps();
}
}
private void buildAlertMessageNoGps() {
final AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setMessage("Your GPS seems to be disabled, do you want to enable it?")
.setCancelable(false)
.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
public void onClick(final DialogInterface dialog, final int id) {
startActivity(new Intent(android.provider.Settings.ACTION_LOCATION_SOURCE_SETTINGS));
}
})
.setNegativeButton("No", new DialogInterface.OnClickListener() {
public void onClick(final DialogInterface dialog, final int id) {
dialog.cancel();
}
});
final AlertDialog alert = builder.create();
alert.show();
}
```
|
Redirect System.out from every thread
I have created a swing ui and redirected System.out and System.err to a text field with this code
```
ConsoleOutputStream cos = new ConsoleOutputStream(textColor, printStream);
System.setOut( new PrintStream(cos, true) );
```
ConsoleOutputStream extends ByteArrayOutputStream, and as long as nothing is executed in new threads this works as expected.
However, my application executes third party jar files which in turn create new threads. When these threads print to System.out it gets printed in the terminal that launched my application instead of in my text field. I have looked at this link: <http://maiaco.com/articles/java/threadOut.php> but I'm not sure it's applicable to my problem since I have no control whatsoever over the threads. As far as my application is aware, no threads (except for the main GUI thread) are created.
Is there some way to redirect all System.out and System.err output independent of the thread? If not, can I maybe listen to calls to System.out and then print them to my text field? Could I potentially listen to the terminal that I launched my application from and redirect all output from it to my application?
| `System.out` is not thread specific. There are two possibilities:
1. The libraries read the `System.out` before you redirect it and cache the value. The fix is to redirect `System.out` before invoking third party library code.
2. The libraries do not use `System.out`. For writing to the console there are alternatives like creating a `new FileOutputStream(FileDescriptor.out)` and writing to it. Or using `System.console()`.
If this happens through one of the known logging APIs you can override the behavior by removing the default console-writing log handler and installing your own. Otherwise it's rather hard to do. You will have to study the libraries and their API carefully to find out how to do it. Every sophisticated library will offer a way, as writing messages directly to the console without offering an alternative is really bad programming style, especially for a library.
It’s very likely that the libraries you are using use a logging API.
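If it does turn out to be `java.util.logging`, a minimal sketch (my own, assuming the libraries log via the root logger) of replacing the default console handler with one that writes to your redirected stream could look like this:
```
import java.util.logging.*;

public final class LogRedirect {
    public static void routeToRedirectedStdout() {
        Logger root = Logger.getLogger("");              // root logger
        for (Handler h : root.getHandlers()) {
            if (h instanceof ConsoleHandler) {
                root.removeHandler(h);                   // drop the default console handler
            }
        }
        // StreamHandler writes to whatever System.out points at right now (your text field)
        StreamHandler uiHandler = new StreamHandler(System.out, new SimpleFormatter()) {
            @Override
            public synchronized void publish(LogRecord record) {
                super.publish(record);
                flush();                                 // StreamHandler buffers, so flush each record
            }
        };
        uiHandler.setLevel(Level.ALL);
        root.addHandler(uiHandler);
    }
}
```
Call it once, after your `System.setOut(...)` call and before invoking the third-party code.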
|
if-else conditional block in visualforce
I have used the c:if and c:when JSTL tags in JSP, but I don't know if anything similar is available for Visualforce pages. Just as an example, I am providing sample JSP code below:
```
<h1>A Demo conditional section code</h1>
<c:choose>
<c:when test="${param.colorField == 'red'}">
<table border="0" width="150" height="50" bgcolor="#ff0000">
<tr><td>It is red</td></tr>
</table>
</c:when>
<c:when test="${param.colorField == 'blue'}">
<table border="0" width="150" height="50" bgcolor="#0000ff">
<tr><td>It is blue</td></tr>
</table>
</c:when>
<c:when test="${param.colorField == 'green'}">
<table border="0" width="150" height="50" bgcolor="#00ff00">
<tr><td>Green table</td></tr>
</table>
</c:when>
<c:otherwise>
<table border="0" width="150" height="50" bgcolor="#000000">
<tr><td>No colour changed</td></tr>
</table>
</c:otherwise>
</c:choose>
<br/>
and other codes....
```
I am missing this kind of conditional page-block construct in Visualforce pages.
| What I have found is that we can wrap any block in an output panel (`<apex:outputpanel>`) and use the `rendered` attribute to handle the condition for rendering it.
```
<h1>A Demo conditional section code</h1>
<apex:outputpanel rendered="{!param.colorField == 'red'}">
<table border="0" width="150" height="50" bgcolor="#ff0000">
<tr><td>It is red</td></tr>
</table>
</apex:outputpanel>
<apex:outputpanel rendered="{!param.colorField == 'blue'}">
<table border="0" width="150" height="50" bgcolor="#0000ff">
<tr><td>It is blue</td></tr>
</table>
</apex:outputpanel>
:
:
and other codes....
```
|
How can I route traffic from my main wired router to the wireless computers on the other side of a connected Wi-Fi router?
I have a router that takes my Verizon FiOS as input and has five output ports. The main internal IP address of the router is *192.168.0.1*.
From one of the ports on the wired router, I have my Wi-Fi router connected, and it has an IP address of *192.168.0.10*. The other ports on the wired router all have *192.168.0.x* IP addresses.
On the other side of the Wi-Fi router, the wireless IP addresses are of the form *192.168.1.x*.
From the Wi-Fi network I **can** get to the *192.168.0.x* wired IP addresses, but I **cannot** get to the *192.168.1.x* wireless IP addresses from the wired network.
So, I believe I need to make a change in the Wi-Fi router to allow traffic from the *192.168.0.x* wired IP addresses into the *192.168.1.x* wireless IP addresses. Is this correct?
What do I need to look for in order to make this change (e.g., wireless settings, LAN settings, WAN settings, etc.)? Once there, what type of change should I make?
| You need to set up the device you're calling your "Wi-Fi router" to act as a simple Wi-Fi AP instead of acting as a router.
Some wireless router devices allow you to disable their NAT gateway and DHCP server features, and will automatically make their WAN Ethernet port act like a LAN Ethernet port. Others don't have this option, but you can still accomplish the same thing by not using the WAN port (connect the Ethernet cable from the LAN port of the upstream router to a LAN port of the downstream router), and by disabling the DHCP server feature if possible, or setting its DHCP IP address lease pool to be zero length (so it has no addresses it can lease out via DHCP).
By hooking things up this way, your home network becomes one single LAN with a single IP subnet.
|
Linux flock, how to "just" lock a file?
In Bash, I'm trying to make a function getLock to be used with different lock names.
```
function getLock
{
getLock_FILE="${1}"
getLock_OP="${2}"
case "${getLock_OP}" in
"LOCK_UN")
flock -u "${getLock_FILE}"
rm -fr "${getLock_FILE}"
;;
"LOCK_EX")
flock -x "${getLock_FILE}"
esac
}
```
But flock says `flock: bad number: myfilelock`
How can I just lock a file, and release it when I want, without having to execute a command in the flock?
It is to be used like this:
```
getLock myfilelock LOCK_EX
somecommands
........
getLock myfilelock LOCK_UN
```
| To lock the file:
```
exec 3>filename # open a file handle; this part will always succeed
flock -x 3 # lock the file handle; this part will block
```
To release the lock:
```
exec 3>&- # close the file handle
```
You can also do it the way the flock man page describes:
```
{
flock -x 3
...other stuff here...
} 3>filename
```
...in which case the file is automatically closed when the block exits. (A subshell can also be used here, by using `( )` rather than `{ }`, but this should be a deliberate decision -- subshells have a performance penalty, and they scope variable modifications and other state changes to themselves).
---
If you're running a new enough version of bash, you don't need to manage file descriptor numbers by hand:
```
# this requires a very new bash -- 4.2 or so.
exec {lock_fd}>filename # open filename, store FD number in lock_fd
flock -x "$lock_fd" # pass that FD number to flock
exec {lock_fd}>&- # later: release the lock
```
---
...now, for your function, we're going to need associative arrays and automatic FD allocation (and, to allow the same file to be locked and unlocked from different paths, GNU readlink) -- so this won't work with older bash releases:
```
declare -A lock_fds=() # store FDs in an associative array
getLock() {
local file=$(readlink -f "$1") # declare locals; canonicalize name
local op=$2
case $op in
LOCK_UN)
[[ ${lock_fds[$file]} ]] || return # if not locked, do nothing
exec {lock_fds[$file]}>&- # close the FD, releasing the lock
unset lock_fds[$file] # ...and clear the map entry.
;;
LOCK_EX)
[[ ${lock_fds[$file]} ]] && return # if already locked, do nothing
local new_lock_fd # don't leak this variable
exec {new_lock_fd}>"$file" # open the file...
flock -x "$new_lock_fd" # ...lock the fd...
lock_fds[$file]=$new_lock_fd # ...and store the locked FD.
;;
esac
}
```
If you're on a platform where GNU readlink is unavailable, I'd suggest replacing the `readlink -f` call with `realpath` from [sh-realpath by Michael Kropat](https://github.com/mkropat/sh-realpath) (relying only on widely-available readlink functionality, not GNU extensions).
|
FFMPEG: Stream a file with original playing rate
I want to stream a file to the network using [ffmpeg](http://ffmpeg.org/) in it's original frame rate; so I can play the generated UDP stream using some receiver client such as [VLC](http://videolan.org). I used this command:
>
> ffmpeg -i "myfile.mpg" -sameq -re -f mpegts "udp://127.0.0.1:2000"
>
>
>
With this command, ffmpeg starts streaming the file at a very high rate, such that streaming a file about 30 minutes long is finished after only about 40 seconds. I want to stream the file at its original rate. I also want to have control over the rate, to play the video faster or slower. Are there any options to do this? Thank you.
| `-re` should be used as an input option, otherwise it will probably be ignored. A generalization of the basic syntax is:
```
ffmpeg [input options] -i input [output options] output
```
Do not use `-sameq`. See [sameq does not mean "same quality"](https://superuser.com/a/478550/110524) for a detailed explanation.
Have you tried simply copying the streams instead of re-encoding? Add `-map 0 -codec copy` as output options.
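Putting the pieces together, a command along these lines (an untested sketch reusing your file name and address) should stream at the original rate without re-encoding:
```
ffmpeg -re -i "myfile.mpg" -map 0 -codec copy -f mpegts "udp://127.0.0.1:2000"
```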
As for changing the video speed you can try the `setpts` multimedia filter. Note that you have to re-encode to use this filter. Examples from the documentation:
```
Apply fast motion effect: -filter:v setpts=0.5*PTS
Apply slow motion effect: -filter:v setpts=2.0*PTS
```
For audio see the `asetpts` or `atempo` filters.
|
What do returning curly braces mean in javascript (ex. return { init : init} )
I'm looking over this code:
```
$(function(){
var $sidescroll = (function() {
init = function() {
//STUFF
};
return { init : init }; //What does this do?
})();
$sidescroll.init();
});
```
What does the return statement mean? I haven't seen curly braces in a return statement before, and am not even sure what 'init : init' does.
| Curly braces mean two things in javascript:
1. blocks
2. object literals
You've probably seen the second -- also known in other languages as "dictionaries", key-value pairs, associative arrays, etc:
```
myDict = { a: "apple", b: "banana" };
```
When we say
```
return { a: "apple" };
```
it is the same as saying
```
myDict = { a: "apple" };
return myDict;
```
The "confusing" thing in this case is that (1) the key and the value are identical/have the same character representation, and (2) the value is not a normal string or variable but a function. That is, accessing the key "init" of your object/dictionary will give you a function that you can call with `()`.
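Here is the same pattern stripped down to a minimal sketch of my own (not your actual code) to show why the returned object matters:
```
var counter = (function () {
    var count = 0;                                   // private state
    var increment = function () { return ++count; };
    return { increment: increment };                 // expose the inner function under a key
})();

counter.increment(); // 1
counter.increment(); // 2
```
In your snippet, the object returned by the immediately-invoked function is what ends up in `$sidescroll`, which is why `$sidescroll.init()` works afterwards.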
|
Count common sets of items between different customers
I have data on customers and the different products they have purchased:
```
Customer Product
1 A
1 B
1 C
2 D
2 E
2 F
3 A
3 B
3 D
4 A
4 B
```
I would like to check which sets of products that occur together across different customers. I want to get the count for product combinations of different lengths. For example, the product combination A and B together occurs in three different customers; the product group A, B and C occurs in one customer. And so on for all different sets of 2 or more products in the data. Something like:
```
Product Group Number
A, B, C 1
D, E, F 1
A, B, D 1
A, B 3
```
Thus, I'm counting the A, B combination in customers who only have product A and B (e.g. customer 4), *and* in customers who *have* A and B, but also any other product (e.g. customer 1, who has A, B and C).
Does anyone have any ideas how to do that with either a `tidyverse` or `base` R approach? I feel like it ought to be pretty trivial - maybe `pivot_wider` first, then count?
I have found [this question and answer](https://stackoverflow.com/questions/46536183/generate-all-possible-pairs-and-count-frequency-in-r) that can do what I need for pairs of products, but I need to count combinations also for more products than two.
| If you have the possibility to use a non-`base` package, you can use a tool dedicated for the task of finding item sets: `arules::apriori`. It is much faster on larger data sets.
```
library(arules)
# coerce data frame to binary incidence matrix
# use apriori to get "frequent itemsets"
r = apriori(data = as.matrix(table(dat) > 0),
# set: type of association mined, minimal support needed of an item set,
# minimal number of items per item set
par = list(target = "frequent itemsets",
support = 0,
minlen = 2))
# coerce itemset to data.frame, select relevant rows and columns
d = as(r, "data.frame")
d[d$count > 0, c("items", "count")]
# items count
# 4 {B,C} 1
# 5 {A,C} 1
# 6 {E,F} 1
# 7 {D,E} 1
# 10 {D,F} 1
# 13 {B,D} 1
# 14 {A,D} 1
# 15 {A,B} 3
# 25 {A,B,C} 1
# 26 {D,E,F} 1
# 35 {A,B,D} 1
```
---
Timing on larger data set: 10000 customers with up to 6 products each. `apriori` is quite a lot faster.
```
# Unit: milliseconds
# expr min lq mean median uq max neval
# f_henrik(dat) 38.95475 39.8621 41.44454 40.67313 41.05565 57.64655 20
# f_allan(dat) 4578.20595 4622.2363 4664.57187 4654.58713 4679.78119 4924.22537 20
# f_jay(dat) 2799.10516 2939.9727 2995.90038 2971.24127 2999.82019 3444.70819 20
# f_uwe_dt(dat) 2943.26219 3007.1212 3028.37550 3027.46511 3060.38380 3076.25664 20
# f_uwe_dplyr(dat) 6339.03141 6375.7727 6478.77979 6448.56399 6521.54196 6816.09911 20
```
10000 customers with up to 10 products each. `apriori` is several hundred times faster.
```
# Unit: milliseconds
# expr min lq mean median uq max neval
# f_henrik(dat) 58.40093 58.95241 59.71129 59.63988 60.43591 61.21082 20
# f_jay(dat) 52824.67760 53369.78899 53760.43652 53555.69881 54049.91600 55605.47980 20
# f_uwe_dt(dat) 22612.87954 22820.12012 22998.85072 22974.32710 23220.00390 23337.22815 20
# f_uwe_dplyr(dat) 26083.20240 26255.88861 26445.49295 26402.67887 26659.81195 27046.83491 20
```
On the larger data set, Allan's code gave warnings (`In rawToBits(as.raw(x)) : out-of-range values treated as 0 in coercion to raw`) on the toy data, which seemed to affect the result. Thus, it is not included in the second benchmark.
---
Data and benchmark code:
```
set.seed(3)
n_cust = 10000
n_product = sample(2:6, n_cust, replace = TRUE) # 2:10 in second run
dat = data.frame(
Customer = rep(1:n_cust, n_product),
Product = unlist(lapply(n_product, function(n) sample(letters[1:6], n)))) # 1:10 in 2nd run
library(microbenchmark)
res = microbenchmark(f_henrik(dat),
f_allan(dat),
f_jay(dat),
f_uwe_dt(dat),
f_uwe_dplyr(dat),
times = 20L)
```
---
Check for equality:
```
henrik = f_henrik(dat)
allan = f_allan(dat)
jay = f_jay(dat)
uwe_dt = f_uwe_dt(dat)
uwe_dplyr = f_uwe_dplyr(dat)
# change outputs to common format for comparison
# e.g. string format, column names, order
henrik$items = substr(henrik$items, 2, nchar(henrik$items) - 1)
henrik$items = gsub(",", ", ", henrik$items)
l = list(
henrik = henrik, allan = allan, jay = jay, uwe_dt = uwe_dt, uwe_dplyr = uwe_dplyr)
l = lapply(l, function(d){
d = setNames(as.data.frame(d), c("items", "count"))
d = d[order(d$items), ]
row.names(d) = NULL
d
})
all.equal(l[["henrik"]], l[["allan"]])
# TRUE
all.equal(l[["henrik"]], l[["jay"]])
# TRUE
all.equal(l[["henrik"]], l[["uwe_dt"]])
# TRUE
all.equal(l[["henrik"]], l[["uwe_dplyr"]])
# TRUE
```
---
Functions:
```
f_henrik = function(dat){
r = apriori(data = as.matrix(table(dat) > 0),
par = list(target = "frequent itemsets",
support = 0,
minlen = 2))
d = as(r, "data.frame")
d[d$count > 0, c("items", "count")]
}
f_allan = function(dat){
all_multiples <- function(strings)
{
n <- length(strings)
do.call("c", sapply(1:2^n, function(x) {
mystrings <- strings[as.character(rawToBits(as.raw(x))[seq(n)]) == "01"]
if (length(mystrings) > 1) paste(mystrings, collapse = ", ") else NULL
}))
}
dat %>%
group_by(Customer) %>%
arrange(Product) %>%
summarize(Product_group = all_multiples(Product)) %>%
group_by(Product_group) %>%
count(Product_group)
}
f_jay = function(dat){
a <- split(dat$Product, dat$Customer) ## thx to @Henrik
r <- range(lengths(a))
pr <- unlist(lapply(r[1]:r[2], function(x)
combn(unique(dat$Product), x, list)), recursive=F)
or <- rowSums(outer(pr, a, Vectorize(function(x, y) all(x %in% y))))
res <- data.frame(p.group=sapply(pr, toString), number=or)
res[res$number > 0, ]
}
f_uwe_dt = function(dat){
setorder(setDT(dat), Customer, Product)
dat[, .(Product.Group = unlist(lapply(tail(seq(.N), -1L),
function(m) combn(unique(Product), m, toString, FALSE)))),
by = Customer][
, .N, by = Product.Group]
}
f_uwe_dplyr = function(dat){
dat %>%
arrange(Customer, Product) %>%
group_by(Customer) %>%
summarise(Product.Group = n() %>%
seq() %>%
tail(-1L) %>%
lapply(function(m) combn(unique(Product), m, toString, FALSE)) %>%
unlist()) %>%
ungroup() %>%
count(Product.Group)
}
```
|
Ant script with embedded javascript trying to read files
I have the following Ant build target:
```
<target name="analyze">
<script language="javascript">
<![CDATA[
importPackage(java.lang);
var path = project.getProperty("PROJECT_HOME") + "/oms";
System.out.println("path = " +path);
]]>
</script>
</target>
```
I'd like to *find* all files in the directory recursively and, if they end in `.java`, print them out. Is this possible?
| There is an example in the Ant [`script` task docs](http://ant.apache.org/manual/Tasks/script.html) that basically does this. Here's a simplified version:
```
<script language="javascript">
<![CDATA[
importClass(java.io.File);
fs = project.createDataType("fileset");
dir = "src";
fs.setDir( new File( dir ) );
fs.setIncludes( "**/*.java" );
// create echo Task via Ant API
echo = project.createTask("echo");
// iterate over files found.
srcFiles = fs.getDirectoryScanner( project ).getIncludedFiles( );
for ( i = 0; i < srcFiles.length; i++ ) {
var filename = srcFiles[i];
// use echo Task via Ant API
echo.setMessage( filename );
echo.perform( );
}]]>
</script>
```
This uses an Ant [FileSet](http://ant.apache.org/manual/Types/fileset.html) to find the files. Here an includes rule is set on the fileset so that only `.java` files found are returned by the iterator - saves using string operations on the filenames to discard any other files.
If you need to set exclusion rules you can do so by means of the setExcludes() method of the FileSet class (actually of the [AbstractFileset](http://api.dpml.net/ant/1.6.4/org/apache/tools/ant/types/AbstractFileSet.html) class).
See the [docs for patterns](http://ant.apache.org/manual/dirtasks.html#patterns) to understand a little more about Ant wildcards.
|
How do you use two SUM() aggregate functions in the same query for PostgreSQL?
I have a PostgreSQL query that yields the following results:
```
SELECT o.order || '-' || osh.ordinal_number AS order,
o.company,
o.order_total,
SUM(osh.items) AS order_shipment_total,
o.order_type
FROM orders o
JOIN order_shipments osh ON o.order_id = osh.order_id
WHERE o.order = [some order number]
GROUP BY o.order,
o.company,
o.order_total,
o.order_type;
order | company | order_total | order_shipment_total | order_type
-------------------------------------------------------------------
123-1 | A corp. | null | 125.00 | new
123-2 | B corp. | null | 100.00 | new
```
I need to replace `o.order_total` (it doesn't work properly) with the sum of the `order_shipment_total` column so that, for the example above, each row winds up saying 225.00. I need the results above to look like this below:
```
order | company | order_total | order_shipment_total | order_type
-------------------------------------------------------------------
123-1 | A corp. | 225.00 | 125.00 | new
123-2 | B corp. | 225.00 | 100.00 | new
```
**What I've Tried**
1.) To replace `o.order_total`, I've tried `SUM(SUM(osh.items))` but get the error message that you cannot nest aggregate functions.
2.) I've tried to put the entire query as a subquery and sum the `order_shipment_total` column, but when I do, it just repeats the column itself. See below:
```
SELECT order,
company,
SUM(order_shipment_total) AS order_shipment_total,
order_shipment_total,
order_type
FROM (
SELECT o.order || '-' || osh.ordinal_number AS order,
o.company,
o.order_total,
SUM(osh.items) AS order_shipment_total,
o.order_type
FROM orders o
JOIN order_shipments osh ON o.order_id = osh.order_id
WHERE o.order = [some order number]
GROUP BY o.order,
o.company,
o.order_total,
o.order_type
) subquery
GROUP BY order,
company,
order_shipment_total,
order_type;
order | company | order_total | order_shipment_total | order_type
-------------------------------------------------------------------
123-1 | A corp. | 125.00 | 125.00 | new
123-2 | B corp. | 100.00 | 100.00 | new
```
3.) I've tried to only include the columns I actually want to group by in my subquery/query example above, because I feel like I was able to do this in Oracle SQL. But when I do that, I get an error saying "column [name] must appear in the GROUP BY clause or be used in an aggregate function."
```
...
GROUP BY order,
company,
order_type;
ERROR: column "[a column name]" must appear in the GROUP BY clause or be used in an aggregate function.
```
How do I accomplish this? I was certain that a subquery would be the answer but I'm confused as to why this approach will not work.
| The thing you're not quite grasping with your query / approach is that you actually want two different levels of grouping in the same result rows. The subquery approach is half right, but when you do a subquery that groups, inside another query that groups, you can only use the data you've already got (from the subquery): you can either keep it at the level of aggregate detail it already has, or lose precision in favor of grouping further. You can't keep the detail AND lose the detail in order to sum up further. A query-of-subquery is hence (in practical terms) relatively senseless, because you might as well group to the level you want in one hit:
```
SELECT groupkey1, sum(sumx) FROM
(SELECT groupkey1, groupkey2, sum(x) as sumx FROM table GROUP BY groupkey1, groupkey2)
GROUP BY groupkey1
```
Is the same as:
```
SELECT groupkey1, sum(x) FROM
table
GROUP BY groupkey1
```
Gordon's answer will probably work out (except for the same bug yours exhibits in that the grouping set is wrong/doesn't cover all the columns) but it probably doesn't help much in terms of your understanding because it's a code-only answer. Here's a breakdown of how you need to approach this problem but with simpler data and foregoing the window functions in favor of what you already know.
Suppose there are apples and melons, of different types, in stock. You want a query that gives a total of each specific kind of fruit, regardless of the date of purchase. You also want a column with the overall total for each fruit, regardless of type:
Detail:
```
fruit | type | purchasedate | count
apple | golden delicious | 2017-01-01 | 3
apple | golden delicious | 2017-01-02 | 4
apple | granny smith     | 2017-01-04 | 2
melon | honeydew | 2017-01-01 | 1
melon | cantaloupe | 2017-01-05 | 4
melon | cantaloupe | 2017-01-06 | 2
```
So that's 7 golden delicious, 2 granny smith, 1 honeydew, and 6 cantaloupe; it's also 9 apples and 7 melons.
You can't do it as one query\*, because you want two different levels of grouping. You have to do it as two queries, and then (critical understanding point) you have to join the less-precise (apples/melons) results back to the more precise (granny smiths/golden delicious/honeydew/cantaloupe):
```
SELECT * FROM
(
SELECT fruit, type, sum(count) as fruittypecount
FROM fruit
GROUP BY fruit, type
) fruittypesum
INNER JOIN
(
SELECT fruit, sum(count) as fruitcount
FROM fruit
GROUP BY fruit
) fruitsum
ON
fruittypesum.fruit = fruitsum.fruit
```
You'll get this:
```
fruit | type | fruittypecount | fruit | fruitcount
apple | golden delicious | 7 | apple | 9
apple | granny smith | 2 | apple | 9
melon | honeydew | 1 | melon | 7
melon | cantaloupe | 6 | melon | 7
```
Hence for your query, different groups, detail and summary:
```
SELECT
detail.order || '-' || detail.ordinal_number as order,
detail.company,
summary.order_total,
detail.order_shipment_total,
detail.order_type
FROM (
SELECT o.order,
osh.ordinal_number,
o.company,
SUM(osh.items) AS order_shipment_total,
o.order_type
FROM orders o
JOIN order_shipments osh ON o.order_id = osh.order_id
WHERE o.order = [some order number]
    GROUP BY o.order,
             osh.ordinal_number,
             o.company,
             o.order_type
) detail
INNER JOIN
(
SELECT o.order,
SUM(osh.items) AS order_total
FROM orders o
JOIN order_shipments osh ON o.order_id = osh.order_id
--don't need the where clause; we'll join on order number
GROUP BY o.order,
o.company,
o.order_type
) summary
ON
summary.order = detail.order
```
Gordon's query uses a window function to achieve the same effect; the window function runs after the grouping is done, and it establishes another level of grouping (`PARTITION BY ordernumber`), which is the effective equivalent of my `GROUP BY ordernumber` in the summary. The window function's summary data is inherently connected to the detail data via ordernumber; it is implicit that a query saying:
```
SELECT
ordernumber,
lineitemnumber,
  SUM(amount) linetotal,
  sum(SUM(amount)) over(PARTITION BY ordernumber) ordertotal
FROM lineitems   -- hypothetical table name, for illustration
GROUP BY
ordernumber,
lineitemnumber
```
..will have an `ordertotal` that is the total of all the `linetotal` values in the order: the GROUP BY prepares the data to line-item detail, and the window function prepares data at just the order level, repeating the total as many times as necessary to fill in for every line item. I wrote the `SUM` that belongs to the GROUP BY operation in capitals; the lowercase `sum` belongs to the partition operation. It has to be `sum(SUM(amount))` and cannot simply say `sum(amount)`, because `amount` as a column is not allowed on its own (it's not in the GROUP BY). Because `amount` is not allowed on its own and has to be SUMmed for the GROUP BY to work, we have to `sum(SUM())` for the partition to run (it runs after the GROUP BY is done).
It behaves exactly the same as grouping to two different levels and joining them together, and indeed I chose that way to explain it because it makes it clearer how it's working in relation to what you already know about groups and joins.
Remember: JOINs make datasets grow sideways, UNIONs make them grow downwards. When you have some detail data and you want to grow it sideways with some more data (a summary), JOIN it on. (If you'd wanted totals to go at the bottom of each column, it would be UNIONed on.)
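For completeness, a tiny sketch of that last point using the fruit table from above (purely illustrative):

```
-- appending a grand-total row (UNION) instead of adding a total column (JOIN)
SELECT fruit, type, sum(count) AS total
FROM fruit
GROUP BY fruit, type
UNION ALL
SELECT 'all', 'all', sum(count)
FROM fruit;
```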
---
\*you can do it as one query (without window functions), but it can get awfully confusing because it requires all sorts of trickery that ultimately isn't worth it because it's too hard to maintain
|
Exclude a product category from Woocommerce related products
In woocommerce, I am trying to remove a specific product category from displayed related products on single product pages.
I have tried to use a function hooked in `woocommerce_get_related_product_cat_terms`, filter hook, like in [some answer threads](https://stackoverflow.com/search?q=woocommerce_get_related_product_cat_terms), but it doesn't seem to work anymore.
How to exclude a specific product category from Woocommerce related products?
| Try `woocommerce_related_products` hook in the following hooked function, to exclude a specific product category from displayed related products:
```
add_filter( 'woocommerce_related_products', 'exclude_product_category_from_related_products', 10, 3 );
function exclude_product_category_from_related_products( $related_posts, $product_id, $args ){
// HERE define your product category slug
$term_slug = 'hoodies';
// Get the product Ids in the defined product category
$exclude_ids = wc_get_products( array(
'status' => 'publish',
'limit' => -1,
'category' => array($term_slug),
'return' => 'ids',
) );
return array_diff( $related_posts, $exclude_ids );
}
```
Code goes in the functions.php file of the active child theme (or active theme).
Tested and works.
Related answer thread: [Exclude related products ids in Woocommerce](https://stackoverflow.com/questions/50340067/exclude-related-products-ids-in-woocommerce/50345889#50345889)
|
Is there a shorter way to refresh div
I have some elements in my page and I need to refresh their contents every 5 seconds. The code that I'm going to show you works well, but it looks long and repetitive. When I use only the **setInterval** function, the page isn't loaded properly before the first interval fires. Can you suggest a better way to do this? Thanks in advance. Here is my code:
```
var $song=$(".song");
var $album=$(".album");
var $cover=$(".cover");
var $background=$(".overlay-bg");
$.ajax({
url: "song.php",
success: function (response) {
var nowPlaying=$.parseJSON(response);
$song.html(nowPlaying.song);
$album.html(nowPlaying.album);
$cover.css("background-image", "url("+nowPlaying.cover+")");
$background.css("background-image", "url("+nowPlaying.cover+")");
}
})
var refreshSongDetails=setInterval(function() {
$.ajax({
url: "song.php",
success: function (response) {
var nowPlaying=$.parseJSON(response);
$song.html(nowPlaying.song);
$album.html(nowPlaying.album);
$cover.css("background-image", "url("+nowPlaying.cover+")");
$background.css("background-image", "url("+nowPlaying.cover+")");
}
})
}, 5000);
```
| Create your ajax call into a function and call it :
```
var $song=$(".song");
var $album=$(".album");
var $cover=$(".cover");
var $background=$(".overlay-bg");
function ajaxCall() {
$.ajax({
url: "song.php",
success: function (response) {
var nowPlaying=$.parseJSON(response);
$song.html(nowPlaying.song);
$album.html(nowPlaying.album);
$cover.css("background-image", "url("+nowPlaying.cover+")");
$background.css("background-image", "url("+nowPlaying.cover+")");
}
})
}
ajaxCall();
var refreshSongDetails = setInterval(ajaxCall, 5000);
```
|
Reset R instance
Is it possible to reset an instance of R?
Eg. if I have used the commands
```
x <- 1:10
plot(x, -x)
```
And thus polluted the system with the x variable. In this state can I then revert back to a clean state without shutting R down and launching it again?
| You can remove all variables from your workspace using
```
rm(list = ls())
```
You can 'unload' packages with
```
detach(package:packagename)
```
---
EDIT:
You can close all graphics devices with
```
graphics.off()
```
You can clear the command editor history with `CTRL+L`.
If you use Tinn-R as your editor, there is a 'Clear all' button, which clears your workspace and command editor history, and closes graphics devices. (It does not detach packages.)
---
ANOTHER EDIT:
One other thing that you would have to do to reset R is to close all open connections. It is incredibly bad form to leave open connections lying about, so this is more [belt and braces](http://www.phrases.org.uk/meanings/61250.html) than a necessity. ~~(You can probably fool `close_all_connections` by opening connections in obscure environments, but in that case you have only yourself to blame.)~~
```
is.connection <- function(x) inherits(x, "connection")
get_connections <- function(envir = parent.frame())
{
Filter(is.connection, mget(ls(envir = envir), envir = envir))
}
close_all_connections <- function()
{
lapply(c(sys.frames(), globalenv(), baseenv()),
function(e) lapply(get_connections(e), close))
}
close_all_connections()
```
As Marek suggests, use `closeAllConnections` to do this.
ANOTHER EDIT:
In response to Ben's comment about resetting options, that's actually a little bit tricky. The best way to do it would be to store a copy of your options when you load R, and then reset them at this point.
```
#on R load
assign(".Options2", options(), baseenv())
#on reset
options(baseenv()$.Options2)
```
If you aren't foresighted enough to set this up when you load R, then you need something like this function.
```
reset_options <- function()
{
is_win <- .Platform$OS.type == "windows"
options(
add.smooth = TRUE,
browserNLdisabled = FALSE,
CBoundsCheck = FALSE,
check.bounds = FALSE,
continue = "+ ",
contrasts = c(
unordered = "contr.treatment",
ordered = "contr.poly"
),
defaultPackages = c(
"datasets",
"utils",
"grDevices",
"graphics",
"stats",
"methods"
),
demo.ask = "default",
device = if(is_win) windows else x11,
device.ask.default = FALSE,
digits = 7,
echo = TRUE,
editor = "internal",
encoding = "native.enc",
example.ask = "default",
expressions = 5000,
help.search.types = c("vignette", "demo", "help"),
help.try.all.packages = FALSE,
help_type = "text",
HTTPUserAgent = with(
R.version,
paste0(
"R (",
paste(major, minor, sep = "."),
" ",
platform,
" ",
arch,
" ",
os,
")"
)
),
internet.info = 2,
keep.source = TRUE,
keep.source.pkgs = FALSE,
locatorBell = TRUE,
mailer = "mailto",
max.print = 99999,
menu.graphics = TRUE,
na.action = "na.omit",
nwarnings = 50,
OutDec = ".",
pager = "internal",
papersize = "a4",
pdfviewer = file.path(R.home("bin"), "open.exe"),
pkgType = if(is_win) "win.binary" else "source",
prompt = "> ",
repos = c(
CRAN = "@CRAN@",
CRANextra = "http://www.stats.ox.ac.uk/pub/RWin"
),
scipen = 0,
show.coef.Pvalues = TRUE,
show.error.messages = TRUE,
show.signif.stars = TRUE,
str = list(
strict.width = "no",
digits.d = 3,
vec.len = 4
),
str.dendrogram.last = "`",
stringsAsFactors = TRUE,
timeout = 60,
ts.eps = 1e-05,
ts.S.compat = FALSE,
unzip = "internal",
useFancyQuotes = TRUE,
verbose = FALSE,
warn = 0,
warning.length = 1000,
width = 80,
windowsTimeouts = c(100, 500)
)
}
```
The options in that function provide a vanilla R session so you might wish to source your Rprofile.site file afterwards to customise R how you like it.
```
source(file.path(R.home("etc"), "Rprofile.site"))
```
|
What does "live in the heap" mean?
I'm learning Objective-C, and I hear the term "live in the heap" constantly. From what I understand it's some kind of unknown area that a pointer lives in, but I'm trying to really wrap my head around the exact term... like "we should make our property `strong` so it won't live in the heap" - he said that since the property is private. I know it's a big difference. It's pretty clear that we want to make sure that we count the reference to this object so the autorelease won't clean it up (we want to "retain" it, from what I know so far), but I want to make sure I understand the term since it's being used pretty often.
Appreciate it
| There are three major memory areas used by C (and by extension, Objective C) programs for storing the data:
- The static area
- The automatic area (also known as "the stack"), and
- The dynamic area (also known as "the heap").
When you allocate objects by sending their class a `new` or `alloc` message, the resultant object is allocated in the dynamic storage area, so the object is said to live in the heap. All Objective-C objects are like that (although the pointers that *reference* these objects may be in any of the three memory data areas). In contrast, primitive local variables and arrays "live" on the stack, while global primitive variables and arrays live in the static data storage.
Only the heap objects are reference counted, although you can allocate memory from the heap using `malloc`/`calloc`/`realloc`, in which case the allocation would not be reference-counted: your code would be responsible for deciding when to `free` the allocated dynamic memory.
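As a rough illustration (a minimal sketch assuming ARC and Foundation; the names are made up):

```
#import <Foundation/Foundation.h>

static int launchCount;          // static area: exists for the whole program run

void demo(void) {
    int counter = 42;            // stack: primitive local, gone when demo() returns
    NSMutableArray *items = [[NSMutableArray alloc] init]; // the array object lives on the heap
    // 'items' (the pointer itself) is a stack local; the NSMutableArray it references
    // is heap-allocated and reference-counted until no strong reference remains.
    [items addObject:@(counter)];
    launchCount++;
}
```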
|
Correct the parameter count mismatch
How can I correct this error I'm having
>
> TargetParameterCountException was unhandled by user code. Parameter count mismatch.
>
>
>
This is my code where it's happening
```
public static void InvokeMethod(string className, string methodName, string fileName)
{
var t = Type.GetType(className);
using (StreamReader f = new StreamReader("params.txt"))
{
t.GetMethod(methodName).Invoke(t.GetConstructor(Type.EmptyTypes).Invoke(new object[] { }), new object[] { f.ReadLine() });
}
}
```
This is the whole code
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.IO;
class MyClass
{
private int i;
public double d;
private string s;
public bool b;
public MyClass()
{
i = 1;
d = 0.1;
s = "1";
b = true;
}
public void Method0()
{
Console.WriteLine("Method with no arguments, no return value.");
}
private int Method1(int arg0)
{
Console.WriteLine("The method returns int, int gets.");
return arg0;
}
private double Method2(int arg0, double arg1)
{
Console.WriteLine("Method returns a double, taking int and double.");
return arg1 * arg0;
}
public bool Method3(string arg0)
{
Console.WriteLine("Method returns a bool, accepts string");
return arg0.Length>10;
}
public bool Method3(string arg0,string arg1)
{
Console.WriteLine("The method takes two arguments string.");
return arg0 == arg1;
}
public static char Method4(string arg0)
{
Console.WriteLine("Method returns a char, accepts string. .");
Console.WriteLine(arg0);
return arg0[1];
}
public void Method5(int arg0, double arg1)
{
Console.WriteLine("arg1 = {0} arg2 = {1}.",arg0,arg1);
}
}
class MyTestClass
{
public static string[] GetMethodsWithStrParams(string className)
{
var t = Type.GetType(className);
List<string> res = new List<string>();
foreach (var method in t.GetMethods())
{
foreach (var param in method.GetParameters())
{
if (param.ParameterType == typeof(string))
{
res.Add(method.Name);
break;
}
}
}
return res.ToArray();
}
public static void InvokeMethod(string className, string methodName, string fileName)
{
var t = Type.GetType(className);
using (StreamReader f = new StreamReader("params.txt"))
{
t.GetMethod(methodName).Invoke(t.GetConstructor(Type.EmptyTypes).Invoke(new object[] { }),
new object[] { f.ReadLine() });
}
}
}
class Program
{
static void Main(string[] args)
{
string name = "MyClass";
foreach (var x in MyTestClass.GetMethodsWithStrParams(name))
{
Console.WriteLine(x);
}
MyTestClass.InvokeMethod("MyClass", "Method5", "params.txt");
Console.ReadKey(true);
}
}
```
| Your `InvokeMethod` implementation always calls `t.GetMethod(methodName).Invoke` with two arguments, the first being the target instance on which the method is called, and second being the array of method arguments, which contains only one string (`f.ReadLine()`).
Then you use `InvokeMethod` to call `MyClass.Method5` which takes two arguments, an int and a double. This obviously can't work, as `myClass.Method5("some string")` is syntactically incorrect, and this is what effectively happens. You can't expect that a string is a valid argument list for all `MyClass` methods, can you?
That is the cause of the error, but only you can decide how to fix it, as we don't know the greater context. You have to provide the correct number of parameters depending on the actual method being called.
Possible path to solution:
- what are the arguments I want to provide to Method5?
- where do I get them from?
- how do I move them from wherever they are to the array I give to `Invoke`?
This should get you started, but no one can tell you exactly as you have only described the error, but not the real problem you are trying to solve with your code.
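To make the last point concrete, here is one hedged sketch (not the only possible fix, and it assumes `params.txt` holds one value per line, in the same order as the target method's parameters): build the argument array from the method's own parameter list instead of always passing a single string.

```
public static void InvokeMethod(string className, string methodName, string fileName)
{
    var t = Type.GetType(className);
    var method = t.GetMethod(methodName);

    using (StreamReader f = new StreamReader(fileName))
    {
        // Convert one line of input per parameter to that parameter's type,
        // e.g. "3" -> int, "0.5" -> double for Method5(int, double).
        object[] args = method.GetParameters()
            .Select(p => Convert.ChangeType(f.ReadLine(), p.ParameterType))
            .ToArray();

        // Static methods take no target instance.
        object instance = method.IsStatic ? null : Activator.CreateInstance(t);
        method.Invoke(instance, args);
    }
}
```

With that, `MyTestClass.InvokeMethod("MyClass", "Method5", "params.txt")` works as long as the file's first two lines can be parsed as an int and a double.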
|
Inspect serialized file content
I have a file which apparently contains serialized structures. The first 26 bytes contain the string "java.util.HashMap", so I am sure that this file holds serialized data.
Is there a nice tool, maybe with a simple UI, where I can view the structured data?
I googled for it for a while, but I didn't find any proper resources. It should preferably run on Windows. Linux would be fine too, but that would be overhead for me.
| ## jdeserialize
There is tool from Google called "jdeserialize":
>
> jdeserialize is a library that interprets Java serialized objects --
> the data generated by an ObjectOutputStream. **It also comes with a
> command-line tool** that can generate compilable class declarations,
> extract block data, and print textual representations of instance
> values.
>
>
>
[Project site of jdeserialize](https://code.google.com/archive/p/jdeserialize/)
[Git repository of jdeserialize](https://github.com/frohoff/jdeserialize/tree/master/jdeserialize)
---
## Serialysis
There is also a Java library called "Serialysis", that can be used to generate human-readable output of a serialized object, like so:
```
SEntity sint = SerialScan.examine(new Integer(5));
System.out.println(sint);
```
...produces this output:
```
SObject(java.lang.Integer) {
value = Prim(int){5}
}
```
[Explanation of how Serialysis works](https://community.oracle.com/blogs/emcmanus/2007/06/12/disassembling-serialized-java-objects)
[Git repository of Serialysis](https://github.com/frohoff/serialysis)
---
**Since both projects are written in Java, you can use them in both Windows and Linux.**
|
How can I refresh a stored and snapshotted jquery selector variable
I ran into a problem yesterday with a jQuery selector I assigned to a variable, and it's driving me mad.
Here is a jsfiddle with testcase:
- assign the .elem to my obj var
- log both lengths to the console. Result => 4
- Remove #3 from the DOM
- log obj to the console => the removed #3 is still there and the length is still 4.
I figured out that the jQuery query is snapshotted(?) into the variable and can't/won't be updated
- log .elem to the console.. yep Result => 3 and the #3 is gone
- Now I update .elem with a new width of 300
- logging obj & obj.width gives me 300.. So the snapshot has been updated ? What's interesting is that 3 of the 4 divs have the new width, but the removed #3 doesn't...
Another test: Adding a li element to the domtree and logging obj and .elem.
.elem does have the new li and obj doesn't, because it's still the old snapshot
<http://jsfiddle.net/CBDUK/1/>
Is there no way to update this obj with the new content?
I don't want to make a new obj, because in my application there is a lot of information saved in that object, and I don't want to destroy it...
| Yeah, it's a snapshot. Furthermore, removing an element from the page DOM tree isn't magically going to vanish all references to the element.
You can refresh it like so:
```
var a = $(".elem");
a = $(a.selector);
```
Mini-plugin:
```
$.fn.refresh = function() {
return $(this.selector);
};
var a = $(".elem");
a = a.refresh();
```
This simple solution doesn't work with complex traversals though. You are going to have to make a parser for the `.selector` property to refresh the snapshot for those.
The format is like:
```
$("body").find("div").next(".sibling").prevAll().siblings().selector
//"body div.next(.sibling).prevAll().siblings()"
```
In-place mini-plugin:
```
$.fn.refresh = function() {
var elems = $(this.selector);
this.splice(0, this.length);
this.push.apply( this, elems );
return this;
};
var a = $(".elem");
a.refresh() //No assignment necessary
```
|
Morse code encoder decoder
I created this morse code encoder and decoder.
```
class DecodeError(BaseException):
__module__ = Exception.__module__
class EncodeError(BaseException):
__module__ = Exception.__module__
code_letter = {
'.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E', '..-.': 'F', '--.': 'G',
'....': 'H', '..': 'I', '.---': 'J', '-.-': 'K', '.-..': 'L', '--': 'M', '-.': 'N',
'---': 'O', '.--.': 'P', '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T', '..-': 'U',
'...-': 'V', '.--': 'W', '-..-': 'X', '-.--': 'Y', '--..': 'Z',
'.----': '1', '..---': '2', '...--': '3', '....-': '4', '.....': '5',
'-....': '6', '--...': '7', '---..': '8', '----.': '9', '-----': '0',
'--..--': ', ', '.-.-.-': '.', '..--..': '?', '-..-.': '/',
'-....-': '-', '-.--.': '(', '-.--.-': ')', '/': ' '
}
letter_code = {value: key for key, value in zip(code_letter.keys(), code_letter.values())}
def morse_encode(string: str) -> str:
try:
return ' '.join(letter_code[i.upper()] for i in string)
except:
raise EncodeError('Unknown value in string')
def morse_decode(string: str) -> str:
try:
return ''.join(code_letter[i] for i in string.split())
except:
raise DecodeError('Unknown value in string')
if __name__ == '__main__':
string = input()
print(morse_encode(string))
```
Is it possible to make the code shorter while maintaining neatness?
Thanks!
| @l0b0 is right for (1.). The [docs](https://docs.python.org/3/library/exceptions.html#BaseException) say to inherit from `Exception` instead:
>
> The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception)
>
>
>
I've also never seen `__module__` used in that case, but if you like the output better, I guess it works fine.
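For reference, a tiny sketch of what that attribute changes when the exception classes live in an imported module (say a hypothetical `morse.py`); this is the behaviour as I recall it from `traceback.format_exception_only`, so worth verifying:

```
# morse.py (hypothetical module name)
class PlainError(Exception):
    pass

class PrettyError(Exception):
    __module__ = Exception.__module__  # pretend we live in builtins

# In a script that imports morse and lets the exceptions escape:
#   raise morse.PlainError("boom")   ->  morse.PlainError: boom
#   raise morse.PrettyError("boom")  ->  PrettyError: boom
```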
---
`letter_code` can make use of dictionary's [`items`](https://docs.python.org/3/library/stdtypes.html?highlight=dict%20items#dict.items) method.
```
zip(code_letter.keys(), code_letter.values())
```
Is roughly equivalent to
```
code_letter.items()
```
And just to make it a little clearer what the dictionary comprehension is doing, I might also rename loop variables:
```
letter_code = {letter: morse for morse, letter in code_letter.items()}
```
---
You're using bare `except`s, which is generally a bad idea. If you accidentally typo some other catchable error into the `try`, you'll get `'Unknown value in string'` messages instead of a real error. Specify the exact error you want to catch. I'd also make use of the fact that stringifying a `KeyError` exception tells you what the bad key was. You can print out the exception to tell the user exactly what went wrong:
```
def morse_encode(string: str) -> str:
try:
return ' '.join(letter_code[i.upper()] for i in string)
except KeyError as e:
raise EncodeError(f'Unknown value in string: {e}')
>>> morse_encode("]")
EncodeError: Unknown value in string: ']'
```
And of course, `i` is a bad name for that loop variable. `i` suggests an index, but that's a letter, not an index. This reads much better:
```
' '.join(letter_code[char.upper()] for char in string)
```
|
How stable and widespread is "OCaml Batteries Included" and is it recommended?
I'm just getting back into OCaml for a new small research project after many years of SML, Haskell and F#.
I quickly found myself missing some things when using the OCaml libraries, and I also missed having a syntax for monadic comprehensions.
OCaml Batteries Included seems to fill exactly these gaps. But I'm a little unsure whether it has reached the point of being mature, stable and widespread enough for this project. Part of my doubt comes from having a number of hiccups when installing it - including discovering that the installation instructions are currently out of date, and it was only by Googling that I eventually found a response to a support request that explained what was required.
Is Batteries relatively stable? (Or at least more stable than the above would suggest?)
Is its use relatively widespread? (E.g., is 10% of new OCaml code written using it?)
What kinds of projects would it be recommended for?
(And, in particular, would it be recommended for a small-to-medium sized research project that should yield a small library that would likely be maintained for some time?)
Any other recommendations?
| Semi-disclaimer: I am one of current developers of Batteries Included.
It hasn't seen a great deal of uptake yet so far as I know. I haven't seen any packages released that depend on it. However, that alone shouldn't stop you from using it. Hopefully it will gain some traction as it continues to mature. I personally use it for all of my new OCaml development, both for personal projects and for my research work.
Expect to see it in a fair amount of flux for the duration of the 1.x cycle. We're committed to keeping the APIs backwards-compatible for all 1.x releases, but new things will be added and old modules/functions likely deprecated. Much of it should be pretty stable, although we are still finding and fixing bugs. That will be greatly aided, though, by having more people use it and report the bugs we haven't seen yet.
We hope that the 2.0 release will be much more static and have a greater degree of consistency between modules, but 2.0 is likely a ways off.
My advice would be: if you would use Extlib, use Batteries. It has absorbed Extlib's feature set (indeed, most of its code), and fleshes it out quite a bit. If you find bugs, please report them :).
|
How to dynamically change CSS class of an HTML tag?
I am using JavaScript. I have a variable `var boolVal` that either evaluates to `true`/`false`. On my page, I have a `<div>` tag:
```
<div id='div1' class="redClass"></div>
```
Based on the value of `var boolVal`, I want to change the CSS class of the `<div>` tag to `blueClass`.
For example: the present class makes the `<div>` red; the new class should then make it blue at runtime, without needing a page refresh.
Can we achieve this in simple JavaScript? Can we use
```
document.getElementById("MyElement").className = "MyClass";
```
or should we use `AddClass`?
| You can add a CSS class based on `id` dynamically using [`classList`](https://developer.mozilla.org/en/docs/Web/API/Element/classList) API as follows:
```
document.getElementById('idOfElement').classList.add('newClassName');
```
Or the old way:
```
document.getElementById('idOfElement').className = 'newClassName';
// += to keep existing classes
```
---
Alternatively you can use other DOM query methods shown below to find elements. The last three return a collection so you'll have to iterate over it and apply the class to each element in the collection (similar to the example given below each).
- [`querySelector`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector)
- [`querySelectorAll`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll)
```
elements.forEach(element => element.classList.add('newClassName'));
```
- [`getElementsByClassName`](https://developer.mozilla.org/en/docs/Web/API/Document/getElementsByClassName)
```
Array.from(elements).forEach(element => element.classList.add('newName'));
```
- [`getElementsByTagName`](https://developer.mozilla.org/en/docs/Web/API/Document/getElementsByTagName)
```
Array.from(elements).forEach(element => element.classList.add('newName'));
```
---
In your case
```
var element = document.getElementById('div1');
if(boolVal)
element.className= 'blueClass'; // += ' blueClass'; to keep existing classes
else
element.className= 'redClass';
```
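If the two classes are mutually exclusive, `classList.toggle` with its second (force) argument expresses the same switch a little more compactly; a small sketch using the names from the question:

```
var element = document.getElementById('div1');
// Adds blueClass when boolVal is true and removes it when false;
// redClass gets the opposite treatment.
element.classList.toggle('blueClass', boolVal);
element.classList.toggle('redClass', !boolVal);
```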
|
Checking environment variable in make through automake
Is there a way to have a conditional passed through automake so it is passed on to the resulting Makefile.in and Makefile later on?
I check whether `JAVA_HOME` is defined in the environment in a Makefile using
```
ifeq (undefined,$(origin JAVA_HOME))
#CALL with defaults
else
#CALL according to the variable
endif
```
But when I process this in a Makefile.am with automake I get two errors:
```
else without if
endif without if
```
Looks like automake does not digest the ifeq. Is there a way to pass this through it (if it makes sense doing so), or is there another autotools-friendly way of getting the same result?
The idea is also to allow setting/changing the variable just before running make to easily target different JDKs.
| # What I think's the right way:
Rely on `$(JAVA_HOME)` being set in `Makefile.am` and make sure a sensible value for it is set by `configure`.
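For example, a hedged sketch of how `configure.ac` could do that: declare `JAVA_HOME` as a precious variable so users can pass it to `./configure`, and fall back to a default when it is empty (the fallback path below is only an illustration):

```
AC_ARG_VAR([JAVA_HOME], [Path to the JDK installation])
AS_IF([test -z "$JAVA_HOME"],
      [JAVA_HOME=/usr/lib/jvm/default-java])
```

`AC_ARG_VAR` also substitutes the variable, so `$(JAVA_HOME)` is then available in the generated Makefiles.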
# Answering the Question as Written:
Because `automake` wants to generate Makefiles that work on POSIX make, it doesn't work too well with GNU-make conditionals.
So you do the test in `configure.ac`:
```
AC_SUBST([JAVA_HOME])
AM_CONDITIONAL([JAVA_HOME_SET], [test ! -z "$JAVA_HOME"])
```
Then in Makefile.am:
```
if JAVA_HOME_SET
## Something that relies on JAVA_HOME
else
## Defaults
endif
```
|