Q:
Reading and modifying the text from the text file in Java
I have a project that needs to modify some text in a text file.
For example, BB,BO,BR,BZ,CL,VE-BR
needs to become BB,BO,BZ,CL,VE,
and HU, LT, LV, UA, PT-PT/AR needs to become HU, LT, LV, UA,/AR.
I have tried writing some code, but it fails to loop. Also, in this case:
IN/CI, GH, KE, NA, NG, SH, ZW /EE, HU, LT, LV, UA,/AR, BB
"AR, BB,BO,BR,BZ,CL, CO, CR, CW, DM, DO,VE-AR-BR-MX"
I want to delete the AR in the second row, but my code only deletes the AR in the first row.
I have no idea how to fix this and am looking for help.
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.util.Scanner;
public class tomy {
static StringBuffer stringBufferOfData = new StringBuffer();
static StringBuffer stringBufferOfData1 = stringBufferOfData;
static String filename = null;
static String input = null;
static String s = "-";
static Scanner sc = new Scanner(s);
public static void main(String[] args) {
boolean fileRead = readFile();
if (fileRead) {
replacement();
writeToFile();
}
System.exit(0);
}
private static boolean readFile() {
System.out.println("Please enter your files name and path i.e C:\\test.txt: ");
filename = "C:\\test.txt";
Scanner fileToRead = null;
try {
fileToRead = new Scanner(new File(filename));
for (String line; fileToRead.hasNextLine()
&& (line = fileToRead.nextLine()) != null;) {
System.out.println(line);
stringBufferOfData.append(line).append("\r\n");
}
fileToRead.close();
return true;
} catch (FileNotFoundException ex) {
System.out.println("The file " + filename + " could not be found! "+ ex.getMessage());
return false;
} finally {
fileToRead.close();
return true;
}
}
private static void writeToFile() {
try {
BufferedWriter bufwriter = new BufferedWriter(new FileWriter(
filename));
bufwriter.write(stringBufferOfData.toString());
bufwriter.close();
} catch (Exception e) {// if an exception occurs
System.out.println("Error occured while attempting to write to file: "+ e.getMessage());
}
}
private static void replacement() {
System.out.println("Please enter the contents of a line you would like to edit: ");
String lineToEdit = sc.nextLine();
int startIndex = stringBufferOfData.indexOf(lineToEdit);
int endIndex = startIndex + lineToEdit.length() + 2;
String getdata = stringBufferOfData.substring(startIndex + 1, endIndex);
String data = " ";
Scanner sc1 = new Scanner(getdata);
Scanner sc2 = new Scanner(data);
String lineToEdit1 = sc1.nextLine();
String replacementText1 = sc2.nextLine();
int startIndex1 = stringBufferOfData.indexOf(lineToEdit1);
int endIndex1 = startIndex1 + lineToEdit1.length() + 3;
boolean test = lineToEdit.contains(getdata);
boolean testh = lineToEdit.contains("-");
System.out.println(startIndex);
if (testh = true) {
stringBufferOfData.replace(startIndex, endIndex, replacementText1);
stringBufferOfData.replace(startIndex1, endIndex1 - 2,
replacementText1);
System.out.println("Here is the new edited text:\n"
+ stringBufferOfData);
} else {
System.out.println("nth" + stringBufferOfData);
System.out.println(getdata);
}
}
}
A:
I wrote a quick method for you that I think does what you want, i.e. remove all occurrences of a token in a line, where that token is embedded in the line and is identified by a leading dash.
The method reads the file and writes it straight out to another file after editing out the token. This would allow you to process a huge file without worrying about memory constraints.
You can simply rename the output file after a successful edit. I'll leave it up to you to work that out.
If you feel you really must use string buffers to do in memory management, then grab the logic for the line editing from my method and modify it to work with string buffers.
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Scanner;

static void onePassReadEditWrite(final String inputFilePath, final String outputPath)
{
// the input file
Scanner inputScanner = null;
// output file
FileWriter outputWriter = null;
try
{
// open the input file
inputScanner = new Scanner(new File(inputFilePath));
// open output file
File outputFile = new File(outputPath);
outputFile.createNewFile();
outputWriter = new FileWriter(outputFile);
try
{
for (
String lineToEdit = inputScanner.nextLine();
/*
* NOTE: when this loop attempts to read beyond EOF it will throw the
* java.util.NoSuchElementException exception which is caught in the
* containing try/catch block.
*
* As such there is NO predicate required for this loop.
*/;
lineToEdit = inputScanner.nextLine()
)
// scan all lines from input file
{
System.out.println("START LINE [" + lineToEdit + "]");
// get position of dash in line
int dashInLinePosition = lineToEdit.indexOf('-');
while (dashInLinePosition != -1)
// this line needs editing
{
// split line on dash
String halfLeft = lineToEdit.substring(0, dashInLinePosition);
String halfRight = lineToEdit.substring(dashInLinePosition + 1);
// get token after dash that is to be removed from whole line
String tokenToRemove = halfRight.substring(0, 2);
// reconstruct line from the 2 halves without the dash
StringBuilder sb = new StringBuilder(halfLeft);
sb.append(halfRight);
lineToEdit = sb.toString();
// get position of first token in line
int tokenInLinePosition = lineToEdit.indexOf(tokenToRemove);
while (tokenInLinePosition != -1)
// do for all tokens in line
{
// split line around token to be removed
String partLeft = lineToEdit.substring(0, tokenInLinePosition);
String partRight = lineToEdit.substring(tokenInLinePosition + tokenToRemove.length());
if ((!partRight.isEmpty()) && (partRight.charAt(0) == ','))
// remove prefix comma from right part
{
partRight = partRight.substring(1);
}
// reconstruct line from the left and right parts
sb.setLength(0);
sb.append(partLeft);
sb.append(partRight);
lineToEdit = sb.toString();
// find next token to be removed from line
tokenInLinePosition = lineToEdit.indexOf(tokenToRemove);
}
// handle additional dashes in line
dashInLinePosition = lineToEdit.indexOf('-');
}
System.out.println("FINAL LINE [" + lineToEdit + "]");
// write line to output file
outputWriter.write(lineToEdit);
outputWriter.write("\r\n");
}
}
catch (java.util.NoSuchElementException e)
// end of scan
{
}
finally
// housekeeping
{
outputWriter.close();
inputScanner.close();
}
}
catch(FileNotFoundException e)
{
e.printStackTrace();
}
catch(IOException e)
{
inputScanner.close();
e.printStackTrace();
}
}
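As a compact alternative sketch of the same line-editing idea (not the asker's exact data handling — the method name and sample value below are illustrative, and it assumes each token is exactly two characters after the dash, as in the question's examples):

```java
public class Main {
    // Sketch only: assumes the token to remove is the two characters
    // immediately following each dash (e.g. "-BR", "-AR")
    static String stripDashTokens(String line) {
        int dash;
        while ((dash = line.indexOf('-')) != -1) {
            // token immediately after the dash
            String token = line.substring(dash + 1, Math.min(dash + 3, line.length()));
            // drop the dash itself
            line = line.substring(0, dash) + line.substring(dash + 1);
            // remove every occurrence of the token, eating a trailing comma when present
            line = line.replace(token + ",", "").replace(token, "");
        }
        return line;
    }

    public static void main(String[] args) {
        System.out.println(stripDashTokens("BB,BO,BR,BZ,CL,VE-BR")); // BB,BO,BZ,CL,VE
    }
}
```

This trades the explicit index bookkeeping of the longer method for `String.replace`, at the cost of rescanning the whole line per token.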
| {
"pile_set_name": "StackExchange"
} |
Q:
logForm.$invalid.$setValidity what's this mean?
https://github.com/PatrickO10/meetUp/blob/master/index.html#L73
I am new to this field and am reading some code.
I can't understand logForm.$invalid.$setValidity here, and I can't find anything about it on the internet. The $setValidity I can find documented takes two parameters, but here it has none.
Also, $invalid at https://docs.angularjs.org/api/ng/type/form.FormController is documented as a boolean, so why call $setValidity on it? Why not just use ng-disabled="logForm.$invalid"?
Could you tell me? Thanks.
<div class="modal fade login" tabindex="-1" role="dialog" aria-labelledby="loginModelLabel" ng-controller="LoginCtrl as logCtrl">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-header primary-color-dark-bg">
<button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button>
<h4 class="modal-title" id="loginModelLabel">Login</h4>
</div>
<div class="modal-body primary-bg">
<form class="row form-horizontal" id="loginForm" ng-submit="logCtrl.login(user)" name="logForm">
<label for="logEmail" class="col-xs-12 col-md-6 margin-top">
<span class="pad-right">Enter your email</span>
<input type="email" id="logEmail" ng-model="user.email" class="form-control" placeholder="example@krustykrab.com" required autocomplete="email" autofocus>
</label>
<label for="logPass" class="col-xs-12 col-md-6 margin-top">
<span>Enter your password</span>
<input type="password" id="logPass" ng-model="user.password" class="form-control" placeholder="Enter your password" required>
</label>
<div class="col-xs-12 margin-top" ng-show="loginError">
<p class="invalidPass">Login Fail! {{loginErrMsg}}</p>
</div>
<label class="col-xs-12 margin-top">
<input id="submitLogin" type="submit" value="Login" ng-disabled="logForm.$invalid.$setValidity">
</label>
</form>
</div>
<div class="modal-footer primary-color-dark-bg">
<button type="submit" class="btn btn-primary" data-dismiss="modal">Close</button>
</div>
</div>
<!-- /.modal-content -->
</div>
<!-- /.modal-dialog -->
</div>
A:
<input id="submitLogin" type="submit" value="Login"
ng-disabled="logForm.$invalid.$setValidity">
You are basically telling the input to be ng-disabled when the form is $invalid; i.e., while the form is $invalid, the input stays disabled.
Q:
Lookup String from File and Use as Filter
I have a reference file which contains the conditions that are needed for tagging records in a data frame.
REFERENCE FILE
GROUP,CONDITION
1,df['a'].find('abc')
2,df['d'].find('def')
3,df['g'].find('ghi')
I want to check on my main data whether the string exists in the TEXTFIELD column and tag it to its respective group.
MAIN DATA
ID,TEXTFIELD
A,fsadflnashdfp**abc**asfa
B,**ghi**dsfasdfasfqegdfsd
C,orjtorenblmflvdfg**def**
DESIRED RESULT
ID,GROUP
A,1
B,3
C,2
How do I call the function inside the reference file? Or is there any other, cleaner way to do this?
My current script looks like this. I believe I'm doing something wrong, as it throws the error KeyError: "df['TEXTFIELD'].find('abc')".
x = [
[1, "df['TEXTFIELD'].find('abc')"],
[2, "df['TEXTFIELD'].find('def')"],
[3, "df['TEXTFIELD'].find('ghi')"]
]
y = [
['A','fsadflnashdfpabcasfa'],
['B','ghidsfasdfasfqegdfsd'],
['C','orjtorenblmflvdfgdef ']
]
df_ref = pd.DataFrame(x,columns=["GROUP","CONDITION"])
df = pd.DataFrame(x,columns=["ID","TEXTFIELD"])
condition = df_ref.loc[0,'CONDITION']
df_out = df[condition]
A:
Alright, so there are 3 things wrong with your question, if I got it right:
1-
df_ref = pd.DataFrame(x,columns=["GROUP","CONDITION"])
df = pd.DataFrame(x,columns=["ID","TEXTFIELD"]) # LOOK AT ME
I believe in the second line above, you actually meant y, instead of x, so it should be:
df_ref = pd.DataFrame(x,columns=["GROUP","CONDITION"])
df = pd.DataFrame(y,columns=["ID","TEXTFIELD"])
# | <- right here
2-
df['TEXTFIELD'].find('abc')
find is not a method of DataFrame, but you can achieve what you want with contains:
df['TEXTFIELD'].str.contains('abc')
3-
df_ref.loc[0,'CONDITION']
This will return "df['TEXTFIELD'].str.contains('abc')". Notice this is a string; if you call df["df['TEXTFIELD'].str.contains('abc')"] it will give you a KeyError because pandas doesn't know what that string is. You want to apply a boolean mask built from the expression in that string, and for that you can use eval:
df[eval(condition)]
And it will give the result you want:
ID TEXTFIELD
0 A fsadflnashdfpabcasfa
Obs.: I know people will throw eggs at me for suggesting the use of eval; it is a great security threat, so I only recommend using it if you are certain about what's in your file. It will solve your problem, though...
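Putting the three fixes together, a minimal runnable sketch (using the question's sample data, with the x/y mix-up fixed and str.contains substituted for find) that tags each row with its group could look like this:

```python
import pandas as pd

# Question's sample data, with conditions rewritten to use str.contains
x = [[1, "df['TEXTFIELD'].str.contains('abc')"],
     [2, "df['TEXTFIELD'].str.contains('def')"],
     [3, "df['TEXTFIELD'].str.contains('ghi')"]]
y = [['A', 'fsadflnashdfpabcasfa'],
     ['B', 'ghidsfasdfasfqegdfsd'],
     ['C', 'orjtorenblmflvdfgdef']]

df_ref = pd.DataFrame(x, columns=["GROUP", "CONDITION"])
df = pd.DataFrame(y, columns=["ID", "TEXTFIELD"])

# Evaluate each stored condition as a boolean mask and tag the matching rows.
# eval is only safe if you fully trust the contents of the reference file.
for _, ref in df_ref.iterrows():
    mask = eval(ref["CONDITION"])
    df.loc[mask, "GROUP"] = ref["GROUP"]

print(df[["ID", "GROUP"]])
```

This yields the desired A→1, B→3, C→2 mapping from the question.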
Q:
aggregate function produces daily instead of hourly mean
I have a data.frame with 15-minute time steps in the first column and 16 more columns of data. I want to get the hourly mean for each column. I am using aggregate, and it works perfectly fine for 1-minute data.
mydata <- list()
for(j in colnames(data_frame)){
data_mean <- aggregate(data_frame[j],
list(hour=cut(as.POSIXct(data_frame$TIME), "hour")),
mean, na.rm=TRUE)
mydata[[j]] <- data_mean
}
When I use this same setup for a 15-minute data set, it gives me the daily mean instead of the hourly mean. Any idea why?
My data looks like this for 1 min data:
"TIME","Tair","RH"
2016-01-01 00:01:00,5.9,82
2016-01-01 00:02:00,5.9,82
2016-01-01 00:03:00,5.9,82
2016-01-01 00:04:00,5.89,82
2016-01-01 00:05:00,5.8,82
2016-01-01 00:06:00,5.8,82
2016-01-01 00:07:00,5.8,82
2016-01-01 00:08:00,5.8,82
2016-01-01 00:09:00,5.8,82
2016-01-01 00:10:00,5.8,82
2016-01-01 00:11:00,5.8,82
2016-01-01 00:12:00,5.8,82
2016-01-01 00:13:00,5.8,82
2016-01-01 00:14:00,5.8,82
2016-01-01 00:15:00,5.8,82
2016-01-01 00:16:00,5.8,82
2016-01-01 00:17:00,5.8,82
2016-01-01 00:18:00,5.8,82
2016-01-01 00:19:00,5.8,82
2016-01-01 00:20:00,5.8,82
2016-01-01 00:21:00,5.75,82
2016-01-01 00:22:00,5.78,82
2016-01-01 00:23:00,5.78,83
2016-01-01 00:24:00,5.8,82
2016-01-01 00:25:00,5.73,82
2016-01-01 00:26:00,5.7,82
2016-01-01 00:27:00,5.7,82
2016-01-01 00:28:00,5.7,82
2016-01-01 00:29:00,5.7,82
2016-01-01 00:30:00,5.7,82
2016-01-01 00:31:00,5.7,83
2016-01-01 00:32:00,5.76,83
2016-01-01 00:33:00,5.8,83
2016-01-01 00:34:00,5.8,82
2016-01-01 00:35:00,5.8,82
2016-01-01 00:36:00,5.8,83
2016-01-01 00:37:00,5.79,83
2016-01-01 00:38:00,5.7,82
And for 15 min data:
"TIME","Tair","RH"
2016-01-01 00:15:00,6.228442,80.40858
2016-01-01 00:30:00,6.121088,81.00000
2016-01-01 00:45:00,6.075000,NA
2016-01-01 01:00:00,5.951910,NA
2016-01-01 01:15:00,5.844144,NA
2016-01-01 01:30:00,5.802242,NA
2016-01-01 01:45:00,5.747619,NA
2016-01-01 02:00:00,5.742889,NA
2016-01-01 02:15:00,5.752584,81.12135
2016-01-01 02:30:00,5.677753,81.00000
2016-01-01 02:45:00,5.500224,81.61435
2016-01-01 03:00:00,5.225282,82.29797
2016-01-01 03:15:00,5.266441,83.00000
2016-01-01 03:30:00,5.200448,83.32584
2016-01-01 03:45:00,5.098876,84.00000
2016-01-01 04:00:00,5.081061,83.76894
2016-01-01 04:15:00,5.230769,82.88664
2016-01-01 04:30:00,5.300000,82.06742
2016-01-01 04:45:00,5.300000,NA
2016-01-01 05:00:00,5.399776,NA
A:
Your code works for me.
However, your loop is slightly wasteful in that it repeatedly computes the cut of the TIME column for every column of the data.frame. You could precompute it, but there's a better solution.
You can produce the same result but in a simpler, more conventional, and more useful form with a single call to aggregate():
aggregate(df1[names(df1)!='TIME'],list(hour=cut(df1$TIME,'hour')),mean,na.rm=T);
## hour Tair RH
## 1 2016-01-01 5.786316 82.15789
aggregate(df15[names(df15)!='TIME'],list(hour=cut(df15$TIME,'hour')),mean,na.rm=T);
## hour Tair RH
## 1 2016-01-01 00:00:00 6.141510 80.70429
## 2 2016-01-01 01:00:00 5.836479 NaN
## 3 2016-01-01 02:00:00 5.668362 81.24523
## 4 2016-01-01 03:00:00 5.197762 83.15595
## 5 2016-01-01 04:00:00 5.227957 82.90767
## 6 2016-01-01 05:00:00 5.399776 NaN
Data
df1 <- data.frame(TIME=as.POSIXct(c('2016-01-01 00:01:00','2016-01-01 00:02:00',
'2016-01-01 00:03:00','2016-01-01 00:04:00','2016-01-01 00:05:00','2016-01-01 00:06:00',
'2016-01-01 00:07:00','2016-01-01 00:08:00','2016-01-01 00:09:00','2016-01-01 00:10:00',
'2016-01-01 00:11:00','2016-01-01 00:12:00','2016-01-01 00:13:00','2016-01-01 00:14:00',
'2016-01-01 00:15:00','2016-01-01 00:16:00','2016-01-01 00:17:00','2016-01-01 00:18:00',
'2016-01-01 00:19:00','2016-01-01 00:20:00','2016-01-01 00:21:00','2016-01-01 00:22:00',
'2016-01-01 00:23:00','2016-01-01 00:24:00','2016-01-01 00:25:00','2016-01-01 00:26:00',
'2016-01-01 00:27:00','2016-01-01 00:28:00','2016-01-01 00:29:00','2016-01-01 00:30:00',
'2016-01-01 00:31:00','2016-01-01 00:32:00','2016-01-01 00:33:00','2016-01-01 00:34:00',
'2016-01-01 00:35:00','2016-01-01 00:36:00','2016-01-01 00:37:00','2016-01-01 00:38:00')),
Tair=c(5.9,5.9,5.9,5.89,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.8,5.75,
5.78,5.78,5.8,5.73,5.7,5.7,5.7,5.7,5.7,5.7,5.76,5.8,5.8,5.8,5.8,5.79,5.7),RH=c(82L,82L,82L,
82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,82L,83L,82L,82L,82L,
82L,82L,82L,82L,83L,83L,83L,82L,82L,83L,83L,82L));
df15 <- data.frame(TIME=as.POSIXct(c('2016-01-01 00:15:00','2016-01-01 00:30:00',
'2016-01-01 00:45:00','2016-01-01 01:00:00','2016-01-01 01:15:00','2016-01-01 01:30:00',
'2016-01-01 01:45:00','2016-01-01 02:00:00','2016-01-01 02:15:00','2016-01-01 02:30:00',
'2016-01-01 02:45:00','2016-01-01 03:00:00','2016-01-01 03:15:00','2016-01-01 03:30:00',
'2016-01-01 03:45:00','2016-01-01 04:00:00','2016-01-01 04:15:00','2016-01-01 04:30:00',
'2016-01-01 04:45:00','2016-01-01 05:00:00')),Tair=c(6.228442,6.121088,6.075,5.95191,
5.844144,5.802242,5.747619,5.742889,5.752584,5.677753,5.500224,5.225282,5.266441,5.200448,
5.098876,5.081061,5.230769,5.3,5.3,5.399776),RH=c(80.40858,81,NA,NA,NA,NA,NA,NA,81.12135,81,
81.61435,82.29797,83,83.32584,84,83.76894,82.88664,82.06742,NA,NA));
Q:
How does an import quota affect the demand / supply of currency?
An import quota in an open economy might not have the desired effect of increasing GDP, as it increases the demand for the domestic currency in the open market, raising the exchange rate. How does an import quota cause the demand for currency to shift positively?
A:
The goal of import quotas is to get people to buy domestic goods, which can only be bought for local currency. Therefore import quotas increase demand for local currency. But...
Setting import quotas is a very undesirable economic policy. It is basically state intrusion into a market economy. Import quotas are usually implemented as a desperate measure when there is a continuously negative trade balance, the local currency is rapidly losing value, and every other measure (increasing taxes, customs duties, subsidizing exporters, etc.) has failed. Distribution of quotas usually involves plenty of corruption. On top of that, the World Trade Organization does not normally allow import quotas.
The more common use of import quotas is in agriculture, to support local farmers or production of a certain crop. But such limited use of import quotas should have very limited effect on the currency, unless it is a 'banana republic' where agriculture is the main economic activity.
Q:
Android : getting an item from a list (using onItemClick) and pass that Item data to another layout?
I've created a list (containing name, mail, number, date) from a server using a ListView in an Activity extending ListActivity. I've also changed the layout when a list item is clicked via onItemClick. Now I want to pass the values in these 4 columns to 4 TextViews in another layout. How can I do this? Thanks in advance.
A:
You would want to get the underlying data object by passing the position parameter from onItemClick() to the getItem() method of the adapter that you are using for the list (getListAdapter()). Then you'd just call setText() on each of your TextViews to set the data there.
Q:
Can you determine whether a graph is the 1-skeleton of a polytope?
How do I test whether a given undirected graph is the 1-skeleton of a polytope?
How can I tell the dimension of a given 1-skeleton?
A:
A few comments:
In general, you can't tell the dimension of a polytope from its graph. For any $n \geq 6$, the complete graph $K_n$ is the edge graph of both a $4$-dimensional and a $5$-dimensional polytope. (Thanks to dan petersen for correcting my typo.) The term for such polytopes is "neighborly".
On the other hand, you can say that the dimension is bounded above by the lowest vertex degree occurring anywhere in the graph.
A beautiful paper of Gil Kalai shows that, given a $d$-regular graph, there is at most one way to realize it as the graph of a $d$-dimensional polytope, and gives an explicit algorithm for reconstructing that polytope. You could try running his algorithm on your graph. (Or a more efficient version recently found by Friedman.) This algorithm will output some face lattice; that is to say, it will tell you which collections of vertices should be $2$-faces, which should be $3$-faces and so forth.
Unfortunately, going from the face lattice to the polytope is very hard. According to the MathSciNet review, Richter-Gebert has shown that it is NP-hard to, given a lattice of subsets of a finite set, decide whether it is the face lattice of a polytope. Note that this is a lower bound for the difficulty of your problem.
Let me be more explicit about the last statement. Richter-Gebert shows that, given a collection $L$ of subsets of $[n]$, it is NP-hard to determine whether there is a polytope with vertices labeled by $[n]$ whose edges, $2$-faces and $3$-faces are the given sets. (Here $[n] = \{ 1,2, \ldots, n \}$.)
Suppose we had an algorithm to decide whether a graph could be the edge graph of a polytope. Take our collection $L$ and look at the two-element sets within it. These form a graph with vertex set $[n]$. Run the algorithm on it. If the output is NO, then the answer to Richter-Gebert's problem is also no. If the answer is YES, then we have the problem that our algorithm might have found a polytope whose $2$-faces and $3$-faces differ from those prescribed by $L$. If our graph is $4$-regular, this problem doesn't come up by Kalai's result. But, not having read Richter-Gebert myself, I don't know whether the problem is still NP-hard when we restrict to $4$-regular graphs.
However, even if Richter-Gebert's result doesn't apply directly, I find it difficult to imagine that there could be an efficient algorithm to solve the graph realization problem, since there isn't one to solve the face lattice problem.
A:
A few more remarks, On the bright side: To determine if a given graph is the graph of a d-polytope is decidable. Tarski's algorithm for real closed fields can be used.
In dimension 3 as Sam Nead mentioned graphs of 3-polytopes are precisely 3 connected planar graphs. The algorithm by Hopcroft and Tarjan and various subsequent algorithms give a linear-time algorithm for planarity.
Regarding the second question, it is possible that the same graph can be realized as the graph of d-polytopes of various dimensions. David already mentioned K_n which is the graph of a d-polytope for every d between 4 and n-1. Another example is the graph of a d-cube which was proved by Joswig and Ziegler to be a graph of e-polytopes for e between 4 and d.
Another fact is that there are not that many graphs of polytopes: there are only exponentially many different graphs of simple d-polytopes with n vertices. See this paper of Benedetti and Ziegler. It is not known whether this result extends to graphs of general d-polytopes, or to all subgraphs of graphs of simple d-polytopes, or to dual graphs of all triangulations of (d-1)-spheres. The latter question is discussed by Gromov in the paper "Spaces and Questions" (p. 33).
A:
In dimension three there is the Steinitz theorem.
Q:
using javascript to grab javascript/cdata in html
What is the best way to grab the memberId from this data and show it in an alert() dialog?
I would like to make this a bookmarklet, something like this:
javascript:alert("member ID is\n"+document.getElementsByName("memberid")[0].value);
(This doesn't work; it's just an example.)
<script type='text/javascript'>
//<![CDATA[
var _SKYAUTH = {
loginUrl:'',
memberNick:'',
memberId:'233669',
profileUrl:'',
photoUrl:''
};
//]]>
</script>
A:
i want to make a bookmarklet

You should be able to access the variable using
alert(_SKYAUTH.memberId);
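As a sketch (assuming the page really does define the global _SKYAUTH shown in the question — bookmarklets execute in the page's own context, so that global is directly reachable):

```javascript
// Stand-in for the page's inline script from the question
var _SKYAUTH = {
  loginUrl: '',
  memberNick: '',
  memberId: '233669',
  profileUrl: '',
  photoUrl: ''
};

// Bookmarklet form: everything after "javascript:" runs in the page,
// so the page's global _SKYAUTH is visible to it
var bookmarklet = 'javascript:alert("member ID is\\n" + _SKYAUTH.memberId);';

// What the alert would display:
console.log('member ID is\n' + _SKYAUTH.memberId);
```

If the site ever renames _SKYAUTH, the bookmarklet will throw a ReferenceError, so wrapping the access in a typeof check is a reasonable precaution.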
Q:
how to count how many lines of code are in a file using Node.js
I tried reading the file line by line, but I am not sure how to print the count of lines in the file using Node.js.
data.js
console.log("123")
console.log("123")
console.log("123")
console.log("123")
file.js
var lineReader = require('readline').createInterface({
input: require('fs').createReadStream('./data.js')
});
lineReader.on('line', function (line) {
console.log('Line from file:', line);
});
I got this ouput
Line from file: console.log("123") Line from file:
Line from file: console.log("123") Line from file:
Line from file: Line from file:
console.log("123") Line from file: Line from
file: Line from file: Line from
file: console.log("123")
but I want the number of lines in the file, using Node.js.
A:
var i;
var count = 0;
require('fs').createReadStream(process.argv[2])
.on('data', function(chunk) {
for (i=0; i < chunk.length; ++i)
if (chunk[i] == 10) count++;
})
.on('end', function() {
console.log(count);
});
A:
let count = 0;
var lineReader = require('readline').createInterface({
input: require('fs').createReadStream('./data.js')
});
lineReader.on('line', () => {
count++;
});
lineReader.on('close', () => {
console.log(count);
});
By counting each 'line' event emitted for the file, you get the number of lines this way. Also, you can check out this link for more details.
Q:
ServiceNow integration with .NET
I have been trying to get an insertResponse from the ServiceNow web service.
I know that the elementFormDefault property under Web Services needs to be set to false.
Since I do not have an admin login, I cannot do so.
Is there an alternative?
I need a response from the web service, even just a boolean indicating that a new record was created.
Please help.
Thanks in advance
A:
You're in luck! You can specify a URL parameter in your WSDL URL to drive elementFormDefault.
Add this to your WSDL URL:
elementFormDefault=qualified
...here's what a full URL would look like:
https://<instance name>.service-now.com/<table name>.do?WSDL&elementFormDefault=qualified
You can validate this by simply loading up the WSDL in your browser and looking for this XML snippet at the top:
<xsd:schema elementFormDefault="qualified"
Q:
Printing grouped by values
I have written below code to print the Search objects in the groups map. But I am not getting the correct output.
Mycode:
Map<Integer, List<Search>> groups = group.stream().collect( Collectors.groupingBy( w -> w.getId()) );
System.out.println( groups );
Output I get:
{
1=[Models.Search@30269b0d],
2=[Models.Search@423e11a8],
3=[Models.Search@25e2f879]
}
I want my output to print the grouped Search objects. Please help.
Edited:
sample output I want
{
1=[Michael/14/UK/90, Tim/15/UK/91, George/14/UK/98],
2=[Jan/13/POLAND/92, Anna/15/POLAND/95],
3=[Helga/14/GERMANY/93, Leon/14/GERMANY/97]
}
A:
In your Search class, override toString, for example:
@Override
public String toString() {
String res=value1+"/"+value2+"/"+...
return res;
}
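A runnable sketch of the whole idea (the Search fields name/age/country/score below are assumed from the question's desired output, not taken from the poster's actual class):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class Main {
    // Hypothetical Search class; field names are guessed from the sample output
    static class Search {
        final int id;
        final String name;
        final int age;
        final String country;
        final int score;

        Search(int id, String name, int age, String country, int score) {
            this.id = id;
            this.name = name;
            this.age = age;
            this.country = country;
            this.score = score;
        }

        int getId() { return id; }

        @Override
        public String toString() {
            return name + "/" + age + "/" + country + "/" + score;
        }
    }

    public static void main(String[] args) {
        List<Search> group = List.of(
                new Search(1, "Michael", 14, "UK", 90),
                new Search(2, "Jan", 13, "POLAND", 92),
                new Search(3, "Helga", 14, "GERMANY", 93));

        // With toString overridden, printing the grouped map shows readable values
        Map<Integer, List<Search>> groups =
                group.stream().collect(Collectors.groupingBy(Search::getId));
        System.out.println(groups);
    }
}
```

System.out.println(map) calls toString on each value, which is why overriding it is all that's needed.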
Q:
Uniform HTML templating language
It seems like every web framework has its own pet template language. Ruby has eRuby, Python's django uses the Django template language, Haskell has Heist and Hamlet, Java's got JSP, and then there's PHP...
My question is, has anyone tried creating One Templating Language to Rule Them All? Are there any such templating languages that at least have some widespread support amongst the varying web frameworks?
A:
Mustache maybe.
A:
XSLT might be a candidate as a "universal" template language.
It might also be the greatest evil that this land has ever seen, but that's up for debate.
Q:
Did I interpret the question correctly? In correct English?
On another question I asked, I realised the first sentence should read like the one below. Now I need to ask whether it fits with my second sentence.
Any play, literally or figuratively, is confined to a certain place
where the story unfolds. In this essay, I will discuss the nature of
Elsa’s character during her visit to Miss Helen within the confinement
of the time and space of the play.
This is the question I must answer: Based on the discussion of the idea of confinement in Chapter Four of Introduction to English Literary Studies (IELS), write an essay of at least five paragraphs in which you discuss how the confinement of time and space enables the audience or reader to see more deeply into the nature of Elsa’s character during her visit to Miss Helen. You must make references to relevant aspects of the play to support your answer.
A:
While your interpretation seems correct, I would reword the second sentence to demonstrate the relation between the confinement of time and space, and the audience's perception of Miss Helen.
...will discuss the influence of the time and space confinements on the reader's potential interpretation of Elsa's character.
Hence, I believe, it is more clear you wish to show how this confinement affects character perception, rather than describe character perception within confinement.
Hope this helps :)
Q:
Rewriting old program using deprecated $*
We have some really old Perl code, last updated in 1997. I'm trying to upgrade to a newer Perl version, where $* is deprecated.
I've been trying to learn how to rewrite this, but the only help you get from the perlvar documentation is "You should use the /s and /m regexp modifiers instead."
my ($file, $regexp, $flags) = @_;
my (@found_lines, @tmp_list, $comp_buf);
local ($*);
if ($flags =~ tr/c//d)
{
$* = 1;
(substr ($regexp, 0, 1) ne "^") && ($regexp = "^.*$regexp");
($regexp !~ /([^\\]|^)(\\\\)*\$$/) && ($regexp .= ".*\$");
&read_comp ($file, \$comp_buf);
@found_lines = grep ($_ .= "\n", ($comp_buf =~ /$regexp/g));
}
else
{
@tmp_list = &read_list ($file, 0);
@found_lines = grep (/$regexp/, @tmp_list);
}
if ($flags eq "q")
{
$#found_lines >= 0;
}
elsif ($flags eq "a")
{
$#found_lines+1;
}
else
{
@found_lines;
}
It's really hard for me to know how to replace $* here. From what I can understand from the comments, $* is used to enable multi-line matching for the following regexp search, so I'm guessing I have to add those flags to the regexp expressions somehow.
How do I rewrite this code to replace the existing $* instances?
A:
Unfortunately $* is a global variable, so setting it has an effect on all called functions (e.g. read_comp) if they use regexes.
Also, that code is written in a slightly bizarre way:
I assume the intention was to enable "multiline" matching for the $comp_buf =~ /$regexp/g part, but $* is set early, so it also affects $regexp !~ /([^\\]|^)(\\\\)*\$$/ and the read_comp call.
The checks for whether $regexp already starts/ends with ^/$ respectively are broken. For example, (?:^foo$) is an anchored regex, but the code would not detect that.
grep ($_ .= "\n", ...) is a baffling abuse of grep to emulate map. What the code is trying to do is to get the list of lines matched by the regex. However, the way the regex is built it does not match the terminating newline character "\n" on each line, so the code manually adds "\n" to every returned string.
The sane way of doing that would be:
@found_lines = map $_ . "\n", ...; # or map "$_\n", ...
Instead of map we could use an imperative loop, taking advantage of the fact that for aliases the loop variable to the current list element:
@temp = ...;
for (@temp) {
$_ .= "\n";
}
@found_lines = @temp;
Instead of a for loop we could use grep for its side effect of iterating over a list:
@temp = ...;
grep $_ .= "\n", @temp;
@found_lines = @temp;
grep also aliases $_ to the current element, so the "filter expression" can modify the list we're iterating over.
Finally, because .= returns the resulting string (and strings containing "\n" cannot be false), we can take advantage of the fact that our "filter expression" always returns a true value and effectively get a copy of the input list as the return value from grep:
@found_lines = grep $_ .= "\n", ... # blergh
As for the effect of $*: It is a boolean flag (initially false). If set to true, all regexes behave as if /m is in effect, i.e. ^ and $ match at embedded newlines as well as the beginning/end of the string.
Assuming my interpretation of the code is correct, you should be able to change it as follows:
local ($*); can be removed.
$* = 1; also needs to go.
$comp_buf =~ /$regexp/g should be changed to $comp_buf =~ /$regexp/mg. This is the only place I see where multiline mode makes sense.
I'd really like to rewrite the last line. Either
@found_lines = map "$_\n", ($comp_buf =~ /$regexp/g);
(functional style), or, if you prefer a more imperative style:
@found_lines = ($comp_buf =~ /$regexp/g);
$_ .= "\n" for @found_lines;
Q:
Dynamically adding items to WPF listbox
I'm trying to build an application where the user points to a folder of PDF files
of invoices. The program then parses the PDF files to find out which ones contain an email address and which ones don't, and this is where I'm stuck:
I then want to add the file names to either the Listbox for print or the Listbox for email.
I got all the other bits working, choosing the folder and parsing the PDF and adding the folder path to a textbox object.
I then run a function:
private void listFiles(string selectedPath)
{
string[] fileEntries = Directory.GetFiles(selectedPath);
foreach (string files in fileEntries)
{
try
{
ITextExtractionStrategy its = new iTextSharp.text.pdf.parser.LocationTextExtractionStrategy();
using (PdfReader reader = new PdfReader(files))
{
string thePage = PdfTextExtractor.GetTextFromPage(reader, 1, its);
string[] theLines = thePage.Split('\n');
if (theLines[1].Contains("@"))
{
// System.Windows.MessageBox.Show("denne fil kan sendes som email til " + theLines[1], "Email!");
}
else
{
System.Windows.MessageBox.Show("denne fil skal Printes da " + theLines[1] + " ikke er en email", "PRINT!");
}
}
}
catch (Exception exc)
{
System.Windows.MessageBox.Show("FEJL!", exc.Message);
}
}
}
And it is in this function I want to be able to add the files to either Listbox.
My XAML looks like this:
<Grid.Resources>
<local:ListofPrint x:Key="listofprint"/>
</Grid.Resources>
<ListBox x:Name="lbxPrint" ItemsSource="{StaticResource listofprint}" HorizontalAlignment="Left" Height="140" Margin="24.231,111.757,0,0" VerticalAlignment="Top" Width="230"/>
But I get the error: The name "ListofPrint" does not exist in the namespace "clr-namespace:test_app".
the ListofPrint is here:
public class ListofPrint : ObservableCollection<PDFtoPrint>
{
public ListofPrint(string xfile)
{
Add(new PDFtoPrint(xfile));
}
}
I've been trying to get the hang of the documentation on MSDN and have read 10 different similar questions on this site, but I guess my problem is that I don't know exactly what my problem is. At first glance it's a data binding problem, but I basically copied the sample from the documentation to play with, and that is what is giving me the trouble.
Hopefully, someone here can explain to me the basics of data binding and how it corresponds to my ObservableCollection.
A:
You need to create an instance of your collection class and bind the ListBox to it.
The simplest thing is setting the window's DataContext to this. I wrote an example:
Window:
public partial class MyWindow : Window
{
// must be a property! This is your instance...
public YourCollection MyObjects {get; } = new YourCollection();
public MyWindow()
{
// set datacontext to the window's instance.
this.DataContext = this;
InitializeComponent();
}
public void Button_Click(object sender, RoutedEventArgs e)
{
// add an object to your collection (instead of directly to the listbox)
MyObjects.Add("Hi There");
}
}
Your notifyObject collection:
public class YourCollection : ObservableCollection<MyObject>
{
// some wrapper functions for example:
public void Add(string title)
{
this.Add(new MyObject { Title = title });
}
}
Item class:
// by implementing the INotifyPropertyChanged, changes to properties
// will update the listbox on-the-fly
public class MyObject : INotifyPropertyChanged
{
private string _title;
// a property.
public string Title
{
get { return _title;}
set
{
if(_title!= value)
{
_title = value;
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs( nameof(Title)));
}
}
}
public event PropertyChangedEventHandler PropertyChanged;
}
Xaml:
<ListBox ItemsSource="{Binding MyObjects}">
<ListBox.ItemTemplate>
<DataTemplate>
<TextBlock Text="{Binding Title}"/>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
Q:
Countable union of relatively compact sets
Let $X$ be a topological space and $\mathcal K(X)$ be $\sigma$-algebra, generated by compacts of $X$. Prove that for any set $B \in \mathcal K(X)$ either $B$ or its complement can be represented as a countable union of relatively compact sets.
My attempt: Let $B\in \mathcal K(X)$ be a proper subset of $X$. Since $B \in \mathcal K(X)$, either it is a countable union of compact sets or its complement is such. Since all compact sets are relatively compact, we're done.
So the whole question is if $X$ can be represented as a countable union of relatively compact sets. And I'm stuck on this. Could you help me?
A:
By definition, $\mathcal{K}(X)$ is the smallest $\sigma$-algebra which contains the compacts.
Let $\mathcal{A}$ denote the collection of all sets $A \subset X$ such that either $A$ or its complement $A^C$ is a countable union of relatively compact sets. Obviously $\mathcal{A}$ contains the compacts so, if $\mathcal{A}$ is also a $\sigma$-algebra, then $\mathcal{K}(X) \subset \mathcal{A}$ must hold.
It is obvious that $\mathcal{A}$ is closed under complementation.
Suppose $A_1,A_2,A_3\ldots \in \mathcal{A}$. If one of their complements, say $A_1^C$, can be expressed as a countable union of relatively compact sets, then so too can $\left( \bigcup_{i=1}^\infty A_i \right)^C = \bigcap_{i=1}^\infty A_i^C$ (why?). Otherwise, one must have that $A_i$ is a countable union of relatively compact sets for each $i$. In this case $\bigcup_{i=1}^\infty A_i$ can also be expressed in this form (why?).
Q:
Tower damage + Champion Defense
I am trying to work out how turret damage is affected by armour, skills and masteries.
Here is the information for turret damage as far as I am aware:
Turret Range Base HP AD Gains 6 AD every: AD Caps at:
-----------------------------------------------------------------------------
Outer Tower 1000* Various 152 1 min at each :40 second mark. 200
Middle Tower 1000* Various 197 1 min starting at 7:00 mark. 263
Inner Tower 1000* Various 210 1 min starting at 16:00 mark. 330
Nexus Tower 1000* Various 115 1 min at each :40 second mark. 343
Nexus Obelisk 1000* 9999* 999 N/A N/A
Does armour reduce it, the same as it would for auto attacks?
Do items like thornmail and skills like Rammus' curl reflect damage?
Do skills like Maokai's ultimate (-15% damage in the AoE) or Alistar's ultimate (-75% damage on self) affect the turret damage?
Does the last defensive mastery negate 4% of the damage?
A:
Let's break this down piece by piece.
Does armour reduce it, the same as it would for auto attacks?
Armor does in fact reduce the damage you take from normal towers. The Nexus Obelisk (the turret at the base/fountain), however, deals true damage.
Do items like thornmail and skills like Rammus' curl reflect damage?
Simple answer: No. Most abilities do not deal damage to towers either. Exceptions include Heimerdinger's turrets and Alistar's passive.
Do skills like Maokai's ultimate (-15% damage in the AoE) or Alistar's ultimate (-75% damage on self) affect the turret damage?
Maokai's ultimate does not reduce damage taken from turrets, but Alistar's ultimate does.
Does the last defensive mastery negate 4% of the damage?
I believe so, for the following reason: it says it reduces "all damage" and doesn't exclude towers specifically (the way Maokai's ultimate description does).
Hope that helps.
Q:
Accessing and modifying local static variables
So I was playing about trying to understand the use of static variables, clearly my understanding is not quite there yet as I don't understand how the following is operating:
Main calls my function that spits out the contents of x and y from other.c:
//main.c
#include <stdio.h>
#include <stdint.h>
#include "other.h"
int main (void)
{
printf("Main - X: %d, Y: %d\r\n\n",x,y);
func();
printf("Main - X: %d, Y: %d\r\n\n",x,y);
func();
printf("Main - X: %d, Y: %d\r\n\n",x,y);
printf("Main - X: %d, Y: %d\r\n\n",++x,++y);
return 0;
}
other.h
//other.h
void func(void);
uint8_t x;
static uint8_t y;
other.c
//other.c
#include <stdio.h>
#include <stdint.h>
#include "other.h"
void func(void)
{
x += 1;
y += 2;
printf("Func - X: %d Y: %d\r\n",x,y);
}
My output:
Main - X: 0, Y: 0
Func - X: 1 Y: 2
Main - X: 1, Y: 0
Func - X: 2 Y: 4
Main - X: 2, Y: 0
Main - X: 3, Y: 1
Two things I can't explain: why is it that accessing y from main does not cause a warning/error?
How/Where am I storing the values set to y when it is equal to 0 and 1 in main?
What should I do differently if I want to cause issues by accidentally accessing static variables outside their scope?
A:
According to the C Standard (6.9 External definitions)
5 An external definition is an external declaration that is also a
definition of a function (other than an inline definition) or an
object. If an identifier declared with external linkage is used in
an expression (other than as part of the operand of a sizeof operator
whose result is an integer constant), somewhere in the entire program
there shall be exactly one external definition for the identifier;
otherwise, there shall be no more than one.161)
However, the compiler cannot check whether an identifier with external linkage was defined more than once; it is the linker that performs such a check, so you should look at the linker's options. Sometimes linkers by default simply eliminate superfluous definitions; sometimes the behaviour becomes undefined.
In general there are three possibilities.
The first one is that the linker issues an error.
The second one is that the linker keeps only one definition of the external object.
The third one is that the linker makes all duplicates of the object with external linkage as objects with internal linkage.
In your case it seems the linker keeps only one definition of the object with name x.
As for variable y: because it is declared with the keyword static, it has internal linkage, so each of the two compilation units (the one with main and the one with func) has its own object y. Inside the first compilation unit, i.e. in main, you change one object y, while function func in the other compilation unit changes its own object y, even though there is an illusion that y is the same object.
Q:
Would an existing database go corrupt upon changing collation
I have a MySQL database, full of data, with collation = latin1 - default collation.
If I change that collation to utf8 - default collation, a superset of the collation above,
would there be any corruption in the existing data?
A:
Your data will not be corrupted, but you must take care to update the collation of all tables and columns, and not just change the default collation of the database, as that would only apply to new tables. If you do not alter all existing tables, you might encounter some odd behaviour when running queries where data are compared.
I would recommend that you run the following queries as taken from David Whittaker's answer
Here's how to change all databases/tables/columns. Run these queries and they will output all of the subsequent queries necessary to convert your entire schema to utf8. Hope this helps!
-- Change DATABASE Default Collation
SELECT DISTINCT concat('ALTER DATABASE `', TABLE_SCHEMA, '` CHARACTER SET utf8 COLLATE utf8_unicode_ci;')
from information_schema.tables
where TABLE_SCHEMA like 'database_name';
-- Change TABLE Collation / Char Set
SELECT concat('ALTER TABLE `', TABLE_SCHEMA, '`.`', table_name, '` CHARACTER SET utf8 COLLATE utf8_unicode_ci;')
from information_schema.tables
where TABLE_SCHEMA like 'database_name';
-- Change COLUMN Collation / Char Set
SELECT concat('ALTER TABLE `', t1.TABLE_SCHEMA, '`.`', t1.table_name, '` MODIFY `', t1.column_name, '` ', t1.data_type , '(' , t1.CHARACTER_MAXIMUM_LENGTH , ')' , ' CHARACTER SET utf8 COLLATE utf8_unicode_ci;')
from information_schema.columns t1
where t1.TABLE_SCHEMA like 'database_name' and t1.COLLATION_NAME = 'old_charset_name';
Q:
Loading a .PSD file with D3DX11CreateShaderResourceViewFromFile
I have a strange issue with DX11 and PSD files. I know that it says that DX11 does not support PSD files, however it DOES load RGB PSDs on my development machine. For some reason it ignores alpha channels if present. When I run this same program on another machine, it fails as documented and no texture is loaded.
Has anybody else seen this behavior? Did Microsoft add PSD support in a later update of DirectX that isn't documented, etc?
Both machines are running the June 2010 SDK, the only difference I can see is that I am using Win7 Professional SP1 (which works), and the other machine is Win7 Ultimate no service pack (doesn't work).
Does anybody know of a good way to get PSDs into DX11? Our pipeline is PSD native, so we would prefer to load them straight into memory. At the moment we convert to BMP via an external program, but it is really slow.
Thanks
A:
As per Ross Ridge's comment, it is the Windows Imaging Component (WIC) PSD codec that allows you to read PSD files into DX11 textures. Be aware, however, that it won't read in the alpha channel, just the RGB of the PSD. This is fine for most purposes.
Q:
gtest, undefined reference to 'testing::Test::~Test()', testing::Test::Test()
I installed gtest using apt-get install libgtest-dev
and I am trying to check whether it is working or not,
so I made some simple test code in Eclipse.
But there are errors:
undefined reference to 'testing::Test::~Test()'
undefined reference to 'testing::Test::Test()'
Conversely, if I change the inheritance of the ATest class to protected, that error disappears but
another error occurs:
testing::Test is inaccessible base of 'ATest_AAA_Test'
What is wrong?
#include <iostream>
#include <gtest/gtest.h>
class A{
public:
int a;
A(int a){this->a = a;}
A(){}
~A(){}
int getA(){return a;}
void setA(int a){this->a = a;}
};
class ATest : public ::testing::Test{
public:
virtual void SetUp(){
a1 = A(1);
a2 = A(2);
a3 = A(3);
}
virtual void TearDown(){}
A a1;
A a2;
A a3;
};
TEST_F(ATest, AAA){
EXPECT_EQ(1, a1.getA());
EXPECT_EQ(2, a2.getA());
EXPECT_EQ(3, a3.getA());
}
int main(int argc, char **argv){
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
A:
I know this sounds obvious but my psychic debugging skills tell me you forgot to add -lgtest after your object file name when you linked your final binary.
A:
Installing libgtest-dev does not install any of the gtest binaries.
It simply installs the googletest source package on your system - headers
under /usr/include/gtest and source files under /usr/src/gtest,
where you could proceed to build it with cmake or GNU autotools
if you want.
There is no binary package for googletest in ubuntu/debian software channels
(or elsewhere that I know of). The normal practice is to
download the source archive,
extract it and use binaries built by yourself. The README in the
source package gives guidance on building and using the library.
There is normally no purpose in performing a system install of
the source package, as you have done.
The linkage error you have encountered:
undefined reference to 'testing::Test::~Test()'
has nothing to do with your code: it occurs because you are not
linking libgtest, and you cannot link it because it is not installed
by libgtest-dev and you have not built it yourself.
This linkage error disappears when you change the code in a way
that introduces a compilation error, because if compilation fails
then linkage does not happen.
Q:
youtube api v3 CORS support in sencha touch 2
Using Sencha Touch, How do we query youtube v3 search api?
Below is the sample url which works fine when issued from a browser directly (NOTE: Key is required) :
"https://www.googleapis.com/youtube/v3/search?part=snippet&order=date&type=video&channelId=UC3djj8jS0370cu_ghKs_Ong&key=MY_KEY"
However the same url fails when loaded using sencha touch Store ajax proxy.
It seems that an OPTIONS call is made against this URL and the GET was aborted.
What is needed in a Sencha Touch store to work with the YouTube v3 Google APIs? I did not find JSONP support for the YouTube v3 API.
A:
I have used this API like this:
proxy: {
type: 'ajax',
url: 'https://www.googleapis.com/youtube/v3/search',
useDefaultXhrHeader: false,
extraParams: {
part: 'snippet',
q: 'ambarsariya',
regionCode: 'IN',
maxResults: 30,
key: 'your_key'
},
reader: {
type: 'json',
rootProperty: 'items'
}
}
One more thing you have to do is set an idProperty in the model of this store. Inside the config of the model I have used:
idProperty: 'videoId',// it's better if this name is not the same as any field's name
fields: [{
name: 'snippet'
}, {
name: 'thumbnail',
mapping: 'snippet.thumbnails.default.url'
}, {
name: 'title',
mapping: 'snippet.title'
}]
Hope this solves your problem.
Q:
How to prevent a UITableViewCell's textLabel from truncating its text on iPad
I am using a UITableView on an iPad and for some reason the UITableViewCell.textLabel's text is getting truncated even though there's plenty of space to display the text. I tried all of the following, none of which fixed it:
Set a flexible width autosizing mask on the cell
Calling [cell sizeToFit] after setting the text
Calling [cell.textLabel setNumberOfLines:0] and [cell.textLabel setLineBreakMode:NSLineBreakByWordWrapping]
I haven't yet tried subclassing UITableViewCell and setting the frame explicitly in the layoutSubviews method. Trying to avoid that as I feel like there should be a better solution. I also don't want to resize the text--there is plenty of space to fit the text at the full font size.
A:
You can set the textLabel's number of lines to 0:
cell.textLabel.numberOfLines = 0;
cell.textLabel.lineBreakMode = UILineBreakModeWordWrap; // or NSLineBreakByWordWrapping on iOS 6 or later
Hope it helps you.
A:
Set the "setNumberOfLines" property of the label to wrap the text to the required number of lines.
If you don't want the "..." at the end of the text when it is too long, then use
cell.textLabel.lineBreakMode = UILineBreakModeWordWrap; // or NSLineBreakByWordWrapping on iOS 6 or later
Try to set the height and width of the text label and the table view cell dynamically; refer to this link.
A:
The answer had nothing to do with UITableViewCell in the end. There was a section of the code truncating the text to display in the table view cell! Thanks everyone for the help up to this point.
Q:
Update CoreData attribute of type Int32
I'd like to get the value of an core data attribute and after getting this object/value I'd like to update it. This is my code:
var numberOfChanges:Int32?
numberOfChanges = theme.valueForKey("numberThemeChanged") as Int32
var newValueThemeChanges:Int32?
newValueThemeChanges = numberOfChanges+1
theme.setValue(newValueThemeChanges, forKey: "numberThemeChanged")
I'm getting the following errors:
Type 'Int32' does not conform to protocol 'AnyObject' in this line of code: numberOfChanges = theme.valueForKey("numberThemeChanged") as Int32.
'Int32' is not identical to 'AnyObject' in this line of code: theme.setValue(newValueThemeChanges, forKey: "numberThemeChanged")
Do you know how I can do it/what's wrong with my code?
Thanks for your answer!
A:
The Key-Value Coding methods
func valueForKey(key: String) -> AnyObject?
func setValue(value: AnyObject?, forKey key: String)
work with values of the type AnyObject, for an integer attribute these
are instances of NSNumber.
There is an automatic bridging between Int and NSNumber, but not
between Int32 and NSNumber (and this has nothing to do with the
fact that you define the property as "Integer 32" in the Core Data
model inspector).
You have several options:
Simply use a variable of type Int:
var numChanges = theme.valueForKey("numberThemeChanged") as Int
numChanges++
theme.setValue(numChanges, forKey: "numberThemeChanged")
Use Int32 and convert from and to NSNumber explicitly:
var numberOfChanges = (theme.valueForKey("numberThemeChanged") as NSNumber).intValue // Int32
numberOfChanges++
theme.setValue(NSNumber(int: numberOfChanges), forKey: "numberThemeChanged")
Use Xcode -> Editor -> Create NSManagedObject subclass ... and check the "Use scalar properties for primitive data types" options.
This will give you a managed object subclass with the property
@NSManaged var numberThemeChanged: Int32
and you can access the property without Key-Value Coding:
var numberOfChanges = theme.numberThemeChanged
numberOfChanges++
theme.numberThemeChanged = numberOfChanges
Here is a complete "create-or-update" example:
var theme : Entity!
let request = NSFetchRequest(entityName: "Entity")
var error : NSError?
if let result = context.executeFetchRequest(request, error: &error) as [Entity]? {
if result.count > 0 {
// (At least) one object found, set `theme` to the first one:
theme = result.first!
} else {
// No object found, create a new one:
theme = NSEntityDescription.insertNewObjectForEntityForName("Entity", inManagedObjectContext: context) as Entity
// Set an initial value:
theme.setValue(0, forKey: "numberThemeChanged")
}
} else {
println("Fetch failed: \(error?.localizedDescription)")
}
// Get value and update value:
var numChanges = theme.valueForKey("numberThemeChanged") as Int
numChanges++
theme.setValue(numChanges, forKey: "numberThemeChanged")
// Save context:
if !context.save(&error) {
println("Save failed: \(error?.localizedDescription)")
}
println(numChanges)
Q:
Authentication with NGINX
I am running a set of NGINX proxies using basic browser authentication with the htpasswd file for users.
I have built a small application with Laravel that authenticates the user and presents them a list of links to these nginx proxies.
I am looking to update the nginx authentication to something like JWT tokens; however, I am not sure that would be secure enough without an API behind it to validate the actual token itself.
The other option I was thinking was LDAP solution and having both Laravel and the NGINX proxies using the ldap authentication.
The apps that sit behind the nginx proxy do not have any authentication, and we have zero intention of adding any to them at this time.
A:
Nginx includes the auth_request module, which
implements client authorization based on the result of a subrequest. If the subrequest returns a 2xx response code, the access is allowed. If it returns 401 or 403, the access is denied with the corresponding error code. Any other response code returned by the subrequest is considered an error.
For the 401 error, the client also receives the “WWW-Authenticate” header from the subrequest response.
Two possibilities come to my mind:
You could extend your existing Laravel application so that it would be possible to "link to it" from Nginx, which would take the user to a "Login" page, and, if authenticated, sends a "200 OK" response to Nginx.
You could first check out existing solutions leveraging this technique, for example Nginx LDAP Auth.
Elaborating on the second option:
If the account data is indeed stored in LDAP, you could write a script, executed regularly via cron for example, which pulls the data out of LDAP and writes it into an htpasswd file to be read by Nginx.
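For the first option, the auth_request wiring could look roughly like the fragment below. This is a hypothetical sketch: the /auth/check endpoint, the port 8000, and the backend_app upstream name are assumptions standing in for your Laravel app and proxied applications, not part of the original setup.

```nginx
# Hypothetical sketch: every request to /protected/ triggers a subrequest
# to the Laravel app; a 2xx response allows the request, 401/403 denies it.
location = /auth {
    internal;
    proxy_pass              http://127.0.0.1:8000/auth/check;  # assumed Laravel endpoint
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI  $request_uri;
}

location /protected/ {
    auth_request /auth;
    proxy_pass   http://backend_app;   # the unauthenticated app behind the proxy
}
```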
Q:
what does it mean when we use " group by 1" in an SQL Query
I have come across a query where it is specified as:
select concat(21*floor(diff/21), '-', 21*floor(diff/21) + 20) as `range`, count(*) as
`number of users` from new_table group by 1 order by diff;
Here, what exactly does group by 1 mean?
A:
Assuming you have a Select:
SELECT name FROM employee GROUP BY 1;
No matter what, it will always group by the first column given in the select.
In this case, the column 'name' is grouped.
So if we alternate the above statement to:
SELECT department FROM employee GROUP BY 1;
We now group the department, without having to change the '1' in the group by.
EDIT: (as requested by Stewart)
If we have the following data in table 'employee':
-- name --
Walt
Walt
Brian
Barney
A simple select would deliver all rows above, whereas the 'group by 1' would result in one Walt-row:
output with group by:
-- name --
Walt
Brian
Barney
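The ordinal simply refers to a position in the SELECT list, which a quick sqlite3 session (SQLite supports the same GROUP BY 1 shorthand as MySQL) can illustrate. The employee table here is the hypothetical one from the example above:

```python
# Demonstrates that GROUP BY 1 groups by the FIRST column of the SELECT list.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, department TEXT)")
conn.executemany(
    "INSERT INTO employee VALUES (?, ?)",
    [("Walt", "IT"), ("Walt", "IT"), ("Brian", "HR"), ("Barney", "HR")],
)

# GROUP BY 1 here means "group by name", the first selected column,
# so the duplicate Walt rows collapse into one:
rows = conn.execute("SELECT name FROM employee GROUP BY 1 ORDER BY name").fetchall()
print([r[0] for r in rows])  # → ['Barney', 'Brian', 'Walt']
```

Changing the SELECT list to `SELECT department ...` would make the same `GROUP BY 1` group by department instead, exactly as described above.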
Q:
best way to write series of objects to a .ser file using ObjectOutputStream and read them back
I create a series of objects from the Student class and store them in a vector. I write each object to a .ser file once it is created. Then I read them back. My code is working perfectly. What I want to know is: do I do this in a correct, optimized way, or are there easier and more optimized ways to get this done? And also, how can I replace or delete a specific object in a .ser file without re-writing the whole file again?
Student Class
public class Student implements Comparable<Student>, Serializable{
private String firstName = "";
private int registrationNumber;
private int coursework1;
private int coursework2;
private int finalExam;
private double moduleMark;
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public int getRegistrationNumber() {
return registrationNumber;
}
public void setRegistrationNumber(int registrationNumber) {
this.registrationNumber = registrationNumber;
}
public int getCoursework1() {
return coursework1;
}
public void setCoursework1(int coursework1) {
this.coursework1 = coursework1;
}
public int getCoursework2() {
return coursework2;
}
public void setCoursework2(int coursework2) {
this.coursework2 = coursework2;
}
public int getFinalExam() {
return finalExam;
}
public void setFinalExam(int finalExam) {
this.finalExam = finalExam;
}
public double getModuleMark() {
return moduleMark;
}
public void setModuleMark(double moduleMark) {
this.moduleMark = moduleMark;
}
public int compareTo(Student s){
if (this.moduleMark > s.moduleMark)
return 1;
else if (this.moduleMark == s.moduleMark)
return 0;
else
return -1;
}
}
File writing part
public static void Write(Student mm){
try
{
FileOutputStream fileOut = new FileOutputStream("info.ser",true);
ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(fileOut));
out.writeObject(mm);
out.close();
fileOut.close();
System.out.println("Serialized data is saved in info.ser");
}catch(IOException i)
{
//i.printStackTrace();
}
}
Reading part
public static int Read() {
int count=0;
try{
vector = new Vector<Student>();
FileInputStream saveFile = new FileInputStream("info.ser");
ObjectInputStream save;
try{
for(;;){
save = new ObjectInputStream(saveFile);
student = (Student) save.readObject();
vector.add(student);
count++;
}
}catch(EOFException e){
//e.printStackTrace();
}
saveFile.close();
}catch(Exception exc){
exc.printStackTrace();
}
return count;
}
A:
Why didn't you provide a constructor for the Student class? Was that on purpose? However...
Use a serialVersionUID.
In Java it's recommended not to use the Vector class; use ArrayList instead.
The Write method serializes each Student individually, but as you stated you are writing a whole list of students into the .ser file. This will take a lot of time for a list of 10,000 students,
because you are opening two streams for each student. Instead, pass the whole list and serialize it after opening the stream only once.
Do not close the stream in the try block; close it in the finally block. Even better, if you are using Java 7 or higher, use a try-with-resources block. It is easy to use and you don't have to close the stream manually.
In Read method
for(;;){
save = new ObjectInputStream(saveFile);
you are creating a new stream for every object. That's not necessary; instead do this:
save = new ObjectInputStream(saveFile);
for(;;){
From Java 7 onwards there is the multi-catch statement. Give it a try (pun intended).
In Java we use the camelCase convention for naming methods, so Write and Read should be write and read.
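Putting points 3-5 together, a sketch of the whole-list approach could look like this. The simplified Student class and the temp-file name are illustrative assumptions, not the asker's originals; the point is one stream per file, try-with-resources, and a single writeObject/readObject pair instead of the per-object streams and EOF loop above:

```java
// Sketch: serialize the whole list with ONE ObjectOutputStream, then read
// it back with a single readObject call -- no per-object streams needed.
import java.io.*;
import java.util.*;

public class SerializeDemo {
    static class Student implements Serializable {
        private static final long serialVersionUID = 1L; // point 1: stable version id
        final String firstName;
        Student(String firstName) { this.firstName = firstName; }
    }

    @SuppressWarnings("unchecked")
    static List<Student> roundTrip(List<Student> students) throws Exception {
        File file = File.createTempFile("info", ".ser");
        file.deleteOnExit();
        // One stream for the whole list; try-with-resources closes it for us:
        try (ObjectOutputStream out = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)))) {
            out.writeObject(students);
        }
        // Reading back is a single readObject call -- no EOF loop:
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(file)))) {
            return (List<Student>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Student> students = new ArrayList<>(
                Arrays.asList(new Student("Ann"), new Student("Bob")));
        List<Student> loaded = roundTrip(students);
        System.out.println(loaded.size() + " students, last = " + loaded.get(1).firstName);
    }
}
```

Note that this still rewrites the whole file on every save; replacing or deleting a single object in place is not something Java serialization supports, which is one more reason to serialize the list as a unit.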
Q:
JPA query syntax issue or what ?! JPA SELECT NEW syntax issue?
I have a TextHistory entity object and then I have this JPA query.
SELECT NEW TextHistory(i.id, i.fileName, i.importDate) FROM TextHistory i
In the entity object I have provided the corresponding constructor.
I am trying to deploy my app under Payara 4.1 but I get this exception at deploy time:
Error occurred during deployment: Exception while deploying the app
[app-name] : Exception [EclipseLink-28019]
(Eclipse Persistence Services - 2.6.2.qualifier):
org.eclipse.persistence.exceptions.EntityManagerSetupException
Exception Description:
Deployment of PersistenceUnit [unit-name] failed.
Close all factories for this PersistenceUnit. Internal Exception:
Exception [EclipseLink-0] (Eclipse Persistence Services - 2.6.2.qualifier):
org.eclipse.persistence.exceptions.JPQLException
Exception Description: Internal problem encountered while compiling
[SELECT NEW TextHistory(i.id, i.fileName, i.importDate) FROM TextHistory i].
Internal Exception: java.lang.NullPointerException.
Please see server.log for more details.
I think my JPA query syntax is correct.
I've been struggling with this for several hours now.
What is the problem? Any ideas?
In the server.log I am seeing this exception.
[2017-08-15T21:32:24.546+0200] [Payara 4.1] [INFO] [] [org.eclipse.persistence.session./file:/D:/Work/TSSB_DEV_ENV_MASTER/domains/tssb_ms_gf4_domain_srm_tsbg/applications/tsbgam-application-2017-T3-SNAPSHOT/tsbgam-business_jar/_tsms_tsbg.connection] [tid: _ThreadID=154 _ThreadName=admin-listener(7)] [timeMillis: 1502825544546] [levelValue: 800] [[
/file:/D:/Work/TSSB_DEV_ENV_MASTER/domains/tssb_ms_gf4_domain_srm_tsbg/applications/tsbgam-application-2017-T3-SNAPSHOT/tsbgam-business_jar/_tsms_tsbg logout successful]]
[2017-08-15T21:32:24.546+0200] [Payara 4.1] [SEVERE] [] [org.eclipse.persistence.session./file:/D:/Work/TSSB_DEV_ENV_MASTER/domains/tssb_ms_gf4_domain_srm_tsbg/applications/tsbgam-application-2017-T3-SNAPSHOT/tsbgam-business_jar/_tsms_tsbg.ejb] [tid: _ThreadID=154 _ThreadName=admin-listener(7)] [timeMillis: 1502825544546] [levelValue: 1000] [[
Local Exception Stack:
Exception [EclipseLink-0] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.JPQLException
Exception Description: Internal problem encountered while compiling [SELECT NEW TextHistory(i.id, i.fileName, i.importDate) FROM TextHistory i].
Internal Exception: java.lang.NullPointerException
at org.eclipse.persistence.internal.jpa.jpql.HermesParser.buildUnexpectedException(HermesParser.java:207)
at org.eclipse.persistence.internal.jpa.jpql.HermesParser.populateQueryImp(HermesParser.java:296)
at org.eclipse.persistence.internal.jpa.jpql.HermesParser.buildQuery(HermesParser.java:163)
at org.eclipse.persistence.internal.jpa.EJBQueryImpl.buildEJBQLDatabaseQuery(EJBQueryImpl.java:142)
at org.eclipse.persistence.internal.jpa.JPAQuery.processJPQLQuery(JPAQuery.java:223)
at org.eclipse.persistence.internal.jpa.JPAQuery.prepare(JPAQuery.java:184)
at org.eclipse.persistence.queries.DatabaseQuery.prepareInternal(DatabaseQuery.java:624)
at org.eclipse.persistence.internal.sessions.AbstractSession.processJPAQuery(AbstractSession.java:4366)
at org.eclipse.persistence.internal.sessions.AbstractSession.processJPAQueries(AbstractSession.java:4326)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.initializeDescriptors(DatabaseSessionImpl.java:598)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.postConnectDatasource(DatabaseSessionImpl.java:818)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:762)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:265)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:731)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:205)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:305)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:337)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:303)
at org.glassfish.persistence.jpa.JPADeployer$2.visitPUD(JPADeployer.java:451)
at org.glassfish.persistence.jpa.JPADeployer$PersistenceUnitDescriptorIterator.iteratePUDs(JPADeployer.java:510)
at org.glassfish.persistence.jpa.JPADeployer.iterateInitializedPUsAtApplicationPrepare(JPADeployer.java:492)
at org.glassfish.persistence.jpa.JPADeployer.event(JPADeployer.java:395)
at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:131)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:487)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:487)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:539)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:535)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:534)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:565)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:556)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1464)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1300(CommandRunnerImpl.java:109)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1846)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1722)
at org.glassfish.admin.rest.utils.ResourceUtil.runCommand(ResourceUtil.java:253)
at org.glassfish.admin.rest.utils.ResourceUtil.runCommand(ResourceUtil.java:231)
at org.glassfish.admin.rest.utils.ResourceUtil.runCommand(ResourceUtil.java:275)
at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384)
at org.glassfish.admin.rest.adapter.RestAdapter$2.service(RestAdapter.java:316)
at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:179)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.eclipse.persistence.queries.ReportQuery.beginAddingConstructorArguments(ReportQuery.java:558)
at org.eclipse.persistence.internal.jpa.jpql.ReportItemBuilder.visit(ReportItemBuilder.java:263)
at org.eclipse.persistence.jpa.jpql.parser.ConstructorExpression.accept(ConstructorExpression.java:84)
at org.eclipse.persistence.internal.jpa.jpql.ReportItemBuilder.visitAbstractSelectClause(ReportItemBuilder.java:695)
at org.eclipse.persistence.internal.jpa.jpql.ReportItemBuilder.visit(ReportItemBuilder.java:545)
at org.eclipse.persistence.jpa.jpql.parser.SelectClause.accept(SelectClause.java:42)
at org.eclipse.persistence.internal.jpa.jpql.ReportQueryVisitor.visitAbstractSelectClause(ReportQueryVisitor.java:82)
at org.eclipse.persistence.internal.jpa.jpql.AbstractObjectLevelReadQueryVisitor.visit(AbstractObjectLevelReadQueryVisitor.java:173)
at org.eclipse.persistence.jpa.jpql.parser.SelectClause.accept(SelectClause.java:42)
at org.eclipse.persistence.internal.jpa.jpql.AbstractObjectLevelReadQueryVisitor.visitAbstractSelectStatement(AbstractObjectLevelReadQueryVisitor.java:327)
at org.eclipse.persistence.internal.jpa.jpql.ReportQueryVisitor.visitAbstractSelectStatement(ReportQueryVisitor.java:92)
at org.eclipse.persistence.internal.jpa.jpql.AbstractObjectLevelReadQueryVisitor.visit(AbstractObjectLevelReadQueryVisitor.java:183)
at org.eclipse.persistence.jpa.jpql.parser.SelectStatement.accept(SelectStatement.java:101)
at org.eclipse.persistence.internal.jpa.jpql.HermesParser$DatabaseQueryVisitor.visit(HermesParser.java:438)
at org.eclipse.persistence.jpa.jpql.parser.SelectStatement.accept(SelectStatement.java:101)
at org.eclipse.persistence.internal.jpa.jpql.HermesParser$DatabaseQueryVisitor.visit(HermesParser.java:418)
at org.eclipse.persistence.jpa.jpql.parser.JPQLExpression.accept(JPQLExpression.java:135)
at org.eclipse.persistence.internal.jpa.jpql.HermesParser.populateQueryImp(HermesParser.java:282)
... 85 more
]]
[2017-08-15T21:32:24.548+0200] [Payara 4.1] [SEVERE] [] [javax.enterprise.system.core] [tid: _ThreadID=154 _ThreadName=admin-listener(7)] [timeMillis: 1502825544548] [levelValue: 1000] [[
Exception while deploying the app [tsbgam-application-2017-T3-SNAPSHOT]]]
[2017-08-15T21:32:24.548+0200] [Payara 4.1] [SEVERE] [NCLS-CORE-00026] [javax.enterprise.system.core] [tid: _ThreadID=154 _ThreadName=admin-listener(7)] [timeMillis: 1502825544548] [levelValue: 1000] [[
Exception during lifecycle processing
org.glassfish.deployment.common.DeploymentException: Exception [EclipseLink-28019] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.EntityManagerSetupException
Exception Description: Deployment of PersistenceUnit [tsms_tsbg] failed. Close all factories for this PersistenceUnit.
Internal Exception: Exception [EclipseLink-0] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.JPQLException
Exception Description: Internal problem encountered while compiling [SELECT NEW TextHistory(i.id, i.fileName, i.importDate) FROM TextHistory i].
Internal Exception: java.lang.NullPointerException
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.createDeployFailedPersistenceException(EntityManagerSetupImpl.java:869)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:809)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:205)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:305)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:337)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:303)
at org.glassfish.persistence.jpa.JPADeployer$2.visitPUD(JPADeployer.java:451)
at org.glassfish.persistence.jpa.JPADeployer$PersistenceUnitDescriptorIterator.iteratePUDs(JPADeployer.java:510)
at org.glassfish.persistence.jpa.JPADeployer.iterateInitializedPUsAtApplicationPrepare(JPADeployer.java:492)
at org.glassfish.persistence.jpa.JPADeployer.event(JPADeployer.java:395)
at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:131)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:487)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:487)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:539)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:535)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:534)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:565)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:556)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1464)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1300(CommandRunnerImpl.java:109)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1846)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1722)
at org.glassfish.admin.rest.utils.ResourceUtil.runCommand(ResourceUtil.java:253)
at org.glassfish.admin.rest.utils.ResourceUtil.runCommand(ResourceUtil.java:231)
at org.glassfish.admin.rest.utils.ResourceUtil.runCommand(ResourceUtil.java:275)
at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384)
at org.glassfish.admin.rest.adapter.RestAdapter$2.service(RestAdapter.java:316)
at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:179)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
at java.lang.Thread.run(Thread.java:745)
]]
[2017-08-15T21:32:24.562+0200] [Payara 4.1] [SEVERE] [] [javax.enterprise.system.core] [tid: _ThreadID=154 _ThreadName=admin-listener(7)] [timeMillis: 1502825544562] [levelValue: 1000] [[
Exception while deploying the app [tsbgam-application-2017-T3-SNAPSHOT] : Exception [EclipseLink-28019] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.EntityManagerSetupException
Exception Description: Deployment of PersistenceUnit [tsms_tsbg] failed. Close all factories for this PersistenceUnit.
Internal Exception: Exception [EclipseLink-0] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.JPQLException
Exception Description: Internal problem encountered while compiling [SELECT NEW TextHistory(i.id, i.fileName, i.importDate) FROM TextHistory i].
Internal Exception: java.lang.NullPointerException]]
[2017-08-15T21:32:24.663+0200] [Payara 4.1] [INFO] [] [org.glassfish.admingui] [tid: _ThreadID=47 _ThreadName=admin-listener(5)] [timeMillis: 1502825544663] [levelValue: 800] [[
Exception Occurred :Error occurred during deployment: Exception while deploying the app [tsbgam-application-2017-T3-SNAPSHOT] : Exception [EclipseLink-28019] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.EntityManagerSetupException
Exception Description: Deployment of PersistenceUnit [tsms_tsbg] failed. Close all factories for this PersistenceUnit.
Internal Exception: Exception [EclipseLink-0] (Eclipse Persistence Services - 2.6.2.qualifier): org.eclipse.persistence.exceptions.JPQLException
Exception Description: Internal problem encountered while compiling [SELECT NEW TextHistory(i.id, i.fileName, i.importDate) FROM TextHistory i].
Internal Exception: java.lang.NullPointerException. Please see server.log for more details. ]]
A:
Any hints on why you actually want to use the SELECT NEW syntax?
This syntax is mostly used in (the rare) case that you want to select fields from one entity and construct a different object using the values of those fields.
In your case you are constructing the very same entity you are selecting from. This is usually done as select e from SomeEntity e. (Actually, in contrast to SQL, in JPQL the select clause is optional as long as you are selecting from a single entity. So from SomeEntity e just selects the whole table, and from SomeEntity e where e <whatever> selects all <whatever> records.)
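To make the shorthand above concrete, here are the forms side by side as JPQL (the field name and parameter in the last line are invented for illustration):

```sql
select e from SomeEntity e
from SomeEntity e
from SomeEntity e where e.status = :status
```

The first two are equivalent; the third selects all records matching the condition.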
In case the entity has some heavyweight attributes (like @Lobs) or @ManyToOne associations that you don't want to load initially, the proper way to do this is to make them lazily loaded. For LOBs this is done via @Basic(fetch = FetchType.LAZY) and for associations via @ManyToOne(fetch = FetchType.LAZY). Note that for @ManyToMany and @OneToMany lazy loading is the default anyway.
My impression is that you are trying to do JPA "the SQL way". A solid SQL knowledge is a must to properly use JPA, but you need to always do this "paradigm switch" from relational to object-oriented perspective in order to do things the way they are supposed to.
BTW, regarding
I don't even know what Eclipselink is
This pretty much says it all :-) JPA is a standardized Java API - it defines how things are supposed to work, no more, no less. It does not actually do the real work - this is left to the particular JPA implementation, a.k.a. persistence provider. There are a bunch of JPA implementations out there, the most prominent being Hibernate and Eclipselink. Every Java EE application server is required to include a JPA persistence provider, and it seems like your server comes with Eclipselink. Eclipselink came into existence when Oracle donated their proprietary JPA implementation named TopLink to the Eclipse Foundation.
Update: I did a bit more research and it seems like the reason for your error is quite mundane: the SELECT NEW syntax requires you to use the fully qualified name of the constructor, i.e. including the package name. This is because you can select new into any POJO class you want; it does not require the class to be a JPA entity. In contrast, the from clause uses simple names because only @Entity classes are allowed there (which JPA enumerates and parses at deployment time).
Why not using a fully qualified name leads to a NullPointerException in the Eclipselink code is another story; it seems like a bug in Eclipselink.
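Concretely, the corrected query from the log above would look something like this, where com.example.history stands in for whatever package your TextHistory entity is actually declared in:

```sql
SELECT NEW com.example.history.TextHistory(i.id, i.fileName, i.importDate)
FROM TextHistory i
```

Only the constructor expression needs the package; the FROM clause keeps the simple entity name.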
==> This leads to the question of which one is better, select new or lazy loading. As always, it depends on your use case. Annotating the fields for lazy loading will always be honored, no matter how the objects got into memory. For example, EntityManager.find() always returns a complete instance of the entity; you can't use a custom constructor there. But annotations on fields of course still apply. The same goes when accessing entities by association: if A contains a reference to B and you call A.getB(), you get an instance of B initialized according to its annotations.
Using select new is a one-off technique in case you deliberately want to divert from the default.
| {
"pile_set_name": "StackExchange"
} |
Q:
Output not redirected to file inside of a windows task
I should also state I'm running this inside a Windows container.
I'm trying to get the output of aws.cmd into a file, but when run in a task it's not sending any output; I can't figure out why.
If I create the following script:
'get-date | out-file c:\logs\date.txt; C:\users\containeradministrator\appdata\Roaming\Python\Python37\Scripts\aws.cmd --version | out-file -append c:\logs\date.txt; getdate | out-file -append c:\logs\date.txt' > getdate.ps1
Then create a task:
schtasks --% /create /tn "test-date" /sc minute /mo 1 /ru SYSTEM /tr "powershell -file c:/getdate.ps1"
The task runs the script and writes this to the file:
PS C:\> cat C:\logs\date.txt
Wednesday, May 23, 2018 2:38:00 PM
Wednesday, May 23, 2018 2:38:02 PM
Yet when running from the console directly it writes output to the file:
PS C:\> C:\users\containeradministrator\appdata\Roaming\Python\Python37\Scripts\aws.cmd --version > C:\logs\consoletest.txt
PS C:\> cat C:\logs\consoletest.txt
aws-cli/1.15.24 Python/3.7.0b4 Windows/10 botocore/1.10.24
What's going on here? I'm losing my mind. I sandwiched the aws command between two get-date commands, which both run, so the aws command must be executing without error because it runs BOTH get-date commands.
EDIT: OK, now I'm really confused. If I replace the aws command with nonsense it still runs both get-date commands and I see their output in the log file. For example, replacing the aws command with "blah.exe --fartfartfart" in the script and executing the task still writes both get-date outputs to the file with no errors. WHY?! The script should at least fail at that wrong command! I don't understand what is happening.
EDIT2 (redirecting all output):
try {
C:\Users\ContainerAdministrator\AppData\Roaming\Python\Python37\Scripts\aws.cmd --version *>> c:\logs\date.txt
}
catch {
$_.Exception | out-file c:\logs\date.txt
}
get-date | out-file -append c:\logs\date.txt
Redirecting all output I see this error:
C:\Users\ContainerAdministrator\AppData\Roaming\Python\Python37\Scripts\aws.cmd
: Traceback (most recent call last):
At C:\getdate.ps1:1 char:6
+ try {C:\Users\ContainerAdministrator\AppData\Roaming\Python\Python37\ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Traceback (most recent call last)
::String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
File "C:\Users\ContainerAdministrator\AppData\Roaming\Python\Python37\Scripts
\aws.cmd", line 50, in <module>
import awscli.clidriver
ModuleNotFoundError: No module named 'awscli'
A:
pip install awscli --upgrade --user
I installed with the --user switch; removing that installed it to Program Files, and it was accessible from the task as the SYSTEM user.
Q:
.npmignore not ignoring files
I have a private module stored on github that I'm including in one of my projects using npm. The module has a .npmignore file, but nothing is being ignored when I install or update the module.
Project's package.json
{
"name": "Your Cool Project",
"version": "0.0.0",
"dependencies": {
"myModule" : "git+ssh://git@github.com:user/myModule.git",
/* All your other dependencies */
}
...
}
Module's .npmignore file
.git*
gulpfile.js
index.html
tests.js
README.md
When I run npm update myModule these files are still being downloaded into my project. Am I missing something? Does .npmignore work for privately hosted modules? Thanks in advance.
A:
Since you're specifying the dependency myModule as a Git repository, npm is probably just cloning the repo. Therefore your .npmignore file isn't being used.
.npmignore seems to be used when "making" modules, e.g. pack or publish, not when consuming modules (like in your example).
Q:
Formality of classifying spaces (for not necessarily connected groups)
As should be evident from the title this question has a similar flavor to:
Formality of classifying spaces
However, unlike Geordie's question, I will be working with torsion free coefficients (say the complex numbers). The `torsion' in the current question is going to be from a different source.
Let $G$ be a reductive linear algebraic group (over $\mathbb{C}$) say. Relatively general formal considerations show that the $G$-equivariant derived category of a point is equivalent to the dg-derived category of a dg-algebra (I am going to ignore finiteness issues).
For instance, if $G$ is connected (so that $BG$ is simply connected), the dg-algebra is the algebra associated to the De Rham complex of $BG$ (let's be friendly and ignore finiteness issues again). In this case the $G$-equivariant derived category of a point is the subcategory of $D(BG)$ generated by the constant sheaf. The latter is equivalent to the derived category of the De Rham complex. Further, since $H^*(BG)$ is a polynomial ring (I am still in the $G$ is connected case) generated in even degree, it is easy to see that this dg-algebra is formal (the point is that the De Rham complex is (super)commutative before even passing to cohomology, so you can very naively construct your quasi-isomorphism by hand).
Ok, that's pretty nice. Now let's look at $G$ not connected. Then the $G$-equivariant derived category of a point is still governed by a dg-algebra, but it's a bit nastier. Essentially we want the subcategory of $D(BG)$ generated by local systems, and since $BG$ isn't simply connected anymore we are going to have non-trivial local systems. The dg-algebra we are after is the $Ext$-algebra of the sum of all the irreducible local systems. Now we can certainly compute the cohomology of this dg-algebra: it is the $H^*(BG^0)$-twisted group algebra of $G/G^0 = \pi_1(BG)$. Added later: to see this, loop the fibration $BG^0\hookrightarrow BG \twoheadrightarrow B(G/G^0)$.
The question: is the dg-algebra governing $D(BG)$ (for $G$ not necessarily connected) formal? If so, is it possible to deduce this formality from the connected case?
I am quite happy assuming that $G/G^0$ is abelian.
A:
The answer is yes: $D^b_G(X)$ is equivariantly formal. The result has been proved in a diploma thesis written under the supervision of Wolfgang Soergel. (Unfortunately it is not available electronically.)
The proof goes roughly as follows. Let $\pi:BG_0 \to BG$ be the quotient map. Let $\mathcal{L}$ be the sum of simple perverse sheaves on $D_G(pt)$ and $\mathcal{L}_0$ the sum of simple perverse sheaves on $D_{G_0}(pt)$, and let $I_{\mathcal{L}}$ and $I_{\mathcal{L}_0}$ be the corresponding injective resolutions (note that $I_{\mathcal{L}_0}$ is a direct summand of $\pi^*(I_{\mathcal{L}})$), so that $D^b_G(pt)\cong D^b(End(I_{\mathcal{L}}))$ ...
You want to show that the $dg$-algebra $End(I_{\mathcal{L}})$ is formal. Of course it would also be fine to show that $End(\pi_*\pi^*I_{\mathcal{L}})$ is formal. Now by [Soe01] Theorem 2.4.2 there exists an isomorphism
$$End(\pi_*\pi^*I_{\mathcal{L}})\cong End( I_{\mathcal{L}_0})\otimes \mathbb{C}[G/G_0] $$
Be careful: the multiplication on $End( I_{\mathcal{L}_0})\otimes \mathbb{C}[G/G_0]$ has a twist corresponding to the action of $G/G_0$ on $End( I_{\mathcal{L}_0})$. Anyway, one can show that the "classical" quasi-isomorphism from $End(I_{\mathcal{L}_0})$ to $H^*(...)$ is equivariant with respect to the $G/G_0$ action and so induces the quasi-isomorphism you are looking for.
[Soe01] Langlands' Philosophy and Koszul Duality http://home.mathematik.uni-freiburg.de/soergel/PReprints/langlands.ps
Q:
cakephp display array name in form input options instead of variable
If I have an array such as
[Yellow] => 1 [Red] => 2 [Blue] => 3
and then want to use these in a form INPUT with $options to make a dropdown selection, is it possible to use the color names Yellow/Red/Blue instead of the values 1/2/3?
Currently the dropdown has 1, 2, 3 as the options instead of the names. The array is used elsewhere and is in that format for a reason.
A:
You could use the array_flip method to swap the keys and values around
$array = array('Yellow' => 1, 'Red' => 2, 'Blue' => 3);
$flippedArray = array_flip($array);
// => [1] => 'Yellow', [2] => 'Red', [3] => 'Blue'
Then use the flippedArray as the options in your select element with the form helper
echo $this->Form->select('colours', $flippedArray);
Or you could combine the colours into a new array for the select element
$combinedArray = array_combine(array_keys($array), array_keys($array));
//=> [Yellow] => 'Yellow', [Red] => 'Red', [Blue] => 'Blue'
echo $this->Form->select('colours', $combinedArray);
In this way you could then use the value passed back from your form as the key of your original array if you needed to.
Q:
Elasticsearch Integration in Spring MVC?
Does anyone know how to integrate Spring MVC and Elasticsearch?
I want to implement a web page like a general search site (Google, Yahoo search engines).
Is there any tutorial or sample code?
A:
Check out the Spring Data Elasticsearch project.
Here is a sample application.
Q:
css - Put scrollable table on remaining height (between header and footer)
Please refer to attached image. I have a Header, content and Footer.
Both Header and Footer are fixed. They should be visible at all times.
And inside the content, I have a <table>. I want to make this <table> take all the remaining height between the header and footer.
The table might contain a lot of data, so I want to make it scrollable as well.
The snippet below shows the closest I got by putting the table into a div with overflow-y: auto; max-height: 200px; (fixed size)
How can I make it take the full / remaining height automatically?
body {
background: green;
padding: 10px;
}
.header {
background: lightblue;
}
.content {
background: lime;
overflow-y: auto;
max-height: 200px;
}
.footer {
background: orange;
}
<div class="header">
Some awesome title
</div>
<div class="content">
<table>
<thead>
<tr>
<th>Lorem.</th>
<th>Numquam.</th>
<th>Id.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lorem.</td>
<td>Eos.</td>
<td>Vel.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Quod.</td>
<td>Quaerat?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Explicabo!</td>
<td>Esse.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Quo.</td>
<td>Praesentium!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Perferendis!</td>
<td>Necessitatibus.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Facere.</td>
<td>Ex.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Ducimus.</td>
<td>Architecto.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Porro!</td>
<td>Voluptatum.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Culpa?</td>
<td>Dignissimos?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Alias.</td>
<td>Deserunt!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Mollitia!</td>
<td>Doloribus?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Quia.</td>
<td>Aspernatur.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Est!</td>
<td>Nihil.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Neque.</td>
<td>Asperiores!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Cupiditate.</td>
<td>Rerum.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Eligendi.</td>
<td>Qui?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Libero.</td>
<td>Molestiae!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Suscipit.</td>
<td>Nostrum.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Minima.</td>
<td>Voluptatem.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Quam.</td>
<td>Mollitia.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Minus!</td>
<td>Corporis.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Perferendis.</td>
<td>Deleniti.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Asperiores!</td>
<td>Rem.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Molestiae.</td>
<td>Dignissimos?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Doloribus.</td>
<td>Ipsam.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Aperiam.</td>
<td>Obcaecati.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Suscipit.</td>
<td>Harum?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Cupiditate.</td>
<td>Tenetur.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Ea!</td>
<td>Ipsam.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Officia!</td>
<td>Velit.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Mollitia!</td>
<td>Voluptatibus.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Rerum.</td>
<td>Accusamus?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Distinctio.</td>
<td>Ducimus.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Iure.</td>
<td>Recusandae.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Quibusdam.</td>
<td>Veritatis.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Optio!</td>
<td>Voluptatum.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>At.</td>
<td>Facere.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Illum?</td>
<td>Placeat!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Unde?</td>
<td>Explicabo.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Reiciendis.</td>
<td>Architecto.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Quasi?</td>
<td>Praesentium!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Odit!</td>
<td>Ratione.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Expedita?</td>
<td>Incidunt!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Nemo.</td>
<td>Reprehenderit?</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Blanditiis.</td>
<td>A.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Iusto.</td>
<td>Similique.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Sint?</td>
<td>Corrupti.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Consequatur.</td>
<td>Nihil!</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Magni.</td>
<td>Deleniti.</td>
</tr>
<tr>
<td>Lorem.</td>
<td>Nobis!</td>
<td>Eius.</td>
</tr>
</tbody>
</table>
</div>
<div class="footer">
Pagination stuff
</div>
A:
Suppose you have this HTML:
<div class="header"></div>
<div class="content">table</div>
<div class="footer"></div>
You were searching for this CSS solution (as flexbox is not supported in IE9):
body {margin: 0;}
.header {height: 50px;}
.content {height: calc(100vh - 100px); overflow-y: scroll;}
.footer {height: 50px;}
But I would use/prefer this one:
body {margin: 0;}
.header {height: 50px; position: fixed; top: 0;}
.content {padding: 50px 0;}
.footer {height: 50px; position: fixed; bottom: 0;}
The latter one uses the normal page scroll instead of a div overflow, which causes fewer cross-browser issues and performs better on (older) mobile devices.
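If IE9 support is not required, a flexbox variant achieves the same layout; this is a sketch of my own, not part of the original answer:

```css
body { margin: 0; display: flex; flex-direction: column; height: 100vh; }
.header, .footer { flex: 0 0 50px; }           /* fixed-height bars */
.content { flex: 1 1 auto; overflow-y: auto; } /* takes the remaining height */
```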
Q:
Logstash seemingly changes the Elasticsearch output URL
I have my Logstash configured with the following output:
output {
  elasticsearch {
    hosts => ["http://myhost/elasticsearch"]
  }
}
This is a valid URL, as I can cURL commands to Elasticsearch with it, such as
curl "http://myhost/elasticsearch/_cat/indices?v"
returns my created indices.
However, when Logstash attempts to create a template, it uses the following URL:
http://myhost/_template/logstash
when I would expect it to use
http://myhost/elasticsearch/_template/logstash
It appears that the /elasticsearch portion of my URL is being chopped off. What's going on here? Is "elasticsearch" a reserved word in the URL that is removed? As far as I can tell, when I issue http://myhost/elasticsearch/elasticsearch, it attempts to find an index named "elasticsearch" which leads me to believe it isn't reserved.
Upon changing the endpoint URL to be
http://myhost/myes
Logstash is still attempting to access
http://myhost/_template/logstash
What might be the problem?
EDIT
Both Logstash and Elasticsearch are v5.0.0
A:
You have not specified which version of Logstash you are using. If you are using one of the 2.x versions, you need to use the path => '/myes/' parameter to specify that your ES instance is behind a proxy. In 2.x, the hosts parameter was just a list of hosts, not URIs.
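For reference, a 2.x-style output block using path might look like this (a sketch; the host and path values are placeholders):

```
output {
  elasticsearch {
    hosts => ["myhost"]
    path => "/myes/"
  }
}
```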
Q:
Where's this verdant ridge?
Reverse image searching turned up nothing, as the log-in box possibly belied the original picture.
Any geographers here? What type of ridge is this?
A:
While not an exact location, the original image is available on Shutterstock: https://www.shutterstock.com/image-photo/sunset-mountains-landscape-carpathian-ukraine-579416353, which describes the image as being from the Carpathian Mountains in Ukraine.
Q:
Operator value evalution issue in C program
#include<stdio.h>
void main(){
int i;
i = i-3;
printf("%d",i); // output -3
}
Why is the answer -3 every time I run this C program? We know that C picks a random value for a variable which is not initialized, so why does it give -3? I also tried it using the auto storage class. I am using the Code::Blocks IDE.
A:
Your code is causing undefined behavior. Anything could happen in the code. Your code might print 3 or 300 or a Mona Lisa picture on the screen.
6.3.2.1p2:
If the lvalue designates an object of automatic storage duration that could have been declared with the register storage class (never had its address taken), and that object is uninitialized (not declared with an initializer and no assignment to it has been performed prior to use), the behavior is undefined.
A:
This code has the following error:
Uninitialized local variable 'i' used.
The incorrect statement in the question is:
"we knows c picks random value for variable which is not initialized": reading an uninitialized variable is undefined behavior, not a guaranteed random value.
Q:
Batch: How do I parse a string containing a filesystem path?
I have a string contained in a variable, for example:
"C:\Users\SomeUser\Desktop\SomeFolder\File.jar"
I would like to parse File.jar from this string into another variable. I currently have this somewhat working with the code:
FOR /f "tokens=1-6 Delims=\" %%A IN (%string%) DO (set myvariable=%%F)
This works as long as the folder path remains the same length. However, I want to be able to move the program and file and still have everything work right. Is there any way to parse, just as an idea, from right to left? Any advice would be greatly appreciated.
A:
Try to apply path modifiers as follows:
set "inputPath=C:\Users\SomeUser\Desktop\SomeFolder\File.jar"
for %%i in ("%inputPath%") do set fname=%%~nxi
echo %fname%
%%~nx<loop-var> extracts the filename root (n) and filename extension (x) from the loop variable; i.e., it extracts the last/filename component from the loop variable's value.
(%%i was chosen as the loop variable in this case, but any letter will do.)
P.S.: Another frequently used construct is %%~dp<param-or-loop-var> to extract the drive spec. (d) and the absolute path (without drive spec.) with a terminating \ (p); this even works for relative input paths.
For instance, %%~dp0 will expand to the absolute path of the folder in which a batch file is located.
A list of all supported path modifiers is here.
(Note that they're only discussed in terms of parameters, but they equally work with for-loop variables).
Q:
Integral identity using the transformation formula
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be integrable. I want to show that
$$ \int_\mathbb{R}f(x)\,\mathrm{d}\lambda(x) = \int_\mathbb{R} f\left(x-\frac{1}{x}\right)\,\mathrm{d}\lambda(x).$$
I tried to use the transformation formula, but I did not get the identity. I would appreciate any hints.
A:
$\def\d{\mathrm{d}}$Define$$
g_1(x) = x - \frac{1}{x}\ (x < 0), \quad g_2(x) = x - \frac{1}{x}\ (x > 0),$$
then$$
h_1(y) = g_1^{-1}(y) = \frac{1}{2} (y - \sqrt{\smash[b]{y^2 + 4}}),\quad h_2(y) = g_2^{-1}(y) = \frac{1}{2} (y + \sqrt{\smash[b]{y^2 + 4}}). \quad \forall y \in \mathbb{R}
$$
Therefore for integrable $f$,\begin{align*}
\int_{\mathbb{R}} f\left( x - \frac{1}{x} \right) \,\d x &= \int_{(-\infty, 0)} f(g_1(x)) \,\d x + \int_{(0, +\infty)} f(g_2(x)) \,\d x\\
&= \int_{g_1((-\infty, 0))} f(u) |h_1'(u)| \,\d u + \int_{g_2((0, +\infty))} f(u) |h_2'(u)| \,\d u\\
&= \int_{\mathbb{R}} f(u) · \frac{1}{2} \left( 1 - \frac{u}{\sqrt{u^2 + 4}} \right) \,\d u + \int_{\mathbb{R}} f(u) · \frac{1}{2} \left( 1 + \frac{u}{\sqrt{u^2 + 4}} \right) \,\d u\\
&= \int_{\mathbb{R}} f(u) \,\d u.
\end{align*}
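As a numerical sanity check of the identity (my own addition, using the test function f(x) = e^{-x^2}, whose integral over the real line is sqrt(pi)), a crude midpoint rule gives the same value for both sides:

```python
import math

def f(x):
    return math.exp(-x * x)  # test integrand; its integral over R is sqrt(pi)

def integrate(g, a=-40.0, b=40.0, n=400_000):
    # composite midpoint rule; the integrands decay fast, so [-40, 40] suffices
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

lhs = integrate(f)
rhs = integrate(lambda x: f(x - 1.0 / x) if x != 0.0 else 0.0)
print(lhs, rhs)  # both approximately sqrt(pi) = 1.7724...
```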
Q:
Binary linear programming with two-dimensional parameter
I am trying to solve a linear programming problem like the following, where $x_i$ represents a binary decision to purchase or not product $i$, and $a_i$ is the utility of product $i$:
$$max \sum_{i=1}^n a_ix_i$$
However, I also have a $n\times n$ matrix $b$, where $b_{ij}$ represents the added (or diminished) overall utility of purchasing products $i$ and $j$ together. Given that all the constraints are one-dimensional (i.e. total cost and total number of products), how can I modify the objective function to better represent the model?
I thought of $max \sum_{i=1}^n x_i(a_i+\sum_{j=1}^n x_j b_{ij})$, but I'm pretty sure this breaks the linearity because of the product $x_i x_j$.
Note: as expected, $b$ is symmetrical.
A:
You need to introduce a new set of binary variables $y_{ij}$ that take value $1$ if and only if items $i$ and $j$ are both selected.
So your objective function becomes
$$
\sum_{i=1}^n a_ix_i + \sum_{i=1}^n \sum_{j<i} b_{ij}y_{ij}
$$
Don't forget to define variables $y_{ij}$ in your constraints:
$$
y_{ij}\le x_i \\
y_{ij}\le x_j \\
$$
(If $y_{ij}$ takes value $1$, then $x_i$ and $x_j$ also have to take value $1$)
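As an illustration (the numbers below are made up, not from the question), a brute-force enumeration shows the quadratic utility that the linearized model with the $y_{ij}$ variables must reproduce:

```python
import itertools

a = [3, 1, 2]                            # linear utilities a_i (made-up values)
b = {(1, 0): 2, (2, 0): -1, (2, 1): 0}   # pairwise utilities b_ij, with j < i

def utility(x):
    # quadratic objective: sum_i a_i x_i + sum_{j<i} b_ij x_i x_j
    linear = sum(ai * xi for ai, xi in zip(a, x))
    pairwise = sum(bij * x[i] * x[j] for (i, j), bij in b.items())
    return linear + pairwise

best = max(itertools.product([0, 1], repeat=3), key=utility)
print(best, utility(best))  # (1, 1, 1) 7
```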
Q:
How does Apache load Javascript files?
I was creating a simple Wordpress plugin which uses a javascript file. Although the PHP edits did not need a server refresh and were reflected immediately on page reload, the javascript edits were not reflected until I restarted the server (they did not work even on a hitting "Refresh" on xampp).
What I would like to know:
1. How are Javascript files are loaded in Apache?
2. Is there anyway to configure it so that the files are loaded everytime I reload the page? ( I will be editing the Javascript files a lot. I do not want to be restarting the server everytime!)
A:
How are Javascript files are loaded in Apache?
Ans: It is the same as with your HTML file or other static content.
Is there anyway to configure it so that the files are loaded everytime I reload the page?
Ans: This is not a problem with Apache; it is mostly because your browser caches your JavaScript file. Simply clear your browser cache.
Q:
Which algorithm is best when running with parallel processors?
If I had a machine with multiple processors and were trying to solve a huge problem, which algorithm would be best for solving it?
Dynamic Programming, Greedy or the Divide and Conquer algorithm?
A:
This is an extremely broad question, and many things will depend on the specific problem you're trying to solve, but what you're looking for is whether an algorithm can run its steps in parallel. This can only be done if the result of one step does not depend on the result of another.
We can't say anything about greedy algorithms here. They're only defined as taking the locally best next step.
Divide and conquer divides the problem into separate parts, each of which can be individually solved, so this is often a good candidate for running things in parallel.
Dynamic programming could be viewed as a sort of divide and conquer, but now, you're solving a small part of the problem, and then using that to solve a bigger part, and so on. For example, the knapsack problem is often used as a use case for dynamic programming. You can solve the problem starting with a trivially small knapsack, and building up your solution from there. The problem here is that each solution depends on the solution of the smaller problem. Unless individual steps can be divided among threads, this cannot be parallelized.
So generally, divide and conquer seems to be your best bet for running things in parallel.
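As a toy illustration (my own sketch, not from the answer), a divide-and-conquer sum parallelizes naturally because the chunks are independent:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(xs, workers=4):
    # divide: split into independent chunks; conquer: sum each chunk in
    # parallel; combine: add up the partial results
    step = max(1, len(xs) // workers)
    chunks = [xs[i:i + step] for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

print(parallel_sum(list(range(100))))  # 4950
```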
Q:
Downvote newcomers poorly posted questions and answers?
Following up on How Can We Encourage Civility?
Should we downvote poorly posted questions/answers until they are modified, even when the poster is a newcomer?
A:
I'm of the opinion that it's not productive nor is it helpful to downvote questions from newbies and I'll give some of my reasons why.
It's my opinion that if you're going to downvote, you should explain what's wrong with the question (or answer) that you're downvoting.
Heaping downvotes on a newbie doesn't help them learn how to use the forum if you're not explaining what's wrong with their question.
For most, it's already intimidating enough to ask their 1st question in the forum to begin with without being bombarded with downvotes should they make a misstep.
Newbies have only 5 Reputation to lose, so the downvote has little if any impact.
I've observed that some will seem to "pile on", knowing their points will be returned once the question is closed. My thoughts are:
The behavior doesn't help the newbie learn to use the forum.
The behavior seems malicious and intimidating to a newbie.
It definitely isn't welcoming, as in "let us show you how to use our forum".
Finally, I'll add that if it were entirely up to me and the platform supported it, I'd require that users comment on what's "wrong" when they downvote. I think people need to know why they're being downvoted so they can learn from it. To me, a downvote without a comment tells me very little I can learn from.
Q:
Why use 'being' in this sentence?
I came across this sentence in my programming book:
There are a number of differences that need to be explained though,
the most important one being that read or write accesses to the file
performed by applications actually affect the target of the link and
not the link itself.
Why would it bother to use the following?
...the most important one is that read or write...
A:
Being allows the writer to continue the sentence. If you wanted to replace the most important one being ... with the most important one is ... you'd have to start a new sentence. That would probably be quite a good thing anyway.
Q:
Comma separated list using angular ngfor and ngif
I have a comma separated list that displays an array of four items.
What I'm trying to do is this:
item1
item1, item2
item1, item2, item3
item1, item2, item3, item4 ...
What is happening is this:
item1,
item1, item2,
item1, item2, item3,
item1, item2, item3, item4 ...
Here is my code:
<span *ngFor="let item of record.referrerItemList; let i=index">
<span *ngIf="i <= 3">{{item}}</span><span class="list-format" *ngIf="i < 3">, </span>
<span *ngIf="(i > 3) && (i < 5)" class="hellip-format">…</span>
</span>
Here is some list results:
Item1, Item2, Item3, Item4 …
Item1,
Item1, Item2, Item3, Item4 …
Item1, Item2,
A:
Just a little change should work: in place of i < 3, use i != record.referrerItemList.length-1, because only after the last item do you not want the comma.
<span *ngFor="let item of record.referrerItemList; let i=index">
<span *ngIf="i <= 3">{{item}}</span><span class="list-format" *ngIf="i!=record.referrerItemList.length-1">, </span>
<span *ngIf="(i > 3) && (i < 5)" class="hellip-format">…</span>
</span>
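An alternative sketch (not from the original answer) sidesteps the index arithmetic entirely by using the last local variable that *ngFor exposes; shown here without the four-item truncation:

```html
<span *ngFor="let item of record.referrerItemList; let last = last">
  {{item}}<span class="list-format" *ngIf="!last">, </span>
</span>
```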
Q:
OCaml equivalent to Haskell's @ in pattern matching (a.k.a. as-pattern)
In Haskell, while pattern matching, I can use @ to get the entire structure in a pattern. (For easier Googling, this structure is known as an as-pattern.)
For example, x:xs decomposes a list into a head and a tail; I can also get the entire list with xxs@(x:xs).
Does OCaml have something like this?
A:
You can use as:
let f = function
| [] -> (* ... *)
| (x::xs) as l ->
(*
here:
- x is the head
- xs is the tail
- l is the whole list
*)
A:
Let me extend Etienne's answer a little bit with some examples:
let x :: xs as list = [1;2;3];;
val x : int = 1
val xs : int list = [2; 3]
val list : int list = [1; 2; 3]
When you write <pattern> as <name>, the variable <name> is bound to the whole pattern on the left, in other words, the scope of as extends as far to the left as possible (speaking more techically as has lower priority than constructors, i.e., the constructors bind tighter). So, in case of the deep pattern matching, you might need to use parentheses to limit the scope, e.g.,
let [x;y] as fst :: ([z] as snd) :: xs as list = [[1;2];[3]; [4]];;
val x : int = 1
val y : int = 2
val fst : int list = [1; 2]
val z : int = 3
val snd : int list = [3]
val xs : int list list = [[4]]
val list : int list list = [[1; 2]; [3]; [4]]
Q:
Check if array values are ints
Possible Duplicate:
More concise way to check to see if an array contains only numbers (integers)
Is there a way to check that all values in an single dimension array are ints?
The best I could do is something like
function checkArray($array) {
foreach($array as $value) {
if (!is_int($value)) return false;
}
return true;
}
I'm wondering if there's a more concise/pre-built way, something like is_int_array() that I might not know about.
A:
I imagine you can never beat O(n), as every element needs to be checked for conformance to the rule. The code below looks at each element once, which is O(n), removing it if it is not an integer, and then does a simple comparison.
It will still have a slightly larger storage complexity, however, since it needs to store the filtered array.
O(n) is a representation of complexity; in this case the complexity is n (the number of elements in the array), as each element must be looked at once.
If, for example, you wanted to multiply every number by every other number, the complexity would be approximately O(n^2), as for each element you must look at each other element (though this is a poor example).
See this guide for further information on Big O Notation as it is called
However try the below (adapted from previous question)
$only_integers === array_filter($only_integers, 'is_int'); // true
$letters === array_filter($letters, 'is_int'); // false
You could then do
/**
* Test array against provided filter
* testFilter(array(1, 2, 'a'), 'is_int'); -> false
*/
function testFilter($array, $test) {
return array_filter($array, $test) === $array;
}
Q:
Add a New Windows 2000 Server to a Windows 2003 Domain, Do I need ADPREP?
I've got a newish Windows 2003 DC and a very old Windows 2000 DC running together in Windows 2000 Native mode. I've got to pull the 2000 Server to repair it, so I want to add in a temporary 2000 Server to keep 2 DCs running. When I run DCPROMO on the temporary box, will it get the correct schema from the 2003 Server? Or do I have to run adprep /forestprep and adprep /domainprep before I add it to the domain?
A:
Dcpromo is really all you should need. [ad|forest]prep is only needed when upgrading the schema, which you're not doing at this time.
Additionally, if an [ad|forest]prep operation is ever needed, windows is usually good about notifying you as such.
Q:
Error importing Maven project in eclipse
I have a problem which I do not know how to solve. I want to import a maven project to eclipse. However when I am doing so I am getting following error from eclipse:
But when I am doing "mvn install" in the terminal, I am able to successfully build the project.
Here is the pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.thinkaurelius.titan</groupId>
<artifactId>titan-web-example</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>
<description>
This is a simple web app example.
</description>
<licenses>
<license>
<name>The Apache Software License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
</license>
</licenses>
<properties>
<spring.version>4.0.3.RELEASE</spring.version>
<titan.version>1.0.0</titan.version>
</properties>
<dependencies>
<dependency>
<groupId>com.thinkaurelius.titan</groupId>
<artifactId>titan-cassandra</artifactId>
<version>${titan.version}</version>
<exclusions>
<!-- These don't play well w/ the web classesthatrunthings deps below -->
<exclusion>
<groupId>org.mortbay.jetty</groupId>
<artifactId>servlet-api</artifactId>
</exclusion>
<exclusion>
<groupId>javax.servlet</groupId>
<artifactId>servlet-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.thinkaurelius.titan</groupId>
<artifactId>titan-es</artifactId>
<version>${titan.version}</version>
</dependency>
<dependency>
<groupId>org.apache.tinkerpop</groupId>
<artifactId>gremlin-groovy</artifactId>
<version>3.0.0-incubating</version>
</dependency>
<!-- Web App -->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>jaxrs-api</artifactId>
<version>2.2.1.GA</version>
</dependency>
<dependency>
<groupId>com.sun.jersey</groupId>
<artifactId>jersey-server</artifactId>
<version>1.18.1</version>
</dependency>
<dependency>
<groupId>com.sun.jersey</groupId>
<artifactId>jersey-json</artifactId>
<version>1.18.1</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-aop</artifactId>
<version>${spring.version}</version>
</dependency>
<!-- Jersey -->
<dependency>
<groupId>com.sun.jersey.contribs</groupId>
<artifactId>jersey-spring</artifactId>
<version>1.18.1</version>
<exclusions>
<!-- oh maven you crazy old bird! -->
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-aop</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Jetty Embedded App Container -->
<!-- You only need this if you want to run the web app in the embedded jetty container using the RunApp class -->
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-server</artifactId>
<version>9.2.0.v20140526</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-webapp</artifactId>
<version>9.2.0.v20140526</version>
</dependency>
<!-- Util -->
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.2.4</version>
</dependency>
<!-- Groovy GMaven -->
<dependency>
<groupId>org.codehaus.gmaven</groupId>
<artifactId>gmaven-plugin</artifactId>
<version>1.4</version>
</dependency>
</dependencies>
<build>
<finalName>titan-web-example</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.gmaven</groupId>
<artifactId>gmaven-plugin</artifactId>
<version>1.4</version>
<configuration>
<providerSelection>1.8</providerSelection>
</configuration>
<executions>
<execution>
<goals>
<goal>generateStubs</goal>
<goal>compile</goal>
<goal>generateTestStubs</goal>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
It looks like gmaven has a problem in eclipse, but I do not know how to solve that.
A:
The way Eclipse handles Maven projects is currently through the m2e plugin.
m2e does not invoke Maven underneath; it parses the pom.xml files and, for each plugin mentioned, invokes code explicitly written to behave inside m2e exactly as the plugin would. This is what a "connector" is.
If a connector is not available for a given plugin (so m2e cannot emulate it for you), you see this error. This is unfortunately not uncommon for the rarer plugins.
You must investigate whether m2e can present enough of the project to allow you to work anyway, and then just do real builds from the command line.
You may also consider using another free IDE for such a troublesome project. I know that IntelliJ works with almost any pom.xml file, and am told that Netbeans can too.
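If you want m2e to stop flagging the gmaven-plugin execution, one common workaround is a lifecycle-mapping entry under <build><pluginManagement> in the pom; this is a sketch, and the version range and goals may need adjusting:

```xml
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.eclipse.m2e</groupId>
      <artifactId>lifecycle-mapping</artifactId>
      <version>1.0.0</version>
      <configuration>
        <lifecycleMappingMetadata>
          <pluginExecutions>
            <pluginExecution>
              <pluginExecutionFilter>
                <groupId>org.codehaus.gmaven</groupId>
                <artifactId>gmaven-plugin</artifactId>
                <versionRange>[1.4,)</versionRange>
                <goals>
                  <goal>generateStubs</goal>
                  <goal>compile</goal>
                  <goal>generateTestStubs</goal>
                  <goal>testCompile</goal>
                </goals>
              </pluginExecutionFilter>
              <action>
                <ignore />
              </action>
            </pluginExecution>
          </pluginExecutions>
        </lifecycleMappingMetadata>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>
```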
Q:
Rails tests (minitest) cause PostgreSQL syntax error
So whenever I run a test, Rails seems to be attempting to insert nothing into my PostgreSQL database... causing a syntax error. What could possibly cause this? For some context, this happens with every test, regardless of how simple or complex. For example:
class PlayerTest < ActiveSupport::TestCase
test "should not save empty player" do
assert true
end
end
And then I see the following error message:
Error:
PlayerTest#test_should_not_save_empty_player:
ActiveRecord::StatementInvalid: PG::SyntaxError: ERROR: syntax error at or near ")"
LINE 1: INSERT INTO "players_venues" () VALUES ()
Also, players_venues is a many-to-many join table between players and venues. This problem does not occur outside of tests. Any help would be greatly appreciated. If any more code is required please don't hesitate to ask.
A:
So I eventually figured this out. It turns out Rails generates empty fixtures like so:
one: {}
# column: value
#
two: {}
# column: value
When you run the tests, Rails attempts to insert nothing! Thanks heaps to jdno for giving me a hint to look into the fixtures. Bloody champion.
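So the fix is either to delete the generated fixture files you don't need, or to give the fixtures real column values; for example (the column names here are assumptions about the schema):

```yaml
# test/fixtures/players_venues.yml
one:
  player_id: 1
  venue_id: 1
```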
Q:
Small sphere circling at intersection of sphere and sinusoid
I have an equation for a sphere which is x^2 +y^2 + (z - 1)^2 == 1 and for a sinusoid which is y == Sin[x] with z == 0.
I need to make a 3D animation that shows the sphere being cut in half by the sinusoid. Intersection of these two will be the trajectory for another sphere which will follow it.
[EDIT]
In the simplest form it should look like the example below, but it should be an animation. In this example I haven't used any equations other than y = Sin[x] and x = 3.
Show[Graphics3D[{Red, Sphere[{0, -0.5, 0}, 0.2]}],
Plot3D[y = Sin[x], {x, -5, 5}, {y, -1, Sin[x]}]]
A:
I admit that I am uncertain what the aim is. I post this with the hope that it may prompt correction/clarification with respect to the goal.
In the following the mesh shading reflects mesh functions y=Sin[x] (the 1 was used to avoid difficulties with 0). Either one certainly would 'cut the sphere' in half (but any great circle could be chosen without the need for Sin[x]). In addition I plot Sin[x] on the x-y plane with corresponding pre-image of stereographic projection from {0,0,2} on the index sphere.
cp = With[{ms = {{Blue, Opacity[0.3]}, {Yellow, Opacity[0.3]}}},
ContourPlot3D[
x^2 + y^2 + (z - 1)^2 == 1, {x, -1, 1}, {y, -1, 1}, {z, 0, 2},
ContourStyle -> None,
MeshFunctions -> {#2 - Sin[#1] + 1 &, #2 + Sin[#1] + 1 &},
Mesh -> {{1}, {1}}, MeshShading -> {ms, Reverse@ms}]];
spi[x_, y_] := {4 x, 4 y, 2 ( x^2 + y^2)}/(4 + x^2 + y^2);
anim[t_] := With[{
s = {t, Sin[t], 0},
tg = spi[t, Sin[t]],
p1 = cp,
pt = ParametricPlot3D[{u, Sin[u], 0}, {u, -Pi, Pi}],
plane = InfinitePlane[{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}],
sin = Table[spi[j, Sin[j]], {j, -100 Pi, 100 Pi, 0.1}]},
Show[cp, pt,
Graphics3D[{Orange, Opacity[0.5], plane, Opacity[1], Black, Red,
Thick, Line[sin], Purple, PointSize[0.04], Point[s], Point[tg],
Gray, Thickness[0.005], Line[{{0, 0, 2}, s, tg}]}],
PlotRange -> {{-Pi, Pi}, {-1, 1}, {0, 2}}, Axes -> False,
Boxed -> False, Background -> Black, BoxRatios -> Automatic]]
The animated gif was created by exporting a sequence with t from $-\pi$ to $\pi$:
Q:
Java Regex for a requested URL with XML
I know that there are tons of questions related to this topic (regex), but I've been trying to fulfill a requirement for a URL. The URL comes as follows:
POST /fr.synomia.search.ws.module.ModuleSearch/geResults/jsonp?xmlQuery=<?xml version='1.0' encoding='UTF-8'?><query ids="16914"><matchWord>avoir</matchWord><fullText><![CDATA[]]></fullText><quotedText><![CDATA[]]></quotedText><sensitivity></sensitivity><operator>AND</operator><offsetCooc>0</offsetCooc><cooc></cooc><collection>0</collection><searchOn>all</searchOn><nbResultDisplay>10</nbResultDisplay><nbResultatsParAspect>5</nbResultatsParAspect><nbCoocDisplay>8</nbCoocDisplay><offsetDisplay>0</offsetDisplay><sortBy>date</sortBy><dateAfter>0</dateAfter><dateBefore>0</dateBefore><ipClient>82.122.169.244</ipClient><typeQuery>0</typeQuery><equivToDelete></equivToDelete><allCooc>false</allCooc><versionDTD>3.0.5</versionDTD><r34>1tcbet30]</r34><mi>IND</mi></query>&callback=__gwt_jsonp__.P1.onSuccess&failureCallback=__gwt_jsonp__.P1.onFailure HTTP/1.1
It is a URL requested from a REST WS; in the structure of this URL we can find a tag: <query ids="16914">
I want to extract the number 16914 from the whole URL. The regex I tried to implement is the following:
private static Pattern p = Pattern.compile(
"<\\?xml version='1.0' encoding='[^']+'\\?><query ids=\"([0-9]+)\"><matchWord>.*");
I tried with some tools like Debuggex, but I can't manage to find what the problem could be. I prefer to use a regex instead of working with a lot of methods from the String class.
I would really appreciate any help. Thanks a lot in advance.
A:
There is nothing wrong with your regex, it works for me.
String s = "POST /fr.synomia.search.ws.module.ModuleSearch/geResults/jsonp?xmlQuery=<?xml version='1.0' encoding='UTF-8'?><query ids=\"16914\"><matchWord>avoir</matchWord><fullText><![CDATA[]]></fullText><quotedText><![CDATA[]]></quotedText><sensitivity></sensitivity><operator>AND</operator><offsetCooc>0</offsetCooc><cooc></cooc><collection>0</collection><searchOn>all</searchOn><nbResultDisplay>10</nbResultDisplay><nbResultatsParAspect>5</nbResultatsParAspect><nbCoocDisplay>8</nbCoocDisplay><offsetDisplay>0</offsetDisplay><sortBy>date</sortBy><dateAfter>0</dateAfter><dateBefore>0</dateBefore><ipClient>82.122.169.244</ipClient><typeQuery>0</typeQuery><equivToDelete></equivToDelete><allCooc>false</allCooc><versionDTD>3.0.5</versionDTD><r34>1tcbet30]</r34><mi>IND</mi></query>&callback=__gwt_jsonp__.P1.onSuccess&failureCallback=__gwt_jsonp__.P1.onFailure HTTP/1.1";
Pattern p = Pattern.compile(
"<\\?xml version='1.0' encoding='[^']+'\\?><query ids=\"([0-9]+)\"><matchWord>.*");
Matcher m = p.matcher(s);
if (m.find()) {
System.out.println("Group: "+m.group(1));
}
Prints:
Group: 16914
| {
"pile_set_name": "StackExchange"
} |
Q:
SQL: Is it possible to block a table insert just prior to the completion of a transaction?
TL;DR: My real question is in the title, is it possible to block a table insert just prior to the completion of a transaction, that is, only concerning the data as it would be right before the transaction would be committed?
UPDATE: What follows is merely a contrived example, possibly not a good one, demonstrating that I was unable to come up with a way to block an insertion/update prior to the completion of a transaction that contains two statements. I simply want to know if there is a way to do this; the example is somewhat irrelevant.
The (possibly bad) example:
I am trying to prevent a transaction from occurring if some property of two tables is broken, for a simple example let's say I want to block if one of the first table's values (say ID) already exists in table 2.
create table dbo.tbl1
(
id int,
name varchar(20)
)
create table dbo.tbl2
(
id int,
name varchar(20)
)
go
The thing that I want to fail is the following:
begin transaction
insert into tbl1 values(1, 'tbl1_1')
insert into tbl2 values(1, 'tbl2_1')
commit transaction
Since at the end of the transaction the first table would have an id with the same value as that in table 2.
But unfortunately I've tried defining both a trigger to block this and a check constraint, and neither seems to block it.
Trigger (as suggested here):
CREATE TRIGGER MyTrigger ON dbo.tbl1
AFTER INSERT, UPDATE
AS
if exists ( select * from tbl2 inner join inserted i on i.id = tbl2.id)
begin
rollback
RAISERROR ('Duplicate Data', 16, 1);
end
Check Constraint (as suggested here):
create function dbo.tbl2WithID(@ID int) returns int
as
begin
declare @ret int
select @ret = count(*) from tbl2 where id = @ID
return @ret
end
go
alter table dbo.tbl1
add constraint chk_notbl2withid
check (dbo.tbl2WithID(id) = 0)
go
How can I update my code to successfully block the transaction? Do I need to redefine the transaction in some way?
A:
No, it's not possible to do what you want in MS SQL Server, but it might be in PostgreSQL or Oracle.
Reason part 1: It's not possible to insert to two different tables in the same statement.
Reason part 2: "SQL Server [does not] allow constraint violations in a transaction as long as the transaction has not been committed yet."
Therefore it is not possible in SQL Server to have a single insert constraint spanning more than one table that blocks just prior to the completion of arbitrary transactions.
It's also worth mentioning that what you want is called a deferrable constraint. See more about that here.
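For illustration, here is what a deferrable constraint looks like in PostgreSQL. This is a sketch only, not valid T-SQL: it uses a foreign key rather than the exact cross-table exclusion above (CHECK constraints cannot be deferred in PostgreSQL), and it assumes tbl1.id is declared as a primary key:

```sql
ALTER TABLE tbl2
    ADD CONSTRAINT fk_tbl2_tbl1
    FOREIGN KEY (id) REFERENCES tbl1 (id)
    DEFERRABLE INITIALLY DEFERRED;

BEGIN;
INSERT INTO tbl2 VALUES (1, 'tbl2_1');  -- tolerated mid-transaction
INSERT INTO tbl1 VALUES (1, 'tbl1_1');
COMMIT;                                 -- the constraint is checked only here
```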
| {
"pile_set_name": "StackExchange"
} |
Q:
replace goto using recursion
I have a simple digit root calculation (ie. sum all the digits in an integer, and if the total is more than one digit, repeat the process until you end up with a single digit answer).
My first impulse is to simply calculate the sum of the digits in the initial string, test if the result is more than one digit, and if so, goto the start of the summing suite:
eg.
line = "123456789"
sum_digits = 0
# label: if I could use goto, it would go to here!
n_chars = len(line)
while n_chars > 0:
sum_digits = sum_digits + int(line[0])
line = line[1:]
n_chars -= 1
line = str(sum_digits)
if len(line) < 2: # all done
print("digital root is ", sum_digits);
else:
goto label: # need to repeat until get single digit root number
But, of course, python doesn't support 'goto' -- so I guess this should be written as a recursive function? What is the neatest way to code this?
ps. I'm trying to keep the code very simple to understand/explain as I'm doing this as a coding exercise with my son, who is learning programming
A:
To replace the goto the idea is to keep track of what the line is after every iteration you find its sum.
So, if I understand your question, the following should work:
def find_root(line):
while len(line) > 1:
sum_digits = 0
for num in list(line):
sum_digits += int(num)
line = str(sum_digits)
print(line)
>>find_root("123456789")
9
>>find_root("93856")
4
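Since the question explicitly mentions recursion: the same idea can also be written as a recursive function (a sketch, not part of the answer above; each call sums the digits once and recurses while more than one digit remains):

```python
def find_root(line):
    # Sum the digits of the current string
    total = sum(int(ch) for ch in line)
    # One digit left: done; otherwise repeat on the new string
    if total < 10:
        return total
    return find_root(str(total))

print(find_root("123456789"))  # 9
print(find_root("93856"))      # 4
```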
| {
"pile_set_name": "StackExchange"
} |
Q:
Count of values across fields in Pig
I have the below test data.
A B C
M O
M M M
M M M
N O
P N
I would like to get the total count of entries in this sample test data, i.e. 12.
I have the below code to do the same, but I am getting an incorrect result.
Any help on how to rectify would be helpful.
test= LOAD 'testdata' USING PigStorage(',') as (A:chararray,B:chararray,C:chararray);
values = FOREACH test GENERATE (A==''?'null':(A is null?'null':A)) as A,(B==''?'null':(B is null?'null':B)) as B,(C==''?'null':(C is null?'null':C)) as C;
grp = GROUP values ALL;
counting = FOREACH grp GENERATE group, COUNT(values.A)+COUNT(values.B)+COUNT(values.C);
This is giving the answer as 15, rather than 12.
I would also like to get the count of each of these values,like M=7, N=2, O=2, P=1.
I have written the below code.
test= LOAD 'testdata' USING PigStorage(',') as (A:chararray,B:chararray,C:chararray);
values = FOREACH test GENERATE (A==''?'null':(A is null?'null':A)) as A,(B==''?'null':(B is null?'null':B)) as B,(C==''?'null':(C is null?'null':C)) as C;
grp = GROUP values ALL;
A = FOREACH grp {
B =FILTER test.A=='M' OR test.B=='M' OR test.C=='M';
GENERATE group, COUNT(B);
};
I am getting an error "Scalar has more than one row in the output".
A:
You are counting the column names as well in your final count. Modify the script to ignore the first row and then group by and count.
test= LOAD 'testdata' USING PigStorage(',') as (A:chararray,B:chararray,C:chararray);
ranked = rank test;
test1 = Filter ranked by ($0 > 1); --Note:rank_test should work.
values = FOREACH test1 GENERATE (A==''?'null':(A is null?'null':A)) as A,(B==''?'null':(B is null?'null':B)) as B,(C==''?'null':(C is null?'null':C)) as C;
grp = GROUP values ALL;
counting = FOREACH grp GENERATE group, COUNT(values.A)+COUNT(values.B)+COUNT(values.C);
| {
"pile_set_name": "StackExchange"
} |
Q:
C++ Template Meta-programming, with variadic templates to perform an operation on specific members of a structure
I developed this method while finding a way to nicely abstract binding a struct to a SQL statement for a SQLite wrapper. My aim is to be able to abstract away most of the binding process, as well as being able to "alias" the specialised function template so that you don't need to retype the bindings on each use.
The method I found will compile on Clang, but I can't currently get it to work for both GCC and MSVC due to the use of the auto keyword; this is replicated in the example below with add2.
Example:
// final function to perform the addition on each member
template<class T, class M>
void add2_member(T& value, M member)
{
value.*member += 2;
}
// variadic template function that unpack the Members and calls add2_member for each
// arg in args using c++17 fold syntax.
template<class T, auto T::*... Members>
void add2(T& value)
{
(add2_member(value, Members), ...);
}
// example struct X
struct X
{
int a;
int b;
char c;
};
// alias add2 function specialisation
auto add2_x = add2<X, &X::a, &X::c>;
int main()
{
X x;
x.a = 2;
x.c = 1;
add2_x(x);
}
A live link showing this working is here, which shows the example in Godbolt compiling with Clang 9, but GCC is unable to deduce the type of the auto and I currently have not found a way to achieve my goal without it, unless this could be done using macros.
A:
Solution thanks to Sam Varshavchik.
Both MSVC and GCC are now able to compile this code and deduce the auto type, though for MSVC you must remember to pass /Zc:auto.
template<class T, class M>
void add2_member(T& value, M member)
{
value.*member += 2;
}
template<class T, auto ... Members>
void add2(T& value)
{
(add2_member(value, Members), ...);
}
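Putting the fixed templates together with the alias and struct from the question gives a complete translation unit (a sketch; it requires -std=c++17 for auto non-type template parameters and fold expressions):

```cpp
#include <cassert>

template<class T, class M>
void add2_member(T& value, M member) {
    value.*member += 2;
}

template<class T, auto... Members>
void add2(T& value) {
    (add2_member(value, Members), ...);  // C++17 fold over the member list
}

struct X { int a; int b; char c; };

// Alias the specialisation once so the bindings are not retyped on each use
auto add2_x = add2<X, &X::a, &X::c>;
```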
| {
"pile_set_name": "StackExchange"
} |
Q:
User-oriented sparql on views and referential graphs
I have a memory store that mixes graphs with referential data and a graph with user-bound resources. I would like to expose the data filtered by user and/or role along with all referential data.
Additionally, I need RDFS inference on the dataset.
First, is it possible to add a reasoner to a SparqlView object, or do I need to run the reasoner each time the view is refreshed?
As for the architectural part, it seems that I have several options :
Build a view per user that unions referential data and the users scope
(but I cannot make the query to work with unions of different graph patterns)
Build a view per user with only the data he can browse/modify and run my queries against a dataset that defaults to the union of referential graphs and user view.
...
What is the best pattern to do this with dotNetRDF with regard to query performance, memory consumption and simplicity?
A:
I've never tried to do something like this with the API but it should be possible. I would probably recommend not using SparqlView implementations unless the criteria for what data can be seen by each user is only expressible as a SPARQL query, a SparqlView is fairly expensive in memory terms since it takes a copy of the original data and is not currently a direct view over the underlying data (theoretically this is possible but would require a lot more coding and would trade off lower memory usage for performance).
From what you have described I would suggest the best approach might be to use a custom ISparqlDataset implementation, likely deriving from the decorator WrapperDataset. This way you can intercept all the calls that SPARQL queries will make and restrict exactly what each instance is able to retrieve. This way you can have a single store which is the underlying data and a wrapper for each user which provides their view onto that data.
If you do this one thing you will want to be careful of is thread safety, if the underlying dataset you will be wrapping does not implement IThreadSafeDataset then you will need to ensure that all your wrappers override the Lock property and use a shared lock as otherwise you could have problems.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible to add the values of two variables using Morphline's inbuilt set of commands?
I'm wondering if there is any way to add the values of two variables in morphlines, without having to write a custom command.
For example, something like:
addValues {
answer : "@{value_one}" + 50
}
Any help is appreciated, thanks
A:
I did not find out how to (or whether it was possible to) do this using Morphline's inbuilt commands. However, it is possible to do so within the java{} command, which allows you to write plain Java code in-line in the Morphlines config.
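For reference, such an in-line snippet might look roughly like this. This is a sketch only: the field names value_one and answer come from the question, and the Record accessor methods used here are assumptions that should be checked against the Kite Morphlines API docs:

```
{ java {
    code : """
      // hypothetical field names taken from the question
      int valueOne = Integer.parseInt(record.getFirstValue("value_one").toString());
      record.replaceValues("answer", valueOne + 50);
      return child.process(record);
      """
} }
```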
| {
"pile_set_name": "StackExchange"
} |
Q:
Add an <object> with custom attributes using an AngularJS controller
I'm attempting to add an <object></object> into my HTML using a controller. When I load a <div> or a <p>, it works properly, but when I add an <object> it doesn't appear, nor do any custom attributes.
HTML:
<html ng-app="myAngularSite">
...
...
<div ng-controller="MyController">
<div id="myloader" ng-bind-html="myObject"></div>
</div>
JS:
var app = angular.module('myAngularSite', ['ngRoute']);
angular.module('myAngularSite', ['ngSanitize'])
.controller('MyController', ['$scope',function($scope) {
$scope.myObject =
'<object id="my_object" data="mysite.html" width="99.5%" height="400px" style="overflow:auto;border:3px ridge gray"/>';
}]);
How can I add the custom attributes and the object into my site? I noticed that attributes won't appear when I try to load a <div id="with_attribute"></div> with attributes, although the divs appear by themselves.
Thanks!
A:
A custom directive is probably a good solution; you can always add more custom behaviours, whereas with ngBindHtml you will be limited. Here is a link which may help:
angular ng-bind-html and directive within it
| {
"pile_set_name": "StackExchange"
} |
Q:
JQuery .attr and .data method gives different value for same attribute
Here is my html code
<li class="current" title="some title" data-atype="some type" data-aid="119697371285186601">
Some text
</li>
This is what I get from the firebug / google chrome debugger console,
>> $().jquery
"1.7.2"
>> ($('li.current:first')).data('aid')
119697371285186610
>> ($('li.current:first')).attr('data-aid')
"119697371285186601"
I searched for the issue and I could not find the exact reason and solution for the issue. If anybody could help me to find out a solution and the cause of the problem, it would be really helpful.
Thanks in advance...
A:
$.fn.data tries to be smart about numeric types and converts them to integers. Integers however, in JavaScript, are nothing but floats and, thus, get less precise as they approach larger values. I’d stick with attr() since that will always return a string. There is also a ticket for this but it’s marked “WONTFIX”. For more information on the limitations of large numbers in JavaScript, see this article.
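The precision loss is easy to reproduce without jQuery, since it comes from the number type itself (runnable as-is in a browser console or Node):

```javascript
// data('aid') effectively coerces the attribute with Number(), but the value
// exceeds Number.MAX_SAFE_INTEGER (2^53 - 1), so the trailing digits round off.
var raw = "119697371285186601";   // what attr('data-aid') returns (a string)
var coerced = Number(raw);        // what data('aid') hands back

console.log(Number.isSafeInteger(coerced));  // false
console.log(String(coerced) === raw);        // false -- the last digits changed
```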
| {
"pile_set_name": "StackExchange"
} |
Q:
Show specific elements in a comma-separated StringBuffer
I am working with SNMP4J and have it successfully outputting comma separated strings (from SNMP traps).
i.e.
StringBuffer msg = new StringBuffer();
msg.append(event.toString());
Vector<? extends VariableBinding> varBinds = event.getPDU().getVariableBindings();
if (varBinds != null && !varBinds.isEmpty()) {
Iterator<? extends VariableBinding> varIter = varBinds.iterator();
while (varIter.hasNext()) {
VariableBinding var = varIter.next();
msg.append(var.toString()).append(";");
}
}
System.out.println("Message Received: " + msg.toString());
outputs -
Message Received: CommandResponderEvent[securityModel=1, securityLevel=1, maxSizeResponsePDU=65535, etc etc
As well as the entire output string I need to display specific elements (in particular
peerAddress=192.168.150.210/61263
and
VBS[1.3.6.1.4.1.332.10.14.19.11.0 = Fire]]
but only ideally the IP address part (192.168.150.210) and the 'meaning' (Fire)
Do I use split to find specific elements and then substring those or is there a better way?
String sixth_word = msg.toString().split(",")[6];
A:
You don’t need a StringBuffer; the whole message can be built in one expression. (Note that String.join requires CharSequence elements, so map each VariableBinding to a String first.)
String msg = event + " " + event.getPDU().getVariableBindings().stream()
        .map(String::valueOf)
        .collect(java.util.stream.Collectors.joining(";"));
Personally, for a diagnostic message, I wouldn’t even bother with String.join:
String msg = event + " " + event.getPDU().getVariableBindings();
Reading the documentation of the classes you’re using will go a long way toward showing you what is available to you. If you read the documentation for CommandResponderEvent, you’ll see it has public methods that provide the data you desire:
String peerAddress = event.getPeerAddress().toString();
Looking at the documentation of VariableBinding, we can see that it has a getVariable() method. Each binding’s Variable is where the “Fire” text comes from:
List<String> variables = new ArrayList<>();
for (VariableBinding binding : event.getPDU().getVariableBindings()) {
variables.add(String.valueOf(binding.getVariable()));
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Aptana; What does the ">" in front of file/folder names mean in the File explorer?
I'm wondering what it means when the Aptana IDE displays a '>' at the beginning of file and folder names in the File explorer?
A:
The '>' character could mean incoming or outgoing changes (not sure about the direction) to version control systems.
Otherwise look for decorators in preferences (possibly label decorators, but not sure).
| {
"pile_set_name": "StackExchange"
} |
Q:
Android export database to internal storage
In my app I'm trying to make an option to export the database from /data/data/com.example.damian.sshconnection/databases/ssh.db to storage at /sdcard. This is my code:
private void exportDatabase(){
File data = Environment.getDataDirectory();
FileChannel source=null;
FileChannel destination=null;
String currentDBPath = "/data/"+ "com.example.damian.sshconnection" +"/databases/ssh.db";
String backupDBPath = "/sdcard/ssh.db";
File currentDB = new File(data, currentDBPath);
File backupDB = new File(backupDBPath);
try {
source = new FileInputStream(currentDB).getChannel();
destination = new FileOutputStream(backupDB).getChannel();
destination.transferFrom(source, 0, source.size());
source.close();
destination.close();
Toast.makeText(this, "DB Exported!", Toast.LENGTH_LONG).show();
} catch(IOException e) {
e.printStackTrace();
}
}
And permissions from AndroidManifest.xml
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
But I'm getting this error:
java.io.FileNotFoundException: /sdcard/ssh.db (Permission denied)
What more can I do to get access to storage from the app?
A:
For Android 6+, besides the manifest permission you should ask for the permission at runtime as well. Take a look at this article:
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
ActivityCompat.requestPermissions(getActivity(), new String[]{
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE
},666); // 666 is any number,to manually identify permission request in onRequestPermissionsResult callback
| {
"pile_set_name": "StackExchange"
} |
Q:
Determine integral by using the following identity (which is imaginary)?
I want to determine the following integral:
$$\int_{-\infty}^\infty \frac1{x^6+1} dx$$
by using the following identity:
$$\frac1{x^6+1} = \Im\left[\frac1{x^3-i}\right]$$
How in the world can I do this integral if I need to make use of the above identity?
If I plug the integral into Wolfram Alpha I get that the answer is $2\pi/3$.
A:
Hint
Assuming that you know how to derive $$\frac1{x^6+1} = \Im\left[\frac1{x^3-i}\right]$$ you could use partial fraction decomposition and obtain $$\frac1{x^3-i}=\frac{x-2 i}{3 \left(x^2-i x-1\right)}-\frac{1}{3 (x+i)}$$ which takes you to much simple integrals.
Added later
The question you asked is related to the integration. Then $$I=\int\frac{dx}{x^3-i}=\int\frac{x-2 i}{3 \left(x^2-i x-1\right)}dx-\int\frac{dx}{3 (x+i)}$$ $$I=\frac16\int\frac{2x- i}{ \left(x^2-i x-1\right)}dx-\frac12\int\frac{i}{ \left(x^2-i x-1\right)}dx-\frac13\int\frac{dx}{ (x+i)}$$ As written, these integrals are very simple. Then apply the bounds; the result is a complex number whose imaginary part gives the original integral.
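As a quick numeric sanity check of the partial fraction decomposition (plain Python, with 1j standing in for $i$):

```python
# Both sides of the partial fraction identity should agree at any complex x
# that is not a pole of 1/(x^3 - i).
def lhs(x):
    return 1 / (x**3 - 1j)

def rhs(x):
    return (x - 2j) / (3 * (x**2 - 1j*x - 1)) - 1 / (3 * (x + 1j))

for x in (0.7, 2.3 + 0.4j, -1.5 + 2j):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```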
| {
"pile_set_name": "StackExchange"
} |
Q:
How to calculate the interval time when the same button is clicked twice in JavaScript or jQuery
There is one button (JavaScript or jQuery). The first time I press the button, then after some seconds I press the same button again. How do I calculate the interval time between the two presses of the same button? Please can anyone help.
The code I tried is below (it handles mouse down and up); how can I calculate this for the same button being pressed twice?
var startTime, endTime;
$("#bu").on('mousedown', function () {
startTime = new Date().getTime();
});
$("#bu").on('mouseup', function () {
endTime = new Date().getTime();
longpress = (endTime - startTime < 500) ? false : true;
});
A:
Try this
var startTime;
$("#bu").on('click', function() {
if(startTime) {
alert( "Time difference: " + (new Date().getTime() - startTime) );
startTime = undefined;
} else {
startTime = new Date().getTime();
}
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<input type="button" id="bu" value="Click Me" />
| {
"pile_set_name": "StackExchange"
} |
Q:
django rest framework annotate with ArrayAgg and GROUP BY
I would like to use the Django Postgres function ArrayAgg, but I would like to also use it with GROUP BY. The SQL is really easy to write, but I have not been able to get it to work with the ORM or raw SQL.
SELECT field1, ARRAY_AGG(field2)
FROM table1
GROUP BY field1
with the orm I would think something like this might work
subquery = Subquery(
models.Model1.objects
.filter(field1=OuterRef('field1'))
.values('field2')
.aggregate(field3=ArrayAgg('field2'))
.values('field3')
)
queryset = queryset.annotate(field3=subquery)
But it doesn't work; it fails with an OuterRef error (I have tried many permutations).
And with a raw query I can get it to work, but then it returns all the fields, I am guessing due to the RawQuerySet; and things like defer don't work, so all fields are queried and returned.
rawqueryset = models.Model1.objects.raw(
'SELECT m.id, t.field1, t.field3 '
'FROM ('
'SELECT field1, array_agg(field2) as field3 '
'FROM app_table1 '
'GROUP BY frame_id '
') t LEFT OUTER JOIN app_table m ON m.field1 = t.frame_id',
[]
)
serializer = serializers.Model1(rawqueryset, many=True)
return Response(serializer.data)
Is there a way to do this?
A:
I was able to get it to work using raw sql
rawqueryset = models.Model1.objects.raw(
'SELECT m.id, t.field1, t.field3 '
'FROM ('
'SELECT field1, array_agg(field2) as field3 '
'FROM app_table1 '
'GROUP BY frame_id '
') t LEFT OUTER JOIN app_table m ON m.field1 = t.frame_id',
[]
)
serializer = serializers.Model1(rawqueryset, many=True, context={'request': request})
return Response(serializer.data)
What was missing was adding the request object to the passed context.
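As a side note, the plain GROUP BY + ARRAY_AGG query (without joining back to the model) can usually be expressed directly in the ORM by putting .values() before .annotate(). A sketch using the question's field names (requires the django.contrib.postgres backend):

```python
from django.contrib.postgres.aggregates import ArrayAgg

# values('field1') defines the GROUP BY; the annotation is aggregated per group
rows = Model1.objects.values('field1').annotate(field3=ArrayAgg('field2'))
# each row is a dict like {'field1': ..., 'field3': [...]}
```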
| {
"pile_set_name": "StackExchange"
} |
Q:
How to solve XMLParser.ErrorCode.invalidCharacterError in iOS Swift?
I have a homework assignment that needs to read some RSS feeds and build a user profile, etc.
My problem is that when I use XMLParser from Foundation, I encounter "The operation couldn’t be completed. (NSXMLParserErrorDomain error 9.)"
I checked the documentation and it seems that I have the invalidCharacterError. I don't think my code has a problem, since it works well for other URL feeds. So what should I do to overcome this problem?
Here is the URL: http://halley.exp.sis.pitt.edu/comet/utils/_rss.jsp?v=bookmark&user_id=3600
P.S. This feed contains CDATA, so I commented out title and description; it should still display the date, but it still shows that error. So my concern is that during parsing of the XML it encountered an invalid character and reported the error. Is there any way to fix it? I have to use this URL though.
And some related code is here:
func parseFeed(url: String, completionHandler: (([RSSItem]) -> Void)?)
{
self.parserCompletionHandler = completionHandler
let request = URLRequest(url: URL(string: url)!)
let urlSession = URLSession.shared
let task = urlSession.dataTask(with: request) { (data, response, error) in
guard let data = data else {
if let error = error {
print(error.localizedDescription)
}
return
}
/// parse our xml data
let parser = XMLParser(data: data)
parser.delegate = self
parser.parse()
}
task.resume()
}
// MARK: - XML Parser Delegate
func parser(_ parser: XMLParser, didStartElement elementName: String, namespaceURI: String?, qualifiedName qName: String?, attributes attributeDict: [String : String] = [:])
{
currentElement = elementName
if currentElement == "item" {
currentTitle = ""
currentDescription = ""
currentPubDate = ""
}
}
func parser(_ parser: XMLParser, foundCharacters string: String)
{
switch currentElement {
// case "title": currentTitle += string
// case "description" : currentDescription += string
case "pubDate" : currentPubDate += string
default: break
}
}
func parser(_ parser: XMLParser, didEndElement elementName: String, namespaceURI: String?, qualifiedName qName: String?)
{
if elementName == "item" {
let rssItem = RSSItem(title: currentTitle, description: currentDescription, pubDate: currentPubDate)
self.rssItems.append(rssItem)
}
}
func parserDidEndDocument(_ parser: XMLParser) {
parserCompletionHandler?(rssItems)
}
func parser(_ parser: XMLParser, parseErrorOccurred parseError: Error)
{
print(parseError.localizedDescription)
}
A:
I found an invalid byte 0xFC inside one of the CDATA elements in the response of the URL you have shown.
This is invalid as a UTF-8 byte in a document declaring encoding="UTF-8".
You should tell the server engineer of the URL that the XML of the RSS feed is invalid.
If you need to work with this sort of ill-formed XML, you need to convert it to valid UTF-8 data.
0xFC represents ü in ISO-LATIN-1, so you can write something like this.
func parseFeed(url: String, completionHandler: (([RSSItem]) -> Void)?)
{
self.parserCompletionHandler = completionHandler
let request = URLRequest(url: URL(string: url)!)
let urlSession = URLSession.shared
let task = urlSession.dataTask(with: request) { (data, response, error) in
guard var data = data else { //###<-- `var` here
if let error = error {
print(error.localizedDescription)
}
return
}
//### When the input `data` cannot be decoded as a UTF-8 String,
if String(data: data, encoding: .utf8) == nil {
//Interpret the data as an ISO-LATIN-1 String,
let isoLatin1 = String(data: data, encoding: .isoLatin1)!
//And re-encode it as a valid UTF-8
data = isoLatin1.data(using: .utf8)!
}
/// parse our xml data
let parser = XMLParser(data: data)
parser.delegate = self
parser.parse()
}
task.resume()
}
If you need to handle other encodings, the problem will be far more difficult, as it is hard to estimate the text encoding properly.
You may need to implement func parser(_ parser: XMLParser, foundCDATA CDATABlock: Data), but that seems to be another issue.
| {
"pile_set_name": "StackExchange"
} |
Q:
Messages from another developer that are going over my head (probably silly)
Here is a snippet of messages from another developer that I have worked with. I am very basic in PHP and need to move some of his code across into my site.
This is part of the message that he has sent:
Next, please edit config.php. You need to supply and execute the
loadercsv.php from command line like this:
$ php loadercsv.php
This will extract data from zip file and would populate database which
already must exists.
What does this really mean? I do understand this is quite a guessing game but would love to know if someone else could interpret what he is saying?
Also, below is the code from the config.php file, just incase it is needed:
<?php
// Temporary directory where data will be extracted. Must be directory, absolute path, writable.
define('DATA_DIR', '/CourseFinder/tmp');
// Location of zip file, must be readable and absoluate path.
define('ZIP_FILE', '/CourseFinder/assets/zip/ziplocation.zip');
define('DB_HOST', 'localhost');
define('DB_USER', 'removedforreasons');
define('DB_PASSWORD', 'removedforreasons');
define('DB_NAME', 'removedforreasons');
define('MAX_LINE_WIDTH', 2048);
define('CSV_SEP', ',');
define('RESULTS_PER_PAGE', 10);
/*
* List of CSV files to be loaded. These files are processed in order listed here.
* In case some file does not exists, process will break.
*
*/
/* Try to connect */
$connection = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
/* If cannot connect, simply exit. */
if (mysqli_connect_errno($connection)) {
$msg = sprintf("Cannot connect to MySQL: %s", mysqli_connect_errno($connection));
printf("ERROR: config.php - %s\n", $msg);
exit();
}
/* Function to close the connection */
function close_connection($connection_to_close)
{
mysqli_close($connection_to_close);
}
/* Register the function at shutdown. */
register_shutdown_function('close_connection', $connection);
?>
A:
This seems to be very straight-forward:
The program loadercsv.php is written to open a hard-coded zip file, process its contents, and insert the data into a database. The database itself is assumed to be set up and running.
The name of the zip file and the credentials for the database access are stored in a separate file, config.php, which is presumably being included in the first file. That is, rather than providing the configuration options on the command line or in any other fashion, you simply edit the config.php file to contain the desired data.
Finally, the program is simply run from the command line, with the command php loadercsv.php.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unity - wait until animation finished
I'm trying to start a coroutine after my animation finishes playing.
I tried it like this:
...
while (animCamera.isPlaying) {
new WaitForSeconds(1);
}
StartCoroutine(LoadAsync(sceneName, sliderLoadbar, sliderLoadbarText));
But this crashes my Unity and even my browser after a while; Unity gets stuck as soon as the while loop is entered.
How can I solve this?
A:
In the Animation tab of your animated GameObject, create an event at the last frame of the animation, attach the above script to that GameObject, and choose the method you want to run at the end of it.
Remove the coroutine and just make a simple method:
public void LoadScene()
{
LoadAsync(sceneName, sliderLoadbar, sliderLoadbarText);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Is Reading the Spec Enough?
This question is centered around Scheme but really could be applied to any LISP or programming language in general.
Background
So I recently picked up Scheme again having toyed with it once or twice before. In order to solidify my understanding of the language, I found the Revised^5 Report on the Algorithmic Language Scheme and have been reading through that along with my compiler/interpreter's (Chicken Scheme) listed extensions/implementations.
Additionally, in order to see this applied I have been actively seeking out Scheme code in open source projects and such and tried to read and understand it.
This has been sufficient so far for me understanding the syntax of Scheme and I've completed almost all of the Ninety-nine Scheme problems (see here) as well as a decent number of Project Euler problems.
Question
While so far this hasn't been an issue and my solutions closely match those provided, am I missing out on a great part of Scheme?
Or to phrase my question more generally: is reading the specification of a language, along with well-written code in that language, sufficient to learn from? Or are other resources (books, lectures, videos, blogs, etc.) necessary for the learning process as well?
A:
Generally, programming language specifications are not very good tutorials. They are worded so as to be prescriptive rather than descriptive, although I think the best specifications identify discrete, separable requirements in simple, disjoint shall statements that are pretty easy to evaluate, and are further documented with a description and an example.
Bjarne Stroustrup weighs in on just this question about the recent C++ standards work in his C++ FAQ.
Be warned that the standard is not a tutorial; even expert programmers
will do better learning about C++ and new C++ features from a
textbook.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I get the list of all fields (API name, label, type) of any sObject?
I'm trying to build a dynamic list (table) of records by selecting the SObject and its fields, so I need to get all the fields with their API name, label, and type. But I can't get the label and type. (I tried the DescribeSObjectResult class methods to get the label, but they didn't help to get the label dynamically.)
I tried it with the following code.
System.debug('Selection-----'+selectedObject);
Map<String, Schema.SObjectType> gd = Schema.getGlobalDescribe();
Schema.SObjectType ctype = gd.get(selectedObject);
Map<String, Schema.SobjectField> fmap = ctype.getDescribe().fields.getMap();
List<FieldWrap> strList = new List<FieldWrap>();
for(String s: fmap.keySet()) {
FieldWrap wmp = new FieldWrap();
wmp.name = s;
wmp.label = String.valueOf(fmap.get(s));
strList.add(wmp);
}
Where FieldWrap is my wrapper class with name and label as String fields.
I'm getting the list of API names of the fields, but not the label and type. Is there a way to get the label and type of a field? If so, please help me.
A:
//It provides to get the object fields label.
String fieldLabel = fieldMap.get(fieldName).getDescribe().getLabel();
//It provides to get the object fields data type.
Schema.DisplayType fielddataType = fieldMap.get(fieldName).getDescribe().getType();
Your code will look like this..
Map<String, Schema.SObjectType> gd = Schema.getGlobalDescribe();
Schema.SObjectType ctype = gd.get(selectedObject);
Map<String, Schema.SobjectField> fmap = ctype.getDescribe().fields.getMap();
List<FieldWrap> strList = new List<FieldWrap>();
for(String fieldName: fmap.keySet()) {
FieldWrap wmp = new FieldWrap();
wmp.name = fieldName;
wmp.label = fmap.get(fieldName).getDescribe().getLabel();
strList.add(wmp);
}
Q:
Can’t figure out how to remove cartridge of single handle tub faucet
We have a leaking bathtub faucet and need to replace the cartridge, but we can't figure out how to remove the casing (escutcheon?) around it. I don't know the brand or model number. Everything is stuck from the hard water. We've already replaced the faucet but that didn't fix it. Any help is much appreciated!!
A:
The stem is held in place by a u-shaped piece of metal. If you scrape through enough crud, you'll find the bottom of it on the outside of the metal cylinder. Pull it out with a pair of pliers. https://www.youtube.com/watch?v=ZkZad_sHQkQ about 0:50 in is what I'm talking about.
Q:
Share functions across colaboratory files
I'm sharing a colaboratory file with my colleagues and we are having fun with it. But it's getting bigger and bigger, so we want to offload some of the functions to another colaboratory file. How can we load one colaboratory file into another?
A:
There's no way to do this right now, unfortunately: you'll need to move the code into a .py file that you load (say by cloning from github).
Q:
Ajax success function location.reload for individual php file
I have a feed page on my website (very similar to Facebook) that enables me to like and comment on posts. I'm using Ajax to update the posts; however, after a like, rather than each individual post reloading, the whole feed does (not the whole page itself — it just returns to the top of the feed).
I believe this is because each post uses a file named feedLikes.php, which is reloaded for every post rather than just the one specific post. I'm not sure how to make only that one post reload. Below is my code.
From feed.php below, you can see I am searching for all the posts within the database. Each one of these posts is given a feedID like so:
$findShouts = $pdo->prepare('SELECT * FROM feed WHERE name IN (SELECT scoutingUsername FROM scout WHERE scoutedUsername =? OR scoutingUsername =?) ORDER BY timestamp DESC');
//execute query and variables
$findShouts->execute([$username, $username]);
if ($findShouts->rowCount() > 0)
{
//get the shouts for each scout
while($row = $findShouts->fetch(PDO::FETCH_ASSOC)){
$shoutID[] = $row['id'];
$shoutUsername[] = $row["username"];
$shoutName[] = $row["name"];
$shoutText[] = $row["text"];
$shoutTimestamp[] = $row["timestamp"];
}
$shoutCount = count($shoutUsername);
for($indexShout=0; $indexShout < $shoutCount; $indexShout++) {
print'
<div class=feedNewShout>
<div class=shoutInformation>
<div class=shoutName>
<p>'. $shoutName[$indexShout] .'</p>
</div>
<div class=shoutTimestamp>
<p>'. timeElapsed("$shoutTimestamp[$indexShout]", 2) .'</p>
</div>
<div class=shoutText>
<p>'. $shoutText[$indexShout] .'</p>
</div>
<input type="hidden" name="feedID" class="feedID" value="'. $shoutID[$indexShout] .'">
<div class=likesAndComments>
<div class=likesAjax data-id="'.$shoutID[$indexShout] .'">
</div>
<div class=commentsAjax data-id="'.$shoutID[$indexShout] .'">
</div>
<div class=deleteShoutAjax data-id="'.$shoutID[$indexShout] .'">
</div>
</div>
</div>
</div>';
}
unset($shoutID);
unset($shoutUsername);
unset($shoutName);
unset($shoutText);
unset($shoutTimestamp);
}
From this I use a jQuery Ajax call in feedLikesAjax.js to find each individual feedID needed:
$(document).ready(function()
{
$(".likesAjax").each(function() {
var feedID = $(this).attr("data-id");
$.ajax({
url: "feedLikes.php",
cache: false,
type: "POST",
data: {feedID: feedID},
dataType: "html",
success: function(html){
$(".likesAjax[data-id='"+ feedID +"']").empty();
$(".likesAjax[data-id='"+ feedID +"']").append(html);
}
});
});
});
I use this information and pass it to feedLikes.php:
if (isset($_POST['feedID']))
{
$feedID = ($_POST['feedID']);
$findHasUserLiked = $pdo->prepare('SELECT username FROM feedLikes WHERE feedID =? and username=?');
//execute query and variables
$findHasUserLiked->execute([$feedID, $username]);
if ($findHasUserLiked->rowCount() > 0)
{
$hasUserLiked = $findHasUserLiked->fetchColumn();
echo<<<_END
<form action="feedLikes.php" id="unlikePostForm$feedID" method="post">
<button type="submit" class="unLikeButton"></button>
<input type="hidden" name="feedIDForUnlike" class="feedIDForUnlike$feedID" value="$feedID">
</form>
_END;
?>
<script type="text/javascript">
$(document).ready(function()
{
$('#unlikePostForm<?php echo $feedID ?>').on('submit', function (e) {
e.preventDefault();
var feedIDUnlike = $(".feedIDForUnlike<?php echo $feedID ?>").val();
$.ajax({
url: "feedLikesClicked.php",
cache: false,
type: "POST",
data: {feedIDUnlike: feedIDUnlike},
dataType: "html",
success: function(html){
location.reload();
}
});
});
});
</script>
<?php
}
else
{
echo<<<_END
<form action="feedLikes.php" id="likePostForm$feedID" method="post">
<button type="submit" class="likeButton"></button>
<input type="hidden" name="feedIDForLike" class="feedIDForLike$feedID" value="$feedID">
</form>
_END;
?>
<script type="text/javascript">
$(document).ready(function()
{
$('#likePostForm<?php echo $feedID ?>').on('submit', function (e) {
e.preventDefault();
var feedIDLike = $(".feedIDForLike<?php echo $feedID ?>").val();
$.ajax({
url: "feedLikesClicked.php",
cache: false,
type: "POST",
data: {feedIDLike: feedIDLike},
dataType: "html",
success: function(html){
location.reload();
}
});
});
});
</script>
<?php
}
$likesNumber = $pdo->prepare('SELECT count(*) FROM feedLikes WHERE feedID =?');
//execute query and variables
$likesNumber->execute([$feedID]);
$numberOfLikes = $likesNumber->fetchColumn();
print'
<div class=numberOfLikes data-id="'.$feedID .'">
<p>'. $numberOfLikes .'</p>
</div>';
}
?>
Like I said, it all works perfectly apart from the reloading. Now I know the location.reload that is used on success is actually reloading every feedLikes.php for every post. But I'm really stuck on how to reload just the feedLikes.php output that is needed for that specific post. I thought this would be really simple, and maybe it is, but I can't find it anywhere.
Really grateful for any help. Thank you
A:
There are lots of ways to do this. To achieve what you are actually asking you need to modify your jQuery success function to target only the div element for the post you are interested in. Either by adding a unique ID to the HTML, or using a selector based on the class and data-id attributes to identify that specific post.
Then your PHP needs to return only the HTML you want to modify, and your jQuery success function inserts that into the div for the relevant post.
Having said that, for what you are trying to do is there really any need to reload the post? You could have your PHP script just return the new number of likes and whether or not current user has liked the post and then update those values in your success call.
You could optimise your code a lot. The feedLikesAjax.js script calls feedLikes.php once the page is loaded, creating a new Ajax request for each post. You could combine the code from feedLikes.php into feed.php and have the server output the page with all the data immediately, getting rid of feedLikesAjax.js altogether. You could also replace the like and unlike forms with a single button for each post; right now you are setting an event handler for each form individually, but if you give them all a common class you can use a single event handler.
EDIT
To answer your comment:
You don't need another query in your while statement. You can expand your first query using a left join to have it also include data from the feedLikes table in the returned results, or you can add another subquery to your original query to add another column to your returned results. Something along the lines of this should give you a userLiked column with a value of 1 or 0 for liked/not liked. You might have to edit it a bit to get it working for you; I'm not an SQL guru by any means.
SELECT *, (SELECT COUNT(L.username) FROM feedLikes L WHERE L.feedID = F.id AND L.username = F.username) AS userLiked
FROM feed F
WHERE name IN (SELECT scoutingUsername FROM scout WHERE scoutedUsername =? OR scoutingUsername =?)
ORDER BY timestamp DESC
Q:
what is meant "seg fs" in the linux kernel bootsect.s file
When reading early Linux kernel code, I encountered something in boot/bootsect.s that was difficult to understand: "seg fs". What is it doing? And if I want to convert it to AT&T assembly syntax, how should I do that?
go: mov ax,cs
mov dx,#0x4000-12 ! 0x4000 is arbitrary value >= length of
! bootsect + length of setup + room for stack
! 12 is disk parm size
! bde - changed 0xff00 to 0x4000 to use debugger at 0x6400 up (bde). We
! wouldn't have to worry about this if we checked the top of memory. Also
! my BIOS can be configured to put the wini drive tables in high memory
! instead of in the vector table. The old stack might have clobbered the
! drive table.
mov ds,ax
mov es,ax
mov ss,ax ! put stack at INITSEG:0x4000-12.
mov sp,dx
/*
* Many BIOS's default disk parameter tables will not
* recognize multi-sector reads beyond the maximum sector number
* specified in the default diskette parameter tables - this may
* mean 7 sectors in some cases.
*
* Since single sector reads are slow and out of the question,
* we must take care of this by creating new parameter tables
* (for the first disk) in RAM. We will set the maximum sector
* count to 18 - the most we will encounter on an HD 1.44.
*
* High doesn't hurt. Low does.
*
* Segments are as follows: ds=es=ss=cs - INITSEG,
* fs = 0, gs = parameter table segment
*/
push #0
pop fs
mov bx,#0x78 ! fs:bx is parameter table address
seg fs
lgs si,(bx) ! gs:si is source
mov di,dx ! es:di is destination
mov cx,#6 ! copy 12 bytes
cld
rep
seg gs
movsw
mov di,dx
movb 4(di),*18 ! patch sector count
seg fs
mov (bx),di
seg fs
mov 2(bx),es
mov ax,cs
mov fs,ax
mov gs,ax
xor ah,ah ! reset FDC
xor dl,dl
int 0x13
A:
I assume it assembles as a fs prefix for the next instruction. That would match the comments, and is the only thing that makes sense.
Should be easy enough to build it and disassemble (into AT&T syntax if you want).
In AT&T syntax, you can just use fs as a prefix to other mnemonics.
fs movsw
assembles to this (in 64-bit mode; 16-bit mode would skip the 66 operand-size prefix):
0000000000000000 <.text>:
0: 64 66 a5 movsw %fs:(%rsi),%es:(%rdi)
Q:
What is this comic featuring a man whose eyes see two different versions of the world?
I would have read it in the mid-90s, but I'm not sure if it was new at that time. The main character of the series sees the world normally through one eye, but the other shows a demonic version of things. He wears an eye patch to switch back and forth.
In the issue I read, he goes to visit his mother(I think) in a hospital. While there, a priest joins him in an elevator and pushes the button for the basement. Looking at the priest with the "demon" eye reveals him as a demon. The priest-demon is killed with a shovel of hot coals to the face.
A:
I think you are referring to Ectokid, part of the Razorline series, an imprint of Marvel.
The series only ran from Sep 1993 to May 1994. This matches your timeline.
The star is Dexter Mungo, whose mother was human and whose father was a ghost. His right eye sees into the normal world, but his left eye sees into the Ectosphere. One of the side effects of this ability is that he can see through the disguises of demons.
It has been a while since I have read the comic, but your events (visit mother in hospital, finding that priest is a demon, killing the demon with hot coals) occur in the first issue of the series.
Q:
How to create a Folder Dialog?
So I need to allow the user of my script to choose a directory that contains images, and then run my script on each image. I already have my script made here. I looked at this question already, but that is about exporting, not importing.
It should look like this:
And should allow the user to choose the directory.
Thanks!
A:
For directories or file paths on any panel, the key is to use a StringProperty. To get a File Dialog instead of a Folder Dialog, set its subtype from DIR_PATH to FILE_PATH.
path = StringProperty(
name = "",
description="Choose a directory:",
default="",
maxlen=1024,
subtype='DIR_PATH')
As of Blender 2.8x properties should be assigned using a single colon :
path : StringProperty(
name = "",
description="Choose a directory:",
...
subtype='DIR_PATH')
The following example adds a Folder Dialog to the Tool Shelf and prints the path to the console.
Blender 2.8x Update
import bpy
from bpy.props import (StringProperty,
PointerProperty,
)
from bpy.types import (Panel,
Operator,
AddonPreferences,
PropertyGroup,
)
# ------------------------------------------------------------------------
# Scene Properties
# ------------------------------------------------------------------------
class MyProperties(PropertyGroup):
path : StringProperty(
name="",
description="Path to Directory",
default="",
maxlen=1024,
subtype='DIR_PATH')
# ------------------------------------------------------------------------
# Panel in Object Mode
# ------------------------------------------------------------------------
class OBJECT_PT_CustomPanel(Panel):
bl_idname = "OBJECT_PT_my_panel"
bl_label = "My Panel"
bl_space_type = "VIEW_3D"
bl_region_type = "UI"
bl_category = "Tools"
bl_context = "objectmode"
def draw(self, context):
layout = self.layout
scn = context.scene
col = layout.column(align=True)
col.prop(scn.my_tool, "path", text="")
# print the path to the console
print (scn.my_tool.path)
# ------------------------------------------------------------------------
# Registration
# ------------------------------------------------------------------------
classes = (
MyProperties,
OBJECT_PT_CustomPanel
)
def register():
from bpy.utils import register_class
for cls in classes:
register_class(cls)
bpy.types.Scene.my_tool = PointerProperty(type=MyProperties)
def unregister():
from bpy.utils import unregister_class
for cls in reversed(classes):
unregister_class(cls)
del bpy.types.Scene.my_tool
if __name__ == "__main__":
register()
Blender 2.7x
import bpy
from bpy.props import (StringProperty,
PointerProperty,
)
from bpy.types import (Panel,
Operator,
AddonPreferences,
PropertyGroup,
)
# ------------------------------------------------------------------------
# UI (settings Class, Panel in Object Mode)
# ------------------------------------------------------------------------
class MySettings(PropertyGroup):
path = StringProperty(
name="",
description="Path to Directory",
default="",
maxlen=1024,
subtype='DIR_PATH')
class OBJECT_PT_my_panel(Panel):
bl_idname = "OBJECT_PT_my_panel"
bl_label = "My Tool"
bl_space_type = "VIEW_3D"
bl_region_type = "TOOLS"
bl_category = "Tools"
bl_context = "objectmode"
def draw(self, context):
layout = self.layout
scn = context.scene
col = layout.column(align=True)
col.prop(scn.my_tool, "path", text="")
# print the path to the console
print (scn.my_tool.path)
# ------------------------------------------------------------------------
# Registration
# ------------------------------------------------------------------------
def register():
bpy.utils.register_module(__name__)
bpy.types.Scene.my_tool = PointerProperty(type=MySettings)
def unregister():
bpy.utils.unregister_module(__name__)
del bpy.types.Scene.my_tool
if __name__ == "__main__":
register()
To collect all images in a folder, use os.listdir() to get a list of the files in the folder, and make sure the file type is correct. The simplest way is a list comprehension:
import os
# path to the folder
path = '/home/user/Desktop/'
# collect all OpenExr files
exr_list = [f for f in os.listdir(path) if f.endswith('.exr')]
# iterate through the list
for i in exr_list:
print(os.path.join(path,i))
For more details about property appearance, as well as how to create custom interfaces, have a look at: How to create a custom UI?
Q:
What are new features in linq in c#4.0?
I am a C# developer and I like using LINQ very much. I'd like to know the new LINQ features in C# 4.0. I already know about the Zip method. Are there any other new methods like that?
A:
There is the new Zip() extension method (http://msdn.microsoft.com/en-us/library/dd267698.aspx) and the new Entity Framework 4.0 (http://msdn.microsoft.com/en-us/data/aa937723).
While it isn't directly LINQ, they created the Tuple class tree (http://msdn.microsoft.com/en-us/library/system.tuple.aspx) and expanded Action<T1, T2...> and Func<T1, T2...> up to 10 parameters. I'm not sure whether covariance and contravariance should be listed here (IEnumerable<T> is covariant, and it is one of the basic building blocks of LINQ).
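Since the question singles out Zip: it pairs the two sequences element by element and stops at the end of the shorter one — the same semantics as Python's built-in zip, which makes for a quick illustration:

```python
# Pairing two sequences positionally, as C#'s Enumerable.Zip does.
# Like C#'s Zip, Python's zip stops when the shorter sequence is exhausted.
nums = [1, 2, 3, 4]
words = ["one", "two", "three"]

pairs = list(zip(nums, words))
print(pairs)  # [(1, 'one'), (2, 'two'), (3, 'three')]

# Roughly the C# call nums.Zip(words, (n, w) => $"{n}-{w}"):
joined = [f"{n}-{w}" for n, w in zip(nums, words)]
print(joined)  # ['1-one', '2-two', '3-three']
```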
Q:
Django session authentication with Angular 2
I've been looking all around for session-based authentication with Angular 2.
I'm building an application that has Django on the backend and Angular 2 on the frontend. To keep the process simple, I'm trying to implement Django session authentication.
// Angular 2 authentication service
import { Injectable } from "@angular/core";
import { Headers, Http, Response } from "@angular/http";
import "rxjs/add/operator/toPromise";
import 'rxjs/add/operator/map'
import { AppSettings } from "../../app.settings";
@Injectable()
export class UserAuthService {
private headers = new Headers({'Content-Type': 'application/json'});
private loginUrl = `${AppSettings.BACKEND_URL}` + '/api/v1/users/login/';
constructor(
private http: Http
) { }
login(username, password) {
let data = {
username: username,
password: password
};
return this.http.post(this.loginUrl, data, this.headers)
.map((response: Response) => response.json());
}
}
# Django Login view
def login(self, request):
username = request.data['username']
password = request.data['password']
user = authenticate(username=username, password=password)
if user is not None:
login(request, user)
serializer = self.serializer_class(user)
return Response(serializer.data, status=status.HTTP_200_OK)
raise AuthenticationFailed
I'm successfully calling backend API and my login view returns the successful response.
Also, request.user gets updated after the login, but when I try to call the other APIs from Angular or browse the Django REST API directly, the user is not logged in.
A:
The answer to this question is to send the CSRF token in the X-CSRFToken header, because Django checks that header on unsafe requests. (Note also withCredentials: true below, which makes the browser send the session cookie with each request.)
I don't exactly remember where I saw this, but I achieved it by using angular2-cookie and writing a custom request options service like this:
// Custom request options service
import { CookieService } from "angular2-cookie/services/cookies.service";
import { Headers, RequestOptions } from "@angular/http";
import { Injectable } from "@angular/core";
@Injectable()
export class CustomRequestOptionsService {
constructor(
private cookieService: CookieService
) { }
defaultRequestOptions() {
return new RequestOptions({
headers: new Headers({
'Content-Type': 'application/json',
}),
withCredentials: true
});
}
authorizationRequestOptions() {
return new RequestOptions({
headers: new Headers({
'Content-Type': 'application/json',
'X-CSRFToken': this.cookieService.get('csrftoken')
}),
withCredentials: true
});
}
}
Then, in the service where you hit secured APIs, use it like this:
// Officer service
import { Http, Response} from "@angular/http";
import { Injectable } from "@angular/core";
import "rxjs/add/operator/map";
// Services
import { CustomRequestOptionsService } from "../shared/custom-request-options.service";
@Injectable()
export class OfficerService {
private officerDashboardUrl = `http://${process.env.API_URL}` + '/api/v1/officers/detail';
constructor(
private http: Http,
private customRequestOptionService: CustomRequestOptionsService
) { }
getOfficer(officerId: number) {
return this.http.get(`${this.officerDashboardUrl}/${officerId}/`,
this.customRequestOptionService.authorizationRequestOptions())
.toPromise()
.then((response: Response) => {
return response.json();
})
.catch((error: any) => {
return Promise.reject(error.message || error)
});
}
}
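Stripped of the Angular specifics, the fix is just: read the csrftoken cookie Django set and echo its value back in an X-CSRFToken request header (those are Django's default cookie and header names). A minimal, illustrative Python sketch of that step, assuming you have the raw Cookie header string:

```python
from http.cookies import SimpleCookie

def csrf_headers(cookie_header: str) -> dict:
    """Build the headers Django's CSRF check expects, given a raw
    Cookie header string that contains the 'csrftoken' cookie."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    token = jar["csrftoken"].value
    return {
        "Content-Type": "application/json",
        "X-CSRFToken": token,  # Django compares this header to the cookie
    }

headers = csrf_headers("csrftoken=abc123; sessionid=xyz789")
print(headers["X-CSRFToken"])  # abc123
```

The session cookie itself still has to travel with the request, which is what withCredentials: true does in the Angular service above.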
Q:
Access denied error when building solution in Visual Studio 2005
I get the following error in Visual Studio 2005 when doing a build:
Error 9 Cannot register assembly
"E:\CSharp\project\Some.Assembly.dll"
- access denied. Access is denied. (Exception from HRESULT: 0x80070005
(E_ACCESSDENIED)) project
It happens only intermittently and does go away if I restart the IDE; however, this is incredibly annoying and I would like to put a stop to it permanently, if I can. I've checked the assembly itself, and it is not set to read-only, so I've no idea why Visual Studio is getting a lock on it. I am working in Debug mode.
I've had a look around Google, but can't seem to find anything other than "restart VS". Does anyone have any suggestions as to how I can resolve this annoying problem?
A:
It sounds like you have a DLL that gets locked every now and then, preventing VS from overwriting it. Have you tried using tools like Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) or Unlocker (http://www.emptyloop.com/unlocker/) to see what is locking the DLL? Unlocker in particular has saved me many a time.
As noted in the comments below (Thanks Jeff), you can also kill an individual lock from within Process Explorer.
Q:
PageMethod and viewstate problem
I was trying to access ViewState from a page method. I know that a page method invokes a static method in the code-behind, which is why ViewState can't be accessed from it, but is there any way around this? I don't want to use Session instead of ViewState.
Thanks
A:
I think you're going about this the wrong way.
ViewState is a way of storing values so a control or page can hold data across web requests.
Have you considered using the ASP.NET callback API? Maybe that could be your solution.
Check this article:
http://msdn.microsoft.com/en-us/library/ms178208.aspx
Q:
Clicking an image with no id in WebDriver using PHP
I have a form with an image used as the submit button. The image has no id nor class attribute, and the form is submitted strictly with javascript. I am trying to use WebDriver (with PHP) to submit this little form. I am using the Firefox driver and selenium standalone server v 2.30.
The JavaScript function that exists on the page being tested is sendForm().
I have tried to execute javascript via webdriver with no luck:
<?php
$js = "sendForm()";
$driver->execute(array(
'script' => $js,
'args' => array(),
));
?>
The image button looks like this:
<form id="myForm" action="#">
<!-- input fields go here -->
<img src="/images/submit_button.png" alt="Go" onclick="sendForm();return false;">
</form>
So the two ways to submit this form would include 1) getting webdriver to execute sendForm() on its own, or 2) clicking the image which has no attributes to select.
Can I execute the javascript function some other way than what I have tried?
The form DOES have an id, so can I select the image via the form id and just 'click' it ?
A:
I would suggest you use XPath. The below should work:
//form[@id='myForm']//img
This assumes there is only ever one img as a direct child of that particular form though, so bear that in mind.
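If you want to sanity-check that selector logic outside the browser, here is a rough Python sketch using the standard library's ElementTree. It supports only a subset of XPath, so the single expression is split into two find() steps, and the markup is made well-formed by self-closing the img tag:

```python
import xml.etree.ElementTree as ET

# A well-formed version of the form from the question, wrapped in a root
# element (with a second form added) so we can search for the form by id.
page = """
<body>
  <form id="myForm" action="#">
    <img src="/images/submit_button.png" alt="Go" />
  </form>
  <form id="otherForm">
    <img src="/images/other.png" alt="Other" />
  </form>
</body>
"""

root = ET.fromstring(page)
# Equivalent of //form[@id='myForm']//img, done in two steps:
form = root.find(".//form[@id='myForm']")
img = form.find(".//img")
print(img.get("alt"))  # Go
```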
Q:
NSClassFromString returns nil
Why does NSClassFromString return nil? According to the documentation, it should return the class object for the named class.
How should I rectify this problem? I need to instantiate a class from a string and call a method on the resulting instance.
This is how my code looks like:
id myclass = [[NSClassFromString(@"Class_from_String") alloc] init];
[myclass method_from_class];
But the method_from_class method is not being called — control never goes into it — and my code compiles without errors. Any idea how to solve this in Objective-C?
A:
If you are trying to instantiate a class from a static library, you must add the "-ObjC" flag to the "Other Linker Flags" build setting.
A:
The Documentation for the function says:
Return Value
The class object named by aClassName, or nil if no class by that name is currently loaded. If aClassName is nil, returns nil.
An example of how this should be properly used is as follows:
Class dictionaryClass = NSClassFromString(@"NSMutableDictionary");
id object = [[dictionaryClass alloc] init];
[object setObject:@"Foo" forKey:@"Bar"];
A:
It is possible that your class is not getting linked if this is the only reference to it.
I had a factory method to instantiate various types of subclass. The factory had a switch statement that went to the appropriate subclass and alloc'd and init'ed it. I noticed that all of the alloc/init statements were exactly the same, except for the name of the class. So I was able to eliminate the entire switch block using the NSClassFromString() function.
I ran into the same problem - the return was nil. This was because the class was not used elsewhere in the program, so it wasn't getting linked, so it could not be found at runtime.
You can solve this by including the following statement:
[MyClass class];
That defeats the whole purpose of what I was trying to accomplish, but it might be all you need.
Q:
How to typeset times in regular text
How should I enter in a time of day in latex in order to have it typeset properly, if I am using the amsmath package? I am using pdflatex.
Say I want to write "12:00". In math mode, this gives a space after the colon, as below:
%MWE
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Should it be $4\colon 00$, $5:00$, or 6$\colon$00? They all have a space.
\end{document}
Without the package, it looks fine:
%MWE
\documentclass{article}
\begin{document}
Without the amsmath package, 6$\colon$00 looks correct.
\end{document}
A:
Simple
I’d write it as plain text
Meeting at 5:00
Or if you need it in math mode with \text
$\text{5:00} + \text{1:00} = \text{6:00}$
But I can’t see a reason for that since times arn’t math and the colon can be misread as divide: 5:30 = 5/30 = \frac{5}{30}.
Advanced
You could even define a command to give times a global appearance that can be changed later easily:
\newcommand\Time[2]{%
\text{#1:#2}% <------------------ change format here
}
and then use it like \Time{5}{00} in text or math mode.
To parse an input like \Time{5:00} you’ll need something like this:
\makeatletter
\def\@time@parse#1:#2\end@time{%
\text{#1\,h #2\,min}% <------------------ change format here
}
\newcommand{\Time}[1]{%
\@time@parse#1\end@time
}
\makeatother
Where \@time@parse does the work and is called from within \Time
full MWE
\documentclass{article}
\usepackage{amsmath}
\newcommand{\TimeI}[2]{%
\text{#1:#2}%
}
\makeatletter
\def\@time@parse#1:#2\end@time{%
\text{#1\,h #2\,min}%
}
\newcommand{\TimeII}[1]{%
\@time@parse#1\end@time
}
\makeatother
\begin{document}
Meeting at \TimeI{5}{00}
Meeting at \TimeII{5:00}
\end{document}
Q:
How do I multiple select in mysql using one query?
I have an sql statement like this:
Select Id,Name From table where id = var
Select Id,Name From table where id = var
Select Id,Name From table where id = var
I was able to do a multiple select in one query, but it only allows one column per select.
SELECT (
SELECT COUNT(*)
FROM user_table
) AS tot_user,
(
SELECT COUNT(*)
FROM cat_table
) AS tot_cat,
(
SELECT COUNT(*)
FROM course_table
) AS tot_course
A:
Use a JOIN if the tables relate to each other; otherwise, use a UNION.
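As a concrete sketch of the UNION approach — here in Python with the built-in sqlite3 module standing in for MySQL, with made-up tables, but the SQL shape is the same — each SELECT contributes whole rows, so you are no longer limited to one column per subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_table (id INTEGER, name TEXT);
    CREATE TABLE cat_table  (id INTEGER, name TEXT);
    INSERT INTO user_table VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO cat_table  VALUES (7, 'felix');
""")

# One query, several SELECTs combined with UNION ALL,
# and multiple columns per returned row.
rows = conn.execute("""
    SELECT 'user' AS src, id, name FROM user_table WHERE id = ?
    UNION ALL
    SELECT 'cat'  AS src, id, name FROM cat_table  WHERE id = ?
""", (1, 7)).fetchall()

# Two rows come back, one from each table.
print(rows)
```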
Q:
Get the new co-ordinates of new Rectangles after rotating at center.
Let's say I have these rectangles.
All the rectangles are the same shape. I know the coordinates of all the inner rectangles. I know the width and height of the outer rectangle, which contains all the inner rectangles, so I know the center as well. I need to rotate all of the inner rectangles about the center. After rotating, I need the coordinates of all the inner rectangles.
A:
Simply transform the coordinates into coordinates relative to the center, and then apply the rotation matrix to each of the coordinates.
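Sketched in Python (a minimal illustration — the function names are mine): translate each corner so the center becomes the origin, apply the 2-D rotation matrix, then translate back. Note that after rotation the inner rectangles are generally no longer axis-aligned, so you keep all four corners rather than an origin plus width/height.

```python
import math

def rotate_point(x, y, cx, cy, angle_deg):
    """Rotate (x, y) about the center (cx, cy) by angle_deg, counter-clockwise."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy                 # coordinates relative to the center
    rx = dx * math.cos(a) - dy * math.sin(a)  # rotation matrix, row 1
    ry = dx * math.sin(a) + dy * math.cos(a)  # rotation matrix, row 2
    return (cx + rx, cy + ry)               # translate back

def rotate_rect(corners, center, angle_deg):
    """corners: the four (x, y) corner points of one inner rectangle."""
    cx, cy = center
    return [rotate_point(x, y, cx, cy, angle_deg) for x, y in corners]

# Rectangle with corners (2,1)-(4,1)-(4,2)-(2,2), rotated 90° about (3, 1.5):
new_corners = rotate_rect([(2, 1), (4, 1), (4, 2), (2, 2)], (3, 1.5), 90)
print(new_corners)
```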
Q:
How is Firestore's pricing for array, maps and subcollections?
As I was studying Firestore's pricing and looking for ways to minimize costs in my application, I couldn't find an answer to this question:
Does reading from subcollections, arrays and maps count as 1 read per item or can I read the whole array for a single read?
I know a better model can do the trick, but I'd like to have this clarified before that.
A:
The billing applies to each document read, no matter where it lives, no matter how it was queried. It doesn't matter what the document contains. For mobile and web clients, the read will always return the entire contents of the document. There is no separate billing for the individual fields inside a document.
Q:
Incorrect End Date for event
I've implemented a script in my Google Sheet that grabs Google Calendar info and displays it in various tabs of the sheet. However, the end date of each event as displayed in the sheet is one day LONGER than what is displayed in the calendar itself. Code below. Through logging, I can't figure out why it's happening. Does it have something to do with the formatting of the date? Any help would be appreciated!
function populateAllTabs() {
var id = "[MY CAL ID HERE]"; // id is currently set to bookings calendar. NB: Takes a string!
var cal = CalendarApp.getCalendarById(id);
var startPeriod = new Date('January 1, 2020');
startPeriod.setHours(0, 0, 0, 0);
var endPeriod = new Date(startPeriod);
endPeriod.setDate(endPeriod.getDate() + 365); // looks for all events in the range one year
var ss = SpreadsheetApp.getActive();
for(var n in ss.getSheets()){// loop over all tabs in the spreadsheet
var sheet = ss.getSheets()[n];// look at every sheet in spreadsheet
var name = sheet.getName();//get name
if(name != 'List'){
var gig = sheet.getRange(1,1);
var gigName = gig.getValue();
var events = cal.getEvents(startPeriod, endPeriod, {search:gigName});
// find the title of each event in the calendar
var eventTitles = [];
for (var i = 0; i < events.length; i++) {
eventTitles.push([events[i].getTitle()]);
}
// find the start date of each event in the calendar
var starttime = [];
for (var i = 0; i < events.length; i++) {
starttime.push([Utilities.formatDate(events[i].getStartTime(), "GMT", "MM/dd/yy")]);
}
// find the end date of each event in the calendar
var endtime = [];
for (var i = 0; i < events.length; i++) {
endtime.push([Utilities.formatDate(events[i].getEndTime(), "GMT", "MM/dd/yy")]);
}
var cell = sheet.getRange("B3");
cell.setValue(starttime + ' - ' + endtime);
}
}
}
A:
This works for me — no problem with the dates. Note that it formats the dates in the script's time zone (Session.getScriptTimeZone()) instead of "GMT"; formatting in the wrong time zone is a common way to end up a day off.
function populateAllTabs() {
var cal = CalendarApp.getCalendarById('id');
var startyear=2019;
var startPeriod = new Date(startyear,0,1);
var endPeriod = new Date(startyear+1,1,1);
var ss=SpreadsheetApp.getActive();
var shts=ss.getSheets();
for(var n=0;n<shts.length;n++){
var sheet=shts[n];
var name=sheet.getName();
if(name!='List'){
var gigName = sheet.getRange(1,1).getValue();
var ev=cal.getEvents(startPeriod, endPeriod, {search:gigName});
var gigs=[]
for (var i=0;i< ev.length;i++) {
gigs.push([ev[i].getTitle(),Utilities.formatDate(new Date(ev[i].getStartTime()),Session.getScriptTimeZone(),"MM/dd/yy"),Utilities.formatDate(new Date(ev[i].getEndTime()),Session.getScriptTimeZone(),"MM/dd/yy")]);
}
if(gigs) {
sheet.getRange(3,2,gigs.length,gigs[0].length).setValues(gigs);
}
}
}
}
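If these are all-day events, one likely contributor (an assumption — the question doesn't say): the Calendar API reports an all-day event's end as an exclusive bound, i.e. midnight at the start of the following day, so formatting the raw end timestamp shows one extra day, and formatting in "GMT" rather than the script's time zone can shift the date further. The exclusive-end effect, illustrated with plain Python datetimes:

```python
from datetime import datetime, timedelta

# An all-day event spanning Jan 1–2: the API-style exclusive end is
# midnight at the start of Jan 3.
start = datetime(2020, 1, 1)
end_exclusive = datetime(2020, 1, 3)          # what an exclusive end reports

naive_display = end_exclusive.strftime("%m/%d/%y")
print(naive_display)                          # 01/03/20 — one day too long

# Subtract a day (or a minute) before formatting the display end date:
display_end = (end_exclusive - timedelta(days=1)).strftime("%m/%d/%y")
print(display_end)                            # 01/02/20 — matches the calendar
```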
Q:
Pass raw selection statement to laravel Eloquent while creating a Paginator instance
Pagination in Laravel is super easy. I simply go:
$posts = Post::where( .... );
$posts->paginate(20, ['ID', 'title', 'body'], ....);
I am trying to replace the ID, title, body array with: ID, title, LEFT(body, 200). But Laravel doesn't accept that! I found out that I should use this instead:
connection()->raw('ID, title, LEFT(body, 200) AS body');
For this to work, but the paginate method only accepts an array. Is there a way around it?
A:
Do this:
$posts = Post::where( .... );
$posts->paginate(20, ['ID', 'title', \DB::raw('LEFT(body, 200) AS body')], ....);
Q:
CALayer cornerRadius + masksToBounds 10.11 glitch?
I have a 30x30 view which is rounded:
CALayer * layer = self.layer;
layer.backgroundColor = [NSColor redColor].CGColor;
layer.cornerRadius = 10.0f;
layer.masksToBounds = YES;
So far, so good:
Then I add a sublayer, like so:
CALayer * subLayer = [CALayer layer];
subLayer.backgroundColor = [NSColor yellowColor].CGColor;
subLayer.frame = CGRectMake(0.0f, 0.0f, 10.0f, 10.0f);
[layer addSublayer:subLayer];
And I end up with this, which is not what I want!
This is a problem that has only surfaced since my upgrade to El Capitan. In Yosemite the masking worked for the above code. What am I missing?
Update: this issue does not occur when I set layer.shouldRasterize = YES; however I want to keep memory down so I would prefer another solution.
A:
I have found my own solution, using a shape layer + mask instead of cornerRadius:
CALayer * layer = self.layer;
layer.backgroundColor = [NSColor redColor].CGColor;
//
// code to replace layer.cornerRadius:
CAShapeLayer * shapeLayer = [CAShapeLayer layer];
float const r = 10.0f;
float const w = self.bounds.size.width;
float const h = self.bounds.size.height;
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, r, 0.0f);
CGPathAddArcToPoint(path, NULL, w, 0.0f, w, r, r);
CGPathAddArcToPoint(path, NULL, w, h, w - r, h, r);
CGPathAddArcToPoint(path, NULL, 0.0f, h, 0.0f, h - r, r);
CGPathAddArcToPoint(path, NULL, 0.0f, 0.0f, r, 0.0f, r);
CGPathCloseSubpath ( path );
shapeLayer.path = path;
CGPathRelease(path);
self.layer.mask = shapeLayer;
//
// add the sublayer
CALayer * subLayer = [CALayer layer];
subLayer.backgroundColor = [NSColor yellowColor].CGColor;
subLayer.frame = CGRectMake(0.0f, 0.0f, 10.0f, 10.0f);
[layer addSublayer:subLayer];
Works as intended:
(Of course, if anyone has a more elegant fix, I'd love to see it!)
Q:
List all Sub-Groups (members) of Groups in Active Directory - Powershell
I'd like some help creating a simple script.
All it has to do is list the sub-groups of groups; it doesn't even need to be recursive, and the formatting is not that important for now.
I created a script, but all it does is write the GROUP=$group lines, without the sub-groups.
It should write:
GROUP=$group
subgroup
subgroup
GROUP=$group
subgroup
...and so on.
Get-ADGroup -filter * -properties GroupCategory | FORMAT-Table -property name -hidetableheaders | Out-File -FilePath "D:\groups_list.txt"
$sourcepath = "D:\groups_list.txt"
$path = "D:\groups.txt"
foreach ($group in get-content $sourcepath) {
Out-File -FilePath $path -InputObject "GROUP= $group" -Append
Get-ADGroupMember $group | FORMAT-Table -property name -hidetableheaders | Out-File -FilePath $path -Append
}
If I run the script without the loop, everything is fine, so I think there is some problem in the loop which I don't know how to fix.
$group = "DEPARTMENT_Finance"
Out-File -FilePath $path -InputObject "GROUP= $group" -Append
Get-ADGroupMember $group | FORMAT-Table -property name -hidetableheaders | Out-File -FilePath $path -Append
Where is the mistake in the loop?
A:
To add on to what @TheMadTechnician says, you should be using select (or select-object) instead of Format-Table in both the Get-ADGroup pipeline and the Get-AdGroupMember pipeline. When you are writing the list of groups to file, Format-Table includes tons of whitespace at the end of each line. It is formatting a table, so even though you are only using a single column, that column will be as wide as the longest group name, using whitespace as filler to keep columns nice and orderly.
Essentially, when you are reading the list of groups back in, instead of getting "DEPARTMENT_Finance" you are getting "DEPARTMENT_Finance "
Get-AdGroupMember doesn't know to trim the whitespace.
Do something like this:
Get-ADGroup -filter * -properties GroupCategory | Select-Object -ExpandProperty name | Out-File -FilePath "D:\groups_list.txt"
$sourcepath = "D:\groups_list.txt"
$path = "D:\groups.txt"
foreach ($group in get-content $sourcepath) {
Out-File -FilePath $path -InputObject "GROUP= $group" -Append
Get-ADGroupMember $group | Select-Object -ExpandProperty name | Out-File -FilePath $path -Append
}
Since you said you want to only list sub-groups, you might want to add a Where-Object to the Get-AdGroupMember pipeline to limit the output to only group objects, like so:
Get-ADGroupMember $group | Where-Object objectClass -eq "group" | Select-Object -ExpandProperty name | Out-File -FilePath $path -Append
Q:
Objective-C - inter-app communication
I have an app that lists files in a UITableView. These files are .ifc files (3D renderings). I have another app that I downloaded, called Field3D, which displays downloaded .ifc files. Currently my app does download this file, and I am looking for a way to use UIActivityViewController so the user can share this file with the Field3D app. I was told that inter-app communication was the way to go, so I did some research and found this https://developer.apple.com/library/ios/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/Inter-AppCommunication/Inter-AppCommunication.html
Now it says under Sending Files and Data to Another App that all I have to do is the following:
- (void)displayActivityControllerWithDataObject:(id)obj {
UIActivityViewController* vc = [[UIActivityViewController alloc]
initWithActivityItems:@[obj] applicationActivities:nil];
[self presentViewController:vc animated:YES completion:nil];
}
but I don't believe this will work just like that, because my code for the UIActivityViewController is already very similar:
NSData *pdfData = [NSData dataWithContentsOfFile:filePath];
UIActivityViewController * activityController = [[UIActivityViewController alloc] initWithActivityItems:@[pdfData] applicationActivities:nil];
activityController.popoverPresentationController.sourceView = self.view;
[self presentViewController:activityController animated:YES completion:^{}];
Can someone help me out and point me in the right direction?
A:
You should be able to use UIDocumentInteractionController to let the user open the file with Field3D.
An iOS app is distributed as an “IPA” file, which is just a ZIP file containing the app bundle and some other metadata files. I downloaded Field3D in iTunes and unzipped the IPA, then took a look at the app's Info.plist (using plutil -convert json -r -o - Info.plist) to see what document types it is registered for. Amongst others, it registers itself as a handler for com.tekla.collada3d.ifc with filename extension ifc and com.tekla.collada3d.ifczip with filename extension ifczip.
So if you have the file stored locally (presumably in your app sandbox's Documents or Caches directory) with an extension of .ifc or .ifczip, you should be able to let Field3D handle it just by presenting a UIDocumentInteractionController initialized with the URL of the (local) file. If you want, you can explicitly set the interaction controller's UTI property to com.tekla.collada3d.ifc or com.tekla.collada3d.ifczip instead of relying on the filename extension.
For more information, read “Previewing and Opening Files” in Document Interaction Programming Topics for iOS.
Q:
Strongly typed mapping. Lambda Expression based ORM
What do you think of the following table mapping style for domain entities?
class Customer {
public string Name;
}
class Order
{
public TotallyContainedIn<Customer> Parent { get { return null; } }
public HasReferenceTo<Person> SalesPerson { get { return new HasReferenceTo<Person>(0,1); } }
}
//...
[TableOf(typeof(Customer))]
class CustomerTable
{
//...
public Mapping Name { get { return Column.Implements<Customer>(c=>c.Name); } }
}
[TableOf(typeof(Order))]
class OrderTable
{
//...
public FK CustomerID { get { return References.FK<CustomerTable>(ct => ct.Id); } }
}
What I'm trying to achieve is having my domain model ready to write code against as soon as I type that and compile, with no need for code-generation routines, no dependence on any XML artifacts, and the ability to strongly reference everything I'm working with.
No matter how it will be implemented, do you think it would be easy to use it this way?
A:
FluentNHibernate does practically the same for NHibernate:
public class CatMap : ClassMap<Cat>
{
public CatMap()
{
Id(x => x.Id);
Map(x => x.Name)
.Length(16)
.Not.Nullable();
Map(x => x.Sex);
References(x => x.Mate);
HasMany(x => x.Kittens);
}
}
Additionally, it supports so-called automapping:
var autoMappings = AutoPersistenceModel
.MapEntitiesFromAssemblyOf<Product>()
.Where(t => t.Namespace == "Storefront.Entities");
var sessionFactory = new Configuration()
.AddProperty(ConnectionString, ApplicationConnectionString)
.AddAutoMappings(autoMappings)
.BuildSessionFactory();
Q:
The closure $\overline{Gx}$ for an affine variety on which an reductive algebraic group acts
Let $G$ be a reductive group acting on an affine variety $X$. For simplicity, one may assume $G=SL_n$ or $G=U_n$ and assume the field is $\mathbb C$. Given this, one can show $\mathbb C[X]^G$ is a finitely generated algebra.
Question:
(1) Each of the orbit closures $\overline{Gx}$ is a finite union of $G$-orbits.
(2) Each of the orbit closures $\overline{Gx}$ contains a unique closed orbit which has minimal dimension among these $G$-orbits.
I guess the two questions might be related to a result saying that
$$
\pi(x_1)=\pi(x_2) \ \ \ \ \Longleftrightarrow \ \ \ \ \overline{G x_1} \cap \overline{G x_2} \neq \varnothing
$$
where $\pi: X\to X //G\equiv Spec \mathbb C[X]^G$ is the GIT quotient.
EDIT: For question (2), suppose $Gx$ is not closed. If $y\in \overline{Gx}$ but $y\not \in Gx$, then what relation can we expect between $\overline{Gx}$ and $\overline{Gy}$? For example, do we have $\dim\overline{Gy} < \dim\overline{Gx}$, or $\dim G_y \ge 1$?
A:
The answer to (2) is affirmative: an affine $G$-variety with a dense open orbit (like $\overline{Gx}$) contains a unique closed orbit. The reason for that is that any two closed orbits are separated by a $G$-invariant. The closed orbit is clearly the unique orbit of minimal dimension since it is contained in the closure of every other orbit.
Question (1) is more delicate. As Victor Protsak's examples show there are, in general, infinitely many orbits in an affine orbit closure. There are some positive results, though:
V.L. Popov has shown (Quasihomogeneous affine algebraic varieties of the group SL(2). Izv. Akad. Nauk SSSR Ser. Mat 37 (1973), 792–832) that $\overline{Gx}$ has finitely many orbits when $G=SL(2)$.
There are also criteria in terms of the isotropy subgroup $H=G_x$:
$\overline{Gx}$ is a finite union of $G$-orbits if $G/H$ is spherical. In that case any embedding (affine or not) of $G/H$ has finitely many orbits. That is even equivalent to $G/H$ being spherical.
$\overline{Gx}$ is a finite union of $G$-orbits if $H$ is a reductive subgroup which is of finite index in its normalizer (e.g. $H=T$, the maximal torus). In that case $G/H=\overline{G/H}$ is already closed in $X$ (by Luna: Adherence d'orbites).
The paper Arzhantsev, I. V.; Timashev, D. A. Affine embeddings with a finite number of orbits. Transform. Groups 6 (2001), no. 2, 101–110 contains more results in this direction. In particular, the authors show that examples 2 and 3 exhaust (almost) all instances when $H$ is reductive.
A:
There is no reason to expect positive answer to Q1, even for a linear action with a dense orbit.
Let $X$ be the space of $n\times m$ matrices over a field $K$ with $G={\rm GL}_n(K)$ acting by left multiplication. By basic linear algebra $A$ and $B$ are in the same orbit iff $\ker A=\ker B$. Now suppose that $n\geq m\geq 2$. Then the matrices with zero kernel (i.e., of maximal rank, $m$) form a single dense orbit in $X$. However, the orbits of $G$ on $X$ are parametrized by pairs $(r, L)$, where $r\leq m$ and $L$ is an $r$-dimensional subspace of $K^m$. In particular, the orbit space is infinite.
Q:
How to get the intersection of two columns
For instance I have table A and table B
a.data = {1,2,3,4,5,6}
b.data = {4,5,7}
If you want to look up one value in a.data or b.data you can use FIND_IN_SET(3, b.data).
But I want to know whether all the values of b.data are in a.data, or failing that, find
the intersection between b.data and a.data, so in this case {4,5}.
WHERE INTERSECT(a.data, b.data) ... something like that. How should I do this in MySQL?
update
The b.data {4,5,7} is the column data of one 1 record, so joining a.data on b.data won't work.
table A
=======
ID DATA
1 {1,2,3,4,5,6}
2 {7,9,12}
table B
=======
ID DATA
1 {4,5,7}
2 {9,10,11,12}
A:
You can take the intersection of two tables using an INNER JOIN;
have a look at this visual explanation of joins.
SELECT fn_intersect_string(a.data, b.data) AS result FROM table_name;
Alternatively, you can write a user-defined function:
CREATE FUNCTION fn_intersect_string(arg_str1 VARCHAR(255), arg_str2 VARCHAR(255))
RETURNS VARCHAR(255)
BEGIN
SET arg_str1 = CONCAT(arg_str1, ",");
SET @var_result = "";
WHILE(INSTR(arg_str1, ",") > 0)
DO
SET @var_val = SUBSTRING_INDEX(arg_str1, ",", 1);
SET arg_str1 = SUBSTRING(arg_str1, INSTR(arg_str1, ",") + 1);
IF(FIND_IN_SET(@var_val, arg_str2) > 0)
THEN
SET @var_result = CONCAT(@var_result, @var_val, ",");
END IF;
END WHILE;
RETURN TRIM(BOTH "," FROM @var_result);
END;
Q:
Math vs. CS for Software Engineering
I'm currently trying to decide what to major in undergrad and I'm between math and CS. I've taken multivar/linear alg/intro to abstract math so far and would really like to continue with math as my major rather than CS. The problem, though, is that I want to go into software engineering (couldn't really see myself in many math related careers like actuary, analyst, etc). So, does anyone know what the odds are that I'd be able to get a SE job with just a pure math degree in today's market? Should I just go for the CS major to have a better shot at landing a job? Unfortunately, I cannot double major due to schedule constraints. I could minor in CS if majoring in math, but would this help me very much in securing SE jobs?
In future years (say, the next 30 or 40), what would you guess will be the more valuable degree in the long run?
Thanks for the help.
A:
This question might very well be closed as not being about academic careers, or as an opinion question, but I'll give an answer anyway...
I studied math and computer science as an undergraduate in the early 1980's and ultimately ended up with a BS in Computer Science because it was a better degree for getting a job. After a few years as a software engineer at Motorola, I went back to graduate school in mathematics and then entered into an academic career as a math professor.
Although some of the more theoretical aspects of computer science that I studied in the 1980's are still relevant, the programming languages, operating systems, and computer hardware that I worked with are long out of date. To paraphrase a quote from Charles Stross, having a degree in Computer Science from 1984 is somewhat like having an Aeronautical Engineering degree from the 1920's.
On the other hand, most of the mathematics that I studied in the 1980's is still relevant and hasn't changed very much. For example, I've recently taught an Intro to Ordinary Differential Equations course using a later edition of the same textbook that I used as an undergraduate in the early 1980's.
The two points that I'm trying to make with this story are that
Computer Science and Software Engineering are very rapidly changing fields. You'll have to keep updating your education as the technology changes if you want to stay in it.
You might very well change careers one or more times over the next 30 to 40 years. It's unwise to plan for a career as if you'll be doing the same thing for the rest of your working life.
Q:
Why is my UIImageView's frame greater than the screen size?
I have a single-view application (see layout below). The layout was done using an iPhone X. I have a central UIImageView that is within the safe area, along with a custom bottom UIView that has some UIButtons. When I run the application on an iPhone 6 Plus (which does not have a notch) and run the code below, I get back a frame for the UIImageView that is larger than the screen dimensions AND the frame has a top inset of 44. I do not understand how this is possible.
override func viewDidLoad() {
super.viewDidLoad()
print("self.imgPreViewOutlet.frame: \(self.imgPreViewOutlet.frame)")
print("self.imgPreViewOutlet.bounds: \(self.imgPreViewOutlet.bounds)")
print("UIScreen.main.bounds: \(UIScreen.main.bounds)")
}
Console:
self.imgPreViewOutlet.frame: (0.0, 44.0, 414.0, 758.0)
self.imgPreViewOutlet.bounds: (0.0, 0.0, 414.0, 758.0)
UIScreen.main.bounds: (0.0, 0.0, 414.0, 736.0)
A:
Layout has not yet happened in viewDidLoad. The size you get there is meaningless. Ignore it. If you want to know how big the image view will really be, wait until later in the birth cycle of your view controller, such as viewDidLayoutSubviews.
Q:
storybook load image from css
I'm trying to run Storybook with a custom Webpack config, and it's putting image files (SVG in this case) in the wrong place; the SVG is output into storybook-static/[filehash].svg, but the CSS is altered to look in static/media/[filename].svg. There is a file there, but its contents are:
module.exports = __webpack_public_path__ + "[filehash].svg";
So for some reason it's putting the CommonJS module in the right place, but the css-loader (or something in the pipeline) is telling it to look at the module instead of the actual file.
Here's my .storybook/webpack.config.js:
const path = require("path");
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
module.exports = ({ config }) => {
config.plugins.push(
new MiniCssExtractPlugin(),
);
config.module.rules.push(
{
test: /\.(ts|tsx)$/,
use: [
{
loader: require.resolve('ts-loader'),
},
],
},
{
test: /\.(svg|jpe?g|png)$/,
use: [
{
loader: require.resolve('file-loader'),
},
],
},
{
test: /\.scss$/,
use: [
MiniCssExtractPlugin.loader,
// require.resolve("style-loader"),
{
loader: require.resolve("css-loader"),
options: {
importLoaders: 1,
},
},
{
loader: require.resolve("sass-loader"),
options: {
sourceMap: true,
data: '$theme-image-path: null;',
},
}
]
},
);
config.resolve.extensions.push('.ts', '.tsx');
return config;
};
A:
Fixed it by removing my own file-loader rule (Storybook apparently has its own)
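For reference, an equivalent fix is to filter Storybook's built-in image rule out of the config before pushing your own, so only one file-loader ever handles .svg files. This is a sketch against an assumed config shape (each rule having a RegExp `test`), not Storybook's documented API; the helper name is made up:

```javascript
// Sketch: keep every rule whose `test` regex does NOT match .svg files,
// then append our own rule, so exactly one loader claims svg assets.
function replaceSvgRule(rules, newRule) {
  const kept = rules.filter(
    (rule) => !(rule.test instanceof RegExp && rule.test.test('.svg'))
  );
  kept.push(newRule);
  return kept;
}

// intended usage inside .storybook/webpack.config.js:
// config.module.rules = replaceSvgRule(config.module.rules, myFileLoaderRule);
module.exports = { replaceSvgRule };
```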
Q:
What are the differences between DOM, SAX and StAX XML parsers?
I'm developing a RSS feed aggregator with Apache Tomcat. I was wondering which parser to use in order to read RSS feeds. Should I use DOM, SAX or StAX? I know that there are libraries specific to read RSS feeds with java but since this is a university project I am not supposed to use those.
Thank you.
A:
It mostly depends on your needs. Each has its own features.
DOM - pull the whole thing into memory and walk around inside it. Good for comparatively small chunks of XML that you want to do complex stuff with. XSLT uses DOM.
SAX - Walk the XML as it arrives watching for things as they fly past. Good for large amounts of data or comparatively simple processing.
StAX - Much like SAX, but instead of responding to events found in the stream you iterate through the XML - See When should I choose SAX over StAX? for discussion of which is best.
There's a good discussion here: Parsing XML using DOM, SAX and StAX Parser in Java - By Mohamed Sanaulla. NB: There's a fault in his SAX parser - he should append characters, not replace them, as character data is cumulative and may arrive in chunks.
content = String.copyValueOf(ch, start, length);
should be
content += String.copyValueOf(ch, start, length);
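To make the append-not-replace point concrete, here is a minimal handler sketch (class and element names are illustrative, not taken from the linked article). The parser is free to deliver the text of one element through several characters() callbacks - entity references such as &amp;amp; often split it - so the handler must accumulate:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class TitleHandler extends DefaultHandler {
    private final StringBuilder content = new StringBuilder();

    @Override
    public void characters(char[] ch, int start, int length) {
        content.append(ch, start, length); // append, never assign
    }

    public String getContent() {
        return content.toString();
    }

    // Parse an XML string and return all accumulated character data.
    public static String parseTitle(String xml) throws Exception {
        TitleHandler handler = new TitleHandler();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        return handler.getContent();
    }
}
```

With assignment instead of append, an input like `<t>Tom &amp; Jerry</t>` can come back as just the final chunk rather than the full text.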
Also a blog post by Kaan Yamanyar Differences between DOM, SAX or StAX.
A:
I don't know StAX, but I can say something about DOM and SAX:
DOM holds the XML data in memory as an object model. The advantage is that you can access and change the data in a convenient and fast way in memory. The disadvantage is that this has a high memory consumption.
SAX uses a kind of event pattern to read the data and doesn't keep any data in memory. The advantage is that this is relatively fast and doesn't need much memory space. The disadvantage is that you have to create your own data model if you want to change the data in a convenient way.
DOM is a little more complex to use compared to SAX.
Use SAX if you need to parse big data as a stream. Use DOM if you want to hold the complete data in memory to work with it and the data size is small enough to safely fit into memory.
For example: XSLT doesn't work with SAX because it needs to look forward in the data stream while reading it. So it uses DOM, even if that leads to memory issues with big data.
Hope that helped :-)
Q:
Transversals that are closed under multiplication in a group
Let $G$ be a group with a subgroup $H \le G$. A right (or left) transversal is a set of elements which contains exactly one element from each right (or left) coset. Now for example for $S_3$ and $H = \{ (), (1 ~ 2) \}$ we have
$$
H, \quad H\cdot (1 ~ 2 ~ 3) = \{ (1 ~ 2 ~ 3), (1 ~ 3) \}, \quad
H \cdot (1 ~ 3 ~ 2) = \{ (1 ~ 3 ~ 2), (3 ~ 2) \}
$$
and the right transversal $\{ (), (1 ~ 2 ~ 3), (1 ~ 3 ~ 2) \}$ even forms a group. But what I am interested in is when transversals are closed under multiplication: is this always the case, i.e. can we always find a right (or left) transversal $T$ such that $TT \subseteq T$?
A:
If you don't require your group to be finite, then it is easy to find a counter-example: if $G=\mathbb{Z}$, then any subgroup $H$ will be of the form $n\mathbb{Z}$. Then if $A\subset G$ is a transversal, it must have exactly $n$ elements, and thus it cannot be closed under addition since it is finite (and not reduced to $\{0\}$).
Now that I think of it it is not hard to find a finite counterexample. Take $G=\mathbb{Z}/4\mathbb{Z}$ and $H=\mathbb{Z}/2\mathbb{Z}$. Then the cosets in $G$ are$$\{\bar{1},\bar{3}\},\ \{\bar{0},\bar{2}\}.$$So a transversal subset would have to contain exactly $2$ elements, with one of them being of order $4$; so it couldn't be closed under addition in $\mathbb{Z}/4\mathbb{Z}$.
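One can also confirm the finite counterexample by brute force; the following sketch (an illustration, not part of the argument) enumerates every transversal of $\{\bar 0,\bar 2\}$ in $\mathbb{Z}/4\mathbb{Z}$ and checks closure under addition mod $4$:

```python
from itertools import product

# G = Z/4Z, H = {0, 2}. The cosets are {0, 2} and {1, 3}, so a transversal
# picks exactly one element from each coset. Check every choice for closure.
cosets = [{0, 2}, {1, 3}]
for choice in product(*cosets):
    t = set(choice)
    closed = all((a + b) % 4 in t for a in t for b in t)
    print(sorted(t), "closed" if closed else "not closed")
```

All four transversals come out not closed, matching the argument about the order-$4$ element.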