| column | dtype | min | max |
| --- | --- | --- | --- |
| qid | int64 (value) | 1 | 74.7M |
| question | string (length) | 15 | 55.4k |
| date | string (length) | 10 | 10 |
| metadata | sequence (length) | 3 | 3 |
| response_j | string (length) | 2 | 32.4k |
| response_k | string (length) | 9 | 40.5k |
4,419,761
How to find the probability density function of the random variable $\frac{1}{X}$? Let $X$ be a random variable with pdf $f\_X(x)= \begin{cases} 0 ; x \le 0 \\ \frac{1}{2} ; 0 < x \le 1 \\ \frac{1}{2x^2} ; 1 < x < \infty \end{cases}$ I was trying to avoid using the formula. So we see that $P(Y \le y) = 1 - P(X \le y)$ Case $1$: if $y \le 0$ Then $F(y) = 1 - P[X \le y] = 1$ Case $2:$ if $0 < y \le 1$ Then $F(y) = 1 - P(X \le y) = 1 - (0 + \int\_{0}^{y}\frac{1}{2}dy) = 1-\frac{y}{2}$ Case $3:$ if $1 < y < \infty$ Then $F(y) = 1 - P(X \le y) = 1 - (\frac{1}{2} + \int\_{1}^{y}\frac{1}{y^2}dy) = \frac{1}{2}+\frac{1}{y}-1$ Then on differentiating we can get the probability density function. I think that the distribution function that I got is not correct. Can someone help me out, please?
2022/04/04
[ "https://math.stackexchange.com/questions/4419761", "https://math.stackexchange.com", "https://math.stackexchange.com/users/919818/" ]
The PDF of $X$ is given by $$ f\_X(x) = \left\{ \begin{array}{ccc} 0 & \mbox{if} & x \leq 0 \\[2mm] {1 \over 2} & \mbox{if} & 0 < x < 1 \\[2mm] {1 \over 2 x^2} & \mbox{if} & 1 < x < \infty \\[2mm] \end{array} \right. \tag{1} $$ Thus, $X$ is a positive random variable. Since $Y = {1 \over X}$, it is immediate that $Y$ is also a positive random variable. Hence, $$ F\_Y(y)= P(Y \leq y) = 0 \ \ \mbox{for} \ \ y \leq 0. \tag{2} $$ Fix $y$ in the interval $0 < y < 1$. Then ${1 \over y} > 1$. Now, we find that $$ F\_Y(y) = P(Y \leq y) = P\left( {1 \over X} \leq y \right) = P\left( X \geq {1 \over y} \right) $$ which can be evaluated using (1) as $$ F\_Y(y) = \int\limits\_{1 \over y}^\infty \ {1 \over 2 x^2} \ dx = {1 \over 2} \ \left[ - {1 \over x} \right]\_{1 \over y}^\infty $$ or $$ F\_Y(y) = {1 \over 2} \ \left[ 0 + y \right] = {y \over 2} $$ Thus, $$ F\_Y(y) = {y \over 2} \ \ \mbox{for} \ \ 0 < y < 1. \tag{3} $$ Next, we fix $y$ in the interval $y > 1$. Then it follows that $0 < {1 \over y} < 1$. Now, $$ F\_Y(y) = P(Y \leq y) = P\left( {1 \over X} \leq y \right) = P\left( X \geq {1 \over y} \right) = 1 - P\left( X \leq {1 \over y} \right) $$ which can be evaluated using (1) as $$ F\_Y(y) = 1 - \int\limits\_{0}^{1 \over y} \ {1 \over 2} \ dx = 1 - {1 \over 2} \left[ {1 \over y} - 0 \right] = 1 - {1 \over 2 y}. $$ Thus, $$ F\_Y(y) = 1 - {1 \over 2 y} \ \ \mbox{for} \ \ y > 1 \tag{4} $$ Combining the three cases, we find that $$ F\_Y(y) = \left\{ \begin{array}{ccc} 0 & \mbox{if} & y \leq 0 \\[2mm] {y \over 2} & \mbox{if} & 0 < y < 1 \\[2mm] 1 - {1 \over 2 y} & \mbox{if} & y > 1 \\[2mm] \end{array} \right. \tag{5} $$ From (5), we find the PDF of $Y = {1 \over X}$ as $$ f\_Y(y) = F\_Y'(y) = \left\{ \begin{array}{ccc} 0 & \mbox{if} & y \leq 0 \\[2mm] {1 \over 2} & \mbox{if} & 0 < y < 1 \\[2mm] {1 \over 2 y^2} & \mbox{if} & y > 1 \\[2mm] \end{array} \right. \tag{6} $$
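As a quick sanity check on (6): the density integrates to one, $$ \int\_0^1 {1 \over 2} \, dy + \int\_1^\infty {1 \over 2 y^2} \, dy = {1 \over 2} + {1 \over 2} = 1, $$ and $f\_Y$ in (6) has exactly the same form as $f\_X$ in (1), which makes sense because taking reciprocals maps $(0,1)$ onto $(1,\infty)$ and vice versa.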
$X$ is a positive r.v. and so is $Y$. Hence, $P(Y\leq y)=0$ for $y \leq 0$. Also, $P(Y \leq y)=\frac y 2$ for $0<y \leq 1$ and $P(Y \leq y)=1-\frac 1 {2y}$ for $y >1$. [For $0<y\leq 1$ we have $P(Y \leq y)=P(\frac 1 X \leq y)=P(X\geq \frac 1 y)=\int\_{1/y}^{\infty} \frac 1 {2x^{2}}dx=\frac y 2$. I will leave the case $y>1$ to you].
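For the case left to the reader: if $y > 1$ then $0 < \frac 1 y < 1$, so $$P(Y \leq y) = P\left(X \geq \frac 1 y\right) = \int\_{1/y}^{1} \frac 1 2\,dx + \int\_{1}^{\infty} \frac 1 {2x^{2}}\,dx = \frac 1 2\left(1-\frac 1 y\right) + \frac 1 2 = 1 - \frac 1 {2y},$$ which agrees with the value stated above.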
15,557,739
```
Private Sub cmdAdd_Click()
    'add data to table
    CurrentDb.Execute = "INSERT INTO jscbb_dir2(ID,Lastname,FirstName, PrimA, Artea,LubNum,OfficeNum,OfficePhone,Email,LabPhone,stats)" & _
        " VALUES(" & Me.Textid & ",'" & Me.TextLast & "','" & Me.TextFirst & "','" & Me.Textprima & "','" & Me.Textarea & "','" & Me.Textlabnum & _
        "','" & Me.Textofficenum & "','" & Me.Textofficephone & "','" & Me.Textemail & "','" & Me.Textlabphone & "','" & Me.Textstatus & "')"
    'refresh data is list on focus
    jscbb_dirsub.Form.Requery
End Sub
```

Why am I getting an error on the last (Me.Textstatus)? I know this is a low-level question, but I need another pair of eyes, I've been looking at this for over an hour. The error is "Compile Error: Argument Not Optional"
2013/03/21
[ "https://Stackoverflow.com/questions/15557739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2027275/" ]
Consider parameters; they will be easier to debug.

```
Dim qdf As QueryDef
ssql = "INSERT INTO jscbb_dir2(ID,Lastname,FirstName,PrimA,Artea," _
    & "LubNum,OfficeNum,OfficePhone,Email,LabPhone,stats) " _
    & "VALUES([id],[last],[first],[prima],[area],[lab]," _
    & "[office],[phone],[email],[labphone],[stat])"
Set qdf = CurrentDb.CreateQueryDef("", ssql)
qdf.Parameters("id") = Me.Textid
qdf.Parameters("last") = Me.TextLast
qdf.Parameters("first") = Me.TextFirst
qdf.Parameters("prima") = Me.Textprima
qdf.Parameters("area") = Me.Textarea
qdf.Parameters("lab") = Me.Textlabnum
qdf.Parameters("office") = Me.Textofficenum
qdf.Parameters("phone") = Me.Textofficephone
qdf.Parameters("email") = Me.Textemail
qdf.Parameters("labphone") = Me.Textlabphone
qdf.Parameters("stat") = Me.Textstatus
qdf.Execute dbFailOnError
```
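The thread above is Access/VBA, but the underlying point (bind values as parameters instead of concatenating them into the SQL string) is language-independent. Here is a minimal, hypothetical sketch of the same idea using Python's built-in `sqlite3` module; the table, columns, and values are made up for illustration.

```
# Minimal illustration of parameterized SQL (Python stdlib, in-memory SQLite).
# Table name, columns, and values are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, last_name TEXT, email TEXT)")

row = {"id": 1, "last_name": "O'Brien", "email": "ob@example.com"}

# Named placeholders: the driver handles quoting, so the apostrophe in
# "O'Brien" cannot break the statement the way string concatenation can.
conn.execute(
    "INSERT INTO people (id, last_name, email) VALUES (:id, :last_name, :email)",
    row,
)
conn.commit()
print(conn.execute("SELECT * FROM people").fetchall())
```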
[`Execute`](http://msdn.microsoft.com/en-us/library/bb243015%28v=office.12%29.aspx) is a method, not a property. You don't use `=` between a method and its arguments, so ``` CurrentDb.Execute = "..." ``` should be ``` CurrentDb.Execute "..." ```
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
You might start with the [System.ServiceModel.Syndication Namespace](http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx). It includes classes for RSS and Atom.
According to [Google](http://www.google.nl/search?q=rss+library+.net), there are a few options: * [RSS.NET](http://www.rssdotnet.com/) - with an updated version [over here](http://www.web20tools.net/) * [FeedDotNet](http://www.codeplex.com/FeedDotNet/) Also, many commercial networking toolkits for .NET (e.g. /n Software's [IP\*Works!](http://nsoftware.com/portal/dotnet/)) support RSS. In addition to that, the [RSS protocol](http://www.rssboard.org/rss-specification) itself isn't too involved: using .NET's native [HttpClient](http://msdn.microsoft.com/en-us/library/f3wxbf3f%28VS.80%29.aspx) and some LINQ to XML magic, it [should not be too difficult](http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx) to implement a RSS client yourself...
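To back up the "not too involved" claim in general terms: a plain RSS 2.0 feed is just XML with a `channel` element containing `item` elements. The sketch below is in Python rather than .NET, purely to illustrate how little is needed for a basic reader; the feed URL is a placeholder, and it deliberately ignores Atom and the many optional RSS fields.

```
# Minimal RSS 2.0 reader sketch (illustration only; URL is a placeholder).
import urllib.request
import xml.etree.ElementTree as ET

def fetch_rss_items(url):
    """Yield (title, link) pairs from a plain RSS 2.0 feed."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    # RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
    for item in tree.getroot().iterfind("./channel/item"):
        yield item.findtext("title", default=""), item.findtext("link", default="")

if __name__ == "__main__":
    for title, link in fetch_rss_items("https://example.com/feed.rss"):
        print(f"{title} -> {link}")
```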
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
According to [Google](http://www.google.nl/search?q=rss+library+.net), there are a few options: * [RSS.NET](http://www.rssdotnet.com/) - with an updated version [over here](http://www.web20tools.net/) * [FeedDotNet](http://www.codeplex.com/FeedDotNet/) Also, many commercial networking toolkits for .NET (e.g. /n Software's [IP\*Works!](http://nsoftware.com/portal/dotnet/)) support RSS. In addition to that, the [RSS protocol](http://www.rssboard.org/rss-specification) itself isn't too involved: using .NET's native [HttpClient](http://msdn.microsoft.com/en-us/library/f3wxbf3f%28VS.80%29.aspx) and some LINQ to XML magic, it [should not be too difficult](http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx) to implement a RSS client yourself...
There are many of them, e.g. [rss.net](http://www.rssdotnet.com/). And it's easy to [implement reading](http://geekswithblogs.net/willemf/archive/2005/10/30/58562.aspx) without any lib
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
According to [Google](http://www.google.nl/search?q=rss+library+.net), there are a few options: * [RSS.NET](http://www.rssdotnet.com/) - with an updated version [over here](http://www.web20tools.net/) * [FeedDotNet](http://www.codeplex.com/FeedDotNet/) Also, many commercial networking toolkits for .NET (e.g. /n Software's [IP\*Works!](http://nsoftware.com/portal/dotnet/)) support RSS. In addition to that, the [RSS protocol](http://www.rssboard.org/rss-specification) itself isn't too involved: using .NET's native [HttpClient](http://msdn.microsoft.com/en-us/library/f3wxbf3f%28VS.80%29.aspx) and some LINQ to XML magic, it [should not be too difficult](http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx) to implement a RSS client yourself...
I have answered a similar question 2 times ;- ) Check this out : [rss parser in .net](https://stackoverflow.com/questions/684507/rss-parser-in-net/684518#684518) > > <http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx> > <http://msdn.microsoft.com/en-us/magazine/cc135976.aspx> > > > .net has a class to parse ATOM and RSS > feeds. Check out the links. What are > you trying to do? Can you give more > information? > > > Alternatively You can just remove the > "Feed version" from the XML file and > parse it as a normal XML file using > xmlDocument class. > > >
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
According to [Google](http://www.google.nl/search?q=rss+library+.net), there are a few options: * [RSS.NET](http://www.rssdotnet.com/) - with an updated version [over here](http://www.web20tools.net/) * [FeedDotNet](http://www.codeplex.com/FeedDotNet/) Also, many commercial networking toolkits for .NET (e.g. /n Software's [IP\*Works!](http://nsoftware.com/portal/dotnet/)) support RSS. In addition to that, the [RSS protocol](http://www.rssboard.org/rss-specification) itself isn't too involved: using .NET's native [HttpClient](http://msdn.microsoft.com/en-us/library/f3wxbf3f%28VS.80%29.aspx) and some LINQ to XML magic, it [should not be too difficult](http://weblogs.asp.net/scottgu/archive/2007/08/07/using-linq-to-xml-and-how-to-build-a-custom-rss-feed-reader-with-it.aspx) to implement a RSS client yourself...
There are tons, including using LINQ to XML; however, IMHO, the Argotic framework is the most robust. <http://argotic.codeplex.com/>
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
You might start with the [System.ServiceModel.Syndication Namespace](http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx). It includes classes for RSS and Atom.
There are many of them, e.g. [rss.net](http://www.rssdotnet.com/). And it's easy to [implement reading](http://geekswithblogs.net/willemf/archive/2005/10/30/58562.aspx) without any lib
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
You might start with the [System.ServiceModel.Syndication Namespace](http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx). It includes classes for RSS and Atom.
I have answered a similar question 2 times ;- ) Check this out : [rss parser in .net](https://stackoverflow.com/questions/684507/rss-parser-in-net/684518#684518) > > <http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx> > <http://msdn.microsoft.com/en-us/magazine/cc135976.aspx> > > > .net has a class to parse ATOM and RSS > feeds. Check out the links. What are > you trying to do? Can you give more > information? > > > Alternatively You can just remove the > "Feed version" from the XML file and > parse it as a normal XML file using > xmlDocument class. > > >
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
You might start with the [System.ServiceModel.Syndication Namespace](http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx). It includes classes for RSS and Atom.
There are tons, including using LINQ to XML; however, IMHO, the Argotic framework is the most robust. <http://argotic.codeplex.com/>
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
I have answered a similar question 2 times ;- ) Check this out : [rss parser in .net](https://stackoverflow.com/questions/684507/rss-parser-in-net/684518#684518) > > <http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx> > <http://msdn.microsoft.com/en-us/magazine/cc135976.aspx> > > > .net has a class to parse ATOM and RSS > feeds. Check out the links. What are > you trying to do? Can you give more > information? > > > Alternatively You can just remove the > "Feed version" from the XML file and > parse it as a normal XML file using > xmlDocument class. > > >
There are many of them, e.g. [rss.net](http://www.rssdotnet.com/). And it's easy to [implement reading](http://geekswithblogs.net/willemf/archive/2005/10/30/58562.aspx) without any lib
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
There are tons, including using LINQ to XML; however, IMHO, the Argotic framework is the most robust. <http://argotic.codeplex.com/>
There are many of them, e.g. [rss.net](http://www.rssdotnet.com/). And it's easy to [implement reading](http://geekswithblogs.net/willemf/archive/2005/10/30/58562.aspx) without any lib
1,510,575
Is there a RSS library for .NET?
2009/10/02
[ "https://Stackoverflow.com/questions/1510575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183229/" ]
I have answered a similar question 2 times ;- ) Check this out : [rss parser in .net](https://stackoverflow.com/questions/684507/rss-parser-in-net/684518#684518) > > <http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.aspx> > <http://msdn.microsoft.com/en-us/magazine/cc135976.aspx> > > > .net has a class to parse ATOM and RSS > feeds. Check out the links. What are > you trying to do? Can you give more > information? > > > Alternatively You can just remove the > "Feed version" from the XML file and > parse it as a normal XML file using > xmlDocument class. > > >
There are tons, including using LINQ to XML; however, IMHO, the Argotic framework is the most robust. <http://argotic.codeplex.com/>
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
Cells should not "remember" their state; collection view or table view **data source** should. In the respective `cellForIndexPath` method, you should set the **current state** of the cell and let it configure itself as needed.
```
- (void)prepareForReuse {
    [super prepareForReuse];
    self.isBookable = nil;
    [self.book removeFromSuperview];
    [self setNeedsLayout];
}
```

Try setting `isBookable` to nil. I'm assuming that the cell is setting its layout with the previous cell's `isBookable` value.
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
As @iphonic said, the `isBookable` property should be (re)moved from the cell completely. Cells in a UICollectionView are reused most of the time, so even though your saleImage.isBookable is in the correct state, your cell.isBookable is probably not. I would do the following inside `collectionView: cellForItemAtIndexPath:`:

```
if (saleImage.isBookable) {
    cell.bgImageView.frame = CGRectMake(0, 0, 140, cell.bounds.size.height - cell.book.bounds.size.height);
    cell.bgImageView.image = [UIImage imageNamed:saleImage.imageName];
    cell.book.hidden = NO;
} else {
    cell.bgImageView.frame = CGRectMake(0, 0, cell.frame.size.width, cell.frame.size.height);
    cell.bgImageView.image = nil;
    cell.book.hidden = YES;
}
[cell layoutIfNeeded];
```

I would also finish setting up the book `UILabel` inside `initWithFrame:` and have it initially hidden. Something like:

```
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.layer.cornerRadius = 6.0;

        self.bgImageView = [[UIImageView alloc] init];
        [self.contentView insertSubview:self.bgImageView atIndex:0];

        self.book = [[UILabel alloc] initWithFrame:CGRectMake(0, self.bounds.size.height - 41, 140, 41)];
        self.book.text = @"Book this Item";
        self.book.textColor = [UIColor whiteColor];
        self.book.adjustsFontSizeToFitWidth = YES;
        self.book.textAlignment = NSTextAlignmentCenter;
        self.book.backgroundColor = [UIColor darkGrayColor];
        self.book.font = [UIFont fontWithName:kAppFont size:17.0];
        self.book.hidden = YES;
        [self.contentView addSubview:self.book];
    }
    return self;
}
```

Then you would not need to override `layoutSubviews`. Hope that helps.
You need to create (and register) two types of cells with different identifiers:

```
static NSString *Bookable = @"Bookable";
static NSString *NonBookable = @"NonBookable";

NSString *currentIdentifier;
if (saleImage.isBookable) // isBookable property must be set in the SaleImage class
{
    currentIdentifier = Bookable;
}
else {
    currentIdentifier = NonBookable;
}
SalesCollectionViewCell *cell = (SalesCollectionViewCell *)[collectionView dequeueReusableCellWithReuseIdentifier:currentIdentifier forIndexPath:indexPath];
```
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
```
- (void)prepareForReuse {
    [super prepareForReuse];
    self.isBookable = nil;
    [self.book removeFromSuperview];
    [self setNeedsLayout];
}
```

Try setting `isBookable` to nil. I'm assuming that the cell is setting its layout with the previous cell's `isBookable` value.
1st - Save/keep the indexPath of the cell that is changed.

2nd -

```
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    SalesCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];

    if ([indexPath isEqual:self.changedIndexPath])
        [cell trueImage];
    else
        [cell falseImage];

    return cell;
}
```

Inside the cell, you just implement the `- (void)trueImage` and `- (void)falseImage` methods, which change the image inside the cell. I hope I helped :)
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
```
- (void)prepareForReuse {
    [super prepareForReuse];
    self.isBookable = nil;
    [self.book removeFromSuperview];
    [self setNeedsLayout];
}
```

Try setting `isBookable` to nil. I'm assuming that the cell is setting its layout with the previous cell's `isBookable` value.
I'd override the `isBookable` setter in the cell class:

```
- (void)setIsBookable:(BOOL)isBookable {
    BOOL needsLayout = isBookable != _isBookable;
    _isBookable = isBookable;
    if (needsLayout) {
        [self.book removeFromSuperview];
        [self setNeedsLayout];
    }
}
```

Also, I'd recommend changing `@property (nonatomic) BOOL isBookable;` to `@property (nonatomic, getter = isBookable) BOOL bookable;` in order to follow Apple's coding conventions.
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
I'd override the `isBookable` setter in the cell class:

```
- (void)setIsBookable:(BOOL)isBookable {
    BOOL needsLayout = isBookable != _isBookable;
    _isBookable = isBookable;
    if (needsLayout) {
        [self.book removeFromSuperview];
        [self setNeedsLayout];
    }
}
```

Also, I'd recommend changing `@property (nonatomic) BOOL isBookable;` to `@property (nonatomic, getter = isBookable) BOOL bookable;` in order to follow Apple's coding conventions.
1st - Save/keep the indexPath of the cell that is changed.

2nd -

```
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    SalesCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];

    if ([indexPath isEqual:self.changedIndexPath])
        [cell trueImage];
    else
        [cell falseImage];

    return cell;
}
```

Inside the cell, you just implement the `- (void)trueImage` and `- (void)falseImage` methods, which change the image inside the cell. I hope I helped :)
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
As @iphonic said, the `isBookable` property should be (re)moved from the cell completely. Cells in a UICollectionView are reused most of the time, so even though your saleImage.isBookable is in the correct state, your cell.isBookable is probably not. I would do the following inside `collectionView: cellForItemAtIndexPath:`:

```
if (saleImage.isBookable) {
    cell.bgImageView.frame = CGRectMake(0, 0, 140, cell.bounds.size.height - cell.book.bounds.size.height);
    cell.bgImageView.image = [UIImage imageNamed:saleImage.imageName];
    cell.book.hidden = NO;
} else {
    cell.bgImageView.frame = CGRectMake(0, 0, cell.frame.size.width, cell.frame.size.height);
    cell.bgImageView.image = nil;
    cell.book.hidden = YES;
}
[cell layoutIfNeeded];
```

I would also finish setting up the book `UILabel` inside `initWithFrame:` and have it initially hidden. Something like:

```
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.layer.cornerRadius = 6.0;

        self.bgImageView = [[UIImageView alloc] init];
        [self.contentView insertSubview:self.bgImageView atIndex:0];

        self.book = [[UILabel alloc] initWithFrame:CGRectMake(0, self.bounds.size.height - 41, 140, 41)];
        self.book.text = @"Book this Item";
        self.book.textColor = [UIColor whiteColor];
        self.book.adjustsFontSizeToFitWidth = YES;
        self.book.textAlignment = NSTextAlignmentCenter;
        self.book.backgroundColor = [UIColor darkGrayColor];
        self.book.font = [UIFont fontWithName:kAppFont size:17.0];
        self.book.hidden = YES;
        [self.contentView addSubview:self.book];
    }
    return self;
}
```

Then you would not need to override `layoutSubviews`. Hope that helps.
1st - Save/keep the indexPath of the cell that is changed.

2nd -

```
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    SalesCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];

    if ([indexPath isEqual:self.changedIndexPath])
        [cell trueImage];
    else
        [cell falseImage];

    return cell;
}
```

Inside the cell, you just implement the `- (void)trueImage` and `- (void)falseImage` methods, which change the image inside the cell. I hope I helped :)
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
As @iphonic said, the `isBookable` property should be (re)moved from the cell completely. Cells in a UICollectionView are reused most of the time, so even though your saleImage.isBookable is in the correct state, your cell.isBookable is probably not. I would do the following inside `collectionView: cellForItemAtIndexPath:`:

```
if (saleImage.isBookable) {
    cell.bgImageView.frame = CGRectMake(0, 0, 140, cell.bounds.size.height - cell.book.bounds.size.height);
    cell.bgImageView.image = [UIImage imageNamed:saleImage.imageName];
    cell.book.hidden = NO;
} else {
    cell.bgImageView.frame = CGRectMake(0, 0, cell.frame.size.width, cell.frame.size.height);
    cell.bgImageView.image = nil;
    cell.book.hidden = YES;
}
[cell layoutIfNeeded];
```

I would also finish setting up the book `UILabel` inside `initWithFrame:` and have it initially hidden. Something like:

```
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.layer.cornerRadius = 6.0;

        self.bgImageView = [[UIImageView alloc] init];
        [self.contentView insertSubview:self.bgImageView atIndex:0];

        self.book = [[UILabel alloc] initWithFrame:CGRectMake(0, self.bounds.size.height - 41, 140, 41)];
        self.book.text = @"Book this Item";
        self.book.textColor = [UIColor whiteColor];
        self.book.adjustsFontSizeToFitWidth = YES;
        self.book.textAlignment = NSTextAlignmentCenter;
        self.book.backgroundColor = [UIColor darkGrayColor];
        self.book.font = [UIFont fontWithName:kAppFont size:17.0];
        self.book.hidden = YES;
        [self.contentView addSubview:self.book];
    }
    return self;
}
```

Then you would not need to override `layoutSubviews`. Hope that helps.
```
- (void)prepareForReuse {
    [super prepareForReuse];
    self.isBookable = nil;
    [self.book removeFromSuperview];
    [self setNeedsLayout];
}
```

Try setting `isBookable` to nil. I'm assuming that the cell is setting its layout with the previous cell's `isBookable` value.
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
Cells should not "remember" their state; collection view or table view **data source** should. In the respective `cellForIndexPath` method, you should set the **current state** of the cell and let it configure itself as needed.
1st - Save/keep the indexPath of the cell that is changed.

2nd -

```
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    SalesCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];

    if ([indexPath isEqual:self.changedIndexPath])
        [cell trueImage];
    else
        [cell falseImage];

    return cell;
}
```

Inside the cell, you just implement the `- (void)trueImage` and `- (void)falseImage` methods, which change the image inside the cell. I hope I helped :)
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
Cells should not "remember" their state; collection view or table view **data source** should. In the respective `cellForIndexPath` method, you should set the **current state** of the cell and let it configure itself as needed.
I'd override the `isBookable` setter in the cell class:

```
- (void)setIsBookable:(BOOL)isBookable {
    BOOL needsLayout = isBookable != _isBookable;
    _isBookable = isBookable;
    if (needsLayout) {
        [self.book removeFromSuperview];
        [self setNeedsLayout];
    }
}
```

Also, I'd recommend changing `@property (nonatomic) BOOL isBookable;` to `@property (nonatomic, getter = isBookable) BOOL bookable;` in order to follow Apple's coding conventions.
24,570,725
I have just started to read up on networking and I don't get this DHCP stuff... For the past few hours I have been trying to find out how my computer talks to DHCP in order to get its IPs, and all I understood is that my router is some sort of DHCP server that hands out private IPs. I am wondering if there is a way to contact a DHCP server manually from my computer in order to bypass the router, but a few of the things that I already said made me believe that my router is the DHCP server (for 255.255.255.0 at least). So what is it? Is my plan feasible, and how would I go about doing it?
2014/07/04
[ "https://Stackoverflow.com/questions/24570725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3804802/" ]
Cells should not "remember" their state; collection view or table view **data source** should. In the respective `cellForIndexPath` method, you should set the **current state** of the cell and let it configure itself as needed.
You need to create (and register) two types of cells with different identifiers:

```
static NSString *Bookable = @"Bookable";
static NSString *NonBookable = @"NonBookable";

NSString *currentIdentifier;
if (saleImage.isBookable) // isBookable property must be set in the SaleImage class
{
    currentIdentifier = Bookable;
}
else {
    currentIdentifier = NonBookable;
}
SalesCollectionViewCell *cell = (SalesCollectionViewCell *)[collectionView dequeueReusableCellWithReuseIdentifier:currentIdentifier forIndexPath:indexPath];
```
3,296,030
I need to run a Linux command such as "df" from my Linux daemon to know the free space, used space, total size of the partition, and other info. I have options like calling system, exec, popen, etc. 1. But as each command spawns a new process, is it not possible to run the commands in the same process from which it is invoked? 2. And at the same time, as I need to run this command from a Linux daemon, my daemon should not hold any terminal. Will it affect my daemon's behavior? Or is there any C or C++ standard API for getting the mounted partitions' information?
2010/07/21
[ "https://Stackoverflow.com/questions/3296030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7965/" ]
There is no *standard* API, as this is an OS-specific concept. However, 1. You can parse `/proc/mounts` (or `/etc/mtab`) with (non-portable) [`getmntent`/`getmntent_r`](http://linux.die.net/man/3/getmntent) helper functions. 2. Using information about mounted filesystems, you can get its statistics with [`statfs`](http://linux.die.net/man/2/statfs).
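The two steps above (enumerate the mounted filesystems, then ask each mount point for its statistics) are the whole job. The question asks for a C/C++ API, and `getmntent`/`statfs` are exactly that; the sketch below shows the same two steps in Python only to make the flow concrete — `/proc/mounts` and the `statvfs` fields are standard Linux, nothing project-specific.

```
# Illustration of "read the mount table, then stat each mount point".
# The C equivalents are getmntent(3) and statfs(2)/statvfs(3).
import os

def mounted_filesystems(mtab="/proc/mounts"):
    """Yield (device, mountpoint, fstype, total_bytes, free_bytes)."""
    with open(mtab) as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            try:
                st = os.statvfs(mountpoint)
            except OSError:
                continue  # skip mounts we cannot stat (e.g. restricted pseudo-filesystems)
            total = st.f_blocks * st.f_frsize
            free = st.f_bavail * st.f_frsize
            yield device, mountpoint, fstype, total, free

if __name__ == "__main__":
    for dev, mnt, fstype, total, free in mounted_filesystems():
        print(f"{mnt} ({fstype}): {free} of {total} bytes free")
```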
You may find it useful to explore the `i3status` program source code: <http://code.stapelberg.de/git/i3status/tree/src/print_disk_info.c>

To answer your other questions:

> But as each command spawns a new process, is it not possible to run the commands in the same process from which it is invoked?

No; entire 'commands' are self-contained programs that must run in their own process. Depending upon how often you wish to execute your programs, `fork(); exec()` is not so bad. There is no hard limit beyond which it would be better to gather the data yourself vs. executing a helper program. Once a minute, you're probably fine executing the commands. Once a second, you're probably better off gathering the data yourself. I'm not sure where the dividing line is.

> And at the same time, as I need to run this command from a Linux daemon, my daemon should not hold any terminal. Will it affect my daemon's behavior?

If the command calls `setsid(2)`, then `open(2)` on a terminal without including `O_NOCTTY`, that terminal [might](http://www.win.tue.nl/~aeb/linux/lk/lk-10.html) become the controlling terminal for that process. But that wouldn't influence your program, because your program already disowned the terminal when becoming a daemon, and as the child process is a session leader, it cannot change your process's controlling terminal.
13,010,051
I am trying out the phpunit in the Zf2 album module. I encountered an error which states about routing. Below is the debug information. It says 'Route with name "album" not found', but when I checked module.config.php in the album module folder, I see that is correctly set and in the browser the redirection to that route is working fine. ``` Album\Controller\AlbumControllerTest::testDeleteActionCanBeAccessed Zend\Mvc\Router\Exception\RuntimeException: Route with name "album" not found D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Router\SimpleRouteStack.php:292 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\Plugin\Url.php:88 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\Plugin\Redirect.php:54 D:\www\zend2\module\Album\src\Album\Controller\AlbumController.php:80 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\AbstractActionController.php:87 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\EventManager\EventManager.php:468 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\EventManager\EventManager.php:208 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\AbstractController.php:108 D:\www\zend2\tests\module\Album\src\Album\Controller\AlbumControllerTest.php:35 C:\wamp\bin\php\php5.4.3\phpunit:46 ``` I understand that the issue in AlbumController.php line 80 is ``` return $this->redirect()->toRoute('album'); ``` But not sure why it is not working. Any one has encountered and overcome such issues?
2012/10/22
[ "https://Stackoverflow.com/questions/13010051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/602988/" ]
I hope it will save approx. 30 minutes of searching in the zend framework 2 code: ``` class AlbumControllerTest extends PHPUnit_Framework_TestCase { //... protected function setUp() { $bootstrap = \Zend\Mvc\Application::init(include 'config/application.config.php'); $this->controller = new AlbumController(); $this->request = new Request(); $this->routeMatch = new RouteMatch(array('controller' => 'index')); $this->event = $bootstrap->getMvcEvent(); $router = new \Zend\Mvc\Router\SimpleRouteStack(); $options = array( 'route' => '/album[/:action][/:id]', 'constraints' => array( 'action' => '[a-zA-Z][a-zA-Z0-9_-]*', 'id' => '[0-9]+', ), 'defaults' => array( 'controller' => 'Album\Controller\Album', 'action' => 'index', ), ); $route = \Zend\Mvc\Router\Http\Segment::factory($options); $router->addRoute('album', $route); $this->event->setRouter($router); $this->event->setRouteMatch($this->routeMatch); $this->controller->setEvent($this->event); $this->controller->setEventManager($bootstrap->getEventManager()); $this->controller->setServiceLocator($bootstrap->getServiceManager()); } } ```
Actually the easy way is to get the config data from the service manager: ``` $config = $serviceManager->get('Config'); ``` Full code for the function `setUp()`: ``` protected function setUp() { $serviceManager = Bootstrap::getServiceManager(); $this -> controller = new AlbumController(); $this -> request = new Request(); $this -> routeMatch = new RouteMatch( array( 'controller' => 'index', ) ); $this -> event = new MvcEvent(); $config = $serviceManager->get('Config'); $routerConfig = isset($config['router']) ? $config['router'] : array(); $router = HttpRouter::factory($routerConfig); $this -> event -> setRouter($router); $this -> event -> setRouteMatch($this -> routeMatch); $this -> controller -> setEvent($this -> event); $this -> controller -> setServiceLocator($serviceManager); } ```
13,010,051
I am trying out the phpunit in the Zf2 album module. I encountered an error which states about routing. Below is the debug information. It says 'Route with name "album" not found', but when I checked module.config.php in the album module folder, I see that is correctly set and in the browser the redirection to that route is working fine. ``` Album\Controller\AlbumControllerTest::testDeleteActionCanBeAccessed Zend\Mvc\Router\Exception\RuntimeException: Route with name "album" not found D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Router\SimpleRouteStack.php:292 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\Plugin\Url.php:88 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\Plugin\Redirect.php:54 D:\www\zend2\module\Album\src\Album\Controller\AlbumController.php:80 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\AbstractActionController.php:87 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\EventManager\EventManager.php:468 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\EventManager\EventManager.php:208 D:\www\zend2\vendor\zendframework\zendframework\library\Zend\Mvc\Controller\AbstractController.php:108 D:\www\zend2\tests\module\Album\src\Album\Controller\AlbumControllerTest.php:35 C:\wamp\bin\php\php5.4.3\phpunit:46 ``` I understand that the issue in AlbumController.php line 80 is ``` return $this->redirect()->toRoute('album'); ``` But not sure why it is not working. Any one has encountered and overcome such issues?
2012/10/22
[ "https://Stackoverflow.com/questions/13010051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/602988/" ]
To avoid duplicate code, you can load your routes from the module config:

```
$module = new \YourNameSpace\Module();
$config = $module->getConfig();
$route = \Zend\Mvc\Router\Http\Segment::factory($config['router']['routes']['Home']['options']);
$router = new \Zend\Mvc\Router\SimpleRouteStack();
$router->addRoute('Home', $route);
```
Actually the easy way is to get the config data from the service manager: ``` $config = $serviceManager->get('Config'); ``` Full code for the function `setUp()`: ``` protected function setUp() { $serviceManager = Bootstrap::getServiceManager(); $this -> controller = new AlbumController(); $this -> request = new Request(); $this -> routeMatch = new RouteMatch( array( 'controller' => 'index', ) ); $this -> event = new MvcEvent(); $config = $serviceManager->get('Config'); $routerConfig = isset($config['router']) ? $config['router'] : array(); $router = HttpRouter::factory($routerConfig); $this -> event -> setRouter($router); $this -> event -> setRouteMatch($this -> routeMatch); $this -> controller -> setEvent($this -> event); $this -> controller -> setServiceLocator($serviceManager); } ```
33,159,169
I am getting the following errors for HDFS client installation on Ambari. Have reset the server several times but still cannot get it resolved. Any idea how to fix that? stderr: ``` Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module> HdfsClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install self.configure(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure hdfs() File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk return fn(*args, **kwargs) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs group=params.user_group File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create encoding = self.resource.encoding File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname)) resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist ```
2015/10/15
[ "https://Stackoverflow.com/questions/33159169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369921/" ]
This is a soft link that links to **/etc/hadoop/conf**. I ran

```
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
```

After running it, it removes `/etc/hadoop/conf`. However, reinstalling does not recreate it, so you may have to create all the conf files yourself. Hope someone can patch it.
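If you end up recreating things by hand, the following Python sketch shows one way to restore the layout described above: a real `/etc/hadoop/conf` directory with `/usr/hdp/current/hadoop-client/conf` pointing at it as a soft link. The paths are the ones mentioned in this thread; whether this alone is sufficient depends on your HDP version, so treat it as an illustration of the idea rather than an official fix.

```
# Sketch: recreate the client conf directory as a link to /etc/hadoop/conf.
# Paths are taken from this thread; run as root and adapt to your layout.
import os

real_conf = "/etc/hadoop/conf"
client_conf = "/usr/hdp/current/hadoop-client/conf"

os.makedirs(real_conf, exist_ok=True)      # the actual config files live here
if not os.path.lexists(client_conf):
    os.symlink(real_conf, client_conf)     # hadoop-client/conf -> /etc/hadoop/conf

print(os.path.realpath(client_conf))
```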
Creating `/usr/hdp/current/hadoop-client/conf` on the failing host should solve the problem.
33,159,169
I am getting the following errors for HDFS client installation on Ambari. Have reset the server several times but still cannot get it resolved. Any idea how to fix that? stderr: ``` Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module> HdfsClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install self.configure(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure hdfs() File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk return fn(*args, **kwargs) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs group=params.user_group File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create encoding = self.resource.encoding File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname)) resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist ```
2015/10/15
[ "https://Stackoverflow.com/questions/33159169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369921/" ]
I ran into the same problem: I was using HDP 2.3.2 on CentOS 7. **The first problem:** Some conf files point to the /etc/<service>/conf directory (as they are supposed to). However, /etc/<service>/conf points back to the other conf directory, which leads to an endless loop. I was able to fix this by removing the /etc/<service>/conf symbolic links and creating real directories. **The second problem:** If you run the Python scripts to clean up the installation and start over, several directories do not get recreated, such as the hadoop-client directory. This leads to exactly your error message. Also, this cleanup script does not work out well, as it does not clean up several users and directories; you have to userdel and groupdel them yourself. UPDATE: It seems this was a problem with HDP 2.3.2. In HDP 2.3.4, I did not run into it any more.
Creating `/usr/hdp/current/hadoop-client/conf` on the failing host should solve the problem.
33,159,169
I am getting the following errors for HDFS client installation on Ambari. Have reset the server several times but still cannot get it resolved. Any idea how to fix that? stderr: ``` Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module> HdfsClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install self.configure(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure hdfs() File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk return fn(*args, **kwargs) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs group=params.user_group File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create encoding = self.resource.encoding File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname)) resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist ```
2015/10/15
[ "https://Stackoverflow.com/questions/33159169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369921/" ]
```
yum -y erase hdp-select
```

If you have done the installation multiple times, some packages might not be cleaned up. To remove all HDP packages and start with a fresh installation, erase hdp-select. If this does not help, remove all the versions from `/usr/hdp`; delete this directory if it contains multiple versions of `hdp`. Remove all the installed packages like `hadoop, hdfs, zookeeper, etc.`:

```
yum remove zookeeper* hadoop* hdp* zookeeper*
```
Creating `/usr/hdp/current/hadoop-client/conf` on the failing host should solve the problem.
33,159,169
I am getting the following errors for HDFS client installation on Ambari. Have reset the server several times but still cannot get it resolved. Any idea how to fix that? stderr: ``` Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module> HdfsClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install self.configure(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure hdfs() File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk return fn(*args, **kwargs) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs group=params.user_group File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create encoding = self.resource.encoding File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname)) resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist ```
2015/10/15
[ "https://Stackoverflow.com/questions/33159169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369921/" ]
This is a soft link that links to **/etc/hadoop/conf**. I ran

```
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
```

After running it, it removes `/etc/hadoop/conf`. However, reinstalling does not recreate it, so you may have to create all the conf files yourself. Hope someone can patch it.
I ran into the same problem: I was using HDP 2.3.2 on CentOS 7. **The first problem:** Some conf files point to the /etc/<service>/conf directory (as they are supposed to). However, /etc/<service>/conf points back to the other conf directory, which leads to an endless loop. I was able to fix this by removing the /etc/<service>/conf symbolic links and creating real directories. **The second problem:** If you run the Python scripts to clean up the installation and start over, several directories do not get recreated, such as the hadoop-client directory. This leads to exactly your error message. Also, this cleanup script does not work out well, as it does not clean up several users and directories; you have to userdel and groupdel them yourself. UPDATE: It seems this was a problem with HDP 2.3.2. In HDP 2.3.4, I did not run into it any more.
33,159,169
I am getting the following errors for HDFS client installation on Ambari. Have reset the server several times but still cannot get it resolved. Any idea how to fix that? stderr: ``` Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 120, in <module> HdfsClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 36, in install self.configure(env) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 41, in configure hdfs() File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk return fn(*args, **kwargs) File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py", line 61, in hdfs group=params.user_group File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create encoding = self.resource.encoding File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__ self.env.run() File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run self.run_action(resource, action) File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action provider_action() File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname)) resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist ```
2015/10/15
[ "https://Stackoverflow.com/questions/33159169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369921/" ]
```
yum -y erase hdp-select
```

If you have done the installation multiple times, some packages might not be cleaned up. To remove all HDP packages and start with a fresh installation, erase hdp-select. If this does not help, remove all the versions from `/usr/hdp`; delete this directory if it contains multiple versions of `hdp`. Remove all the installed packages like `hadoop, hdfs, zookeeper, etc.`:

```
yum remove zookeeper* hadoop* hdp* zookeeper*
```
I ran into the same problem: I was using HDP 2.3.2 on CentOS 7. **The first problem:** Some conf files point to the /etc//conf directory (as they are supposed to). However, /etc//conf points back to the other conf directory, which leads to an endless loop. I was able to fix this problem by removing the /etc//conf symbolic links and creating real directories in their place. **The second problem:** If you run the python scripts to clean up the installation and start over, several directories do not get recreated, such as the hadoop-client directory. This leads to exactly your error message. The cleanup script also does not work out well, because it does not remove several users and directories; you have to run userdel and groupdel yourself. UPDATE: It seems this was a problem with HDP 2.3.2. In HDP 2.3.4, I did not run into that problem any more.
4,749,235
I have the following HTML code (a list item). The content isn't important--the problem is the end of line 2. ``` <li>Yes, you can learn how to play piano without becoming a great notation reader, however, <strong class="warning">you <em class="emphatic">will</em> have to acquire a <em class="emphatic">very</em>basic amount of notation reading skill</strong>. But the extremely difficult task of honing your note reading skills that classical students are required to endure for years and years is <em class="emphatic">totally non-existant</em>as a requirement for playing non-classical piano.</li> ``` The command fill-paragraph (M-q) has been applied. I can't for the life of me figure out why a line break is being placed on the second line after "reader," since there's more space available on that line to put "however,". Another weird thing I've noticed is that when I delete and then reapply the tab characters on lines 4 and 5 (starting with "have" and "of" respectively), two space characters are automatically inserted as well, like so: ``` <li>Yes, you can learn how to play piano without becoming a great notation reader, however, <strong class="warning">you <em class="emphatic">will</em> have to acquire a <em class="emphatic">very</em>basic amount of notation reading skill</strong>. But the extremely difficult task of honing your note reading skills that classical students are required to endure for years and years is <em class="emphatic">totally non-existant</em>as a requirement for playing non-classical piano.</li> ``` I don't know if this is some kind of clue or not. This doesn't happen with any of the other lines. Is this just a bug, or does any experienced Emacs person know what might be going on here? Thank you
2011/01/20
[ "https://Stackoverflow.com/questions/4749235", "https://Stackoverflow.com", "https://Stackoverflow.com/users/520648/" ]
This is intentional. Lines that start with an XML or SGML tag are paragraph separator lines. If Emacs broke the paragraph in such a way that the tag ended up at the start of a line, subsequent applications of `fill-paragraph` would stop at that line. This is to ensure that, for instance, ``` <p>a paragraph</p> <!-- no blank line --> <p>another paragraph</p> ``` does not turn into ``` <p>a paragraph</p> <!-- no blank line --> <p>another paragraph</p> ``` For the same reason, Emacs will not break a line after a period unless there are two or more spaces after the period, because it uses a double space to distinguish between a period that ends a sentence and a period that ends an abbreviation, and breaking a line after the period that ends an abbreviation would create an ambiguous situation.
Looks like a bug to me. I was able to trim down your example to something like this: ``` <li>blabla blabla <b>some_long_text_here</b> <b>more_long_text_here</b> ``` If I remove a single character of text from it, `fill-paragraph` works as expected. It also works if I add a character between the two consecutive `<b>` elements.
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
Add the following line to your `~/.bashrc`: ``` alias sudo='sudo ' ``` From the [bash manual](http://www.gnu.org/software/bash/manual/bashref.html#Aliases): > > Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The shell maintains a list of aliases that may be set and unset with the alias and unalias builtin commands. > > > **The first word of each simple command, if unquoted, is checked to see if it has an alias**. If so, that word is replaced by the text of the alias. The characters ‘/’, ‘$’, ‘`’, ‘=’ and any of the shell metacharacters or quoting characters listed above may not appear in an alias name. The replacement text may contain any valid shell input, including shell metacharacters. The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to "ls -F", for instance, and Bash does not try to recursively expand the replacement text. **If the last character of the alias value is a space or tab character, then the next command word following the alias is also checked for alias expansion**. > > > (Emphasis mine). Bash only checks the first word of a command for an alias, any words after that are not checked. That means in a command like `sudo ll`, only the first word (`sudo`) is checked by bash for an alias, `ll` is ignored. We can tell bash to check the next word after the alias (i.e `sudo`) by adding a space to the end of the alias value.
I have a different solution whereby you do not need to add `sudo` as an alias. I run Linux Mint 17.3 but it should be pretty similar to Ubuntu. When you are root, then the `.profile` is run from its home directory. If you do not know what the home directory under root is, then you can check with: ``` sudo su echo $HOME ``` As you can see, the home of `root` is `/root/`. Check its contents: ``` cd $HOME ls -al ``` There should be a `.profile` file. Open the file and add the following lines: ``` if [ "$BASH" ]; then if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi fi ``` Basically, what this bash script does is check for a file called `.bash_aliases`. If the file is present, it executes the file. Save the `.profile` file and create your aliases in `.bash_aliases`. If you already have the aliases file ready, then copy the file to this location. Re-launch the terminal and you are good to go!
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
Add the following line to your `~/.bashrc`: ``` alias sudo='sudo ' ``` From the [bash manual](http://www.gnu.org/software/bash/manual/bashref.html#Aliases): > > Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The shell maintains a list of aliases that may be set and unset with the alias and unalias builtin commands. > > > **The first word of each simple command, if unquoted, is checked to see if it has an alias**. If so, that word is replaced by the text of the alias. The characters ‘/’, ‘$’, ‘`’, ‘=’ and any of the shell metacharacters or quoting characters listed above may not appear in an alias name. The replacement text may contain any valid shell input, including shell metacharacters. The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to "ls -F", for instance, and Bash does not try to recursively expand the replacement text. **If the last character of the alias value is a space or tab character, then the next command word following the alias is also checked for alias expansion**. > > > (Emphasis mine). Bash only checks the first word of a command for an alias, any words after that are not checked. That means in a command like `sudo ll`, only the first word (`sudo`) is checked by bash for an alias, `ll` is ignored. We can tell bash to check the next word after the alias (i.e `sudo`) by adding a space to the end of the alias value.
I have another nice solution, that adds a bit of trust too: **Use bash completion to automatically replace words behind `sudo` with their aliases when pressing tab.** Save this as `/etc/bash_completion.d/sudo-alias.bashcomp`, and it should automatically be loaded at interactive shell startup: ```bsh _comp_sudo_alias() { from="$2"; COMPREPLY=() if [[ $COMP_CWORD == 1 ]]; then COMPREPLY=( "$( alias -p | grep "^ *alias $from=" | sed -r "s/^ *alias [^=]+='(.*)'$/\1/" )" ) return 0 fi return 1 } complete -o bashdefault -o default -F _comp_sudo_alias sudo ``` Then log in to a new terminal, and you should be good to go.
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
I have another nice solution, that adds a bit of trust too: **Use bash completion to automatically replace words behind `sudo` with their aliases when pressing tab.** Save this as `/etc/bash_completion.d/sudo-alias.bashcomp`, and it should automatically be loaded at interactive shell startup: ```bsh _comp_sudo_alias() { from="$2"; COMPREPLY=() if [[ $COMP_CWORD == 1 ]]; then COMPREPLY=( "$( alias -p | grep "^ *alias $from=" | sed -r "s/^ *alias [^=]+='(.*)'$/\1/" )" ) return 0 fi return 1 } complete -o bashdefault -o default -F _comp_sudo_alias sudo ``` Then log in to a new terminal, and you should be good to go.
If you type `sudo -i` and elevate to a sudo prompt (`#`) you won't have the aliases or functions you like to use. To utilize your aliases and functions at the `#` prompt, use: ``` sudo cp "$HOME"/.bashrc /root/.bashrc ``` Where "$HOME" is expanded into "/home/YOUR\_USER\_NAME"
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
Add the following line to your `~/.bashrc`: ``` alias sudo='sudo ' ``` From the [bash manual](http://www.gnu.org/software/bash/manual/bashref.html#Aliases): > > Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The shell maintains a list of aliases that may be set and unset with the alias and unalias builtin commands. > > > **The first word of each simple command, if unquoted, is checked to see if it has an alias**. If so, that word is replaced by the text of the alias. The characters ‘/’, ‘$’, ‘`’, ‘=’ and any of the shell metacharacters or quoting characters listed above may not appear in an alias name. The replacement text may contain any valid shell input, including shell metacharacters. The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to "ls -F", for instance, and Bash does not try to recursively expand the replacement text. **If the last character of the alias value is a space or tab character, then the next command word following the alias is also checked for alias expansion**. > > > (Emphasis mine). Bash only checks the first word of a command for an alias, any words after that are not checked. That means in a command like `sudo ll`, only the first word (`sudo`) is checked by bash for an alias, `ll` is ignored. We can tell bash to check the next word after the alias (i.e `sudo`) by adding a space to the end of the alias value.
I wrote a Bash function for it that shadows `sudo`. It checks whether I have an alias for the given command and runs the aliased command instead of the literal one with `sudo` in that case. Here is my function as one-liner: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" ; else command sudo $@ ; fi } ``` Or nicely formatted: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" else command sudo "$@" fi } ``` You can append it to your `.bashrc` file, don't forget to source it or restart your terminal session afterwards to apply the changes though.
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
I wrote a Bash function for it that shadows `sudo`. It checks whether I have an alias for the given command and runs the aliased command instead of the literal one with `sudo` in that case. Here is my function as one-liner: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" ; else command sudo $@ ; fi } ``` Or nicely formatted: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" else command sudo "$@" fi } ``` You can append it to your `.bashrc` file, don't forget to source it or restart your terminal session afterwards to apply the changes though.
@Alvin's answer is the shortest one. No doubt! :-) However, I thought of a command line solution to **execute an aliased command in sudo** where there is no need to redefine `sudo` with an `alias` command. Here is my proposal for anyone it may interest: Solution -------- ``` type -a <YOUR COMMAND HERE> | grep -o -P "(?<=\`).*(?=')" | xargs sudo ``` Example ------- In the case of the `ll` command: ``` type -a ll | grep -o -P "(?<=\`).*(?=')" | xargs sudo ``` Explanation ----------- When you have an alias (such as `ll`), the command `type -a` returns the aliased expression: ``` $type -a ll ll is aliased to `ls -l' ``` With `grep` you select the text between the backtick ` and the apostrophe ', in this case `ls -l`. And `xargs` executes the selected text `ls -l` as a parameter of `sudo`. Yes, a bit longer, but **completely clean** ;-) No need to redefine `sudo` as an alias.
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
I wrote a Bash function for it that shadows `sudo`. It checks whether I have an alias for the given command and runs the aliased command instead of the literal one with `sudo` in that case. Here is my function as one-liner: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" ; else command sudo $@ ; fi } ``` Or nicely formatted: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" else command sudo "$@" fi } ``` You can append it to your `.bashrc` file, don't forget to source it or restart your terminal session afterwards to apply the changes though.
I have a different solution whereby you do not need to add `sudo` as an alias. I run Linux Mint 17.3 but it should be pretty similar to Ubuntu. When you are root, then the `.profile` is run from its home directory. If you do not know what the home directory under root is, then you can check with: ``` sudo su echo $HOME ``` As you can see, the home of `root` is `/root/`. Check its contents: ``` cd $HOME ls -al ``` There should be a `.profile` file. Open the file and add the following lines: ``` if [ "$BASH" ]; then if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi fi ``` Basically, what this bash script does is check for a file called `.bash_aliases`. If the file is present, it executes the file. Save the `.profile` file and create your aliases in `.bash_aliases`. If you already have the aliases file ready, then copy the file to this location. Re-launch the terminal and you are good to go!
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
[@WinEunuuchs2Unix](https://askubuntu.com/a/853332/830570): `$PWD` expands to the "present working directory". I think you want `$HOME`. Also, for most situations, it's probably best to have a separate root .bashrc file. In fact, I'd make it a real file in `/root`, soft link to it in the user's home directory (e.g., `.bashrc_root`), and source it from the user's `.bashrc` file. If at some later time this privileged user account is no longer present, the root `.bashrc` file is still available for other users.
If you type `sudo -i` and elevate to a sudo prompt (`#`) you won't have the aliases or functions you like to use. To utilize your aliases and functions at the `#` prompt, use: ``` sudo cp "$HOME"/.bashrc /root/.bashrc ``` Where "$HOME" is expanded into "/home/YOUR\_USER\_NAME"
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
If you type `sudo -i` and elevate to a sudo prompt (`#`) you won't have the aliases or functions you like to use. To utilize your aliases and functions at the `#` prompt, use: ``` sudo cp "$HOME"/.bashrc /root/.bashrc ``` Where "$HOME" is expanded into "/home/YOUR\_USER\_NAME"
I have a different solution whereby you do not need to add `sudo` as an alias. I run Linux Mint 17.3 but it should be pretty similar to Ubuntu. When you are root, then the `.profile` is run from its home directory. If you do not know what the home directory under root is, then you can check with: ``` sudo su echo $HOME ``` As you can see, the home of `root` is `/root/`. Check its contents: ``` cd $HOME ls -al ``` There should be a `.profile` file. Open the file and add the following lines: ``` if [ "$BASH" ]; then if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi fi ``` Basically, what this bash script does is check for a file called `.bash_aliases`. If the file is present, it executes the file. Save the `.profile` file and create your aliases in `.bash_aliases`. If you already have the aliases file ready, then copy the file to this location. Re-launch the terminal and you are good to go!
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
I wrote a Bash function for it that shadows `sudo`. It checks whether I have an alias for the given command and runs the aliased command instead of the literal one with `sudo` in that case. Here is my function as one-liner: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" ; else command sudo $@ ; fi } ``` Or nicely formatted: ```bsh sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" else command sudo "$@" fi } ``` You can append it to your `.bashrc` file, don't forget to source it or restart your terminal session afterwards to apply the changes though.
If you type `sudo -i` and elevate to a sudo prompt (`#`) you won't have the aliases or functions you like to use. To utilize your aliases and functions at the `#` prompt, use: ``` sudo cp "$HOME"/.bashrc /root/.bashrc ``` Where "$HOME" is expanded into "/home/YOUR\_USER\_NAME"
22,037
I was playing around with aliases today and I noticed that aliases don't seem to be available whilst using `sudo`: ``` danny@kaon:~$ alias alias egrep='egrep --color=auto' alias fgrep='fgrep --color=auto' alias grep='grep --color=auto' alias l='ls -CF' alias la='ls -A' alias ll='ls -alF' alias ls='ls --color=auto' danny@kaon:~$ ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // danny@kaon:~$ sudo -i root@kaon:~# ll -d / drwxr-xr-x 23 root root 4096 2011-01-06 20:29 // root@kaon:~# exit logout danny@kaon:~$ sudo ll -d / sudo: ll: command not found ``` Is there any reason why you cannot use aliases whilst using `sudo`?
2011/01/17
[ "https://askubuntu.com/questions/22037", "https://askubuntu.com", "https://askubuntu.com/users/3583/" ]
[@WinEunuuchs2Unix](https://askubuntu.com/a/853332/830570): `$PWD` expands to the "present working directory". I think you want `$HOME`. Also, for most situations, it's probably best to have a separate root .bashrc file. In fact, I'd make it a real file in `/root`, soft link to it in the user's home directory (e.g., `.bashrc_root`), and source it from the user's `.bashrc` file. If at some later time this privileged user account is no longer present, the root `.bashrc` file is still available for other users.
I have a different solution whereby you do not need to add `sudo` as an alias. I run Linux Mint 17.3 but it should be pretty similar to Ubuntu. When you are root, then the `.profile` is run from its home directory. If you do not know what the home directory under root is, then you can check with: ``` sudo su echo $HOME ``` As you can see, the home of `root` is `/root/`. Check its contents: ``` cd $HOME ls -al ``` There should be a `.profile` file. Open the file and add the following lines: ``` if [ "$BASH" ]; then if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi fi ``` Basically, what this bash script does is check for a file called `.bash_aliases`. If the file is present, it executes the file. Save the `.profile` file and create your aliases in `.bash_aliases`. If you already have the aliases file ready, then copy the file to this location. Re-launch the terminal and you are good to go!
59,903,489
I followed this tutorial <https://forge.autodesk.com/blog/forge-aspnet-zero-hero-30-minutes> and made a web application that can upload and view a model, but I cannot view an old model without uploading it again as I cannot save the URN of the file to send it to viewer again. So, how can I get and save the URN of the file to use it whenever I want to view the model without uploading it again?
2020/01/24
[ "https://Stackoverflow.com/questions/59903489", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12777711/" ]
*Object ID* is typically retrieved using the [Data Management APIs](https://forge.autodesk.com/en/docs/data/v2/reference/http/), for example, when [listing contents of a bucket](https://forge.autodesk.com/en/docs/data/v2/reference/http/buckets-:bucketKey-objects-GET). *URN* is then obtained by base64-encoding the object ID. Here's an [example](https://github.com/petrbroz/learn.forge.viewmodels/blob/net/forgesample/Controllers/OSSController.cs#L157-L161) of how you can base64-encode strings in C#.
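An illustrative sketch of that encoding step (not the linked C# sample; Python is used here purely for illustration, and the bucket/object names are made up). Once you persist the encoded string, you can hand it back to the viewer later without re-uploading the file. Whether padding must be stripped or a URL-safe alphabet used should be checked against the current Forge documentation.

```
import base64

# Hypothetical object ID as returned by the Data Management (OSS) API.
object_id = "urn:adsk.objects:os.object:my-bucket/my-model.rvt"

# The viewer/Model Derivative URN is the base64 encoding of that object ID;
# an unpadded, URL-safe variant is commonly used.
urn = base64.urlsafe_b64encode(object_id.encode("utf-8")).decode("ascii").rstrip("=")

print(urn)  # store this string (e.g. in your database) to view the model again later
```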
I would highly suggest following the new [Tutorial](https://learnforge.autodesk.io/#/?id=learn-autodesk-forge). This will give you a better understanding of how things work. If you just want the urn of an uploaded model, you can log it to the console at a couple of points. The `showModel(urn)` function in ForgeViewer.js is a good place to do so, for example.
2,183,758
I'm confused on a couple of propositions in Bourbaki, Lie Groups and Lie Algebras, Chapter 5. We are given a quadratic form $q$ on $V = \mathbb{R}^n$ which is positive, meaning $q(v) \geq 0$ for all $v$. To $q$ we can associate a symmetric, positive bilinear form $$B(v,w) = \frac{1}{2}[q(v+w) - q(v) - q(w)]$$ We recover $q$ as $q(v) = B(v,v)$. By the *kernel* of $q$, I would normally assume we are talking about one of two things: > > 1 . The set of $v \in V$ such that $q(v) = 0$. > > > 2 . The set of $v \in V$ such that $B(v,w) = 0$ for all $w \in W$. > > > It is straightforward that the second set is contained in the first. In the beginning of Lemma 4, they cite a result in Algebra, Chapter IX which says that since $q$ is positive, they are actually equal. I looked up this result, I still can't figure out if it actually implies what is claimed. Anyway. On the other hand, in the beginning of the proof of Lemma 4, they seem to be using yet another definition. They are saying that if $v = |c\_1|a\_1 + \cdots +|c\_n|a\_n$ is in the kernel of $q$, then $$\sum\limits\_i q\_{ij}|c\_i| = 0$$ for all $j$. That is...not how you compute $q(v)$. You don't treat $q$ as a linear transformation and multiply the column vector. Right? I thought $q(v)$ would be $$\sum\limits\_{i,j} q\_{ij} |c\_ic\_j| $$ [![enter image description here](https://i.stack.imgur.com/6Xrij.png)](https://i.stack.imgur.com/6Xrij.png) [![enter image description here](https://i.stack.imgur.com/2N8CA.png)](https://i.stack.imgur.com/2N8CA.png)
2017/03/12
[ "https://math.stackexchange.com/questions/2183758", "https://math.stackexchange.com", "https://math.stackexchange.com/users/28556/" ]
Added: it appears that their "positive" means positive semidefinite. It is not hard to adjust the argument below to the semidefinite case. As to your earlier question: positive definiteness of the quadratic form says that both versions of the kernel are the single vector $0.$ Think of a symmetric positive definite real matrix $M.$ If we have a column vector $X$ with $X^T M X = 0,$ then $X = 0.$ The bilinear form is $$ B(X,Y) = X^T M Y = Y^T M X. $$ If, for fixed $X,$ we always have $$ ( X^T M) Y = 0, $$ we are allowed to take $Y = MX,$ which proves that $$X^T M = 0, M X = 0.$$ Since $M$ is definite, $$ X = 0 $$
Okay, here's what I'm trying to prove. Let $V = \mathbb{R}^n$, let $M$ be an $n$ by $n$ real symmetric matrix such that $m\_{ij} \leq 0$ for all $i \neq j$, and such that $X^TMX \geq 0$ for all column vectors $X$. The following are equivalent for $X$: (i): $X^TMX = 0$; (ii): $MX = 0$; (iii): $X^TMY = 0$ for all column vectors $Y$. (ii) $\Rightarrow$ (iii) is clear, since $(MX)^T = X^TM^T = X^TM$, and one can then multiply on the right by $Y$. (iii) $\Rightarrow$ (i) is also clear. The difficulty for me is showing that (i) $\Rightarrow$ (ii). Proof of (i) $\Rightarrow$ (ii) in the case $n = 2$: Let $M = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$ where $b \leq 0$. If $X = \begin{pmatrix} x \\ y \end{pmatrix}$, then $$X^TMX = ax^2 + 2bxy + dy^2$$ If $M$ is invertible, we already know that (i) and (ii) are equivalent to $X$ being $0$. So we may assume $ad = b^2$. Under the assumption $ax^2 + 2bxy + dy^2 = 0$, multiply by $b$ and use the fact that $2b^2 = ad + b^2$ to get $$0 = abx^2 + 2b^2xy + bdy^2 = abx^2 + adxy + b^2xy + bdy^2 = (ax + by)(bx + dy)$$ If one of the rows of $M$ is zero, then being symmetric, $M$ must be of the form $\begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}$ or $\begin{pmatrix} 0 & 0 \\ 0 & d \end{pmatrix}$, and these are easy to work out. Otherwise, the determinant of $M$ being zero, we can conclude that $(a,b)$ is a nonzero scalar multiple of $(b,d)$, and so $(ax+by)(bx + dy) = 0$ is equivalent to $ax + by$ and $bx + dy$ both being zero, or in other words, $MX = 0$, which was what I wanted to show.
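Not part of the proof above, just a quick symbolic sanity check (with SymPy) of the factorization step, under the assumption $ad = b^2$ with $a \neq 0$ so that $d$ can be eliminated:

```
import sympy as sp

a, b, d, x, y = sp.symbols('a b d x y')

# b*(a*x^2 + 2*b*x*y + d*y^2) should equal (a*x + b*y)*(b*x + d*y) when a*d = b^2.
lhs = b * (a*x**2 + 2*b*x*y + d*y**2)
rhs = (a*x + b*y) * (b*x + d*y)

# Eliminate d via d = b^2/a (assuming a != 0 for this check).
print(sp.simplify((lhs - rhs).subs(d, b**2 / a)))  # prints 0
```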
2,183,758
I'm confused on a couple of propositions in Bourbaki, Lie Groups and Lie Algebras, Chapter 5. We are given a quadratic form $q$ on $V = \mathbb{R}^n$ which is positive, meaning $q(v) \geq 0$ for all $v$. To $q$ we can associate a symmetric, positive bilinear form $$B(v,w) = \frac{1}{2}[q(v+w) - q(v) - q(w)]$$ We recover $q$ as $q(v) = B(v,v)$. By the *kernel* of $q$, I would normally assume we are talking about one of two things: > > 1 . The set of $v \in V$ such that $q(v) = 0$. > > > 2 . The set of $v \in V$ such that $B(v,w) = 0$ for all $w \in W$. > > > It is straightforward that the second set is contained in the first. In the beginning of Lemma 4, they cite a result in Algebra, Chapter IX which says that since $q$ is positive, they are actually equal. I looked up this result, I still can't figure out if it actually implies what is claimed. Anyway. On the other hand, in the beginning of the proof of Lemma 4, they seem to be using yet another definition. They are saying that if $v = |c\_1|a\_1 + \cdots +|c\_n|a\_n$ is in the kernel of $q$, then $$\sum\limits\_i q\_{ij}|c\_i| = 0$$ for all $j$. That is...not how you compute $q(v)$. You don't treat $q$ as a linear transformation and multiply the column vector. Right? I thought $q(v)$ would be $$\sum\limits\_{i,j} q\_{ij} |c\_ic\_j| $$ [![enter image description here](https://i.stack.imgur.com/6Xrij.png)](https://i.stack.imgur.com/6Xrij.png) [![enter image description here](https://i.stack.imgur.com/2N8CA.png)](https://i.stack.imgur.com/2N8CA.png)
2017/03/12
[ "https://math.stackexchange.com/questions/2183758", "https://math.stackexchange.com", "https://math.stackexchange.com/users/28556/" ]
We have $M$ real, symmetric, and positive semidefinite. We may take an orthogonal matrix $P$ such that $ P^T D P = M,$ where $D$ is real diagonal. Let the dimension be $n.$ Let the first $n-r$ diagonal elements of $D$ be zero, that is $D\_{jj} = 0$ for $1 \leq j \leq n-r.$ Then $D\_{jj} > 0$ for $n-r+1 \leq j \leq n.$ Your condition (i) reads $X^T M X = 0.$ This becomes $X^T P^T D P X = 0.$ Make a new vector $$ Y = PX. $$ We have $$ Y^T D Y = 0. $$ That is $$ \sum\_{j = n-r+1}^n D\_{jj} \; \; y\_j^2 = 0. $$ Positivity says $$y\_j = 0 \; \; \; \; \mbox{for} \; \; \; \; n-r+1 \leq j \leq n.$$ Alright, for $i \leq n-r,$ we have $D\_{ij} = 0.$ So $$ (DY)\_i = \sum\_j D\_{ij} y\_j = 0 \; \; \; \; \mbox{for} \; \; \; \; i \leq n-r. $$ It is different for $i \geq n-r+1$: there $$ (DY)\_i = \sum\_j D\_{ij} y\_j = D\_{ii} y\_i = 0 \; \; \; \; \mbox{for} \; \; \; \; i \geq n-r+1, $$ because $y\_i = 0$ for those $i.$ So condition (i) implies $$ DPX = 0. $$ Therefore $$ P^T D P X = 0, $$ $$ MX = 0. $$ This was your condition (ii). This material comes under the general heading of [Witt's Theorem](https://en.wikipedia.org/wiki/Witt%27s_theorem); it is easier here because we have the real numbers and semidefiniteness, the "null cone" is just a linear subspace. [I like the book by Cassels for this material](http://store.doverpublications.com/0486466701.html). I have a copy and used it for something a week ago, now I can't find it. Found it...
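An illustrative numerical check of the (i) $\Rightarrow$ (ii) step (not part of the answer above), using NumPy: build a random singular positive semidefinite $M$, take $X$ in the null cone via an eigenvector for a zero eigenvalue, and verify that $MX = 0$.

```
import numpy as np

rng = np.random.default_rng(0)

# Random positive semidefinite matrix of rank at most 3 in dimension 5.
A = rng.standard_normal((3, 5))
M = A.T @ A  # symmetric, PSD, singular

# An eigenvector for a (numerically) zero eigenvalue satisfies X^T M X = 0.
eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
X = eigvecs[:, 0]

print(X @ M @ X)              # ~0, condition (i)
print(np.linalg.norm(M @ X))  # ~0, condition (ii)
```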
Added: it appears that their "positive" means positive semidefinite. It is not hard to adjust the argument below to the semidefinite case. As to your earlier question: positive definiteness of the quadratic form says that both versions of the kernel are the single vector $0.$ Think of a symmetric positive definite real matrix $M.$ If we have a column vector $X$ with $X^T M X = 0,$ then $X = 0.$ The bilinear form is $$ B(X,Y) = X^T M Y = Y^T M X. $$ If, for fixed $X,$ we always have $$ ( X^T M) Y = 0, $$ we are allowed to take $Y = MX,$ which proves that $$X^T M = 0, M X = 0.$$ Since $M$ is definite, $$ X = 0 $$
2,183,758
I'm confused on a couple of propositions in Bourbaki, Lie Groups and Lie Algebras, Chapter 5. We are given a quadratic form $q$ on $V = \mathbb{R}^n$ which is positive, meaning $q(v) \geq 0$ for all $v$. To $q$ we can associate a symmetric, positive bilinear form $$B(v,w) = \frac{1}{2}[q(v+w) - q(v) - q(w)]$$ We recover $q$ as $q(v) = B(v,v)$. By the *kernel* of $q$, I would normally assume we are talking about one of two things: > > 1 . The set of $v \in V$ such that $q(v) = 0$. > > > 2 . The set of $v \in V$ such that $B(v,w) = 0$ for all $w \in W$. > > > It is straightforward that the second set is contained in the first. In the beginning of Lemma 4, they cite a result in Algebra, Chapter IX which says that since $q$ is positive, they are actually equal. I looked up this result, I still can't figure out if it actually implies what is claimed. Anyway. On the other hand, in the beginning of the proof of Lemma 4, they seem to be using yet another definition. They are saying that if $v = |c\_1|a\_1 + \cdots +|c\_n|a\_n$ is in the kernel of $q$, then $$\sum\limits\_i q\_{ij}|c\_i| = 0$$ for all $j$. That is...not how you compute $q(v)$. You don't treat $q$ as a linear transformation and multiply the column vector. Right? I thought $q(v)$ would be $$\sum\limits\_{i,j} q\_{ij} |c\_ic\_j| $$ [![enter image description here](https://i.stack.imgur.com/6Xrij.png)](https://i.stack.imgur.com/6Xrij.png) [![enter image description here](https://i.stack.imgur.com/2N8CA.png)](https://i.stack.imgur.com/2N8CA.png)
2017/03/12
[ "https://math.stackexchange.com/questions/2183758", "https://math.stackexchange.com", "https://math.stackexchange.com/users/28556/" ]
We have $M$ real, symmetric, and positive semidefinite. We may take an orthogonal matrix $P$ such that $ P^T D P = M,$ where $D$ is real diagonal. Let the dimension be $n.$ Let the first $n-r$ diagonal elements of $D$ be zero, that is $D\_{jj} = 0$ for $1 \leq j \leq n-r.$ Then $D\_{jj} > 0$ for $n-r+1 \leq j \leq n.$ Your condition (i) reads $X^T M X = 0.$ This becomes $X^T P^T D P X = 0.$ Make a new vector $$ Y = PX. $$ We have $$ Y^T D Y = 0. $$ That is $$ \sum\_{j = n-r+1}^n D\_{jj} \; \; y\_j^2 = 0. $$ Positivity says $$y\_j = 0 \; \; \; \; \mbox{for} \; \; \; \; n-r+1 \leq j \leq n.$$ Alright, for $i \leq n-r,$ we have $D\_{ij} = 0.$ So $$ (DY)\_i = \sum\_j D\_{ij} y\_j = 0 \; \; \; \; \mbox{for} \; \; \; \; i \leq n-r. $$ It is different for $i \geq n-r+1$: there $$ (DY)\_i = \sum\_j D\_{ij} y\_j = D\_{ii} y\_i = 0 \; \; \; \; \mbox{for} \; \; \; \; i \geq n-r+1, $$ because $y\_i = 0$ for those $i.$ So condition (i) implies $$ DPX = 0. $$ Therefore $$ P^T D P X = 0, $$ $$ MX = 0. $$ This was your condition (ii). This material comes under the general heading of [Witt's Theorem](https://en.wikipedia.org/wiki/Witt%27s_theorem); it is easier here because we have the real numbers and semidefiniteness, the "null cone" is just a linear subspace. [I like the book by Cassels for this material](http://store.doverpublications.com/0486466701.html). I have a copy and used it for something a week ago, now I can't find it. Found it...
Okay, here's what I'm trying to prove. Let $V = \mathbb{R}^n$, let $M$ be an $n$ by $n$ real symmetric matrix such that $m\_{ij} \leq 0$ for all $i \neq j$, and such that $X^TMX \geq 0$ for all column vectors $X$. The following are equivalent for $X$: (i): $X^TMX = 0$; (ii): $MX = 0$; (iii): $X^TMY = 0$ for all column vectors $Y$. (ii) $\Rightarrow$ (iii) is clear, since $(MX)^T = X^TM^T = X^TM$, and one can then multiply on the right by $Y$. (iii) $\Rightarrow$ (i) is also clear. The difficulty for me is showing that (i) $\Rightarrow$ (ii). Proof of (i) $\Rightarrow$ (ii) in the case $n = 2$: Let $M = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$ where $b \leq 0$. If $X = \begin{pmatrix} x \\ y \end{pmatrix}$, then $$X^TMX = ax^2 + 2bxy + dy^2$$ If $M$ is invertible, we already know that (i) and (ii) are equivalent to $X$ being $0$. So we may assume $ad = b^2$. Under the assumption $ax^2 + 2bxy + dy^2 = 0$, multiply by $b$ and use the fact that $2b^2 = ad + b^2$ to get $$0 = abx^2 + 2b^2xy + bdy^2 = abx^2 + adxy + b^2xy + bdy^2 = (ax + by)(bx + dy)$$ If one of the rows of $M$ is zero, then being symmetric, $M$ must be of the form $\begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}$ or $\begin{pmatrix} 0 & 0 \\ 0 & d \end{pmatrix}$, and these are easy to work out. Otherwise, the determinant of $M$ being zero, we can conclude that $(a,b)$ is a nonzero scalar multiple of $(b,d)$, and so $(ax+by)(bx + dy) = 0$ is equivalent to $ax + by$ and $bx + dy$ both being zero, or in other words, $MX = 0$, which was what I wanted to show.
172,119
I am trying to write a little voxel engine because it's fun, but struggle to find the best way to store the actual voxels. I'm aware I will need chunks of some sort so I don't need to have the entire world in memory, and I'm am aware I need render them with reasonable performance. I read about octrees and from what I understand it starts with 1 cube, and in that cube can be 8 more cubes, and in all those 8 cubes can be another 8 cubes etc. But I don't think this fits my voxel engine because my voxel cubes/items will all be the exact same size. So another option is to just create an array of 16\*16\*16 size and have that be one chunk, and you fill it with items. And parts where there aren't any items will have 0 as value (0 = air). But I'm afraid this is going to waste a lot of memory and won't be very fast. Then another option is a vector for each chunk, and fill it with cubes. And the cube holds its position in the chunk. This saves memory (no air blocks), but makes looking for a cube at a specific location a lot slower. So I can't really find a good solution, and I'm hoping someone can help me with that. So what would you use and why? But another problem is rendering. Just reading each chunk and sending it to the GPU using OpenGL is easy, but very slow. Generating one mesh per chunk would be better, but that means every time I break one block, I have to rebuild the entire chunk which could take a bit of time causing a minor but noticeable hiccup, which I obviously don't want either. So that would be harder. So how would I render the cubes? Just create all the cubes in one vertex buffer per chunk and render that and maybe try to put that in another thread, or is there another way? Thanks!
2019/05/20
[ "https://gamedev.stackexchange.com/questions/172119", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/-1/" ]
Storing the blocks as the positions and the values is actually very inefficient. Even without any overhead caused by the struct or object you use, you need to store 4 distinct values per block. It would only make sense to use it over the "storing blocks in fixed arrays" method (the one you described earlier) when fewer than a quarter of the blocks are solid, and that is before taking any other optimization methods into account. Octrees are actually great for voxel based games, since they specialize in storing data with larger features (e.g. patches of the same block). To illustrate this, I used a quadtree (basically octrees in 2d): This is my starting set containing 32x32 tiles, which would equal 1024 values: [![enter image description here](https://i.stack.imgur.com/2J6cg.png)](https://i.stack.imgur.com/2J6cg.png) Storing this as 1024 separate values doesn't seem that inefficient, but once you reach map sizes similar to those of games such as [Terraria](https://terraria.gamepedia.com/Map_size), loading screens would take multiple seconds. And if you increase it to the third dimension, it starts to use up all the space in the system. Quadtrees (or octrees in 3d) can help the situation. To create one, you can either go from tiles and group them together, or go from one huge cell and divide it until you reach the tiles. I will use the first approach, because it's easier to visualize. So, in the first iteration you group everything into 2x2 cells, and if a cell only contains tiles of the same type, you drop the tiles and just store the type. After one iteration, our map will look like this: [![enter image description here](https://i.stack.imgur.com/0mL51.png)](https://i.stack.imgur.com/0mL51.png) The red lines mark what we store. Each square is just 1 value. This brought the size down from 1024 values to 439, which is a 57% decrease. But you know the [mantra](https://i.kym-cdn.com/photos/images/newsfeed/000/531/557/a88.jpg). Let's go one step further and group these into larger cells: [![enter image description here](https://i.stack.imgur.com/inefc.png)](https://i.stack.imgur.com/inefc.png) This reduced the amount of stored values to 367. That's only 36% of the original size. You obviously need to do this division until every 4 adjacent cells (8 adjacent blocks in 3d) inside a chunk are stored inside one cell, essentially converting a chunk to one large cell. This also has some other benefits, mainly when doing collision, but you might want to create a separate octree for that, which only cares about whether a single block is solid or not. That way, instead of checking against collision for every block inside a chunk, you can just do it against the cells.
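To make the grouping step above concrete, here is a minimal Python sketch (an illustration, not the answerer's code): one pass over a square grid of tile IDs, collapsing each uniform 2x2 block to a single value and keeping the four children otherwise.

```
def coarsen(grid):
    """One quadtree grouping pass over a 2^k x 2^k grid of tile IDs.
    Uniform 2x2 blocks collapse to a single value; mixed blocks keep
    their four child values as a tuple."""
    n = len(grid)
    out = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            block = (grid[i][j], grid[i][j + 1],
                     grid[i + 1][j], grid[i + 1][j + 1])
            row.append(block[0] if len(set(block)) == 1 else block)
        out.append(row)
    return out

# Tiny 4x4 example: 0 = air, 1 = dirt.
tiles = [
    [0, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
]
print(coarsen(tiles))
# [[0, (1, 1, 1, 0)], [1, 1]] -> three of the four 2x2 blocks collapse to one value
```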
Octrees exist to solve exactly the problem you describe, allowing dense storage of sparse data without large search times. The fact that your voxels are the same size just means that your octree has a fixed depth. eg. for a 16x16x16 chunk, you need at most 5 levels of tree: * chunk root (16x16x16) + first tier octant (8x8x8) - second tier octant (4x4x4) * third tier octant (2x2x2) + single voxel (1x1x1) This means you have at most 5 steps to go to find out whether there's a voxel at a particular position in the chunk: * chunk root: is the whole chunk the same value (eg. all air)? If so, we're done. If not... + first tier: is the octant that contains this position all the same value? If not... - second tier... * third tier... + now we're addressing a single voxel, and can return its value. Much shorter than scanning even 1% of the way through an array of up to 4096 voxels! Notice that this lets us compress the data wherever there's a full octant of the same value - whether that value is all air or all rock or anything else. It's only where octants contain mixed values that we need to subdivide further, down to the limit of single-voxel leaf nodes. --- For addressing the children of a chunk, typically we'll proceed in [Morton order](https://en.wikipedia.org/wiki/Z-order_curve), something like this: 1. X- Y- Z- 2. X- Y- Z+ 3. X- Y+ Z- 4. X- Y+ Z+ 5. X+ Y- Z- 6. X+ Y- Z+ 7. X+ Y+ Z- 8. X+ Y+ Z+ So, our Octree node navigation might look something like this: ``` GetOctreeValue(OctreeNode node, int depth, int3 nodeOrigin, int3 queryPoint) { if(node.IsAllOneValue) return node.Value; int childIndex = 0; childIndex += (queryPoint.x > nodeOrigin.x) ? 4 : 0; childIndex += (queryPoint.y > nodeOrigin.y) ? 2 : 0; childIndex += (queryPoint.z > nodeOrigin.z) ? 1 : 0; OctreeNode child = node.GetChild(childIndex); return GetOctreeValue( child, depth + 1, nodeOrigin + childOffset[depth, childIndex], queryPoint ); } ```
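For comparison, the same fixed-depth descent can be written as runnable Python (an illustrative sketch; the node layout, a `value` key for uniform nodes and a `children` list in Morton order otherwise, is an assumption for illustration, not something taken from the answer above):

```
def get_voxel(node, origin, half, point):
    """Return the voxel value at `point` inside a cube of half-size `half`
    centred on `origin`. Uniform nodes are {'value': v}; split nodes are
    {'children': [8 sub-nodes]} in Morton (Z) order: x -> 4, y -> 2, z -> 1."""
    while 'children' in node:
        index = 0
        index += 4 if point[0] >= origin[0] else 0
        index += 2 if point[1] >= origin[1] else 0
        index += 1 if point[2] >= origin[2] else 0
        half //= 2
        origin = tuple(o + (half if p >= o else -half)
                       for o, p in zip(origin, point))
        node = node['children'][index]
    return node['value']

# A 16x16x16 chunk that is all air except the X+ Y+ Z+ octant, one level deep.
chunk = {'children': [{'value': 'air'}] * 7 + [{'value': 'rock'}]}
print(get_voxel(chunk, origin=(8, 8, 8), half=8, point=(12, 12, 12)))  # rock
print(get_voxel(chunk, origin=(8, 8, 8), half=8, point=(1, 1, 1)))     # air
```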
5,585
In the recycling rules for my community, there is an instruction to thoroughly wash all glass, plastic, and metal containers before depositing them into the recycle bin for collection. Is this step really necessary for the recycling process to occur (or occur without delay)? I've heard that recycling facilities already wash all glass, plastic, and metal material using very effective methods anyhow; why then would the items need to be washed twice?
2011/08/04
[ "https://skeptics.stackexchange.com/questions/5585", "https://skeptics.stackexchange.com", "https://skeptics.stackexchange.com/users/4322/" ]
Here are comments from two of the major recyclers in Australia: [From VISY](http://www.coolmelbourne.org/articles/2011/07/the-cool-rules-of-recycling/): > > **To rinse or not?** > > > Ever feel like you are wasting a ton of water > while rinsing out your recyclables? According to VISY, it is not > necessary to rinse out containers, but you do need to make sure all > food scraps are removed. > > > [From SITA](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Glass) who recently took over WSN: > > Rinse the containers to make sure they are clean and won't attract > pests. > > > [From Planet Ark](http://recyclingweek.planetark.org/recycling-info/how-clean.cfm) (respected environmental group) > > One of the most > common questions about recycling is how clean do jars, cans and pizza > boxes need to be before they can go in the recycling bin. > > > Small amounts of food left don’t interfere with the glass and steel > recycling process. Scrape all the solid food scraps out of jars and > cans and then put them in the recycling bin. If you’re concerned about > having left over food in the bin you can lightly rinse out your jars > and cans. Using left over washing up or rinsing water is best as > there’s no point wasting good water just to wash recycling. > > > Looking at the different methods of processing: **[Aluminium](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Aluminium)** > > Heating the aluminium to a temperature of 700°C changes it into a > liquid state. It is then cast into ingots, ready for delivery to > rolling mills where they are milled and remade into new products. > > > **[Glass](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Glass)** > > The single-colour cullet is put onto a conveyor belt and goes through > a special process called beneficiation, which removes contamination > such as bottle tops, metals, ceramics and labels. The cullet is then > crushed and sent to a glass furnace where it is added to the mix. > > > **[Plastics](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Plastics)** > > The plastics are either shredded, chopped or ground and then washed to > remove further contaminants. The plastic is melted and pushed through > an extruder, a bit like an old fashioned mincer or a spaghetti maker. > It is then cooled and pressed through a die and chopped or pelletised > into granules. It is then ready to be made into new products. > > > It would appear that neither bugs nor food scraps would put a spanner in the works. I assume there would also be some sort of prewash before it got to that stage. It appears that a thorough rinse would be a hygiene thing for your own garbage bin, but not going to affect the process. However it is necessary to remove solid food object.
A 2009 [Slate magazine article](http://www.slate.com/id/2210344/) tackles this question: > > Once you put your recyclables on the curb, they aren't processed right away. > [...] > Now imagine your bottle of half-eaten, four-month-old tartar sauce, lounging about in a stuffy warehouse and getting riper by the day. Not pleasant, is it? > > > The author, Nina Shen Rastogi, goes on to quote an [unreferenced anecdote about an anonymous person's mother-in-law](http://answers.yahoo.com/question/index?qid=20081010063128AAuNXDO) from a competing Q&A site. This emphasizes to me that the Slate article, itself, isn't from a peer-reviewed journal, and is a pretty ordinary reference. Can anyone do better?
5,585
In the recycling rules for my community, there is an instruction to thoroughly wash all glass, plastic, and metal containers before depositing them into the recycle bin for collection. Is this step really necessary for the recycling process to occur (or occur without delay)? I've heard that recycling facilities already wash all glass, plastic, and metal material using very effective methods anyhow; why then would the items need to be washed twice?
2011/08/04
[ "https://skeptics.stackexchange.com/questions/5585", "https://skeptics.stackexchange.com", "https://skeptics.stackexchange.com/users/4322/" ]
Here are comments from two of the major recyclers in Australia: [From VISY](http://www.coolmelbourne.org/articles/2011/07/the-cool-rules-of-recycling/): > > **To rinse or not?** > > > Ever feel like you are wasting a ton of water > while rinsing out your recyclables? According to VISY, it is not > necessary to rinse out containers, but you do need to make sure all > food scraps are removed. > > > [From SITA](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Glass) who recently took over WSN: > > Rinse the containers to make sure they are clean and won't attract > pests. > > > [From Planet Ark](http://recyclingweek.planetark.org/recycling-info/how-clean.cfm) (respected environmental group) > > One of the most > common questions about recycling is how clean do jars, cans and pizza > boxes need to be before they can go in the recycling bin. > > > Small amounts of food left don’t interfere with the glass and steel > recycling process. Scrape all the solid food scraps out of jars and > cans and then put them in the recycling bin. If you’re concerned about > having left over food in the bin you can lightly rinse out your jars > and cans. Using left over washing up or rinsing water is best as > there’s no point wasting good water just to wash recycling. > > > Looking at the different methods of processing: **[Aluminium](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Aluminium)** > > Heating the aluminium to a temperature of 700°C changes it into a > liquid state. It is then cast into ingots, ready for delivery to > rolling mills where they are milled and remade into new products. > > > **[Glass](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Glass)** > > The single-colour cullet is put onto a conveyor belt and goes through > a special process called beneficiation, which removes contamination > such as bottle tops, metals, ceramics and labels. The cullet is then > crushed and sent to a glass furnace where it is added to the mix. > > > **[Plastics](http://www.wsn.com.au/dir138/wsn.nsf/Content/Education%20and%20Safety_Facts%20and%20Figures%20Plastics)** > > The plastics are either shredded, chopped or ground and then washed to > remove further contaminants. The plastic is melted and pushed through > an extruder, a bit like an old fashioned mincer or a spaghetti maker. > It is then cooled and pressed through a die and chopped or pelletised > into granules. It is then ready to be made into new products. > > > It would appear that neither bugs nor food scraps would put a spanner in the works. I assume there would also be some sort of prewash before it got to that stage. It appears that a thorough rinse would be a hygiene thing for your own garbage bin, but not going to affect the process. However it is necessary to remove solid food object.
As an additional point to the existing excellent answers, if you have commingled recyclables, leaving food on the containers can lead to [paper contamination](https://americanrecyclingca.com/2011/05/paper/contamination-in-paper-recycling/). > > Contamination in paper recycling is a serious issue, with negative effects ranging from the strictly financial, to the health and safety of industry workers. The rapid expansion of recycling programs has seen a commensurate rise in contamination of collected recyclables. The trend towards single-stream curbside recycling (where paper and other recyclables are commingled with refuse and sorted at a processing facility) has brought contamination to the forefront of debate. > > > Contamination in paper recycling can refer to soiling of paper with food, grease, chemicals, or other noxious compounds, or to the inclusion of inappropriate material for the intended paper grade. > > > Simple soiling is easy to understand; once you’ve used a newspaper to soak up transmission fluid, for example, it is no longer recyclable. Food can also be a source of contamination, which often comes as a surprise. The truth of the matter is that it is difficult to separate pizza grease (or other food contaminants) from paper fibers. This is a major issue in the hotly contested debate surrounding single-stream recycling, as food contamination seems inevitable. > > > Even if you're not throwing open cans of spaghetti sauce on your newspapers, if the bin is placed outside, there's a good chance that rain and dew can lead to the contents of the can pouring out over the paper (admittedly, leaving the paper out in the rain seems like it will degrade it anyhow, but that's probably another matter).
41,314,073
When I try to run command like - ``` php artisan db:seed ``` it is saying - ``` ************************************** * Application In Production! * ************************************** Do you really wish to run this command? (yes/no) [no]: ``` My .env file says - ``` .... APP_ENV=local APP_KEY=... APP_DEBUG=true APP_LOG_LEVEL=debug APP_URL=http://localhost .... ``` my app.php file says ``` ... 'env' => env('APP_ENV', 'local'), ... ``` My laravel version is 5.3. What setting should I do to run the application in development mode?
2016/12/24
[ "https://Stackoverflow.com/questions/41314073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6847223/" ]
Your settings are correct. The old value is most likely coming from a cached config, so rebuild the config cache: ``` php artisan config:cache ```
Please run the following command in the terminal or cmd: ``` php artisan up ``` It will bring you out of maintenance mode.
574,264
I work on an environment for setting up exercises with zero or more associated hints. If I try to insert verbatim content in the body of the exercise, the latex interpreter hangs without specifying an error. A minimal example that reproduces this issue is shown here: ``` \documentclass{article} \usepackage{listings} \ExplSyntaxOn % Create the exercise environment \NewDocumentEnvironment{ex}{O{}+b}{% \par\noindent #2}{% % End exercise } \ExplSyntaxOff \begin{document} \section{First section} \begin{ex} Create a new virtual working environment for python % When the lstlisting environment below is uncommented % latex hangs with this information in the console: % Package Listings Warning: Text dropped after begin of listing on input line 24. % % % (/home/henrik/.TinyTeX/texmf-dist/tex/latex/base/omscmr.fd)) % * %\begin{lstlisting} %pipenv install opencv-python %\end{lstlisting} \end{ex} \end{document} ``` As I interpret this answer <https://tex.stackexchange.com/a/489459/1366> by David Carlisle it may not be possible to achieve what I want. The syntax I want to use in the final version is as follows ``` \begin{ex} Description of the exercise. Run the following command %\begin{verbatim} %print("Hello world") %\end{verbatim} \begin{hint} Run the example from the command line with the python command \end{hint} \begin{hint} The solution is 42. \end{hint} \end{ex} ``` When parsing this a headline should be included that presents the number of the exercise and includes links to the included hints (two in this case) which are inserted later in the document. To do so I have to parse the content / body of the ex environment. The output should look like this, the code for generating this is included at the end of the question. [![Example output](https://i.stack.imgur.com/vMjdU.png)](https://i.stack.imgur.com/vMjdU.png) ``` \documentclass{article} \usepackage{amsmath} \usepackage[colorlinks, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \newcounter{ex} \numberwithin{ex}{section} \newcounter{hint} \numberwithin{hint}{ex} \newcounter{solution} \numberwithin{solution}{ex} \makeatletter \newcommand{\linkdest}[1]{\raisebox{1.7\baselineskip}[0pt][0pt]{\hypertarget{#1}{}}} \makeatother \ExplSyntaxOn % Define variables for storing the number of hints % and solutions given in the exercise. \int_new:N \l_hintenv_int \int_new:N \l_solenv_int % Open files for storing hints and solutions. \iow_new:N \g_hintfile_iow \iow_new:N \g_solutionfile_iow \iow_open:Nn \g_hintfile_iow {hintfile.tex} \iow_open:Nn \g_solutionfile_iow {solutionfile.tex} % Define strings to use in macros. \tl_new:N \g_text_solution_tl \tl_set:Nn \g_text_solution_tl { ~Solution:~ } \tl_new:N \g_text_solution_head_tl \tl_set:Nn \g_text_solution_head_tl { Solutino } \tl_new:N \g_text_hint_tl \tl_set:Nn \g_text_hint_tl { ~Hint:~ } \tl_new:N \g_text_exercise_tl \tl_set:Nn \g_text_exercise_tl { Exercise~ } \tl_new:N \g_back_to_exercise_tl \tl_set:Nn \g_back_to_exercise_tl { Back~to~exercise~ } % Create the exercise environment \NewDocumentEnvironment{ex}{O{}+b}{% % Start exercise \bigbreak \refstepcounter{ex} \label{exercise\theex} \noindent \textbf{\g_text_exercise_tl\theex{}:~#1} \hfill % Run a regular expression on the body of the % exercise to count the number of hints present % and store that number in a variable. \regex_count:nnN {\c{begin}\{hint\}} {#2} \l_hintenv_int \regex_count:nnN {\c{begin}\{sol\}} {#2} \l_solenv_int % If at least one hint is provided start a list with % links to the inserted hints. 
\int_compare:nTF { \l_hintenv_int > 0 } { \g_text_hint_tl } { } % For all integers in the range from one to % the number of inserted hints do. \int_step_variable:nNn {\l_hintenv_int} \l_iterator_tl{ \int_compare:nTF { \l_iterator_tl > 1 } { ,~ } { } \hyperlink{hint\theex.\l_iterator_tl}{\l_iterator_tl} } % If at least one solution is provided start a list with % links to the inserted solutions. \int_compare:nTF { \l_solenv_int > 0 } { \g_text_solution_tl } { } % For all integers in the range from one to % the number of inserted solutions do. \int_step_variable:nNn {\l_solenv_int} \l_iterator_tl{ \int_compare:nTF { \l_iterator_tl > 1 } { ,~ } { } \hyperlink{solution\theex.\l_iterator_tl}{\l_iterator_tl} } \par\noindent #2}{% % End exercise } \NewDocumentEnvironment{hint}{O{}+b}{% % hint start \refstepcounter{hint} \tl_set:Nx \l_temp_tl { hint\thehint } \iow_now:Nx \g_hintfile_iow { \par\noindent} \iow_now:Nx \g_hintfile_iow { \exp_not:N \textbf{Hint~\arabic{hint}~to~exercise~\theex}} \iow_now:Nx \g_hintfile_iow { \hfill \g_back_to_exercise_tl } \iow_now:Nx \g_hintfile_iow { \exp_not:N \ref{exercise\theex } } \iow_now:Nx \g_hintfile_iow { \par\noindent} \iow_now:Nx \g_hintfile_iow { \exp_not:N \linkdest{ \l_temp_tl } } \iow_now:Nx \g_hintfile_iow { \exp_not:N \vspace{-0.4cm}\par\noindent} \iow_now:Nn \g_hintfile_iow { #2 } \iow_now:Nn \g_hintfile_iow { \bigskip} \iow_now:Nn \g_hintfile_iow { \filbreak} }{ % hint end } \NewDocumentEnvironment{sol}{O{}+b}{ % hint start \refstepcounter{solution} \tl_set:Nx \l_temp_tl { solution\thesolution } \iow_now:Nx \g_solutionfile_iow { \par\noindent } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \textbf{Solution~\arabic{solution}~to~exercise~\theex}} \iow_now:Nx \g_solutionfile_iow { \hfill \g_back_to_exercise_tl } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \ref{exercise\theex } } \iow_now:Nx \g_solutionfile_iow { \par\noindent} \iow_now:Nx \g_solutionfile_iow { \exp_not:N \linkdest{ \l_temp_tl } } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \vspace{-0.4cm}\par\noindent} \iow_now:Nn \g_solutionfile_iow { #2 } \iow_now:Nn \g_solutionfile_iow { \bigskip} \iow_now:Nn \g_solutionfile_iow { \filbreak} }{ % hint end } % Define command for closing the two files used % for storing hints and solutions. \NewDocumentCommand{\closehintandsolutionfile}{}{ \iow_close:N \g_hintfile_iow \iow_close:N \g_solutionfile_iow } \ExplSyntaxOff \begin{document} \section{Exercises} \begin{ex} Description of the exercise. Run the following command %\begin{verbatim} %print("Hello world") %\end{verbatim} \begin{hint} Run the example from the command line with the python command \end{hint} \begin{hint} The solution is 42. \end{hint} \end{ex} \closehintandsolutionfile \section{Hints} \input{hintfile.tex} \end{document} ```
2020/12/09
[ "https://tex.stackexchange.com/questions/574264", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/1366/" ]
If all else fails, you can place and save the verbatim into a box before entering the `ex` environment. ``` \documentclass{article} \usepackage{amsmath} \usepackage[colorlinks, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \newcounter{ex} \numberwithin{ex}{section} \newcounter{hint} \numberwithin{hint}{ex} \newcounter{solution} \numberwithin{solution}{ex} \makeatletter \newcommand{\linkdest}[1]{\raisebox{1.7\baselineskip}[0pt][0pt]{\hypertarget{#1}{}}} \makeatother \ExplSyntaxOn % Define variables for storing the number of hints % and solutions given in the exercise. \int_new:N \l_hintenv_int \int_new:N \l_solenv_int % Open files for storing hints and solutions. \iow_new:N \g_hintfile_iow \iow_new:N \g_solutionfile_iow \iow_open:Nn \g_hintfile_iow {hintfile.tex} \iow_open:Nn \g_solutionfile_iow {solutionfile.tex} % Define strings to use in macros. \tl_new:N \g_text_solution_tl \tl_set:Nn \g_text_solution_tl { ~Solution:~ } \tl_new:N \g_text_solution_head_tl \tl_set:Nn \g_text_solution_head_tl { Solutino } \tl_new:N \g_text_hint_tl \tl_set:Nn \g_text_hint_tl { ~Hint:~ } \tl_new:N \g_text_exercise_tl \tl_set:Nn \g_text_exercise_tl { Exercise~ } \tl_new:N \g_back_to_exercise_tl \tl_set:Nn \g_back_to_exercise_tl { Back~to~exercise~ } % Create the exercise environment \NewDocumentEnvironment{ex}{O{}+b}{% % Start exercise \bigbreak \refstepcounter{ex} \label{exercise\theex} \noindent \textbf{\g_text_exercise_tl\theex{}:~#1} \hfill % Run a regular expression on the body of the % exercise to count the number of hints present % and store that number in a variable. \regex_count:nnN {\c{begin}\{hint\}} {#2} \l_hintenv_int \regex_count:nnN {\c{begin}\{sol\}} {#2} \l_solenv_int % If at least one hint is provided start a list with % links to the inserted hints. \int_compare:nTF { \l_hintenv_int > 0 } { \g_text_hint_tl } { } % For all integers in the range from one to % the number of inserted hints do. \int_step_variable:nNn {\l_hintenv_int} \l_iterator_tl{ \int_compare:nTF { \l_iterator_tl > 1 } { ,~ } { } \hyperlink{hint\theex.\l_iterator_tl}{\l_iterator_tl} } % If at least one solution is provided start a list with % links to the inserted solutions. \int_compare:nTF { \l_solenv_int > 0 } { \g_text_solution_tl } { } % For all integers in the range from one to % the number of inserted solutions do. 
\int_step_variable:nNn {\l_solenv_int} \l_iterator_tl{ \int_compare:nTF { \l_iterator_tl > 1 } { ,~ } { } \hyperlink{solution\theex.\l_iterator_tl}{\l_iterator_tl} } \par\noindent #2}{% % End exercise } \NewDocumentEnvironment{hint}{O{}+b}{% % hint start \refstepcounter{hint} \tl_set:Nx \l_temp_tl { hint\thehint } \iow_now:Nx \g_hintfile_iow { \par\noindent} \iow_now:Nx \g_hintfile_iow { \exp_not:N \textbf{Hint~\arabic{hint}~to~exercise~\theex}} \iow_now:Nx \g_hintfile_iow { \hfill \g_back_to_exercise_tl } \iow_now:Nx \g_hintfile_iow { \exp_not:N \ref{exercise\theex } } \iow_now:Nx \g_hintfile_iow { \par\noindent} \iow_now:Nx \g_hintfile_iow { \exp_not:N \linkdest{ \l_temp_tl } } \iow_now:Nx \g_hintfile_iow { \exp_not:N \vspace{-0.4cm}\par\noindent} \iow_now:Nn \g_hintfile_iow { #2 } \iow_now:Nn \g_hintfile_iow { \bigskip} \iow_now:Nn \g_hintfile_iow { \filbreak} }{ % hint end } \NewDocumentEnvironment{sol}{O{}+b}{ % hint start \refstepcounter{solution} \tl_set:Nx \l_temp_tl { solution\thesolution } \iow_now:Nx \g_solutionfile_iow { \par\noindent } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \textbf{Solution~\arabic{solution}~to~exercise~\theex}} \iow_now:Nx \g_solutionfile_iow { \hfill \g_back_to_exercise_tl } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \ref{exercise\theex } } \iow_now:Nx \g_solutionfile_iow { \par\noindent} \iow_now:Nx \g_solutionfile_iow { \exp_not:N \linkdest{ \l_temp_tl } } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \vspace{-0.4cm}\par\noindent} \iow_now:Nn \g_solutionfile_iow { #2 } \iow_now:Nn \g_solutionfile_iow { \bigskip} \iow_now:Nn \g_solutionfile_iow { \filbreak} }{ % hint end } % Define command for closing the two files used % for storing hints and solutions. \NewDocumentCommand{\closehintandsolutionfile}{}{ \iow_close:N \g_hintfile_iow \iow_close:N \g_solutionfile_iow } \ExplSyntaxOff \usepackage{verbatimbox} \begin{document} \section{Exercises} \begin{myverbbox}{\hw} print("Hello world") Verbatim &^%$&\content \end{myverbbox} \begin{ex} Description of the exercise. Run the following command \smallskip\noindent\hw \begin{hint} Run the example from the command line with the python command \end{hint} \begin{hint} The solution is 42. \end{hint} \end{ex} \closehintandsolutionfile \section{Hints} \input{hintfile.tex} \end{document} ``` [![enter image description here](https://i.stack.imgur.com/CMRPf.jpg)](https://i.stack.imgur.com/CMRPf.jpg)
If LuaTeX is available, you can store and process verbatim content on Lua side, which helps avoid this problem. ``` \documentclass{article} \usepackage{amsmath} \usepackage{luacode} \usepackage{expl3, xparse} \usepackage[colorlinks, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \newcounter{ex} \numberwithin{ex}{section} \newcounter{hint} \numberwithin{hint}{ex} \newcounter{solution} \numberwithin{solution}{ex} \makeatletter \newcommand{\linkdest}[1]{\raisebox{1.7\baselineskip}[0pt][0pt]{\hypertarget{#1}{}}} \makeatother \begin{luacode*} verb_table = {} function store_lines (str) texio.write_nl("line:"..str) if string.find (str , [[\end{ex}]] ) then luatexbase.remove_from_callback ( "process_input_buffer" , "store_lines") return [[\end{ex}]] else if str[1] ~= "%" then table.insert(verb_table, str) end end return "" end function register_verbatim() verb_table = {} luatexbase.add_to_callback( "process_input_buffer" , store_lines , "store_lines") end \end{luacode*} \ExplSyntaxOn \newcommand{\CurVerbatim}{} \newcommand{\BeginEx}{ \directlua{ register_verbatim() } } % Define variables for storing the number of hints % and solutions given in the exercise. \int_new:N \l_hintenv_int \int_new:N \l_solenv_int % Open files for storing hints and solutions. \iow_new:N \g_hintfile_iow \iow_new:N \g_solutionfile_iow \iow_open:Nn \g_hintfile_iow {hintfile.tex} \iow_open:Nn \g_solutionfile_iow {solutionfile.tex} % Define strings to use in macros. \tl_new:N \g_text_solution_tl \tl_set:Nn \g_text_solution_tl { ~Solution:~ } \tl_new:N \g_text_solution_head_tl \tl_set:Nn \g_text_solution_head_tl { Solutino } \tl_new:N \g_text_hint_tl \tl_set:Nn \g_text_hint_tl { ~Hint:~ } \tl_new:N \g_text_exercise_tl \tl_set:Nn \g_text_exercise_tl { Exercise~ } \tl_new:N \g_back_to_exercise_tl \tl_set:Nn \g_back_to_exercise_tl { Back~to~exercise~ } \cs_generate_variant:Nn \regex_count:nnN {nVN} % Create the exercise environment \NewDocumentEnvironment{ex}{O{}}{% % Begin excercise % capture verbatim on Lua side \BeginEx }{% % End excercise % retreive the content from lua side % save it in \CurVerbatim \directlua{ token.set_macro("CurVerbatim", table.concat(verb_table, "~")) } \bigbreak \refstepcounter{ex} \label{exercise\theex} \noindent \textbf{\g_text_exercise_tl\theex{}:~#1} \hfill % Run a regular expression on the body of the % exercise to count the number of hints present % and store that number in a variable. \regex_count:nVN {\c{begin}\{hint\}} \CurVerbatim \l_hintenv_int \regex_count:nVN {\c{begin}\{sol\}} \CurVerbatim \l_solenv_int % If at least one hint is provided start a list with % links to the inserted hints. \int_compare:nTF { \l_hintenv_int > 0 } { \g_text_hint_tl } { } % For all integers in the range from one to % the number of inserted hints do. \int_step_variable:nNn {\l_hintenv_int} \l_iterator_tl{ \int_compare:nTF { \l_iterator_tl > 1 } { ,~ } { } \hyperlink{hint\theex.\l_iterator_tl}{\l_iterator_tl} } % If at least one solution is provided start a list with % links to the inserted solutions. \int_compare:nTF { \l_solenv_int > 0 } { \g_text_solution_tl } { } % For all integers in the range from one to % the number of inserted solutions do. 
\int_step_variable:nNn {\l_solenv_int} \l_iterator_tl{ \int_compare:nTF { \l_iterator_tl > 1 } { ,~ } { } \hyperlink{solution\theex.\l_iterator_tl}{\l_iterator_tl} } \par\noindent % write verbatim content out and read it back \directlua{ local~verb = table.concat(verb_table, "\string\n") local~file = io.open(tex.jobname..".tmp", "w") file:write(verb) file:close() } \input{\jobname.tmp} } \NewDocumentEnvironment{hint}{O{}+b}{% % hint start \refstepcounter{hint} \tl_set:Nx \l_temp_tl { hint\thehint } \iow_now:Nx \g_hintfile_iow { \par\noindent} \iow_now:Nx \g_hintfile_iow { \exp_not:N \textbf{Hint~\arabic{hint}~to~exercise~\theex}} \iow_now:Nx \g_hintfile_iow { \hfill \g_back_to_exercise_tl } \iow_now:Nx \g_hintfile_iow { \exp_not:N \ref{exercise\theex } } \iow_now:Nx \g_hintfile_iow { \par\noindent} \iow_now:Nx \g_hintfile_iow { \exp_not:N \linkdest{ \l_temp_tl } } \iow_now:Nx \g_hintfile_iow { \exp_not:N \vspace{-0.4cm}\par\noindent} \iow_now:Nn \g_hintfile_iow { #2 } \iow_now:Nn \g_hintfile_iow { \bigskip} \iow_now:Nn \g_hintfile_iow { \filbreak} }{ % hint end } \NewDocumentEnvironment{sol}{O{}+b}{ % hint start \refstepcounter{solution} \tl_set:Nx \l_temp_tl { solution\thesolution } \iow_now:Nx \g_solutionfile_iow { \par\noindent } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \textbf{Solution~\arabic{solution}~to~exercise~\theex}} \iow_now:Nx \g_solutionfile_iow { \hfill \g_back_to_exercise_tl } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \ref{exercise\theex } } \iow_now:Nx \g_solutionfile_iow { \par\noindent} \iow_now:Nx \g_solutionfile_iow { \exp_not:N \linkdest{ \l_temp_tl } } \iow_now:Nx \g_solutionfile_iow { \exp_not:N \vspace{-0.4cm}\par\noindent} \iow_now:Nn \g_solutionfile_iow { #2 } \iow_now:Nn \g_solutionfile_iow { \bigskip} \iow_now:Nn \g_solutionfile_iow { \filbreak} }{ % hint end } % Define command for closing the two files used % for storing hints and solutions. \NewDocumentCommand{\closehintandsolutionfile}{}{ \iow_close:N \g_hintfile_iow \iow_close:N \g_solutionfile_iow } \ExplSyntaxOff \begin{document} \section{Exercises} \begin{ex} Description of the exercise. Run the following command \begin{verbatim} print("Hello world") \end{verbatim} % need one blank line after verbatim, otherwise excaptions occur \begin{hint} Run the example from the command line with the python command \end{hint} \begin{hint} The solution is 42. \end{hint} \end{ex} \closehintandsolutionfile \section{Hints} \input{hintfile.tex} \end{document} ```
57,844
In a Hamiltonian system [Chirikov's resonance overlap criterion](http://www.scholarpedia.org/article/Chirikov_criterion) approximately predicts the onset of chaotic behavior. Furthermore in a system where resonances overlap, the strengths of the resonances and their frequency differences can be used to approximate diffusion coefficients (as explored by Chirikov in '79). The overlap criterion is easy to estimate and so often used for physical systems. I was surprised to hear that there are non-linear systems that appear to satisfy a resonance overlap criterion but do not exhibit chaotic behavior. Is there a simple example of such a system? What are the properties of such systems? The site above refers to the Toda lattice but I am not gaining any intuition from it. Some background --- One can describe the KAM theorem in terms decaying Fourier coefficients for the perturbation and iterative perturbation theory for the Hamiltonian system. At some level in the perturbation theory the Fourier coefficients are sufficiently small that they no longer overlap and so the perturbation expansion must converge (and so you get an integrable model or tori). The focus here is on the existence of tori not on the onset of chaos. The reason for my interest is I am tempted to try and classify N-body systems based on width of analyticity of their perturbations (setting the decay rates of their Fourier coefficients) but the easiest way I know to do this is to count resonances and devise a way to estimate when they fill phase space.
2011/08/16
[ "https://math.stackexchange.com/questions/57844", "https://math.stackexchange.com", "https://math.stackexchange.com/users/13292/" ]
This is a very partial answer to your question, but let's see if this helps. From a very heuristic perspective, in the KAM case the onset of chaos is a result of the Smale horseshoes generated by the homoclinic tangles. Of course, not every homoclinic tangle has the generating mechanism for the Smale horseshoe, and this is where the Chirikov condition comes in. If the underlying system is constructed in such a way that the system satisfies (many) resonances, but these resonances are actually symmetries of the system, then you can 'fool' the Chirikov condition into thinking that chaotic dynamics takes place while the system as a whole might be integrable.
A system with intermittency is a good example. There are actually different attractors that can exist in the system's phase space. These attractors can occasionally appear, disappear and merge with each other. So, you can have a strange attractor (near which the system is actively mixed) in some domain of phase space and a limit cycle or torus (predictability and integrability to some extent) in another domain. Speaking in the language of frequencies and resonances, this means that you can have resonance overlap at one time, and then some order can emerge in the system so that the overlapping no longer exists... In other words, Chirikov's resonance overlap criterion corresponds to a KAM system. But the situation would be completely different when higher-order terms are included in the perturbation series...
129,249
I want to be able to use taskwarrior at work. But the computers at work don't allow me to install anything, they all run Win XP, and IE... So, I would like to somehow SSH to a linux box at home, and do it through a browser. I should mention that I'd be working with a dynamic IP. Is this possible? If so, what is the simplest way to do this?
2014/05/13
[ "https://unix.stackexchange.com/questions/129249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23295/" ]
That's simply: ``` tail -fn+1 file ``` `-f` to follow, `-n+1` for tail to start from the first line (the beginning of the file).
Another solution is to use the follow feature in `less`. ``` less +F file ``` You can also enter follow mode from within `less` by pressing `Shift+f`. `Ctrl+c` exits follow mode, at which point the normal `less` functionality is returned.
25,525,612
I'm a newbie and want to implement advanced search. I have two models, articles and books, and I am using the sunspot gem for search. This is my articles model ``` class Article < ActiveRecord::Base searchable do text :title text :content end end ``` and this is my books model ``` class Book < ActiveRecord::Base searchable do text :title text :description end end ``` I have tried to implement a search form where the user can select which category they want to search, like this search form ![enter image description here](https://i.stack.imgur.com/xmgNo.png) but I have not gotten it to work, so I would appreciate any help with how I can do it.
2014/08/27
[ "https://Stackoverflow.com/questions/25525612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3304070/" ]
Let's say you want to send notifications by e-mail when a ticket is ready to be reviewed. People responsible for the review are set via a `Reviewer` custom field (which can contain multiple values). Then you can send notifications as follows: ``` var entities = require('@jetbrains/youtrack-scripting-api/entities'); exports.rule = entities.Issue.stateMachine({ title: 'Workflow', fieldName: 'State', states: { 'To Be Reviewed': { onEnter: function(ctx) { var issue = ctx.issue; issue.fields.Reviewer.forEach(function(user) { user.notify("Reminder", "This is a reminder", true); }); }, transitions: {} }, }, requirements: { Reviewer: { type: entities.User.fieldType, multi: true } } }); ```
You can create a custom workflow like the following one: ``` when { if (Interested Parties.isNotEmpty) { for each user in Interested Parties { user.notify("subj", "body"); } } } ``` Another point is that you probably do not need this field since you can 'star' an issue on behalf of a user and the user thus will be notified about any changes. Just type *star user\_name* in the command window.
388,430
I have a given limit that depends on a variable $a$: $$\lim\_{x \rightarrow \infty} \left (\frac{e^{ax}}{1 - ax} \right)$$ I understand the cases $a < 0 \implies \lim = 0$ and $a > 0 \implies \lim = -\infty$. However, for the case $a = 0$, the expression $ax$, which is basically $0\cdot \infty$, is undefined. I somehow know that the result will be $\lim (\frac{e^0}{1}) = 1$, but I am not sure how to justify that $0\cdot\infty$ is $0$ in this case. Thanks for any ideas or an explanation!
2013/05/11
[ "https://math.stackexchange.com/questions/388430", "https://math.stackexchange.com", "https://math.stackexchange.com/users/43803/" ]
If we choose the value $a=0$, we evaluate the expression $\frac{e^{ax}}{1 - ax}$ before passing to the limit, so your result would be $$\lim\_{x\to\infty}\left(\left[\frac{e^{ax}}{1 - ax}\right]\_{a=0}\right)=\lim\_{x\to\infty}\frac{e^0}{1-0}=1$$
There is *no* “indetermination” in $e^{ax}$ when $a=0$: it just means $e^0=1$, because $0x=0$. You don't compute such a limit by plugging in $\infty$ in place of $x$, which wouldn't make sense. You *can*, however, use that 1. $\lim\_{x\to\infty}e^{ax}=\infty$ (for $a>0$); 2. $\lim\_{x\to\infty}e^{ax}=0$ (for $a<0$). But this is different from simply plugging in $\infty$. For instance, in the case of $a<0$, you can conclude that $$ \lim\_{x\to\infty}\frac{e^{ax}}{1-ax}=0 $$ because the numerator has $0$ limit and the denominator has $\infty$ limit. On the other hand, you cannot immediately draw a conclusion in case $a>0$, since the numerator and the denominator have limit $\infty$ and $-\infty$, respectively. For this you can do a simple application of L’Hôpital's theorem: $$ \lim\_{x\to\infty}\frac{e^{ax}}{1-ax} \overset{(H)}{=} \lim\_{x\to\infty}\frac{ae^{ax}}{-a} = \lim\_{x\to\infty}-e^{ax}=-\infty $$ For the case $a=0$ you simply have $$ \lim\_{x\to\infty}\frac{e^{ax}}{1-ax} = \lim\_{x\to\infty}\frac{1}{1}=1 $$
13,898,931
I seem to have hit a bump. I'm creating an "Economy" system for a Minecraft Bukkit server. I'm trying to order the table by "Richest" first; however, the order being received is different. When I run the SQL through phpMyAdmin it is received in the correct order ![enter image description here](https://i.stack.imgur.com/b2jSa.jpg) ![enter image description here](https://i.stack.imgur.com/WLcZw.png) ``` public static HashMap<String, Double> topPlayers(String economyKey) { sql.build("SELECT b.balance, p.username FROM " + sql.prefix + "players p INNER JOIN " + sql.prefix + "balances b ON p.id=b.user_id WHERE economy_key=? ORDER BY b.balance DESC LIMIT 0,5"); String[] params = { economyKey }; ResultSet results = sql.executePreparedQuery(params); HashMap<String, Double> players = new HashMap<String, Double>(); try { while (results.next()) { players.put(results.getString("username"), results.getDouble("balance")); } } catch (SQLException e) { e.printStackTrace(); } return players; } ```
2012/12/16
[ "https://Stackoverflow.com/questions/13898931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1509091/" ]
You're using a HashMap, which is not ordered. Try to use a [*List*](http://docs.oracle.com/javase/6/docs/api/java/util/LinkedList.html) or any other ordered data structure - it'll solve your problem.
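For instance, here is a minimal sketch of the `topPlayers` method from the question rewritten around a `LinkedHashMap`, which preserves insertion order (the `sql` helper, its methods, and the column names are taken from the question; treat everything else as an assumption, not a tested implementation):

```java
// Sketch only: assumes the question's `sql` helper and table layout.
// Requires java.util.LinkedHashMap and java.util.Map in addition to the existing imports.
public static Map<String, Double> topPlayers(String economyKey) {
    sql.build("SELECT b.balance, p.username FROM " + sql.prefix + "players p"
            + " INNER JOIN " + sql.prefix + "balances b ON p.id=b.user_id"
            + " WHERE economy_key=? ORDER BY b.balance DESC LIMIT 0,5");
    String[] params = { economyKey };
    ResultSet results = sql.executePreparedQuery(params);

    // LinkedHashMap keeps entries in the order they were inserted,
    // so the ORDER BY b.balance DESC from the query is preserved when iterating.
    Map<String, Double> players = new LinkedHashMap<>();
    try {
        while (results.next()) {
            players.put(results.getString("username"), results.getDouble("balance"));
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return players;
}
```

Iterating over `players.entrySet()` then yields the richest players first, in the same order the database returned them.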
You are using a `HashMap`, which doesn't guarantee the ordering of elements. This is what the API docs for [java.util.HashMap](http://docs.oracle.com/javase/6/docs/api/java/util/HashMap.html) say: ***This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.*** You should rather go for a concrete class that implements the `SortedMap` interface, which is ***A Map that further provides a total ordering on its keys***. For example, a `TreeMap` will suffice, as it implements the `SortedMap` interface.
66,328,274
I am trying to implement a wishlist feature in ReactJS. How do we check if the clicked product is already in the wishlist array? Here is my addToWishlist function; what logic am I missing here? ``` const addToWishlist = (id) => { const check_wishlist = products.findIndex((item) => item.id === id); console.log(check_wishlist); if (check_wishlist !== -1) { wishlist.push({ ...products.find((item) => item.id === id) }); setWishlist([...wishlist]); } else { wishlist.filter((item) => item.id !== id); setWishlist([...wishlist]); } console.log(wishlist); }; ```
2021/02/23
[ "https://Stackoverflow.com/questions/66328274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4531885/" ]
You have 2 errors in your code 1: You should not push directly to the wishlist 2: wishlist.filter does not change the value of wishlist ```js const addToWishlist = (id) => { const check_wishlist = wishlist.find((item) => item.id === id); if (!check_wishlist) { const product = products.find((item) => item.id === id); setWishlist([...wishlist, product]); } else { setWishlist(wishlist.filter((item) => item.id !== id)); } console.log(wishlist); }; ```
Array.filter always returns a new list, so you need to set the new list, not the old wishlist. ``` const addToWishlist = (id) => { const check_wishlist = wishlist.findIndex((item) => item.id === id); console.log(check_wishlist); if (check_wishlist === -1) { setWishlist([ ...wishlist, products.find((item) => item.id === id) ]); } else { const newList = wishlist.filter((item) => item.id !== id); setWishlist(newList); } console.log(wishlist); }; ```
715,884
Does Ubuntu 14.04 have its own 'Task Manager'? Like the one in Windows, where you would check what program is using how much and such.
2016/01/02
[ "https://askubuntu.com/questions/715884", "https://askubuntu.com", "https://askubuntu.com/users/488182/" ]
You can use `top`. Just open a terminal and enter the `top` command; it will show you which process is consuming how much **Memory** and **CPU**. Visit this link, it will really help you: [12-Top-commands](http://www.tecmint.com/12-top-command-examples-in-linux/)
Use the following command: ``` gnome-system-monitor ``` If it is not found, then run: ``` sudo apt-get install gnome-system-monitor ```
715,884
Does Ubuntu 14.04 have its own 'Task Manager'? Like the one in Windows, where you would check what program is using how much and such.
2016/01/02
[ "https://askubuntu.com/questions/715884", "https://askubuntu.com", "https://askubuntu.com/users/488182/" ]
You can use `top`. Just open a terminal and enter the `top` command; it will show you which process is consuming how much **Memory** and **CPU**. Visit this link, it will really help you: [12-Top-commands](http://www.tecmint.com/12-top-command-examples-in-linux/)
The nice thing about the Task Manager in Windows is that you can use a keyboard shortcut to bring it up, even if the computer is frozen. The System Monitor in Ubuntu is accessible from the Dash: click on the Dash and type “System” and it should come up. This, however, is fairly slow: it involves the graphics-heavy Dash and a search through installed programs and files. As [another answer](https://askubuntu.com/a/886852) points out, it may be quicker (and more portable to different versions of Ubuntu), to open the System Monitor from the terminal: just type the command `gnome-system-monitor`. (The Terminal itself can be opened from the Dash, or by the keyboard shortcut `Ctrl`+`Alt`+`T`.) However, the best option is to [create a new keyboard shortcut](https://help.ubuntu.com/stable/ubuntu-help/keyboard-shortcuts-set.html). This option is available from System Settings → Keyboard → Shortcuts → Custom Shortcuts. You’ll want to create a new shortcut with the name `System Monitor` and the command `gnome-system-monitor`. The key combination you choose is up to you (I went with `Ctrl`+`Alt`+`End`). Now you’ll be able to launch the System Monitor even when the computer is freezing.
715,884
Does Ubuntu 14.04 have its own 'Task Manager'? Like the one in Windows, where you would check what program is using how much and such.
2016/01/02
[ "https://askubuntu.com/questions/715884", "https://askubuntu.com", "https://askubuntu.com/users/488182/" ]
Open the 'System Monitor' application; it's very similar to the Task Manager in Windows.
Use the following command: ``` gnome-system-monitor ``` If it is not found, then run: ``` sudo apt-get install gnome-system-monitor ```
715,884
Does Ubuntu 14.04 have its own 'Task Manager'? Like the one in Windows, where you would check what program is using how much and such.
2016/01/02
[ "https://askubuntu.com/questions/715884", "https://askubuntu.com", "https://askubuntu.com/users/488182/" ]
Open the 'System Monitor' application; it's very similar to the Task Manager in Windows.
The nice thing about the Task Manager in Windows is that you can use a keyboard shortcut to bring it up, even if the computer is frozen. The System Monitor in Ubuntu is accessible from the Dash: click on the Dash and type “System” and it should come up. This, however, is fairly slow: it involves the graphics-heavy Dash and a search through installed programs and files. As [another answer](https://askubuntu.com/a/886852) points out, it may be quicker (and more portable to different versions of Ubuntu), to open the System Monitor from the terminal: just type the command `gnome-system-monitor`. (The Terminal itself can be opened from the Dash, or by the keyboard shortcut `Ctrl`+`Alt`+`T`.) However, the best option is to [create a new keyboard shortcut](https://help.ubuntu.com/stable/ubuntu-help/keyboard-shortcuts-set.html). This option is available from System Settings → Keyboard → Shortcuts → Custom Shortcuts. You’ll want to create a new shortcut with the name `System Monitor` and the command `gnome-system-monitor`. The key combination you choose is up to you (I went with `Ctrl`+`Alt`+`End`). Now you’ll be able to launch the System Monitor even when the computer is freezing.
715,884
Does Ubuntu 14.04 have its own 'Task Manager'? Like the one in Windows, where you would check what program is using how much and such.
2016/01/02
[ "https://askubuntu.com/questions/715884", "https://askubuntu.com", "https://askubuntu.com/users/488182/" ]
The nice thing about the Task Manager in Windows is that you can use a keyboard shortcut to bring it up, even if the computer is frozen. The System Monitor in Ubuntu is accessible from the Dash: click on the Dash and type “System” and it should come up. This, however, is fairly slow: it involves the graphics-heavy Dash and a search through installed programs and files. As [another answer](https://askubuntu.com/a/886852) points out, it may be quicker (and more portable to different versions of Ubuntu), to open the System Monitor from the terminal: just type the command `gnome-system-monitor`. (The Terminal itself can be opened from the Dash, or by the keyboard shortcut `Ctrl`+`Alt`+`T`.) However, the best option is to [create a new keyboard shortcut](https://help.ubuntu.com/stable/ubuntu-help/keyboard-shortcuts-set.html). This option is available from System Settings → Keyboard → Shortcuts → Custom Shortcuts. You’ll want to create a new shortcut with the name `System Monitor` and the command `gnome-system-monitor`. The key combination you choose is up to you (I went with `Ctrl`+`Alt`+`End`). Now you’ll be able to launch the System Monitor even when the computer is freezing.
Use the following command: ``` gnome-system-monitor ``` If it is not found, then run: ``` sudo apt-get install gnome-system-monitor ```
4,500,493
A mailman sorts 202 letters into N boxes. Does it mean each box contains one letter?
2022/07/26
[ "https://math.stackexchange.com/questions/4500493", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1081577/" ]
No. For example, the mailman could sort all $202$ letters into one box; assuming there are no other boxes, the mailman has then sorted all $202$ letters into $N$ boxes, but each box does not contain one letter.
If $N=1$: yes. If $N>1$: no, because maybe all the letters are put in one box and the other boxes are left empty.
672,936
I have found a brute force python programmatic way of concatenating multiple files while inserting some text characters in between the files. Example: `test_file1 + " \'id#\',\',name,\' " +...+ test_fileN` BUT, is there a way to do this using only BASH commands (sed, grep, cat,...)?
2021/10/12
[ "https://unix.stackexchange.com/questions/672936", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9932/" ]
I would type: ``` (for a in test_file*; do cat "$a"; echo " \'id#\',\',name,\' "; done) | sed '$d' ``` You just have to replace `test_file*` with your actual name list (separated by spaces).
Method using shell builtins `set`, `shift` and `printf`, with no loop: ``` echo " \'id#\',\',name,\' " > /tmp/foo set -- test_file[0-9]* f="$1" shift cat "$1" $(printf '/tmp/foo %s ' "$@") rm /tmp/foo ```
33,314
I'm aware that this is a common question and one that can be quite situational, but I have a few specific questions regarding nuance between these two pronouns for "I". jisho.org describes 俺 as "Male term or language, sounds rough or arrogant". However, based on my knowledge, I'd say that this is the most common pronoun among college and older males (please correct me if I'm wrong, I may just watch too much anime where the characters are over-confident). While I understand that it may sound too colloquial in a formal situation, is 俺 really as arrogant as a dictionary definition makes it out to be? In a similar vein, would 僕 sound overly submissive or weak for an adult male (I'm personally a 20 year old college student), or would it just sound more humble and polite (still colloquial, though)? Last, do people change their pronoun based on the situation? For example, would an adult use 僕 typically around friends/coworkers, but if it's necessary to take a leadership position briefly or make a strong point, switch to 俺 to be more assertive, then back to 僕 when the situation gets more relaxed again?
2016/04/02
[ "https://japanese.stackexchange.com/questions/33314", "https://japanese.stackexchange.com", "https://japanese.stackexchange.com/users/10795/" ]
> > I'd say that this is the most common pronoun among college and older males > > > I wouldn't say it's the most common one, but in a manly/friendly/aggressive environment you might encounter it. In real life speech it's not as common as 僕 and 私 since there are a lot less situations where you can use 俺 safely. It is used *a lot* on the internet, though. It's the most common pronoun I encounter in BBS boards. > > While I understand that it may sound too colloquial in a formal situation, is 俺 really as arrogant as a dictionary definition makes it out to be? > > > In Japanese, the nuance some words carry often changes as your environment changes. It depends on a lot of things; context, the people around you, level of politeness, and so on. If you are at work, you're at a polite setting, so using 俺 would come off as overconfident and arrogant. If you're with your manly mates, it comes off as normal. So it really depends on the context in which you're using it. What's for sure is that it's *definitely not* as common as animes make it out to be and most times it *will* sound rude/inappropriate/arrogant. > > In a similar vein, would 僕 sound overly submissive or weak for an adult male (I'm personally a 20 year old college student), or would it just sound more humble and polite (still colloquial, though)? > > > Again, it depends on the context. Some use it almost all the time, some don't. The nuance it carries is that you're just a normal dude, not much more than that. If you're in a formal setting, it could sound a bit inappropriate in the sense that it might sound weak or rude, but generally you should be able to get away with 僕 in almost every situation. How a speaker perceives someone using 僕 is up to them, but I would advise to avoid it in serious settings(work, business, etc). > > Last, do people change their pronoun based on the situation? > > > I don't think people change their pronoun if they're trying to make some sort of point, it sounds kind of anime-ish and weird. Japanese people don't use personal pronouns that much anyway. In fact, when you can avoid it, you generally should. Tip: don't learn Japanese from anime. It's highly unrealistic.
I typically use 僕 when talking to someone older whom I respect. I will use 俺 if I am talking with male friends. I rarely ever use 私 unless I'm talking to someone I have just met. I'm not a native speaker, but I have never had anyone correct me on my usage. 僕 is generally only used by younger boys, but it can also be used by someone who feels they are young or younger than the person they are talking to. 俺 is typically an arrogant way of saying I. People who use it usually have higher self-esteem than most and think very highly of themselves, especially if they use it around others who are not good friends. But like I said, I'm not a native speaker, so I could be wrong on some points.
33,314
I'm aware that this is a common question and one that can be quite situational, but I have a few specific questions regarding nuance between these two pronouns for "I". jisho.org describes 俺 as "Male term or language, sounds rough or arrogant". However, based on my knowledge, I'd say that this is the most common pronoun among college and older males (please correct me if I'm wrong, I may just watch too much anime where the characters are over-confident). While I understand that it may sound too colloquial in a formal situation, is 俺 really as arrogant as a dictionary definition makes it out to be? In a similar vein, would 僕 sound overly submissive or weak for an adult male (I'm personally a 20 year old college student), or would it just sound more humble and polite (still colloquial, though)? Last, do people change their pronoun based on the situation? For example, would an adult use 僕 typically around friends/coworkers, but if it's necessary to take a leadership position briefly or make a strong point, switch to 俺 to be more assertive, then back to 僕 when the situation gets more relaxed again?
2016/04/02
[ "https://japanese.stackexchange.com/questions/33314", "https://japanese.stackexchange.com", "https://japanese.stackexchange.com/users/10795/" ]
First, I'd like to explain the whole scheme concerning personal pronouns. 1. You are supposed to use Standard Japanese when you speak in public or formal situations, and in this case you basically use 私 (derived from old Tokyo) only. 2. Otherwise, you speak in a dialect of your own. 3. In many areas, including the most populated ones, people speak the New Tokyo dialect, which is almost a virtual standard, and in which people use おれ (derived from Kanto, Tohoku) or ぼく (derived from current Yamaguchi pref.). As for the use of おれ or ぼく, the former (おれ) is (I believe overwhelmingly) more common than ぼく as the first-person pronoun in private speech, but it depends on the person. Some people use ぼく within certain internal societies besides their private use of おれ, like pupils in school(\*1) or athletes in the sports industry. But as long as you speak within the same society, switching from one to the other is not likely to happen. (\*1) It can be said that ぼく is the standard for elementary school classes.
I typically use 僕 when talking to someone older whom I respect. I will use 俺 if I am talking with male friends. I rarely ever use 私 unless I'm talking to someone I have just met. I'm not a native speaker, but I have never had anyone correct me on my usage. 僕 is generally only used by younger boys, but it can also be used by someone who feels they are young or younger than the person they are talking to. 俺 is typically an arrogant way of saying I. People who use it usually have higher self-esteem than most and think very highly of themselves, especially if they use it around others who are not good friends. But like I said, I'm not a native speaker, so I could be wrong on some points.
33,314
I'm aware that this is a common question and one that can be quite situational, but I have a few specific questions regarding nuance between these two pronouns for "I". jisho.org describes 俺 as "Male term or language, sounds rough or arrogant". However, based on my knowledge, I'd say that this is the most common pronoun among college and older males (please correct me if I'm wrong, I may just watch too much anime where the characters are over-confident). While I understand that it may sound too colloquial in a formal situation, is 俺 really as arrogant as a dictionary definition makes it out to be? In a similar vein, would 僕 sound overly submissive or weak for an adult male (I'm personally a 20 year old college student), or would it just sound more humble and polite (still colloquial, though)? Last, do people change their pronoun based on the situation? For example, would an adult use 僕 typically around friends/coworkers, but if it's necessary to take a leadership position briefly or make a strong point, switch to 俺 to be more assertive, then back to 僕 when the situation gets more relaxed again?
2016/04/02
[ "https://japanese.stackexchange.com/questions/33314", "https://japanese.stackexchange.com", "https://japanese.stackexchange.com/users/10795/" ]
> > I'd say that this is the most common pronoun among college and older males > > > I wouldn't say it's the most common one, but in a manly/friendly/aggressive environment you might encounter it. In real life speech it's not as common as 僕 and 私 since there are a lot less situations where you can use 俺 safely. It is used *a lot* on the internet, though. It's the most common pronoun I encounter in BBS boards. > > While I understand that it may sound too colloquial in a formal situation, is 俺 really as arrogant as a dictionary definition makes it out to be? > > > In Japanese, the nuance some words carry often changes as your environment changes. It depends on a lot of things; context, the people around you, level of politeness, and so on. If you are at work, you're at a polite setting, so using 俺 would come off as overconfident and arrogant. If you're with your manly mates, it comes off as normal. So it really depends on the context in which you're using it. What's for sure is that it's *definitely not* as common as animes make it out to be and most times it *will* sound rude/inappropriate/arrogant. > > In a similar vein, would 僕 sound overly submissive or weak for an adult male (I'm personally a 20 year old college student), or would it just sound more humble and polite (still colloquial, though)? > > > Again, it depends on the context. Some use it almost all the time, some don't. The nuance it carries is that you're just a normal dude, not much more than that. If you're in a formal setting, it could sound a bit inappropriate in the sense that it might sound weak or rude, but generally you should be able to get away with 僕 in almost every situation. How a speaker perceives someone using 僕 is up to them, but I would advise to avoid it in serious settings(work, business, etc). > > Last, do people change their pronoun based on the situation? > > > I don't think people change their pronoun if they're trying to make some sort of point, it sounds kind of anime-ish and weird. Japanese people don't use personal pronouns that much anyway. In fact, when you can avoid it, you generally should. Tip: don't learn Japanese from anime. It's highly unrealistic.
First, I'd like to explain the whole scheme concerning personal pronouns. 1. You are supposed to use Standard Japanese when you speak in public or formal situations, and in this case you basically use 私 (derived from old Tokyo) only. 2. Otherwise, you speak in a dialect of your own. 3. In many areas, including the most populated ones, people speak the New Tokyo dialect, which is almost a virtual standard, and in which people use おれ (derived from Kanto, Tohoku) or ぼく (derived from current Yamaguchi pref.). As for the use of おれ or ぼく, the former (おれ) is (I believe overwhelmingly) more common than ぼく as the first-person pronoun in private speech, but it depends on the person. Some people use ぼく within certain internal societies besides their private use of おれ, like pupils in school(\*1) or athletes in the sports industry. But as long as you speak within the same society, switching from one to the other is not likely to happen. (\*1) It can be said that ぼく is the standard for elementary school classes.
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
Sure it's possible. [`Request.Form`](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx) is a [`NameValueCollection`](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx). I suggest reading up on [the documentation](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx).
It certainly is. The type is a [NameValueCollection](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx): ``` public string extract(NameValueCollection form) { ... } ```
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
Sure it's possible. [`Request.Form`](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx) is a [`NameValueCollection`](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx). I suggest reading up on [the documentation](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx).
Yes, you can. It's of type `FormCollection`, which inherits from `NameValueCollection`.
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
Sure it's possible. [`Request.Form`](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx) is a [`NameValueCollection`](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx). I suggest reading up on [the documentation](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx).
Using the [example in the documentation](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx): ``` public string extract(NameValueCollection myRequest) { int loop1; StringBuilder processed_data = new StringBuilder(); // Get names of all forms into a string array. String[] arr1 = myRequest.AllKeys; for (loop1 = 0; loop1 < arr1.Length; loop1++) { processed_data.Append("Form: " + arr1[loop1] + "<br>"); } return processed_data.ToString(); } ```
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
It certainly is. The type is a [NameValueCollection](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx): ``` public string extract(NameValueCollection form) { ... } ```
Yes you can. It's of type `FormCollection`, which inherits from `NameValueCollection`.
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
It certainly is. The type is a [NameValueCollection](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx): ``` public string extract(NameValueCollection form) { ... } ```
Using the [example in the documentation](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx)

```
public string extract(NameValueCollection myRequest)
{
    int loop1;
    StringBuilder processed_data = new StringBuilder();

    // Get names of all forms into a string array.
    String[] arr1 = myRequest.AllKeys;
    for (loop1 = 0; loop1 < arr1.Length; loop1++)
    {
        processed_data.Append("Form: " + arr1[loop1] + "<br>");
    }
    return processed_data.ToString();
}
```
66,029,135
Consider the table below: ([here's a db-fiddle with this example](https://www.db-fiddle.com/f/nVZp5EagiiEgqPYNLvQnKd/0)) ``` id primary_sort record_id record_sort alt_sort 1 2 1 11 100 2 2 2 10 101 3 3 1 12 108 4 3 1 13 107 5 3 2 14 105 6 1 2 15 109 ``` I'd like to sort this according to `primary_sort` first. If equal, the next sort field depends on the value of `record_id`: if two rows has the same `record_id`, then sort them by `record_sort`. Otherwise, sort them by `alt_sort`. I think the query should look something like this: ```sql select * from example order by primary_sort, case when [this_row].record_id = [other_row].record_id then record_sort else alt_sort end ; ``` Expected output: ``` id primary_sort record_id record_sort alt_sort 6 1 2 15 109 1 2 1 11 100 2 2 2 10 101 5 3 2 14 105 3 3 1 12 108 4 3 1 13 107 ``` Here's some pseudocode in Java, showing my intent: ```java int compareTo(Example other) { if (this.primary_sort != other.primary_sort) { return this.primary_sort.compareTo(other.primary_sort); } else if (this.record_id == other.record_id) { return this.record_sort.compareTo(other.record_sort); } else { return this.alt_sort.compareTo(other.alt_sort); } } ``` (this is a minimal, reproducible example. Similar SO questions I've found on conditional `order by` are not applicable, because my condition is based on values in both rows (i.e. `[this_row].record_id = [other_row].record_id`))
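One way to express this ordering, sketched under the assumption that the database supports window functions (table and column names are the ones from the question): rows that share a `record_id` within the same `primary_sort` are placed according to the smallest `alt_sort` in that group, and ordered by `record_sort` inside the group. This is an illustration, not a tested answer from the thread, but it reproduces the expected output above.

```sql
SELECT *
FROM example
ORDER BY
    primary_sort,
    -- place each record_id group according to its smallest alt_sort
    MIN(alt_sort) OVER (PARTITION BY primary_sort, record_id),
    -- then order rows inside the same record_id group
    record_sort;
```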
2021/02/03
[ "https://Stackoverflow.com/questions/66029135", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4171429/" ]
I normally use a structure like below, `/src/scss/core` is my custom sass directory: ``` // 1. Include functions first (so you can manipulate colors, SVGs, calc, etc) @import "../../node_modules/bootstrap/scss/functions"; @import "../../node_modules/bootstrap/scss/variables"; // 2. Include any default variable overrides here @import "core/variables"; // Custom theme variables @import "core/variables-bootstrap"; // Bootstrap variables overrides // Mixins @import "../../node_modules/bootstrap/scss/mixins"; @import "core/mixins.scss"; // Bootstrap core @import "../../node_modules/bootstrap/scss/utilities"; @import "../../node_modules/bootstrap/scss/root"; @import "../../node_modules/bootstrap/scss/reboot"; @import "../../node_modules/bootstrap/scss/type"; @import "../../node_modules/bootstrap/scss/images"; @import "../../node_modules/bootstrap/scss/containers"; @import "../../node_modules/bootstrap/scss/grid"; @import "../../node_modules/bootstrap/scss/tables"; @import "../../node_modules/bootstrap/scss/forms"; @import "../../node_modules/bootstrap/scss/buttons"; @import "../../node_modules/bootstrap/scss/transitions"; @import "../../node_modules/bootstrap/scss/dropdown"; @import "../../node_modules/bootstrap/scss/button-group"; @import "../../node_modules/bootstrap/scss/nav"; @import "../../node_modules/bootstrap/scss/navbar"; @import "../../node_modules/bootstrap/scss/card"; @import "../../node_modules/bootstrap/scss/accordion"; @import "../../node_modules/bootstrap/scss/breadcrumb"; @import "../../node_modules/bootstrap/scss/pagination"; @import "../../node_modules/bootstrap/scss/badge"; @import "../../node_modules/bootstrap/scss/alert"; @import "../../node_modules/bootstrap/scss/progress"; @import "../../node_modules/bootstrap/scss/list-group"; @import "../../node_modules/bootstrap/scss/close"; @import "../../node_modules/bootstrap/scss/toasts"; @import "../../node_modules/bootstrap/scss/modal"; @import "../../node_modules/bootstrap/scss/tooltip"; @import "../../node_modules/bootstrap/scss/popover"; @import "../../node_modules/bootstrap/scss/carousel"; @import "../../node_modules/bootstrap/scss/spinners"; @import "../../node_modules/bootstrap/scss/offcanvas"; @import "../../node_modules/bootstrap/scss/placeholders"; // Helpers @import "../../node_modules/bootstrap/scss/helpers"; // Utilities @import "../../node_modules/bootstrap/scss/utilities/api"; ``` This structure works for me with variable overrides/extending but like Zim said you'd need to override $link-hover-color too since it's default value is set before you defined your custom $primary colour (at least that's my understanding of the !default flag). ``` $primary: $red; $link-color: $primary; $link-hover-color: shift-color($link-color, $link-shade-percentage); ```
I think you'd have to set any other vars that use `$link-color` (ie: `$btn-link-color`) and merge the new colors into the `$theme-colors` map... ``` @import "functions"; @import "variables"; @import "mixins"; $primary: $red; $link-color: $primary; $btn-link-color: $primary; $theme-colors: map-merge( $theme-colors, ( "primary": $primary ) ); @import "bootstrap"; ``` [Demo](https://codeply.com/p/kUS6ulE754#)
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
My 5 cents: a PowerShell script that, when run via PowerShell, will create a shortcut pointing to PowerShell and itself, so that you can drop files onto it to get the UNC path (or the normal local file path) into the clipboard: <https://inmood.ch/get-unc-path-of-fileserver-file/> ``` # run without arguments will create a file called DropFileToGetUNCPath.lnk # if you drop a file onto the shortcut it'll return the UNC path if($args[0] -eq $null) { # creating the shortcut to drop files later on $path = $pwd.path $script = $MyInvocation.MyCommand.Path $WshShell = New-Object -comObject WScript.Shell $Shortcut = $WshShell.CreateShortcut("$path\DropFileToGetUNCPath.lnk") $Shortcut.TargetPath = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" $Shortcut.Arguments = "-noprofile -file """ + $script + """" $Shortcut.Save() }else{ $file = $args[0] } $drive = $pwd.drive.name + ":" # find UNC paths for directories $drives = net use $drive = ($drives -match ".*" + $drive + ".*") #debug #echo $drive $parts = $drive -split "\s{1,11}" #debug #echo $parts $windowsDrive = $parts[1] $uncDrive = $parts[2] $file -replace $windowsDrive, $uncDrive | clip ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
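As a small addition (not part of the original answer): if you only care about a single mapped drive, you can pass the drive letter directly to `net use` and read the `Remote name` line from its output, for example (output abbreviated):

```
C:\>net use Q:
Local name        Q:
Remote name       \\server1\foo
...
The command completed successfully.
```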
``` $CurrentFolder = "H:\Documents" $Query = "Select * from Win32_NetworkConnection where LocalName = '" + $CurrentFolder.Substring( 0, 2 ) + "'" ( Get-WmiObject -Query $Query ).RemoteName ``` OR ``` $CurrentFolder = "H:\Documents" $Tst = $CurrentFolder.Substring( 0, 2 ) ( Get-WmiObject -Query "Select * from Win32_NetworkConnection where LocalName = '$Tst'" ).RemoteName ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
``` wmic path win32_mappedlogicaldisk get deviceid, providername ``` Result: ``` DeviceID ProviderName I: \\server1\Temp J: \\server2\Corporate Y: \\Server3\Dev_Repo Z: \\Server3\Repository ``` As a batch file ([src](https://github.com/maphew/code/blob/master/other/get-unc-path.bat)): ``` @if [%1]==[] goto :Usage @setlocal enabledelayedexpansion @set _NetworkPath= @pushd %1 @for /f "tokens=2" %%i in ('wmic path win32_mappedlogicaldisk get deviceid^, providername ^| findstr /i "%CD:~0,2%"') do @(set _NetworkPath=%%i%CD:~2%) @echo.%_NetworkPath% @popd @goto :EOF :: --------------------------------------------------------------------- :Usage @echo. @echo. Get the full UNC path for the specified mapped drive path @echo. @echo. %~n0 [mapped drive path] ``` Example: ``` C:\> get-unc-path.bat z:\Tools\admin \\EnvGeoServer\Repository\Tools\admin ``` Batch script adapted from <https://superuser.com/a/1123556/16966>. Please be sure to go vote that one up too if you like this solution. *Update 2021-11-15:* bug fix. Previously the batch only reported drive letter UNC root and neglected to also report the folder path. `%CD%` is set from `%%i` through some kind of CMD magic. `%CD:~0,2%` and `%CD:~2%` extract the drive letter and trailing path [substrings](https://ss64.com/nt/syntax-substring.html) respectively. e.g. `:~2%` does '\Tools\admin' from 'Z:\Tools\admin'.
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
The answer is a simple `PowerShell` one-liner: ``` Get-WmiObject Win32_NetworkConnection | ft "RemoteName","LocalName" -A ``` If you only want to pull the `UNC` for one particular drive, add a where statement: ``` Get-WmiObject Win32_NetworkConnection | where -Property 'LocalName' -eq 'Z:' | ft "RemoteName","LocalName" -A ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
This question has been answered already, but since there is a **more convenient way** to get the UNC path and some more I recommend using Path Copy, which is free and you can practically get any path you want with one click: <https://pathcopycopy.github.io/> Here is a screenshot demonstrating how it works. The latest version has more options and definitely UNC Path too: [![enter image description here](https://i.stack.imgur.com/4gDZU.png)](https://i.stack.imgur.com/4gDZU.png)
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
This question has been answered already, but since there is a **more convenient way** to get the UNC path and some more I recommend using Path Copy, which is free and you can practically get any path you want with one click: <https://pathcopycopy.github.io/> Here is a screenshot demonstrating how it works. The latest version has more options and definitely UNC Path too: [![enter image description here](https://i.stack.imgur.com/4gDZU.png)](https://i.stack.imgur.com/4gDZU.png)
``` $CurrentFolder = "H:\Documents" $Query = "Select * from Win32_NetworkConnection where LocalName = '" + $CurrentFolder.Substring( 0, 2 ) + "'" ( Get-WmiObject -Query $Query ).RemoteName ``` OR ``` $CurrentFolder = "H:\Documents" $Tst = $CurrentFolder.Substring( 0, 2 ) ( Get-WmiObject -Query "Select * from Win32_NetworkConnection where LocalName = '$Tst'" ).RemoteName ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
``` $CurrentFolder = "H:\Documents" $Query = "Select * from Win32_NetworkConnection where LocalName = '" + $CurrentFolder.Substring( 0, 2 ) + "'" ( Get-WmiObject -Query $Query ).RemoteName ``` OR ``` $CurrentFolder = "H:\Documents" $Tst = $CurrentFolder.Substring( 0, 2 ) ( Get-WmiObject -Query "Select * from Win32_NetworkConnection where LocalName = '$Tst'" ).RemoteName ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
``` wmic path win32_mappedlogicaldisk get deviceid, providername ``` Result: ``` DeviceID ProviderName I: \\server1\Temp J: \\server2\Corporate Y: \\Server3\Dev_Repo Z: \\Server3\Repository ``` As a batch file ([src](https://github.com/maphew/code/blob/master/other/get-unc-path.bat)): ``` @if [%1]==[] goto :Usage @setlocal enabledelayedexpansion @set _NetworkPath= @pushd %1 @for /f "tokens=2" %%i in ('wmic path win32_mappedlogicaldisk get deviceid^, providername ^| findstr /i "%CD:~0,2%"') do @(set _NetworkPath=%%i%CD:~2%) @echo.%_NetworkPath% @popd @goto :EOF :: --------------------------------------------------------------------- :Usage @echo. @echo. Get the full UNC path for the specified mapped drive path @echo. @echo. %~n0 [mapped drive path] ``` Example: ``` C:\> get-unc-path.bat z:\Tools\admin \\EnvGeoServer\Repository\Tools\admin ``` Batch script adapted from <https://superuser.com/a/1123556/16966>. Please be sure to go vote that one up too if you like this solution. *Update 2021-11-15:* bug fix. Previously the batch only reported drive letter UNC root and neglected to also report the folder path. `%CD%` is set from `%%i` through some kind of CMD magic. `%CD:~0,2%` and `%CD:~2%` extract the drive letter and trailing path [substrings](https://ss64.com/nt/syntax-substring.html) respectively. e.g. `:~2%` does '\Tools\admin' from 'Z:\Tools\admin'.
21,482,825
I need to be able to determine the path of the network Q drive at work for a WEBMethods project. The code below is from my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semi-colon is for, but I think the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
The answer is a simple `PowerShell` one-liner: ``` Get-WmiObject Win32_NetworkConnection | ft "RemoteName","LocalName" -A ``` If you only want to pull the `UNC` for one particular drive, add a where statement: ``` Get-WmiObject Win32_NetworkConnection | where -Property 'LocalName' -eq 'Z:' | ft "RemoteName","LocalName" -A ```
57,946,590
I am trying to add google sign-in feature to my app. This is working fine with an android emulator but I am running the app in the real device it is not working. The problem is after the sign-in process google redirect to its own home page instead to app. The step I follow. Function I use to open google sign in page ``` const result = await Google.logInAsync({ androidStandaloneAppClientId: '131814552849-bi76mebb3eq5jsdergerdfh6werjd8udpen43.apps.googleusercontent.com', scopes: ['profile', 'email'], behavior: 'web }); ``` app.json I used Google Certificate Hash (SHA-1) in certificateHash ``` "android": { "package": "com.abc.mycompnay", "permissions": ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"], "config": { "googleSignIn": { "apiKey": "AIzaSyB6qp9VXGXrtwuihvna40F57xABKXJfEQ", "certificateHash": "29FD8B159A28F2F48ED3283548NEBFC957F6821D" } } } ``` > > google console setting > > > [![enter image description here](https://i.stack.imgur.com/1mpsC.png)](https://i.stack.imgur.com/1mpsC.png) **Client key** [![enter image description here](https://i.stack.imgur.com/J5HRd.png)](https://i.stack.imgur.com/J5HRd.png) After sign in its end up with its own home page [![enter image description here](https://i.stack.imgur.com/GQ9Op.png)](https://i.stack.imgur.com/GQ9Op.png)
2019/09/15
[ "https://Stackoverflow.com/questions/57946590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917268/" ]
I managed to fix it. Below is what I did: I pass the redirectUrl in the config. ``` import * as AppAuth from 'expo-app-auth'; const result = await Google.logInAsync({ androidStandaloneAppClientId: 'myKey', iosStandaloneAppClientId: 'myKey', scopes: ['profile', 'email'], behavior: 'web', redirectUrl: `${AppAuth.OAuthRedirect}:/oauthredirect` }); ```
Open the Gradle build file and change the redirect scheme: ``` android { defaultConfig { manifestPlaceholders = [ appAuthRedirectScheme: 'com.example.yourpackagename' ] } } ```
57,946,590
I am trying to add google sign-in feature to my app. This is working fine with an android emulator but I am running the app in the real device it is not working. The problem is after the sign-in process google redirect to its own home page instead to app. The step I follow. Function I use to open google sign in page ``` const result = await Google.logInAsync({ androidStandaloneAppClientId: '131814552849-bi76mebb3eq5jsdergerdfh6werjd8udpen43.apps.googleusercontent.com', scopes: ['profile', 'email'], behavior: 'web }); ``` app.json I used Google Certificate Hash (SHA-1) in certificateHash ``` "android": { "package": "com.abc.mycompnay", "permissions": ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"], "config": { "googleSignIn": { "apiKey": "AIzaSyB6qp9VXGXrtwuihvna40F57xABKXJfEQ", "certificateHash": "29FD8B159A28F2F48ED3283548NEBFC957F6821D" } } } ``` > > google console setting > > > [![enter image description here](https://i.stack.imgur.com/1mpsC.png)](https://i.stack.imgur.com/1mpsC.png) **Client key** [![enter image description here](https://i.stack.imgur.com/J5HRd.png)](https://i.stack.imgur.com/J5HRd.png) After sign in its end up with its own home page [![enter image description here](https://i.stack.imgur.com/GQ9Op.png)](https://i.stack.imgur.com/GQ9Op.png)
2019/09/15
[ "https://Stackoverflow.com/questions/57946590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917268/" ]
I managed to fix it. Below is what I did: I pass the redirectUrl in the config. ``` import * as AppAuth from 'expo-app-auth'; const result = await Google.logInAsync({ androidStandaloneAppClientId: 'myKey', iosStandaloneAppClientId: 'myKey', scopes: ['profile', 'email'], behavior: 'web', redirectUrl: `${AppAuth.OAuthRedirect}:/oauthredirect` }); ```
Okay, I'll put this here since this cost me a ton of time. If you happen to test it with an Android device: make sure you have selected Chrome as the default browser. Other browsers might not redirect you correctly!
57,946,590
I am trying to add google sign-in feature to my app. This is working fine with an android emulator but I am running the app in the real device it is not working. The problem is after the sign-in process google redirect to its own home page instead to app. The step I follow. Function I use to open google sign in page ``` const result = await Google.logInAsync({ androidStandaloneAppClientId: '131814552849-bi76mebb3eq5jsdergerdfh6werjd8udpen43.apps.googleusercontent.com', scopes: ['profile', 'email'], behavior: 'web }); ``` app.json I used Google Certificate Hash (SHA-1) in certificateHash ``` "android": { "package": "com.abc.mycompnay", "permissions": ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"], "config": { "googleSignIn": { "apiKey": "AIzaSyB6qp9VXGXrtwuihvna40F57xABKXJfEQ", "certificateHash": "29FD8B159A28F2F48ED3283548NEBFC957F6821D" } } } ``` > > google console setting > > > [![enter image description here](https://i.stack.imgur.com/1mpsC.png)](https://i.stack.imgur.com/1mpsC.png) **Client key** [![enter image description here](https://i.stack.imgur.com/J5HRd.png)](https://i.stack.imgur.com/J5HRd.png) After sign in its end up with its own home page [![enter image description here](https://i.stack.imgur.com/GQ9Op.png)](https://i.stack.imgur.com/GQ9Op.png)
2019/09/15
[ "https://Stackoverflow.com/questions/57946590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917268/" ]
I managed to fix it. Below is what I did: I pass the redirectUrl in the config. ``` import * as AppAuth from 'expo-app-auth'; const result = await Google.logInAsync({ androidStandaloneAppClientId: 'myKey', iosStandaloneAppClientId: 'myKey', scopes: ['profile', 'email'], behavior: 'web', redirectUrl: `${AppAuth.OAuthRedirect}:/oauthredirect` }); ```
In app.json, the package name has to be in all lowercase letters, e.g. com.app.cloneapp.
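For illustration, mirroring the app.json excerpt shown in the question, a fragment with a placeholder package name where every segment is lowercase (the name itself is hypothetical):

```
"android": {
  "package": "com.app.cloneapp"
}
```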
72,194,499
I've got a generic task in my Gradle build that copies some configuration files to be included in the build, but aren't required for compiling or anything else (they're used at runtime). Basically: ``` val copyConfiguration by tasks.registering(Copy::class) { from("${projectDir}/configuration") into("${buildDir}/") } ``` This however leads to an issue in every other task as I now get the Gradle warning about how the tasks use this output without declaring an explicit or implicit dependency ``` Execution optimizations have been disabled for task ':jacocoTestCoverageVerification' to ensure correctness due to the following reasons: - Gradle detected a problem with the following location: '...'. Reason: Task ':jacocoTestCoverageVerification' uses this output of task ':copyConfiguration' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.4.1/userguide/validation_problems.html#implicit_dependency for more details about this problem. ``` Now this is only a warning, and the build succeeds, and my service starts up and runs fine. But it does clog my output making it harder to find the line where something went wrong and is in general an eyesore. I'd like to somehow remove that warning. I saw (from the wiki) that the general solution for this is to write an explicit dependency in the task definition, but since this is happening for every task (from compile, to test, to ktlint, to jacoco, etc.) I don't really want to do that. Is there an alternative, like an anti-dependency, wherein I can tell Gradle that it shouldn't care about the output of the `:copyConfiguration` task?
2022/05/11
[ "https://Stackoverflow.com/questions/72194499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7041558/" ]
Given (emphasis mine to show what to look for) > > Execution optimizations have been disabled for task 'spotlessJava' to ensure correctness due to the following reasons: > > > * Gradle detected a problem with the following location: '...\build\generated\source\proto\main\grpc'. Reason: Task **'spotlessJava'** uses this output of task **'generateProto'** without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to <https://docs.gradle.org/7.5.1/userguide/validation_problems.html#implicit_dependency> for more details about this problem. > > > Add the following to `build.gradle` ``` tasks.named("spotlessJava").configure { dependsOn("generateProto") } ```
I had a similar issue and, funnily enough, it also started with a task related to Jacoco. I documented a solution here: <https://discuss.gradle.org/t/task-a-uses-this-output-of-task-b-without-declaring-an-explicit-or-implicit-dependency/42896> In short, what worked for me was to get the problematic location using the task properties, e.g. getOutputs. Hope this helps.
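To make this concrete for the build script in the question, here is a minimal Kotlin DSL sketch (task names are the ones from the question; it illustrates declaring the dependency explicitly, not the exact change from the linked thread):

```kotlin
// build.gradle.kts
tasks.named("jacocoTestCoverageVerification") {
    // Declare that this task depends on the copy task, which removes the
    // "uses this output ... without declaring a dependency" warning.
    dependsOn("copyConfiguration")
}
```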
45,174,184
I'm a little new to python 2.7 and I was wondering if there was a way I could search within a folder (and all its subfolders, PDFs, and Word docs) for a certain word. I need to compile all PDF and Word files that contain a certain keyword into a new folder so I thought python might be the best way to do this instead of manually going through each file and searching for the word. Any thoughts?
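Since this is a how-to, here is a rough Python sketch of the approach, assuming the third-party packages PyPDF2 and python-docx are installed; it only looks at .pdf and .docx files, the keyword match is case-insensitive, and error handling is minimal:

```python
import os
import shutil
from PyPDF2 import PdfFileReader   # assumed installed: pip install PyPDF2
from docx import Document          # assumed installed: pip install python-docx

def file_contains(path, keyword):
    """Return True if the PDF or .docx file at `path` contains `keyword`."""
    keyword = keyword.lower()
    if path.lower().endswith('.pdf'):
        reader = PdfFileReader(open(path, 'rb'))
        for i in range(reader.getNumPages()):
            text = reader.getPage(i).extractText() or ''
            if keyword in text.lower():
                return True
    elif path.lower().endswith('.docx'):
        doc = Document(path)
        for para in doc.paragraphs:
            if keyword in para.text.lower():
                return True
    return False

def collect_matches(root_folder, keyword, out_folder):
    """Walk root_folder recursively and copy matching files into out_folder."""
    if not os.path.exists(out_folder):
        os.makedirs(out_folder)
    for dirpath, _, filenames in os.walk(root_folder):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                if file_contains(full, keyword):
                    shutil.copy2(full, out_folder)
            except Exception:
                pass  # skip unreadable or corrupt files
```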
2017/07/18
[ "https://Stackoverflow.com/questions/45174184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8327139/" ]
It should be obvious that without any information on the function at hand, the poor man's approach is optimal (in some probabilistic sense), because the roots of general functions are spread uniformly and independently of each other, so unequal steps, possibly based on the function values, would be a waste of time. You are in a better position when you can exploit some property of the function. For instance, if you have a bound on the derivative in an interval, then for suitable values of the function at the endpoints you can show that no root can be present.
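As a small illustration of that last point (my own sketch, not part of the original answer): if `deriv_bound` bounds `|f'|` over `[a, b]`, an interval can be discarded whenever the function value at one endpoint is too large for `f` to reach zero inside it.

```python
def cannot_contain_root(f, a, b, deriv_bound):
    """True if [a, b] provably contains no root of f, given |f'| <= deriv_bound there."""
    # |f(x)| >= |f(a)| - deriv_bound*(x - a), so |f(a)| > deriv_bound*(b - a)
    # (or the symmetric test at b) rules out any zero inside the interval.
    width = b - a
    return abs(f(a)) > deriv_bound * width or abs(f(b)) > deriv_bound * width
```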
I do not think there is a magical method that would find **all roots** of a general equation. Your "poor man's approach" is not too bad to start with. I would use the product of consecutive samples instead of `|data|<eps`. For example, ``` dp = data[1:] * data[:-1] indices = np.where(dp <= 0) ``` would provide the locations of the "suspicious" intervals. Then you can run a better method, providing the center coordinate of every such suspicious interval as the initial guess. A more sophisticated method could perhaps adapt to the slope and adjust the function sampling instead of using a constant step like you get with `linspace()`.
3,041,922
I have a small question regarding Rails. I have a search controller which searches for a name in the database; if it is found, it shows the details about it, otherwise I redirect to the new name page. Is there any way, after the redirection, for the name that was searched for to automatically appear in the new form page? Thanks in advance.
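A rough sketch of how that could look in the controller (names like `new_name_path`, `params[:q]`, and the `Name` model are placeholders, not from the question):

```ruby
# in the search action: pass the searched term along with the redirect
redirect_to new_name_path(:name => params[:q]) if @name.nil?

# in NamesController#new: pre-fill the object the form is built from
def new
  @name = Name.new(:name => params[:name])
end
# a form built with form_for(@name) will then show the searched value
# in its :name text field
```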
2010/06/15
[ "https://Stackoverflow.com/questions/3041922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/357010/" ]
You can add the imports to `$HOME/.groovy/groovysh.rc`
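For illustration, that file is just a Groovy script evaluated when the interactive shell starts (see the sibling answer below), so it can simply contain the import lines you want preloaded; the specific imports here are arbitrary examples:

```groovy
// $HOME/.groovy/groovysh.rc
import java.text.SimpleDateFormat
import java.util.concurrent.TimeUnit
```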
From <http://groovy.codehaus.org/Groovy+Shell>: This script, if it exists, is loaded when the shell starts up: ``` $HOME/.groovy/groovysh.profile ``` This script, if it exists, is loaded when the shell enters interactive mode: ``` $HOME/.groovy/groovysh.rc ``` Edit-line history is stored in this file: ``` $HOME/.groovy/groovysh.history ```
17,809,819
I have a python class in PyCharm containing an overriding method and want to see its documentation as quickly as possible. How can I do it?
2013/07/23
[ "https://Stackoverflow.com/questions/17809819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2510374/" ]
I didn't check your code but I always use the following snippet and it works till now. **JSONParser.java** ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } ``` And call it like: ``` JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); ``` Sometimes json starts with an array node instead of jSON Object node. In those case, you have to return an `JSONArray` instead of `JSONObject`
From my point of view, you must call `close()` on the `InputStream` and the `reader` before returning the response, as in: ``` stream.close(); reader.close(); return sb.toString(); ``` It would be better if you specified what kind of error you are getting while running the above piece of code, so the issue can be analysed. Thanks!
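A slightly more defensive variant of that cleanup (a sketch meant to live inside the same method; `stream` stands for the HTTP entity's InputStream from the code above) closes the reader in a `finally` block so it is released even when reading throws:

```java
BufferedReader reader = null;
try {
    reader = new BufferedReader(new InputStreamReader(stream, "iso-8859-1"), 8);
    StringBuilder sb = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
        sb.append(line).append("\n");
    }
    return sb.toString();
} finally {
    if (reader != null) {
        reader.close(); // also closes the wrapped InputStreamReader and stream
    }
}
```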
17,809,819
I have a python class in PyCharm containing an overriding method and want to see its documentation as quickly as possible. How can I do it?
2013/07/23
[ "https://Stackoverflow.com/questions/17809819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2510374/" ]
I didn't check your code but I always use the following snippet and it works till now. **JSONParser.java** ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } ``` And call it like: ``` JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); ``` Sometimes json starts with an array node instead of jSON Object node. In those case, you have to return an `JSONArray` instead of `JSONObject`
Try the following:- ``` static InputStream is = null; try { DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params, "UTF-8")); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); }} catch (Exception e) { Log.e("Buffer Error", "Error Converting Result" + e.toString()); } ```
17,809,819
I have a python class in PyCharm containing an overriding method and want to see its documentation as quickly as possible. How can I do it?
2013/07/23
[ "https://Stackoverflow.com/questions/17809819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2510374/" ]
I didn't check your code but I always use the following snippet and it works till now. **JSONParser.java** ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } ``` And call it like: ``` JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); ``` Sometimes json starts with an array node instead of jSON Object node. In those case, you have to return an `JSONArray` instead of `JSONObject`
Found the problem! I was getting back an array and had to use JSONArray rather than JSONObject
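For anyone hitting the same thing, a minimal sketch of that change with org.json (variable names are illustrative):

```java
// the response starts with [ ... ], so parse it as an array
JSONArray items = new JSONArray(json);
for (int i = 0; i < items.length(); i++) {
    JSONObject item = items.getJSONObject(i);
    // read the fields of each element here, e.g. item.getString("name")
}
```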
50,249,002
After several problems, I decided to purge Docker so that I could reinstall it later. Here are the steps I took to purge all the packages related to Docker: ``` - dpkg -l | grep -i docker - sudo apt-get purge docker-engine docker docker-compose - sudo apt-get autoremove --purge docker docker-compose docker-engin ``` I even deleted the folder which contains the Docker files and containers, `/var/lib/docker`. But the docker version is still displayed after everything I did: ``` docker -v Docker version 17.06.2-ce, build a04f55b ```
2018/05/09
[ "https://Stackoverflow.com/questions/50249002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3652210/" ]
**EDIT :** This solution is for systems using Debian packages (Debian, Ubuntu, Mint, ...). You saw that the docker binary is still present in your system. You can locate it using the `whereis` command : ``` # whereis docker docker: /usr/bin/docker /usr/lib/docker /etc/docker /usr/share/man/man1/docker.1.gz ``` Now that the binary is located (it's `/usr/bin/docker` in the example) you can use the `dpkg -S <location>` to look for its package. See [related post](https://askubuntu.com/questions/481/how-do-i-find-the-package-that-provides-a-file). ``` # dpkg -S /usr/bin/docker docker-ce: /usr/bin/docker ``` And then you can get rid of the package (here `docker-ce`) using your usual tools (`apt-get purge`, or `dpkg -r` if the package was not installed through a repository).
That version number looks like the last release of the snap package. If you installed by snap, then the uninstall uses the same tool: ``` sudo snap remove docker ```
34,398,588
hello i try to create an object named 'gerant' ``` class gerant { public double CIN_GERANT, NUM_TEL_GERANT, MOBILE_GERANT; public string NOM_GERANT, PRENOM_GERANT, ADRESSE__GERANT, MAIL_GERANT, VILLE_GERANT; public int CP_GERANT; public DateTime DATE_GERANT; public gerant(double _Cin_gerant, string _Nom_Gerant, string _Prenom_Gerant, string _Adresse_Gerant, double _Num_Tel_Gerant, string _Mail_Gerant, double _Mobile_Gerant, int _cp_gerant, string _ville_gerant, DateTime _date_gerant) { this.CIN_GERANT = _Cin_gerant; this.NOM_GERANT = _Nom_Gerant; this.PRENOM_GERANT = _Prenom_Gerant; this.ADRESSE__GERANT = _Adresse_Gerant; this.NUM_TEL_GERANT = _Num_Tel_Gerant; this.MAIL_GERANT = _Mail_Gerant; this.MOBILE_GERANT = _Mobile_Gerant; this.CP_GERANT = _cp_gerant; this.VILLE_GERANT = _ville_gerant; this.DATE_GERANT = _date_gerant; } public gerant getinfogerant() { gerant gerer = null; string sql_gerant = "select CIN,NOM,PRENOM,ADRESS_PERSONNEL,NUM_TEL,MAIL,MOBILE,CP_GERANT,VILLE_GERANT,DATE_CIN from GERANT"; connexion connect = new connexion(); OleDbConnection connection = connect.getconnexion(); // try //{ connection.Open(); OleDbCommand cmd = new OleDbCommand(sql_gerant, connection); OleDbDataReader reader = cmd.ExecuteReader(); if (reader.Read()) { gerer = new gerant(reader.GetDouble(0), reader.GetString(1), reader.GetString(2), reader.GetString(3), reader.GetDouble(4), reader.GetString(5), reader.GetDouble(6), reader.GetInt32(7), reader.GetString(8), reader.GetDateTime(9) ); } connection.Close(); return gerer; } } ``` but when i try to fill my combobox with gerant i try to insert this code ``` foreach(Modele.gerant ligne in liste_gerant) { } ``` but i make this error for me foreach statement cannot operate on variables of type 'gerant' because 'gerant' does not contain a public definition for 'GetEnumerator' how can i resolve that?
2015/12/21
[ "https://Stackoverflow.com/questions/34398588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5441811/" ]
There is no way to do it using the standard DbMigration methods. The best way is to wrap a "select fieldToCheck from myTable where 1=2" in a try/catch and then add the field if required (in the catch). The other way is to write a custom migration generator that extends the migration SQL generator (i.e. adding an AddColumnIfNotExists method). You can have a look here to see how to do it: <http://romiller.com/2013/02/27/ef6-writing-your-own-code-first-migration-operations/>
Basic example with SQL: ``` // add column if not exists migrationBuilder.Sql( @"IF COL_LENGTH('schemaName.TableName', 'ColumnName') IS NULL ALTER TABLE [TableName] ADD [ColumnName] int NULL GO "); ```
34,398,588
hello i try to create an object named 'gerant' ``` class gerant { public double CIN_GERANT, NUM_TEL_GERANT, MOBILE_GERANT; public string NOM_GERANT, PRENOM_GERANT, ADRESSE__GERANT, MAIL_GERANT, VILLE_GERANT; public int CP_GERANT; public DateTime DATE_GERANT; public gerant(double _Cin_gerant, string _Nom_Gerant, string _Prenom_Gerant, string _Adresse_Gerant, double _Num_Tel_Gerant, string _Mail_Gerant, double _Mobile_Gerant, int _cp_gerant, string _ville_gerant, DateTime _date_gerant) { this.CIN_GERANT = _Cin_gerant; this.NOM_GERANT = _Nom_Gerant; this.PRENOM_GERANT = _Prenom_Gerant; this.ADRESSE__GERANT = _Adresse_Gerant; this.NUM_TEL_GERANT = _Num_Tel_Gerant; this.MAIL_GERANT = _Mail_Gerant; this.MOBILE_GERANT = _Mobile_Gerant; this.CP_GERANT = _cp_gerant; this.VILLE_GERANT = _ville_gerant; this.DATE_GERANT = _date_gerant; } public gerant getinfogerant() { gerant gerer = null; string sql_gerant = "select CIN,NOM,PRENOM,ADRESS_PERSONNEL,NUM_TEL,MAIL,MOBILE,CP_GERANT,VILLE_GERANT,DATE_CIN from GERANT"; connexion connect = new connexion(); OleDbConnection connection = connect.getconnexion(); // try //{ connection.Open(); OleDbCommand cmd = new OleDbCommand(sql_gerant, connection); OleDbDataReader reader = cmd.ExecuteReader(); if (reader.Read()) { gerer = new gerant(reader.GetDouble(0), reader.GetString(1), reader.GetString(2), reader.GetString(3), reader.GetDouble(4), reader.GetString(5), reader.GetDouble(6), reader.GetInt32(7), reader.GetString(8), reader.GetDateTime(9) ); } connection.Close(); return gerer; } } ``` but when i try to fill my combobox with gerant i try to insert this code ``` foreach(Modele.gerant ligne in liste_gerant) { } ``` but i make this error for me foreach statement cannot operate on variables of type 'gerant' because 'gerant' does not contain a public definition for 'GetEnumerator' how can i resolve that?
2015/12/21
[ "https://Stackoverflow.com/questions/34398588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5441811/" ]
I have worked on creating a custom migration method, AddColumnIfNotExists You need a custom MigrationOperation class: ``` public class AddColumnIfNotExistsOperation : MigrationOperation { public readonly string Table; public readonly string Name; public readonly ColumnModel ColumnModel; public AddColumnIfNotExistsOperation(string table, string name, Func<ColumnBuilder, ColumnModel> columnAction, object anonymousArguments) : base(anonymousArguments) { ArgumentValidator.CheckForEmptyArgument(table, nameof(table)); ArgumentValidator.CheckForEmptyArgument(name, nameof(name)); ArgumentValidator.CheckForNullArgument(columnAction, nameof(columnAction)); Table = table; Name = name; ColumnModel = columnAction(new ColumnBuilder()); ColumnModel.Name = name; } public override bool IsDestructiveChange => false; public override MigrationOperation Inverse => new DropColumnOperation(Table, Name, removedAnnotations: ColumnModel.Annotations.ToDictionary(s => s.Key,s => (object)s.Value) , anonymousArguments: null); } ``` You also need a custom SqlGenerator class: ``` public class AddColumnIfNotExistsSqlGenerator : SqlServerMigrationSqlGenerator { protected override void Generate(MigrationOperation migrationOperation) { var operation = migrationOperation as AddColumnIfNotExistsOperation; if (operation == null) return; using (var writer = Writer()) { writer.WriteLine("IF NOT EXISTS(SELECT 1 FROM sys.columns"); writer.WriteLine($"WHERE Name = N'{operation.Name}' AND Object_ID = Object_ID(N'{Name(operation.Table)}'))"); writer.WriteLine("BEGIN"); writer.WriteLine("ALTER TABLE "); writer.WriteLine(Name(operation.Table)); writer.Write(" ADD "); var column = operation.ColumnModel; Generate(column, writer); if (column.IsNullable != null && !column.IsNullable.Value && (column.DefaultValue == null) && (string.IsNullOrWhiteSpace(column.DefaultValueSql)) && !column.IsIdentity && !column.IsTimestamp && !column.StoreType.EqualsIgnoreCase("rowversion") && !column.StoreType.EqualsIgnoreCase("timestamp")) { writer.Write(" DEFAULT "); if (column.Type == PrimitiveTypeKind.DateTime) { writer.Write(Generate(DateTime.Parse("1900-01-01 00:00:00", CultureInfo.InvariantCulture))); } else { writer.Write(Generate((dynamic)column.ClrDefaultValue)); } } writer.WriteLine("END"); Statement(writer); } } } ``` And an Extension Method to give you your "AddColumnIfNotExists" function: ``` public static class MigrationExtensions { public static void AddColumnIfNotExists(this DbMigration migration, string table, string name, Func<ColumnBuilder, ColumnModel> columnAction, object anonymousArguments = null) { ((IDbMigration)migration) .AddOperation(new AddColumnIfNotExistsOperation(table, name, columnAction, anonymousArguments)); } } ``` In your EF Migrations Configuration file, you need to register the custom SQL generator: ``` [ExcludeFromCodeCoverage] internal sealed class Configuration : DbMigrationsConfiguration<YourDbContext> { public Configuration() { AutomaticMigrationsEnabled = false; // Register our custom generator SetSqlGenerator("System.Data.SqlClient", new AddColumnIfNotExistsSqlGenerator()); } } ``` And then you should be able to use it in place of AddColum like this (notice the **this** keyword): ``` [ExcludeFromCodeCoverage] public partial class AddVersionAndChangeActivity : DbMigration { public override void Up() { this.AddColumnIfNotExists("dbo.Action", "VersionId", c => c.Guid(nullable: false)); AlterColumn("dbo.Action", "Activity", c => c.String(nullable: false, maxLength: 8000, unicode: false)); } public override void Down() 
    {
        AlterColumn("dbo.Action", "Activity", c => c.String(nullable: false, maxLength: 50));

        DropColumn("dbo.Action", "VersionId");
    }
}
```

And of course you want some tests for the operation:

```
[TestClass]
public class AddColumnIfNotExistsOperationTests
{
    [TestMethod]
    public void Can_get_and_set_table_and_column_info()
    {
        Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(name: "T");

        var addColumnOperation = new AddColumnIfNotExistsOperation("T", "C", action, null);

        Assert.AreEqual("T", addColumnOperation.Table);
        Assert.AreEqual("C", addColumnOperation.Name);
    }

    [TestMethod]
    public void Inverse_should_produce_drop_column_operation()
    {
        Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(name: "C",
            annotations: new Dictionary<string, AnnotationValues>
            {
                { "A1", new AnnotationValues(null, "V1") }
            });

        var addColumnOperation = new AddColumnIfNotExistsOperation("T", "C", action, null);

        var dropColumnOperation = (DropColumnOperation)addColumnOperation.Inverse;

        Assert.AreEqual("C", dropColumnOperation.Name);
        Assert.AreEqual("T", dropColumnOperation.Table);
        Assert.AreEqual("V1", ((AnnotationValues)dropColumnOperation.RemovedAnnotations["A1"]).NewValue);
        Assert.IsNull(((AnnotationValues)dropColumnOperation.RemovedAnnotations["A1"]).OldValue);
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Ctor_should_validate_preconditions_tableName()
    {
        Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(name: "T");

        // ReSharper disable once ObjectCreationAsStatement
        new AddColumnIfNotExistsOperation(null, "T", action, null);
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Ctor_should_validate_preconditions_columnName()
    {
        Func<ColumnBuilder, ColumnModel> action = c => c.Decimal();

        // ReSharper disable once ObjectCreationAsStatement
        new AddColumnIfNotExistsOperation("T", null, action, null);
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Ctor_should_validate_preconditions_columnAction()
    {
        // ReSharper disable once ObjectCreationAsStatement
        new AddColumnIfNotExistsOperation("T", "C", null, null);
    }
}
```

And tests for the SQL generator:

```
[TestClass]
public class AddColumnIfNotExistsSqlGeneratorTests
{
    [TestMethod]
    public void AddColumnIfNotExistsSqlGenerator_Generate_can_output_add_column_statement_for_GUID_and_uses_newid()
    {
        var migrationSqlGenerator = new AddColumnIfNotExistsSqlGenerator();

        Func<ColumnBuilder, ColumnModel> action = c => c.Guid(nullable: false, identity: true, name: "Bar");
        var addColumnOperation = new AddColumnIfNotExistsOperation("Foo", "bar", action, null);

        var sql = string.Join(Environment.NewLine,
            migrationSqlGenerator.Generate(new[] {addColumnOperation}, "2005")
                .Select(s => s.Sql));

        Assert.IsTrue(sql.Contains("IF NOT EXISTS(SELECT 1 FROM sys.columns"));
        Assert.IsTrue(sql.Contains("WHERE Name = N\'bar\' AND Object_ID = Object_ID(N\'[Foo]\'))"));
        Assert.IsTrue(sql.Contains("BEGIN"));
        Assert.IsTrue(sql.Contains("ALTER TABLE"));
        Assert.IsTrue(sql.Contains("[Foo]"));
        Assert.IsTrue(sql.Contains("ADD [bar] [uniqueidentifier] NOT NULL DEFAULT newsequentialid()END"));
    }
}
```
A basic example with raw SQL:

```
// add column if not exists
migrationBuilder.Sql(
    @"IF COL_LENGTH('schemaName.TableName', 'ColumnName') IS NULL
        ALTER TABLE [TableName] ADD [ColumnName] int NULL
    GO
    ");
```
437,800
I've designed a Likert-scale survey with 4 measurements:

* x (it has 6 items/questions)
* y (it has 9 items/questions)
* z (it has 7 items/questions)
* w (it has 3 items/questions)

How can I rank these measurements? In other words, how can I tell which measurement is more important to the respondents, and in what order? For example, XYWZ or ZWXY?
2019/11/25
[ "https://stats.stackexchange.com/questions/437800", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/266770/" ]
A couple of things:

> 
> ...following two methods can be used for feature selection prior to model development 
> 
> 

Those are actually *part of the model development and should be cross validated*. What most people do is look at the correlations, select only the most correlated with the output, and then move on to do cross validation etc. That's wrong for two reasons.

1. Correlation measures the strength of a linear relationship. If the effect of the variable is non-linear, your correlation may not pick up on this. Here is a concrete example. When I worked for a marketing company, most of the customers we dealt with were in their late 30s to early 40s. They spent the most money, and people younger and older spent less because young people typically didn't have as much money or interest in our products. So the effect of age kind of looked like a concave function. If you simulate something like `x = rnorm(1000); y = -0.2*x^2 + rnorm(1000, 0, 0.5)` (here x has a concave relationship to y) the correlation is low even though x can explain 75% of the variation observed in y. If you removed features based on their correlation, surely you would not select this very important feature.
2. Had you had different training data, you might have picked different features. So when you fit models in the cross validation step, you need to repeat the selection of features based on the correlations. Same thing with the lasso. In every cross validation refit, you need to fit the lasso, select the features, then refit a model with the selected features.

> 
> Which of the above two methods is preferred? 
> 
> 

I don't think either is very good, to be honest. Correlation is myopic for the reasons I've described. Lasso is better, but there is no reason to think the features it selects are the "best" features, nor is there a reason to think that the features it selects would be selected had you had different data. Here is a code example to demonstrate that:

```
library(tidyverse)
library(glmnet)

S = 0
N = 1000
p = 100
mu = rep(0, p)
betas = rnorm(p, 2, 2)*rbinom(p, 1, 0.10)

while(max(abs(S))<0.9){
  S = rethinking::rlkjcorr(1, p)
}

do_glmnet<-function(){
  X = MASS::mvrnorm(N, mu, S)
  y = X %*% betas + rnorm(N, 0, 2.5)

  cvmodel = cv.glmnet(X, y, alpha = 1)
  model = glmnet(X, y, alpha = 1)

  coef(model, cvmodel$lambda.1se) %>%
    as.matrix() %>%
    t() %>%
    as_tibble()
}

results = map_df(1:100, ~do_glmnet())

results %>% summarise_all(~mean(abs(.)>0))
```

In that example, I generate data from a sparse linear model. Some variables are selected every time (those are the variables with real effects) but you can see that some variables with 0 effect are sometimes selected and sometimes not selected.

The absolute best way to select features is to use your knowledge about the data generating process to determine what is important and what is not. If you can't do that, use lasso to trade off variance for a bit of bias, but don't select out features. Just keep the entire fit in the model. I saw Trevor Hastie speak at a zoom talk the other day and he showed us an example in which LASSO with all features performed better than selecting features with LASSO and then refitting the full model. I can't say that is the case for every problem, but it was pretty compelling evidence. Let me see if I can find a link to the talk.

That being said, I'm open to seeing numerical experiments that show that selection via glmnet does better than just putting everything into glmnet and not selecting. That just hasn't been the story I've seen.
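To make the cross-validation point concrete, here is a minimal sketch (assuming an `X` and `y` generated the same way as inside `do_glmnet()` above, and `N` as defined there). Everything that involves selection or fitting only ever sees the training fold; the held-out fold is used purely for assessment, which is what keeps the error estimate honest.

```
library(glmnet)

set.seed(1)
folds = sample(rep(1:5, length.out = N))
fold_rmse = numeric(5)

for (k in 1:5) {
  train = folds != k

  # Fit (and implicitly select via the penalty) on the training fold only
  cvfit = cv.glmnet(X[train, ], y[train], alpha = 1)

  # Assess on the fold that played no part in the selection
  pred = predict(cvfit, newx = X[!train, ], s = "lambda.1se")
  fold_rmse[k] = sqrt(mean((y[!train] - pred)^2))
}

mean(fold_rmse)
```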
I think by saying "correlation" you are referring to SIS (sure independence screening), developed by Jianqing Fan and Jinchi Lv. Actually, the logic behind the two methods is different: LASSO does the selection through a penalized loss function, and sparsity of the relevant variables is assumed. Normally, for ultra-high-dimensional data, we perform SIS first to reduce the dimension to a relatively small number, and then perform LASSO to further reduce the number of variables that enter the final model.
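A minimal sketch of that two-stage recipe in R, assuming a predictor matrix `X` and response `y` with many more columns than rows (the screening size `d` below is just one common rule of thumb, not the only choice):

```
library(glmnet)

n = nrow(X)
d = floor(n / log(n))                           # screen down to d variables
marginal = abs(cor(X, y))                       # marginal (linear) associations
keep = order(marginal, decreasing = TRUE)[1:d]  # indices of the screened set

cvfit = cv.glmnet(X[, keep], y, alpha = 1)      # LASSO on the survivors
nonzero = which(as.vector(coef(cvfit, s = "lambda.min"))[-1] != 0)
selected = keep[nonzero]                        # columns of X in the final model
```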
22,039,373
I'm using the xsd 3.3.0 compiler in order to turn an xsd (xml's best friend) file into C++ classes (see the last weblink). The command is

> 
> xsd cxx-tree (options) file.xsd 
> 
> 

(+ info <http://www.codesynthesis.com/projects/xsd/documentation/cxx/tree/guide/>)

I've seen some examples provided by CodeSynthesis where they parse a hello.xsd document and create a .hxx and a .cxx file very easily. The .hxx has a method to open an xml document, creating an object where you can find the different parts of the xml, check it, etc. The .hxx has code like this:

```
// Parse a URI or a local file.
//

::std::auto_ptr< ::hello_t >
hello (const ::std::string& uri,
       ::xml_schema::flags f = 0,
       const ::xml_schema::properties& p = ::xml_schema::properties ());
```

It receives a string with the file name

> 
> string& uri = "hello.xsd" 
> 
> 

and creates the object that you use in main.cxx.

So, I'm trying to do the same with my xsd file. I use the xsd cxx-tree compiler but it doesn't create the methods to "Parse a URI or a local file.". Then I can't create an object from an xml file in my main program.

I solved some compilation problems using different options from the CodeSynthesis compiler documentation (<http://www.codesynthesis.com/projects/xsd/documentation/xsd.xhtml>). There are different options for what you want to compile, how you want to do it, etc., but I can't find any option that enables the methods used to "Parse a URI or a local file.".

To give more information, the xml/xsd documents are CBML protocol documents.

Thank you for your help!
2014/02/26
[ "https://Stackoverflow.com/questions/22039373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3355390/" ]
I found the solution myself! It was down to the options I used when compiling. I had been using the option "--root-element library", and that was why the methods to "Parse a URI or a local file" weren't created. I deleted this option and added "--root-element-all", which creates parsing functions for all root elements! Now my program works! Thanks!
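For reference, the command then looks roughly like this (any other options you pass depend on your own setup):

```
xsd cxx-tree --root-element-all file.xsd
```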
I use this product all the time. Here is an example of what you're trying to do (I think):

`xsd cxx-tree hello.xsd`

which generates the `hello.hxx` and the `hello.cxx`, as you've said. I think where you're falling short is understanding how to use these files to load an XML file (e.g., loading a "local file"). I like to explicitly tell the software where to find the XSD schema. The following code will not compile as-is, but I've included it for reference:

```
#include <iostream>
#include <memory>
#include <sstream>
#include <string>

#include "hello.hxx" // generated by `xsd cxx-tree hello.xsd`

void load(const std::string &xml, const std::string &xsd)
{
    xml_schema::properties props;
    props.no_namespace_schema_location(xsd);
    // or, if you have a specific namespace you want to load, use
    // props.schema_location("which namespace", xsd);

    try
    {
        std::auto_ptr< ::hello_t> xml_file = hello(xml, 0, props);

        // operate on the xml_file via "->" notation
        std::cout << xml_file->hello() << std::endl;
    }
    catch (const ::xml_schema::exception &e)
    {
        std::ostringstream os;
        os << e.what();
        // report the error as you see fit, use os.str() to get the string
    }
}
```

Make sure to link the `*.cxx` file to your `*.cpp` files. If this is not what you wanted, let me know in the comments and I can try to help you some more.
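To actually build the result, the generated translation unit has to be compiled with your own code and linked against Xerces-C++, which the generated code depends on. A rough sketch of a build line, assuming g++ and a system-installed Xerces-C++ (exact flags, paths and output name will vary):

```
g++ -std=c++11 main.cpp hello.cxx -lxerces-c -o hello_app
```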
33,156,051
It seems that, despite the fact we're not using transactions at all, we get random deadlock errors from SQL Azure. Are there non-transactional situations in which SQL Azure can get into a deadlock?

It seems that when we run a batch of UPDATE queries, it acts as if the batch were one big transaction. All the updates are by id and each updates a single row.
2015/10/15
[ "https://Stackoverflow.com/questions/33156051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/593425/" ]
There is no such thing as "not using transactions". There's always a transaction, whether you start one explicitly or not.

Read [Tracking down Deadlocks in SQL Database](http://blogs.msdn.com/b/sqldatabasetalk/archive/2013/05/01/tracking-down-deadlocks-in-sql-database.aspx) for how to obtain the deadlock graph in SQL Azure. Connect to `master` and run:

```
SELECT * FROM sys.event_log
WHERE database_name like '<your db name>'
AND event_type = 'deadlock';
```

Then analyze the deadlock graph to understand the cause. Most likely you're doing scans because of missing indexes.
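To make that last point concrete, here is a purely hypothetical illustration (the table and column names are invented for the example, not taken from the question): if the batched updates filter on a column that has no index, every UPDATE has to scan the table, and overlapping scans acquiring locks in different orders is a classic deadlock pattern; an index lets each statement lock only the row it needs.

```sql
-- Hypothetical table: prices updated by an external id.
-- Without this index, "UPDATE dbo.Prices SET Price = @p WHERE ExternalId = @id"
-- scans the whole table on every call.
CREATE INDEX IX_Prices_ExternalId
    ON dbo.Prices (ExternalId);
```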
When you have concurrent transactions running (either implicit or explicit) you can encounter deadlocks. When you said "no transactions", that probably means your transactions are implicit.
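A minimal sketch of the distinction, with a made-up table name; even in the first form each statement still runs inside its own (autocommit) transaction:

```sql
-- Autocommit: each statement is its own transaction.
UPDATE dbo.Prices SET Price = 1.99 WHERE Id = 42;

-- The same work wrapped in an explicit transaction.
BEGIN TRANSACTION;
UPDATE dbo.Prices SET Price = 1.99 WHERE Id = 42;
UPDATE dbo.Prices SET Price = 2.49 WHERE Id = 43;
COMMIT TRANSACTION;
```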
25,703,878
I have a price database that stores numbers as floating point. These are presented on a website. Prices can be in the format:

```
x.x (e.g. 1.4)
x.xx (e.g. 1.99)
x.xxx (e.g. 1.299) <-- new price format
```

I used to use the string format `%.2f` to standardize the prices to two decimal places, but now I need to show 3 as well, though only if the price is 3 decimal places long.

```
e.g. 1.4 would display 1.40
1.45 would display 1.45
1.445 would display 1.445
```

The above formats would be the desired output for the given input. Using `%.3f` shows all with 3 digits:

```
e.g. 1.4 would display 1.400
1.45 would display 1.450
1.445 would display 1.445
```

But that is not what I want. Does anyone know the best way to do the following?

i.e. any number should display 2 decimal places if it has 0, 1 or 2 decimal places; if it has 3 or more decimal places it should display 3 decimal places.
2014/09/06
[ "https://Stackoverflow.com/questions/25703878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/333661/" ]
I would just format it to three places, then trim a final 0.

```
$formatted = number_format($value, 3, ".", "");
if (substr($formatted, -1) === "0")
    $formatted = substr($formatted, 0, -1);
```
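A quick check against the three inputs from the question (the wrapper function name is mine, purely for the demo):

```php
<?php
// Wrapper around the snippet above, just to make it easy to call repeatedly.
function formatPrice(float $value): string {
    $formatted = number_format($value, 3, ".", "");
    if (substr($formatted, -1) === "0") {
        $formatted = substr($formatted, 0, -1);
    }
    return $formatted;
}

echo formatPrice(1.4), PHP_EOL;   // 1.40
echo formatPrice(1.45), PHP_EOL;  // 1.45
echo formatPrice(1.445), PHP_EOL; // 1.445
```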
Use this dude

```
number_format($data->price, 0, ',', '.');
```

<http://php.net/manual/en/function.number-format.php>
25,703,878
I have a price database that stores numbers as floating point. These are presented on a website. Prices can be in the format:

```
x.x (e.g. 1.4)
x.xx (e.g. 1.99)
x.xxx (e.g. 1.299) <-- new price format
```

I used to use the string format `%.2f` to standardize the prices to two decimal places, but now I need to show 3 as well, though only if the price is 3 decimal places long.

```
e.g. 1.4 would display 1.40
1.45 would display 1.45
1.445 would display 1.445
```

The above formats would be the desired output for the given input. Using `%.3f` shows all with 3 digits:

```
e.g. 1.4 would display 1.400
1.45 would display 1.450
1.445 would display 1.445
```

But that is not what I want. Does anyone know the best way to do the following?

i.e. any number should display 2 decimal places if it has 0, 1 or 2 decimal places; if it has 3 or more decimal places it should display 3 decimal places.
2014/09/06
[ "https://Stackoverflow.com/questions/25703878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/333661/" ]
I would just format it to three places, then trim a final 0.

```
$formatted = number_format($value, 3, ".", "");
if (substr($formatted, -1) === "0")
    $formatted = substr($formatted, 0, -1);
```
Here is what I did, due to the need to cope with some special cases I had in the app.

1. Count the number of decimal places (`$price` is a float from the database).
2. Format based on that count using a switch statement.
3. For all cases with less than 3 decimal places, format with 2 (except zero).
4. For all other cases, format with 3.

```
$decimals = strlen(substr(strrchr($price, "."), 1));

switch ($decimals) {
    case 0: {
        if ($price != 0) {
            $price = number_format($price, 2);
        }
        break;
    }
    case 1: {
        $price = number_format($price, 2);
        break;
    }
    case 2: {
        $price = number_format($price, 2);
        break;
    }
    default: {
        $price = number_format($price, 3); // three dec places for all other prices
        break;
    }
}
```

Thanks for the help...
37,138
I'm using the standalone GeoWebCache to serve tiles from a remote GeoServer. My problem is that the polygon label is added to each one of the tiles served, instead of only once at the polygon centroid.

I found a post which discusses the issue: <http://osgeo-org.1560.n6.nabble.com/polygon-label-repeated-for-each-tile-td4995203.html>

The first reply mentioned a possible solution:

> 
> "All in all, I suggest to use a tile rendering engine (GeoWebCache, MapProxy, TileCache) anyway, instead of requesting small image from GeoServer and have the tile rendering engine do the tile slicing afterwards. You will have send fewer requests to GeoServer (1 large image instead of multiple small images), so this speeds up the overall tile cache creation time." 
> 
> 

The problem is that I couldn't find how to do that in the GeoWebCache documentation, and the above-mentioned post doesn't explain how to implement it.

I also found a [post](https://gis.stackexchange.com/questions/29127/labeling-geoserver-sld) with an answer that links to the [GeoWebCache "Tiled" documentation](http://docs.geoserver.org/latest/en/user/services/wms/vendor.html#tiled), but my code already uses all the necessary attributes and still the label shows up multiple times:

```
var Layer_1874 = new OpenLayers.Layer.WMS(
    'Grundkort',
    '/wms10.ashx'
    , { format: 'image/png', srs: 'EPSG:25832', layers: 'ballerupkommune_grundkort_bk', tiled: true, tilesOrigin: '698804,6173460' }
    , { displayInLayerSwitcher: true, isBaseLayer: true, transitionEffect: 'resize', displayOutsideMaxExtent: true, visibility: false }
);
```

Does anyone have an idea?
2012/10/19
[ "https://gis.stackexchange.com/questions/37138", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/12096/" ]
Below is an example of an SLD rule that places a label at the center of a feature's geometry. This uses the ogc:Function called "centroid" to place the label. You can read more about SLD functions in the GeoServer [docs](http://docs.geoserver.org/latest/en/user/filter/function_reference.html), and some examples are given [here](http://docs.geoserver.org/latest/en/user/filter/function.html).

```
<sld:Rule>
  <MaxScaleDenominator>5000</MaxScaleDenominator>
  <sld:TextSymbolizer>
    <sld:Geometry>
      <ogc:Function name="centroid">
        <ogc:PropertyName>the_geom</ogc:PropertyName>
      </ogc:Function>
    </sld:Geometry>
    <sld:Label>
      <ogc:PropertyName>LOT_NAME</ogc:PropertyName>
    </sld:Label>
    <sld:Font>
      <sld:CssParameter name="font-family">Arial</sld:CssParameter>
      <sld:CssParameter name="font-size">11</sld:CssParameter>
      <sld:CssParameter name="font-style">normal</sld:CssParameter>
      <sld:CssParameter name="font-weight">bold</sld:CssParameter>
    </sld:Font>
    <sld:LabelPlacement>
      <sld:PointPlacement>
        <sld:AnchorPoint>
          <sld:AnchorPointX>
            <ogc:Literal>0.0</ogc:Literal>
          </sld:AnchorPointX>
          <sld:AnchorPointY>
            <ogc:Literal>0.5</ogc:Literal>
          </sld:AnchorPointY>
        </sld:AnchorPoint>
        <sld:Rotation>
          <ogc:Literal>0</ogc:Literal>
        </sld:Rotation>
      </sld:PointPlacement>
    </sld:LabelPlacement>
    <sld:Halo>
      <sld:Radius>
        <ogc:Literal>1.0</ogc:Literal>
      </sld:Radius>
      <sld:Fill>
        <sld:CssParameter name="fill">#FFFFFF</sld:CssParameter>
      </sld:Fill>
    </sld:Halo>
    <sld:VendorOption name="conflictResolution">true</sld:VendorOption>
    <sld:VendorOption name="goodnessOfFit">0</sld:VendorOption>
    <sld:VendorOption name="autoWrap">60</sld:VendorOption>
  </sld:TextSymbolizer>
</sld:Rule>
```

Also, the [SLD Cookbook](http://blog.geoserver.org/2010/04/09/sld-cookbook/) is a great reference.

One thing that can trip you up is the ordering of tags in the SLD. For the TextSymbolizer rule above you can see the required order by looking in the schema definition. Don't worry, it's not too scary! Just search for "textsymbolizer" in that .xsd file, and you should easily find the "sequence" tag. There you'll find that the element references match up with the order in my example. (Note: I didn't use the text symbolizer's "fill" attribute, my fill just applies to the halo.)
Computing labels with collision resolution (moving labels out of the way or removing lower priority ones so they don't overlap) requires knowing about every label that might collide with the label you are drawing, every label that might collide with them, and so on. So, in general, you either need to compute all the labels at once by looking at every feature, or break the map into blocks with labels computed within each block. By default, GeoWebCache uses a 4x4 block of tiles called a "metatile". When you request a tile that isn't in the cache, GWC will request the entire metatile as one big image from the backend and then slice the metatile into tiles which it caches. You can adjust the metatile factor when setting up a layer. Larger metatiles give better looking labels, but increase the latency of a cache miss. If you aren't using label collision resolution on the back end, you can set the metatiling to 1x1. You can also tell GWC to add a gutter around the metatile which is extra space that will be cut off. It's risky to do this if you have label collision resolution on as a label may be positioned differently or even be removed entirely on the other side of a metatile boundary. If you have labels that are totally fixed in position and never get supressed to avoid collision though, you can use a wide gutter to allow the labels to cross tile boundaries. This will have a performance cost as GeoServer will have to render a larger tile. You can set metatiling and gutter on the Tile Layer tab of the layer configuration, or the default that will be used for new layers can be set on the Caching Defaults page. To disable conflict resolution, you can use the [`conflictResolution`](https://docs.geoserver.org/stable/en/user/styling/sld/reference/labeling.html#conflictresolution) vendor option in your styles.
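For the standalone GeoWebCache from the question, both knobs live in the layer definition in `geowebcache.xml`. A rough sketch of the relevant elements follows (the WMS URL is a placeholder, and the file is schema-validated, so check the exact element order and any other required elements against the GeoWebCache configuration documentation for your version):

```xml
<wmsLayer>
  <name>ballerupkommune_grundkort_bk</name>
  <!-- Render in 4x4 metatile blocks so labels are resolved over a larger area -->
  <metaWidthHeight>
    <int>4</int>
    <int>4</int>
  </metaWidthHeight>
  <!-- Extra pixels requested around each metatile and cut off afterwards -->
  <gutter>50</gutter>
  <wmsUrl>
    <string>http://your-geoserver/geoserver/wms</string>
  </wmsUrl>
</wmsLayer>
```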