Dataset columns:
qid: int64 (1 … 74.7M)
question: string (0 … 58.3k chars)
date: string (10 chars)
metadata: sequence
response_j: string (2 … 48.3k chars)
response_k: string (2 … 40.5k chars)
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
It certainly is. The type is a [NameValueCollection](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx): ``` public string extract(NameValueCollection form) { ... } ```
Yes you can. It's of type `FormCollection`, which inherits from `NameValueCollection`.
14,336,734
Because of some reasons I have a spring application which has two client applications written in extjs. One only contains the login page and the other the application logic. In Spring I include them into two jsp pages which I'm using in the controller. The login and the redirect to the application page works fine. But if I logout the logout is done successful but I keep staying on the application page instead of being redirected to the login page. security config: ``` <security:logout logout-url="/main/logoutpage.html" delete-cookies="JSESSIONID" invalidate-session="true" logout-success-url="/test/logout.html"/> ``` Controller: ``` @RequestMapping(value="/test/logout.html",method=RequestMethod.GET) public ModelAndView testLogout(@RequestParam(required=false)Integer error, HttpServletRequest request, HttpServletResponse response){ return new ModelAndView("login"); } ``` "login" is the name of the view which contains the login application. In browser debugging I can see following two communcation: ``` Request URL:http://xx:8080/xx/xx/logoutpage.html?_dc=1358246248972 Request Method:GET Status Code:302 Moved Temporarily Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Query String Parametersview URL encoded _dc:1358246248972 Response Headersview source Content-Length:0 Date:Tue, 15 Jan 2013 10:37:33 GMT Location:http://xx:8080/xx/xx/login.html Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/xx Request URL:http://xx:8080/xx/xx/login.html Request Method:GET Status Code:200 OK Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:JSESSIONID=6E22E42CC6835C8A6DFF2535907DEF17 Host:xx:8080 Referer:http://xx:8080/xx/xx/Home.html User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 X-Requested-With:XMLHttpRequest Response Headersview source Content-Language:de-DE Content-Length:417 Content-Type:text/html;charset=ISO-8859-1 Date:Tue, 15 Jan 2013 10:37:33 GMT Server:Apache-Coyote/1.1 Set-Cookie:JSESSIONID=532EBEED737BD4172E290F0D10085ED5; Path=/xx/; HttpOnly ``` The second response also contains the login page: ``` <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <script src="http://extjs.cachefly.net/ext-4.1.1-gpl/ext-all.js"></script> <link rel="stylesheet" href="http://extjs.cachefly.net/ext-4.1.1-gpl/resources/css/ext-all.css"> <script type="text/javascript" src="/xx/main/app/app.js"></script> </head> <body></body> </html> ``` Somebody has an idea why the login page is not shown? Thanks
2013/01/15
[ "https://Stackoverflow.com/questions/14336734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1979942/" ]
It certainly is. The type is a [NameValueCollection](http://msdn.microsoft.com/en-us/library/system.collections.specialized.namevaluecollection.aspx): ``` public string extract(NameValueCollection form) { ... } ```
Using the [example in the documentation](http://msdn.microsoft.com/en-us/library/system.web.httprequest.form.aspx): ``` public string extract(NameValueCollection myRequest) { int loop1; StringBuilder processed_data = new StringBuilder(); // Get the names of all form keys into a string array. String[] arr1 = myRequest.AllKeys; for (loop1 = 0; loop1 < arr1.Length; loop1++) { processed_data.Append("Form: " + arr1[loop1] + "<br>"); } return processed_data.ToString(); } ```
66,029,135
Consider the table below: ([here's a db-fiddle with this example](https://www.db-fiddle.com/f/nVZp5EagiiEgqPYNLvQnKd/0)) ``` id primary_sort record_id record_sort alt_sort 1 2 1 11 100 2 2 2 10 101 3 3 1 12 108 4 3 1 13 107 5 3 2 14 105 6 1 2 15 109 ``` I'd like to sort this according to `primary_sort` first. If equal, the next sort field depends on the value of `record_id`: if two rows has the same `record_id`, then sort them by `record_sort`. Otherwise, sort them by `alt_sort`. I think the query should look something like this: ```sql select * from example order by primary_sort, case when [this_row].record_id = [other_row].record_id then record_sort else alt_sort end ; ``` Expected output: ``` id primary_sort record_id record_sort alt_sort 6 1 2 15 109 1 2 1 11 100 2 2 2 10 101 5 3 2 14 105 3 3 1 12 108 4 3 1 13 107 ``` Here's some pseudocode in Java, showing my intent: ```java int compareTo(Example other) { if (this.primary_sort != other.primary_sort) { return this.primary_sort.compareTo(other.primary_sort); } else if (this.record_id == other.record_id) { return this.record_sort.compareTo(other.record_sort); } else { return this.alt_sort.compareTo(other.alt_sort); } } ``` (this is a minimal, reproducible example. Similar SO questions I've found on conditional `order by` are not applicable, because my condition is based on values in both rows (i.e. `[this_row].record_id = [other_row].record_id`))
2021/02/03
[ "https://Stackoverflow.com/questions/66029135", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4171429/" ]
I normally use a structure like below, `/src/scss/core` is my custom sass directory: ``` // 1. Include functions first (so you can manipulate colors, SVGs, calc, etc) @import "../../node_modules/bootstrap/scss/functions"; @import "../../node_modules/bootstrap/scss/variables"; // 2. Include any default variable overrides here @import "core/variables"; // Custom theme variables @import "core/variables-bootstrap"; // Bootstrap variables overrides // Mixins @import "../../node_modules/bootstrap/scss/mixins"; @import "core/mixins.scss"; // Bootstrap core @import "../../node_modules/bootstrap/scss/utilities"; @import "../../node_modules/bootstrap/scss/root"; @import "../../node_modules/bootstrap/scss/reboot"; @import "../../node_modules/bootstrap/scss/type"; @import "../../node_modules/bootstrap/scss/images"; @import "../../node_modules/bootstrap/scss/containers"; @import "../../node_modules/bootstrap/scss/grid"; @import "../../node_modules/bootstrap/scss/tables"; @import "../../node_modules/bootstrap/scss/forms"; @import "../../node_modules/bootstrap/scss/buttons"; @import "../../node_modules/bootstrap/scss/transitions"; @import "../../node_modules/bootstrap/scss/dropdown"; @import "../../node_modules/bootstrap/scss/button-group"; @import "../../node_modules/bootstrap/scss/nav"; @import "../../node_modules/bootstrap/scss/navbar"; @import "../../node_modules/bootstrap/scss/card"; @import "../../node_modules/bootstrap/scss/accordion"; @import "../../node_modules/bootstrap/scss/breadcrumb"; @import "../../node_modules/bootstrap/scss/pagination"; @import "../../node_modules/bootstrap/scss/badge"; @import "../../node_modules/bootstrap/scss/alert"; @import "../../node_modules/bootstrap/scss/progress"; @import "../../node_modules/bootstrap/scss/list-group"; @import "../../node_modules/bootstrap/scss/close"; @import "../../node_modules/bootstrap/scss/toasts"; @import "../../node_modules/bootstrap/scss/modal"; @import "../../node_modules/bootstrap/scss/tooltip"; @import "../../node_modules/bootstrap/scss/popover"; @import "../../node_modules/bootstrap/scss/carousel"; @import "../../node_modules/bootstrap/scss/spinners"; @import "../../node_modules/bootstrap/scss/offcanvas"; @import "../../node_modules/bootstrap/scss/placeholders"; // Helpers @import "../../node_modules/bootstrap/scss/helpers"; // Utilities @import "../../node_modules/bootstrap/scss/utilities/api"; ``` This structure works for me with variable overrides/extending but like Zim said you'd need to override $link-hover-color too since it's default value is set before you defined your custom $primary colour (at least that's my understanding of the !default flag). ``` $primary: $red; $link-color: $primary; $link-hover-color: shift-color($link-color, $link-shade-percentage); ```
I think you'd have to set any other vars that use `$link-color` (ie: `$btn-link-color`) and merge the new colors into the `$theme-colors` map... ``` @import "functions"; @import "variables"; @import "mixins"; $primary: $red; $link-color: $primary; $btn-link-color: $primary; $theme-colors: map-merge( $theme-colors, ( "primary": $primary ) ); @import "bootstrap"; ``` [Demo](https://codeply.com/p/kUS6ulE754#)
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
My 5 cents: a PowerShell script that, when run, creates a shortcut pointing to PowerShell and itself, so that you can drop files onto the shortcut to get the UNC path (or the normal local file path) copied to the clipboard <https://inmood.ch/get-unc-path-of-fileserver-file/> ``` # run without arguments will create a file called DropFileToGetUNCPath.lnk # if you drop a file onto the shortcut it'll return the UNC path if($args[0] -eq $null) { # creating the shortcut to drop files later on $path = $pwd.path $script = $MyInvocation.MyCommand.Path $WshShell = New-Object -comObject WScript.Shell $Shortcut = $WshShell.CreateShortcut("$path\DropFileToGetUNCPath.lnk") $Shortcut.TargetPath = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" $Shortcut.Arguments = "-noprofile -file """ + $script + """" $Shortcut.Save() }else{ $file = $args[0] } $drive = $pwd.drive.name + ":" # find UNC paths for directories $drives = net use $drive = ($drives -match ".*" + $drive + ".*") #debug #echo $drive $parts = $drive -split "\s{1,11}" #debug #echo $parts $windowsDrive = $parts[1] $uncDrive = $parts[2] $file -replace $windowsDrive, $uncDrive | clip ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
``` $CurrentFolder = "H:\Documents" $Query = "Select * from Win32_NetworkConnection where LocalName = '" + $CurrentFolder.Substring( 0, 2 ) + "'" ( Get-WmiObject -Query $Query ).RemoteName ``` OR ``` $CurrentFolder = "H:\Documents" $Tst = $CurrentFolder.Substring( 0, 2 ) ( Get-WmiObject -Query "Select * from Win32_NetworkConnection where LocalName = '$Tst'" ).RemoteName ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
``` wmic path win32_mappedlogicaldisk get deviceid, providername ``` Result: ``` DeviceID ProviderName I: \\server1\Temp J: \\server2\Corporate Y: \\Server3\Dev_Repo Z: \\Server3\Repository ``` As a batch file ([src](https://github.com/maphew/code/blob/master/other/get-unc-path.bat)): ``` @if [%1]==[] goto :Usage @setlocal enabledelayedexpansion @set _NetworkPath= @pushd %1 @for /f "tokens=2" %%i in ('wmic path win32_mappedlogicaldisk get deviceid^, providername ^| findstr /i "%CD:~0,2%"') do @(set _NetworkPath=%%i%CD:~2%) @echo.%_NetworkPath% @popd @goto :EOF :: --------------------------------------------------------------------- :Usage @echo. @echo. Get the full UNC path for the specified mapped drive path @echo. @echo. %~n0 [mapped drive path] ``` Example: ``` C:\> get-unc-path.bat z:\Tools\admin \\EnvGeoServer\Repository\Tools\admin ``` Batch script adapted from <https://superuser.com/a/1123556/16966>. Please be sure to go vote that one up too if you like this solution. *Update 2021-11-15:* bug fix. Previously the batch only reported drive letter UNC root and neglected to also report the folder path. `%CD%` is set from `%%i` through some kind of CMD magic. `%CD:~0,2%` and `%CD:~2%` extract the drive letter and trailing path [substrings](https://ss64.com/nt/syntax-substring.html) respectively. e.g. `:~2%` does '\Tools\admin' from 'Z:\Tools\admin'.
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
The answer is a simple `PowerShell` one-liner: ``` Get-WmiObject Win32_NetworkConnection | ft "RemoteName","LocalName" -A ``` If you only want to pull the `UNC` for one particular drive, add a where statement: ``` Get-WmiObject Win32_NetworkConnection | where -Property 'LocalName' -eq 'Z:' | ft "RemoteName","LocalName" -A ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
This question has been answered already, but since there is a **more convenient way** to get the UNC path and some more I recommend using Path Copy, which is free and you can practically get any path you want with one click: <https://pathcopycopy.github.io/> Here is a screenshot demonstrating how it works. The latest version has more options and definitely UNC Path too: [![enter image description here](https://i.stack.imgur.com/4gDZU.png)](https://i.stack.imgur.com/4gDZU.png)
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
This question has been answered already, but since there is a **more convenient way** to get the UNC path and some more I recommend using Path Copy, which is free and you can practically get any path you want with one click: <https://pathcopycopy.github.io/> Here is a screenshot demonstrating how it works. The latest version has more options and definitely UNC Path too: [![enter image description here](https://i.stack.imgur.com/4gDZU.png)](https://i.stack.imgur.com/4gDZU.png)
``` $CurrentFolder = "H:\Documents" $Query = "Select * from Win32_NetworkConnection where LocalName = '" + $CurrentFolder.Substring( 0, 2 ) + "'" ( Get-WmiObject -Query $Query ).RemoteName ``` OR ``` $CurrentFolder = "H:\Documents" $Tst = $CurrentFolder.Substring( 0, 2 ) ( Get-WmiObject -Query "Select * from Win32_NetworkConnection where LocalName = '$Tst'" ).RemoteName ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
``` $CurrentFolder = "H:\Documents" $Query = "Select * from Win32_NetworkConnection where LocalName = '" + $CurrentFolder.Substring( 0, 2 ) + "'" ( Get-WmiObject -Query $Query ).RemoteName ``` OR ``` $CurrentFolder = "H:\Documents" $Tst = $CurrentFolder.Substring( 0, 2 ) ( Get-WmiObject -Query "Select * from Win32_NetworkConnection where LocalName = '$Tst'" ).RemoteName ```
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
If you have Microsoft Office: 1. **RIGHT**-drag the drive, folder or file from Windows Explorer into the body of a Word document or Outlook email 2. Select '**Create Hyperlink Here**' The inserted text will be the full UNC of the dragged item.
``` wmic path win32_mappedlogicaldisk get deviceid, providername ``` Result: ``` DeviceID ProviderName I: \\server1\Temp J: \\server2\Corporate Y: \\Server3\Dev_Repo Z: \\Server3\Repository ``` As a batch file ([src](https://github.com/maphew/code/blob/master/other/get-unc-path.bat)): ``` @if [%1]==[] goto :Usage @setlocal enabledelayedexpansion @set _NetworkPath= @pushd %1 @for /f "tokens=2" %%i in ('wmic path win32_mappedlogicaldisk get deviceid^, providername ^| findstr /i "%CD:~0,2%"') do @(set _NetworkPath=%%i%CD:~2%) @echo.%_NetworkPath% @popd @goto :EOF :: --------------------------------------------------------------------- :Usage @echo. @echo. Get the full UNC path for the specified mapped drive path @echo. @echo. %~n0 [mapped drive path] ``` Example: ``` C:\> get-unc-path.bat z:\Tools\admin \\EnvGeoServer\Repository\Tools\admin ``` Batch script adapted from <https://superuser.com/a/1123556/16966>. Please be sure to go vote that one up too if you like this solution. *Update 2021-11-15:* bug fix. Previously the batch only reported drive letter UNC root and neglected to also report the folder path. `%CD%` is set from `%%i` through some kind of CMD magic. `%CD:~0,2%` and `%CD:~2%` extract the drive letter and trailing path [substrings](https://ss64.com/nt/syntax-substring.html) respectively. e.g. `:~2%` does '\Tools\admin' from 'Z:\Tools\admin'.
21,482,825
I need to be able to determine the path of the network Q drive at work for a webMethods project. The code that I have below is in my configuration file. I placed single-character letters inside the directories just for security reasons. I am not sure what the semicolon is for, but I think that the double slashes are where the drive name comes into play. Question: Is there an easy way on a Windows 7 machine to find out what the full UNC path is for any specific drive location? Code: ``` allowedWritePaths=Q:/A/B/C/D/E/ allowedReadPaths=C:/A/B;//itpr99999/c$/A/FileName.txt allowedDeletePaths= ```
2014/01/31
[ "https://Stackoverflow.com/questions/21482825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487995/" ]
In Windows, if you have mapped network drives and you don't know the UNC path for them, you can start a command prompt (*Start → Run → cmd.exe*) and use the `net use` command to list your mapped drives and their UNC paths: ``` C:\>net use New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK Q: \\server1\foo Microsoft Windows Network OK X: \\server2\bar Microsoft Windows Network The command completed successfully. ``` Note that this shows the list of mapped and connected network file shares for the user context the command is run under. If you run `cmd.exe` under your own user account, the results shown are the network file shares for yourself. If you run `cmd.exe` under another user account, such as the local Administrator, you will instead see the network file shares for that user.
The answer is a simple `PowerShell` one-liner: ``` Get-WmiObject Win32_NetworkConnection | ft "RemoteName","LocalName" -A ``` If you only want to pull the `UNC` for one particular drive, add a where statement: ``` Get-WmiObject Win32_NetworkConnection | where -Property 'LocalName' -eq 'Z:' | ft "RemoteName","LocalName" -A ```
57,946,590
I am trying to add google sign-in feature to my app. This is working fine with an android emulator but I am running the app in the real device it is not working. The problem is after the sign-in process google redirect to its own home page instead to app. The step I follow. Function I use to open google sign in page ``` const result = await Google.logInAsync({ androidStandaloneAppClientId: '131814552849-bi76mebb3eq5jsdergerdfh6werjd8udpen43.apps.googleusercontent.com', scopes: ['profile', 'email'], behavior: 'web }); ``` app.json I used Google Certificate Hash (SHA-1) in certificateHash ``` "android": { "package": "com.abc.mycompnay", "permissions": ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"], "config": { "googleSignIn": { "apiKey": "AIzaSyB6qp9VXGXrtwuihvna40F57xABKXJfEQ", "certificateHash": "29FD8B159A28F2F48ED3283548NEBFC957F6821D" } } } ``` > > google console setting > > > [![enter image description here](https://i.stack.imgur.com/1mpsC.png)](https://i.stack.imgur.com/1mpsC.png) **Client key** [![enter image description here](https://i.stack.imgur.com/J5HRd.png)](https://i.stack.imgur.com/J5HRd.png) After sign in its end up with its own home page [![enter image description here](https://i.stack.imgur.com/GQ9Op.png)](https://i.stack.imgur.com/GQ9Op.png)
2019/09/15
[ "https://Stackoverflow.com/questions/57946590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917268/" ]
I managed to fix it. Below is what I did: I pass the redirectUrl in the config. ``` import * as AppAuth from 'expo-app-auth'; const result = await Google.logInAsync({ androidStandaloneAppClientId: 'myKey', iosStandaloneAppClientId: 'myKey', scopes: ['profile', 'email'], behavior: 'web', redirectUrl: `${AppAuth.OAuthRedirect}:/oauthredirect` }); ```
Open the Gradle build file and change the redirect scheme: ``` android { defaultConfig { manifestPlaceholders = [ appAuthRedirectScheme: 'com.example.yourpackagename' ] } } ```
57,946,590
I am trying to add google sign-in feature to my app. This is working fine with an android emulator but I am running the app in the real device it is not working. The problem is after the sign-in process google redirect to its own home page instead to app. The step I follow. Function I use to open google sign in page ``` const result = await Google.logInAsync({ androidStandaloneAppClientId: '131814552849-bi76mebb3eq5jsdergerdfh6werjd8udpen43.apps.googleusercontent.com', scopes: ['profile', 'email'], behavior: 'web }); ``` app.json I used Google Certificate Hash (SHA-1) in certificateHash ``` "android": { "package": "com.abc.mycompnay", "permissions": ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"], "config": { "googleSignIn": { "apiKey": "AIzaSyB6qp9VXGXrtwuihvna40F57xABKXJfEQ", "certificateHash": "29FD8B159A28F2F48ED3283548NEBFC957F6821D" } } } ``` > > google console setting > > > [![enter image description here](https://i.stack.imgur.com/1mpsC.png)](https://i.stack.imgur.com/1mpsC.png) **Client key** [![enter image description here](https://i.stack.imgur.com/J5HRd.png)](https://i.stack.imgur.com/J5HRd.png) After sign in its end up with its own home page [![enter image description here](https://i.stack.imgur.com/GQ9Op.png)](https://i.stack.imgur.com/GQ9Op.png)
2019/09/15
[ "https://Stackoverflow.com/questions/57946590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917268/" ]
I managed to fix it. Below is what I did: I pass the redirectUrl in the config. ``` import * as AppAuth from 'expo-app-auth'; const result = await Google.logInAsync({ androidStandaloneAppClientId: 'myKey', iosStandaloneAppClientId: 'myKey', scopes: ['profile', 'email'], behavior: 'web', redirectUrl: `${AppAuth.OAuthRedirect}:/oauthredirect` }); ```
Okay, I'll put this here since this cost me a ton of time. If you happen to test it with an Android device: make sure you have selected Chrome as the default browser. Other browsers might not redirect you back correctly!
57,946,590
I am trying to add google sign-in feature to my app. This is working fine with an android emulator but I am running the app in the real device it is not working. The problem is after the sign-in process google redirect to its own home page instead to app. The step I follow. Function I use to open google sign in page ``` const result = await Google.logInAsync({ androidStandaloneAppClientId: '131814552849-bi76mebb3eq5jsdergerdfh6werjd8udpen43.apps.googleusercontent.com', scopes: ['profile', 'email'], behavior: 'web }); ``` app.json I used Google Certificate Hash (SHA-1) in certificateHash ``` "android": { "package": "com.abc.mycompnay", "permissions": ["READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"], "config": { "googleSignIn": { "apiKey": "AIzaSyB6qp9VXGXrtwuihvna40F57xABKXJfEQ", "certificateHash": "29FD8B159A28F2F48ED3283548NEBFC957F6821D" } } } ``` > > google console setting > > > [![enter image description here](https://i.stack.imgur.com/1mpsC.png)](https://i.stack.imgur.com/1mpsC.png) **Client key** [![enter image description here](https://i.stack.imgur.com/J5HRd.png)](https://i.stack.imgur.com/J5HRd.png) After sign in its end up with its own home page [![enter image description here](https://i.stack.imgur.com/GQ9Op.png)](https://i.stack.imgur.com/GQ9Op.png)
2019/09/15
[ "https://Stackoverflow.com/questions/57946590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917268/" ]
I managed to fix it. Below is what I did: I pass the redirectUrl in the config. ``` import * as AppAuth from 'expo-app-auth'; const result = await Google.logInAsync({ androidStandaloneAppClientId: 'myKey', iosStandaloneAppClientId: 'myKey', scopes: ['profile', 'email'], behavior: 'web', redirectUrl: `${AppAuth.OAuthRedirect}:/oauthredirect` }); ```
In app.json, the package name has to be all lowercase letters, e.g. `com.app.cloneapp`.
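For illustration only (a hedged sketch of my own, not from the original answer; `com.app.cloneapp` is just a placeholder), the relevant fragment of app.json would look like:

```json
{
  "android": {
    "package": "com.app.cloneapp"
  }
}
```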
72,194,499
I've got a generic task in my Gradle build that copies some configuration files to be included in the build, but aren't required for compiling or anything else (they're used at runtime). Basically: ``` val copyConfiguration by tasks.registering(Copy::class) { from("${projectDir}/configuration") into("${buildDir}/") } ``` This however leads to an issue in every other task as I now get the Gradle warning about how the tasks use this output without declaring an explicit or implicit dependency ``` Execution optimizations have been disabled for task ':jacocoTestCoverageVerification' to ensure correctness due to the following reasons: - Gradle detected a problem with the following location: '...'. Reason: Task ':jacocoTestCoverageVerification' uses this output of task ':copyConfiguration' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.4.1/userguide/validation_problems.html#implicit_dependency for more details about this problem. ``` Now this is only a warning, and the build succeeds, and my service starts up and runs fine. But it does clog my output making it harder to find the line where something went wrong and is in general an eyesore. I'd like to somehow remove that warning. I saw (from the wiki) that the general solution for this is to write an explicit dependency in the task definition, but since this is happening for every task (from compile, to test, to ktlint, to jacoco, etc.) I don't really want to do that. Is there an alternative, like an anti-dependency, wherein I can tell Gradle that it shouldn't care about the output of the `:copyConfiguration` task?
2022/05/11
[ "https://Stackoverflow.com/questions/72194499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7041558/" ]
Given (emphasis mine to show what to look for) > > Execution optimizations have been disabled for task 'spotlessJava' to ensure correctness due to the following reasons: > > > * Gradle detected a problem with the following location: '...\build\generated\source\proto\main\grpc'. Reason: Task **'spotlessJava'** uses this output of task **'generateProto'** without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to <https://docs.gradle.org/7.5.1/userguide/validation_problems.html#implicit_dependency> for more details about this problem. > > > Add the following to `build.gradle` ``` tasks.named("spotlessJava").configure { dependsOn("generateProto") } ```
I had a similar issue and, funnily enough, it also started with a task related to Jacoco. I documented a solution here: <https://discuss.gradle.org/t/task-a-uses-this-output-of-task-b-without-declaring-an-explicit-or-implicit-dependency/42896> In short, what worked for me was to reference the problematic location through the producing task's properties, e.g. getOutputs, so that Gradle can infer the dependency itself. Hope this helps.
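To make that concrete, here is a minimal sketch in the Gradle Kotlin DSL (my own illustration, not from the linked post), assuming the `copyConfiguration` task from the question and the `jacocoTestCoverageVerification` task named in the warning:

```kotlin
// Hedged sketch: wire the flagged consumer to the producing task.
tasks.named("jacocoTestCoverageVerification") {
    // Either declare the dependency explicitly...
    dependsOn(tasks.named("copyConfiguration"))
    // ...or declare the copied files as an input: using a task as a file
    // collection resolves to its outputs and implies the same ordering.
    inputs.files(tasks.named("copyConfiguration"))
}
```

Either of the two lines is enough on its own; both are shown only to illustrate the two styles.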
45,174,184
I'm a little new to python 2.7 and I was wondering if there was a way I could search within a folder (and all its subfolders, PDFs, and Word docs) for a certain word. I need to compile all PDF and Word files that contain a certain keyword into a new folder so I thought python might be the best way to do this instead of manually going through each file and searching for the word. Any thoughts?
2017/07/18
[ "https://Stackoverflow.com/questions/45174184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8327139/" ]
It should be obvious that, without any information on the function at hand, the poor man's approach is optimal (in some probabilistic sense): the roots of general functions are spread uniformly and independently of each other, so unequal steps, possibly based on the function values, would be a waste of time. You are in a better position when you can exploit some property of the function. For instance, if you have a bound on the derivative in an interval, then for suitable values of the function at the endpoints you can show that no root can be present.
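As a hedged illustration of both ideas (this is my own sketch, not from the original answer; `f`, the interval, and the derivative bound are placeholders), in Python:

```python
import numpy as np

def bracket_roots(f, a, b, n=1000):
    """Poor man's scan: return the subintervals of [a, b] where f changes sign.
    Assumes f accepts a NumPy array."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    sign_change = y[:-1] * y[1:] <= 0
    return list(zip(x[:-1][sign_change], x[1:][sign_change]))

def root_possible(fa, fb, h, deriv_bound):
    """Exclusion test: if |f'| <= deriv_bound on an interval of length h and
    |f| at either endpoint exceeds deriv_bound * h, no root can lie inside."""
    return abs(fa) <= deriv_bound * h and abs(fb) <= deriv_bound * h
```

For example, `bracket_roots(np.cos, 0, 10)` brackets the three roots of the cosine on that interval, and `root_possible` lets you discard subintervals without evaluating `f` any further there.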
I do not think there is a magical method that would find **all roots** of a general equation. Your "poor man's approach" is not too bad to start with. I would use a product instead of `|data| < eps`. For example, ``` dp = data[1:] * data[:-1] indices = np.where(dp <= 0) ``` would provide the locations of the "suspicious" intervals. Then you can run a better method, providing the center coordinate of every such suspicious interval as the initial guess. A more sophisticated method could perhaps adapt to the slope and adjust the function sampling instead of using the constant step you get with `linspace()`.
3,041,922
I have a small question regarding Rails. I have a search controller which searches for a name in the database; if it is found, it shows the details about it, otherwise I redirect to the new-name page. Is there any way, after the redirection, for the searched-for name to automatically appear in the new form page? Thanks in advance.
2010/06/15
[ "https://Stackoverflow.com/questions/3041922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/357010/" ]
You can add the imports to `$HOME/.groovy/groovysh.rc`
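For illustration (a hedged example of my own, not from the answer): the file is plain Groovy that groovysh evaluates when it enters interactive mode, so the imports can go in verbatim.

```groovy
// Hypothetical contents of $HOME/.groovy/groovysh.rc
import groovy.json.JsonSlurper
import java.time.LocalDate

println "groovysh.rc loaded"
```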
From <http://groovy.codehaus.org/Groovy+Shell>: This script, if it exists, is loaded when the shell starts up: ``` $HOME/.groovy/groovysh.profile ``` This script, if it exists, is loaded when the shell enters interactive mode: ``` $HOME/.groovy/groovysh.rc ``` Edit-line history is stored in this file: ``` $HOME/.groovy/groovysh.history ```
17,809,819
I have a python class in PyCharm containing an overriding method and want to see its documentation as quickly as possible. How can I do it?
2013/07/23
[ "https://Stackoverflow.com/questions/17809819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2510374/" ]
I didn't check your code but I always use the following snippet and it works till now. **JSONParser.java** ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } ``` And call it like: ``` JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); ``` Sometimes json starts with an array node instead of jSON Object node. In those case, you have to return an `JSONArray` instead of `JSONObject`
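To illustrate that last point, here is a hedged sketch of my own (not part of the original answer): a companion method for the same parser class that handles a response whose top-level node is a JSON array. It assumes `import org.json.JSONArray;` is added to the imports above.

```java
// Sketch only: parse a raw response string whose top-level node is an array.
public JSONArray parseArray(String rawJson) {
    try {
        return new JSONArray(rawJson);
    } catch (JSONException e) {
        Log.e("JSON Parser", "Error parsing data " + e.toString());
        return null;
    }
}
```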
From my point of view, you must call `close()` on the `InputStream` and the `reader` before returning the response, as in: ``` stream.close(); reader.close(); return sb.toString(); ``` It would be better if you specified what kind of error you are getting while running the above piece of code, to help analyse the issue. Thanks!
17,809,819
I have a python class in PyCharm containing an overriding method and want to see its documentation as quickly as possible. How can I do it?
2013/07/23
[ "https://Stackoverflow.com/questions/17809819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2510374/" ]
I didn't check your code but I always use the following snippet and it works till now. **JSONParser.java** ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } ``` And call it like: ``` JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); ``` Sometimes json starts with an array node instead of jSON Object node. In those case, you have to return an `JSONArray` instead of `JSONObject`
Try the following:- ``` static InputStream is = null; try { DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params, "UTF-8")); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); }} catch (Exception e) { Log.e("Buffer Error", "Error Converting Result" + e.toString()); } ```
17,809,819
I have a python class in PyCharm containing an overriding method and want to see its documentation as quickly as possible. How can I do it?
2013/07/23
[ "https://Stackoverflow.com/questions/17809819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2510374/" ]
I didn't check your code but I always use the following snippet and it works till now. **JSONParser.java** ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } ``` And call it like: ``` JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); ``` Sometimes json starts with an array node instead of jSON Object node. In those case, you have to return an `JSONArray` instead of `JSONObject`
Found the problem! I was getting back an array and had to use JSONArray rather than JSONObject
50,249,002
After several problems, I decided to purge Docker in order to reinstall it later. Here are the steps I took to purge all the packages related to Docker: ``` - dpkg -l | grep -i docker - sudo apt-get purge docker-engine docker docker-compose - sudo apt-get autoremove --purge docker docker-compose docker-engin ``` I even deleted the folder which contains the Docker files and containers, `/var/lib/docker`. But the docker version is still displayed after everything I did: ``` docker -v Docker version 17.06.2-ce, build a04f55b ```
2018/05/09
[ "https://Stackoverflow.com/questions/50249002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3652210/" ]
**EDIT :** This solution is for systems using Debian packages (Debian, Ubuntu, Mint, ...). You saw that the docker binary is still present in your system. You can locate it using the `whereis` command : ``` # whereis docker docker: /usr/bin/docker /usr/lib/docker /etc/docker /usr/share/man/man1/docker.1.gz ``` Now that the binary is located (it's `/usr/bin/docker` in the example) you can use the `dpkg -S <location>` to look for its package. See [related post](https://askubuntu.com/questions/481/how-do-i-find-the-package-that-provides-a-file). ``` # dpkg -S /usr/bin/docker docker-ce: /usr/bin/docker ``` And then you can get rid of the package (here `docker-ce`) using your usual tools (`apt-get purge`, or `dpkg -r` if the package was not installed through a repository).
That version number looks like the last release of the snap package. If you installed via snap, then the uninstall uses the same tool: ``` sudo snap remove docker ```
34,398,588
hello i try to create an object named 'gerant' ``` class gerant { public double CIN_GERANT, NUM_TEL_GERANT, MOBILE_GERANT; public string NOM_GERANT, PRENOM_GERANT, ADRESSE__GERANT, MAIL_GERANT, VILLE_GERANT; public int CP_GERANT; public DateTime DATE_GERANT; public gerant(double _Cin_gerant, string _Nom_Gerant, string _Prenom_Gerant, string _Adresse_Gerant, double _Num_Tel_Gerant, string _Mail_Gerant, double _Mobile_Gerant, int _cp_gerant, string _ville_gerant, DateTime _date_gerant) { this.CIN_GERANT = _Cin_gerant; this.NOM_GERANT = _Nom_Gerant; this.PRENOM_GERANT = _Prenom_Gerant; this.ADRESSE__GERANT = _Adresse_Gerant; this.NUM_TEL_GERANT = _Num_Tel_Gerant; this.MAIL_GERANT = _Mail_Gerant; this.MOBILE_GERANT = _Mobile_Gerant; this.CP_GERANT = _cp_gerant; this.VILLE_GERANT = _ville_gerant; this.DATE_GERANT = _date_gerant; } public gerant getinfogerant() { gerant gerer = null; string sql_gerant = "select CIN,NOM,PRENOM,ADRESS_PERSONNEL,NUM_TEL,MAIL,MOBILE,CP_GERANT,VILLE_GERANT,DATE_CIN from GERANT"; connexion connect = new connexion(); OleDbConnection connection = connect.getconnexion(); // try //{ connection.Open(); OleDbCommand cmd = new OleDbCommand(sql_gerant, connection); OleDbDataReader reader = cmd.ExecuteReader(); if (reader.Read()) { gerer = new gerant(reader.GetDouble(0), reader.GetString(1), reader.GetString(2), reader.GetString(3), reader.GetDouble(4), reader.GetString(5), reader.GetDouble(6), reader.GetInt32(7), reader.GetString(8), reader.GetDateTime(9) ); } connection.Close(); return gerer; } } ``` but when i try to fill my combobox with gerant i try to insert this code ``` foreach(Modele.gerant ligne in liste_gerant) { } ``` but i make this error for me foreach statement cannot operate on variables of type 'gerant' because 'gerant' does not contain a public definition for 'GetEnumerator' how can i resolve that?
2015/12/21
[ "https://Stackoverflow.com/questions/34398588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5441811/" ]
There is no way to do it using the standard DbMigration methods. The best way is to wrap a probe query such as "select fieldToCheck from myTable where 1=2" in a try/catch, then add the field if required (in the catch). The other way is to write a custom migration generator that extends the migration generator (i.e. adding an AddColumnIfNotExists method). You can have a look here to see how to do it: <http://romiller.com/2013/02/27/ef6-writing-your-own-code-first-migration-operations/>
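A minimal sketch of that probe-query idea (my own hedged illustration, not from the answer or the linked post; the table, column, and connection-string names are placeholders):

```csharp
using System.Data.SqlClient;

static class SchemaProbe
{
    // Returns true when the probe query succeeds, i.e. the column already exists.
    public static bool ColumnExists(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "select fieldToCheck from myTable where 1=2", connection))
        {
            connection.Open();
            try
            {
                command.ExecuteReader().Dispose();
                return true;
            }
            catch (SqlException)
            {
                return false;   // e.g. "Invalid column name" -> add the column
            }
        }
    }
}
```

The calling code would then only issue the `ALTER TABLE ... ADD` (or the migration's `AddColumn`) when this returns false.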
Basic example with sql: ``` // add colun if not exists migrationBuilder.Sql( @"IF COL_LENGTH('schemaName.TableName', 'ColumnName') IS NULL ALTER TABLE[TableName] ADD[ColumnName] int NULL GO "); ```
34,398,588
hello i try to create an object named 'gerant' ``` class gerant { public double CIN_GERANT, NUM_TEL_GERANT, MOBILE_GERANT; public string NOM_GERANT, PRENOM_GERANT, ADRESSE__GERANT, MAIL_GERANT, VILLE_GERANT; public int CP_GERANT; public DateTime DATE_GERANT; public gerant(double _Cin_gerant, string _Nom_Gerant, string _Prenom_Gerant, string _Adresse_Gerant, double _Num_Tel_Gerant, string _Mail_Gerant, double _Mobile_Gerant, int _cp_gerant, string _ville_gerant, DateTime _date_gerant) { this.CIN_GERANT = _Cin_gerant; this.NOM_GERANT = _Nom_Gerant; this.PRENOM_GERANT = _Prenom_Gerant; this.ADRESSE__GERANT = _Adresse_Gerant; this.NUM_TEL_GERANT = _Num_Tel_Gerant; this.MAIL_GERANT = _Mail_Gerant; this.MOBILE_GERANT = _Mobile_Gerant; this.CP_GERANT = _cp_gerant; this.VILLE_GERANT = _ville_gerant; this.DATE_GERANT = _date_gerant; } public gerant getinfogerant() { gerant gerer = null; string sql_gerant = "select CIN,NOM,PRENOM,ADRESS_PERSONNEL,NUM_TEL,MAIL,MOBILE,CP_GERANT,VILLE_GERANT,DATE_CIN from GERANT"; connexion connect = new connexion(); OleDbConnection connection = connect.getconnexion(); // try //{ connection.Open(); OleDbCommand cmd = new OleDbCommand(sql_gerant, connection); OleDbDataReader reader = cmd.ExecuteReader(); if (reader.Read()) { gerer = new gerant(reader.GetDouble(0), reader.GetString(1), reader.GetString(2), reader.GetString(3), reader.GetDouble(4), reader.GetString(5), reader.GetDouble(6), reader.GetInt32(7), reader.GetString(8), reader.GetDateTime(9) ); } connection.Close(); return gerer; } } ``` but when i try to fill my combobox with gerant i try to insert this code ``` foreach(Modele.gerant ligne in liste_gerant) { } ``` but i make this error for me foreach statement cannot operate on variables of type 'gerant' because 'gerant' does not contain a public definition for 'GetEnumerator' how can i resolve that?
2015/12/21
[ "https://Stackoverflow.com/questions/34398588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5441811/" ]
I have worked on creating a custom migration method, AddColumnIfNotExists You need a custom MigrationOperation class: ``` public class AddColumnIfNotExistsOperation : MigrationOperation { public readonly string Table; public readonly string Name; public readonly ColumnModel ColumnModel; public AddColumnIfNotExistsOperation(string table, string name, Func<ColumnBuilder, ColumnModel> columnAction, object anonymousArguments) : base(anonymousArguments) { ArgumentValidator.CheckForEmptyArgument(table, nameof(table)); ArgumentValidator.CheckForEmptyArgument(name, nameof(name)); ArgumentValidator.CheckForNullArgument(columnAction, nameof(columnAction)); Table = table; Name = name; ColumnModel = columnAction(new ColumnBuilder()); ColumnModel.Name = name; } public override bool IsDestructiveChange => false; public override MigrationOperation Inverse => new DropColumnOperation(Table, Name, removedAnnotations: ColumnModel.Annotations.ToDictionary(s => s.Key,s => (object)s.Value) , anonymousArguments: null); } ``` You also need a custom SqlGenerator class: ``` public class AddColumnIfNotExistsSqlGenerator : SqlServerMigrationSqlGenerator { protected override void Generate(MigrationOperation migrationOperation) { var operation = migrationOperation as AddColumnIfNotExistsOperation; if (operation == null) return; using (var writer = Writer()) { writer.WriteLine("IF NOT EXISTS(SELECT 1 FROM sys.columns"); writer.WriteLine($"WHERE Name = N'{operation.Name}' AND Object_ID = Object_ID(N'{Name(operation.Table)}'))"); writer.WriteLine("BEGIN"); writer.WriteLine("ALTER TABLE "); writer.WriteLine(Name(operation.Table)); writer.Write(" ADD "); var column = operation.ColumnModel; Generate(column, writer); if (column.IsNullable != null && !column.IsNullable.Value && (column.DefaultValue == null) && (string.IsNullOrWhiteSpace(column.DefaultValueSql)) && !column.IsIdentity && !column.IsTimestamp && !column.StoreType.EqualsIgnoreCase("rowversion") && !column.StoreType.EqualsIgnoreCase("timestamp")) { writer.Write(" DEFAULT "); if (column.Type == PrimitiveTypeKind.DateTime) { writer.Write(Generate(DateTime.Parse("1900-01-01 00:00:00", CultureInfo.InvariantCulture))); } else { writer.Write(Generate((dynamic)column.ClrDefaultValue)); } } writer.WriteLine("END"); Statement(writer); } } } ``` And an Extension Method to give you your "AddColumnIfNotExists" function: ``` public static class MigrationExtensions { public static void AddColumnIfNotExists(this DbMigration migration, string table, string name, Func<ColumnBuilder, ColumnModel> columnAction, object anonymousArguments = null) { ((IDbMigration)migration) .AddOperation(new AddColumnIfNotExistsOperation(table, name, columnAction, anonymousArguments)); } } ``` In your EF Migrations Configuration file, you need to register the custom SQL generator: ``` [ExcludeFromCodeCoverage] internal sealed class Configuration : DbMigrationsConfiguration<YourDbContext> { public Configuration() { AutomaticMigrationsEnabled = false; // Register our custom generator SetSqlGenerator("System.Data.SqlClient", new AddColumnIfNotExistsSqlGenerator()); } } ``` And then you should be able to use it in place of AddColum like this (notice the **this** keyword): ``` [ExcludeFromCodeCoverage] public partial class AddVersionAndChangeActivity : DbMigration { public override void Up() { this.AddColumnIfNotExists("dbo.Action", "VersionId", c => c.Guid(nullable: false)); AlterColumn("dbo.Action", "Activity", c => c.String(nullable: false, maxLength: 8000, unicode: false)); } public override void Down() 
{ AlterColumn("dbo.Action", "Activity", c => c.String(nullable: false, maxLength: 50)); DropColumn("dbo.Action", "VersionId"); } } ``` And of course you want some tests for the operation: ``` [TestClass] public class AddColumnIfNotExistsOperationTests { [TestMethod] public void Can_get_and_set_table_and_column_info() { Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(name: "T"); var addColumnOperation = new AddColumnIfNotExistsOperation("T", "C", action, null); Assert.AreEqual("T", addColumnOperation.Table); Assert.AreEqual("C", addColumnOperation.Name); } [TestMethod] public void Inverse_should_produce_drop_column_operation() { Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(name: "C", annotations: new Dictionary<string, AnnotationValues> { { "A1", new AnnotationValues(null, "V1") } }); var addColumnOperation = new AddColumnIfNotExistsOperation("T", "C", action, null); var dropColumnOperation = (DropColumnOperation)addColumnOperation.Inverse; Assert.AreEqual("C", dropColumnOperation.Name); Assert.AreEqual("T", dropColumnOperation.Table); Assert.AreEqual("V1", ((AnnotationValues)dropColumnOperation.RemovedAnnotations["A1"]).NewValue); Assert.IsNull(((AnnotationValues)dropColumnOperation.RemovedAnnotations["A1"]).OldValue); } [TestMethod] [ExpectedException(typeof(ArgumentNullException))] public void Ctor_should_validate_preconditions_tableName() { Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(name: "T"); // ReSharper disable once ObjectCreationAsStatement new AddColumnIfNotExistsOperation(null, "T", action, null); } [TestMethod] [ExpectedException(typeof(ArgumentNullException))] public void Ctor_should_validate_preconditions_columnName() { Func<ColumnBuilder, ColumnModel> action = c => c.Decimal(); // ReSharper disable once ObjectCreationAsStatement new AddColumnIfNotExistsOperation("T", null, action, null); } [TestMethod] [ExpectedException(typeof(ArgumentNullException))] public void Ctor_should_validate_preconditions_columnAction() { // ReSharper disable once ObjectCreationAsStatement new AddColumnIfNotExistsOperation("T", "C", null, null); } } ``` And tests for the SQL Generator: ``` [TestClass] public class AddColumnIfNotExistsSqlGeneratorTests { [TestMethod] public void AddColumnIfNotExistsSqlGenerator_Generate_can_output_add_column_statement_for_GUID_and_uses_newid() { var migrationSqlGenerator = new AddColumnIfNotExistsSqlGenerator(); Func<ColumnBuilder, ColumnModel> action = c => c.Guid(nullable: false, identity: true, name: "Bar"); var addColumnOperation = new AddColumnIfNotExistsOperation("Foo", "bar", action, null); var sql = string.Join(Environment.NewLine, migrationSqlGenerator.Generate(new[] {addColumnOperation}, "2005") .Select(s => s.Sql)); Assert.IsTrue(sql.Contains("IF NOT EXISTS(SELECT 1 FROM sys.columns")); Assert.IsTrue(sql.Contains("WHERE Name = N\'bar\' AND Object_ID = Object_ID(N\'[Foo]\'))")); Assert.IsTrue(sql.Contains("BEGIN")); Assert.IsTrue(sql.Contains("ALTER TABLE")); Assert.IsTrue(sql.Contains("[Foo]")); Assert.IsTrue(sql.Contains("ADD [bar] [uniqueidentifier] NOT NULL DEFAULT newsequentialid()END")); } } ```
Basic example with SQL:

```
// add the column only if it does not already exist
migrationBuilder.Sql(
    @"IF COL_LENGTH('schemaName.TableName', 'ColumnName') IS NULL
          ALTER TABLE [TableName] ADD [ColumnName] int NULL
      GO
");
```
437,800
I've designed a Likert-scale survey with 4 measurements: * x (it has 6 items/questions) * y (it has 9 items/questions) * z (it has 7 items/questions) * w (it has 3 items/questions) How can I rank these measurements? In other words: how can I understand which measurement is more important to the respondents? In what order? for example XYWZ or ZWXY ...?
2019/11/25
[ "https://stats.stackexchange.com/questions/437800", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/266770/" ]
A couple things: > > ...following two methods can be used for feature selection prior to model development > > > Those are actually *part of the model development and should be cross validated*. What most people do is look at the correlations, select only the most correlated with the output, and then move on to do cross validation etc. That's wrong for two reasons. 1. Correlation measures strength of a linear relationship. If the effect of the variable is non linear, your correlation may not pick up on this. Here is a concrete example. When I worked for a marketing company, most of the customers we dealt with were in their late 30s to early 40s. They spent the most money, and people younger and older spent less because young people typically didn't have as much money or interest in our products. So the effect of age kind of looked like a concave function. If you simulate something like `x = rnorm(1000); y = -0.2*x^2 + rnorm(1000, 0, 0.5)` (here x has a concave relationship to y) the correlation is low even though x can explain 75% of the variation observed in y. If you removed features based on their correlation, surely you would not select this very important feature. 2. Had you had different training data, you might have picked different features. So when you fit models in the cross validation step, you need to repeat the selection of features based on the correlations. Same thing with the lasso. In every cross validation refit, you need to fit the lasso, select the features, then refit a model with the selected features. > > Which of the above two methods is preferred? > > > I don't think either are very good to be honest. Correlation is myopic for the reasons I've described. Lasso is better, but there is no reason to think the features it selects are the "best" features, nor is there a reason to think that the features it selects would be selected had you had different data. Here is a code example to demonstrate that ``` library(tidyverse) library(glmnet) S = 0 N = 1000 p = 100 mu = rep(0, p) betas = rnorm(p, 2, 2)*rbinom(p, 1, 0.10) while(max(abs(S))<0.9){ S = rethinking::rlkjcorr(1, p) } do_glmnet<-function(){ X = MASS::mvrnorm(N, mu, S) y = X %*% betas + rnorm(N, 0, 2.5) cvmodel = cv.glmnet(X,y, alpha = 1) model = glmnet(X, y, alpha = 1) coef(model, cvmodel$lambda.1se) %>% as.matrix() %>% t() %>% as_tibble() } results = map_df(1:100, ~do_glmnet()) results %>% summarise_all(~mean(abs(.)>0) ) ``` In that example, I generate data from a sparse linear model. Some variables are selected every time (those are the variables with real effects) but you can see that some variables with 0 effect are sometimes selected and sometimes not selected. The absolute best way to select features is to use your knowledge about the data generating process to determine what is important and what is not. If you can't do that, use lasso to trade off variance for a but of bias, but don't select out features. Just keep the entire fit in the model. I saw Trevor Hastie speak at a zoom talk the other day and he showed us an example of which LASSO with all features performed better than selecting features with LASSO and then refitting the full model. I can't say that is the case for every problem, but it was pretty compelling evidence. Let me see if I can find a link to the talk. That being said, I'm open to seeing numerical experiments that show that selection via glmnet does better than just putting everything into glmnet and not selecting. That just hasn't been the story I've seen.
I think by saying correlation you are referring to SIS, developed by Jianqing Fan and Jinchi Lv. Actually, the logic behind the two methods is different. LASSO does the selection by using a penalized loss function and sparsity of the variables is required. Normally, for ultra-high dimensional data, we perform SIS first and reduce the dimension to a relatively small amount, and then perform LASSO to further reduce the number of variables that enter the final model.
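A rough sketch of that two-stage screen-then-select workflow with `glmnet` (X is an n×p numeric matrix, y the response; the screening size d follows the usual n/log(n) rule of thumb — all names and defaults here are illustrative, not the original SIS code):

```
library(glmnet)

sis_then_lasso <- function(X, y, d = floor(nrow(X) / log(nrow(X)))) {
  # Stage 1 (SIS): keep the d variables with the largest marginal correlation
  marginal <- abs(as.numeric(cor(X, y)))
  keep <- order(marginal, decreasing = TRUE)[seq_len(d)]

  # Stage 2 (LASSO): penalised fit on the screened subset, lambda by cross-validation
  cvfit <- cv.glmnet(X[, keep, drop = FALSE], y, alpha = 1)
  betas <- as.numeric(coef(cvfit, s = "lambda.1se"))[-1]

  keep[betas != 0]   # indices of the finally selected variables
}
```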
22,039,373
I'm using the xsd 3.3.0 compiler to turn an XSD (XML's best friend) file into C++ classes (see the last web link). The command is

> xsd cxx-tree (options) file.xsd

(more info: <http://www.codesynthesis.com/projects/xsd/documentation/cxx/tree/guide/>) I've seen some examples provided by CodeSynthesis where they parse a hello.xsd document and create a .hxx and a .cxx file very easily. The .hxx has a method to open an XML document, creating an object in which you can find the different parts of the XML, check it, etc. The .hxx contains code like this:

```
// Parse a URI or a local file.
//
::std::auto_ptr< ::hello_t >
hello (const ::std::string& uri,
       ::xml_schema::flags f = 0,
       const ::xml_schema::properties& p = ::xml_schema::properties ());
```

It receives a string with the file name

> string& uri = "hello.xsd"

and creates the object that you use in main.cxx. So I'm trying to do the same with my XSD file. I use the xsd cxx-tree compiler, but it doesn't create the methods to "Parse a URI or a local file." Then I can't create an object from an XML file in my main program. I solved some compile problems using different options from the CodeSynthesis compiler documentation (<http://www.codesynthesis.com/projects/xsd/documentation/xsd.xhtml>). There are different options for what you want to compile, how you want to do it, etc., but I can't find any option to enable the methods used to "Parse a URI or a local file." To give more information, the XML/XSD documents are CBML protocol documents. Thank you for your help!
2014/02/26
[ "https://Stackoverflow.com/questions/22039373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3355390/" ]
I found the solution myself! I had used different options when compiling. I had passed the option "--root-element library", and that prevented the methods to "Parse a URI or a local file." from being created. I removed that option and added "--root-element-all", which creates parse methods for all principal (root) objects! Now my program works! Thanks!
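For reference, the difference between the two invocations looks like this (the file name is a placeholder; only the root-element flags matter here):

```
# generates a parsing function only for the root element named "library"
xsd cxx-tree --root-element library file.xsd

# generates parsing functions for every global element, so the
# "Parse a URI or a local file" overloads appear for each of them
xsd cxx-tree --root-element-all file.xsd
```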
I use this product all the time. Here is an example of what you're trying to do (I think): `xsd cxx-tree hello.xsd` Which generates the `hello.hxx` and the `hello.cxx`, as you've said. I think where you're falling short is understanding how to use these files to load an XML file (e.g., loading a "local file"). I like to explicitly tell the software where to find the XSD schema. The following code will not compile as-is, but I've included it for reference.

```
void load(const std::string &xml, const std::string &xsd)
{
    xml_schema::properties props;
    props.no_namespace_schema_location(xsd);
    // or, if you have a specific namespace you want to load, use
    // props.schema_location("which namespace", xsd);

    try
    {
        std::auto_ptr< ::hello_t> xml_file = hello(xml, 0, props);

        // operate on the xml_file via "->" notation
        std::cout << xml_file->hello() << std::endl;
    }
    catch (const ::xml_schema::exception &e)
    {
        std::ostringstream os;
        os << e.what();
        // report the error as you see fit, use os.str() to get the string
    }
}
```

Make sure to link the `*.cxx` file to your `*.cpp` files. If this is not what you wanted, let me know in the comments and I can try to help you some more.
33,156,051
It seems that despite the fact we're not using transactions at all, we get random deadlock errors from SQL Azure. Are there non-transactional situations in which SQL Azure can get into a deadlock? It seems that when we run a batch of UPDATE queries it acts as if the batch were one big transaction. All the updates are by id and each updates a single row.
2015/10/15
[ "https://Stackoverflow.com/questions/33156051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/593425/" ]
There is no such thing as "not using transactions". There's always a transaction, whether you start one explicitly or not. Read [Tracking down Deadlocks in SQL Database](http://blogs.msdn.com/b/sqldatabasetalk/archive/2013/05/01/tracking-down-deadlocks-in-sql-database.aspx) for how to obtain the deadlock graph in SQL Azure. Connect to `master` and run:

```
SELECT * FROM sys.event_log
WHERE database_name like '<your db name>'
AND event_type = 'deadlock';
```

Then analyze the deadlock graph to understand the cause. Most likely you're doing scans because of missing indexes.
When you have concurrent transactions running (either implicit or explicit) you can encounter deadlocks. When you say you are not using transactions, that probably means your transactions are implicit.
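To make that concrete, here is a minimal T-SQL sketch (table and column names are placeholders): in autocommit mode every standalone statement runs in its own implicit transaction, whereas an explicit transaction groups statements together.

```
-- Autocommit: each UPDATE is its own (implicit) transaction.
UPDATE Prices SET Amount = 1.99 WHERE Id = 42;

-- Explicit: both UPDATEs commit or roll back as a unit,
-- and their locks are held until COMMIT.
BEGIN TRANSACTION;
UPDATE Prices SET Amount = 1.99 WHERE Id = 42;
UPDATE Prices SET Amount = 2.49 WHERE Id = 43;
COMMIT TRANSACTION;
```

Either way, two sessions touching overlapping rows in opposite order can still deadlock, which is why the deadlock graph from `sys.event_log` is the place to look.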
25,703,878
I have a price database that stores numbers as floating point. These are presented on a website. Prices can be in the formats

```
x.x (e.g. 1.4)
x.xx (e.g. 1.99)
x.xxx (e.g. 1.299) <-- new price format
```

I used to use the string format or `%.2f` to standardize the prices to two decimal places, but now I need to show 3 as well, though only if the price is 3 decimal places long.

```
e.g. 1.4 would display 1.40
1.45 would display 1.45
1.445 would display 1.445
```

The above formats would be the desired output for the given input. Using `%.3f` shows all with 3 digits:

```
e.g. 1.4 would display 1.400
1.45 would display 1.450
1.445 would display 1.445
```

But that is not what I want. Does anyone know the best way to do the following: any number should display 2 decimal places if it has 0, 1 or 2 decimal places; if it has 3 or more decimal places it should display 3 decimal places.
2014/09/06
[ "https://Stackoverflow.com/questions/25703878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/333661/" ]
I would just format it to three places, then trim a final 0. ``` $formatted = number_format($value, 3, ".", ""); if (substr($formatted, -1) === "0") $formatted = substr($formatted, 0, -1); ```
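If it helps, here is the same logic wrapped in a small helper, run against the sample prices from the question (the function name is just for illustration):

```
<?php
function formatPrice($value) {
    // Format to three places, then drop a single trailing zero.
    $formatted = number_format($value, 3, ".", "");
    if (substr($formatted, -1) === "0") {
        $formatted = substr($formatted, 0, -1);
    }
    return $formatted;
}

echo formatPrice(1.4), "\n";   // 1.40
echo formatPrice(1.45), "\n";  // 1.45
echo formatPrice(1.445), "\n"; // 1.445
```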
Use this dude ``` number_format($data->price, 0, ',', '.'); ``` <http://php.net/manual/en/function.number-format.php>
25,703,878
I have a price database that stores numbers as floating point. These are presented on a website. Prices can be in the formats

```
x.x (e.g. 1.4)
x.xx (e.g. 1.99)
x.xxx (e.g. 1.299) <-- new price format
```

I used to use the string format or `%.2f` to standardize the prices to two decimal places, but now I need to show 3 as well, though only if the price is 3 decimal places long.

```
e.g. 1.4 would display 1.40
1.45 would display 1.45
1.445 would display 1.445
```

The above formats would be the desired output for the given input. Using `%.3f` shows all with 3 digits:

```
e.g. 1.4 would display 1.400
1.45 would display 1.450
1.445 would display 1.445
```

But that is not what I want. Does anyone know the best way to do the following: any number should display 2 decimal places if it has 0, 1 or 2 decimal places; if it has 3 or more decimal places it should display 3 decimal places.
2014/09/06
[ "https://Stackoverflow.com/questions/25703878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/333661/" ]
I would just format it to three places, then trim a final 0. ``` $formatted = number_format($value, 3, ".", ""); if (substr($formatted, -1) === "0") $formatted = substr($formatted, 0, -1); ```
Here is what I did, due to the need to cope with some special cases I had in the app:

1. Count the number of decimal places (`$price` is a float from the database).
2. Format based on that count using a switch statement.
3. For all cases with fewer than 3 decimal places, format with 2 (except zero).
4. For all other cases, format with 3.

```
$decimals = strlen(substr(strrchr($price, "."), 1));
switch ($decimals) {
    case 0: {
        if ($price != 0) {
            $price = number_format($price, 2);
        }
        break;
    }
    case 1: {
        $price = number_format($price, 2);
        break;
    }
    case 2: {
        $price = number_format($price, 2);
        break;
    }
    default: {
        $price = number_format($price, 3); // three decimal places for all other prices
        break;
    }
}
```

Thanks for the help...
37,138
I'm using the standalone GeoWebCache to serve tiles from a remote GeoServer. My problem is that the polygon label is added to each one of the tiles served, instead of only once in the polygon centroid. I found a post which discusses the issue: <http://osgeo-org.1560.n6.nabble.com/polygon-label-repeated-for-each-tile-td4995203.html> The first reply mentioned a possible solution: > > "All in all, I suggest to use a tile rendering engine (GeoWebCache, MapProxy, TileCache) anyway, instead of requesting small image from GeoServer and have the tile rendering engine do the tile slicing afterwards. You will have send fewer requests to GeoServer (1 large image instead of multiple small images), so this speeds up the overall tile cache creation time." > > > Problem is that I couldn't find how to do that by referring to the GeoWebCache documentation, and the above mentioned post doesn't explain the way to implement that. I also found a [post](https://gis.stackexchange.com/questions/29127/labeling-geoserver-sld) with an answer that links to the [GeoWebCache "Tiled" documentation](http://docs.geoserver.org/latest/en/user/services/wms/vendor.html#tiled), but my code allready uses all the necessary attributes and still the label shows up multiple times: ``` var Layer_1874 = new OpenLayers.Layer.WMS( 'Grundkort', '/wms10.ashx' , { format: 'image/png', srs: 'EPSG:25832', layers: 'ballerupkommune_grundkort_bk', tiled: true, tilesOrigin: '698804,6173460' } , { displayInLayerSwitcher: true, isBaseLayer: true, transitionEffect: 'resize', displayOutsideMaxExtent: true, visibility: false } ); ``` Anyone has an idea?
2012/10/19
[ "https://gis.stackexchange.com/questions/37138", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/12096/" ]
Below is an example of an SLD rule that places a label at the center of a feature's geometry. This uses the ogc:Function called "centroid" to place the label. You can read more about SLD functions in the GeoServer [docs](http://docs.geoserver.org/latest/en/user/filter/function_reference.html), and some examples are given [here](http://docs.geoserver.org/latest/en/user/filter/function.html). ``` <sld:Rule> <MaxScaleDenominator>5000</MaxScaleDenominator> <sld:TextSymbolizer> <sld:Geometry> <ogc:Function name="centroid"> <ogc:PropertyName>the_geom</ogc:PropertyName> </ogc:Function> </sld:Geometry> <sld:Label> <ogc:PropertyName>LOT_NAME</ogc:PropertyName> </sld:Label> <sld:Font> <sld:CssParameter name="font-family">Arial</sld:CssParameter> <sld:CssParameter name="font-size">11</sld:CssParameter> <sld:CssParameter name="font-style">normal</sld:CssParameter> <sld:CssParameter name="font-weight">bold</sld:CssParameter> </sld:Font> <sld:LabelPlacement> <sld:PointPlacement> <sld:AnchorPoint> <sld:AnchorPointX> <ogc:Literal>0.0</ogc:Literal> </sld:AnchorPointX> <sld:AnchorPointY> <ogc:Literal>0.5</ogc:Literal> </sld:AnchorPointY> </sld:AnchorPoint> <sld:Rotation> <ogc:Literal>0</ogc:Literal> </sld:Rotation> </sld:PointPlacement> </sld:LabelPlacement> <sld:Halo> <sld:Radius> <ogc:Literal>1.0</ogc:Literal> </sld:Radius> <sld:Fill> <sld:CssParameter name="fill">#FFFFFF</sld:CssParameter> </sld:Fill> </sld:Halo> <sld:VendorOption name="conflictResolution">true</sld:VendorOption> <sld:VendorOption name="goodnessOfFit">0</sld:VendorOption> <sld:VendorOption name="autoWrap">60</sld:VendorOption> </sld:TextSymbolizer> </sld:Rule> ``` Also, the [SLD Cookbook](http://blog.geoserver.org/2010/04/09/sld-cookbook/) is a great reference. One thing that can trip you up is the ordering of tags in the SLD. For the TextSymbolizer rule above you can see required order by looking in the schema definition. Don't worry, it's not too scary! Just search for "textsymbolizer" in that .xsd file, an you should easily find the "sequence" tag. There you'll find that the element references match up with the order in my example. (Note: I didn't use the text symbolizer's "fill" attribute, my fill just applies to the halo.)
Computing labels with collision resolution (moving labels out of the way or removing lower priority ones so they don't overlap) requires knowing about every label that might collide with the label you are drawing, every label that might collide with them, and so on. So, in general, you either need to compute all the labels at once by looking at every feature, or break the map into blocks with labels computed within each block. By default, GeoWebCache uses a 4x4 block of tiles called a "metatile". When you request a tile that isn't in the cache, GWC will request the entire metatile as one big image from the backend and then slice the metatile into tiles which it caches. You can adjust the metatile factor when setting up a layer. Larger metatiles give better looking labels, but increase the latency of a cache miss. If you aren't using label collision resolution on the back end, you can set the metatiling to 1x1. You can also tell GWC to add a gutter around the metatile which is extra space that will be cut off. It's risky to do this if you have label collision resolution on as a label may be positioned differently or even be removed entirely on the other side of a metatile boundary. If you have labels that are totally fixed in position and never get supressed to avoid collision though, you can use a wide gutter to allow the labels to cross tile boundaries. This will have a performance cost as GeoServer will have to render a larger tile. You can set metatiling and gutter on the Tile Layer tab of the layer configuration, or the default that will be used for new layers can be set on the Caching Defaults page. To disable conflict resolution, you can use the [`conflictResolution`](https://docs.geoserver.org/stable/en/user/styling/sld/reference/labeling.html#conflictresolution) vendor option in your styles.
41,588,101
Can anyone help me with adding an additional models.CharField to my Django cities\_light\_region table? This is what I want to implement:

```
class MyRegion(Region):
    state_code = models.CharField(max_length=100, default='XXX', blank=True)

    class Meta:
        proxy = True
```

Error: ?: (models.E017) Proxy model 'MyRegion' contains model fields.
2017/01/11
[ "https://Stackoverflow.com/questions/41588101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3774814/" ]
We can do this with the trick below:

```
class Model(object):
    ''' Skip extra field validation "models.E017" '''
    @classmethod
    def _check_model(cls):
        errors = []
        return errors


class MyRegion(Model, Region):
    state_code = models.CharField(max_length=100, default='XXX', blank=True)

    class Meta:
        proxy = True
```
Well, the error message says it all: a proxy model cannot contain model fields, for the very obvious reason that [a proxy model is a class that uses the table of another model and only adds or overrides behaviour](https://docs.djangoproject.com/en/1.10/topics/db/models/#proxy-models).
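A minimal sketch of the distinction (class and field names are illustrative, not part of cities_light): a proxy may only add behaviour, so an extra column needs either a concrete subclass or a separate related model.

```
from django.db import models
from cities_light.models import Region


class MyRegion(Region):
    """Proxy: same table, extra behaviour only - no new fields allowed."""
    class Meta:
        proxy = True

    def label(self):
        return self.name.upper()


class RegionExtra(models.Model):
    """One common alternative: keep the new column in a small side table."""
    region = models.OneToOneField(Region, on_delete=models.CASCADE, related_name='extra')
    state_code = models.CharField(max_length=100, default='XXX', blank=True)
```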
127,142
I know it is not the best practice, but I'd like to display an image before another one that is referred to earlier. I mean I'd like to cite image 2 before image 1, but display them in the correct order. This is because image 1 is very small and I want to display it at the top (or bottom) of the page, while image 2 is very big and is inserted on its own page. LaTeX correctly displays images in the order I reference them in the text, but I'd like to avoid this, though only for these two images.

> E.g.
>
> Text text text see Image 2. Text text text see Image 1.
>
> Display image 1 on the same page (top or bottom).
>
> New page with image 2.

How can I do this? Thank you!
2013/08/07
[ "https://tex.stackexchange.com/questions/127142", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/32689/" ]
This works. But don't do this. ``` \documentclass{article} \usepackage{graphicx} \begin{document} Text text text see Image~\ref{fig:B}. Text text text see Image~\ref{fig:A}. \begin{figure}[htb] \centering \includegraphics[width=4in]{example-image-a} \caption{Caption here} \label{fig:A} \end{figure} %\clearpage %% uncomment if needed to shipout all floats before this point. \begin{figure}[htb] \centering \includegraphics[width=4in]{example-image-b} \caption{Caption here} \label{fig:B} \end{figure} \end{document} ``` ![enter image description here](https://i.stack.imgur.com/rAxN5.png) For more information on float placement, see [this answer](https://tex.stackexchange.com/a/39020/11232) by Frank.
This can easily be resolved by using the `float` package in conjunction with the parameter `[H]`: ``` \documentclass{article} \usepackage{float} \begin{document} \begin{figure}[H] ...picture code... \label{fig:figA} \end{figure} \begin{figure}[H] ...picture code... \label{fig:figB} \end{figure} \end{document} ``` By using the `[H]` option the figure will appear exactly where you put the code. So you can define the order manually.
4,583,607
Here's a very simple JS task, but I don't know where to begin. In an HTML page, if some text is enclosed by angle brackets, like this:

```
〈some text〉
```

I want the text to be colored (but not the brackets). In normal HTML, I'd code it like this:

```
〈<span class="booktitle">some text</span>〉
```

So, my question is, how do I start to write a JS script that searches for the text and replaces it with span tags? Some basic guidance would be sufficient. Thanks. (I know I need to read the whole HTML, find the matches perhaps using a regex, then replace the page with the new one. But I have no idea how that can be done with JS/DOM. Do I need to traverse every element, get its inner text, and do possible replacements? A short example would be greatly appreciated.)
2011/01/03
[ "https://Stackoverflow.com/questions/4583607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369203/" ]
It depends partially on how cautious you need to be not to disturb event handlers on the elements you're traversing. If it's your page and you're in control of the handlers, you may not need to worry; if you're doing a library or bookmarklet or similar, you need to be *very* careful. For example, consider this markup: ``` <p>And <a href='foo.html'>the 〈foo〉 is 〈bar〉</a>.</p> ``` If you did this: ``` var p = /* ...get a reference to the `p` element... */; p.innerHTML = p.innerHTML.replace(/〈([^〉]*)〉/g, function(whole, c0) { return "〈<span class='booktitle'>" + c0 + "</span>〉"; }); ``` [(live example)](http://jsbin.com/edacu3/2) *(the example uses unicode escapes and HTML numeric entities for 〈 and 〉 rather than the literals above, because JSBin [doesn't like them raw](http://jsbin.com/edacu3/2), presumably an encoding issue)* ...that would *work* and be really easy (as you see), but if there were an event handler on the `a`, it would get blown away (because we're destroying the `a` and recreating it). But if your text is uncomplicated and you're in control of the event handlers on it, that kind of simple solution might be all you need. To be (almost) completely minimal-impact will require walking the DOM tree and only processing text nodes. For that, you'd be using [`Node#childNodes`](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/core.html#ID-1451460987) (for walking through the DOM), [`Node#nodeType`](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/core.html#ID-111237558) (to know what kind of node you're dealing with), [`Node#nodeValue`](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/core.html#ID-F68D080) (to get the text of a text node), [`Node#splitText`](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/core.html#ID-38853C1D) (on the text nodes, to split them in two so you can move one of them into your `span`), and [`Node#appendChild`](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/core.html#ID-184E7107) (to rehome the text node that you need to put in your `span`; don't worry about removing them from their parent, `appendChild` handles that for you). The above are covered by the DOM specification ([v2 here](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/), [v3 here](http://www.w3.org/TR/DOM-Level-3-Core/); most browsers are somewhere between the two; the links in the text above are to the DOM2 spec). You'll want to be careful about this sort of case: ``` <p>The 〈foo <em>and</em> bar〉.</p> ``` ...where the 〈 and the 〉 are in different text nodes (both children of the `p`, on either side of an `em` element); there you'll have to move part of each text node and the whole of the `em` into your `span`, most likely. Hopefully that's enough to get you started.
If the text could be anywhere in the page, you have to traverse through each DOM element, split the text when you found a match using a regex. I have put my code up there on jsfiddle: <http://jsfiddle.net/thai/RjHqe/> **What it does:** It looks at the node you put it in, * If it's an **element**, then it looks into every child nodes of it. * If it's a **text node**, it finds the text enclosed in 〈angle brackets〉. If there is a match (look at the first match only), then it splits the text node into 3 parts: + `left` (the opening bracket and also text before that) + `middle` (the text inside the angle bracket) + `right` (the closing bracket and text after it)the `middle` part is wrapped inside the `<span>` and the `right` part is being looked for more angle brackets.
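For completeness, here is a minimal sketch of that traversal in plain DOM code (not the jsfiddle code itself): it handles one 〈...〉 pair per text node, and brackets split across different elements would need the extra handling described above.

```
// Walk a container, wrap text found between 〈 and 〉 in span.booktitle,
// leaving the brackets themselves outside the span.
function highlightTitles(root) {
  var nodes = Array.prototype.slice.call(root.childNodes); // copy: we mutate the list
  nodes.forEach(function (node) {
    if (node.nodeType === 1) {            // element: recurse into children
      highlightTitles(node);
    } else if (node.nodeType === 3) {     // text node: look for 〈...〉
      var m = /〈([^〉]*)〉/.exec(node.nodeValue);
      if (!m) return;
      var before = node.nodeValue.slice(0, m.index) + "〈";
      var after  = "〉" + node.nodeValue.slice(m.index + m[0].length);
      var span = document.createElement("span");
      span.className = "booktitle";
      span.textContent = m[1];
      var frag = document.createDocumentFragment();
      frag.appendChild(document.createTextNode(before));
      frag.appendChild(span);
      frag.appendChild(document.createTextNode(after));
      node.parentNode.replaceChild(frag, node);
    }
  });
}
```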
140,903
I am having a bit of trouble with my survival data. Basically I am trying to assess whether or not the Cox proportional hazards assumption is met, by plotting the Schoenfeld residuals in R. I have several variables and some of them have 3 or more categories. As far as I know, Schoenfeld residuals are computed for each individual and each variable. So, when I tried to get the residuals for one of these variables with 3 or more categories, I was expecting one residual per individual, but instead I got more. It's my first time using R with survival data, so probably I am doing something wrong. Here is the code I used with a variable that has 4 categories.

```
> fit=survfit(s~data$school)

# Cox Model
> summary(coxph(s~data$school))

# Test for proportional hazards assumption
> cox.zph(coxph(s~data$school))

# Schoenfeld residuals (defined for each variable for uncensored subjects)
> res=residuals(coxph(s~data$school), type="schoenfeld", collapse=F)
```

When I look at the first 3 rows (time = 7 days), I get a set of 3 residuals per individual:

```
> res[1:3,]
  data$school2 data$school3 data$school4
7   -0.6250424   -0.1333578    0.8462863
7   -0.6250424   -0.1333578    0.8462863
7    0.3749576   -0.1333578   -0.1537137
```

Can anyone please help me with this?
2015/03/08
[ "https://stats.stackexchange.com/questions/140903", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/70688/" ]
Occasionally it is a good idea to concentrate the effects of a categorical predictor into a single term, just for the purpose of checking proportional hazards or interaction. If you don't want 3 d.f. for the PH test you can use `predict(fit, type='terms')` to get one column for each predictor, then run `cox.zph` on a model fitted to that single term. This doesn't exactly preserve the type I error but it's close. The method doesn't recognize that 3 parameters are being estimated instead of 1.
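A minimal sketch of that idea with the `survival` package (object and variable names are taken from the question, and the factor is assumed to live in a data frame `dat`; this is only one way to set it up):

```
library(survival)

# full fit with the 3-d.f. categorical predictor
fit <- coxph(s ~ school, data = dat)

# collapse its effect into a single linear-predictor column
school_term <- predict(fit, type = "terms")[, "school"]

# 1-d.f. proportional-hazards check on the condensed term
cox.zph(coxph(s ~ school_term))
```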
There is a separate Schoenfeld residual *for each covariate* for every uncensored individual. In the case of a categorical covariate with $k$ levels, $k-1$ dummy variates appear in the model.
416,108
For an ideal OpAmp the voltages at the inputs are said to be of equal value and the input currents zero; however, we also know that $$V\_\text{out}=A(V\_p-V\_n)$$ but if the voltages are the same, the output should always be 0! I know that what I'm saying is wrong, but I don't know where the loophole is, and I can't find it anywhere. [![enter image description here](https://i.stack.imgur.com/5dXFM.png)](https://i.stack.imgur.com/5dXFM.png)
2019/01/09
[ "https://electronics.stackexchange.com/questions/416108", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/201757/" ]
> > For an ideal op-amp the voltages at the inputs are said to be of equal value ... > > > No. If negative feedback is applied and the output is not driven into saturation then the inputs will be very, very close to equal. > > ... and the currents 0, ... > > > Yes, due to the high and sometimes very, very high input impedance. > > ... however, for we also know that \$ V\_{out}=A(V\_p−V\_n) \$ > but if the voltages are the same the output should always be 0! > > > Correct. And that is why the voltages are not the same (but very, very close). The difference in voltages can be worked out by rearranging your formula to $$ V\_p - V\_n = \frac {V\_{out}}{A} $$ and since A is very large the difference in inputs is very small - so small that it's *almost* zero. ![schematic](https://i.stack.imgur.com/JnKk7.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJnKk7.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) *Figure 1. Non-inverting amplifier with a gain of 2.* --- From the comments: > > When analyzing an op-amp circuit with negative feedback assuming it is ideal we make the assumptions (almost in this order): > > > 1. Input voltages are the same; > > > Yes, but we know deep down that they are slightly different. > > 2. The gain of the op-amp goes to infinity; > > > The gain of the op-amp is fixed as specified in the datasheet. 1M to 10M would be typical. > > 3. Work out the output voltage and find out that the gain is finite and dependent of the circuit around the op-amp. > > > The gain of **the complete circuit** is "finite" (a modest value, say, of 1 to 1k) rather than the open-loop gain of 1M to 10M which hasn't changed, no matter what the feedback arrangement is. > > If in a circuit the only thing that causes a gain is the op-amp then the gain of the circuit is the gain of the op-amp, so our reasoning appears to be flawed. > > > Nope. You're forgetting the negative feedback. This *controls* the overall circuit gain and corrects any variations and non-linearities in the op-amp itself. --- > > I think I just got it, my problem now lies in "How" does the negative feedback controls the circuit gain, ... > > > Let's look at Figure 1 again. * Initially Vp is at 0 V so the output and Vn are at 0 V too. * VP is suddenly and instantly switched from 0 to +1 V. * VP - Vn = 1 V so (since the open-loop gain of the op-amp itself is 1M) the output starts to swing towards +1,000,000 V at a rate limited by the slew rate of the op-amp. * As it does, VP - Vn is decreasing and by the time Vout reaches 0.5 V, VP - Vn = 0.75 V so the output is now trying to swing to 750,000 V (but is still only at 0.5 V). Remember that the feedback is dividing by 2. * This process continues so that as the output gets approaches 2.0 V the difference between the inputs is getting close to zero and so the target output voltage is falling too. The feedback is reducing the output drive and bringing it under control. * Very quickly the output reaches 2.0 V and at this point Vn is just a millionth or two below 1 V and the circuit is stable, balanced, happy and under control.
The loophole is that, for an **ideal** op-amp, \$A = +\infty\$. Hence, unless you reach saturation, \$V\_p - V\_n = {V\_{out} \over A} = {V\_{out} \over +\infty} = 0\$. **Edit:** If that helps you, you can consider a non-ideal op-amp with a finite \$A\$ gain, using it in an inverter configuration, as you suggested in your comment: ![schematic](https://i.stack.imgur.com/b0Ggr.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fb0Ggr.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) Then you have both \$V\_{out} = A (V\_+ - V\_-) = A (0 - V\_-) = - A V\_-\$ and \$V\_- = {V\_{in} + V\_{out} \over 2}\$. That leads to \$V\_{out} = - {A \over A+2} V\_{in}\$ and \$V\_- = {1 \over A+2} V\_{in}\$. Now, when \$A \to +\infty\$, you have \$V\_{out} \to -V\_{in}\$ and \$V\_- \to 0\$.
416,108
For an ideal OpAmp the voltages at the inputs are said to be of equal value and the input currents zero; however, we also know that $$V\_\text{out}=A(V\_p-V\_n)$$ but if the voltages are the same, the output should always be 0! I know that what I'm saying is wrong, but I don't know where the loophole is, and I can't find it anywhere. [![enter image description here](https://i.stack.imgur.com/5dXFM.png)](https://i.stack.imgur.com/5dXFM.png)
2019/01/09
[ "https://electronics.stackexchange.com/questions/416108", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/201757/" ]
The loophole is that, for an **ideal** op-amp, \$A = +\infty\$. Hence, unless you reach saturation, \$V\_p - V\_n = {V\_{out} \over A} = {V\_{out} \over +\infty} = 0\$. **Edit:** If that helps you, you can consider a non-ideal op-amp with a finite \$A\$ gain, using it in an inverter configuration, as you suggested in your comment: ![schematic](https://i.stack.imgur.com/b0Ggr.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fb0Ggr.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) Then you have both \$V\_{out} = A (V\_+ - V\_-) = A (0 - V\_-) = - A V\_-\$ and \$V\_- = {V\_{in} + V\_{out} \over 2}\$. That leads to \$V\_{out} = - {A \over A+2} V\_{in}\$ and \$V\_- = {1 \over A+2} V\_{in}\$. Now, when \$A \to +\infty\$, you have \$V\_{out} \to -V\_{in}\$ and \$V\_- \to 0\$.
One formula often used in thinking about opamp behavior is Closed Loop Gain = G /(1 + G \* H), where G is the open-loop gain and H is the feedback factor. Suppose G rolls off with frequency, and suppose H is 0.0001 (the purpose is to produce a precision gain of 10,000x). What happens in a real opamp? [![enter image description here](https://i.stack.imgur.com/cprw5.png)](https://i.stack.imgur.com/cprw5.png) Look at the right-side axis, where the linear error is plotted. Notice the error is 10% at 10 Hz, 1% at 1 Hz, and 0.1% (or 10 bits) at 0.1 Hz.
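As a quick worked example of that formula (with illustrative numbers, not values read off the plot): with H = 0.0001 the ideal gain is 1/H = 10,000. If the open-loop gain at some frequency is G = 10^6, the closed-loop gain is 10^6 / (1 + 100) ≈ 9901, about 1% low; if G has rolled off to 10^5, it is 10^5 / (1 + 10) ≈ 9091, roughly 9% low. The fractional error is approximately 1/(G·H), which is why the error climbs as the open-loop gain falls with frequency.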
416,108
For an ideal OpAmp the voltages at the inputs are said to be of equal value and the input currents zero; however, we also know that $$V\_\text{out}=A(V\_p-V\_n)$$ but if the voltages are the same, the output should always be 0! I know that what I'm saying is wrong, but I don't know where the loophole is, and I can't find it anywhere. [![enter image description here](https://i.stack.imgur.com/5dXFM.png)](https://i.stack.imgur.com/5dXFM.png)
2019/01/09
[ "https://electronics.stackexchange.com/questions/416108", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/201757/" ]
> > For an ideal op-amp the voltages at the inputs are said to be of equal value ... > > > No. If negative feedback is applied and the output is not driven into saturation then the inputs will be very, very close to equal. > > ... and the currents 0, ... > > > Yes, due to the high and sometimes very, very high input impedance. > > ... however, for we also know that \$ V\_{out}=A(V\_p−V\_n) \$ > but if the voltages are the same the output should always be 0! > > > Correct. And that is why the voltages are not the same (but very, very close). The difference in voltages can be worked out by rearranging your formula to $$ V\_p - V\_n = \frac {V\_{out}}{A} $$ and since A is very large the difference in inputs is very small - so small that it's *almost* zero. ![schematic](https://i.stack.imgur.com/JnKk7.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJnKk7.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) *Figure 1. Non-inverting amplifier with a gain of 2.* --- From the comments: > > When analyzing an op-amp circuit with negative feedback assuming it is ideal we make the assumptions (almost in this order): > > > 1. Input voltages are the same; > > > Yes, but we know deep down that they are slightly different. > > 2. The gain of the op-amp goes to infinity; > > > The gain of the op-amp is fixed as specified in the datasheet. 1M to 10M would be typical. > > 3. Work out the output voltage and find out that the gain is finite and dependent of the circuit around the op-amp. > > > The gain of **the complete circuit** is "finite" (a modest value, say, of 1 to 1k) rather than the open-loop gain of 1M to 10M which hasn't changed, no matter what the feedback arrangement is. > > If in a circuit the only thing that causes a gain is the op-amp then the gain of the circuit is the gain of the op-amp, so our reasoning appears to be flawed. > > > Nope. You're forgetting the negative feedback. This *controls* the overall circuit gain and corrects any variations and non-linearities in the op-amp itself. --- > > I think I just got it, my problem now lies in "How" does the negative feedback controls the circuit gain, ... > > > Let's look at Figure 1 again. * Initially Vp is at 0 V so the output and Vn are at 0 V too. * VP is suddenly and instantly switched from 0 to +1 V. * VP - Vn = 1 V so (since the open-loop gain of the op-amp itself is 1M) the output starts to swing towards +1,000,000 V at a rate limited by the slew rate of the op-amp. * As it does, VP - Vn is decreasing and by the time Vout reaches 0.5 V, VP - Vn = 0.75 V so the output is now trying to swing to 750,000 V (but is still only at 0.5 V). Remember that the feedback is dividing by 2. * This process continues so that as the output gets approaches 2.0 V the difference between the inputs is getting close to zero and so the target output voltage is falling too. The feedback is reducing the output drive and bringing it under control. * Very quickly the output reaches 2.0 V and at this point Vn is just a millionth or two below 1 V and the circuit is stable, balanced, happy and under control.
The assumption that the input voltages are always the same only applies when the op amp is used with **negative feedback**. It is the feedback that causes the input voltages to be the same (or nearly so, in the real world). Of course, we also assume that the output voltage is within the operating limits of the amplifier. When operated without feedback, or with positive feedback, then the input voltages may be much different. In this case it is reasonable to say that $$V\_{OUT} = A\_{OL}(V\_P - V\_N)$$ where \$A\_{OL}\$ is the open loop gain of the op amp itself.
416,108
For an ideal OpAmp the voltages at the inputs are said to be of equal value and the input currents zero; however, we also know that $$V\_\text{out}=A(V\_p-V\_n)$$ but if the voltages are the same, the output should always be 0! I know that what I'm saying is wrong, but I don't know where the loophole is, and I can't find it anywhere. [![enter image description here](https://i.stack.imgur.com/5dXFM.png)](https://i.stack.imgur.com/5dXFM.png)
2019/01/09
[ "https://electronics.stackexchange.com/questions/416108", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/201757/" ]
The assumption that the input voltages are always the same only applies when the op amp is used with **negative feedback**. It is the feedback that causes the input voltages to be the same (or nearly so, in the real world). Of course, we also assume that the output voltage is within the operating limits of the amplifier. When operated without feedback, or with positive feedback, then the input voltages may be much different. In this case it is reasonable to say that $$V\_{OUT} = A\_{OL}(V\_P - V\_N)$$ where \$A\_{OL}\$ is the open loop gain of the op amp itself.
One formula often used in thinking about opamp behavior is Closed Loop Gain = G /(1 + G \* H), where G is the open-loop gain and H is the feedback factor. Suppose G rolls off with frequency, and suppose H is 0.0001 (the purpose is to produce a precision gain of 10,000x). What happens in a real opamp? [![enter image description here](https://i.stack.imgur.com/cprw5.png)](https://i.stack.imgur.com/cprw5.png) Look at the right-side axis, where the linear error is plotted. Notice the error is 10% at 10 Hz, 1% at 1 Hz, and 0.1% (or 10 bits) at 0.1 Hz.
416,108
For an ideal OpAmp the voltages at the inputs are said to be of equal value and the input currents zero; however, we also know that $$V\_\text{out}=A(V\_p-V\_n)$$ but if the voltages are the same, the output should always be 0! I know that what I'm saying is wrong, but I don't know where the loophole is, and I can't find it anywhere. [![enter image description here](https://i.stack.imgur.com/5dXFM.png)](https://i.stack.imgur.com/5dXFM.png)
2019/01/09
[ "https://electronics.stackexchange.com/questions/416108", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/201757/" ]
> > For an ideal op-amp the voltages at the inputs are said to be of equal value ... > > > No. If negative feedback is applied and the output is not driven into saturation then the inputs will be very, very close to equal. > > ... and the currents 0, ... > > > Yes, due to the high and sometimes very, very high input impedance. > > ... however, for we also know that \$ V\_{out}=A(V\_p−V\_n) \$ > but if the voltages are the same the output should always be 0! > > > Correct. And that is why the voltages are not the same (but very, very close). The difference in voltages can be worked out by rearranging your formula to $$ V\_p - V\_n = \frac {V\_{out}}{A} $$ and since A is very large the difference in inputs is very small - so small that it's *almost* zero. ![schematic](https://i.stack.imgur.com/JnKk7.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJnKk7.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) *Figure 1. Non-inverting amplifier with a gain of 2.* --- From the comments: > > When analyzing an op-amp circuit with negative feedback assuming it is ideal we make the assumptions (almost in this order): > > > 1. Input voltages are the same; > > > Yes, but we know deep down that they are slightly different. > > 2. The gain of the op-amp goes to infinity; > > > The gain of the op-amp is fixed as specified in the datasheet. 1M to 10M would be typical. > > 3. Work out the output voltage and find out that the gain is finite and dependent of the circuit around the op-amp. > > > The gain of **the complete circuit** is "finite" (a modest value, say, of 1 to 1k) rather than the open-loop gain of 1M to 10M which hasn't changed, no matter what the feedback arrangement is. > > If in a circuit the only thing that causes a gain is the op-amp then the gain of the circuit is the gain of the op-amp, so our reasoning appears to be flawed. > > > Nope. You're forgetting the negative feedback. This *controls* the overall circuit gain and corrects any variations and non-linearities in the op-amp itself. --- > > I think I just got it, my problem now lies in "How" does the negative feedback controls the circuit gain, ... > > > Let's look at Figure 1 again. * Initially Vp is at 0 V so the output and Vn are at 0 V too. * VP is suddenly and instantly switched from 0 to +1 V. * VP - Vn = 1 V so (since the open-loop gain of the op-amp itself is 1M) the output starts to swing towards +1,000,000 V at a rate limited by the slew rate of the op-amp. * As it does, VP - Vn is decreasing and by the time Vout reaches 0.5 V, VP - Vn = 0.75 V so the output is now trying to swing to 750,000 V (but is still only at 0.5 V). Remember that the feedback is dividing by 2. * This process continues so that as the output gets approaches 2.0 V the difference between the inputs is getting close to zero and so the target output voltage is falling too. The feedback is reducing the output drive and bringing it under control. * Very quickly the output reaches 2.0 V and at this point Vn is just a millionth or two below 1 V and the circuit is stable, balanced, happy and under control.
One formula often used in thinking about opamp behavior is Closed Loop Gain = G /(1 + G \* H), where G is the open-loop gain and H is the feedback factor. Suppose G rolls off with frequency, and suppose H is 0.0001 (the purpose is to produce a precision gain of 10,000x). What happens in a real opamp? [![enter image description here](https://i.stack.imgur.com/cprw5.png)](https://i.stack.imgur.com/cprw5.png) Look at the right-side axis, where the linear error is plotted. Notice the error is 10% at 10 Hz, 1% at 1 Hz, and 0.1% (or 10 bits) at 0.1 Hz.
828,931
I have been trying to configure Postfix to use SMTP authentication. When I telnet on port 587, I appear to be authenticating correctly, but the mail fails to reach its destination and instead comes back as 553 rejected by Spamhaus because my IP is on the PBL. When I read the documentation on Spamhaus, I am told that being on the PBL is not a block, I just need to ensure that I authenticate correctly (<https://www.spamhaus.org/faq/section/Spamhaus%20PBL#253>). I have searched extensively, but have not found a way to ensure mail is delivered successfully from this server. Would anyone know what I might be missing here? Here is the result of my telnet test: ``` ubuntu@dev-server:~$ telnet api.mijnvitalefuncties.com 587 Trying 192.168.0.11... Connected to api.mijnvitalefuncties.com. Escape character is '^]'. 220 dev-server ESMTP Postfix (Ubuntu) ehlo api.mijnvitalefuncties.com 250-dev-server 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-AUTH PLAIN LOGIN 250-AUTH=PLAIN LOGIN 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN AUTH LOGIN 334 VXNlcm5hbWU6 *** 334 UGFzc3dvcmQ6 *** 235 2.7.0 Authentication successful MAIL FROM:<someone@api.mijnvitalefuncties.com> 250 2.1.0 Ok RCPT TO:<someone@example.com> 250 2.1.5 Ok DATA 354 End data with <CR><LF>.<CR><LF> . 250 2.0.0 Ok: queued as B70A764235 quit 221 2.0.0 Bye Connection closed by foreign host. ``` Here is the email I receive informing me that the mail cannot be delivered: ``` This is the mail system at host dev-server. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <someone@example.com>: host cluster5.eu.messagelabs.com[193.109.255.99] said: 553-mail rejected because your IP is in the PBL. See 553 http://www.spamhaus.org/pbl (in reply to RCPT TO command) ``` Here is the error I get when I try to disable port 25 so as to force mail through submission (port 587). ``` Jan 27 11:53:26 dev-server postfix/qmgr[16821]: warning: connect to transport private/smtp: Connection refused Jan 27 11:53:26 dev-server postfix/error[16841]: 5137E64232: to=<peter.heylin@ie.fujitsu.com>, relay=none, delay=19, delays=19/0/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable) ```
2017/01/27
[ "https://serverfault.com/questions/828931", "https://serverfault.com", "https://serverfault.com/users/395531/" ]
The log shows that authentication is properly configured and working ``` 235 2.7.0 Authentication successful ``` but the reply says you are getting blocked ``` 553-mail rejected because your IP is in the PBL. See 553 http://www.spamhaus.org/pbl (in reply to RCPT TO command) ``` Go to spamhaus web site and follow the link to get unblocked > > Blocked? To check, get info and resolve listings go to [Blocklist > Removal Center](https://www.spamhaus.org/lookup/) > > >
Proving that authentication is working is not the same thing as proving that unauthenticated requests are blocked. However, that's not relevant to the problem you are having (being unable to deliver email to remote systems).

> when I try to disable port 25

It's rather worrying that you think this has any part in a solution to your problem. You've also not provided any details of how your MTA is configured (a diff of the main.cf and any other config files you have modified is pretty essential to understanding what's going on here).

> When I read the documentation on Spamhaus, I am told that being on the PBL is not a block, I just need to ensure that I authenticate correctly

You seem to have drawn the wrong conclusions from the information provided there. Here authentication solves the problem where *your* MTA is rejecting messages from *your* MUA, while your problem is that the messagelabs MTA is not accepting mail from your MTA. If you have a formal arrangement with messagelabs whereby they provide you with a login to their MTA, and you configure your MTA to use that account when passing mail to their servers, then you would be able to get your email forwarded by them. But such an approach is just silly - messagelabs are not interested in maintaining such relationships with all the people who might want to send their customers email. Read "[What if I want to run a mail server on dynamic IPs listed in the PBL?](https://www.spamhaus.org/faq/section/Spamhaus%20PBL#219)" carefully. You need to get a static IP address (in a static range) or use a smart relay which already has such an IP.
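For the smart-relay route, the relevant Postfix settings look roughly like this (the host name is a placeholder for whatever relay your provider or ISP gives you; the credentials file path is just the conventional one):

```
# main.cf — send all outbound mail through an authenticated smarthost
relayhost = [smtp.relay.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

After editing, build the password map with `postmap /etc/postfix/sasl_passwd` and reload Postfix.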
22,052,258
I am building an authentication system using Passport.js using [Easy Node Authentication: Setup and Local tutorial](https://scotch.io/tutorials/easy-node-authentication-setup-and-local). I am confused about what `passport.session()` does. After playing around with the different middleware I came to understand that `express.session()` is what sends a session ID over cookies to the client, but I'm confused about what `passport.session()` does and why it is required in addition to `express.session()`. Here is how I set up my application: // Server.js configures the application and sets up the webserver ``` //importing our modules var express = require('express'); var app = express(); var port = process.env.PORT || 8080; var mongoose = require('mongoose'); var passport = require('passport'); var flash = require('connect-flash'); var configDB = require('./config/database.js'); //Configuration of Databse and App mongoose.connect(configDB.url); //connect to our database require('./config/passport')(passport); //pass passport for configuration app.configure(function() { //set up our express application app.use(express.logger('dev')); //log every request to the console app.use(express.cookieParser()); //read cookies (needed for auth) app.use(express.bodyParser()); //get info from html forms app.set('view engine', 'ejs'); //set up ejs for templating //configuration for passport app.use(express.session({ secret: 'olhosvermelhoseasenhaclassica', maxAge:null })); //session secret app.use(passport.initialize()); app.use(passport.session()); //persistent login session app.use(flash()); //use connect-flash for flash messages stored in session }); //Set up routes require('./app/routes.js')(app, passport); //launch app.listen(port); console.log("Server listening on port" + port); ```
2014/02/26
[ "https://Stackoverflow.com/questions/22052258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1835903/" ]
`passport.session()` acts as a middleware to alter the req object and change the 'user' value that is currently the session id (from the client cookie) into the true deserialized user object.

Whilst the other answers make some good points, I thought that some more specific detail could be provided.

```
app.use(passport.session());
```

is equivalent to

```
app.use(passport.authenticate('session'));
```

Where 'session' refers to the following strategy that is bundled with passportJS.

Here's a link to the file: <https://github.com/jaredhanson/passport/blob/master/lib/strategies/session.js>

And a [permalink](https://github.com/jaredhanson/passport/blob/b220766870cdbb98cc7570764cd3ced93da8945e/lib/strategies/session.js#L66) pointing to the following lines at the time of this writing:

```
var property = req._passport.instance._userProperty || 'user';
req[property] = user;
```

Where it essentially acts as a middleware and alters the value of the 'user' property in the req object to contain the deserialized identity of the user. To allow this to work correctly you must include `serializeUser` and `deserializeUser` functions in your custom code.

```
passport.serializeUser(function (user, done) {
    done(null, user.id);
});

passport.deserializeUser(function (id, done) {
    // The first argument is whatever serializeUser stored, here the user id.
    // If using Mongoose with MongoDB; if other you will need JS specific to that schema.
    User.findById(id, function (err, user) {
        done(err, user);
    });
});
```

This will find the correct user in the database and pass it into the callback `done(err, user);` so the above code in `passport.session()` can replace the 'user' value in the req object and pass control on to the next middleware in the stack.
From the [documentation](http://passportjs.org/guide/configure/) > > In a Connect or Express-based application, passport.initialize() > middleware is required to initialize Passport. If your application > uses persistent login sessions, passport.session() middleware must > also be used. > > > and > > Sessions > > > In a typical web application, the credentials used to authenticate a > user will only be transmitted during the login request. If > authentication succeeds, a session will be established and maintained > via a cookie set in the user's browser. > > > Each subsequent request will not contain credentials, but rather the > unique cookie that identifies the session. In order to support login > sessions, Passport will serialize and deserialize user instances to > and from the session. > > > and > > Note that enabling session support is entirely optional, though it is > recommended for most applications. If enabled, be sure to use > express.session() before passport.session() to ensure that the login > session is restored in the correct order. > > >
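As a minimal ordering sketch matching the question's Express 3 style setup (the secret value is a placeholder):

```js
app.use(express.cookieParser());
app.use(express.session({ secret: 'replace-me' })); // restores req.session from the cookie
app.use(passport.initialize());                     // wires Passport into each request
app.use(passport.session());                        // uses req.session to populate req.user
```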
22,052,258
I am building an authentication system using Passport.js using [Easy Node Authentication: Setup and Local tutorial](https://scotch.io/tutorials/easy-node-authentication-setup-and-local). I am confused about what `passport.session()` does. After playing around with the different middleware I came to understand that `express.session()` is what sends a session ID over cookies to the client, but I'm confused about what `passport.session()` does and why it is required in addition to `express.session()`. Here is how I set up my application: // Server.js configures the application and sets up the webserver ``` //importing our modules var express = require('express'); var app = express(); var port = process.env.PORT || 8080; var mongoose = require('mongoose'); var passport = require('passport'); var flash = require('connect-flash'); var configDB = require('./config/database.js'); //Configuration of Databse and App mongoose.connect(configDB.url); //connect to our database require('./config/passport')(passport); //pass passport for configuration app.configure(function() { //set up our express application app.use(express.logger('dev')); //log every request to the console app.use(express.cookieParser()); //read cookies (needed for auth) app.use(express.bodyParser()); //get info from html forms app.set('view engine', 'ejs'); //set up ejs for templating //configuration for passport app.use(express.session({ secret: 'olhosvermelhoseasenhaclassica', maxAge:null })); //session secret app.use(passport.initialize()); app.use(passport.session()); //persistent login session app.use(flash()); //use connect-flash for flash messages stored in session }); //Set up routes require('./app/routes.js')(app, passport); //launch app.listen(port); console.log("Server listening on port" + port); ```
2014/02/26
[ "https://Stackoverflow.com/questions/22052258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1835903/" ]
`passport.session()` acts as a middleware to alter the req object and change the 'user' value that is currently the session id (from the client cookie) into the true deserialized user object.

Whilst the other answers make some good points, I thought that some more specific detail could be provided.

```
app.use(passport.session());
```

is equivalent to

```
app.use(passport.authenticate('session'));
```

Where 'session' refers to the following strategy that is bundled with passportJS.

Here's a link to the file: <https://github.com/jaredhanson/passport/blob/master/lib/strategies/session.js>

And a [permalink](https://github.com/jaredhanson/passport/blob/b220766870cdbb98cc7570764cd3ced93da8945e/lib/strategies/session.js#L66) pointing to the following lines at the time of this writing:

```
var property = req._passport.instance._userProperty || 'user';
req[property] = user;
```

Where it essentially acts as a middleware and alters the value of the 'user' property in the req object to contain the deserialized identity of the user. To allow this to work correctly you must include `serializeUser` and `deserializeUser` functions in your custom code.

```
passport.serializeUser(function (user, done) {
    done(null, user.id);
});

passport.deserializeUser(function (id, done) {
    // The first argument is whatever serializeUser stored, here the user id.
    // If using Mongoose with MongoDB; if other you will need JS specific to that schema.
    User.findById(id, function (err, user) {
        done(err, user);
    });
});
```

This will find the correct user in the database and pass it into the callback `done(err, user);` so the above code in `passport.session()` can replace the 'user' value in the req object and pass control on to the next middleware in the stack.
It simply authenticates the session (which is populated by `express.session()`). It is equivalent to: ``` passport.authenticate('session'); ``` as can be seen in the code here: <https://github.com/jaredhanson/passport/blob/42ff63c/lib/authenticator.js#L233>
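As a small illustration of what that buys you, here is a sketch of a guard that relies on the result of `passport.session()` (the route and template names are made up for illustration):

```js
function ensureAuthenticated(req, res, next) {
  if (req.isAuthenticated()) { return next(); } // true when the session strategy succeeded
  res.redirect('/login');
}

app.get('/profile', ensureAuthenticated, function (req, res) {
  // req.user was populated by passport.session() via deserializeUser
  res.render('profile', { user: req.user });
});
```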
22,052,258
I am building an authentication system using Passport.js using [Easy Node Authentication: Setup and Local tutorial](https://scotch.io/tutorials/easy-node-authentication-setup-and-local). I am confused about what `passport.session()` does. After playing around with the different middleware I came to understand that `express.session()` is what sends a session ID over cookies to the client, but I'm confused about what `passport.session()` does and why it is required in addition to `express.session()`. Here is how I set up my application: // Server.js configures the application and sets up the webserver ``` //importing our modules var express = require('express'); var app = express(); var port = process.env.PORT || 8080; var mongoose = require('mongoose'); var passport = require('passport'); var flash = require('connect-flash'); var configDB = require('./config/database.js'); //Configuration of Databse and App mongoose.connect(configDB.url); //connect to our database require('./config/passport')(passport); //pass passport for configuration app.configure(function() { //set up our express application app.use(express.logger('dev')); //log every request to the console app.use(express.cookieParser()); //read cookies (needed for auth) app.use(express.bodyParser()); //get info from html forms app.set('view engine', 'ejs'); //set up ejs for templating //configuration for passport app.use(express.session({ secret: 'olhosvermelhoseasenhaclassica', maxAge:null })); //session secret app.use(passport.initialize()); app.use(passport.session()); //persistent login session app.use(flash()); //use connect-flash for flash messages stored in session }); //Set up routes require('./app/routes.js')(app, passport); //launch app.listen(port); console.log("Server listening on port" + port); ```
2014/02/26
[ "https://Stackoverflow.com/questions/22052258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1835903/" ]
`passport.session()` acts as a middleware to alter the req object and change the 'user' value that is currently the session id (from the client cookie) into the true deserialized user object.

Whilst the other answers make some good points, I thought that some more specific detail could be provided.

```
app.use(passport.session());
```

is equivalent to

```
app.use(passport.authenticate('session'));
```

Where 'session' refers to the following strategy that is bundled with passportJS.

Here's a link to the file: <https://github.com/jaredhanson/passport/blob/master/lib/strategies/session.js>

And a [permalink](https://github.com/jaredhanson/passport/blob/b220766870cdbb98cc7570764cd3ced93da8945e/lib/strategies/session.js#L66) pointing to the following lines at the time of this writing:

```
var property = req._passport.instance._userProperty || 'user';
req[property] = user;
```

Where it essentially acts as a middleware and alters the value of the 'user' property in the req object to contain the deserialized identity of the user. To allow this to work correctly you must include `serializeUser` and `deserializeUser` functions in your custom code.

```
passport.serializeUser(function (user, done) {
    done(null, user.id);
});

passport.deserializeUser(function (id, done) {
    // The first argument is whatever serializeUser stored, here the user id.
    // If using Mongoose with MongoDB; if other you will need JS specific to that schema.
    User.findById(id, function (err, user) {
        done(err, user);
    });
});
```

This will find the correct user in the database and pass it into the callback `done(err, user);` so the above code in `passport.session()` can replace the 'user' value in the req object and pass control on to the next middleware in the stack.
While you will be using `PassportJs` for validating the user as part of your login URL, you still need some mechanism to store this user information in the session and retrieve it with every subsequent request (i.e. serialize/deserialize the user). So in effect, you are authenticating the user with every request, even though this authentication needn't look up a database or oauth as in the login response. So passport will treat session authentication also as yet another authentication strategy. And to use this strategy - which is named `session`, just use a simple shortcut - `app.use(passport.session())`. Also note that this particular strategy will want you to implement serialize and deserialize functions for obvious reasons.
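For illustration, a sketch of the login route under these assumptions (a 'local' strategy configured as in the tutorial; the paths are placeholders): credentials are checked once here, and every later request is authenticated by the `session` strategy instead.

```js
app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function (req, res) {
    // serializeUser has just stored the user id in the session;
    // subsequent requests only carry the session cookie
    res.redirect('/profile');
  });
```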
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
Possibly a duplicate of [Bulk Publishing of Android Apps](https://stackoverflow.com/questions/4740319/bulk-publishing-of-android-apps/4740728#4740728).

Android Library projects will do this for you nicely. You'll end up with 1 library project and then a project for each edition (free/full); those edition projects really just contain different resources, such as app icons, and different manifests, which is where the package name is varied.

Hope that helps. It has worked well for me.
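A minimal sketch of that layout, assuming an Eclipse/ADT-style setup (project and package names below are placeholders, not from the linked question): each edition project points at the shared library from its `project.properties`, and only its own manifest carries the edition-specific package name.

```
# library project: project.properties
android.library=true

# free edition project: project.properties
android.library.reference.1=../MyAppLibrary
```

The free edition's AndroidManifest.xml would then declare `package="com.example.myapp.free"` while the paid edition declares `package="com.example.myapp.paid"`, with all code and assets living in the library.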
Gradle allows using the generated BuildConfig.java to pass some data to the code.

```
productFlavors {
    paid {
        packageName "com.simple.paid"
        buildConfigField 'boolean', 'PAID', 'true'
        buildConfigField "int", "THING_ONE", "1"
    }

    free {
        packageName "com.simple.free"
        buildConfigField 'boolean', 'PAID', 'false'
        buildConfigField "int", "THING_ONE", "0"
    }
}
```
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
The best way is to use "Android Studio" -> gradle.build -> [productFlavors + generate manifest file from template]. This combination allows to build free/paid versions and bunch of editions for different app markets from one source. --- This is a part of templated manifest file: --- ``` <manifest android:versionCode="1" android:versionName="1" package="com.example.product" xmlns:android="http://schemas.android.com/apk/res/android"> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/{f:FREE}app_name_free{/f}{f:PAID}app_name_paid{/f}" android:name=".ApplicationMain" android:theme="@style/AppTheme"> <activity android:label="@string/{f:FREE}app_name_free{/f}{f:PAID}app_name_paid{/f}" android:name=".ActivityMain"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> </application> ``` --- This is template "ProductInfo.template" for java file: ProductInfo.java --- ``` package com.packagename.generated; import com.packagename.R; public class ProductInfo { public static final boolean mIsPaidVersion = {f:PAID}true{/f}{f:FREE}false{/f}; public static final int mAppNameId = R.string.app_name_{f:PAID}paid{/f}{f:FREE}free{/f}; public static final boolean mIsDebug = {$DEBUG}; } ``` --- This manifest is processed by gradle.build script with **productFlavors** and **processManifest** task hook: --- ``` import java.util.regex.Matcher; import java.util.regex.Pattern; import org.gradle.api.DefaultTask import org.gradle.api.tasks.TaskAction ... android { ... productFlavors { free { packageName 'com.example.product.free' } paid { packageName 'com.example.product.paid' } } ... } afterEvaluate { project -> android.applicationVariants.each { variant -> def flavor = variant.productFlavors[0].name tasks['prepare' + variant.name + 'Dependencies'].doLast { println "Generate java files..." //Copy templated and processed by build system manifest file to filtered_manifests forder def productInfoPath = "${projectDir}/some_sourcs_path/generated/" copy { from(productInfoPath) into(productInfoPath) include('ProductInfo.template') rename('ProductInfo.template', 'ProductInfo.java') } tasks.create(name: variant.name + 'ProcessProductInfoJavaFile', type: processTemplateFile) { templateFilePath = productInfoPath + "ProductInfo.java" flavorName = flavor buildTypeName = variant.buildType.name } tasks[variant.name + 'ProcessProductInfoJavaFile'].execute() } variant.processManifest.doLast { println "Customization manifest file..." 
// Copy templated and processed by build system manifest file to filtered_manifests forder copy { from("${buildDir}/manifests") { include "${variant.dirName}/AndroidManifest.xml" } into("${buildDir}/filtered_manifests") } tasks.create(name: variant.name + 'ProcessManifestFile', type: processTemplateFile) { templateFilePath = "${buildDir}/filtered_manifests/${variant.dirName}/AndroidManifest.xml" flavorName = flavor buildTypeName = variant.buildType.name } tasks[variant.name + 'ProcessManifestFile'].execute() } variant.processResources.manifestFile = file("${buildDir}/filtered_manifests/${variant.dirName}/AndroidManifest.xml") } } ``` --- This is separated task to process file --- ``` class processTemplateFile extends DefaultTask { def String templateFilePath = "" def String flavorName = "" def String buildTypeName = "" @TaskAction void run() { println templateFilePath // Load file to memory def fileObj = project.file(templateFilePath) def content = fileObj.getText() // Flavor. Find "{f:<flavor_name>}...{/f}" pattern and leave only "<flavor_name>==flavor" def patternAttribute = Pattern.compile("\\{f:((?!${flavorName.toUpperCase()})).*?\\{/f\\}",Pattern.DOTALL); content = patternAttribute.matcher(content).replaceAll(""); def pattern = Pattern.compile("\\{f:.*?\\}"); content = pattern.matcher(content).replaceAll(""); pattern = Pattern.compile("\\{/f\\}"); content = pattern.matcher(content).replaceAll(""); // Build. Find "{$DEBUG}" pattern and replace with "true"/"false" pattern = Pattern.compile("\\{\\\$DEBUG\\}", Pattern.DOTALL); if (buildTypeName == "debug"){ content = pattern.matcher(content).replaceAll("true"); } else{ content = pattern.matcher(content).replaceAll("false"); } // Save processed manifest file fileObj.write(content) } } ``` **Updated:** processTemplateFile created for code reusing purposes.
If you want another application name, depending on the flavor, you can also add this:

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        resValue "string", "app_name", "test lite"
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        resValue "string", "app_name", "test pro"
        versionCode 1
        versionName '1.0.0'
    }
}
```
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
Possibly a duplicate of [Bulk Publishing of Android Apps](https://stackoverflow.com/questions/4740319/bulk-publishing-of-android-apps/4740728#4740728).

Android Library projects will do this for you nicely. You'll end up with 1 library project and then a project for each edition (free/full); those edition projects really just contain different resources, such as app icons, and different manifests, which is where the package name is varied.

Hope that helps. It has worked well for me.
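A minimal sketch of that layout, assuming an Eclipse/ADT-style setup (project and package names below are placeholders, not from the linked question): each edition project points at the shared library from its `project.properties`, and only its own manifest carries the edition-specific package name.

```
# library project: project.properties
android.library=true

# free edition project: project.properties
android.library.reference.1=../MyAppLibrary
```

The free edition's AndroidManifest.xml would then declare `package="com.example.myapp.free"` while the paid edition declares `package="com.example.myapp.paid"`, with all code and assets living in the library.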
If you want another application name, depending on the flavor, you can also add this:

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        resValue "string", "app_name", "test lite"
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        resValue "string", "app_name", "test pro"
        versionCode 1
        versionName '1.0.0'
    }
}
```
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
Gradle allows using the generated BuildConfig.java to pass some data to the code.

```
productFlavors {
    paid {
        packageName "com.simple.paid"
        buildConfigField 'boolean', 'PAID', 'true'
        buildConfigField "int", "THING_ONE", "1"
    }

    free {
        packageName "com.simple.free"
        buildConfigField 'boolean', 'PAID', 'false'
        buildConfigField "int", "THING_ONE", "0"
    }
}
```
For everyone who wants to use the solution by Denis: in the new Gradle version `packageName` is now `applicationId`, and don't forget to put `productFlavors { ... }` inside `android { ... }`.

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        versionCode 1
        versionName '1.0.0'
    }
}
```
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
It's very simple using build.gradle in Android Studio. Read about [productFlavors](http://tools.android.com/tech-docs/new-build-system/user-guide). It is a very useful feature. Simply add the following lines to build.gradle:

```
productFlavors {
    lite {
        packageName = 'com.project.test.app'
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        packageName = 'com.project.testpro.app'
        versionCode 1
        versionName '1.0.0'
    }
}
```

In this example I add two product flavors: the first for the lite version and the second for the full version. Each version has its own versionCode and versionName (for Google Play publication).

In code, just check BuildConfig.FLAVOR:

```
if (BuildConfig.FLAVOR == "lite") {
  // add some ads or restrict functionality
}
```

For running and testing on a device, use the "Build Variants" tab in Android Studio to switch between versions:

![enter image description here](https://i.stack.imgur.com/C4f2A.jpg)
The best way is to use "Android Studio" -> gradle.build -> [productFlavors + generate manifest file from template]. This combination allows to build free/paid versions and bunch of editions for different app markets from one source. --- This is a part of templated manifest file: --- ``` <manifest android:versionCode="1" android:versionName="1" package="com.example.product" xmlns:android="http://schemas.android.com/apk/res/android"> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/{f:FREE}app_name_free{/f}{f:PAID}app_name_paid{/f}" android:name=".ApplicationMain" android:theme="@style/AppTheme"> <activity android:label="@string/{f:FREE}app_name_free{/f}{f:PAID}app_name_paid{/f}" android:name=".ActivityMain"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> </application> ``` --- This is template "ProductInfo.template" for java file: ProductInfo.java --- ``` package com.packagename.generated; import com.packagename.R; public class ProductInfo { public static final boolean mIsPaidVersion = {f:PAID}true{/f}{f:FREE}false{/f}; public static final int mAppNameId = R.string.app_name_{f:PAID}paid{/f}{f:FREE}free{/f}; public static final boolean mIsDebug = {$DEBUG}; } ``` --- This manifest is processed by gradle.build script with **productFlavors** and **processManifest** task hook: --- ``` import java.util.regex.Matcher; import java.util.regex.Pattern; import org.gradle.api.DefaultTask import org.gradle.api.tasks.TaskAction ... android { ... productFlavors { free { packageName 'com.example.product.free' } paid { packageName 'com.example.product.paid' } } ... } afterEvaluate { project -> android.applicationVariants.each { variant -> def flavor = variant.productFlavors[0].name tasks['prepare' + variant.name + 'Dependencies'].doLast { println "Generate java files..." //Copy templated and processed by build system manifest file to filtered_manifests forder def productInfoPath = "${projectDir}/some_sourcs_path/generated/" copy { from(productInfoPath) into(productInfoPath) include('ProductInfo.template') rename('ProductInfo.template', 'ProductInfo.java') } tasks.create(name: variant.name + 'ProcessProductInfoJavaFile', type: processTemplateFile) { templateFilePath = productInfoPath + "ProductInfo.java" flavorName = flavor buildTypeName = variant.buildType.name } tasks[variant.name + 'ProcessProductInfoJavaFile'].execute() } variant.processManifest.doLast { println "Customization manifest file..." 
// Copy templated and processed by build system manifest file to filtered_manifests forder copy { from("${buildDir}/manifests") { include "${variant.dirName}/AndroidManifest.xml" } into("${buildDir}/filtered_manifests") } tasks.create(name: variant.name + 'ProcessManifestFile', type: processTemplateFile) { templateFilePath = "${buildDir}/filtered_manifests/${variant.dirName}/AndroidManifest.xml" flavorName = flavor buildTypeName = variant.buildType.name } tasks[variant.name + 'ProcessManifestFile'].execute() } variant.processResources.manifestFile = file("${buildDir}/filtered_manifests/${variant.dirName}/AndroidManifest.xml") } } ``` --- This is separated task to process file --- ``` class processTemplateFile extends DefaultTask { def String templateFilePath = "" def String flavorName = "" def String buildTypeName = "" @TaskAction void run() { println templateFilePath // Load file to memory def fileObj = project.file(templateFilePath) def content = fileObj.getText() // Flavor. Find "{f:<flavor_name>}...{/f}" pattern and leave only "<flavor_name>==flavor" def patternAttribute = Pattern.compile("\\{f:((?!${flavorName.toUpperCase()})).*?\\{/f\\}",Pattern.DOTALL); content = patternAttribute.matcher(content).replaceAll(""); def pattern = Pattern.compile("\\{f:.*?\\}"); content = pattern.matcher(content).replaceAll(""); pattern = Pattern.compile("\\{/f\\}"); content = pattern.matcher(content).replaceAll(""); // Build. Find "{$DEBUG}" pattern and replace with "true"/"false" pattern = Pattern.compile("\\{\\\$DEBUG\\}", Pattern.DOTALL); if (buildTypeName == "debug"){ content = pattern.matcher(content).replaceAll("true"); } else{ content = pattern.matcher(content).replaceAll("false"); } // Save processed manifest file fileObj.write(content) } } ``` **Updated:** processTemplateFile created for code reusing purposes.
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
For everyone who wants to use the solution by Denis: in the new Gradle version `packageName` is now `applicationId`, and don't forget to put `productFlavors { ... }` inside `android { ... }`.

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        versionCode 1
        versionName '1.0.0'
    }
}
```
One approach I'm experimenting with is using fully-qualified names for activities, and just changing the package attribute. It avoids any real refactoring (1 file copy, 1 text sub). This almost works, but the generated R class isn't picked up, as the package for this is pulled out of AndroidManifest.xml, so it ends up in the new package. I think it should be fairly straightforward to build AndroidManifest.xml via an Ant rule (in -pre-build) that inserts the distribution package name, and then (in -pre-compile) puts the generated resources into the default (Java) package.

Hope this helps,

Phil Lello
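A rough sketch of the idea (package and class names are placeholders, not from the original answer): the manifest's `package` attribute carries the edition-specific id, while activities keep their original fully-qualified names.

```
<!-- AndroidManifest.xml for the free edition -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp.free">
    <application android:label="@string/app_name">
        <!-- fully-qualified name, so it does not move with the package attribute -->
        <activity android:name="com.example.myapp.MainActivity" />
    </application>
</manifest>
```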
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
For everyone who wants to use the solution by Denis: in the new Gradle version `packageName` is now `applicationId`, and don't forget to put `productFlavors { ... }` inside `android { ... }`.

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        versionCode 1
        versionName '1.0.0'
    }
}
```
If you want another application name, depending on the flavor, you can also add this:

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        resValue "string", "app_name", "test lite"
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        resValue "string", "app_name", "test pro"
        versionCode 1
        versionName '1.0.0'
    }
}
```
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
The best way is to use "Android Studio" -> gradle.build -> [productFlavors + generate manifest file from template]. This combination allows to build free/paid versions and bunch of editions for different app markets from one source. --- This is a part of templated manifest file: --- ``` <manifest android:versionCode="1" android:versionName="1" package="com.example.product" xmlns:android="http://schemas.android.com/apk/res/android"> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/{f:FREE}app_name_free{/f}{f:PAID}app_name_paid{/f}" android:name=".ApplicationMain" android:theme="@style/AppTheme"> <activity android:label="@string/{f:FREE}app_name_free{/f}{f:PAID}app_name_paid{/f}" android:name=".ActivityMain"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> </application> ``` --- This is template "ProductInfo.template" for java file: ProductInfo.java --- ``` package com.packagename.generated; import com.packagename.R; public class ProductInfo { public static final boolean mIsPaidVersion = {f:PAID}true{/f}{f:FREE}false{/f}; public static final int mAppNameId = R.string.app_name_{f:PAID}paid{/f}{f:FREE}free{/f}; public static final boolean mIsDebug = {$DEBUG}; } ``` --- This manifest is processed by gradle.build script with **productFlavors** and **processManifest** task hook: --- ``` import java.util.regex.Matcher; import java.util.regex.Pattern; import org.gradle.api.DefaultTask import org.gradle.api.tasks.TaskAction ... android { ... productFlavors { free { packageName 'com.example.product.free' } paid { packageName 'com.example.product.paid' } } ... } afterEvaluate { project -> android.applicationVariants.each { variant -> def flavor = variant.productFlavors[0].name tasks['prepare' + variant.name + 'Dependencies'].doLast { println "Generate java files..." //Copy templated and processed by build system manifest file to filtered_manifests forder def productInfoPath = "${projectDir}/some_sourcs_path/generated/" copy { from(productInfoPath) into(productInfoPath) include('ProductInfo.template') rename('ProductInfo.template', 'ProductInfo.java') } tasks.create(name: variant.name + 'ProcessProductInfoJavaFile', type: processTemplateFile) { templateFilePath = productInfoPath + "ProductInfo.java" flavorName = flavor buildTypeName = variant.buildType.name } tasks[variant.name + 'ProcessProductInfoJavaFile'].execute() } variant.processManifest.doLast { println "Customization manifest file..." 
// Copy templated and processed by build system manifest file to filtered_manifests forder copy { from("${buildDir}/manifests") { include "${variant.dirName}/AndroidManifest.xml" } into("${buildDir}/filtered_manifests") } tasks.create(name: variant.name + 'ProcessManifestFile', type: processTemplateFile) { templateFilePath = "${buildDir}/filtered_manifests/${variant.dirName}/AndroidManifest.xml" flavorName = flavor buildTypeName = variant.buildType.name } tasks[variant.name + 'ProcessManifestFile'].execute() } variant.processResources.manifestFile = file("${buildDir}/filtered_manifests/${variant.dirName}/AndroidManifest.xml") } } ``` --- This is separated task to process file --- ``` class processTemplateFile extends DefaultTask { def String templateFilePath = "" def String flavorName = "" def String buildTypeName = "" @TaskAction void run() { println templateFilePath // Load file to memory def fileObj = project.file(templateFilePath) def content = fileObj.getText() // Flavor. Find "{f:<flavor_name>}...{/f}" pattern and leave only "<flavor_name>==flavor" def patternAttribute = Pattern.compile("\\{f:((?!${flavorName.toUpperCase()})).*?\\{/f\\}",Pattern.DOTALL); content = patternAttribute.matcher(content).replaceAll(""); def pattern = Pattern.compile("\\{f:.*?\\}"); content = pattern.matcher(content).replaceAll(""); pattern = Pattern.compile("\\{/f\\}"); content = pattern.matcher(content).replaceAll(""); // Build. Find "{$DEBUG}" pattern and replace with "true"/"false" pattern = Pattern.compile("\\{\\\$DEBUG\\}", Pattern.DOTALL); if (buildTypeName == "debug"){ content = pattern.matcher(content).replaceAll("true"); } else{ content = pattern.matcher(content).replaceAll("false"); } // Save processed manifest file fileObj.write(content) } } ``` **Updated:** processTemplateFile created for code reusing purposes.
For everyone who wants to use the solution by Denis: in the new Gradle version `packageName` is now `applicationId`, and don't forget to put `productFlavors { ... }` inside `android { ... }`.

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        versionCode 1
        versionName '1.0.0'
    }
}
```
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
Possibly a duplicate of [Bulk Publishing of Android Apps](https://stackoverflow.com/questions/4740319/bulk-publishing-of-android-apps/4740728#4740728).

Android Library projects will do this for you nicely. You'll end up with 1 library project and then a project for each edition (free/full); those edition projects really just contain different resources, such as app icons, and different manifests, which is where the package name is varied.

Hope that helps. It has worked well for me.
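A minimal sketch of that layout, assuming an Eclipse/ADT-style setup (project and package names below are placeholders, not from the linked question): each edition project points at the shared library from its `project.properties`, and only its own manifest carries the edition-specific package name.

```
# library project: project.properties
android.library=true

# free edition project: project.properties
android.library.reference.1=../MyAppLibrary
```

The free edition's AndroidManifest.xml would then declare `package="com.example.myapp.free"` while the paid edition declares `package="com.example.myapp.paid"`, with all code and assets living in the library.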
One approach I'm experimenting with is using fully-qualified names for activities, and just changing the package attribute. It avoids any real refactoring (1 file copy, 1 text sub). This almost works, but the generated R class isn't picked up, as the package for this is pulled out of AndroidManifest.xml, so it ends up in the new package. I think it should be fairly straightforward to build AndroidManifest.xml via an Ant rule (in -pre-build) that inserts the distribution package name, and then (in -pre-compile) puts the generated resources into the default (Java) package.

Hope this helps,

Phil Lello
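A rough sketch of the idea (package and class names are placeholders, not from the original answer): the manifest's `package` attribute carries the edition-specific id, while activities keep their original fully-qualified names.

```
<!-- AndroidManifest.xml for the free edition -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp.free">
    <application android:label="@string/app_name">
        <!-- fully-qualified name, so it does not move with the package attribute -->
        <activity android:name="com.example.myapp.MainActivity" />
    </application>
</manifest>
```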
5,590,203
So I'm coming down to release-time for my application. We plan on releasing two versions, a free ad-based play-to-unlock version, and a paid fully unlocked version. I have the code set up that I can simply set a flag on startup to enable/disable ads and lock/unlock all the features. So literally only one line of code will execute differently between these versions. In order to release two separate applications, they require different package names, so my question is this: Is there an easy way to refactor my application's package name? Eclipse's refactoring tool doesn't resolve the generated R file, or any XML references in layout and manifest files. I've attempted to make a new project using the original as source, but I can't reference the assets and resources, and I'm looking to avoid duplicating any of my code and assets. It's not a huge pain to refactor it manually, but I feel there must be a better way to do it. Anybody have an elegant solution to this? Edit/Answered: For my situation I find it perfectly acceptable to just use Project -> Android Tools -> Rename Application Package. I wasn't aware this existed, and I feel like an idiot for posting this now. Thanks for everyone's answers and comments, feel free to vote this closed.
2011/04/08
[ "https://Stackoverflow.com/questions/5590203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/615779/" ]
Gradle allows using the generated BuildConfig.java to pass some data to the code.

```
productFlavors {
    paid {
        packageName "com.simple.paid"
        buildConfigField 'boolean', 'PAID', 'true'
        buildConfigField "int", "THING_ONE", "1"
    }

    free {
        packageName "com.simple.free"
        buildConfigField 'boolean', 'PAID', 'false'
        buildConfigField "int", "THING_ONE", "0"
    }
}
```
If you want another application name, depending on the flavor, you can also add this:

```
productFlavors {
    lite {
        applicationId = 'com.project.test.app'
        resValue "string", "app_name", "test lite"
        versionCode 1
        versionName '1.0.0'
    }

    pro {
        applicationId = 'com.project.testpro.app'
        resValue "string", "app_name", "test pro"
        versionCode 1
        versionName '1.0.0'
    }
}
```
1,439,686
Prove that $|a^2-b^2| < 2|a| + 1$ given that $|a-b|<1$.

I understand that a variant of the triangle inequality is used in the solution: $|b| - |a| \le |a-b|$. I'm confused as to how it's derived; can anybody help me understand?

Thank you for your time.
2015/09/17
[ "https://math.stackexchange.com/questions/1439686", "https://math.stackexchange.com", "https://math.stackexchange.com/users/189897/" ]
$$|a^2-b^2|=|a-b|\,|a+b|<1\cdot|a+b|\leq|a|+|b|<|a|+|a|+1=2|a|+1$$ where we used $|b|\leq |a|+|a-b|<|a|+1$. The last step follows from the triangle inequality again: substitute $x=b-a$ and $y=a$ into $|x+y|\leq |x|+|y|$ to get $|b-a+a|\leq |b-a|+|a|$, i.e. $|b|\le |b-a|+|a|$.
By the triangle inequality, $\big||a|-|b|\big|\le|a-b|<1$, so $|b|<|a|+1$ and $|a|<|b|+1$.

If $|b|\ge|a|$, then $$|a^2-b^2|=|b|^2-|a|^2<(|a|+1)^2-|a|^2=2|a|+1.$$

If $|a|>|b|$, then $$|a^2-b^2|=|a|^2-|b|^2<(|b|+1)^2-|b|^2=2|b|+1\le 2|a|+1.$$
1,439,686
Prove that $|a^2-b^2| < 2|a| + 1$ given that $|a-b|<1$.

I understand that a variant of the triangle inequality is used in the solution: $|b| - |a| \le |a-b|$. I'm confused as to how it's derived; can anybody help me understand?

Thank you for your time.
2015/09/17
[ "https://math.stackexchange.com/questions/1439686", "https://math.stackexchange.com", "https://math.stackexchange.com/users/189897/" ]
$$|a^2-b^2|=|a-b|\,|a+b|<1\cdot|a+b|\leq|a|+|b|<|a|+|a|+1=2|a|+1$$ where we used $|b|\leq |a|+|a-b|<|a|+1$. The last step follows from the triangle inequality again: substitute $x=b-a$ and $y=a$ into $|x+y|\leq |x|+|y|$ to get $|b-a+a|\leq |b-a|+|a|$, i.e. $|b|\le |b-a|+|a|$.
We have $$|a^2-b^2|=|a-b|\cdot|a+b|<|a+b|\le |a|+|b|\tag1$$ moreover, $$|b|-|a|\le|a-b|<1\implies |b|<|a|+1\tag2$$ Combining $(1)$ and $(2)$ gives $|a^2-b^2|<|a|+|b|<|a|+(|a|+1)=2|a|+1$, which is the desired result.
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
You can bind the same function as the `Save` button to `keydown.enter` of the textarea, and call `$event.preventDefault()` to avoid the newline.

Sample [plunker](https://plnkr.co/edit/J2DwmYBcD2D0evSGcQMX?p=preview).
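A minimal sketch of that binding (the `save()` name is a placeholder for whatever the Save button already calls; it is not taken from the linked plunker):

```html
<textarea [(ngModel)]="value" name="comment"
          (keydown.enter)="$event.preventDefault(); save()"></textarea>
<button type="button" (click)="save()">Save</button>
```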
You can create a service which can send a notification to other components that will handle the command. The service could look like this: ``` import { Injectable } from "@angular/core"; import { Subject } from "rxjs/Subject"; @Injectable() export class DataSavingService { private dataSavingRequested = new Subject<void>(); public dataSavingRequested$ = this.dataSavingRequested.asObservable(); public requestDataSaving(): void { this.dataSavingRequested.next(); } } ``` ... and should be registered in the `providers` section of the module. Note: if data must be passed in the notification, you can declare a non-void parameter type for the `dataSavingRequested` Subject (e.g. `string`). The service would be injected in the component with the textarea element and called in the handler of the `Enter` keypress event: ``` import { DataSavingService } from "./services/data-saving.service"; ... @Component({ template: ` <textarea (keypress.enter)="handleEnterKeyPress($event)" ...></textarea> ` }) export class ComponentWithTextarea { constructor(private dataSavingService: DataSavingService, ...) { ... } public handleEnterKeyPress(event: KeyboardEvent): void { event.preventDefault(); // Prevent the insertion of a new line this.dataSavingService.requestDataSaving(); } ... } ``` The component with the Save button would subscribe to the `dataSavingRequested$` notification of the service and save the data when notified: ``` import { Component, OnDestroy, ... } from "@angular/core"; import { Subscription } from "rxjs/Subscription"; import { DataSavingService } from "../services/data-saving.service"; ... @Component({ ... }) export class ComponentWithSaveButton implements OnDestroy { private subscription: Subscription; constructor(private dataSavingService: DataSavingService, ...) { this.subscription = this.dataSavingService.dataSavingRequested$.subscribe(() => { this.saveData(); }); } public ngOnDestroy(): void { this.subscription.unsubscribe(); } private saveData(): void { // Perform data saving here // Note: this method should also be called by the Save button ... } } ``` --- The code above assumes that the saving must be performed in the component with the Save button. An alternative would be to move that logic into the service, which would expose a `saveData` method that could be called by the components. The service would need to gather the data to save, however. It could be obtained with a Subject/Observable mechanism, or supplied directly by the components as a parameter to `saveData` or by calling another method of the service.
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
Extending the answer by @Pengyy: you can bind the Enter key to a pseudo-save function and call preventDefault inside it, which suppresses the newline. Then you can either call the save function from there (assuming it is accessible, for example through a service) or emit through an EventEmitter, and have that emission caught to trigger the Save function.
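A rough sketch of the EventEmitter variant (the component, property and method names here are made up for illustration, and `[(ngModel)]` assumes FormsModule is imported):

```ts
import { Component, EventEmitter, Output } from '@angular/core';

@Component({
  selector: 'app-note-input',
  template: `
    <textarea [(ngModel)]="value" name="note"
              (keydown.enter)="onEnter($event)"></textarea>
  `
})
export class NoteInputComponent {
  value = '';

  // The parent listens with (saveRequested)="save()" and runs its existing Save handler.
  @Output() saveRequested = new EventEmitter<void>();

  onEnter(event: KeyboardEvent): void {
    event.preventDefault();      // suppress the newline
    this.saveRequested.emit();   // ask the parent to save instead
  }
}
```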
You can bind the same function as the `Save` button to `keydown.enter` of the textarea, and call `$event.preventDefault()` to avoid the newline.

Sample [plunker](https://plnkr.co/edit/J2DwmYBcD2D0evSGcQMX?p=preview).
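A minimal sketch of that binding (the `save()` name is a placeholder for whatever the Save button already calls; it is not taken from the linked plunker):

```html
<textarea [(ngModel)]="value" name="comment"
          (keydown.enter)="$event.preventDefault(); save()"></textarea>
<button type="button" (click)="save()">Save</button>
```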
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
You can bind the same function as the `Save` button to `keydown.enter` of the textarea, and call `$event.preventDefault()` to avoid the newline.

Sample [plunker](https://plnkr.co/edit/J2DwmYBcD2D0evSGcQMX?p=preview).
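A minimal sketch of that binding (the `save()` name is a placeholder for whatever the Save button already calls; it is not taken from the linked plunker):

```html
<textarea [(ngModel)]="value" name="comment"
          (keydown.enter)="$event.preventDefault(); save()"></textarea>
<button type="button" (click)="save()">Save</button>
```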
There could be 2 solutions:

1. Use JavaScript to handle the Enter event and trigger the Save function in it, or
2. Do the same thing from the Angular side as described in [this](https://stackoverflow.com/questions/17470790/how-to-use-a-keypress-event-in-angularjs).

[This](https://stackoverflow.com/questions/2099661/enter-key-in-textarea) may also help you.
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
Assuming that your `textarea` is inside a `form` element. [**{Plunker Demo}**](https://plnkr.co/edit/0ojy8fQdP2abzfQZNZud?p=preview) You can achieve it by using a **hidden submit** input, like this ``` @Component({ selector: 'my-app', template: ` <form (submit)="formSubmitted($event)"> <input #proxySubmitBtn type="submit" [hidden]="true"/> <textarea #textArea (keydown.enter)="$event.preventDefault(); proxySubmitBtn.click()"> </textarea> </form> `, }) export class App { formSubmitted(e) { e.preventDefault(); alert('Form is submitted!'); } } ```
You can create a service which can send a notification to other components that will handle the command. The service could look like this: ``` import { Injectable } from "@angular/core"; import { Subject } from "rxjs/Subject"; @Injectable() export class DataSavingService { private dataSavingRequested = new Subject<void>(); public dataSavingRequested$ = this.dataSavingRequested.asObservable(); public requestDataSaving(): void { this.dataSavingRequested.next(); } } ``` ... and should be registered in the `providers` section of the module. Note: if data must be passed in the notification, you can declare a non-void parameter type for the `dataSavingRequested` Subject (e.g. `string`). The service would be injected in the component with the textarea element and called in the handler of the `Enter` keypress event: ``` import { DataSavingService } from "./services/data-saving.service"; ... @Component({ template: ` <textarea (keypress.enter)="handleEnterKeyPress($event)" ...></textarea> ` }) export class ComponentWithTextarea { constructor(private dataSavingService: DataSavingService, ...) { ... } public handleEnterKeyPress(event: KeyboardEvent): void { event.preventDefault(); // Prevent the insertion of a new line this.dataSavingService.requestDataSaving(); } ... } ``` The component with the Save button would subscribe to the `dataSavingRequested$` notification of the service and save the data when notified: ``` import { Component, OnDestroy, ... } from "@angular/core"; import { Subscription } from "rxjs/Subscription"; import { DataSavingService } from "../services/data-saving.service"; ... @Component({ ... }) export class ComponentWithSaveButton implements OnDestroy { private subscription: Subscription; constructor(private dataSavingService: DataSavingService, ...) { this.subscription = this.dataSavingService.dataSavingRequested$.subscribe(() => { this.saveData(); }); } public ngOnDestroy(): void { this.subscription.unsubscribe(); } private saveData(): void { // Perform data saving here // Note: this method should also be called by the Save button ... } } ``` --- The code above assumes that the saving must be performed in the component with the Save button. An alternative would be to move that logic into the service, which would expose a `saveData` method that could be called by the components. The service would need to gather the data to save, however. It could be obtained with a Subject/Observable mechanism, or supplied directly by the components as a parameter to `saveData` or by calling another method of the service.
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
Extending the answer by @Pengyy: you can bind the enter key to a pseudo-save function and call preventDefault inside of it, which suppresses the newline. From there you can either call the save function directly (assuming it is accessible, for example through a service) or emit an EventEmitter and have that event caught to trigger the Save function.
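As a minimal sketch of the EventEmitter variant mentioned above (the selector, the `save` output name and the parent handler are illustrative assumptions, not part of the original component; it also assumes `FormsModule` is imported for `[(ngModel)]`):

```typescript
import { Component, EventEmitter, Output } from '@angular/core';

@Component({
  selector: 'app-comment-box', // hypothetical name, not from the original code
  template: `
    <textarea rows="1" class="form-control"
              [(ngModel)]="value"
              (keydown.enter)="onEnter($event)">
    </textarea>
  `
})
export class CommentBoxComponent {
  value = '';

  // The parent listens with: <app-comment-box (save)="onSaveClicked()"></app-comment-box>
  @Output() save = new EventEmitter<void>();

  onEnter(event: Event): void {
    event.preventDefault(); // swallow the newline
    this.save.emit();       // let the parent trigger the same logic as its Save button
  }
}
```

The parent component then wires `(save)` to the same method its Save button already calls.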
You can create a service which can send a notification to other components that will handle the command. The service could look like this: ``` import { Injectable } from "@angular/core"; import { Subject } from "rxjs/Subject"; @Injectable() export class DataSavingService { private dataSavingRequested = new Subject<void>(); public dataSavingRequested$ = this.dataSavingRequested.asObservable(); public requestDataSaving(): void { this.dataSavingRequested.next(); } } ``` ... and should be registered in the `providers` section of the module. Note: if data must be passed in the notification, you can declare a non-void parameter type for the `dataSavingRequested` Subject (e.g. `string`). The service would be injected in the component with the textarea element and called in the handler of the `Enter` keypress event: ``` import { DataSavingService } from "./services/data-saving.service"; ... @Component({ template: ` <textarea (keypress.enter)="handleEnterKeyPress($event)" ...></textarea> ` }) export class ComponentWithTextarea { constructor(private dataSavingService: DataSavingService, ...) { ... } public handleEnterKeyPress(event: KeyboardEvent): void { event.preventDefault(); // Prevent the insertion of a new line this.dataSavingService.requestDataSaving(); } ... } ``` The component with the Save button would subscribe to the `dataSavingRequested$` notification of the service and save the data when notified: ``` import { Component, OnDestroy, ... } from "@angular/core"; import { Subscription } from "rxjs/Subscription"; import { DataSavingService } from "../services/data-saving.service"; ... @Component({ ... }) export class ComponentWithSaveButton implements OnDestroy { private subscription: Subscription; constructor(private dataSavingService: DataSavingService, ...) { this.subscription = this.dataSavingService.dataSavingRequested$.subscribe(() => { this.saveData(); }); } public ngOnDestroy(): void { this.subscription.unsubscribe(); } private saveData(): void { // Perform data saving here // Note: this method should also be called by the Save button ... } } ``` --- The code above assumes that the saving must be performed in the component with the Save button. An alternative would be to move that logic into the service, which would expose a `saveData` method that could be called by the components. The service would need to gather the data to save, however. It could be obtained with a Subject/Observable mechanism, or supplied directly by the components as a parameter to `saveData` or by calling another method of the service.
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
Extending the answer by @Pengyy: you can bind the enter key to a pseudo-save function and call preventDefault inside of it, which suppresses the newline. From there you can either call the save function directly (assuming it is accessible, for example through a service) or emit an EventEmitter and have that event caught to trigger the Save function.
Assuming that your `textarea` is inside a `form` element. [**{Plunker Demo}**](https://plnkr.co/edit/0ojy8fQdP2abzfQZNZud?p=preview) You can achieve it by using a **hidden submit** input, like this ``` @Component({ selector: 'my-app', template: ` <form (submit)="formSubmitted($event)"> <input #proxySubmitBtn type="submit" [hidden]="true"/> <textarea #textArea (keydown.enter)="$event.preventDefault(); proxySubmitBtn.click()"> </textarea> </form> `, }) export class App { formSubmitted(e) { e.preventDefault(); alert('Form is submitted!'); } } ```
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
Assuming that your `textarea` is inside a `form` element. [**{Plunker Demo}**](https://plnkr.co/edit/0ojy8fQdP2abzfQZNZud?p=preview) You can achieve it by using a **hidden submit** input, like this ``` @Component({ selector: 'my-app', template: ` <form (submit)="formSubmitted($event)"> <input #proxySubmitBtn type="submit" [hidden]="true"/> <textarea #textArea (keydown.enter)="$event.preventDefault(); proxySubmitBtn.click()"> </textarea> </form> `, }) export class App { formSubmitted(e) { e.preventDefault(); alert('Form is submitted!'); } } ```
There could be two solutions: 1. Use JavaScript to handle the enter event and trigger the Save function in it, or 2. Do the same thing from the Angular side as described in [this](https://stackoverflow.com/questions/17470790/how-to-use-a-keypress-event-in-angularjs). [This](https://stackoverflow.com/questions/2099661/enter-key-in-textarea) may also help you
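For illustration only, a small sketch of option 2, handling the key on the Angular side; the `save()` method name is an assumption standing in for whatever the real Save button invokes, and `FormsModule` is assumed to be available:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'my-editor', // hypothetical
  template: `
    <textarea [(ngModel)]="value"
              (keydown.enter)="$event.preventDefault(); save()">
    </textarea>
    <button type="button" (click)="save()">Save</button>
  `
})
export class EditorComponent {
  value = '';

  save(): void {
    console.log('saving', this.value); // placeholder for the real save logic
  }
}
```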
43,752,286
A bit tricky situation. For the code below, I have added `(keydown.enter)="false"` to ignore the break line/enter button in textarea This is causing a user issue and would like the existing behaviour where Pressing enter should automatically trigger the "Save button" Any idea how to trigger the Save button when still focusing in textArea but ignore the breakline? ``` <textarea #textArea style="overflow:hidden; height:auto; resize:none;" rows="1" class="form-control" [attr.placeholder]="placeholder" [attr.maxlength]="maxlength" [attr.autofocus]="autofocus" [name]="name" [attr.readonly]="readonly ? true : null" [attr.required]="required ? true : null" (input)="onUpdated($event)" [tabindex]="skipTab ? -1 : ''" (keydown.enter)="false" [(ngModel)]="value"> </textarea > ```
2017/05/03
[ "https://Stackoverflow.com/questions/43752286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485445/" ]
Extending the answer by @Pengyy: you can bind the enter key to a pseudo-save function and call preventDefault inside of it, which suppresses the newline. From there you can either call the save function directly (assuming it is accessible, for example through a service) or emit an EventEmitter and have that event caught to trigger the Save function.
There could be two solutions: 1. Use JavaScript to handle the enter event and trigger the Save function in it, or 2. Do the same thing from the Angular side as described in [this](https://stackoverflow.com/questions/17470790/how-to-use-a-keypress-event-in-angularjs). [This](https://stackoverflow.com/questions/2099661/enter-key-in-textarea) may also help you
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
> 
> The current answer(s) are out-of-date and require revision given recent changes. 
> 
> 

There is no *practical* difference in `Thread.yield()` between Java versions 6 to 9. **TL;DR;** Conclusions based on OpenJDK source code (<http://hg.openjdk.java.net/>). If we leave aside HotSpot's support for USDT probes (system tracing is described in the [dtrace guide](https://docs.oracle.com/javase/8/docs/technotes/guides/vm/dtrace.html)) and the JVM property `ConvertYieldToSleep`, the source code of `yield()` is almost the same. See the explanation below. **Java 9**: `Thread.yield()` calls the OS-specific method `os::naked_yield()`: On Linux: 

```
void os::naked_yield() {
    sched_yield();
}
```

On Windows: 

```
void os::naked_yield() {
  SwitchToThread();
}
```

**Java 8 and earlier:** `Thread.yield()` calls the OS-specific method `os::yield()`: On Linux: 

```
void os::yield() {
    sched_yield();
}
```

On Windows: 

```
void os::yield() {
  os::NakedYield();
}
```

As you can see, `Thread.yield()` on Linux is identical for all Java versions. Let's look at Windows's `os::NakedYield()` from JDK 8: 

```
os::YieldResult os::NakedYield() {
  // Use either SwitchToThread() or Sleep(0)
  // Consider passing back the return value from SwitchToThread().
  if (os::Kernel32Dll::SwitchToThreadAvailable()) {
    return SwitchToThread() ? os::YIELD_SWITCHED : os::YIELD_NONEREADY ;
  } else {
    Sleep(0);
  }
  return os::YIELD_UNKNOWN ;
}
```

The difference between Java 9 and Java 8 is the additional check for the existence of the Win32 API's `SwitchToThread()` method. The same code is present in Java 6. The source code of `os::NakedYield()` in JDK 7 is slightly different, but it has the same behavior: 

```
os::YieldResult os::NakedYield() {
  // Use either SwitchToThread() or Sleep(0)
  // Consider passing back the return value from SwitchToThread().
  // We use GetProcAddress() as ancient Win9X versions of windows doen't support SwitchToThread.
  // In that case we revert to Sleep(0).
  static volatile STTSignature stt = (STTSignature) 1 ;

  if (stt == ((STTSignature) 1)) {
    stt = (STTSignature) ::GetProcAddress (LoadLibrary ("Kernel32.dll"), "SwitchToThread") ;
    // It's OK if threads race during initialization as the operation above is idempotent.
  }
  if (stt != NULL) {
    return (*stt)() ? os::YIELD_SWITCHED : os::YIELD_NONEREADY ;
  } else {
    Sleep (0) ;
  }
  return os::YIELD_UNKNOWN ;
}
```

The additional check has been dropped because the `SwitchToThread()` method has been available since Windows XP and Windows Server 2003 (see the [msdn notes](https://msdn.microsoft.com/ru-ru/en-en/library/windows/desktop/ms686352(v=vs.85).aspx)).
yield()'s main use is to put a multi-threaded application on hold for a moment. The differences between these methods are: yield() puts the current thread on hold so that another thread can execute, and the yielding thread gets control back later; join() makes the calling thread wait until another thread has finished before it continues; interrupt() stops the execution of a thread for a while.
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
**Source: <http://www.javamex.com/tutorials/threads/yield.shtml>** > > ### Windows > > > In the Hotspot implementation, the way that `Thread.yield()` works has > changed between Java 5 and Java 6. > > > In Java 5, `Thread.yield()` calls the Windows API call `Sleep(0)`. This > has the special effect of **clearing the current thread's quantum** and > putting it to the **end of the queue for its *priority level***. In other > words, all runnable threads of the same priority (and those of greater > priority) will get a chance to run before the yielded thread is next > given CPU time. When it is eventually re-scheduled, it will come back > with a full [full quantum](http://www.javamex.com/tutorials/threads/thread_scheduling.shtml#quantum), but doesn't "carry over" any of the > remaining quantum from the time of yielding. This behaviour is a > little different from a non-zero sleep where the sleeping thread > generally loses 1 quantum value (in effect, 1/3 of a 10 or 15ms tick). > > > In Java 6, this behaviour was changed. The Hotspot VM now implements > `Thread.yield()` using the Windows `SwitchToThread()` API call. This call > makes the current thread **give up its *current timeslice***, but not its > entire quantum. This means that depending on the priorities of other > threads, the yielding thread can be **scheduled back in one interrupt > period later**. (See the section on [thread scheduling](http://www.javamex.com/tutorials/threads/thread_scheduling.shtml) for more > information on timeslices.) > > > ### Linux > > > Under Linux, Hotspot simply calls `sched_yield()`. The consequences of > this call are a little different, and possibly more severe than under > Windows: > > > * a yielded thread will not get another slice of CPU **until *all* other threads have had a slice of CPU**; > * (at least in kernel 2.6.8 onwards), the fact that the thread has yielded is implicitly taken into account by the scheduler's heuristics > on its recent CPU allocation— thus, implicitly, a thread that has > yielded could be given more CPU when scheduled in the future. > > > (See the section on [thread scheduling](http://www.javamex.com/tutorials/threads/thread_scheduling.shtml) for more details on priorities > and scheduling algorithms.) > > > ### When to use `yield()`? > > > I would say **practically never**. Its behaviour isn't standardly defined > and there are generally better ways to perform the tasks that you > might want to perform with yield(): > > > * if you're trying to **use only a portion of the CPU**, you can do this in a more controllable way by estimating how much CPU the thread > has used in its last chunk of processing, then **sleeping** for some > amount of time to compensate: see the [sleep()](http://www.javamex.com/tutorials/threads/sleep.shtml) method; > * if you're **waiting for a process or resource** to complete or become available, there are more efficient ways to accomplish this, > such as by using [join()](http://www.javamex.com/tutorials/threads/yield.shtml#join) to wait for another thread to complete, using > the [wait/notify](http://www.javamex.com/tutorials/synchronization_wait_notify.shtml) mechanism to allow one thread to signal to another > that a task is complete, or ideally by using one of the Java 5 > concurrency constructs such as a [Semaphore](http://www.javamex.com/tutorials/synchronization_concurrency_semaphore.shtml) or [blocking queue](http://www.javamex.com/tutorials/synchronization_concurrency_8_queues.shtml). > > >
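To illustrate the quoted advice of throttling with `sleep()` rather than `yield()`, a rough Java sketch; the 50% duty cycle and the work placeholder are arbitrary choices for the example:

```java
public class ThrottledWorker implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            doChunkOfWork();
            long busyMillis = (System.nanoTime() - start) / 1_000_000;
            try {
                // Sleep roughly as long as we worked, so this thread uses ~50% of a core.
                Thread.sleep(Math.max(1, busyMillis));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag and stop
                return;
            }
        }
    }

    private void doChunkOfWork() {
        // placeholder for real work
        for (int i = 0; i < 1_000_000; i++) { Math.sqrt(i); }
    }
}
```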
I see the question has been reactivated with a bounty, now asking what the practical uses for `yield` are. I'll give an example from my experience. As we know, `yield` forces the calling thread to give up the processor that it's running on so that another thread can be scheduled to run. This is useful when the current thread has finished its work for now but wants to quickly return to the front of the queue and check whether some condition has changed. How is this different from a condition variable? `yield` enables the thread to return much quicker to a running state. When waiting on a condition variable the thread is suspended and needs to wait for a different thread to signal that it should continue. `yield` basically says "allow a different thread to run, but allow me to get back to work very soon as I expect something to change in my state very very quickly". This hints towards busy spinning, where a condition can change rapidly but suspending the thread would incur a large performance hit. But enough babbling, here's a concrete example: the wavefront parallel pattern. A basic instance of this problem is computing the individual "islands" of 1s in a two-dimensional array filled with 0s and 1s. An "island" is a group of cells that are adjacent to each other either vertically or horizontally: 

```
1 0 0 0
1 1 0 0
0 0 0 1
0 0 1 1
0 0 1 1
```

Here we have two islands of 1s: top-left and bottom-right. A simple solution is to make a first pass over the entire array and replace the 1 values with an incrementing counter such that by the end each 1 was replaced with its sequence number in row major order: 

```
1 0 0 0
2 3 0 0
0 0 0 4
0 0 5 6
0 0 7 8
```

In the next step, each value is replaced by the minimum between itself and its neighbours' values: 

```
1 0 0 0
1 1 0 0
0 0 0 4
0 0 4 4
0 0 4 4
```

We can now easily determine that we have two islands. The part we want to run in parallel is the step where we compute the minimums. Without going into too much detail, each thread gets rows in an interleaved manner and relies on the values computed by the thread processing the row above. Thus, each thread needs to slightly lag behind the thread processing the previous line, but must also keep up within reasonable time. More details and an implementation are presented by myself in [this document](https://drive.google.com/file/d/0BwHrufT_K__FNWYzMDIzOTctZDQyNC00NTEwLWI1YzMtNTdhMTc4NmNkYTdk/edit?usp=sharing). Note the usage of `sleep(0)` which is more or less the C equivalent of `yield`. In this case `yield` was used in order to force each thread in turn to pause, but since the thread processing the adjacent row would advance very quickly in the meantime, a condition variable would prove a disastrous choice. As you can see, `yield` is quite a fine-grain optimization. Using it in the wrong place, e.g. waiting on a condition that changes seldom, will cause excessive use of the CPU. Sorry for the long babble, hope I made myself clear.
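A stripped-down Java sketch of that kind of fine-grained waiting: one thread lags slightly behind another by spinning with `yield()` on a volatile progress counter. The two-stage setup and the loop bound are illustrative assumptions, not the original C implementation:

```java
public class WavefrontSketch {
    private static volatile int producerProgress = 0;

    public static void main(String[] args) throws InterruptedException {
        final int n = 1_000;

        Thread producer = new Thread(() -> {
            for (int i = 1; i <= n; i++) {
                // pretend to compute row i of the first stage
                producerProgress = i;
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 1; i <= n; i++) {
                // busy-wait until the producer has finished row i,
                // yielding so the producer gets CPU time quickly
                while (producerProgress < i) {
                    Thread.yield();
                }
                // pretend to compute row i of the second stage
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        System.out.println("done");
    }
}
```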
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
> 
> The current answer(s) are out-of-date and require revision given recent changes. 
> 
> 

There is no *practical* difference in `Thread.yield()` between Java versions 6 to 9. **TL;DR;** Conclusions based on OpenJDK source code (<http://hg.openjdk.java.net/>). If we leave aside HotSpot's support for USDT probes (system tracing is described in the [dtrace guide](https://docs.oracle.com/javase/8/docs/technotes/guides/vm/dtrace.html)) and the JVM property `ConvertYieldToSleep`, the source code of `yield()` is almost the same. See the explanation below. **Java 9**: `Thread.yield()` calls the OS-specific method `os::naked_yield()`: On Linux: 

```
void os::naked_yield() {
    sched_yield();
}
```

On Windows: 

```
void os::naked_yield() {
  SwitchToThread();
}
```

**Java 8 and earlier:** `Thread.yield()` calls the OS-specific method `os::yield()`: On Linux: 

```
void os::yield() {
    sched_yield();
}
```

On Windows: 

```
void os::yield() {
  os::NakedYield();
}
```

As you can see, `Thread.yield()` on Linux is identical for all Java versions. Let's look at Windows's `os::NakedYield()` from JDK 8: 

```
os::YieldResult os::NakedYield() {
  // Use either SwitchToThread() or Sleep(0)
  // Consider passing back the return value from SwitchToThread().
  if (os::Kernel32Dll::SwitchToThreadAvailable()) {
    return SwitchToThread() ? os::YIELD_SWITCHED : os::YIELD_NONEREADY ;
  } else {
    Sleep(0);
  }
  return os::YIELD_UNKNOWN ;
}
```

The difference between Java 9 and Java 8 is the additional check for the existence of the Win32 API's `SwitchToThread()` method. The same code is present in Java 6. The source code of `os::NakedYield()` in JDK 7 is slightly different, but it has the same behavior: 

```
os::YieldResult os::NakedYield() {
  // Use either SwitchToThread() or Sleep(0)
  // Consider passing back the return value from SwitchToThread().
  // We use GetProcAddress() as ancient Win9X versions of windows doen't support SwitchToThread.
  // In that case we revert to Sleep(0).
  static volatile STTSignature stt = (STTSignature) 1 ;

  if (stt == ((STTSignature) 1)) {
    stt = (STTSignature) ::GetProcAddress (LoadLibrary ("Kernel32.dll"), "SwitchToThread") ;
    // It's OK if threads race during initialization as the operation above is idempotent.
  }
  if (stt != NULL) {
    return (*stt)() ? os::YIELD_SWITCHED : os::YIELD_NONEREADY ;
  } else {
    Sleep (0) ;
  }
  return os::YIELD_UNKNOWN ;
}
```

The additional check has been dropped because the `SwitchToThread()` method has been available since Windows XP and Windows Server 2003 (see the [msdn notes](https://msdn.microsoft.com/ru-ru/en-en/library/windows/desktop/ms686352(v=vs.85).aspx)).
`Thread.yield();` frees the bottom thread. `Thread` is using OS threads, so `Thread.yield();` might free the hardware thread. Bad implementation for `sleep(millis)` ``` public class MySleep { public static void sleep(long millis) throws InterruptedException { long start = System.currentTimeMillis(); do { Thread.yield(); if (Thread.interrupted()) { throw new InterruptedException(); } } while (System.currentTimeMillis() - start < millis); } } ``` and `join()` ``` public class MyJoin { public static void join(Thread t) throws InterruptedException { while (t.getState() != Thread.State.TERMINATED) { Thread.yield(); if (Thread.interrupted()) { throw new InterruptedException(); } } } public static void main(String[] args) { Thread thread = new Thread(()-> { try { Thread.sleep(2000); } catch (Exception e) { } }); thread.start(); System.out.println("before"); try { join(thread); } catch (Exception e) { } System.out.println("after"); } } ``` This should work even if there is only one hardware thread, unless `Thread.yield();` is removed.
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
`Thread.yield();` frees the bottom thread. `Thread` is using OS threads, so `Thread.yield();` might free the hardware thread. Bad implementation for `sleep(millis)` ``` public class MySleep { public static void sleep(long millis) throws InterruptedException { long start = System.currentTimeMillis(); do { Thread.yield(); if (Thread.interrupted()) { throw new InterruptedException(); } } while (System.currentTimeMillis() - start < millis); } } ``` and `join()` ``` public class MyJoin { public static void join(Thread t) throws InterruptedException { while (t.getState() != Thread.State.TERMINATED) { Thread.yield(); if (Thread.interrupted()) { throw new InterruptedException(); } } } public static void main(String[] args) { Thread thread = new Thread(()-> { try { Thread.sleep(2000); } catch (Exception e) { } }); thread.start(); System.out.println("before"); try { join(thread); } catch (Exception e) { } System.out.println("after"); } } ``` This should work even if there is only one hardware thread, unless `Thread.yield();` is removed.
yield()'s main use is to put a multi-threaded application on hold for a moment. The differences between these methods are: yield() puts the current thread on hold so that another thread can execute, and the yielding thread gets control back later; join() makes the calling thread wait until another thread has finished before it continues; interrupt() stops the execution of a thread for a while.
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
About the differences between `yield()`, `interrupt()` and `join()` - in general, not just in Java: 1. **yielding**: Literally, to 'yield' means to let go, to give up, to surrender. A yielding thread tells the operating system (or the virtual machine, or what not) it's willing to let other threads be scheduled in its stead. This indicates it's not doing something too critical. It's only a hint, though, and not guaranteed to have any effect. 2. **joining**: When multiple threads 'join' on some handle, or token, or entity, all of them wait until all other relevant threads have completed execution (entirely or upto their own corresponding join). That means a bunch of threads have all completed their tasks. Then each one of these threads can be scheduled to continue other work, being able to assume all those tasks are indeed complete. (Not to be confused with SQL Joins!) 3. **interruption**: Used by one thread to 'poke' another thread which is sleeping, or waiting, or joining - so that it is scheduled to continue running again, perhaps with an indication it has been interrupted. (Not to be confused with hardware interrupts!) For Java specifically, see 1. Joining: [How to use Thread.join?](https://stackoverflow.com/questions/1908515/java-how-to-use-thread-join) (here on StackOverflow) [When to join threads?](http://javahowto.blogspot.co.il/2007/05/when-to-join-threads.html) 2. Yielding: 3. Interrupting: [Is Thread.interrupt() evil?](https://stackoverflow.com/q/2020992/1593077) (here on StackOverflow)
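A small Java illustration of points 2 and 3, where a sleeping worker is 'poked' with `interrupt()` and the main thread then joins it; the sleep durations are arbitrary:

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);               // would wait a full minute...
                System.out.println("woke up normally");
            } catch (InterruptedException e) {
                System.out.println("poked: stopped waiting early");
            }
        });

        sleeper.start();
        Thread.sleep(100);     // give the sleeper time to start sleeping
        sleeper.interrupt();   // the 'poke' described in point 3
        sleeper.join();        // point 2: wait until the sleeper has finished
    }
}
```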
First, the actual description is > > Causes the currently executing thread object to temporarily pause and > allow other threads to execute. > > > Now, it is very likely that your main thread will execute the loop five times before the `run` method of the new thread is being executed, so all the calls to `yield` will happen only after the loop in the main thread is executed. `join` will stop the current thread until the thread being called with `join()` is done executing. `interrupt` will interrupt the thread it is being called on, causing [InterruptedException](http://download.oracle.com/javase/1.4.2/docs/api/java/lang/InterruptedException.html). `yield` allows a context switch to other threads, so this thread will not consume the entire CPU usage of the process.
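One way to make the interleaving (and hence any effect of `yield()`) visible is to give both loops enough iterations that the threads actually overlap; a sketch with an arbitrary iteration count:

```java
public class YieldVisible implements Runnable {
    public static void main(String[] args) {
        Thread t = new Thread(new YieldVisible());
        t.start();
        for (int i = 0; i < 100_000; i++) {
            System.out.println("Inside main " + i);
        }
    }

    @Override
    public void run() {
        for (int i = 0; i < 100_000; i++) {
            System.out.println("Inside run " + i);
            Thread.yield(); // with enough iterations the interleaving differs with and without this
        }
    }
}
```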
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
> 
> What are, in fact, the main uses of yield()? 
> 
> 

Yield suggests to the thread scheduler that it may stop the current thread and start executing threads with higher priority. In other words, it de-prioritises the current thread to leave room for more critical threads. 

> 
> I believe the code below result in the same output both when using yield() and when not using it. Is this correct? 
> 
> 

No, the two can produce different results. Without a yield(), once the thread gets control it will execute the 'Inside run' loop in one go. However, with a yield(), once the thread gets control it will print 'Inside run' once and then hand over control to another thread, if any. If no thread is pending, this thread will be resumed again. So every time 'Inside run' is executed the scheduler will look for other threads to execute, and if no thread is available, the current thread will keep on executing. 

> 
> In what ways is yield() different from the join() and interrupt() methods? 
> 
> 

yield() is for giving room to other important threads, join() is for waiting for another thread to complete its execution, and interrupt() is for interrupting a currently executing thread so that it does something else.
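As a concrete, if simplified, example of 'leaving room for other threads', here is a tiny spin lock that yields while contended; this is purely illustrative and not production code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class YieldingSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Keep trying to grab the lock; while we cannot, hint the scheduler
        // to run someone else (ideally the current lock holder).
        while (!locked.compareAndSet(false, true)) {
            Thread.yield();
        }
    }

    public void unlock() {
        locked.set(false);
    }

    // Usage: lock(); try { ...critical section... } finally { unlock(); }
}
```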
`Thread.yield()` causes thread to go from "Running" state to "Runnable" state. Note: It doesn't cause thread to go "Waiting" state.
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
`Thread.yield()` causes thread to go from "Running" state to "Runnable" state. Note: It doesn't cause thread to go "Waiting" state.
Thread.yield() 

> 
> When we invoke the Thread.yield() method, the thread scheduler moves the currently running thread to the Runnable state and picks another thread of equal or higher priority. If there is no thread of equal or higher priority, it reschedules the thread that called yield(). Remember that the yield method does not make the thread go to the Waiting or Blocked state; it can only move a thread from the Running state to the Runnable state. 
> 
> 

join() 

> 
> When join is invoked on a thread instance, the currently executing thread waits until the joined thread completes. Join is used in situations where one task must be completed before the current task can continue. 
> 
>
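A short sketch of the `join()` situation described in the quote, where a prerequisite task must finish before the current thread continues; the task contents are placeholders:

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread loader = new Thread(() -> {
            // pretend to load data that the next step depends on
            System.out.println("loading...");
        });

        loader.start();
        loader.join(); // main waits here until the loader thread has terminated
        System.out.println("data loaded, continuing with dependent work");
    }
}
```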
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
> 
> What are, in fact, the main uses of yield()? 
> 
> 

Yield suggests to the thread scheduler that it may stop the current thread and start executing threads with higher priority. In other words, it de-prioritises the current thread to leave room for more critical threads. 

> 
> I believe the code below result in the same output both when using yield() and when not using it. Is this correct? 
> 
> 

No, the two can produce different results. Without a yield(), once the thread gets control it will execute the 'Inside run' loop in one go. However, with a yield(), once the thread gets control it will print 'Inside run' once and then hand over control to another thread, if any. If no thread is pending, this thread will be resumed again. So every time 'Inside run' is executed the scheduler will look for other threads to execute, and if no thread is available, the current thread will keep on executing. 

> 
> In what ways is yield() different from the join() and interrupt() methods? 
> 
> 

yield() is for giving room to other important threads, join() is for waiting for another thread to complete its execution, and interrupt() is for interrupting a currently executing thread so that it does something else.
`Thread.yield();` frees the bottom thread. `Thread` is using OS threads, so `Thread.yield();` might free the hardware thread. Bad implementation for `sleep(millis)` ``` public class MySleep { public static void sleep(long millis) throws InterruptedException { long start = System.currentTimeMillis(); do { Thread.yield(); if (Thread.interrupted()) { throw new InterruptedException(); } } while (System.currentTimeMillis() - start < millis); } } ``` and `join()` ``` public class MyJoin { public static void join(Thread t) throws InterruptedException { while (t.getState() != Thread.State.TERMINATED) { Thread.yield(); if (Thread.interrupted()) { throw new InterruptedException(); } } } public static void main(String[] args) { Thread thread = new Thread(()-> { try { Thread.sleep(2000); } catch (Exception e) { } }); thread.start(); System.out.println("before"); try { join(thread); } catch (Exception e) { } System.out.println("after"); } } ``` This should work even if there is only one hardware thread, unless `Thread.yield();` is removed.
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
> 
> What are, in fact, the main uses of yield()? 
> 
> 

Yield suggests to the thread scheduler that it may stop the current thread and start executing threads with higher priority. In other words, it de-prioritises the current thread to leave room for more critical threads. 

> 
> I believe the code below result in the same output both when using yield() and when not using it. Is this correct? 
> 
> 

No, the two can produce different results. Without a yield(), once the thread gets control it will execute the 'Inside run' loop in one go. However, with a yield(), once the thread gets control it will print 'Inside run' once and then hand over control to another thread, if any. If no thread is pending, this thread will be resumed again. So every time 'Inside run' is executed the scheduler will look for other threads to execute, and if no thread is available, the current thread will keep on executing. 

> 
> In what ways is yield() different from the join() and interrupt() methods? 
> 
> 

yield() is for giving room to other important threads, join() is for waiting for another thread to complete its execution, and interrupt() is for interrupting a currently executing thread so that it does something else.
Thread.yield() 

> 
> When we invoke the Thread.yield() method, the thread scheduler moves the currently running thread to the Runnable state and picks another thread of equal or higher priority. If there is no thread of equal or higher priority, it reschedules the thread that called yield(). Remember that the yield method does not make the thread go to the Waiting or Blocked state; it can only move a thread from the Running state to the Runnable state. 
> 
> 

join() 

> 
> When join is invoked on a thread instance, the currently executing thread waits until the joined thread completes. Join is used in situations where one task must be completed before the current task can continue. 
> 
>
6,979,796
I am a little bit confused about the use of [`Thread.yield()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Thread.html#yield()) method in Java, specifically in the example code below. I've also read that yield() is 'used to prevent execution of a thread'. My questions are: 1. I believe the code below result in the same output both when using `yield()` and when not using it. Is this correct? 2. What are, in fact, the main uses of `yield()`? 3. In what ways is `yield()` different from the `join()` and `interrupt()` methods? The code example: ``` public class MyRunnable implements Runnable { public static void main(String[] args) { Thread t = new Thread(new MyRunnable()); t.start(); for(int i=0; i<5; i++) { System.out.println("Inside main"); } } public void run() { for(int i=0; i<5; i++) { System.out.println("Inside run"); Thread.yield(); } } } ``` I obtain the same output using the code above both with and without using `yield()`: ``` Inside main Inside main Inside main Inside main Inside main Inside run Inside run Inside run Inside run Inside run ```
2011/08/08
[ "https://Stackoverflow.com/questions/6979796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827583/" ]
About the differences between `yield()`, `interrupt()` and `join()` - in general, not just in Java: 1. **yielding**: Literally, to 'yield' means to let go, to give up, to surrender. A yielding thread tells the operating system (or the virtual machine, or what not) it's willing to let other threads be scheduled in its stead. This indicates it's not doing something too critical. It's only a hint, though, and not guaranteed to have any effect. 2. **joining**: When multiple threads 'join' on some handle, or token, or entity, all of them wait until all other relevant threads have completed execution (entirely or upto their own corresponding join). That means a bunch of threads have all completed their tasks. Then each one of these threads can be scheduled to continue other work, being able to assume all those tasks are indeed complete. (Not to be confused with SQL Joins!) 3. **interruption**: Used by one thread to 'poke' another thread which is sleeping, or waiting, or joining - so that it is scheduled to continue running again, perhaps with an indication it has been interrupted. (Not to be confused with hardware interrupts!) For Java specifically, see 1. Joining: [How to use Thread.join?](https://stackoverflow.com/questions/1908515/java-how-to-use-thread-join) (here on StackOverflow) [When to join threads?](http://javahowto.blogspot.co.il/2007/05/when-to-join-threads.html) 2. Yielding: 3. Interrupting: [Is Thread.interrupt() evil?](https://stackoverflow.com/q/2020992/1593077) (here on StackOverflow)
> 
> What are, in fact, the main uses of yield()? 
> 
> 

Yield suggests to the thread scheduler that it may stop the current thread and start executing threads with higher priority. In other words, it de-prioritises the current thread to leave room for more critical threads. 

> 
> I believe the code below result in the same output both when using yield() and when not using it. Is this correct? 
> 
> 

No, the two can produce different results. Without a yield(), once the thread gets control it will execute the 'Inside run' loop in one go. However, with a yield(), once the thread gets control it will print 'Inside run' once and then hand over control to another thread, if any. If no thread is pending, this thread will be resumed again. So every time 'Inside run' is executed the scheduler will look for other threads to execute, and if no thread is available, the current thread will keep on executing. 

> 
> In what ways is yield() different from the join() and interrupt() methods? 
> 
> 

yield() is for giving room to other important threads, join() is for waiting for another thread to complete its execution, and interrupt() is for interrupting a currently executing thread so that it does something else.
30,435,317
I have single polyline on my map. I add dblclick event to it. It is working but also map is zooming. How to disable zooming when double click on polyline? (Or anything else with dblclick event); Any event propagation, event stop or something? I know and I can use `map.setOptions({disableDoubleClickZoom: true });` but it is not what i really want to do. EDIT: STUPID HACK In polyline `'dblclick'` event just added at start : `map.setZoom(map.getZoom-1);`
2015/05/25
[ "https://Stackoverflow.com/questions/30435317", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4554009/" ]
This is the way to do this through the api: ``` google.maps.event.addListener(polygonObject,'dblclick',function(e){e.stop();}) ``` [Disabling zoom when double click on polygon](https://stackoverflow.com/questions/11278409/disabling-zoom-when-double-click-on-polygon)
**STUPID HACK** In the polyline 'dblclick' handler, just add at the start: `map.setZoom(map.getZoom() - 1);` So you first decrease the zoom, and the double-click zoom then increases it back.
25,730,046
I have a controller that outputs data from the database in raw JSON format. I want this to function as an API and allow anyone to make views with any technology that can consume the JSON i.e. Angular, Jquery/Ajax. However I also want to make a view in Laravel. So what's the best practice for creating a view from Laravel that uses data from a controller while still allowing the controller to output raw JSON? The options I'm thinking of are to call the controller from the view(not good?) or to create extra routes.
2014/09/08
[ "https://Stackoverflow.com/questions/25730046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1821450/" ]
You are probably using NTFS or FAT32 on Windows, and those filesystems do not support the *executable* permission. Instead, [cygwin looks at the file name and contents to determine whether it's executable](https://cygwin.com/cygwin-ug-net/using-filemodes.html): > > Files are considered to be executable if the filename ends with .bat, .com or .exe, or if its content starts with #!. > > > So you should make sure that the bash file starts with a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) (e.g. `#!/bin/bash`). Then, you should be able to just execute the file, disregarding the permission output of `ls`.
If you're updating scripts in a windows environment that are being deployed to a linux filesystem, even though they are permitted to run locally, you may still find yourself needing to grant execute before pushing. From this article on [Change file permissions when working with git repo's on windows](https://medium.com/@akash1233/change-file-permissions-when-working-with-git-repos-on-windows-ea22e34d5cee): 1. Open up a bash terminal like git-bash on Windows 2. Navigate to the `.sh` file where you want to grant execute permissions 3. Check the existing permissions with the following command: ```sh git ls-files --stage ``` Which should return something like *100644* 4. Update the permissions with the following command ```sh git update-index --chmod=+x 'name-of-shell-script' ``` 5. Check the file permission again ```sh git ls-files --stage ``` Which should return something like *100755* 6. Commit changes and push! [![git bash on windows](https://i.stack.imgur.com/9oUd2.jpg)](https://i.stack.imgur.com/9oUd2.jpg)
25,730,046
I have a controller that outputs data from the database in raw JSON format. I want this to function as an API and allow anyone to make views with any technology that can consume the JSON i.e. Angular, Jquery/Ajax. However I also want to make a view in Laravel. So what's the best practice for creating a view from Laravel that uses data from a controller while still allowing the controller to output raw JSON? The options I'm thinking of are to call the controller from the view(not good?) or to create extra routes.
2014/09/08
[ "https://Stackoverflow.com/questions/25730046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1821450/" ]
You are probably using NTFS or FAT32 on Windows, and those filesystems do not support the *executable* permission. Instead, [cygwin looks at the file name and contents to determine whether it's executable](https://cygwin.com/cygwin-ug-net/using-filemodes.html): > > Files are considered to be executable if the filename ends with .bat, .com or .exe, or if its content starts with #!. > > > So you should make sure that the bash file starts with a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) (e.g. `#!/bin/bash`). Then, you should be able to just execute the file, disregarding the permission output of `ls`.
Very likely you do not have a "shebang" at the beginning of your script, so Windows does not know which binary should be used to run the script and the permissions get silently ignored. For example: 

```
#!/usr/bin/sh
```

or 

```
#!/usr/bin/bash
```

It looks like git-bash detects it automatically, because the executable attribute gets set even without a chmod command. Moreover, it is not possible to disable it by using chmod.
25,730,046
I have a controller that outputs data from the database in raw JSON format. I want this to function as an API and allow anyone to make views with any technology that can consume the JSON i.e. Angular, Jquery/Ajax. However I also want to make a view in Laravel. So what's the best practice for creating a view from Laravel that uses data from a controller while still allowing the controller to output raw JSON? The options I'm thinking of are to call the controller from the view(not good?) or to create extra routes.
2014/09/08
[ "https://Stackoverflow.com/questions/25730046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1821450/" ]
If you're updating scripts in a windows environment that are being deployed to a linux filesystem, even though they are permitted to run locally, you may still find yourself needing to grant execute before pushing. From this article on [Change file permissions when working with git repo's on windows](https://medium.com/@akash1233/change-file-permissions-when-working-with-git-repos-on-windows-ea22e34d5cee): 1. Open up a bash terminal like git-bash on Windows 2. Navigate to the `.sh` file where you want to grant execute permissions 3. Check the existing permissions with the following command: ```sh git ls-files --stage ``` Which should return something like *100644* 4. Update the permissions with the following command ```sh git update-index --chmod=+x 'name-of-shell-script' ``` 5. Check the file permission again ```sh git ls-files --stage ``` Which should return something like *100755* 6. Commit changes and push! [![git bash on windows](https://i.stack.imgur.com/9oUd2.jpg)](https://i.stack.imgur.com/9oUd2.jpg)
Very likely you do not have a "shebang" at the beginning of your script, so Windows does not know which binary should be used to run the script and the permissions get silently ignored. For example: 

```
#!/usr/bin/sh
```

or 

```
#!/usr/bin/bash
```

It looks like git-bash detects it automatically, because the executable attribute gets set even without a chmod command. Moreover, it is not possible to disable it by using chmod.
49,585,618
**Task:** ``` id | title | description | --------------------------------------------------------------------- 1 | Task1 | Descr1 | 2 | Task2 | Descr1 | 3 | Task2 | Descr1 | 4 | Task2 | Descr1 | 5 | Task2 | Descr1 | ``` **Message:** ``` id | task_id | message | status | --------------------------------------------------------------------- 1 | 1 | Message1 | HOLD 2 | 1 | Message2 | OK 3 | 1 | Message3 | ERROR 4 | 1 | Message4 | ERROR 5 | 2 | Message5 | HOLD 6 | 2 | Message6 | OK 7 | 2 | Message7 | OK 8 | 2 | Message7 | OK 9 | 3 | Message7 | OK ``` I want to show as here: ``` id | title | description | count(HOLD) | count(OK) | count(ERROR) --------------------------------------------------------------------- 1 | Task1 | Descr1 | 1 | 1 | 2 2 | Task2 | Descr1 | 1 | 3 | 0 3 | Task2 | Descr1 | 0 | 1 | 0 4 | Task2 | Descr1 | 0 | 0 | 0 5 | Task2 | Descr1 | 0 | 0 | 0 ```
2018/03/31
[ "https://Stackoverflow.com/questions/49585618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2100323/" ]
You could use a selective aggregation based on SUM and CASE WHEN: 

```
select task.id
     , task.title
     , task.description
     , sum(case when Message.status = 'HOLD'  then 1 else 0 end) as hold_count
     , sum(case when Message.status = 'OK'    then 1 else 0 end) as ok_count
     , sum(case when Message.status = 'ERROR' then 1 else 0 end) as error_count
from Task
left join Message on Task.id = Message.task_id
group by task.id
       , task.title
       , task.description
```

The left join keeps tasks that have no messages at all, so their counts come out as 0.
Another way, using DECODE: 

```
SELECT task.id,
       task.title,
       task.description,
       SUM(DECODE(status, 'HOLD', 1, 0))  AS "HOLD_COUNT",
       SUM(DECODE(status, 'OK', 1, 0))    AS "OK_COUNT",
       SUM(DECODE(status, 'ERROR', 1, 0)) AS "ERROR_COUNT"
FROM task
LEFT JOIN Message ON Task.id = Message.task_id
GROUP BY task.id, task.title, task.description
```
49,585,618
**Task:** ``` id | title | description | --------------------------------------------------------------------- 1 | Task1 | Descr1 | 2 | Task2 | Descr1 | 3 | Task2 | Descr1 | 4 | Task2 | Descr1 | 5 | Task2 | Descr1 | ``` **Message:** ``` id | task_id | message | status | --------------------------------------------------------------------- 1 | 1 | Message1 | HOLD 2 | 1 | Message2 | OK 3 | 1 | Message3 | ERROR 4 | 1 | Message4 | ERROR 5 | 2 | Message5 | HOLD 6 | 2 | Message6 | OK 7 | 2 | Message7 | OK 8 | 2 | Message7 | OK 9 | 3 | Message7 | OK ``` I want to show as here: ``` id | title | description | count(HOLD) | count(OK) | count(ERROR) --------------------------------------------------------------------- 1 | Task1 | Descr1 | 1 | 1 | 2 2 | Task2 | Descr1 | 1 | 3 | 0 3 | Task2 | Descr1 | 0 | 1 | 0 4 | Task2 | Descr1 | 0 | 0 | 0 5 | Task2 | Descr1 | 0 | 0 | 0 ```
2018/03/31
[ "https://Stackoverflow.com/questions/49585618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2100323/" ]
You could use a selective aggregation based on SUM and CASE WHEN:

```
select task.id
     , task.title
     , task.description
     , sum(case when Message.status = 'HOLD'  then 1 else 0 end) hold_count
     , sum(case when Message.status = 'OK'    then 1 else 0 end) ok_count
     , sum(case when Message.status = 'ERROR' then 1 else 0 end) error_count
from Task
INNER JOIN Message ON Task.id = Message.task_id
group by task.id
       , task.title
       , task.description
```
I believe the correct solution is: ``` select t.id, t.title, t.description, coalesce(sum( m.status = 'Hold' ), 0) as num_hold, coalesce(sum( m.status = 'OK' ), 0) as num_ok, coalesce(sum( m.status = 'Error' ), 0) as num_error from task t left join message m on m.task_id = t.id group by t.id, t.title, t.description; ```
51,984
I observed the below behavior of echo

```
#!/bin/bash
x=" hello"
echo $x
echo "$x"
```

Now when I run the above code I get

```
ronnie@ronnie:~$ bash test.sh
hello
 hello
ronnie@ronnie:~$
```

So, can someone explain to me why the `whitespace` in the first case is not present in the output, and also point me to the documentation where this behavior is defined?
2012/10/16
[ "https://unix.stackexchange.com/questions/51984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24545/" ]
It is not the `echo` behavior, it is bash behavior. When you use the `echo $x` form, bash gets the following command to process (treat `␣` as a space):

```
echo␣␣hello
```

Then this command is tokenized and bash gets two tokens: `echo` and `hello`, thus the output is just `hello`.

When you use the `echo "$x"` form, bash has the following at the input of the tokenizer:

```
echo␣"␣hello"
```

thus it gets two tokens `echo` and `␣hello`, so the output is different.
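A small sketch of the same word splitting, using `printf` so that each argument is printed on its own bracketed line (the variable is the one from the question):

```sh
x=" hello"
printf '[%s]\n' $x     # unquoted: splitting drops the leading space -> [hello]
printf '[%s]\n' "$x"   # quoted: the space stays inside the single argument -> [ hello]
```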
The reason you see different output is because the `echo[1 space]Hello` line is syntactically equal to `echo[5 spaces]Hello`. The whitespace is ignored, and the word 'Hello' is treated as the argument to `echo`. The first line, in its simplest form, is much the same as if you had said `echo "Hello"`. In the second line, you have explicitly included a leading space as part of the argument to `echo`, with `echo " Hello"`. In the first line, you are passing a 5 character string to `echo`, and in the second line you are passing a 6 character string to `echo`. So, in fact, the behavior of `echo` is the same in both instances; it's just the string being passed to `echo` that changes.
95,587
I need to get every UTC day stamp in yyyy-MM-dd format (e.g. "2015-07-02") between any two arbitrary Date fields. My first attempt is this: ``` public static Set<String> getUTCDayStringsBetween(Date startDt, Date endDt) { if (!startDt.before(endDt)) { throw new IllegalArgumentException("Start date (" + startDt + ") must be before end date (" + endDt + ")"); } final TimeZone UTC = TimeZone.getTimeZone("UTC"); final Calendar c = Calendar.getInstance(); final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd"); sdf.setTimeZone(UTC); final Set<String> dayStrings = new LinkedHashSet<>(); c.setTime(startDt); while (true) { dayStrings.add(sdf.format(c.getTime())); c.add(Calendar.DATE, 1);//add 1 day if (c.getTime().after(endDt)) { //reached the end of our range. set time to the endDt and get the final day string c.setTime(endDt); dayStrings.add(sdf.format(c.getTime())); break; } } return dayStrings; } ``` It seems to work, but I'm wondering if there is a more efficient way to do it (aside from using external time libraries like Joda.). Also, is there anything I'm glossing over in terms of handling time zones and day-light-savings changes appropriately.
2015/07/02
[ "https://codereview.stackexchange.com/questions/95587", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/52425/" ]
> > *No, but I welcome answers that use Java8 features* - [bradvido](https://codereview.stackexchange.com/questions/95587/getting-all-the-utc-day-stamps-between-2-dates#comment174428_95587) > > > In Java 8, the new [`Instant`](https://docs.oracle.com/javase/8/docs/api/java/time/Instant.html) class is analogous to the old `Date` class, in the sense that both represent a single point on a time-line. Therefore, if this was to be given a Java 8 makeover, I'll suggest modifying your method to become a wrapper-method over one that takes in two `Instant` objects: ``` public static Set<String> getUTCDayStringsBetween(Date startDt, Date endDt) { return getUTCDayStringsBetween(startDt.toInstant(), endDt.toInstant()); } public static Set<String> getUTCDayStringsBetween(Instant startInstant, Instant endInstant) { // ... } ``` One nice thing about the new Java 8 Time APIs, which are largely (almost completely? wholly?) based on Joda-Time, is that the new classes come with a wealth of methods, letting us easily manipulate dates, times and time zones. For starters, let's convert our `Instant` objects to the desired `UTC` time zone: ``` // declared as class field public static final ZoneId UTC = ZoneId.of("Z"); // inside the method ZonedDateTime start = startInstant.atZone(UTC); ZonedDateTime end = endInstant.atZone(UTC).with(start.toLocalTime()); ``` `start` and `end` now represents the UTC date-time, and crucially `end` is now at the same time defined in `start` as well (will be explained below). Now, we just need to: 1. count the number of **whole** days between `start` and `end`, 2. add each of the days to `start`, 3. format to desired `String` representation, and 4. collect to the desired `Set`. Putting it all together (note the comments indicating the above points): ``` public static Set<String> getUTCDayStringsBetween(Instant startInstant, Instant endInstant) { if (endInstant.isBefore(startInstant)) { throw new IllegalArgumentException("Start date (" + startInstant + ") must be before end date (" + endInstant + ")"); } ZonedDateTime start = startInstant.atZone(UTC); ZonedDateTime end = endInstant.atZone(UTC).with(start.toLocalTime()); return LongStream.rangeClosed(0, start.until(end, ChronoUnit.DAYS)) // 1 .mapToObj(start::plusDays) // 2 .map(DateTimeFormatter.ISO_LOCAL_DATE::format) // 3 .collect(Collectors.toCollection(LinkedHashSet::new)); // 4 } ``` [`start.until(end, ChronoUnit.DAYS)`](https://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html#until-java.time.temporal.Temporal-java.time.temporal.TemporalUnit-) only counts the *complete days* between the two arguments, hence the necessity to set the time of `end` to be the same as `start`. Effectively, we are doing a form of 'rounding up' here. Having generated a `Stream` of days to add, we do so using (the [method reference](https://docs.oracle.com/javase/tutorial/java/javaOO/methodreferences.html)) [`start::plusDays`](https://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html#plusDays-long-). Next we use the predefined `DateTimeFormatter` instance [`ISO_LOCAL_DATE`](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_LOCAL_DATE) to get our desired results. Finally, I use a `LinkedHashSet` as the backing `Set` implementation since I thought it may make more sense to have a defined ordering (by insertion) as the result. **Don't forget your unit tests too!** If you have yet to do so, unit testing allows you to easily and quickly verify whether the implementation is correct. 
For example, I have tested with the following two cases, although I will suggest testing for different time zones or even the exceptional cases: ``` ZonedDateTime test = ZonedDateTime.of(2015, 7, 1, 19, 59, 59, 0, ZoneId.of("US/Eastern")); getUTCDayStringsBetween(test.toInstant(), test.plusSeconds(1).toInstant()) .forEach(System.out::println); getUTCDayStringsBetween(test.toInstant(), test.plusDays(7).toInstant()) .forEach(System.out::println); ``` The test output I get is: ``` 2015-07-01 2015-07-02 2015-07-01 2015-07-02 2015-07-03 2015-07-04 2015-07-05 2015-07-06 2015-07-07 2015-07-08 ``` In the first case, since we ended on UTC midnight, two dates are printed. In the second case, we include the eighth day itself (the seventh day after `test`), hence there are eight dates. --- > > Also, is there anything I'm glossing over in terms of handling time zones and day-light-savings changes appropriately. > > > Since `Date` objects do not have the concept of time zones (besides the fact that they are 'zero-ed' to UTC), and **therefore unaffected by daylight savings**, my take is that you don't have to worry about both.
Instead of an infinite `while` loop + an `if` to break out, it would be more natural to convert this to a do-while loop. The comment "//reached the end of our range. set time to the endDt and get the final day string" is pointless; it just says with words what the code already tells us perfectly clearly. `c` is not a great name for a `Calendar`. How about `cal`, or even `calendar`? Instead of `startDt` and `endDt`, I would just spell them out. It's also a bit of a smell that the method requires the date parameters in a specific order, since this is not obvious from the name. And if the caller uses the wrong order, the runtime exception is a heavy penalty, crashing the application. It's not great to have rules that cannot be enforced at compile time, but I don't have a good counter proposal for you.
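For illustration, a minimal sketch of that do-while shape, keeping the question's variables but with `c` renamed to `cal` as suggested (the surrounding setup is assumed unchanged):

```java
cal.setTime(startDt);
do {
    dayStrings.add(sdf.format(cal.getTime()));
    cal.add(Calendar.DATE, 1); // add 1 day
} while (!cal.getTime().after(endDt));
// finally include the day of the end date itself
dayStrings.add(sdf.format(endDt));
```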
288,935
I know this is a very basic question but I can't seem to find the answer with Google. What is the difference between a hotfix and a bugfix?
2015/07/07
[ "https://softwareengineering.stackexchange.com/questions/288935", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134708/" ]
The term hotfix is generally used when a client has found an issue within the current release of the product and cannot wait for it to be fixed until the next big release. Hence a hotfix issue is created to fix it, and it is released as part of an update to the current release, usually called a Cumulative Update (CU). CUs are nothing but a bunch of hotfixes bundled together. Bugfix - we usually use this when an issue is found internally during the development and testing phase.
A bugfix is just that: a fix for a bug. This could happen at almost any time in a product's lifetime: during development, during testing, or after release. A hotfix can be one or more bugfixes. The important part is the hot, which refers to when it is applied. Originally, it referred to patching an actively running system (aka, 'hot'). It's grown to more generally refer to bugfixes provided after the product is released to the public (this could be during public beta testing, too), but outside of the regular update schedule.
288,935
I know this is a very basic question but I can't seem to find the answer with Google. What is the difference between a hotfix and a bugfix?
2015/07/07
[ "https://softwareengineering.stackexchange.com/questions/288935", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134708/" ]
The term hotfix is generally used when a client has found an issue within the current release of the product and cannot wait for it to be fixed until the next big release. Hence a hotfix issue is created to fix it, and it is released as part of an update to the current release, usually called a Cumulative Update (CU). CUs are nothing but a bunch of hotfixes bundled together. Bugfix - we usually use this when an issue is found internally during the development and testing phase.
From my experience in support at a large software company, the two terms are unrelated. `Bug fix` is an action on the source code: it is a code change or set of changes to address a reported code defect (a bug). A `hotfix` is generally a patch or update for clients / deployed systems, but more specifically they are patches which are:

* not released to a schedule.
* intended to address either 'niche' situations or 'emergency' responses.
* only relevant to the specific issue documented in the release notes.
* poorly tested. If at all.
* a potential source for the (re)introduction of bugs.
* intended for small audiences.
* likely to affect automated patching systems and require additional monitoring. Hotfixes may deploy a file/library with an unusually high version number to prevent the hotfix from being patched over.
* supplied by the software maker directly to named contacts, not publicly available. Customers are often expected to contact technical support to request hotfixes, for example.
* frequently branched from the 'last known good' source tree. As a 'quick fix' the code used in the hotfix may never make it back into the main build (it may be that as a temporary fix a better solution requires more time/resources).
288,935
I know this is a very basic question but I can't seem to find the answer with Google. What is the difference between a hotfix and a bugfix?
2015/07/07
[ "https://softwareengineering.stackexchange.com/questions/288935", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/134708/" ]
From my experience in support at a large software company, the two terms are unrelated. `Bug fix` is an action on the source code: it is a code change or set of changes to address a reported code defect (a bug). A `hotfix` is generally a patch or update for clients / deployed systems, but more specifically they are patches which are:

* not released to a schedule.
* intended to address either 'niche' situations or 'emergency' responses.
* only relevant to the specific issue documented in the release notes.
* poorly tested. If at all.
* a potential source for the (re)introduction of bugs.
* intended for small audiences.
* likely to affect automated patching systems and require additional monitoring. Hotfixes may deploy a file/library with an unusually high version number to prevent the hotfix from being patched over.
* supplied by the software maker directly to named contacts, not publicly available. Customers are often expected to contact technical support to request hotfixes, for example.
* frequently branched from the 'last known good' source tree. As a 'quick fix' the code used in the hotfix may never make it back into the main build (it may be that as a temporary fix a better solution requires more time/resources).
A bugfix is just that: a fix for a bug. This could happen at almost any time in a product's lifetime: during development, during testing, or after release. A hotfix can be one or more bugfixes. The important part is the hot, which refers to when it is applied. Originally, it referred to patching an actively running system (aka, 'hot'). It's grown to more generally refer to bugfixes provided after the product is released to the public (this could be during public beta testing, too), but outside of the regular update schedule.
54,624
sorry if I'm asking stupid questions but I really spent days trying to figure out how to add functionality to my map so that when a user clicks on some map object he gets the attribute table on the side with the attribute data of that object. Is it even possible? I don't mean popups, I mean an actual table. This is how my map looks now:

![map](https://i.stack.imgur.com/6Liqu.png)

I would like the space on the east side of the monitor to be populated with that attribute table when users click. I'm not saying that's the best solution but that's how I imagined it. But I'm open to suggestions. As you can see I use geoext, openlayers and extjs and honestly, I don't know a lot about php or some other frameworks, so I would be happy if the solution could be done using geoext or similar. I was looking around but nothing I found is what I'm actually looking for. One more time, sorry if my question is unprofessional or something but I'm really kind of desperate :)
2013/03/15
[ "https://gis.stackexchange.com/questions/54624", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/-1/" ]
You could use the FeatureGrid: <http://api.geoext.org/1.1/examples/feature-grid.html>
You can try this code; it will display a pop-up containing the attribute information of the layers at that particular point.

```
controls.push(new OpenLayers.Control.WMSGetFeatureInfo({
    autoActivate: true,
    infoFormat: "application/vnd.ogc.gml",
    maxFeatures: 10,
    eventListeners: {
        "getfeatureinfo": function(e) {
            var items = [];
            Ext.each(e.features, function(feature) {
                items.push({
                    xtype: "propertygrid",
                    title: feature.fid,
                    source: feature.attributes,
                    editable: false
                });
            });
            new GeoExt.Popup({
                title: "Feature Info",
                width: 300,
                height: 500,
                layout: "accordion",
                map: app.mapPanel,
                location: e.xy,
                items: items
            }).show();
        }
    }
}));
```

Ref: <http://workshops.boundlessgeo.com/geoext/stores/getfeatureinfo.html>
46,634,279
I'm new to clojure and want to accomplish the following: I have some functions with a one letter name, a string "commands" and an argument arg ``` (defn A [x] ...) (defn B [x] ...) (defn C [x] ...) ``` I want to have a function (let's call it `apply-fns`) that, given the string with the names of the functions, applies the function to the given argument in order: ``` ; commands = "ACCBB" (apply-fns commands arg) ;should have the same effect as (B (B (C (C (A arg))))) ``` Any help appreciated
2017/10/08
[ "https://Stackoverflow.com/questions/46634279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8741941/" ]
Being a literal genie, I'll give you exactly what you're asking: ``` (defn A [x]) (defn B [x]) (defn C [x]) (def commands "AACCBB") (defmacro string-fns [arg] (let [cs (map (comp symbol str) commands)] `(-> ~arg ~@cs))) (comment (macroexpand '(string-fns :foo)) ;;=> (B (B (C (C (A (A :foo)))))) ) ``` Without any context, however, this makes no sense. What are you trying to do?
Your aim is to apply a series of functions (in order), so that the following function is applied to the result of the previous one. The result should be the same as nesting the forms (i.e., function calls), like in your example: `(B (B (C (C (A arg)))))` If you can use a sequence of the commands (or can extract the sequence from the string representation `"ACCBB"`), you can accomplish this by reducing over the functions, which are named as *commands* in the example. ``` (def commands [A C C B B]) (defn apply-all [initial commands] (reduce #(%2 %1) initial commands)) ``` A fuller example: ``` (defn A [x] (str x "A")) (defn B [x] (str x "B")) (defn C [x] (str x "C")) (def commands [A C C B B]) (defn apply-all [initial commands] (reduce #(%2 %1) initial commands)) user=> (apply-all "" commands) ; => "ACCBB" ```
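If the commands really arrive as the string `"ACCBB"`, one way to go from characters to functions is an explicit lookup map. A minimal sketch (the map and the helper name are just illustrative; it assumes `A`, `B` and `C` are defined as above):

```clojure
(def fns-by-letter {\A A, \B B, \C C})

(defn apply-fns [commands arg]
  ;; reduce walks the string character by character, threading the result through
  (reduce (fn [acc letter] ((fns-by-letter letter) acc)) arg commands))

;; (apply-fns "ACCBB" "") ;=> "ACCBB" with the string-appending A/B/C above
```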
70,832,568
I'm trying to get data from the database using ajax and insert it into another element, but the post data is not being passed to **get-data.php**. What could the reason be, and what is the solution?

[![enter image description here](https://i.stack.imgur.com/RTCbi.png)](https://i.stack.imgur.com/RTCbi.png)

[![passed data shown here but no response](https://i.stack.imgur.com/VibUZ.png)](https://i.stack.imgur.com/VibUZ.png)

**addBuilding.php**

```
<?php
require_once("./dbConfig.php");
$selectIL = "SELECT * FROM iller ";
$selectIL = $db->prepare($selectIL);
$selectIL->execute();
$res = $selectIL->get_result();
?>
<form action="" method="post">
    <select name="pp" id="cites">
        <option value="">-select state-</option>
        <?php while ($row = $res->fetch_assoc()) { ?>
            <option value="<?= $row['id'] ?>"><?= $row['il_adi'] ?></option>
        <?php } ?>
    </select>
    <select name="district" id="district">
        <option value="">-select district-</option>
    </select>
</form>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script src="getdata.js"></script>
```

**getdata.js**

```
$(document).ready(function() {
    $("#cites").change(function() {
        if ($("#cites").val() != "") {
            $("#district").prop("disabled", false);
        } else {
            $("#district").prop("disabled", true);
        }
        var city = $("#cites").val();
        $.ajax({
            type: "POST",
            url: "get-data.php",
            data: $(city).serialize(),
            success: function(result) {
                $("#district").append(result);
            }
        });
    });
});
```

**get-data.php**

I can see the form data in the network inspection, but no data is passed to **get-data.php**.

```
<?php
require_once("./dbConfig.php");
if (isset($_POST['pp'])) {
    $cites = $_POST['cites'];
    $selectIlce = "SELECT * FROM ilceler where il_id=? ";
    $selectIlce = $db->prepare($selectIlce);
    $selectIlce->bind_param("i", $cites);
    $selectIlce->execute();
    $res = $selectIlce->get_result();
    ?>
    <?php while ($row = $res->fetch_assoc()) { ?>
        <option value="<?= $row['id'] ?>"><?= $row['ilce_adi'] ?></option>
    <?php }
} ?>
```
2022/01/24
[ "https://Stackoverflow.com/questions/70832568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13661132/" ]
You need to echo the results in get-data.php

```
<?php while ($row = $res->fetch_assoc()) {
    echo "<option value='" . $row["id"] . "'>" . $row['ilce_adi'] . "</option>";
} } ?>
```
1- Get the data by serializing the form:

```
$("form").serialize()
```

2- Add dataType: "json" to the ajax options:

```
$.ajax({
    type: "POST",
    url: "get-data.php",
    data: $("form").serialize(),
    dataType: "json",
    success: function(result) {
        $("#district").append(result);
    }
});
```
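Alternatively, a minimal sketch that skips serialization and posts the selected value directly; it assumes the PHP side reads `$_POST['pp']`, which matches the select's `name` attribute in addBuilding.php:

```js
var city = $("#cites").val();
$.ajax({
    type: "POST",
    url: "get-data.php",
    data: { pp: city },   // send the value under the key the server checks
    success: function (result) {
        $("#district").html(result);
    }
});
```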
34,717,787
We have an Analytics product. For each of our customers we provide one JavaScript snippet, which they put in their web sites. If a user visits our customer's site, the JavaScript code hits our server so that we store this page visit on behalf of that customer. Each customer has a unique domain name.

We are storing these page visits in a MySQL table. Following is the table schema.

```
CREATE TABLE `page_visits` (
  `domain` varchar(50) DEFAULT NULL,
  `guid` varchar(100) DEFAULT NULL,
  `sid` varchar(100) DEFAULT NULL,
  `url` varchar(2500) DEFAULT NULL,
  `ip` varchar(20) DEFAULT NULL,
  `is_new` varchar(20) DEFAULT NULL,
  `ref` varchar(2500) DEFAULT NULL,
  `user_agent` varchar(255) DEFAULT NULL,
  `stats_time` datetime DEFAULT NULL,
  `country` varchar(50) DEFAULT NULL,
  `region` varchar(50) DEFAULT NULL,
  `city` varchar(50) DEFAULT NULL,
  `city_lat_long` varchar(50) DEFAULT NULL,
  `email` varchar(100) DEFAULT NULL,
  KEY `sid_index` (`sid`) USING BTREE,
  KEY `domain_index` (`domain`),
  KEY `email_index` (`email`),
  KEY `stats_time_index` (`stats_time`),
  KEY `domain_statstime` (`domain`,`stats_time`),
  KEY `domain_email` (`domain`,`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
```

We don't have a primary key for this table.

MySQL server details: it is Google Cloud MySQL (version 5.6) and the storage capacity is 10 TB.

As of now we have 350 million rows in our table and the table size is 300 GB. We are storing all of our customers' details in the same table even though there is no relation between one customer and another.

**Problem 1**: A few of our customers have a huge number of rows in the table, so the performance of queries against these customers is very slow.

Example Query 1:

```
SELECT count(DISTINCT sid) AS count,count(sid) AS total FROM page_views WHERE domain = 'aaa' AND stats_time BETWEEN CONVERT_TZ('2015-02-05 00:00:00','+05:30','+00:00') AND CONVERT_TZ('2016-01-01 23:59:59','+05:30','+00:00');

+---------+---------+
| count   | total   |
+---------+---------+
| 1056546 | 2713729 |
+---------+---------+
1 row in set (13 min 19.71 sec)
```

I will update more queries here. We need results in below 5-10 seconds; will that be possible?

**Problem 2**: The table size is rapidly increasing; we might hit a table size of 5 TB by this year end, so we want to shard our table. We want to keep all records related to one customer on one machine. What are the best practices for this sharding?

We are thinking of the following approaches for the above issues; please suggest best practices to overcome them.

1) Create a separate table for each customer. What are the advantages and disadvantages if we create a separate table for each customer? As of now we have 30k customers and might hit 100k by this year end, which means 100k tables in the DB. We access all tables simultaneously for Read and Write.

2) We will go with the same table and will create partitions based on date range.

**UPDATE**: Is a "customer" determined by the domain? **Answer is Yes**

Thanks
2016/01/11
[ "https://Stackoverflow.com/questions/34717787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/619097/" ]
First, a critique if the **excessively large datatypes**: ``` `domain` varchar(50) DEFAULT NULL, -- normalize to MEDIUMINT UNSIGNED (3 bytes) `guid` varchar(100) DEFAULT NULL, -- what is this for? `sid` varchar(100) DEFAULT NULL, -- varchar? `url` varchar(2500) DEFAULT NULL, `ip` varchar(20) DEFAULT NULL, -- too big for IPv4, too small for IPv6; see below `is_new` varchar(20) DEFAULT NULL, -- flag? Consider `TINYINT` or `ENUM` `ref` varchar(2500) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, -- normalize! (add new rows as new agents are created) `stats_time` datetime DEFAULT NULL, `country` varchar(50) DEFAULT NULL, -- use standard 2-letter code (see below) `region` varchar(50) DEFAULT NULL, -- see below `city` varchar(50) DEFAULT NULL, -- see below `city_lat_long` varchar(50) DEFAULT NULL, -- unusable in current format; toss? `email` varchar(100) DEFAULT NULL, ``` For IP addresses, use `inet6_aton()`, then store in `BINARY(16)`. For `country`, use `CHAR(2) CHARACTER SET ascii` -- only 2 bytes. country + region + city + (maybe) latlng -- normalize this to a "location". All these changes may cut the disk footprint in half. Smaller --> more cacheable --> less I/O --> faster. **Other issues**... To greatly speed up your `sid` counter, change ``` KEY `domain_statstime` (`domain`,`stats_time`), ``` to ``` KEY dss (domain_id,`stats_time`, sid), ``` That will be a "covering index", hence won't have to bounce between the index and the data 2713729 times -- the bouncing is what cost 13 minutes. (`domain_id` is discussed below.) This is redundant with the above index, `DROP` it: KEY `domain_index` (`domain`) Is a "customer" determined by the `domain`? Every InnoDB table must have a `PRIMARY KEY`. There are 3 ways to get a PK; you picked the 'worst' one -- a hidden 6-byte integer fabricated by the engine. I assume there is no 'natural' PK available from some combination of columns? Then, an explicit `BIGINT UNSIGNED` is called for. (Yes that would be 8 bytes, but various forms of maintenance need an *explicit* PK.) If *most* queries include `WHERE domain = '...'`, then I recommend the following. (And this will greatly improve all such queries.) ``` id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT, domain_id MEDIUMINT UNSIGNED NOT NULL, -- normalized to `Domains` PRIMARY KEY(domain_id, id), -- clustering on customer gives you the speedup INDEX(id) -- this keeps AUTO_INCREMENT happy ``` Recommend you look into `pt-online-schema-change` for making all these changes. However, I don't know if it can work without an explicit `PRIMARY KEY`. "Separate table for each customer"? *No*. This is a common question; the resounding answer is No. I won't repeat all the reasons for not having 100K tables. **Sharding** "Sharding" is splitting the data across multiple *machines*. To do sharding, you need to have code somewhere that looks at `domain` and decides which server will handle the query, then hands it off. Sharding is advisable when you have *write scaling* problems. You did not mention such, so it is unclear whether sharding is advisable. When sharding on something like `domain` (or `domain_id`), you could use (1) a hash to pick the server, (2) a dictionary lookup (of 100K rows), or (3) a hybrid. I like the hybrid -- hash to, say, 1024 values, then look up into a 1024-row table to see which machine has the data. Since adding a new shard and migrating a user to a different shard are major undertakings, I feel that the hybrid is a reasonable compromise. 
The lookup table needs to be distributed to all clients that redirect actions to shards.

If your 'writing' is running out of steam, see [*high speed ingestion*](http://mysql.rjweb.org/doc.php/staging_table) for possible ways to speed that up.

**PARTITIONing**

`PARTITIONing` is splitting the data across multiple "sub-tables". There are only a [*limited number of use cases*](http://mysql.rjweb.org/doc.php/partitionmaint) where partitioning buys you any performance. You have not indicated that any apply to your use case. Read that blog and see if you think that partitioning might be useful.

You mentioned "partition by date range". Will most of the queries include a date range? If so, such partitioning *may* be advisable. (See the link above for best practices.)

Some other options come to mind:

Plan A: `PRIMARY KEY(domain_id, stats_time, id)` But that is bulky and requires even more overhead on each secondary index. (Each secondary index silently includes all the columns of the PK.)

Plan B: Have stats\_time include microseconds, then tweak the values to avoid having dups. Then use `stats_time` instead of `id`. But this requires some added complexity, especially if there are multiple clients inserting data. (I can elaborate if needed.)

Plan C: Have a table that maps stats\_time values to ids. Look up the id range before doing the real query, then use both `WHERE id BETWEEN ... AND stats_time ...`. (Again, messy code.)

**Summary tables**

Are many of the queries of the form of counting things over date ranges? Suggest having Summary Tables, perhaps rolled up per hour. [*More discussion*](http://mysql.rjweb.org/doc.php/summarytables).

`COUNT(DISTINCT sid)` is especially difficult to fold into summary tables. For example, the unique counts for each hour cannot be added together to get the unique count for the day. But I have a [*technique*](http://mysql.rjweb.org/doc.php/uniques) for that, too.
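To make the summary-table idea concrete, here is a minimal sketch of an hourly rollup; the table and column names are illustrative only, and it assumes the `domain_id` normalization suggested earlier:

```sql
CREATE TABLE page_visits_hourly (
  domain_id MEDIUMINT UNSIGNED NOT NULL,
  hr        DATETIME NOT NULL,            -- stats_time truncated to the hour
  visits    INT UNSIGNED NOT NULL,
  PRIMARY KEY (domain_id, hr)
);

-- fold recently arrived raw rows into the summary (the 1-hour window is illustrative)
INSERT INTO page_visits_hourly (domain_id, hr, visits)
SELECT domain_id,
       DATE_FORMAT(stats_time, '%Y-%m-%d %H:00:00'),
       COUNT(*)
FROM   page_visits
WHERE  stats_time >= NOW() - INTERVAL 1 HOUR
GROUP  BY domain_id, DATE_FORMAT(stats_time, '%Y-%m-%d %H:00:00')
ON DUPLICATE KEY UPDATE visits = visits + VALUES(visits);
```

A date-range total for one customer then becomes a `SUM(visits)` over `page_visits_hourly` instead of a scan of the raw rows; the distinct-`sid` count still needs the separate technique linked above.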
I wouldn't do this if I were you. The first thing that comes to mind would be: on receiving a pageview message, send the message to a queue so that a worker can pick it up and insert it into the database later (in bulk, maybe); also increase the counter of `siteid:date` in Redis (for example). Doing `count` in SQL is just a bad idea for this scenario.
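A minimal sketch of the counter idea with plain Redis commands (the key naming is just an example):

```sh
# on each pageview, bump a per-site, per-day counter
redis-cli INCR "pageviews:aaa:2015-02-05"
# reading a day's count back
redis-cli GET "pageviews:aaa:2015-02-05"
```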
23,629,204
I'm using KineticJS in my MVC application. In order to retrieve data from database, I'm making some ajax calls to web services in the API controller. There is an API that returns an id, which I want to assign to the current `Kinetic.Group` id attribute on success. After drag and drop, a popup `PopUpAddRoom(FloorId)` appears and calls the AddRoom function. my code: ``` function AddRoom(FloorId, RoomName, TypeID) { $.ajax({ beforeSend: function (xhr) { // verify session }, url: "/api/someurl", //url for adding the room, that returns room's id type: "Post", dataType: 'json', success: function (data) { //data is the room id var r = rightLayer.find("#" + data)[0]; console.log(r.getType()); // here I don't even get the type, I get "Object []" var rec = r.find("Rect"); //rec is a rectangle inside the group rec.setId(data); rightLayer.draw(); } }); } ``` The problem is that `r = rightLayer.find("#" + data)[0];` is empty. It returns *undefined*. When I call `console.log(data);` to see if data's content is empty, it returns the correct id ! I initialized the id like this: `id: ''` and want it to change as it gets the data from database. But this failed. Is there something wrong with this code? **EDIT :** After figuring out that `id: ''` is really dumb idea (with the help of markE), I tried initializing id to an empty variable ident which gets its value from a web service (this ws increments the id when a new instance is added successfully). But the problem doesn't come from `r = rightLayer.find("#" + data)[0];`. It's the fact of assigning the id to a node (location of the instruction) ``` var ident; var group = new Kinetic.Group({ ... id: ident ... }); ``` I added then this line: `ident = data;` after success of the ajax call, still the same error. It seems like this instruction isn't doing nothing. The value of ident isn't changing after that I call `PopUpAddRoom` function.
2014/05/13
[ "https://Stackoverflow.com/questions/23629204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3365233/" ]
No, they are not supported. You should use separate struct declarations and regular fields: ``` struct Foo {} struct Test { foo: Foo, } ```
They are not supported by Rust. But you can write yourself a proc macro that emulates them. I [have](https://lib.rs/structstruck), it turns ```rust structstruck::strike!{ struct Test { foo: struct {} } } ``` into ``` struct Foo {} struct Test { foo: Foo, } ``` You haven't explicitly said so, but I suspect that your goal for using nested structs is not more easily readable data structure declarations, but namespacing? You can't actually have a struct named `Test` and access `Foo` as `Test::Foo`, but you could make yourself a proc macro that at least automatically creates a `mod test { Foo {} }`.
23,629,204
I'm using KineticJS in my MVC application. In order to retrieve data from database, I'm making some ajax calls to web services in the API controller. There is an API that returns an id, which I want to assign to the current `Kinetic.Group` id attribute on success. After drag and drop, a popup `PopUpAddRoom(FloorId)` appears and calls the AddRoom function. my code: ``` function AddRoom(FloorId, RoomName, TypeID) { $.ajax({ beforeSend: function (xhr) { // verify session }, url: "/api/someurl", //url for adding the room, that returns room's id type: "Post", dataType: 'json', success: function (data) { //data is the room id var r = rightLayer.find("#" + data)[0]; console.log(r.getType()); // here I don't even get the type, I get "Object []" var rec = r.find("Rect"); //rec is a rectangle inside the group rec.setId(data); rightLayer.draw(); } }); } ``` The problem is that `r = rightLayer.find("#" + data)[0];` is empty. It returns *undefined*. When I call `console.log(data);` to see if data's content is empty, it returns the correct id ! I initialized the id like this: `id: ''` and want it to change as it gets the data from database. But this failed. Is there something wrong with this code? **EDIT :** After figuring out that `id: ''` is really dumb idea (with the help of markE), I tried initializing id to an empty variable ident which gets its value from a web service (this ws increments the id when a new instance is added successfully). But the problem doesn't come from `r = rightLayer.find("#" + data)[0];`. It's the fact of assigning the id to a node (location of the instruction) ``` var ident; var group = new Kinetic.Group({ ... id: ident ... }); ``` I added then this line: `ident = data;` after success of the ajax call, still the same error. It seems like this instruction isn't doing nothing. The value of ident isn't changing after that I call `PopUpAddRoom` function.
2014/05/13
[ "https://Stackoverflow.com/questions/23629204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3365233/" ]
No, they are not supported. You should use separate struct declarations and regular fields: ``` struct Foo {} struct Test { foo: Foo, } ```
I found another possible answer for your request; maybe it can help you: <https://internals.rust-lang.org/t/nested-struct-declaration/13314/4>

[Structural records](https://github.com/rust-lang/rfcs/pull/2584) would give you the nesting (without the privacy controls or naming).

```
// From the RFC
struct RectangleTidy {
    dimensions: {
        width: u64,
        height: u64,
    },
    color: {
        red: u8,
        green: u8,
        blue: u8,
    },
}
```

Below is my old answer, please ignore it.

Built using the stable version: 1.65.0. Here's an example for you: [rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=4d52faab01bb63bc2ee47c2fc545392c)

```
#[derive(Debug)]
struct Inner {
    i: i32,
}

#[derive(Debug)]
struct Outer {
    o: i32,
    inner: Inner,
}

pub fn test() {
    // add your code here
    let obj = Outer {
        o: 10,
        inner: Inner { i: 9 },
    };

    assert!(10i32 == obj.o);
    assert!(9i32 == obj.inner.i);

    println!("{}", obj.o);
    println!("{}", obj.inner.i);
    println!("{:?}", obj);
}

fn main() {
    test();
}
```
23,629,204
I'm using KineticJS in my MVC application. In order to retrieve data from database, I'm making some ajax calls to web services in the API controller. There is an API that returns an id, which I want to assign to the current `Kinetic.Group` id attribute on success. After drag and drop, a popup `PopUpAddRoom(FloorId)` appears and calls the AddRoom function. my code: ``` function AddRoom(FloorId, RoomName, TypeID) { $.ajax({ beforeSend: function (xhr) { // verify session }, url: "/api/someurl", //url for adding the room, that returns room's id type: "Post", dataType: 'json', success: function (data) { //data is the room id var r = rightLayer.find("#" + data)[0]; console.log(r.getType()); // here I don't even get the type, I get "Object []" var rec = r.find("Rect"); //rec is a rectangle inside the group rec.setId(data); rightLayer.draw(); } }); } ``` The problem is that `r = rightLayer.find("#" + data)[0];` is empty. It returns *undefined*. When I call `console.log(data);` to see if data's content is empty, it returns the correct id ! I initialized the id like this: `id: ''` and want it to change as it gets the data from database. But this failed. Is there something wrong with this code? **EDIT :** After figuring out that `id: ''` is really dumb idea (with the help of markE), I tried initializing id to an empty variable ident which gets its value from a web service (this ws increments the id when a new instance is added successfully). But the problem doesn't come from `r = rightLayer.find("#" + data)[0];`. It's the fact of assigning the id to a node (location of the instruction) ``` var ident; var group = new Kinetic.Group({ ... id: ident ... }); ``` I added then this line: `ident = data;` after success of the ajax call, still the same error. It seems like this instruction isn't doing nothing. The value of ident isn't changing after that I call `PopUpAddRoom` function.
2014/05/13
[ "https://Stackoverflow.com/questions/23629204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3365233/" ]
They are not supported by Rust. But you can write yourself a proc macro that emulates them. I [have](https://lib.rs/structstruck), it turns ```rust structstruck::strike!{ struct Test { foo: struct {} } } ``` into ``` struct Foo {} struct Test { foo: Foo, } ``` You haven't explicitly said so, but I suspect that your goal for using nested structs is not more easily readable data structure declarations, but namespacing? You can't actually have a struct named `Test` and access `Foo` as `Test::Foo`, but you could make yourself a proc macro that at least automatically creates a `mod test { Foo {} }`.
I found another possible answer for your request; maybe it can help you: <https://internals.rust-lang.org/t/nested-struct-declaration/13314/4>

[Structural records](https://github.com/rust-lang/rfcs/pull/2584) would give you the nesting (without the privacy controls or naming).

```
// From the RFC
struct RectangleTidy {
    dimensions: {
        width: u64,
        height: u64,
    },
    color: {
        red: u8,
        green: u8,
        blue: u8,
    },
}
```

Below is my old answer, please ignore it.

Built using the stable version: 1.65.0. Here's an example for you: [rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=4d52faab01bb63bc2ee47c2fc545392c)

```
#[derive(Debug)]
struct Inner {
    i: i32,
}

#[derive(Debug)]
struct Outer {
    o: i32,
    inner: Inner,
}

pub fn test() {
    // add your code here
    let obj = Outer {
        o: 10,
        inner: Inner { i: 9 },
    };

    assert!(10i32 == obj.o);
    assert!(9i32 == obj.inner.i);

    println!("{}", obj.o);
    println!("{}", obj.inner.i);
    println!("{:?}", obj);
}

fn main() {
    test();
}
```
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
My guess is that you'll need to wait for the release of MVC3 (when it becomes open-source) before that can be answered perfectly. I'm sure the Mono team will make it work, though.
It looks like we're getting there: <http://gonzalo.name/blog/archive/2011/Jan-21.html> Looks like it isn't in any of the published versions yet, but you can run it from source control.
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
Yes, it does. I have it working with mono on Linux. You need mono 2.10.2+ from the stable sources from ~~<http://ftp.novell.com/pub/mono/sources-stable/>~~ <http://download.mono-project.com/sources/mono/> Then, you need to localcopy these assemblies into your app's bin directory (you take them from Visual Studio on Windows): System.Web.Mvc.dll System.Web.Razor.dll System.Web.WebPages.dll System.Web.WebPages.Deployment.dll System.Web.WebPages.Razor.dll Then, you might have to get rid of the following errors you might have made like this: Error: Storage scopes cannot be created when \_AppStart is executing. Cause: Microsoft.Web.Infrastructure.dll was localcopied to the bin directory. Resolution: Delete Microsoft.Web.Infrastructure.dll **and use the mono version**. Error: Invalid IL code in System.Web.Handlers.ScriptModule:.ctor (): method body is empty. Cause: System.Web.Extensions.dll somehow gets localcopied to the bin directory. Resolution: Delete System.Web.Extensions.dll **and use the mono version**. Error: The classes in the module cannot be loaded. Description: HTTP 500. Error processing request. Cause: System.Web.WebPages.Administration.dll was localcopied to the bin directory. Resolution: Delete System.Web.WebPages.Administration.dll **and unreference it** Error: Could not load type 'System.Web.WebPages.Razor.RazorBuildProvider' from assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Description: HTTP 500. Error processing request. Cause: System.Web.Razor.dll is corrupt or missing ~~(or x64 instead of x32 or vice-versa)~~ ... Resolution: Get an **uncorrupted** version of System.Web.Razor.dll and localcopy to the bin directory **Edit** As of mono 2.12 / MonoDevelop 2.8, all of this is not necessary anymore. Note that on 2.10 (Ubuntu 11.10), one needs to localcopy `System.Web.DynamicData.dll` as well, or else you get an error that only occurs on App\_Start (if you don't do that, you get a YSOD the first time you call a page, but ONLY the first time, because only then App\_Start is called.). **Note** for mono 3.0+ with ASP.NET MVC4: There is a "bug" in the install script. Or rather an incompleteness. mod-mono, fastcgi-mono-server4 and xsp4 won't work correctly. For example: fastcgi-mono-server4 gives you this debug output: ``` [error] 3384#0: *101 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8000" ``` This is, because after the installation of mono3, it uses framework 4.5, but xsp, fastcgi-mono-server4 and mod-mono are not in the 4.5 GAC, only the 4.0 gac. To fix this, use this bash script: ``` #!/bin/bash # Your mono directory #PREFIX=/usr PREFIX=/opt/mono/3.0.3 FILES=('mod-mono-server4' 'fastcgi-mono-server4' 'xsp4') cd $PREFIX/lib/mono/4.0 for file in "${FILES[@]}" do cp "$file.exe" ../4.5 done cd $PREFIX/bin for file in "${FILES[@]}" do sed -ie 's|mono/4.0|mono/4.5|g' $file done ``` And if you use it via FastCGI (e.g. 
nginx), you also need this fix for TransmitFile for the chunked\_encoding bug [Why do I have unwanted extra bytes at the beginning of image?](https://stackoverflow.com/questions/14662795/why-do-i-have-unwanted-extra-bytes-at-the-beginning-of-image/14671753#14671753) (fixed in mono 3.2.3)

**PS:** You can get the .debs for 3.x from here: <https://www.meebey.net/posts/mono_3.0_preview_debian_ubuntu_packages/> or compile them yourselves from github [Installing Mono 3.x in Ubuntu/Debian](https://stackoverflow.com/questions/13365158/installing-mono-3-0) or like this from the stable sources <http://ubuntuforums.org/showthread.php?t=1591370>

**2015**

You can now use the [Xamarin provided packages](http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives)

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
```

If you need the very latest features, you can also fetch the [CI packages (nightly builds, so to say)](http://www.mono-project.com/docs/getting-started/install/linux/ci-packages/):

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://jenkins.mono-project.com/repo/debian sid main" | sudo tee /etc/apt/sources.list.d/mono-jenkins.list
sudo apt-get update
```
My guess is that you'll need to wait for the release of MVC3 (when it becomes open-source) before that can be answered perfectly. I'm sure the Mono team will make it work, though.
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
[Not yet.](http://lists.ximian.com/pipermail/mono-list/2010-November/046052.html)
It looks like we're getting there: <http://gonzalo.name/blog/archive/2011/Jan-21.html> Looks like it isn't in any of the published versions yet, but you can run it from source control.
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
Yes, it does. I have it working with mono on Linux. You need mono 2.10.2+ from the stable sources from ~~<http://ftp.novell.com/pub/mono/sources-stable/>~~ <http://download.mono-project.com/sources/mono/> Then, you need to localcopy these assemblies into your app's bin directory (you take them from Visual Studio on Windows): System.Web.Mvc.dll System.Web.Razor.dll System.Web.WebPages.dll System.Web.WebPages.Deployment.dll System.Web.WebPages.Razor.dll Then, you might have to get rid of the following errors you might have made like this: Error: Storage scopes cannot be created when \_AppStart is executing. Cause: Microsoft.Web.Infrastructure.dll was localcopied to the bin directory. Resolution: Delete Microsoft.Web.Infrastructure.dll **and use the mono version**. Error: Invalid IL code in System.Web.Handlers.ScriptModule:.ctor (): method body is empty. Cause: System.Web.Extensions.dll somehow gets localcopied to the bin directory. Resolution: Delete System.Web.Extensions.dll **and use the mono version**. Error: The classes in the module cannot be loaded. Description: HTTP 500. Error processing request. Cause: System.Web.WebPages.Administration.dll was localcopied to the bin directory. Resolution: Delete System.Web.WebPages.Administration.dll **and unreference it** Error: Could not load type 'System.Web.WebPages.Razor.RazorBuildProvider' from assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Description: HTTP 500. Error processing request. Cause: System.Web.Razor.dll is corrupt or missing ~~(or x64 instead of x32 or vice-versa)~~ ... Resolution: Get an **uncorrupted** version of System.Web.Razor.dll and localcopy to the bin directory **Edit** As of mono 2.12 / MonoDevelop 2.8, all of this is not necessary anymore. Note that on 2.10 (Ubuntu 11.10), one needs to localcopy `System.Web.DynamicData.dll` as well, or else you get an error that only occurs on App\_Start (if you don't do that, you get a YSOD the first time you call a page, but ONLY the first time, because only then App\_Start is called.). **Note** for mono 3.0+ with ASP.NET MVC4: There is a "bug" in the install script. Or rather an incompleteness. mod-mono, fastcgi-mono-server4 and xsp4 won't work correctly. For example: fastcgi-mono-server4 gives you this debug output: ``` [error] 3384#0: *101 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8000" ``` This is, because after the installation of mono3, it uses framework 4.5, but xsp, fastcgi-mono-server4 and mod-mono are not in the 4.5 GAC, only the 4.0 gac. To fix this, use this bash script: ``` #!/bin/bash # Your mono directory #PREFIX=/usr PREFIX=/opt/mono/3.0.3 FILES=('mod-mono-server4' 'fastcgi-mono-server4' 'xsp4') cd $PREFIX/lib/mono/4.0 for file in "${FILES[@]}" do cp "$file.exe" ../4.5 done cd $PREFIX/bin for file in "${FILES[@]}" do sed -ie 's|mono/4.0|mono/4.5|g' $file done ``` And if you use it via FastCGI (e.g. 
nginx), you also need this fix for TransmitFile for the chunked\_encoding bug [Why do I have unwanted extra bytes at the beginning of image?](https://stackoverflow.com/questions/14662795/why-do-i-have-unwanted-extra-bytes-at-the-beginning-of-image/14671753#14671753) (fixed in mono 3.2.3)

**PS:** You can get the .debs for 3.x from here: <https://www.meebey.net/posts/mono_3.0_preview_debian_ubuntu_packages/> or compile them yourselves from github [Installing Mono 3.x in Ubuntu/Debian](https://stackoverflow.com/questions/13365158/installing-mono-3-0) or like this from the stable sources <http://ubuntuforums.org/showthread.php?t=1591370>

**2015**

You can now use the [Xamarin provided packages](http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives)

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
```

If you need the very latest features, you can also fetch the [CI packages (nightly builds, so to say)](http://www.mono-project.com/docs/getting-started/install/linux/ci-packages/):

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://jenkins.mono-project.com/repo/debian sid main" | sudo tee /etc/apt/sources.list.d/mono-jenkins.list
sudo apt-get update
```
[Not yet.](http://lists.ximian.com/pipermail/mono-list/2010-November/046052.html)
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
Mono 2.10 onwards fully supports MVC3 and Razor, albeit the Mono Project cannot currently ship Mono with an open-source implementation of the MVC3/Razor stack included (in the same way as MVC1 and MVC2 are included) just yet. From the [Release Notes](http://www.mono-project.com/Release_Notes_Mono_2.10#ASP.NET_MVC3_Support): > > Although ASP.NET MVC3 is open source > and licensed under the terms of the > MS-PL license, it takes a few > dependencies on new libraries that are > not open source nor are they part of > the Microsoft.NET Framework. > > > At this point we do not have open > source implementations of those > libraries, so we can not ship the full > ASP.NET MVC3 stack with Mono (We still > ship ASP.NET MVC 1 and MVC 2 with Mono > for your deployment enjoyment). > > > This Mono release however has enough > bug fixes and patches that you will be > able to run ASP.NET MVC3 sites with > it. > > >
It looks like we're getting there: <http://gonzalo.name/blog/archive/2011/Jan-21.html> Looks like it isn't in any of the published versions yet, but you can run it from source control.
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
Yes, it does. I have it working with mono on Linux. You need mono 2.10.2+ from the stable sources from ~~<http://ftp.novell.com/pub/mono/sources-stable/>~~ <http://download.mono-project.com/sources/mono/> Then, you need to localcopy these assemblies into your app's bin directory (you take them from Visual Studio on Windows): System.Web.Mvc.dll System.Web.Razor.dll System.Web.WebPages.dll System.Web.WebPages.Deployment.dll System.Web.WebPages.Razor.dll Then, you might have to get rid of the following errors you might have made like this: Error: Storage scopes cannot be created when \_AppStart is executing. Cause: Microsoft.Web.Infrastructure.dll was localcopied to the bin directory. Resolution: Delete Microsoft.Web.Infrastructure.dll **and use the mono version**. Error: Invalid IL code in System.Web.Handlers.ScriptModule:.ctor (): method body is empty. Cause: System.Web.Extensions.dll somehow gets localcopied to the bin directory. Resolution: Delete System.Web.Extensions.dll **and use the mono version**. Error: The classes in the module cannot be loaded. Description: HTTP 500. Error processing request. Cause: System.Web.WebPages.Administration.dll was localcopied to the bin directory. Resolution: Delete System.Web.WebPages.Administration.dll **and unreference it** Error: Could not load type 'System.Web.WebPages.Razor.RazorBuildProvider' from assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Description: HTTP 500. Error processing request. Cause: System.Web.Razor.dll is corrupt or missing ~~(or x64 instead of x32 or vice-versa)~~ ... Resolution: Get an **uncorrupted** version of System.Web.Razor.dll and localcopy to the bin directory **Edit** As of mono 2.12 / MonoDevelop 2.8, all of this is not necessary anymore. Note that on 2.10 (Ubuntu 11.10), one needs to localcopy `System.Web.DynamicData.dll` as well, or else you get an error that only occurs on App\_Start (if you don't do that, you get a YSOD the first time you call a page, but ONLY the first time, because only then App\_Start is called.). **Note** for mono 3.0+ with ASP.NET MVC4: There is a "bug" in the install script. Or rather an incompleteness. mod-mono, fastcgi-mono-server4 and xsp4 won't work correctly. For example: fastcgi-mono-server4 gives you this debug output: ``` [error] 3384#0: *101 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8000" ``` This is, because after the installation of mono3, it uses framework 4.5, but xsp, fastcgi-mono-server4 and mod-mono are not in the 4.5 GAC, only the 4.0 gac. To fix this, use this bash script: ``` #!/bin/bash # Your mono directory #PREFIX=/usr PREFIX=/opt/mono/3.0.3 FILES=('mod-mono-server4' 'fastcgi-mono-server4' 'xsp4') cd $PREFIX/lib/mono/4.0 for file in "${FILES[@]}" do cp "$file.exe" ../4.5 done cd $PREFIX/bin for file in "${FILES[@]}" do sed -ie 's|mono/4.0|mono/4.5|g' $file done ``` And if you use it via FastCGI (e.g. 
nginx), you also need this fix for TransmitFile for the chunked\_encoding bug [Why do I have unwanted extra bytes at the beginning of image?](https://stackoverflow.com/questions/14662795/why-do-i-have-unwanted-extra-bytes-at-the-beginning-of-image/14671753#14671753) (fixed in mono 3.2.3)

**PS:** You can get the .debs for 3.x from here: <https://www.meebey.net/posts/mono_3.0_preview_debian_ubuntu_packages/> or compile them yourselves from github [Installing Mono 3.x in Ubuntu/Debian](https://stackoverflow.com/questions/13365158/installing-mono-3-0) or like this from the stable sources <http://ubuntuforums.org/showthread.php?t=1591370>

**2015**

You can now use the [Xamarin provided packages](http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives)

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
```

If you need the very latest features, you can also fetch the [CI packages (nightly builds, so to say)](http://www.mono-project.com/docs/getting-started/install/linux/ci-packages/):

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://jenkins.mono-project.com/repo/debian sid main" | sudo tee /etc/apt/sources.list.d/mono-jenkins.list
sudo apt-get update
```
It looks like we're getting there: <http://gonzalo.name/blog/archive/2011/Jan-21.html> It isn't in any of the published versions yet, but you can run it from source control.
4,239,645
I tried searching a bit and didn't find an answer. Does the Razor View Engine work in Mono?
2010/11/21
[ "https://Stackoverflow.com/questions/4239645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69742/" ]
Yes, it does. I have it working with mono on Linux.

You need mono 2.10.2+ from the stable sources from ~~<http://ftp.novell.com/pub/mono/sources-stable/>~~ <http://download.mono-project.com/sources/mono/>

Then, you need to localcopy these assemblies into your app's bin directory (you take them from Visual Studio on Windows):

* System.Web.Mvc.dll
* System.Web.Razor.dll
* System.Web.WebPages.dll
* System.Web.WebPages.Deployment.dll
* System.Web.WebPages.Razor.dll

Then, you might have to get rid of the following errors, which typically look like this:

Error: Storage scopes cannot be created when \_AppStart is executing.
Cause: Microsoft.Web.Infrastructure.dll was localcopied to the bin directory.
Resolution: Delete Microsoft.Web.Infrastructure.dll **and use the mono version**.

Error: Invalid IL code in System.Web.Handlers.ScriptModule:.ctor (): method body is empty.
Cause: System.Web.Extensions.dll somehow gets localcopied to the bin directory.
Resolution: Delete System.Web.Extensions.dll **and use the mono version**.

Error: The classes in the module cannot be loaded. Description: HTTP 500. Error processing request.
Cause: System.Web.WebPages.Administration.dll was localcopied to the bin directory.
Resolution: Delete System.Web.WebPages.Administration.dll **and unreference it**.

Error: Could not load type 'System.Web.WebPages.Razor.RazorBuildProvider' from assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Description: HTTP 500. Error processing request.
Cause: System.Web.Razor.dll is corrupt or missing ~~(or x64 instead of x32 or vice-versa)~~ ...
Resolution: Get an **uncorrupted** version of System.Web.Razor.dll and localcopy it to the bin directory.

**Edit** As of mono 2.12 / MonoDevelop 2.8, all of this is not necessary anymore.

Note that on 2.10 (Ubuntu 11.10), one needs to localcopy `System.Web.DynamicData.dll` as well, or else you get an error that only occurs on App\_Start: a YSOD the first time you call a page, but ONLY the first time, because only then is App\_Start called.

**Note** for mono 3.0+ with ASP.NET MVC4: There is a "bug" in the install script, or rather an incompleteness: mod-mono, fastcgi-mono-server4 and xsp4 won't work correctly. For example, fastcgi-mono-server4 behind nginx gives you this error output:

```
[error] 3384#0: *101 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8000"
```

This is because, after installing mono 3, the framework in use is 4.5, but xsp, fastcgi-mono-server4 and mod-mono are only in the 4.0 GAC, not the 4.5 GAC. To fix this, use this bash script:

```
#!/bin/bash

# Your mono directory
#PREFIX=/usr
PREFIX=/opt/mono/3.0.3

FILES=('mod-mono-server4' 'fastcgi-mono-server4' 'xsp4')

cd $PREFIX/lib/mono/4.0
for file in "${FILES[@]}"
do
    cp "$file.exe" ../4.5
done

cd $PREFIX/bin
for file in "${FILES[@]}"
do
    sed -ie 's|mono/4.0|mono/4.5|g' $file
done
```

And if you use it via FastCGI (e.g. nginx), you also need this fix for TransmitFile for the chunked\_encoding bug [Why do I have unwanted extra bytes at the beginning of image?](https://stackoverflow.com/questions/14662795/why-do-i-have-unwanted-extra-bytes-at-the-beginning-of-image/14671753#14671753) (fixed in mono 3.2.3).

**PS:** You can get the .debs for 3.x from here: <https://www.meebey.net/posts/mono_3.0_preview_debian_ubuntu_packages/> or compile them yourself from github [Installing Mono 3.x in Ubuntu/Debian](https://stackoverflow.com/questions/13365158/installing-mono-3-0) or like this from the stable sources <http://ubuntuforums.org/showthread.php?t=1591370>

**2015** You can now use the [Xamarin provided packages](http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives):

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
```

If you need the very latest (or almost latest) features, you can also fetch the [CI packages (nightly builds, so to say)](http://www.mono-project.com/docs/getting-started/install/linux/ci-packages/):

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://jenkins.mono-project.com/repo/debian sid main" | sudo tee /etc/apt/sources.list.d/mono-jenkins.list
sudo apt-get update
```
Mono 2.10 onwards fully supports MVC3 and Razor, although the Mono Project cannot yet ship Mono with an open-source implementation of the MVC3/Razor stack included (in the same way as MVC1 and MVC2 are included). From the [Release Notes](http://www.mono-project.com/Release_Notes_Mono_2.10#ASP.NET_MVC3_Support):

> Although ASP.NET MVC3 is open source and licensed under the terms of the MS-PL license, it takes a few dependencies on new libraries that are not open source nor are they part of the Microsoft.NET Framework.
>
> At this point we do not have open source implementations of those libraries, so we can not ship the full ASP.NET MVC3 stack with Mono (We still ship ASP.NET MVC 1 and MVC 2 with Mono for your deployment enjoyment).
>
> This Mono release however has enough bug fixes and patches that you will be able to run ASP.NET MVC3 sites with it.
22,104,670
I am writing data to a table and allocating a "group-id" for each batch of data that is written. To illustrate, consider the following table.

```
GroupId Value
------- -----
1       a
1       b
1       c
2       a
2       b
3       a
3       b
3       c
3       d
```

In this example, there are three groups of data, each with similar but varying values. How do I query this table to find a group that contains a given set of values? For instance, if I query for (a,b,c) the result should be group 1. Similarly, a query for (b,a) should result in group 2, and a query for (a, b, c, e) should result in the empty set.

I can write a stored procedure that performs the following steps:

* select distinct GroupId from Groups -- and store locally
* for each distinct GroupId: perform a set-difference (`except`) between the input and table values (for the group), and vice versa
* return the GroupId if both set-difference operations produced empty sets

This seems a bit excessive, and I'm hoping to leverage some other commands in SQL to simplify. Is there a simpler way to perform a set-comparison in this context, or to select the group ID that contains the exact input values for the query?
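For reference, the EXCEPT-based idea described above can also be written as a single set-based query rather than a per-group loop. This is only a sketch; the table name `Groups(GroupId, Value)` and the table variable `@Input(Value)` are placeholders, not names from the actual schema.

```
-- Placeholder search set; the data table is assumed to be Groups(GroupId, Value).
declare @Input table (Value varchar(10));
insert @Input (Value) values ('a'),('b'),('c');

-- A group qualifies when nothing in @Input is missing from the group and
-- nothing in the group is missing from @Input (exact set equality).
select g.GroupId
from (select distinct GroupId from Groups) as g
where not exists (select Value from @Input
                  except
                  select Value from Groups where GroupId = g.GroupId)
  and not exists (select Value from Groups where GroupId = g.GroupId
                  except
                  select Value from @Input);
```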
2014/02/28
[ "https://Stackoverflow.com/questions/22104670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/131407/" ]
This is a set-within-sets query. I like to solve it using `group by` and `having`:

```
select groupid
from GroupValues gv
group by groupid
having sum(case when value = 'a' then 1 else 0 end) > 0 and
       sum(case when value = 'b' then 1 else 0 end) > 0 and
       sum(case when value = 'c' then 1 else 0 end) > 0 and
       sum(case when value not in ('a', 'b', 'c') then 1 else 0 end) = 0;
```

The first three conditions in the `having` clause check that each element exists. The last condition checks that there are no other values. This method is quite flexible for various inclusion and exclusion conditions on the values you are looking for.

EDIT: If you want to pass in a list, you can use:

```
with thelist as (
      select 'a' as value union all
      select 'b' union all
      select 'c'
     )
select groupid
from GroupValues gv left outer join
     thelist
     on gv.value = thelist.value
group by groupid
having count(distinct gv.value) = (select count(*) from thelist) and
       count(distinct (case when gv.value = thelist.value then gv.value end)) = count(distinct gv.value);
```

Here the `having` clause counts the number of matching values and makes sure that this is the same size as the list.

EDIT: the query failed to compile because of a missing table alias; updated with the right table alias.
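A minimal way to sanity-check the first query is to run it against the sample rows from the question; the table name `GroupValues` and its columns are assumptions carried over from the answer above, not a given schema.

```
-- Sample data from the question.
create table GroupValues (groupid int, value varchar(10));
insert into GroupValues (groupid, value) values
(1,'a'),(1,'b'),(1,'c'),(2,'a'),(2,'b'),(3,'a'),(3,'b'),(3,'c'),(3,'d');

-- The group by / having query for the set (a, b, c); only groupid = 1 qualifies:
-- group 2 is missing 'c' and group 3 contains the extra value 'd'.
select groupid
from GroupValues
group by groupid
having sum(case when value = 'a' then 1 else 0 end) > 0 and
       sum(case when value = 'b' then 1 else 0 end) > 0 and
       sum(case when value = 'c' then 1 else 0 end) > 0 and
       sum(case when value not in ('a', 'b', 'c') then 1 else 0 end) = 0;
```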
This is kind of ugly, but it works. On larger datasets I'm not sure what performance would look like, but the nested instances of `#GroupValues` key off `GroupID` in the main table so I think as long as you have a good index on `GroupID` it probably wouldn't be too horrible.

```
If Object_ID('tempdb..#GroupValues') Is Not Null Drop Table #GroupValues
Create Table #GroupValues (GroupID Int, Val Varchar(10));
Insert #GroupValues (GroupID, Val)
Values (1,'a'),(1,'b'),(1,'c'),(2,'a'),(2,'b'),(3,'a'),(3,'b'),(3,'c'),(3,'d');

If Object_ID('tempdb..#FindValues') Is Not Null Drop Table #FindValues
Create Table #FindValues (Val Varchar(10));
Insert #FindValues (Val)
Values ('a'),('b'),('c');

Select Distinct gv.GroupID
From (Select Distinct GroupID
      From #GroupValues) gv
Where Not Exists (Select 1
                  From #FindValues fv2
                  Where Not Exists (Select 1
                                    From #GroupValues gv2
                                    Where gv.GroupID = gv2.GroupID
                                      And fv2.Val = gv2.Val))
  And Not Exists (Select 1
                  From #GroupValues gv3
                  Where gv3.GroupID = gv.GroupID
                    And Not Exists (Select 1
                                    From #FindValues fv3
                                    Where gv3.Val = fv3.Val))
```
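If it helps, the other two cases from the question can be checked with the same script by swapping the contents of `#FindValues` before re-running the final `Select`; the expected results follow from the sample data already inserted above.

```
-- Search for (b, a): only group 2 contains exactly {a, b}, so GroupID 2 is returned.
Delete From #FindValues;
Insert #FindValues (Val) Values ('b'),('a');
-- ...re-run the final Select...

-- Search for (a, b, c, e): no group contains 'e', so no rows are returned.
Delete From #FindValues;
Insert #FindValues (Val) Values ('a'),('b'),('c'),('e');
-- ...re-run the final Select...
```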