qid: int64
question: string
date: string
metadata: sequence
response_j: string
response_k: string
74,831
This has been asked and answered before, but the answers are for the old interface (predating Google Drive), so I'm asking again. I have about a hundred docs in my Google Docs view, most of which are junk -- they were apparently shared by some other user, and I really don't want them. The only way I can see to remove them is one at a time, which takes maybe 10-15s each. That's an experience I'm not interested in having. I really only want to keep a handful of the documents, and I'd even be able to live with it if they had to go, too, but I'm finding Docs to be pretty useless with all the noise. **Edit:** Please stop trying to "solve" this problem for me on Google Drive. It's not a problem with Google Drive, and the files in question don't show up there at all. The problem is with Google *Docs* (<http://docs.google.com/>). There's no "inbox", there's no opportunity to "select" multiple files. The only controls are a little "hamburger" menu that lets me view "Docs", "Sheets", or "Slides", open a Settings page, get Help, or visit Drive, and some icons to switch between tiled and list views, change the sort order, or open the file picker, and an icon to create a new file.
2015/03/10
[ "https://webapps.stackexchange.com/questions/74831", "https://webapps.stackexchange.com", "https://webapps.stackexchange.com/users/89163/" ]
Short answer ============ Google Docs (<http://docs.google.com>) doesn't have a way to delete several files at once. For bulk operations, use the Google Drive web UI, Google Drive for PC/Mac, or another tool that uses the Google Drive API or Google Apps Script. Explanation =========== Operations in the new Google Docs "file management" web app are very limited. It can be used to: * Sort files * Filter files by ownership (owned by the user, not owned by the user, or unfiltered) * Open Google Docs files through the file picker * Delete single files * Rename single files It cannot be used to select multiple files and act on all of them at once.
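As an aside, here is a minimal, hypothetical sketch of the Drive API route mentioned above, written in Java against the Drive v3 REST endpoints using the standard `java.net.http` client. The access token, search query, and file ID are placeholders (you would obtain a real OAuth 2.0 token with a Drive scope separately), and trashing only applies to files you own; removing items someone else shared with you needs a different call, so treat this purely as a starting point.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DriveBulkCleanupSketch {
    // Placeholder: supply a real OAuth 2.0 access token with a Drive scope.
    private static final String ACCESS_TOKEN = "ya29.placeholder-token";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // List files shared with me; the raw JSON response is printed here,
        // parse it with your preferred JSON library in real code.
        String query = URLEncoder.encode("sharedWithMe = true", StandardCharsets.UTF_8);
        HttpRequest list = HttpRequest.newBuilder()
                .uri(URI.create("https://www.googleapis.com/drive/v3/files?q=" + query
                        + "&fields=files(id,name)"))
                .header("Authorization", "Bearer " + ACCESS_TOKEN)
                .GET()
                .build();
        System.out.println(client.send(list, HttpResponse.BodyHandlers.ofString()).body());

        // Move a single file (ID taken from the listing above) to the trash.
        HttpRequest trash = HttpRequest.newBuilder()
                .uri(URI.create("https://www.googleapis.com/drive/v3/files/FILE_ID_PLACEHOLDER"))
                .header("Authorization", "Bearer " + ACCESS_TOKEN)
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"trashed\": true}"))
                .build();
        System.out.println(client.send(trash, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```

Looping over every file ID returned by the listing gives the bulk cleanup that the Docs UI lacks.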
Unfortunately you cannot multi-select files from Docs (or Sheets or Slides) and delete them all at the same time. This kind of file management can only be done from Drive, but it sounds like you're having issues with that. 1. Here are some steps to [find missing files in Drive](https://support.google.com/drive/answer/1716222). 2. If you're still having problems, [contact the Drive support team](https://support.google.com/drive/answer/4431192) -- see the "Contact Us" section. 3. Finally, you can always send feedback about this to the team -- same link as above, see the "User Feedback" section.
124,620
What would be a good solution for hooking up the satellite TV box (Dish Network) in my room so I can watch it on my PC and possibly record video from it? Please share the best cards, cables, software, and anything else needed to do this efficiently. I am looking at the WinTV-HVR from Hauppauge. I am not sure which would be best for performance: PCI, PCI Express, or USB 2.0. Also, on the Hauppauge website I saw this note: ***"WinTV-PVR products will not work in PC systems with 4GB or more of memory."*** The new PC I am building will have 12-24 GB of DDR3 RAM; does that mean their products will not work at all with my memory? So confused now!
2010/03/26
[ "https://superuser.com/questions/124620", "https://superuser.com", "https://superuser.com/users/3700/" ]
The fact that they say they "block all unknown threats" makes me believe they are a load of bull. Pardon my American. I would say just run any good program like Malwarebytes and do the standard safety stuff, like not opening attachments that look fishy. If you are really worried about keyloggers, load up a Linux live CD and use that.
The simple answer would be not to enter sensitive information (like logging into your bank account) when you aren't on a trusted computer. Libraries, friends' computers, public terminals, etc. would be the most vulnerable.
144,121
I was asked to split a MySQL database in two; it's a kind of horizontal partition, in which some rows correspond to one site and others correspond to another site. But they want to split it into two DBs on the same MySQL server. I'm no DB expert, but I guess keeping them on the same MySQL server, with the same amount of memory and processor and the same platform, won't improve things. What we're trying to avoid is the "Too many connections" problem.
2010/05/21
[ "https://serverfault.com/questions/144121", "https://serverfault.com", "https://serverfault.com/users/40882/" ]
Running multiple instances on the same server would not be recommended as a solution to a connection limit. If you mean multiple databases on the same instance on the same server, that's primarily an architectural consideration and not necessarily a direct solution to a resource issue. It sounds like your server needs to be tuned. I've provided connection tuning recommendations before; this should help: [How do I set my.cnf in Mysql so that there are no limits to connections?](https://serverfault.com/questions/119468/how-do-i-set-my-cnf-in-mysql-so-that-there-are-no-limits-to-connections/119550#119550)
You are correct, that won't help a connection problem (assuming you mean two DBs and one MySQL binary running, not two separate MySQL installations on the same hardware, which might help but would be a totally backwards way of doing it...)
144,121
I was asked to split a MySQL database in two; it's a kind of horizontal partition, in which some rows correspond to one site and others correspond to another site. But they want to split it into two DBs on the same MySQL server. I'm no DB expert, but I guess keeping them on the same MySQL server, with the same amount of memory and processor and the same platform, won't improve things. What we're trying to avoid is the "Too many connections" problem.
2010/05/21
[ "https://serverfault.com/questions/144121", "https://serverfault.com", "https://serverfault.com/users/40882/" ]
Running multiple instances on the same server would not be recommended as a solution to a connection limit. If you mean multiple databases on the same instance on the same server, that's primarily an architectural consideration and not necessarily a direct solution to a resource issue. It sounds like your server needs to be tuned. I've provided connection tuning recommendations before; this should help: [How do I set my.cnf in Mysql so that there are no limits to connections?](https://serverfault.com/questions/119468/how-do-i-set-my-cnf-in-mysql-so-that-there-are-no-limits-to-connections/119550#119550)
You should investigate why you're running out of connections, first and foremost. Do clients grab a connection and then hold it for a long time? Does each child of a web server have its own connection, or does the server manage a connection pool? I recommend turning on the general query log to investigate, since it records connection details. Note that you probably don't want to leave the general query log on in normal production use! Having multiple databases on the same server doesn't change the connections to the server. Increase the connection limit only if you genuinely have a large number of clients that are known to manage connections properly. In most cases, however, "too many connections" indicates a client-side problem.
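To make the investigation concrete, here is a minimal sketch in Java (JDBC) that compares the current connection count against the configured ceiling. The host, credentials, and the presence of a MySQL JDBC driver on the classpath are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionHeadroomCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; a MySQL JDBC driver must be on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/", "monitor_user", "secret");
             Statement st = conn.createStatement()) {

            // How many client connections are open right now?
            try (ResultSet rs = st.executeQuery("SHOW STATUS LIKE 'Threads_connected'")) {
                rs.next();
                System.out.println("Threads_connected = " + rs.getString(2));
            }

            // What is the configured ceiling?
            // (The general query log mentioned above can be enabled temporarily with
            //  SET GLOBAL general_log = 'ON' if you also want per-connection detail.)
            try (ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'max_connections'")) {
                rs.next();
                System.out.println("max_connections   = " + rs.getString(2));
            }
        }
    }
}
```

Watching these two numbers over time usually makes it clear whether clients are leaking connections or the limit is simply too low for a properly pooled workload.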
144,121
I was asked to split a MySQL database in two; it's a kind of horizontal partition, in which some rows correspond to one site and others correspond to another site. But they want to split it into two DBs on the same MySQL server. I'm no DB expert, but I guess keeping them on the same MySQL server, with the same amount of memory and processor and the same platform, won't improve things. What we're trying to avoid is the "Too many connections" problem.
2010/05/21
[ "https://serverfault.com/questions/144121", "https://serverfault.com", "https://serverfault.com/users/40882/" ]
You should investigate why you're running out of connections, first and foremost. Do clients grab a connection and then hold it for a long time? Does each child of a web server have its own connection, or does the server manage a connection pool? I recommend turning on the general query log to investigate, since it records connection details. Note that you probably don't want to leave the general query log on in normal production use! Having multiple databases on the same server doesn't change the connections to the server. Increase the connection limit only if you genuinely have a large number of clients that are known to manage connections properly. In most cases, however, "too many connections" indicates a client-side problem.
You are correct, that won't help a connection problem (assuming you mean two DBs and one MySQL binary running, not two separate MySQL installations on the same hardware, which might help but would be a totally backwards way of doing it...)
70,637
I'm reading The Intelligent Investor and came across a passage (p. 48) which I felt I couldn't fully grasp. I'd appreciate it if someone could explain it using simpler language: > > Toward the end of 1971 it was possible to obtain 8% taxable interest > on good medium-term corporate bonds, and 5.7% tax-free on good state > or municipal securities. In the shorter-term field the investor could > realize about 6% on U.S. government issues due in five years. In the > latter case the buyer need not be concerned about a possible loss in > market value, since he is sure of full repayment, including the 6% > interest return, at the end of a comparatively short holding period. > The DJIA at its recurrent price level of 900 in 1971 yields only 3.5%. > > > What is the "latter" case here? Is everything after "In the shorter-term" referring to the 5-year government bonds that have 6% interest? Also, are the "state and municipal securities" the same thing as the "government issues"? Contrasting the phrases "shorter-term" and "latter case" seems to indicate that the author is advocating for the 8% corporate bonds in one case and the government issues in the other, but I don't see any mention of the corporate bonds...
2016/09/12
[ "https://money.stackexchange.com/questions/70637", "https://money.stackexchange.com", "https://money.stackexchange.com/users/48446/" ]
*Toward the end of 1971 it was possible to obtain 8% taxable interest on good medium-term corporate bonds, and 5.7% tax-free on good state or municipal securities. In the shorter-term field the investor could realize about 6% on U.S. government issues due in five years. In the latter case the buyer need not be concerned about a possible loss in market value, since he is sure of full repayment, including the 6% interest return, at the end of a comparatively short holding period. The DJIA at its recurrent price level of 900 in 1971 yields only 3.5%.* > > What is the "latter" case here? Is everything after "In the > shorter-term" referring to the 5 year government bonds that have 6% > interest? > > > Yes, but the quote refers to US federal government bonds. **Also, are the "state and municipal securities" the same thing as the "government issues"?** "State and municipal securities" are not the same thing as **US** government securities. State and local securities are bonds issued by states, counties, cities, and towns. **Contrasting the phrases "shorter-term" and "latter case" seems to indicate that the author is advocating for the 8% corporate bonds in one case and the government issues in the other, but I don't see any mention of the corporate bonds...** He is saying that the US government 5-year bonds are rock solid; there is no fear of losing money. With the corporate bonds there is no guarantee of 100% repayment, which is why the rate is higher.
I agree that the paragraph is not well written. It could easily have been put into a more readable table. When he mentions the 6% on U.S. government issues, he says the buyer need not be concerned about loss, as the US government is considered reliable and will repay the investor's capital with certainty. For example, if the US government were running out of money and couldn't afford to repay the money it owes on bonds, it would simply raise taxes to cover its debts. Compare this to a private company: if a private company raises its customer fees to try to pay off its debts, its customers will just get angry and leave (switch to another provider), resulting in a spiral of money problems in which the company has fewer customers paying its higher service fees. He mentions the 5.7% tax-free return on good state or municipal securities, but doesn't discuss this further in the paragraph. State or municipal securities are also considered highly reliable for the same reason that the US government is reliable: the state or city can just raise land taxes, parking fees, fees to use bathrooms, museums, and so forth. And the people have no option but to pay, unless they choose to move to another state.
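One way to see how the 5.7% tax-free yield stacks up against the 8% taxable yield is to compare after-tax returns. Assuming, purely for illustration, a marginal tax rate of 30% (a figure not given in the book), the corporate bond nets

$$8\% \times (1 - 0.30) = 5.6\% \;<\; 5.7\%\ \text{(tax-free)},$$

so an investor taxed at roughly that rate or above keeps slightly more from the tax-free municipal security, while an investor in a lower bracket nets more from the taxable corporate bond.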
36,269,673
I know this might sound *extremely* dumb, but I can't find the console application template in Visual Studio 2015. When installing, I clicked Custom and selected everything so I wouldn't have to go through the hassle of adding files later on, so of course I have Visual C++ installed. But for some reason there's no console application template. Any ideas? I'm using the Community edition.
2016/03/28
[ "https://Stackoverflow.com/questions/36269673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6063262/" ]
It's located under *Templates->Visual C++->Win32->Win32 Console Application*. If you cannot find it there, it probably means you did not install the Windows SDK. Go to the *Control Panel->Programs and Features*, select *Microsoft Visual Studio Community 2015*, right click on it and select *Change*. The SDK is hidden quite well in the list of installable features. You can find it under *Windows and Web Development->Universal Windows App Development Tools*. Make sure the *Windows 10 SDK* is selected. By default it's not and I tend to forget to check it when installing VS2015 because it's so damn well hidden.
Solved! This generally happens on Windows 10 or XP: when you go to the Visual C++ templates there is an option to install Windows support for them. Let it download, and bingo, next time you will see the templates there. Install these two: [![enter image description here](https://i.stack.imgur.com/cTSOd.jpg)](https://i.stack.imgur.com/cTSOd.jpg) After installation: [![enter image description here](https://i.stack.imgur.com/fHcbR.jpg)](https://i.stack.imgur.com/fHcbR.jpg)
36,269,673
I know this might sound *extremely* dumb, but I can't find the console application template in Visual Studio 2015. When installing, I clicked Custom and selected everything so I wouldn't have to go through the hassle of adding files later on, so of course I have Visual C++ installed. But for some reason there's no console application template. Any ideas? I'm using the Community edition.
2016/03/28
[ "https://Stackoverflow.com/questions/36269673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6063262/" ]
It's located under *Templates->Visual C++->Win32->Win32 Console Application*. If you cannot find it there, it probably means you did not install the Windows SDK. Go to the *Control Panel->Programs and Features*, select *Microsoft Visual Studio Community 2015*, right click on it and select *Change*. The SDK is hidden quite well in the list of installable features. You can find it under *Windows and Web Development->Universal Windows App Development Tools*. Make sure the *Windows 10 SDK* is selected. By default it's not and I tend to forget to check it when installing VS2015 because it's so damn well hidden.
I had the same problem. This is how it looked before I fixed it: [![with error](https://i.stack.imgur.com/Ywzme.png)](https://i.stack.imgur.com/Ywzme.png) Check that you have a working internet connection, then double-click the entries and install both. After that you can create a C++ project. This is the new appearance: [![After solve problem](https://i.stack.imgur.com/SZpaO.png)](https://i.stack.imgur.com/SZpaO.png) Thank you.
36,269,673
I know this might sound *extremely* dumb, but I can't find the console application template in Visual Studio 2015. When installing, I clicked Custom and selected everything so I wouldn't have to go through the hassle of adding files later on, so of course I have Visual C++ installed. But for some reason there's no console application template. Any ideas? I'm using the Community edition.
2016/03/28
[ "https://Stackoverflow.com/questions/36269673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6063262/" ]
It's located under *Templates->Visual C++->Win32->Win32 Console Application*. If you cannot find it there, it probably means you did not install the Windows SDK. Go to the *Control Panel->Programs and Features*, select *Microsoft Visual Studio Community 2015*, right click on it and select *Change*. The SDK is hidden quite well in the list of installable features. You can find it under *Windows and Web Development->Universal Windows App Development Tools*. Make sure the *Windows 10 SDK* is selected. By default it's not and I tend to forget to check it when installing VS2015 because it's so damn well hidden.
[![enter image description here](https://i.stack.imgur.com/pz1Qt.png)](https://i.stack.imgur.com/pz1Qt.png) I was unable to download it using the VS2015 installer; even when set to download from the internet, it fails. It's better to download winsdksetup.exe from <https://developer.microsoft.com/es-es/windows/downloads/sdk-archive/>
36,269,673
I know this might sound *extremely* dumb, but I can't find the console application template in Visual Studio 2015. When installing, I clicked Custom and selected everything so I wouldn't have to go through the hassle of adding files later on, so of course I have Visual C++ installed. But for some reason there's no console application template. Any ideas? I'm using the Community edition.
2016/03/28
[ "https://Stackoverflow.com/questions/36269673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6063262/" ]
Solved! This generally happens on Windows 10 or XP: when you go to the Visual C++ templates there is an option to install Windows support for them. Let it download, and bingo, next time you will see the templates there. Install these two: [![enter image description here](https://i.stack.imgur.com/cTSOd.jpg)](https://i.stack.imgur.com/cTSOd.jpg) After installation: [![enter image description here](https://i.stack.imgur.com/fHcbR.jpg)](https://i.stack.imgur.com/fHcbR.jpg)
I had the same problem. This is how it looked before I fixed it: [![with error](https://i.stack.imgur.com/Ywzme.png)](https://i.stack.imgur.com/Ywzme.png) Check that you have a working internet connection, then double-click the entries and install both. After that you can create a C++ project. This is the new appearance: [![After solve problem](https://i.stack.imgur.com/SZpaO.png)](https://i.stack.imgur.com/SZpaO.png) Thank you.
36,269,673
I know this might sound *extremely* dumb, but I can't find the console application template in Visual Studio 2015. When installing, I clicked Custom and selected everything so I wouldn't have to go through the hassle of adding files later on, so of course I have Visual C++ installed. But for some reason there's no console application template. Any ideas? I'm using the Community edition.
2016/03/28
[ "https://Stackoverflow.com/questions/36269673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6063262/" ]
Solved! This generally happens on Windows 10 or XP: when you go to the Visual C++ templates there is an option to install Windows support for them. Let it download, and bingo, next time you will see the templates there. Install these two: [![enter image description here](https://i.stack.imgur.com/cTSOd.jpg)](https://i.stack.imgur.com/cTSOd.jpg) After installation: [![enter image description here](https://i.stack.imgur.com/fHcbR.jpg)](https://i.stack.imgur.com/fHcbR.jpg)
[![enter image description here](https://i.stack.imgur.com/pz1Qt.png)](https://i.stack.imgur.com/pz1Qt.png) I was unable to download it using the VS2015 installer; even when set to download from the internet, it fails. It's better to download winsdksetup.exe from <https://developer.microsoft.com/es-es/windows/downloads/sdk-archive/>
36,269,673
I know this might sound *extremely* dumb, but I can't find the console application template in Visual Studio 2015. When installing, I clicked Custom and selected everything so I wouldn't have to go through the hassle of adding files later on, so of course I have Visual C++ installed. But for some reason there's no console application template. Any ideas? I'm using the Community edition.
2016/03/28
[ "https://Stackoverflow.com/questions/36269673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6063262/" ]
I had the same problem. This is how it looked before I fixed it: [![with error](https://i.stack.imgur.com/Ywzme.png)](https://i.stack.imgur.com/Ywzme.png) Check that you have a working internet connection, then double-click the entries and install both. After that you can create a C++ project. This is the new appearance: [![After solve problem](https://i.stack.imgur.com/SZpaO.png)](https://i.stack.imgur.com/SZpaO.png) Thank you.
[![enter image description here](https://i.stack.imgur.com/pz1Qt.png)](https://i.stack.imgur.com/pz1Qt.png) I was unable to download it using the VS2015 installer; even when set to download from the internet, it fails. It's better to download winsdksetup.exe from <https://developer.microsoft.com/es-es/windows/downloads/sdk-archive/>
51,099,571
[I know this same question already exists](https://stackoverflow.com/questions/1922816/are-there-any-free-alternatives-to-visualsvn), but since it's from 2009, I want to ask it again to find out whether there is any other good alternative. In the question I referenced, they mention: * **AnkhSVN:** Not working in the latest Visual Studio 2017 versions. * **TortoiseSVN:** Not integrated with Visual Studio. Also, I am in an Active Directory domain, so the free license is inapplicable to me.
2018/06/29
[ "https://Stackoverflow.com/questions/51099571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5682825/" ]
Looks like someone took over the old code and got it working with 2019! <https://marketplace.visualstudio.com/items?itemName=PhilJollans.AnkhSVN2019&ssr=false#overview>
If your environment does not allow using the free VisualSVN Community license, and you don't have a paid license, you can try [integrating TortoiseSVN into Visual Studio manually](https://tortoisesvn.net/visualstudio.html). Note that this manual setup enables only a very basic integration, because it only adds SVN commands to the menus. The VisualSVN and AnkhSVN plug-ins offer much deeper and more complete integration of SVN into Visual Studio. By the way, it is surprising to find out that AnkhSVN does not work reliably with Visual Studio 2017. However, it does seem to be true that there are some issues and that VS2017 may crash when working with AnkhSVN. There are other reports of similar behavior on Visual Studio Marketplace: <https://marketplace.visualstudio.com/items?itemName=vs-publisher-303797.AnkhSVN-SubversionSupportforVisualStudio#review-details>. You may want to report this to the AnkhSVN developers.
92,684
I visited my local pig village during the day to find the last pig man being slaughtered by three werepigs. I have no idea where they came from. I ran for it but couldn't get away from all three. I've seen them transform when fed monster meat and during the full moon, but don't know of any way for three to appear in a pig village during the day, but obviously want to avoid similar confrontations in the future. What caused the Werepigs to appear?
2012/11/11
[ "https://gaming.stackexchange.com/questions/92684", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/20800/" ]
[It is a known bug](http://forums.kleientertainment.com/showthread.php?1413-Werepig-Glitches&p=11723&viewfull=1#post11723), and [they're working to fix it](http://forums.kleientertainment.com/showthread.php?1881-Gameplay-pigs-sometimes-don-t-transform-back-when-day-comes&p=14130&viewfull=1#post14130). Basically, if the werepigs are busy (probably killing something) at the moment they're supposed to turn back into normal pigmen, they'll stay in their current condition, attacking on sight anything near them, be they spiders, tentacles, Wilson, or even their own kin.
Well, sometimes werepigs will stay in that form for the next one to three days. They will (usually!) be found sleeping outdoors when they transform back into pigmen. If they are chasing you and transform back, they will usually fall asleep and then, the next day, run back to their village. Hope this is verified information!
103,155
I'm getting a little annoyed of having to wrap primitive types in wrapper objects to store them in collection data structures (sets, maps, lists, etc.) in languages like Java and Objective C. I'd really like to have, say, a Map data structure that works the same way whether I'm mapping NSNumbers to Strings or integers to doubles or integers to MyBigCustomObject\*. My understanding is that the reason collection data structures in these languages require wrapping in an object is so it can just always assume that the number given is a pointer - I actually *can* make a NSDictionary with [NSDictionary dictionaryWithObject: MyCustomObject forKey: 1], but it will treat the 1 as a pointer instead of an integer, try to access memory address 1 and segfault. But I don't see any reason why you couldn't make a collection data structure that keeps track of the *type* of keys and values that wouldn't have this problem. Even a relatively inflexible data structure like a Map of specifically-ints to specifically-pointers-to-objects would be a decently common use case (I certainly could use one, I'm doing a lot of work recently that involves indexing objects by an integer ID). Is there a reason why this isn't a common paradigm in languages that have an Object/primitive type distinction?
2011/08/23
[ "https://softwareengineering.stackexchange.com/questions/103155", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35093/" ]
This is basically what happens in C++: the language always knows whether something is a pointer or not. Quite simply, there is no fundamental reason that this is required at all; Java and other languages just do it because they enjoy restricting programmer freedom.
In Java, this does seem to be a common paradigm. One can find plenty of third-party libraries for collections like the ones you refer to. Some of these were discussed on SO: [What is the most efficient Java Collections library?](https://stackoverflow.com/questions/629804/what-is-the-most-efficient-java-collections-library) Searching the web for *java library high performance collections* shows even more, e.g. [HPPC](http://labs.carrotsearch.com/hppc.html) * The reason why such libraries haven't made it into the standard Java API (yet?) is most likely that they did not gain enough popularity to justify inclusion (yet?). Another possible reason is that there seems to be no common agreement (yet?) on how to design a standard API for these collections. As Joshua Bloch [said](http://dl.acm.org/citation.cfm?id=1176622), *Public APIs, like diamonds, are forever. You have one chance to get it right so give it your best.*
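To make the boxing overhead concrete, here is a small self-contained Java sketch (the class and field names are made up for illustration): with the standard collections, every `int` key has to be wrapped in an `Integer` object, which is exactly the per-entry cost that primitive-specialized libraries such as HPPC avoid.

```java
import java.util.HashMap;
import java.util.Map;

public class BoxingSketch {
    // A stand-in for "MyBigCustomObject" from the question.
    static class Widget {
        final String label;
        Widget(String label) { this.label = label; }
    }

    public static void main(String[] args) {
        // The key type parameter must be Integer, not int: primitives are not allowed here.
        Map<Integer, Widget> byId = new HashMap<>();

        int id = 42;
        byId.put(id, new Widget("example"));   // 'id' is autoboxed into an Integer object
        Widget w = byId.get(id);               // boxed again to perform the lookup
        System.out.println(w.label);

        // Each distinct key is (potentially) a separate heap-allocated Integer,
        // which is the wrapper overhead a map keyed directly by int would eliminate.
    }
}
```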
103,155
I'm getting a little annoyed of having to wrap primitive types in wrapper objects to store them in collection data structures (sets, maps, lists, etc.) in languages like Java and Objective C. I'd really like to have, say, a Map data structure that works the same way whether I'm mapping NSNumbers to Strings or integers to doubles or integers to MyBigCustomObject\*. My understanding is that the reason collection data structures in these languages require wrapping in an object is so it can just always assume that the number given is a pointer - I actually *can* make a NSDictionary with [NSDictionary dictionaryWithObject: MyCustomObject forKey: 1], but it will treat the 1 as a pointer instead of an integer, try to access memory address 1 and segfault. But I don't see any reason why you couldn't make a collection data structure that keeps track of the *type* of keys and values that wouldn't have this problem. Even a relatively inflexible data structure like a Map of specifically-ints to specifically-pointers-to-objects would be a decently common use case (I certainly could use one, I'm doing a lot of work recently that involves indexing objects by an integer ID). Is there a reason why this isn't a common paradigm in languages that have an Object/primitive type distinction?
2011/08/23
[ "https://softwareengineering.stackexchange.com/questions/103155", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35093/" ]
> > But I don't see any reason why you couldn't make a collection data structure that keeps track of the type of keys and values that wouldn't have this problem. > > > This is exactly the issue in those languages. Due to the way that generics are implemented in Java and Objective-C, this information is no longer available at runtime. It is conceivable, though, to implement a data structure that takes instances of `Type` in its constructor and uses that information to store the data. It sounds like a lot of work, though, and the API gets messy. Question to the community: does an implementation of such a data structure already exist?
In Java, this does seem to be a common paradigm. One can find plenty of third-party libraries for collections like the ones you refer to. Some of these were discussed on SO: [What is the most efficient Java Collections library?](https://stackoverflow.com/questions/629804/what-is-the-most-efficient-java-collections-library) Searching the web for *java library high performance collections* shows even more, e.g. [HPPC](http://labs.carrotsearch.com/hppc.html) * The reason why such libraries haven't made it into the standard Java API (yet?) is most likely that they did not gain enough popularity to justify inclusion (yet?). Another possible reason is that there seems to be no common agreement (yet?) on how to design a standard API for these collections. As Joshua Bloch [said](http://dl.acm.org/citation.cfm?id=1176622), *Public APIs, like diamonds, are forever. You have one chance to get it right so give it your best.*
103,155
I'm getting a little annoyed of having to wrap primitive types in wrapper objects to store them in collection data structures (sets, maps, lists, etc.) in languages like Java and Objective C. I'd really like to have, say, a Map data structure that works the same way whether I'm mapping NSNumbers to Strings or integers to doubles or integers to MyBigCustomObject\*. My understanding is that the reason collection data structures in these languages require wrapping in an object is so it can just always assume that the number given is a pointer - I actually *can* make a NSDictionary with [NSDictionary dictionaryWithObject: MyCustomObject forKey: 1], but it will treat the 1 as a pointer instead of an integer, try to access memory address 1 and segfault. But I don't see any reason why you couldn't make a collection data structure that keeps track of the *type* of keys and values that wouldn't have this problem. Even a relatively inflexible data structure like a Map of specifically-ints to specifically-pointers-to-objects would be a decently common use case (I certainly could use one, I'm doing a lot of work recently that involves indexing objects by an integer ID). Is there a reason why this isn't a common paradigm in languages that have an Object/primitive type distinction?
2011/08/23
[ "https://softwareengineering.stackexchange.com/questions/103155", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35093/" ]
This is basically what happens in C++: the language always knows whether something is a pointer or not. Quite simply, there is no fundamental reason that this is required at all; Java and other languages just do it because they enjoy restricting programmer freedom.
> > But I don't see any reason why you couldn't make a collection data structure that keeps track of the type of keys and values that wouldn't have this problem. > > > This is exactly the issue in those languages. Due to the way that generics are implemented in Java and Objective-C, this information is no longer available at runtime. It is conceivable, though, to implement a data structure that takes instances of `Type` in its constructor and uses that information to store the data. It sounds like a lot of work, though, and the API gets messy. Question to the community: does an implementation of such a data structure already exist?
4,121,297
Can anyone recommend a really advanced book on asynchronous programming in JavaScript? Something that assumes the reader already knows how to do things in JavaScript, but is looking for in-depth analysis on finding the best way. Something that's not about performance tweaks and hacks, but a serious book on architecture. What I'm looking for would go into things like queueing asynchronous actions, unifying error handling between regular synchronous exceptions and onError asynchronous callbacks, chaining asynchronous calls while ensuring cleanup operations required by the original call occur, etc. And the question of how to best manage state when 10 Ajax requests are running at the same time could be a book in itself. Outside of JavaScript books, any recommendations for a favorite book on such patterns in general?
2010/11/08
[ "https://Stackoverflow.com/questions/4121297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/155979/" ]
I also recommend [Ajax Patterns and Best Practices](https://rads.stackoverflow.com/amzn/click/com/1590596161) along with the other books recommended. [RESTful Web Services](http://oreilly.com/catalog/9780596529260) also describes how to introduce loose coupling between the client and server, as well as the reasons why all requests to the server should be stateless.
[Ajax Design Patterns](http://oreilly.com/catalog/9780596101800) is quite a good book. It's a bit dated and the JavaScript code style is not to my liking, but the techniques and design patterns themselves are still highly relevant.
4,121,297
Can anyone recommend a really advanced book on asynchronous programming in JavaScript? Something that assumes the reader already knows how to do things in JavaScript, but is looking for in-depth analysis on finding the best way. Something that's not about performance tweaks and hacks, but a serious book on architecture. What I'm looking for would go into things like queueing asynchronous actions, unifying error handling between regular synchronous exceptions and onError asynchronous callbacks, chaining asynchronous calls while ensuring cleanup operations required by the original call occur, etc. And the question of how to best manage state when 10 Ajax requests are running at the same time could be a book in itself. Outside of JavaScript books, any recommendations for a favorite book on such patterns in general?
2010/11/08
[ "https://Stackoverflow.com/questions/4121297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/155979/" ]
I also recommend [Ajax Patterns and Best Practices](https://rads.stackoverflow.com/amzn/click/com/1590596161) along with the other books recommended. [RESTful Web Services](http://oreilly.com/catalog/9780596529260) also describes how to introduce loose coupling between the client and server, as well as the reasons why all requests to the server should be stateless.
I'd second Ajax Design Patterns. Also from O'Reilly, and a little more current: [Even Faster Web Sites](http://oreilly.com/catalog/9780596522315/). Not apparent from the title, but this one covers a lot of advanced Ajax/asynchronous techniques.
19,704,544
I'm a beginner programmer in my first year of Computer Science. I'm curious about 32-bit and 64-bit systems and how they affect developing software. When I download software I sometimes need to choose between the two, while other software only has a 32-bit version. * Are there different ways of programming for a 64-bit system? * Is it compiled in the same way? * What are the main benefits of a separate 64-bit app? Cheers
2013/10/31
[ "https://Stackoverflow.com/questions/19704544", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2261532/" ]
Generally speaking, the main benefit of a 64-bit application is that it has access to more memory; with a 32-bit pointer you can address only 4 GB of memory. Most modern compilers have an option to compile either 32-bit or 64-bit code. Coding for 32-bit and 64-bit is the same unless you are dealing with huge in-memory objects, where you would specifically need 64-bit.
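The 4 GB figure follows directly from the pointer width: a 32-bit pointer can distinguish at most

$$2^{32}\ \text{byte addresses} = 4{,}294{,}967{,}296\ \text{bytes} = 4\ \text{GiB}.$$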
An interesting fact/example is that Unix time is stored as a single number: the number of seconds elapsed since January 1st, 1970. This number will soon outgrow a signed 32-bit integer, so eventually we will have to upgrade all of our systems to 64-bit so they can hold such a large number.
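A quick way to see when the signed 32-bit counter runs out is to feed its maximum value into `java.time`; this small check is only an illustration and not part of the original answer.

```java
import java.time.Instant;

public class Year2038 {
    public static void main(String[] args) {
        // Largest value a signed 32-bit time_t can hold: 2^31 - 1 seconds after the epoch.
        long maxSigned32 = Integer.MAX_VALUE;            // 2,147,483,647
        Instant rollover = Instant.ofEpochSecond(maxSigned32);
        System.out.println(rollover);                    // prints 2038-01-19T03:14:07Z
    }
}
```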
19,704,544
I'm a beginner programmer in my first year of Computer Science. I'm curious about 32-bit and 64-bit systems and how they affect developing software. When I download software I sometimes need to choose between the two, while other software only has a 32-bit version. * Are there different ways of programming for a 64-bit system? * Is it compiled in the same way? * What are the main benefits of a separate 64-bit app? Cheers
2013/10/31
[ "https://Stackoverflow.com/questions/19704544", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2261532/" ]
Generally speaking, the main benefit of a 64-bit application is that it has access to more memory; with a 32-bit pointer you can address only 4 GB of memory. Most modern compilers have an option to compile either 32-bit or 64-bit code. Coding for 32-bit and 64-bit is the same unless you are dealing with huge in-memory objects, where you would specifically need 64-bit.
> > Are there different ways of programming for a 64 bit system? > > > Yes and no. No, in the sense that most of the time you should be able to write platform-independent code, even if you are coding in a language like C. Yes, in the sense that having knowledge of the underlying architecture (not just the word size!) helps to speed up critical parts of your program. For instance, you may be able to use special instructions available. > > Is it compiled in the same way? > > > Again, yes and no. Compilers for systems languages work in similar ways for all architectures, but of course, the details differ a bit. For instance, the compiler will use knowledge about your architecture to generate as efficient code as possible for it, but also has to take care of differences between architectures and other details, like calling conventions. > > What are the main benefits of a separate 64 bit app? > > > I assume you are asking about the usual desktop CPUs, i.e. x86 architecture, but note that there are other architectures with word sizes ranging from 8-bit to 128-bit. Typically, people would compile a program targeting a single architecture (i.e. for a given machine), and that's about it. However, x86 is a bit special, in that the CPU can operate in different modes, each with a different word size: 16-bit, 32-bit and 64-bit (among other differences). Effectively, they implement several ISAs (Instruction Set Architectures) in a single CPU. This was done to preserve backwards compatibility, and it is key to their commercial success. Consider that, when people bought the first 64-bit capable CPUs, it was most likely that they were still using 32-bit operating systems and software, so they really needed the compatibility. The other options are emulating it (poor performance) or making sure all the popular customer software has been ported (hard to achieve in ecosystems like Windows with many independent, proprietary vendors). There are several benefits of 64-bit x86 over 32-bit x86: more addressable memory, more integer registers, twice the XMM registers, a better calling convention, guaranteed SSE2... The only downside is using 64-bit pointers, which implies more memory and cache usage. In practice, many programs can expect to be slightly faster in x64 (e.g. 10%), but pointer-heavy programs may even see a decrease in performance.
19,704,544
I'm a beginner programmer in my first year of Computer Science. I'm curious about 32-bit and 64-bit systems and how they affect developing software. When I download software I sometimes need to choose between the two, while other software only has a 32-bit version. * Are there different ways of programming for a 64-bit system? * Is it compiled in the same way? * What are the main benefits of a separate 64-bit app? Cheers
2013/10/31
[ "https://Stackoverflow.com/questions/19704544", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2261532/" ]
> > Are there different ways of programming for a 64 bit system? > > > Yes and no. No, in the sense that most of the time you should be able to write platform-independent code, even if you are coding in a language like C. Yes, in the sense that having knowledge of the underlying architecture (not just the word size!) helps to speed up critical parts of your program. For instance, you may be able to use special instructions available. > > Is it compiled in the same way? > > > Again, yes and no. Compilers for systems languages work in similar ways for all architectures, but of course, the details differ a bit. For instance, the compiler will use knowledge about your architecture to generate as efficient code as possible for it, but also has to take care of differences between architectures and other details, like calling conventions. > > What are the main benefits of a separate 64 bit app? > > > I assume you are asking about the usual desktop CPUs, i.e. x86 architecture, but note that there are other architectures with word sizes ranging from 8-bit to 128-bit. Typically, people would compile a program targeting a single architecture (i.e. for a given machine), and that's about it. However, x86 is a bit special, in that the CPU can operate in different modes, each with a different word size: 16-bit, 32-bit and 64-bit (among other differences). Effectively, they implement several ISAs (Instruction Set Architectures) in a single CPU. This was done to preserve backwards compatibility, and it is key to their commercial success. Consider that, when people bought the first 64-bit capable CPUs, it was most likely that they were still using 32-bit operating systems and software, so they really needed the compatibility. The other options are emulating it (poor performance) or making sure all the popular customer software has been ported (hard to achieve in ecosystems like Windows with many independent, proprietary vendors). There are several benefits of 64-bit x86 over 32-bit x86: more addressable memory, more integer registers, twice the XMM registers, a better calling convention, guaranteed SSE2... The only downside is using 64-bit pointers, which implies more memory and cache usage. In practice, many programs can expect to be slightly faster in x64 (e.g. 10%), but pointer-heavy programs may even see a decrease in performance.
An interesting fact/example is that Unix time is stored as a single number: the number of seconds elapsed since January 1st, 1970. This number will soon outgrow a signed 32-bit integer, so eventually we will have to upgrade all of our systems to 64-bit so they can hold such a large number.
99,060
I joined the armature for the eye animation with the armature for the head animation using Ctrl+J. Right after that, the head becomes very small, rotates 90 degrees, and jumps into one of the eyes. If I hide the head mesh, then the eyes become very big, move above the head, and overlap. I managed to partially solve the problem by unparenting the meshes from the armatures, then joining the armatures with Ctrl+J, and then parenting the eyes back. It all worked OK, but I can't parent the head back, as it gets deformed in a very strange way.
2018/01/21
[ "https://blender.stackexchange.com/questions/99060", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/51136/" ]
I had a look at the file. This is caused by your shape keys that use vertex groups. Weighting also assigns vertex groups, and that's what is conflicting here when you skin your head mesh. When I remove the vertex groups for the shape keys before skinning, everything works well. So, first skin the head mesh, then create the shape keys for it.
After joining armatures, the bone matrices are rewritten relative to the new armature, and every child object or weight-painted object reverts to its local coordinates. So the cleanest way is to first remove any parenting or weight painting from the armature being joined, and after joining, redo the skinning and parenting.
384,780
I am finishing up a Drupal site and I have attached a Word document to one of my page nodes. I want anonymous users to be able to download it, but they don't even see the document. I do as a logged-in user. Any ideas?
2008/12/21
[ "https://Stackoverflow.com/questions/384780", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3208/" ]
You need to give anonymous users permission to view uploaded files. I'm guessing that you used the upload module. The permissions page is at /admin/user/permissions and under the group "upload module" you are looking for "view uploaded files". -Ed
Assuming you only want to expose this one particular file to anonymous users, you can do it without touching permissions, which would affect your whole site. The simplest way, assuming you have configured downloads as "public" instead of "private", is to just add a link to the document in your page node, like <a href="/files/myfile.doc">Download myfile.doc</a>
1,800
Does the mail app in iPhones still use POP for Gmail? Or is it updated to use IMAP?
2010/07/21
[ "https://apple.stackexchange.com/questions/1800", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/-1/" ]
If you pick the Gmail option when setting up your account, it will use IMAP. However, if you set it up [as described here](http://www.google.com/support/mobile/bin/answer.py?hl=en&answer=138740), it uses ActiveSync. Doing it this way gives you push notifications as well as contact and calendar sync.
The "Gmail" option for iPhone mail configuration has always used IMAP. I would, however, recommend configuring it as an Exchange account, especially now that iOS officially supports multiple Exchange accounts (previously the domain of jailbreakers only). Doing so will push emails, contacts, and calendars directly to the phone, though you won't be able to sync Notes like you can when using the "Gmail" option.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
It might be a business decision. It might be a convenience decision. It really depends; the only thing you can say for sure is that it's the decision of the people involved in making the game (developer, publisher, platform vendor, et cetera). You're *assuming* that "iPhone and Android are sending the data to a single service," which isn't always the case. The iOS APIs for leaderboards ([Game Center](https://developer.apple.com/game-center/)) send data to Apple. [Google's Leaderboards service](https://developers.google.com/games/services/web/api/leaderboards) sends data to Google. A developer may choose to use the platform-provided leaderboard API for his or her game because it's easier or cheaper for them (despite the fact that they'll need to build an implementation against both APIs). They may also choose to do it because those APIs offer better device integration than third-party APIs might, and they feel this creates a better experience for the player. There are various third-party leaderboard APIs a game might adopt. Some of these may have additional cost (either as retail products or in the form of requiring the developer to provide and pay for the hosting of the leaderboard databases). They may not look or feel as integrated with the device, either. Those potential disadvantages have to be weighed against the advantage of only having to write against the API once (in theory), reducing development time. The particular balance of all those factors (and more) will play into the decision of which particular method to use for leaderboards; it basically depends on the developer's needs and wants.
If you use Game Center's built-in leaderboard functionality to avoid having to implement leaderboards yourself, then you end up with separate leaderboards when you make your Android version. For me, it's a decision that goes like this: "I don't want to reimplement a leaderboard server just so I can have cross-platform leaderboards."
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
Different platforms may result in different scores even with the same skill, for example because you were forced to have fewer enemies in the Android version due to performance issues. This (along with the control difference) is why console and PC games generally never play together (either in multiplayer or through leaderboards).
If you use Game Center's built-in leaderboard functionality to avoid having to implement leaderboards yourself, then you end up with separate leaderboards when you make your Android version. For me, it's a decision that goes like this: "I don't want to reimplement a leaderboard server just so I can have cross-platform leaderboards."
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
If you use Game Center's built-in leaderboard functionality to avoid having to implement leaderboards yourself, then you end up with separate leaderboards when you make your Android version. For me, it's a decision that goes like this: "I don't want to reimplement a leaderboard server just so I can have cross-platform leaderboards."
New member, can't comment, but I agree with these answers. As a gamer I would like to see my "global" rank in a game that is cross-platform. However, as a software developer I understand why they are separate. The first thing that comes to mind is the fact that they are different companies (what @JoshPetrie mentioned). Microsoft has its own servers and databases while Sony has theirs. Since they are separate companies with separate data hubs, there will naturally be separate rankings. They would have to work together and share a system that houses both datasets. That doesn't seem likely, because it would mean that Microsoft has access to Sony's data and vice versa, and nothing is more personal to a company than its data. Let's assume, however, that this were true and there was a shared database where all the rankings live together. I bet the rankings wouldn't be as evenly distributed as one might think, and it would be due to hardware differences, which result in a big difference in a user's ability to master the game. Each gaming console is a different environment, or medium, in which the user can play the game. Based on its design, the environment creates limitations on "how well" the user can play with respect to users on a different medium. The biggest example I can think of (and it was already mentioned in another answer) is the difference between console and PC. Let's use Team Fortress 2 as an example. The console version is played with a handheld controller. Usually the controller is operated with both thumbs and index fingers (4 digits, at least how I play). The PC version uses the keyboard with W-A-S-D movement, other keys for functions, and the mouse (6 digits used at once, by my count). Not only are the "controllers" different, but, like @ratchetfreak said, each system has its own degree of performance. So the exact same game could be slow and laggy on Xbox, but very fluid on PS or PC. Also, PCs can be upgraded, as opposed to consoles, which are static until the new version comes out; at that point the leaderboards would be different anyway, since a newer version of the game is required (which means a different server and database and so on...). On top of these differences, you also have system settings like mouse speed, which can be increased beyond joystick sensitivity (and which, for me at least, makes up the biggest advantage of PC gaming). Many other factors are involved, I'm sure, but these were the first things that came to mind when I saw this question. Sorry for being long-winded; I got excited and blurted it out as fast as I could.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
If you use Game Center's built-in leaderboard functionality to avoid having to implement leaderboards yourself, then you end up with separate leaderboards when you make your Android version. For me, it's a decision that goes like this: "I don't want to reimplement a leaderboard server just so I can have cross-platform leaderboards."
Most of the answers here cover the reasons why. I would like to mention that when it comes to iOS and Android development, a new consideration arises. Apple favors games and apps that use native iOS features. You are more likely to be featured if you are using Game Center for leaderboards. They highly recommend it and may even contact you with offers to promote you on the front page if you switch to it. Android is the same way with Google Play and Google+. So having two separate leaderboards, a Game Center version and a Google Play/Google+ version, is highly advantageous for getting featured and promoting your game.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
Different platforms may result in different scores even with the same skill, for example because you were forced to have fewer enemies in the Android version due to performance issues. This (along with the control difference) is why console and PC games generally never play together (either in multiplayer or through leaderboards).
It might be a business decision. It might be a convenience decision. It really depends; the only thing you can say for sure is that it's the decision of the people involved in making the game (developer, publisher, platform vendor, et cetera). You're *assuming* that "iPhone and Android are sending the data to a single service," which isn't always the case. The iOS APIs for leaderboards ([Game Center](https://developer.apple.com/game-center/)) send data to Apple. [Google's Leaderboards service](https://developers.google.com/games/services/web/api/leaderboards) sends data to Google. A developer may choose to use the platform-provided leaderboard API for his or her game because it's easier or cheaper for them (despite the fact that they'll need to build an implementation against both APIs). They may also choose to do it because those APIs offer better device integration than third-party APIs might, and they feel this creates a better experience for the player. There are various third-party leaderboard APIs a game might adopt. Some of these may have additional cost (either as retail products or in the form of requiring the developer to provide and pay for the hosting of the leaderboard databases). They may not look or feel as integrated with the device, either. Those potential disadvantages have to be weighed against the advantage of only having to write against the API once (in theory), reducing development time. The particular balance of all those factors (and more) will play into the decision of which particular method to use for leaderboards; it basically depends on the developer's needs and wants.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
It might be a business decision. It might be a convenience decision. It really depends; the only thing you can say for sure is that it's the decision of the people involved in making the game (developer, publisher, platform vendor, et cetera). You're *assuming* that "iPhone and Android are sending the data to a single service," which isn't always the case. The iOS APIs for leaderboards ([Game Center](https://developer.apple.com/game-center/)) send data to Apple. [Google's Leaderboards service](https://developers.google.com/games/services/web/api/leaderboards) sends data to Google. A developer may choose to use the platform-provided leaderboard API for his or her game because it's easier or cheaper for them (despite the fact that they'll need to build an implementation against both APIs). They may also choose to do it because those APIs offer better device integration than 3rd-party APIs might, and they feel this creates a better experience for the player. There are various third-party leaderboard APIs a game might adopt. Some of these may have additional cost (either as retail products or in the form of requiring the developer to provide and pay for the hosting of the leaderboard databases). They may not look or feel as integrated with the device, either. Those potential disadvantages have to be weighed against the advantage of only having to write against the API once (in theory), reducing development time. The particular balance of all those factors (and more) will play into the decision of which particular method to use for leaderboards; it basically depends on the developer's needs and wants.
New member, can't comment, but I agree with these answers. As a gamer I would like to see my "global" rank on a game that is cross-platform. However, as a software developer I understand why they are separate. The first thing that comes to mind is the fact that they are different companies (what @JoshPetrie mentioned). Microsoft has their own servers and databases while Sony has theirs. Since they are separate companies with separate data hubs, there will naturally be separate rankings. They would have to work together and share a system that houses both datasets. That doesn't seem likely, because that would mean Microsoft has access to Sony's data and vice versa, and nothing is more personal to a company than their data. Let's assume, however, that this were true and there was a shared database where all the rankings are together. I bet the rankings wouldn't be as evenly distributed as one might think, and it would be due to hardware differences, which result in a big difference in a user's ability to master the game. Each gaming console is a different environment, or medium, in which the user can play the game. Based on its design, the environment creates limitations on "how well" the user can play with respect to users on a different medium. The biggest example I can think of (and it was already mentioned in another answer) is the difference between console and PC. Let's use Team Fortress 2 as an example. The console version is played with a handheld controller. Usually the controller is operated with both thumbs and index fingers (4 digits, at least how I play). The PC version uses the keyboard with W-A-S-D movement, other keys for functions, and the mouse (6 digits used at once, by my count). Not only are the "controllers" different, but, like @ratchetfreak said, each system has its own degree of performance power. So the exact same game could be slow and laggy on Xbox, but very fluid on PS or PC. Also, PCs can be upgraded, as opposed to consoles, which are static until the new version comes out; at that point the leaderboards would be different anyway, since a newer version of the game is required (which means a different server and database and so on...). On top of these differences, you also have system settings like mouse speed, which can be increased higher than joystick sensitivity (and which, for me at least, makes up the biggest advantage of PC gaming). Many other factors are involved, I'm sure, but these were the first things that came to mind when I saw this question. Sorry for being long-winded; I got excited and blurted it out as fast as I could.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
It might be a business decision. It might be a convenience decision. It really depends; the only thing you can say for sure is that it's the decision of the people involved in making the game (developer, publisher, platform vendor, et cetera). You're *assuming* that "iPhone and Android are sending the data to a single service," which isn't always the case. The iOS APIs for leaderboards ([Game Center](https://developer.apple.com/game-center/)) send data to Apple. [Google's Leaderboards service](https://developers.google.com/games/services/web/api/leaderboards) sends data to Google. A developer may choose to use the platform-provided leaderboard API for his or her game because it's easier or cheaper for them (despite the fact that they'll need to build an implementation against both APIs). They may also choose to do it because those APIs offer better device integration than 3rd-party APIs might, and they feel this creates a better experience for the player. There are various third-party leaderboard APIs a game might adopt. Some of these may have additional cost (either as retail products or in the form of requiring the developer to provide and pay for the hosting of the leaderboard databases). They may not look or feel as integrated with the device, either. Those potential disadvantages have to be weighed against the advantage of only having to write against the API once (in theory), reducing development time. The particular balance of all those factors (and more) will play into the decision of which particular method to use for leaderboards; it basically depends on the developer's needs and wants.
Most of the answers here cover the reasons why. I would like to mention that when it comes to iOS and Android development, a new situation arises. Apple favors games and apps that use native iOS features. You are more likely to be featured if you are using Game Center for leaderboards. They highly recommend it and may even contact you with offers to promote you on the front page if you switch to it. Android is the same way with Google Play and Google+. So having two separate leaderboards, a Game Center version and a Google Play/Google+ version, is highly advantageous for getting featured and promoting your game.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
Different platforms may result in different scores even with the same skills, for example because you were forced to have fewer enemies in the Android game due to performance issues. This (along with the control difference) is why console and PC games generally never play together (either in multiplayer or through leaderboards).
New member, can't comment, but I agree with these answers. As a gamer I would like to see my "global" rank on a game that is cross-platform. However, as a software developer I understand why they are separate. The first thing that comes to mind is the fact that they are different companies (what @JoshPetrie mentioned). Microsoft has their own servers and databases while Sony has theirs. Since they are separate companies with separate data hubs, there will naturally be separate rankings. They would have to work together and share a system that houses both datasets. That doesn't seem likely, because that would mean Microsoft has access to Sony's data and vice versa, and nothing is more personal to a company than their data. Let's assume, however, that this were true and there was a shared database where all the rankings are together. I bet the rankings wouldn't be as evenly distributed as one might think, and it would be due to hardware differences, which result in a big difference in a user's ability to master the game. Each gaming console is a different environment, or medium, in which the user can play the game. Based on its design, the environment creates limitations on "how well" the user can play with respect to users on a different medium. The biggest example I can think of (and it was already mentioned in another answer) is the difference between console and PC. Let's use Team Fortress 2 as an example. The console version is played with a handheld controller. Usually the controller is operated with both thumbs and index fingers (4 digits, at least how I play). The PC version uses the keyboard with W-A-S-D movement, other keys for functions, and the mouse (6 digits used at once, by my count). Not only are the "controllers" different, but, like @ratchetfreak said, each system has its own degree of performance power. So the exact same game could be slow and laggy on Xbox, but very fluid on PS or PC. Also, PCs can be upgraded, as opposed to consoles, which are static until the new version comes out; at that point the leaderboards would be different anyway, since a newer version of the game is required (which means a different server and database and so on...). On top of these differences, you also have system settings like mouse speed, which can be increased higher than joystick sensitivity (and which, for me at least, makes up the biggest advantage of PC gaming). Many other factors are involved, I'm sure, but these were the first things that came to mind when I saw this question. Sorry for being long-winded; I got excited and blurted it out as fast as I could.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
Different platforms may result in different scores even with the same skills, for example because you were forced to have fewer enemies in the Android game due to performance issues. This (along with the control difference) is why console and PC games generally never play together (either in multiplayer or through leaderboards).
Most of the answers here cover the reasons why. I would like to mention that when it comes to iOS and Android development, a new situation arises. Apple favors games and apps that use native iOS features. You are more likely to be featured if you are using Game Center for leaderboards. They highly recommend it and may even contact you with offers to promote you on the front page if you switch to it. Android is the same way with Google Play and Google+. So having two separate leaderboards, a Game Center version and a Google Play/Google+ version, is highly advantageous for getting featured and promoting your game.
86,665
I noticed that many games (if not most or all) are employing different leaderboards for different platforms. For example, different leaderboard for iOS and Android. Different leaderboard for Xbox and PS. I can't see the logic behind that. After all, let's say iPhone and Android are sending the data to a single service/system. Is it a business decision?
2014/10/30
[ "https://gamedev.stackexchange.com/questions/86665", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/3184/" ]
Most of the answers here cover the reasons why. I would like to mention that when it comes to iOS and Android development, a new situation arises. Apple favors games and apps that use native iOS features. You are more likely to be featured if you are using Game Center for leaderboards. They highly recommend it and may even contact you with offers to promote you on the front page if you switch to it. Android is the same way with Google Play and Google+. So having two separate leaderboards, a Game Center version and a Google Play/Google+ version, is highly advantageous for getting featured and promoting your game.
New member, can't comment, but I agree with these answers. As a gamer I would like to see my "global" rank on a game that is cross-platform. However, as a software developer I understand why they are separate. The first thing that comes to mind is the fact that they are different companies (what @JoshPetrie mentioned). Microsoft has their own servers and databases while Sony has theirs. Since they are separate companies with separate data hubs, there will naturally be separate rankings. They would have to work together and share a system that houses both datasets. That doesn't seem likely, because that would mean Microsoft has access to Sony's data and vice versa, and nothing is more personal to a company than their data. Let's assume, however, that this were true and there was a shared database where all the rankings are together. I bet the rankings wouldn't be as evenly distributed as one might think, and it would be due to hardware differences, which result in a big difference in a user's ability to master the game. Each gaming console is a different environment, or medium, in which the user can play the game. Based on its design, the environment creates limitations on "how well" the user can play with respect to users on a different medium. The biggest example I can think of (and it was already mentioned in another answer) is the difference between console and PC. Let's use Team Fortress 2 as an example. The console version is played with a handheld controller. Usually the controller is operated with both thumbs and index fingers (4 digits, at least how I play). The PC version uses the keyboard with W-A-S-D movement, other keys for functions, and the mouse (6 digits used at once, by my count). Not only are the "controllers" different, but, like @ratchetfreak said, each system has its own degree of performance power. So the exact same game could be slow and laggy on Xbox, but very fluid on PS or PC. Also, PCs can be upgraded, as opposed to consoles, which are static until the new version comes out; at that point the leaderboards would be different anyway, since a newer version of the game is required (which means a different server and database and so on...). On top of these differences, you also have system settings like mouse speed, which can be increased higher than joystick sensitivity (and which, for me at least, makes up the biggest advantage of PC gaming). Many other factors are involved, I'm sure, but these were the first things that came to mind when I saw this question. Sorry for being long-winded; I got excited and blurted it out as fast as I could.
346,654
I am currently working with a shapefile made up of multi-part polygons symbolizing zoning areas. Geocortex is a web-based service used to display your data. When I click on a larger polygon in ArcMap the polygon highlights neatly along the lines as a selected feature should. When I do the same through the Geocortex web-based data viewer, the polygon selected highlights from random points - creating triangular connection points - and not the line represented. I'll show an example in the attached image. [![Geocortex Data Viewer](https://i.stack.imgur.com/m0p4o.png)](https://i.stack.imgur.com/m0p4o.png) If anyone has experienced this issue, then how can I fix it?
2020/01/06
[ "https://gis.stackexchange.com/questions/346654", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/156024/" ]
Try re-creating the issue with your own polygon layer: 1. Create a new polygon feature class. 2. Edit it in ArcMap. 3. Draw all sorts of polygons: hourglass, islands, multi-polygon (use the cut/split editing functions); draw polygons clockwise and anticlockwise. 4. Save all edits. 5. Publish as a map service AND as a feature service. 6. Consume the map service in a test Geocortex map. 7. Select: is the selection polygon accurately following your data? 8. Consume the same map service in Portal/WebAppBuilder or AGOL: can you select? During your tests, condense to the simplest tests possible so you can isolate where the issue is coming from.
There are two possible reasons for this behavior: 1. When a geometry is selected while you're zoomed out, and then you zoom in, the highlight may retain the approximate vertices that were returned when you were zoomed out. To avoid this, you need to re-query the service when you are at the zoom level that you want to use. 2. The Geocortex web viewer will generalize geometries by default when there are more than 5000 vertices in a polygon geometry. To avoid this, you must configure the viewer's Highlight module to not generalize, or raise its threshold above 5000 vertices. (Note that this configuration exists for performance reasons, so you may want to break your data up into smaller pieces if you regularly have polygons with thousands of vertices, and you regularly want to highlight those polygons.)
239,719
I recently heard someone say the following: > > It's cats and dogs out there! > > > As in "it's raining cats and dogs out there." I then thought that person should have said > > Those are cats and dogs out there! > > > because the phrase refers to multiple objects. My hobbyist-linguist friend then said to me that the person was correct because cats and dogs were a compound subject - a result of the idiom itself. I'm inclined to believe my friend, but I'm not sure. I think I may have initially been confused because of the diversion from the format of use of the idiom. Which is correct?
2015/04/16
[ "https://english.stackexchange.com/questions/239719", "https://english.stackexchange.com", "https://english.stackexchange.com/users/117344/" ]
The use of any plural noun as an adjective, adverb, adjective clause, or adverbial clause does not affect the count of the verb. The verb only cares about the subject of the sentence. What is the subject of this particular sentence? If it were "cats and dogs", then you would be correct to believe the sentence would be "there are cats and dogs" or something similar. However, the actual subject of the sentence is the expletive pronoun "it" (sometimes, and with great protest from "it", referred to as a "dummy" pronoun). > > It is unfortunate. > > > In this case, "it" is always a singular subject for the purposes of the verb. There is no actual subject matter with an expletive pronoun. A subject can be devised and a sentence reworded, but > > it is not necessary to do so... > > >
The idiom can be considered both (1) to be extra-grammatical (here hijacking a reasonably common if far from simple sentence structure and using it in a strange way) and (2) to be using words in a strange way. We'd expect a weather-it sentence to have one or more adverbs or prepositional phrases etc modifying the verb: * *It's raining continuously.* / *It's snowing really heavily out there.* * *It's snowing for the first time in two years.* / *It's thundering on the coast of Maine.* But here, we have a noun group, *cats and dogs*, obviously used adverbially rather than as a plausible direct object. * *It's raining cats and dogs [out there]*. With a typical extraposed-it sentence * *It's clear that the MP doesn't know the true facts.* a rearrangement fronting the semantic subject is possible (if unwieldy): * *That the MP doesn't know the true facts is clear.* As there is no semantic subject with weather-it sentences, this is impossible with say *It's raining.* But at first glance, * *Cats and dogs are being rained out there.* looks a possibility (after all, * *Cats and dogs are being trained out there.* is fine). But it's ungrammatical. The idiom (as idioms usually do) resists such transformations. And that really includes, in standard English, deleting the verb. The [Farlex Partner Idioms Dictionary](https://idioms.thefreedictionary.com/brass+monkeys) *does* admittedly give a parallel example (which may well have encouraged OP's example): > > *Wear a hat – it's brass monkeys out there tonight!* > > > But it adds the caveats [usually British] **slang**. So 'It's raining cats and dogs' is best regarded as an idiom best analysed at the idiom level, with 'cats and dogs' **only in this usage** a very strange adverbial looking like a noun phrase, and 'It's cats and dogs out there' a non-standard adaptation apparently using 'cats and dogs' adjectivally (so doubly resistant to analysis about 'plurality') but with 'Those are cats and dogs out there' not even vaguely acceptable in the intended sense.
239,719
I recently heard someone say the following: > > It's cats and dogs out there! > > > As in "it's raining cats and dogs out there." I then thought that person should have said > > Those are cats and dogs out there! > > > because the phrase refers to multiple objects. My hobbyist-linguist friend then said to me that the person was correct because cats and dogs were a compound subject - a result of the idiom itself. I'm inclined to believe my friend, but I'm not sure. I think I may have initially been confused because of the diversion from the format of use of the idiom. Which is correct?
2015/04/16
[ "https://english.stackexchange.com/questions/239719", "https://english.stackexchange.com", "https://english.stackexchange.com/users/117344/" ]
The use of any plural noun as an adjective, adverb, adjective clause, or adverbial clause does not affect the count of the verb. The verb only cares about the subject of the sentence. What is the subject of this particular sentence? If it were "cats and dogs", then you would be correct to believe the sentence would be "there are cats and dogs" or something similar. However, the actual subject of the sentence is the expletive pronoun "it" (sometimes, and with great protest from "it", referred to as a "dummy" pronoun). > > It is unfortunate. > > > In this case, "it" is always a singular subject for the purposes of the verb. There is no actual subject matter with an expletive pronoun. A subject can be devised and a sentence reworded, but > > it is not necessary to do so... > > >
Grammatically, "It is raining cats and dogs" makes the verb "rain" transitive. In fact, the verb "rain" is intransitive. In this respect, this archaic idiom is grammatically wrong, from the viewpoint of present day grammar. I think we ought to avoid using this grammatically anomalous expression. You can simply say "It's pouring rain out there." --- Henry 2022-06-11
239,719
I recently heard someone say the following: > > It's cats and dogs out there! > > > As in "it's raining cats and dogs out there." I then thought that person should have said > > Those are cats and dogs out there! > > > because the phrase refers to multiple objects. My hobbyist-linguist friend then said to me that the person was correct because cats and dogs were a compound subject - a result of the idiom itself. I'm inclined to believe my friend, but I'm not sure. I think I may have initially been confused because of the diversion from the format of use of the idiom. Which is correct?
2015/04/16
[ "https://english.stackexchange.com/questions/239719", "https://english.stackexchange.com", "https://english.stackexchange.com/users/117344/" ]
The idiom can be considered both (1) to be extra-grammatical (here hijacking a reasonably common if far from simple sentence structure and using it in a strange way) and (2) to be using words in a strange way. We'd expect a weather-it sentence to have one or more adverbs or prepositional phrases etc modifying the verb: * *It's raining continuously.* / *It's snowing really heavily out there.* * *It's snowing for the first time in two years.* / *It's thundering on the coast of Maine.* But here, we have a noun group, *cats and dogs*, obviously used adverbially rather than as a plausible direct object. * *It's raining cats and dogs [out there]*. With a typical extraposed-it sentence * *It's clear that the MP doesn't know the true facts.* a rearrangement fronting the semantic subject is possible (if unwieldy): * *That the MP doesn't know the true facts is clear.* As there is no semantic subject with weather-it sentences, this is impossible with say *It's raining.* But at first glance, * *Cats and dogs are being rained out there.* looks a possibility (after all, * *Cats and dogs are being trained out there.* is fine). But it's ungrammatical. The idiom (as idioms usually do) resists such transformations. And that really includes, in standard English, deleting the verb. The [Farlex Partner Idioms Dictionary](https://idioms.thefreedictionary.com/brass+monkeys) *does* admittedly give a parallel example (which may well have encouraged OP's example): > > *Wear a hat – it's brass monkeys out there tonight!* > > > But it adds the caveats [usually British] **slang**. So 'It's raining cats and dogs' is best regarded as an idiom best analysed at the idiom level, with 'cats and dogs' **only in this usage** a very strange adverbial looking like a noun phrase, and 'It's cats and dogs out there' a non-standard adaptation apparently using 'cats and dogs' adjectivally (so doubly resistant to analysis about 'plurality') but with 'Those are cats and dogs out there' not even vaguely acceptable in the intended sense.
Grammatically, "It is raining cats and dogs" makes the verb "rain" transitive. In fact, the verb "rain" is intransitive. In this respect, this archaic idiom is grammatically wrong, from the viewpoint of present day grammar. I think we ought to avoid using this grammatically anomalous expression. You can simply say "It's pouring rain out there." --- Henry 2022-06-11
17,706,833
Please let me know how to convert a 32-bit float to a 24-bit normalized value. What I tried is (units \* (1 << 24)), but it doesn't seem to be working. Please help me with this. Thanks.
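A minimal Python sketch of one common reading of the conversion asked about here (an audio-style signed 24-bit sample). The clipping range of [-1.0, 1.0], the scale factor of 2^23 - 1, and the little-endian byte order are assumptions made for illustration, not details given in the question:

```python
import struct

def float_to_int24(value: float) -> bytes:
    """Convert a normalized float in [-1.0, 1.0] to a signed 24-bit
    little-endian value (3 bytes). Out-of-range input is clipped."""
    value = max(-1.0, min(1.0, value))          # clip to the representable range
    scaled = int(round(value * (2**23 - 1)))    # +1.0 -> largest positive 24-bit value
    return struct.pack("<i", scaled)[:3]        # keep the 3 low-order bytes

print(float_to_int24(0.5).hex())   # '000040' (half of full scale)
print(float_to_int24(-1.0).hex())  # '010080' (most negative value under this scaling)
```

Going the other way, you would sign-extend the 3 bytes back to 32 bits and divide by the same scale factor.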
2013/07/17
[ "https://Stackoverflow.com/questions/17706833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1128265/" ]
Technically it's the latter, but depending on the application server, it can be made to *look* like the former because the former is easier to manage. One example is Phusion Passenger. Take a look at <https://www.phusionpassenger.com/> and <http://www.modrails.com/documentation/Architectural%20overview.html>
Second option. In fact it is Ruby that boots the application (there can be multiple instances depending on the case, i.e. using Puma you can request multiple workers to handle requests). Then, as soon as it is ready (depending on the size of your application, i.e. if your routes.rb file, where you build each URL, is huge, it will of course take more time), the application starts to handle the requests.
112,707
When you frequently log in to a SharePoint website Y on one machine, the browser remembers you with cookies. Then, if you try to go to a different SharePoint website X (a totally unrelated site), your browser tries to log you in to website X with the original credentials. Of course, this fails and you get a login error. My question is, is there some way to make it so your SharePoint website Y does not accept any cookies in this way? So that every time you go to it, it ignores any cookies someone may have on their machine and gives a blank login screen. I am looking for a way to do this that does not involve changing individual users' browser settings, since many people from around the country will be logging in and, naturally, I cannot be at their computer to set up their browser to ignore cookies. I am working with Office 365, and making a website inside of that. Thank you if you have a solution/help with this.
2014/08/21
[ "https://sharepoint.stackexchange.com/questions/112707", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/32706/" ]
The only way this should be happening is if one website is a sub-domain of another. If so, because of domain cookie sharing, the cookie would be sent to the other website. I've highlighted other concerns about using multiple SharePoint farms where one is a subdomain of another on my blog: <http://steve.thelineberrys.com/sharepoint-dns-name-considerations-when-using-anonymous-access-2/> The best thing to do is to change the DNS of your SharePoint site so one is not a sub-domain of the other.
@user244034 I am not sure I understand this completely. * Is website Y a sub-domain of website X? For ex. X = somewebsite.com, and Y = otherwebsite.somewebsite.com * Are both websites bound to the same SSO provider? If so, this may help, but it is a little old: <http://www.shailensukul.com/2010/05/adfs-2-sharepoint-2010-signout.html> HTH
13,061,751
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust. What are the real security repercussions?
2012/10/25
[ "https://Stackoverflow.com/questions/13061751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470572/" ]
Assuming the root key doesn't leak, which would break down the whole company CA system, the only issue specific to *this use* of a self signed certificate is distribution; a certificate authority certificate is normally already on any computer that needs a connection to the server, while this certificate needs to be distributed manually. If a new computer needs a connection to the server and does not have the certificate, there is *no real security* if you connect anyway and just accept the certificate. For it to be of any use, it needs to *already exist on the computer*.
This is a more complex question than it might appear. The short answer is that if they are following best practices with regard to protecting their self-signed root CA and deploying the root to client machines (incredibly important!) then there is no additional risk beyond that normally incurred with X.509 PKI. The longer answer is, as usual, "it depends what corners they cut". Here's a scenario under which the risk is higher... > > Your company does not install the self-signed root directly on > employee laptops. When asked about this oversight they say "that's a > real PITA in a heterogeneous computing environment". This (depending on > client and other factors) forces you to choose to click "continue" > through an untrusted dialog each time you launch the client. Big deal > right? It's still encrypted. One day you're in Madrid, enjoying room > service in a five star hotel (as a rising star within the company you > get the poshest assignments) and you open your laptop up to get some > work done... > > > You always connect to the VPN, but you left your mail client open last > time you put the laptop to sleep and it throws up the same annoying > warning it always shows when you open it. You swiftly click through > the dialog because you've been trained by your IT dept to do so. Sadly, > this time it's a different cert. If you had inspected it directly (you > have a photographic memory and love doing comparisons of base64 > encoded public keys) you wouldn't have clicked through, but you were > in a hurry and now the unscrupulous hotel manager running the hotel > captive portal knows your email login and password (p@ssw0rd1). > > > Unfortunately, you use p@ssw0rd1 on all sorts of other websites, so > you find your reddit, amazon, and even stackoverflow accounts swiftly > compromised. Worse yet, your boss gets an email from "you" with an > obscene rant and notice of resignation. All because you clicked > through that innocuous dialog you've clicked through a thousand times > before. > > > This (unlikely) scenario is possible when blindly clicking through "untrusted cert" dialogs. Users can (hopefully) be trained to avoid this by telling them not to click through such dialogs and by adding the corporate CA to the trusted roots list of your OS/browser or using a trusted public CA.
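To make the point about distributing and explicitly trusting the corporate root more concrete, here is a small, hypothetical Python sketch of a mail check that trusts only that root instead of relying on users clicking through warnings. The certificate path, host name, and credentials are placeholders, not details from the question:

```python
import imaplib
import ssl

# Trust ONLY the corporate self-signed root that IT distributed.
# "corp-root-ca.pem" is a placeholder path for that root certificate.
context = ssl.create_default_context(cafile="corp-root-ca.pem")

# The TLS handshake now fails loudly for any server certificate that was
# not issued by that root, so there is no "just click continue" step.
with imaplib.IMAP4_SSL("mail.example.com", 993, ssl_context=context) as imap:
    imap.login("jdoe", "app-password")   # placeholder credentials
    imap.select("INBOX")
    typ, data = imap.search(None, "UNSEEN")
    print(typ, data)
```

The same idea applies to any client: once the root is installed system-wide (or referenced explicitly as above), certificate errors become a hard failure rather than a dialog users learn to dismiss.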
13,061,751
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust. What are the real security repercussions?
2012/10/25
[ "https://Stackoverflow.com/questions/13061751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470572/" ]
Assuming the root key doesn't leak, which would break down the whole company CA system, the only issue specific to *this use* of a self signed certificate is distribution; a certificate authority certificate is normally already on any computer that needs a connection to the server, while this certificate needs to be distributed manually. If a new computer needs a connection to the server and does not have the certificate, there is *no real security* if you connect anyway and just accept the certificate. For it to be of any use, it needs to *already exist on the computer*.
Just as the other two have said, it basically relies on how much you trust your company, which is a factor anyway, so it's likely not a big deal (though they could easily get a free SSL certificate from startcom, so I have no idea why they would insist on a self-signed one). But as Paul outlined with his example, it also matters if they have you install their own root certificate on your computer; if they don't and instead ask you to click through warning boxes each time, I would suggest speaking up, and emailing a link to this page to your company's IT department.
13,061,751
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust. What are the real security repercussions?
2012/10/25
[ "https://Stackoverflow.com/questions/13061751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470572/" ]
Assuming the root key doesn't leak, which would break down the whole company CA system, the only issue specific to *this use* of a self signed certificate is distribution; a certificate authority certificate is normally already on any computer that needs a connection to the server, while this certificate needs to be distributed manually. If a new computer needs a connection to the server and does not have the certificate, there is *no real security* if you connect anyway and just accept the certificate. For it to be of any use, it needs to *already exist on the computer*.
It's wrong if you are expected to do anything about it such as click dialog boxes accepting certificates. If they are doing their job properly they should distribute the certificate internally such that all required network elements already trust it, e.g. browsers, mail user agents, applications, ... In other words, if you know about it, it's wrong.
13,061,751
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust. What are the real security repercussions?
2012/10/25
[ "https://Stackoverflow.com/questions/13061751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470572/" ]
Just as the other two have said, it basically relies on how much you trust your company, which is a factor anyway, so it's likely not a big deal (though they could easily get a free SSL certificate from startcom, so I have no idea why they would insist on a self-signed one). But as Paul outlined with his example, it also matters if they have you install their own root certificate on your computer; if they don't and instead ask you to click through warning boxes each time, I would suggest speaking up, and emailing a link to this page to your company's IT department.
This is a more complex question than it might appear. The short answer is that if they are following best practices with regard to protecting their self-signed root CA and deploying the root to client machines (incredibly important!) then there is no additional risk beyond that normally incurred with X.509 PKI. The longer answer is, as usual, "it depends what corners they cut". Here's a scenario under which the risk is higher... > > Your company does not install the self-signed root directly on > employee laptops. When asked about this oversight they say "that's a > real PITA in a heterogeneous computing environment". This (depending on > client and other factors) forces you to choose to click "continue" > through an untrusted dialog each time you launch the client. Big deal > right? It's still encrypted. One day you're in Madrid, enjoying room > service in a five star hotel (as a rising star within the company you > get the poshest assignments) and you open your laptop up to get some > work done... > > > You always connect to the VPN, but you left your mail client open last > time you put the laptop to sleep and it throws up the same annoying > warning it always shows when you open it. You swiftly click through > the dialog because you've been trained by your IT dept to do so. Sadly, > this time it's a different cert. If you had inspected it directly (you > have a photographic memory and love doing comparisons of base64 > encoded public keys) you wouldn't have clicked through, but you were > in a hurry and now the unscrupulous hotel manager running the hotel > captive portal knows your email login and password (p@ssw0rd1). > > > Unfortunately, you use p@ssw0rd1 on all sorts of other websites, so > you find your reddit, amazon, and even stackoverflow accounts swiftly > compromised. Worse yet, your boss gets an email from "you" with an > obscene rant and notice of resignation. All because you clicked > through that innocuous dialog you've clicked through a thousand times > before. > > > This (unlikely) scenario is possible when blindly clicking through "untrusted cert" dialogs. Users can (hopefully) be trained to avoid this by telling them not to click through such dialogs and by adding the corporate CA to the trusted roots list of your OS/browser or using a trusted public CA.
13,061,751
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust. What are the real security repercussions?
2012/10/25
[ "https://Stackoverflow.com/questions/13061751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470572/" ]
This is a more complex question than it might appear. The short answer is that if they are following best practices with regard to protecting their self-signed root CA and deploying the root to client machines (incredibly important!) then there is no additional risk beyond that normally incurred with X.509 PKI. The longer answer is, as usual, "it depends what corners they cut". Here's a scenario under which the risk is higher... > > Your company does not install the self-signed root directly on > employee laptops. When asked about this oversight they say "that's a > real PITA in a heterogeneous computing environment". This (depending on > client and other factors) forces you to choose to click "continue" > through an untrusted dialog each time you launch the client. Big deal > right? It's still encrypted. One day you're in Madrid, enjoying room > service in a five star hotel (as a rising star within the company you > get the poshest assignments) and you open your laptop up to get some > work done... > > > You always connect to the VPN, but you left your mail client open last > time you put the laptop to sleep and it throws up the same annoying > warning it always shows when you open it. You swiftly click through > the dialog because you've been trained by your IT dept to do so. Sadly, > this time it's a different cert. If you had inspected it directly (you > have a photographic memory and love doing comparisons of base64 > encoded public keys) you wouldn't have clicked through, but you were > in a hurry and now the unscrupulous hotel manager running the hotel > captive portal knows your email login and password (p@ssw0rd1). > > > Unfortunately, you use p@ssw0rd1 on all sorts of other websites, so > you find your reddit, amazon, and even stackoverflow accounts swiftly > compromised. Worse yet, your boss gets an email from "you" with an > obscene rant and notice of resignation. All because you clicked > through that innocuous dialog you've clicked through a thousand times > before. > > > This (unlikely) scenario is possible when blindly clicking through "untrusted cert" dialogs. Users can (hopefully) be trained to avoid this by telling them not to click through such dialogs and by adding the corporate CA to the trusted roots list of your OS/browser or using a trusted public CA.
It's wrong if you are expected to do anything about it such as click dialog boxes accepting certificates. If they are doing their job properly they should distribute the certificate internally such that all required network elements already trust it, e.g. browsers, mail user agents, applications, ... In other words, if you know about it, it's wrong.
13,061,751
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust. What are the real security repercussions?
2012/10/25
[ "https://Stackoverflow.com/questions/13061751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470572/" ]
Just as the other two have said, it basically relies on how much you trust your company, which is a factor anyway, so it's likely not a big deal (though they could easily get a free SSL certificate from startcom, so I have no idea why they would insist on a self-signed one). But as Paul outlined with his example, it also matters if they have you install their own root certificate on your computer; if they don't and instead ask you to click through warning boxes each time, I would suggest speaking up, and emailing a link to this page to your company's IT department.
It's wrong if you are expected to do anything about it such as click dialog boxes accepting certificates. If they are doing their job properly they should distribute the certificate internally such that all required network elements already trust it, e.g. browsers, mail user agents, applications, ... In other words, if you know about it, it's wrong.
15,905,674
I am working with Delphi 16 and Fast Reports 4.1, Embarcadero Edition. Roughly, my question is: Is there a way to force frxReport (connected to a database) to skip some of the table entries during report generation? What I mean can be illustrated as follows: Assume that we have a large table from the database, one with 1000 entries, and assume that we want to generate a report filled with data from this table, with one page per entry. Naturally, a 1000-page report will be a bit too much to handle, so we need a way to tell Fast Report which entries of the table to use while preparing the report. To select only a subset of all entries of the table, I use a DBGrid where I set DBGrid.Options.dgMultiselect := True and select the desired rows with mouse clicks. Then I copy the selected rows into a separate TClientDataSet (named TempDataSet) component, after which I set the current report's data set to the newly created TempDataSet ('frxDataSet1.DataSet := TempDataSet') and launch the report preparation. This seems to be OK, but in the context of the application I am programming it is unacceptable. So going back to the original question: Is there a way to control which entries in 'frxDBDataset1.DataSet' must be visited when 'frxReport1.PrepareReport();' is called? Thank you
2013/04/09
[ "https://Stackoverflow.com/questions/15905674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1921526/" ]
Those functions speak regular TCP and DNS; I would assume you need to use a Bonjour API, such as <http://dnssd.rubyforge.org/>
Looks like the link from DGM's post stopped working and I don't have the rep for comments :) <https://github.com/tenderlove/dnssd>
2,825,706
I installed **TFS 2010** just to test, and I have no experience with previous versions. Now, the problem is that I have just created some new **Bug** items, but when I run the "*Bug Status*" report they are not shown. Are these reports created and cached at a certain interval of time? Is there a setting that I can change to force the creation of a report? Thank you in advance, Marco
2010/05/13
[ "https://Stackoverflow.com/questions/2825706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/52879/" ]
You'll need to update the refresh frequency of the warehouse: <http://msdn.microsoft.com/en-us/library/ms244694(VS.80).aspx> > > 1. Log on to the application tier of Team Foundation Server. > 2. Start Internet Explorer. > 3. In the Internet Explorer Address box, type <http://localhost:8080/Warehouse/v1.0/warehousecontroller.asmx>. > 4. On the ControllerService page, click ChangeSetting. > 5. In the settingID box, type RunIntervalSeconds. > 6. In the newValue box, type the new number of seconds, and then click Invoke. > 7. This opens a confirmation web page which indicates that the RunIntervalSeconds setting was changed. > > >
You may also need to change the IncrementalProcessingIntervalSeconds parameter (this is something we picked up in 2010). The blog post below covers this pretty well, as we now have a few more settings to work with regarding report refreshing: <http://estebanfg.blogspot.com/2011/02/controlling-data-warehouse-refresh-in.html>
37,345
*Disclaimer: I know very little about complexity theory.* I'm sorry, but there is really no way to ask this question without being (terribly) concise: > > **What should be the morphisms in "the" category of Turing machines?** > > > This is obviously subjective and depends upon one's interpretation of the theory, so an answer to this question should ideally give some evidence and reasoning supporting the answer as well. I'd like to emphasize the point that I'm looking for a category of **Turing machines** and not of **formal languages**, for example. In particular, I think my morphisms should contain finer information than reductions or anything like that (I'm not sure, though). Of course, if there's already a well-known and used category in the literature, I'd like to know what it is.
2017/01/18
[ "https://cstheory.stackexchange.com/questions/37345", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/20887/" ]
Saal Hardali mentioned that he wanted a category of Turing machines to do geometry (or at least homotopy theory) on. However, there are a lot of different ways to achieve similar aims. * There is a very strong analogy between computability and topology. The intuition is that termination/nontermination is like the Sierpinski space, since termination is finitely observable (i.e., open) and nontermination isn't (not open). See Martin Escardo's lecture notes [*Synthetic topology of data types and classical spaces*](https://www.cs.bham.ac.uk/%7Emhe/papers/entcs87.pdf) for a moderately gentle but comprehensive introduction to these ideas. * In concurrent and distributed computation, it is often useful to think of the possible executions of a program as a space, and then various synchronization constraints can be expressed as homotopical properties of the space. (The fact that execution has a time order seems to call for directed homotopy theory, rather than ordinary homotopy theory.) See Eric Goubault's article [*Some Geometric Perspectives on Concurrency Theory*](https://projecteuclid.org/journals/homology-homotopy-and-applications/volume-5/issue-2/Some-geometric-perspectives-in-concurrency-theory/hha/1088453323.full) for more details. Also see Maurice Herlihy and Nir Shavit's Goedel-prize winning paper, [*The Topological Structure of Asynchronous Computability*](https://cs.brown.edu/%7Emph/HerlihyS99/p858-herlihy.pdf), which settled some long-standing open problems in the theory of distributed programming. * A third idea comes via homotopy type theory, via the discovery that Martin-Löf type theory is (likely?) a syntactic presentation (in the sense of generators and relations) of the theory of omega-groupoids -- ie, the models of abstract homotopy theory. The best introduction to these ideas is the [homotopy type theory book](https://homotopytypetheory.org/book/). Note that all of these ideas are very different from each other, but all still use geometric intuitions! And there are still others, which I don't know, like the uses that arise in geometric complexity theory, and the way that the theories of circuits can be described in terms of [the (co)homology theory of graphs](http://128.2.67.219/johnbaez/show/Circuit%20theory#chain_complexes_from_circuits). Basically, when you are doing CS, geometry is a *tool* -- you use it to formalize your intuitions, so that you can get leverage via the enormous body of work that has been done on it. But it's an idea amplifier, not a substitute for having ideas!
You might be interested in [Turing categories](http://www.sciencedirect.com/science/article/pii/S0168007208000948) by Robin Cockett and Pieter Hofstra. From the point of view of category theory the question "what is *the* category of Turing machines" is less interesting than "what is the categorical structure which underlies computation". Thus, Robin and Pieter identify a general kind of category that is suitable for developing computability theory. Then, there are several possibilities for defining such a category starting from Turing machines. Why have one category when you can have a whole category of them?
524,749
I'm setting up a Windows 7 "resource computer" that will have applications installed like Visio 2013 and others that are used by my employees but not needed enough to buy individual licenses for everyone. I don't have a need for a full blown terminal server for this (even with the current drawbacks), and have set it up as a resource in Exchange for scheduling purposes. I've done a few local policy edits to enforce disconnects of idle sessions, printer drivers, and bandwidth control. **My questions are**: 1. Is there a way to prevent a simple disconnect of a session (X/close) and require a logoff? 2. Any method for controlling the actual resource and time allotments short of manual intervention? Meaning if a user books the resource in Exchange from 2pm-4pm right now there's nothing that forces them or at least prompts them that their session should be ending and they technically could keep using the shared computer past 4pm even if someone else is scheduled (thus requiring me to intervene like a parent). I keep hoping for something similar to scheduling a "webex" or similar, but so far no luck. Trying to search online for some of these issues I haven't found a great method for preventing disconnects. Most involved programming or a script on the client side not the resource computer side. For the second question, I haven't found anything yet.
2013/07/19
[ "https://serverfault.com/questions/524749", "https://serverfault.com", "https://serverfault.com/users/7861/" ]
I am afraid this is still a license violation from [Microsoft Office License term](http://office.microsoft.com/en-001/products/microsoft-software-license-agreement-FX103576343.aspx) LICENSE RIGHTS AND MULTI USER SCENARIOS 1. Computer. In this agreement, "computer" means a hardware system (whether physical or virtual) with a storage device capable of running the software. A hardware partition or blade is considered to be a computer. 2. Multiple or pooled connections. You may not use hardware or software to multiplex or pool connections, or otherwise allow multiple users or multiple computers or devices to access or use the software indirectly through the licensed computer. 3. Use in a virtualized environment. If you use virtualization software, including client hyper-v, to create one or more virtual computers on a single computer hardware system, each virtual computer, and the physical computer, is considered a separate computer for purposes of this agreement. This license allows you to install only one copy of the software for use on one computer, whether that computer is physical or virtual. If you want to use the software on more than one computer, you must obtain separate copies of the software and a separate license for each copy. Content protected by digital rights management technology or other full-volume disk drive encryption technology may be less secure in a virtualized environment. 4. Remote access. The user that primarily uses the licensed computer is the "primary user." The primary user may access and use the software installed on the licensed device remotely from any other device, as long as the software installed on the licensed device is not being used non-remotely by another user simultaneously. As an exception, you may allow others to access the software simultaneously only to provide you with technical support.
Ignoring the licensing issue for a moment (which is valid and you should definitely take into consideration)... 1) No, there's no way for the server to know whether the user manually disconnected or was simply disconnected due to networking issues. So the best you really have here is the existing idle session limits that force logoff. 2) There's no way I know of using in-box features to enforce scheduling like this. It would require something 3rd party. But it also doesn't sound like something terribly difficult to implement either. The most likely scenario is that the machine is running some sort of agent that is aware of the scheduling data in Exchange and just keeps track of the time and forces logouts at the designated end times.
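The "agent" idea in the previous answer could be prototyped roughly along these lines. This is only a sketch: it relies on the built-in Windows query session and logoff commands, the column parsing is an assumption you would need to verify on your machine, and booking_has_ended() is a hypothetical helper you would implement against the Exchange booking data (not shown here):

```python
import datetime
import subprocess
import time

def booking_has_ended(now):
    # Hypothetical helper: look up the end time of the current booking
    # (for example via Exchange Web Services) and return True once it has passed.
    return False

def active_rdp_session_ids():
    """Return the IDs of RDP sessions reported by 'query session'."""
    output = subprocess.run(["query", "session"],
                            capture_output=True, text=True).stdout
    ids = []
    for line in output.splitlines():
        parts = line.split()
        # Assumed layout: SESSIONNAME USERNAME ID STATE ... for rdp-tcp#N rows.
        if len(parts) >= 4 and parts[0].lower().startswith("rdp-tcp#"):
            ids.append(parts[2])
    return ids

while True:
    if booking_has_ended(datetime.datetime.now()):
        for session_id in active_rdp_session_ids():
            subprocess.run(["logoff", session_id])  # force a full logoff, not a disconnect
    time.sleep(60)
```

Run as a scheduled task or service on the resource computer, something like this would at least enforce the end of a booking; warning the user a few minutes beforehand (for example with msg.exe) would make it friendlier.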
4,403,999
What is the purpose of tagging in a software release? Thanks
2010/12/09
[ "https://Stackoverflow.com/questions/4403999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32484/" ]
This is a point-in-time marker. For example, software released to other people (production, open source) is often given a version number. I.e., MongoDB 1.6.5 came out. This, I guess, would be tagged "/tags/mongodb\_1.6.5". This way, any time someone wanted to know EXACTLY what the source code looked like at the MongoDB 1.6.5 release, they could just look through the tags, instead of the commit log, which might be... very... long... Reading all of those log messages would be a pain. -daniel
It's so you can keep a copy of the source as it was when that version was released. Example: You work on your project, committing your changes to trunk. When you release, you make a tag of the trunk just when you're ready and there are no more code changes. Then, once the release is out there in the wild, you continue adding new features and bug fixes to the trunk. Then you get a bug report that needs to be fixed in a 1.1 release. Now here's the problem. You may have already fixed the bug in your trunk after releasing, but you don't want to release that code because it has new features that may not be ready yet. Here's what you can do, since you have a copy of the code as it was when you released 1.0 in your 1.0 tag: branch the tag, apply the bug fix (or maybe merge the fix from the trunk into the branch), and release that as your 1.1. You can then make a tag of 1.1 and repeat the process as many times as you need to.
4,403,999
What is the purpose of tagging in a software release? Thanks
2010/12/09
[ "https://Stackoverflow.com/questions/4403999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32484/" ]
In my company we use it to signify that the given revision is special in some way (granted, we use git but mostly the purpose of tagging is the same). We use it for the following: * Mark released versions * Mark the reviewed code portion (with a date, e.g. reviewed-2010-12-10) * Mark special patches, like forward ports, backports, etc.
This is a point-in-time marker. For example, software released to other people (production, open source) is often given a version number. E.g. MongoDB 1.6.5 came out; this would, I guess, be tagged "/tags/mongodb\_1.6.5". This way, any time someone wanted to know EXACTLY what the source code looked like at the MongoDB 1.6.5 release, they could just look through the tags instead of the commit log, which might be very long -- reading all of those log messages would be a pain. -daniel
4,403,999
What is the purpose of tagging in a software release? Thanks
2010/12/09
[ "https://Stackoverflow.com/questions/4403999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32484/" ]
This is a point-in-time marker. For example, software released to other people (production, open source) is often given a version number. E.g. MongoDB 1.6.5 came out; this would, I guess, be tagged "/tags/mongodb\_1.6.5". This way, any time someone wanted to know EXACTLY what the source code looked like at the MongoDB 1.6.5 release, they could just look through the tags instead of the commit log, which might be very long -- reading all of those log messages would be a pain. -daniel
A tag is a symbolic name for a set of revisions (or, in SVN, a single revision). It may correspond to a version number, which is an identifier with several parts meant to convey where in the tree it sits, the degree of change from a different version, and its likely compatibility with other applications or libraries.
4,403,999
What is the purpose of tagging in a software release? Thanks
2010/12/09
[ "https://Stackoverflow.com/questions/4403999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32484/" ]
In my company we use it to signify that the given revision is special in some way (granted, we use git, but mostly the purpose of tagging is the same). We use it for the following:

* Mark released versions
* Mark the reviewed code portion (with a date, e.g. reviewed-2010-12-10)
* Mark special patches, like forward ports, backports, etc.
It's so you can keep a copy of the source exactly as it was when that version was released.

Example: You work on your project, committing your changes to trunk. When you're ready to release and there are no more code changes, you make a tag of the trunk. Once the release is out in the wild, you continue adding new features and bug fixes to the trunk.

Then you get a bug report that needs to be fixed in a 1.1 release. Here's the problem: you may have already fixed the bug in your trunk after releasing, but you don't want to release that code because it has new features that may not be ready yet.

Since you have a copy of the code as it was when you released 1.0 in your 1.0 tag, you can branch the tag, apply the bug fix (or maybe merge the fix from the trunk into the branch), and release that as your 1.1. You could then make a tag of 1.1 and repeat the process as many times as you need to.
4,403,999
What is the purpose of tagging in a software release? Thanks
2010/12/09
[ "https://Stackoverflow.com/questions/4403999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32484/" ]
In my company we use it to signify that the given revision is special in some way (granted, we use git, but mostly the purpose of tagging is the same). We use it for the following:

* Mark released versions
* Mark the reviewed code portion (with a date, e.g. reviewed-2010-12-10)
* Mark special patches, like forward ports, backports, etc.
A tag is a symbolic name for a set of revisions (or, in SVN, a single revision). It may correspond to a version number, which is an identifier with several parts meant to convey where in the tree it sits, the degree of change from a different version, and its likely compatibility with other applications or libraries.
133,427
<http://www.economist.com/news/united-states/21588388-georgias-governor-faces-ethics-questions-not-first-time-raw-deal>

> ...., he has two options: accept the charges or rebut them.

Is the sentence fragmented, given that gerunds or "to" are needed when you have verbs inside an appositive? Should the sentence be rewritten as the following?

> ...., he has two options: to accept/accepting the charges or to rebut/rebutting them.
2013/10/29
[ "https://english.stackexchange.com/questions/133427", "https://english.stackexchange.com", "https://english.stackexchange.com/users/49119/" ]
I agree with you that the first version sounds somewhat over-staccato. Its meaning is, however, quite clear. The composition of parentheticals (here 'accept the charges or rebut them') is not usually scrutinised quite as thoroughly as the syntax of the matrix sentence. Recasting slightly:

> ...., he has two options, which are: accept the charges or rebut them.

seems equally over-staccato.

> ...., he has two options, which are to accept the charges or (to) rebut them.

sounds fine.

> ...., he has two options, which are accept the charges or rebut them.

is possibly ungrammatical.

> ...., he has two options (accept the charges or rebut them).

is probably allowable, as bracketed parentheticals seem to be allowed more flexibility - they're already 'staccatoed'.

> ...., he has two options (to accept the charges or rebut them).

is undoubtedly allowable.

> ...., he has two options, which are:
>
> (a) accept the charges
>
> (b) rebut them.

is now in listed-point format, acceptable but quite different in style.

I wouldn't dare to label the Economist's rendition as ungrammatical, but I'd avoid it. There's probably a US-UK divide over assessing 'how it sounds' as well, with US speakers considering it more acceptable style-wise.
*Accept the charges or rebut them* is **not**, strictly speaking, two choices, though it appears so and may be understood that way by instinct. Why? What is intended is:

> … he has **two** options: "to accept the charges" **and** "to rebut them."

i.e.,

> … he has two options, to accept the charges, and to rebut them.

However, that is not as clear as the structure where the options are clearly set out in quotes:

> … he has **the** option either "to accept the charges" **or** "to rebut them."

i.e.,

> … he has **the** option either to accept the charges, **or** to rebut them.

The last conveys the intended meaning without ambiguity.