Dataset schema: qid (int64, 1 – 74.7M) · question (string, 12 – 33.8k chars) · date (string, 10 chars) · metadata (sequence) · response_j (string, 0 – 115k chars) · response_k (string, 2 – 98.3k chars)
14,692
I found this sentence on the internet: > > Seems like society **leveled itself out** once again. > > > What does *level out* mean here? I have looked up OALD, and it defines *level out/off* as: > > level off/out > > 1 to stop rising or falling and remain horizontal. > > *The plane levelled off at 1500 feet.* > > *After the long hill, the road levelled out.* > > 2 to stay at a steady level of development or progress after a period of sharp rises or falls. > > *Sales have levelled off after a period of rapid growth.* > > > However, I'm not quite sure whether either of them fits the context.
2013/12/20
[ "https://ell.stackexchange.com/questions/14692", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/1473/" ]
To *level out* means to work on an area to make it even, smooth, and free of indents, dents, or dings. Figuratively, if a person or group levels out, it gets ahead of its shortcomings: no more ups and downs, and it can have a smooth ride into the future.
I couldn't find OALD, but I believe that the online [Oxford Dictionary](http://www.oxforddictionaries.com/definition/english/level) should be close enough. A similar definition for ***level out***/***off*** can be found under *verb* sense 2. > > 2 [*no object*] (**level off/out**) begin to fly horizontally after climbing or diving: > > *he quickly levelled off at 1500 ft* > > - (of a path, road, or incline) cease to slope: > > *the track levelled out and there below us was the bay* > > - remain at a steady level after falling or rising: > > *inflation has levelled out at an acceptable rate* > > > This definition is for **level out** being used as an intransitive verb (denoted by [*no object*]), which is quite different from your quoted sentence: > > *"Seems like **society leveled itself out** once again."* > > > where sense 1 (give a flat and even surface to) and sense 3 (make [something, especially a score in sport] equal or similar) are more relevant. Thus, the phrase ***society leveled itself out*** should not be understood as meaning that society has changed its direction to continue at a steady level. (That reading might be possible if it were about the future direction of the economy.) In this context, it just means: ***everyone has an equal chance***.
415,877
I followed some advice I found here for a replacement for Disk Inventory X: [OSX disk space shows 200GB as "other"](https://apple.stackexchange.com/questions/388549/osx-disk-space-shows-200gb-as-other#answer-388581) I installed the DaisyDisk app and it worked very well; however, now my machine is constantly overheating and mds\_stores is consuming 99% of the CPU. DaisyDisk does not appear in the Applications folder. How do I uninstall it? Does anyone know? They don't have any information on their website. I should have been more careful. <https://daisydiskapp.com/>
2021/03/16
[ "https://apple.stackexchange.com/questions/415877", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/131099/" ]
You may want to reinstall it and use "[AppCleaner](http://freemacsoft.net/appcleaner/)" to properly remove it and all of its remnants.
If you installed it from the App Store, you can delete it from Launchpad or Finder. For the direct-download version of the app, moving it to the Trash in Finder works. Spotlight should reveal the location of the app in most cases.
31,670
I have an 8 year old son who refuses to use the restroom. He does #1 in the toilet but not #2. This has been going on for less than a year, I'd say about 8 months now. Before that, his potty routine was normal. Out of nowhere he started refusing to go. I've asked him if there's any reason why he doesn't use the toilet and he responds with "I don't know, there's no reason". He stays with his grandfather every weekend and he does it there as well. He doesn't do it at school at all; I guess he waits till he's at home where he's most "comfortable" because he doesn't want "kids to smell him". What can I do to make him use the restroom like he used to? I'm tired of having to wash his underwear as often as I do and having talks with him about the need for toilets.
2017/09/08
[ "https://parenting.stackexchange.com/questions/31670", "https://parenting.stackexchange.com", "https://parenting.stackexchange.com/users/29589/" ]
> > What can I do to make him use the restroom like he used to? > > > My son went through something similar when he was young, and in my case it was just a battle of wills. I said to him, you need to **poopy in the potty**. I would repeat this phrase over and over again. I would sit with him in the bathroom until he did use the bathroom (not for hours on end, but 10 minutes every hour or so). The other part of this is that I had him hand-wash the majority of the mess every single time, be it on just his underwear or his clothes. Once he started falling back in line, I provided rewards for doing the right thing, such as allowing him to watch his TV show or have an extra sweet. Be patient; he will come around. **NOTE:** In writing this answer, I am assuming that **medical conditions** have been eliminated. Have your doctor examine him if this is a concern.
I don't know if this will help you, but I must share my experience, as what we did HELPED!!! 1. We had NO idea our (completely healthy) teen child was having issues with what I NOW know is called soiling/encopresis. I had no idea it was a thing or that my poor child had been dealing with it for YEARS. THEY didn't even know they were struggling with it. We knew they had issues with regular/harder bowel movements and were making them drink more water and consume more fruits/veggies in hopes of alleviating it. 2. They had had minor soiling in underwear for years, but we just put it down to poor cleaning or not making it to the restroom in time. 3. BUT then their bowel movements became so heinous we suspected something was going on internally. 4. My poor child was starting to smell like horrible poo ALL THE TIME! We'd talk and talk about showering, better hygiene, proper cleaning methods, etc. But they would be so frustrated with it all because, according to them, they WERE doing all the right things, with no explanation for the still-smelling/soiling problem. The soiling got worse over time and we had had enough! Last year I spent tons of time researching the issue, still not knowing what it was we were dealing with. I came across this site with others who were struggling with the same issues. Thankfully, we now had a name for the issue, and secondly, we were not the only ones. But we still didn't have a solution. Our child isn't developmentally challenged, hasn't gone through any traumatic events or anything else that could possibly explain what was happening. We spoke with a doctor who suggested the typical treatment for this, which meant OTC drugs/stool softener. But I read about the side effects, and there were too many families who noticed a negative change in their children's behavior, i.e. depression, mood swings, rage, etc. The softener is only meant for adults and for short periods. We were being advised to take it for 6+ months, like those parents. I was NOT willing to take a gamble.
So we did NOT take the medicine as advised. I researched more, looking for an alternative... I came across a post on a blog about some behavioral issues/tendencies. It spoke so much about what my child had been displaying in other areas of their life (low energy, lack of effort in most areas, lack of focus, easily sidetracked or distracted) that I kept reading to see what their solution was. Its main purpose was to bring attention to our children who have an unhealthy gut and an imbalanced level of serotonin, which affects so much. www.homeschool-your-boys.com/focusing-attention-sensoryissues/ It was such a blessing to see this article and how adding a few healthy supplements/vitamins can truly make a difference. I was SO desperate for my child I was willing to give it a try, as it mentioned digestive issues as a side effect of low serotonin. It proposes the use of a prebiotic and grapefruit seed extract for 3 months to help build good bacteria and fight the bad. So I ordered enough for 3 months on Amazon. Well, let me tell you, in just 2 MONTHS of daily doses 3x a day, plus regular water intake, fruit/veggie consumption, regular potty breaks AND checking their stool daily for softer stools to confirm that they had indeed used the restroom, my child's mood improved, their focus improved AND the bowel issue was almost resolved!! This was at the beginning of Feb 2017; by May we could see a significant difference in our child, and by the 3rd month it was completely resolved. No more heinous stench in the bathroom, just the typical poo smell, no more soiled clothes, better overall mood/behavior, and a confidence we hadn't seen in years got restored. My child had suffered without ever speaking a word to us out of fear/embarrassment. We continued the supplements for another 3 months just to keep them on track and they have not had the issue again! I pray something you read here is helpful, and even more what you read on the blog.
They suggest behavioral training with the supplements, BUT we didn't use the training, just the supplements. We continue to monitor our child just in case, but to this day, Jan 1 2018, they do not have the issue anymore!
31,670
I have an 8 year old son who refuses to use the restroom. He does #1 in the toilet but not #2. This has been going on for less than a year, I'd say about 8 months now. Before that, his potty routine was normal. Out of nowhere he started refusing to go. I've asked him if there's any reason why he doesn't use the toilet and he responds with "I don't know, there's no reason". He stays with his grandfather every weekend and he does it there as well. He doesn't do it at school at all; I guess he waits till he's at home where he's most "comfortable" because he doesn't want "kids to smell him". What can I do to make him use the restroom like he used to? I'm tired of having to wash his underwear as often as I do and having talks with him about the need for toilets.
2017/09/08
[ "https://parenting.stackexchange.com/questions/31670", "https://parenting.stackexchange.com", "https://parenting.stackexchange.com/users/29589/" ]
The first thing that needs to be done is for the child to see a pediatrician who specializes in *encopresis*, or a pediatric gastroenterologist. Once a child past the age of 4 starts soiling their pants, it might be a medical problem. (There are medical conditions that start earlier, but they are usually recognized as such.) This can start insidiously with just constipation, then a painful BM. The painful BM makes the child afraid to go again, which results in their holding it in, having a harder stool, maybe larger, and again painful, so it becomes a self-reinforcing problem. [Boston Children's Hospital](http://www.childrenshospital.org/conditions-and-treatments/conditions/encopresis/symptoms-and-causes) describes it like this: > > How does encopresis happen? > > > Constipated children have fewer bowel movements than normal, and their bowel movements can be hard, dry, difficult to pass and so large that they can often even block up the toilet. Here are some examples why: > > > -Your child's stool can become impacted (packed into her rectum and large intestine). > > -Her rectum and intestine become enlarged due to the retained stool. > > -Eventually, her rectum and intestine have problems sensing the presence of stool, and the anal sphincter (the muscle at the end of the digestive tract that helps hold stool in) becomes dilated, losing its strength. > > -Stool can start to leak around the impacted stool, soiling your child's clothing. > > -As more and more stool collects, your child will be less and less able to hold it in, leading to accidents. Because of decreased sensitivity in your child’s rectum due to its larger size, she may not even be aware she’s had an accident until after it has occurred. > > > This is why "I don't know" is the most common answer parents get when they ask their child, "Why didn't you tell me you needed to go?" The colon and sphincter do not give them the signals they need to feel to evacuate 'properly'.
The approach initially will be medical: a dietary change, a bowel evacuation, laxatives, liquids, fiber, etc. With luck (it's still early), your child will respond and this will be a thing of the past. Often, however, the treatment of longer episodes of encopresis requires a multidisciplinary approach: doctor, dietician, and therapist. Read reputable sites on encopresis. Many parents have gone through this; there are even online encopresis support groups you can join. Good luck!
I don't know if this will help you, but I must share my experience, as what we did HELPED!!! 1. We had NO idea our (completely healthy) teen child was having issues with what I NOW know is called soiling/encopresis. I had no idea it was a thing or that my poor child had been dealing with it for YEARS. THEY didn't even know they were struggling with it. We knew they had issues with regular/harder bowel movements and were making them drink more water and consume more fruits/veggies in hopes of alleviating it. 2. They had had minor soiling in underwear for years, but we just put it down to poor cleaning or not making it to the restroom in time. 3. BUT then their bowel movements became so heinous we suspected something was going on internally. 4. My poor child was starting to smell like horrible poo ALL THE TIME! We'd talk and talk about showering, better hygiene, proper cleaning methods, etc. But they would be so frustrated with it all because, according to them, they WERE doing all the right things, with no explanation for the still-smelling/soiling problem. The soiling got worse over time and we had had enough! Last year I spent tons of time researching the issue, still not knowing what it was we were dealing with. I came across this site with others who were struggling with the same issues. Thankfully, we now had a name for the issue, and secondly, we were not the only ones. But we still didn't have a solution. Our child isn't developmentally challenged, hasn't gone through any traumatic events or anything else that could possibly explain what was happening. We spoke with a doctor who suggested the typical treatment for this, which meant OTC drugs/stool softener. But I read about the side effects, and there were too many families who noticed a negative change in their children's behavior, i.e. depression, mood swings, rage, etc. The softener is only meant for adults and for short periods. We were being advised to take it for 6+ months, like those parents. I was NOT willing to take a gamble.
So we did NOT take the medicine as advised. I researched more, looking for an alternative... I came across a post on a blog about some behavioral issues/tendencies. It spoke so much about what my child had been displaying in other areas of their life (low energy, lack of effort in most areas, lack of focus, easily sidetracked or distracted) that I kept reading to see what their solution was. Its main purpose was to bring attention to our children who have an unhealthy gut and an imbalanced level of serotonin, which affects so much. www.homeschool-your-boys.com/focusing-attention-sensoryissues/ It was such a blessing to see this article and how adding a few healthy supplements/vitamins can truly make a difference. I was SO desperate for my child I was willing to give it a try, as it mentioned digestive issues as a side effect of low serotonin. It proposes the use of a prebiotic and grapefruit seed extract for 3 months to help build good bacteria and fight the bad. So I ordered enough for 3 months on Amazon. Well, let me tell you, in just 2 MONTHS of daily doses 3x a day, plus regular water intake, fruit/veggie consumption, regular potty breaks AND checking their stool daily for softer stools to confirm that they had indeed used the restroom, my child's mood improved, their focus improved AND the bowel issue was almost resolved!! This was at the beginning of Feb 2017; by May we could see a significant difference in our child, and by the 3rd month it was completely resolved. No more heinous stench in the bathroom, just the typical poo smell, no more soiled clothes, better overall mood/behavior, and a confidence we hadn't seen in years got restored. My child had suffered without ever speaking a word to us out of fear/embarrassment. We continued the supplements for another 3 months just to keep them on track and they have not had the issue again! I pray something you read here is helpful, and even more what you read on the blog.
They suggest behavioral training with the supplements, BUT we didn't use the training, just the supplements. We continue to monitor our child just in case, but to this day, Jan 1 2018, they do not have the issue anymore!
1,393,216
I have a strange problem. This is Microsoft Office 365 under Windows 10, and I don't remember since when, but every time I start the computer, Excel opens with a blank workbook. I looked at the Startup tab in Task Manager and it is not there. I also looked in Settings -> Applications -> Startup and it is not there either. Do you have any advice to avoid this? Thanks Jaime --- Well, I did just as you said to do; unfortunately, though, now I **can't get past the Windows log-in screen because it disabled my fingerprint reader and my PIN code**. *Can you tell me how to set it back to a normal boot-up so that I can log back into my computer please.*
2019/01/11
[ "https://superuser.com/questions/1393216", "https://superuser.com", "https://superuser.com/users/524721/" ]
It turns out that this may be caused by a so-called feature of Microsoft. Short answer: Windows Settings->Accounts->Sign-In Options->Privacy->Off Longer answer: <https://answers.microsoft.com/en-us/msoffice/forum/all/microsoft-word-and-excel-2016-automatically-opens/8d5869df-0212-4f04-9fac-c7e99256a005>
The issue might be from a startup application or service which is opening Excel at startup. Run msconfig from the Run dialog (Windows key + R) to open the System Configuration. From the General tab choose Selective startup and uncheck Load startup items (this will disable all startup items seen in the Task Manager). Apply and reboot your computer. See if it still pops up. If it does, then try also disabling the services by unchecking Load system services in the Selective startup section. If it doesn't pop up, then you can set your computer back to Normal startup in the System Configuration, and then disable the startup apps one by one from the Task Manager to find out which app is causing the issue.
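Disabling startup apps one by one takes up to one reboot per item; a half-splitting (bisection) search finds the culprit in roughly log2(n) reboots instead. The Python sketch below simulates that search; the item names and the symptom check are hypothetical stand-ins for "reboot with only these items enabled and observe":

```python
def find_culprit(items, causes_problem):
    """Bisect a list of startup items to find the one that triggers
    the symptom. `causes_problem(enabled_items)` stands in for
    'reboot with only these items enabled and see if Excel opens'."""
    candidates = list(items)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        # Enable only the first half; if the symptom appears,
        # the culprit is in that half, otherwise in the rest.
        if causes_problem(half):
            candidates = half
        else:
            candidates = candidates[len(half):]
    return candidates[0]

# Hypothetical startup items; pretend "UpdaterX" is what opens Excel.
items = ["OneDrive", "UpdaterX", "AudioTray", "CloudSync"]
culprit = find_culprit(items, lambda enabled: "UpdaterX" in enabled)
print(culprit)  # → UpdaterX
```

The same halving idea applies to the services list if the startup items turn out to be innocent.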
19,680
I mean, throughout the whole show cars are depicted almost exactly the same as cars from our time, except for the lack of wheels. You would think that people 1000 years more advanced than us would be able to come up with a new design.
2012/06/29
[ "https://scifi.stackexchange.com/questions/19680", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7446/" ]
I say it's nostalgia and marketing... As evidence for this I reference the existence of New New York, Mom's Friendly Robot Company ads, and just human nature in general. Whatever the viewpoint, you can use nostalgia to sell it! That, I believe, is the simple, unadorned reason that cars look the same in the year 3000.
The requirements for a car in 3000 AD are probably very similar to those of cars today, i.e. get people around comfortably, safely, and cheaply, so they'll end up with broadly similar designs. Certainly the demands of fashion will affect the decorative bits that car designers add to distinguish their models, but until material wealth becomes irrelevant I suspect cars will look broadly similar.
5,019,360
I have made a webapp for my Android phone where I can check the sales at my store when I'm not there. I would like to have notifications on my phone whenever a sale is made. What is the best way of doing this? Are there any existing Android apps that I can configure to check a PHP script every 2 minutes or something?
2011/02/16
[ "https://Stackoverflow.com/questions/5019360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348776/" ]
Try using XMPP to send a message to yourself, which you can receive via gtalk on the phone. Alternatively, an email, SMS, etc.
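The email route is easy to script server-side. Below is a minimal Python sketch using only the standard library; the addresses and SMTP host are placeholders, and the actual send is left commented out:

```python
from email.message import EmailMessage

def build_sale_alert(order_id, amount):
    """Build a plain-text notification email for one sale."""
    msg = EmailMessage()
    msg["Subject"] = f"New sale #{order_id}: ${amount:.2f}"
    msg["From"] = "shop@example.com"   # placeholder sender
    msg["To"] = "me@example.com"       # the mail account on your phone
    msg.set_content(f"Order {order_id} just sold for ${amount:.2f}.")
    return msg

msg = build_sale_alert(1042, 19.99)
print(msg["Subject"])  # → New sale #1042: $19.99

# To actually send it (placeholder host/credentials):
# import smtplib
# with smtplib.SMTP("smtp.example.com", 587) as s:
#     s.starttls()
#     s.login("user", "password")
#     s.send_message(msg)
```

Triggered from whatever code records the sale, this gives push-like delivery with no polling at all.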
Could you rig up an RSS feed and use a feed reader to notify you?
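An RSS feed is straightforward to generate server-side. The original setup uses PHP, so treat the Python sketch below as a sketch of the feed structure rather than a drop-in; the field names and URL are assumptions:

```python
import xml.etree.ElementTree as ET

def sales_to_rss(sales):
    """Render a list of sales as a minimal RSS 2.0 feed that any
    feed-reader app on the phone can poll."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Store sales"
    ET.SubElement(channel, "link").text = "https://example.com/sales"
    for sale in sales:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"Sale #{sale['id']}: ${sale['amount']}"
        # A stable per-sale guid lets the reader spot new items.
        ET.SubElement(item, "guid").text = str(sale["id"])
    return ET.tostring(rss, encoding="unicode")

feed = sales_to_rss([{"id": 1, "amount": 25}, {"id": 2, "amount": 40}])
print(feed.startswith('<rss version="2.0">'))  # → True
```

The reader app then does the 2-minute polling for you and raises the notification itself.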
31,946,407
I have a C# application using Oracle database and Entity Framework 5. Oracle client is version 12c R1. My application uses database first approach. I'm trying to run the app using Visual Studio Enterprise 2015. When I access the edmx file and I try to update the model from the database, it gives me the following error: > > An exception of type 'System.ArgumentException' occurred while attempting to update from the database. The exception message is: 'Unable to convert runtime connection string to its design-time equivalent. The libraries required to enable Visual Studio to communicate with the database for design purposes (DDEX provider) are not installed for provider 'Oracle.DataAccess.Client'. Connection string: XXXXX. > > > This error does not occur when I use Visual Studio Ultimate 2013. Only on Visual Studio Enterprise 2015. Is there any known incompatibility issue with the new one?
2015/08/11
[ "https://Stackoverflow.com/questions/31946407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4791851/" ]
I believe it is because there is not yet an ODT version compatible with Visual Studio 2015. For now there seems to be no choice but to wait. [Oracle Developer Tools](http://www.oracle.com/technetwork/developer-tools/visual-studio/overview/index.html)
I installed the Oracle Developer Tools for 2015 but still could not get this to work. When I attempted to do an Update Model from Database with Entity Framework, I encountered the error below. [![enter image description here](https://i.stack.imgur.com/tKQM6.jpg)](https://i.stack.imgur.com/tKQM6.jpg) So I did as instructed and removed all references to Oracle from the GAC, even following the advice here: [Oracle .Net Developer's Guide](https://docs.oracle.com/cd/E11882_01/win.112/e23174/InstallODP.htm#ODPNT152), but it still didn't work. Since I'm on a tight schedule and don't have time to fool with this, I opened my solution in VS2012, made my Entity Framework changes, and then reopened the solution in VS2015, and that worked fine. Irritating, but at least I have a workaround for now.
130,462
I am doing some hands-on training with Kali Linux, and one of the exercises I came across was to make an exe file containing a malicious payload (say, to obtain a reverse shell), send it to a victim, and hope for the victim to download the exe file. If he/she downloads it, it is fairly easy after that. Now this isn't tricky at all, given that many users aren't aware of security. But in the real world, there must be more hurdles that an attacker needs to cross to carry out this attack. What are the security measures that stop attempts to exploit users? For example, I can think of a firewall which tries to detect malicious-looking requests. Please answer the question for both internal and external attacks.
2016/07/19
[ "https://security.stackexchange.com/questions/130462", "https://security.stackexchange.com", "https://security.stackexchange.com/users/110848/" ]
There are a number of ways that people attempt to mitigate these attacks. **External** * Configure your spam filter to block MIME types frequently associated with malware (it's highly unlikely there is a business-relevant reason to send .exe or .bat files, for instance) * Use anti-virus, as .exe's can be detected even after several rounds of encoding. Preferably you'd want "behavioural-based" anti-virus, which blocks applications based on actions they attempt to take. * Educate your staff on how to spot threats and what to do should they encounter them. * If your IT department is big enough and you want to pull your hair out figuring it out, software restriction policies can be implemented to only allow known .exe's to run * Disable the ability for executables to run from the temp directory / directories * Implement firewall egress filtering. Set up a proxy with deep packet inspection that intercepts SSL/TLS connections and blocks outbound traffic that seems suspicious. **Internal** * Do not give local admin access to end users. Just don't. If they need to raise privileges, assign them a special account that they have to use via the "Run As..." functionality, but end users should not have administrative privileges on their workstations. * Software restrictions / egress filtering / AV / education / disabling executables from the temp directory all still apply internally * Segregate your network to prevent lateral movement in the event of a compromise
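The first external point (blocking risky attachment types at the mail gateway) can be sketched with Python's standard email parser. The extension blocklist below is illustrative only, and real gateways also inspect file *content*, not just names:

```python
from email import message_from_string

BLOCKED_EXTENSIONS = {".exe", ".bat", ".scr", ".js"}  # example blocklist

def has_blocked_attachment(raw_message):
    """Return True if any attachment's filename ends in a blocked
    extension (a deliberately simplified mail-gateway check)."""
    msg = message_from_string(raw_message)
    for part in msg.walk():
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(ext)
                            for ext in BLOCKED_EXTENSIONS):
            return True
    return False

# A tiny hand-built multipart message with an .exe attachment.
raw = (
    "From: a@example.com\r\n"
    "To: b@example.com\r\n"
    'Content-Type: multipart/mixed; boundary="XYZ"\r\n'
    "\r\n--XYZ\r\n"
    "Content-Type: application/octet-stream\r\n"
    'Content-Disposition: attachment; filename="invoice.exe"\r\n'
    "\r\nMZ...\r\n--XYZ--\r\n"
)
print(has_blocked_attachment(raw))  # → True
```

Extension matching is trivially evaded by renaming, which is why the AV and behavioural controls in the list above sit behind it as further layers.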
You'd be surprised how successful something like this can be - especially for a typical user who may not have any security controls in place. Also, I would argue that if you are able to convince a user to execute code you have provided, a malicious exploit is probably not even necessary (i.e. "Microsoft tech support calls" that obtain remote access to systems utilizing legitimate tools). In a corporate environment, there would be a variety of security measures that should help to prevent (stop it from happening) and control (limit the success of) such an attack. Most companies take a layered security approach with multiple measures in place. Walking through how such an attack would take place, here are some of the controls that might be implemented for an attack like this: 1. Provide malicious file to user via email or other means * Email filtering (antispam, AV, whitelisting), email policies (attachment restrictions such as size or file type) 2. User saves file and launches it * User awareness training, group policy restrictions (restrict where files can be launched from), application whitelisting (prevents unknown files from running), endpoint AV (scans/deletes/blocks known malicious files). 3. File launches and establishes connection back to attacker (perhaps via an exploit or via legitimate software) * Endpoint and/or network (egress) filtering or proxy (block unknown/untrusted connections), system patches (block known exploits) 4. Attacker gains control of system, exfiltrates data, maintains access, and spreads laterally * Restricted user permissions (i.e. non-admin), monitoring tools (to record forensic activity for later review) I'm sure there are more, but this was briefly pulled from the top of my head - you can always take an attack, walk through the appropriate steps, then think of potential mitigations for those steps - and of course you can then rinse and repeat with more attacks against those mitigations.
130,462
I am doing some hands-on training with Kali Linux, and one of the exercises I came across was to make an exe file containing a malicious payload (say, to obtain a reverse shell), send it to a victim, and hope for the victim to download the exe file. If he/she downloads it, it is fairly easy after that. Now this isn't tricky at all, given that many users aren't aware of security. But in the real world, there must be more hurdles that an attacker needs to cross to carry out this attack. What are the security measures that stop attempts to exploit users? For example, I can think of a firewall which tries to detect malicious-looking requests. Please answer the question for both internal and external attacks.
2016/07/19
[ "https://security.stackexchange.com/questions/130462", "https://security.stackexchange.com", "https://security.stackexchange.com/users/110848/" ]
There are a number of ways that people attempt to mitigate these attacks. **External** * Configure your spam filter to block MIME types frequently associated with malware (it's highly unlikely there is a business-relevant reason to send .exe or .bat files, for instance) * Use anti-virus, as .exe's can be detected even after several rounds of encoding. Preferably you'd want "behavioural-based" anti-virus, which blocks applications based on actions they attempt to take. * Educate your staff on how to spot threats and what to do should they encounter them. * If your IT department is big enough and you want to pull your hair out figuring it out, software restriction policies can be implemented to only allow known .exe's to run * Disable the ability for executables to run from the temp directory / directories * Implement firewall egress filtering. Set up a proxy with deep packet inspection that intercepts SSL/TLS connections and blocks outbound traffic that seems suspicious. **Internal** * Do not give local admin access to end users. Just don't. If they need to raise privileges, assign them a special account that they have to use via the "Run As..." functionality, but end users should not have administrative privileges on their workstations. * Software restrictions / egress filtering / AV / education / disabling executables from the temp directory all still apply internally * Segregate your network to prevent lateral movement in the event of a compromise
First and foremost, you can adopt a firewall policy of "allow what's needed, block the rest". That will only go so far, because your malicious outgoing link may take advantage of a hole you have poked in the firewall. Enterprise-grade products will "hook" certain functions like connect(). By hooking the function, an analysis engine determines if the connection is legit or bad. This decision takes many data points into account, like where the connection is going, what prompted the connection, etc. If you use OS X and want to see this in action, get a copy of a tool called Little Snitch. This program will hook outgoing connections and allow you to decide if each should be allowed or not.
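As a toy illustration of the hooking idea, the Python sketch below monkey-patches `socket.connect` in-process and consults a policy before handing off to the real call. The allowed-port policy is made up, and real products hook at the kernel or system-library level rather than inside one interpreter:

```python
import socket

_real_connect = socket.socket.connect
ALLOWED_PORTS = {80, 443}  # illustrative egress policy

def guarded_connect(self, address):
    """Toy 'hook': consult a policy before calling the real connect()."""
    host, port = address
    if port not in ALLOWED_PORTS:
        raise PermissionError(f"blocked outbound {host}:{port}")
    return _real_connect(self, address)

socket.socket.connect = guarded_connect

try:
    # TEST-NET address; the hook rejects it before any packet is sent.
    socket.socket().connect(("192.0.2.1", 9999))
except PermissionError as e:
    print(e)  # → blocked outbound 192.0.2.1:9999
```

The interesting part is the decision function: a real engine would weigh destination reputation, the calling process, and what user action (if any) prompted the connection.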
130,462
I am doing some hands-on training with Kali Linux, and one of the exercises I came across was to make an exe file containing a malicious payload (say, to obtain a reverse shell), send it to a victim, and hope for the victim to download the exe file. If he/she downloads it, it is fairly easy after that. Now this isn't tricky at all, given that many users aren't aware of security. But in the real world, there must be more hurdles that an attacker needs to cross to carry out this attack. What are the security measures that stop attempts to exploit users? For example, I can think of a firewall which tries to detect malicious-looking requests. Please answer the question for both internal and external attacks.
2016/07/19
[ "https://security.stackexchange.com/questions/130462", "https://security.stackexchange.com", "https://security.stackexchange.com/users/110848/" ]
You'd be surprised how successful something like this can be - especially for a typical user who may not have any security controls in place. Also, I would argue that if you are able to convince a user to execute code you have provided, a malicious exploit is probably not even necessary (i.e. "Microsoft tech support calls" that obtain remote access to systems utilizing legitimate tools). In a corporate environment, there would be a variety of security measures that should help to prevent (stop it from happening) and control (limit the success of) such an attack. Most companies take a layered security approach with multiple measures in place. Walking through how such an attack would take place, here are some of the controls that might be implemented for an attack like this: 1. Provide malicious file to user via email or other means * Email filtering (antispam, AV, whitelisting), email policies (attachment restrictions such as size or file type) 2. User saves file and launches it * User awareness training, group policy restrictions (restrict where files can be launched from), application whitelisting (prevents unknown files from running), endpoint AV (scans/deletes/blocks known malicious files). 3. File launches and establishes connection back to attacker (perhaps via an exploit or via legitimate software) * Endpoint and/or network (egress) filtering or proxy (block unknown/untrusted connections), system patches (block known exploits) 4. Attacker gains control of system, exfiltrates data, maintains access, and spreads laterally * Restricted user permissions (i.e. non-admin), monitoring tools (to record forensic activity for later review) I'm sure there are more, but this was briefly pulled from the top of my head - you can always take an attack, walk through the appropriate steps, then think of potential mitigations for those steps - and of course you can then rinse and repeat with more attacks against those mitigations.
First and foremost you can adopt a firewall policy of "allow what's needed, block the rest". That will only go so far, because your malicious outgoing link may take advantage of a hole you have poked in the firewall. Enterprise-grade products will "hook" certain functions like connect(). By hooking the function, an analysis engine determines if the connection is legit or bad. This decision takes many data points into account, like where the connection is going, what prompted the connection, etc. If you use OS X and want to see this in action, get a copy of a tool called Little Snitch. This program will hook outgoing connections and allow you to decide if each should be allowed or not.
31,423
I'm just starting to use CiviCRM. So I realize that this could be a very dumb question. I'm trying to deduplicate contacts by name AND surname. When I create a new deduplicating rule using name and surname, it looks like it gets all the contacts with the same name OR surname. For example, if I have * Mario Rossi * Mario Verdi * Mario Bianchi * Mario Neri they are all shown as duplicates...but they are not. Is there a way to create a rule which addresses just contacts with same name AND surname (or vice versa)? Thank you in advance.
2019/07/19
[ "https://civicrm.stackexchange.com/questions/31423", "https://civicrm.stackexchange.com", "https://civicrm.stackexchange.com/users/7271/" ]
Welcome to CiviCRM SE. Eileen's answer is definitely the right place to start, but you might want to look at a related question that I asked about the built-in (reserved) de-duplicate rules, as they aren't entirely clear from the documentation. It says "NAME" in the description, but actually uses the fields "First Name" and "Last Name" (you have used different terminology), so it took me a while to work out how it worked. See [Reserved de-dupe rules](https://civicrm.stackexchange.com/questions/29155/reserved-de-dupe-rules) Update: Remembering odd behaviour with my previous issues, I looked again and have found that the de-dupe rules appear to be cached. So if you create a rule, adjust it and try with the new version, the old version is still used. I missed this before because I was checking the database, and the rule is updated properly there. If you go to Administer >> System >> Cleanup Caches and Update Paths and select Cleanup Caches, then try the de-dupe rule again, it works as expected. If you were experimenting, then I expect this is the problem. Alternatively, if you delete the rule and add the new version it will also work. A note on the page where you edit de-dupe rules telling you to clear the cache would be very helpful. Let me know if this solves your problem and I'll report it as a bug/enhancement.
I think this blog does a reasonable job of describing it <https://civicrm.org/blog/spidersilk/understanding-civicrm-dedupe-rules> Here are the official docs <https://docs.civicrm.org/user/en/latest/common-workflows/deduping-and-merging/> * but we could do with pulling some in from the docs Basically you need to check your weights - if you have them both set to '5' & the threshold is 10 then you need both for a dupe. If the threshold is 5 then either/or
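The weight/threshold mechanics described above can be sketched in a few lines (a simplified illustration, not CiviCRM's actual implementation; the field names and rule shape here are hypothetical):

```javascript
// Sketch of CiviCRM-style weighted dedupe matching: each rule field has a
// weight, and two contacts are flagged as duplicates when the summed weights
// of their matching fields reach the rule's threshold.
function isDuplicate(a, b, rule) {
  const score = rule.fields.reduce(
    (sum, { field, weight }) => (a[field] === b[field] ? sum + weight : sum),
    0
  );
  return score >= rule.threshold;
}

// Weight 5 on each name field with threshold 10: BOTH must match (AND).
const andRule = {
  threshold: 10,
  fields: [
    { field: "first_name", weight: 5 },
    { field: "last_name", weight: 5 },
  ],
};

// Same weights with threshold 5: EITHER match is enough (OR).
const orRule = { ...andRule, threshold: 5 };
```

With the AND-style rule, "Mario Rossi" and "Mario Verdi" score only 5 and are not flagged, which is the behaviour the question is after.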
17,955
Just like it says in the title, how can I store blocks of cheese for max shelf life? I will be making a grilled cheese sandwich and shredding 3 varieties of cheese (cheddar, swiss, parm(?)) and I am afraid that I won't be able to use three whole blocks on one sandwich.
2011/09/23
[ "https://cooking.stackexchange.com/questions/17955", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/3818/" ]
Hard, aged cheeses like cheddar and Parmesan are fine to freeze, particularly if you're going to be melting them when you get around to using them anyway. Freezing causes ice crystals to break up the structure of the cheese, and when they thaw, they leave holes in what was (prior to freezing) a pretty smooth cheese. So you might notice if you freeze blocks of cheese, they are more crumbly when you thaw them than they were when you bought them. The cheeses you're working with should be fine if stored properly, but softer / creamier cheeses (brie, havarti, etc.) might become somewhat unpleasant if you freeze them. As far as storage is concerned, you can actually do one of two things: 1. Grate the cheese before you freeze it. All you need to do for this method is grate your cheese and put it in a ziploc freezer bag (thicker than a regular zip-top bag). Just make sure to squeeze the air out before sealing, and seal it well. 2. Freeze the cheese in blocks. Wrap them in plastic wrap and then put them in a ziploc bag, and you should be all set; it'll keep for 4-6 months. ([source](http://food.unl.edu/web/fnh/food-facts#canufreeze)) No matter which method you use, you may notice a slight change in texture. Make sure you thaw the cheese before using it. (Though I've put frozen shredded mozzarella on pizza and frozen shredded Mexican cheese blend - a blend of cheddar, monterey jack, queso blanco and asadero - on tacos and not had any trouble.)
The best way to keep cheese in the fridge ... and the way I've made semisoft cheeses like cheddar last 6-10 weeks, sometimes more: 1. Wrap the cheese in butcher paper, or baking parchment if you can't get butcher paper. 2. Enclose the wrapped cheese in a plastic grocery bag or plastic wrap. 3. Each time you slice off some of the cheese, change the paper. The paper keeps the cheese dry, and the plastic keeps it moist. So the cheese doesn't desiccate, but doesn't get moldy either. Works a charm.
17,955
Just like it says in the title, how can I store blocks of cheese for max shelf life? I will be making a grilled cheese sandwich and shredding 3 varieties of cheese (cheddar, swiss, parm(?)) and I am afraid that I won't be able to use three whole blocks on one sandwich.
2011/09/23
[ "https://cooking.stackexchange.com/questions/17955", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/3818/" ]
Hard, aged cheeses like cheddar and Parmesan are fine to freeze, particularly if you're going to be melting them when you get around to using them anyway. Freezing causes ice crystals to break up the structure of the cheese, and when they thaw, they leave holes in what was (prior to freezing) a pretty smooth cheese. So you might notice if you freeze blocks of cheese, they are more crumbly when you thaw them than they were when you bought them. The cheeses you're working with should be fine if stored properly, but softer / creamier cheeses (brie, havarti, etc.) might become somewhat unpleasant if you freeze them. As far as storage is concerned, you can actually do one of two things: 1. Grate the cheese before you freeze it. All you need to do for this method is grate your cheese and put it in a ziploc freezer bag (thicker than a regular zip-top bag). Just make sure to squeeze the air out before sealing, and seal it well. 2. Freeze the cheese in blocks. Wrap them in plastic wrap and then put them in a ziploc bag, and you should be all set; it'll keep for 4-6 months. ([source](http://food.unl.edu/web/fnh/food-facts#canufreeze)) No matter which method you use, you may notice a slight change in texture. Make sure you thaw the cheese before using it. (Though I've put frozen shredded mozzarella on pizza and frozen shredded Mexican cheese blend - a blend of cheddar, monterey jack, queso blanco and asadero - on tacos and not had any trouble.)
I've had good luck simply storing the cheese tightly wrapped in plastic wrap in the refrigerator. If you use a good quality wrap material and wrap it tightly, the cheese will stay dry and also not lose moisture. In the past I tried using ziploc bags, evacuating air before sealing, but the simple plastic wrap approach works better. I can keep 6-7 types of cheese fresh during the time it takes my family of four to eat it... up to several months depending on cheese type.
17,955
Just like it says in the title, how can I store blocks of cheese for max shelf life? I will be making a grilled cheese sandwich and shredding 3 varieties of cheese (cheddar, swiss, parm(?)) and I am afraid that I won't be able to use three whole blocks on one sandwich.
2011/09/23
[ "https://cooking.stackexchange.com/questions/17955", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/3818/" ]
The best way to keep cheese in the fridge ... and the way I've made semisoft cheeses like cheddar last 6-10 weeks, sometimes more: 1. Wrap the cheese in butcher paper, or baking parchment if you can't get butcher paper. 2. Enclose the wrapped cheese in a plastic grocery bag or plastic wrap. 3. Each time you slice off some of the cheese, change the paper. The paper keeps the cheese dry, and the plastic keeps it moist. So the cheese doesn't desiccate, but doesn't get moldy either. Works a charm.
I've had good luck simply storing the cheese tightly wrapped in plastic wrap in the refrigerator. If you use a good quality wrap material and wrap it tightly, the cheese will stay dry and also not lose moisture. In the past I tried using ziploc bags, evacuating air before sealing, but the simple plastic wrap approach works better. I can keep 6-7 types of cheese fresh during the time it takes my family of four to eat it... up to several months depending on cheese type.
307,168
Why can't I feel the actual speed of the plane when the plane is in the sky? I mean I cannot judge how fast the plane is going in terms of the lights on the ground, and I feel it is flying so slowly. How can I explain this mismatch?
2017/01/24
[ "https://physics.stackexchange.com/questions/307168", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/118807/" ]
You do not feel speed, you only feel acceleration, or other forces, like those from the wind on your face - and you cannot feel that in a plane. So you do feel something when the plane is accelerating, taking off, sometimes when it banks, or in bad weather. But a plane's speed is typically steady, unchanging, for most of the trip. When changes in the plane's motion occur they are relatively small (except for very bad weather, jet stream turbulence and the like). The plane's motion is normally kept within reasonable acceleration rates for the precise reason to avoid passenger discomfort (and to avoid excessive stress on the airframe). So you're in a system that's designed to minimize your sensation of motion. As you're high up, you cannot see how fast the plane's ground speed is. The closer you are to a passing object, the faster you think you're going. You're not close to the ground so it almost drifts by. A similar effect is why some people experience more fear when driving than others. They concentrate on the road surface near their car and it gives a greater impression of speed than the place they should be looking (further ahead). You might also notice it if you were on a skateboard and compared the sensation of speed when standing to that when kneeling. The speed doesn't change, but your brain picks up different clues to motion and interprets them as different speeds. In a high-flying plane there are no obvious speed clues, so your brain can interpret that as not moving fast.
I believe it's an illusion that has more to do with cognitive science. The processing of object movement and the evaluation of speed is done pretty unconsciously through different visual areas of the cortex. [Some people for example cannot perceive](https://en.wikipedia.org/wiki/Akinetopsia) motion even though they see perfectly well! This information is also completed by other sources of information that help us "feel" speed, like feeling the wind, vibrations, noise and eye movements. So the feel of speed is not a rational process! Most likely it was developed to help us hunt or escape the attack of an animal, not to judge the speed of an aircraft. Then somehow all this information reaches [speed cells](http://www.sciencemag.org/news/2015/07/speed-cells-brain-track-how-fast-animals-run) which, together with place cells, grid cells, boundary cells and head direction cells, form the "human GPS system". When in a plane, all the information reaching the speed cells is biased, and thinking rationally about our real speed can't really change the slow "feel" of our speed cells. Maybe a lot of training could help. One should ask test or fighter pilots if they feel the real speed. People who vote your question down should get [a life.](https://www.cheaptickets.com/)
203,057
I just realized that both seems to mean the same thing. However, I am not sure if this is something that's context-dependent or not. What do you think? For example: > > I pressed and used the buttons at the right time and in the right > combination. > > > I pressed and used the buttons at the right time and with the right > combination. > > >
2019/03/30
[ "https://ell.stackexchange.com/questions/203057", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/91596/" ]
Interesting question! I've never thought about this before. This might depend on the individual and the dialect, so I will only be answering for myself and Australian English. **In** a combination is used to describe a series of actions (for example, pressing buttons) being done in a particular order. The actions themselves are the combination. > > I pressed the buttons in the right combination. > > > **With** a combination is used to describe an action (for example, opening a lock) that needs to *use* a combination (a particular sequence). The action is not part of the combination. > > I opened the lock with the right combination. > > > So in your question, **"in the right combination" is correct.**
I suggest using *I pressed and used the buttons in combination with right time and right combination*. If you'd like to use *with the right combination* I think you should add *of sth* after combination, i.e. *with the right combination of sth* Please refer to [this post](https://english.stackexchange.com/questions/76954/difference-between-combination-of-and-combination-between)
5,095,525
After discovering jQuery a few years ago, I realized how easy it was to really make interactive and user-friendly websites without writing books of code. As the projects increased in size, so did the time required to carry out any debugging or implement a change or new feature. From reading various blogs and staying somewhat updated, I've read about libraries such as [Backbone.js](http://documentcloud.github.com/backbone/) and [JavascriptMVC](http://www.javascriptmvc.com/), which both sound like good alternatives in order to make the code more modular and separated. However, being far from a Javascript or jQuery expert, I am not really suited to tell what's a good cornerstone in a project where future ease of maintainability, debugging and development are prioritized. **So with this in mind - what's common sense when starting a project where Javascript and jQuery stand for the majority of the user experience and data presentation to the user?** Thanks a lot
2011/02/23
[ "https://Stackoverflow.com/questions/5095525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198128/" ]
Check out this book, mainly chapter 10 <http://jqfundamentals.com/book/index.html#chapter-10>
My suggestion would be to isolate as much javascript as possible into external js files, outside of your views, and simply reference them in the headers. Not only does this allow javascript re-use from page to page, but it separates your concerns on a fair level, spreading out your code to allow for easier debugging (easier by my beliefs anyway). In addition, this makes your js a little bit more secure as it is not rendered directly onto the browser page. Granted, that security is fairly negligible since tools like Firebug or IE's Developer Tools can access layered-away javascript files. Second, I would suggest using a tool like compress.msbuild to (at final compile for deployment) compress all your custom-written javascript to whatevertheirnameis-min.js. Not only does compacting everything to a single line actually reduce load and run times for your code, it also obfuscates it, making it more secure. It is significantly more difficult to take apart a -min file, much less find any specific functions, when all the code is a single line.
5,095,525
After discovering jQuery a few years ago, I realized how easy it was to really make interactive and user-friendly websites without writing books of code. As the projects increased in size, so did the time required to carry out any debugging or implement a change or new feature. From reading various blogs and staying somewhat updated, I've read about libraries such as [Backbone.js](http://documentcloud.github.com/backbone/) and [JavascriptMVC](http://www.javascriptmvc.com/), which both sound like good alternatives in order to make the code more modular and separated. However, being far from a Javascript or jQuery expert, I am not really suited to tell what's a good cornerstone in a project where future ease of maintainability, debugging and development are prioritized. **So with this in mind - what's common sense when starting a project where Javascript and jQuery stand for the majority of the user experience and data presentation to the user?** Thanks a lot
2011/02/23
[ "https://Stackoverflow.com/questions/5095525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198128/" ]
Both Backbone.js and JavascriptMVC are great examples of using a framework to organize large projects in a sane way ([SproutCore](http://www.sproutcore.com/) and [Cappuccino](http://cappuccino.org/) are nice too). I definitely suggest you choose a standard way of dealing with data from the server, handling events from the DOM and responses from the server, and view creation. Otherwise it can be a maintenance nightmare. Beyond an MVC framework, you should probably choose a solution for these problems: * Dependency management: how will you compile and load javascript files in the right order? My suggestion would be [RequireJS](http://requirejs.org/). * Testing: testing UI code is never easy, but the guys over at jQuery have been doing it for a while and their testing tool [QUnit](http://docs.jquery.com/Qunit) is well documented/tested. * Minification: you'll want to minify your code before deploying to production. RequireJS has this built in, but you could also use the [Closure Compiler](http://code.google.com/closure/compiler/) if you want to get crazy small source. * Build System: all these tools are great, but you should pull them all together in one master build system so you can run a simple command on the command line and have your debug or production application. The specific tool to use depends on your language of choice - Ruby => [Rake](http://rake.rubyforge.org/), Python -> write your own, **NodeJS** as a build tool (I like this option the most) -> [Jake](https://github.com/jcoglan/jake) Beyond that, just be aware if something feels clunky or slow (either tooling or framework) and refactor.
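Whichever loader or framework ends up managing the dependencies, the underlying idea is to keep each piece of functionality in a module with a small public surface. A minimal sketch of the classic module pattern (all names here are purely illustrative):

```javascript
// The module pattern: an immediately-invoked function keeps internal state
// private and returns only the API that other code is allowed to use.
var cart = (function () {
  var items = []; // private - invisible outside the closure

  function add(item) {
    items.push(item);
  }

  function total() {
    return items.reduce(function (sum, i) { return sum + i.price; }, 0);
  }

  // the public surface of the module
  return { add: add, total: total };
})();
```

Loaders like RequireJS formalize this same idea, adding explicit dependency declarations and load ordering on top of it.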
Check out this book, mainly chapter 10 <http://jqfundamentals.com/book/index.html#chapter-10>
5,095,525
After discovering jQuery a few years ago, I realized how easy it was to really make interactive and user-friendly websites without writing books of code. As the projects increased in size, so did the time required to carry out any debugging or implement a change or new feature. From reading various blogs and staying somewhat updated, I've read about libraries such as [Backbone.js](http://documentcloud.github.com/backbone/) and [JavascriptMVC](http://www.javascriptmvc.com/), which both sound like good alternatives in order to make the code more modular and separated. However, being far from a Javascript or jQuery expert, I am not really suited to tell what's a good cornerstone in a project where future ease of maintainability, debugging and development are prioritized. **So with this in mind - what's common sense when starting a project where Javascript and jQuery stand for the majority of the user experience and data presentation to the user?** Thanks a lot
2011/02/23
[ "https://Stackoverflow.com/questions/5095525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198128/" ]
Both Backbone.js and JavascriptMVC are great examples of using a framework to organize large projects in a sane way ([SproutCore](http://www.sproutcore.com/) and [Cappuccino](http://cappuccino.org/) are nice too). I definitely suggest you choose a standard way of dealing with data from the server, handling events from the DOM and responses from the server, and view creation. Otherwise it can be a maintenance nightmare. Beyond an MVC framework, you should probably choose a solution for these problems: * Dependency management: how will you compile and load javascript files in the right order? My suggestion would be [RequireJS](http://requirejs.org/). * Testing: testing UI code is never easy, but the guys over at jQuery have been doing it for a while and their testing tool [QUnit](http://docs.jquery.com/Qunit) is well documented/tested. * Minification: you'll want to minify your code before deploying to production. RequireJS has this built in, but you could also use the [Closure Compiler](http://code.google.com/closure/compiler/) if you want to get crazy small source. * Build System: all these tools are great, but you should pull them all together in one master build system so you can run a simple command on the command line and have your debug or production application. The specific tool to use depends on your language of choice - Ruby => [Rake](http://rake.rubyforge.org/), Python -> write your own, **NodeJS** as a build tool (I like this option the most) -> [Jake](https://github.com/jcoglan/jake) Beyond that, just be aware if something feels clunky or slow (either tooling or framework) and refactor.
My suggestion would be to isolate as much javascript as possible into external js files, outside of your views, and simply reference them in the headers. Not only does this allow javascript re-use from page to page, but it separates your concerns on a fair level, spreading out your code to allow for easier debugging (easier by my beliefs anyway). In addition, this makes your js a little bit more secure as it is not rendered directly onto the browser page. Granted, that security is fairly negligible since tools like Firebug or IE's Developer Tools can access layered-away javascript files. Second, I would suggest using a tool like compress.msbuild to (at final compile for deployment) compress all your custom-written javascript to whatevertheirnameis-min.js. Not only does compacting everything to a single line actually reduce load and run times for your code, it also obfuscates it, making it more secure. It is significantly more difficult to take apart a -min file, much less find any specific functions, when all the code is a single line.
5,095,525
After discovering jQuery a few years ago, I realized how easy it was to really make interactive and user-friendly websites without writing books of code. As the projects increased in size, so did the time required to carry out any debugging or implement a change or new feature. From reading various blogs and staying somewhat updated, I've read about libraries such as [Backbone.js](http://documentcloud.github.com/backbone/) and [JavascriptMVC](http://www.javascriptmvc.com/), which both sound like good alternatives in order to make the code more modular and separated. However, being far from a Javascript or jQuery expert, I am not really suited to tell what's a good cornerstone in a project where future ease of maintainability, debugging and development are prioritized. **So with this in mind - what's common sense when starting a project where Javascript and jQuery stand for the majority of the user experience and data presentation to the user?** Thanks a lot
2011/02/23
[ "https://Stackoverflow.com/questions/5095525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198128/" ]
I would like to recommend using javascript in a functional style, which can be helped by abstractions like [coffeescript](http://jashkenas.github.com/coffee-script/) and [underscore.js](http://documentcloud.github.com/underscore/). Also, minimising cross-module interaction and relying on event-driven code is a great way to keep your entire project organized. I definitely like the way that [backbone.js](http://documentcloud.github.com/backbone/) handles the module-view weak coupling by having views bind to change events on modules. Functional, event-based code is great for macro structure. I would also advise decoupling javascript from the DOM. (Again, [backbone.js](http://documentcloud.github.com/backbone/) has a great example of how the model is completely DOM independent and even the views aren't dependent on the DOM. For all you care, the views could be shooting data down a WebSocket.) I'm personally also a fan of having one central file manager rather than a complicated require/include structure on every page. Load javascript modules from your central loader based on page-by-page feature detection. (See here for an [example](https://stackoverflow.com/questions/5083409/pattern-for-javascript-module-pattern-and-sub-module-initialization/5083571#5083571) of a central file manager.) I would also like to advocate the growing possibility of good re-use through [node.js](http://nodejs.org/). There are quite a few people working on porting browser code verbatim to node.js or copying node.js code verbatim to the browser. (See [YUI3 running on nodejs](http://www.yuiblog.com/blog/2010/09/29/video-glass-node/), [node.js in the browser](https://github.com/Marak/gemini.js), [commonJS in the browser](https://github.com/Raynos/BrowserCJS). Admittedly most of these are WIP and not stable.)
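The event-driven, weakly coupled style described above can be sketched with a tiny pub/sub object (a bare-bones simplification of the kind of event binding Backbone provides; all names here are illustrative):

```javascript
// Minimal pub/sub sketch: modules communicate through named events rather
// than calling each other directly, which keeps cross-module coupling weak.
var events = {
  handlers: {},
  on: function (name, fn) {
    (this.handlers[name] = this.handlers[name] || []).push(fn);
  },
  trigger: function (name, data) {
    (this.handlers[name] || []).forEach(function (fn) { fn(data); });
  },
};

// A "view" reacts to model changes without knowing who caused them.
var lastRendered = null;
events.on("model:change", function (model) {
  lastRendered = model; // stand-in for re-rendering the view
});
events.trigger("model:change", { id: 1 });
```

Here the model-changing code never references the view at all; it only fires the event, which is what makes the modules easy to test and replace independently.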
My suggestion would be to isolate as much javascript as possible into external js files, outside of your views, and simply reference them in the headers. Not only does this allow javascript re-use from page to page, but it separates your concerns on a fair level, spreading out your code to allow for easier debugging (easier by my beliefs anyway). In addition, this makes your js a little bit more secure as it is not rendered directly onto the browser page. Granted, that security is fairly negligible since tools like Firebug or IE's Developer Tools can access layered-away javascript files. Second, I would suggest using a tool like compress.msbuild to (at final compile for deployment) compress all your custom-written javascript to whatevertheirnameis-min.js. Not only does compacting everything to a single line actually reduce load and run times for your code, it also obfuscates it, making it more secure. It is significantly more difficult to take apart a -min file, much less find any specific functions, when all the code is a single line.
5,095,525
After discovering jQuery a few years ago, I realized how easy it was to really make interactive and user-friendly websites without writing books of code. As the projects increased in size, so did the time required to carry out any debugging or implement a change or new feature. From reading various blogs and staying somewhat updated, I've read about libraries such as [Backbone.js](http://documentcloud.github.com/backbone/) and [JavascriptMVC](http://www.javascriptmvc.com/), which both sound like good alternatives in order to make the code more modular and separated. However, being far from a Javascript or jQuery expert, I am not really suited to tell what's a good cornerstone in a project where future ease of maintainability, debugging and development are prioritized. **So with this in mind - what's common sense when starting a project where Javascript and jQuery stand for the majority of the user experience and data presentation to the user?** Thanks a lot
2011/02/23
[ "https://Stackoverflow.com/questions/5095525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198128/" ]
Both Backbone.js and JavascriptMVC are great examples of using a framework to organize large projects in a sane way ([SproutCore](http://www.sproutcore.com/) and [Cappuccino](http://cappuccino.org/) are nice too). I definitely suggest you choose a standard way of dealing with data from the server, handling events from the DOM and responses from the server, and view creation. Otherwise it can be a maintenance nightmare. Beyond an MVC framework, you should probably choose a solution for these problems: * Dependency management: how will you compile and load javascript files in the right order? My suggestion would be [RequireJS](http://requirejs.org/). * Testing: testing UI code is never easy, but the guys over at jQuery have been doing it for a while and their testing tool [QUnit](http://docs.jquery.com/Qunit) is well documented/tested. * Minification: you'll want to minify your code before deploying to production. RequireJS has this built in, but you could also use the [Closure Compiler](http://code.google.com/closure/compiler/) if you want to get crazy small source. * Build System: all these tools are great, but you should pull them all together in one master build system so you can run a simple command on the command line and have your debug or production application. The specific tool to use depends on your language of choice - Ruby => [Rake](http://rake.rubyforge.org/), Python -> write your own, **NodeJS** as a build tool (I like this option the most) -> [Jake](https://github.com/jcoglan/jake) Beyond that, just be aware if something feels clunky or slow (either tooling or framework) and refactor.
I would like to recommend using javascript in a functional style, which can be helped by abstractions like [coffeescript](http://jashkenas.github.com/coffee-script/) and [underscore.js](http://documentcloud.github.com/underscore/). Also, minimising cross-module interaction and relying on event-driven code is a great way to keep your entire project organized. I definitely like the way that [backbone.js](http://documentcloud.github.com/backbone/) handles the module-view weak coupling by having views bind to change events on modules. Functional, event-based code is great for macro structure. I would also advise decoupling javascript from the DOM. (Again, [backbone.js](http://documentcloud.github.com/backbone/) has a great example of how the model is completely DOM independent and even the views aren't dependent on the DOM. For all you care, the views could be shooting data down a WebSocket.) I'm personally also a fan of having one central file manager rather than a complicated require/include structure on every page. Load javascript modules from your central loader based on page-by-page feature detection. (See here for an [example](https://stackoverflow.com/questions/5083409/pattern-for-javascript-module-pattern-and-sub-module-initialization/5083571#5083571) of a central file manager.) I would also like to advocate the growing possibility of good re-use through [node.js](http://nodejs.org/). There are quite a few people working on porting browser code verbatim to node.js or copying node.js code verbatim to the browser. (See [YUI3 running on nodejs](http://www.yuiblog.com/blog/2010/09/29/video-glass-node/), [node.js in the browser](https://github.com/Marak/gemini.js), [commonJS in the browser](https://github.com/Raynos/BrowserCJS). Admittedly most of these are WIP and not stable.)
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
**Solved**! Steps: 1. Clean your project 2. Generate your signed bundle/.apk file 3. Uncheck "Remember password" 4. Manually enter your password in "Key store password" and "Key password" 5. Click Next and you're done! *An Android Studio update brought this bug.*
I faced the same problem. Android Studio saves passwords in a bad format. I solved it by typing the password manually again.
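Separately from the Android Studio bug itself, a quick way to rule out the keystore is to verify the password from the command line with `keytool` (part of the JDK). This is a sketch; the keystore path and password are placeholders to substitute with your own values:

```shell
# List the keys in the store to confirm the store password is correct.
# KEYSTORE and STOREPASS are placeholders -- substitute your real values.
KEYSTORE="${KEYSTORE:-path/to/KeyStore.jks}"
STOREPASS="${STOREPASS:-yourStorePassword}"
if [ -f "$KEYSTORE" ]; then
  # Succeeds (and shows the aliases, e.g. key0) only if the password is
  # right, so a failure here means the password itself is wrong, not
  # Android Studio's handling of it.
  keytool -list -keystore "$KEYSTORE" -storepass "$STOREPASS"
else
  echo "keystore not found: $KEYSTORE"
fi
```

If this lists `key0` cleanly, the password Android Studio saved is what is corrupted, and retyping it in the dialog is the fix.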
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
It is a bug in Android Studio 4.2+. You have to manually type the password each time you generate a signed APK!
I faced the same problem. Android Studio saves passwords in a bad format. I solved it by typing the password manually again.
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
**Solved**! Steps: 1. Clean your project 2. Generate your signed bundle/.apk file 3. Uncheck "Remember password" 4. Manually enter your password in "Key store password" and "Key password" 5. Click Next and you're done! *An Android Studio update brought this bug.*
It is a bug in Android Studio 4.2+. You have to manually type the password each time you generate a signed APK!
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
**Solved**! Steps: 1. Clean your project 2. Generate your signed bundle/.apk file 3. Uncheck "Remember password" 4. Manually enter your password in "Key store password" and "Key password" 5. Click Next and you're done! *An Android Studio update brought this bug.*
Android Studio tries to reuse the ASCII it generated when you first entered your passwords, but because it regenerates it based on your system language and datetime, it can differ in some characters. So, in some special cases it is better to use your keyboard to retype your password. ![Generate Signed Bundle or APK](https://i.stack.imgur.com/9U0rI.png) As you see, it makes different passwords in each field. It works correctly in some cases too, but to prevent the trouble altogether it is better to retype the password manually.
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
**Solved**! Steps: 1. Clean your project 2. Generate your signed bundle/.apk file 3. Uncheck "Remember password" 4. Manually enter your password in "Key store password" and "Key password" 5. Click Next and you're done! *An Android Studio update brought this bug.*
You can resolve it quickly by **deleting the keystore file and creating a new one** from the Build -> Generate signed bundle/APK -> APK -> create new options. [![enter image description here](https://i.stack.imgur.com/78EnQ.png)](https://i.stack.imgur.com/78EnQ.png)
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
It is a bug in Android Studio 4.2+. You have to manually type the password each time you generate a signed APK!
Android Studio tries to reuse the ASCII it generated when you first entered your passwords, but because it regenerates it based on your system language and datetime, it can differ in some characters. So, in some special cases it is better to use your keyboard to retype your password. ![Generate Signed Bundle or APK](https://i.stack.imgur.com/9U0rI.png) As you see, it makes different passwords in each field. It works correctly in some cases too, but to prevent the trouble altogether it is better to retype the password manually.
66,144,093
This happens in Android Studio Beta. Used to work in the other builds as I recall. Now, I either have to generate a new key every time I generate the apk or manually enter my password due to this error. My password is ASCII. Simple characters A thru Z and a thru z. It takes the key just fine, but the next time I try to build this is what I get. Digging back one error up in the output I see this: Execution failed for task ':app:packageRelease'. > > A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable > com.android.ide.common.signing.KeytoolException: Failed to read key key0 from store "C:\AndroidRelated\KeyStoreForCompanyA\KeyStore.jks": keystore password was incorrect > > > I have it remember my password from build to build though. Note that if I change this manually to what I know is correct, then it will build. Just a bug in Android Studio Beta maybe or is this overridden somewhere other than the Generate Signed Bundle or SDK dialog?
2021/02/10
[ "https://Stackoverflow.com/questions/66144093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443654/" ]
It is a bug in Android Studio 4.2+. You have to manually type the password each time you generate a signed APK!
You can resolve it quickly by **deleting the keystore file and creating a new one** from the Build -> Generate signed bundle/APK -> APK -> create new options. [![enter image description here](https://i.stack.imgur.com/78EnQ.png)](https://i.stack.imgur.com/78EnQ.png)
5,469,736
I want the paste button to appear only when the copy option is selected. When the copy option is not selected, the paste button should not appear on the screen. How can I do it?
2011/03/29
[ "https://Stackoverflow.com/questions/5469736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
> > Where is the best place to start? > > > Here: <http://www.asp.net/mvc/mvc3>
I wrote a blog post on [Dealing with javascript or JSON results after an AJAX call with Ajax.ActionLink, unobtrusive AJAX and MVC 3](http://www.sharpedgesoftware.com/Blog/2011/02/27/dealing-with-javascript-or-json-results-after-an-ajax-call-with-ajaxactionlink-unobtrusive-ajax-and-) that might help you along.
5,469,736
I want the paste button to appear only when the copy option is selected. When the copy option is not selected, the paste button should not appear on the screen. How can I do it?
2011/03/29
[ "https://Stackoverflow.com/questions/5469736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
> > Where is the best place to start? > > > Here: <http://www.asp.net/mvc/mvc3>
I always found pluralsight tutorials very helpful and they have a couple on MVC3. <http://www.pluralsight-training.net/microsoft/olt/courses.aspx> Alternatively check out this article by Jon Galloway which lists lots of tutorials and resources: <http://weblogs.asp.net/jgalloway/archive/2011/03/17/asp-net-mvc-3-roundup-of-tutorials-videos-labs-and-other-assorted-training-materials.aspx> Or David Haydens blog: <http://davidhayden.com/blog/dave/archive/2011/01/05/ASPNETMVC3TutorialsIndex.aspx> Finally of course there is always the asp.net website: <http://www.asp.net/mvc/tutorials>
5,469,736
I want the paste button to appear only when the copy option is selected. When the copy option is not selected, the paste button should not appear on the screen. How can I do it?
2011/03/29
[ "https://Stackoverflow.com/questions/5469736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I always found pluralsight tutorials very helpful and they have a couple on MVC3. <http://www.pluralsight-training.net/microsoft/olt/courses.aspx> Alternatively check out this article by Jon Galloway which lists lots of tutorials and resources: <http://weblogs.asp.net/jgalloway/archive/2011/03/17/asp-net-mvc-3-roundup-of-tutorials-videos-labs-and-other-assorted-training-materials.aspx> Or David Haydens blog: <http://davidhayden.com/blog/dave/archive/2011/01/05/ASPNETMVC3TutorialsIndex.aspx> Finally of course there is always the asp.net website: <http://www.asp.net/mvc/tutorials>
I wrote a blog post on [Dealing with javascript or JSON results after an AJAX call with Ajax.ActionLink, unobtrusive AJAX and MVC 3](http://www.sharpedgesoftware.com/Blog/2011/02/27/dealing-with-javascript-or-json-results-after-an-ajax-call-with-ajaxactionlink-unobtrusive-ajax-and-) that might help you along.
1,191,444
I'd like to ask if there's a way to make chrome extension ONLY activated in incognito mode. I only want the extension activated in incognito mode and disabled in normal browsing. I do hope there's a way and thanks in advance.
2017/03/23
[ "https://superuser.com/questions/1191444", "https://superuser.com", "https://superuser.com/users/710152/" ]
You can't do exactly this. The closest way is to create a separate Chrome profile and install the extensions there.
You can do this, but it requires the Opera browser. Activating extensions only in incognito is not possible in Chrome; switch to Opera.
16,377
My cat catches a lot of mice and birds. It always leaves the heart, perfectly intact. My question is, what about the tails of the mice, the feathers of the birds? I sometimes find some feathers but never all of them. Does the cat eat it, and why does it leave the heart?
2017/02/10
[ "https://pets.stackexchange.com/questions/16377", "https://pets.stackexchange.com", "https://pets.stackexchange.com/users/-1/" ]
Cats are very picky with what they eat. Although hearts have a lot of good nutrients for cats, if a cat doesn't like the texture or taste, it might just reject it. Cats most likely won't eat the feathers or the tails, as they are harder to digest, but might occasionally eat a few. Although this is unrelated, and not a current issue for you, it will prevent future stress: remember to get your cat regularly checked up on by the vet. Mice and birds can carry diseases.
Maybe the hearts are too tough to eat even with their sharp teeth, or maybe they are actually poisonous and therefore inedible. If either is the case, the cat knows, and therefore leaves it alone. As for the feathers, I think they are discarded since they're most likely as useless to the cat as the hearts.
16,377
My cat catches a lot of mice and birds. It always leaves the heart, perfectly intact. My question is, what about the tails of the mice, the feathers of the birds? I sometimes find some feathers but never all of them. Does the cat eat it, and why does it leave the heart?
2017/02/10
[ "https://pets.stackexchange.com/questions/16377", "https://pets.stackexchange.com", "https://pets.stackexchange.com/users/-1/" ]
Cats remove the feathers by licking them, and they’ll inevitably swallow some of them in the process; their mouth structure doesn’t really allow spitting stuff out like ours does. If it’s “your” cat, I assume you feed them? If so, they hunt primarily because their instincts tell them to, not for nutrition. They may eat the tastiest/easiest parts of their prey, but they’ll leave the less tasty or more difficult parts. Feral or stray cats will usually eat the entire prey, including bones, because they *need* to.
Maybe the hearts are too tough to eat even with their sharp teeth, or maybe they are actually poisonous and therefore inedible. If either is the case, the cat knows, and therefore leaves it alone. As for the feathers, I think they are discarded since they're most likely as useless to the cat as the hearts.
16,377
My cat catches a lot of mice and birds. It always leaves the heart, perfectly intact. My question is, what about the tails of the mice, the feathers of the birds? I sometimes find some feathers but never all of them. Does the cat eat it, and why does it leave the heart?
2017/02/10
[ "https://pets.stackexchange.com/questions/16377", "https://pets.stackexchange.com", "https://pets.stackexchange.com/users/-1/" ]
Cats are very picky with what they eat. Although hearts have a lot of good nutrients for cats, if a cat doesn't like the texture or taste, it might just reject it. Cats most likely won't eat the feathers or the tails, as they are harder to digest, but might occasionally eat a few. Although this is unrelated, and not a current issue for you, it will prevent future stress: remember to get your cat regularly checked up on by the vet. Mice and birds can carry diseases.
Cats remove the feathers by licking them, and they’ll inevitably swallow some of them in the process; their mouth structure doesn’t really allow spitting stuff out like ours does. If it’s “your” cat, I assume you feed them? If so, they hunt primarily because their instincts tell them to, not for nutrition. They may eat the tastiest/easiest parts of their prey, but they’ll leave the less tasty or more difficult parts. Feral or stray cats will usually eat the entire prey, including bones, because they *need* to.
11,424
Me and my sister in law bought a house. We asked her help to acquire the loan. She signed the mortgage loan for a 5 year contract, and we got the house. Me and my wife paid all the expenses and down payment for the house, my sister in law never gave a single cent for acquiring the house. The title stated she has 5% share and 95% for me. We all live in the same house and she is paying me 600 a month because she came to live with us with her two kids and with the 600 everything is inclusive down to utilities. Something went wrong and now she wants her name out of the mortgage and she is claiming her 5% share. Me and my wife are paying the mortgage and never had any default, we pay property taxes, insurance and all the utilities, my wife maintains the house and we renovated the house significantly without any help from her. Do I have the right to refuse her demand to remove her name since I believe I cannot stand alone yet on the mortgage?
2016/07/07
[ "https://law.stackexchange.com/questions/11424", "https://law.stackexchange.com", "https://law.stackexchange.com/users/-1/" ]
I can't help with the relationship issues: here are the legal issues. 1. She legally owns 5% of the house and you own 95% 2. I presume that the loan agreement is a contract between you, her and the lender so removing her name from the loan is at the discretion of the lender, not you or her. I would be very surprised if the lender would allow this without totally refinancing the loan. 3. Whatever arrangements you had with your sister are *probably* not enforceable because the presumption is that arrangements between family members are *not* legally enforceable contracts. Unless you can provide evidence that both of you intended to create legally binding obligations for what you assert (like a signed document) then what you say is just hot air. Legally, neither of you have the power to get her name off the loan. As a co-owner she is entitled to live in the property rent free. Each of you is jointly (i.e. together) and severally (i.e. individually) liable for making the loan repayments - in what proportion that should be done is a matter for you two to sort out - the lender doesn't care who pays so long as they get paid. <https://www.law.cornell.edu/wex/tenancy_in_common>
You don't need to do anything (or I wouldn't) - let her move to perfect her claimed interest. You have facts showing a pattern of payment (the 600 that sets a contract) and other facts which would result in minimal costs. First, get a comparable value of the house in order to determine what 5% represents - let's say the house needs work, a new roof etc.; that would subtract from the comparable value. Personally, I'd sit back and let her try to enforce the 5%, but I'd be happy to take her name off it - then (if you want) give her a promissory note (one that allows for your discretion to pay) for the 5% (without interest), to be paid whenever the house is no longer under your control - which includes inheritance, to wit: still controlled by you when transferred to your heirs. Having 5% of something versus enforcing it is a whole other creature - given that I see no power to enforce, it seems like you are sitting in a good position: you have no obligation to determine what the 5% represents, the ability to reduce the amount if she ever comes up with a number, and no obligation to pay it once it is determined - and even then, you can take her name off and pay her down the road. Though be careful, if you give her a promissory note, to set no enforcement time - or even specify that payment is at your discretion.
302,220
I saw that a chatroom event is going to be held. What is special about chatroom events? Aren't they just the same as the usual chatting? I went to the FAQ and the Help Center, but I see no useful information there.
2015/08/11
[ "https://meta.stackoverflow.com/questions/302220", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/5133585/" ]
Chatroom events are just easy ways for moderators of a room to communicate a scheduled function. They can be created with any purpose in mind, much like their real life counterparts. In the StackOverflow Python chat, we typically schedule our [room meetings as events](https://chat.stackoverflow.com/rooms/info/6/python?tab=schedule) so that everyone is reminded of the occurrence outside of having pinned messages in the starred/pinned messages area of the interface. Finally, the StackOverflow dev team was nice enough to make the events exportable, meaning it's easy to get "official" reminders in whatever calendar solution the user chooses.
They're just like real-life meetings, which are ultimately just people talking as they usually would. The difference is that the people are talking about something specific, in a pre-determined place, at a pre-determined time. If you didn't do that, you would not be able to guarantee that all interested parties were present at the same time in the same place, and your event wouldn't be very successful.
196,904
Without warning, a wizard casts *telepathy* targeting a friend who is on a different continent. > > You create a telepathic link between yourself and a willing creature with which you are familiar. > > > How is it decided whether the target is willing? Does the wizard get 6 seconds to quickly introduce herself? Does the target "feel" a "friend request/phone call" and envision the caster in their head and then decide whether to let them in/answer? I wondered if the spell assumes the target is willing until the target decides otherwise, but there is no option for the target to end the spell.
2022/03/18
[ "https://rpg.stackexchange.com/questions/196904", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/46827/" ]
### Short answer, you will need to establish an agreement ahead of time. As the check occurs before the connection is ever established. This is apparently not a spell used for cold calling. As explained by @Fie (emphasis theirs) > > according to the spell targetting rules in XGE (pp. 85–6), the validity of the target (i.e. whether the creature is willing) must be determined *before* the spell has any effect (including the target recognising you as the creature it ***is* communicating** with). > > > Meaning that though you are familiar with the target, they must be willing to receive your message before they receive it. A short conversation among friends can be had, "Hey, is it ok if I use telepathy to communicate with you in the future?" to establish that willingness in some regard ahead of time. If the target for some reason becomes unwilling, the spell will fail without contact ever being established and the target not realizing you attempted to reach them. Perhaps they are busy at the moment, but that doesn't stop them from later being willing once more. **The Willingness check will occur before connection is made on casting the spell.** Alternatively, there may be a discussion on the hypothetical, * "if A somehow knew that it was B calling, would they accept the call?" willingness being applied that way, as they are willing in this case. But it is unclear if the question instead is, * "are they willing to accept an unknown call before knowing who sent the request?" which might not be the same willingness. Could be stated more clearly in either case. Curious though, the duration *is 24 hours* without an off switch *other than* moving to another plane of existence. Once the spell succeeds, the willingness of the creature to continue the link is seemingly **no longer** considered.
The rules don't specify the exact mechanism, so the GM will need to adjudicate. If the target of the spell is a PC, the GM will need to give the player enough information to decide if the player is willing. If the target of the spell is an NPC, the GM will need to decide for the NPC. The GM may want to establish house rules for exactly how it works, but the need for those rules is going to vary tremendously from table to table. In many cases, the adjudication should be pretty easy. Let's assume the target is a PC: GM: You get a strange feeling. Your friend, Wendy the Wizard, is trying to establish a telepathic link with you. Are you willing? Player: I...how do I know it's Wendy? GM: You just know. Player: How do I know? GM: It's magic, but it's sort of like seeing someone's face or hearing their voice, you recognize them. At this point, the player decides whether to accept the connection or not, or maybe they ask more questions. A given table may need to have further discussion, or may not.
196,904
Without warning, a wizard casts *telepathy* targeting a friend who is on a different continent. > > You create a telepathic link between yourself and a willing creature with which you are familiar. > > > How is it decided whether the target is willing? Does the wizard get 6 seconds to quickly introduce herself? Does the target "feel" a "friend request/phone call" and envision the caster in their head and then decide whether to let them in/answer? I wondered if the spell assumes the target is willing until the target decides otherwise, but there is no option for the target to end the spell.
2022/03/18
[ "https://rpg.stackexchange.com/questions/196904", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/46827/" ]
On further review, later in the spell description it says (emphasis added): > > Until the spell ends, you and the target can instantaneously share words, images, sounds, and other sensory messages with one another through the link, **and the target recognizes you as the creature it is communicating with**. > > > I see two reasons for that text: 1. To handle the corner case of multiple telepathy effects (you know who is who among the many voices in your head); and 2. The word "willing" is in error earlier in the spell and this sentence is indicating that the target knows who is "calling them". Note that the sharing of thoughts is optional, so there is no risk for unwilling recipients except maybe distracting images? I agree that RAW, you need to arrange things ahead of time (as per above), but I'm thinking that RAI you make the connection and they know who you are and can choose whether to share their thoughts.
The rules don't specify the exact mechanism, so the GM will need to adjudicate. If the target of the spell is a PC, the GM will need to give the player enough information to decide if the player is willing. If the target of the spell is an NPC, the GM will need to decide for the NPC. The GM may want to establish house rules for exactly how it works, but the need for those rules is going to vary tremendously from table to table. In many cases, the adjudication should be pretty easy. Let's assume the target is a PC: GM: You get a strange feeling. Your friend, Wendy the Wizard, is trying to establish a telepathic link with you. Are you willing? Player: I...how do I know it's Wendy? GM: You just know. Player: How do I know? GM: It's magic, but it's sort of like seeing someone's face or hearing their voice, you recognize them. At this point, the player decides whether to accept the connection or not, or maybe they ask more questions. A given table may need to have further discussion, or may not.
66,273,005
I am trying to integrate my Spring Boot application with DocuSign for sending documents and getting them signed by users. I am not able to do it because I don't understand how to get the JWT token to call the APIs. I have downloaded the Java code example from DocuSign and configured it accordingly. I am able to call the APIs from Postman properly, but when I call any API from my application via RestTemplate it gives me a 302 redirect found. Is there any example project with Java available for this? Am I missing something?
2021/02/19
[ "https://Stackoverflow.com/questions/66273005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6057802/" ]
I have figured out the solution myself. It was an issue with the session management and security in the example code. If I call any API via RestTemplate there is no session at the beginning, and it requires a JWT, so I need to pass a session ID in the header which is active and holds the value of the current user.
The Code Example Launchers that we have for Java do operate on Spring Boot and have an example framework for creating JWT tokens; they should work for you out of the box. Usually when we see someone getting stuck at a 302, it's because the application isn't following provided redirects -- most commonly when the baseUrl being targeted is using http over https. Can you double-check your baseUrl and see whether that's the case? It's common to see 302s like this, but not so much for them to be a roadblock.
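On the token question above: the JWT grant itself is independent of Spring. You build an RS256-signed assertion and POST it to DocuSign's OAuth token endpoint (`account-d.docusign.com` for the developer sandbox, `account.docusign.com` for production). This sketch only builds the unsigned header and claim set; the integration key and user ID are placeholders:

```shell
# Build the JWT header and claim set DocuSign's JWT grant expects.
# INTEGRATION_KEY and USER_ID are placeholders -- use your own values.
INTEGRATION_KEY="your-integration-key-guid"
USER_ID="your-api-user-guid"
NOW=$(date +%s)
EXP=$((NOW + 3600))                     # token request valid for 1 hour
b64url() { base64 | tr -d '=\n' | tr '/+' '_-'; }   # base64url, no padding
HEADER=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
CLAIMS=$(printf '{"iss":"%s","sub":"%s","aud":"account-d.docusign.com","iat":%s,"exp":%s,"scope":"signature impersonation"}' \
  "$INTEGRATION_KEY" "$USER_ID" "$NOW" "$EXP" | b64url)
echo "unsigned: ${HEADER}.${CLAIMS}"
# Sign "HEADER.CLAIMS" with your RSA private key (RS256) to get the third
# segment, then exchange the assertion for an access token:
#   curl -X POST https://account-d.docusign.com/oauth/token \
#     -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
#     -d "assertion=${HEADER}.${CLAIMS}.${SIGNATURE}"
```

The signature segment still has to be produced with the RSA private key paired with your integration key; the sketch only shows the payload shape DocuSign expects, not a working signer.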
1,679,371
I have an ID (number format) that has trailing decimals in the source data. I need to retain the rounded number (see example attached) and have the trailing decimals removed from the data. The decimals are causing upload issues/error into a system that I use. I have tried formatting to number and removing decimal places, =INT, =ROUND, etc and I can get the cell view to reflect what I want but when I click on the cell, the formula bar still appears to retain the trailing decimals. How do I remove these? Thanks in advance! [![Example](https://i.stack.imgur.com/n3kOe.jpg)](https://i.stack.imgur.com/n3kOe.jpg)
2021/10/01
[ "https://superuser.com/questions/1679371", "https://superuser.com", "https://superuser.com/users/-1/" ]
All of IP addresses 1 - 7 may well exist only between your laptop and the 4G modem. As the whole path is wired, there's no way to set up a MITM in between; unless there's another way to access your firewall. The 4G modem requires connection to a service provider, and it will only connect to the provider who provided the SIM Card, so a MITM in between the modem and the service provider is also extremely unlikely. Tunnels in this context have nothing to do with caching, they're just a way to route traffic in a specific way. To quote [Wikipedia](https://en.wikipedia.org/wiki/IP_tunnel): > > An IP tunnel is an Internet Protocol (IP) network communications channel between two networks. It is used to transport another network protocol by encapsulation of its packets. > > > Usually tunnels are used because it simplifies the routing one way or another. In this context something may be internally configured to use a tunnel, for example the Ethernet-over-USB interface that connects the modem to the firewall. Can't say much more than that without knowing more of the setup; the hardware and the configuration. Connection refused has nothing to do with port 80, only the fact that your client isn't authorized to connect to that tunnel.
Your carrier is free to use any RFC 1918 (i.e. private) space in their network for parts you don't need to be able to reach - indeed, doing this can save real IP space. You have not pointed out where your world-routable or CGN IP is defined, and doing this could be useful in your understanding of how things fit together. Also be aware that the reported IP addresses in a traceroute are for guidance purposes only. It is entirely possible there are things messing with the ICMP (or equivalent) packet TTLs, causing duplicates masking real IP addresses - but you would need to communicate with your ISP if those are a concern. HTTPS is cacheable - although the word cacheable is vague. Relatedly, large content providers will often deploy equipment to ISPs to distribute and cache content. Typically the content provider will provide equipment to do the caching, including appropriate certificates so HTTPS does not fail, and work with the ISP to ensure traffic is appropriately routed to the caching box in the ISP's network.
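As a side note, whether a given traceroute hop sits in RFC 1918 private space is easy to check mechanically. A small sketch using Python's stdlib `ipaddress` module (the hop addresses below are hypothetical examples, not from the question):

```python
import ipaddress

# The three private blocks defined by RFC 1918:
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True iff addr falls inside one of the three RFC 1918 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

# Hypothetical traceroute hops; note that 100.64.0.0/10 is carrier-grade
# NAT space (RFC 6598), which is private-ish but NOT RFC 1918:
for hop in ["192.168.1.1", "10.64.0.1", "100.64.1.1", "8.8.8.8"]:
    print(hop, "RFC 1918" if is_rfc1918(hop) else "not RFC 1918")
```

The distinction matters here: seeing a 100.64.x.x hop points at carrier-grade NAT at the ISP, while 10.x/172.16-31.x/192.168.x hops are ordinary private addressing that the carrier may use internally.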
84,684
Do we need to make changes for Salesforce Fast deploy to deploy our code via ANT migration tool ? OR does Salesforce take care of it automatically ? I would like to find some documentation which states that we don't have to make any changes for fast deploy.
2015/07/23
[ "https://salesforce.stackexchange.com/questions/84684", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/13059/" ]
If you go through [this](http://releasenotes.docs.salesforce.com/en-us/winter15/release-notes/rn_quick_deployment.htm) article, it is clearly written: When deploying to non-production environments (sandbox), Apex tests aren’t required and aren’t run automatically. ***When using Metadata API (including the Force.com Migration Tool), Quick Deploy is supported in sandbox only for validations that explicitly enable the execution of tests (for example, via the runAllTests parameter for the Migration Tool)***. For change sets, Quick Deploy is not supported in sandbox because there is no option to enable test runs for change sets. So it seems that, using the Metadata API (ANT), it is supported in sandbox only with the runAllTests parameter enabled.
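For reference, a hedged sketch of what a Migration Tool validation target with tests enabled might look like, so the validation becomes eligible for Quick Deploy afterwards. The target name, credential properties, and test class name are placeholders, and the exact attributes should be checked against the Migration Tool version in use:

```xml
<!-- build.xml fragment (illustrative; names and credentials are placeholders) -->
<target name="validateWithTests">
  <!-- checkOnly="true" validates without deploying; running tests during the
       validation is what makes it Quick Deploy-eligible -->
  <sf:deploy username="${sf.username}"
             password="${sf.password}"
             serverurl="${sf.serverurl}"
             deployRoot="src"
             checkOnly="true"
             testLevel="RunSpecifiedTests">
    <runTest>MyAccountTriggerTest</runTest>
  </sf:deploy>
</target>
```

In other words, nothing special has to change in the code being deployed; the test-execution setting on the validation itself is what Salesforce looks at.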
I may be wrong here, but I thought it was worth putting down my thoughts. Once you validate using ANT, you can go to Salesforce and use the Quick Deploy feature there. I guess that, as of now, Quick Deploy through ANT is not supported directly into a prod instance.
84,410
In the second theme of the recapitulation of Beethoven's Pathetique Sonata where the bass line goes into the treble clef and the bass register becomes part of the melody, I see the marking ***ben tenuto il basso***. I tried looking it up last night and couldn't find an answer. Google translate said that this means "well held down", but that wouldn't be the musical definition, I know it wouldn't. From finding out that ben in Italian means well and my knowledge of Italian musical terms, here is the best that I could translate it to: > > Well held in the bass > > > So from that translation, here is what I think Beethoven is trying to get across with that marking: > > Hold the bass notes as though there was a fermata in the bass, in other words, longer than your normal tenuto, while the melody is an accented staccato. > > > **Is that what Beethoven is trying to get across with the marking?** EDIT: Here is what I see in the Schirmer edition, or whenever I look at Sonata Album Book II published by the same people. I use the Schirmer edition, partly because it makes it so easy to see where the recapitulation starts. But here is what I see in that edition: [![enter image description here](https://i.stack.imgur.com/klol4.png)](https://i.stack.imgur.com/klol4.png) As you can see, ***ben tenuto il basso*** is clearly marked here under the same measure that has the poco crescendo marking.
2019/04/30
[ "https://music.stackexchange.com/questions/84410", "https://music.stackexchange.com", "https://music.stackexchange.com/users/9749/" ]
Finding the edition the OP referred to (<https://imslp.org/wiki/Special:ImagefromIndex/11070/torat>) didn't take long. It's the sort of marking that always reminds me of a John Cleese joke in *Fawlty Towers*: "You should be on *Mastermind*. Specialist subject, the bleedin' obvious." Looking through the editors of editions on IMSLP to find somebody full of their own self-importance who might write such a thing, my first guess was correct: Godowsky - the guy who rewrote 57 variations on the Chopin Etudes, because he didn't understand *why* the originals were already so difficult, being unable to see past the mere technical problem of pressing down the piano keys in the right order. So, it means absolutely nothing. As pidgin-Italian, it translates as something like "that bass line should be well held down, man!!" The only logic for it that I can see is that Godowsky (not Beethoven!) decided the four notes in question ought to be slurred in two pairs, but playing the slurs that he invented for no reason is hard, so he decided to add some words saying "yes, I really do mean this". Throw that edition away, and look at what *Beethoven* wrote. When *he* adds some textual instructions, they are always both practical and important. While I was writing this, the OP posted a picture which confirms the source as the Godowsky edition. Schirmer just republish anything that is out of copyright, without bothering to say where they got it from. Before the internet, Schirmer editions were cheap, but mostly garbage. Now that you can get better sources for free from sites like IMSLP, they are just garbage.
It simply means you need to play the bass tenuto. In this case I'd say he means the bass line in the treble clef, but it would help to see the relevant passage. (To me tenuto and fermata are quite different, i.e. I wouldn't describe it like that. Tenuto means you should hold the note to the maximum full value, but definitely not exceed it. While fermata *does* give you that liberty - or rather even implores you to do so)
379,117
I just set up nginx as a http/https reverse proxy and it worked well. After that, I realized that for some domains ftp services are available. I was able to install ftp.proxy and it also works well, although it just handles one single domain. My question is: Is there any possibility to reverse proxy ftp services based on hostnames/domains like I do with nginx for http?
2012/04/13
[ "https://serverfault.com/questions/379117", "https://serverfault.com", "https://serverfault.com/users/81551/" ]
There's a G-WAN [rewrite example here](http://gwan.com/developers#handler) (look at the second handler's source code; the first handler example illustrates FLV pseudo-streaming).
You can use a virtual host, or even an alias; see the [G-WAN FAQs](http://gwan.com/faq#listener). For more elaborate rewrites, you can use a handler; some examples are provided in the download archive.
22,702
*This question continues the story of [one ESA vs NASA battle on Mars](https://worldbuilding.stackexchange.com/questions/22573/how-will-my-nerdy-mars-astronauts-do-battle-between-their-colonies). I tried to put everything important into this question. Also, being from Europe, I took the ESA side as the winning one:* * **Year:** 2100ish * **Mars:** Two 'surface,' permanent scientific research colonies with a little over 100 scientists and engineers in each, with an additional support crew (geologist, psychologist & medical, etc.) to about 150 people each, with the common utilities, agrarian setups, etc. The nerds in Colony ESA have just had it with Colony NASA, about 100km across the [Elysium](http://maps.google.com/mars/#lat=0.522717&lon=534.222794&zoom=6) plains, and they went there and did burst their bubble. (Literally) **The question: How will Earth handle this?** It is safe to assume that both colonies are under constant surveillance from ground control. It is also safe to assume that NASA and ESA are still "friends" and the setup of having two colonies instead of one is because they got great funding - so why not build two? It is also safe to assume that there is some crew rotation (people and material are being exchanged on a regular basis) [![Example from Google Image Search](https://i.stack.imgur.com/2L6GD.jpg)](https://i.stack.imgur.com/2L6GD.jpg) The time is Battle + 15 minutes. Ground control of both teams is getting the first shocking images of what's happening on Mars. And you know that until they receive their first orders, everything will be over... What happens next? **Edit:** Trying to make the question less opinion based: Assume there is only one NASA+ESA joint resupply mission and the next one is about to launch, arriving at Mars in half a year. Usually this resupply mission has about 7 months of food + oxygen + water supplies for all the crew + 12 people to go on the rotation mission (6 NASA, 6 ESA). The resupply frequency is 6 months.
So, let's scope the question like this: who do I send out on the next resupply mission (profession-wise, obviously)? And how do NASA + ESA proceed if they (obviously) want the Mars mission to continue?
2015/08/16
[ "https://worldbuilding.stackexchange.com/questions/22702", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/2071/" ]
**NASA/ESA Response** *Surprise* The officials should be shocked. After Houston kept telling the astronauts, for example, to 'just learn to get along,' they will have decided to take things into their own hands, and through a series of discussions in 'safe' spaces or on notes of paper, they planned their battles. It wasn't until T minus an-hour-or-so before they departed that NASA/ESA realized their scientists weren't going to take it any more. *(Short-term) Attempt at Cover-up* This is a situation with precedent. Just as whenever there's a situation (riots or whatever), there's a scramble. I don't think it would be an attempt to cover up permanently, because that would obviously not be successful, but governments and private companies, and even kids who broke a window with a baseball, try to keep it a secret until they can think what to do. In this scenario, frantic phone calls are made between agencies internally and among the countries involved. "Hi, Mr. President, we have a situation..." "They what-the-WHAT?" *Damage Control* Heads of agencies, involved companies and government officials are scrambling to figure out what to do. It's hard to punish an employee who is a permanent colonist on a planet 140 million miles away. The press liaisons are woken from bed to get onto the case and make sure that the spin is just so: the actions were unprecedented, atypical, and will be handled immediately. *Finger Pointing* This seems inevitable. Everyone wants to assign a cause to a situation, to help understand it, and there will be no end to this. It wouldn't be between the two agencies, but rather between sub-sets of organizations and governments. The pressure put on the nerds by a rigorous schedule, or even petty things (the geologists from NASA were not allowed to study an area, because ESA claimed the area with a likelihood for astonishing discovery, for example). The economic and political effects can be complex.
However, property destruction and harm to individuals are often immediately measurable and will be assigned. In order to separate government responsibility from the actions of the nerds, I speculate that they would classify this as a 'riot,' and not politically motivated or assigned actions. **Re-route Supplies and Relief Personnel** The relief supplies would have to be enhanced by replacing some of the weight of long-term supplies (soils & plants for future planting, laboratory equipment, etc.) with that of equipment and personnel to either return the remaining scientists or to contain them until they can be returned. This could include mechanisms for locating them (the nerds that won the battle know not to hang around, and have a contingency for this), and then disabling their mobility. **Aside: Public Outcry** Phone calls will be made, statements will be made. People who already have disdain for space exploration will use this forever as an argument to defund future space exploration, manned or otherwise. Also, so much for children's role models.
While there would be the usual cover-up/deflect/finger-pointing aspect of bureaucratic and political response, I suspect that there would be another level of response, which is the sheer bloody-mindedness of people. Many people will have been invested in the project, and of course the astronauts will have been celebrities in certain social media circles (as well as real celebrities in other aspects of their lives), as well as having lots of family and influential friends. There will be a massive outcry for something to be done to avenge the loss of so many lives and so much equipment, not to mention the political capital that had been expended in setting up the expedition in the first place. I suspect the joint resupply venture will be the first thing scuttled, as each side accuses the other of manipulating the manifest to the benefit/detriment of one group of astronauts. Certainly the astronauts themselves slated for the mission are going to have some grave doubts about the other parties aboard; if two teams of supposedly complementary astronauts on Mars went to war, resulting in the destruction of a base and the deaths of many astronauts, then what could possibly happen aboard the resupply ship during its long transfer orbit to Mars? A secondary consideration is that one base has been destroyed, so any surviving astronauts will be busy trying to do a "Mark Watney" and will need help as fast as possible. An unmanned supply vessel launched on a high-energy transfer trajectory will be needed ASAP, and since you stipulate that the NASA side lost, NASA will be pulling *all* their resources and technical ability to that task and not to a joint resupply mission. (If the ESA side had lost, the same factors would apply.) The anger would also spill over on Earth at many levels. Boycotts against products made in the EU would begin an informal trade war between the two sides, and the EU would also lose American tourism and other forms of exchange, creating economic disruptions.
More formal actions against the EU would be called for by politicians and political groups eager to hitch their wagon to a popular cause (and various parties in the EU would be calling for the same against the US, leading to a situation where parties like the UKIP, Front National, Golden Dawn and other more or less radical parties whip up popular support among the European voters). Depending on how far the agitation goes, this might provoke more robust actions between the two governments. Even if it does not by itself, nations like Russia and China will seek to inflame the situation through "Active Measures" (propaganda and other campaigns to mould public opinion in a desired direction), potentially causing the same effect. While the Americans might not want to kill the ESA astronauts in cold blood, they could make it very clear that the ESA will not be launching *anything* until the United States is satisfied that certain conditions are met (most likely the arrival of the relief ship and resupply of the American astronauts stranded on Mars). At this point, the huge resource mismatch between the United States and the EU will begin to be felt. The US can put lots of pressure on the EU through lots of different means (political, military and economic), and as noted, NASA and the private US space industry can respond to directly support their astronauts on Mars by sending relief supplies by high energy transfer orbits. 
The ESA has only one supplier to build extra rockets (unless they decide to hire the Russians or Chinese to build rockets for them, which would be a nasty can of political worms to open), while the United States can hire SpaceX, Orbital, Boeing, Lockheed Martin, Northrop Grumman and an entire second tier of would-be rocket makers to crank out orbital vehicles and infrastructure to not only rescue the US astronauts, but establish a robust permanent presence on Mars as well (even if the US government isn't explicitly interested in doing so, Elon Musk has openly stated that this is *his* goal; he is hardly going to turn down a gilded opportunity to fulfill his life's dream). Winning the battle and losing the war is always a risk when you take up arms, and the end results of armed conflict are rarely what either side envisioned before the start of the conflict. The only potentially "good" outcome of all this is that a robust Martian colony is now possible, and the infrastructure for much larger-scale space exploration is now in place. I suspect that the second-order effect will be that space colonies will have a robust protective service attached to prevent this sort of thing from happening again, ranging from private security forces hired by the colonists to actual contingents of national military forces brought by the colonizing nations themselves.
22,702
*This question continues the story of [one ESA vs NASA battle on Mars](https://worldbuilding.stackexchange.com/questions/22573/how-will-my-nerdy-mars-astronauts-do-battle-between-their-colonies). I tried to put everything important into this question. Also, being from Europe, I took the ESA side as the winning one:* * **Year:** 2100ish * **Mars:** Two 'surface,' permanent scientific research colonies with a little over 100 scientists and engineers in each, with an additional support crew (geologist, psychologist & medical, etc.) to about 150 people each, with the common utilities, agrarian setups, etc. The nerds in Colony ESA have just had it with Colony NASA, about 100km across the [Elysium](http://maps.google.com/mars/#lat=0.522717&lon=534.222794&zoom=6) plains, and they went there and did burst their bubble. (Literally) **The question: How will Earth handle this?** It is safe to assume that both colonies are under constant surveillance from ground control. It is also safe to assume that NASA and ESA are still "friends" and the setup of having two colonies instead of one is because they got great funding - so why not build two? It is also safe to assume that there is some crew rotation (people and material are being exchanged on a regular basis) [![Example from Google Image Search](https://i.stack.imgur.com/2L6GD.jpg)](https://i.stack.imgur.com/2L6GD.jpg) The time is Battle + 15 minutes. Ground control of both teams is getting the first shocking images of what's happening on Mars. And you know that until they receive their first orders, everything will be over... What happens next? **Edit:** Trying to make the question less opinion based: Assume there is only one NASA+ESA joint resupply mission and the next one is about to launch, arriving at Mars in half a year. Usually this resupply mission has about 7 months of food + oxygen + water supplies for all the crew + 12 people to go on the rotation mission (6 NASA, 6 ESA). The resupply frequency is 6 months.
So, let's scope the question like this: who do I send out on the next resupply mission (profession-wise, obviously)? And how do NASA + ESA proceed if they (obviously) want the Mars mission to continue?
2015/08/16
[ "https://worldbuilding.stackexchange.com/questions/22702", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/2071/" ]
**NASA/ESA Response** *Surprise* The officials should be shocked. After Houston kept telling the astronauts, for example, to 'just learn to get along,' they will have decided to take things into their own hands, and through a series of discussions in 'safe' spaces or on notes of paper, they planned their battles. It wasn't until T minus an-hour-or-so before they departed that NASA/ESA realized their scientists weren't going to take it any more. *(Short-term) Attempt at Cover-up* This is a situation with precedent. Just as whenever there's a situation (riots or whatever), there's a scramble. I don't think it would be an attempt to cover up permanently, because that would obviously not be successful, but governments and private companies, and even kids who broke a window with a baseball, try to keep it a secret until they can think what to do. In this scenario, frantic phone calls are made between agencies internally and among the countries involved. "Hi, Mr. President, we have a situation..." "They what-the-WHAT?" *Damage Control* Heads of agencies, involved companies and government officials are scrambling to figure out what to do. It's hard to punish an employee who is a permanent colonist on a planet 140 million miles away. The press liaisons are woken from bed to get onto the case and make sure that the spin is just so: the actions were unprecedented, atypical, and will be handled immediately. *Finger Pointing* This seems inevitable. Everyone wants to assign a cause to a situation, to help understand it, and there will be no end to this. It wouldn't be between the two agencies, but rather between sub-sets of organizations and governments. The pressure put on the nerds by a rigorous schedule, or even petty things (the geologists from NASA were not allowed to study an area, because ESA claimed the area with a likelihood for astonishing discovery, for example). The economic and political effects can be complex.
However, property destruction and harm to individuals are often immediately measurable and will be assigned. In order to separate government responsibility from the actions of the nerds, I speculate that they would classify this as a 'riot,' and not politically motivated or assigned actions. **Re-route Supplies and Relief Personnel** The relief supplies would have to be enhanced by replacing some of the weight of long-term supplies (soils & plants for future planting, laboratory equipment, etc.) with that of equipment and personnel to either return the remaining scientists or to contain them until they can be returned. This could include mechanisms for locating them (the nerds that won the battle know not to hang around, and have a contingency for this), and then disabling their mobility. **Aside: Public Outcry** Phone calls will be made, statements will be made. People who already have disdain for space exploration will use this forever as an argument to defund future space exploration, manned or otherwise. Also, so much for children's role models.
The most effective way for the US and EU to defuse any political fallout over the rogue actions of a bunch of scientists is to *call it exactly that in public*. Admit to their errors in judgement in choosing (as yet unknown) members of the Mars teams who could go on to commit murder, and state that the number one priority is now not only to rescue the innocent, but also to arrest and prosecute the guilty. To that end, the 12 crew members who were scheduled to go out on the next resupply mission would be withdrawn, and instead a 10-strong team of military- and police-trained investigators, all with combat experience in either a civilian or military conflict, as well as experience in rescue operations, drawn equally from both the US and EU and ideally having professional experience and friendships with one another, would be substituted. All of the new/replacement scientific gear would be removed and substituted with military and investigatory equipment for the use of the investigators to bring the perpetrators of this "unwarranted violence" to justice. The last two positions would be specially selected psychologists, again from both the US and EU, who would be working to determine what caused the build-up of tensions that led to the open conflict, and what could be done to prevent them from accumulating in future missions. Despite both the US and the EU wanting the missions to continue, they must publicly speak as if cancelling the missions entirely is a high probability - which in fact it is. Until the rogue elements in both facilities can be identified and arrested, the missions are in jeopardy. The best option from the Earth-siders' point of view may be to replace the existing Mars teams with entirely new teams. It is unlikely that the perpetrators could escape justice for long once the resupply ship arrives. While they know the environment better, Mars is still a hostile environment, and environment suits have limited supplies.
They would be forced to either surrender or attempt to fight off the investigators from whatever bolthole they could find, and the investigators would have vastly superior combat skills, likely backed up by armoured environment gear that the rogue scientists would have trouble dealing with. In any event, it is one thing (as most likely happened) to pop an enemy environment dome using a drone, and entirely another thing to face multiple trained soldiers and police officers each armed with a gun selected and loaded to deal with precisely the sort of protection the scientists may be able to muster. In all likelihood, the scientists will surrender immediately rather than fight, and the probable fate of the first one who *doesn't* will encourage the rest to choose a wiser course of action. Once the scientists have all been rounded up and placed in protective custody, and all the bodies identified, or at least accounted for, the investigators will begin the investigatory side of the operation, attempting to determine who did what. At the very least, the suspects will be returned to Earth for trial on charges including murder and terrorism, and the investigators may conclude that the entire scientific team needs to be replaced eventually. Regardless, the suspects will be shipped back to Earth in custody, while the investigators remain to fulfil a police role in order to keep the peace between those who remain. Back on Earth, after debates over jurisdiction in which it is suggested that each body prosecute its own nationals, the perpetrators will appear in highly publicised trials in the Hague, which, based on the [Outer Space Treaty](https://en.wikipedia.org/wiki/Outer_Space_Treaty) has [jurisdiction over crimes committed in space](https://en.wikipedia.org/wiki/Space_jurisdiction). See also: <https://space.stackexchange.com/questions/683/jurisdiction-over-crime-in-space>. 
Since the crimes in question are murder and terrorism, there will be no question as to whether the alleged actions are worthy of prosecution or not, as both the US and the EU have laws concerning the accused's actions. As to the outcome of the trial... who can say?
22,702
*This question continues the story of [one ESA vs NASA battle on Mars](https://worldbuilding.stackexchange.com/questions/22573/how-will-my-nerdy-mars-astronauts-do-battle-between-their-colonies). I tried to put everything important into this question. Also, being from Europe, I took the ESA side as the winning one:* * **Year:** 2100ish * **Mars:** Two 'surface,' permanent scientific research colonies with a little over 100 scientists and engineers in each, with an additional support crew (geologist, psychologist & medical, etc.) to about 150 people each, with the common utilities, agrarian setups, etc. The nerds in Colony ESA have just had it with Colony NASA, about 100km across the [Elysium](http://maps.google.com/mars/#lat=0.522717&lon=534.222794&zoom=6) plains, and they went there and did burst their bubble. (Literally) **The question: How will Earth handle this?** It is safe to assume that both colonies are under constant surveillance from ground control. It is also safe to assume that NASA and ESA are still "friends" and the setup of having two colonies instead of one is because they got great funding - so why not build two? It is also safe to assume that there is some crew rotation (people and material are being exchanged on a regular basis) [![Example from Google Image Search](https://i.stack.imgur.com/2L6GD.jpg)](https://i.stack.imgur.com/2L6GD.jpg) The time is Battle + 15 minutes. Ground control of both teams is getting the first shocking images of what's happening on Mars. And you know that until they receive their first orders, everything will be over... What happens next? **Edit:** Trying to make the question less opinion based: Assume there is only one NASA+ESA joint resupply mission and the next one is about to launch, arriving at Mars in half a year. Usually this resupply mission has about 7 months of food + oxygen + water supplies for all the crew + 12 people to go on the rotation mission (6 NASA, 6 ESA). The resupply frequency is 6 months.
So, let's scope the question like this: who do I send out on the next resupply mission (profession-wise, obviously)? And how do NASA + ESA proceed if they (obviously) want the Mars mission to continue?
2015/08/16
[ "https://worldbuilding.stackexchange.com/questions/22702", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/2071/" ]
The most effective way for the US and EU to defuse any political fallout over the rogue actions of a bunch of scientists is to *call it exactly that in public*. Admit to their errors in judgement in choosing (as yet unknown) members of the Mars teams who could go on to commit murder, and state that the number one priority is now not only to rescue the innocent, but also to arrest and prosecute the guilty. To that end, the 12 crew members who were scheduled to go out on the next resupply mission would be withdrawn, and instead a 10-strong team of military- and police-trained investigators, all with combat experience in either a civilian or military conflict, as well as experience in rescue operations, drawn equally from both the US and EU and ideally having professional experience and friendships with one another, would be substituted. All of the new/replacement scientific gear would be removed and substituted with military and investigatory equipment for the use of the investigators to bring the perpetrators of this "unwarranted violence" to justice. The last two positions would be specially selected psychologists, again from both the US and EU, who would be working to determine what caused the build-up of tensions that led to the open conflict, and what could be done to prevent them from accumulating in future missions. Despite both the US and the EU wanting the missions to continue, they must publicly speak as if cancelling the missions entirely is a high probability - which in fact it is. Until the rogue elements in both facilities can be identified and arrested, the missions are in jeopardy. The best option from the Earth-siders' point of view may be to replace the existing Mars teams with entirely new teams. It is unlikely that the perpetrators could escape justice for long once the resupply ship arrives. While they know the environment better, Mars is still a hostile environment, and environment suits have limited supplies.
They would be forced to either surrender or attempt to fight off the investigators from whatever bolthole they could find, and the investigators would have vastly superior combat skills, likely backed up by armoured environment gear that the rogue scientists would have trouble dealing with. In any event, it is one thing (as most likely happened) to pop an enemy environment dome using a drone, and entirely another thing to face multiple trained soldiers and police officers each armed with a gun selected and loaded to deal with precisely the sort of protection the scientists may be able to muster. In all likelihood, the scientists will surrender immediately rather than fight, and the probable fate of the first one who *doesn't* will encourage the rest to choose a wiser course of action. Once the scientists have all been rounded up and placed in protective custody, and all the bodies identified, or at least accounted for, the investigators will begin the investigatory side of the operation, attempting to determine who did what. At the very least, the suspects will be returned to Earth for trial on charges including murder and terrorism, and the investigators may conclude that the entire scientific team needs to be replaced eventually. Regardless, the suspects will be shipped back to Earth in custody, while the investigators remain to fulfil a police role in order to keep the peace between those who remain. Back on Earth, after debates over jurisdiction in which it is suggested that each body prosecute its own nationals, the perpetrators will appear in highly publicised trials in the Hague, which, based on the [Outer Space Treaty](https://en.wikipedia.org/wiki/Outer_Space_Treaty) has [jurisdiction over crimes committed in space](https://en.wikipedia.org/wiki/Space_jurisdiction). See also: <https://space.stackexchange.com/questions/683/jurisdiction-over-crime-in-space>. 
Since the crimes in question are murder and terrorism, there will be no question as to whether the alleged actions are worthy of prosecution or not, as both the US and the EU have laws concerning the accused's actions. As to the outcome of the trial... who can say?
While there would be the usual coverup/deflect/finger-pointing aspect of bureaucratic and political response, I suspect that there would be another level of response, which is the sheer bloody-mindedness of people. Many people will have been invested in the project, and of course the astronauts will have been celebrities in certain social media circles (as well as real celebrities in other aspects of their lives), as well as having lots of family and influential friends. There will be a massive outcry for something to be done to avenge the loss of so many lives and so much equipment, not to mention the political capital that had been expended in setting up the expedition in the first place. I suspect the joint resupply venture will be the first thing scuttled, as each side accuses the other of manipulating the manifest to the benefit/detriment of one group of astronauts. Certainly the astronauts themselves slated for the mission are going to have some grave doubts about the other parties aboard; if two teams of supposedly complementary astronauts on Mars went to war, resulting in the destruction of a base and the deaths of many astronauts, then what could possibly happen aboard the resupply ship during its long transfer orbit to Mars? A secondary consideration is that one base has been destroyed, so any surviving astronauts will be busy trying to do a "Mark Watney" and need help as fast as possible. An unmanned supply vessel launched on a high-energy transfer trajectory will be needed ASAP, and since you stipulate that the NASA side lost, NASA will be pulling *all* their resources and technical ability to the task and not to a joint resupply mission. (If the ESA side had lost, the same factors would apply.) The anger would also spill over on Earth at many levels. Boycotts against products made in the EU would begin an informal trade war between the two sides, and the EU would also lose American tourism and other forms of exchange, creating economic disruptions.
More formal actions against the EU would be called for by politicians and political groups eager to hitch their wagon to a popular cause (and various parties in the EU would be calling for the same against the US, leading to a situation where parties like the UKIP, Front National, Golden Dawn and other more or less radical parties whip up popular support among the European voters). Depending on how far the agitation goes, this might provoke more robust actions between the two governments. Even if it does not by itself, nations like Russia and China will seek to inflame the situation through "Active Measures" (propaganda and other campaigns to mould public opinion in a desired direction), potentially causing the same effect. While the Americans might not want to kill the ESA astronauts in cold blood, they could make it very clear that the ESA will not be launching *anything* until the United States is satisfied that certain conditions are met (most likely the arrival of the relief ship and resupply of the American astronauts stranded on Mars). At this point, the huge resource mismatch between the United States and the EU will begin to be felt. The US can put lots of pressure on the EU through lots of different means (political, military and economic), and as noted, NASA and the private US space industry can respond to directly support their astronauts on Mars by sending relief supplies by high energy transfer orbits. 
The ESA has only one supplier to build extra rockets (unless they decide to hire the Russians or Chinese to build rockets for them, which would be a nasty can of political worms to open), while the United States can hire SpaceX, Orbital, Boeing, Lockheed Martin, Northrop Grumman, and an entire second tier of would-be rocket makers to crank out orbital vehicles and infrastructure to not only rescue the US astronauts, but establish a robust permanent presence on Mars as well (even if the US government isn't explicitly interested in doing so, Elon Musk has openly stated that this is *his* goal; he is hardly going to turn down a gilded opportunity to fulfill his life dream). Winning the battle and losing the war is always a risk when you take up arms, and the end results of armed conflict are rarely what either side envisioned before the start of the conflict. The only potentially "good" outcome of all this is that a robust Martian colony is now possible, and the infrastructure for much larger-scale space exploration is now in place. I suspect that the second-order effect will be that space colonies will have a robust protective service attached to prevent this sort of thing from happening again, ranging from private security forces hired by the colonists to actual contingents of national military forces brought by the colonizing nations themselves.
920
It is well-known that we can learn a lot about the structure of the lower crust, mantle, and core by observing the ways in which they refract different kinds of seismic waves. Do we have any *other* ways of imaging the deeper parts of the Earth, though?
2014/05/13
[ "https://earthscience.stackexchange.com/questions/920", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/67/" ]
Gravity can be used to investigate the lower crust and upper mantle (see for example [Fullea et al, 2014](http://dx.doi.org/10.1016%2Fj.jag.2014.02.003)). Satellite measurements of gravity could even be used to investigate deeper structures in the mantle, like subducting slabs ([Panet, 2014](http://dx.doi.org/10.1038%2Fngeo2063)). However, I couldn't find any use of gravity data to probe deeper, into the core for example. The [magnetotelluric method](http://en.wikipedia.org/wiki/Magnetotellurics) is sometimes used for deep crustal structure. And features of the geomagnetic field, like [secular variation](http://en.wikipedia.org/wiki/Geomagnetic_secular_variation), can be inverted to investigate as far down as the outer core ([Gubbins, 1996](http://dx.doi.org/10.1016/S0031-9201(96)03187-1)). However, the most common and well developed method to image the lower mantle and core is still through seismic waves.
One interesting and relatively new technique is by the detection of [geoneutrinos](http://en.wikipedia.org/wiki/Geoneutrino). These particles are produced by radioactive decay in the Earth's interior. They are uncommonly suitable for probing the deep Earth because -- unlike most particles and waves -- they can travel through thousands of kilometres of rock with very little absorption. Of course, this very same characteristic makes actually *detecting* them something of a challenge, and detectors tend to be [rather large](http://www.e15.ph.tum.de/research_and_projects/lena/) (in the thousands of cubic metres). [Araki et al. (2005)](http://www.nature.com/nature/journal/v436/n7050/abs/nature03980.html) gives some early results -- but as the Wikipedia article shows, there are more and bigger detectors on the drawing boards, so we should expect to see more geoneutrino results in the coming years and decades. [This abstract](http://scicolloq.gsfc.nasa.gov/McDonough.html) gives the best elevator pitch I've found so far for geoneutrino research: > > Radioactive decay of U and Th gives off ghost-like, neutrino particles that can be detected by 1000 ton detectors built a mile underground, where they are shielded from the cosmic rays that rain down on the Earth. Collaborations between physicists and geologists are detecting these "geo-neutrinos". Future underwater detectors, deployed at different points on the ocean floor, will create a neutrino tomographic image of mantle structures sited at the base of the mantle above the core. > > >
920
It is well-known that we can learn a lot about the structure of the lower crust, mantle, and core by observing the ways in which they refract different kinds of seismic waves. Do we have any *other* ways of imaging the deeper parts of the Earth, though?
2014/05/13
[ "https://earthscience.stackexchange.com/questions/920", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/67/" ]
Gravity can be used to investigate the lower crust and upper mantle (see for example [Fullea et al, 2014](http://dx.doi.org/10.1016%2Fj.jag.2014.02.003)). Satellite measurements of gravity could even be used to investigate deeper structures in the mantle, like subducting slabs ([Panet, 2014](http://dx.doi.org/10.1038%2Fngeo2063)). However, I couldn't find any use of gravity data to probe deeper, into the core for example. The [magnetotelluric method](http://en.wikipedia.org/wiki/Magnetotellurics) is sometimes used for deep crustal structure. And features of the geomagnetic field, like [secular variation](http://en.wikipedia.org/wiki/Geomagnetic_secular_variation), can be inverted to investigate as far down as the outer core ([Gubbins, 1996](http://dx.doi.org/10.1016/S0031-9201(96)03187-1)). However, the most common and well developed method to image the lower mantle and core is still through seismic waves.
Inverse problems are among the most important mathematical problems in science. They arise in many branches of geophysics, medical imaging, remote sensing, ocean acoustic tomography, nondestructive testing, astronomy, physics and many other fields. Geophysicists remotely measure the seismic (acoustic), gravity, and electromagnetic fields of the Earth and then treat the inverse problem to constrain the properties of the Earth's interior.
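The workflow described above can be sketched with a toy linear inverse problem: recover hidden model parameters m from indirect measurements d = Gm + noise by regularized least squares. The operator, "true" model, and noise level below are synthetic placeholders, not a real geophysical model.

```python
# Toy linear inverse problem: estimate m from noisy data d = G m + n.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic forward operator: 20 measurements of 5 model parameters.
G = rng.normal(size=(20, 5))
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # hidden "Earth model"
d = G @ m_true + 0.01 * rng.normal(size=20)       # noisy observations

# Tikhonov-regularized least squares:
#   m_est = argmin ||G m - d||^2 + eps^2 ||m||^2
eps = 1e-3
m_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(5), G.T @ d)

print(np.round(m_est, 2))   # close to m_true when the noise is small
```

Real geophysical inversions differ mainly in scale and in the physics inside G (wave propagation, potential fields), but the regularized least-squares structure is the same.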
920
It is well-known that we can learn a lot about the structure of the lower crust, mantle, and core by observing the ways in which they refract different kinds of seismic waves. Do we have any *other* ways of imaging the deeper parts of the Earth, though?
2014/05/13
[ "https://earthscience.stackexchange.com/questions/920", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/67/" ]
One interesting and relatively new technique is by the detection of [geoneutrinos](http://en.wikipedia.org/wiki/Geoneutrino). These particles are produced by radioactive decay in the Earth's interior. They are uncommonly suitable for probing the deep Earth because -- unlike most particles and waves -- they can travel through thousands of kilometres of rock with very little absorption. Of course, this very same characteristic makes actually *detecting* them something of a challenge, and detectors tend to be [rather large](http://www.e15.ph.tum.de/research_and_projects/lena/) (in the thousands of cubic metres). [Araki et al. (2005)](http://www.nature.com/nature/journal/v436/n7050/abs/nature03980.html) gives some early results -- but as the Wikipedia article shows, there are more and bigger detectors on the drawing boards, so we should expect to see more geoneutrino results in the coming years and decades. [This abstract](http://scicolloq.gsfc.nasa.gov/McDonough.html) gives the best elevator pitch I've found so far for geoneutrino research: > > Radioactive decay of U and Th gives off ghost-like, neutrino particles that can be detected by 1000 ton detectors built a mile underground, where they are shielded from the cosmic rays that rain down on the Earth. Collaborations between physicists and geologists are detecting these "geo-neutrinos". Future underwater detectors, deployed at different points on the ocean floor, will create a neutrino tomographic image of mantle structures sited at the base of the mantle above the core. > > >
Inverse problems are among the most important mathematical problems in science. They arise in many branches of geophysics, medical imaging, remote sensing, ocean acoustic tomography, nondestructive testing, astronomy, physics and many other fields. Geophysicists remotely measure the seismic (acoustic), gravity, and electromagnetic fields of the Earth and then treat the inverse problem to constrain the properties of the Earth's interior.
13,102
I am in the process of building a dining table out of solid walnut. I don't have the skills or equipment to make the tabletop myself, so I got my wood supplier to do it for me. The tabletop is 6/4 walnut, 7' long by 40" wide, and is made up of 6 smaller boards glued together. The shop is reputable and seems to know what they are doing. I received it about a month ago, and since then, it's been standing behind my living room couch leaning against the wall (long side on the floor). It was leaning at maybe a 10 degree angle, with the underside of the piece facing away from the wall. Anyways, now that I've completed the base, I'm paying attention to the tabletop, and I notice that there is some significant cupping. The concave side (the underside) goes in about 3/16" in the middle down from the edges. Unfortunately, I didn't check for cupping when I initially received the piece, so I'm not sure if it came like this or if I caused it myself somehow. Regardless, the damage is done, so what are my options now? I've seen some articles/videos where they dampen the concave side of a cupped board and let it sit overnight to straighten it out, but will that work on my 6/4 tabletop? I tried applying this technique yesterday evening, but this morning I don't measure any difference in the amount of cupping. Any advice would be greatly appreciated. Note: I don't have a table saw, thickness planer, or jointer.
2021/09/18
[ "https://woodworking.stackexchange.com/questions/13102", "https://woodworking.stackexchange.com", "https://woodworking.stackexchange.com/users/2909/" ]
> > Any advice would be greatly appreciated. > > > 3/16” over the length and width of the table isn’t much; if you put the top on the base and leave it unattached, it might flatten out on its own after a week or two. You could encourage it by putting something heavy, like a box of books (or two) in the middle. Whether that helps or not, you can probably also pull it flat with a screw or two through the middle of the base into the bottom of the tabletop. Depending on your design, that might mean that you have to add a piece to the base across the middle.
There are a couple of reasons for cupping like this. The first is assembly: if you don't pay attention to the glue-up job or don't actually make your joints square, you can get this, sometimes with rather exaggerated bends. It doesn't really sound like that is the case here, and I would certainly hope that a professional shop with any reputation wouldn't screw up like that. The next is that some boards have a lot of 'spring' in them and will 'bend' when pressure is released, sometimes even after cutting and squaring. I generally see this along the length of the board, and I've had hickory give me some major headaches (but that comes with milling your own logs...), but once again, a professional shop generally uses fairly high-quality lumber, and it's usually wood with sapwood and/or flat-sawn grain that does a lot of cupping. So the other, much more likely culprit: moisture. If you had it against the wall near an air vent, under an open window, or where it got a lot of sun throughout the day, there is a good chance that one side dried out faster than the other; with 6/4 lumber it's pretty easy for uneven moisture content to develop. 'Fixing' this might be as simple as Caleb stated: putting it on your base and waiting a little bit for the moisture to restabilize. It might take near the same amount of time as it took to cup in the first place. Good luck.
27,969
In an "alternate universe" where NASA continued to receive a mandate, funding and public support at say peak Apollo levels, could another ten or twenty years have gotten boots on Mars, with astronauts in those boots? Or would there be some clear technical challenge that really needed several more decades of development before this would have been possible? **Ideally:** a bit of math or some supporting links should be presented and not just an opinion or a list. This question is motivated in part by [this thoughtful answer](https://space.stackexchange.com/a/27968/12102).
2018/06/20
[ "https://space.stackexchange.com/questions/27969", "https://space.stackexchange.com", "https://space.stackexchange.com/users/12102/" ]
I second what Edlothiad suggested: Mars Direct would have been perfectly feasible. Mars Direct was created and proposed mainly by Dr. Robert Zubrin. His approach to a human Mars mission was precisely to find a way to do it using available technology, not technologies that were still in development or experimental, such as nuclear thermal rockets, advanced propulsion, or on-orbit assembly, all of which would drive up the costs. His research was initiated due to the 90-day report\* and the very high costs associated with a more extensive and elaborate plan for going to Mars. He also suggested a mission that is as lean as possible, requiring the least amount of mass to be launched, to make this possible with the class of launch vehicle technology available. Some of the key points to his approach: * Use an available Saturn V/Shuttle-class launch vehicle * Launch, transfer, land and verify an unfueled return vehicle prior to crewed transfer * Create propellant for the return voyage on Mars with the Sabatier reaction, using a small amount of hydrogen and a small nuclear reactor brought from Earth (ISRU) * Transfer the crew on a direct orbit to Mars with no on-orbit rendezvous, using a slower orbit that is safer because it has a free return in case of emergency * Land all crew on Mars (no astronaut stays in orbit) * Stay on Mars for an extended time (due to the conjunction orbit) Zubrin has been pushing this approach for decades without any real movement from NASA. The main reason is that NASA is no longer driven toward a singular goal, such as Mars, and has many technology-driven goals instead. To do humans to Mars with the available budget even in this cost-effective way would mean giving up on other programs such as the ISS.
Some of the excuses given by opponents for why it (or any other plan using available technology) would not work are: * We do not know how to deal with radiation, so more radiation research is needed * We want to go to Mars faster, so advanced propulsion is needed * ISRU has never been done so we cannot rely on it, so we need to bring fuel, so we need a huge ship and advanced propulsion * The mass margins are too slim All of these objections can be, and have been, rebutted by Zubrin. You can find multiple publications of this plan and its details, such as: <http://www.marspapers.org/paper/Zubrin_1991.pdf> \*Report of the 90 Day Study on Human Exploration of the Moon and Mars, 1989
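The ISRU leverage behind the Sabatier step in Mars Direct can be illustrated with rough stoichiometry. This is a back-of-the-envelope sketch: it assumes the electrolysis hydrogen is fully recycled, and it omits the extra CO2 processing Zubrin uses to reach a better oxidizer ratio and his often-quoted ~18:1 leverage.

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O, with the product water
# electrolyzed into H2 (recycled) and O2 (kept as oxidizer).
# Question: how much CH4/O2 propellant per kg of hydrogen shipped from Earth?
M_H2, M_CH4, M_O2 = 2.016, 16.043, 31.999   # molar masses, g/mol

# Per mole of CH4 produced: 4 mol H2 in, 2 mol H2 recovered by electrolysis.
net_h2 = (4 - 2) * M_H2            # hydrogen actually consumed
propellant = M_CH4 + M_O2          # methane plus electrolysis oxygen

leverage = propellant / net_h2
print(f"propellant mass per kg of H2 brought: {leverage:.1f} kg")
```

Even this simplified version shows roughly a 12:1 mass return on the hydrogen feedstock, which is why the plan needs only a small hydrogen tank instead of a full load of return propellant.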
There's always the thought that if you put enough money/resources behind a project, you can do anything, and while this is true in many cases, the amount of resources needed without key innovations can be astronomical and ultimately prohibitive. An example I often cite is the Pyramids: they were built without the wheel, but building them without it was an immense human-powered undertaking! Much in the same way, if we funneled a virtually limitless volume of resources into NASA, we could have any number of major advances using the technology we have available, and the funding to fuel new, incremental breakthroughs. Whether these breakthroughs would ultimately lead to an easier path to Mars is impossible to say, but consider this: if money were no object, a nation could incrementally build an immense space-faring habitat that could, in time, make its way to Mars. Whether peak Apollo-era funding would have been sufficient to achieve something of that scale is hard to say, but consider this: one Saturn V rocket could move 310,000 pounds into orbit at a cost of roughly 190 million dollars. The modern International Space Station weighs 500 tons, or just about 1,000,000 pounds. Supposing the habitat that would travel to Mars weighed a conservative double of that, or 2,000,000 pounds (1,000 tons), you are looking at no less than 7 successful Saturn V launches, totaling $1.3 billion in launches alone, not counting the craft, support, development, labor, assembly, and other costs - all in 1970s dollars. At those dollar amounts, you're beginning to look at taking a huge slice out of the United States' total budget pie chart at the time. So, in summation, while I think will and determination can overcome pretty much any technical hurdle, the economic factors make the possibility of this very, very slim, even with peak Apollo-level funding.
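The launch arithmetic in the answer above can be checked with a quick calculation. The Saturn V figures are as quoted in the answer, and the 1,000-ton habitat mass is the answer's own hypothetical assumption.

```python
# Back-of-the-envelope check of the Saturn V launch count and cost
# (1970s dollars, figures as quoted in the answer above).
import math

payload_lb = 310_000          # Saturn V payload to orbit, per the answer
cost_per_launch = 190e6       # rough per-launch cost, per the answer
habitat_lb = 2_000_000        # assumed 1,000-ton Mars habitat

launches = math.ceil(habitat_lb / payload_lb)
total_cost = launches * cost_per_launch

print(launches, f"${total_cost / 1e9:.2f}B")   # 7 launches, ~$1.33B
```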
27,969
In an "alternate universe" where NASA continued to receive a mandate, funding and public support at say peak Apollo levels, could another ten or twenty years have gotten boots on Mars, with astronauts in those boots? Or would there be some clear technical challenge that really needed several more decades of development before this would have been possible? **Ideally:** a bit of math or some supporting links should be presented and not just an opinion or a list. This question is motivated in part by [this thoughtful answer](https://space.stackexchange.com/a/27968/12102).
2018/06/20
[ "https://space.stackexchange.com/questions/27969", "https://space.stackexchange.com", "https://space.stackexchange.com/users/12102/" ]
I second what Edlothiad suggested: Mars Direct would have been perfectly feasible. Mars Direct was created and proposed mainly by Dr. Robert Zubrin. His approach to a human Mars mission was precisely to find a way to do it using available technology, not technologies that were still in development or experimental, such as nuclear thermal rockets, advanced propulsion, or on-orbit assembly, all of which would drive up the costs. His research was initiated due to the 90-day report\* and the very high costs associated with a more extensive and elaborate plan for going to Mars. He also suggested a mission that is as lean as possible, requiring the least amount of mass to be launched, to make this possible with the class of launch vehicle technology available. Some of the key points to his approach: * Use an available Saturn V/Shuttle-class launch vehicle * Launch, transfer, land and verify an unfueled return vehicle prior to crewed transfer * Create propellant for the return voyage on Mars with the Sabatier reaction, using a small amount of hydrogen and a small nuclear reactor brought from Earth (ISRU) * Transfer the crew on a direct orbit to Mars with no on-orbit rendezvous, using a slower orbit that is safer because it has a free return in case of emergency * Land all crew on Mars (no astronaut stays in orbit) * Stay on Mars for an extended time (due to the conjunction orbit) Zubrin has been pushing this approach for decades without any real movement from NASA. The main reason is that NASA is no longer driven toward a singular goal, such as Mars, and has many technology-driven goals instead. To do humans to Mars with the available budget even in this cost-effective way would mean giving up on other programs such as the ISS.
Some of the excuses given by opponents for why it (or any other plan using available technology) would not work are: * We do not know how to deal with radiation, so more radiation research is needed * We want to go to Mars faster, so advanced propulsion is needed * ISRU has never been done so we cannot rely on it, so we need to bring fuel, so we need a huge ship and advanced propulsion * The mass margins are too slim All of these objections can be, and have been, rebutted by Zubrin. You can find multiple publications of this plan and its details, such as: <http://www.marspapers.org/paper/Zubrin_1991.pdf> \*Report of the 90 Day Study on Human Exploration of the Moon and Mars, 1989
It seems strange to me that no one has mentioned Project Orion (1958-1965), a theoretical study of using atomic bombs to launch rockets and propel them in interplanetary space. That would have been many times more efficient than chemical rockets, and the members of Project Orion hoped that it would enable manned lunar and interplanetary missions in the 1960s and 1970s. In regard specifically to Mars, in 2003 the BBC aired a documentary about Project Orion titled *To Mars by A Bomb: The Secret History of Project Orion*. It seems quite possible that if Project Orion had been supported there could have been manned missions to Mars and other planets in the 1970s.
27,969
In an "alternate universe" where NASA continued to receive a mandate, funding and public support at say peak Apollo levels, could another ten or twenty years have gotten boots on Mars, with astronauts in those boots? Or would there be some clear technical challenge that really needed several more decades of development before this would have been possible? **Ideally:** a bit of math or some supporting links should be presented and not just an opinion or a list. This question is motivated in part by [this thoughtful answer](https://space.stackexchange.com/a/27968/12102).
2018/06/20
[ "https://space.stackexchange.com/questions/27969", "https://space.stackexchange.com", "https://space.stackexchange.com/users/12102/" ]
**Probably yes**. To add to the other already excellent answers: There is the very relevant blog [Spaceflight History](https://spaceflighthistory.blogspot.com/) which focusses on "space exploration history told through missions & programs that didn't happen". It has many [postings regarding Mars](https://spaceflighthistory.blogspot.com/search?q=mars). Some examples: * The [NERVA-Electric Piloted Mars Mission](https://spaceflighthistory.blogspot.com/2016/12/nerva-electric-mars-mission-1966.html) from 1966, which was the last great plan by von Braun and would have put humans on Mars on 23 September 1986. That might be what you are looking for: it's the most realistic plan from 1966, i.e. when Apollo was not yet doomed. * [After EMPIRE: Using Apollo Technology to Explore Mars and Venus](https://spaceflighthistory.blogspot.com/2015/07/after-empire-using-apollo-technology-to.html) from 1965, which would lead to a manned flyby in 1978. Certainly possible. * [A New Step in Spaceflight Evolution: To Mars by Flyby-Landing Excursion Mode](https://spaceflighthistory.blogspot.com/2017/09/a-new-step-in-spaceflight-evolution-to.html) from 1966, which describes a manned flyby with an orbiter in 1978. Also quite possible, and it didn't even need peak Apollo. * [Humans on Mars in 1995](https://spaceflighthistory.blogspot.com/2015/10/the-martian-adventure-humans-on-mars-in.html) from 1980-1981, which uses the Shuttle and puts humans on Mars in 1995. This is a realistic plan from 1980 with European involvement. * And if you really want to dream: [Dyna-Soar's Martian Cousin](https://spaceflighthistory.blogspot.com/2015/09/the-martian-adventure-dyna-soars.html) from 1960, which uses a version of the X-20 Dyna-Soar to put humans on Mars in 1971. But that's more SF, as you need a working X-20 in 1965.
There's always the thought that if you put enough money/resources behind a project, you can do anything, and while this is true in many cases, the amount of resources needed without key innovations can be astronomical and ultimately prohibitive. An example I often cite is the Pyramids: they were built without the wheel, but building them without it was an immense human-powered undertaking! Much in the same way, if we funneled a virtually limitless volume of resources into NASA, we could have any number of major advances using the technology we have available, and the funding to fuel new, incremental breakthroughs. Whether these breakthroughs would ultimately lead to an easier path to Mars is impossible to say, but consider this: if money were no object, a nation could incrementally build an immense space-faring habitat that could, in time, make its way to Mars. Whether peak Apollo-era funding would have been sufficient to achieve something of that scale is hard to say, but consider this: one Saturn V rocket could move 310,000 pounds into orbit at a cost of roughly 190 million dollars. The modern International Space Station weighs 500 tons, or just about 1,000,000 pounds. Supposing the habitat that would travel to Mars weighed a conservative double of that, or 2,000,000 pounds (1,000 tons), you are looking at no less than 7 successful Saturn V launches, totaling $1.3 billion in launches alone, not counting the craft, support, development, labor, assembly, and other costs - all in 1970s dollars. At those dollar amounts, you're beginning to look at taking a huge slice out of the United States' total budget pie chart at the time. So, in summation, while I think will and determination can overcome pretty much any technical hurdle, the economic factors make the possibility of this very, very slim, even with peak Apollo-level funding.
27,969
In an "alternate universe" where NASA continued to receive a mandate, funding and public support at say peak Apollo levels, could another ten or twenty years have gotten boots on Mars, with astronauts in those boots? Or would there be some clear technical challenge that really needed several more decades of development before this would have been possible? **Ideally:** a bit of math or some supporting links should be presented and not just an opinion or a list. This question is motivated in part by [this thoughtful answer](https://space.stackexchange.com/a/27968/12102).
2018/06/20
[ "https://space.stackexchange.com/questions/27969", "https://space.stackexchange.com", "https://space.stackexchange.com/users/12102/" ]
**Probably yes**. To add to the other already excellent answers: There is the very relevant blog [Spaceflight History](https://spaceflighthistory.blogspot.com/) which focusses on "space exploration history told through missions & programs that didn't happen". It has many [postings regarding Mars](https://spaceflighthistory.blogspot.com/search?q=mars). Some examples: * The [NERVA-Electric Piloted Mars Mission](https://spaceflighthistory.blogspot.com/2016/12/nerva-electric-mars-mission-1966.html) from 1966, which was the last great plan by von Braun and would have put humans on Mars on 23 September 1986. That might be what you are looking for: it's the most realistic plan from 1966, i.e. when Apollo was not yet doomed. * [After EMPIRE: Using Apollo Technology to Explore Mars and Venus](https://spaceflighthistory.blogspot.com/2015/07/after-empire-using-apollo-technology-to.html) from 1965, which would lead to a manned flyby in 1978. Certainly possible. * [A New Step in Spaceflight Evolution: To Mars by Flyby-Landing Excursion Mode](https://spaceflighthistory.blogspot.com/2017/09/a-new-step-in-spaceflight-evolution-to.html) from 1966, which describes a manned flyby with an orbiter in 1978. Also quite possible, and it didn't even need peak Apollo. * [Humans on Mars in 1995](https://spaceflighthistory.blogspot.com/2015/10/the-martian-adventure-humans-on-mars-in.html) from 1980-1981, which uses the Shuttle and puts humans on Mars in 1995. This is a realistic plan from 1980 with European involvement. * And if you really want to dream: [Dyna-Soar's Martian Cousin](https://spaceflighthistory.blogspot.com/2015/09/the-martian-adventure-dyna-soars.html) from 1960, which uses a version of the X-20 Dyna-Soar to put humans on Mars in 1971. But that's more SF, as you need a working X-20 in 1965.
It seems strange to me that no one has mentioned Project Orion (1958-1965), a theoretical study of using atomic bombs to launch rockets and propel them in interplanetary space. That would have been many times more efficient than chemical rockets, and the members of Project Orion hoped that it would enable manned lunar and interplanetary missions in the 1960s and 1970s. In regard specifically to Mars, in 2003 the BBC aired a documentary about Project Orion titled *To Mars by A Bomb: The Secret History of Project Orion*. It seems quite possible that if Project Orion had been supported there could have been manned missions to Mars and other planets in the 1970s.
27,969
In an "alternate universe" where NASA continued to receive a mandate, funding and public support at say peak Apollo levels, could another ten or twenty years have gotten boots on Mars, with astronauts in those boots? Or would there be some clear technical challenge that really needed several more decades of development before this would have been possible? **Ideally:** a bit of math or some supporting links should be presented and not just an opinion or a list. This question is motivated in part by [this thoughtful answer](https://space.stackexchange.com/a/27968/12102).
2018/06/20
[ "https://space.stackexchange.com/questions/27969", "https://space.stackexchange.com", "https://space.stackexchange.com/users/12102/" ]
It seems strange to me that no one has mentioned Project Orion (1958-1965), a theoretical study of using atomic bombs to launch rockets and propel them in interplanetary space. That would have been many times more efficient than chemical rockets, and the members of Project Orion hoped that it would enable manned lunar and interplanetary missions in the 1960s and 1970s. In regard specifically to Mars, in 2003 the BBC aired a documentary about Project Orion titled *To Mars by A Bomb: The Secret History of Project Orion*. It seems quite possible that if Project Orion had been supported there could have been manned missions to Mars and other planets in the 1970s.
There's always the thought that if you put enough money/resources behind a project, you can do anything, and while this is true in many cases, the amount of resources needed without key innovations can be astronomical and ultimately prohibitive. An example I often cite is the Pyramids: they were built without the wheel, and building them without it was an immense human-powered task! Much in the same way, if we funneled a virtually limitless volume of resources into NASA, we could have any number of major advances using the technology we have available, and the funding to fuel new, incremental breakthroughs. Whether these breakthroughs would ultimately lead to an easier path to Mars is impossible to say, but consider this: if money were no object, a nation could incrementally build an immense space-faring habitat that could, in time, make its way to Mars. Whether peak Apollo-era funding would have been sufficient to achieve something of that scale is hard to say, but consider this: one Saturn V rocket could move 310,000 pounds into orbit at a cost of roughly 190 million dollars. The modern International Space Station weighs 500 tons, or just about 1,000,000 pounds. Supposing the habitat that would travel to Mars weighed a conservative double of that, or 2,000,000 pounds (1,000 tons), you are looking at no less than 7 successful Saturn V launches, totaling $1.3 billion in launches alone, not counting the craft, support, development, labor, assembly, and other costs, all in 1970s dollars. At those dollar amounts, you're beginning to look at taking a huge slice out of the United States' total budget pie chart at the time. So, in summation, while I think will and determination can overcome pretty much any technical hurdle, the economic factors make the possibility of this very, very slim, even with peak Apollo-level funding.
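The launch arithmetic in the answer above is easy to verify. A quick sketch (the payload and cost figures are the answer's own rough 1970s estimates, not precise historical values):

```python
import math

# Figures as quoted in the answer above (rough 1970s estimates).
SATURN_V_PAYLOAD_LB = 310_000       # payload to orbit per launch
SATURN_V_COST_USD = 190_000_000     # approximate cost per launch
HABITAT_WEIGHT_LB = 2_000_000       # hypothetical Mars habitat (2x ISS weight)

# Minimum number of fully successful launches, rounding up.
launches = math.ceil(HABITAT_WEIGHT_LB / SATURN_V_PAYLOAD_LB)
total_cost = launches * SATURN_V_COST_USD

print(launches, total_cost)  # → 7 1330000000
```

Seven launches at $190 million each comes to $1.33 billion, matching the answer's "$1.3 Billion in launches alone".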
16,006
From <http://en.wikipedia.org/wiki/Mutant_%28fictional%29#DC_Comics> > > Mutants play a smaller, but still substantial role in DC Comics > > > However, that particular Wiki article only mentions very few mutants (Captain Comet, and a couple of Batman adversaries), none of whom strikes me as important enough to warrant a "**substantial**" role. Was that expression inaccurate? Or are there really mutants who play a genuinely substantial role in the DC universe?
2012/05/03
[ "https://scifi.stackexchange.com/questions/16006", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/976/" ]
The issue at hand is the DC Universe's definition of mutant. Finishing off your quote: > > DC Comics **does not make a semantic or an abstract distinction between humans (or superheroes/villains) born with mutations making them different from humans mutated by outside sources**. > > > This is to say by DC standards, Spider-man is a mutant, not a mutate. This opens up our definition and the number of applicable characters. > > Also characters who were transformed through radiation or a mutagenic gas are sometimes identified as mutants instead of Marvel's term, 'mutates'. > > > All humans with powers are simply referred to, and treated as, one group collectively known as *metahumans*. The term mutant does still exist for humans born with actual powers instead of attaining them. > > > Here is a handy picture for DC supers: ![enter image description here](https://i.stack.imgur.com/BRAAq.jpg) And for Marvel: ![enter image description here](https://i.stack.imgur.com/jMTta.jpg) So anyone who's been 'mutated' by an external force would be considered a mutant in their lexicon. For instance, *The Flash* might be considered a mutant, as the chemicals he was exposed to mutated him. Because of the huge overlap with metahumans though, and being the more general case, they're often just called meta humans.
DC steers away from the Marvel version of *mutant*, perhaps because it plays such a major role at the *House of Ideas* or simply because they feel no need. There are mutants within the DC Universe and have been for some time; the first I can recall is ***Captain Comet*** who, if memory serves, left the Earth *because* he was a mutant and different from other humans. Others include **Jade** and **Obsidian**, the children of the Golden Age **Green Lantern** and **Nuklon** or **Atom Smasher** as he is now called - all of whom were founding members of **Infinity Inc.**, the legacy group of the **JSA**. *Metahumans* seems to be the operative word of choice at DC and encompasses (pretty much) all heroes and villains with powers.
30,072
So far in Dead Island I've noticed white, green, blue, and purple items. These seem to indicate different rarities, similar to the classification system used in Borderlands, but I don't know what these colors actually mean. Is green more rare than blue? What are the different rarity colors?
2011/09/10
[ "https://gaming.stackexchange.com/questions/30072", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/207/" ]
So far, here are all the colors I've found ordered in quality (based on overall stats when compared to one of lesser color and also how often I see weapons of that color): white->green->blue->purple->orange The in game text during loading says that some of the quest rewards gives you unique weapons, and (assuming Gabriel's Sledgehammer is a unique), it is still an orange colored weapon. So I'm going to assume that even unique weapons are orange. I've also found a Master Chef, which was also orange. [This link](http://www.g4tv.com/thefeed/blog/post/716093/dead-island-weaponsmods-location-guide/) seems to back up my answer.
Orange are the super-rare or legendary weapons. Purple are rare; blue and green are also classed as rare, but tend to be less rare than purple and orange.
30,072
So far in Dead Island I've noticed white, green, blue, and purple items. These seem to indicate different rarities, similar to the classification system used in Borderlands, but I don't know what these colors actually mean. Is green more rare than blue? What are the different rarity colors?
2011/09/10
[ "https://gaming.stackexchange.com/questions/30072", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/207/" ]
So far, here are all the colors I've found ordered in quality (based on overall stats when compared to one of lesser color and also how often I see weapons of that color): white->green->blue->purple->orange The in game text during loading says that some of the quest rewards gives you unique weapons, and (assuming Gabriel's Sledgehammer is a unique), it is still an orange colored weapon. So I'm going to assume that even unique weapons are orange. I've also found a Master Chef, which was also orange. [This link](http://www.g4tv.com/thefeed/blog/post/716093/dead-island-weaponsmods-location-guide/) seems to back up my answer.
As Ashley Nunn points out in an [answer to another question](https://gaming.stackexchange.com/a/46137/8366), the Dead Island wiki [explains each color in detail](http://deadisland.wikia.com/wiki/Item_Rarity): > > There are a total of FIVE different levels of rarity for an item in Dead Island. These are: > > > * White (Common) - Absolutely everywhere, most items and weapons are this color. > * Green (Uncommon) - Frequently found in chests and given as a reward from quests. Uncommon items are occasionaly [sic] dropped by zombies on death. > * Blue (Rare) - Often given as a reward from quests but can be found on zombies after death and in chests although the chance is low. > * Violet (Unique) - Occasionaly [sic] given as a reward from quests but can be found on zombies after death and in chests but the chance of this is very low. > * Orange (Legendary) - Rarely given as a rewards from quests and is both rarely dropped by zombies on death and found in chests. Finding one of these is truely [sic] a gem, congratulations if you have one. > > >
128,840
We need to do some user documentation for a product we have been working on for the past few sprints. We are now starting a new project in the next sprint and the PO is making the documentation for the product produced previously a User story for this sprint. I am just wondering your opinion on this approach. Personally, I don't agree that documentation is a User Story within Scrum because it doesn't produce any code. EDIT: Thanks for your opinions guys. I had it in the back of my head that a sprint was to implement an increment of working software, but your views have changed my outlook. Thank you for all your answers.
2012/01/06
[ "https://softwareengineering.stackexchange.com/questions/128840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40251/" ]
"As a user of X, I need to know how X works" seems like a legitimate user story to me. This could result in written documentation or online help. The point isn't just code--it's meeting the users' requirements.
Ideally, documentation is part of every user story and never builds up. But, in the real world, that often doesn't happen. In that case, you should create a user story for catching up on a specific missing piece of documentation. You're right, it doesn't produce any code. But it does satisfy a user requirement and should be prioritised against other user requirements. If this means that it never gets done, because this and that functionality is being worked on, then you probably didn't need the documentation that badly.
128,840
We need to do some user documentation for a product we have been working on for the past few sprints. We are now starting a new project in the next sprint and the PO is making the documentation for the product produced previously a User story for this sprint. I am just wondering your opinion on this approach. Personally, I don't agree that documentation is a User Story within Scrum because it doesn't produce any code. EDIT: Thanks for your opinions guys. I had it in the back of my head that a sprint was to implement an increment of working software, but your views have changed my outlook. Thank you for all your answers.
2012/01/06
[ "https://softwareengineering.stackexchange.com/questions/128840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40251/" ]
Ideally, documentation is part of every user story and never builds up. But, in the real world, that often doesn't happen. In that case, you should create a user story for catching up on a specific missing piece of documentation. You're right, it doesn't produce any code. But it does satisfy a user requirement and should be prioritised against other user requirements. If this means that it never gets done, because this and that functionality is being worked on, then you probably didn't need the documentation that badly.
I agree with pdr's documentation assessment if it's about requirements, technical or project documentation. Ideally it should be incorporated into sprint work. Product documentation I feel is very different, as it is an actual user-requested deliverable and directly provides value to the user. It should be understood, of course, that product documentation is essentially **not** a Technical Task but a Functional Task, and may or may not be a suitable activity for a technical resource on the project. I think it should be a user story; however, I feel that a project resource who has a firm understanding of the business requirements, the user perspective, and good technical writing skills should be assigned these tasks. Ideally this would be a business analyst if one is available, or perhaps a senior QA tester with a firm understanding of the requirements, user stories, and good technical writing skills. This could also be a developer; however, product documentation written by developers tends not to be as high-quality or as useful, because developers are usually too close to the technical details.
128,840
We need to do some user documentation for a product we have been working on for the past few sprints. We are now starting a new project in the next sprint and the PO is making the documentation for the product produced previously a User story for this sprint. I am just wondering your opinion on this approach. Personally, I don't agree that documentation is a User Story within Scrum because it doesn't produce any code. EDIT: Thanks for your opinions guys. I had it in the back of my head that a sprint was to implement an increment of working software, but your views have changed my outlook. Thank you for all your answers.
2012/01/06
[ "https://softwareengineering.stackexchange.com/questions/128840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40251/" ]
Ideally, documentation is part of every user story and never builds up. But, in the real world, that often doesn't happen. In that case, you should create a user story for catching up on a specific missing piece of documentation. You're right, it doesn't produce any code. But it does satisfy a user requirement and should be prioritised against other user requirements. If this means that it never gets done, because this and that functionality is being worked on, then you probably didn't need the documentation that badly.
In our organization, the tooling team, in charge of maintaining and enhancing our continuous integration system, is using Scrum to help them manage their work. They are not writing code but they are practicing Scrum nonetheless. To answer your question specifically, I would ask if the team considers that the documentation is part of the "Definition of Done" or not. If the team considers that the documentation is part of the "definition of done" then, there is no need for an additional story and the story cannot be accepted unless the documentation is written and validated. If the team considers that the documentation is not part of the "definition of done", I would create a separate story so that the Product Owner can manage their work.
128,840
We need to do some user documentation for a product we have been working on for the past few sprints. We are now starting a new project in the next sprint and the PO is making the documentation for the product produced previously a User story for this sprint. I am just wondering your opinion on this approach. Personally, I don't agree that documentation is a User Story within Scrum because it doesn't produce any code. EDIT: Thanks for your opinions guys. I had it in the back of my head that a sprint was to implement an increment of working software, but your views have changed my outlook. Thank you for all your answers.
2012/01/06
[ "https://softwareengineering.stackexchange.com/questions/128840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40251/" ]
"As a user of X, I need to know how X works" seems like a legitimate user story to me. This could result in written documentation or online help. The point isn't just code--it's meeting the users' requirements.
I agree with pdr's documentation assessment if it's about requirements, technical or project documentation. Ideally it should be incorporated into sprint work. Product documentation I feel is very different, as it is an actual user-requested deliverable and directly provides value to the user. It should be understood, of course, that product documentation is essentially **not** a Technical Task but a Functional Task, and may or may not be a suitable activity for a technical resource on the project. I think it should be a user story; however, I feel that a project resource who has a firm understanding of the business requirements, the user perspective, and good technical writing skills should be assigned these tasks. Ideally this would be a business analyst if one is available, or perhaps a senior QA tester with a firm understanding of the requirements, user stories, and good technical writing skills. This could also be a developer; however, product documentation written by developers tends not to be as high-quality or as useful, because developers are usually too close to the technical details.
128,840
We need to do some user documentation for a product we have been working on for the past few sprints. We are now starting a new project in the next sprint and the PO is making the documentation for the product produced previously a User story for this sprint. I am just wondering your opinion on this approach. Personally, I don't agree that documentation is a User Story within Scrum because it doesn't produce any code. EDIT: Thanks for your opinions guys. I had it in the back of my head that a sprint was to implement an increment of working software, but your views have changed my outlook. Thank you for all your answers.
2012/01/06
[ "https://softwareengineering.stackexchange.com/questions/128840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40251/" ]
"As a user of X, I need to know how X works" seems like a legitimate user story to me. This could result in written documentation or online help. The point isn't just code--it's meeting the users' requirements.
In our organization, the tooling team, in charge of maintaining and enhancing our continuous integration system, is using Scrum to help them manage their work. They are not writing code but they are practicing Scrum nonetheless. To answer your question specifically, I would ask if the team considers that the documentation is part of the "Definition of Done" or not. If the team considers that the documentation is part of the "definition of done" then, there is no need for an additional story and the story cannot be accepted unless the documentation is written and validated. If the team considers that the documentation is not part of the "definition of done", I would create a separate story so that the Product Owner can manage their work.
128,840
We need to do some user documentation for a product we have been working on for the past few sprints. We are now starting a new project in the next sprint and the PO is making the documentation for the product produced previously a User story for this sprint. I am just wondering your opinion on this approach. Personally, I don't agree that documentation is a User Story within Scrum because it doesn't produce any code. EDIT: Thanks for your opinions guys. I had it in the back of my head that a sprint was to implement an increment of working software, but your views have changed my outlook. Thank you for all your answers.
2012/01/06
[ "https://softwareengineering.stackexchange.com/questions/128840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40251/" ]
I agree with pdr's documentation assessment if it's about requirements, technical or project documentation. Ideally it should be incorporated into sprint work. Product documentation I feel is very different, as it is an actual user-requested deliverable and directly provides value to the user. It should be understood, of course, that product documentation is essentially **not** a Technical Task but a Functional Task, and may or may not be a suitable activity for a technical resource on the project. I think it should be a user story; however, I feel that a project resource who has a firm understanding of the business requirements, the user perspective, and good technical writing skills should be assigned these tasks. Ideally this would be a business analyst if one is available, or perhaps a senior QA tester with a firm understanding of the requirements, user stories, and good technical writing skills. This could also be a developer; however, product documentation written by developers tends not to be as high-quality or as useful, because developers are usually too close to the technical details.
In our organization, the tooling team, in charge of maintaining and enhancing our continuous integration system, is using Scrum to help them manage their work. They are not writing code but they are practicing Scrum nonetheless. To answer your question specifically, I would ask if the team considers that the documentation is part of the "Definition of Done" or not. If the team considers that the documentation is part of the "definition of done" then, there is no need for an additional story and the story cannot be accepted unless the documentation is written and validated. If the team considers that the documentation is not part of the "definition of done", I would create a separate story so that the Product Owner can manage their work.
46,316
Our pleasure is pain and suffering for the animals that are slaughtered. When we cause suffering and pain to others, we lose the moral right to expect that the same never happens to us. In the case of animal slaughter, do we also lose the moral right to mourn if our loved ones are killed in the same way that we, being non-vegetarian, kill other creatures? In other words, do we have the moral right to regard it as an injustice to ourselves if our loved ones are killed, given that we are non-vegetarian? **EDIT**: It seems I am unable to express myself clearly, so I would like to ask the same question in other words: do non-vegetarians lose the right to complain to God if their loved ones are killed, presuming God doesn't want us to kill any creature intentionally for sensual pleasures? (This seems obvious if God is benevolent and equal to all creatures.)
2017/10/01
[ "https://philosophy.stackexchange.com/questions/46316", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/23084/" ]
> > *moral rights* > > > "Rights" aren't a moral or ethical category. They are a juridical category. > > *to mourn* > > > To "mourn", on the other hand, is a psychological, not a moral or juridical, phenomenon. So, the "moral right to mourn" is a conflation of disparate concepts, which becomes meaningless at all three - juridical, moral, psychological - levels. Juridically, there is nothing that can be done to prevent people from mourning. If they think that the defeat of their preferred soccer team is something to mourn, more than the death of thousands of people in a war on the other side of the planet, there is nothing to be done about it. What would we do? Jail them? Tell them they are going to hell? Dose them with some chemical that will make them unable to mourn? Morally, you can condemn whatever you want. It doesn't affect the rights of others. You may think that those who watch too much television lose their moral grounds to complain about the moral decadence of society, for television is hugely responsible for such decadence. They will still watch TV and complain about the decaying mores of the commonwealth. Both things are, and should be, legal rights; a society in which either or both were forbidden would be horrible to live in. Psychologically, there is nothing that can be done about mourning or not mourning. One may think that I should mourn the extinction of the pox virus; but the fact is that if I am not, for any reason, psychologically attached to such a virus, I won't mourn its extinction, and may be indifferent or happy about it. It's possible, I guess, to shame people into pretending that they are unhappy about a given event, but it is not possible to make them unhappy if they are not. And this - shaming people - is what this idea is probably about. 
It is not that we should not mourn the passing of our grandmother just because we just ate a barbecue; it is that we should not have eaten the barbecue, for we should think of the poor cow as we think of our grandmother. But as some of the comments above pointed out, the vast majority of human beings do not think a cow is equivalent to a human being, and consequently won't be able to act as if it was. --- Evidently, the idea of an all-encompassing equivalence among all living beings is pragmatically unsustainable. All human beings, vegetarians and animal-rights activists included, are "speciesist" - they do not think killing a plant is the same as killing an animal, they do not think killing an insect - or any invertebrate - is the same as killing a mammal, they rarely think that amphibians, fish, or lizards are on the same standing as birds and mammals, and more often than not discriminate among mammals - who empathises with a bat or a hyaena as much as with a panda, for instance? --- Talking about empathy, it is often repeated that a psychopath is someone who is devoid of empathy. But empathy is a complicated thing; as someone else put it, if you break your leg, do you want a physician who painfully pulls it into the right position, or a physician who hugs you and cries along with you? "Empathy" can be paralysing in this sense. If no empathy makes one a psychopath, unqualified empathy may turn one *hysteric*. Most of us are by far more empathetic towards our own relatives, friends, neighbours, etc., than towards people we do not know who live far away. And the idea that you should not mourn your grandmother because you didn't properly mourn the victims of a hurricane in Texas or an earthquake in Mexico is somewhat disturbing - at some point, all-encompassing empathy veers dangerously in the direction of no empathy at all.
Sea urchins can kill off kelp forests if left unchecked. It was meat-eating sea otters who restored the forests, where other critters can have a place to live. Goats are capable of turning grasslands into deserts. It was meat-eating wolves who checked the goats' numbers and kept alive the grassland, where other critters can live. When the Mongols were strong, farmers could not ravage the earth. There is nothing inherently morally superior in a vegetarian, and experience does not support this connection either. India, for example, probably has [the worst environmental record](https://www.google.com/search?q=cry%20of%20a%20river&hs=8Uk&channel=fs&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiOs9PEntLWAhVRziYKHUGjD_oQ_AUICygC&biw=1547&bih=855) in the whole world despite being mostly vegetarian.
1,556,716
I distributed an iPhone application to some testers who are using Windows. On their machines, the application that I built (in the form of an .app bundle) appeared as a directory. That makes sense because a .app bundle **is** actually a directory. What mechanism does Mac OS X use to display these bundles as files within Finder? Is there anyway to get the same kind of abstraction on Windows?
2009/10/12
[ "https://Stackoverflow.com/questions/1556716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105382/" ]
No, but why would you want to? An .app on Mac is not equivalent to an .exe on Windows - if you explore the .app bundle on a Mac (using "Show Package Contents"), you'll see an actual executable program inside. *This* program is more the equivalent of a Windows .exe, and you can liken the entire .app bundle to the Program Files directory for an application on Windows. All the .app serves to do is to concentrate program resources in one directory, much like an install directory does on Windows.
It's kind of the same magic that lets zip files appear as "compressed folders". It's what the shell does but I am afraid you have no way of hooking into such things.
1,556,716
I distributed an iPhone application to some testers who are using Windows. On their machines, the application that I built (in the form of an .app bundle) appeared as a directory. That makes sense because a .app bundle **is** actually a directory. What mechanism does Mac OS X use to display these bundles as files within Finder? Is there anyway to get the same kind of abstraction on Windows?
2009/10/12
[ "https://Stackoverflow.com/questions/1556716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105382/" ]
It's kind of the same magic that lets zip files appear as "compressed folders". It's what the shell does but I am afraid you have no way of hooking into such things.
If you are distributing to Windows users all will go much better if you give them IPA files instead of app bundles. The easiest way to create an IPA file is to drag the .app bundle into iTunes, and then look for the IPA file that it generates for you.
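For context, an .ipa is essentially a zip archive whose top level is a `Payload/` directory containing the .app bundle, so one can also be assembled by hand. A minimal Python sketch (the function name `make_ipa` is illustrative, not part of any Apple tooling):

```python
import os
import shutil
import tempfile

def make_ipa(app_bundle_path, output_base):
    """Package a .app bundle directory into an .ipa archive.

    An .ipa is a plain zip file whose top-level entry is a
    'Payload/' directory holding the .app bundle.
    """
    staging = tempfile.mkdtemp()
    try:
        payload = os.path.join(staging, "Payload")
        # Copy the whole bundle (it is just a directory) under Payload/
        shutil.copytree(
            app_bundle_path,
            os.path.join(payload, os.path.basename(app_bundle_path)),
        )
        # make_archive writes '<output_base>.zip'; rename it to '.ipa'
        zip_path = shutil.make_archive(output_base, "zip", root_dir=staging)
        ipa_path = output_base + ".ipa"
        os.replace(zip_path, ipa_path)
        return ipa_path
    finally:
        shutil.rmtree(staging)
```

Note that a hand-built archive like this is only useful for moving the files around; a properly installable IPA still needs code signing and provisioning, which iTunes/Xcode handle for you.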
1,556,716
I distributed an iPhone application to some testers who are using Windows. On their machines, the application that I built (in the form of an .app bundle) appeared as a directory. That makes sense because a .app bundle **is** actually a directory. What mechanism does Mac OS X use to display these bundles as files within Finder? Is there anyway to get the same kind of abstraction on Windows?
2009/10/12
[ "https://Stackoverflow.com/questions/1556716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105382/" ]
No, but why would you want to? An .app on Mac is not equivalent to an .exe on Windows - if you explore the .app bundle on a Mac (using "Show Package Contents"), you'll see an actual executable program inside. *This* program is more the equivalent of a Windows .exe, and you can liken the entire .app bundle to the Program Files directory for an application on Windows. All the .app serves to do is to concentrate program resources in one directory, much like an install directory does on Windows.
If you are distributing to Windows users all will go much better if you give them IPA files instead of app bundles. The easiest way to create an IPA file is to drag the .app bundle into iTunes, and then look for the IPA file that it generates for you.
401,043
What are some ways of documenting pitfalls and gotchas when working with a codebase with a team of developers? I'm working with a team of developers on a big codebase and there are lots of little things that cause headaches and frustration in code, configuration, and testing. It doesn't seem appropriate to put them into the GitHub pages because they could be scattered across repositories and projects - so my question is: what are some examples of documenting pitfalls and common gotchas?
2019/11/13
[ "https://softwareengineering.stackexchange.com/questions/401043", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/128090/" ]
Quirks and hacks which are easy to explain can be captured in the form of a **comment**. After all, the purpose of comments is exactly that: to come to the rescue when one cannot, or doesn't have enough resources to, make unclear code clearer. The goal here is to help a programmer who's reading a given line of code (or series of lines) and asks himself what went so wrong in the past that it resulted in this specific piece of code. Weird architecture decisions (which seemed right when they were taken, then appeared to be wrong, but nobody made the effort to refactor the codebase) should be documented in an **architecture document**. Put it in a location where anybody working on the project would find it easily, and make sure new programmers joining the project would necessarily read it. In both cases, make sure you (and your coworkers) understand that **there is nothing normal in having source code full of pitfalls and gotchas that need to be documented**. Talk with your product owner to reserve a few hours or days per week for large refactoring tasks (i.e. changes at design or architecture level). If you don't, the project is doomed, as is every other project where technical debt was left to increase. Moreover, code-level refactoring should be your constant activity: whenever you work on a piece of code, ensure you follow the boy scout rule: "Leave your code better than you found it." If a method is unclear because someone used one-letter names for the variables, don't keep it this way: rename them. It doesn't take long, and it will pay off the next time, when someone won't lose ten minutes trying to figure out how this method works. Most refactoring techniques are pretty simple and quick to implement, and many are very effective.
No quirks would be better ------------------------- First and foremost, the [principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment) applies: > > A typical formulation of the principle, from 1984, is: "If a necessary feature has a high astonishment factor, it may be necessary to **redesign the feature**." > > > In other words: **remove the quirks**. The time your developers lose on dealing with the quirks is likely greater than the time needed to remove the quirk altogether. [XKCD](https://xkcd.com/1205/) has a really handy guide showing the rough numbers on how long you can justify spending time on fixing something based on how often you encounter it: [![enter image description here](https://i.stack.imgur.com/1Vp6q.png)](https://i.stack.imgur.com/1Vp6q.png) Taking a simple example, if your team runs into a particular quirk once a week (on average), spends 30 minutes on it (on average), and this product is expected to be used for the next 5 years, that gives you a whopping 130 hours, or roughly **5 full days**, to work on fixing the quirk before you've spent more time than you've saved. Even if you encounter the same issue only on a monthly basis, that still gives you 30 hours, **more than a full day**, to fix it. Note that the time spent on a quirk very quickly adds up. Bugfixing, rerunning a comprehensive test playlist, communication during a daily standup, merge conflicts, training, reading the documentation on the quirk, ... Also note that training counts for double time because it's a time investment on both the trainer's and the trainee's part. Documentation is not a [panacea](https://en.wikipedia.org/wiki/Panacea_(medicine)) ---------------------------------------------------------------------------------- Don't get me wrong, documentation is essential, especially when you account for the [bus factor](https://en.wikipedia.org/wiki/Bus_factor), but documentation should not be your only line of defense. 
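The break-even arithmetic behind the chart is easy to reproduce yourself. A tiny sketch, using the example figures above (30 minutes per encounter, a 5-year product lifetime):

```python
def breakeven_hours(minutes_per_encounter, encounters_per_year, years=5):
    """Total time a quirk costs over the product's lifetime, i.e. the
    maximum time worth spending on removing it outright."""
    return minutes_per_encounter * encounters_per_year * years / 60

print(breakeven_hours(30, 52))  # weekly encounters  -> 130.0 hours (~5 full days)
print(breakeven_hours(30, 12))  # monthly encounters -> 30.0 hours (over a day)
```

Plug in your own team's numbers before dismissing a fix as "not worth it".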
Whenever developers encounter an issue, no one is going to read through the entire documentation before consulting anyone else (nor should they). If they think the problem is new, they will spend time digging into it, and it will take time (quirks tend to be inherently counterintuitive). If they don't think the problem is new, they're going to ask someone more experienced if there's a known resolution. Documentation is great for referring to, but it doesn't inherently prevent developers from distracting each other by asking about the problem. [Developer productivity tanks when they get distracted](https://www.gamasutra.com/view/feature/190891/programmer_interrupted.php), and the duration of the distraction is a surprisingly negligible factor here (whether the distraction takes seconds, minutes or hours). > > **No Distractions** > > Another approach for getting into the zone is to avoid all distractions. Some people are more sensitive to distraction than others, but we should always aim to minimise the kinds of distraction that pulls our mind out of what we're currently doing. **Avoiding the context switching cost is a big win**. > > > The rest of [the article](https://lifehacker.com/what-is-the-zone-anyway-5920484) is less relevant here, but it touches on the importance of not switching context if you want to keep productivity up. When you need to document ------------------------- But sometimes, quirks are too ingrained, or you're unable to convince management to refactor the code. These things happen. And when they do, documentation is the least you can do (cynical pun intended). For one-off quirks (i.e. located on a single line or in a single file), inline comments are the most easily noticed warnings. This ensures that developers who look at the quirky code are immediately alerted to the quirk. If the quirks are endemic to the architecture and thus spread across an entire layer (or worse, multiple layers), inline comments don't cut it. 
You don't want to pepper them across your codebase. Here, it becomes more appropriate to have a **developer guideline** for the project which outlines common pitfalls (among other things). Arseni's answer calls this an "architecture document", but the gist is the same. As a consultant, I tend to write these guidelines for projects where I'm unable to assist development or review the code on an ongoing basis (whether because of a limited contract duration or other responsibilities). The document contains more than just pitfalls, but the overall purpose of the content is to steer developers in the right direction with regard to implementation, separation of responsibilities over multiple layers, dependencies, and the aforementioned quirks, pitfalls and FAQ. This may seem like overkill, but in my job context I'm often writing these for teams where bad practice is the norm, or teams that struggle with consistency and/or good-practice guidelines. Given your situation of having *"lots of small little things that cause headaches and frustrations in code, configuration, and testing"*, that is not too dissimilar, and similar documentation may be warranted if you cannot continuously rely on the guiding hand of a more experienced developer/reviewer to avoid the quirks.
6,372,248
I have a problem and need some help. My application uses Outlook to send email with attachments. Right now I need to find out when an email with an attachment has been sent out completely by Outlook. I tried to follow this [link](http://social.msdn.microsoft.com/forums/en-US/vsto/thread/d891c669-21af-4ce4-b24b-8f6eb2308227), but the ItemEvents\_10\_SendEventHandler does not fulfil my task, as Outlook will still be attaching the document when this event is fired. I found out that the email takes time to send out due to the attachment, and the duration depends on the attachment size. I want my program to be notified, if possible, or to wait until the email has been sent out completely. Can someone guide me or tell me the approach on how to get this to work? Any help provided will be greatly appreciated.
2011/06/16
[ "https://Stackoverflow.com/questions/6372248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/293240/" ]
Mostly not; modern CPUs are very fast. The most notable improvement is in download speed. Strip out any unnecessary comments sent down the wire to the browser, minify CSS and JavaScript files, use CDNs, etc.
It depends how many comments there are. In general, though, most developers have a live version of all their code that is compressed: no whitespace outside of text formatting, no comments, etc. — plus an offline version that is developer-friendly with all the extra formatting and so on. Another thing to note is that '//' style comments won't really hinder performance, since the parser skips straight to the next line. With /\*\*/ comments, the parser has to keep reading all your comments until it encounters the closing \*/, so it's *ever-so-slightly* more CPU-intensive. Paragraph 2 though, imo. :)
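The parsing difference described above can be illustrated with a toy comment stripper (a deliberate simplification that ignores string literals): a `//` comment lets the scanner jump straight to the next newline, while a `/* */` comment forces it to scan ahead character by character for the closing delimiter.

```python
def strip_comments(src):
    """Toy stripper for //-style and /* */-style comments (ignores strings)."""
    out, i, n = [], 0, len(src)
    while i < n:
        if src.startswith("//", i):
            # Line comment: jump directly to the end of the line.
            nl = src.find("\n", i)
            i = n if nl == -1 else nl
        elif src.startswith("/*", i):
            # Block comment: must scan forward until the closing */ is found.
            end = src.find("*/", i + 2)
            i = n if end == -1 else end + 2
        else:
            out.append(src[i])
            i += 1
    return "".join(out)

print(strip_comments("a // x\nb"))    # -> "a \nb"
print(strip_comments("a /* x */ b"))  # -> "a  b"
```

Either way, the cost is proportional to the comment's length, which is why stripping comments during minification matters for bytes on the wire far more than for parse time.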
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than being a confessional Lutheran, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
The range of supernaturalism is much broader and more general than the religious doctrines contained in the Book of Concord. Taking the definitions from [wikipedia](https://en.wikipedia.org/wiki/Supernatural#Magic) (emphasis J.W.): * The **supernatural** is phenomena or entities that are not subject to the laws of nature. […] The term is attributed to non-physical entities, such as angels, demons, gods, and spirits. It also includes claimed abilities embodied in or provided by such beings, including magic, telekinesis, levitation, precognition, and extrasensory perception. * The philosophy of **naturalism** contends that nothing exists beyond the natural world, and as such approaches supernatural claims with skepticism. * **Magic** or sorcery is the use of rituals, symbols, actions, gestures, or language with the aim of utilizing supernatural forces. Hence magic presupposes the existence of supernatural entities and forces. From a logical point of view, the opposite relation does not hold, because one may believe in the existence of supernatural entities but assume that we cannot influence them.
There is no such thing as supernatural. Everything that exists is natural. Magic comes in two flavours: * Entertainment magic creates an *illusion* of something impossible. * "Real" magic is nothing but advanced technology that only looks impossible to an uneducated observer.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than being a confessional Lutheran, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
The range of supernaturalism is much broader and more general than the religious doctrines contained in the Book of Concord. Taking the definitions from [wikipedia](https://en.wikipedia.org/wiki/Supernatural#Magic) (emphasis J.W.): * The **supernatural** is phenomena or entities that are not subject to the laws of nature. […] The term is attributed to non-physical entities, such as angels, demons, gods, and spirits. It also includes claimed abilities embodied in or provided by such beings, including magic, telekinesis, levitation, precognition, and extrasensory perception. * The philosophy of **naturalism** contends that nothing exists beyond the natural world, and as such approaches supernatural claims with skepticism. * **Magic** or sorcery is the use of rituals, symbols, actions, gestures, or language with the aim of utilizing supernatural forces. Hence magic presupposes the existence of supernatural entities and forces. From a logical point of view, the opposite relation does not hold, because one may believe in the existence of supernatural entities but assume that we cannot influence them.
There are technical philosophical glosses of the concept of *magic* in modern fantasy writing. The problematique starts at the narrative level: "Is the guiding problem of the story a problem posed by magic? And how much magic is involved in the solution?" Then *magic* is formally read back into "the conditions of magical problems and solutions," so perhaps impredicatively, although then the [two](https://www.stephenrdonaldson.com/) [authors](https://faq.brandonsanderson.com/knowledge-base/what-are-sandersons-laws-of-magic/) who have developed this theory the most (to my knowledge, and separately) do go on to structure the use of magic in their stories in a pretty "rigorous" manner. So the stronger of the two embeds the ontology of magic, in his broadest work, into the matter/energy dichotomy, posing the counterfactual question, "If matter and energy are two species under an even more elementary genus, then what if there was a third term under the same genus?" Voila, presto, abracadabra, here you go: this third term, locally designated *investiture*, plays the narrative role of magic. Aleister Crowley defined magic like so: > > Magick is the Science and Art of causing Change to occur in conformity with Will. ... Every intentional act is a Magical act.... Magick is the Science of understanding oneself and one's conditions. It is the Art of applying that understanding in action. > > > And the narrative role of magic, in general, seems to be to give characters a way to cause highly distinct physical effects using pure free will (emotionally and/or intellectually interpreted). In a related historical vein, then, Hume portrayed the naive theory of promises as magical thinking ["at its finest"](https://plato.stanford.edu/entries/promises/#ProProObl): > > ... 
promissory obligations aren't just contingent upon acts of the will, like the obligations we might incur by deliberately damaging someone's property, but (at least it seems on first reflection) they are immediately created by acts of the will. When I promise to do something, it seems that *by so doing* I have created the obligation to do it. This feature makes promissory obligations a special puzzle for naturalistic ethical theories that hope to explain moral obligations without recourse to super-natural entities. The idea that we simply manufacture promissory obligations by speaking them, like an incantation, is decidedly mysterious. As Hume acidly remarked in the *Treatise*: > > > > > > > I shall further observe, that, since every new promise imposes a new obligation of morality on the person who promises, and since this new obligation arises from his will; it is one of the most mysterious and incomprehensible operations that can possibly be imagined, and may even be compared to *transubstantiation* or *holy orders*, where a certain form of words, along with a certain intention, changes entirely the nature of an external object, and even of a human creature. (*Treatise*, 3.2.5–14/15–524; emphasis in the original) > > > > > > > > > If, as with Kant, we try to locate free will "proper" in an eternal realm where God possibly exists, we will be hard-pressed to avoid the appearance of talking about magic, then; and what Hume mocked respecting obligations from promises becomes the manifold of all our responsibilities whatsoever instead, i.e. again the pure will. So for all that, narrative will-theoretic magic tropes might represent their content as embedded in what is otherwise a physical/natural world. What do we say to Clarke's "lemma," that sufficiently advanced technology is indistinguishable from magic? 
If God's nature is Its will (in some way that our mortal nature is not completely simultaneous with our will), then if God does something by magic, this is actually to do it by nature too, as well as supernaturally all at once. A feat worthy of an omnipotent paradox, perhaps.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than being a confessional Lutheran, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
The range of supernaturalism is much broader and more general than the religious doctrines contained in the Book of Concord. Taking the definitions from [wikipedia](https://en.wikipedia.org/wiki/Supernatural#Magic) (emphasis J.W.): * The **supernatural** is phenomena or entities that are not subject to the laws of nature. […] The term is attributed to non-physical entities, such as angels, demons, gods, and spirits. It also includes claimed abilities embodied in or provided by such beings, including magic, telekinesis, levitation, precognition, and extrasensory perception. * The philosophy of **naturalism** contends that nothing exists beyond the natural world, and as such approaches supernatural claims with skepticism. * **Magic** or sorcery is the use of rituals, symbols, actions, gestures, or language with the aim of utilizing supernatural forces. Hence magic presupposes the existence of supernatural entities and forces. From a logical point of view, the opposite relation does not hold, because one may believe in the existence of supernatural entities but assume that we cannot influence them.
As a preamble, I consider it to be poor practice to use the term "supernatural" for the non-physical. "Natural" has a very clear meaning in epistemology -- it describes the items/events/forces which are subject to the tools of methodological naturalism -- i.e. empiricism and reasoning. Supernaturalism, from an epistemological perspective, covers those things to which one *cannot* apply reasoning and empiricism. Use of "supernatural" for the ontologic category "spiritual", or for *any* non-physical thing, which are different but common uses, leads almost immediately into equivocation errors relative to the epistemological meanings of natural and supernatural. Note that there are many ontologically supernatural categories which are not epistemologically supernatural. All of Thomist philosophy and metaphysics, for instance, is purely rationalistic -- i.e. methodologically natural. And vitalism was a specialty within the science of biology at the start of the 20th century. And mathematics, logic, and ethics do not appear to be physical fields, yet they are very much subject to methodological naturalism. With that preamble, I will try to answer your question. There are two questions at issue. The first has to do with the ontology of our universe. Are there multiple fundamentally different types of things in it, or only one? And further, you are assuming that the existence of matter is beyond question, so the one is assumed to be material. What you are describing as "supernaturalism" would encompass a diverse set of ontologies, ranging from Platonic Idealism and Tegmark/Pythagorean mathematics as the source of everything, through the Mind-centric, idealist-leaning Perennial Philosophy, any concept of interactive spiritual dualism, and Popper's strongly emergent consciousness, to probably Russellian Monism as well. Your "supernatural" just appears to be the postulate that there are non-physical things or planes in our universe. 
All of the above ontologic views assume that non-material things can be causal on the material -- i.e. that physics is not causally closed. This interaction, from outside physics, is generally described as "magic" by physicalists. "Magic" is the method by which the ontic supernatural interacts with the physical. Physicalism is committed to the presumption that physics is causally closed, hence that magic is impossible. Physics is *not* causally closed, and cannot be, for multiple reasons. 1. Physics is underdetermined, hence an outside influence can influence an outcome within that suite of options, entirely consistently with physics. 2. Physics, so long as it is an actual science, is by definition incomplete, as well as uncertain, so it cannot exclude any phenomenon or outcome (both the ontic "supernatural" and "magic" are consistent with physics, so long as physics is not complete). 3. No space within our universe, nor the universe itself, can ever be isolated from outside influence. Both entanglement and cosmology assume that no physical system can ever be completely isolated from outside influences. 4. There are no absolute "laws" in science, only occasionally violated regularities, so causal closure, applied as a "law", is contrary to science. Some principles of "magic" -- by which Platonic Forms influence their shadows in this world, or by which the Will to Power creates life events, or by which Consciousness and Neurons are both reflections of the Russellian monad -- are postulated or assumed in each of these differing ontologies. But most of the details are currently not filled in. Provided these "supernatural" ontologies are not epistemologically supernatural, then the application of methodological naturalism should allow the details to gradually get filled in as to how magic works in that ontology. Or else identify sufficient problems and/or contradictions that could lead to the abandonment of that ontology by most of its holders. 
Note that philosophy over the millennia has been subject to methodological naturalism, and subject areas and sciences have emerged from it to become their own science or academic specialty, as progress is made in characterizing how to evaluate that subject. "Magic"-based "supernatural" ontologies are generally committed to this as a future path for their assumed ontology. For an example of training a skill of "magic", i.e. applying methodological naturalism to refine a magic methodology, look at the methods developed by the CIA to perform remote viewing. Here is an example site that offers this training today: <https://remoteviewingtraining.com/> For an example of a work that presumes the Perennial Philosophy as a starting point for doing science, and intrinsically accepts magic, see Beyond Physicalism: Toward Reconciliation of Science and Spirituality <https://www.amazon.com/gp/customer-reviews/RZY1A4EL2JOZ4?ref=pf_vv_at_pdctrvw_srp>. This work treats "magic" as able to potentially operate through all 4 of the exceptions to causal closure of the physical I noted. For one that takes a far more restrictive view of magic, and assumes that it will only act within the indeterminism of physics of the first exception, see Swinburne: Mind, Brain, and Free Will <https://www.amazon.com/gp/customer-reviews/R18J8OJA7QPLKX?ref=pf_vv_at_pdctrvw_srp>. 
Eccles also limits his magic to the indeterminate parts of physics: How the SELF Controls Its BRAIN [https://www.amazon.com/How-SELF-Controls-Its-BRAIN/dp/3642492266/ref=sr\_1\_1?crid=XBJGKN0G1Q5X&keywords=eccles+how+the+brain&qid=1651905829&sprefix=eccles+how+the+brain%2Caps%2C214&sr=8-1](https://rads.stackoverflow.com/amzn/click/com/3642492266) For an effort to understand how energy conservation could or could not apply to "magic", see this question and answer: 'The Zero Energy Hypothesis and its consequences for particle creation and dualist interactionism' <https://physics.stackexchange.com/q/494408/181964> As to whether one must accept magic if one accepts a "supernatural" ontology -- that one need not do so is the position of epiphenomenalism. Epiphenomenalism has a challenge to explain its own coherence, as asserting the truth of epiphenomenalism appears to be intrinsically a refutation of its own premise (assertions are physical, and presumably asserting the reality of consciousness as a non-physical item is a consequence of the reality of consciousness, hence is an example of consciousness being causal on the physical). Despite the coherence difficulties, there are epiphenomenalists among currently active philosophers. Both Chalmers and Jackson are "supernaturalists" relative to consciousness, and anti-magic epiphenomenalists relative to the effect of consciousness on the physical.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than being a confessional Lutheran, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
The range of supernaturalism is much broader and more general than the religious doctrines contained in the Book of Concord. Taking the definitions from [wikipedia](https://en.wikipedia.org/wiki/Supernatural#Magic) (emphasis J.W.): * The **supernatural** is phenomena or entities that are not subject to the laws of nature. […] The term is attributed to non-physical entities, such as angels, demons, gods, and spirits. It also includes claimed abilities embodied in or provided by such beings, including magic, telekinesis, levitation, precognition, and extrasensory perception. * The philosophy of **naturalism** contends that nothing exists beyond the natural world, and as such approaches supernatural claims with skepticism. * **Magic** or sorcery is the use of rituals, symbols, actions, gestures, or language with the aim of utilizing supernatural forces. Hence magic presupposes the existence of supernatural entities and forces. From a logical point of view, the opposite relation does not hold, because one may believe in the existence of supernatural entities but assume that we cannot influence them.
It seems to me that magic refers to the intervention of someone in the (accepted) laws of a system in order to attain something that otherwise could not be attained, metaphorically speaking "bending" the laws. On the other hand, supernatural refers to something that is happening that is considered outside of the (accepted) laws, either because of ignorance of the (real) laws or because of someone's magic. So magic is named from the point of view of doing something, and supernatural from the point of view of observing. (It's like energy and information!)
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than being a confessional Lutheran, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
There are technical philosophical glosses of the concept of *magic* in modern fantasy writing. The problematique starts at the narrative level: "Is the guiding problem of the story a problem posed by magic? And how much magic is involved in the solution?" Then *magic* is formally read back into "the conditions of magical problems and solutions," so perhaps impredicatively, although then the [two](https://www.stephenrdonaldson.com/) [authors](https://faq.brandonsanderson.com/knowledge-base/what-are-sandersons-laws-of-magic/) who have developed this theory the most (to my knowledge, and separately) do go on to structure the use of magic in their stories in a pretty "rigorous" manner. So the stronger of the two embeds the ontology of magic, in his broadest work, into the matter/energy dichotomy, posing the counterfactual question, "If matter and energy are two species under an even more elementary genus, then what if there was a third term under the same genus?" Voila, presto, abracadabra, here you go: this third term, locally designated *investiture*, plays the narrative role of magic. Aleister Crowley defined magic like so: > > Magick is the Science and Art of causing Change to occur in conformity with Will. ... Every intentional act is a Magical act.... Magick is the Science of understanding oneself and one's conditions. It is the Art of applying that understanding in action. > > > And the narrative role of magic, in general, seems to be to give characters a way to cause highly distinct physical effects using pure free will (emotionally and/or intellectually interpreted). In a related historical vein, then, Hume portrayed the naive theory of promises as magical thinking ["at its finest"](https://plato.stanford.edu/entries/promises/#ProProObl): > > ... 
promissory obligations aren't just contingent upon acts of the will, like the obligations we might incur by deliberately damaging someone's property, but (at least it seems on first reflection) they are immediately created by acts of the will. When I promise to do something, it seems that *by so doing* I have created the obligation to do it. This feature makes promissory obligations a special puzzle for naturalistic ethical theories that hope to explain moral obligations without recourse to super-natural entities. The idea that we simply manufacture promissory obligations by speaking them, like an incantation, is decidedly mysterious. As Hume acidly remarked in the *Treatise*: > > > > > > > I shall further observe, that, since every new promise imposes a new obligation of morality on the person who promises, and since this new obligation arises from his will; it is one of the most mysterious and incomprehensible operations that can possibly be imagined, and may even be compared to *transubstantiation* or *holy orders*, where a certain form of words, along with a certain intention, changes entirely the nature of an external object, and even of a human creature. (*Treatise*, 3.2.5–14/15–524; emphasis in the original) > > > > > > > > > If, as with Kant, we try to locate free will "proper" in an eternal realm where God possibly exists, we will be hard-pressed to avoid the appearance of talking about magic, then; and what Hume mocked respecting obligations from promises becomes the manifold of all our responsibilities whatsoever instead, i.e. again the pure will. So for all that, narrative will-theoretic magic tropes might represent their content as embedded in what is otherwise a physical/natural world. What do we say to Clarke's "lemma," that sufficiently advanced technology is indistinguishable from magic? 
If God's nature is Its will (in some way that our mortal nature is not completely simultaneous with our will), then if God does something by magic, this is actually to do it by nature too, as well as supernaturally all at once. A feat worthy of an omnipotent paradox, perhaps.
There is no such thing as supernatural. Everything that exists is natural. Magic comes in two flavours: * Entertainment magic creates an *illusion* of something impossible. * "Real" magic is nothing but advanced technology that only looks impossible to an uneducated observer.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience, that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than confessional Lutheranism, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
As a preamble, I consider it to be poor practice to use the term "supernatural" for the non-physical. "Natural" has a very clear meaning in epistemology -- it describes the items/events/forces which are subject to the tools of methodological naturalism -- IE empiricism and reasoning. Super-naturalism, from an epistemological perspective, refers to those things to which one *cannot* apply reasoning and empiricism. Use of "supernatural" for the ontologic category "spiritual", or for *any* non-physical thing -- which are different but common uses -- leads almost immediately into equivocation errors relative to the epistemological meanings of natural and supernatural. Note, there are many ontologically supernatural categories which are not epistemologically supernatural. All of Thomist philosophy and metaphysics, for instance, is purely rationalistic -- IE methodologically natural. And vitalism was a specialty within the science of biology at the start of the 20th century. And mathematics, logic, and ethics do not appear to be physical fields, yet they are very much subject to methodological naturalism. With that preamble, I will try to answer your question. There are two questions at issue. The first has to do with the ontology of our universe. Are there multiple fundamentally different types of things in it, or only one? And further, you are assuming that the existence of matter is beyond question, so the one is assumed to be material. What you are describing as "supernaturalism" would encompass a diverse set of ontologies, ranging from Platonic Idealism, Tegmark/Pythagorean math as the source of everything, through the Mind-centric idealist-leaning Perennial Philosophy, any concept of interactive spiritual dualism, Popper's strongly emergent consciousness, and probably Russellian Monism as well. Your "supernatural" just appears to be the postulate that there are non-physical things or planes in our universe. 
All of the above ontologic views assume that non-material things can be causal on the material -- IE that physics is not causally closed. This interaction, from outside physics, is generally described as "magic" by physicalists. "Magic" is the method by which the ontic supernatural interacts with the physical. Physicalism is committed to the presumption that physics is causally closed, hence magic is impossible. Physics is *not* causally closed, and cannot be, for multiple reasons. 1. Physics is underdetermined, hence an outside influence can select an outcome within that suite of options, entirely consistently with physics. 2. Physics, so long as it is an actual science, is by definition incomplete, as well as uncertain, so cannot exclude any phenomenon or outcome (both the ontic "supernatural" and "magic" are consistent with physics, so long as physics is not complete). 3. No space within our universe, nor the universe itself, can ever be isolated from outside influence. Both entanglement and cosmology assume that no physical system can ever be completely isolated from outside influences. 4. There are no absolute "laws" in science, only occasionally violated regularities, so causal closure, applied as a "law", is contrary to science. Some principles of "magic" -- by which Platonic Forms influence their shadows in this world, or by which the Will to Power creates life events, or how Consciousness and Neurons are both reflections of the Russellian monad -- are postulated or assumed in each of these differing ontologies. But most of the details are currently not filled in. Provided these "supernatural" ontologies are not epistemologically supernatural, then the application of methodological naturalism should allow the details to gradually be filled in as to how magic works in that ontology. Or else identify sufficient problems and/or contradictions that could lead to the abandonment of that ontology by most of its holders. 
Note that philosophy over the millennia has been subject to methodological naturalism, and subject areas and sciences have emerged from it to become their own science or academic specialty, as progress is made in characterizing how to evaluate that subject. "Magic"-based "supernatural" ontologies are generally committed to this as a future path for their assumed ontology. For an example of training a skill of "magic", IE applying methodological naturalism to refine a magic methodology, look at the methods developed by the CIA to perform remote viewing. Here is an example site that offers this training today: <https://remoteviewingtraining.com/> For an example of a work that presumes the Perennial Philosophy as a starting point for doing science, and intrinsically accepts magic, see Beyond Physicalism: Toward Reconciliation of Science and Spirituality <https://www.amazon.com/gp/customer-reviews/RZY1A4EL2JOZ4?ref=pf_vv_at_pdctrvw_srp>. This work treats "magic" as able to potentially operate through all 4 of the exceptions to causal closure of the physical I noted. For one that takes a far more restrictive view of magic, and assumes that it will only act within the indeterminism of physics of the first exception, see Swinburne: Mind, Brain, and Free Will <https://www.amazon.com/gp/customer-reviews/R18J8OJA7QPLKX?ref=pf_vv_at_pdctrvw_srp>. 
Eccles also limits his magic to the indeterminate parts of physics: How the SELF Controls Its BRAIN [https://www.amazon.com/How-SELF-Controls-Its-BRAIN/dp/3642492266/ref=sr\_1\_1?crid=XBJGKN0G1Q5X&keywords=eccles+how+the+brain&qid=1651905829&sprefix=eccles+how+the+brain%2Caps%2C214&sr=8-1](https://rads.stackoverflow.com/amzn/click/com/3642492266) For an effort to understand how energy conservation could or could not apply to "magic", see this question and answer: 'The Zero Energy Hypothesis and its consequences for particle creation and dualist interactionism' <https://physics.stackexchange.com/q/494408/181964> As to whether one must accept magic if one accepts a "supernatural" ontology -- that one need not do so is the position of epiphenomenalism. Epiphenomenalism has a challenge to explain its own coherence, as asserting the truth of epiphenomenalism appears to be intrinsically a refutation of its own premise (assertions are physical, and presumably asserting the reality of consciousness as a non-physical item is a consequence of the reality of consciousness, hence is an example of consciousness being causal on the physical). Despite the coherence difficulties, there are epiphenomenalists among currently active philosophers. Both Chalmers and Jackson are "supernaturalists" relative to consciousness, and anti-magic epiphenomenalists relative to the effect of consciousness on the physical.
There is no such thing as supernatural. Everything that exists is natural. Magic comes in two flavours: * Entertainment magic creates an *illusion* of something impossible. * "Real" magic is nothing but advanced technology that only looks impossible to an uneducated observer.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience, that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than confessional Lutheranism, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
It seems to me that magic refers to the intervention of someone in the (accepted) laws of a system in order to attain something that otherwise could not be attained, metaphorically speaking "bending" the laws. On the other hand, supernatural refers to something that is happening that is considered outside of the (accepted) laws, either because of ignorance of the (real) laws or because of someone's magic. So magic is described from the standpoint of doing something, and supernatural from the standpoint of observing it. (It's like energy and information!)
There is no such thing as supernatural. Everything that exists is natural. Magic comes in two flavours: * Entertainment magic creates an *illusion* of something impossible. * "Real" magic is nothing but advanced technology that only looks impossible to an uneducated observer.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience, that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than confessional Lutheranism, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
There are technical philosophical glosses of the concept of *magic* in modern fantasy writing. The problematique starts at the narrative level: "Is the guiding problem of the story a problem posed by magic? And how much magic is involved in the solution?" Then *magic* is formally read back into "the conditions of magical problems and solutions," so perhaps impredicatively, although then the [two](https://www.stephenrdonaldson.com/) [authors](https://faq.brandonsanderson.com/knowledge-base/what-are-sandersons-laws-of-magic/) who have developed this theory the most (to my knowledge, and separately) do go on to structure the use of magic in their stories in a pretty "rigorous" manner. So the stronger of the two embeds the ontology of magic, in his broadest work, into the matter/energy dichotomy, posing the counterfactual question, "If matter and energy are two species under an even more elementary genus, then what if there was a third term under the same genus?" Voila, presto, abracadabra, here you go: this third term, locally designated *investiture*, plays the narrative role of magic. Aleister Crowley defined magic like so: > > Magick is the Science and Art of causing Change to occur in conformity with Will. ... Every intentional act is a Magical act.... Magick is the Science of understanding oneself and one's conditions. It is the Art of applying that understanding in action. > > > And the narrative role of magic, in general, seems to be to give characters a way to cause highly distinct physical effects using pure free will (emotionally and/or intellectually interpreted). In a related historical vein, then, Hume portrayed the naive theory of promises as magical thinking ["at its finest"](https://plato.stanford.edu/entries/promises/#ProProObl): > > ... 
promissory obligations aren't just contingent upon acts of the will, like the obligations we might incur by deliberately damaging someone's property, but (at least it seems on first reflection) they are immediately created by acts of the will. When I promise to do something, it seems that *by so doing* I have created the obligation to do it. This feature makes promissory obligations a special puzzle for naturalistic ethical theories that hope to explain moral obligations without recourse to super-natural entities. The idea that we simply manufacture promissory obligations by speaking them, like an incantation, is decidedly mysterious. As Hume acidly remarked in the *Treatise*: > > > > > > > I shall further observe, that, since every new promise imposes a new obligation of morality on the person who promises, and since this new obligation arises from his will; it is one of the most mysterious and incomprehensible operations that can possibly be imagined, and may even be compared to *transubstantiation* or *holy orders*, where a certain form of words, along with a certain intention, changes entirely the nature of an external object, and even of a human creature. (*Treatise*, 3.2.5–14/15–524; emphasis in the original) > > > > > > > > > If, as with Kant, we try to locate free will "proper" in an eternal realm where God possibly exists, we will be hard-pressed to avoid the appearance of talking about magic, then; and what Hume mocked respecting obligations from promises becomes the manifold of all our responsibilities whatsoever instead, i.e. again the pure will. So for all that, narrative will-theoretic magic tropes might represent their content as embedded in what is otherwise a physical/natural world. What do we say to Clarke's "lemma," that sufficiently advanced technology is indistinguishable from magic? 
If God's nature is Its will (in some way that our mortal nature is not completely simultaneous with our will), then if God does something by magic, this is actually to do it by nature too, as well as supernaturally all at once. A feat worthy of an omnipotent paradox, perhaps.
As a preamble, I consider it to be poor practice to use the term "supernatural" for the non-physical. "Natural" has a very clear meaning in epistemology -- it describes the items/events/forces which are subject to the tools of methodological naturalism -- IE empiricism and reasoning. Super-naturalism, from an epistemological perspective, refers to those things to which one *cannot* apply reasoning and empiricism. Use of "supernatural" for the ontologic category "spiritual", or for *any* non-physical thing -- which are different but common uses -- leads almost immediately into equivocation errors relative to the epistemological meanings of natural and supernatural. Note, there are many ontologically supernatural categories which are not epistemologically supernatural. All of Thomist philosophy and metaphysics, for instance, is purely rationalistic -- IE methodologically natural. And vitalism was a specialty within the science of biology at the start of the 20th century. And mathematics, logic, and ethics do not appear to be physical fields, yet they are very much subject to methodological naturalism. With that preamble, I will try to answer your question. There are two questions at issue. The first has to do with the ontology of our universe. Are there multiple fundamentally different types of things in it, or only one? And further, you are assuming that the existence of matter is beyond question, so the one is assumed to be material. What you are describing as "supernaturalism" would encompass a diverse set of ontologies, ranging from Platonic Idealism, Tegmark/Pythagorean math as the source of everything, through the Mind-centric idealist-leaning Perennial Philosophy, any concept of interactive spiritual dualism, Popper's strongly emergent consciousness, and probably Russellian Monism as well. Your "supernatural" just appears to be the postulate that there are non-physical things or planes in our universe. 
All of the above ontologic views assume that non-material things can be causal on the material -- IE that physics is not causally closed. This interaction, from outside physics, is generally described as "magic" by physicalists. "Magic" is the method by which the ontic supernatural interacts with the physical. Physicalism is committed to the presumption that physics is causally closed, hence magic is impossible. Physics is *not* causally closed, and cannot be, for multiple reasons. 1. Physics is underdetermined, hence an outside influence can select an outcome within that suite of options, entirely consistently with physics. 2. Physics, so long as it is an actual science, is by definition incomplete, as well as uncertain, so cannot exclude any phenomenon or outcome (both the ontic "supernatural" and "magic" are consistent with physics, so long as physics is not complete). 3. No space within our universe, nor the universe itself, can ever be isolated from outside influence. Both entanglement and cosmology assume that no physical system can ever be completely isolated from outside influences. 4. There are no absolute "laws" in science, only occasionally violated regularities, so causal closure, applied as a "law", is contrary to science. Some principles of "magic" -- by which Platonic Forms influence their shadows in this world, or by which the Will to Power creates life events, or how Consciousness and Neurons are both reflections of the Russellian monad -- are postulated or assumed in each of these differing ontologies. But most of the details are currently not filled in. Provided these "supernatural" ontologies are not epistemologically supernatural, then the application of methodological naturalism should allow the details to gradually be filled in as to how magic works in that ontology. Or else identify sufficient problems and/or contradictions that could lead to the abandonment of that ontology by most of its holders. 
Note that philosophy over the millennia has been subject to methodological naturalism, and subject areas and sciences have emerged from it to become their own science or academic specialty, as progress is made in characterizing how to evaluate that subject. "Magic"-based "supernatural" ontologies are generally committed to this as a future path for their assumed ontology. For an example of training a skill of "magic", IE applying methodological naturalism to refine a magic methodology, look at the methods developed by the CIA to perform remote viewing. Here is an example site that offers this training today: <https://remoteviewingtraining.com/> For an example of a work that presumes the Perennial Philosophy as a starting point for doing science, and intrinsically accepts magic, see Beyond Physicalism: Toward Reconciliation of Science and Spirituality <https://www.amazon.com/gp/customer-reviews/RZY1A4EL2JOZ4?ref=pf_vv_at_pdctrvw_srp>. This work treats "magic" as able to potentially operate through all 4 of the exceptions to causal closure of the physical I noted. For one that takes a far more restrictive view of magic, and assumes that it will only act within the indeterminism of physics of the first exception, see Swinburne: Mind, Brain, and Free Will <https://www.amazon.com/gp/customer-reviews/R18J8OJA7QPLKX?ref=pf_vv_at_pdctrvw_srp>. 
Eccles also limits his magic to the indeterminate parts of physics: How the SELF Controls Its BRAIN [https://www.amazon.com/How-SELF-Controls-Its-BRAIN/dp/3642492266/ref=sr\_1\_1?crid=XBJGKN0G1Q5X&keywords=eccles+how+the+brain&qid=1651905829&sprefix=eccles+how+the+brain%2Caps%2C214&sr=8-1](https://rads.stackoverflow.com/amzn/click/com/3642492266) For an effort to understand how energy conservation could or could not apply to "magic", see this question and answer: 'The Zero Energy Hypothesis and its consequences for particle creation and dualist interactionism' <https://physics.stackexchange.com/q/494408/181964> As to whether one must accept magic if one accepts a "supernatural" ontology -- that one need not do so is the position of epiphenomenalism. Epiphenomenalism has a challenge to explain its own coherence, as asserting the truth of epiphenomenalism appears to be intrinsically a refutation of its own premise (assertions are physical, and presumably asserting the reality of consciousness as a non-physical item is a consequence of the reality of consciousness, hence is an example of consciousness being causal on the physical). Despite the coherence difficulties, there are epiphenomenalists among currently active philosophers. Both Chalmers and Jackson are "supernaturalists" relative to consciousness, and anti-magic epiphenomenalists relative to the effect of consciousness on the physical.
91,001
I'm an ardent physicalist with a belief in the importance of the partial reduction of theories to physicalism. I have on occasion had discussions with philosophers here who challenge the existence of the natural/supernatural dichotomy, or in some way endorse supernaturalism. My question is directed at those who know something about theology and supernaturalism. Thus, despite ontologically rejecting supernaturalism and magic myself, I'm curious about the metaphysical presuppositions of others that entail belief. EDIT <<< (In response to comments: Naturalism for me is everything conveyed by an atheistic conception of a pluralism of sciences with partial reduction of theory, with pragmatic criteria for the distinction of pseudoscience, that takes a middle ground between realism and instrumentalism. Supernaturalism is therefore any ontological category outside of this.) <<< **Simply put, for philosophical positions (even of non-Western schools) that accept supernaturalism, does acceptance metaphysically necessitate 'magic' as a category?** My sense is that modern theologians accept 'miracles' but reject 'magic', based on my own discussions with those who profess the [Book of Concord](https://en.wikipedia.org/wiki/Book_of_Concord) as a faithful characterization of Christian doctrine. But theology and supernaturalism are much broader than confessional Lutheranism, so any relevant perspective, including historical philosophy such as the text of [Mauro Allegranza's link](https://en.wikipedia.org/wiki/Renaissance_magic), is of interest.
2022/05/03
[ "https://philosophy.stackexchange.com/questions/91001", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/40730/" ]
It seems to me that magic refers to the intervention of someone in the (accepted) laws of a system in order to attain something that otherwise could not be attained, metaphorically speaking "bending" the laws. On the other hand, supernatural refers to something that is happening that is considered outside of the (accepted) laws, either because of ignorance of the (real) laws or because of someone's magic. So magic is described from the standpoint of doing something, and supernatural from the standpoint of observing it. (It's like energy and information!)
As a preamble, I consider it to be poor practice to use the term "supernatural" for the non-physical. "Natural" has a very clear meaning in epistemology -- it describes the items/events/forces which are subject to the tools of methodological naturalism -- IE empiricism and reasoning. Super-naturalism, from an epistemological perspective, are those things which one *cannot* apply reasoning and empiricism to. Use of "supernatural" for the ontologic category "spiritual" or for *any* non-physical thing, which are different but common uses, leads almost immediately into equivocation errors relative to epistemological meanings of natural and supernatural. Note, there are many ontologically supernatural categories which are not epistemologically supernatural. All of Thomist philosophy and metaphysics, for instance, is purely rationalistic -- IE methodologically natural. And vitalism was a specialty within the science of biology at the start of the 20th century. And mathematics, logic, and ethics, do not appear to be physical fields, yet they are very much subject to methodological naturalism. With that preamble, I will try to answer your question. There are two questions at issue. The first has to do with the ontology of our universe. Are there multiple fundamentally different types of things in it, or only one? And further, you are assuming that the existence of matter is beyond question, so the one is assumed to be material. What you are describing as "supernaturalism" would encompass a diverse set of ontologies, ranging from Platonic Idealism, Tegmark/Pythagorean math as source of everything, thru the Mind-centric idealist-leaning Perennial Philosophy, any concept of interactive spiritual dualism, Popper's strongly emergent consciousness, and probably Russellian Monism as well. Your "supernatural" just appears to be the postulate that there are non physical things or planes in our universe. 
All of the above ontologic views assume that non-material things can be causal on the material -- IE that physics is not causally closed. This interaction, from outside physics, is generally described as "magic" by physicalists. "Magic" is the method by which the ontic supernatural interacts with the physical. Physicalism is committed to the presumption that physics is causally closed, hence magic is impossible. Physics is *not* causally closed, and cannot be for multiple reasons. 1. Physics is underdetermined, hence an outside influence can influence an outcome within that suite of options, entirely consistently with physics 2. Physics, so long as it is an actual science, is by definition incomplete, as well as uncertain, so cannot exclude any phenomenon or outcomes (both ontic "supernatural" and "magic" are consistent with physics, so long as physics is not complete). 3. No space within our universe, nor the universe itself, can ever be isolated from outside influence. Both entanglement and cosmology assume that no physical system can ever be completely isolated from outside influences. 4. There are no absolute "laws" in science, only occasionally violated regularities, so causal closure, applied as a "law" is contrary to science. Some principles of "magic", by which Platonic Forms influence their shadows in this world, or by which the Will to Power creates life events, or how Consciousness and Neurons both are reflections of the Russelian monod, are postulated or assumed in each of these differing ontologies. But most of the details are currently not filled it. Provided these "supernatural" ontologies are not epistemologically supernatural, than the application of methodological naturalism should allow the details to gradually get filled in as to how magic works in that ontology. Or else identify sufficient problems and/or contradictions that could lead to the abandonment of that ontology by most of its holders. 
Note philosophy over the millenia has been subject to methodological naturalism, and subject areas and sciences have emerged from it to become their own science or academic specialty, as progress is made in characterizing how to evaluate that subject. "Magic" based "supernatural" ontologies are generally committed to this as a future path for their assumed ontology. For an example of training a skill of "magic", IE applying methodological naturalism to refine a magic methodology, look a the methods developed by the CIA to perform remote viewing. Here is an example site that offers this training today: <https://remoteviewingtraining.com/> For an example of a work that presumes the Perennial Philosophy as a starting point for doing science, and intrinsically accepts magic see Beyond Physicalism: Toward Reconciliation of Science and Spirituality <https://www.amazon.com/gp/customer-reviews/RZY1A4EL2JOZ4?ref=pf_vv_at_pdctrvw_srp>. This work treats "magic" as able to potentially operate through all 4 of the exceptions to causal closure of the physical I noted. For one that takes a far more restrictive view of magic, and assumes that it will only act within the indeterminism of physics of the first exception, see Swinburne: Mind, Brain, and Free Will <https://www.amazon.com/gp/customer-reviews/R18J8OJA7QPLKX?ref=pf_vv_at_pdctrvw_srp>. 
Eccles also limits his magic to the indeterminate parts of physics: How the SELF Controls Its BRAIN [https://www.amazon.com/How-SELF-Controls-Its-BRAIN/dp/3642492266/ref=sr\_1\_1?crid=XBJGKN0G1Q5X&keywords=eccles+how+the+brain&qid=1651905829&sprefix=eccles+how+the+brain%2Caps%2C214&sr=8-1](https://rads.stackoverflow.com/amzn/click/com/3642492266) For an effort to understand how energy conservation could or could not apply to "magic", see this question and answer: 'The Zero Energy Hypothesis and its consequences for particle creation and dualist interactionism' <https://physics.stackexchange.com/q/494408/181964> As to whether one must accept magic if one accepts a "supernatural" ontology -- that one need not do so is the position of epiphenomenalism. Epiphenomenalism has a challenge to explain its own coherence, as asserting the truth of epiphenomenalism appears to be intrinsically a refutation of its own premise (assertions are physical, and presumably asserting the reality of consciousness as a non-physical item is a consequence of the reality of consciousness, hence is an example of consciousness being causal on the physical). Despite the coherence difficulties, there are epiphenomenalists among current active philosophers. Both Chalmers and Jackson are "supernaturalists" relative to consciousness, and anti-magic epiphenomenalists relative to the effect of consciousness on the physical.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
You can [turn off Aero](http://www.howtogeek.com/howto/windows-vista/disable-aero-on-windows-vista/) by selecting one of the basic themes. This will reduce the load on your graphics card as Aero uses DirectX, which should improve performance while in Windows. It won't affect gaming performance, as gaming uses full screen mode, which bypasses Windows. To reduce the disk space Windows takes up, look at removing some components. On XP this was "Control Panel > Add/Remove Windows Components" - I don't have Windows 7 on this machine to double check if it's in the same place. As to what you could remove, I couldn't say; you'll know what you do and don't use. Also, I've just checked in XP: there's not a lot you can remove, and items such as the games won't take up that much space.
One idea would be to create a slipstream install CD for Windows 7. This involves removing all the stuff you don't want installed and adding additional files (like drivers). Simply Google "slipstream Windows 7 installation" and there are many sites that can help you with this.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
You can [turn off Aero](http://www.howtogeek.com/howto/windows-vista/disable-aero-on-windows-vista/) by selecting one of the basic themes. This will reduce the load on your graphics card as Aero uses DirectX, which should improve performance while in Windows. It won't affect gaming performance, as gaming uses full screen mode, which bypasses Windows. To reduce the disk space Windows takes up, look at removing some components. On XP this was "Control Panel > Add/Remove Windows Components" - I don't have Windows 7 on this machine to double check if it's in the same place. As to what you could remove, I couldn't say; you'll know what you do and don't use. Also, I've just checked in XP: there's not a lot you can remove, and items such as the games won't take up that much space.
1. Use [Disk Cleanup](http://en.wikipedia.org/wiki/Disk_Cleanup) frequently to clean out temporary files. 2. See where disk space is being spent with [WinDirStat](http://windirstat.info/) and look if you can clean something up. 3. Remove programs and Windows components you don't need from the Control Panel. 4. [Resource Monitor](http://edge.technet.com/Media/Windows-7-Screencast-Resource-Monitor-resmon/) is great for checking out what is happening when your PC is idle; see it as an improved version of Task Manager, meant for power users instead of system admins. 5. If you want to know more about a folder, you can Google it; back it up first if you aren't sure.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
4GB XP? 16GB Win7? That seems an awful lot. My vanilla win7 install on VirtualBox was around 5GB, and you can certainly get XP down to 700MB or so. Use a visualisation tool such as [WinDirStat](http://windirstat.info/) to find out what's taking up all that disc. Stuff to target, if you don't need it and you really know what you're doing: * unused Windows components (Control Panel -> Add/Remove -> Windows components, or in Vista+, All Control Panel -> Programs and Features -> Turn Windows features on or off) * System Restore * \Windows\System32\dllcache on XP * \Windows\System32\DriverStore\FileRepository on Vista+, if you're sure you're not going to need any more drivers out of it * \Windows\SoftwareDistribution\Download * %TEMP%, the IE cache, the trash * patch rollbacks (hidden $ folders in \Windows on XP) * example shared media and \Windows\Web * disable virtual memory to get rid of the pagefile (assuming you have enough memory to run swapless) * disable hibernation to get rid of hiberfil * delete the large Chinese/Japanese/Korean fonts, if you don't use them * all vendor crapware (if an OEM install) must be destroyed as a matter of course For some of these on Win7 you have to take ownership of the files back from SYSTEM before you can delete them.
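Before deleting anything from a list like the one above, it helps to measure what is actually eating the disk. As a rough, cross-platform stand-in for what WinDirStat visualises, here is a minimal Python sketch (the starting path is just an example; point it anywhere, e.g. `r"C:\Windows"`):

```python
import os

def dir_sizes(root):
    """Total bytes per immediate subdirectory of `root` -- a rough,
    command-line stand-in for what WinDirStat visualises."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _, filenames in os.walk(entry.path):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass  # unreadable file (permissions, file in use, ...)
            sizes[entry.path] = total
    return sizes

if __name__ == "__main__":
    # Largest subdirectories first.
    for path, size in sorted(dir_sizes(".").items(), key=lambda kv: -kv[1]):
        print(f"{size / 2**20:8.1f} MiB  {path}")
```

This only reports sizes; deciding what is actually safe to delete still requires the caveats above.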
You can [turn off Aero](http://www.howtogeek.com/howto/windows-vista/disable-aero-on-windows-vista/) by selecting one of the basic themes. This will reduce the load on your graphics card as Aero uses DirectX, which should improve performance while in Windows. It won't affect gaming performance as that uses full screen mode which bypasses Windows. To reduce the disk space Windows takes up look at removing some components. On XP this was "Control Panel > Add/Remove Windows Components" - I don't have Windows 7 on this machine to double check if it's in the same place. As to what you could remove, I couldn't say. You'll know what you do and don't use. Also, I've just checked in XP and there's not a lot you can remove and those items such as the games won't take up that much space.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
You can [turn off Aero](http://www.howtogeek.com/howto/windows-vista/disable-aero-on-windows-vista/) by selecting one of the basic themes. This will reduce the load on your graphics card as Aero uses DirectX, which should improve performance while in Windows. It won't affect gaming performance, as gaming uses full screen mode, which bypasses Windows. To reduce the disk space Windows takes up, look at removing some components. On XP this was "Control Panel > Add/Remove Windows Components" - I don't have Windows 7 on this machine to double check if it's in the same place. As to what you could remove, I couldn't say; you'll know what you do and don't use. Also, I've just checked in XP: there's not a lot you can remove, and items such as the games won't take up that much space.
Windows Vista/7 doesn't provide an explicit way to disable hibernation. (I don't know why.) You can do this with the following steps: * Open a command prompt with administrator privileges. * Type this: `powercfg -h off` If the c:\hiberfil.sys file disappears, it has worked.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
4GB XP? 16GB Win7? That seems an awful lot. My vanilla win7 install on VirtualBox was around 5GB, and you can certainly get XP down to 700MB or so. Use a visualisation tool such as [WinDirStat](http://windirstat.info/) to find out what's taking up all that disc. Stuff to target, if you don't need it and you really know what you're doing: * unused Windows components (Control Panel -> Add/Remove -> Windows components, or in Vista+, All Control Panel -> Programs and Features -> Turn Windows features on or off) * System Restore * \Windows\System32\dllcache on XP * \Windows\System32\DriverStore\FileRepository on Vista+, if you're sure you're not going to need any more drivers out of it * \Windows\SoftwareDistribution\Download * %TEMP%, the IE cache, the trash * patch rollbacks (hidden $ folders in \Windows on XP) * example shared media and \Windows\Web * disable virtual memory to get rid of the pagefile (assuming you have enough memory to run swapless) * disable hibernation to get rid of hiberfil * delete the large Chinese/Japanese/Korean fonts, if you don't use them * all vendor crapware (if an OEM install) must be destroyed as a matter of course For some of these on Win7 you have to take ownership of the files back from SYSTEM before you can delete them.
One idea would be to create a slipstream install CD for Windows 7. This involves removing all the stuff you don't want installed and adding additional files (like drivers). Simply Google "slipstream Windows 7 installation" and there are many sites that can help you with this.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
Windows Vista/7 doesn't provide an explicit way to disable hibernation. (I don't know why.) You can do this with the following steps: * Open a command prompt with administrator privileges. * Type this: `powercfg -h off` If the c:\hiberfil.sys file disappears, it has worked.
One idea would be to create a slipstream install CD for Windows 7. This involves removing all the stuff you don't want installed and adding additional files (like drivers). Simply Google "slipstream Windows 7 installation" and there are many sites that can help you with this.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
4GB XP? 16GB Win7? That seems an awful lot. My vanilla win7 install on VirtualBox was around 5GB, and you can certainly get XP down to 700MB or so. Use a visualisation tool such as [WinDirStat](http://windirstat.info/) to find out what's taking up all that disc. Stuff to target, if you don't need it and you really know what you're doing: * unused Windows components (Control Panel -> Add/Remove -> Windows components, or in Vista+, All Control Panel -> Programs and Features -> Turn Windows features on or off) * System Restore * \Windows\System32\dllcache on XP * \Windows\System32\DriverStore\FileRepository on Vista+, if you're sure you're not going to need any more drivers out of it * \Windows\SoftwareDistribution\Download * %TEMP%, the IE cache, the trash * patch rollbacks (hidden $ folders in \Windows on XP) * example shared media and \Windows\Web * disable virtual memory to get rid of the pagefile (assuming you have enough memory to run swapless) * disable hibernation to get rid of hiberfil * delete the large Chinese/Japanese/Korean fonts, if you don't use them * all vendor crapware (if an OEM install) must be destroyed as a matter of course For some of these on Win7 you have to take ownership of the files back from SYSTEM before you can delete them.
1. Use [Disk Cleanup](http://en.wikipedia.org/wiki/Disk_Cleanup) frequently to clean out temporary files. 2. See where disk space is being spent with [WinDirStat](http://windirstat.info/) and look if you can clean something up. 3. Remove programs and Windows components you don't need from the Control Panel. 4. [Resource Monitor](http://edge.technet.com/Media/Windows-7-Screencast-Resource-Monitor-resmon/) is great for checking out what is happening when your PC is idle; see it as an improved version of Task Manager, meant for power users instead of system admins. 5. If you want to know more about a folder, you can Google it; back it up first if you aren't sure.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
Windows Vista/7 doesn't provide an explicit way to disable hibernation. (I don't know why.) You can do this with the following steps: * Open a command prompt with administrator privileges. * Type this: `powercfg -h off` If the c:\hiberfil.sys file disappears, it has worked.
1. Use [Disk Cleanup](http://en.wikipedia.org/wiki/Disk_Cleanup) frequently to clean out temporary files. 2. See where disk space is being spent with [WinDirStat](http://windirstat.info/) and look if you can clean something up. 3. Remove programs and Windows components you don't need from the Control Panel. 4. [Resource Monitor](http://edge.technet.com/Media/Windows-7-Screencast-Resource-Monitor-resmon/) is great for checking out what is happening when your PC is idle; see it as an improved version of Task Manager, meant for power users instead of system admins. 5. If you want to know more about a folder, you can Google it; back it up first if you aren't sure.
143,675
My Windows 7 setup uses around 16GB while Windows XP only needs around 4GB hard disk space. Seems weird. I use Windows only for gaming so I don't need a lot of stuff they have to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall and how? I'd also like to reduce the CPU and memory usage as much as possible (as long as it doesn't hurt game performance) - turning off all that fancy stuff and so on. What can I turn off and how?
2010/05/21
[ "https://superuser.com/questions/143675", "https://superuser.com", "https://superuser.com/users/25345/" ]
4GB XP? 16GB Win7? That seems an awful lot. My vanilla win7 install on VirtualBox was around 5GB, and you can certainly get XP down to 700MB or so. Use a visualisation tool such as [WinDirStat](http://windirstat.info/) to find out what's taking up all that disc. Stuff to target, if you don't need it and you really know what you're doing: * unused Windows components (Control Panel -> Add/Remove -> Windows components, or in Vista+, All Control Panel -> Programs and Features -> Turn Windows features on or off) * System Restore * \Windows\System32\dllcache on XP * \Windows\System32\DriverStore\FileRepository on Vista+, if you're sure you're not going to need any more drivers out of it * \Windows\SoftwareDistribution\Download * %TEMP%, the IE cache, the trash * patch rollbacks (hidden $ folders in \Windows on XP) * example shared media and \Windows\Web * disable virtual memory to get rid of the pagefile (assuming you have enough memory to run swapless) * disable hibernation to get rid of hiberfil * delete the large Chinese/Japanese/Korean fonts, if you don't use them * all vendor crapware (if an OEM install) must be destroyed as a matter of course For some of these on Win7 you have to take ownership of the files back from SYSTEM before you can delete them.
Windows Vista/7 doesn't provide an explicit way to disable hibernation. (I don't know why.) You can do this with the following steps: * Open a command prompt with administrator privileges. * Type this: `powercfg -h off` If the c:\hiberfil.sys file disappears, it has worked.
52,468
Recently I watched a documentary film from "Star Media" (a Russian film) called "The First World War". Up to this moment, I thought that the February Revolution happened because there was a huge economic problem and people were living in very poor conditions. Now I understand that the world war contributed to this problem (but I hadn't thought about it before). According to the film (and I believe it's true), the Bolsheviks signed a peace treaty with very bad terms for the Russian Empire (it had to pay reparations and lost some territories). So now I'm confused: how did the Bolshevik leader become so popular in Russia and come to have so many monuments?
2019/05/05
[ "https://history.stackexchange.com/questions/52468", "https://history.stackexchange.com", "https://history.stackexchange.com/users/37707/" ]
This contains several questions. First, why was Lenin (more exactly, his party) popular in 1918-1922? The short answer is "because they distributed the landowners' land among the peasants" (only to seize it back after 15 years of dictatorship). They also stopped the unpopular war (by surrender) and declared the right of nations to self-determination (only to conquer most of them back within 2 years). Second, why was he popular after his death, and until now? Because his successors, who established a dictatorship in the country with total control and a huge propaganda apparatus, created a cult of Lenin. They, the ruling party, not the people, erected all those monuments. For several generations the population was intensively brainwashed (and the part of it which resisted brainwashing was physically exterminated). These rulers maintained the cult of Lenin, which exists (to a smaller extent) even now. In independent Ukraine, people destroyed all monuments to Lenin in a short period after 2014. But another revolution (that of 2014) was necessary to make this possible. You may compare this with the cult of Mao. His rule led to the death of more people than Stalin and Hitler combined, and completely ruined the economy. Still his cult exists, because the party created by Mao still rules in China.
End of war promise and skillful organization -------------------------------------------- First, we must address the issues of the Russian Empire at the beginning of the 20th century. Social and economic changes had long ago outgrown the political institutions of that country. You had masses of recently freed serfs, but with little land ownership. You had a growing industrial working class, also living in poverty and receptive to various socialist ideas. You had a rising middle class of (university) educated people, but with little political power. You had various nationalities with little loyalty to the Empire, dreaming of their own ethnic states (Poles, Finns, Jews, even Ukrainians and various Central Asian peoples). On top of that you had an autocratic Emperor surrounded by his entourage, where the real power in the country lay. It was clear that such a situation would lead to unrest at the first opportunity. And this happened in [1905](https://en.wikipedia.org/wiki/1905_Russian_Revolution), when a bad economic situation, defeat in the [Russo-Japanese war](https://en.wikipedia.org/wiki/Russo-Japanese_War) and, above all, the events surrounding [Bloody Sunday](https://en.wikipedia.org/wiki/Bloody_Sunday_(1905)) undermined faith in the empire, and led to strikes, demonstrations and even open rebellion in some military units (especially the navy). Anyway, as a revolution, the events of 1905 were a failure, but they did force the government to enact the [Russian Constitution of 1906](https://en.wikipedia.org/wiki/Russian_Constitution_of_1906). The Constitution was a work of compromise: the Emperor retained a lot of power, but this was limited by the newly established Duma (parliament). Some political liberties were given to citizens and the press. However, little was done to improve the economic situation, except perhaps insurance for factory workers. Peasants got practically nothing. The more moderate and affluent among the "revolutionaries" were satisfied and ready to move the political battle from the streets to the legislature and institutions. 
The more radical (socialists and communists) were not, but they did not have the strength at that time for a complete revolution. Instead, the radical left groups that would become the [Bolsheviks](https://en.wikipedia.org/wiki/Bolsheviks) a few years later started organizing themselves into a firm, underground structure with a centralized hierarchy and harsh, almost military, discipline. They were preparing themselves for a violent takeover of power at an opportune moment. That moment came when WW1 started. [Russia did not fare well](https://alphahistory.com/russianrevolution/world-war-i/) in that war: huge losses, relatively incompetent leadership that used soldiers like pawns but still suffered defeats, general underequipment, plus huge economic problems (i.e. hunger) on the home front due to the lack of male workers (who were conscripted). It must be noted that the average ethnically Russian soldier did not see this war as "patriotic" - Russia proper was not invaded; fighting was mostly limited to border regions of the Empire which were inhabited by non-Russian people that did not want to be in Russia anyway. The Germans themselves did not show much inclination to, let's say, capture Moscow or St Petersburg (Petrograd). All of that fit well into the Bolsheviks' propaganda that this was not their (the soldiers') war. Of course, the fact that [Lenin collaborated with the Germans](https://spartacus-educational.com/Lenin_Sealed_Train.htm), and that they actually paid for and organized his return to Russia to undermine the empire, was not known to the public at this time. The idea that we should "stop the war" even with concessions to the Germans (which later became the [Treaty of Brest-Litovsk](https://en.wikipedia.org/wiki/Treaty_of_Brest-Litovsk)) was not outlandish to the average Russian soldier at that time. This became increasingly important when the Kerensky government, after the [February Revolution](https://en.wikipedia.org/wiki/February_Revolution), continued Russian participation in the war. 
Luckily for the Bolsheviks, their cooperation with the Germans was swept under the rug by the German defeat in November of 1918. They didn't have to pay the reparations agreed at Brest-Litovsk, and control of territories was determined by force of arms, as everywhere else in the former Russian Empire. As for Lenin himself, he was also "lucky" to die relatively soon after the success of the revolution (in 1924). The possibility that his death was caused by the failed assassination attempt by [Fanny Kaplan](https://en.wikipedia.org/wiki/Fanny_Kaplan) increased his aura of martyrdom. Since Fanny Kaplan was Jewish, in later years Stalinist propaganda portrayed her as a foreign agent or Trotskyist, although she was likely closer to the [Socialist Revolutionary Party](https://en.wikipedia.org/wiki/Socialist_Revolutionary_Party). In any case, Lenin could not be blamed for the later "excesses" of Socialism, and as the "founding father" of the USSR he always had a prominent role in iconography. He remained a romanticized figure, always ["so young"](https://www.youtube.com/watch?v=bnz-g8jt0zU), as his time was identified with early revolutionary hope and zeal. Dissatisfied people often discussed "what if" scenarios, i.e. what would have happened had he stayed alive for a few more years. In reality, we now know that the [Red Terror](https://en.wikipedia.org/wiki/Red_Terror) actually started while he was still alive, and under his direct command. It is unlikely he would have been more merciful than Stalin. But, as things went as they went, he remained a relatively "clean" figure for the communists to build a cult of personality around, one that remains to this day.
476,176
I want to ask: what are the little numbers written on top of/over another number? For example, 75 in² and 125 ft³. What are they called and what do they mean? I couldn't find anything online since I don't know what they are called.
2018/12/08
[ "https://english.stackexchange.com/questions/476176", "https://english.stackexchange.com", "https://english.stackexchange.com/users/318987/" ]
In terms of formatting, numbers and letters that appear in the top half of a line are in *superscript*. Similarly, numbers in the bottom half of a line would be called *subscript*. "Super" and "sub" describe their position over or under the main text. In your examples, these superscript numbers stand for a mathematical exponent or power as applied to a unit of measurement. If I were reading it out loud, I might say "seventy-five inches squared," or (more clearly) "seventy-five square inches." It means that I am talking about a unit of distance extending in two dimensions. It's a measurement of surface area, just like houses can be measured in square feet (ft²) and land can be measured in square miles (mi²). The cubic version (in³) would refer to a measurement across three dimensions, or a measurement of volume. Other measurements can also be squared or cubed, especially when doing calculations in STEM fields, but square inches/feet/miles are the most common examples of this in the US.
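To make the exponent concrete, here is a small Python check of the unit arithmetic: since 1 ft = 12 in, squaring or cubing the unit also squares or cubes the conversion factor.

```python
# 1 ft = 12 in, so 1 ft^2 = 12**2 = 144 in^2 and 1 ft^3 = 12**3 = 1728 in^3.

def square_feet_to_square_inches(area_ft2):
    return area_ft2 * 12 ** 2

def cubic_feet_to_cubic_inches(volume_ft3):
    return volume_ft3 * 12 ** 3

print(square_feet_to_square_inches(1))    # 144
print(cubic_feet_to_cubic_inches(125))    # 216000 (the 125 ft^3 from the question)
```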
The general term for a number (or other text) written like this is **[superscript](https://en.wikipedia.org/wiki/Subscript_and_superscript)**. The term in² refers to [square inches](https://en.wikipedia.org/wiki/Square_inch), which is the amount of area in a square with sides an inch in length. Likewise, if it were in³, it would be referring to [cubic inches](https://en.wikipedia.org/wiki/Cubic_inch), which is the amount of volume in a cube with side lengths of an inch.
476,176
I want to ask: what are the little numbers written on top of/over another number? For example, 75 in² and 125 ft³. What are they called and what do they mean? I couldn't find anything online since I don't know what they are called.
2018/12/08
[ "https://english.stackexchange.com/questions/476176", "https://english.stackexchange.com", "https://english.stackexchange.com/users/318987/" ]
The general term for a number (or other text) written like this is **[superscript](https://en.wikipedia.org/wiki/Subscript_and_superscript)**. The term in² refers to [square inches](https://en.wikipedia.org/wiki/Square_inch), which is the amount of area in a square with sides an inch in length. Likewise, if it were in³, it would be referring to [cubic inches](https://en.wikipedia.org/wiki/Cubic_inch), which is the amount of volume in a cube with side lengths of an inch.
**Exponent** > > 3.6 Derived Units— Derived units are formed by combining base units according to the algebraic relations linking the corresponding quantities. Symbols for derived units are obtained by means of mathematical signs for multiplication, division, and the use of **exponents**. For example, the SI unit for speed is the meter per second (m/s or m·s⁻¹) and that for density is kilogram per cubic meter (kg/m³ or kg·m⁻³). Most derived units have only their composite names, such as meter per second for speed or velocity. Others have special names, such as newton (N), joule (J), watt (W), and pascal (Pa), given to SI units of force, energy, power, and pressure (or stress), respectively. > > SAE Technical Standards Board: Rules for SAE Use of SI (Metric) Units, Rev May 1999. <https://www.sae.org/standardsdev/tsb/tsb003.pdf>
476,176
I want to ask: what are the little numbers written on top of/over another number? For example, 75 in² and 125 ft³. What are they called and what do they mean? I couldn't find anything online since I don't know what they are called.
2018/12/08
[ "https://english.stackexchange.com/questions/476176", "https://english.stackexchange.com", "https://english.stackexchange.com/users/318987/" ]
In terms of formatting, numbers and letters that appear in the top half of a line are in *superscript*. Similarly, numbers in the bottom half of a line would be called *subscript*. "Super" and "sub" describe their position over or under the main text. In your examples, these superscript numbers stand for a mathematical exponent or power as applied to a unit of measurement. If I were reading it out loud, I might say "seventy-five inches squared," or (more clearly) "seventy-five square inches." It means that I am talking about a unit of distance extending in two dimensions. It's a measurement of surface area, just like houses can be measured in square feet (ft²) and land can be measured in square miles (mi²). The cubic version (in³) would refer to a measurement across three dimensions, or a measurement of volume. Other measurements can also be squared or cubed, especially when doing calculations in STEM fields, but square inches/feet/miles are the most common examples of this in the US.
**Exponent** > > 3.6 Derived Units— Derived units are formed by combining base units according to the algebraic relations linking the corresponding quantities. Symbols for derived units are obtained by means of mathematical signs for multiplication, division, and the use of **exponents**. For example, the SI unit for speed is the meter per second (m/s or m·s⁻¹) and that for density is kilogram per cubic meter (kg/m³ or kg·m⁻³). Most derived units have only their composite names, such as meter per second for speed or velocity. Others have special names, such as newton (N), joule (J), watt (W), and pascal (Pa), given to SI units of force, energy, power, and pressure (or stress), respectively. > > SAE Technical Standards Board: Rules for SAE Use of SI (Metric) Units, Rev May 1999. <https://www.sae.org/standardsdev/tsb/tsb003.pdf>
38,237
For example, why do we always assume that the data or the signal error follows a Gaussian distribution? I have asked this question on Stack Overflow: <https://stackoverflow.com/questions/12616406/anyone-can-tell-me-why-we-always-use-the-gaussian-distribution-in-machine-learni>
2012/09/29
[ "https://stats.stackexchange.com/questions/38237", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14477/" ]
I looked at the answers on SO. I don't think they are satisfactory. People often argue for the normal distribution because of the central limit theorem. That may be okay in large samples when the problem involves averages, but machine learning problems can be more complex, and sample sizes are not always large enough for normal approximations to apply. Some argue for mathematical convenience. That is no justification, especially when computers can easily handle added complexity and computer-intensive resampling approaches. But I think the question should be challenged: who says the Gaussian distribution is "always" used, or even just predominantly used, in machine learning? Taleb claimed that statistics is dominated by the Gaussian distribution, especially when applied to finance. He was very wrong about that! In machine learning, aren't kernel density classification approaches, tree classifiers and other nonparametric methods sometimes used? Aren't nearest neighbor methods used for clustering and classification? I think they are, and I know statisticians use these methods very frequently.
I'm currently studying machine learning and the same question popped into my mind. I think the reason is that in every machine learning problem we assume abundant observational data is available, and as the amount of data grows, sample averages become normally distributed around the mean; that is the central limit theorem, which is what the normal (Gaussian) distribution describes. It is not necessary that a Gaussian distribution will always be a perfect fit, though. Take the case where your data is always positive: if you try to fit a Gaussian distribution to it, it will still give some weight to negative values of x (negligible, but still some weight), so in such a case a distribution with positive support, such as the Zeta distribution, is better suited.
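The central-limit-theorem intuition in this answer can be checked with a quick simulation; the sample sizes below are arbitrary illustrative choices.

```python
import random
import statistics

random.seed(0)

# Draw from a clearly non-Gaussian, positive-only distribution
# (exponential with mean 1), then look at averages of many draws:
# the sample means cluster symmetrically around the true mean,
# as the central limit theorem predicts.
true_mean = 1.0
means = [
    statistics.mean(random.expovariate(1.0) for _ in range(200))
    for _ in range(2000)
]

grand_mean = statistics.mean(means)
spread = statistics.stdev(means)

# The mean of the sample means sits near the true mean, and the
# spread shrinks roughly like sigma/sqrt(n): here n = 200, so ~0.07.
print(round(grand_mean, 2), round(spread, 2))
```

Note that each individual draw is still exponential, not normal; it is only the averages that become approximately Gaussian.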
I had the same question: "what is the advantage of doing a Gaussian transformation on predictors or target?" In fact, the caret package has a pre-processing step that enables this transformation. I tried reasoning this out and am summarizing my understanding: 1. Many data distributions in nature are approximately normal (a few examples: age, height, weight, etc.), so it is a reasonable default approximation when we are not aware of the underlying distribution pattern. 2. Most often the goal in ML/AI is to strive to make the data linearly separable, even if it means projecting the data into a higher-dimensional space so as to find a fitting "hyperplane" (for example, SVM kernels, neural-net layers, softmax, etc.). The reason for this is that linear boundaries help reduce variance and are the most simple, natural, and interpretable, besides reducing mathematical and computational complexity. And when we aim for linear separability, it is always good to reduce the effect of outliers, influential points, and leverage points, because the hyperplane is very sensitive to them. To understand this, let's shift to a 2D space where we have one predictor (X) and one target (y), and assume there exists a good positive correlation between X and y. Given this, if our X is normally distributed and y is also normally distributed, you are most likely to fit a straight line through many points centered in the middle of the range rather than at the end-points (outliers, leverage/influential points). So the fitted regression line will most likely suffer little variance when predicting on unseen data. Extrapolating the above understanding to an n-dimensional space, fitting a hyperplane to make things linearly separable really does make sense because it helps reduce variance.
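The sensitivity to leverage points described in point 2 can be demonstrated with a small sketch; the data below are made up purely for illustration.

```python
# Least-squares fits are sensitive to high-leverage points, which is
# the point made above about why heavy-tailed predictors hurt linear
# fits. The numbers here are fabricated for illustration only.

def fit_slope(xs, ys):
    """Ordinary least-squares slope through the data."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Clean, roughly linear data with slope close to 2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]
clean_slope = fit_slope(xs, ys)

# One high-leverage outlier, far out in x, drags the slope way down.
xs_out = xs + [20.0]
ys_out = ys + [5.0]
outlier_slope = fit_slope(xs_out, ys_out)

print(round(clean_slope, 2), round(outlier_slope, 2))
```

A single point at the edge of the predictor range nearly flattens the fit, which is why taming the tails of predictors (e.g. with a Gaussianizing transform) can stabilize a linear model.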
376,572
How can electrical energy be power times time when electrical energy is actually potential energy? A lot of sources (Wikipedia, study.com, ...) say that electric energy is potential or kinetic energy; other sources say it is electric power P × t. I am confused by this; could anyone explain to me which is right?
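A short sketch of how the two views in the question reconcile: power is the rate of energy transfer, so E = P·t for constant power, and the same energy can be expressed as charge moved through a potential difference, E = q·V. The numbers below are illustrative only.

```python
# Power is the rate of energy transfer, so energy is power integrated
# over time: E = P * t for constant power. The same joules can also
# be written as charge moved through a potential difference: E = q * V.

power_w = 60.0   # a 60 W load
time_s = 120.0   # running for two minutes
energy_j = power_w * time_s  # 7200 J delivered

# Electrically, that energy corresponds to charge crossing a voltage:
voltage_v = 230.0
charge_c = energy_j / voltage_v  # q = E / V

print(energy_j)            # 7200.0
print(round(charge_c, 2))  # 31.3
```

Both statements are therefore consistent: "potential energy per unit charge" (volts) times charge gives the same joules that "power times time" gives.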
2017/12/27
[ "https://physics.stackexchange.com/questions/376572", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179787/" ]
> > Would not air pressure keep the paper pressed against the book's cover? > > > To an extent, possibly. This is not the main reason why it is done, though, although it does relate to your newspaper-ruler example. The reason is to minimize the effects of air drag on the piece of paper. Although all objects are influenced the same by gravity, they react differently with the air: a piece of paper would fall a lot slower than the book if they were both exposed to air drag. Instead, the piece of paper avoids the drag by travelling in the streamline of the book. Realistically, if the piece of paper were to fall slower than the book (as one might naively expect from its lower mass), then even with the air pressure it should still slowly separate from the book. (The reason the ruler breaks in the newspaper experiment is that you are trying to act against the pressure *very quickly*.) Either way, this isn't really a great experiment to demonstrate how different masses react the same to gravitational force. Both the book and the paper have very large areas, so the drag force is substantial. To eliminate the drag force, they set up the experiment in a way that pressure differences could keep the paper and the book falling at similar speeds, even if drag was acting unevenly. Really, this test should be done in a vacuum to eliminate this source of potential error. It's a good sign that you can *recognize* this error in the experiment. Noticing details like this is important when designing good experiments.
The actual experiment was performed with a stone and a feather in two vacuum tubes of the same length. It was demonstrated that both the stone and the feather fall the same distance in the same time, which shows that the time of fall is independent of the mass of the body. In the experiment you stated, there is a chance of the book and the paper separating due to air interaction and upthrust. It has to be performed in a place where the air is absolutely still, otherwise the two may separate. One may argue that atmospheric pressure keeps the book and the paper together. The argument is completely valid when you look at the experiment on its own. But in the light of the actual vacuum experiment that I stated above, this argument is set aside, and it is generally taken that air pressure doesn't act on the two bodies.
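The role of drag that both answers discuss can be illustrated with a crude simulation; the masses, drag coefficients, and drop height below are made-up illustrative values, not measurements.

```python
# A crude sketch of why the paper-on-book trick works: with quadratic
# drag, a lone sheet of paper reaches a low terminal velocity, while
# riding in the book's wake it effectively falls drag-free.

G = 9.81  # m/s^2

def fall_time(mass, drag_coeff, height=2.0, dt=1e-4):
    """Euler-integrate a fall with quadratic drag; return time to land."""
    v, y, t = 0.0, height, 0.0
    while y > 0.0:
        a = G - (drag_coeff / mass) * v * v
        v += a * dt
        y -= v * dt
        t += dt
    return t

book_time = fall_time(mass=1.0, drag_coeff=0.01)       # heavy: drag barely matters
paper_alone = fall_time(mass=0.005, drag_coeff=0.01)   # light: drag dominates
paper_on_book = fall_time(mass=0.005, drag_coeff=0.0)  # shielded: no drag at all

print(round(book_time, 2), round(paper_alone, 2), round(paper_on_book, 2))
```

With identical drag the light sheet takes noticeably longer to land than the heavy book, but with drag removed (shielded by the book, or in a vacuum tube) its fall time matches the book's almost exactly.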
29,952
I've not seen a lot of material on this. I've seen people attempting to recover C++ code from a C++ program, and claiming how hard it is, but what about recovering C code from a C++ program? I don't know. If it is *possible,* what obstacles would be in the way? If *impossible,* what particular features make it so?
2022/01/29
[ "https://reverseengineering.stackexchange.com/questions/29952", "https://reverseengineering.stackexchange.com", "https://reverseengineering.stackexchange.com/users/40222/" ]
If you're asking whether it's possible for an automated decompiler to produce C code as output for a given C++ input binary (rather than producing C++ code as output), the answer is that most decompilers are going to do this anyway, and are not going to give you a choice as to which language they produce as output. Machine-code decompilers generally produce a C-like pseudocode, which, in general, cannot be immediately recompiled to C code. A better question would be "to what extent can the C-like pseudocode produced by decompilers represent features of C++ that are not in C", to which the answer depends on the specific decompiler in question. If you're asking whether it's possible to manually decompile a C++ program into a C program, the answer is yes. Just as you can take a program written in C++ and manually translate it into C, you can manually decompile a binary into any language you want. There's going to be a lot of manual work regardless of which language you choose to decompile the program into. Although C and C++ are not the same language, the first C++ compilers worked by translating C++ code into C code, and using a regular C compiler on the result (see: CFront <https://en.wikipedia.org/wiki/Cfront>). You can simulate most C++ constructs in C, although it's going to be a lot more work than simply taking advantage of the features introduced by C++.
Of course, yes. The problem here is that you need to know well the differences between C and C++ from the viewpoint of reversing, I suppose. For example, a C++ class is compiled to assembly code that looks like a C structure, with the virtual table of the class becoming the first member of that structure and the other member functions compiled like regular functions. The pointer to the C++ object (`this`) is passed in a particular register, such as the "ecx" register in x86 code. If you know things like this, you can decompile a C++ program to C.
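The lowering that both answers describe (class becomes struct, virtual table as the first member, explicit `this` argument) can be sketched as follows; Python dicts stand in for C structs and function-pointer tables here purely to show the shape of the mechanism, and all the names are invented for illustration.

```python
# Sketch of how a C++ class with a virtual method lowers to C-style
# data: the object becomes a plain record whose first field points at
# a table of function pointers, and virtual calls go through that
# table with the object passed explicitly as the first argument.

def animal_speak(self):
    return "..."

def dog_speak(self):
    return self["name"] + " says woof"

# One vtable per class, shared by every instance of that class.
ANIMAL_VTABLE = {"speak": animal_speak}
DOG_VTABLE = {"speak": dog_speak}

def make_dog(name):
    # In C: struct Dog { const VTable *vtbl; const char *name; };
    return {"vtbl": DOG_VTABLE, "name": name}

def speak(obj):
    # In C: obj->vtbl->speak(obj);  -- the virtual dispatch
    return obj["vtbl"]["speak"](obj)

rex = make_dog("Rex")
print(speak(rex))  # Rex says woof
```

This is essentially what CFront emitted and what a decompiler's C-like pseudocode shows for C++ binaries: no classes, just structs, function pointers, and an explicit first parameter.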
4
I am looking for a starting point for my project, preferably using popular systems (ones there is a lot of support for). I have an Arduino Uno, a Raspberry Pi, and a lot of willpower :) Anyone here built a project using the systems above? Observation: I'd like to start with a simple line-following vehicle and build up afterwards.
2012/10/23
[ "https://robotics.stackexchange.com/questions/4", "https://robotics.stackexchange.com", "https://robotics.stackexchange.com/users/32/" ]
The Arduino is really an AVR ATmega328p. The Arduino is a fine off-the-shelf implementation of this microcontroller, but if you make many of them, you can buy the chip for less than $3 each in bulk, and it requires very little circuitry to run on its own: a crystal and a couple of capacitors to run at 20 MHz, or not even that if you can run at the built-in 8 MHz resonant oscillator frequency. It's fine for generating control signals (servo PWM, step/direction, control for H-bridges, etc.). It's also OK for running sensors: ultrasonic time measurement, IR voltage conversion measurement, on/off contactors, etc.; this includes whatever optical sensor you'd use for "line sensing." There will be a little code space left over after doing these tasks, so the simple control loop of "is the line to the right, left, or center of me -> do the appropriate turn" can be built into that system. However, as soon as you want to do something bigger, like path planning, environmental awareness, memory, SLAM, etc., you will not be able to fit that into the Arduino. Thus, the best system for your requirements probably includes tying all the physical hardware to the Arduino, and then talking to the Arduino from the Raspberry Pi. The RPi has a modicum of CPU power (700 MHz ARM) and RAM (256-512 MB), and thus can run higher-level control algorithms like path planning, localization, SLAM, etc. If you go with a bare AVR controller, there are UART outputs on the Raspberry Pi, but the problem is that the RPi is 3.3V and the Arduino Uno is 5V. Either go with a 3.3V Arduino version, or use a voltage divider to step down the 5.0V output from the Arduino to the 3.3V input of the Raspberry Pi. I use a 2.2 kOhm high and 3.3 kOhm low resistor and it works fine. You can feed the 3.3V output from the Raspberry Pi directly into the RXD of the AVR, because 3.3V is above the AVR's logic-high input threshold.
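The divider arithmetic from this answer, worked through as a quick check (unloaded divider and ideal resistor values assumed):

```python
# The resistor divider mentioned above: with the 5 V Arduino TX on
# top of a 2.2 kOhm / 3.3 kOhm divider, the Pi's RX pin sees
# Vout = Vin * R_low / (R_high + R_low).

def divider_out(v_in, r_high, r_low):
    """Unloaded voltage-divider output."""
    return v_in * r_low / (r_high + r_low)

v_rx = divider_out(5.0, r_high=2200.0, r_low=3300.0)
print(v_rx)  # 3.0 -- safely within the Pi's 3.3 V logic level
```

Any resistor pair with the same 2:3 ratio gives the same 3.0 V; these values keep the divider current around 1 mA.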
I built a line-following robot with an Arduino before. It was really simple to do: all we used were color sensors on the bottom, fed into the Arduino, and of course some motors for the wheels. Using an Arduino gave us plenty of room for other components we wanted to add to make our robot do more things. Also, if you want to see some of the line-following code we used, just ask in a comment, but it obviously depends on your setup with the sensors and how you want it to turn at intersections and things like that.